Introduction

Protected areas (PAs) provide one of the most important and effective ways to protect global biodiversity and ecological environments, and they contribute to human health and well-being [1][2][3]. Declining financial support for PAs in developing countries, and even in some developed ones such as Australia, the US and Canada, suggests that developing PAs by relying solely on government inputs is unsustainable [4]. Nature-based tourism is a popular type of cultural ecosystem service that can enhance the emotional connection between human beings and nature and contribute to the financial sustainability of PAs [5][6][7][8]. It is estimated that annual tourist arrivals at the world's PAs reach 8 billion [9], and that the economic value of PAs derived from the improved mental health of visitors is US$6 trillion a year [10]. The wider benefits of park visits have not been quantified [11]. However, with PAs expected to make much wider ecological, social and economic contributions to sustainability and human well-being, PA managers face challenges in coordinating tourism with other goals, such as nature conservation and local community development [4,12,13]. Therefore, both the International Union for Conservation of Nature (IUCN) and the World Tourism Organization (WTO) emphasize the importance of sustainability assessment and adaptive management of tourism in PAs, so that tourism can fully play its role in poverty reduction, community development and biodiversity conservation [14,15].

PAs, as important nature-based tourism destinations, are complex adaptive systems that involve multiple stakeholders and are affected by social, economic and environmental factors [16][17][18][19][20]. Increasingly, the PA, the local community and the tourism within the PA are being recognized as a complex system [21,22]. The significant impact of COVID-19 on PA tourism highlights the complex interdependencies among tourism, local communities and PAs, and such interdependencies should not be overlooked when seeking to improve PA sustainability [23,24]. Systematic thinking has therefore been proposed to understand the interaction of key elements, the evolution of systems, and the assessment and management of PAs and local tourism [18,25]. Furthermore, Zhang et al. (2022) indicate that the interrelationships between subsystems provide an important and effective perspective for sustainability assessment of the PA tourism system [26]. Plummer and Fennell (2009) argued that sustainable tourism management in PAs should anticipate system dynamics and transformative changes [27]. However, traditional assessment methods tend to use sustainability indicators targeting current conditions, and poor selection of indicators often leads to misidentification and misinterpretation of changes over time. Research on systematic thinking suggests that future conditions may include more extreme and rapid changes than in the past [21]. In addition, although previous studies have proposed many indicators for the sustainability of tourism in PAs, they have paid less attention to the coordination among subsystems [26]. Therefore, new methods that acknowledge uncertainty, change and the frequent interactions of the subsystems are required. The coupling coordination degree model (CCDD) is an effective method for evaluating the consistency and positive interaction among systems, and can reflect the trend of complex systems transforming from disorder to order [28].
In recent years, it has been extensively used in studies on the relationship between tourism and other systems and among components of the tourism system, such as tourism and the environment [29][30][31] and the social-ecological status of island tourism destinations [32]. These studies reveal the importance and applicability of a coupling coordination perspective for measuring complex tourism systems. However, they mostly focus on city (prefectural), provincial and national scales, and smaller-scale studies, such as those of PAs, are limited. In addition, since the indicators for the CCDD are generally selected by the authors themselves [29,30,33], with other experts or stakeholders seldom engaged, the availability of comparable data and information and the objectivity of the assessment results are inevitably undermined. As the WTO (2004) noted, a participatory process can be productive, especially when key stakeholders and potential data providers are involved [14].

The Tibetan Plateau is widely recognized for its abundant biodiversity and diverse ecosystems, which are intricately linked to the livelihoods of over one billion people [34]. To safeguard the rich flora and fauna in this region, numerous PAs have been established, many of which are renowned tourist destinations [35,36]. The Qinghai Lake Nature Reserve (QLNR) is a typical example of these PAs. Given the vulnerable ecology and underdeveloped economy of the Tibetan Plateau, tourism in these PAs is expected to play a greater role in poverty reduction, community prosperity and biodiversity conservation. Thus, coordinated ecological-economic-social development is not only essential for the sustainability of each PA but also of paramount importance for realizing the United Nations Sustainable Development Goals and, specifically, for promoting green development on the Tibetan Plateau [37].

Given the gaps identified in both practical management and theoretical assessment methods for PA tourism, this study aims to enhance the sustainability of PA tourism by focusing on subsystem relationships using the CCDD. To achieve this goal, three sub-aims have been identified: (1) improving the applicability of the CCDD to the PA tourism system and enhancing the objectivity of indicator selection, (2) evaluating subsystem relationships and their changes in the PA tourism system, and (3) identifying obstacles to the sustainable development of PA tourism.

--- Materials and Methods

--- The Assessment Framework

A growing body of research conceptualizes tourism as a complex adaptive system [21,38,39] or calls for systematic thinking in conceptualizing the relationships among tourism, PAs and local communities [40,41]. Stone et al. (2021) argued that without clear identification of the interacting variables, any study on PAs and tourism will reveal an incomplete and potentially confusing picture, as the complex interactions between system components will not be apparent [42]. Schianetz and Kavanagh (2008) also pointed out that systematic thinking is critical for assessing the sustainability of natural tourist destinations located in eco-environmentally fragile areas [25]. Sustainability indicators can provide managers with the required information and are essential for improving tourism planning and management and promoting sustainable development [14,43].
Scholars have developed a series of indicators covering one or more of the ecological, economic and social dimensions to evaluate the sustainability of tourism destinations of different scales and types [44][45][46][47]. However, more attention has been paid to the sustainability of the ecological, social and economic dimensions themselves, and less to the relationships among the three [26]. In practical terms, the three dimensions are "pillars" of sustainable development that interact frequently, and a balance must be struck between them [43]. For Bramwell and Lane (2011), the "balance" of economic, social and environmental sustainability is the cornerstone of sustainable tourism policies [48]. Systematic thinking makes it possible to analyze the relationships among the three dimensions.

We define the PA, the local community and tourism within the PA as a complex adaptive system composed of three subsystems: society, economy and ecology. The economic subsystem mainly includes tourism-related economic factors within and around the PA, such as tourism revenue and tourist arrivals. The social subsystem mainly encompasses social and cultural factors within the PA and the adjacent communities, such as community participation, cultural preservation and environmental education. The ecological subsystem mainly consists of natural elements within and around the PA, such as environmental quality and biodiversity conservation. The three subsystems interact frequently with one another through the flow of capital, information and tourists, among other factors, and this interaction drives the evolution of the system.

To assess PA tourism from the perspective of coordinated ecological-economic-social development, this study calculates the coupling coordination degree among the subsystems based on a sustainability evaluation (Figure 1). The evaluation covers two parts. The first is the sustainability of the subsystems, comprising the social, economic and ecological subsystems; the corresponding results are referred to as the social sustainability index, the economic sustainability index and the ecological sustainability index. The second part concerns the coupling coordination degree among the subsystems, including the comprehensive coupling coordination degree of the three subsystems and the coupling coordination degree between each pair of subsystems.

--- Study Area

QLNR is located in Qinghai Province, northwest China (Figure 2). Qinghai Lake, the most important tourist attraction of the PA, is the largest saline lake in China and a well-known tourist destination on the Tibetan Plateau. Furthermore, many people have lived by the lake for generations. Nature conservation, community prosperity and sustainable tourism are three inseparable management objectives for QLNR [49]. The following are the reasons why we chose this reserve as a case study to assess the relationships between the economic, social and ecological subsystems of the PA tourism system.

First, Qinghai Lake is an important node of two international bird migration channels in East Asia and Central Asia and the only habitat of Przewalski's gazelle [50]. It was recognized as a "Wetland of International Importance" under the Ramsar Convention in 1992. However, monitoring of the 25 largest lakes in the world from 2008 to 2010 by the United Nations Environment Programme (UNEP) showed that the load of human activities on Qinghai Lake had reached 90% [51].

Second, Qinghai Lake has been a popular nature-based tourist destination since the 1980s. In 2019, it received 4.43 million tourists. According to estimates by Zhao (2018) [52], the per capita ecological deficit relating to tourists at Qinghai Lake showed an overall rising trend, and tourist overload was common from 2001 to 2015. In 2017, in response to the central government of China's environmental inspection, which aimed to supervise and enforce local-level environmental protection policies, several scenic spots were closed, and numerous tourist facilities, including tents and bed and breakfasts, were demolished within and around the QLNR for non-compliance with PA management regulations. As a result, the duration of visitor stays within or around the reserve decreased, and the number of overnight stays also declined.

Third, similar to most Chinese PAs, QLNR is home to many local people whose livelihoods and lives are closely linked to the reserve and its tourism development. There are 11 towns around the reserve, with 5870 residents and 76.55 km² of farmland in the reserve. The establishment of the PA restricted local residents' activities, such as grazing, planting and other uses that depend on natural resources. To supplement their income, many local residents have resorted to selling tourist souvenirs and taking part-time jobs in nearby hotels and restaurants. Some individuals have even illegally opened access routes to the reserve and established small tourist attractions, offering paid services such as canola flower sightseeing, horse riding and photography. However, managing these community residents poses significant challenges, and conflicts sometimes arise between the community and the authorities. The potential impacts of these changes in livelihoods on community resilience remain unclear.

--- Index System

Establishing the index system proceeded via two key steps: selecting indicators and determining their weights. The process can be seen in the flow chart for index system establishment (Figure 3).

--- Selection of Indicators

This paper adopted the fuzzy Delphi method (FDM) to select indicators.
The Delphi method is commonly used to select sustainability indicators, but its uncertainty, vagueness and subjectivity need to be addressed [53]. The FDM applies fuzzy set theory to the Delphi method, which overcomes these shortcomings by reducing the number of questionnaire surveys, avoiding the distortion of individual expert opinions, and accounting for the fuzziness of the interview process [54]. The FDM with a dual-trigonometric fuzzy function, in particular, uses a trigonometric fuzzy function and the grey zone testing method to integrate expert opinions, which is more objective than the calculation of geometric means [55]. Hence, this paper adopted the FDM with a dual-trigonometric fuzzy function to select sustainability indicators, and the process was as follows.

--- 1. Step 1: Making a list of candidate indicators

Following the principles of practicality, comparability, objectivity and data availability, 28 candidate indicators were generated by referring to the current literature [14,54,[56][57][58][59][60][61]] and conducting semi-structured interviews with tourism stakeholders in QLNR (administrators, community residents and tourists).

--- 2. Step 2: Establishment of the fuzzy Delphi expert group and questionnaire survey

The key to the Delphi method lies in the expertise of the experts and their familiarity with the subject matter, rather than in the number of experts [53]. Saaty and Özdemir (2014) held that adding more experts who are less experienced may disturb the judgments of other experts and even lead to false conclusions [62]. Accordingly, 15 administrators and researchers familiar with tourism in PAs and having at least five years' professional experience in the related sectors were invited to fill in the expert questionnaire from June to July 2020. After eliminating invalid questionnaires with obviously missing answers or no discrimination of scores (e.g., 10 for all maximum values and 0 for all minimum values), eight valid responses were retained. As shown in Table 1, the experts comprised researchers on PA tourism (3), administrators of tourism in QLNR (2) and administrators of PA tourism at the provincial or national level (3), and were thus representative. Note: The same group of experts was consulted in the analytic hierarchy process (AHP).

--- 3. Step 3: Index selection

After two rounds of fuzzy Delphi questionnaire surveys, 21 indicators were generated in total (Table 2). The questionnaire and its data analysis process can be seen in Appendices A and B.
Though no academic consensus on the number of sustainability indicators has been reached, the WTO (2004), after summarizing global practice, pointed out that 12-24 indicators are optimal: an excessively large number of indicators drives up the cost of data acquisition and is difficult to use, while only a few indicators tend to overlook economic, ecological or social issues. By this standard, the number of indicators in this paper is suitable [14]. Note: $M_i - Z_i < 0$ requires a second round of expert consultation (values in bold), and $G_i < S_i$ means the indicator should be deleted.

--- Calculation of Weights

Index data need to be standardized before weights are calculated. Formulas (1) and (2) were used to standardize the original data for a positive indicator

$x'_{ij} = \dfrac{x_{ij} - \min_i x_{ij}}{\max_i x_{ij} - \min_i x_{ij}}$ (1)

and for a negative indicator

$x'_{ij} = \dfrac{\max_i x_{ij} - x_{ij}}{\max_i x_{ij} - \min_i x_{ij}}$ (2)

where $x_{ij}$ and $x'_{ij}$, respectively, refer to the original value and the standardized value of indicator $j$ in year $i$, and $\max_i x_{ij}$ and $\min_i x_{ij}$ are the maximum and minimum values of indicator $j$ among all years (2010-2019). A standardized value of 0 is replaced by 0.0001 to avoid null values in the subsequent calculation with the entropy method (EM).

The analytic hierarchy process (AHP) is a common method for obtaining the weights of sustainability indicators from hierarchical data combined with experts' opinions [53,54]. It provides a way to systematize the complex issues of PA tourism, with the advantages of being easy to operate and accommodating the views of different stakeholders [63]. This study used the AHP to divide the indicator system into three hierarchical levels (Table 3), established the pairwise comparison matrix for each level, and invited the experts to compare the indicators at each level pairwise on a scale of 1 to 9. Saaty and Özdemir (2014) found that in the use of the AHP, engaging no more than 7 or 8 experts is more likely to produce effective and consistent judgments [62]. The eight experts in Table 1 were therefore invited to participate, and seven of them eventually completed the expert questionnaire. Yaahp was used to process the AHP questionnaire data, performing the calculations and consistency checks to obtain the AHP indicator weight $w_j^{\mathrm{AHP}}$.

The EM is commonly used to calculate weights objectively. Entropy is a measure of the uncertainty of indicator information: if the amount of information is higher, the uncertainty is lower and the entropy is smaller; if the amount of information is lower, the uncertainty is higher and the entropy is larger. Tang (2015) stated that the EM can, to a certain extent, avoid bias caused by subjective influence when determining index weights, by analyzing the correlation degree and information among indexes [29]. The formulas are shown in (3) to (5):

$y_{ij} = \dfrac{x'_{ij}}{\sum_{i=1}^{m} x'_{ij}}$ (3)

$d_j = 1 + \dfrac{1}{\ln m}\sum_{i=1}^{m} y_{ij}\ln y_{ij}$ (4)

$w_j^{\mathrm{EM}} = \dfrac{d_j}{\sum_{j=1}^{n} d_j}$ (5)

In order to reduce the subjectivity of the AHP weight and make the assessment results more reliable, Formula (6) combines the EM and AHP weights to obtain the general weight $w_j$. The results are shown in Table 3.
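To make the weighting procedure concrete, the following minimal Python sketch reproduces Formulas (1)-(6) on a toy matrix of 10 years by 3 indicators; the data, the positive/negative flags, the function names and the AHP weights are illustrative placeholders, not values from the paper.

```python
import numpy as np

def standardize(X, positive):
    """Min-max standardization per indicator (column) across years (rows).
    Positive indicators use Formula (1), negative indicators Formula (2).
    Zeros are replaced by 0.0001, as in the paper, to avoid log(0) later."""
    X = np.asarray(X, dtype=float)
    mn, mx = X.min(axis=0), X.max(axis=0)
    Xs = np.where(positive, (X - mn) / (mx - mn), (mx - X) / (mx - mn))
    return np.where(Xs == 0, 1e-4, Xs)

def entropy_weights(Xs):
    """Entropy-method weights (Formulas (3)-(5)) from standardized data."""
    m = Xs.shape[0]                                    # number of years
    y = Xs / Xs.sum(axis=0)                            # Formula (3)
    d = 1 + (y * np.log(y)).sum(axis=0) / np.log(m)    # Formula (4)
    return d / d.sum()                                 # Formula (5)

# Toy example: 10 years x 3 indicators (values are illustrative only).
rng = np.random.default_rng(0)
X = rng.uniform(1, 100, size=(10, 3))
positive = np.array([True, True, False])   # third indicator is a negative indicator
Xs = standardize(X, positive)
w_em = entropy_weights(Xs)
w_ahp = np.array([0.5, 0.3, 0.2])          # placeholder AHP weights (e.g., from Yaahp)
w = (w_ahp + w_em) / 2                     # Formula (6): combined general weight
print(w.round(3), w.sum())
```

Because both the AHP and EM weight vectors sum to one, their average in Formula (6) also sums to one, so no re-normalization is needed.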
$w_j = \dfrac{w_j^{\mathrm{AHP}} + w_j^{\mathrm{EM}}}{2}$ (6)

The indicator data on the ecology subsystem and the economy subsystem for the sustainability assessment were sourced from the Qinghai Lake Protection and Utilization Administration of Qinghai Province, mainly including the Monitoring Report on the National Nature Reserve of Qinghai Lake (2010-2019) and statistics on the number of tourists and tourism income over the years. Data on social subsystem indicators and some of the local economic development indicators were obtained from the China Statistical Yearbook (County-level) and the China Statistical Yearbook (Township) for 2010-2019.

--- Coupling Coordination Degree Model

Suppose $x_1, x_2, x_3, \dots, x_n$ are the indicators of the economy subsystem and $x'_i$ is the corresponding standardized value of $x_i$; then the economic sustainability index is $f_1(x) = \sum_{i=1}^{n} w_i x'_i$, where $w_i$ represents the weight of indicator $i$ in the economy subsystem. Similarly, the social sustainability index and the ecological sustainability index are $f_2(x)$ and $f_3(x)$, respectively. The coupling coordination degree among the subsystems was calculated using Formulas (7) to (9):

$C = n\left[\dfrac{f_1(x)\, f_2(x) \cdots f_n(x)}{\left(f_1(x) + f_2(x) + \cdots + f_n(x)\right)^n}\right]^{1/n}$ (7)

$T = \alpha_1 f_1(x) + \alpha_2 f_2(x) + \cdots + \alpha_n f_n(x)$ (8)

$D = \sqrt{C \times T}$ (9)

where $C$ represents the coupling degree, $D$ represents the coupling coordination degree, $\alpha_i$ is the weight coefficient of the corresponding subsystem, and $n$ is the number of subsystems. In the case of $n = 3$, $T$ stands for the comprehensive sustainability index of the PA tourism system. By referring to the existing body of research [33,64,65], this paper defines the gradation criteria of the coupling degree and the coupling coordination degree, as shown in Table 4.

We used the obstacle degree model to identify obstacle factors of the tourism system in QLNR. The formulas are as follows [66]:

$I_{ij} = 1 - x'_{ij}$ (10)

$O_j = \dfrac{F_j I_{ij}}{\sum_{j=1}^{n} F_j I_{ij}}$ (11)

$Q_j = \sum O_j$ (12)

where $x'_{ij}$ is the standardized value of indicator $j$ in year $i$, $I_{ij}$ represents the deviation degree of indicator $j$, $F_j$ is the contribution degree of indicator $j$, which can be expressed by the index weight, $O_j$ represents the obstacle degree of indicator $j$, and $Q_j$ represents the obstacle degree of a subsystem.

--- Results and Discussion

--- Indicators and Weights

As shown in Table 3, the indicator system aligns with the sustainable tourism management principles for PAs put forward by the IUCN, including indicators on nature conservation, communities' right to development and cultural authenticity, continuous and fair development of the tourism economy, and provision of valuable recreational experiences [15]. These principles also echo the functional orientation of China's PA system, which aims to protect nature, provide high-quality ecological products, and maintain harmonious coexistence between humans and nature for sustainable development [67]. Specifically, in the economy subsystem, A1 and A2 have the same weight, indicating that both economic growth and economic efficiency are critical for economic development. In the society subsystem, nature education is the most important component, with the sum of the weights of its three indicators, namely environmental interpretation facilities (B31), environmental interpreters (B32) and capital input on nature education (B33), accounting for 65.50% of the whole subsystem. This reflects the importance of nature education in enabling PA tourism to serve social functions.
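As an illustration of how these formulas fit together, here is a minimal Python sketch of Formulas (7)-(11); the subsystem scores, indicator values and weights in the example are placeholders, not results from the paper, and the function names are ours.

```python
import numpy as np

def coupling_coordination(f, alpha=None):
    """Coupling degree C (Formula 7), comprehensive index T (8) and
    coupling coordination degree D (9) for subsystem scores f = [f1, ..., fn]."""
    f = np.asarray(f, dtype=float)
    n = len(f)
    alpha = np.full(n, 1.0 / n) if alpha is None else np.asarray(alpha, dtype=float)
    C = n * np.prod(f) ** (1.0 / n) / f.sum()   # equivalent form of Formula (7)
    T = float(alpha @ f)                         # Formula (8)
    D = np.sqrt(C * T)                           # Formula (9)
    return C, T, D

def obstacle_degrees(x_std, weights):
    """Obstacle degree of each indicator in one year (Formulas 10-11).
    x_std: standardized indicator values; weights: indicator weights F_j."""
    I = 1.0 - np.asarray(x_std, dtype=float)     # deviation degree, Formula (10)
    contrib = np.asarray(weights, dtype=float) * I
    return contrib / contrib.sum()               # Formula (11)

# Toy example with illustrative (not actual) subsystem scores and weights.
C, T, D = coupling_coordination([0.45, 0.60, 0.70], alpha=[1/3, 1/3, 1/3])
O = obstacle_degrees(x_std=[0.2, 0.9, 0.5, 0.7], weights=[0.3, 0.2, 0.25, 0.25])
print(round(C, 3), round(T, 3), round(D, 3), O.round(3))
# Q_j for a subsystem (Formula 12) is the sum of O over that subsystem's indicators.
```

By construction, C lies in [0, 1] (it equals 1 when all subsystem scores are equal), so D increases both when the subsystems develop in step and when their overall level T rises.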
In the ecology subsystem, C2 exerts the greatest influence, accounting for 66.35% of the entire subsystem. More specifically, protection of key species (C22) was given the highest weight with the AHP, occupying 59.10% of the ecology subsystem. Thus, biodiversity conservation, represented by key species, is the most important factor in the ecology subsystem. There is little difference in the weights of the three subsystems. The result that the ecology subsystem has the highest weight is consistent with the study by Yu (2006) in Tianmu Mountain Nature Reserve, which upheld the principle of ecological conservation coming first [57]. What differs is that in the present study, the society subsystem carries more weight than the economy subsystem. Given the management objectives of promoting local development and ecological and cultural protection for PAs, we believe it is rational to pay greater attention to the social and cultural factors of Chinese PAs, for two reasons. First, as many communities live in and around PAs in China, reducing conflicts between PAs and communities and winning community support is critical for sustainable tourism management in PAs [68,69]. Second, unlike the Western immersion in wilderness aesthetics, Chinese tourists uphold the traditional view that humans are an integral part of nature and prefer landscapes in which people and nature coexist in harmony [70]. Cultural factors therefore constitute one of the great appeals of tourism in PAs.

According to the analysis methods and their computational formulas in this study, the weighting of indicators not only directly affects the sustainability index but also influences the results of the coupling coordination degree and obstacle degree calculations. Therefore, the method chosen for determining indicator weights is of great importance. As indicated in Table 3, the weights of certain indicators differ significantly when obtained using the AHP compared with the EM. Some indicators are regarded as important by experts and are thus heavily weighted, but offer limited information, such as A21, C21 and C22. For these indicators, weighting with the EM alone cannot reflect their importance in practice. In contrast, other indicators, such as B31, B32 and B33, which showed rapid changes in the study period, would be neglected if weighted with the AHP alone. Therefore, it is appropriate and necessary to combine both methods in an indicator system that reflects temporal changes.

--- Coupling Coordination Degree and the System Evolution

--- Sustainability Index

As shown in Figure 4, the sustainability index of the QLNR tourism system and its subsystems fluctuated in 2010-2019 but generally trended upwards. The social sustainability index was the lowest of the three subsystems between 2010 and 2013, but maintained a steady overall upward trend from 2014 onwards and began to surpass the economic sustainability index after 2017. The ecological sustainability index fluctuated between 2010 and 2016, but increased rapidly after 2017, reaching a 10-year peak in 2019. The economic sustainability index continued to fluctuate over the decade and approached its lowest level in 2017. The gap in the sustainability index between the economic subsystem and the ecological subsystem widened progressively after 2017.
--- Coupling Degree

As revealed in Table 5, from 2010 to 2019, the comprehensive coupling degree among the three subsystems and the coupling degree between each pair of subsystems averaged between 0.8 and 1.0, a "superiorly high" coupling level. This means that the three subsystems were closely connected and interacted frequently with each other.

--- Coupling Coordination Degree

According to Figure 5, from 2010 to 2019, the comprehensive coupling coordination degree among the three subsystems and the coupling coordination degree between each pair of subsystems showed an overall upward trend, but the coordination level remained unbalanced until 2019. Only the coupling coordination degree between the society subsystem and the ecology subsystem reached the "barely balanced" level in 2019, the highest score in a decade. Specifically, the coupling coordination degree between the ecological subsystem and the social subsystem remained at the lowest level before 2016; however, it increased rapidly thereafter and became the best coordinated of the four groups. By contrast, the coupling coordination degree between the economic and social subsystems decreased significantly after 2016, becoming the worst coordinated among them.

--- Stages of the System Evolution

Combining the evaluation results of the subsystem sustainability indices and the coupling coordination degrees shows that the tourism system in QLNR evolved across three stages (Table 6). During the first stage (2010-2014), the economy subsystem led development, whereas the society subsystem lagged behind. The relationships between the three subsystems were "moderately unbalanced" in general, with the coupling coordination degree between the society and ecology subsystems being the lowest. During the second stage (2015-2017), the society subsystem took the lead in development, while the ecology subsystem lagged behind. The coupling coordination degree among the three subsystems was at the "slightly unbalanced" level, and the coupling coordination degree between the economy and society subsystems was relatively higher.
During the third stage (2018-2019), the ecological sustainability index rose rapidly, while the economic sustainability index declined. The coupling coordination degree between the society and ecology subsystems was relatively higher, while that between the economy and society subsystems was the poorest at this stage. Consequently, it is now urgent to improve the development level and efficiency of the economy subsystem and to enhance the coupling coordination degree between the economy subsystem and the society and ecology subsystems.

Across the three stages, the rankings of the subsystem sustainability indices in Table 6 were f1(x) > f3(x) > f2(x) in the first stage, f2(x) > f1(x) > f3(x) in the second stage, and f3(x) > f2(x) > f1(x) in the third stage. Table 6 also ranks the coupling coordination degree between subsystems in each stage. Note: D12 refers to the coupling coordination degree of the economic and social subsystems, D13 to that of the economic and ecological subsystems, and D23 to that of the social and ecological subsystems.

--- Obstacle Factors for Sustainable Development and Management Implications

The obstacle degree model can help identify the obstacle factors for the sustainable development of the system [71]. In order to promote coordinated development among subsystems, we analyzed the obstacle degree of each subsystem and identified the factors that caused them. Table 7 lists the obstacle degree values and the top three obstacle factors for each subsystem from 2010 to 2019. The social subsystem had the highest obstacle degree during 2010-2013, followed by the ecological subsystem during 2014-2018, and the economic subsystem in 2019. This is roughly consistent in time with the three stages that the QLNR tourism system went through and explains the main obstacle factors to system development in each stage. Specifically, over the decade, the most common obstacle factors in the social subsystem were the three nature education-related indicators (B33, B32, B31), and in the ecological subsystem the wetland area (C11), vegetation coverage area (C12) and key species protection (C22). In contrast, obstacle factors in the economic subsystem were more dispersed, with the most common indicators being tourism revenue structure (A11) and growth in tourist numbers (A23). In 2019, the economic subsystem posed the greatest obstacle to the sustainable development of the QLNR tourism system. The top three obstacle indicators for sustainable economic development were the per capita tourist consumption level (A12), local economic growth (A21), and the spatial distribution of tourism income (A14).

As revealed by the assessment results, tourism development in QLNR was in the stage in which ecological sustainability took the lead. However, the coupling coordination degree between the economy and society subsystems was the lowest, and the economic subsystem had the highest obstacle degree in 2019.
Therefore, it is critical to improve the development efficiency of the economy subsystem and enhance the coupling coordination degree between the economy subsystem and the other two subsystems for the sustainability of the whole system. Upon investigation, the decline in the sustainability of the economic subsystem can be attributed to two significant events: the environmental inspection by the central government in 2017 and the COVID-19 pandemic since 2019. The former led to a reduction in tourist attractions and reception facilities in and around the QLNR, resulting in a change from a tourist destination to a transit point. The latter caused a sharp decrease in tourists from outside Qinghai Province and low motivation for tourism consumption within the province.

With the aim of promoting the coordinated development of the economy, society and ecology in Qinghai Lake Nature Reserve as a tourist destination, and based on the assessment of subsystem relationships and the identification of obstacle factors, the following management insights can be derived.

Socially, community participation in tourism needs to be strengthened. On the one hand, local residents can engage in farming or herding on a flexible schedule when PA tourism is suspended, which relieves the pressure on tourism operations to retain full-time staff during unexpected situations such as epidemics. On the other hand, local communities can gain knowledge, skills and income through participation, which contributes to the PA goal of promoting community prosperity. Meanwhile, as livelihoods become less dependent on natural resource extraction and incomes increase, conflicts between communities and PA managers are expected to decrease. In addition, nature education, one of the important functions of PAs, requires more attention, especially in terms of facilities (B31), personnel (B32) and input (B33), to enhance ecological awareness among the public and foster emotional connections between people and nature, thereby gaining public support for PA efforts.

Ecologically, close attention should be paid to changes in ecological indicators to reveal the mechanisms through which tourism exerts its influence. Restrictions on travel during the pandemic have created an opportunity for natural environmental restoration and reduced artificial interference with biodiversity [72]. Administrators and researchers can use this period to identify the ecological indicators that are most responsive to the weakened disturbance from tourism, such as animals, plants and water (C12, C13). It is recommended to optimize tourism project planning by considering both the timing of opening and the spatial layout in light of these influence mechanisms, so as to identify the most favorable times and locations for visits and mitigate the negative environmental impacts on key protection objects, such as waterbirds, Przewalski's gazelle and plants.

Economically, diversified environmentally friendly tourism projects are suggested to improve the efficiency of the tourism economy on the premise of ecological conservation. For instance, long-distance birdwatching, sightseeing by bicycle, nature education and Tibetan cultural experiences can be designed to prolong the stay of tourists, increase per capita spending (A12), and drive up tourism income from diversified sources.
Furthermore, providing information about accommodation in nearby towns or partnering with local lodging services can also increase the overnight stay rate of tourists, thus contributing to the economic benefits of tourism. For PA tourism development, it is crucial to rely on the peripheral areas of the PA to provide accommodation, dining and other services as much as possible, in order to minimize the impact of tourism on the PA's environment and biodiversity while also promoting the development of local communities.

--- Conclusions

Sustainable tourism development in PAs is a complex process in which economic, social and ecological factors interact with each other and in which resource administrators, tourists and local communities, among other stakeholders, participate. Systematic thinking offers a holistic perspective for analysis. The case of the QLNR tourism system shows that changes in external factors such as policies can significantly improve the sustainability of one subsystem while potentially reducing the sustainability of another. Hence, the assessment of relationships among subsystems should not be overlooked, as the sustainability of a PA tourism system depends not only on the sustainability level of its individual subsystems but also on their balance.

In order to propose an integrated evaluation approach that reflects the temporal evolution of the relationships among subsystems from the perspective of coordinated ecological-economic-social development, we established a sustainability evaluation framework for the PA tourism system, comprising the social, economic and ecological subsystems, and identified a set of indicators in line with the development goals of sustainable tourism in the context of PAs using the FDM. Subsequently, the CCDD and the obstacle degree model were used to reflect the temporal evolution of the sustainability of the reserve and to identify the obstacle factors.

Our paper makes a significant contribution to the literature in three respects. Firstly, the CCDD was introduced to assess the relationships among the subsystems of a PA tourism system. Whereas most studies have focused on larger-scale tourist destinations such as cities (prefectures), provinces or countries, our study focuses specifically on the PA tourism system. Secondly, we adopted the FDM to include scholars and administrators in the index selection process, which makes the CCDD more applicable to PA tourism systems. This is a departure from the norm, as the indicators for the CCDD are usually selected solely by the authors, without engaging other stakeholders. Lastly, to improve the applicability and objectivity of the evaluation, we combined the analytic hierarchy process and the entropy method to determine the index weights, taking into account both index information and management concerns. The results show that this is necessary for diachronic evaluation and sustainability management of the PA tourism system.

--- Limitations and Future Research

Given the variety of PAs and their wide differences across countries and regions in natural, ecological, social and cultural conditions and tourist preferences, the indicator system should be tailored to the actual situation when applied to other PAs. In addition, a single case study is not sufficient to draw general conclusions.
Therefore, it is important to undertake more studies on tourism systems in different categories of PAs, or on PAs in different regions, in order to identify the characteristics of subsystem relationships and obstacle factors. For PA tourism systems, the coupling coordination degree assessment indicators should be adapted to the specific situation, such as the conservation objectives and community conditions. Stakeholder participation is therefore crucial in selecting indicators. This paper involved administrators and related academics in the selection through the FDM. However, local residents in QLNR, who generally speak Tibetan and have low literacy skills, were not included due to difficulties in communicating and understanding this method. Future research can include non-governmental organizations, tourists, local community residents and other stakeholders of tourism in PAs in selecting indicators and determining weights, to better cater to local realities.

--- Appendix B

In the expert consultation questionnaire, each expert is required to give a possible interval value $[C_i, O_i]$ and a definite value $P_i$ between $C_i$ and $O_i$ for each indicator to be evaluated, where $i$ is an indicator to be evaluated, the minimum value $C_i$ is the "most conservative cognitive value" of $i$, and the maximum value $O_i$ is the "most optimistic cognitive value" of $i$. The steps of the questionnaire analysis are as follows.

Step 1: Conducting statistical analysis for each index $i$

Extreme values beyond two standard deviations were excluded, and then the minimum values ($C_i^L$, $O_i^L$), geometric mean values ($C_i^M$, $O_i^M$) and maximum values ($C_i^U$, $O_i^U$) were calculated. The conservative trigonometric fuzzy function $C_i = (C_i^L, C_i^M, C_i^U)$ and the optimistic trigonometric fuzzy function $O_i = (O_i^L, O_i^M, O_i^U)$ were then established (Figure B1).

Step 2: Calculating the consistency degree of experts on indicators

The grey zone was used to judge whether the expert opinions had converged, and $G_i$ (representing the consensus degree of the experts) was determined according to the following situations. If $C_i^U \le O_i^L$, the two trigonometric fuzzy functions do not overlap, indicating that the experts have reached a consensus on the index, and $G_i = \frac{C_i^M + O_i^M}{2}$. If $C_i^U > O_i^L$, the two trigonometric fuzzy functions have overlapping intervals.
When $Z_i \le M_i$, where $Z_i = C_i^U - O_i^L$ (the grey zone) and $M_i = O_i^M - C_i^M$, the experts hold differing opinions but the difference is small, and $G_i = \frac{C_i^U O_i^M - O_i^L C_i^M}{Z_i + M_i}$. When $Z_i > M_i$, the opinions of the experts differ greatly, and the above steps need to be repeated until the opinions of the experts on all indicators converge.

Step 3: Calculating the threshold

There are three commonly used methods for determining the threshold value $S$: (I) according to established experience, set the threshold value at 5-7; (II) determine $S_i$ for indicator $i$ by calculating the geometric means of $C_i$, $O_i$ and $P_i$, and then the geometric mean of these three geometric means; (III) calculate the arithmetic mean of $P_i$ as the threshold value. This paper chooses the second method, which is relatively objective.

Step 4: Index selection

According to Table 2, after the first round of consultation, a total of nine indicators had $M_i - Z_i < 0$, indicating that the expert opinions had not converged. After the second round of expert consultation, all nine reached convergence, but seven with $G_i$ values smaller than the threshold value $S_i$ were deleted. After two rounds of fuzzy Delphi questionnaire surveys, 21 indicators were retained in total (Table 2). A computational sketch of this grey zone test is provided after Appendix A.

--- Data Availability Statement: The data presented in this study are available on request from the corresponding author.

--- Author Contributions: Conceptualization, X.Z. and L.Z.; methodology, X.Z.; software, X.Z.; formal analysis, X.Z.; investigation, X.Z. and L.Z.; data curation, H.Y.; writing-original draft preparation, X.Z.; writing-review and editing, H.Y.; visualization, L.-E.W.; supervision, L.Z.; funding acquisition, L.Z. All authors have read and agreed to the published version of the manuscript.

--- Conflicts of Interest: The authors declare no conflict of interest.

--- Appendix A. The Questionnaire for Fuzzy Delphi Method

The questionnaire for experts on sustainability indicators of tourism in Qinghai Lake Nature Reserve.

Dear experts: We are researchers from ***. Due to research needs, we are conducting a questionnaire survey on the sustainable development of tourism in Qinghai Lake Nature Reserve (QLNR). Please feel free to fill in the questionnaire anonymously; it will be used for scientific research purposes only. Your true opinions are very important for us to reach objective and meaningful research conclusions. Thank you for your support and cooperation! We wish you good health, smooth work and a happy family!

Instructions: This questionnaire uses the assignment method, with scores ranging from 0 to 10. The higher the number, the more you approve of using the indicator for evaluation.
The smaller the number, the less suitable the index is for the sustainability evaluation of tourism in QLNR.
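To make the grey zone test in Appendix B concrete, the following minimal Python sketch implements the screening rule as reconstructed above for a single indicator; the expert scores are hypothetical, the function name is ours, and the exclusion of values beyond two standard deviations is omitted for brevity.

```python
import numpy as np

def grey_zone_test(C, O, P):
    """Dual-triangular fuzzy Delphi screening for one indicator (cf. Appendix B).
    C, O, P: each expert's conservative, optimistic and definite scores.
    Returns (converged, G, S): convergence flag, consensus value G, threshold S.
    Outlier removal (values beyond two standard deviations) is omitted here."""
    gm = lambda a: float(np.prod(a) ** (1 / len(a)))      # geometric mean
    C, O, P = (np.asarray(a, dtype=float) for a in (C, O, P))
    CM, CU = gm(C), C.max()                               # conservative mean / max
    OL, OM = O.min(), gm(O)                               # optimistic min / mean
    S = (gm(C) * gm(O) * gm(P)) ** (1 / 3)                # threshold, method II
    if CU <= OL:                                          # fuzzy numbers do not overlap
        return True, (CM + OM) / 2, S
    Z, M = CU - OL, OM - CM                               # grey zone vs. gap of the means
    if Z <= M:                                            # small disagreement
        return True, (CU * OM - OL * CM) / (Z + M), S
    return False, None, S                                 # large disagreement: new round

# Hypothetical scores from eight experts for one candidate indicator.
converged, G, S = grey_zone_test(C=[4, 5, 5, 6, 4, 5, 6, 5],
                                 O=[8, 9, 8, 9, 7, 8, 9, 8],
                                 P=[6, 7, 6, 7, 6, 7, 8, 6])
print(converged, round(G, 2), round(S, 2))   # keep the indicator if converged and G >= S
```

In this sketch an indicator is retained only when the expert opinions converge and the consensus value G reaches the threshold S, mirroring the deletion rule described in Step 4.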
Tourism is a significant way for the public to enjoy the cultural ecosystem services provided by protected areas (PAs). However, with PAs being expected to make much wider ecological, social and economic contributions to sustainability and human well-being, PA managers face challenges in coordinating tourism with other goals, such as ecological conservation and local community development. To address this challenge, we developed a sustainability assessment framework that considers the PA, local community, and tourism as a complex system comprising social, economic, and ecological subsystems from the perspective of subsystem relationships. The coupling coordination degree model and the obstacle degree model were applied to assess sustainability of the tourism system in Qinghai Lake Nature Reserve of China. The assessment results indicate that the sustainability index fluctuated between 2010 and 2019, but generally exhibited an upward trend, undergoing three stages and reaching the stage in 2019 where ecological sustainability took the lead. At this stage, the coupling coordination degree between the economy and society subsystems was at its lowest, and the economic subsystem faced the highest obstacle degree. The study demonstrates that involving scholars and administrators in the index selection process and considering both index information and management concerns when determining index weight makes the coupling coordination degree model more suitable for PA tourism systems. The assessment method developed in this study effectively reflects the temporal evolution of PA tourism system sustainability and provides valuable implications for coordinated ecological-economic-social management by analyzing obstacle factors.
INTRODUCTION

Information and communication technologies (ICTs) are broadly defined as technologies used to convey, manipulate and store data by electronic means (Open University, n.d.). This can include e-mail, SMS text messaging, video chat (e.g., Skype), and online social media (e.g., Facebook). It also includes all the different computing devices (e.g., laptop computers and smart phones) that carry out a wide range of communication and information functions. ICTs are pervasive in developed countries and considered integral to efforts to build social, political and economic participation in developing countries. For example, the United Nations (2006) recognizes that ICTs are necessary for helping the world achieve eight time-specific goals for reducing poverty and other social and economic problems. The World Health Organization also sees ICTs as contributing to health improvement in developing countries in three ways: 1) as a way for doctors in developing countries to be trained in advances in practice; 2) as a delivery mechanism to poor and remote areas; and 3) as a means of increasing the transparency and efficiency of governance, which is critical for the delivery of publicly provided health services (Chandrasekhar & Ghosh, 2001).

With the growth of the Internet, a wide range of ICTs have transformed social relationships, education, and the dissemination of information. It is argued that online relationships can have properties of intimacy, richness, and liberation that rival or exceed offline relationships, as online relationships tend to be based more on mutual interest than on physical proximity (Bargh, McKenna, & Fitzsimons, 2002). In the popular book The World is Flat, Thomas Friedman (2005) argues that collaborative technologies, i.e., interactions between people supported by ICTs, have expanded the possibilities for forming new businesses and distributing valued goods and services for anyone. Educational theorist and technologist Curtis Bonk recently published a highly insightful and influential book called The World is Open (Bonk, 2009). Bonk (2009) argues that, with the development of ICTs, even the most remote areas of the world have opportunities to gain access to the highest quality learning resources. Proceedings from the 2004 International Workshop on Improving E-Learning Policies and Programs also showed that ICTs are helping transform governments through workforce transformation, citizen education, and service optimization (Asian Development Bank Institute, 2004). Innumerable accounts and data sources demonstrate that ICTs have reduced boundaries and increased access to information and education (see Bonk, 2009; Friedman, 2005), which has led the United Nations Educational, Scientific, and Cultural Organization (UNESCO) to focus on assisting Member States in developing robust policies on ICTs and higher education (UNESCO, n.d.). Although ICTs and the growth of the Internet are not without problems, the reality remains that both will continue to shape the global community.

Other disciplines have recognized the importance of ICT and consider it to be a key part of professional development. For example, the National Business Education Association (NBEA) states: "mastery of technology tools is a requirement rather than an option for enhancing academic, business, and personal performance" (NBEA, 2007, p. 88).
Resources are available that speak to the role of technology in the social work curriculum (e.g., Coe Regan & Freddolino, 2008; Faux & Black-Hughes, 2000; Giffords, 1998; Marson, 1997; Sapey, 1997) and in research and practice (e.g., Journal of Technology in Human Services). The National Association of Social Workers (NASW) and the Association of Social Work Boards published a set of ten standards regarding technology and social work practice, which serves as a guide for the social work profession in incorporating technology into its various missions (NASW, 2005). Despite this interest in technology, the attention that the field of social work has given to ICTs in research, education, and practice does not match the efforts of other national and international organizations that view ICTs as critical to improving the lives of disadvantaged and disenfranchised persons, and necessary for all forms of civic engagement. The Council on Social Work Education (CSWE) calls for the integration of computer technology into social work education, but there are no explicit standards for integration or student learning (CSWE, 2008; see also Beaulaurier & Radisch, 2005). Informal conversations with social workers, social work students, and social work educators readily reveal that many are unaware of the NASW technology standards. A review of the syllabi of social work courses will also show that ICTs, beyond e-mail communication, are generally not present in the educational environment. Consequently, social work students are not being adequately prepared in the use of ICTs, which are integral to the workforce today and will become even more important over time (Parrot & Madoc-Jones, 2008).

In this paper, we argue that ICTs are of critical importance to advancing the field of social work. Specifically, they provide efficient and effective ways of organizing people and ideas, offer greater access to knowledge and education, and increase the efficiency of and collaboration in our work. This paper takes the position that many aspects of the NASW Code of Ethics (1999) can be advanced through the careful and thoughtful application of ICTs. Thus, competencies with ICTs and ICT literacy should be required learning outcomes in social work education and continuing education. This includes having the knowledge and skills to understand and use ICTs to achieve a specific purpose (i.e., competencies), in addition to knowing the major concepts and language associated with ICTs (i.e., literacy). Within this framework, this paper identifies specific aspects of the Code of Ethics (1999), showing how ICTs play a critical role in achieving the desired values and principles. Recommendations on how ICTs can be more strategically incorporated in the classroom, along with potential pitfalls, are discussed.

--- OVERVIEW OF ICTs

--- ICTs in Society

Computer technology is becoming more efficient, productive, and cheaper. Advances in technology are producing more powerful computing devices that create a dynamic virtual network allowing people all over the world to communicate and share information with each other. The growth and importance of the technology and the virtual network are underscored by two important laws. The first is Moore's Law, which states that "integrated circuit technology advancements would enable the semiconductor industry to double the number of components on every chip 18 to 24 months" (Coyle, 2009, p. 559). Essentially, this means that the speed and productivity of a computer increase two-fold every 1.5 to 2 years.
While such growth may not be sustained indefinitely, the exponential growth of technology realized thus far has reshaped our society and will continue to be a dynamic force in future generations. It is important that social workers understand the role that technology plays in shaping the lives of clients and the services that are delivered. The second law, Metcalfe's Law, states "the value of a network increases in proportion to the square of the number of people connected to the network" (Coyle, 2009, p. 559). These rapidly developing technologies, and the individuals who utilize them, are producing virtual networks of greater size and value. At the time Granovetter published his classic study on networks and employment (Granovetter, 1973), ICTs played almost no role in developing and maintaining network relationships. Today, Internet sites such as LinkedIn (www.linkedin.com) produce vast social networks that provide opportunities for professionals and employers to advertise and communicate. To use social networks effectively, whether for obtaining employment, securing resources, or obtaining information, social workers need to understand the capabilities of these networks and how they can be understood, managed, and utilized within a digital environment. --- ICTs in Higher Education Applications of ICTs for institutions of higher education have grown tremendously and will continue to shape the delivery of social work education. This is already realized through emerging distance education courses and other strategies for using technology in the social work classroom (e.g., Stocks & Freddolino, 1999; Wernet, Olliges, & Delicath, 2000). Courses offered online greatly assist students who are long-distance commuters or students with disabilities. In both distance and local learning, many educators utilize course management systems (e.g., Sakai, Moodle, and Blackboard) for managing virtually every aspect of a course. These course management systems often provide students with tools to assist each other in learning the course material (e.g., synchronous and asynchronous communication). Largely because of these opportunities, some have even predicted that ICTs may eventually eclipse the traditional college classroom (see Bonk, 2009). Within colleges and universities, ICTs serve both administrative and academic functions. Students are able to accomplish a variety of tasks using computer networks that save the institution time and money, such as facilitating billing and payments to the school, requesting and obtaining financial aid and/or scholarships, class scheduling, requesting official transcripts, selecting housing locations, etc. With regard to social work research, ICTs are part of an infrastructure for newer research methodologies (e.g., Geographic Information Systems, computer simulations, network modeling), making it crucial for universities to harness technology to advance their research missions (Videka, Blackburn, & Moran, 2008). ICTs have the potential to help facilitate a more productive and effective learning environment for both social work students and professors. --- Continued Growth of ICTs Technology innovations are encouraging a trend towards the digitization of the world's information and knowledge, essentially creating stores of the accumulated human experience (Coyle, 2009). Computer technology has become integrated into the modern global society, serving a wide range of functions and purposes. 
This growth has given rise to extensive arguments that Internet access is a human right because it is necessary to fully participate in today's society. The Federal Communications Commission (FCC) announced plans, in conjunction with the US Department of Agriculture and Rural Development, to create a national broadband internet policy to help ensure all United States citizens have equal access to high-speed internet (Federal Communications Commission, 2009). This policy, made possible through the Recovery and Reinvestment Act of 2009, is specifically tailored for citizens who live in rural or underserved areas (Federal Communications Commission, 2009). As the use of ICTs continues to grow, it is important to understand convergence and how it shapes the transmission of information and service delivery. This concept refers to "the coming together of information technologies (computer, consumer electronics, telecommunications) and gadgets (PC, TV, telephone), leading to a culmination of the digital revolution in which all types of information (voice, video, data) will travel on the same network" (Coyle, 2009, p. 550). The creation and utilization of smart phones (e.g., BlackBerry, iPhone) is a key example of convergence, where one device has multiple functions and different applications, bringing technologies such as social networking, email, video recording, and traditional cellular telephone service into one's pocket. Individuals of all ages are heavily involved in maintaining social connections through online networks. For example, social networking websites, such as Facebook and MySpace, are used widely and boast highly active visitor populations. Facebook and MySpace each reached over 100 million active visitors by April of 2008 (Schonfield, 2008). The Internet and other telecommunication networks have an enormous impact on defining the future of human interaction, and to date, these changes have largely been positive across social contexts (Bargh, 2004). The field of social work needs to understand how these changes are influencing and will continue to influence all aspects of social work. It is critically important that such a research agenda builds an understanding of both the positive and negative impacts of ICTs on human interaction. --- ICTs AND SOCIAL WORK ETHICS The growth of the Internet and the use of ICTs have changed how we interact with each other and how we work (Bargh & McKenna, 2004). As the millennial generation (also known as Generation Y) is raised in an environment of highly complex, technology-driven networks, the importance of these technologies will continue to grow (Weller, 2005). The field of social work faces a critical need to incorporate ICTs into the training of social workers, the delivery of social work services, and the conduct of social work research. It is clear that ICTs, when thoughtfully and effectively used, can improve the various practice methods of social work (i.e., delivery of services, education, and research). Although the potential uses of ICTs have been well defined, to date there has been little discussion of the impact of ICTs on the principles of social work ethics. Provided below are specific examples of how ICTs appear necessary for ensuring the delivery of ethical social work practice. We highlight relevant aspects of the NASW Code of Ethics (1999) and provide specific examples. Ethical Principle: Social workers recognize the central importance of human relationships. 
ICTs play a major role in human relationships, which has implications for social work practice. More specifically, increasing numbers of people are engaged in relationships that are mediated by some form of ICT, including electronic messages (email), SMS text messages, social networking (e.g., Facebook), instant messaging services, or video chat (e.g., Skype). Social workers need to have an understanding of the roles that such ICTs may play in the lives of their clients. This may involve understanding how communication processes differ from face-to-face interactions, such as the use of emoticons, that is, characters and symbols used to express non-verbal cues. Social workers also need to understand that many relationships develop and may occur exclusively online. For example, the Internet allows groups to convene around a common purpose, including the provision of self-help, social support, and psychoeducation. Depending on their format, such groups may be referred to as electronic groups, listservs, forums, and mail groups. The proliferation of these groups can be attributed to their anonymity and ease of access, particularly for persons with mobility problems, rare disorders, and those without access to face-to-face groups or professional services (Perron & Powell, 2008). A number of studies have tracked the patterns of communication within online groups, and have found that many of the processes used are the same as those used in face-to-face self-help groups (Finn, 1999; Perron, 2002; Salem, Bogat, & Reid, 1997). Given the prevalence of online relationships, social workers and other human service professionals must be aware of the positive (e.g., social support; see Perron, 2002) and negative (e.g., cyber-bullying; see Hinduja & Patchin, 2008) effects they have on individual clients, with a clear understanding of how relationships are mediated by ICTs. Currently, social work curricula emphasize the importance and development of in-person relationships, while little attention is given to understanding the role of online and other computer-mediated relationships. Ethical standard 1.07: (c) Social workers should protect the confidentiality of clients' written and electronic records and other sensitive information. (l) Social workers should take reasonable steps to ensure that clients' records are stored in a secure location and that clients' records are not available to others who are not authorized to have access. Increasing amounts of information are being saved and shared electronically (Rindfleisch, 1997). While training social workers in all aspects of information security would be impractical, they need the requisite knowledge to raise fundamental questions about electronic security and to know when and where to seek additional information. This is particularly true in agencies that lack funding and resources to support information technology specialists. Without this basic knowledge, social workers can compromise the confidentiality of their client records or other important organizational resources, resulting in significant legal consequences and ethical violations. Ethical standard 1.15: Social workers should make reasonable efforts to ensure continuity of services in the event that services are interrupted by factors such as unavailability, relocation, illness, disability, or death. 
Natural disasters and personal factors can easily disrupt the continuity of social work services, and clients living in highly rural areas often lack access to services altogether. ICTs provide options to help maintain or re-establish services during times of personal or community crises, which is described in numerous disaster management reports (e.g., Government of India, National Disaster Management Division, nd; United Nations, 2006; Wattegama, 2007). For example, if a service can be delivered electronically (e.g., psychotherapy), the only barriers are ensuring that the client and service provider each have a computer or mobile device with an Internet connection. Furthermore, the utility of virtual services such as remote psychotherapy (or more generally, "tele-mental health") is not limited to times of disaster. In fact, tele-mental health is used nationally for routine care in the Veterans Health Administration, in order to provide services to veterans in underserved areas (Department of Veterans Affairs, 2008). To further illustrate the opportunity to deliver clinical services over ICTs, recent surveys estimate that about 60% of Americans used the internet to access health information in 2008 (Fox, 2009), and about half of all healthcare consumers reported that they would be likely to seek healthcare through online consultations if these services were made available (PriceWaterHouseCoopers Health Research Institute, 2009). Ethical standard 2.05: Social workers should seek the advice and counsel of colleagues whenever such consultation is in the best interests of clients. ICTs offer greater flexibility and support for seeking professional consultations, and numerous states permit online supervision. The sheer size of the online world suggests that no matter how specialized one's area of focus, like-minded colleagues can be located, and communities of practice may be established. For example, hoarding behavior is a fairly rare event in mental health services, particularly in comparison to other expressions of psychopathology (Steketee & Frost, 2003). Thus, issues in treating this problem and working with family members are rarely covered in the classroom. In the absence of ICTs, few training or consultation opportunities exist, but a simple search on hoarding as a mental disorder can reveal a wide range of potentially useful resources, including but not limited to: contact information for experts and directories on hoarding behavior; video lectures on treatment; an extensive collection of YouTube videos providing information and personal accounts; and online support groups. Similar searches of other highly specialized areas such as disaster planning in social work, forensic interviewing of abused children, and inhalant abuse have also revealed a wide range of resources that are unlikely to be available to social workers in their local area. --- Ethical standard 3.07(a): Social work administrators should advocate within and outside their agencies for adequate resources to meet clients' needs. Creative uses of the Internet are emerging to support advocacy. For example, the online service GiveAnon (http://givinganon.org/) uses the power of ICTs to allow donors to connect with recipients and contribute financially, directly, and anonymously. The ability of ICTs to mask the identity of an online person or entity is creatively used in this case to help donors provide assistance without revealing their own identity. Thus, ICTs can serve as powerful organizing and advocacy tools. 
Social workers are positioned to use this tool, and many others like it, to address various needs and solve problems. Further integration of ICTs into the curriculum on organizing and advocacy can have significant payoffs. In a recent article in a leading health services journal, Health Affairs, Hawn (2009) describes how Twitter, Facebook, and other social media are reshaping health care. At the time this manuscript was written, it was reported that Chicago's Department of Human Services had begun using a system that enables human service providers, agency coalitions and the community to manage client and resource data in real time (Bowman Systems, 2008). Having real-time knowledge of available resources is critical for making effective and efficient referrals, particularly for crisis issues, such as psychiatric and substance use conditions, and housing. Ensuring adequate resources to meet clients' needs must be considered within the overall budget of an organization. ICTs are a necessary part of most social work service agencies. Many agencies have large expenses related to their ICT needs, especially software upgrades. However, organizations can take advantage of the benefits of open source software to decrease costs related to information technology. Open source software "is a development method for software that harnesses the power of distributed peer review and transparency of process. The promise of open source is better quality, higher reliability, more flexibility, lower cost, and an end to predatory vendor lock-in" (Open Source Initiative, nd; see also Lakhani & von Hippel, 2003). Open source licenses permit users to use, change, and improve the software, and to redistribute it in modified or unmodified forms. From a user's standpoint, this software is freely available and can be modified to meet a given need. Many agencies use Microsoft Office but cannot afford the expensive software or hardware upgrades that are required over time. As an alternative, such an agency could use a freely available open source package, such as OpenOffice (www.openoffice.org), which is compatible with the Microsoft Office suite. Cloud computing alternatives, that is, software services provided over the Internet, are another option. The premise of cloud computing is that full software packages (e.g., office suites, database applications) are provided over the internet, eliminating the need for expensive equipment to be purchased and maintained locally (e.g., intranet servers; Hayes, 2008). Google, for example, provides an entire set of office-related applications called Google Docs (http://docs.google.com) that support word processing, spreadsheets, and presentations. These applications never need to be installed on a local computer or upgraded by the user, and they are compatible with other proprietary software, most notably Microsoft Office. Unusually for a major cloud computing service, it is freely available to anybody with a Gmail email account (also free), and the programs and files can be accessed from any computer with an Internet connection. Social workers should have knowledge of such resources and understand how they may be a reasonable alternative to address existing agency needs, in addition to understanding the legal issues of remote data storage and security. Ethical standard 3.08. 
Social work administrators and supervisors should take reasonable steps to provide or arrange for continuing education and staff development for all staff for whom they are responsible. Continuing education and staff development should address current knowledge and emerging developments related to social work practice and ethics. A growing body of research shows that distance education can be as effective as or more effective than face-to-face education (Bernard et al., 2004). Moreover, the educational literature is pointing to the changing characteristics of our students. For example, students of the Net Generation and Millennial Generation, who are the largest age group of consumers of social work education today, have different learning expectations and learning styles that will require social work faculty to change how they teach (see Diaz et al., 2009). Distance education is also increasingly relying on and innovating with ICTs to facilitate student-to-teacher and student-to-student interaction and collaboration. The field of social work could enhance its overall educational infrastructure through the effective use of ICTs. This would allow access to opportunities that would not be available or affordable using traditional face-to-face formats. The use of ICTs undoubtedly gives greater access to higher quality educational opportunities (Asian Development Bank, 2004; Bonk, 2009). Ethical standard 4.01. Social workers should strive to become and remain proficient in professional practice and the performance of professional functions. Social workers should critically examine and keep current with emerging knowledge relevant to social work. Social workers should routinely review the professional literature and participate in continuing education relevant to social work practice and social work ethics. Social workers face a daunting task in remaining current with the research in their area of practice. The reality is that the majority of research findings are disseminated and accessed electronically via the Internet. Many of the barriers that social workers face in accessing and even understanding the research may be overcome, in part, through the efficient and effective use of ICTs. For example, while many journals require expensive subscriptions, a growing number of journals are available online in an open access format. Open access is an important and complex movement; its immediate relevance is that it gives social workers free and unlimited access to scientific articles (e.g., www.biomedcentral.com) that have traditionally been available only on a subscription basis (see Suber, 2003). Social workers also have access to a wide range of electronic video and audio recordings, known as videocasts and podcasts, that discuss recent research developments. For example, social workers interested in psychiatric issues can easily find collections of grand rounds lectures archived by departments of psychiatry at medical schools throughout the United States. Many journals and other science-related newsrooms offer scientific findings in the form of emailed newsletters and electronic news feeds. Social workers can identify and subscribe to specific news feeds using really simple syndication (RSS) readers that link to news articles in their area of practice. These resources, and many others, are freely available. However, social workers must have competencies with ICTs in order to identify and use quality resources. 
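As a purely illustrative sketch of what such a competency can look like in practice, the short script below uses the open-source feedparser library to list the most recent items from a journal's RSS feed; the feed address shown is a placeholder, not an endorsement of any particular source.

```python
# Minimal sketch: pulling the latest items from an RSS feed with feedparser.
# Requires the third-party feedparser package (pip install feedparser).
import feedparser

# Placeholder address; substitute the RSS feed of a journal or news service.
FEED_URL = "https://example.org/journal/rss.xml"

def latest_items(url: str, limit: int = 5) -> list[str]:
    """Return the titles and links of the most recent entries in a feed."""
    feed = feedparser.parse(url)
    return [f"{entry.title} ({entry.link})" for entry in feed.entries[:limit]]

if __name__ == "__main__":
    for item in latest_items(FEED_URL):
        print(item)
```

A dedicated RSS reader accomplishes the same task without any programming; the point of the sketch is simply that the mechanics of keeping current with a practice area can be automated once a worker knows such feeds exist.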
--- FUTURE DIRECTIONS Developing ICT Competencies and Literacy Given the growth and impact of ICTs in society and their implications for social work ethics, it is critical that social workers have both competency and literacy with ICTs. While competency refers to being able to use a given technology, literacy refers to the ability to access, manage, integrate, evaluate, and create information (Chinien & Boutin, 2003). It is beyond the scope of this paper to provide a coherent and comprehensive strategy for developing social worker competencies and literacies with ICTs. However, the literature on ICTs and educational innovations in higher education provides extensive resources that are generalizable to the field of social work. Social work educators will need to be proficient with ICTs in order to design assignments, activities, and projects that reflect the real-world use of ICTs. Beyond higher education, continuing education opportunities that respond to recent technology advances are also necessary in order to help social workers stay current with the most relevant and useful technologies. For example, with basic competencies and literacies, social workers and social work students who want a further introduction to ICTs can review the complete curriculum materials for a course entitled ICTs in Everyday Life through the Open University (http://www.open.ac.uk/), in addition to having access to materials for other courses. This is part of the open education movement, which views education as a public good in which Internet technology provides the opportunity to share, use, and reuse knowledge (Creative Commons, nd). In the absence of ICT competency and literacy, social workers will miss important educational opportunities for themselves and their clients. --- Challenges and Pitfalls of ICTs Despite the continued growth and expansion of technologies, many disenfranchised and disadvantaged persons still do not have access to ICTs or the Internet. While initiatives in the United States and in other countries around the world are attempting to provide access to everybody, significant disparities within and across countries exist, particularly in African regions with low Internet market penetration (Alden, 2004). By developing a stronger focus and infrastructure around ICTs in social work education, social workers will be better prepared to participate in a range of policy initiatives to support activities that seek to address these disparities in social, economic and political participation. In the training of social workers in ICTs, it is also important to recognize that not all technologies have added value to education. For example, Kirkup and Kirkwood (2005) argue that ICTs have failed to produce the radical changes in learning and teaching that many anticipated. This underscores the importance of ensuring ICT literacy among social workers, that is, having the ability to access and evaluate information using ICTs (Chinien & Boutin, 2003). This will help social workers select the optimal tools from a wide range of options. In the provision of clinical services, social workers must be aware that clinical needs can be (and currently are being) met through technologies such as telehealth and e-mail consultations (McCarty & Clancy, 2002). Recent surveys also suggest that clients welcome these new treatment options (Fox, 2009). Further research is still needed to better understand the effectiveness of Internet-mediated services. 
For example, online psychotherapy shows promise, but the existing research on its effectiveness remains inconclusive (Bee et al., 2008; Mohr, Vella, Hart, Heckman, & Simon, 2008). The social worker using such technologies must consider how legal, ethical, and social principles apply, in addition to the advantages and disadvantages of online health services (see Car & Sheikh, 2004). Currently, the social work curriculum focuses almost exclusively on relationships in the absence of ICT-mediated exchanges, but the growth of technology within the health care system makes these matters a priority in social work education. If such issues are not addressed, the field of social work is at risk of not remaining competitive in the provision of health and psychosocial services. Moreover, without proper training, social workers in this arena of practice are at risk of delivering poor-quality services or facing legal or ethical issues. Social work researchers and practitioners should work in earnest to document both the successful and unsuccessful initiatives involving ICTs in the field. Case examples can provide the basis for understanding how ICTs can be integrated to enhance various aspects of practice. Unfortunately, the current method of disseminating new information and practice is primarily through professional journals, where the general timeline of an article (the time it takes to have a manuscript submitted, reviewed, and subsequently published) is unlikely to be quick enough to keep up with advances in technology. It behooves the field of social work to explore options to connect with other researchers and practitioners to share knowledge, particularly through social media. --- CONCLUSION The field of social work education, research, and practice is surrounded by rapid developments in ICTs. In order to ensure that social work practice upholds the standards and values of social work ethics, it is necessary that social workers be competent and literate in ICTs. This will position social workers at all levels of practice to help advance the lives of disenfranchised and disadvantaged persons through greater access to education, knowledge and other resources. While numerous ICTs have failed to realize their expected potential, the ongoing rapid growth of ICTs has created a context in which social workers cannot resist technology, but must understand the role it plays in everyday life. --- Author's note: Address correspondence to: Brian E. Perron, Ph.D., School of Social Work, University of Michigan, 1080 S. University Avenue, Ann Arbor, MI 48109. Email: [email protected].
Information and communication technologies (ICTs) are electronic tools used to convey, manipulate and store information. The exponential growth of Internet access and ICTs has greatly influenced social, political, and economic processes in the United States and worldwide. Regardless of the level of practice, ICTs will continue to influence the careers of social workers and the clients they serve. ICTs have received some attention in the social work literature and curriculum, but we argue that this level of attention is not adequate given their ubiquity, growth and influence, specifically as they relate to upholding social work ethics. Significant attention is needed to help ensure that social workers are responsive to the technological changes in the health care system, including the health care infrastructure and the use of technology among clients. Social workers also need ICT competencies in order to effectively lead different types of social change initiatives or collaborate with professionals of other disciplines who are using ICTs as part of existing strategies. This paper also identifies potential pitfalls and challenges with respect to the adoption of ICTs, with recommendations for advancing their use in practice, education, and research.
Résumé The green economy is proposed as a solution for confronting relentless and potentially irreversible ecological crises. Yet the dominant environmental solutions rest on the same logics of brutal simplification and dehumanization that sustain and reinforce existing systems of social oppression and the ongoing ecological breakdown. We describe the transformation of the planet's biophysical landscape into plantation monocultures, a model transposed without regard for local realities. The plantation, as a colonial-era template for organizing territory, is an ecological process in its own right, founded on disciplining bodies and landscapes into efficient, predictable, calculable, and controllable plots to promote commodity production through a racialized and gendered lens of dehumanization. The visible cultural, physical, aesthetic, and political singularity of the plantation plot, which presents itself as objective and neutral, offers a tangible representation of how ecological degradation takes place. We interrogate the notion of "greening" as a strategy for combating the unintended impacts of colonial plantation ecology, emphasizing that such tactics reinforce plantation logic rather than dismantle it. We begin by conceptualizing the historical plantation and its biophysical, cognitive, and corporeal organizing principles. We then offer examples of "greening" as new, more inclusive (but equally harmful) forms of plantation logics, and identify how these extensions of plantation logic can be turned around by agents of resistance, whether social movements or diseases and epidemics. We draw on certifications of palm oil production under the Roundtable on Sustainable Palm Oil in Colombia, as well as on compensatory afforestation programs designed to offset the destruction of forests in India. We conclude by highlighting how abolition ecologies can serve as a counterweight to plantation logic, foregrounding the essential relationships of self-reflexivity, repair, and collective solidarity required to divest from plantation ecology. Keywords: political ecology, capitalism, green economy, racism, Capitalocene --- Resumen The green economy is presented as a solution for confronting growing and potentially irreversible ecological collapse. Yet what happens when environmental solutions are grounded in the same logics of brutal simplification and dehumanization that maintain and reinforce systems of social oppression and ecological degradation? In this article, we describe the transformation of the planet's biophysical landscape into replicable patterns of the plantation plot. The plantation, as an organizational model rooted in the colonial era, represents an ongoing ecological process grounded in the reconfiguration of bodies and landscapes into efficient, predictable, calculable, and controllable plots in order to optimize commodity production, based on racialized and gendered dehumanization. 
The evident cultural, physical, aesthetic, and political singularity of the plot, despite its apparent objectivity and neutrality, offers a tangible representation of how ecological degradation manifests itself. In this article, we question the notion of "greening" as a strategy for countering the unintended effects of plantation ecology, arguing that such tactics reinforce the plantation model rather than dismantle it. We first conceptualize the historical plantation and its organizing principles, biophysical as well as cognitive and corporeal. We then present examples of "greening" as apparently more inclusive but equally harmful new forms of plantation logic. Finally, we identify how these aspects of plantation logic are appropriated by actors of resistance, from social movements to diseases and epidemics. As illustrations we consider sustainability certifications of palm oil through the Roundtable on Sustainable Palm Oil (RSPO) in Colombia and the compensatory reforestation programs designed to counteract the destruction of forests through the expansion of monoculture plantations in India. We conclude by highlighting how abolition ecologies can serve as an antidote to plantation logic and emphasize the necessary relationships of self-reflection, repair, and collective solidarity required to disrupt the logic of plantation ecology. Keywords: political ecology, capitalism, green economy, racism, Capitalocene --- Introduction In response to climate crisis and ecological breakdown, green transitions are being increasingly demanded by multilateral environmental organizations, scientists, policymakers, global lending agencies, and corporations alike. Proposals such as 'green growth' and a 'green economy' build on a popularized sustainable development discourse by claiming that growth can and must continue but be 'smarter' at internalizing unintended environmental side-effects, or externalities, into the economy. Renewable energy, certified niche products, and financialized Environmental, Social and Governance (ESG) portfolios are examples of how green products are leveraged to generate and capture new value and profit. Yet the production of goods and services ("green" or otherwise) has its own ecological consequences. The desire to grow greener has meant the active manipulation of landscapes and labor relations to generate measurable (and lucrative) productive commodities in the name of sustainability (Neimark et al., 2021; Voskoboynik & Andreucci, 2022; Bigger & Webber, 2021). The "greening" agenda has not considered its own ecological effects beyond marginal efficiency improvements; this is because the underlying logic that drives intensive production systems erases and normalizes the global historical and colonial foundations of ecological breakdown (Sultana, 2022). This has been eloquently articulated by political ecologists and critical geographers in past decades (e.g. Sullivan, 2018; Andreucci et al., 2017; Pulido, 2017; Dempsey & Suarez, 2016; Büscher et al., 2014; Fairhead et al., 2012; Bakker, 2010; Smith, 2010). In this article, we analyze an organizational template that has shaped and continues to shape landscapes and labor relations over the past five centuries: the plantation. 
Plantations, historical and contemporary, are situated in particular geographies and linked to expansive supply chains and markets. The uniform monoculture of plantation ecology attempts to scrub away any historical register of place by treating land as terra nullius, devoid of cultural significance, use or value other than for the extraction of specific commodities (Lindqvist, 2014). Consequently, people and non-human nature are violently detached from their communities and relations, extending monoculture beyond a production model. Here monoculture also refers to the imposition of singular ways of understanding the world and patterns of thought; universal, linear, and fixed conceptions of time and space (e.g. Shiva, 1993; Castree, 2009; Escobar, 2018); the imposition of a 'settler' distancing from nature ('something for the taking') (Burow et al. 2018); and structured and hierarchized categories of classifying people along racial, caste, ethnic, and gendered lines to optimize the instrumentalization of their labor (Ferdinand, 2019). However, this is not a complete or smooth process, and it is rife with struggles for autonomy and subversion from people and polycultures alike (Tsing, 2015). We explore these contested landscapes through the lens of plantation ecology. Plantation ecology stands in stark opposition to ecologies that generate the conditions for abundant life to thrive, or a world where many worlds can co-exist (Escobar, 2018). Plantation ecology refers to the historically and geographically situated plot, defining how and where capital production intervenes in the web of life, while attempting to enroll emergent life into new plantations. Rather than amorphously reproducing the wheel as 'Plantationocene', plantation ecology should instead be understood as the set of dehumanizing ecological relationships that define capital accumulation in the web of life, or 'Capitalocene' (e.g. Moore, 2015). Since plantations signify a spatialized geography or physical plot of capital production, the term 'Capitalocene' more appropriately characterizes the underlying processes qualitatively (and irreversibly) transforming the web of life. These include the degradation of dehumanized bodies as cheap racialized labor, the violent homogenization of whole landscapes, and "just in time" production of new commodities to power global markets (Wolford, 2021; Davis et al., 2019; Sapp-Moore et al., 2019; Moore, 2015; Haraway, 2015; Haraway et al., 2016; McKittrick, 2013). Plantations are like templates shaping how commodity production is physically mapped onto the landscape, seascape, and even (increasingly) the spacescape. While maintaining the homogeneity of monoculture, they widen and deepen the commodity frontier (the process of accumulating value into and through new goods and services) and tap into emergent values and superficial representations of virtue and aesthetic judgements of beauty and taste. Plantation ecology has its roots in the colonial enslavement of African people as dehumanized, laboring bodies to produce raw goods in colonized landscapes for manufacturing hubs in urban centers in North America and Europe (McKittrick, 2013). Colonial expansion gave wealthy capitalists in Europe license to cast colonial subjects as darker-skinned sub-humans who were indolent, ignorant, dangerous, immoral and hence equivalent in stature to manipulable objects of nature (Koshy et al., 2022). 
European elites also leveraged racialized exploitation in overseas colonies to sustain class-based exploitation of working-class laborers within Europe. The abject dehumanization of millions of people through chattel slavery ensured a reliable and gratis labor force to funnel trillions of dollars in accumulated value from supply chains to the European capitals and their colonial outposts (Nally, 2011; Craemer et al., 2020). The profits and power relations generated by this system continue to shape the world today. Between 1990 and 2015, wealthy nations appropriated 12 billion tons of raw materials, 822 million hectares of land, 3.4 billion barrels of oil, and 188 million person-year equivalents of labor from former colonies and other nations distinguished along the racial color line (Hickel et al., 2022). Devaluation, or making inputs to production worth less, is a functional property defining the ecological simplification and decimation of non-commodifiable life on the plantation. While quantification of resource and labor appropriation is beyond the scope of this article, we illustrate how "greening" solutions continue to embed devaluation, or the 'cheapening' of nature, life, and labor, as an organizing principle, further sustaining and reinforcing plantation ecology as the outcome of organizing land and labor for the elite capture of value (Moore, 2015). In the next section we further expound on plantation ecology as an organizing principle causing ecological breakdown, irrespective of whether commodity production is "green" or otherwise. Borrowing from Ritzer (2018), we frame our reflections around four plantation design principles of efficiency, predictability, calculability, and control of both resources and labor for optimal commodity production. We argue that, by optimizing the production of commodified goods and services (eco-friendly, socially disruptive, or otherwise), these principles characterize an ecology in their own right and attempt to further cement monocultural social and natural environments. Building from the four design principles of the plantation, we then illustrate in Section 3 how "greening" aids the expansion of plantation ecology by geographically widening and deepening the commodity frontier. In Section 4 we draw upon examples from afforestation in India and sustainability certification in Colombia and Indonesia. These examples illustrate how resistance emerges amidst the continual failures of monocultural uniformity repackaged as "green." We conclude in Section 5 by inviting space for relationships of self-reflection, repair, and solidarity needed to build the abolition ecologies that obstruct the will towards singularity and sameness and the violent oppressions these entail. We invite researchers, activists, and civil society, inside and outside academia, interested in interrogating "green" solutions to consider how plantation ecology is a common denominator exacerbating climate and ecological breakdown. Resisting and dismantling plantation ecology can form a conceptual basis for building place-based solidarity against systems of oppression and for regenerating abolition ecologies. Abolition translates as restorative justice and the freedom to live free from environmental harm, racial discrimination, unjust gendered forms of labor, class subjugation, and constant threats of incarceration (Heynen and Ybarra, 2021; Gilmore, 2007; Pellow, 2019). 
Recognizing what Ferdinand (2019) claims is the tendency of environmental thought and anti-colonial thought to speak past each other, we hope these dialogues nurture transformative and collaborative thought and action. --- Plantation ecology The logic of the plantation operates as an organizational template historically shaping societal and ecological relations through the discipline of commodity production and pervasive dehumanization. Plantations are grounded in specific territories but are multiscale and linked globally across supply chains as well as through the exploitation of racialized and gendered bodies as dehumanized labor, whose living relational connections to territory and knowledge systems are repackaged and deadened into resources for commodity production (Yusoff, 2018). The plantation should be understood through the manifestation of a temporally and spatially specified plot, making and shaping monocultural environments with the express purpose of capital accumulation (Yusoff, 2018; Wynter, 1971). In turn, capitalism does not merely generate ecological consequences or "externalities," but is itself an ecological process generating and profiting off its own internal contradictions. This latter point is what has been termed capitalist world ecology: the dual interaction of human activity and environmental change as the production of capital in the web of life (Moore, 2015). The outcome of this globalized world ecology is the vast terraforming of the earth's surface, extending to monocultures of industrial agriculture and timber, processing and manufacturing factories, and mine sites. Less often perceived as extractive, but operating under the same principles of enclosure, are tourist resorts and nature parks, gated communities, and whirring, energy-intensive "cloud" servers. Seascapes are also integrated through transnational shipping networks, timed to brutal same-day delivery schedules (Ajl, 2021). Commodified outputs from the plantation emerge across a series of productive processes including the direct production of goods and services across multiscalar supply chains, their financial derivatives (e.g. futures trading, crop insurance markets, green bonds, climate-smart adaptation funds, speculative climate finance), the disposal of wastes (e.g. the e-waste, circular economy, and recycling industries), the securitization and militarization of plantation borders (e.g. industrial prisons and detention centres), and secondary appropriation of surplus value through activities such as rent seeking (e.g. carbon offsets, recreational tourism, eco-gentrification in urban areas). As Wolford (2021: 7) states: "class, gender, and racial divisions were not invented for the plantation but in many ways, they were perfected there -strict hierarchies were laid down, justified and often internalized." It is important to emphasize that the "plantation" is not a synonym for "capitalism" but has been a kind of laboratory to position class relations, including racialized and gendered forms of dehumanization, as central to ordering people and nature alike for optimal commodity production. The establishment of racialized hierarchies of labor is but one (extremely brutal) process of class differentiation in optimizing the production and accumulation of capital (Koshy et al., 2022). Gendered divisions of labor underpin class differentiation through the exploitation of social reproduction and form the kernel of modernity's patriarchal origin (Mies, 2014; von Werlhof, 2013). 
Racialized and gendered subjugation to less-than-human status positions workers as being just as manipulable as presumed non-human natures, perceived as passive resources (McKittrick, 2015; Yusoff, 2018). Assessing the extent to which green strategies abolish class and patriarchal relations requires realistically examining whether these initiatives ecologize anew in particular places and settings, or conversely further pattern plantation ecology in more deceptively inclusive ways. In what follows, we first conceptualize four organizational principles of plantation ecology that characterize its precision and replicability. Then we analyze how these principles widen and deepen commodity frontiers, and the role that "greening" plays in expanding plantation ecology. We demonstrate that "greening" not only fails to disrupt global and multiscalar links of plantation discipline, but actively aims to reinforce and expand them in the name of minimizing risks of disruption (i.e. sustainability as sustaining the status quo). --- The organization of plantation ecology Sociologist George Ritzer (2018) noted how every aspect of society was rapidly following a blueprint resembling the experience of being served in a McDonald's fast-food restaurant. He identified four intertwined organizational principles, which Desmond (2019) traces through to the cotton fields of slave-owning plantations of the U.S. South and contemporary capitalist work culture. These four dimensions, which we now turn to, are efficiency, calculability, predictability, and control (Ritzer, 2018). Efficiency refers to obtaining the maximum amount of product or objective in the shortest time or at the lowest cost possible. In theory, maximizing benefit and minimizing cost is a desirable objective, especially in light of the rapid social and ecological hemorrhaging now occurring. When efficiency is applied in the context of plantation ecology, it refers to maximizing commodity production by reducing or further depriving the natures that make up the plantation's workforce (human and non-human) or by maintaining output through reductions in labor and resource costs (Shove, 2018). There is never a genuine attempt to become more efficient at a systemic level when unlimited growth and capital expansion are the aim, but only attempts to make material and energy extraction for commodity production quicker and more optimized. In this way, efficiency gains are immediately translated into new investable resources to expand production. This is a contradiction that English economist William Stanley Jevons had already identified in 1865 (Dale et al., 2017). Efficiency in achieving desired objectives, with either minimal cost, maximal potential to extract profit or both, is a predominant feature of economic justifications associated with "internalizing" environmental externalities. For the green economy, efficiency is invoked through the argument that the world's life support systems can and must be protected if (and only if) their expected returns are higher than any alternative use. For instance, Waldron et al. (2020) highlight how "nature protection" as a green financial market could increase total global economic output by upwards of US$454 billion per year by 2050 and possibly up to US$1 trillion annually if remaining areas of the earth not currently under industrial production could be framed as a "single underexploited type of asset" (p. 11). 
The authors employ this efficiency-oriented argumentation to underpin the Global Biodiversity Framework adopted at the most recent Conference of the Parties to the Convention on Biological Diversity in Montreal (COP15 in 2022). The second dimension is calculability. On the historical plantation, enslaved people's laboring potential was meticulously documented by plantation owners according to age, gender, and health status. In the shaping of plantation ecology, calculability is the capacity to quantify every aspect of the process of "product" delivery in terms of measurable indicators and targets, including increasingly creative ways to represent relational and subjective experience through quantified parcels of data. Desmond (2019) argues that the "cold calculation" in the control and precision of the laboring body has not altered since the days of exacting maximal labor from each enslaved person on historical plantations; only the technology has become more sophisticated. These practices include surveillance of workers' emotional state to optimize productivity (e.g. Kaklauskas et al., 2011), upwards accountability and hierarchical reporting, achievement of ever-precise indicators and targets, and the overall precise quantification of output per unit of salary paid. Mbembe (2019: 14) refers to such measurement as a process in which all life itself becomes a "computational object" to be inserted into an algorithm to minimize costs and maximize labor potential. For the green economy, calculability is the capacity to quantify every aspect of the process of "product" delivery in terms of measurable indicators and targets, including relational and subjective experiences. Calculability lies at the heart of the logic underpinning carbon emissions trading schemes, which invent "measurable 'equivalences' between emissions of different types in different places" irrespective of context (Lohmann, 2009: p. 81). To illustrate this absurdity, a carbon molecule emitted by a hospital treating desperate war-torn patients in Aleppo becomes both qualitatively and quantitatively equivalent to a carbon molecule from a billionaire's yacht cruising the South Pacific. Or, in India's compensatory afforestation programme, the loss of 166 sq km of tropical rainforest on Great Nicobar Island is planned to be compensated with an equivalent amount of monoculture tree plantation 2,400 km away (e.g. Narain, 2023). Climate loss and damage compensation, forest or wetland loss, or even discussions on historic loss and damage similarly tend to quantify otherwise incommensurable physical loss, cultural genocide and ecocide through monetary compensation or arbitrary equivalencies, excusing actors from systemic change and leaving power relations unchanged. The third dimension, predictability, ensures that product delivery or public policy is homogenized for consistency and buy-in. Without collapsing the difference between the two, the disciplining of laboring bodies that deviate from a standardized formula of expected future production, aligned to mechanical clock time, characterizes both the abject violence against laboring bodies on the historical plantation (e.g. Smith, 1997) and present-day industrial production discipline (e.g. Nanni, 2017). The insidious case of 133 enslaved Africans thrown overboard from the slave ship Zong in 1781 so that the owners could collect on insurance claims illustrates how important the predictable delivery of private, financialized human bodies was for plantation ecology (Sharpe, 2016). 
Similar to how milk gets dumped, or pigs get slaughtered, during bottleneck delays in the supply chain (such as during the COVID-19 pandemic), rough seas, mutinies, and weather delays threaten(ed) the predictability of fully productive, dehumanized laborers to meet expected production of plantation crops. Producing predictable outcomes out of increasingly unpredictable climates continues today in the green economy. Paprocki (2018), for instance, illustrates how "climate adaptation" projects have been strategically targeted to depopulate coastal areas of Bangladesh, dispossessing small-scale fishers of their territories and cultural sovereignty and sucking them into precarious wage-labor relations in peri-urban slums, while simultaneously awarding contracts for lucrative sea wall construction projects financed by foreign investors. Like the rough waters of the Atlantic during the slave trade, climate change adaptation has become an opportunity to turn unpredictable risk into new value streams and new spinoffs of the plantation. Predictability is also crucial to justify returns on eco-investment associated with strategies like climate finance and carbon futures markets. For instance, commodity finance analysts have assessed the predictability of returns in carbon and ESG-related investment portfolios vis-à-vis other capital markets (Cornell, 2021; Cappucci, 2018). Verifiable offsets that avoid double-counting and ensure additionality (e.g. carbon sequestration that would not have happened without the offset) are major conundrums for climate financiers. Speculative finance in climate-smart real-estate and infrastructure depends on predictable returns on investment, irrespective of context, culture, history, climate, or underlying socio-political tensions and dynamics (Scoones and Stirling, 2020). The fourth dimension refers to control in maintaining the conditions of plantation (re)production and aligns with Mbembe's (2003) necropolitics to understand how hegemony on the historic plantation is sustained, as well as how rights to thrive are distributed to a few at the expense of the (slow) death by exploitation of countless others. Although control is presented here as a dimension parallel to the others in sustaining plantation precision and replicability, it might also be conceived as a form of biopower deployed across physical (e.g. military-industrial and surveillance technology) and cognitive landscapes to internalize or normalize the other three dimensions. Control operates through adherence to established path dependencies, including along lines of racial purity, patriarchy, and prioritizing settler futures (Mitchell and Chaudhury, 2020; Duncan, 2019). These include coercive social norms, formal laws and regulations imposed by the state, scientific expertise, and nation-building discourses that define what are considered appropriate courses of action within the cognitive, cultural, and physical boundaries of hegemonic plantation logics. As political ecologists have long argued, the apparatus of science is weaponized to both build further on and improve the technics of governance that ultimately maintain or strengthen control over society (Scoones et al., 2015; Robertson, 2012; Jasanoff, 2004; Mumford, 1964). In this sense, the dimension of control is a clear exercise of power that makes it appear as though the contradictions of the plantation can always be 'rendered technical' (Li, 2007) and managed within the confines of the plantation itself. 
In these contexts of discursive, political and economic hegemony, ecology is easily weaponized towards ecofascist agendas (Moore and Roberts, 2022). "Invasives" on the plantation, for instance, reflect non-human natures like pathogens, pests, and parasites that threaten expected yields and ultimately commodity futures markets. They may also include perceived threats from Indigenous and other subaltern groups whose inclusion in the club of "Humanity" (capital H) stands in the way of more efficiently exploiting their labor. The convergence of xenophobic nationalism and neoliberal capitalism (e.g. Arsel et al., 2021) harks back to a supposed 'golden era' of the historical plantation in which all resistance could be violently suppressed. "Greening" interventions cannot be viewed in isolation from the expulsion of migrants, construction of border walls, for-profit anti-black carceral plantations continuously churning out cheap labor, control of women's reproductive rights, and everyday intimidation by police and paramilitary forces. These work in concert to reinforce plantation discipline, even in more eco-friendly and net-zero forms, and the ongoing colonial and patriarchal project that they underwrite (Arboleda, 2020; Federici and Linebaugh, 2018; Ferguson and McNally, 2015; Gilmore, 2007). --- Expanding the commodity frontiers of the plantation through the green economy While the four principles of plantation discipline described above help us to understand the culture and technique shaping plantation ecology, the frames of commodity widening and commodity deepening illustrate how plantation discipline operates geographically and historically. The process of enrolling biophysical materials and laboring bodies into production takes place at the commodity frontier (Swyngedouw, 2006). This frontier refers to an "underutilized" "outside", where relational values between people and non-people are violently subordinated to those of property and commodities for exchange value (Moore, 2010). The advancement of this frontier is manifested as resource imperialism, proceeding through militarized expansion across territorial space and dispossessing people of their sovereignty and relational entanglements to life (Harvey, 2005). The expansion of this frontier to new places is what Moore (2015) refers to as "commodity widening." Commodity widening usurps land and its inhabitants and attempts to fold them into the efficient, calculable, predictable, and controllable social relations required for capital accumulation. Meanwhile, "commodity deepening" refers to hyper-intensified processes of producing commodities that are more refined, adaptive, and resilient to crises, without necessarily expanding production geographically. In the two sections below, we connect each of these frames with our discussion on plantation ecology and the green economy through the examples of climate debt and climate-smart agriculture respectively. --- Commodity widening The relation of commodity widening processes to historical plantations explains their geographic spread, particularly through the settler colonial occupations of the Americas, resulting in Indigenous genocide and an orchestrated global slave trade that set the wheels of white supremacy into motion. As Zuberi and Bonilla-Silva (2008) argue, once Africans were emancipated from slavery in the West in the 19th century, resource imperialism and colonial subjugation continued and accelerated across commodity frontiers in Africa and Asia during the 20th century and beyond. 
Commodity widening usurps land and its inhabitants and attempts to fold them into the efficient, calculable, predictable, and controllable social relations of the industrial plantation as described above. Banoub et al. (2020) identify how commodity widening takes place through a process of discovery, selection, and exclusion in the acquisition of vast new terrain for commodity production. The authors emphasize the spatial and temporal malleability of material natures as a function of their physical qualities as well as the labor required to optimize the production of surplus value. Goods and services produced under the green economy, such as lithium batteries for electric vehicles or carbon offsets from tree plantations, follow the practice of commodity widening, beginning with the enclosure or capture of lithium deposits or sequestered carbon stocks, moving them from common or customary land relations into private property regimes. Consequently, commodity widening is prone to what has been termed "green grabbing" (Fairhead et al., 2012). Commodity widening is tightly linked to low-interest bank loans and expanding relations of debt. Bank loans by colonial creditors and resulting debt bondage financed vast slave-owning plantations for commodity crops in the US South, the Caribbean, South Asia, North Africa, the Malay Peninsula and elsewhere (Upadhyaya, 2004; Harvey, 2019). In turn, debt-fueled commodity production to pay back creditors has pushed the frontiers of commodity expansion into new territories, disrupting already existing human-nature relationships and generating further ecological degradation as well as new speculative opportunities for investment in green financing like climate-smart agriculture to address the continuous environmental contradictions of production. These new speculative opportunities and the low-interest loans they encourage further the debt-driven expansion of the agricultural commodity frontier, kicking the can of environmental problems further down the road; the cycle of debt repayment and commodity frontier expansion continues ad nauseam. Since the 1990s, national debt relief through conservation agencies working with creditors in Europe and North America has been a popular approach for nature protection.
These 'debt for nature' swaps involve writing-off sovereign debt by a creditor country, or a conservation NGO working on its behalf, in exchange for conservation projects, thus offering economic "wiggle room" for countries to invest in ecological transitions (Svartzman & Althouse, 2022). Countries must achieve conservation outcomes in these swaps, like the expansion of protected areas, by specific deadlines, and therefore must raise sufficient conservation finance to do so, often in the form of government loans or bonds devoted to terrestrial (e.g. "green") or marine (e.g. "blue") conservation agendas. These have grown in the wake of the COVID-19 pandemic (Akhtar et al., 2020), with new deals being arranged with Belize, Zambia, Ecuador, and Barbados. Industries adept at exploiting the value generated by conservation imagery qualitatively transform the previous ensemble of situated ecological relations to put the terms and conditions of capital accumulation first. The outcome of these swaps is the exchange of one type of debt for another, allowing holders of green or blue bonds to profit from lucrative nature conservation strategies, including through real-estate speculation tied to conservation-based tourism. While provisions can be made to foreclose social harm to marginalized populations, there is no requirement that this takes place. Similar strategies of debt-driven "greening" have come in the name of so-called "nature-based solutions" that disguise large-scale infrastructure projects under the banner of environmental consciousness (Chausson et al., 2023). Commodity widening also takes shape through the grabbing of untapped rent value from nature (e.g. Andreucci et al., 2017; Fairhead et al., 2012), fueling ecologically and socially damaging economic spillovers like real-estate speculation (e.g. Gillespie, 2020). Rent here refers to the instituting of property rights used not exclusively for new commodity production, but to extract value from aesthetic qualities, including the nebulous notion of being "nature positive", prime locations, cultural characteristics, carbon sequestration potential, or other positive externalities (Andreucci et al., 2017). This may result in the exchange of carbon credits or the certification of products or landscapes as "eco-friendly." Commodity widening through value grabbing from rent caters to morals, ethics and even calls for justice. Capitalizing on rent value requires a reserve army of low-skilled and precarious workers to manage landscapes for nature-based solutions, palatably labelled as green jobs (e.g. Neimark et al., 2021). The efficiency, calculability, predictability, and control dimensions of plantation ecology are best realized in locations where labor costs are low and the consumptive value of treating nature as an asset class is highest.
--- Commodity deepening
Commodity deepening occurs when spatial extensification into new territories is no longer possible. The commodity frontier then advances through intensification that ramps up and hastens production. This involves technological innovation to further capitalize on otherwise difficult-to-obtain cheap natures and labor potential, to identify and exploit surplus value, and to further centralize control (Arboleda, 2020).
In the case of agriculture, this commodity deepening process takes place through mergers or agreements between retailers, fertilizer and pesticide suppliers, shipping and seed companies, big tech digital agriculture platforms, multilateral banks, and "sustainable" development finance (Banoub et al., 2020). Some examples of commodity deepening include: artificial intelligence technology to identify and extract difficult-to-reach mineral ores and oil sands; genetic breeding of climate- and pest-resilient crops; optimized exploitation of (now depopulated) commercial fish stocks through aquaculture; shortened poultry production schedules through injections of ever more specialized hormones; and the use of drones and field sensors that provide data on soil conditions and fertilizer requirements and monitor pests (GRAIN, 2021). In terms of labor, commodity deepening has meant greater surveillance of individual productivity, stronger captivity of workers through dependency on high-interest credit lines and mounting debts, greater fragmentation of laboring classes through outsourcing across global supply chains, and the disruption of meaningful union organizing of workers across these disparate chains. In short, plantation ecology is further deepened and reinforced through greater control over productivity to enhance the pace, direction, and consistency of surplus value generation (Banoub et al., 2020). One example of commodity deepening of plantation ecology emblematic of the green economy is the deployment of climate-smart agriculture. Touted discursively and institutionally by governments, agribusiness, and multilateral development and aid agencies, climate-smart agriculture leverages the branding of climate solutionism to further intensify industrial crop production through bioengineered crops. It takes already existing practices like herbicide use for pest resistance and rebrands them as "climate-smart" because they reduce the need to till soils and release stored carbon (GRAIN, 2021). Yet enrolling these rebranding techniques and engineered technologies into plantation production systems directly and indirectly exacerbates the ecological breakdown they are meant to address. For instance, applying formulated herbicides to target particular pathogens has, in some cases, permitted these very pathogens to evolve and mutate in ways that adapt to the genetic selection of whole crops or livestock engineered to thrive with continued applications of these herbicides or antibiotics (Wallace, 2020). As the recent COVID-19 pandemic painfully demonstrated, these risks (e.g. pathogen outbreaks and climate change) are ultimately offloaded onto workers of the plantation. Consequently, production relations of the plantation not only do not change but are further securitized and entrenched. Commodity deepening thus exerts a discursive, institutional, and material power to obscure existential risks that might alter the discipline of plantation ecology (Newell and Taylor, 2018). It instead redeploys concepts like regeneration and climate resilience in service of justifying new or existing commodities produced under already existing modes of plantation discipline, monocropping, financialized speculation and debt. Above all, commodity deepening does little to nothing to alter uneven patterns of value accumulation that accrue to end users of supply chains rather than being returned to workers of the plantation (both human and non-human).
While marginal material and energy efficiencies may result, the overall outcome is the expansion of yields and more efficient "just-in-time" delivery to retailers and consumers, especially when the same digital technologies are tied to algorithms for consumption preferences before consumers even know they desire something (GRAIN, 2021). Ultimately, such green branding for material and energy efficiencies is overwhelmed by faster economic throughput, or the rebound effect, making it even more difficult to transform the production relations of plantation ecology that cause social and ecological harm (Nasser et al., 2020). Commodity deepening is metaphorically the act of digging a deeper hole to pull oneself out of it. Regardless of the extensive or intensive nature of surplus value generation (e.g. commodity widening or deepening), the subjugation of human and non-human bodies as devalued natures is crucial to the process of how plantation ecology becomes inscribed as the Capitalocene in the web of life (Moore, 2015). Figure 1 illustrates the characteristics of plantation ecology as thus far described. Both commodity widening and deepening involve financial speculation on expected future profits in light of uncertainties and risks. In doing so, both processes attempt to hold the future hostage by already foreclosing the agency of unborn non-human natures and other lifeworlds (Mitchell and Chaudhury, 2020; Whyte, 2017). The key here lies in the attempt. While this does not deny the variable success of erasing and subduing lifeworlds as novel, disruptive, or innovative assets produced in plantations, it also reveals the systemic failures that are working to undo plantation ecology itself.
--- From "greening" ecology to subjecting "green" to ecology
For all their seeming pervasiveness, plantation ecologies are contradictory. By constantly generating social and ecological harm, they also generate the conditions to undo themselves. Yet the crises they produce also become new opportunities to continuously subject people and nature, as cheapened and discardable workers, raw materials, or wastelands, to make way for new "eco-friendly" and inclusive plantation products: everything from climate change crop insurance for those willing to pay the premiums to LGBTQ+ friendly and accredited real estate companies that contribute to urban gentrification and a growing housing crisis. The issue is not the intention towards inclusivity; it is rather the lack of attention to the political economy within which such inclusivity resides. The way that plantation ecology reduces diversity to monoculture, even as it depends on such diversity as the substrate to reproduce, sustain, and expand the deadening and dehumanizing logics of monoculture, is what Katherine McKittrick (2013: 5) calls an "oppression/resistance schema," giving the plantation an inbuilt capacity to maintain itself by feeding off its own contradictions. Yet subjecting novel branding strategies to scrutiny for how they replicate plantation ecology removes the "green" clothes from the metaphorical emperor and opens up possibilities for more fundamental ecological transformations. One way to appreciate the relational character of plantations is to better understand how and by whom they are unmade. This requires understanding how situated sites of liberation and freedom are established, even if ephemeral (Gilmore, 2017).
In their review of Johnhenry Gonzales' (2019) Maroon Nation, Heath (2022) describes Gonzales' account of how autonomous peasant economies of formerly enslaved workers on sugar plantations in Haiti transformed the production relations of plantation ecology. This was the result of political struggle to reassert specific definitions of freedom tied to place, and of the formation of class consciousness and solidarity that emerged out of that struggle and culminated in the Haitian Revolution. Such consciousness continued to foster resistance against the efforts of post-Independence elites to reassert plantation discipline, including in the discursive use of so-called "free" labor. Heath describes how the autonomy and self-sufficiency of maroon communities facilitated escape and re-capture into the plantation economy through the liminal reappropriation of the plantation itself, for a moment in time and space reasserting West African cultural traditions within the territory. Elsewhere, Glover and Stone (2018) describe how the terraced landscapes of wet rice cultivation by the Ifugao in the Cordillera mountains of northern Luzon in the Philippines were the outcome of social, cultural, and spiritual resistance to colonial (Spanish) and imperial (American) attempts to reassert plantation ecology in the 19th and 20th centuries. A morphologically distinct landrace of rice (called tinawon) sustained the Ifugao and gave them cultural meaning and purpose in reclaiming their freedom from oppression. In these contexts, the notion of a plantation can no longer be totalized through uniformity, precision and replicability, dehumanization or value accumulation; rather, such landscapes become sites of life generation premised on liberation from oppression and control. Tinawon rice is typically grown in only one harvest a year, combining deep spiritual connection and cultural meaning for the Ifugao, their political structure and economic relations, and the unique climatic, altitudinal, and ecological conditions of the Cordillera mountains (Ibid.). The close relation (or indeed complicity) between human and non-human resistance to plantation ecology that these historic examples provide opens new avenues of reflection in the face of ecological breakdown and so-called "green" solutions. As we have thus far described, "greening" strategies have tended to entrench plantation ecology through the generation of new forms of value capture, including through novel forms of resource and labor devaluation to produce "green" goods and services. But how do affected workers on the plantation (both human and non-human) engage in marooning practices by taking advantage of the increasing social and ecological dislocations that continuously emerge from these so-called solutions? How might abolition from the ruins of the plantation be fostered by weaving new kinds of relationality, class consciousness, and solidarity to build political power (Stoetzer, 2018)? How might the "green" plantation be resisted by fostering alternative ecologies of liberation and abolition? We now turn to two examples of "greening" interventions that reproduce plantation ecology, yet also involve actions of resistance and defiance. These examples are summarized in Table 1. In these examples, we refer to our own empirical research (both published and unpublished), drawn from interviews conducted between 2017 and 2019 (for compensatory afforestation in India) and between 2021 and 2022 (for the RSPO). We subsequently conclude with some lessons that point towards abolition ecologies.
b) Oil palm smallholders, who form the backbone of oil palm growers, can hardly obtain sustainability certification due to prohibitive costs and limited knowledge of certification benefits (Abazue et al. 2019). Plantation ecology characteristic: Predictability. b) Increasing public awareness of the greenwashing of ecological and social impacts from sustainability-certified palm-oil-based biofuels, casting suspicion over public manipulation (Kukreti, 2022). Table 1: Features of two "greening" interventions that: a) embed or reinforce plantation ecology through their theory and implementation within the so-called "green" economy, and b) generate contradictions that resist and redirect plantation ecology.
--- Green certification schemes: The Roundtable on 'Sustainable' Palm Oil (RSPO) and its undoing
The rapid expansion of palm oil monocultures by transnational and local firms in Southeast Asia, Central and West Africa, and more recently, Latin America has caused the erasure of social-ecological histories along with the mass-scale incorporation or displacement of local communities in forest biomes that are among the richest in terms of biodiversity (Pye, 2019; McCarthy and Cramb, 2009). Dehumanized, laboring bodies brought into the logics of the oil palm plantation have been widely devalued, differentiated according to gender, nationality, ethnicity and class status, and subjected to sustained forms of exploitation (Bissonnette, 2013; Li, 2011). In Colombia, in a context of civil war, oil palm plantations have provided the justification and financial means for military and paramilitary forces to enclose and secure large tracts of land, dispossessing thousands of their territorial and cultural autonomy to further the accumulation of lands for commodity production (Hurtado et al., 2017; Maher, 2015; Palacios, 2012; Potter, 2020). In response to growing scrutiny of the more visible aspects of ecological destruction across palm plantation regions (e.g. orangutan deaths, forest and peat soil fires and haze, massive contribution to climate disruption) as well as of labor practices, certified "sustainable" palm oil has become a salient public relations 'fix' for the industry (Pye, 2019). The Roundtable on Sustainable Palm Oil (RSPO) is an initiative launched in 2004 by the WWF, the Malaysian Palm Oil Association (MPOA), Unilever, Migros (a retailing and refining chain), and AAK (a vegetable oil producer) with the goal of promoting the use and production of harm-free palm oil. The RSPO provides a platform for oil palm companies to engage in a supposedly third-party certification process that measures compliance with rules and standards approved by the consensus of its members, such as zero burning, herbicide use reduction and respect for labor regulations (Bain & Hatanaka, 2010). Its definition of sustainability relies on applying the right techniques and best practices, such as selecting the best seeds and planting materials, technical fertilization based on soil surveys and nutritional assessments, adequate use of agrochemicals, and attention to drainage and water systems, to increase the productivity, efficiency and profitability of RSPO members. 5 Without changing the production system but using "green" as a license to both widen and deepen commodity frontiers, the RSPO offers a novel survival strategy for the dehumanizing logic of the plantation.
Using the sustainability narrative, the RSPO has become the most prominent initiative to secure market shares for palm oil and to assert large companies' social and environmental corporate responsibility and ESG portfolios. It effectively contributes to creating a "green" rent value within the broader political economy of intensive oil palm production and secures access to markets in places (like Western Europe) where consumers have higher purchasing power and environmental awareness, in what has been termed 'ethical consumerism' (Pye, 2019: 220). The RSPO further legitimizes the idea that plantation agriculture can be regulated voluntarily by companies, provided consumers are willing to pay more for an eco-certified product produced by the same plantation discipline that in fact never gets called into question. Opposition to plantation logics is thus effectively defused through novel and flexible strategies that co-opt socio-environmental concerns and ultimately serve to extend the plantation. The RSPO's principles and criteria of sustainability do not address the structural problems of the industry, including the land conflicts and dispossessions, labor exploitation, human rights violations, and environmental degradation caused by the industry's continuous expansion (Pichler, 2013). RSPO certifications and membership can also be used by palm oil companies to legitimize and consolidate illegal land dispossessions and accumulation, and to greenwash histories of violence, discrimination and conflict, as is illustrated by specific palm oil companies in Colombia associated with paramilitary violence, forced land dispossessions, and death threats against land claimants and Indigenous populations near plantations (Comisión Intereclesial de Justicia y Paz, 2015; EIA, 2015; Somo & Indepaz, 2015). The combined apparatus of government, private sector, organized crime, paramilitary groups, and scientific institutions at the helm of the green economy falsely equates savagely simplified plantation discipline with the kinds of ecological plurality it claims to be regenerating. Oil palm production, however, also exists outside the logic of the plantation. Despite the profound disruptions brought by Western colonialism to the complex ecological relations developed by communities throughout history, small family farmers have remained the backbone of oil palm production in most parts of the world (RSPO, 2020). Even in Malaysia and Indonesia, where oil palm was initially introduced in the late 19th century as a plantation crop grown in centrally managed large-scale systems, it was rapidly taken up by hundreds of thousands of small family farmers and grown in diverse ecological systems (Bissonnette & De Koninck, 2017). In Northeast Brazil under Portuguese colonialism, enslaved Africans brought with them oil palm seeds, which eventually enabled the emergence of a distinct Afro-Brazilian landscape. It produced the agroecological region now referred to as the Palm Oil Coast, Costa do Dendê, a clear marker of agency and of cultural and territorial reappropriation (Watkins, 2015). Despite the horrifying logic of the plantation, the crop itself and the human relations formed around it can never be fully reduced to a predefined outcome premised on a factory-model logic of production. Where the "green" plantation logic manifested through the RSPO shows its limits is precisely in the certification of small family producers or smallholders.
The diversity and complexity of tenure arrangements, cultivation practices and access to information (Jelsma et al., 2017) render small-scale production less visible to the uniformity of "greening" practices. This is not to say that small-scale oil palm farmers fall outside the power relations of plantation logics, to which they may indeed aspire in the hope of generating profit as property owners. However, because they are highly heterogeneous and remain embedded within more embodied relations to land and labor, they actively shape ecological processes that fundamentally differ from those of monoculture.
--- "Greening" development in India through compensatory afforestation
The reproduction of plantation logics within India's green economy is a growing concern. Stories of resistance from plantation landscapes in the state of Odisha, however, offer insight into why and how these logics fail or get undone. In India, compensatory afforestation (CA) requires public and private agencies that deforest for road-building, mining, or other development projects to plant an 'equivalent' forest elsewhere. While ostensibly a tree-planting project, CA is at its core a tree-cutting project, since behind each (largely monoculture) plantation that exists through this program is a forest that has been cut. In Keonjhar District, Odisha, monoculture tree plantations have been imposed on community lands for decades, often under the guise of "podu prevention" (Panda, 1999). Podu refers to a system of agroforestry often known as rotational agriculture, shifting cultivation, or swidden cultivation. Practitioners move from site to site, leaving fallows to regenerate and clearing a new patch for cultivation. As with many previous afforestation programs, CA site plans reveal that forest officers intentionally select podu sites for plantation, describing them as "podu-ravaged", "subjected to podu cultivation" or "conspicuously cultivated" (DFO, 2014). Aware that this will drive conflict and resistance from villagers, who cultivate or forage about half of their food basket in the forest (Valencia, 2019), site plans often include strategies to ensure "good humor" among villagers, including celebrations that will inspire them to "protect the plantation" (DFO, 2014). Yet because communities are acutely aware that the spatial and ecological imposition of plantations on podu lands reflects a broader political project, one that threatens their livelihoods, they reject the counterintuitive assumptions of CA, including that Adivasi (i.e. Indigenous) customary rights, livelihoods, and cultures are obsolete; that monoculture plantations are equivalent to forests; and that plantation protection leads to "benefits." The state's pursuit of "efficiency" is reflected in its strong preference for teak (Tectona grandis). Teak is a favored species for plantation forestry given its quick growth, durability and economic value. But for communities in these areas, teak plantations are "utterly useless" (Valencia, 2019) in comparison to forests and regenerating fallows, which offer fuelwood, fruits, roots, tubers, leafy greens, seeds, fodder, and other forest products. Resistance to teak has an important historical legacy in central India. The Jangal Katai Andolan (Forest-Cutting Movement) in the 1970s organized Indigenous communities to burn plantations, destroy saplings, and demolish forest department infrastructure (Sen, 2018, p. 195).
In Keonjhar, Indigenous organizing against plantations initially focused largely on species selection, with demands not to end plantations but to recognize communities' decision-making role in picking species that benefit them. Today, forest agencies claim to be undertaking a more "holistic" approach to plantations, including attention to polyculture. However, ground-level evaluations of CA plantations show that, where plantations are indeed undertaken at all (e.g. Kukreti, 2021), teak remains the mainstay. The narrative that plantations embody efficiency is belied by the fact that plantations rarely survive (Rana et al., 2022). In Keonjhar, the plantation legacy is mired in failures spanning from the era of social forestry (e.g. Panda, 1999) to plantations' new linkages with the green economy (Valencia, 2019). Ground-truthing of CA plantation data has revealed that CA saplings may be planted, and a plantation may exist in principle, but within a few years sites often revert to shifting cultivation and are replanted with traditional crops (Valencia, 2021). In 2013, the Comptroller and Auditor General of India released a report including evidence of unacceptable plantation survival rates, unmet offset objectives, and rampant financial mismanagement (MoEFCC, 2013). Hardline conservationists and retired forest officers challenged the program on similar lines, leading to a new law, the Compensatory Afforestation Fund Act, 2016. 6 Taken together, the veneer of efficiency crumbles. The dimensions of calculability and predictability particularly enrich an analysis of how CA connects with the broader green economy. As per India's Forest Conservation Act, to achieve "equivalence" between forests and plantations, deforesters must fund plantations that average 1,000 trees per hectare and must pay into a fund that approximates the net present economic value of foregone ecosystem services (e.g. biodiversity protection, carbon, water recharge) associated with the deforestation, spread across a 50-year period, to account for lost regeneration costs (Kohli et al., 2011); a stylized version of this net present value calculation is sketched below. However, advocacy to push state forest agencies away from highly dense "block plantations" and towards "assisted natural regeneration" has created a perverse outcome. For instance, site maps for upcoming plantations in Thuamul Rampur, Odisha, reveal that rather than simply targeting shifting cultivation fallows for block plantations at 1,600 trees per hectare, every square centimeter of village commons will be enrolled in plantation projects at a lower density (Valencia, 2019). Given that neighboring villages are often affected, this plan will convert interconnected, Indigenous shifting cultivation landscapes into archipelagos of homesteads within seas of fenced-off "green" state property. Compensatory afforestation plantations are unique within India's massive restoration portfolio, as they have the power to delay forest clearances for expensive extractive projects. Predictability of plantation site availability, suitability, and execution is therefore key. One piece of evidence is that, in Odisha, the state land bank identifies lands for CA as lands for investment, thus increasing the risk of land dispossession. The political economy of land demands within which CA is embedded also creates a predictable procedural space. A site plan can simultaneously employ specific turns of phrase, including the assertion that desired lands for CA are "conspicuously cultivated" or "free of encroachment and encumbrance" (DFO, 2014).
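To make the calculability claim concrete, the payment logic summarized above from Kohli et al. (2011) can be read as a discounted sum of the annual value of lost ecosystem services. The sketch below is only an illustrative reading of that description: the annual per-hectare value V and the discount rate r are hypothetical placeholders, not the statutory net present value rates actually notified under India's Forest Conservation Act.

% Stylized net present value (NPV) owed per deforested hectare, spread over
% the 50-year horizon mentioned above. V_t is an assumed annual value of the
% foregone services (biodiversity protection, carbon, water recharge) and r is
% an assumed discount rate; both are illustrative, not official figures.
\[
  \mathrm{NPV} = \sum_{t=1}^{50} \frac{V_t}{(1+r)^{t}}
\]
% With a constant annual value V_t = V this reduces to a standard annuity:
\[
  \mathrm{NPV} = V \cdot \frac{1 - (1+r)^{-50}}{r},
  \qquad \text{e.g. } V = 10{,}000\ \text{INR ha}^{-1}\,\text{yr}^{-1},\ r = 0.05
  \;\Rightarrow\; \mathrm{NPV} \approx 182{,}600\ \text{INR ha}^{-1}.
\]

Under this reading, the deforesting agency owes a single, predictable monetary figure per hectare in addition to funding the replacement plantation at the prescribed average of 1,000 trees per hectare; the point of the sketch is simply to show how an entire ecological loss is compressed into one calculable number.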
These phrases sanitize the lived experiences of people dispossessed by the plantation projects and conceal failures in due process, with no consequence. Predictability is a core component of compensatory afforestation, and of restoration logics more broadly, because of the calculated commitments around tree planting that India has made and that plantations must fulfill. India's global commitment to afforestation, at 26 million hectares, is second only to China's. India has long committed to increasing its forest cover from the present 21% to 33% (for more on these rather arbitrary numbers and definitions, see Davis & Robbins, 2019). The resulting power struggle between communities and the State invokes the fourth dimension of control. Reflecting on the planned proliferation of plantations across podu patches in her village, one senior woman asserted: "We will not allow them. If they do plantation everywhere, what land will be left? How will we survive?" (Valencia, 2019). While plantation policies and plans are made at the higher levels of the forest bureaucracy, the exertion of control is often left to lower-level rangers, guards, and watchmen. Strategies such as hiring labor from outside villages (or from dominant communities within the villages), negotiating 'deals', and manufacturing consent through illegitimate local institutions are employed to ensure that saplings are planted, as a bare minimum commitment, and to justify calculated plantation quotas and statistics on forest coverage (Choudhury & Aga, 2019; Fleischman, 2014; Gerber, 2011). Communities may take these in stride, with an ultimate plan of reclaiming the land from the scrawny saplings to plant millets (Cenchrus americanus and others) or niger seed (Guizotia abyssinica) instead. But what binds the plantation ecology of CA to the green economy is the equivalence by which each monoculture planted justifies deforestation elsewhere. Here, a unique contradiction emerges. CA plantations are telecoupled to deforestation. They exist to mitigate harm, while extending harm and control by moving forests from the hands of many into the hands of State forest agencies. While attempts continue to gobble up grassland areas and what the government sees as "wasteland" for conversion to forested monoculture plantations, efforts to reclaim land back into the hands of land users continue in parallel. Meanwhile, plantations are spreading to distant locations, increasing the State's grip on territories in other jurisdictions. The social impacts of CA reflect the scale at which the people most vulnerable to the impacts of climate change and ecological breakdown will also be most threatened by the green economy solutions supposedly aimed at addressing those impacts. It also reveals the fragility of the plantation: for all the efforts to make plantations efficient, calculable, predictable and controlled, communities assert that it doesn't take much to pull up a weak teak sapling and plant millets or niger seed instead (Valencia, 2019). Modifications to the Forest Conservation Act in 2023 will make it easier to divert forests to expedite developments in the name of "national interest and concerning national security," thus obviating the need for forest compensation altogether in some cases.
These security-related infrastructures include, among others, projects for planned commercial wildlife safaris, ecotourism projects, and public works in so-called "left wing extremism" areas and in any location within 100 km of India's international borders (Sharma, 2023). It is also expected that forest plantations can be designed to maximize carbon sequestration for tradeable carbon credits and to render development projects carbon neutral (Ibid.).
--- Never quite a conclusion, towards abolition ecologies
Describing plantation ecology is not meant to showcase how "green" is being done wrong and how it can be done right. That would be too precise, too replicable, too rational, even if it were indeed possible. Moreover, the "greening" of plantation ecology is not limited to specific interventions like compensatory schemes or sustainability certifications but may also apply to sweeping economic transition programs like the Green New Deals 7 of wealthy, industrialized nations (Ajl, 2021) that do not pay attention to the logics we have described here. Ecological solutions cannot come from plantation ecology: the same discipline and design that has only sharpened the knife blade of ecological breakdown and inequality, precipitating the loss of sociocultural imaginaries and capacities to intervene and generate alternatives. Liberation from the plantation requires dismantling the plantation rationalizer in our collective minds. This means policy-unfriendly recommendations, amidst an ever-tightening State/corporate nexus, that regenerate a praxis of worlds (plural) in common, crucially grounded in desires for freedom from oppression and dehumanization. To us, the demands for social and environmental justice that defy the imposition of plantation discipline by twinned state and private sector interests are ecologizing practices, meaning that they reanimate thought and being in ways that stimulate conditions for the proliferation of alternative socio-ecological relations. These spaces reflect relationships to the land that have historically regenerated conditions for living out of both desire and survival (McKittrick, 2013). A plethora of questions emerges as to how these forms of existence terraform landscapes of hope against hope, of people and non-people alike, temporally through situated encounters and geographically across territories. Such ecologies are not predictable, efficient, calculable, controllable or replicable, but rather reflect the unlikely kinships of place-making that emerge amidst the ruins of the plantation (Stoetzer, 2018). They do not deepen or widen plantation commodity frontiers; yet they could just as easily be essentialized as new desired endpoints and themselves driven into new production systems that placate any attempt to reroute the template of plantation ecology. Put differently, they could all too easily be romanticized into new equitable, politically-correct, diverse, and inclusive plantations that fail to address uneven dimensions of value and knowledge accumulation.
7 Compromises to labor that underpin welfare-based social democracies in wealthy industrialized countries of the North depend fundamentally on the pillaging of dehumanized labour and cheap (renewable) energy and material extraction in the Third World (Ajl, 2021). The proposed Green New Deals of Ed Markey/Alexandria Ocasio-Cortez in the US and the European Green Deal risk being strategically imbricated within the logics of plantation ecology as described.
Only profound solidarity across the social fragmentations embedded in plantations can overcome the tendency to reproduce plantation logic. This requires internationalist and intersectional solidarity movements that encompass agrarian and fisherfolk demands for autonomy over food production systems, Indigenous struggles for territorial sovereignty, and demands for decent working conditions that reflect lived experiences across gender, race, and immigration status. It is therefore imperative that so-called "greening" solutions be scrutinized for the tyrannical interests of the 1% that they ultimately serve. As we have argued in this piece, dehumanization and ecological simplification are not merely technical issues of maldistribution or improper recognition within plantation discipline, but are fundamental conditions of its existence and expansion (Ajl, 2021; Coulthard, 2014). Reclaiming autonomy from the plantation has inspired decolonial and abolitionist thinkers from the Black radical tradition (e.g. Angela Davis, bell hooks, Kimberlé Crenshaw, Saidiya Hartman, Clyde Woods, and Ruth Wilson Gilmore); Indigenous ecologists like Potawatomi scholar Kyle Powys Whyte, Yellowknives Dene scholar Glen Coulthard, Michi Saagiig Nishnaabeg scholar Leanne Betasamosake Simpson, and Unangax scholar Eve Tuck; anti-imperialist and non-Eurocentric decolonial scholars like Liberian activist and academic Robtel Neajai Pailey, Cameroonian historian Achille Mbembe, Peruvian sociologist Aníbal Quijano, and Bolivian sociologist and historian Silvia Rivera Cusicanqui; as well as anti-caste philosophers and contemporary thinkers like Jyotirao Phule, Babasaheb Ambedkar, E.V. Ramaswamy Periyar, Suraj Yengde, and Kancha Ilaiah, among many others.
The process and material outcomes of attaining freedom from the plantation are what Heynen and Ybarra (2021) refer to as abolition ecologies, characterized as embodied relationships between people and territory imbricated within a struggle for liberation from state-sanctioned violence, criminalization, and dispossession. The movement to "Defend the Atlanta Forest", which aims to halt a planned police training facility whose construction threatens the safety and environment of neighboring Black communities, constitutes an act of ecocide in an era of ecological breakdown, and sits on the sacred stolen territory of the Muscogee Creek people, is an abolition ecology in the making (Bernes, 2023). It intertwines the efforts of prison abolitionists, dreamers of Black liberation from the carceral state and the legacies of plantation oppression, Indigenous activists and environmentalists alike through a 'movement of movements.' Together, these actors root themselves with the plants and animals of the forest through a myriad set of human and non-human relations premised on social and ecological justice. Abolition ecologies are the biophysical and socio-spatial relations that shape and are shaped by legacies of resistance to (neo)colonial oppression rooted in situated categorizations of dehumanization (e.g. anti-black, anti-Dalit) and ecocide (Sultana, 2022). An abolition ecology means dismantling the infrastructure of plantation ecology and putting an end to the possibility that plantation "irrationalities", conceived as economic externalities, could ever be enrolled back into a more diverse and inclusive plantation. An avenue of necessary inquiry resides in how such dismantling ought to take place without falling prey to co-optation. Does care for social and environmental justice ultimately require blowing up pipelines, to recall the title of Andreas Malm's 2021 book? It may be, as Grubačić and O'Hearn (2016) argue, that abolition ecologies are deeply liminal, as the example of maroon ecologies in Haiti illustrates. This means that they may not "exist" as such, but are immanent in resistance to being named, mapped, or fully analyzed (Harney & Moten, 2013). This immanence of resistance is itself the relationality reflecting what ecological complexity means and to which care-ful attention is needed in doing away with plantation discipline, yet often with few guarantees. The "Gesturing Towards Decolonial Futures" collective have recently reflected on ways to ensure that efforts made towards decolonization are not re-routed into the same desires and entitlements that lead to colonial practice, rendering decolonization a weaponized buzzword that serves colonial interests (Stein et al., 2021; Tuck and Yang, 2021). Part of this responsibility lies in the affective affirmation of "staying with the trouble" (e.g. Haraway, 2015) without being content with residing in a space of mere intellectual critique of coloniality. This does not exonerate the "reproduction of modern/colonial desires and habits of being" (p. 10). The collective identifies "circularities" or pitfalls that ensnare engagement with decolonization back into colonial practice, highlighting how they position themselves aspirationally within what they call "the house modernity built" to better contextualize the pathways that engagement with decolonization may take.
In each case, they proceed by walking readers through the mistake-ridden journey of trying better, with humility, curiosity, attention to difference, self-complicity and long-haul discomfort with the trouble we find ourselves in. By keeping an eye on the ways resistance and response strategies to plantation logics fold back into what they seek to escape from, it becomes possible to hold out a "horizon of possibility" without cynically conceding the recurring inevitability of the plantation. Part of this practice involves disinvesting from the unethical and deadening trajectory of the plantation, but without arrogance as to the "correct ways" of fashioning alternatives. This involves following the lead of abolitionist and anti-colonial struggles, as well as individual and societal commitment to "hospicing" the harmful everyday practices and habits of being that consciously and unconsciously reproduce plantations (Stein et al., 2021). Undoing the "green" plantation is an undertaking in taking ecology seriously, and by that we mean opening up the deeply political horizon of how harmful habits of thinking and being are reproduced in society and in ourselves.
The green economy is proposed as a solution to address growing and potentially irreversible ecological crises. But what happens when environmental solutions are premised on the same logics of brutal simplification and dehumanization that sustain and reinforce systems of oppression and ecological breakdown? In this article, we describe the transformation of the biophysical landscape of the planet into replicable blueprints of the plantation plot. The plantation as a colonial-era organizational template is an ongoing ecological process premised on disciplining bodies and landscapes into efficient, predictable, calculable, and controllable plots to optimize commodity production, and it is dependent on racialized and gendered processes of dehumanization. The visible cultural, physical, aesthetic, and political singularity of the plot, under the guise of objectivity and neutrality, permits a tangible depiction of the way ecological breakdown takes place. We interrogate the notion of "greening" as a strategy to combat the unintended impacts of colonial plantation ecology, arguing that such tactics further reinforce the template of plantation ecology rather than dismantle it. We first conceptualize the historical plantation and its biophysical, cognitive, and corporeal organizational principles. We then offer examples of "greening" as new, more inclusive (but equally detrimental) forms of plantation logics, and crucially identify how these extensions of plantation logic get co-opted by resistance agents, from social movements to disease and pestilence. We consider sustainability certifications of palm oil through the Roundtable on Sustainable Palm Oil (RSPO) in Colombia and compensatory afforestation programs designed to compensate for deforestation in India.
Introduction
The basic question addressed by this research project is whether it is possible to become an intellectual without conflict and identity crisis, i.e., how a young person of Roma origin can break out of the constraints of disadvantage amid contemporary circumstances. 1 Research on the Roma in Hungary comprises several approaches to the topic of integration and, primarily, the integration of peripheral groups, from a number of different perspectives. 2 Among other efforts, investigations are being conducted on ethnic coexistence situations, 3 but the issue of the Roma language and the shift between languages 4 has also been explored even more intensively, and several social researchers have examined the topic of ethnic mixed marriages, too. 5 If we look at the situation of Roma groups living in Hungary, we can find numerous research efforts that explore the relationship between hosting communities (in our case, the majority Hungarian society) and immigrating ones (in this case, groups of Gypsies). One of the frequently analyzed topics of assimilation research in Hungary is the development of cultural relations between Hungarians and Gypsies, which is usually examined within the conceptual framework of assimilation, integration, cultural adaptation and dissimilation. This study deviates from the focal points listed above and examines the circumstances of Roma youth embarking on intellectual careers. By presenting a case example, it aims to illustrate yet another important aspect of coexistence. On the one hand, the case study is a suitable tool to highlight the topicality of the issue; on the other hand, it also demonstrates that the life situation it presents cannot be resolved by those concerned and affected (those living in it) alone. Specifically, this study offers an analysis of the life path of a young Vlach Roma couple with college degrees, which reveals the complex mechanism of influence of the social, cultural and economic conditions that maintain what I call "intermediate exposure." Its chief objectives include identifying the problem and outlining possibilities for further investigation of the related topic.
2 József Kotics has conducted numerous field research projects in Hungary and in regions inhabited by Hungarians beyond the border. For details of the theoretical-methodological approach to and research findings on Hungarian-Roma coexistence, see Kotics 2020. Gábor Biczó proposes the introduction of the concept of "ressentiment" to help interpret Hungarian-Roma coexistence situations. In his work, this concept, as an analysis of the culture of resentment, can help us understand what processes take place in the affected minority communities. Cf. Biczó 2022. Norbert Tóth investigated the impact of segregation and school segregation on the social empowerment of those affected in the Vlach Roma community of a small settlement, examining among other features the indicators of further education and school performance. Cf. Tóth 2019. 3 There are several comprehensive analyses available on this topic. For further details, see Kovács et al. 2013; Biczó et al. 2022. 4 See, for example, Bartha 1999; Nagygyőryné 2018. 5 For further details, see Tóth-Vékás 2008; Gyurgyík 2003.
Based on the research findings so far, it can be concluded that young people of Roma origin who participate in higher education while coming from a disadvantaged position in terms of their family's sociocultural background, and who then try to make a living as intellectuals after graduation, find themselves in an existentially, psychologically and socially unstable situation between the majority society and their own immediate community, which might be dubbed the state of "intermediate exposure." 6
--- Research background and circumstances
In addition to the work done over a period of ten years in a Roma college for advanced studies, on which the present study is based, anthropological field research conducted primarily in Roma communities residing in disadvantaged areas of North-Eastern Hungary has provided specific information for this analysis. Apart from community studies, research on Roma intellectuals has also been grounded in exploring the role of social individuals in local communities. At the level of the social role of the individual, cultural shift processes in local communities, such as changes in the value system, can be properly identified. In the light of the changes in the value system of local societies determining the coexistence of the majority and the minority, the following question of practical significance can be examined well: how is it possible to resolve the stereotypes that dominate the Hungarian-Roma antagonism so often observed in the social space? Furthermore, I have sought to understand during the course of my research what external factors sustain and operate the oppositional structure of ethnic coexistence. 7 The concept of "intermediate exposure" makes it possible to interpret what it means to be caught between two "worlds" at the mercy of the system of stereotypes that dominates the relationship between majorities and minorities. Becoming an intellectual of Roma origin in Hungary is a complicated process, and it cannot be described simply as graduating on the basis of performance in higher education. Young people, most of whom are disadvantaged because of their backgrounds, have to face mobility challenges during the years they spend in college, which automatically presupposes external supportive institutional conditions. 8 The most important component of this kind of support is the network of Roma colleges for advanced studies (Roma Szakkollégiumok) in Hungary, all members of which operate as genuinely integrated institutions, where young people of either Hungarian or Roma origin form communities together. The rules of operation of this system, which are applied as prescribed, prevent these colleges from forming segregated enclaves within higher education. 9 Besides supporting the chances of success in higher education, Roma colleges for advanced studies also take on the important task of strengthening Roma identity. It is a common experience that young people from Roma families face an identity crisis in higher-education institutional settings, which is often accompanied by an identity conflict. Researching the life paths of successful persons of Roma origin, Margit Feischmidt identified the cause of the identity conflict as follows: "in most cases, the intention to assimilate and the majority rejection encoded in institutional discrimination and/or everyday racism" 10 may be behind the phenomenon.
Young Roma intellectuals drifting into an "intermediate exposure" situation encounter institutional discrimination and identity conflicts primarily not during their years at college or university, but later on in the labour market. Experience shows that the Roma college for advanced studies system is not yet fully prepared for the challenges its graduates face as employees, since academic success alone - as experience so far has shown - does not guarantee success in life outside the "institution."

8 An important experience related to describing the problem of "intermediate exposure" was that, as the director of Balázs Lippai Roma College for Advanced Studies (2016-2018), I developed a "helping-supporting" work method (Tesz-Vesz-Koli), which can also be applied to disadvantaged Roma university students. It basically helps students to develop their individual skills and abilities and to orient themselves in the higher-education environment by building on their individual aptitudes. (For an introduction to the working method, see Szabó 2016.) 9 Find more details on the integrative efficiency of Roma colleges for advanced studies in Biczó 2021. 10 Feischmidt 2008.

--- The circle of those concerned

Determining the number of Gypsies living in Hungary is a difficult task in several respects and, even today, it is primarily the issue of identification that proves to be a challenge for social researchers. If we look at the figures of the census conducted every ten years, we can see that, in 2011, as many as 315,583 people declared themselves to be of Roma nationality. 11 Another important aspect of the data that can be gleaned from the survey is that only about 1% of Roma people have a higher-education degree. A different methodological approach was applied by the research group of the University of Debrecen in their project conducted between 2010 and 2013, which primarily examined the territorial location and distribution of the Roma living in Hungary. Using the method of expert estimation and external classification, they estimated the number of people of Roma origin to be about 876,000. 12

Pic. Nr. 1: Settlements where students of Roma colleges for advanced studies come from in 2020. Biczó-Szabó 2020: 34.

Another nationwide survey was conducted in 2020, when Gábor Biczó and the author of this study carried out a comprehensive analysis of the members of the 11 Roma colleges for advanced studies operating in Hungary. 13 It revealed that Roma students in colleges for advanced studies are present in higher education in all 15 fields of study and in a total of 122 different majors. We also learned from the study that the geographical recruitment environment of students was fairly diverse, with those entering higher education representing a total of 204 different settlements. It can be clearly seen in the map above that the vast majority of the members of colleges for advanced studies at that time came from the parts of Hungary that are most densely populated by Roma (North-Eastern Hungary and South Transdanubia), according to data collected by the Pénzes-Tátrai-Pásztor research group during the period under review. Furthermore, it can also be seen that, within the distribution of the residential settlements of college for advanced studies members according to legal status, those coming from small towns and villages are in a higher proportion than those coming from metropolitan or urban areas.
Thus, the circumstances of the disadvantaged source environment fundamentally determine the initial state, the compensation for which decisively shapes the development of students' college/university years. For most of them, university or college life means a significant change in relation to where they come from. At the same time, based on the experience gained from the follow-up of Roma college-for-advanced-studies graduates, the real challenge for them begins after graduation. They are faced with a choice between four options:
1) One option is to return to their original living environment and try to make a living locally in their profession.
2) Another solution is to return to their original living environment and, in the absence of a job opportunity matching their profession, find employment in another sphere, typically in jobs that do not require a college degree.
3) They may also decide to look for a job related to their profession, but in a larger city or in the capital, even if it is at a considerable distance from their place of residence.
4) As a final variation, they can continue their studies in post-graduate education, taking advantage of the "protective system" of the university and the college for advanced studies sphere.
The above categories represent a valid analytical framework for practically all young Roma intellectuals - students who have joined the Roma college-for-advanced-studies network. After completing their studies, Roma young people who have just graduated do not always follow the path they had planned beforehand, but rather the one that "opens up" to them, so to speak. After graduation, their career depends on the openness of the immediate majority environment to integration and the specificities of the personal living environment.

--- Intermediate exposure: a case study

R.K. grew up in a traditional Vlach Gypsy family in the settlement of Hodász, located in one of the most disadvantaged areas of North-Eastern Hungary. According to tradition, her parents talked to her in the Roma language, and she learned Hungarian in kindergarten. Her parents tried to protect her from all new influences, which meant she would not be allowed to go anywhere alone. With the exception of school trips, R.K. did not leave her residential area, since according to Gypsy culture, young girls were not allowed into foreign environments. "I have loved travelling since I was a child. That's what I always fought for with my father that I would be going still. That there is no such thing, that I am not going. Let's say for a hike, or rather, I say this, which I really wanted. That's all I wanted to do, to go on trips, to see the world, to get to know the cultures. For me, that was what I really wanted." 14 The internal conflict with cultural traditions was thus evident at a very young age. "Because when I was little, I missed them, I didn't ride trains, I only got on a train for the first time when I was 18 or 19. I didn't take the bus, only when I was in high school, and I really liked to go and live, because my dad didn't really give in; he was scared, and this desire only grew stronger and, when I could, I tried to do everything." 15 During her high school years, R.K. saw an example of some young people living in similar sociocultural circumstances choosing further education, but this was not a natural alternative for her.
"And when we went to grammar school, I didn't care about the fact that I would go to further education now, but to have my high school diploma, and then what has to come will come. And then my sister and our cousins, and then they said they were going to college. But I didn't really care about that either; let them do whatever they want, and then something will happen to me. I didn't care much about it; I always tried to have it with the present." 16 In R.K.'s family, her brother and cousins, with the support of their high school teachers, decided to go on with their education. However, through this move, they met with complete resistance from their family environment. --- "And they weren't allowed into dormitories first; they would have been allowed into school. Due to tradition, it is not very customary to let girls into dormitories and into the world so much. [...] But in the end, my sister wanted it so badly that they had to, they had to agree." 17 R.K. was able to get into higher education because one of her sisters had already started her university studies a year before, following her own path and, therefore, there was an opportunity for them to move to the same dormitory. After successfully graduating from high school, R.K. was admitted to the University of Debrecen, where she started her studies in infant and early childhood education in 2015. Going to university opened up a new path for her: a new environment and new challenges in everyday life. At the same time, membership in the college for advanced studies and the dormitory companions also meant security for her, as she had a large number of acquaintances and relatives in the institution from her settlement of origin. However, her biggest support and supporter was R.M., with whom they had already entered into a relationship during the training. "And then it was in 2017 that we eloped in the traditional way. The way it happens is that we were still in the dormitory, and then we went to Budapest, and then I phoned my mother from there that I was already with B, that we had run away. And then we went home; we were getting ready at home, and then we discussed when the two families would take me home, because I can't go home alone, they can't come to me either, but until this family takes me home, together with my husband, we won't really be able to meet." 19 Despite the majority environment as well as the newly experienced system of customs and norms, the family tradition proved to be strong, so they decided to marry according to Gypsy customs. General experience shows that the majority society is unable to make sense of the tradition-following Roma marriage customs and is less accepting of the practice of "elopement". This is primarily due to the fact that they do not have sufficient information about the Gypsy customs, so eloping as a form of marriage usually only strengthens negative prejudices. 18 The situation has been handled with surprising flexibility on the part of her family. The father, defying the majority stereotype that education has no value for Gypsies, made a single request: "we can do whatever we want, but we should get the degree, that's all he wants done. So they have already understood and accepted how much a degree or a profession is needed for a young person, be that a Roma or non-Roma youth. That was his request. And I was already a woman then, and even then, it was important." 20 R.K. 
then successfully graduated at the same time as her husband, who earned a vocational training qualification in higher education. He finished his studies with very good results, and always completed his practical classes receiving unanimous praise. After graduation, they planned to live and work as an intellectual couple according to the values expected to be shared by the majority middle class, so she and her husband moved into an apartment in the city where R.K. did her internship. Their goal was to get a job as soon as possible. They planned everything consciously; they wanted to make ends meet independently and without family support. "And then we didn't move home but tried to find a job there. To find a job, it was very difficult fresh out of college, and we were unemployed for a year." 21 The reason for the unsuccessful job hunt and repeated rejections was always R.K.'s ethnic background. Its external anthropological features are rather telling at first glance; everyone classifies her right away as belonging to a specific ethnic community of origin. Besides the efforts to find a job, she also managed to join a competency development training course, which indirectly contributed to her successful employment later. "For a year in the same dwelling, feeling aimless and all these other things and, then and there, I developed quite nicely; I felt this about myself, and then when I completed this little training, in August, I was admitted to the nursery on Görgey Street in Debrecen as an early childhood educator, and I have been working there ever since." At work, the initial fears and inhibitions soon disappeared, as she quickly gained acceptance both among children and parents, as well as towards her colleagues due to her professional competence and kind, helpful attitude. An important part in the development of their seemingly stable situation was played by the immediate social environment that surrounded them. However, the COVID-19 pandemic suddenly created unexpected circumstances. "Our lives changed a lot because, due to the virus, we just packed our stuff and moved home on an impulse. [...] My husband opened this second-hand clothes store on June 1, 2020, which went very well, and it was also actually convenient. I only did the cleaning part, which I did, whereas my mother-in-law, she is a shop assistant; she has that qualification, and then she worked in it, she was the employee. And then everything went quite well but, for some reason, I felt so out of place, and I guess my partner felt the same way. And then our lives took a big turn, [...] we didn't feel at home here, so we moved back and we rented an apartment in Debrecen." 23 Since R.K. acquired a lot of new life experiences during the years at university by taking part in several trips in Hungary and abroad, meeting quite a few new people, seeing and experiencing new life situations from up close, she could no longer imagine her life only along the traditional Gypsy female role expectations. Thus, the feeling of "intermediateness", of belonging neither here nor there, became a constant part of her life. "We didn't stay there long, as it turned out that I was expecting a baby, then we moved home again, and then we realized that this house was a refuge for us. And then, from that point onwards, we started to renovate this house, to care a little bit about it. We forged new goals that bound us here to stay in Hodász." 
The silicate-block and brick house in the segregated neighborhood of Hodász was inherited from her husband's grandparents. This predominantly Roma environment and the fundamental social, cultural and economic differences between the village and the "big" city required considerable adaptation efforts from the young couple. Despite this, their willingness to help their own community, their readiness to do something, along with their professional commitment was well demonstrated by the fact that R.K. and her husband established an association in 2017 with the aim of supporting disadvantaged young people in Hodász in order to help them catch up. Through organizing summer camps, distributing donations and hosting various professional events and public lectures, they tried to promote the strengthening of Roma identity and breaking out of disadvantages in the lives of Roma and non-Roma young people in need. "On top of all, we were renovating a house. So, it was very stressful for both of us but, even though we were building, despite all these goals and dreams that we had and partially realized, Debrecen, for some reason, it always remained the true desire of our hearts, and we moved again. That time, already there, to Civis Street. By the way, I planned that for myself at the time, and M. also said that it would be like this. We would then take it from here when the little one would be born, and then I would go back to work from there. Well, but it didn't happen that way because, in January 2021, M. became very ill. He was also involved in organizing education, and that was also our livelihood. And because of the virus and illness, he couldn't do this job, and so we couldn't pay the rent of the apartment we lived in, even though we loved it very much. We had a great time there: there was the post office, the convenience store, everything. Just as much company for our needs, which was enough. The colleagues were there, in that part and in the house next door, and there it was very good. But still we had to move home. Rather, I forced this, because I already saw that the following month would be rather tight, and then we did not wait but moved home." The feeling of being vulnerable to circumstances gradually became their dominant life experience. "Here in Hodász, regardless of the fact that we have this house here, and we keep building it and making it pretty, we do it, yet there will never be a better workplace for us here. Therefore, we have no reason to stay and to live here. Family is the only thing that binds us to this place, but I think, wherever we go, we will always come back to visit home." 26 The stalling of mobility filled their lives with constant conflict. After the events of starting an intellectual career, a successful departure and mobility, being forced back into their original environment became the defining reality of their situation of "intermediate exposure". "There are a lot of things happening in my life that pull me back, [...] I think the fact that I can't open up at home because the role is different also plays a big part in this. Like, let's say elsewhere, in the rented apartment. It's completely different, even though no one tells me; it's just that it's supposed or not supposed to be done here at home, and there is no such thing there. Maybe that's why my life here is uncomfortable.
I think it's because it's very, very difficult for me to live here." 27 At this point, her Roma identity and the traditional value system brought along from her original community created an obstacle in R.K.'s self-interpretation that she could not reconcile with her changed role in life and the stalling of her upward mobility. This situation usually takes the form of a permanent conflict of roles. "I don't know why that, here in Hodász, as if our horizons were narrowing and our opportunities were also narrowing. And maybe that's why I, or maybe that's why I don't feel so good at home. There wouldn't be any problems anyway; there just aren't many, so many, no, I don't see my life as bright and beautiful here at home as elsewhere. These are mainly settlements like Hajdúböszörmény and Debrecen, Debrecen, where people can really reach their full potential and live their lives as they want. I feel good regardless of finances. I'll see if it turns out somehow. We would like to no longer live in an apartment, but in a house that is our own. Whatever we don't have to pay for monthly, and we can sit outside in the summer and stuff like that. Now we would like to flee, we would go to Debrecen but, for the time being, there is no prospect yet." 28

--- Lessons from an in-depth interview

According to the in-depth interview with R.K., she felt that her life was unhappy at the time of the recording. The question is how to analyze this phenomenon within the conceptual framework of "intermediate exposure", as a general phenomenon determining the social mobility of young Roma intellectuals. The majority middle-class value expectations portray the trajectory of the average intellectual's career as a schematic process of events: successful university admission after graduation from high school, followed by a successful completion of requirements in college, graduation, employment, tax payment and establishing a family. The compulsion to conform to normative expectations and the role expectations adapted to them are inherent in living as an intellectual. The subject chosen for our analysis, R.K., comes from a traditionalist Vlach Gypsy native-language environment. Following a successful high school graduation, she went to the University of Debrecen, where she became a certified infant and early childhood educator. She managed to find a job in her profession, got married at college, started a family, and is currently on GYES [maternity leave in Hungary]. In her story, the conventional order of the stages of her career differs in that marriage and having children did not allow her to stabilize in her role as an employee just after graduation. An important background circumstance is that her insufficient financial resources made it impossible for her to pay rent and maintain living standards in a city and, at the same time, were a fundamental reason for her reintegration into her original segregated environment. Beyond all that, however, the question is how "intermediate exposure" applies in the light of a fundamentally norm-following trajectory. Also, why and how does the integration process get stuck in a kind of permanent transition? The case of R.K. provides us with an opportunity to interpret the nature of "intermediate exposure" and its long-term survival, as well as to address the question of how it constitutes an obstacle to integration into mainstream society. R.K. took the career path of Roma intellectuals and found herself in a liminal situation.
Both in her self-definition and in the qualification of her environment, it is often stated that she is "too Hungarian for Gypsies and too Roma for Hungarians". Following this approach, we may reckon that it depends solely on her personal choice whether she remains a Roma person or becomes a Hungarian one by assimilating. However, the fact is that, whatever is decisive here is not so much her own decisions but rather her circumstances. Do young Roma intellectuals, exposed to the state of "intermediate exposure", really have a choice, or is it their external environment that forces them to make certain decisions? On the one hand, R.K. is a "victim" of the social expectations of her own cultural community whenever she is in her original environment. The traditional Vlach Gypsy customs, as it can be clearly seen from the previous briefly outlined compilation, represent an important system of values and norms for her, as well as a point of orientation and a cohesive community. Family customs in R.K.'s life are not present as a choice, since she was born into the culture and there is simply no question asked concerning her transcending them in any way whatsoever. In fact, she perceives and understands her own situation as a committed follower of traditions. By contrast, the system of values and norms of the majority society has acted as unavoidable factors shaping her career, her mobility and her chance of becoming an intellectual. This latter has become an inalienable part of her personality, especially through the patterns she has followed for so many years in educational institutions. R.K. does not intend to completely break up with her original community, but the way of life and lifestyle offered by the opportunities inherent in her intellectual career and what R.K. indeed experienced after taking up employment do act as a kind of counterpoint to her original environment. Consequently, the efforts to harmonize these two "worlds" seem to bump into serious obstacles in everyday practice. "Intermediate exposure" thus means that she cannot actually meet the expectations of either of these communities without contradicting herself. Although she has gained all the knowledge and experience to work as a graduate intellectual, she cannot maintain it because her financial means do not allow her to lead the life she desires. Forced back into her own community, she experiences the consequent situation as an irresolvable step backwards, which she defends herself against by constantly referring to the planning of relocation. At the same time, it is also a fact that she cannot fully become part and parcel of her own original community either, because she cannot follow a professional career path parallel with community expectations and, when she pushes this urge into the background, she gets into a conflict with herself. The dilemma of this life situation, at least as it seems, cannot be solved on one's own, without outside help. The assertion of intermediate exposure indicates that, on the basis of personal life expectations and education, social status cannot be effectively reconciled with both the opportunities and the physical circumstances. --- Theoretical conclusions Based on the overall research experience gained so far, it may be safely stated that one of the hindering factors of the social integration of young Roma intellectuals is the development of "intermediate exposure" into a condition that rules and determines their personal life path. 
During the course of examining the relationship between the Roma minority in Hungary and the majority society, it is important to keep in mind that the relations between ethnic groups and local communities are subject to dynamic change processes. The reasons for this can be traced back to both external and internal influences on communities, as well as a combination of these. The process of changes occurring in social group relations and the relations of individuals involved in them is interpreted by the discipline of anthropology from a number of different aspects. In this analysis, we intend to highlight some of the most important elements of the general structural issues associated with the social mobility of intellectuals of Roma origin. The low number of intellectuals of Roma origin in Hungarian society - less than 1% of Roma people graduate from college - gives rise to the hypothesis that, for most people, a career as an intellectual is a first-generation undertaking, which turns into a process involving a change of social status. The change of status resulting from mobility can be interpreted by relying on Árpád Szakolczai's conceptual approach to the integration process of Roma intellectuals. As a basic principle, the author proposes to use four closely related categories of analysis when describing the phenomenon of status change: (1) liminality, (2) imitation, (3) trickster and (4) schismogenesis. 29 From the perspective of our topic, it is primarily liminality that requires further explanation. The concept of liminality has a long history in anthropology, as it was first introduced by Arnold van Gennep in 1909 in his book Rites of Passage. His thesis was based, among other things, on his field research in Madagascar. Van Gennep contends that rites of passage are "universal anthropological phenomena that accompany individuals and communities through various transitional points in human and social life, helping to make the transition between two stable states" 30. In his interpretation, liminality is the middle stage of a rite of passage, which is also its central moment. What does this all mean? In order to emerge from the liminal phase, one must meet certain requirements and, where appropriate, tests, depending on socio-environmental characteristics. 31 Rites of passage are associated with the transition from one age group or one human condition to another. Van Gennep's book was translated into English only in the 1960s. Then, in 1963, the concept of liminality was introduced into academic anthropology in connection with the name and work of Victor Turner. It was then, within this framework, that the interpretation of rites of passage became a priority topic of anthropological research. Since the 1990s, the term has been increasingly commonly used to analyze societies as they move and transform. Important research on the social integration of Roma intellectuals in Hungary has been carried out by Klára Gulyás, who proposes to summarize their mobility characteristics in the concept of permanent liminality. In her interpretation, this refers to the life situation that characterizes the development of the social role identity of Roma graduates in the process of becoming intellectuals. This condition occurs when they "move away from their community of origin as a result of the social/mobility process, but do not become accepted members of the majority professional community and the broader majority community" 32.
However, the analysis of mobility trajectories based on in-depth interviews shows that the concepts of liminality and permanent liminality may only partially describe the situation in which Gypsies living in Hungary find themselves upon starting intellectual careers. In its anthropological meaning, liminality is a transient state of existence: the situation itself, as well as the social condition associated with it, necessarily ceases to exist. By contrast, the concept and meaning of "intermediate exposure" emphasize that, in the light of the careers of numerous intellectuals of Roma origin, this state of being is not temporary, and the condition of being trapped between different social expectations and oftentimes systems of prejudices cannot be overcome. While liminality has a start and an end point, and those affected can pass through this stage as soon as they are incorporated, "intermediate exposure" - at least, as research experience reflects - is a permanent state. 33

30 Van Gennep cited by Szakolczai 2015: 5. 31 Liminal conditions can be ritually regulated trials, as in the case of rite-of-passage ceremonies, or simply a series of ritualized events, as, in most cultures, the observance of cultural rules around marriage. 32 Gulyás 2021: 8.

The life path of the young person of Roma origin presented in this study, who is a college graduate and an intellectual taking a white-collar career path, highlights the duality that may be called the life experience of belonging neither here nor there, while the socio-cultural characteristics that make up the general circumstances of the situation allow a comprehensive interpretation of the phenomenon. Ultimately, this topic could be discussed as a general obstacle to the social integration of Roma intellectuals. Based on our professional experience, it can be stated that, in order to eliminate "intermediate exposure" in the phase of liminality, attention and assistance from the majority society are required. Help or assistance here means giving young college graduates of Roma origin the opportunity to prove themselves on the labor market and, at the same time, a chance to become full members of society while preserving elements of their own culture. In addition, it is equally important that they should arrange their relations with their own original environment in such a way that their change of status and role would not evoke a voice of rejection on the part of said environment, but would allow it to see the opportunity offered by the role model.
There are numerous obstacles to the advancement of Roma young people coming from disadvantaged social environments. Among these, the phenomenon that can be described by the expression köztes kitettség [verbatim: intermediate exposure] stands out. Social integration is an integration/assimilation practice complying with majority norms, which also means moving away from the values of one's own local environment. According to the experience gained from research conducted on this topic, there are a lot of Roma young people who are trapped between two "societies" -their own sociocultural environment and the majority environment -and, consequently, find themselves in a special situation. The aim of this study is to shed light on the general context and the social significance of the phenomenon described above through recording field experiences and applying case analyses.
As a corollary of rapid economic development, middle-income countries are experiencing a rapid nutritional transition, featuring marked changes in diet and lifestyles (1). In this context, major transformations in the food retail sector have been observed, including a sharp rise in the number of supermarkets (2). In urban areas of developing countries, large-scale food retailers are tending to replace traditional markets, neighbourhood stores and street sellers; this process is referred to by some authors as 'supermarketisation' (3). Until recently, attention was focused more on the potential consequences of such supermarketisation for the agricultural sector (4,5), and the results of the few studies linking the development of supermarkets to possible changes in food shopping habits and dietary intake have been mixed. However, a recent comprehensive review of the dietary implications of supermarket development worldwide (6) clearly showed that the continued development of supermarkets will have major implications. Beyond the influence on food consumption for regular users, the implications of the development of supermarkets for dietary intake at the population level also depend on the prevalence of exposure to these retail outlets. Regarding this issue in developing countries, some authors (7,8) have proposed a three-step model of diffusion in which supermarkets first appeal to upper-income consumers, then to the middle class and finally to the urban poor, because prices tend to drop as supermarkets continue to spread. However, in urban areas of developing countries, supermarkets currently appear to coexist alongside small-scale commercial outlets (9), central food markets, neighbourhood stores and sellers of street food. Among the characteristics of supermarkets that have implications for consumers' diets are their location and format (6,10). However, to our knowledge, no study has yet analysed the socio-economic characteristics of the shoppers who use these different retail formats. Tunisia (a North African country) is experiencing major economic, epidemiological and nutritional changes (11,12), with a rise in the number of modern supermarkets, including the recent opening of two 'hypermarkets' in the vicinity of the capital city, Tunis. Building on a previous paper on the associations between supermarket use and dietary intake (13), the objective of the present analyses was to examine the socio-economic characteristics of shoppers using different retail formats in Tunisia, and their motivations for doing so. The retail formats were large supermarkets, medium-sized supermarkets and traditional outlets.

--- Methods
--- Study area

Tunisia, a south Mediterranean country, is located between Algeria and Libya, has a population of 10 million and a middle level of development (ranked 91/177 on the Human Development Index composite scale in 2005 (14)). Our study area was Greater Tunis, with about 2 million inhabitants (15). It is the most developed and urbanised area in Tunisia and has the most supermarkets. Medium-sized supermarkets have existed in Tunisia for decades, but since the beginning of the 2000s, a major change in the food retail landscape has taken place with the opening of two 'hypermarkets' in the Greater Tunis area. This has also had indirect results in that established supermarket chains have started opening new outlets as well as modernising their internal layout and sales practices (16).

--- Subjects

A cross-sectional survey was conducted in November-December 2006 in Greater Tunis.
Based on data from the 2004 census, the survey used a random, two-level (census area, household) clustered sample of households (17). In each household, the person in charge of main food shopping was interviewed.

--- Data

Part of the survey questionnaire was derived from a preliminary qualitative phase (face-to-face interviews and focus group discussions) to identify the relevant contextual information.

--- Socio-economic characteristics

Socio-economic and demographic data were collected at both individual and household levels (Table 1). An asset-based household economic level proxy was computed by multiple correspondence analysis (18) from dwelling characteristics, utilities and appliances. The first principal component was used as a proxy of relative household wealth (12,19) and was used in analyses after breakdown into tertiles of increasing level (low, medium and high).

--- Type of outlet used for main food shopping

Although some analyses pertained to supermarkets in general, a distinction was made between 'medium-sized supermarkets' (MSM) and 'hypermarkets', i.e. 'large supermarkets' (LSM), according to their surface area (≥10 000 m² for LSM). One reason for the choice of this definition, among others (16), was that, beyond their surface area, hypermarkets in Greater Tunis differ from medium-sized supermarkets in that they are located in a shopping mall comprising a wide range of shops, cafés/cafeterias and a car park, offer a wider range of fresh food departments (catering, bread and pastries, butcher, fishmonger) and also have larger non-food departments. Finally, although supermarkets of medium size are quite evenly distributed throughout Greater Tunis, both hypermarkets are located in the outskirts of the area. In this survey, 'grocers' (attar) are independent family-run food outlets with a sales area of less than 50 m² (reference (16)). The term 'market' refers to traditional open-air or covered markets in town centres or neighbourhoods with rows of retailers (6). The survey questionnaire included items for which interviewees were asked to rank in order of priority (1st, 2nd or 3rd) the three types of outlets where they most frequently did their main food shopping, and also, for supermarkets, included items regarding time and distance to the outlets. For each type of retail outlet (LSM, MSM, grocer, market), binary variables coded whether interviewees used that type of outlet for their main food shopping (regardless of the rank). From the variables pertaining to MSM and/or LSM, a three-category hierarchical variable was computed: never shopped at supermarkets / shopped at MSM only (regardless of other types of outlets but excluding LSM) / shopped at LSM (regardless of MSM or other types of outlets). For both MSM and LSM, easy access (v. not) was defined as living less than 5 km or less than 30 min from a retail outlet.

--- Reasons for using the different types of food outlet

The questionnaire featured open questions, where subjects could state whatever reasons or motivations they associated with the use of each type of outlet. From the exhaustive list of answers, the twelve most frequently declared items were identified (Table 3) and used in the analyses.

--- Data collection

The questionnaire was translated into Arabic, pre-tested and validated with the target population. Subjects were interviewed at home by specially trained local nutritionists.

--- Ethics

The Tunisian National Statistical Council reviewed and approved the study (visa no. 11/2006).
The surveyed subjects were informed of their right to refuse to take part and of the strict confidentiality of their answers, and gave their verbal consent to take part in the study.

--- Data management and analysis

Data entry, including quality checks and validation by double entry of questionnaires, was performed with EpiData version 3.1 (EpiData Association, Odense, Denmark). Data management was performed with the Stata statistical software package version 9.2 (StataCorp LP, College Station, TX, USA). We assessed the associations between the multinomial response variable coding shopping at supermarkets (LSM, MSM or never) and socio-economic variables using multivariate multinomial logit regression models (20). The strength of (crude or adjusted) associations was assessed by relative risk ratios, using 'never' as the reference response variable category. Correspondence analysis was used for analysis of associations between the type of retail outlet and the reasons stated for their use (18). All analyses took into account characteristics of the sampling design (21) (clustering, sampling weights also including a post-stratification on sex, age and urban v. rural) using the appropriate svy commands of the Stata software. The complete-case analysis method was used to deal with missing data. Results are given as the estimate with its design-based standard error or confidence interval. The type I error rate was set at 0.05 for all analyses.

--- Results
--- Socio-economic characteristics

From a total of 753 households that were to be included in the study, 724 households were actually surveyed. Most (Table 1) were from an urban area and mean household size was 4.7 (SE 0.1; n 723). One-third of the households (data not shown) declared they owned a car. Two-thirds of the households declared they had a steady income, but only a minority declared they owned a credit card. Those in charge of food shopping were predominantly female; the mean age was 46.2 (SE 0.6) years, most were married; 24.0 % had no schooling at all, while 43.1 % had reached secondary level or higher; the majority (67.4 %) said they did not work outside the home.

--- Type of outlet used for main food shopping

Out of the total of 724 households, 58.8 (SE 4.3) % used supermarkets for their main food shopping (LSM and/or MSM), but only 27.3 (SE 3.6) % declared using LSM (regardless of MSM, grocer or market) and 32.2 (SE 2.9) % used only MSM (i.e. regardless of grocer and market but excluding LSM). Finally, only 4.5 (SE 1.3) % of households used only supermarkets for their main food shopping. Concerning time and distance (n 711), 74.1 (SE 4.2) % had 'easy access' to MSM v. 23.9 (SE 4.5) % only to LSM. Most households, 93.8 (SE 1.6) %, used their nearby grocer and 26.5 (SE 2.8) % used the market.

--- Socio-economic factors associated with shopping at supermarkets

Results of multinomial regression models are presented in Table 2 (n 703, complete-case analysis subsample). Crude associations showed that urban households were much more likely to shop at both MSM and LSM v. never, but in adjusted analyses the association persisted only for MSM. That small households shopped more at MSM v. never in unadjusted analysis did not stand the adjustment but persisted somewhat for shopping at LSM (linear trend P = 0.001).
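To make the modelling steps described in the analysis section above concrete, the sketch below gives a minimal, unweighted Python analogue: a PCA on one-hot asset indicators standing in for the multiple correspondence analysis, the three-category supermarket-use outcome, and a multinomial logit whose exponentiated coefficients give relative risk ratios against the 'never' category. All column names are hypothetical, and the design-based weighting and clustering handled by Stata's svy commands are not reproduced; this is an illustration of the approach, not the original computation.

import numpy as np
import pandas as pd
import statsmodels.api as sm
from sklearn.decomposition import PCA

def wealth_tertiles(df: pd.DataFrame, asset_cols: list[str]) -> pd.Series:
    """Low/medium/high relative-wealth tertiles from dwelling/asset variables."""
    indicators = pd.get_dummies(df[asset_cols].astype("category")).astype(float)
    score = PCA(n_components=1).fit_transform(indicators)[:, 0]
    # The sign of a principal component is arbitrary: orient the score so that
    # owning more assets raises it.
    if np.corrcoef(score, indicators.sum(axis=1))[0, 1] < 0:
        score = -score
    return pd.Series(pd.qcut(score, q=3, labels=["low", "medium", "high"]), index=df.index)

def supermarket_outcome(df: pd.DataFrame) -> pd.Series:
    """never / MSM only / LSM, derived from binary use-of-outlet indicators."""
    return pd.Series(
        np.select([df["uses_lsm"] == 1, df["uses_msm"] == 1], ["lsm", "msm_only"], "never"),
        index=df.index,
    )

def relative_risk_ratios(df: pd.DataFrame, covariate_cols: list[str]) -> pd.DataFrame:
    """Multinomial logit of the outcome on covariates; returns exp(coefficients)."""
    y = pd.Categorical(supermarket_outcome(df),
                       categories=["never", "msm_only", "lsm"]).codes  # 0 = reference
    X = sm.add_constant(pd.get_dummies(df[covariate_cols], drop_first=True).astype(float))
    fit = sm.MNLogit(y, X).fit(disp=False)
    return pd.DataFrame(np.exp(np.asarray(fit.params)),
                        index=X.columns,
                        columns=["msm_only vs never", "lsm vs never"])

# Example with hypothetical column names:
# hh["wealth_tertile"] = wealth_tertiles(hh, ["wall_material", "has_fridge", "has_car"])
# rrr = relative_risk_ratios(hh, ["urban", "wealth_tertile", "steady_income", "education"])

Exponentiating the multinomial logit coefficients is what turns them into the relative risk ratios reported in Table 2; a design-based replication would additionally need the survey weights and cluster-robust variance estimation.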
For MSM, the sizeable unadjusted association with the economic level of the household was drastically reduced by the adjustment; conversely, the spectacularly strong unadjusted association between likelihood of shopping at LSM and increasing economic level, though reduced, was still remarkable once the confounding of other socio-economic variables was taken into account. Households with a steady income, a credit card or easy access were twice as likely to shop at MSM v. never, but when adjusted only steady income was still associated; for shopping at LSM v. never, unadjusted associations were stronger for steady income, owning a credit card and easy access, but although still significant, were much reduced after adjustment, indicating that their effect was greatly (though not entirely) confounded by other socio-economic variables. Concerning the characteristics of the person in charge of food shopping, age was not associated with use of MSM or LSM either before or after adjustment, even if the effect of the adjustment was towards more use of supermarkets by younger people. Neither the sex nor the marital status of the person in charge of food shopping was associated with the use of supermarkets. In unadjusted analyses, a high education level was clearly associated with shopping at MSM and even more so for LSM; however, once adjusted, a strong independent association with education level persisted only for MSM, while it was much reduced for LSM (half as strong as for MSM). The observed unadjusted association with the professional occupation of the person was mostly confounded by other socio-economic variables. Regarding access issues, additional analyses were also performed to specifically assess associations of supermarket use with car ownership (detailed data not shown). Unadjusted analysis revealed that it was indeed more associated with LSM than MSM use, but its effect was entirely confounded by socio-economic variables.

--- Reasons for choice of type of retail outlet

Table 3 lists weighted percentages pertaining to the reasons (rows) given by users for their choice of a specific type of retail outlet (columns). Out of the twelve items, only two pertained to the food products themselves. An equal number of five items related to characteristics of the store or of the shopping itself; among these items, proximity was most often quoted within each retail category but also over the whole sample of subjects. Figure 1 displays the combined rows/columns on the first two axes of the correspondence analysis of the choice data. The first and second axes account for 77.1 % and 18.5 % of total inertia, respectively, so that the residual information not taken into account is minor; the high percentage of inertia on the first axis and the typical 'horseshoe' shape of the mapping indicate a mostly one-dimensional structure. Contributions to inertia (data not shown) on the first axis of row and column points revealed that the salient feature was that the subjects contrasted 'large supermarkets' (chosen for the 'leisure' dimension of shopping there but not their 'proximity') v. the 'nearby grocer' (chosen mainly because of 'availability of credit' and 'proximity', but also 'emergency shopping' and 'fidelity', and not 'good prices' or 'quality choice'). Contrasts observed on axis two (details not shown) carried much less information, indicating that markets were quoted as being differentially chosen v. all other types of retail because of 'freedom of choice', v.
'large supermarkets' because of their 'proximity', and v. 'grocer' because of their 'good prices'. It should be noted that reasons for the choice of 'medium supermarkets' were not very distinct, their profile being intermediary between 'large supermarkets' and other retail outlets. Variations around these overall trends were observed according to socio-economic characteristics (detailed data not shown). There was a strong decreasing relationship between household economic level and likelihood of quoting credit as a reason for shopping at the nearby grocer (40.0 (SE 3.5) %, 18.8 (SE 2.6) % and 8.8 (SE 2.5) % for the lower, middle and higher tertile of economic level, respectively, n 685, P < 0.0001). Conversely, the probability of declaring using the nearby grocer for emergency food shopping increased with economic level (4.2 (SE 1.9) %, 13.3 (SE 2.5) % and 25.2 (SE 5.2) % for the first, second and third tertile, respectively, n 685, P = 0.0001).

--- Discussion

In the context of a rapidly evolving nutritional transition and major changes in lifestyle, the present study assessed the relative importance of different types of food retailer (modern and traditional), the socio-economic profiles of consumers and the reasons behind the choice of the different types of outlets in Greater Tunis. Concerning the overall use of supermarkets, while MSM were used by half the households, only just over a quarter of these consumers also used LSM. As expected, sharp contrasts between areas and socio-economic categories were observed, as well as differences according to the type of outlet. A strong association was found with urban area only for supermarkets of medium size but not for large ones, but results pertaining to urban v. rural households should not be overemphasised given the mostly urban nature of the study population: the impact of supermarkets on peripheral rural areas warrants further research. Nevertheless, this result is not entirely surprising given the intra-urban location of MSM in the district of Tunis v. more peripheral LSM. It also underlines the existence, all other things being equal, of location issues specific to the type of supermarket (rather independently of other socio-economic factors, proximity was much more often quoted as a reason for the choice of MSM than for LSM). Regarding the inverse association between small household size and LSM use, it is likely related to a combination of more 'modern' socio-cultural values (in relation with the demographic transition but also cultural values, e.g. whether or not several generations still live under the same roof) as well as the higher socio-economic status of smaller households in this context. Although adjustment did reduce the strength of the association by half, it was still quite sizeable, especially for the smaller households; adjustment for socio-economic factors likely only partly accounts for the socio-cultural factors that underlie the relationship between the use of LSM and the size of the household. Concerning household socio-economic level, once adjusted, LSM use was shown to increase drastically with overall household wealth while the association was much weaker for MSM. Having a steady income was found to be independently associated with the use of both types of supermarkets. Having a credit card and easy access to supermarkets were quite specifically associated with LSM but nevertheless strongly confounded by other socio-economic factors (mostly household wealth).
For these three factors, the association was nevertheless weak compared with household overall wealth. Among all the characteristics of the person in charge of food shopping, only a specific effect of a higher level of education was clearly associated with shopping at supermarkets, and the association was much stronger for medium-sized than large supermarkets. Concerning age, once adjusted for socio-economic confounders, associations with age were in line with the hypothesis that shopping at supermarkets and especially LSM would be more frequent among younger customers; but, given the sample size, this could not be inferred for the study population. Thus, overall, we found that the use of supermarkets is more frequent among socio-economically privileged and more educated consumers in Greater Tunis. This suggests that, in the Tunis area, although supermarkets have been there for a long time, supermarket development is still only at the first step of the model of diffusion. This contrasts with Kenya, a low-income country where 60 % of the 30 % poorest consumers shop at supermarkets (22). Given the three-step diffusion model, this implies that there are context-specific diffusion issues, either cultural or linked to different levels of economic development, or to the relative characteristics of the other types of food retail outlets.

Fig. 1 Bi-plot of the first two axes of the correspondence analysis of reasons stated for the choice of type of food outlet, Greater Tunis, Tunisia, 2006 (axis 1 accounts for 77.1 % of total inertia; labels are centred on (x, y) coordinates; SM, supermarket).

It could also be that, in Greater Tunis, MSM and LSM are not at the same stage of diffusion. If we consider that MSM and LSM have self-service in common and differ mainly in their surface area, we could have expected fewer differences between consumer profiles in the two types of retailers. Yet, as indicated by the striking difference between MSM and LSM consumer profiles according to household economic level, we can hypothesise that LSM are at an earlier stage of the supermarkets' diffusion model than MSM, the latter being, at the same moment in time and in the same town, at a more advanced stage. It could also be that, rather independently of the three-step model, MSM and LSM have and will always have their specific consumers, with specific motivations (e.g. leisure for LSM). Another salient point of our results is that although for a tiny minority of consumers (4.2 %) the main shopping place is supermarkets to the exclusion of all other types of retail outlet, most households still shop at their neighbourhood grocer, whether or not they shop at supermarkets. This suggests that food shopping practices in Greater Tunis are in a transition stage with a combination of both modern and traditional retail food outlets. Indeed, at the national level, even if modern supermarkets are increasing in concentration and popularity, the bulk of Tunisian food retailing is still dominated by small neighbourhood grocery shops (23), of which there are around 250 000 in the whole country. These shops are evenly distributed, including in strictly residential neighbourhoods that otherwise feature no commercial activity, so that most inhabitants of our study area are within short walking distance from an attar.
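The correspondence analysis behind the bi-plot in Fig. 1 can be reproduced compactly from the reasons-by-outlet contingency table. The sketch below is a generic SVD-based implementation offered purely as an illustration; the counts and the exact software options of the original analysis are not reproduced here.

import numpy as np

def correspondence_analysis(counts: np.ndarray):
    """Row/column principal coordinates and per-axis share of total inertia."""
    P = counts / counts.sum()                            # correspondence matrix
    r, c = P.sum(axis=1), P.sum(axis=0)                  # row and column masses
    S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))   # standardised residuals
    U, sv, Vt = np.linalg.svd(S, full_matrices=False)
    row_coords = (U * sv) / np.sqrt(r)[:, None]          # reasons in principal coordinates
    col_coords = (Vt.T * sv) / np.sqrt(c)[:, None]       # outlet types in principal coordinates
    inertia_share = sv**2 / np.sum(sv**2)                # e.g. about 0.77 on axis 1 in the paper
    return row_coords, col_coords, inertia_share

# Usage with a hypothetical 12 reasons x 4 outlets table of counts:
# rows, cols, share = correspondence_analysis(np.random.randint(1, 50, size=(12, 4)).astype(float))

Plotting the first two columns of the row and column coordinates together yields a bi-plot of the kind shown in Fig. 1, and the first entry of the inertia shares corresponds to the dominant first axis reported in the Results.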
The fact that food shopping still relies heavily on more traditional types of outlet is all the more true for shoppers whose socio-economic status is low, of whom only 4.9 % were found to use hypermarkets and only about a quarter to use MSM in addition to shopping at their local grocer or market. Overall, the reason that most contrasted the choice of grocers v. other types of retail was 'availability of credit'. In other contexts (Brazil and China), it has been shown that supermarkets are starting to offer consumers credit cards and even banking services (24), but in our study area availability of credit was quoted almost exclusively for the neighbourhood grocer. Regarding income-specific differences pertaining to the importance of the availability of credit, households of the lower tertile of economic level were five times more likely to quote this reason for their choice of grocer than those of the higher tertile. This may seem paradoxical, since purchasing food in small quantities from local retailers on a daily basis generally costs more (25), and this feature also stood out in the present study as the neighbourhood grocer was the type of retailer by far the least likely to be associated with good prices or promotions. Nevertheless, for the poorest consumers, the local grocery shop is the main and probably only place where they buy food, due to the lack of a sufficient steady income (which has been shown to be more associated with supermarket use), despite the fact that, regarding the food products, this type of outlet is much less frequently associated with good quality choice than the other three types of retailers. Interestingly, households from the higher income tertile were six times as likely as the lower tertile to quote 'emergency shopping' as a reason for using the attar, indicating that although this type of retail outlet is widely used by all categories of households, the reasons for doing so are very different. In addition to financial matters, it was also shown that traditional food retail fulfils social functions, as consumers are still attached to their personal relationship with their local shopkeeper; indeed, this system better meets consumers' social and cultural expectations by allowing them to increase their contact with the outside world in a way that the modern distribution system cannot (8,26). Although the latter dimension was not directly assessed in our study, the fact that fidelity was much more often quoted as a reason for shopping at the attar v. other types of food retail is likely related to these social and psychological co-factors. The development of supermarkets is indeed an issue that concerns the diet of high- or middle-income consumers in our study area. Nevertheless, the almost exclusive use of street corner stores for food shopping by lower-income consumers is also an issue. In other settings, some authors have described the emergence of urban 'food deserts', deprived areas where low-income people have poor access to whole foods, e.g. fruit and vegetables, with probable negative consequences for health (27,28). The main underlying factors are wealthier people moving from the centre towards the suburbs and, with them, the supermarkets that used to be located in the city centres. The situation is currently somewhat different in our study area as both traditional markets and many of the medium-sized supermarkets are still located in the downtown area.
But this may change over time and indeed, despite the rise of supermarkets, the importance of corner stores should not be overlooked, e.g. for nutrition interventions targeted through the food retail sector (29)(30)(31). Regarding the characteristics of the study, its strengths are that the questionnaire was based on a preliminary in-depth qualitative study, that it featured detailed analyses according to the different types of supermarkets and food retail outlets, and that it provided a detailed assessment of the motivations behind the choice of the different types of outlets. As for its limitations, one is the cross-sectional design of the survey, which always makes it difficult to interpret observed associations as causal even when care is taken to adjust for relevant confounders (32). The quantitative analysis of declared motivations would also need to be complemented by exploring complex items in more detail (such as 'quality-choice', which could be interpreted differently depending on the type of product it actually refers to). Generalisability issues are always of importance. However, although a small country, Tunisia is emblematic both of fast-emerging developing countries from an economic/development point of view, and also of a wide range of south and east Mediterranean countries that share societal and cultural issues. Nevertheless, the results of the present study regarding socio-economic characteristics associated with use of the different types of food retail outlets, though partly similar to those observed in Madagascar (33), do differ from those observed in Kenya (22), Brazil (24) and Guatemala (34). These results show that supermarketisation in the developing world does not operate homogeneously and does not have the same effects in every country. Moreover, our results, based on a cross-sectional analysis in 2006, are time-specific, and whether or not the current trend in supermarketisation in developing countries will persist is an open question (33). In emerging countries, in the context of major economic and societal changes, changes in the food retail sector, including the rapid development of supermarkets, have been shown to have consequences for dietary intakes. Nevertheless, studies providing evidence regarding consumers' motivations as well as socio-economic profiles with respect to the type of food outlet used for food shopping are rare in south Mediterranean countries. The present study is thus pioneering with respect to changes in food shopping attitudes and practices linked to the modernisation of food retailing in this context. Indeed, we derived substantiated results regarding the actual influence on food shopping habits: (i) the overall limited use of supermarkets by the study population; (ii) the still predominant role of neighbourhood grocers, whether or not combined with supermarket use depending on socio-economic status; (iii) the differential socio-economic profiles of customers of the different types of supermarkets; and (iv) the reasons that motivate use of the different types of outlet. South and east Mediterranean countries are experiencing a fast-evolving nutrition transition where obesity and nutrition-related non-communicable diseases are becoming prevalent also among the lower socio-economic strata (12). In this context, it could seem feasible and cost-effective for those in charge of nutrition policies to address this issue by implementing nutrition interventions (e.g.
financial incentives, nutrition education, promotion of 'healthy' products, informative labelling) only through centralised types of retail such as supermarkets. But the results of the present study underline that such interventions would likely fail to cover a large part of the population and would mainly reach customers of higher socio-economic status, thus risking an increase, rather than a reduction, in inequalities regarding food consumption and nutrition-related non-communicable diseases.
Objective: In the context of the nutrition transition and associated changes in the food retail sector, to examine the socio-economic characteristics and motivations of shoppers using different retail formats (large supermarkets (LSM), medium-sized supermarkets (MSM) or traditional outlets) in Tunisia. Design: Cross-sectional survey (2006). Data on socio-economic status, type of food retailer used and shopping motivations were collected during house visits. Associations between socio-economic factors and type of retailer were assessed by multinomial regression; correspondence analysis was used to analyse declared motivations. Setting: Peri-urban area around Tunis, Tunisia, North Africa. Subjects: Clustered random sample of 724 households. Results: One-third of the households used LSM and two-thirds used either type of supermarket, but less than 5 % used supermarkets only. Those who shopped for food at supermarkets were of higher socio-economic status; those who used LSM were much wealthier and more often had a steady income or owned a credit card, while MSM users were more urban and had a higher level of education. Most households still frequently used traditional outlets, mostly their neighbourhood grocer. Reasons given for shopping at the different retailers were most markedly leisure for LSM, while for the neighbourhood grocer the reasons were fidelity, proximity and availability of credit (the latter even more so for lower-income customers). Conclusions: The results pertain to the transition in food shopping practices in a south Mediterranean country; they should be considered in the context of growing inequalities in health linked to the nutrition transition, as they differentiate use and motivations for the choice of supermarkets v. traditional food retailers according to socio-economic status.
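For readers who want to see the shape of the analysis summarised above, the following is a minimal sketch of a multinomial regression of retailer type on socio-economic covariates. It is not the study's code or data: the variable names, coefficients and simulated households are invented purely for illustration, and Python/statsmodels is used here only as a convenient stand-in for whatever software the authors actually employed.

```python
# Illustrative sketch only: synthetic data and made-up variable names, not the survey dataset.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 724  # same order of magnitude as the surveyed households

# Hypothetical socio-economic covariates
income_tertile = rng.integers(1, 4, n)      # 1 = lowest, 3 = highest
education_years = rng.integers(4, 18, n)    # years of schooling
steady_income = rng.integers(0, 2, n)       # 1 = household has a steady income

# Hypothetical choice propensities: outcome 0 = grocer (baseline), 1 = MSM, 2 = LSM
logits = np.column_stack([
    np.zeros(n),
    -2.0 + 0.30 * income_tertile + 0.10 * education_years,
    -3.5 + 0.80 * income_tertile + 0.05 * education_years + 0.6 * steady_income,
])
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
retailer = np.array([rng.choice(3, p=p) for p in probs])

# Multinomial logistic regression of retailer type on socio-economic factors
X = sm.add_constant(pd.DataFrame({
    "income_tertile": income_tertile,
    "education_years": education_years,
    "steady_income": steady_income,
}))
fit = sm.MNLogit(retailer, X).fit(disp=False)
print(fit.summary())  # coefficients contrast MSM and LSM use against the grocer baseline
```

The correspondence analysis of declared motivations would be run separately on the motivation-by-retailer contingency table; the point of the sketch is only the structure of the regression (a categorical outcome modelled on socio-economic covariates), not its coefficients.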
Background Emerging adulthood (age span of 18-25) is traditionally viewed as a time of optimal health with low levels of morbidity and chronic disease [1,2]. At the same time, young adults appear to be more prone to psychosomatic health symptoms, depending on their individual life satisfaction and perceived future outlook [3,4]. Characterized by changing life circumstances, personal growth and the manifestation of a certain lifestyle, emerging adulthood is a distinct life phase [5,6]. In comparison with other age groups, young adults tend to consume more alcohol, tobacco and drugs [7,8]. This life stage is therefore a vulnerable and critical time, in which specific health interventions might help pave the way for a healthy lifestyle. University life in particular can hold several challenges for students, impeding the establishment of health-supporting behavior [9]. On the one hand, the variety of study formats opens up considerable freedom for individually adaptable life concepts, such as studying alongside a part-time or full-time job, flexible lecture periods or studying during parental leave. The position on the spectrum from purely physical presence on site to exclusively digital forms of learning and examination from home can be selected according to the students' individual life situation [10]. The university setting receives increased attention in the context of prevention, both because of the described health situation of students and because of the steady growth of the higher education sector [10]. Universities of Applied Sciences (UAS) in particular register an increasing number of students because they offer simplified access for professionally qualified persons, (study) flexibility and a high diversity of studies in the form of dual and part-time courses [10,11]. On the other hand, this freedom and flexibility seem to come at a price. Changes in stress situations and strain parameters can be observed when it comes to meeting work and study requirements. Some studies identified factors such as double and multiple burdens, a disrupted study-family balance, an uneven study-leisure-time balance and severe work-related psychological stress [12-15]. Other requirements that students face during their studies include, for example, mastering demanding curricula, time-consuming workloads as well as mental and emotional challenges [16]. Current research on students' health in Germany reveals an increased burn-out potential, an overall increased stress load, an above-average level of anxiety, sleep disorders, physical symptoms such as body aches or back pain and an overall subjectively lower-rated health status than comparable cohorts [12, 17-21]. In the HISBUS Panel, a large-scale cross-sectional study with a total net sample of n = 6198, female participants in particular reported physical and psychological complaints. Additionally, about 75% of the HISBUS cohort reported suffering from physical complaints several times a month [17]. The students' health status seems to reflect the consequences of permanent overload in diverse ways. Studies indicate that a poor state of health might result from the interaction of multiple factors, e.g., insufficient health behavior or a low degree of health literacy [22]. The majority of studies depicts a linear relationship between the three health dimensions, stating that health literacy influences the health behavior of a person and thereby impacts health outcomes [23].
Contrary to that, some studies report a different constellation of the three health dimensions, in which this linearity has not been observed at all or in which no association between health literacy and certain health behaviors was found, e.g. smoking among health professionals [24,25]. In fact, current studies on college students' health behavior and health literacy point to a linear as well as a reciprocal relationship. Accordingly, a purely linear, consecutive view seems to fall short, as the dynamics of interactions, feedback effects, antecedents and consequences cannot be integrated [26]. Accompanying external or social factors can intensify the interaction of the health dimensions, influencing the state of health positively or negatively. With regard to health behavior, the above-mentioned stressors have a negative effect on the amount of students' physical activity and on their nutritional behavior [17,27,28]. Drug and alcohol consumption have also been shown to increase among students [17,29]. Although these findings have to be interpreted with caution, the HISBUS Panel [17] attested students poorer health behavior in many respects compared with non-students of the same age. In particular, the results revealed lower levels of physical activity, increased alcohol and nicotine use [29], abuse of cocaine and cannabis, as well as increased intake of painkillers [17]. In this context, health literacy is an important individual competence and is related to overall literacy. It includes knowledge as well as a set of cognitive, social and motivational skills enabling people to access, understand, appraise and apply health information [26,30,31]. Health literacy also entails the capacity to make health-related judgements, take decisions and establish health-promoting behaviors on a daily basis (e.g., a healthy diet, physical activity, stress management) [32-34]. This understanding suggests that health-literate students are better able to address the requirements and burdens described. In addition to highlighting the need for a better understanding of the complex nature of the relationship between the above-mentioned health dimensions, these studies also show different characteristics of the health dimensions among students. This suggests the necessity of different approaches within the framework of possible health interventions. Against this background, the aim of this cohort study is to gain insight into the relationship and change of UAS students' health literacy, health status and health behaviors during their studies. Empirical inventories of student health differ both in their understanding of health and in the indicators collected [35,36]. Thus, the cohort study's assessment incorporates the broad categories of Dietz et al.'s systematic umbrella review [36] to provide further clarification on the factors influencing student health (substance use, mental health/well-being, diet and nutrition, physical activity, sleep hygiene, media consumption and others). In this context, the following research questions will be addressed: 1. How do health behavior, health status and health literacy change during the course of study and after graduation (12 months post)? 2. What influencing factors on the health behavior, health status and health literacy of UAS students can be identified? --- Methods / design The German health promotion initiative "health-promoting university" is the overarching framework of the Healthy Habits research project [37].
The cohort study is founded on a biopsychosocial and salutogenetic approach and assumes a multidimensional health continuum [38,39]. If the salutogenetic approach is applied to the health of individuals, a three-way split emerges in which the state of health dynamically results from health behavior as a generalized resistance resource and health competence as a superordinate empowerment in the sense of coherence. In summary, this leads to an understanding of health as a multidimensional and dynamically interacting construct, with the three core dimensions health status, health literacy and health behavior (see Fig. 1). --- Design of the study The research design follows a longitudinal, prospective cohort study of enrolled UAS students at the IST University of Applied Sciences in Germany. The STROBE (Strengthening the Reporting of Observational Studies in Epidemiology) guidelines were applied in alignment with the research objective [40]. The frequency of data assessment is set to a semester-by-semester cycle (see Fig. 2). During the winter semester 2020/2021, first-semester students are being recruited for the first time. --- Sample and sample size Students have been invited by email to participate in the cohort study and have additionally been introduced to the Healthy Habits project (official German website at https://healthyhabits.ist.de/) in several seminars at the beginning of the semester. The email contains information on the study, an invitation link to the research homepage and an identification code. The invitation email has been sent to all active and enrolled first-semester students of all departments (sports business, fitness & health, tourism & hospitality, communication & business). Students who have set their status to inactive (e.g. maternity leave or personal matters) for more than one semester will not be included. Since this is an exploratory cohort study, no formal sample size calculation was done. We assume the participation rate of first-semester students to range from 20 to 40%. This would mean an average dataset of n = 400 per semester. This calculation is made conservatively due to the constraints imposed by the Covid-19 pandemic. --- Data collection Data are collected online using a questionnaire tool implemented in a progressive web application. This app has been specially programmed for this research project. The questionnaire can be completed step by step, and answers are saved automatically. There is no possibility to skip single items. After answering all questions, the students can submit their results, after which no further changes can be made. Gathered data are stored on a separate server, in full compliance with current European as well as German federal data protection standards (DSGVO). No connection to student records at the IST University of Applied Sciences is made, nor is the project team able to access user profile credentials. --- Variables under study and assessment Health status, health behavior and health literacy are registered on the basis of different domains for which a positive correlation with the respective health dimension could be determined. Health-related quality of life, sleep quality, overall life satisfaction, self-perceived stress and self-perceived health status are seen as predictive measurements for health status [9,41,42].
To assess the dimension of health behavior, the domains of health-related physical activity, screen time, nutritional behavior, alcohol consumption, smoking habits and drug consumption are used [43]. Health literacy is the only dimension which is validated as a construct in itself and will therefore not be approximated through surrogate constructs. Table 1 provides an overview of the selected constructs and the primary outcome parameters used to operationalize the three health dimensions. To gather comparable data, the selection of variables was based as far as possible on similar studies for each of the three dimensions. The assessment is composed of 10 established questionnaire-based instruments with a total of 101 items. As Table 1 shows, five instruments are used to assess health status, four instruments are used for health behavior, and one instrument has been selected to assess health literacy. To obtain a representative picture of students' health status, a single item of the Minimum European Health Module (MEHM), 5 items of the German version of the Satisfaction With Life Scale (SWLS), 7 items of the German Life- and Study-Satisfaction Scale (LSZ) and 10 items of the German version of the Perceived Stress Scale are used, among other instruments (Table 1). Health-related behavior covers a variety of behavioral domains, and their measurement in large cohort studies is very complex. For the described research project, the domains of physical activity, screen time, nutrition, smoking habits as well as alcohol and drug consumption are of interest. Related data are collected using 8 items of the Physical Activity section of the European Health Interview Survey (EHIS-PAQ) and 6 items of the Brief Alcohol Screening Instrument in Medical Care (BASIC). Smoking habits (1-3 items), drug consumption (7 items) and nutrition behavior (13 items) are assessed with a total of 23 adapted items of the FEG questionnaire (original: Fragebogen zur Erfassung des Gesundheitsverhaltens [Questionnaire to assess health behavior]). Non-smoking participants have to answer only 1 item and are led to the next domain. To measure time spent with digital devices, 6 items of the self-rated Screen-time Questionnaire [63] were selected, modified and supplemented. The 16-item European short form of the Health Literacy Survey (HLS-EU-Q16) concludes the assessment. The authors of this paper reviewed the criticisms of the original version of the HLS-EU [72] and therefore selected the latest updated short form of the instrument. The published reference values as well as the statistically supported counter-publication underline the benefits of the HLS [34]. For all instruments, item content and answer format are used as published and have only been modified to fit the digital progressive web application. --- Statistical analyses Descriptive statistics (mean, standard deviation (SD), median, minimum, maximum, absolute and relative frequencies) will be calculated to describe the cohort's sociodemographic features (gender [male/female/diverse]; age [year of birth]) and study-related characteristics (type of degree [BA/MA], field of study [health-related vs. non-health-related] and study format [dual/part-time/full-time]). This stratification will apply to all statistical analyses. The changes in health behavior, health status and health literacy (research questions 1 & 2) will each be evaluated by means of repeated-measures analysis of variance.
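As a rough illustration of the planned repeated-measures analysis (and not the project's actual SPSS procedure), the sketch below runs a one-way repeated-measures ANOVA on hypothetical per-semester scores for a single health dimension; the sample size, score values and the "health_literacy" label are placeholders chosen for the example.

```python
# Hedged sketch with invented numbers: not the Healthy Habits dataset and not its SPSS syntax.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(1)
n_students, n_semesters = 60, 4

# Long-format table: one score per student and semester for a single health dimension
rows = []
for student in range(n_students):
    baseline = rng.normal(12, 2)  # hypothetical HLS-EU-Q16-style sum score
    for semester in range(1, n_semesters + 1):
        rows.append({
            "student": student,
            "semester": semester,
            "health_literacy": baseline + 0.3 * semester + rng.normal(0, 1),
        })
data = pd.DataFrame(rows)

# Repeated-measures ANOVA: does the score change across semesters within students?
anova = AnovaRM(data, depvar="health_literacy", subject="student", within=["semester"]).fit()
print(anova)
```

The linear regression of sociodemographic and study-related influencing factors described next would follow the same pattern, a tidy data frame plus a single model call, only with those factors entered as predictors.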
After checking the statistical model prerequisites, sociodemographic and study-related influencing factors on health behavior, health status and health literacy will each be tested by means of linear regression analysis. For all calculations, the level of statistical significance will be set to p ≤ 0.05 [73] and SPSS® (Statistical Package for the Social Sciences, IBM, Version 27) will be used. --- Discussion Attending a university or UAS is a life-changing event in general and can be a very formative phase of life for young adults. Students must learn to deal with stress, the burden of studying for exams, setbacks as well as successes and, overall, to take responsibility for themselves. Unfortunately, taking care of one's own health is not always priority number one during that phase of life. Current studies provide indications that students show poor health behavior [17,29]. The overall consequences of an unhealthy lifestyle as well as the insufficient management of psychophysical requirements are not only reflected in a poorer state of health, but also have an impact on the course of the studies. Lower academic performance, a significantly longer duration of study and even drop-outs are possible consequences [16,74]. According to the German Center for Higher Education and Science Research (original: Deutsches Zentrum für Hochschul- und Wissenschaftsforschung [DZHW]), the dropout rate ranges between 15 and 35%, depending on the type of study and the subject [75]. Addressing these aspects efficiently and sustainably with interventions requires a further understanding of how health changes during the course of study as well as of the impact of influencing factors. A mere consideration of health status does not capture this complexity, since it is not always known whether a poor health status results from insufficient health behavior or from a lack of competence. Recent research shows that only about 30.3% of students have sufficient health literacy [76]. There are also significant differences between male and female students. Furthermore, students with a migrant background, students in lower degree programs (bachelor's degrees) and first-semester students have significantly poorer health literacy [77-79]. These studies also suggest the existence of different target groups within the setting of UAS students, which in turn should be approached differently with tailored interventions. To the authors' best knowledge, such comprehensive studies have not yet been sufficiently conducted in a UAS setting. Despite growing scientific interest in student health research in recent years, the currently available data are consistently inadequate. Most of the existing studies either looked at the three health dimensions separately from each other or are based on cross-sectional examinations [9,17,20,41]. Longitudinal studies on the three health dimensions over the course of study, on the other hand, are rare. Also, the quantity and quality of studies investigating the associations between the described health dimensions and their mutual influence within the student setting are insufficient. Despite the promising potential of the Healthy Habits research project, field research challenges as well as limitations have to be mentioned. As a consequence of the Covid-19 pandemic, the start of participant recruitment had to be postponed to December 2020.
In addition, as a result of federal restrictions, all in-person seminars are prohibited, so that for the entire winter semester 2020/2021 only online-based seminars are offered. First-semester events such as initiations and other in-person inauguration seminars have been canceled. Therefore, communication with the students can only take place digitally. Another potential distortion can be caused by the assessment itself. After completing the app-based questionnaire, the results are displayed in the form of a radar chart. Each health dimension is displayed separately, reflecting aspects of the selected assessment instruments. The authors are aware of the fact that receiving an evaluation of one's questionnaire responses might be seen as a first health intervention, increasing students' awareness of health topics. The overarching intention is to motivate students to participate in the assessment over the long term. The Healthy Habits research project's major strengths are the longitudinal design and the app-based approach to reach an increasingly digitally affine target group. This mainly digital approach widens the spectrum of possible interventions, which vary by format, content and degree of individualization. Fields of action (original: Handlungsfeld) are legally defined areas in which preventive interventions have to take place, including physical activity, diet, stress and addiction. Next to classic course interventions, additional formats may include gamification elements such as challenges or quizzes, push messages, podcasts, blogs, webinars or scribble videos. It is also possible to address subgroups or single individuals of the target group by assigning achieved assessment scores to certain interventions. The findings will bring greater understanding of how to address students' challenges with tailored preventive interventions. --- Availability of data and materials The datasets used and/or analysed during the study will be available from the corresponding author on reasonable request. --- Declarations --- Ethics approval and consent to participate For future publications based on the described research project, ethical approval was granted by the independent ethics committee of the German Sports University Cologne on October 21st, 2020 (version 1.0; reference 146/2020), including participant information material, website information and the informed consent form. Written consent to participate is given by the students with the first log-in to the research project website. --- Consent for publication Not applicable. --- Competing interests The authors declare that they have no competing interests.
Background: Emerging adulthood is traditionally viewed as a time of optimal health, but also as a critical life span characterized by changing life circumstances and the establishment of an individual lifestyle. University life in particular seems to hold several challenges impeding the establishment of health-supporting behavior, as many students tend to show poorer health behavior and a higher number of health-related problems than comparable age groups. This, along with a steady growth of the higher education sector, brings increased attention to the university setting in the context of prevention. To date, there are few empirical longitudinal and coherent cross-sectional data on the status of students' health literacy, health status and health behaviors, or on the impact of the study format on students' health. The aim of this prospective cohort study is to reduce this research gap. Methods: Starting in the winter semester 2020/21, the prospective cohort study collects data on health literacy, health status and health behavior on a semester-by-semester basis. All enrolled students of the IST University of Applied Sciences, regardless of study format and discipline, can participate in the study at the beginning of their first semester. The data are collected digitally via a specifically programmed app. A total of 103 items assess subjectively perceived health status, life and study satisfaction, sleep quality, perceived stress, physical activity, diet, smoking, alcohol consumption, drug consumption and health literacy. The statistical analysis uses (1) multivariate methods to examine changes within the three health dimensions over time and (2) multiple regression methods and correlations to examine the associations between the three health dimensions. Discussion: This cohort study collects comprehensive health data from students over the course of their studies. It is assumed that the gathered data will provide information on how the state of health develops over the study period. Also, different degrees of correlation between health behavior and health literacy will reveal their different impacts on the state of students' health. Furthermore, this study will contribute to the empirically justified development of target group-specific interventions. Trial registration: German Clinical Trials Register: DRKS00023397 (registered on October 26, 2020).
Introduction A raft of recent research illustrates that many people continue to have sex well into and throughout later life (Bergstrom-Walan & Nielsen, 1990; Bourne & Minichiello, 2009; Field et al., 2013; Lindau et al., 2007; Mercer et al., 2013; Schick et al., 2010). Indeed, Swedish research by Beckman, Waern, Ostling, Sundh and Skoog (2014) illustrated that levels of sexual activity may be on the rise amongst older cohorts, suggesting it is increasingly important that we pay attention to the sexual health and well-being of older people. For many, sexual expression, pleasure and identity remain important as they age, although research also indicates there is diversity amongst older adults in this regard (Field et al., 2013; Fileborn et al., 2015a; Fileborn, Thorpe, Hawkes, Minichiello, & Pitts, 2015b; Gott & Hinchliff, 2003; Gott, Hinchliff & Galena, 2004; Lindau, Leitsch, Lundberg, & Jerome, 2006; Minichiello, Plummer & Loxton, 2004). Changes in social norms in the English-speaking West regarding the acceptability of divorce and re-partnering after divorce or the death of a partner also mean that there are greater opportunities for initiating new sexual and romantic relationships in older age (Bateson, Weisberg, McCaffery, & Luscombe, 2012; DeLamater, 2012; DeLamater & Koepsel, 2015; Idso, 2009; Nash, Willis, Tales, & Cryer, 2015). Accompanying advances in technology and the development of online dating have facilitated the process of finding sexual and romantic partners in middle and later life (Bateson et al., 2012; Malta, 2007; Malta & Farquharson, 2014). This continuation of sexual activity and these shifts in sexual partnerships in later life have, however, been accompanied by increases in the rates of sexually transmitted infections (STIs). While older cohorts still make up a minority of STI diagnoses overall, rates in these groups have steadily increased in many countries across the Anglophone West. For example, in Australia, rates of chlamydia diagnoses rose from 16.4 per 100,000 in 2010 to 26.6 per 100,000 in 2014 in the 55-64 age group (The Kirby Institute, 2015). Rates of gonorrhoea and syphilis in this age group also rose during this timeframe. This mirrors international trends across countries such as the U.S. and U.K. (Centers for Disease Control and Prevention, 2014; Minichiello, Rahman, Hawkes, & Pitts, 2012; Poynten, Grulich & Templeton, 2013; Public Health England, 2016). Despite rising STI rates in older populations, we know surprisingly little about their safer sex practices and knowledge of STIs. The limited research undertaken to date suggests that older people do not consistently practice safer sex (Altschuler & Rhee, 2015; Bourne & Minichiello, 2009; Dalrymple, Booth, Flowers, & Lorimer, 2016; de Visser et al., 2014; Foster, Clark, McDonnell Holstad, & Burgess, 2012; Grant & Ragsdale, 2008; Lindau et al., 2006; Reece et al., 2010; Schick et al., 2010), may lack effective condom use skills (Foster et al., 2012), and report low rates of testing for HIV and STIs (Bourne & Minichiello, 2009; Dalrymple et al., 2016; Grulich et al., 2014; Schick et al., 2010; Slinkard & Kazer, 2011).
Several authors have noted that older heterosexual women may face a higher susceptibility to HIV and STI transmission on account of the physiological and hormonal changes that typically accompany ageing, such as decreased estrogen production leading to thinning of the vaginal wall and subsequent greater susceptibility to tears and cuts (Altschuler & Rhee, 2015; Brooks, Buchacz, Gebo, & Mermin, 2012; Idso, 2009; Johnson, 2013), and the low rates of condom use among post-menopausal women who no longer fear unintended pregnancy (Altschuler & Rhee, 2015; Bateson et al., 2012; Idso, 2009; Johnson, 2013; Lindau et al., 2006). Older men can be reluctant or unable to use condoms as a result of erectile difficulties (Idso, 2009; Johnson, 2013), and older men who take erection-enhancing medications can face a higher likelihood of contracting an STI (Smith & Christakis, 2009). However, we know comparatively little about how older adults understand and define "safer sex," or about the contextual factors that shape and inform their use of safer sex practices and the importance of safer sex to them. Older people grew up in a time when discussions of sex and sexual health were largely taboo, comprehensive sexuality education was generally not available (Cook, 2012; May, 2006; Pilcher, 2005), and STIs were highly stigmatised (though this is arguably still the case in many respects) (Altschuler & Rhee, 2015; Bourne & Minichiello, 2009; Grant & Ragsdale, 2008; Idso, 2009; Nash et al., 2015; Slinkard & Kazer, 2011). Additionally, the dominant sexual and gendered scripts that older people grew up with may constrain their ability to openly negotiate condom use or other safer sex practices in new sexual relationships (Altschuler & Rhee, 2015; Bateson et al., 2012; Nash et al., 2015; Zablotsky & Kennedy, 2003). Further, safer-sex campaigns and policy are typically targeted towards younger people (Bateson et al., 2012; Bourne & Minichiello, 2009; European Expert Group on Sexuality Education, 2016; Gedin & Resnick, 2014; Kirkman, Kenny & Fox, 2013; Nash et al., 2015). Health care professionals are often reluctant to discuss sex per se with older patients (Gott et al., 2004; Grant & Ragsdale, 2008; Nash et al., 2015; Nusbaum, Singh, & Pyles, 2004), and older people often wait for their health care provider to initiate discussions on sex (Lindau et al., 2006; Nash et al., 2015; Nusbaum et al., 2004; Slinkard & Kazer, 2011). These contextual factors may shape and limit the safer sex knowledge and practices of older people; however, further qualitative research is required to examine and confirm the extent to which this may occur (Bateson et al., 2012). There is currently a lack of research, particularly qualitative research, on older people's knowledge of safer sex and the safer sexual practices they engage in (Bateson et al., 2012). Qualitative research is needed to provide a detailed understanding of the perspectives and decision-making processes that older people engage in when having sex in circumstances that present a higher likelihood of STI transmission (Dalrymple et al., 2016). In particular, knowledge is currently lacking about the ways in which older people understand and define "safer sex," the importance they attach to safer sex in particular relationship contexts, the types of safer sex they use, and the potential barriers to using different types of safer sex. Our study, "Sex, Age and Me: a National Study of Sex and Relationships Among Australians Aged 60+", was established, in part, to examine these issues.
Key aims of this exploratory project were to explore older adults' knowledge and use of STI prevention and safer sex. The first Australian study of its kind, Sex, Age and Me collected quantitative and qualitative data (see Lyons et al., under review, for further details). With regard to the latter, 53 qualitative interviews with older men and women were conducted, and a subset of findings pertaining to interview participants' understandings and use of safer sex is explored in this article. The findings have important implications for informing strategies aimed at stemming the rise of STI rates amongst older cohorts, within policy, health services and health promotion. --- Theoretical framework: a life course perspective Our research is situated within a life course perspective, which suggests that older people's understandings and practices of safer sex are shaped by and "within the context of both generational time and historical time" (Ballard & Morris, 2003, p. 134). Safer sex practices are themselves historically and culturally situated, and vary over time and cultural context (Donovan, 2000a, 2000b). Additionally, a life course approach recognises the diversity within lived experience, and that people of the same chronological age may have different experiences based on their particular social and cultural locations (Ballard & Morris, 2003). Participants in our study belong to the "Baby Boomer" generation, and this likely shapes their current experiences and understandings of safer sex. The Baby Boomers are frequently credited with responsibility for leading the "sexual revolution" in the 1960s and 1970s across English-speaking Western countries, particularly the U.S., U.K., and Australia. The sexual revolution critiqued and challenged dominant sexual norms of the time, although the extent to which it actually influenced the sexual lives and practices of our participants' generation is contested (e.g., Fileborn et al., 2015a). For instance, some participants in Fileborn et al.'s (2015a) Australian study commented that the sexual revolution had impacted more significantly on their children's sexual lives than their own, and that their own sexual practices had continued to be shaped by conservative sexual norms. The Baby Boomers are likewise often credited with challenging dominant norms around ageing "appropriately," and with refusing to perform "older age" in the same way as their parents, particularly when it comes to sex (Fileborn et al., 2015a). Again, while there is likely to be considerable variation in the ways that Baby Boomers are actually approaching older age, it is important to situate the findings of our research within particular historical and contemporary contexts. --- Method --- Participants Fifty-three semi-structured individual interviews were conducted with Australian women (n = 23, 43.4%) and men (n = 30, 56.6%) aged 60 years and older from August 2015 to January 2016. Two female participants were aged in their mid-to-late 50s; these women were included in the study due to difficulties recruiting women for the interviews. Interview participants were recruited through the online survey conducted in phase one of the Sex, Age & Me study, which had attracted 2,137 participants from all major areas of Australia.
Survey participants were recruited through a range of avenues, including an article published in The Conversation by two of the authors and subsequent media attention, age-targeted Facebook advertisements, and through local and national ageing organisations, local governments, senior citizen clubs, and sexual health clinics. The survey sample was a convenience sample; however, we were able to target recruitment efforts towards specific key subgroups, and the sample was diverse, including participants from all major sociodemographic backgrounds and from all states and territories of Australia. Survey participants who were interested in taking part in a one-on-one interview were invited to provide their name and a contact email (these details were not stored with their survey responses). A total of 517 individuals expressed interest in taking part in an interview. Every third person who expressed interest was contacted, resulting in 175 individuals being contacted by email and provided with a participant information statement that explained the purpose of the study (to examine the sexual health, relationships, dating and sexual practices of older people, and knowledge of STIs), the general topics the interview would cover, what participation would involve, and the potential risks of taking part. These individuals were asked to contact the interviewer (Author 1) if they would like to participate. Of these 175 individuals contacted, 53 individuals from across Australia responded and agreed to take part. We did not recruit any more participants as data saturation was reached. An overview of the interview participants is provided in Table 1. [Table 1: Sample profile of Sex, Age and Me interview participants (n = 53)] Measures. The interview schedule focused on participants' understandings of sex and sexual satisfaction, the importance of sex and sexual satisfaction, their understandings and use of safer sex, their help-seeking practices, and background demographic information. As the interviews took a semi-structured approach, additional lines of questioning were pursued based upon the unique issues raised by each participant; however, the relevant questions from our interview schedule are included in Table 2. Procedure. Interviews were conducted by phone (n = 41), Skype (n = 10), or face-to-face (n = 2) depending on the participant's preference and geographical location. While conducting interviews via Skype is a relatively novel approach, research to date suggests that conducting interviews in this way (and via phone) does not negatively impact upon data quality, and in some contexts may even enhance it (Hanna, 2012; Holt, 2010; Sturges & Hanrahan, 2004). The interviews took 30-60 minutes to complete, were audio-recorded with the participant's consent, and were transcribed by a professional service. The transcripts were de-identified, and participants were assigned pseudonyms. Ethics approval was received from the La Trobe University Human Research Ethics Committee prior to the commencement of the research. [Table 2: Interview questions on safer sex] Analysis. The qualitative data were analysed using the software package NVivo, and followed a thematic analysis procedure outlined by Ezzy (2002) and Braun and Clarke (2006). The first-named author conducted the primary analysis. This process involved an initial close reading and preliminary coding of the transcripts.
Notes were made identifying emerging themes, using the interview questions and core study aims (e.g., discourses on sex and relationships, understandings of safer sex) as initial code categories (i.e., a mix of inductive and deductive coding was used). In vivo codes were also identified throughout this process based on emergent themes and patterns within the data. This process was then repeated in NVivo, with the data sorted into code and sub-code categories. Particular attention was paid to the recurrent themes and patterns in the data, but also to cases that contradicted, complicated, or otherwise sat outside of the dominant thematic categories. This enabled us to account for the complexity and nuance in older people's experiences. A random sample of 10 interview transcripts was independently coded by the fifth-named author (WH) to ensure the validity of the coding, with both coders agreeing on the dominant thematic categories. --- Results --- What is safer sex? Participants were asked about their understandings and definitions of the term "safer sex," and the types of safer sex they used. Seven main themes were identified: using condoms, preventing STI transmission, discussing STI history, STI testing, monogamy, avoiding certain sexual practices, and self-care. Some participants indicated that they did not have safer sex, and we examine their reasons for this briefly. Many participants offered complex and multi-faceted definitions and practices of safer sex, and their practices tended to evolve over the course of a relationship, although there was variation between participants in this regard. For many participants, "safer sex" referred predominantly to condom use, and the two terms were used synonymously at times. The issue of trust often permeated these practices. --- Using condoms. Condoms were by far the most common element of participants' discussions of what "safer sex" is. Given the centrality of condom use in STI prevention and sexual health campaigns, this is largely unsurprising. For example, Karen (64 yrs, heterosexual, single) said that condoms were "primarily what I think of when I think of safe sex." While condoms are promoted as a key safer sex strategy, they are not an infallible method, particularly when used incorrectly, and they do not protect against all STIs. Only a small number of participants acknowledged the limitations of condom use as a safer sex strategy. Kane (63 yrs, heterosexual, in a relationship) noted that "condoms are ineffective against some kinds of infections," such as crabs (pubic lice), although it is notable that Kane learnt this only after embarking on some pre-interview research on Wikipedia "about STIs just in case you asked me." Another participant, Tim (62 yrs, gay, in a relationship), viewed condom use as one component of safer sex strategies. Tim offered a comprehensive and sophisticated definition of safer sex, saying "safer sex is lower risk activities...using condoms, minimising exchange of bodily fluids and skin contact." Tim also believed that as a gay male he had been exposed to considerably more public health campaigns and education on sexual health than heterosexual people in his age group would have been, and this likely accounts for his knowledge of safer sex. Tim was particularly concerned about rising rates of syphilis infection within the gay community, and commented that "condoms can reduce the risk but...you can get syphilitic sores in the mouth or elsewhere in the body."
This suggests that condoms may be seen as a safer sex strategy for certain types of sexual practices, with Tim's comments implying that condoms are not used for oral sex. As we discuss below, having a strong understanding of what constitutes safer sex did not always follow through to participants' use of safer sex. Both Tim and Kane acknowledged the limitations of all safer sex strategies, with Tim noting that these practices lower, rather than erase, the probability of STI transmission. Condom use was strongly influenced by relationship context. Participants commonly discussed condoms as something that they used in new or casual sexual relationships. Gwen (65 yrs, heterosexual, single) said that she used "the old fashioned condom, particularly with anyone new." However, if these encounters progressed to a longer-term relationship Gwen would say to her partner "well let's go to the STD clinic and then we don't have to use condoms anymore, if we're both clear." A number of participants also discussed being strict with condom use with new sexual partners after either contracting an STI or being exposed to one in the past. For example, Martha (61 yrs, heterosexual, married) had a rule of "no condom, no sex" after she contracted genital warts from her first husband. Likewise, Rachel (64 yrs, heterosexual, in a relationship) insisted on using condoms with new partners after being exposed to hepatitis C, and only ceased using condoms with her current partner on the provision that they both have regular sexual health screenings. While participants such as Gwen and Rachel only phased out condoms after having STI tests, other participants viewed the use of condoms earlier in a relationship as "going through the motions." For instance, Beverly (66 yrs, heterosexual, single) described how she had new sexual partners use condoms early in their relationship: But it was more just like a perfunctory thing...because you know they weren't going to use condoms the whole time and so it was just in the beginning until I knew that I wanted to stay with them and then it was okay for them to stop using condoms. For Beverly, condom use was only seen as necessary while the relationship was in its formative stages. Progression to a more "serious" relationship rendered the use of condoms unnecessary; however, this decision was made in the absence of any STI testing or further discussion of sexual health. The cessation of condom use either with or without STI testing once a relationship became established appeared to be a common practice amongst our participants. Preventing STI transmission. Some participants defined safer sex as being more generally about STI prevention. While condoms were often an important part of this, these participants tended to focus more strongly on the prevention of disease transmission, rather than the particular strategies that might be used to prevent this. One participant, Zane (80 yrs, bisexual, married/open relationship), defined safer sex as "preventing somebody else or any two people passing on something that they've acquired God knows where, to another partner." Another participant, Amelia (73 yrs, heterosexual, in a relationship), commented "safe sex these days is more about not getting STDs than anything else." Amelia's remarks suggest that meanings of safer sex shift temporally. Indeed, many participants commented that when they were younger the concept of "safer sex" generally referred to pregnancy, rather than STI, prevention. 
For heterosexual participants who viewed safer sex as predominantly related to pregnancy prevention, this could render safer sex an irrelevant concept once they (or their partner) were no longer able to become pregnant. Discussing STI history. Talking to a sexual partner about their STI and/or sexual history was another common component of safer sex. For some participants, this meant having an explicit conversation about their current STI status. For instance, Marty (77 yrs, heterosexual, in a relationship) said "if I had a conversation with somebody and was assured that they didn't have any sexually related diseases, then I'd probably feel fairly confident." For some participants, discussions about sexual health with their partner formed a key aspect of safer sex. As highlighted above, this could involve talking about when they would cease using condoms in a relationship, and arranging for STI tests prior to this. Some participants utilised discussions with partners (or potential partners) as a way to determine whether condoms or other safer sex measures were necessary. Rather than involving explicit discussions of STI testing and sexual health history, these conversations provided opportunities to make a series of judgements about a partner's character and the perceived likelihood that they would have an STI. For example, Ivy (62 yrs, heterosexual, single) commented that "it's a whole new world compared to when I was young," and that because of this she always raised the issue of safer sex with new partners. Discussing sexual health as a safer sex practice was often based on the premise that participants trusted a sexual partner to tell them if they had an STI, or trusted them not to have an STI. Trusting a partner's response appeared to remove the perceived need to use other types of safer sex such as condoms. As Kane said, "my preference is not to use a condom and if I'm attracted to a woman my inclination is to trust her, and one of the things I trust her to do is not to give me an STI." However, another participant, Dani (71 yrs, heterosexual, in a relationship), highlighted the limitations of trust as a safer sex practice in this regard, saying, "they could even say they've had a sexual test and be lying about it, couldn't you? Unless you saw the piece of paper. Yeah, I think I would be wanting to use condoms." STI testing. STI testing was mentioned relatively infrequently by participants in their definitions of safer sex. Shane (72 yrs, heterosexual, married) said that he would want to know that a new sexual partner "had their sexual health checked and had the tests [to be]...reassured that they didn't have any sexual disease." However, Shane qualified this by suggesting that he would be more concerned if the new sexual partner was male, or if they had not come from a long-term monogamous relationship. Again, this suggests that safer sex practices are seen as context dependent, and as less relevant to those involved in monogamous heterosexual relationships. Although only a small number of participants discussed STI tests in their definitions of safer sex, many more indicated that they had used STI testing as part of their safer sex practices, as the preceding discussion has illustrated. Some participants, predominantly women, reported that they insisted on their new partners taking STI tests before having unprotected sex (i.e., without a condom).
For example, Tina (60 yrs, heterosexual, married) told her now-husband, "either you're going to use condoms or we are all going to have the full suite of tests beforehand. He opted for the full suite of tests...we both had every test that you could possibly have." Wilma (61 yrs, heterosexual, widow) decided to have an STI test after being involved in a relationship with a man who she "wasn't totally trusting," although they had consistently used condoms. However, her doctor was dismissive of the need for her to be tested, saying he was sure Wilma would be fine. The tests only proceeded because of Wilma's insistence that "I really need to have one." A small number of participants also discussed using blood donations as a proxy for STI tests. For example, Kane (63 yrs, heterosexual, in a relationship) said that when he was donating blood regularly "I was being tested...every fortnight, so I was pretty sure that I was clear." While blood donations in Australia are screened for blood-borne viruses, they are not screened for all STIs, making this approach to testing limited and risky. Another participant, Aaron (65 yrs, heterosexual, single), said that he also gives himself "a check regularly as well, so I'm modern in that thinking." Aaron's comments imply that, for some older people, STI tests may be viewed as irrelevant or only of concern to young people. Gwen (65 yrs, heterosexual, single) saw the process of having an STI test and revealing the results to a new partner as developing "a whole higher level of trust between you...it actually brings you closer together I think." In this way, STI testing can be used as a mechanism for producing trust in a new relationship. Given the centrality of trust in safer sex, this has important implications for the framing of sexual health campaigns targeted towards older people. Monogamy. Monogamy was often used as safer sex, both within the context of long-term monogamous relationships, and for those who were entering into new relationships with someone who was previously in a monogamous relationship. For example, Xavier (65 yrs, heterosexual, married) said that safer sex was not important in his relationship as he had been with his wife for 42 years, and "safe sex is something you do with people you don't know...If we had any STDs we would've known by now." Another participant, Carl (62 yrs, heterosexual), was involved in three simultaneous, "monogamous" relationships, which he believed protected him from STIs, as he assumed his three partners did not have other partners. Others were more cautious. For example, Leila (61 yrs, heterosexual, married) said that while "you can relax a little bit" in a long-term relationship, she would "still be very careful...you really never know someone, you just don't." For those entering into new relationships, serial monogamy (or a relatively "inactive" sexual life) was seen as being protective against STIs. For instance, Dani (71 yrs, heterosexual, in a relationship) said that she did not have an STI test before having unprotected sex with her partner because "he wasn't having much sex, I don't think." Likewise, Oliver (66 yrs, heterosexual, friends with benefits relationship) said that he "didn't even think about" the issue of safer sex with his partner, because she had not been in a sexual relationship for a very long time.
However, monogamy does not always offer protection against STIs, as Elli (59 yrs, bisexual, single) discovered when she contracted herpes after having unprotected sex with someone who had just left a 30-year monogamous relationship. Sexual practices. For a minority of participants, limiting their sexual practices to activities they viewed as lower risk was an important safer sex strategy. Notably, this strategy was mentioned by two male participants who had sex with men, who both discussed engaging in practices that presented a lower risk of HIV transmission. For example, Fred (60 yrs, bisexual, single/casual sexual relationships) did not have anal intercourse with his regular male sex partner, and said, "we don't do anything that is really hazardous in terms of HIV," though some of his sexual practices may expose him to other STIs. Likewise, Tim (62 yrs, gay, in a relationship) said that for him safer sex might involve "lower risk" activities such as "kissing, mutual masturbation, digital stimulation and masturbation, anything that's essentially non-penetrative." Other participants indicated that they would simply not have sex with someone if they believed they might have an STI. As Dylan (65 yrs, heterosexual, long-distance relationship) noted, discussing the fallibility of condoms, "the only perfect one is to not do it, so if I'm worried I'll leave." Not practicing safer sex. Finally, a few participants indicated that they did not have safer sex in their sexual relationships. For example, Beverly (66 yrs, heterosexual, single), who was casually dating, said: I pride myself in looking after myself my mental health and my physical health but when it comes to sexual health you know I've been a bit irresponsible really and it's hard for me to sort of own up to that. Likewise, Carl (62 yrs, heterosexual, multiple relationships), who had multiple, simultaneous "monogamous" relationships, said, "no way, I don't use condoms," while Kane (63 yrs, heterosexual, in a relationship) reported that "post-menopausal women are awfully cavalier" about condom use, so he had rarely used condoms throughout his multiple sexual relationships. Self-care and well-being. Some participants provided definitions of "safer sex" that extended beyond the prevention of STI transmission to include emotional, psychological, and physical well-being and safety in an intimate relationship. This type of definition was well encapsulated in Rachel's (64 yrs, heterosexual, in a relationship) comment that safer sex is: About knowing yourself really well, and understanding all the emotional aspects around sex...understanding...the brain chemistry behind attachment, behind sexual attraction, behind being sexually active...having an understanding of how your thinking works, being a bit mindful about your thinking. Another participant, Fred (60 yrs, bisexual, single/casual sexual relationships), highlighted an apparent paradox in the relationship between safer sex, caring for one's partner, and the role of trust and stigma relating to STIs. Fred noted that suggesting to a partner that they, for example, use a condom "has two contradictory effects. One is, 'I'm trying to look after you'. It's a positive message to the other person...But the other thing is 'I don't trust you and you shouldn't trust me'." This suggests that the emphasis on "trust" between sexual or romantic partners has the potential to hinder engagement in safer sex and self-care practices.
--- Importance of safer sex Our discussion thus far has considered how older adults understand and define the concept of "safer sex," and the safer sex strategies they used. We move on now to consider how important safer sex was to participants. The importance of safer sex seemed to be closely connected with relationship context and trust, perceived risk levels, and concern for personal and public health. These factors often co-informed one another, and were not mutually exclusive. Concern for personal health. For some, safer sex was important due to a concern for their personal health and a desire to avoid any unpleasant symptoms. For example, Sally (71 yrs, heterosexual, widow), who had experienced extensive health problems relating to her reproductive system, said safer sex was highly important to her as "I don't need to get infected with anything, I've had enough problems in that area." A number of individuals who worked in health care settings indicated that safer sex was important to them after being exposed to the early stages of the HIV/AIDS epidemic through health promotion strategies, being employed in the healthcare sector, or having friends or family members diagnosed with HIV/AIDS. Igor (78 yrs, heterosexual, married) previously worked in an HIV clinic and as a result was "determined that I was never going to die of HIV, nor was I going to impose it on somebody else." For others, avoiding STIs was a matter of "common sense." Juliet (69 yrs, heterosexual, in a relationship) said that although she did not view STIs as shameful, "if it's avoidable, it's just the most sensible thing." Others saw the prevention of STIs as a matter of personal responsibility and commitment to public health. For instance, Norman (69 yrs, heterosexual, married) said "it's obviously important to maintain a healthy population and not to spread disease by sexual means or any other if you can help it."
of safer sex was also linked to the stigma attached to STIs and having multiple sexual partners, and the feelings of shame this engendered. Safer sex was important to Ivy (62 yrs, heterosexual, single) because she believed that having an STI "at this age...would ruin any future dating life." However, Ivy did not distinguish between different types of STIs and it was therefore unclear whether she was referring to treatable, non-treatable STIs or both. Stigma played a somewhat paradoxical role here: it simultaneously increased the perceived importance of safer sex, while also contributing towards a culture in which having an STI is highly shameful and difficult to discuss due to a fear of being ostracised or rejected as a sexual partner. It is also apparent that for Ivy, the stigma or shame associated with having an STI would be further compounded by her age ("at my age"). Safer sex as less relevant in later life. A minority of participants reported becoming more pragmatic about safer sex in later life. For example, Marty (77 yrs, heterosexual, in a relationship) said he was less concerned about contracting STIs compared to when he was younger as he took the view that: If I did get an STI I'd probably be able to get it cured fairly easily, and maybe it doesn't matter so much, and maybe even HIV would be less of a threat in that I don't have such a long life ahead that I'd have to live with it. Marty's position in the life-course clearly influenced his views about sexual risk taking and living with disease. Others reported that safer sex was relatively unimportant to them because they did not think it related to older people. Amelia (73 yrs, heterosexual, in a relationship), for example, thought that safer sex "possibly wouldn't even enter most people's mind to even do," as most of her generation grew up in the pre-AIDS era where "safer sex" related primarily to pregnancy prevention. As a result, Amelia said that even when she was exposed to safer sex messages "you sort of think, well it doesn't apply to me; that applies to young people." Likewise, Karen (64 yrs, heterosexual, single) commented that many people in her age group would hold the view that "they're in the safe category, that STDs is only something that [happens to] younger people who have...more than one partner," as a result of social norms when she was growing up that STIs were only an issue for sexually "promiscuous," "bad," or "dirty" people. Relationship context. The perceived importance of safer sex was also related to the relationship context. For instance, Amelia commented, "if you're in a more or less steady relationship and you trust the person you're with, it's not so important...it depends a lot on the relationship, what's safe and what's not." Likewise, many participants commented that safer sex would be important if they were to start a new relationship or dating casually should their current relationship end. Oliver (66 yrs, heterosexual, friends with benefits relationship) saw casual dating as being a particularly "high risk" time where safer sex would be more important, "while you're trying to find a more stable place to express your sexuality." Interestingly, Elijah (63 yrs, heterosexual, single), who was a long-term client of a sex worker, also viewed trust and relationship length as essential to the importance he placed on using condoms. For instance, if a sex worker offered to have unprotected sex at an early encounter "well, obviously she is doing that with everyone" making it a higher-risk decision. 
In contrast, Elijah said, "if it happens over a relationship period...you develop a trust." STI risk. The perceived risk or likelihood of contracting an STI also influenced the level of importance some participants placed on having safer sex. As noted above, many participants viewed monogamous, long-term relationships as "low risk", and in many respects this is a fair assessment, given that older people are indeed less likely to have an STI in comparison to their younger counterparts. Likewise, other participants made judgements about the perceived likelihood a partner had an STI based on their number of sexual partners or social standing. For some participants, the perceived risk of contracting an STI was deemed to be low based on their past experiences. For example, Vaughn (71 yrs, heterosexual, in a relationship) reflected on how when he was young there was a "plague" of gonorrhoea. Vaughn said that "at the time I was having about 10 different women in a month...and I only caught gonorrhoea once...and I went through hundreds of people, literally." For another participant, Fred (60 yrs, bisexual, single/casual sexual relationships), the risks presented by unprotected sex were also a component of sexual pleasure and excitement. Fred admitted that while he had "taken more risks than any rational person would," these risks were "part of the 'fun at the fair.' And when you get away with it and then you go 'wow! That was a rush'." Dylan (65 yrs, heterosexual, long-distance relationship) believed that "the unsafe sex thing is a beat up in many of the same ways we beat up other safety things," and argued that the risks of unsafe sex were relatively trivial and easily addressed through medical treatment. Because of this, Dylan believed that safer sex was largely unnecessary. --- Barriers to safer sex Embarrassment. For some participants, negotiating safer sex with a partner was viewed as an embarrassing endeavour for a number of distinct reasons. Elli (59 yrs, bisexual, single), who had herpes, said that she felt daunted at the prospect of having to raise the issue of safer sex with any new sexual partners for the first time in her life. For Elli, this was daunting because of "the interference with spontaneity and just the embarrassment of having to tell somebody that I'm carrying the herpes virus, which just feels completely bizarre in terms of the amount of sex I've had." This embarrassment was linked to the stigma associated with having an STI, as well as the implications Elli believed this would have for her sexual reputation. Embarrassment about using safer sex was also linked to the fact that for many older adults, safer sex has not been a core part of their sexual repertoire. This was expressed by Jack (64 yrs, heterosexual, married), who said "I think it may be a little more confronting and embarrassing for older people...young people would probably...do it as a normal course of events." Vicki (73 yrs, heterosexual, single) believed that many older men were not knowledgeable about condom use because they had never (or rarely) had to use condoms growing up.
Vicki commented that it "takes a really confident man to say 'I don't really know how to do this,' especially in bed," suggesting that embarrassment about ineffective condom skills may form a barrier to some older men having safer sex. A number of participants commented that older adults were still influenced by the social norms and taboos surrounding sex when they were growing up, where "frank and fearless communication wasn't a big part of it" (Marty, 77 yrs, heterosexual, in a relationship). The lingering effects of these attitudes made discussing safer sex challenging for some older adults. For instance, Rachel (64 yrs, heterosexual, in a relationship) commented that norms around sexual "promiscuity" meant that for some older women admitting to being sexually active by, for example, requesting an STI test could be "deeply humiliating." However, some participants challenged the notion that embarrassment about safer sex was age-specific. Instead, embarrassment about sex was viewed as related to individual proclivity or personality traits, but, as Leila (61 yrs, heterosexual, married) argued, "that can apply at any age." Erectile difficulties. Erectile difficulties were a significant barrier to many men in using condoms as a form of safer sex. Participants with erectile difficulties frequently commented that using a condom would cause them to lose their erection, or they were unable to successfully put a condom on due to an insufficient erection. This suggests that safer sex education for older adults must extend beyond simply encouraging condom use, as for many this was simply not an acceptable avenue of protection. This also points to the importance of decoupling condom use from safer sex, given that the two are often treated as synonymous. Such thinking can limit the identification of alternatives to condom use that may reduce the risk of STI transmission. If safer sex is linked solely to condom use, then in the event that condoms can no longer be used, having safer sex becomes impossible. Lack of skills, experience, and safer sex culture. As many within the current older cohort did not receive comprehensive sexuality education when growing up, a lack of knowledge regarding STIs and safer sex practices was raised by participants as a major barrier to having safer sex. This was particularly the case for those who had been in long-term, monogamous relationships who had had no perceived need for safer sex other than to prevent pregnancy. For example, Elli (59 yrs, bisexual, single) said that safer sex is "just not part of the frame of reference with a lot of people over 55." Similarly, Edwin (66 yrs, heterosexual, married) commented that "our age group aren't equipped, we don't have the culture...for dealing" with safer sex. As a result, some older people may be lacking the knowledge, skills, and awareness to have safer sex. The assumption that "you know everything because you've reached this age" (Wilma, 61 yrs, heterosexual, widow) or that you should "know better" as an adult could also function as a major barrier to seeking out information on safer sex. Wilma commented that some people may "feel humiliated and they don't want to ask those questions of the doctor" due to the perception that they should already know about safer sex. This assumption and stigma around a lack of knowledge were actively perpetuated by some participants.
For example, Dan (63 yrs, heterosexual, married) said that "by the time you get to our age you've been around the block once or twice so you'd be pretty stupid if you didn't know what it was all about." Another participant Gwen (65 yrs, heterosexual, single) believed that while older people did know about safer sex and had learnt about it when they were young, this knowledge was not being reinforced as they got older. Certainly, Gwen's comments reflect current evidence that safer sex education is targeted almost exclusively towards younger people (Kirkman et al., 2013). Stigma. The continued stigma surrounding STIs figured as a barrier for some participants in using or negotiating safer sex. While stigma around STIs is also an issue for young people this may be heightened for older people given the conservative norms governing sex when they were growing up. A number of women recounted stories where a male partner had been "insulted" after they asked them about their STI history. Vicki (73 yrs, heterosexual, single) believed this was because "in the old days...it was prostitutes and...loose women" who used condoms. Indeed, Rachel (64 yrs, heterosexual, in a relationship) shared an experience of a sexual partner refusing to have sex with her after she asked him to use a condom, saying to her "what sort of woman carries a condom...obviously you sleep around with everyone." Fred (60 yrs, bisexual, single/casual sexual partners) also indicated that this association could make discussing condom use "awkward" because it was akin to saying "'well, you must be promiscuous,' and that's not something most women want to think about themselves." It is notable that it was only female participants who reported feeling judged, and only women who were seen to be viewed negatively for raising the issue of safer sex (see also Dalrymple et al., 2016). This suggests that the barriers to using safer sex in later life operate in highly gendered ways. Reduced pleasure. As has been well documented in the literature on safer sex (e.g., Crosby, Yarber, Sanders & Graham, 2005;Higgins & Wang, 2015), the belief or experience that condoms reduce sexual pleasure was a disincentive to using condoms. Jack (64 yrs, heterosexual, married), for example, said that he "enjoy[s] sex more without a condom." Gwen (65 yrs, heterosexual, single) commented that it could be difficult to negotiate condom use with men who believed that condoms decrease or remove their sexual pleasure, "the usual classic complaint from men." While this is a common barrier to using condoms across all age groups, some male participants indicated that the impact on sexual pleasure was heightened in older age. For instance, Vaughn (71 yrs, heterosexual, in a relationship) said that "we're certainly not as sensitive as we were, so wearing a condom tends to make things very insensitive," and this could make it difficult to achieve orgasm. Vicki (73 yrs, heterosexual, single) believed that older men were "remembering using the old type of condoms, which...were thicker." As a result, older men's experiences of using condoms was potentially "more unpleasant than it needs to be now," suggesting that overcoming past experiences or assumptions in condom design may be needed to increase willingness to use condoms. For women who experienced vaginal dryness after menopause or due to various health conditions, condom use could be painful, although younger women have also reported experiencing vaginal irritation as a result of condom use (Crosby et al., 2005). 
While Sally (71 yrs, heterosexual, widow) acknowledged that use of lubrication could help with this, she doubted "whether I'd find anybody that would be willing to go through all the preparations necessary...it wouldn't be spontaneous." This suggests that while the physiological issue of vaginal dryness can make condom use difficult, beliefs around how sex "should" occur (in this case, as a spontaneous, "natural" process without interruption or the use of sexual aids) also act as a barrier to engaging in practices (such as using lubricant) that would facilitate condom use (see also Diekman, McDonald & Gardner, 2000). --- Discussion While participants in this study discussed a broad range of safer sex practices, there was a strong emphasis on the use of condoms in comparison to other forms of safer sex (such as STI testing or engaging in lower risk, non-penetrative sex), although data from the Second Australian Study of Health and Relationships (ASHR2) suggests that condoms are not commonly used by older Australians (de Visser et al., 2014). Likewise, while practices such as discussing sexual health and history with a partner were raised, this was often presented as a strategy for making a value judgement on the perceived likelihood that a partner would have an STI. This echoes the findings of Hillier, Harrison and Warr's (1998) earlier research with Australian high school students, who likewise reported that condom use was virtually synonymous with safer sex, while trusting a partner and informally discussing sexual history were key safer sex strategies. There was great variation regarding the extent to which safer sex was important to participants, and this was strongly mediated by relationship context. For many, safer sex was seen as relevant to new, casual relationships, and in contexts where a sexual partner was not "trusted," extending the findings of Dalrymple et al.'s (2016) research with late middle-aged adults in the U.K. While the overall themes identified here are in many ways similar to studies conducted with younger age groups, the context and the ways in which these themes play out in the lives of older people are distinct and shaped by the interplay of ageism, cohort norms regarding sex, and more general stigma around STIs and sex. When it came to having safer sex, there was again much variation. While some participants placed great importance on using condoms and having STI tests with new partners, for others having safer sex was often context dependent and based upon assumptions about their partner's sexual health status. Trust was fundamental in shaping safer sex, with condoms or STI tests seen as unnecessary with a trusted partner. This echoes the findings of research undertaken with younger samples (e.g., Crosby et al., 2013; Hillier et al., 1998). For example, Crosby et al. (2013) reported that women in their sample aged 25 and older were more likely to believe that condom use signified a lack of trust in one's partner compared to their younger counterparts. Notably, while many of our participants were in what might be considered "low risk" (long-term, monogamous) relationships and were unlikely to contract an STI, even those engaging in comparatively "higher risk" sexual relationships did not necessarily view safer sex as relevant to them. Implicit in these attitudes was the assumption that STIs are visible to the naked eye, and that you can tell if someone has an STI.
This reflects the findings of research with younger cohorts (e.g., Barth, Cook, Downs, Switzer & Fischhoff, 2002). The absence of sex education and a perceived lack of widespread condom use while growing up also meant that some older people may lack the knowledge, skills and cultural/social norms to have safer sex, and this is more specific to older adults. It was apparent that norms and beliefs about safer sex from when participants were growing up continued to shape the understandings and practices of at least some older people. Some participants implied they were able to predict whether a partner had an STI based on their character and/or perceived number of sexual partners, and a number of female participants had experienced hostile responses from male partners after asking them to use condoms. The embarrassment and stigma associated with STIs and sex continued to act as a major barrier to discussing safer sex with a partner or healthcare provider, and this reflects the findings of research with younger cohorts (e.g., Barth et al., 2002; Hood & Friedman, 2011), though the impact of stigma plays out in different ways for older people. For instance, the stigma of having an STI may be compounded by the widespread cultural assumption that older people do not, or should not, have sex. Our findings have important implications for policy, practice, and sexual health promotion initiatives aimed at reducing STIs amongst older cohorts. There is a clear need to challenge gendered norms and stigma about safer sex and "promiscuity" held by some members of older cohorts. The belief that only "promiscuous" or "dirty" people have safer sex functioned as a major barrier to having safer sex (and particularly condom use), hindered the ability to negotiate safer sex (particularly for older women), and meant that many older men and women did not view themselves or their partners as "at risk" of, or likely to have, an STI. In many respects, these findings are similar to those from research with younger adults (e.g., Barth et al., 2002; Hillier et al., 1998). Sexual health promotion strategies must clearly communicate that older people are sexually active and susceptible to STIs, that safer sex practices are relevant to older people, and that STIs are a normal (though not inevitable) aspect of sexual activity. Relatedly, campaigns must seek to disrupt dominant sexual scripts that hinder safer sex. The notion that sex should be "spontaneous" and "natural," without interruption or discussion of any kind, could act as a barrier to discussing or having safer sex (see also Diekman et al., 2000; Galligan & Terry, 1993 for similar findings with younger samples), not to mention discussion of other components of sexual health and wellbeing, such as consent and the negotiation of pleasure (Dune & Shuttleworth, 2009). Such actions may help to shift safer sex cultures amongst older cohorts in a way that facilitates the uptake of safer sex practices. Vitally, the promotion of safer sex must move beyond a sole focus on condom use to include a multi-faceted and holistic approach to sexual health promotion. For many individuals in this study, condom use was not appropriate due to erectile difficulties or other health issues.
This represents a unique challenge for promoting condom use amongst older age groups (Schick et al., 2010), although other studies have also indicated that erectile difficulties can influence correct condom use amongst younger men (Crosby, Sanders, Yarber, Graham & Dodge, 2002; Graham et al., 2006; Sanders, Hill, Crosby & Janssen, 2014). While awareness of condoms as a safer sex strategy was high, there was considerably less discussion on STI testing (see also Hillier et al., 1998), and this coheres with findings from ASHR2 suggesting that participants aged 60-69 were the least likely to have had an STI test in the past 12 months. Regular STI testing, particularly for those with new or multiple sexual partners, represents a more accessible form of safer sex for those who are unable to regularly use condoms. Public health campaigns targeted towards older people could also include guidance on successfully putting a condom on a semi-erect penis. Likewise, efforts to normalise the use of lubricant during penetrative sex may also be of benefit, particularly given that some participants viewed lubricant use as disrupting the "natural" flow of sex. It is also notable that, with the exception of two male participants who had sex with other men, participants did not discuss engaging in sexual practices that presented lower risk of disease transmission as a form of safer sex (see also Hillier et al., 1998). It is unclear whether participants did not recognise this as a safer sex strategy (an issue that could be addressed through public health and educational campaigns), or whether heterosexual participants adhered to the idea that penetrative, penis-in-vagina intercourse constitutes "real" sex. Recent qualitative Australian research illustrates that older women hold diverse views of what "counts" as sex, though some still privileged penetrative heterosexual intercourse as "real" sex (Fileborn et al., 2015a; Fileborn et al., 2015b). Participants in the present study held similarly diverse views of what sex "is." Nonetheless, adherence to the view that penetrative intercourse constitutes real sex may prevent older people from engaging in lower-risk sexual practices as a form of safer sex, and suggests a need to continue to challenge and disrupt social and cultural norms that privilege penetrative intercourse. Additionally, it is important to note that many participants' understandings of "safer sex" were, in some respects, quite narrow. While there was a strong focus on STI prevention, issues such as sexual consent, wellbeing, and ethics were raised by only a small number of participants, despite these being key components of the World Health Organization (WHO) definition of sexual health (WHO, 2006). Given that our participants grew up in a context of limited sexuality education, it is possible that this continues to shape their current understandings of safer sex, and this highlights the importance of situating safer sex within a life course perspective. Our findings suggest that sexual health campaigns for older people may also need to address broader issues such as those identified above, though this warrants further investigation. Some participants reported negative or dismissive experiences with healthcare providers after requesting an STI test. As previous research has illustrated, healthcare providers are often reluctant to address issues of sexual health with older patients (Gott et al., 2004; Kirkman et al., 2013).
Our findings indicate the need for training and education for healthcare providers regarding sexual health in later life. There is a clear role here for healthcare providers to initiate discussions with older patients regarding sexual health, and to be receptive to this issue when raised by patients. Educational and other efforts targeted towards older people may also benefit from taking into account the major barriers and facilitators to safer sex reported by participants. Trust was essential to participants' understandings of safer sex, and the importance and use of safer sex. Having trust typically meant that there was no perceived need to have safer sex. However, as one of our participants suggested, having STI tests and discussing safer sex could in fact build trust between partners. Reframing safer sex as being fundamentally about trust and trust building may encourage older people to have safer sex. It is also important to challenge the notion that monogamy necessarily offers protection against STIs, and to encourage older people to have an STI test or to use other forms of safer sex with all new partners. As concern for personal health and well-being facilitated the use of safer sex, this could also underpin educational campaigns. For example, concern for health could be utilised to encourage older people to have an STI test, though such campaigns should be targeted towards older people in "high risk" groups for STIs, given the generally low likelihood of older people contracting STIs overall. Given that some participants reported that they did not see information about safer sex as relevant to them (see also Dalrymple et al., 2016), it is important that safer sex campaigns or educational resources be clearly targeted towards older people, or at least be inclusive of older populations. Any targeted resources may need to cover the "basics" of condom use and other safer sex practices, and provide a discreet and non-judgemental source of information. Such resources should also cover issues specific to older cohorts. For instance, this may include information on how condom design has changed over time to enhance sexual pleasure. --- Limitations There were some limitations of this study. As a qualitative study we were concerned with generating an in-depth exploration of participants' understandings and practices, and the findings presented here are not generalisable. The participants were generally highly educated, articulate, comfortable discussing sex, and from an Anglo-Saxon cultural background. Future research is necessary to identify any differences in attitudes and practices in more diverse demographic groups. Likewise, the majority of our participants identified as heterosexual, and the experiences of sexuality and gender-diverse older people require further examination. While participants were asked to respond to open-ended questions about what safer sex "is," and the safer sex practices they engaged in, the broader project from which this data stems (and particularly the online survey component of this project) had a strong focus on STIs and STI prevention. It is possible that this shaped our participants' definitions of safer sex and the types of safer sex they discussed. It is notable, for example, that participant discussions of safer sex focused almost exclusively on STI prevention as opposed to more holistic definitions inclusive of issues such as sexual consent and sexual pleasure.
--- Conclusion Giving consideration to the sexual health of older people is becoming increasingly important, particularly with an ageing population where older people are remaining sexually active for longer and experiencing an increase in STI rates. This study, the first of its kind in Australia and one of only a handful internationally, has provided important insight into the complexities and nuances of older people's understandings of safer sex and their safer sex practices. Our findings point to a considerable degree of variation in practice and knowledge. Likewise, while there is some similarity in understandings and use of safer sex with younger age groups, our findings suggest that there are unique contextual factors and implications for older people. The continued influence of a range of myths and misconceptions about safer sex and STIs was also apparent. Importantly, these findings present valuable insight into the ways in which we may begin to initiate change to help improve and support the sexual health and well-being of older populations.
Rates of sexually transmitted infections (STIs) are increasing in older cohorts in Western countries such as Australia, the U.K. and the U.S., suggesting a need to examine the safer sex knowledge and practices of older people. This article presents findings from 53 qualitative interviews from the study "Sex, Age & Me: a National Study of Sex and Relationships Among Australians aged 60+." Participants were recruited through an online national survey. We consider how participants understood "safer sex," the importance of safer sex to them, the safer sex practices they used (and the contexts in which they used them), and the barriers to using safer sex. Older adults had diverse understandings, knowledge, and use of safer sex practices, although participants tended to focus most strongly on condom use. Having safer sex was strongly mediated by relationship context, trust, perceived risk of contracting an STI, concern for personal health, and stigma. Common barriers to safer sex included erectile difficulties, embarrassment, stigma, reduced pleasure, and the lack of a safer sex culture among older people. The data presented has important implications for sexual health policy, practice, and education and health promotion campaigns aimed at improving the sexual health and wellbeing of older cohorts.
Introduction The magnitude of human activities has pushed us into the epoch of the Anthropocene, where we risk crossing planetary boundaries that would cause catastrophic and irreversible environmental changes, with negative consequences for human well-being [1]. It is predicted that anthropogenic environmental pressures will intensify in the future, resulting in further environmental degradation, climate change, and pollution, and impacting on the ability of natural capital to provide ecosystem services [1][2][3]. Ecosystems and their services, or "nature's contributions to people (NCP)" [4], are essential to support human well-being and development [2]. It is understood that natural capital underpins social, human, and built capital, and the interaction between these various forms of capital will determine the levels of well-being that humans could achieve in a particular context through, for example, ecosystem services [5]. Ecosystems and people are interdependent and intertwined through the concept of social-ecological systems. Social-ecological systems research looks at the reciprocal interactions between people and nature at various temporal and spatial scales [6]. Knowledge of the social, ecological, and other components in a system, and of the use and benefit of ecosystem services, is needed in order to derive maximum benefit from interactions in a system. Social-ecological systems provide a basis for understanding the interlinked dynamics of environmental and societal change [6]. Since human activities are the major drivers in social-ecological systems, and can either diminish or enhance ecosystem services and well-being [7], societal change is essential to ensure ecosystem service protection and sustainability [8]. To foster societal change towards support for environmental management, we need an understanding of how biodiversity and ecosystem services are perceived by humans. Such perceptions would include the way in which humans observe, value, understand, and interpret biodiversity and ecosystem services [9]. Demand for ecosystem services increases with growing urban populations [10], particularly in cities of the global south, which face the added pressures of poverty and direct dependence on ecosystem services for the livelihoods and well-being of the poor [11,12]. Ecosystem services provide the foundation for economic opportunities to empower the disadvantaged [2]. The disruption of social-ecological linkages can have detrimental effects on communities, particularly when access to ecosystem services is denied [13], or when ecosystem disservices, such as floods or invasive species, are experienced. This raises the importance of understanding and strengthening social-ecological linkages, while ensuring that ecosystem services are managed appropriately, particularly in disadvantaged communities. Civic ecology initiatives, or "community-based conservation", aim to provide diverse environmental and socio-economic benefits through people-centred participatory approaches [14]. Civic ecology practices include environmental stewardship actions that enhance natural capital, ecosystem services, and human well-being, in social-ecological landscapes, such as cities [7]. While civic ecology practices are increasing and contributing to global sustainability initiatives, their contributions to ecosystem services are rarely measured [7].
In this study, we examined the understanding, use, and values of ecosystems and their services with regard to two low-income local communities, one peri-urban/rural and one urban, where some community members are implementing civic ecology initiatives. As a case study, we used the private sector-funded Wise Wayz Water Care (WWWC) programme, being implemented along the Golokodo and Mbokodweni Rivers, within Durban, South Africa (Figure 1). Using a mixed methods approach (household surveys, interviews, field observations, workshops), we investigated the following questions: (1) What are the values and perceptions held by the beneficiaries (people from the community working as part of the WWWC civic ecology programme), and the broader community, related to the WWWC civic ecology programme? (2) What are the various benefits of civic ecology practices to the social-ecological system of disadvantaged communities, particularly with respect to ecosystem services? (3) How do ecosystem service uses and values differ between the beneficiaries and the broader community? In answering these questions, we explored how increased knowledge of ecosystems through civic ecology practices in social-ecological systems contributes to the protection and increased use and benefit of ecosystem services, both for beneficiaries and other members of disadvantaged communities. --- Materials and Methods --- Study Area --- Socio-Economic Characteristics The WWWC work area, the study area (Figure 1), is situated in two low-income communities, Folweni and Ezimbokodweni, located in Durban, in the province of KwaZulu-Natal, South Africa. Both fall within the eThekwini Metro Municipal boundary. Folweni is more urban and is administered by eThekwini Municipality, while Ezimbokodweni is more peri-urban/rural and is jointly administered by eThekwini Municipality and the Ingonyama Trust Board (traditional authority of communally owned rural lands). The study area is characterised as one of the poorest in Durban, with low education, employment, and income levels. In Folweni, 17% have no source of income and 37% earn less than ZAR 1600 (USD 99.60 @ USD 1/ZAR 16.06) per month, 35% have secondary education, only 6% have higher education, 53% of households have piped water inside the dwelling, 42% have flush toilets connected to a sewer, and 47% of households are headed by females [15]. Similarly, in Ezimbokodweni, 20% have no source of income, a third of the population earn less than ZAR 1600 per month, 30% have completed secondary education, only 2.8% have higher education, 10.7% of households have piped water inside the dwelling, 4% have a flush toilet connected to a sewer, and 40% of households are headed by females [15]. Sewage infrastructure in the Folweni area is poorly maintained; most of Ezimbokodweni uses informal pit latrines and is not serviced by waterborne sewer systems, and sewage has been observed surcharging into watercourses in both areas [16]. A small number of households in Ezimbokodweni are located within the 1:100-year floodplain of the Mbokodweni River. Solid waste is a problem, and smaller streams have become blocked by solid waste, invasive alien plants, and illegal sand mining, resulting in stagnant water that exposes the community to various water-borne diseases [17].
Issues in the broader area, as noted in the Local Area Plan, include sanitation being a major problem (with failing and unhygienic ventilated improved pit latrines), lack of recreational facilities and meeting venues, lack of tertiary educational facilities, and poor or lacking housing facilities [18]. --- Bio-Physical Characteristics The climate of the study area is moderate; it is situated in a coastal climatic zone, with mean annual temperatures of between 18.5 and 22 °C and mean annual rainfall ranging between 820 and 1423 mm. The study site is traversed by the Mbokodweni and Golokodo rivers, which fall within the U60E quaternary catchment and the North Eastern Coastal Belt aquatic ecoregion [19]. Numerous wetlands and drainage lines are present along the rivers (Figure 1). River flows, widths, and depths vary across the study area, and between wet and dry seasons. Sites along the Golokodo River are up to 10 m wide and 1 m deep, and flows range from slow, to moderate, to fast. River substrates include sand and bedrock. Along the Mbokodweni River, widths and depths range from 3 to 20 m and 0.5 to 2 m, respectively, with moderate to fast flows. The dominant substrates are sand, bedrock, and cobble [17]. Results from biological monitoring of Durban's aquatic systems revealed that 71 of the 175 sites are considered to be in a poor state, and only 3 sites are in a near-natural state [20]. Impacts on rivers include illegal spills and discharges, solid waste dumping, sand mining, poor operation of wastewater treatment works, realignment of watercourses, flow reduction, removal of riparian flora, and infestation by invasive alien plants [20]. The rivers in the study area are similarly classified as being impacted by solid waste pollution, bank and channel modification, and invasive alien plant invasion [17,21]. All of the sites are found in the KwaZulu-Natal Coastal Belt vegetation type, within the Indian Ocean Coastal Belt Bioregion [22]. This vegetation type is classed as endangered. Vegetation of significance is situated on settled areas and along riverbanks, characterised by small valley forests and bushes. In the broader study area, vegetation included small patches of grasslands, many of which have been degraded due to settlement and subsistence farming activities [23]. The site is traversed by the Durban Metropolitan Open Space System (D'MOSS), and parts of the site are classified as Critical Biodiversity Areas [23]. D'MOSS is a formal municipal planning policy instrument that identifies a series of interconnected open spaces that incorporate areas of high biodiversity value and natural areas [20], with the purpose of protecting the globally significant biodiversity (located within the Maputo-Pondoland Biodiversity Hotspot) and ecosystem services within the city [24,25]. --- Case Study: Wise Wayz Water Care Programme The Wise Wayz Water Care (WWWC) programme commenced in 2016 and brought together community members from Folweni and Ezimbokodweni (the "beneficiaries"), who were previously working as separate volunteer groups, mainly performing litter removal along the Mbokodweni and Golokodo river systems. Under WWWC, the beneficiaries are working and learning together towards improving the socio-economic and environmental conditions of their communities through the implementation of various environmental management interventions. This work was stimulated by flooding that damaged houses in lower-lying areas during a heavy rainfall event in 2016.
The flooding was exacerbated by solid waste and alien vegetation in the river systems, which blocked flows and channels and worsened localised flooding. The beneficiaries (N = 130) include males (N = 41) and females (N = 87), with various levels of education, ranging from Grade 1 (lowest level of primary education) to Grade 12 (highest level of secondary education), with 1 person having tertiary education. The WWWC programme is managed by a non-profit organisation, i4WATER, through funding provided since 2016 by the African Explosives and Chemical Industry (AECI) Community Education and Development Trust; AECI operates in the Mbokodweni Catchment and is located in the Umbogintwini Industrial Complex (Figure 1). The objectives of the WWWC programme include improving the environmental health of the lower Mbokodweni Catchment (the study area) and supporting sustainable livelihoods of beneficiaries as well as the greater community through training and skills development, alongside small enterprise development. Beneficiary training included invasive alien plant (IAP) identification, removal, and control; poultry and vegetable production (fertilisation, disease, and pest control; irrigation, harvesting, and marketing); environmental and aquatic management and monitoring (e.g., use of water-related citizen science tools, i.e., miniSASS, clarity tube, Escherichia coli (E. coli) swab); health and safety training; and community education and engagement. The beneficiaries of the WWWC programme implemented six environmental management interventions within natural areas in and around Ezimbokodweni and Folweni, namely, (1) Solid waste management and removal: removal of waste from aquatic and terrestrial areas; (2) Recycling: waste collection and storage for recycling; (3) Invasive alien plant control: identification and control of invasive alien plants along rivers and streams; (4) Water quality monitoring: monthly biophysical monitoring of river water quality; (5) Community vegetable gardens: vegetable production (two gardens) using permaculture methods; (6) Community engagement: door-to-door community engagement, surveys, and knowledge sharing. Interventions were identified by beneficiaries in response to related challenges faced in the community, and were implemented with support from business funding, within the lower Mbokodweni catchment, at 20 sites, within Folweni (11) and Ezimbokodweni (9), along various rivers, tributaries, wetlands, and open areas (Figure 1). Interventions considered in this study were undertaken over a 3-year period from 2016 to 2018. Solid waste was removed from the rivers 4 days per week by 45 team members, who collected an average of 1.1 tons of solid waste per month. The recycling team collected and separated the recyclable waste from the collected solid waste, which amounted to approximately 0.48 tons of recyclable waste per month. The community engagement and education team, of 44 members, visited homes in their areas 3 times per week to discuss the various socio-economic and environmental issues that the community is facing. The team also provided information and education to the homes they visited on how to address some of the challenges. The invasive alien plant clearing teams worked along 6.8 km of rivers, as well as in wetlands, to remove invasive alien plants. The team cleared 40 ha using mechanical methods.
Up to 28 species categorised as invasive in South Africa were cleared, primarily Diplocyclos palmatus, Canna indica, Arundo donax, Lantana camara, Melia azedarach, Tithonia diversifolia, and Ricinus communis. The aquatic monitoring team conducted assessments at 22 sites on a monthly basis, analysed and interpreted the data collected, and used the findings to address the challenges undermining river health. In the 2 community vegetable gardens, 28 team members worked daily to plant a variety of vegetables and herbs, including spinach, tomatoes, carrots, cabbage, kale, beetroot, and lettuce. --- Identifying Values and Perceptions of the WWWC Programme 2.3.1. Focus Group Meetings, Workshops, and Interviews In order to obtain more details on the operational aspects of the interventions, and to ascertain personal perceptions of the programme, we conducted focus group meetings with the WWWC implementers, i4WATER, and 1 AECI representative, which involved open discussions of the WWWC programme. We also hosted 2 workshops with 20 and 60 WWWC beneficiaries. During the first workshop, beneficiaries were asked to participate in various individual and group activities in order to (1) identify the positive and negative events or aspects of the WWWC project; (2) identify strengths, weaknesses, opportunities, and threats related to the WWWC programme; and (3) note any changes in the community and biophysical environment that occurred due to the WWWC programme. Personal interviews were held with 9 beneficiaries and 1 coordinator from the programme funding institution in order to obtain greater insight into the WWWC programme, personal experiences, and the manner in which the programme had changed individuals' lives, including contributions to their livelihoods, sense of place, and health. --- Surveys We conducted surveys (N = 3) with beneficiary, community, and external stakeholders (including the WWWC funders, AECI, government stakeholders (eThekwini Municipality), and the South African National Biodiversity Institute (SANBI)) (Data S1), in order to identify individual understanding and perceptions of the WWWC programme and associated benefits to the community and beneficiaries, as well as the environment and ecosystem service (ES) use, and also to gather data on the social, ecological, and economic attributes of the study area [26]. These surveys also collected socio-economic and health data of participants. Open-ended questions were designed to extract perceptions of the value of the programme to the social-ecological system of the study area. The three surveys were (1) a beneficiary survey, (2) a community survey, and (3) a key stakeholder online survey. Beneficiary surveys were conducted in a workshop setting (N = 60), community surveys were conducted at random households along the Mbokodweni and Golokodo rivers (N = 60), and key stakeholder online surveys were conducted via Survey Monkey (N = 6). The beneficiary and community questionnaires were translated into IsiZulu, and participants were allowed to choose the language of their preference to complete the questionnaires. Informed consent to utilise the outcomes of the study for research purposes was obtained from all participants, as required by the ethical approval. Data collected via the surveys were analysed using the Statistical Package for the Social Sciences (SPSS) 25. This study is limited in that surveys were only conducted after interventions were implemented.
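The survey analysis described above was carried out in SPSS 25, and the raw data and variable names are not reproduced here. Purely as an illustration of the kind of descriptive tabulation such an analysis involves, the minimal Python sketch below uses hypothetical column names and invented example records to show how programme awareness, perceived changes, and river water use might be summarised and cross-tabulated; it is not the authors' code or data.

```python
# Illustrative only: the authors analysed their survey data in SPSS 25.
# All column names and records below are hypothetical examples.
import pandas as pd

# Hypothetical community-survey records (the real dataset is not public here).
community = pd.DataFrame({
    "noted_area_cleaner": [True, True, False, True, False],
    "n_programmes_known": [6, 6, 3, 6, 2],   # of the six WWWC intervention teams
    "river_water_use":    ["daily", "weekly", "seasonally", "monthly", "daily"],
})

# Share of respondents reporting each frequency of river water use.
use_freq = community["river_water_use"].value_counts(normalize=True) * 100

# Cross-tabulation analogous to checking whether respondents who noted the
# area being cleaner also knew about all six WWWC programmes.
cleaner_vs_awareness = pd.crosstab(
    community["noted_area_cleaner"],
    community["n_programmes_known"] == 6,
)

print(use_freq.round(1))
print(cleaner_vs_awareness)
```

The same descriptive statistics (frequencies and cross-tabulations of categorical responses) can be produced equivalently in SPSS; the sketch is intended only to make the structure of the analysis explicit.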
--- Site Visits The authors conducted site visits to Folweni, Ezimbokodweni, and selected WWWC work sites to identify the general living conditions of the community in the study areas (housing, water supply, waste management, etc.), and the biophysical condition of the areas where the WWWC interventions were implemented (wetlands and rivers, open spaces, etc.). Direct field observations were made, and photographs were taken for record purposes. We held on-site discussions with i4WATER and beneficiaries from each of the intervention teams. These visits were done to gain a deeper contextual understanding and gather firsthand data on the interventions and their impacts on site. --- Social-Ecological System Workshops with Beneficiaries In order to better understand the social-ecological system of the study area, we hosted the second workshop with WWWC beneficiaries (N = 60), who were randomly selected from the list of beneficiaries. We used A0 size maps as the focus of discussions, which showed the locations of WWWC work areas (WWWC programme boundary and locations of management intervention sites, e.g., water quality monitoring points, and solid waste removal sites). Maps were drawn using ArcGIS 10.4, showing the WWWC work sites relative to other landscape attributes and ecological habitats, namely, the D'MOSS, including wetlands, rivers, and vegetation habitats. Beneficiaries reflected on the maps and related their experiences in the study area. Key questions that were explored in the workshop related to existing or perceived understandings of (1) opportunities related to social activity, knowledge sharing, and natural resource use (e.g., water extraction, livestock grazing, and watering); (2) potential expansion of WWWC work areas; and (3) threats relating to health and safety, such as sources of pollution and illegal dumping of solid waste. --- Identifying Ecosystem Services Used and Valued Ecosystem services were identified from survey responses on the basis of the existing use or demand for that service. Surveys (as described above) were used to collect data on ecosystem service usage by (access), and values of, beneficiaries and community members. The ecosystem services included in the survey were (1) River water use: use of natural water from river or stream (e.g., for washing clothes or cars, or for general household use); (2) Natural material harvesting: gathering natural materials for various uses, e.g., medicinal plants or wood; (3) Subsistence use: direct use of natural resources to sustain life, e.g., food or water; (4) Agricultural use: crop or livestock production; (5) Cultural practices: use of natural areas for cultural practices or rituals; and (6) Recreation and leisure: use of natural areas for leisure or outdoor activities. --- Results --- Perceived Ecological, Health, Safety, and Socio-Economic Benefits from Civic Ecology Interventions Both the beneficiaries (from survey and workshops) and the broader community (from household surveys) reported positive changes in the community after civic ecology interventions had been implemented (Figure 2). These were in the observation that the area and stream were cleaner, but also indirect benefits such as improved education and less danger. Beneficiaries also identified the benefit of improved health, including having noticed a decrease in the number of mosquitos in the area due to the improvement in the river water flow. 
The benefit that was most noted by community participants and beneficiaries was that the area was cleaner after clearing solid waste pollution from the land and rivers. This work, coupled with the knowledge sharing on the dangers of littering and poor waste management by beneficiaries, has resulted in a reduction of dumping by residents. This cleanliness can be linked to a decrease in the risk of diseases associated with pollution, and reduction in risk of injury to humans and animals (e.g., reports that skin rashes no longer occurred after children played in the river, and a reduction in mosquitos), which are considered to be positive health outcomes [27]. From all the community respondents who reported to consume vegetables in the survey, more than half of the vegetables consumed were purchased from the WWWC, which shows that the programme provided a significant source of vegetables to the community. This has a positive impact on nutrition through facilitating improved access to a wider variety of fruit and vegetables, resulting in a more balanced diet, with positive effects on health and well-being [28]. WWWC vegetable irrigation was solely from river water. The community held knowledge of the different programmes being undertaken by the WWWC. Most of the community respondents heard about or interacted with the community engagement (88.2%), invasive alien plant (IAP) control (64.7%), solid waste removal and management (58.8%), vegetable gardening (54.9%), recycling (49%), and river water quality monitoring (23.5%) teams. All respondents who noted the area being cleaner also had knowledge of all the WWWC programmes, showing that community members could relate the work being done by beneficiaries to the positive changes taking place in their community. Comments made in the survey indicated that beneficiaries were appreciated by the community for the knowledge that they shared with respect to environmental education and management. Half of the external stakeholders, and over 40% of beneficiaries noted that the stream was cleaner after the programme was operational (Figure 2). Over 80% of stakeholders and one-third of beneficiaries noted that there was a decrease in invasive alien plants since the interventions were implemented. This was also visible from site observations (see Figure S1). Of the nine benefits beneficiaries experienced from working as part of the WWWC (survey) (Figure 3), more than 60% of beneficiaries experienced six or more benefits, with 96% of beneficiaries listing education on the environment as a benefit, followed by new business opportunities (76%), and increased water security (72%). The first formalised community-based small business was developed by some of the beneficiaries, Envirocare Management Systems (Pty) Ltd., providing prospects for income through invasive alien plant control and water quality monitoring services. External stakeholders similarly perceived the benefits to beneficiaries as high, with 83% noting increased education, 92% noting increased business opportunities, and 83% recognising personal development as benefits to beneficiaries (Figure 3). From the nine personal interviews that were conducted with WWWC beneficiaries, it was apparent that the WWWC programme had a positive impact on all nine individuals in terms of personal development through education and training, feelings of self-improvement, and increased hope for the future (see Data S2a,b). 
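As a rough cross-check of the figures reported above, the short calculation below converts the reported benefit percentages into approximate respondent counts, assuming that the full survey samples stated in the Methods (60 beneficiaries, 6 external stakeholders) answered each item; the actual denominators may have differed, so the counts are indicative only.

```python
# Back-of-envelope arithmetic: approximate respondent counts implied by the
# percentages reported above, assuming full samples of 60 beneficiaries and
# 6 external stakeholders answered each item (an assumption, not a given).
reported_pct = {
    "education on the environment": {"beneficiaries": 96, "stakeholders": 83},
    "new business opportunities":   {"beneficiaries": 76, "stakeholders": 92},
    "increased water security":     {"beneficiaries": 72, "stakeholders": None},
    "personal development":         {"beneficiaries": None, "stakeholders": 83},
}
sample_size = {"beneficiaries": 60, "stakeholders": 6}

for benefit, groups in reported_pct.items():
    for group, pct in groups.items():
        if pct is None:
            continue  # percentage not reported in the text for this group
        approx_count = round(pct / 100 * sample_size[group])
        print(f"{benefit} ({group}): {pct}% of {sample_size[group]} "
              f"is approximately {approx_count} respondents")
```

With only six external stakeholder respondents, a single response shifts the stakeholder percentages by roughly 17 percentage points, which is worth bearing in mind when comparing the two groups.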
WWWC also experienced some challenges related to cost recovery, entry requirements for training courses, and illegal dumping (see Data S2c). An aspect of success that served to encourage sustainable participation in civic ecology initiatives was the increased knowledge, education, and training, which resulted in new skills that benefitted beneficiaries and the broader community, e.g., transitioning from subsistence farmer to small-scale producer and undergoing first aid training (Data S2a). Such spin-off benefits to the broader community have strengthened social cohesion. --- Nature and Ecosystem Services Enhanced by Civic Ecology Interventions The natural areas that were enhanced by the interventions included terrestrial and aquatic habitats, e.g., wetlands, rivers/streams, riparian vegetation, and open space (natural areas zoned as public open space). The interventions made positive impacts on ecological areas, and were thus considered to have the potential to enhance ecosystem services. The habitats improved by the interventions are linked to the enhancement of numerous ecosystem services, including regulating services or Nature's Contributions to People (NCP), such as water purification, flood mitigation, biological regulation and/or disease control, and maintenance of biological diversity (gene pool protection) (previously considered a supporting service [2], but now captured under regulating NCP [4]); cultural or non-material NCP, such as aesthetic, recreational, cultural, and educational services; and provisioning services or material NCP, such as water supply, food, and harvested products [4,29]. People accessed ecosystem services for water, agricultural production, and harvesting of medicinal plants and wood (see Table S1), and increased their use of natural spaces for cultural and spiritual activities since these spaces had been cleaned by the beneficiaries, for example, using the wetland in Ezimbokodweni for cultural rituals (Umemelo: a Zulu traditional coming-of-age ceremony for women) (see Figure S1). --- Ecosystem Services Uses and Values Ecosystem services were widely used and valued by the broader community (randomly selected residents) and beneficiaries (Figure 4). Ecosystem services used most were agricultural use (crop and livestock production), followed by subsistence use (use of natural resources to sustain life), and cultural uses. Beneficiaries valued subsistence ecosystem services the most, followed by aesthetic value and cultural value, while broader community members valued aesthetic, economic, and cultural services the most (Figure 4). Value categories in Figure 4 were defined for respondents as follows: subsistence use (use of natural resources to sustain life, e.g., food and water); aesthetic value (enjoyment of the scenery and beauty of nature); economic value (benefiting from nature through the sale of products, e.g., traditional medicine, vegetables, wood); recreational value (use of natural spaces for leisure and outdoor activities); life-sustaining value (production of goods and renewal of air, water, and soil); spiritual value (natural spaces valued as sacred for religious practices); cultural value (natural spaces as important for cultural practices and rituals, and for transferring cultural knowledge through generations); and subsistence value (provision of goods to sustain life, e.g., food and water). River water was used most for the irrigation of subsistence crops, followed by livestock and personal use (see Figure S2).
Participants also used river water for recreation, which was reported to have increased due to the improvement in the cleanliness of the area and the water since WWWC had been operating. People reported using the "now clean" river water for washing clothes and cars, as well as for flushing toilets. Business use of river water (by beneficiaries and community members) included car washing, brick making, livestock, and sales from crop production. For each category, more beneficiaries than broader community members used river water. During the workshop, locations of access to ecosystem services were reported, including wood and medicinal plant harvesting collection points in adjacent forests, recreational areas, and religious gathering sites. Threats and opportunities related to WWWC operation were also identified (see Table S1). In terms of frequency of river water use by community members and beneficiaries, respectively, 28.5% and 40.7% used river water daily, 35.7% and 0% weekly (no beneficiaries reported weekly use), 21.4% and 3.7% monthly, and 14.2% and 48.1% seasonally. --- Discussion --- Civic Ecology Contributes to Social-Ecological System Benefits and Ecosystem Service Protection and Enhancement High use of ecosystem services highlights the importance of natural capital for the livelihoods of people in the community. Similar to other studies, ecosystem services were widely used and valued by the community, and even more so by the beneficiaries, as a means to enhance well-being: mitigating poverty and diversifying household livelihoods, enhancing food security and access to nutritious food, enhancing health, improving personal safety and security, providing access to clean water and air, and promoting social cohesion [2,30,31]. As found in similar studies, civic ecology practices were also initiated in response to a natural disaster (a flood in 2016) [32]. In so doing, the beneficiaries were able to mitigate ecosystem disservices through environmental management and enhancement of ecosystem services. This led to positive outcomes for both the beneficiaries and their communities [33]. This study confirms that civic ecology practices contribute to the provision of a variety of ecosystem services, including cultural services such as education and learning, social relations, and recreation [7]. We confirmed links between spiritual values and resource management [34], whereby management, environmental protection, and stewardship increase when people associate spiritual and cultural value with natural areas [35]. The social-ecological interactions in the community influence the manner in which people value the environment, whereby valuation of biodiversity is determined by the practical function obtained from the ecosystems and ecosystem services that enhance the livelihoods of individuals [36]. The perceptions of values identified in this study assert that there is a strong dependence of people on ecosystem services, and their understanding of this dependence has, in turn, motivated them towards voluntary environmental stewardship. We confirm that civic ecology practices both sustain human health [37] and lead to the creation of new natural capital [38]. Our study supports the understanding that local communities can benefit from projects that aim to integrate sustainable development and environmental management, and can create positive attitudes and perceptions towards conservation initiatives [39].
Such projects should aim to incorporate the environmental, social, and economic dimensions, including sustainable use of ecosystem goods and services, promoting dignified standards of life, and providing employment opportunities [39]. The results have governance implications. The interventions were able to address some of the impacts on Durban's rivers [20] and enhance terrestrial habitats within Critical Biodiversity Areas that are crucial to meet biodiversity targets [40], thereby reducing the pressure on government authorities who are mandated to manage these areas for conservation purposes. The outcomes of this study related to ecosystem service uses by disadvantaged communities can also be considered by authorities in preparing conservation plans, where such understanding may assist in determining the capacity of ecosystems to support both social and ecological communities [26]. This study highlights that local communities can leverage natural capital for well-being and social-ecological improvements and encourages policy support of civic ecology initiatives. --- Civic Ecology Provides Opportunities for Social Cohesion and Personal Development We show that social cohesion is critical for the achievement of sustainability and well-being [2], and that ecosystem services provide a basis for spiritual, cultural, and social cohesion experiences [4]. Such perceptions, when coupled with scientific evidence of positive outcomes of management interventions, provide a powerful combination for ensuring the sustainability of civic ecology programmes. Community members' positive perceptions of the impacts of environmental management can ensure both support for, and long-term sustainability of, management initiatives [41]. The perceptions of the direct relationships between the positive social-ecological changes taking place in the area and the work being done by the beneficiaries have strengthened social cohesion in the community. The involvement of the community in the selection and implementation of the interventions strengthened the sustainability of the interventions. Our study provides evidence that, contrary to the notion of the tragedy of the commons [42], by taking ownership and control of natural capital, local communities can successfully contribute to improved collective human well-being. --- Conclusions Our study showed that increased knowledge of ecosystems through civic ecology practices contributed to the protection and increased use and benefit of ecosystem services, both for beneficiaries and for other members of disadvantaged communities. Civic ecology practices have the potential to uplift impoverished communities by providing opportunities for education, as well as enhanced ecosystem service protection and access, and should therefore be encouraged and supported by government and policy. Given that governments increasingly recognise the contributions of civic ecology groups to natural capital, these groups need to be supported by the government and the private sector through policies aimed at achieving sustainability and well-being [43]. This study provides evidence of the potential for civic ecology initiatives, supported by private practice, to overcome the tragedy of the commons and enhance ecosystem services for low-income communities who are directly dependent on ecosystem services for their livelihoods and well-being.
We call for increased governance support of similar civic ecology initiatives as a means to capacitate local communities to take ownership of natural capital and make gains in the fight against poverty and environmental degradation. --- Data Availability Statement: The data presented in this study are available on request from the corresponding author. --- Supplementary Materials: The following are available online at https://www.mdpi.com/2071-1050/13/3/1300/s1: Figure S1: Ezimbokodweni Wetland 2015 (before WWWC) and 2018 (after WWWC). Figure S2: Natural water used by beneficiaries and community members. Table S1: Social-ecological system workshop findings. Data S1: Questionnaires/surveys. Data S2a: Stories of change; Data S2b: Comments made by beneficiaries, community members, and external stakeholders; Data S2c: WWWC challenges. Funding: This research is part of SHEFS, an interdisciplinary research partnership forming part of the Wellcome Trust-funded Our Planet, Our Health programme, with the overall objective to provide novel evidence to define future food systems policies to deliver nutritious and healthy foods in an environmentally sustainable and socially equitable manner. This research was funded by the Wellcome Trust through the Sustainable and Healthy Food Systems (SHEFS) Project (grant no. 205200/Z/16/Z). The South African Research Chairs Initiative of the Department of Science and Technology and the National Research Foundation of South Africa (grant no. 84157) financially supported the research. The funding support of i4Water is also acknowledged for commissioning Rashieda Davids and Margaret Burger to undertake an associated study that facilitated data collection for this study. --- Conflicts of Interest: The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; or in the decision to publish the results.
Ecosystem services enhance well-being and the livelihoods of disadvantaged communities. Civic ecology practices can enhance social-ecological systems; however, their contributions to ecosystem services are rarely measured. We analysed the outcomes of civic ecology interventions undertaken in Durban, South Africa, as part of the Wise Wayz Water Care programme (the case study). Using mixed methods (household and beneficiary (community members implementing interventions) surveys, interviews, field observations, and workshops), we identified ecosystem service use and values, as well as the benefits of six interventions (solid waste management and removal from aquatic and terrestrial areas, recycling, invasive alien plant control, river water quality monitoring, vegetable production, and community engagement). Ecosystem services were widely used for agriculture, subsistence, and cultural uses. River water was used for crop irrigation, livestock, and recreation. Respondents noted numerous improvements to natural habitats: a decrease in invasive alien plants, less pollution, improved condition of wetlands, and increased production of diverse vegetables. Improved habitats were linked to enhanced ecosystem services: clean water, agricultural production, harvesting of wood, and increased cultural and spiritual activities. Key social benefits were increased social cohesion, education, and new business opportunities. We highlight that local communities can leverage natural capital for well-being and encourage policy support of civic ecology initiatives.
Background The research literature identifies different types of decision-makers in the context of vaccinations: pro-vaccination, hesitant (selective choice of when and for what to vaccinate) and anti-vaccination. Each type is marked by its own considerations and decision-making processes. Most studies point to lower levels of vaccination among minority population groups than among dominant groups [1][2][3][4]. Arabs living in Western countries as minority groups tend to vaccinate their children less than the dominant national group [5][6][7][8][9]. Nevertheless, a few studies show a higher vaccination rate among the children of Arab minorities living in Western countries [10]. The low vaccination uptake rate among minority groups in high income countries stems from various reasons. The main ones include lack of trust [11][12][13]; hostility toward the government [14]; medical staff language barriers and inability to understand patients' values, norms, language and behavior [6,15,16]; opposition to institutional recommendations [17]; inability to integrate into the life of the dominant society [18,19]; limited knowledge about vaccinations [16,20]; other social and economic factors, such as high income [21][22][23] and low educational level [6,24,25], both of which increase the chances of vaccination uptake; traditional beliefs [26]; and sex of child in the case of the HPV vaccine [27,28]. As opposed to the aforementioned Arab minority groups living in Western countries, parents in the Arab population of Israel are known to be "pro-vaccination" and tend to vaccinate their children at higher rates than the Jewish population, specifically against the human papillomavirus (HPV) and seasonal influenza [29]. These two vaccinations were recently introduced into Israeli schools. The influenza vaccine is given at school both to boost vaccination uptake rates and because influenza is a common infectious disease among children. Moreover, the HPV vaccine can be targeted at school before children become sexually active. In 2013, the HPV vaccine was included as part of the planned routine vaccines given in school to girls in the eighth grade, and was later extended to include all eighth graders (13-14 years old), including boys [30]. According to the Israel Ministry of Health [30], in 2016 the uptake rate for the HPV vaccine in Arab schools reached 84% (96% among the Northern Bedouins), compared to 40% among the Jewish population. Similarly, in 2016, second grade pupils (7-8 years old) in Israel began receiving the live attenuated seasonal influenza vaccination at school. In 2017, third graders (8-9 years old) were also included in the school-located influenza vaccination program, with some children receiving the first dose of the vaccine and some receiving the second. Beginning in September 2018, fourth graders were also included in the school-located vaccination program, such that during the 2019-2020 influenza season, all pupils in the second to fourth grades were offered one dose of the seasonal influenza vaccine at school [31]. After the seasonal influenza vaccine was introduced to the school-located vaccination program in the 2016-2017 influenza season, the uptake rate for second graders in the Arab schools was 84%, compared to 47% among the Jewish population. The Ministry of Health's vaccination report for 2019 points to higher vaccination coverage in the Arab schools (81.4%) than in the schools in the Jewish sector (44-54%).
The primary reason for not vaccinating children was parental refusal (94%). Children who required a second dose and had never received influenza vaccinations in the past were instructed to complete the vaccination at their HMO (Health Maintenance Organization) [32]. It should be noted that uptake of these two vaccinations among the Jewish population is much lower than uptake of other routine school vaccinations, such as MMRV (96%) and DTaP-IPV (95%) [33]. Alongside a large body of evidence indicating the effectiveness and safety of the HPV vaccine [34][35][36], the research literature also reveals a scientific controversy surrounding the safety of this vaccine. Several smaller studies examining the HPV vaccine reported side effects, some relatively minor, such as pain at the injection site, fainting and dizziness, and some more serious, such as POTS (Postural Orthostatic Tachycardia Syndrome), neurological disturbances (CRPS, complex regional pain syndrome), leg paralysis, autoimmune diseases and sympathetic nervous system deficiencies [37][38][39]. Barriers to the HPV vaccination are related to taboos in conservative societies prohibiting sexual relations before marriage [40][41][42][43]. These fears are common among the Arab population as a whole, and particularly among the Muslim population, as well as among Orthodox Jews [44][45][46][47]. Moreover, despite studies pointing to the effectiveness of the seasonal influenza vaccine [48][49][50], some studies report a controversy surrounding its effectiveness [43][44][45][46]. Studies have pointed to varying effectiveness according to age group: 54% (age 6-17), 61% (under the age of 5), 70% (6 months-8 years), 73% (2-5 years), 78% (6 months to 7 years). Regarding influenza vaccine efficacy, different research studies have also shown that vaccine efficacy varies by age group: 28% (age 2-5), 59% (6 months-15 years), 60% to 83% (6 months-7 years), 61% (under the age of 5, age 6-17) and 69.6% (5-17 years) [5,51,52]. In 2020 the Arab population of the State of Israel numbered about two million people, constituting 25% of the general population. Of these, 82% are Muslim, 9% are Christian and 9% are Druse. Fifty-three percent of Arab families live in poverty, compared to 14% of Jewish families. Over the years, the educational level of the Arab population has improved, yet the educational gaps between Arabs and Jews remain large. Of Arab women between the ages of 25 and 34, 29% completed 16 or more years of education, compared to 50% of Jewish women in the same age group [53]. Jewish society is divided into several groups: secular (45%), traditional (35%), religious and very religious (16%) and ultra-Orthodox (14%). In this study we examined the traditional group, which is located on a spectrum somewhere between religious and secular [54]. For the most part, traditional Jews observe specific commandments and traditions considered to be clear signs of traditional belief. They do so not necessarily out of strict compliance with Jewish law but rather out of a sense of identification and belonging with the Jewish people or out of a belief that these traditional values must be safeguarded to guarantee the existence of the Jewish people [54]. The Israeli school system is marked by a great deal of segregation. Arabs and Jews do not attend the same schools.
Moreover, the very religious and ultra-Orthodox groups attend different schools from the secular and traditional groups and sometimes from each other [55], leading to inequality in education, research and policy. Jews and Arabs also tend to live in different residential areas under separate municipal authorities, pointing to spatial politics and discrepancies between Jews and Palestinians within Israel [56]. In view of the interesting phenomenon of high vaccination rates among the Arab population of Israel, this study focuses on the factors related to decision-making among Arab mothers in Israel regarding these two vaccinations: seasonal influenza and HPV. These two vaccinations were chosen for two reasons: 1) They were recently introduced to the school-located vaccination program. 2) Both are a matter of controversy: safety in the case of the HPV vaccination and effectiveness in the case of the influenza vaccination. In addition, very few research studies have examined vaccination uptake rates among the various subpopulations in Arab society, with most research tending to consider the Arab population as a single entity. This study seeks to examine these two issues. It investigates the variables influencing vaccination uptake among subgroups in the Arab population (Muslims, Christians, Druse and Northern Bedouins), while comparing vaccination uptake to that of the national Jewish population (secular and religious groups). The overarching goal of this study is to rank the extent of uptake of these two vaccinations, seasonal influenza and HPV, among the subgroups in Arab society and in Jewish society, from the highest uptake rate to the lowest. The specific research objectives are as follows: (1) to compare vaccination uptake of the HPV and seasonal influenza vaccines in the Arab population to that in the Jewish population; (2) to identify and characterize the variables associated with mothers' uptake of these two vaccinations; (3) to compare vaccination uptake of the HPV and seasonal influenza vaccines in the different ethnic subgroups; and (4) to compare the differences between ethnic subgroups for each variable associated with mothers' vaccination uptake. --- Methods --- Research population The research population included mothers with children in both of the following two age groups: 1. A child in second or third grade, such that the mothers must decide whether their child should get the seasonal influenza vaccination that was recently introduced to the school-located vaccination program. 2. A child in the eighth grade, such that the mothers must decide whether their child should get the HPV vaccine, also part of the school-located vaccination program. Mothers who had children both in elementary school and in middle school were included, while mothers who had children in only one of these two age groups were excluded from the study. We chose mothers with two children of different ages in order to compare the mothers' decision-making with respect to the two different vaccinations. Our rationale in choosing mothers was that mothers are commonly the primary parent in the family when it comes to making decisions about vaccinations [29]. --- Sampling method and research procedure The sample was chosen by means of stratified sampling [57] according to the ethnic subgroups examined. The sampled subgroups were of equal size rather than in accordance with their relative proportion in the population of Israel. Hence, each group had the same number of participants, facilitating group comparisons.
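To make the equal-allocation design concrete, the following is a minimal sketch of stratified sampling with equally sized strata. The DataFrame, column names (mother_id, ethnic_group) and group sizes are hypothetical illustrations, not the study's actual recruitment procedure or data.

```python
# Minimal sketch of equal-allocation stratified sampling (hypothetical data,
# not the study's dataset): draw the same number of mothers from every stratum.
import pandas as pd

frame = pd.DataFrame({
    "mother_id": range(1, 1201),
    "ethnic_group": ["Muslim", "Christian", "Druse", "Bedouin",
                     "Secular Jewish", "Religious Jewish"] * 200,
})

# Equal allocation: each ethnic subgroup contributes the same number of participants,
# which facilitates direct group comparisons (at the cost of proportional representativeness).
sample = frame.groupby("ethnic_group", group_keys=False).sample(n=100, random_state=42)
print(sample["ethnic_group"].value_counts())
```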
After the study was approved by the Ethics Committee of the Faculty of Social Welfare and Health Sciences at the University of Haifa (Approval No. 118/16), participants were recruited by means of stratified heterogeneous sampling [58] at schools in a number of different localities in Israel. During the period March 30, 2019 through October 20, 2019, questionnaires were distributed manually to eighth-grade pupils who had younger siblings in second or third grade. The children who met the study's inclusion criteria were given a letter asking their parents to participate in the study and providing the researchers' contact details. Parents who indicated their willingness to participate (gave their informed consent) received a questionnaire, which they returned to the school a few days later. The response rate was 92%. The sampling method was manual rather than via an online questionnaire because a substantial portion of the Arab population, and particularly the Northern Bedouin population, has low digital literacy [59]. --- Research tools Prior to the quantitative study described in this paper, we conducted preliminary qualitative research using personal interviews with mothers of children at the targeted ages. The interviews focused on decision-making with respect to vaccinations [29]. Based on the results of this preliminary qualitative research and on validated questionnaires from the research literature focusing on different variables relevant to our research objectives [43,60], we constructed a questionnaire (see Additional file 1) that was culturally adapted to the different subgroups in our study. After constructing the questionnaire, we calculated the Cronbach's alpha value for items that appeared to be associated with measures of theoretical significance in order to validate each measure. Cronbach's alpha is used to provide a measure of the internal consistency of a test or scale and is expressed as a number between 0 and 1. Internal consistency describes the extent to which all the test items measure the same concept or construct and hence reflects the inter-relatedness of the items within the test [61]. The questionnaire included socio-demographic data such as respondent's age, number, age and sex of children, education, income, residential area, level of religiosity and ethnicity. It also included questions about vaccination uptake based on the mothers' self-reports regarding the two relevant vaccinations recommended by the Ministry of Health: seasonal influenza and HPV. The statements in the first part of the questionnaire referred to variables related to vaccinations in general (called "general variables"). These included attitudes toward vaccinations (e.g., "All vaccinations recommended by the health authorities are safe"); trust in doctors (e.g., "When it comes to vaccinations, I trust my family doctor because he is the expert and knows more than I do"); trust in the system (e.g., "I trust the health system in Israel because of its high quality of care and service"); and low health literacy, referring to the extent to which the mothers think they are capable of seeking and reading information about vaccinations (e.g., "I don't have time to look for information about vaccinations so I make do with what the medical team (nurse and doctor) tells me"). The statements in the second part of the questionnaire focused on variables associated with each vaccination separately (called "specific variables"). 
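As a reference for the internal-consistency check described above, here is a minimal sketch of the Cronbach's alpha computation. The four-item matrix is synthetic and only illustrates the formula, not the study's actual questionnaire items or scales.

```python
# Minimal sketch of Cronbach's alpha for a Likert scale (synthetic data).
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Alpha for an (n_respondents x k_items) matrix of 1-5 Likert scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)        # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)    # variance of the summed scale
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

rng = np.random.default_rng(1)
tendency = rng.integers(1, 6, size=(80, 1))                            # shared response tendency
items = np.clip(tendency + rng.integers(-1, 2, size=(80, 4)), 1, 5)    # four correlated items
print(f"Cronbach's alpha: {cronbach_alpha(items):.2f}")
```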
For example, with respect to perceived risk, the questionnaire included statements about perceived risk of each disease and perceived risk of each vaccination (influenza and HPV, respectively). It also included statements related to perceptions regarding the inclusion of these vaccinations in the school-located vaccination program as a legitimizing factor for giving children these two vaccinations. Respondents were instructed to respond to each statement on a five-point Likert scale. The statements were grouped and defined as independent variables according to subject area (attitudes, trust, low health literacy and inclusion in the school vaccination program). An examination of the correlations between all the independent variables yielded correlation coefficients less than 0.5. Therefore, we ran a multiple regression model. We also examined the associations between these variables and the dependent variable (uptake of the two types of vaccination: seasonal influenza and HPV) (see Table 1). --- Reliability and validity During questionnaire construction, the questions were formulated in Hebrew and translated into Arabic. They were then translated into Arabic a second time by a second translator to examine their cultural appropriateness and wording. After that, we conducted a pilot study among a sample of 80 participants to validate the content and check the wording to make sure it was culturally appropriate for the target population. After data collection and entry, quality control was applied to discover any errors in data entry. The quality control entailed examining the range of data for each question and generating distributions. In addition, the variables were examined for outliers [62] and tested to determine whether they met the assumption of normality. --- Data analysis To compare vaccination uptake between the Jewish and Arab populations, we calculated the uptake rates for the two groups for the two vaccines. We used McNemar's test to examine the significance of the differences between the uptake of the two vaccines in each of the subgroups. To identify the variables associated with mothers' uptake of the two vaccines, we first conducted separate multiple logistic regressions according to type of vaccination, with uptake of the specific vaccine-HPV or influenza-as the dependent variable. Examination of the correlations between all the independent variables yielded coefficients that were all less than 0.5. Therefore, we were able to run a multiple regression model assuming no multicollinearity. We ran the multiple regression in two stages: In the first stage we ran the general variables and the specific variables in the multiple regression model to test the effect of each variable. In the second stage, we removed the variables that were not significant and ran the multiple regression again with the significant variables only to examine the exact effect of the variables on vaccine uptake. To examine the differences between the various subgroups with respect to variables associated with mothers' uptake, first we used descriptive statistics and calculated the means of the variables among the different ethnic groups. Second, we conducted posthoc testing for all the dependent variables: attitudes, trust in the system, trust in the doctor, low health literacy, school-located vaccination program, and risk perception of both vaccines. We then conducted a multiple comparison analysis using the Tukey correction to examine the significant differences between the various ethnic groups. 
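Before turning to the results, the following is a compact, illustrative sketch of the analysis steps just described: the collinearity screen, the two-stage multiple logistic regression (including how coefficients translate into the "% change in the odds per unit" phrasing used later), McNemar's test on paired uptake of the two vaccines, and one-way ANOVA followed by Tukey-corrected pairwise comparisons. All data, column names and variable labels are hypothetical stand-ins, not the study's dataset.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import f_oneway
from statsmodels.stats.contingency_tables import mcnemar
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
n = 600
df = pd.DataFrame({
    "flu_uptake": rng.integers(0, 2, n),
    "hpv_uptake": rng.integers(0, 2, n),
    "arab": rng.integers(0, 2, n),
    "school_program": rng.normal(3, 1, n),
    "low_health_literacy": rng.normal(3, 1, n),
    "vaccine_risk_perception": rng.normal(3, 1, n),
    "trust_in_system": rng.normal(3, 1, n),
    "ethnic_group": rng.choice(
        ["Muslim", "Christian", "Druse", "Bedouin", "Secular Jewish", "Religious Jewish"], n),
})
predictors = ["arab", "school_program", "low_health_literacy",
              "vaccine_risk_perception", "trust_in_system"]

# 1) Collinearity screen: a single multiple regression is justified only if all
#    pairwise correlations between predictors stay below 0.5.
corr = df[predictors].corr().abs()
print("max off-diagonal correlation:",
      corr.where(~np.eye(len(predictors), dtype=bool)).max().max())

# 2) Two-stage multiple logistic regression: fit the full model, then refit with
#    the significant predictors only. exp(beta) is the odds ratio, and
#    (exp(beta) - 1) * 100 is the "% change in the odds per unit" quoted in the text.
full = smf.logit("flu_uptake ~ " + " + ".join(predictors), data=df).fit(disp=0)
significant = [p for p in predictors if full.pvalues[p] < 0.05]
reduced = smf.logit("flu_uptake ~ " + " + ".join(significant or predictors), data=df).fit(disp=0)
print(np.exp(reduced.params).rename("odds ratio"))

# 3) McNemar's test: within the same mothers, does HPV uptake differ from flu uptake?
paired = pd.crosstab(df["hpv_uptake"], df["flu_uptake"])
print("McNemar p-value:", mcnemar(paired.values, exact=True).pvalue)

# 4) One-way ANOVA across ethnic groups, followed by Tukey-corrected comparisons.
samples = [g["trust_in_system"].values for _, g in df.groupby("ethnic_group")]
print(f_oneway(*samples))
print(pairwise_tukeyhsd(df["trust_in_system"], df["ethnic_group"], alpha=0.05))
```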
--- Results --- Sample description A total of 693 mothers participated in the study. The participants included mothers from almost the entire spectrum of the Israeli population. The Arab population was defined as the primary research population, while the national Jewish population (secular and religious/traditional groups) served for comparison purposes. Note that the ultra-Orthodox population was not included in the study. Table 2 shows the participants' sociodemographic characteristics, followed by the mothers' education by ethnic group and monthly income by ethnic group (Tables 3 and 4, respectively). --- Differences in uptake between Arab and Jewish populations The research findings reveal differences in uptake of the two vaccinations between the Arab and Jewish populations, such that Arab mothers have a higher uptake rate for both vaccinations (HPV: 90%; influenza: 62%) than Jewish mothers (HPV: 46%; influenza: 34%) (Fig. 1). The differences shown above are statistically analyzed in subsequent sections. Note that due to differences between the two vaccinations, we analyzed each of them separately. In addition, we found that in each case different factors influence vaccination uptake. Therefore, to examine the variables associated with mothers' uptake of the two vaccinations, we computed two multiple logistic regression models and entered ethnicity as an independent variable in each. The models examined both general and specific variables associated with vaccine uptake. Furthermore, McNemar's test results reveal significant differences in uptake according to type of vaccination, showing that uptake of the HPV vaccination is significantly higher than uptake of the seasonal influenza vaccination in both populations: Arab (p < 0.0001) and Jewish (p = 0.0014). --- Variables specifically associated with mothers' uptake of seasonal influenza vaccination The first model for seasonal influenza vaccination included the general variables of ethnicity, attitudes, trust in the system, trust in the family doctor, school-located vaccination program and health literacy, and the specific variables of vaccine risk perception and disease risk perception. In this model, the general variables of attitudes (p = 0.3286) and trust in the family physician (p = 0.2715) were not significant. Therefore, to examine the precise effect of each variable on influenza vaccination uptake, we decided to eliminate these two variables and run the multiple regression with the significant variables only. Trust in the medical system was significant in the first model (p = 0.0199), but was no longer significant when entered into the reduced model. Therefore, the reduced model did not include this variable. Table 5 shows the variables found to be significantly associated with uptake of the seasonal influenza vaccination. The results show that the odds of flu vaccination uptake among Arab mothers are more than three times the odds among Jewish mothers. Low health literacy is positively associated with flu vaccination uptake: for each unit increase in the low health literacy index, the odds of uptake increase by 43% (i.e., an odds ratio of about 1.43). Inclusion in the school-located vaccination program is positively associated with flu vaccination uptake: for each unit increase in the school-located vaccination index, the odds of uptake increase by 84%. Perceived risk of influenza vaccination is negatively associated with flu vaccination uptake: for each unit increase in the perceived risk of influenza vaccination index, the odds of uptake decrease by 75%.
Perceived risk of seasonal influenza disease is positively associated with flu vaccination uptake: for each unit increase in the perceived risk of seasonal influenza disease index, the odds of uptake increase by 75%. --- Variables specifically associated with mothers' uptake of HPV vaccination The first model for HPV vaccination included the general variables of ethnicity, attitudes, trust in the system, trust in the family doctor, school-located vaccination program and health literacy, and the specific variables of vaccine risk perception and disease risk perception. In this model, the general variables of attitudes (p = 0.3147), trust in the family physician (p = 0.4995), low health literacy (p = 0.1324) and disease risk perception (p = 0.7337) were not found to be significant. Therefore, to examine the precise effect of each variable on HPV vaccination uptake, we decided to eliminate these variables and to run the multiple regression with the significant variables only. (Table footnotes: (a) A moshav is a form of rural living unique to the State of Israel in which a group of residents live together in a joint financial arrangement. These residents are known as moshav members. Unlike the historical kibbutz framework, in the moshav the family is an independent financial unit operating in a framework of mutual assistance. Every moshav member is allocated a plot of land, which in most cases is used for agriculture [63]. (b) A kibbutz is a form of communal living unique to Zionism, the pre-state Yishuv period and the State of Israel, based on Zionist aspirations to resettle the Land of Israel as well as on the socialist values of human equality and of a joint economy and ideology. A kibbutz is usually a small locality with only a few hundred residents and supports itself through agriculture and industry [64].) Table 6 shows the variables found to be significantly associated with HPV vaccination uptake. The results show that the odds of HPV vaccination uptake among Arab mothers are more than six times the odds among Jewish mothers. Trust in the health system is negatively associated with HPV vaccination uptake: for each unit increase in the trust in the health system index, the odds of uptake decrease by 26%. Inclusion in the school-located vaccination program is positively associated with HPV vaccination uptake: for each unit increase in the school-located vaccination index, the odds of uptake increase by 51%. Perceived risk of HPV vaccination is negatively associated with HPV vaccination uptake: for each unit increase in the perceived risk of HPV vaccination index, the odds of uptake decrease by 61%. In addition, the odds of HPV vaccination uptake for female youth are 59% lower than the odds of uptake for male youth. --- Differences in mothers' uptake of the two vaccination types by ethnic group Examination of the ethnic subgroups reveals differences in mothers' vaccination uptake. With respect to mothers' uptake of the seasonal influenza vaccination, the highest uptake rates were found in the Northern Bedouin (74%) and Druse (74%) groups, followed by the Muslim group (60%). The lowest uptake rate in Arab society emerged among the Christians (46%). Moreover, secular Jewish mothers exhibited a lower uptake rate (38%) than any of the Arab groups, though higher than the religious/traditional Jewish mothers (26%), who exhibited the lowest uptake rate. With respect to HPV vaccination, the Northern Bedouin population exhibited the highest uptake rate (99%) of all the subgroups.
The Druse population also exhibited a relatively high uptake rate (92%), as did the Muslim group (92%). Again, the Christians exhibited the lowest uptake rate in Arab society (82%). The secular Jewish mothers exhibited an HPV uptake rate of 53%, which was lower than that of all the Arab subgroups yet higher than that of the religious/traditional Jewish mothers (33%), who exhibited the lowest HPV vaccination uptake rate (see Fig. 2). The results of McNemar's test (Table 7) show that in addition to differences between the ethnic groups with respect to uptake of the two vaccinations, each ethnic group (except for the religious Jewish group) exhibited significant differences in uptake according to vaccination type: HPV vs. seasonal influenza. The findings show that HPV vaccination uptake is significantly higher than seasonal influenza vaccination uptake in all the subgroups except for the religious Jewish group, where the difference is not significant. --- Variables associated with vaccination uptake according to ethnic subgroup Examination of the variables associated with uptake of the two vaccinations according to ethnic subgroup revealed differences in the means of both the general and the specific variables for each vaccination type, as illustrated in Tables 8, 9, 10, 11, 12, 13 and 14 and the accompanying Figs. 3, 4, 5, 6, 7, 8 and 9. The ANOVA for the dependent variable of trust in the health system revealed a significant difference between the different ethnic groups [F(5,687) = 24.13, P < 0.0001]. Multiple comparison analysis using the Tukey correction to examine the significant differences between the ethnic groups showed that Christian, Muslim and Druse women had a significantly higher level of trust in the health system than Jewish women (secular and religious) and Bedouin women. The ANOVA for the dependent variable of trust in the family doctor revealed a significant difference between the ethnic groups [F(5,687) = 19.45, P < 0.0001]. The multiple comparison analysis using the Tukey correction showed that Bedouin women exhibited a significantly higher level of trust in the family doctor than all the other groups, except for Druse women. Moreover, the level of trust in the family doctor among Jewish women (secular and religious) was significantly lower than that of Arab women in all the ethnic groups. The ANOVA for the dependent variable of low health literacy revealed a significant difference between the ethnic groups [F(5,687) = 52.04, P < 0.0001]. Multiple comparison analysis using the Tukey correction showed that Bedouin women exhibited the highest level of low health literacy, with a significant gap between them and all the other groups. Secular Jewish women exhibited the lowest level of low health literacy, with a significant gap between them and three other groups (Bedouin, Druse and Muslim women). The ANOVA for the dependent variable of general attitudes toward vaccination revealed a significant difference between the ethnic groups [F(5,687) = 24.53, P < 0.0001]. Multiple comparison analysis using the Tukey correction showed that Bedouin women exhibited the highest level of support for vaccinations, significantly higher than that of all the other groups. Druse and Muslim women were second in their level of support for vaccinations.
The other groups (Christian and Jewish) exhibited a lower level of support for vaccination, with religious Jewish women exhibiting the lowest level of support, significantly lower than all the other groups with the exception of secular Jewish women. The ANOVA for the dependent variable of vaccinations given at school revealed a significant difference between the ethnic groups [F(5,687) = 41.67, P < 0.0001]. Multiple comparison analysis using the Tukey correction showed that giving the vaccinations at school was the most significant factor for Bedouin women, significantly higher than for all the other groups. Jewish women (secular and religious) rated this factor significantly lower than did Arab women from all the ethnic groups. The ANOVA for the dependent variable of risk of the seasonal influenza vaccine revealed a significant difference between the ethnic groups [F(5,687) = 2.81, P = 0.0161]. Multiple comparison analysis using the Tukey correction showed that perceived risk of the seasonal influenza vaccine was significantly higher among religious Jewish women than among Muslim and Druse women. No other significant differences in perceived risk were found among the other ethnic groups. The ANOVA for the dependent variable of risk of the HPV vaccine revealed a significant difference between the ethnic groups [F(5,687) = 28.4, P < 0.001]. Multiple comparison analysis using the Tukey correction showed that perceived risk of the HPV vaccination was significantly higher among Jewish women (secular and religious) than among Arab women in all the ethnic groups. Moreover, a significant difference in the level of perceived risk of the HPV vaccination was found between Christian and Bedouin women, with Christian women perceiving the vaccination as riskier than Bedouin women. In summary, among the general variables, trust in the family doctor exhibited the highest mean in all the ethnic groups. The variable of low health literacy exhibited a low mean in all the ethnic groups except for the Northern Bedouin mothers, who reported major difficulties in searching for information about vaccinations. The Christian mothers had the highest literacy of all the Arab groups in searching for information, and the secular Jewish mothers had the highest literacy of all the subgroups. With respect to the seasonal influenza vaccination, Jewish mothers (and specifically religious as opposed to secular mothers) perceived the vaccination as riskier than Arab mothers from all the subgroups, except for Christian mothers, whose risk perceptions were equivalent to those of the secular Jewish mothers. With respect to the HPV vaccination, the highest risk perceptions were among the religious Jewish mothers and the lowest among the Northern Bedouin mothers. --- Discussion This pioneering research study provides an in-depth examination of decision-making processes among subgroups in Arab society in Israel with respect to two vaccinations recently introduced to the school-located vaccination program: the HPV vaccination and the seasonal influenza vaccination. The study describes the variables associated with vaccination uptake among subgroups in Arab society as well as among certain segments of the Jewish population (secular and religious Jews). The study's findings show that the variable of including the two vaccines in the school program is the primary variable influencing Arab mothers' decision-making with respect to the HPV and seasonal influenza vaccinations.
Vaccination inclusion in the school-located vaccination program encourages parents to vaccinate their children and increases the chances of vaccination uptake. With respect to framing strategies in health communication, vaccination inclusion in the school-based program grants the vaccination medical legitimacy, which also influences parental uptake [65]. These findings are in line with those of other studies showing various reasons for parental preference for vaccinating their children at school, among them lack of access to medical services, limited time to take children for vaccinations, inability to leave work for this purpose and more [66][67][68]. Perceived risk of the vaccination itself is also associated with mothers' decision-making processes. This finding is compatible with other studies showing that parents decide not to vaccinate their children based on high risk perceptions related to a lack of trust in vaccination safety [14,50,69,70]. Moreover, as in many studies, the findings of this study indicate that high risk perceptions about the illness are also associated with mothers' uptake of the vaccinations. That is, the riskier mothers perceive an illness to be, the more likely they are to take up a vaccination that prevents it [7,8,71-73]. The findings also show an association between trust in the medical system and decision-making with respect to the HPV vaccination. Other studies that examined decision-making for HPV vaccination among parents in Arab minority groups in Western countries also found this variable to be significant [74][75][76]. Yet despite high vaccination compliance, trust in the system is not very high even among the subgroups of Arab mothers. These findings can be explained by two factors: 1) Campaigns and explanatory materials designed to promote HPV vaccination in Arab society are not sufficiently transparent and lack cultural appropriateness [2,65]; 2) The recommendations of doctors and nurses, considered by Bedouin society to be reliable sources of information, are not sufficiently explicit [29,75,77]. Contrary to the findings of many studies worldwide, the findings of the current study show that low health literacy and difficulties in searching for information about vaccinations are positively associated with mothers' vaccination uptake. That is, the lower the mothers' health literacy and the more difficulties they have in searching for information, the more likely they are to take up vaccinations [78][79][80][81]. This high vaccination uptake rate despite low health literacy can be explained by the fact that these mothers do not search for impartial information about the vaccination but rather receive their information exclusively from the health system. Because they do not search for information, these mothers are not exposed to the scientific controversy surrounding the HPV vaccine [37][38][39] or to the questions raised about the effectiveness of the influenza vaccine [5,51,52]. Various studies have shown that minority groups usually have low health literacy, are less exposed to scientific controversies surrounding vaccinations and are less hesitant about vaccinations [2,29,82]. With respect to the sex of the child in the case of the HPV vaccination, the results of the current study are in line with other studies showing that the child's gender plays a role in mothers' decision-making regarding the HPV vaccination [27,28,50,68,82]. Indeed, the findings of the current study show that mothers are more likely to vaccinate boys than girls.
In conservative societies, and particularly in Arab society, the matter of sexuality is generally taboo, particularly among women. Therefore, men in conservative societies are thought to be more likely to engage in frequent sexual relations than women, leading to the assumption that mothers are more likely to decide to give the HPV vaccination to their male children [83,84]. With respect to the various population subgroups, the findings point to differences in mothers' uptake rates. Specifically, the Northern Bedouin population emerged as the group with the highest vaccination uptake rate among all the Arab subgroups. We propose several explanations for this finding. First, it is possible to assume that these high vaccination rates derive from the fact that a significant portion of Northern Bedouin mothers are illiterate (more than 60%) [85]. Consequently, their health literacy is low and their ability to search for, read and analyze health information in general and information about vaccinations in particular is limited [86][87][88]. Several studies indicate that mothers with a high level of education have lower vaccination uptake rates due to their ability to search for information about vaccinations and make decisions based on facts and on "informed consent" [89]. Furthermore, the findings show that Bedouin mothers vaccinate their children despite their mistrust of the health system. It is reasonable to assume that the main and perhaps only information source for Northern Bedouin mothers is the Ministry of Health. Studies have shown that Bedouin mothers usually take institutional health directives seriously and implement them regardless of their level of trust [89,90]. Moreover, despite this low level of trust in the system this group has a very high level of trust in doctors, making the family doctor's recommendation a highly influential factor in mothers' decision-making regarding vaccines. Thus they fully adopt the recommendations of the ministry or the doctor representing the health system [73,91]. These findings contradict the findings of two studies conducted among the Bedouin population in the south of Israel, which showed that these Bedouins do not complete their children's vaccination programs due to their lack of access to health services and lack of trust in the government [7,73,91]. It is important to note that the Bedouins living in the south, mainly those in unrecognized villages, have less convenient access to medical services than those living in the north examined in this research, whose superior access to medical services enables them to complete the vaccination programs. The results of this study also show that the Druse population has the second highest uptake rates for both vaccinations. There are several ways to interpret this finding. Many members of the Druse population serve in the Israeli military forces. This fact, together with their high levels of trust in the government and its decisionmakers [92], may explain their high uptake of various types of vaccinations. Moreover, a substantial portion of the Druse population identifies itself with the dominant Jewish national group rather than the minority Arab population. Over the years, a picture has emerged of Druse solidarity with the Zionist ethos, while the Druse simultaneously distance themselves from the Arab and Islamic themes resonant among the Israeli-Arab sector of society [86,93]. Their desire to be part of the dominant Jewish population may lead to their similar or even higher vaccination uptake. 
Yet this interpretation may be qualified by the recently formulated basic law defining Israel as the Nation-State of the Jewish People (see footnote 1 below), which may influence the reciprocal relations between the Druse and the State of Israel. Hence, future research is needed to verify this interpretation. The research findings also indicate that Muslim mothers are third in uptake rate for the two vaccinations. Examination of the variables associated with vaccination uptake shows that the variable of inclusion in the school-located vaccination program is one of the most significant variables associated with Muslim mothers' decision-making about the two vaccinations. It is possible to assume that including these vaccinations in the school program provides these mothers legitimization to vaccinate their children along with a convenient way to do so [7][8][9][29]. With respect to the Christians, the final subgroup in the Arab population, the findings show that Christian mothers have the lowest vaccination uptake rate of all the Arab subgroups for both vaccinations. This finding can be explained by the fact that the Christian Arab population differs from the Muslim, Northern Bedouin and Druse groups in that they are more educated. Indeed, Christian society is marked by high socioeconomic status and a more modern lifestyle (for example, lower fertility rates) [94,95]. Their relatively low vaccination uptake may be tied to their higher education and literacy levels, which enable Christian mothers to search for information from other sources [94][95][96]. Thus, the Christian mothers may be exposed to discourse on controversies surrounding vaccinations.
The research findings also show that, like the Christian mothers, secular Jewish mothers, who are in fifth place in vaccination uptake, vaccinate their children at lower rates than all the Arab subgroups. As indicated by the current research, due to their high educational level, their high level of knowledge about vaccinations and their more hesitant attitudes toward vaccinations, Jewish mothers tend not to complete their children's vaccination programs [2, 3, 7-9, 96, 97]. (Footnote 1: Basic Law: Israel as the Nation-State of the Jewish People, informally known as the Nation-State Bill or the Nationality Bill, anchors the national Jewish values of the State of Israel in a Basic Law, after many such values were already anchored in other laws. This Basic Law specifies the nature of the State of Israel as the nation-state of the Jewish people, the place where the Jewish people has a natural right to self-determination, a right that is exclusive to the Jewish people. The law also anchors the status of the state flag and state emblem and of "Hatikva" as the state anthem. It determines the use of the Hebrew calendar and the holidays of Israel and states that Hebrew is the official state language. The law also states that Jewish immigration is to be encouraged, that Jerusalem, complete and united, is the capital of Israel, and that Arabic is not a state language but has a special status in the state.) One of the more surprising findings of this study is related to uptake of the HPV vaccination among conservative population groups. The HPV vaccination is intended to prevent cervical cancer and genital warts caused by the human papillomavirus, which is transmitted through sexual relations. Arab society is considered to be a conservative and traditional society [29,84], particularly in the context of sexuality and sexual relations prior to marriage, which are a social taboo [40][41][42][43][98][99][100]. The findings of this study show that Arab mothers, without exception, vaccinate their children against the human papillomavirus at higher rates than Jewish mothers, despite the relationship between this vaccination and sexual activity. This finding can be explained by the lack of transparency that characterizes explanatory materials geared to increase awareness about the HPV vaccine among the Arab population.
In another study in which we analyzed Arabic-language explanatory materials issued by the Ministry of Health and the HMOs, we found that these materials did not refer to the sexual context of the vaccination, provided only partial information and were not culturally appropriate to Arab society [65]. Because Arab mothers are usually only exposed to information issued by the establishment and are unable to search for and process other information, it is reasonable to assume that they treat these materials as a reliable source of information and a basis for making decisions. Thus, promoting the HPV vaccine as preventing cancer serves to reframe the relationship between this vaccination and sexuality and increases the probability that the conservative Arab population will take up the HPV vaccination. Religious Jewish society exhibits a cultural resemblance to Arab society in that it is also conservative and prohibits sexual relations before marriage. Nevertheless, the findings of this study show that the religious Jewish population differs from the Arab population with respect to vaccination uptake, as reflected in lower rates of HPV vaccination. These differences can be explained by the higher level of health literacy among religious Jewish mothers compared to Arab mothers, pointing to their greater ability to search for information and learn about the scientific controversy surrounding the vaccination and its association with sexuality, thus reducing their chances of HPV vaccination uptake [2,29,75]. This study was not designed to compare the Arab minority population in Israel to other Arab minorities worldwide regarding these two vaccinations. This issue should be the topic of future research. This study has several limitations. First, the research was based on mothers' self-reports regarding their vaccination uptake, increasing the chances of report bias. Second, the study focused on the Arab population as the main research population and the Jewish population as a comparison group and did not examine subgroups in Jewish society. We recommend extending the study to the Jewish population and examining the decision-making processes regarding these two vaccinations among different Jewish subgroups. Moreover, additional research is warranted to examine mothers' decision-making with respect to various vaccinations, including identifying different variables that may have been associated with vaccination uptake over the years and detecting changes in vaccination trends, if any. --- Conclusions This pioneering research study reveals variations in vaccination uptake among different population subgroups. The study points to the important influence of variables related to trust, literacy and the legitimacy of school vaccination. It also shows that all Arabs cannot be lumped together as one monolithic group; indeed, they exhibit major differences. Examining the variables associated with uptake of the two vaccines can provide decision-makers with an empirical basis for tailoring specific and appropriate interventions to each subgroup in order to achieve the highest vaccination uptake rate possible. The research also makes an important contribution to the literature on inequity in vaccination uptake as it exemplifies the variations within broad ethnic minority groups, which should be considered in policies and in practice. Moreover, media campaigns targeting the Arab population should be segmented to appeal to the various sub-groups according to their attitudes, needs and health literacy.
The abilities and tools available to mothers must be reinforced so they can make informed decisions that are not based exclusively on trust in a third party such as the health or education system. Vaccination hesitancy is on the rise worldwide, including in Jewish society in Israel. For this reason, it is important to take the public's feelings of hesitancy into consideration and to build trust in the medical system. Note that this research was conducted before the coronavirus crisis in Israel, and it is likely that the crisis has affected vaccination uptake in Arab society as well. Future research is therefore needed to continue investigating these subgroups to examine the impact of COVID-19 on their attitudes toward vaccinations and their vaccination uptake. --- Availability of data and materials Requests for more detailed information regarding the study should be addressed to the corresponding author. --- Abbreviations HPV: Human papillomavirus; Flu: Influenza --- Supplementary Information The online version contains supplementary material available at https://doi.org/10.1186/s12939-021-01523-1. Additional file 1. Research questionnaire. --- Authors' contributions NAES carried out this research as part of her PhD dissertation under the supervision of AGE and GSM. NAES conceptualized the study, reviewed the literature, conducted the data analysis, wrote the manuscript and took full responsibility for the study. AGE provided input on the study conceptualization, the data analysis and the writing of the first drafts of the manuscript. GSM, ND, SBG and RG critically reviewed the manuscript and helped shape the final version of the manuscript. All authors approved the final manuscript. --- Declarations --- Ethics approval and consent to participate This study was approved by the ethics committee of the Faculty of Social Welfare and Health Sciences at the University of Haifa (confirmation number 118/16). All the study participants gave their consent to participate in the research. The research does not provide any medical or personal information by which any participant can be personally identified, thus ensuring anonymity. --- Consent for publication All the study participants gave their consent to publish the research. --- Competing interests The authors declare that they have no competing interests. --- Publisher's Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Background: Parents in the Arab population of Israel are known to be "pro-vaccination" and vaccinate their children at higher rates than the Jewish population, specifically against human papillomavirus (HPV) and seasonal influenza. Objectives: This study seeks to identify and compare variables associated with mothers' uptake of two vaccinations, influenza and HPV, among different subgroups in Arab and Jewish society in Israel. Methods: A cross-sectional study of the entire spectrum of the Israeli population was conducted using a stratified sample of Jewish mothers (n = 159) and Arab mothers (n = 534) from different subgroups: Muslim, Christian, Druse and Northern Bedouin. From March 30, 2019 through October 20, 2019, questionnaires were distributed manually to eighth-grade pupils (13-14 years old) who had younger siblings in the second (7-8 years old) or third (8-9 years old) grade. Results: Arab mothers exhibited a higher rate of uptake for both vaccinations (p < 0.0001; HPV: 90%, influenza: 62%) than Jewish mothers (p = 0.0014; HPV: 46%, influenza: 34%). Furthermore, the results showed that HPV vaccination uptake is significantly higher than seasonal influenza vaccination uptake in both populations. Examination of the different ethnic subgroups revealed differences in vaccination uptake. For both vaccinations, the Northern Bedouins exhibited the highest uptake rate of all the Arab subgroups (74%), followed by the Druse (74%) and Muslim groups (60%). The Christian Arab group exhibited the lowest uptake rate (46%). Moreover, the uptake rate among secular Jewish mothers was lower than in any of the Arab groups (38%), though higher than among religious/traditional Jewish mothers, who exhibited the lowest uptake rate (26%). A comparison of the variables associated with mothers' vaccination uptake revealed differences between the ethnic subgroups. Moreover, the findings of the multiple logistic regression revealed the following to be the most significant factors in Arab mothers' uptake of both vaccinations: school-located vaccination and mothers' perceived risk and perceived trust in the system and in the family physician. These variables are manifested differently in the different ethnic groups.
Introduction The novel coronavirus disease 2019 (COVID-19), caused by the SARS-CoV-2 virus, is a highly infectious disease that has spread globally. The World Health Organization (WHO) [1] declared COVID-19 a pandemic on 11 March 2020. The pandemic is an unexpected, global phenomenon that has affected people not only through direct exposure to the disease but also indirectly via its various consequences, e.g., economic ones. The COVID-19 pandemic has triggered the most profound global economic recession in the last eight decades [2]. Additionally, research shows that mental health problems associated with the pandemic extend to the general population and are not limited to individuals who have been infected [3]. Therefore, due to financial instability, the current pandemic can affect the mental health of individuals who are not at severe risk of becoming infected with COVID-19. The COVID-19 pandemic has considerably affected mental health. A review of mental health epidemiology indicates that a psychiatric epidemic co-occurs with the COVID-19 pandemic [4]. One group that is particularly susceptible to mental health deterioration during the ongoing pandemic is university students. Research has shown that student status (being a student) predicts risk of mental health deterioration [5-8]. Moreover, the education sector has been strongly disrupted by the COVID-19 pandemic [9]. Factors that contributed to students' mental health issues already in the pre-pandemic period include academic pressure [10], financial obligations that may lead to poorer performance [11], and health concerns [12]. Young age is an additional risk factor for mental health problems. Even though young adults are less susceptible to COVID-19 infection [13], they are more susceptible to mental health issues during the ongoing pandemic [14-16]. --- Post-Traumatic Stress Disorder (PTSD) and the COVID-19 Pandemic Post-traumatic stress disorder (PTSD) belongs to the category of trauma- and stressor-related disorders [17]. The DSM-4 exposure criteria for PTSD required, first, that the person experienced or was confronted with an event involving actual or threatened death or serious injury, or a threat to the physical integrity of one's self or others (A1) and, second, that the person's response involved intense fear, helplessness, or horror (A2) [17]. However, in the DSM-5, significant changes have been introduced. The DSM-5 requires certain triggers, whether directly experienced, witnessed, or happening to a close family member or friend, but exposure through media is excluded unless the exposure is work-related. In addition, the second criterion of subjective response (A2) has been removed [18]. Pandemics are classified as natural disasters, and PTSD is one of the most-studied psychiatric disorders in relation to natural disasters [19]. However, the DSM-5 definition notes that a life-threatening illness or debilitating medical condition is not necessarily a traumatic event. Therefore, it has been claimed that exposure to the COVID-19 pandemic cannot be treated as a traumatic experience causing PTSD under the new DSM-5 criteria [20]. There is an ongoing debate regarding whether the anticipatory threat of the COVID-19 pandemic can constitute a traumatic experience and, therefore, whether it can elicit psychological responses consistent with PTSD [21]. Additionally, recent research [22] strongly supports this claim and the emerging body of research in this area.
Following that research, we recognize the COVID-19 pandemic as a traumatic stressor event that can cause a PTSD-like response. Probable PTSD related to the pandemic ranges from 7% to as much as 67% in the general population [20]. A meta-analysis of 14 studies conducted during the first wave of the pandemic, between February and April 2020, revealed a high rate of PTSD (23.88%) in the general population [23]. The prevalence of PTSD in students varies widely. In a group of home-quarantined Chinese university students (n = 2485) one month after the outbreak, the prevalence was 2.7%. However, Chi et al. [24] revealed that in a sample of Chinese students (n = 2038), the prevalence of clinically relevant PTSD reached 30.8% during the pandemic. Among a large sample of French university students (n = 22,883), the rate of probable PTSD one month after the COVID-19 lockdown was 19.5% [25]. The predictors of PTSD in the Chinese university student sample were older age, knowing people who had been isolated, a higher level of anxious attachment, adverse experiences in childhood, and a lower level of resilience. However, gender, family intactness, subjective socioeconomic status (SES), and the number of confirmed cases of COVID-19 in participants' areas were not significant predictors [24]. Previous research showed that, typically, women show higher rates of PTSD than men [26]. PTSD usually occurs almost twice as often in women as in men [27]. This was also observed after natural disasters (earthquakes) among young adults [28]. However, the role of gender in PTSD prevalence was not confirmed during the COVID-19 pandemic: the meta-analysis showed that gender was not a significant moderator of PTSD [23]. Additionally, there is strong evidence that prior mental health disorders, particularly anxiety and depression, are predictors of PTSD [29]. Furthermore, previous exposure to traumatic events is a risk factor for PTSD [30]. Research has shown a significant association between exposure to COVID-19 and the severity of PTSD symptoms in university student samples [25,31]. General exposure to COVID-19 turned out to be a significant risk factor for anxiety in Czech, Polish, Turkish, and Ukrainian university students, while it was not significant for anxiety in Colombian, German, Israeli, Russian, and Slovenian students during the first wave of the pandemic [32]. The same study showed that depression risk is also associated with general exposure to COVID-19 among university students from the Czech Republic, Israel, Russia, Slovenia, and Ukraine, whereas in Colombia, Germany, Poland, and Turkey, exposure was not associated with depression risk among university students [32]. In the present study, we examine university students from six countries (Germany, Poland, Russia, Slovenia, Turkey, and Ukraine) during the first wave (W1: May-June 2020) and the second wave (W2: mid-October-December 2020) of the COVID-19 pandemic. The countries in our study represent the cultural diversity depicted by traditional vs. secular and survival vs. self-expression values. The Inglehart-Welzel World Cultural Map [33] aggregates all countries into eight clusters based on the dimensions of those values. Four of the eight value clusters are exemplified in our study: Protestant Europe is represented by Germany; Catholic Europe by Poland and Slovenia; Orthodox Europe by Ukraine and Russia; and the African-Islamic region by Turkey. Therefore, these countries represent a great diversity of global cultural values.
To present the ongoing pandemic situation in each of the six countries, we refer to the Oxford COVID-19 Government Response Tracker (OxCGRT), which enables tracking the stringency of government responses to the COVID-19 pandemic across countries and time [34]. The mean stringency index value in W1 varied between 47.91 in Slovenia and 82.64 in Ukraine. During W2, the lowest index was observed in Russia (44.80), while the highest was in Poland (75.00). The greatest increase in the OxCGRT index was noted in Slovenia, while the greatest decrease was in Ukraine. A detailed description of the stringency of restrictions in the six countries during W1 and W2 is shown in Figure 1a. Since the national restrictions mainly concern workplace closings and economic measures, we assumed that in the countries that significantly eased the restrictions during W2 (e.g., Russia), the proportion of university students who reported exposure to the COVID-19 pandemic in terms of losing a job and deterioration of economic status would be lower during W2. We also analyzed the mean number of daily new cases and deaths based on an interactive web-based dashboard to track COVID-19 [35] (mean of the first and the last day of conducting the study in each country during the first and the second wave). The data on the mean number of daily cases presented in Figure 1b and on the mean number of deaths in Figure 1c show that in four countries (Germany, Russia, Turkey, and Ukraine), despite the higher number of daily cases and deaths due to COVID-19 during W2, the stringency of restrictions decreased. The largest increase in daily cases and deaths during W2 compared to W1 was noted in Poland, Russia, Turkey, and Ukraine. Our second hypothesis was that in countries with a higher number of cases and deaths during W2, the proportion of students reporting higher exposure to COVID-19 (symptoms, testing, hospitalization, being in a strict 14-day quarantine, having infected friends/family, and experiencing the death of friends/relatives) would be higher in W2 than in W1. The main aim of this study was to examine the differences in exposure to the COVID-19 pandemic in university students from Germany, Poland, Russia, Slovenia, Turkey, and Ukraine between the first wave (W1) and the second wave (W2) of the COVID-19 pandemic. We expected significant differences in various aspects of exposure to COVID-19 depending on country, which might be interpreted in the context of the stringency of restrictions and the number of daily cases and deaths due to the coronavirus. In this study, we acknowledge the COVID-19 pandemic as a traumatic stressor event that can cause a PTSD-like response. The second aim is to reveal whether different aspects of exposure to COVID-19 (symptoms, testing, hospitalization, being in quarantine, having infected friends/family, experiencing the death of friends/relatives, losing a job, worsening of economic status), together with previously diagnosed mental health problems (depression, anxiety, PTSD) and gender, predict coronavirus-related PTSD severity risk in international samples of university students from six countries during W2. This study fills a gap in the literature related to the link between exposure to the COVID-19 pandemic and coronavirus-related PTSD during the second wave of the pandemic among students from six countries. --- Materials and Methods --- Participants The required sample size for each country group was computed a priori using the G*Power software (Düsseldorf, Germany) [36].
To detect a medium effect size of Cohen's w = 0.3 with 95% power in a 2 × 2 contingency table, df = 1 (two groups in two categories each, two-tailed), α = 0.05, G*Power suggests we would need 145 participants in each country group (non-centrality parameter λ = 13.05; critical χ2 = 3.84; power = 0.95). All the respondents were eligible for the study and confirmed their student status (being a current university student). The cross-sectional study was conducted in six countries with a total of 1684 students during the first wave of the pandemic: Germany (n = 270, 16%), Poland (n = 300, 18%), Russia (n = 285, 17%), Slovenia (n = 209, 13%), Turkey (n = 310, 18%), and Ukraine (n = 310, 18%); and with a total of 1741 during the second wave: Germany (n = 276, 16%), Poland (n = 341, 20%), Russia (n = 274, 15%), Slovenia (n = 206, 12%), Turkey (n = 312, 18%), and Ukraine (n = 332, 19%). The total sample of German students was recruited from the University of Bamberg during the first measurement (W1) (n = 270, 100%) and the second measurement (W2) (n = 276, 100%). The Polish sample during W1 consisted of 300 students recruited from Maria Curie-Sklodowska University (UMCS) in eastern Poland (n = 149, 49%) and from the University of Opole (UO) in the south of Poland (n = 151, 51%). During W2, the Polish sample comprised 341 students from the same universities: UMCS (n = 57, 17%) and UO (n = 284, 83%). There were 285 Russian students in W1 and 274 in W2. Russian students were recruited from universities located in St. Petersburg: Peter the Great St. Petersburg Polytechnic University (W1: n = 155, 54%; W2: n = 156, 54%), Higher School of Economics (HSE) University (W1: n = 90, 31%; W2: n = 39, 14%), and St. Petersburg State University of Economics and Finance (W1: n = 42, 15%; W2: n = 78, 29%). The total sample in Slovenia comprised students recruited from the University of Primorska in Koper during W1 (n = 209, 100%) and W2 (n = 206, 100%). During W1, Turkish students were recruited from eleven Turkish universities, mostly located in eastern Turkey: Bingol University, Bingöl (n = 148, 48%); Atatürk University, Erzurum (n = 110, 35%); Muğla Sıtkı Koçman University, Muğla (n = 35, 11%); Ağrı İbrahim Çeçen University, Ağrı (n = 6, 2%); Fırat University, Elazığ (n = 3, 0.8%); Kırıkkale University, Kırıkkale (n = 1, 0.3%); Adnan Menderes University, Aydın (n = 1, 0.3%); Başkent University (n = 3, 1%); Boğaziçi University (n = 1, 0.3%); Dicle University, Diyarbakır (n = 1, 0.3%); and Istanbul University (n = 1, 0.3%). During W2, Turkish students were recruited from seven Turkish universities: Atatürk University, Erzurum (n = 110, 35%); Ağrı İbrahim Çeçen University, Ağrı (n = 71, 23%); Bingol University, Bingöl (n = 57, 18%); Iğdır University, Iğdır (n = 26, 8%); Muğla Sıtkı Koçman University, Muğla (n = 20, 7%); Başkent University (n = 16, 5%); and Bursa Uludağ University, Bursa (n = 12, 4%). Ukrainian students represented Lviv State University of Physical Culture (W1: n = 310, 100%; W2: n = 332, 100%). Female students constituted 70% of the sample (n = 1174) during W1 and 73% (n = 1275) during W2. The majority of the participants lived in rural areas and small towns in W1 (n = 1021, 61%) and in W2 (n = 1029, 59%). Most students were in first-cycle (bachelor's level) studies (W1: n = 1269, 75%; W2: n = 1324, 76%).
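As an illustrative aside, the a priori sample-size calculation reported at the beginning of this subsection can be reproduced outside G*Power. The following minimal Python sketch (not the authors' code; it only uses the parameters stated above: w = 0.3, α = 0.05, power = 0.95, df = 1) recovers the same critical value, non-centrality parameter, and requirement of 145 participants per group.

```python
# Minimal sketch reproducing the a priori power analysis described above
# (chi-square test, df = 1, two-tailed alpha = .05, power = .95, medium effect w = 0.3).
# Expected output: critical chi2 ~ 3.84, lambda ~ 13.0, N per group = 145.
import math
from scipy.stats import chi2, ncx2
from scipy.optimize import brentq

alpha, power, w, df = 0.05, 0.95, 0.3, 1
crit = chi2.ppf(1 - alpha, df)  # critical value of the central chi-square distribution

# Non-centrality parameter at which the test first reaches the target power.
lam = brentq(lambda nc: (1 - ncx2.cdf(crit, df, nc)) - power, 0.01, 100)

# For the chi-square test, lambda = N * w**2, so N = lambda / w**2.
n_required = math.ceil(lam / w**2)
print(f"critical chi2 = {crit:.2f}, lambda = {lam:.2f}, N per group = {n_required}")
```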
The average age was 22.80 (SD = 4.65) in W1 and 22.73 (SD = 3.86) in W2. The median age was 22. Students reported prior professional diagnoses of depression (n = 356, 20.40%), anxiety (n = 287, 16.50%), and PTSD (n = 205, 11.80%). Data regarding previous diagnoses were not collected in Germany due to an electronic problem. The sociodemographic profiles of the participants in W1 and W2 are highly similar and comparable. Detailed descriptive statistics and previous diagnoses of depression, anxiety, and PTSD for each country during W1 and W2 are presented in Table 1. All the questions included in the Google Forms questionnaire were answered in Poland, Russia, Slovenia, Turkey, and Ukraine. In those countries, participants could not omit any response; therefore, there were no missing data. However, in the German sample, the study was conducted via SoSci Survey, and there were missing data (n = 5, 0.02%). Therefore, hot-deck imputation was introduced to deal with the low number of missing data in the German sample. --- Study Design This repeated cross-sectional study among students from Germany, Poland, Russia, Slovenia, Turkey, and Ukraine was conducted during the first wave (W1) (May-June 2020) and the second wave (W2) (mid-October-December 2020) of the pandemic. The first measurement (W1) results concerning depression and anxiety have already been described in a previous publication [32]. The cross-national first measurement was conducted online between May and June 2020 in the following countries: Germany (2-25 June), Poland (19 May-25 June), Russia (1-22 June), Slovenia (14 May-26 June), Turkey (16-29 May), and Ukraine (14 May-2 June). The second measurement during W2 was conducted between mid-October and December 2020 in Germany (15 October-1 November), Poland (11 November-1 December), Russia (28 October-8 December), Slovenia (10 October-15 December), Turkey (18 November-8 December), and Ukraine (15 October-15 November). The survey study was conducted via Google Forms in all countries except Germany, which used the SoSci Survey platform. The invitation to participate in the survey was sent to students by researchers via various means, e.g., the Moodle e-learning platform, student offices, email, or social media. The average time of data collection was 23.26 min (SD = 44.03). In Germany, students were offered the possibility of entering a lottery for a 20 EUR Amazon gift card in W1 and a 50 EUR gift card in W2 as an incentive to participate. No form of compensation was offered as an incentive to participate in the five other countries. To minimize sources of bias, the student sample was highly diversified regarding its key characteristics: the type of university, field of study, and cycle of study. Sampling was purposive; the selection criterion was university student status. The study followed the ethical requirements of anonymity and voluntary participation. --- Measurements --- Sociodemographic Survey Demographic data included questions regarding gender, place of residence (village, town, city, agglomeration), the current level of study (bachelor, master, postgraduate, doctoral), field of study (social sciences, humanities and art, natural sciences, medical and health sciences), the year of study, and the study mode (full-time vs. part-time). The questionnaire was primarily designed in Polish and English.
In the second step, it was translated from English to German, Russian, Slovenian, Turkish, and Ukrainian using backward translation by a team consisting of native speakers and psychology experts, according to guidelines [37]. The participants were asked about their previous medical conditions regarding depression, anxiety, and PTSD diagnosed by a doctor or other licensed medical provider. The answer 'yes' was coded as 1, 'no' as 0. --- Self-Reported Exposure to COVID-19 Exposure to COVID-19 [38] was assessed based on eight questions regarding the COVID-19 pandemic in terms of (1) symptoms that could indicate coronavirus infection; (2) being tested for COVID-19; (3) hospitalization due to COVID-19; (4) experiencing strict quarantine for at least 14 days, in isolation from loved ones, due to COVID-19; (5) coronavirus infection among family, friends, or relatives; (6) death among relatives due to COVID-19; (7) losing a job due to the COVID-19 pandemic (the person or their family); and (8) experiencing a worsening of economic status due to the COVID-19 pandemic. Participants marked their answers to each question, coded as 0 = no and 1 = yes. Each aspect of exposure to COVID-19 was analyzed separately. The self-reported exposure to COVID-19 items were developed based on the methodology proposed by Tang et al. [31]. --- Coronavirus-Related PTSD Coronavirus-related PTSD was assessed using the 17-item PTSD Checklist-Specific Version (PCL-S) [39] on a five-point Likert scale ranging from 1 = not at all to 5 = extremely, with the total score ranging from 17 to 85. Higher scores indicate higher PTSD levels. A lower cutoff score (25) [40] is used for screening purposes, whereas higher cutoff points (44 and 50) [41] are intended to minimize false-positive diagnoses. We used the PCL-S based on the DSM-4 because we wanted to be sure that we measured coronavirus-related PTSD; the COVID-19 pandemic was designated as the specific stressful event. Therefore, we utilized the specific version and asked about symptoms in response to a specific stressful experience: the COVID-19 pandemic. We also added the COVID-19 pandemic aspect to each of the items. Participants estimated how much they were bothered by this specific problem (the COVID-19 pandemic) in the past month. Thus, we explored not general PTSD but specific stressful-event-related PTSD. Cronbach's α in the total sample in this study was 0.94. --- Stringency Index We used the Oxford COVID-19 Government Response Tracker (OxCGRT) to portray the stringency of government responses to the COVID-19 pandemic across countries and time [34]. The stringency level is composed of various indicators. It covers community mobility measures (restrictions on gatherings, workplace closings, public school closings, cancellation of public events, stay-at-home requirements, transport closings, international travel restrictions, and restrictions on internal movement), economic measures (fiscal measures, income support, debt/contract relief, and international support), and public health measures (testing policy, public information campaigns, contact tracing, investment in vaccines, emergency investment in health care, vaccination, and facial coverings). The stringency of government responses is the reaction to the spread of the pandemic in each country. These measurements are rescaled to a value ranging from 0 to 100, where 100 denotes the strictest restrictions. The timing was crucial for the stringency-level evaluation. The stringency rate in this study was calculated as the mean of the stringency index values on the first and the last day of data collection in each country. This index portrays the pandemic situation for the general population in each country well.
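To make this calculation concrete, the sketch below averages the index over the first and last day of the W2 data collection windows reported in the Study Design subsection. It is illustrative only: the file name and the CountryName, Date, and StringencyIndex column labels are assumptions about the publicly released OxCGRT data, not part of the study's materials.

```python
# Illustrative sketch: mean OxCGRT stringency index over the first and last day of W2
# data collection in each country. The CSV layout (CountryName, Date as YYYYMMDD,
# StringencyIndex) is an assumption for illustration, not the authors' materials.
import pandas as pd

oxcgrt = pd.read_csv("oxcgrt_timeseries.csv")

# First and last day of W2 data collection per country (dates from the Study Design).
w2_windows = {
    "Germany": (20201015, 20201101),
    "Poland": (20201111, 20201201),
    "Russia": (20201028, 20201208),
    "Slovenia": (20201010, 20201215),
    "Turkey": (20201118, 20201208),
    "Ukraine": (20201015, 20201115),
}

def mean_stringency(country: str, first_day: int, last_day: int) -> float:
    """Average of the stringency index on the first and the last day of data collection."""
    rows = oxcgrt[oxcgrt["CountryName"] == country].set_index("Date")["StringencyIndex"]
    return (rows.loc[first_day] + rows.loc[last_day]) / 2

for country, (start, end) in w2_windows.items():
    print(country, round(mean_stringency(country, start, end), 2))
```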
--- Statistical Analysis The statistical analysis included descriptive statistics: mean (M), standard deviation (SD), and 95% confidence interval (CI) with lower limit (LL) and upper limit (UL). The analysis was conducted in SPSS 27. To verify the first hypothesis regarding the change in exposure to COVID-19, we utilized the Pearson χ2 test of independence for each country and each aspect of exposure to COVID-19 separately, using a 2 × 2 contingency table. The phi (φ) coefficient was used to assess the effect size [42]: an effect size equal to 0.1 is considered a small effect, 0.3 a medium effect, and 0.5 a large effect. We also report the prevalence rate for coronavirus-related PTSD. The next step was to verify whether the various aspects of exposure to the COVID-19 pandemic are associated with coronavirus-related PTSD in university students. We conducted a multivariate logistic regression analysis for coronavirus-related PTSD risk among the international student sample from the six countries. All predictors were entered into the model simultaneously. Multiple regression models reveal risk factors in their simultaneous effect on mental health; therefore, the multivariate regression model is closer to actual psychological complexity than the bivariate model, in which particular factors independently predict mental health issues.
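For illustration, the core of the wave comparison (a Pearson χ2 test of independence on a 2 × 2 wave-by-response table, with φ as the effect size) can be sketched as follows. This is not the authors' SPSS syntax, and the counts used here are invented.

```python
# Illustrative sketch of the W1-vs-W2 comparison: Pearson chi-square test of independence
# on a 2 x 2 table (wave x yes/no exposure item) with the phi coefficient as effect size.
# The counts below are invented for demonstration only.
import numpy as np
from scipy.stats import chi2_contingency

def wave_comparison(yes_w1: int, n_w1: int, yes_w2: int, n_w2: int):
    table = np.array([[yes_w1, n_w1 - yes_w1],
                      [yes_w2, n_w2 - yes_w2]])
    stat, p, dof, _ = chi2_contingency(table, correction=False)
    phi = np.sqrt(stat / table.sum())  # phi = sqrt(chi2 / N) for a 2 x 2 table
    return stat, p, phi

stat, p, phi = wave_comparison(yes_w1=40, n_w1=300, yes_w2=85, n_w2=340)
print(f"chi2 = {stat:.2f}, p = {p:.4f}, phi = {phi:.2f}")  # phi: 0.1 small, 0.3 medium, 0.5 large
```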
--- Results The Pearson χ2 test of independence showed significant differences between the measurements in W1 (May-June 2020) and W2 (mid-October-December 2020) in each of the six countries regarding the various aspects of self-reported exposure to COVID-19. The φ coefficient value allowed for the assessment of the effect size [42]. --- Comparison of Self-Reported Exposure to the COVID-19 Pandemic A significantly higher proportion of students experienced symptoms of coronavirus infection during the second wave in the total international sample of university students, although the effect size was small. Similarly, in Poland, Russia, Slovenia, and Turkey, the proportion of students experiencing COVID-19 symptoms was significantly higher in W2, although the effect size was small. A significant medium effect size was noted in Ukraine; therefore, the most pronounced increase in the proportion of students experiencing COVID-19 symptoms during the second wave was observed in Ukraine. The one country with no significant effect was Germany: university students in Germany, unlike students from the other five countries, did not report higher exposure to the infection in the second wave. However, a significant medium effect size was observed in German students regarding testing for coronavirus. In all other countries and the total sample, the effect was also significant but small. Therefore, all university students reported a higher number of tests in W2, but the difference was greatest in Germany. Exposure to being hospitalized for coronavirus was relatively rare: only five participants (0.30%) in W1 and 21 (1.21%) in W2 answered yes to this question in the total sample. However, the difference was significant. A significantly higher proportion of students was hospitalized in Poland and Turkey during W2, although the effect size was small. In Germany, Russia, Slovenia, and Ukraine, the difference was not significant. A higher proportion of students experienced being in strict quarantine during W2 than W1 in Poland, Turkey, Ukraine, and the total sample. However, in Germany, Russia, and Slovenia, the differences were trivial. In all countries and the total international sample, exposure to friends or relatives infected with COVID-19 was higher during W2 than W1. A large significant effect was observed in Turkey, a medium effect in Ukraine and the total sample, and a small effect in Germany, Poland, Russia, and Slovenia. Similarly, the proportion of students who experienced the loss of friends or relatives due to COVID-19 increased significantly during W2 compared to W1. A medium effect was observed in Turkey, while a small effect was found in all other countries and the international sample. The proportion of students who experienced losing a job due to the COVID-19 pandemic was significantly lower during W2 than W1 in the international sample and in Ukraine, although the effect sizes were small. There was no significant drop in Germany, Poland, Russia, and Turkey. Mixed results were observed regarding the self-reported deterioration of economic status due to the pandemic. In the total sample, the difference between W1 and W2 was trivial. However, an increase in the proportion of students declaring that their economic status worsened was observed in Poland, while a significant drop in the proportion of students reporting worse economic status during W2 was noted in Russia. All effects regarding this aspect of exposure were small. There were no significant differences in Germany, Slovenia, Turkey, and Ukraine. The results of the comparison are shown in Table 2. --- Descriptive Statistics and Prevalence of Coronavirus-Related PTSD Descriptive statistics showed that the mean value of coronavirus-related PTSD was 38.08 (SD = 15.49) among students from Germany, Poland, Russia, Slovenia, Turkey, and Ukraine during W2. A detailed description is presented in Table 3 (note: M = mean; CI = confidence interval; LL = lower limit; UL = upper limit; SD = standard deviation). The prevalence of coronavirus-related PTSD risk is presented at three cutoff points, according to the recommendations in the literature [40,41]. The proportion of students with coronavirus-related PTSD risk at the three cutoff scores (25, 44, and 50) is presented in Table 4. --- Logistic Regression for Coronavirus-Related PTSD Risk Multivariate logistic regression for coronavirus-related PTSD risk during the second pandemic wave showed significant models for moderate, high, and very high risk of PTSD among the international sample of university students from Germany, Poland, Russia, Slovenia, Turkey, and Ukraine. The predictors were the eight aspects of self-reported exposure to COVID-19, controlling for gender and previous clinical diagnoses of depression, anxiety disorder, and PTSD. All predictors were included simultaneously using the enter method. Results are presented in Table 5. The model of moderate risk of coronavirus-related PTSD (Cutoff Point 25) revealed only three of the eight items describing exposure to the coronavirus pandemic to be relevant predictors: experiencing COVID-19 symptoms (Item 1), COVID-19 infection among friends and family (Item 5), and the deterioration of economic status due to the pandemic (Item 8).
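Before turning to the individual predictor estimates, a minimal sketch of how such a model can be fitted is given below. It is illustrative only: the column names, the toy data, and the use of statsmodels rather than SPSS are assumptions, not the authors' implementation. The outcome is the PCL-S total dichotomized at a given cutoff, and adjusted odds ratios are obtained by exponentiating the coefficients.

```python
# Illustrative sketch (hypothetical column names, toy data): multivariate logistic
# regression of a dichotomized PCL-S score on the eight exposure items, gender, and
# prior diagnoses, with adjusted odds ratios and 95% confidence intervals.
import numpy as np
import pandas as pd
import statsmodels.api as sm

PREDICTORS = [f"exposure_item_{i}" for i in range(1, 9)] + [
    "female", "prior_depression", "prior_anxiety", "prior_ptsd"]

def ptsd_risk_model(df: pd.DataFrame, cutoff: int) -> pd.DataFrame:
    X = sm.add_constant(df[PREDICTORS])
    y = (df["pcl_s_total"] >= cutoff).astype(int)   # PCL-S total ranges from 17 to 85
    fit = sm.Logit(y, X).fit(disp=False)            # all predictors entered simultaneously
    ci = fit.conf_int()
    return pd.DataFrame({"adjusted_OR": np.exp(fit.params),
                         "CI_lower": np.exp(ci[0]),
                         "CI_upper": np.exp(ci[1]),
                         "p": fit.pvalues})

# Toy data and usage: one model per cutoff (25, 44, 50), as in the study design.
rng = np.random.default_rng(1)
toy = pd.DataFrame(rng.integers(0, 2, size=(800, len(PREDICTORS))), columns=PREDICTORS)
toy["pcl_s_total"] = rng.integers(17, 86, size=800)
for cutoff in (25, 44, 50):
    print(f"Cutoff {cutoff}:\n{ptsd_risk_model(toy, cutoff).round(2)}\n")
```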
Students who experienced COVID-19 symptoms and those whose family or friends were infected had 1.5 times higher odds of moderate risk of PTSD. Those who reported worsening economic status due to the pandemic were almost two and a half times more likely to be in the moderate PTSD-risk group. In addition, female students were two times more likely to develop moderate PTSD risk, and coronavirus-related PTSD was three times more likely among students with a previous clinical diagnosis of PTSD. The regression models for high and very high risk of PTSD revealed a different set of predictors; in those two models, the significant predictors were the same, with similar adjusted odds. Students who had a family member or friend die from coronavirus infection were twice as likely to be in a coronavirus-related PTSD-risk group. Additionally, students exposed to the COVID-19 pandemic in terms of losing a job (their own or a family member's) and worsening economic status were 1.6 times and 1.8 times more likely, respectively, to be in a (very) high coronavirus-related PTSD-risk group. Among demographic factors, female gender and previous diagnoses of depression and PTSD were associated with a twofold higher risk of coronavirus-related PTSD. --- Discussion In this study, we showed the significance of the differences in aspects of exposure to the COVID-19 pandemic in university students from Germany, Poland, Russia, Slovenia, Turkey, and Ukraine between the first wave (W1) and the second wave (W2) of the COVID-19 pandemic with regard to the stringency index. We also showed the prevalence and predictors of coronavirus-related PTSD. To the authors' knowledge, this is the first study undertaking this theme among university students from six countries during W2. Our study revealed differences in exposure to COVID-19 among university students in Germany, Poland, Russia, Slovenia, Ukraine, and Turkey during W1 (May-June 2020) and W2 (mid-October-December 2020). The prevalence of coronavirus-related PTSD risk at the 25, 44, and 50 cutoff scores was 78.20%, 32.70%, and 23.10%, respectively, during W2. We also developed prediction models of coronavirus-related PTSD risk for each cutoff score in the international sample of university students during W2. We expected that in countries such as Russia, where the restrictions were significantly eased during W2, the worsening of economic status and job loss due to the COVID-19 pandemic would significantly decrease. The mean stringency of restrictions in the six countries was lower during W2 compared to W1. The proportion of students in the international sample who had lost a job during W2 was significantly lower compared to W1, whereas the proportion of students whose economic status worsened due to the pandemic was not significantly different during W2. Thus, the experience of job loss (by a student or a family member) was more evident during W1 (31%) than W2 (25%). However, the deterioration of economic status was still on the rise during W2 (although the change was not significant) and concerned over half of the international student sample (55%). The lowest proportion of students exposed to worsening economic status during W2 was noted in Germany (29.92%), while the highest proportions (over 50%) were found in Poland, Ukraine, and Turkey, at 72.14%, 70.41%, and 63.78%, respectively.
In contrast, the proportion of French students who reported a loss of income was significantly lower and reached only 18.30% in June-July 2020 [25]. In accordance with our expectations, the rate of students who experienced worsening economic status due to the pandemic was significantly lower in Russia during W2, following the significant easing of restrictions there, whereas it was higher in Poland, where the restrictions were more stringent. In congruence with Hypothesis 2, exposure to COVID-19 in the total sample of students increased. During W2, a higher proportion of students in all countries except Germany reported experiencing symptoms of COVID-19 compared to W1, even though the number of new daily cases in the general German population was almost 20 times higher during W2 (n = 7762) than during W1 (n = 392). On the other hand, the difference in the frequency of testing for COVID-19 was the largest in the German sample. Therefore, although the proportion of German students who had infected friends/family or lost a loved one was higher during W2, the proportion of German students who experienced COVID-19 symptoms did not increase; this might be due to the significant increase in testing among German students. There was significant growth in the percentages of students hospitalized and of students in strict quarantine in Poland and Turkey. Additionally, in Ukraine, the proportion of students in a compulsory 14-day quarantine was elevated during W2. In congruence with the numbers in the general population, the percentage of students who experienced losing a family member or friends due to COVID-19 was higher in all countries. However, the largest increase in daily coronavirus-related deaths was among the Polish and Russian general populations, whereas among the student populations, the highest increase was reported in Turkey. Similar to previous research among Turkish students [43], it would seem that the student sample was overexposed to the bereavement experience. However, there were concerns regarding the reliability of COVID-19 data in Turkey, as it appeared that the prevalence of the disease (particularly total deaths) might be underreported [44,45]. The mean coronavirus-related PTSD score in the international sample of students from the six countries in this study exceeded the lowest cutoff score (25), which is used for screening purposes [40]. The prevalence at this cutoff point was very high and indicated that 78.20% of students in this study are at coronavirus-related PTSD risk. Every third student (32.70%) is at high PTSD risk (Cutoff Point 44), and almost every fourth student (23.10%) is at very high PTSD risk (Cutoff Point 50). The higher cutoffs are used to minimize false-positive diagnoses [41]. The prevalence of PTSD risk at the beginning of the first wave of the COVID-19 pandemic in young adults in the USA [46] and China [16], with the use of the PCL-C, was 32% (Cutoff Point 44) and 14% (Cutoff Point 38), respectively. Research using the PCL-5 at Cutoff Point 32 in the general population showed a total of 7% of people experiencing post-traumatic stress symptoms in a Chinese sample (January/February 2020, cutoff score 33) [47] and 13% in five western countries [22]. However, an Italian general sample, using a modified 19-item PCL-5-based PTSD questionnaire, revealed a total of 29% of people experiencing PTSD symptomatology [48]. The highest prevalence (67% demonstrating a high PTSD level) was found in a general Chinese population, with a different measurement (IES-R) [49].
Various measurements and cutoff scores hinder comparisons with our sample. Additionally, the studies cited above were conducted during the first wave of the pandemic. However, referring to the specific cutoff score of 44, the prevalence of coronavirus-related PTSD risk in the student sample in our study during the second wave (33%) was similar to that among young adults in the USA (32%) [46]. On the other hand, the PCL-C version used in that study was general and did not refer to COVID-19 as a specific stressful event [46], unlike the measure used in our study. In contrast, a single-arm meta-analysis [50] of 478 papers and 12 studies showed that the prevalence of PTSD in the general population during the COVID-19 pandemic was 15%, which is significantly lower than among the students in this study. There are inconsistent data regarding the prevalence of PTSD in the student population. In French university students one month after the COVID-19 lockdown, the prevalence of PTSD risk measured by the PCL-5 (Cutoff Score 32) was 19.50% [25]. Among Chinese college students, using the abbreviated PCL in February 2020, the prevalence was 31% [24]. The smallest prevalence, reaching 2.7%, was noted in Chinese university students [31]; the measurement in that study was the PCL-C, with a cutoff score of 38. Repeated cross-sectional research among French students revealed that 16.40% of students developed probable PTSD at the second measurement. This increase at the second measurement [25] may help explain the high prevalence at the screening level (Cutoff Point 25) in our sample (78.20%). The prediction models for coronavirus-related PTSD risk differed by severity of risk with regard to exposure in terms of experiencing symptoms of COVID-19, testing for COVID-19, and infection of friends or family members. In the prediction model of moderate PTSD risk (Cutoff Point 25), these were important factors, while in the more severe PTSD risk models (Cutoff Points 44 and 50), they were irrelevant. The significant predictors in the more severe PTSD risk models were losing a family member or friends because of COVID-19, job loss (by the participant or a family member), and worsening of economic status due to the COVID-19 pandemic. However, experiencing the loss of a friend or family member and job loss were not relevant predictors of moderate coronavirus-related PTSD risk. Testing and hospitalization for COVID-19, as well as being in strict 14-day quarantine, were not significantly associated with coronavirus-related PTSD risk in any model. These results are similar to research among Chinese students [31], in which longer home quarantine was not associated with PTSD.
However, in the French university sample, having lived through quarantine alone was a significant factor associated with probable PTSD [25]. The lack of association between quarantine experience and PTSD risk in this study may be due to the low proportion of exposed students (11%). A prior medical diagnosis of depression reported by students was associated with high and very high coronavirus-related PTSD risk. A prior PTSD diagnosis was associated with moderate and very high risk of coronavirus-related PTSD in the international sample. These results are aligned with previous findings [30]. However, a prior anxiety diagnosis was not a relevant predictor of PTSD risk in this study. Contrary to other research [23,24] showing the insignificance of gender as a moderator of PTSD among young adults during the COVID-19 pandemic, we found that female students were twice as likely to develop moderate, high, or very high coronavirus-related PTSD risk. Similar gender differences in PTSD risk were reported in previous research [26,27], including research on natural disasters [28]. This inconsistency might be due to the timing of the study: the previous research reports results from the first wave of the pandemic, whereas our results come from the second wave. Over this longer period, gender differences may have become more pronounced among students. --- Limitations There are some limitations to the present study. First, the study is of a repeated cross-sectional character and is not longitudinal. Second, the study utilized self-report questionnaires; therefore, the results might be subject to retrospective response bias. Additionally, the research sample is a convenience sample. The samples were limited to specific regions in each country and are not representative of the national student populations, which limits the generalizability of the results, particularly in the Turkish case, where the majority of students came from a highly volatile region of eastern Turkey. Additionally, we utilized the PCL-S based on the DSM-4 instead of the PCL-5 based on the DSM-5. However, the PCL-S enables the measurement of PTSD with regard to a specific stressful experience: the COVID-19 pandemic. The majority of participants were female students (70%); however, this balance reflects the real gender balance in most of the surveyed countries, where the percentage of female students reaches 60% [51-54]. Considering the limitations and strengths of this study, future research should examine exposure and coronavirus-related PTSD from a cross-cultural perspective, using a longitudinal design in a representative sample. It should be noted that this study was conducted before the introduction of open public vaccination programs. Access to vaccination could be expected to mitigate the negative psychological aspects of the COVID-19 pandemic. However, students have ambivalent attitudes towards vaccination programs, particularly non-medical students [55]. Therefore, this access might also be a source of psychological distress in the future. --- Conclusions This study shows that, besides exposure to COVID-19 symptoms, the loss of relatives because of COVID-19, female gender, and a prior diagnosis of a mental health disorder, the economic aspect of the pandemic plays a vital role in susceptibility to high coronavirus-related PTSD risk. Even though the proportion of students who experienced worsening economic status did not increase significantly during W2, it still concerned over half of the student sample from the six countries in this study.
Therefore, additional financial support for students could mitigate coronavirus-related PTSD risk, particularly in Poland, Ukraine, and Turkey. The analysis of the stringency of government restrictions shed light on an increase in worsening economic status in Poland (where the restrictions were more stringent) and a decrease in Russia, where the restrictions were eased despite a high number of new daily cases. The German case shows the importance of frequent testing; however, this research was conducted before open public access to the COVID-19 vaccine. --- Data Availability Statement: The materials and methods are accessible at the Center for Open Science (OSF) under the title Mental Health of Undergraduates During the COVID-19 Pandemic [56]. The datasets generated and/or analyzed during the current study are available from the corresponding author on reasonable request. --- Conflicts of Interest: The authors declare no conflict of interest.
This study aimed to reveal differences in exposure to coronavirus disease during the first (W1) and the second (W2) waves of the pandemic in six countries among university students and to show the prevalence of, and associations between, exposure to COVID-19 and coronavirus-related post-traumatic stress disorder (PTSD) risk during W2. The repeated cross-sectional study was conducted among university students from Germany, Poland, Russia, Slovenia, Turkey, and Ukraine (W1: n = 1684; W2: n = 1741). Eight items measured exposure to COVID-19 (COVID-19 symptoms, testing, hospitalization, quarantine, infected relatives, death of relatives, job loss, and worsening economic status due to the COVID-19 pandemic). Coronavirus-related PTSD risk was evaluated using the PCL-S. Exposure to COVID-19 symptoms was higher during W2 than W1 among students from all countries except Germany, where, in contrast, the increase in testing was the strongest. Students from Poland and Turkey, and the total sample, were more frequently hospitalized for COVID-19 in W2. In these countries and in Ukraine, students were more often in quarantine. In all countries, participants were more exposed to infected friends/relatives and the loss of a family member due to COVID-19 in W2 than W1. A significant decrease in job loss due to COVID-19 was noted only in Ukraine. Economic status worsened during W2 only in Poland and improved in Russia, corresponding to the significant easing of restrictions in Russia and the more stringent restrictions in Poland. The prevalence of coronavirus-related PTSD risk at three cutoff scores (25, 44, and 50) was 78.20%, 32.70%, and 23.10%, respectively. The prediction models differed across levels of PTSD risk severity. Female gender, a prior diagnosis of depression, the loss of friends/relatives, job loss, and worsening economic status due to COVID-19 were positively associated with high and very high coronavirus-related PTSD risk, while female gender, a prior PTSD diagnosis, experiencing COVID-19 symptoms, having infected friends/relatives, and worsening economic status were associated with moderate risk.
Introduction 1. Background The rapid expansion of urban areas, coupled with advances in technology and the need to improve citizens' living conditions and well-being, has placed greater emphasis on the role of landscapes in the city. In response to these challenges, the "Smart City" concept has emerged, drawing on the notion of a "Smart Earth" first introduced by IBM in a thematic report in 2008 [1]. A smart city is a modernized urban environment that leverages diverse electronic methods and sensors to collect specific data, intending to manage assets, resources, and services effectively and ultimately enhance overall city operations [2]. Integrating information and communication technology (ICT) and Internet of Things (IoT) technologies into smart cities has enabled greater information transparency and digitalization of city life, empowering citizens with the tools and data they need to make informed choices on a daily basis. The concept of "Smart" (rendered in Japanese as sumāto-ka, "smartification") has garnered significant attention in urban development and intellectualization, as reflected by its widespread adoption across various industries. Key terms such as ICT, IoT, artificial intelligence (AI), and 5G are now firmly entrenched in the public consciousness. The growing prevalence of "Smart Cities" necessitates using the IoT as an information network platform, enabling the efficient collection and processing of big data. Together, these advancements aim to go beyond traditional urban development, optimizing city operations, enhancing citizens' quality of life, and providing the foundation for a smarter and more efficient urban environment. With urban development on the rise, there has been a surge in the construction of "Smart Parks", such as Haidian Park [3] and Longhu G-PARK Science Park [4] in Beijing, China, Xiangmi Park [5] in Shenzhen, China, Arashiyama Park (Nakanoshima area) [6] and the Keihanna Commemorative Park [7] in Kyoto, Japan, and the Palace Site Historical Park [8] in Nara, Japan. These parks represent an innovative approach to providing citizens with better green spaces. Within this context, our study focuses on a specific type of park, namely zoos. Zoos play a vital role in developing smart cities by serving as integral components of urban landscapes. These institutions contribute to cities' overall well-being and sustainability by providing green spaces, wildlife conservation efforts, and opportunities for education and research. In the context of smart cities, zoos act as catalysts for sustainable development and success, aligning with the core principles and objectives of these technologically advanced urban environments. The more targeted visitor traffic and richer ecological environments in zoos make their intellectualization more impactful and meaningful. Beyond their role as recreational spaces, zoos fulfill critical functions such as wildlife conservation, environmental education, and scientific research. These activities directly contribute to the sustainable development, public engagement, and technological innovation aspects of smart cities.
Zoos not only provide citizens with opportunities to connect with nature but also serve as platforms for raising awareness about biodiversity and environmental sustainability. The COVID-19 pandemic has significantly impacted zoos worldwide, with many facing operational and financial difficulties. The decrease in visitor numbers, which are one of the main sources of revenue for zoos, has severely affected their operations. In addition, the increased costs of maintaining the animals and providing them with food and other necessities have also contributed to the financial difficulties that zoos face. To cope with these challenges, zoos have implemented cost-cutting measures, reducing staff, animal collections, and conservation programs. In Japan, for example, feed costs at Tobu Zoological Park [9] increased by 5-6% in 2020. In a 2021 survey conducted by NHK, 97% of zoos in Japan said they had closed temporarily during the prior year [9]. Since then, admission revenues in the Japanese tourism sector have fallen dramatically due to a sharp decline in inbound visitors from overseas [10]. This shows that supporting and sustaining zoos during crises is crucial, given their significant contributions to animal conservation, education, and research. It is imperative to find ways to overcome these challenges and ensure the long-term viability of zoos in the context of smart cities. --- Cases and Situation Innovative smartening projects have been implemented in Japan to mitigate the negative impact of the COVID-19 pandemic on zoos. For instance, KDDI, a Japanese company, launched the "one zoo" online platform, which featured prominent zoos such as the Asahiyama Zoo and the Tennoji Zoo [11]. The platform allowed users to observe animals in real time and make donations to animal protection associations through membership purchases. Additionally, the platform rewarded users with zoo tickets or souvenirs. However, despite the developers' efforts to enhance the zoo tour experience, the project was discontinued on 31 May 2022 [11] due to a lack of online activity. The developers had not considered user feedback on each smartening project promptly and lacked objective analysis, leading to the project's failure. Another example is Tokyo Zoonet's online platform [12], Tokyo Zoovie, run by the Tokyo Zoological Park Society (Tokyo Dobutsuen Kyokai), which comprises four facilities: Ueno Zoological Gardens, Tama Zoological Park, Tokyo Sea Life Park, and Inokashira Park Zoo. The platform provides visitors with a guided tour of the four zoos using an animal map and 3D models, and VR tours are also available. In addition, Ueno Zoo is part of the Tokyo Metropolitan Park Association, and it offers smart functions in the Tokyo Parks Navi platform, such as stamp collecting, tour-route lookup, blogs, and automatic tour recommendations, making it very user-friendly. The development of smart platforms for zoos, such as "one zoo" and Tokyo Zoonet, highlights the increasing utility of intellectualization in addressing the operational and financial difficulties these institutions face. However, it is crucial to objectively assess the practicality and effectiveness of these smart functions and determine whether there is actual demand from visitors for such features.
To this end, this study aims to model and analyze these issues quantitatively, enabling zoo managers to make informed decisions regarding the zoo's development, identify potential cost savings, and gain insight into visitor needs and preferences compared to the wider market. By providing an objective and data-driven analysis of the efficacy of smart functions in zoos, this research will contribute to these vital institutions' sustainable development and success. The current state of Japanese smart zoos is preliminary, necessitating a standardized and objective set of regulations to distinguish good from poor smart implementations. At the current stage, however, most smart projects focus on multimedia functions that enhance the visitor experience, and relatively few center on big data and ecological conservation. Thus, the judgment criterion will focus on visitor feedback rather than efficacy values. The data collection component of this study takes the form of a questionnaire, asking respondents to rate the importance and performance of each smart item on a scale from 1 to 5. AHP (analytic hierarchy process) weights are calculated from these questionnaire data, and the FCEM (fuzzy comprehensive evaluation method) is employed to obtain numerical results for the objective indicators of intellectualization. In addition, IPA (importance-performance analysis) is utilized to evaluate each smart project, assess its current development status, and obtain opinions. Through this study, zoo managers can identify the appropriate direction for zoo development, achieve significant cost savings, determine visitor needs and preferences, and compare their zoo with the broader market. As demonstrated in our previous research, we have already conducted a comprehensive examination of Ueno Zoo in Tokyo using the above-mentioned methodology [13]. For the current investigation, our focus shifts to Kyoto Zoo in Kyoto City, a highly illustrative metropolis that has experienced a fiscal crisis over the past ten years [14], prompting an extensive effort to revitalize its economic landscape through a multifaceted smart city plan. This complex milieu makes Kyoto Zoo an ideal site for our research. Furthermore, our investigation aims to explore the divergences between the intellectualization of zoos as a general practice and the unique challenges and opportunities that arise from zoo development within the comprehensive framework of a smart city. --- Literature Review The current discourse in the Japanese academic community has shifted toward embracing the notion of smart zoos. However, it is important to note that the term commonly used in Japan is "Intellectualization of zoos" (Dōbutsuen no sumāto-ka), which is often regarded as an integral component of the broader smart parks concept. "SMART PARK: A TOOLKIT", from the Luskin School of Public Affairs, UCLA [15], provides a comprehensive understanding of the concept of smart parks, laying out a framework for evaluating such parks based on their spatial characteristics from the perspective of designers, park managers, and advocates. While this model offers a satisfactory level of specificity in defining various program parameters, the absence of an objective data evaluation system remains a critical gap.
Similarly, "Research on the Construction Framework of Smart Park: A Case Study of Intelligent Renovation of Beijing Haidian Park" offers a systematic approach to evaluating smart parks based on their functions [3]. However, the study does not include a comprehensive survey of tourists' emotions and objective data, limiting its applicability. To address this gap, the article "How smart is your tourist attraction? Measuring tourist preferences of smart tourism attractions via an FCEM-AHP and IPA approach" [16] adopts a pioneering approach to incorporate FCEM-AHP and IPA methods into analyzing the weighting of parks and tourism preferences. The study leverages a questionnaire to collect data and uses AHP to determine weight sets, while a fuzzy comprehensive evaluation approach is applied to derive the strengths and weaknesses of the park. Although the study model provides a comprehensive framework, it has several limitations, including the lack of clear project descriptions and illustrations in the questionnaire, resulting in limited understanding among interviewees. Additionally, many of the projects in the study require re-exploration due to changes over the past few years. Research on smart parks has recently entered an initial stage, with the establishment of frameworks for evaluating spatial characteristics and functional aspects. However, a critical gap needs to be addressed regarding objective data evaluation systems and comprehensive surveys of tourist feedback data. Previous studies have made notable contributions by adopting innovative approaches like FCEM-AHP and IPA methods to analyze park weighting, tourism preferences, and strengths and weaknesses. These studies lay a solid foundation for further research on smart zoos, particularly focusing on the Kyoto Zoo within the context of Kyoto Smart City. --- Research Purpose and Significance In another prior investigation, "Impact of Intellectualization of a Zoo through a FCEM-AHP and IPA Approach", the study pursued a methodical evaluation of the intellectualization process of Ueno Zoo [13]. The outcome revealed that Ueno Zoo is still in the nascent stage of intellectualization, with several components requiring further development for visitors to have an immersive tourist experience. Therefore, there is a pressing need to enhance the intellectualization and user-friendliness of the Tokyo zoos to create a more comprehensive and satisfactory tourist experience. Previous studies have provided a solid foundation for further research on smart zoos, with particular attention to Kyoto Zoo, utilizing the FCEM and IPA methodologies. Moreover, employing the same analytical framework would facilitate the comparative analysis of the degree of smartness and development orientation between Kyoto Zoo and Ueno Zoo. By employing the FCEM and IPA methodologies, the present study aims to quantitatively evaluate the intellectualization of Kyoto Zoo and compare it with Ueno Zoo, utilizing a consistent analytical framework. The ultimate goal is to enhance the intellectualization and user-friendliness of zoos in Japan, providing a more comprehensive and satisfactory tourist experience. --- Materials and Methods --- Study Area The selection of Kyoto Zoo as the study site was deliberate and based on several reasons. Firstly, it is the second-oldest zoo in Japan, after Ueno Zoo, and has a rich history and heritage. Secondly, Kyoto Zoo is a non-commercial entity that espouses humanistic values and promotes peace. 
In 1941, during the war, many animals at the zoo perished in a large-scale animal slaughter. Since 1942, the zoo has held memorial services almost every autumn to express gratitude and reinforce the importance of life [17]. As of June 2019, Kyoto Zoo is home to 570 animals of 123 species, comprising mammals, birds, reptiles, amphibians, and fish [18]. Hence, Kyoto Zoo is a place where visitors can appreciate animals, ponder their living conditions, experience life through animal interaction, and gain insights into human-nature relationships. It embodies a part of Kyoto's culture and revered traditions and underscores the importance of peaceful coexistence between animals and humans. Thirdly, although the zoo is not located in the city center, it is situated in the Okazaki Area of Kyoto, which is surrounded by popular tourist destinations such as the Kyoto City Kyocera Museum of Art, Okazaki Park, and the Heian Jingu Shrine, thereby ensuring a steady flow of visitors and a conducive operating environment [19]. During the financial crisis faced by Kyoto City, Kyoto Zoo appealed for assistance via SNS platforms, seeking support from local shops and donations from the community to give the animals a chance to survive [20]. Notably, a local pickle store donated radish roots and leaves not commonly consumed by humans to serve as animal food. This act served as an example of how intellectualization can contribute to regional collaboration and promote or influence certain sustainable development goals (SDGs). --- Identifying Evaluation Items of the Smart Zoo System of Japan (SZSOJ) The research process of this study is shown in Figure 1. The present study seeks to explore the unique application of the concept of intellectualization in Japanese zoos, which is closely intertwined with the urban lifestyle of Japan. To achieve this aim, we draw upon ongoing projects at Kyoto Zoo, which has been observed to have a wide range of QR codes, making this a noteworthy feature for our primary classification. Furthermore, the zoo's official ecological sustainability plan identifies the ecosystem as another primary classification item. Our survey revealed that the mobile application of Kyoto Zoo has been discontinued.
As a result, we have identified four primary classification items: QR code information function, ecology system, functions within the zoo, and official website function. The 26 secondary classification items are derived from these four primary categories. A summary of the concept definitions of these items is presented in Table 1.
QR code information function:
- Animal education science videos QR code information: QR codes for science videos can provide visitors with educational and entertaining content about animal behavior, ecology, and conservation, enhancing their understanding of the natural world.
- Regional activities QR code information: QR codes about regional activities can inform visitors about cultural and recreational activities in the zoo's surrounding area, encouraging them to explore the local community.
- Animal education science live QR code information: QR codes about live animal education events can provide visitors access to real-time animal behavior and conservation education, promoting a deeper understanding and appreciation of the zoo's mission.
- Animal protection organization QR code information: QR codes about animal protection organizations can inform visitors about partner organizations and their efforts to conserve and protect endangered species worldwide.
Ecology system:
- Ecological cycle systems: Ecological cycle systems in the zoo can sustainably manage waste, recycle resources, and maintain a healthy environment for animals and plants.
- Environmental sensors: Environmental sensors can monitor the zoo's temperature, humidity, air quality, and other environmental factors, providing data for environmental management and animal welfare.
- Automatic watering: Automatic watering systems can provide plants with appropriate amounts of water, thereby reducing water waste and ensuring plant health in the zoo.
- Eco-energy (solar power): Solar power can generate clean energy for the zoo, reducing its carbon footprint and promoting sustainable energy use.
- Ecological energy use information: Information about ecological energy use in the zoo can educate visitors about the zoo's efforts to reduce energy consumption, promote renewable energy, and protect the environment.
Functions within the zoo:
- Free WIFI: Free WIFI in the zoo can provide visitors access to online resources and enhance their overall experience.
- Electronic ticketing system: Electronic ticketing systems can streamline ticket purchasing and reduce wait times for visitors, improving their overall experience in the zoo.
- Interactive animal education: Interactive animal education can provide visitors with engaging and educational experiences, such as allowing them to interact with animals through devices or providing real-time feedback on animal behavior and health, promoting a deeper understanding and appreciation of the natural world.
- Animal state observation: Animal state observation can monitor animal behavior and health, enabling the zoo to provide appropriate care and promote animal welfare.
- Animal status detection (camera): Animal status detection cameras can detect and monitor animal behavior and health, providing data for animal welfare management and research.
- Electronic information screen: Electronic information screens can provide visitors with maps, schedules, and other relevant information about the zoo, enhancing their overall experience.
- Smart souvenir vending (photos): Smart souvenir vending machines can provide visitors with customized photo souvenirs, enhancing their zoo memories and promoting sustainable souvenir production.
Official website function:
- Official website function: The zoo's official website provides visitors with comprehensive information about the zoo's animals, exhibits, events, and services.
- Tourism SNS: The zoo uses social media platforms such as Facebook, Instagram, and Twitter to promote tourism activities and interact with visitors.
- Digital map: The digital map of the zoo is accessible on mobile devices. It provides visitors with real-time information about exhibits, events, and animal locations, facilitating navigation and enhancing the visitor experience.
--- Data Collection This study gathered data from 117 highly qualified graduate students in landscape architecture enrolled at universities in Kyoto and Chiba. To ensure the veracity and credibility of the collected data, respondents were required to log in to their personal accounts before answering the Google questionnaire. Additionally, participants confirmed that they had visited Kyoto Zoo as tourists, thus providing reliable insights into the smart zoo experience. Owing to their academic backgrounds, the respondents could evaluate the smart zoo experience from a research-based perspective, while their completed visits supported the validity of the questionnaire responses. The questionnaire was designed with two levels of indicators (Levels 1 and 2), and its items were assessed for importance and performance on a scale of 1-5. The importance scale ranged from 1 (not at all important) to 5 (very important), whereas the performance scale ranged from 1 (very poor) to 5 (very good). Graphical descriptions were included for each item to prevent misidentification, and the reliability of the questionnaire was tested to ensure its quality. For the full list of questionnaire items, please refer to Supplementary File S1. To derive meaningful insights from the collected data, the study utilized a two-stage process. In the first stage, the importance rankings obtained from the survey results were used as objective data references. The AHP was applied to determine the weight of each item, and the AHP-derived weights were then used in the FCEM, which integrates fuzzy set theory, a widely recognized approach for decision making in complex situations, with the analytic hierarchy process to evaluate complex systems. The FCEM was used to obtain the zoo's current construction-effectiveness results.
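The reliability testing mentioned above can be illustrated with a short computational sketch. This is not the authors' procedure (the study reports using SPSS, and the exact statistics are given later as reliability above 0.9 and KMO above 0.5); it is a minimal NumPy illustration, assuming Cronbach's alpha as the reliability coefficient and a hypothetical ratings array with the same shape as the survey (117 respondents, 26 items).

```python
# Illustrative sketch only: hypothetical 1-5 ratings, not the study's data.
import numpy as np

def cronbach_alpha(ratings: np.ndarray) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = ratings.shape[1]
    item_vars = ratings.var(axis=0, ddof=1)
    total_var = ratings.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

def kmo(ratings: np.ndarray) -> float:
    """Kaiser-Meyer-Olkin measure of sampling adequacy from (partial) correlations."""
    corr = np.corrcoef(ratings, rowvar=False)
    inv = np.linalg.inv(corr)
    d = np.sqrt(np.outer(np.diag(inv), np.diag(inv)))
    partial = -inv / d                      # partial correlations
    np.fill_diagonal(corr, 0.0)
    np.fill_diagonal(partial, 0.0)
    return float((corr ** 2).sum() / ((corr ** 2).sum() + (partial ** 2).sum()))

rng = np.random.default_rng(0)
ratings = rng.integers(1, 6, size=(117, 26)).astype(float)  # hypothetical Likert data
print(f"alpha = {cronbach_alpha(ratings):.3f}, KMO = {kmo(ratings):.3f}")
```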
In the second stage, the original 1-5 rating data obtained from the questionnaire were retained. The study employed IPA testing to assess the overall intellectualization construction degree and each specific item in the zoo. IPA is a widely used method for evaluating the performance of a system or product by examining the relationship between importance and performance. The results obtained from the IPA testing were then used to guide future zoo development, providing valuable insights that could be used to enhance the visitor experience and improve the zoo's overall effectiveness. --- AHP (Analytic Hierarchy Process) The analytic hierarchy process (AHP) is an essential tool for this study due to its rigorous and systematic approach to decision making. Developed by Thomas L. Saaty in the mid-1970s [21], the AHP combines qualitative and quantitative analyses to quantify group decisions and priorities. By breaking down complex problems into hierarchical structures and using pair-wise comparisons, the AHP determines the relative importance and weight of criteria and alternatives [22]. This allows decision makers to make well-informed and transparent choices based on thorough analysis. Therefore, in our study, we adopted the AHP as a recognized method for systematically and hierarchically quantifying group decisions and weights. We used pair-wise comparisons of the weights of each item to assess the relative importance of different criteria within each item. To ensure the accuracy of the pair-wise comparison process, importance rankings were collected from the questionnaire, and the resulting data were transformed into percentages on a scale of 1-9. These percentages were then used to judge the relative importance of pair-wise comparisons among all items. The rankings of relative importance, as shown in Table 2, were obtained from this process. The vector U defines each evaluated item set, and the classification is defined as follows: U = {U1, U2, U3, U4}, where each first-level category Um comprises its secondary items, Um = {Um1, Um2, ..., Umn}. The AHP method analyzes the designated items based on their importance ranking and then constructs a judgment matrix. The maximum eigenvalue of the judgment matrix, λmax, is calculated, and the corresponding eigenvector is taken as the evaluation weight vector A. A consistency test is then performed to ensure the objectivity and rationality of the judgments, because the AHP method is prone to inconsistencies in the judgment matrix when respondents are asked to compare the importance of multiple criteria. The deviation consistency index of the judgment matrix, CI, is calculated as CI = (λmax - n)/(n - 1); a higher CI value indicates poorer consistency, whereas CI = 0 represents a completely consistent matrix. The consistency ratio, CR, is calculated as CR = CI/RI, where RI is the average random consistency index. When CR < 0.1, the consistency of the judgment matrix can be considered acceptable. --- FCEM (Fuzzy Comprehensive Evaluation Method) The fuzzy comprehensive evaluation method (FCEM) is needed for this study due to its ability to handle uncertainty and imprecise information. Based on the fuzzy set theory pioneered by Lotfi Zadeh [23], the FCEM allows for representing and manipulating fuzzy and uncertain data.
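Before the FCEM calculation is described in detail, the AHP step just outlined can be sketched numerically. The fragment below is illustrative only: the 4 x 4 judgment matrix is hypothetical (it is not the matrix reported in Table 6), and the RI values are Saaty's standard random consistency indices.

```python
# Minimal AHP sketch: judgment matrix -> weights via principal eigenvector,
# then CI = (lambda_max - n)/(n - 1) and CR = CI/RI, accepted when CR < 0.1.
import numpy as np

# Saaty's average random consistency index RI for n = 1..9.
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45}

def ahp_weights(judgment: np.ndarray):
    """Return (normalized weight vector, consistency ratio) for a reciprocal matrix."""
    n = judgment.shape[0]
    eigvals, eigvecs = np.linalg.eig(judgment)
    k = np.argmax(eigvals.real)                 # index of the principal eigenvalue
    w = np.abs(eigvecs[:, k].real)
    w = w / w.sum()                             # evaluation weight vector
    lambda_max = eigvals[k].real
    ci = (lambda_max - n) / (n - 1)             # deviation consistency index
    cr = ci / RI[n] if RI[n] > 0 else 0.0       # consistency ratio
    return w, cr

# Hypothetical judgment matrix for the four first-level items; the numbers are
# illustrative and do not come from the paper's Tables 6-10.
A = np.array([[1, 2, 3, 1],
              [1/2, 1, 2, 1/2],
              [1/3, 1/2, 1, 1/3],
              [1, 2, 3, 1]], dtype=float)
w, cr = ahp_weights(A)
print("weights:", np.round(w, 3), "CR:", round(cr, 3), "acceptable:", cr < 0.1)
```

In the study itself, one such judgment matrix is built for the first-level factors and one for each group of second-level factors (Tables 6-10), and each is checked with the same CR criterion.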
With its application in various fields, the FCEM enables the conversion of qualitative and uncertain assessments into quantitative measurements [24]. In this study, where visitors' perceptions of the concept of "Smart" are inherently vague, the FCEM is employed to analyze and evaluate the effectiveness of smart construction in the zoo. By utilizing the FCEM, the study aims to provide a comprehensive assessment considering multiple factors and constraints. The FCEM calculation process is carried out in two steps using MATLAB. The first step involves establishing the fuzzy judgment matrix. The degree of membership of the item set R_m can be defined as an n x 5 matrix: R_m = \begin{bmatrix} R_{m1a} & R_{m1b} & \cdots & R_{m1e} \\ R_{m2a} & R_{m2b} & \cdots & R_{m2e} \\ \vdots & \vdots & \ddots & \vdots \\ R_{mna} & R_{mnb} & \cdots & R_{mne} \end{bmatrix}. The weight set A of the first-level classification calculated by the AHP is defined as A = (A_1, A_2, A_3, A_4), and the weight set W_m of the secondary classification calculated by the AHP is defined as W_m = (W_{m1}, W_{m2}, ..., W_{mn}). As mentioned above, the symbol "m" signifies the primary classification category, while "n" denotes the number of sub-classification items; the symbols "a-e" correspond to the five-point rating system, ranging from 1 to 5. Using this method, the degree of membership of the item set R_m can be established: the raw data collected from the questionnaire are transformed into the "R" matrix, which is utilized to construct the fuzzy judgment matrix. The second step is to use the established matrices for the fuzzy comprehensive evaluation calculation as follows: C_m = W_m ∘ R_m for m = 1, ..., 4, and B = A ∘ (C_1, C_2, C_3, C_4) = (b_1, b_2, b_3, b_4, b_5). The value b_i refers to the degree of membership of the evaluated item in each evaluation criterion, corresponding to the evaluation statements ("excellent", "good", "moderate", "fair", and "poor") of the ranking system. The b_i values are obtained by performing the fuzzy calculation based on the degree of membership between the evaluation statement and the evaluated item. The highest value obtained from this calculation represents the intellectualization result of Kyoto Zoo, indicating the zoo's level of intelligence and smartness in terms of its facilities, exhibits, and services.
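To make the two FCEM steps above concrete, the following sketch applies them to small illustrative inputs. The study performed this calculation in MATLAB over all four categories; here only two categories are shown, the weights W_m and A are hypothetical, the membership rows are illustrative values of the kind reported in Tables 11-14, and the weighted-average operator is assumed for the composition "∘".

```python
# Illustrative FCEM sketch (not the paper's MATLAB code or full data set).
import numpy as np

grades = ["excellent", "good", "moderate", "fair", "poor"]

# Membership matrices R_m: rows = sub-items, columns = the five rating grades,
# i.e. the share of respondents choosing each grade for that sub-item.
R3 = np.array([[0.32, 0.29, 0.15, 0.14, 0.09],
               [0.32, 0.30, 0.20, 0.13, 0.05],
               [0.28, 0.21, 0.27, 0.13, 0.11]])
R4 = np.array([[0.31, 0.33, 0.14, 0.11, 0.11],
               [0.24, 0.28, 0.21, 0.17, 0.10]])

# Hypothetical AHP weights: W_m for the sub-items, A for the categories.
W3 = np.array([0.5, 0.3, 0.2])
W4 = np.array([0.6, 0.4])
A = np.array([0.45, 0.55])

C3 = W3 @ R3                         # fuzzy evaluation vector of category 3
C4 = W4 @ R4                         # fuzzy evaluation vector of category 4
B = A @ np.vstack([C3, C4])          # overall membership in the five grades
print("B =", np.round(B, 4), "->", grades[int(np.argmax(B))])
```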
--- IPA The importance-performance analysis (IPA) is a widely used method for evaluating customer satisfaction by measuring the gaps between customer expectations and actual perceptions [25]. Utilizing a four-quadrant diagram, this method can swiftly identify the areas requiring attention, prioritize each demand indicator, and formulate a sound implementation plan. The IPA method has proven to be an effective and straightforward approach for measuring customer satisfaction and improving the quality of service [26]. Its ease of use and practicality make it a valuable tool for businesses seeking to enhance customer satisfaction and stay ahead of the competition. To perform the IPA, the mean value was computed for each item in the original questionnaire, and the resulting means for overall performance and importance were utilized as the quadrant dividers. Figure 2 illustrates the chart that determines the position and stage of each item. --- Results --- Results of the AHP The questionnaires demonstrated excellent recovery rates, and their reliability was assessed with values above 0.9. Additionally, validity was tested using the Kaiser-Meyer-Olkin measure of sampling adequacy, with values greater than 0.5 and significance values less than 0.05. The detailed results are presented in Tables 3 and 4. The relative importance of the questionnaire items and the corresponding factors are presented in Table 5, with the AHP scores ranging from 1 to 9, reflecting the pair-wise comparisons. The AHP scores were derived from the percentages of the participants' relative judgments and indicate the priority and significance of each item in the evaluation process. The AHP method involves a systematic, pair-wise comparison of all items based on their relative importance, leading to a judgment matrix for each evaluation factor. As presented in Table 6, the judgment matrix for the first-level evaluation factors of the SZSOJ has been established using the AHP method, and Tables 7-10 display the judgment matrices for the second-level evaluation factors. The consistency of all matrices has been evaluated, and the results indicate the accuracy and validity of the AHP analysis in this study; the consistency test showed that the weight set obtained through the AHP is valid and reasonable. The AHP analysis yielded varying weight values for each item, highlighting differences in their relative importance. For instance, U3 (Functions within the zoo) in the first-level catalog had a weight value of 3.819%. In comparison, U11 (Plants' QR code information) and U18 (Animal education science videos QR code information) had a 6.917% weighting in the second-level catalog, whereas U14 (Questionnaire research QR code information) had a weight value of only 1.162%. Similarly, U19 (Regional activities QR code information) had a 1.814% weighting, and U110 (Animal education science live QR code information) and U111 (Animal protection organization QR code information) had a combined weight of 3.349%. The weight of U22 (Environmental sensors) was 3.355%, and that of U25 (Ecological energy use information) was 7.183%. On the other hand, U31 (Free WIFI) and U34 (Animal state observation) had weights of 5.38%, while U37 (Smart souvenir vending (photos)) had a weight of 2.15%, and U42 (Tourism SNS) had a weight of 5.49%. Interestingly, these weights were lower than expected, suggesting that visitors or citizens may not necessarily share the same expectations as researchers or designers regarding the envisioned smart features. --- Results of FCEM The exact values for each second-level evaluation factor of the questionnaire can be found in Tables 11-14.
Based on the membership degrees of the item sets, the fuzzy judgment matrices R_1 and R_2 are constructed in the same way from the values reported in Tables 11-14, and R_3 and R_4 are: R_3 = \begin{bmatrix} 0.32 & 0.29 & 0.15 & 0.14 & 0.09 \\ 0.32 & 0.30 & 0.20 & 0.13 & 0.05 \\ 0.28 & 0.21 & 0.27 & 0.13 & 0.11 \\ 0.32 & 0.26 & 0.21 & 0.12 & 0.09 \\ 0.29 & 0.25 & 0.20 & 0.13 & 0.14 \\ 0.34 & 0.22 & 0.25 & 0.12 & 0.07 \\ 0.20 & 0.33 & 0.23 & 0.10 & 0.14 \end{bmatrix}, R_4 = \begin{bmatrix} 0.31 & 0.33 & 0.14 & 0.11 & 0.11 \\ 0.24 & 0.28 & 0.21 & 0.17 & 0.10 \\ 0.32 & 0.22 & 0.21 & 0.15 & 0.10 \end{bmatrix}. Afterwards, the first-level fuzzy comprehensive evaluation result is obtained from the assessment vectors C_m and the corresponding weight vector A, i.e., C_m = W_m ∘ R_m for m = 1, ..., 4 and B = A ∘ (C_1, C_2, C_3, C_4) = (0.2694, 0.2682, 0.2034, 0.1317, 0.1298). The fuzzy comprehensive evaluation approach commonly determines the result using the maximum-membership-degree principle. Upon analysis of vector B, the membership-degree values corresponding to the ranking system's categories of "excellent", "good", "moderate", "fair", and "poor" are 0.2694, 0.2682, 0.2034, 0.1317, and 0.1298, respectively. The highest membership-degree value, 0.2694, is attributed to the "excellent" category. Therefore, the SZSOJ evaluation score for Kyoto Zoo is 0.2694, which corresponds to an "excellent" rating. This finding indicates that the intellectualization construction efforts of Kyoto Zoo are commendable, resulting in high levels of visitor satisfaction and agreement with the zoo's intellectualization initiatives. --- Results of IPA The arithmetic mean of all factor scores was calculated using SPSS 21.0 software on the unprocessed data collected from the questionnaire, as tabulated in Tables 15 and 16. The generated IPA matrices are graphically depicted in Figures 2 and 3, which enable us to visually identify the key areas of concern and prioritize the corresponding demands.
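As a sketch of how the IPA quadrants in Figures 2 and 3 can be derived from the questionnaire means, the fragment below classifies a few first-level items against the grand-mean dividers. The scores are hypothetical placeholders chosen only to mirror the quadrant placements discussed next; they are not the values in Tables 15 and 16.

```python
# Illustrative IPA classification: item means vs. grand-mean quadrant dividers.
import numpy as np

items = {
    # name: (mean importance, mean performance), both on the 1-5 scale (hypothetical)
    "Functions within the zoo":     (3.9, 3.8),
    "Official website function":    (3.8, 3.7),
    "QR code information function": (3.3, 3.1),
    "Ecology system":               (3.2, 3.0),
}

imp = np.array([v[0] for v in items.values()])
perf = np.array([v[1] for v in items.values()])
imp_div, perf_div = imp.mean(), perf.mean()   # quadrant dividers

def quadrant(i: float, p: float) -> str:
    # Labels follow the paper's usage: above both means = first quadrant,
    # below both means = fourth quadrant.
    if i >= imp_div and p >= perf_div:
        return "first quadrant (sustain)"
    if i < imp_div and p < perf_div:
        return "fourth quadrant (lower priority / needs investment)"
    if i >= imp_div:
        return "high importance, low performance (concentrate here)"
    return "low importance, high performance (possible overkill)"

for name, (i, p) in items.items():
    print(f"{name}: {quadrant(i, p)}")
```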
The IPA results present a stark contrast to the findings from the questionnaire, as illustrated in Figure 3. Notably, Functions within the zoo (categorized under the first quadrant) exhibited a significantly higher score than the mean values in both importance and expressiveness, thus emphasizing the need for its continuous sustenance. Similarly, the Official website function (also belonging to the first quadrant) scored higher than the mean values in both importance and expressiveness, marking its significance. However, the QR code information function and the Ecology system, both falling under the fourth quadrant, received below-average scores on both parameters, indicating their lower priority in the development program. Nevertheless, with sustained investment, these functions could be improved, and their recognition and value to visitors enhanced. In summary, Functions within the zoo is the preeminent and most efficacious aspect, whereas the QR code information function and the Ecology system require additional investment to increase visitors' acknowledgment of their worth. The findings of this study underscore the need for continual refinement and enhancement of the smart features of the SZSOJ to sustain and elevate visitor satisfaction and engagement. As such, the integration of user-centered design principles and feedback mechanisms should be prioritized in developing and implementing smart features in zoo environments. By doing so, the SZSOJ can reinforce its position as a cutting-edge smart zoo and provide visitors with an exceptional and memorable experience. Figure 4 provides clear evidence that the Animal status detection (camera) function is highly valued by visitors and should therefore be prioritized for continued development and maintenance. However, the Electronic information screen, Ecological cycle systems, Animal education science live QR code information, Animal protection organization QR code information, and Artwork QR code information are less highly valued by visitors and should be given lower priority in future development efforts. Conversely, visitors have expressed an interest in Plant QR code information, indicating its potential as a feature that could be further developed. Overall, the majority of the features fall in or around the center of the graph, with some outliers in the fourth quadrant, suggesting the need for consistent development and maintenance efforts. --- Results on Satisfaction of Zoo Visitors We used the study results to derive a single measure of zoo visitor satisfaction, reflecting visitors' perceptions in the current context. Each rating level was multiplied by the corresponding item's response proportion, and the resulting item values were averaged using the weights of each first-level category, yielding a final satisfaction rating out of 5.
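The single satisfaction measure just described can be sketched as follows. All proportions and weights here are hypothetical placeholders (the study's own weighted calculation yields the value reported next); the computation simply takes the proportion-weighted expected score of each first-level category and averages the categories with their AHP weights.

```python
# Hedged sketch of the weighted satisfaction score (hypothetical numbers).
import numpy as np

levels = np.array([5, 4, 3, 2, 1])

# Hypothetical response proportions per first-level category (rows sum to 1).
proportions = np.array([
    [0.27, 0.27, 0.20, 0.13, 0.13],   # QR code information function
    [0.25, 0.26, 0.22, 0.15, 0.12],   # Ecology system
    [0.30, 0.27, 0.21, 0.12, 0.10],   # Functions within the zoo
    [0.29, 0.28, 0.19, 0.14, 0.10],   # Official website function
])
weights = np.array([0.30, 0.25, 0.20, 0.25])   # hypothetical first-level AHP weights

item_scores = proportions @ levels             # expected 1-5 score per category
overall = float(weights @ item_scores)         # weighted satisfaction out of 5
print("category scores:", np.round(item_scores, 2), "overall:", round(overall, 2))
```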
The synthesis of these factors yielded a weighted mean satisfaction score for Kyoto Zoo of 3.43 (compared with Ueno Zoo's 2.70), affirming visitors' positive sentiments; generally, a score greater than 3 indicates good satisfaction. This consolidated metric, aligned with scholarly practice, encapsulates smart features, sustainability, and visitor-centric amenities, reflecting the holistic zoo experience and deepening our understanding of how smart zoos shape visitor satisfaction. --- Discussion --- Findings from the Questionnaire The findings from the questionnaire survey conducted at Kyoto Zoo have yielded insightful results, with most items scoring similarly and showing little disparity between importance and expressiveness. However, some unexpected revelations emerged, such as Functions within the zoo being ranked the least important among the four items in the first level of classification, exhibiting a significant value gap, whereas the QR code information function was surprisingly rated as the most important. Moreover, the questionnaire collection process and results differed from those of Ueno Zoo, and the following specific observations were identified: 1. Firstly, there was a marked difference between Kyoto Zoo and Ueno Zoo in terms of questionnaire awareness. The feedback from the Ueno Zoo questionnaire revealed that many respondents were unaware of the existence of some smart functions in the park when no accompanying photos were provided. In contrast, clarity was sufficient for the completion of questionnaires at Kyoto Zoo, indicating a more thorough understanding of these smart functions among citizens. This may be attributed to the good promotion of smart features in Kyoto City's smart city project, which has fostered widespread acceptance and comprehension of smart functions among the populace [27], unlike at Ueno Zoo, where the importance and performance of many projects exhibit significant disparities. 2. Secondly, the present study examined and compared the feedback received from visitors at Kyoto Zoo and Ueno Zoo regarding the importance and performance of various smart functions. Interestingly, the results showed a significant difference between the two zoos in the importance of Functions within the zoo.
While this function was ranked the least important among the four items in the first level of classification at Kyoto Zoo, it was surprisingly ranked the most important function by respondents in the Ueno Zoo questionnaire. This may be due to the differing scale and positioning of the two zoos. Ueno Zoo, a zoo with a large flow of people in the city center and many foreign visitors, may attract visitors who pay more attention to offline interactive functions that do not require devices. In contrast, Kyoto Zoo, a regional city zoo welcoming mostly resident visitors, may attract visitors who expect newer and more innovative intelligent functions. Additionally, the respondents at Kyoto Zoo may have perceived Functions within the zoo as a basic feature that does not require much attention or specialness, as its performance is similar to that of the city streets outside the park (e.g., the free Wi-Fi at Kyoto Zoo uses Kyoto City's municipal Wi-Fi). However, visitors to both zoos were found to value Official website functions highly, showing a strong demand for official information releases. Moreover, the regional service nature of Kyoto Zoo may have contributed to the need for regional communication functions such as the QR code information function. These findings shed light on the different factors that may influence visitor perceptions and expectations of smart functions in zoos and highlight the need for zoos to carefully consider their unique visitor profiles when designing and implementing smart features. 3. Finally, we propose that the promotion of smart city projects in Kyoto City and the financial crisis of the past few years have raised awareness and expectations of smart cities, which may lead to higher average feedback scores on the importance scale in the future. --- Findings from Analytical Calculations The results of the FCEM analysis demonstrate that the intellectualization infrastructure of Kyoto Zoo is deemed "excellent" (with an FCEM evaluation score of 0.2694). This finding suggests that citizens can easily comprehend and appreciate the intellectualization features of the zoo. Although unexpected, this is a very positive outcome, as it indicates that Kyoto Zoo can effectively realize the intellectualization process within the Smart City framework, making it more accessible and integrated into citizens' daily lives. Furthermore, in contrast to the FCEM result of Ueno Zoo, which received a "fair" score, the importance of the smart city background and system is more prominently manifested in the smart zoo concept [13]. This is due to the smaller scale of Kyoto Zoo and its amiable service style. The public may therefore prioritize practical features with frequent daily uses over those that merely appear technologically advanced, akin to the higher happiness reported in small towns compared with big cities. The IPA analysis yielded results that differ significantly from the numerical importance ratings obtained from the questionnaire at the first classification level. We posit the following explanations: 1. The distinction arises from the questionnaire design, where importance is assessed solely at the first level of categorization. The respondents' direct voting on these first-level categories determines their importance, hinging on their judgment of the overarching functional categorization.
In contrast, IPA generates an average value by incorporating all respondents' responses to the second-level categorization items in the calculation. This approach is more specific and depends on the performance of each functional category's sub-items. The questionnaire's importance value stems directly from tallying the first-level categorical items, while IPA calculates the mean of their second-level categorical items. 2. Overall satisfaction (derived from direct scoring of first-level categorical items in the questionnaire) may vary based on visitors' perceptions. For instance, the QR code information function, primarily focused on digital interaction, might prompt visitors to anticipate a comprehensive zoo intelligence. Conversely, "Functions within the zoo" is a broader category found in various Japanese zoos, making it challenging to associate directly with overall intelligence satisfaction. IPA's mean value for second-level category items differs in this respect: some first-level category items may exhibit relatively lower overall satisfaction scores yet have sub-categories (e.g., "Animal status detection (camera)" within "Functions within the zoo") that garner high satisfaction, and these items consequently receive higher values in IPA's mean-value calculation. 3. The number of sub-items varies across the Level 1 categorical items. For instance, the first category, "QR code information function", encompasses 11 sub-items, whereas the fourth, "Official website function", includes only 3. This disparity in sub-item count could influence visitors' perceptions and expectations. The QR code information function, featuring numerous sub-items, might overwhelm visitors with its multitude of functions, possibly eliciting feelings of fatigue or numbness. Indeed, our subjective interviews revealed inquiries such as, "Why doesn't the zoo consolidate all these functions into one platform?" The QR code information function and the Ecology system require further development and refinement to increase public and visitor awareness of their significance in driving the park's sustainable growth. The current strengths, weaknesses, opportunities, and threats of Kyoto Zoo are summarized in the SWOT chart in Figure 5. --- Comparison and Recommendations for Ueno Zoo Based on the Impact of Kyoto Smart City Regarding system classification, both Kyoto Zoo and Ueno Zoo are classified as shown in Figure 6. First, we examine both zoos in a combined weighted order. We ranked the weights of the smart items of Kyoto Zoo (including the first-level and second-level classifications) obtained from Tables 6-10 and compared them with the items from Ueno Zoo. The weights of the first-level and second-level classified items for each ranking are shown in Figures 7 and 8. Figures 7 and 8 show that the item weights in Kyoto Zoo exhibit a higher degree of differentiation than those in Ueno Zoo: the range between the maximum and minimum values in Kyoto Zoo is more pronounced. Furthermore, Figure 8 highlights that approximately 20 sub-items in Kyoto Zoo have weights below 20%, with 6 sub-items falling below 5%, and there is a substantial disparity in the weights of the top four sub-items. In contrast, Ueno Zoo displays a relatively uniform distribution of item weights at the second classification level, resulting in a more balanced overall distribution; interestingly, even the weights of the first three items in Ueno Zoo are identical.
The magnitude of the weighting also mirrors the visitors' level of expectation. For both Kyoto Zoo and Ueno Zoo, some of the programs with a high weighting (which can be read as programs that visitors strongly anticipate) did not perform well and therefore did not end up in Quadrant 1, or even Quadrant 4, of the IPA results, indicating that the programs developed by the zoos sometimes do not correspond to the actual needs of the visitors. Secondly, we compare the two zoos' respective performances at the current stage. Concerning the overall FCEM results (Kyoto: excellent; Ueno: fair), Kyoto Zoo aligns better with visitors' perceived needs for smart features. Additionally, in terms of the single satisfaction value (Kyoto: 3.43; Ueno: 2.70), Kyoto Zoo outperforms Ueno Zoo. In other words, based on the current state of development, Kyoto Zoo's smart projects are better suited to the needs of local tourists and the collaborative development required for a zoo. Despite Ueno Zoo having more construction funds and a larger scale, visitor feedback on its current performance prompts consideration of whether more advanced intellectualization is always better, or whether finding smart projects suitable for the public represents a more favorable development concept. Thirdly, we compare the two zoos regarding the overall project categorization framework. As no unified smart management platform exists, Kyoto Zoo cannot be classified using the same criteria as Ueno Zoo at the first level; however, it can be classified based on the direction of functional development. Currently, Kyoto Zoo has fewer first-level classifications due to the lack of a smartphone application, but it has a strong QR code information function, classified as a first-level item. The Ecology system is also a primary development direction at Kyoto Zoo and a first-level item. On the other hand, the Official website function and Functions within the zoo have fewer secondary classification sub-projects. Although Kyoto Zoo scored well in the overall IPA, considering its limited classification coverage, we recommend that Kyoto Zoo increase its classification coverage in future planning for better development. Although Kyoto Zoo's current intellectualization focuses mainly on its ecology system, the fact that it is already part of the smart city development plan and has proposed regional smart equipment is an encouraging sign. It is also promising that the city's pre-existing smart facilities, such as the smart traffic system, can be integrated with the zoo's intellectualization. With the ongoing development of the city's smart infrastructure, including the use of big data, human-flow monitoring data, smart streetlights, and AI cameras, Kyoto Zoo has the potential to significantly enhance its smart capabilities. We strongly recommend that Kyoto Zoo take these opportunities into consideration when developing its future smart plans and categories.
Doing so will allow the zoo to fully leverage its position within the smart city and take its intellectualization process to the next level. --- Conclusions The primary objective of this study is to ascertain the level of intellectualization in Japanese zoos by utilizing the FCEM analysis method, with weights determined using the AHP. Additionally, this study aims to identify the current strengths and weaknesses of smart function developments in zoos through IPA and to explore the prospects of such developments. At the same time, we compared Kyoto Zoo with Ueno Zoo to examine the difference in intellectualization achievements in different contexts in terms of data and systems. Furthermore, this study investigates the differences between Kyoto Zoo, developed under the smart city system, and a conventional smart zoo. As the concept of smart zoos is relatively novel, particularly in Japan, where smart cities are still in their developmental stages, we seek to refine objective system research methods to assess the intellectualization process more objectively, ultimately aiding zoos in Japan and around the world in becoming smarter. Our study results can be compared with current policies and used to guide future developments in the field. However, it is important to note some limitations that can inform future research. Firstly, the selection of smart projects was influenced by certain characteristics unique to Kyoto Zoo, such as its different service orientation and smart project offerings, which made it difficult to compare with Ueno Zoo using the same criteria; instead, we had to rely on feedback from service recipients to analyze the questionnaire responses. We plan to conduct a further comparative study once a unified standard for smart zoos is established in Japan. Secondly, due to geographical constraints, Kyoto Zoo's lack of a cell phone application and a smart platform for unified management may have limited public perception of its smart functions. These limitations highlight the need for more comprehensive and standardized evaluations of smart zoos in the future. In addition, future studies can explore more advanced and innovative smart functions in zoos, including advanced technologies such as AI, the IoT, and big data analysis [28]. Moreover, as the concept of a smart city continues to evolve, it will be important to compare the development of smart zoos with that of other traditional parks in the city to better understand the impact of smart technology on the overall tourism industry. This can be achieved through AHP-based decision making and can expand the scope of smart research beyond individual zoo analysis. --- Data Availability Statement: Not applicable. --- Conflicts of Interest: The authors declare no conflict of interest. --- Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/land12091747/s1, Supplementary File S1: The following is the supplementary data related to this article.
The rapid pace of urbanization and the emergence of social challenges, including an aging population and increased labor costs resulting from the COVID-19 pandemic, have underscored the urgency to explore smart city solutions. Within these technologically advanced urban environments, zoos have assumed a pivotal role that extends beyond their recreational functions. They face labor cost challenges and ecological considerations while actively contributing to wildlife conservation, environmental education, and scientific research. Zoos foster a connection with nature, promote biodiversity awareness, and offer a valuable space for citizens, thereby directly supporting the pillars of sustainability, public engagement, and technological innovation in smart cities. This study employs a quantitative analysis to assess the alignment between smart projects and the distinctive characteristics of Kyoto Zoo. Through questionnaires, we collected feedback on performance and importance, and subsequently employed the analytic hierarchy process and the fuzzy integrated evaluation method to obtain quantitative results. The findings reveal the high level of intelligence exhibited by Kyoto Zoo, and the analysis provides insightful guidance that can be applied to other urban facilities. At the same time, we compared Kyoto Zoo with Ueno Zoo to see the difference in intellectualization achievements in different contexts in terms of data and systems.
Introduction As noted in the literature, "over the past few decades, education systems, especially in higher education, have been redefined. Such reforms inevitably require reconsideration of operational notions and definitions of quality, along with a number of related concepts. This reconsideration aligns with the core of higher education reforms: improving efficiency and compatibility with emerging social demands while adapting to competitiveness and accountability trends" [40]. Restructuring the university education system therefore represents a timely objective for the development of the Republic of Moldova. The strategic directions for monitoring and developing university education, described in the policies of sustainable development, have to be elaborated in relation to worldwide trends in the development of society. Researchers Gormaz-Lobos, C. Galarce-Miranda, H. Hortsch and C. Vargas-Almonacid share this opinion when they note that "the new demands of the society and the economy, the constant specializations of the scientific fields, and the incorporation of new technologies for teaching and learning make that the typical contemporary forms for the teacher academic training must be reviewed and analyzed" [19]. Although the Education Development Strategy for the years 2021-2030 "Education-2030" of the Republic of Moldova [35] calls for a prospective, systemic, formative and dynamic education centered on general human and national values, the prospective aspect remains underdeveloped in the educational standards, manuals and curricula of the university education system. However, as V. Popa [29; 30] maintains with reference to the Report on the specific objectives of the education and training system (Brussels, 2001), the representatives of the European Council started from the hypothesis that society assigns different points of emphasis to education, since what characterizes our times is not the existence of change but its super-accelerated rhythm. This underlines the need for a theoretical and methodological substantiation of a new field, Prospective Pedagogy (PP). Upgrading the educational process in the light of PP requires a responsible analysis, as the future creates increasingly higher requirements. These requirements call for changes in line with PP trends, which will substantiate the elaboration of new educational policies and of the university education system. Hence, the scientific approach to a possible theoretical and methodological substantiation of PP has become one of the key matters of modern pedagogy. The need to explore this field at present depends on several factors: 1. the accelerated rhythm of change, globalization, the challenges of the 21st century, innovation and creativity, and the internationalization of university education; 2. the need to ensure the quality and performance of human resources at the global, national and local levels; 3. the lack of a sustainable state-level policy in the field of PP; 4. the shortage of prospective investigations in relation to education; 5. the weak level of information among specialists in the field of education regarding the relationship between the demands of the labor market and society and the university offer; 6. the skills of the specialist needed on the labor market. At the same time, while different aspects of education science have been explored, Prospective Pedagogy as a fundamental field has not been researched in detail and has not been conceptualized.
This situation is seen as a dilemma or a shortcoming of the education sciences. The emphasis on the prospective character of education confirms its importance in training the personality to integrate into society and the labor market. In this sense, M. Stanciu [34] considers that young people have to be prepared prospectively. Education will have to give the individual an "interior compass" that orients them better toward the future. The purpose of the research resides in the theoretical and praxeological substantiation of the prospective education paradigm within the university, in order to develop and anticipate the educational process of the Republic of Moldova and to establish the extent to which the prospective skill is planned in the university curriculum, so as to train emerging specialists prospectively and professionally, oriented toward the values of sustainable/prospective development of education institutions and society. In order to achieve this purpose, several research objectives were outlined: 1. Analyzing, in a multi-aspectual way, the epistemological fundamentals of prospective education under the conditions of continuous educational reform within permanent education; 2. Conceptualizing Prospective Education as a new paradigm underpinning Prospective Pedagogy (PP); 3. Elaborating and validating by experiment the paradigm of Prospective Education. --- Materials and Methods Our study carries out a theoretical analysis of the Education Code, focused on information concerning advanced higher education institutions of the Republic of Moldova [7], as well as of two major documents in the legislative system of Moldova: the Moldova Higher Education Reform project and the decision of the Government of the Republic of Moldova of 28.06.2017, No. 482, "Nomenclature of Domains of training and specialties in higher education" [26]. The research methodology corresponds to the object, the purpose and the cited sources and consisted of: a) theoretical methods: scientific documentation, theoretical synthesis, deduction, generalization and systematization, comparison, transfer of theories; b) experimental methods: a pedagogical experiment including direct observation, testing, questioning and conversation; c) statistical methods: data collection and mathematical statistics. To carry out the multi-aspectual analysis of the epistemological fundamentals of prospective education, the specialty literature was analyzed in order to identify the place of PP in the system of education sciences. --- 3 Prospective pedagogy in the system of education sciences Following the analysis of the specialty literature, we observed that Prospective Pedagogy is placed differently in the system of education sciences depending on the criteria used. As an argument, we propose a table showing that Prospective Pedagogy belongs to the fundamental theoretical field (see Table 1). The framing of Prospective Pedagogy within the fundamental field of the system of education sciences may also be observed in I. Bontaş, I. Jinga and E. Istrate [20], although these authors also quote Şt. Bărsănescu. O. Dandara [12] shares this view and placed PP among the fundamental sciences, approaching it analytically in a temporal context in the course "Pedagogy", published in 2010, which describes the classification of education sciences adapted from E. Macavei [25]. We therefore note that a good part of the authors from the Romanian area have no research in the prospective field. We should also remark on the model of M.
Gatson [15], which proposes as sciences those concerned with the reflexive and prospective analysis of the future: the philosophy of education and education planning; in C. Birzea [6] we find education planning under the criterion of predominant research methodology. Approaching PP as a fundamental field of the education sciences allows us to conclude that its object of study is prospective education, a new paradigm. There are many interpretations of the notion of "paradigm". In the philosophy of science, G. Bergman introduced the term, but Thomas Kuhn contributed essentially to its scientific elaboration. The latter defines the paradigm as a set of ideas and beliefs shared by the scientific community, based on prioritized scientific "realizations" defining the problems researched and solved, taken from the practice of normal science [22]. In its current sense, Dm. Patrascu maintains, the paradigm circumscribes what supports new research in the philosophy of science. Paradigm: 1) an initial conceptual schema, the model for stating a problem and its resolution, and the research methods dominant during a historical period; 2) a theory (model, type of problem statement) accepted as an example in solving research tasks. By pedagogical paradigm (from the Greek for model, example, learning) we understood the general picture of education as a model for pedagogical action [27]. The interpretation of the term "paradigm" most accepted in our research is that of a "model" aspiring to reproduce the essential elements of the original, natural or socially studied phenomena and processes. --- Theoretical basis for the elaboration and the development of the prospective pedagogy Pedagogy is a theoretical-praxiological science in which knowledge and action, explanation and application, theory and practice are inseparable sides of the perspective from which it assumes education as its own object. This distinctive note of pedagogy is emphasized by several authors. In this sense, R. Hubert mentions that pedagogy is a practical science and a mode of thinking; E. Planchard considers that in pedagogy we differentiate the descriptive plane (of knowledge) and the normative plane (of action); J. S. Bruner, referring to the theory of instruction, shows that it is not only descriptive and explanatory, but also prescriptive and normative [apud 28]; and D. Todoran highlighted that "the education science, the pedagogy tends to discover the laws intervening to develop the educational phenomena in order to control, manage and plan gradually" [36]. The theoretical basis consists of laws, theories, principles, conceptions and ideas from the fields of pedagogy, philosophy, psychology, the philosophy of education, pedagogical anthropology, ethics, the sociology of education, etc. The research relied on ideas and philosophical-anthropological concepts regarding prospective education: the problems of selecting scientific content for the organization of education subjects, the philosophy of experience and pragmatic instrumentalism [10], the socio-economic problems of the masses [16], approaches to the relationship between education and society [13], and theories relating to decision-making, namely the model of expectations [39]. A particular role in this sense is ascribed to the theories of preparing human resources for the future [10], [37], [11], [36], the prospective triangle [18], theories of change [8], the theory of perspective [21], the theory of expectations [39], and others.
In delimiting Prospective Pedagogy within the educational process, we took into consideration the theory of the Romanian scientist D. Todoran, who describes prospective education (PE) as one of the sectorial dimensions, together with the economic, technological, political, cultural and social ones [36]. The interdependence of these dimensions has become an incontestable reality, creating a climate of uncertainty and calling for the investigation of global approaches to problems. Starting from the idea that "Prospective Pedagogy researches the education from the prospective of the future" [1] and that prospective education is precisely the type that explicitly targets transformation and the future, we reach the core of the same antinomy shaking the education field: tradition or modernity, adaptation for maintenance [2] or innovation for overcoming. --- Functions of prospective education In terms of applicability, we deduced the functions exercised by prospective education: anticipation, innovation, adaptation, planning, integration into social life, and orientation. All these functions have to be seen in unity and interdependence. At the same time, they represent a relationship reflecting the specific reality education meets, following its permanent adaptation to the requirements of the global social system and of its main prospective subsystems. The analytical perspectives on PE revealed its degree of complexity in terms of the characteristics included in the research as conditions for conceptualizing PE. The detailed analysis of scholars' interpretations in the field of education sciences therefore made it possible to establish the specificity of PE: it presupposes a holistic, transdisciplinary, probabilistic approach; it has a cognitive structure based on the prediction of an educational event or phenomenon; it represents a type of emerging knowledge; and it has an anticipative, dynamic, participative, operational, heuristic and innovative character [23]. --- Fig. 1. Functions of prospective education --- Conditions of Prospective Pedagogy to acquire the status of science The study of general epistemology enunciates four main conditions that must be fulfilled for a certain field of knowledge to acquire the status of science (Pedagogy Fundamentals): 1. To have its own research object. Prospective Pedagogy is a plainly justified field of the education sciences whose own research object is prospective education, which is not only a genuinely complex field but also one of maximum importance, requiring an appropriate scientific and rational approach. We identify in the specialty literature [2], [11] different approaches related to PE. As the starting point of our research we accepted the definition of R. Dottrens, who characterizes PE by its orientation toward the future, having as its object of study the probabilities of collective evolution in order to establish the fundamentals of an education adapted to the situations and requirements of tomorrow [11]. P. Apostol [2] considers that PE refers to a global, specific systemic function of social complexes, that of "production/reproduction", more exactly of forming types of personality appropriate to a society at a determined stage of its history. M. Stanciu asserts that PE is a methodical investigation of the future using an approach that favors change and renewal. In contrast to futurology, the prospective approach tries to avoid a rupture between the past and the future [34]. At its core, the conceptual analysis of PE presupposes a clear delimitation of the functionality of the invoked terms.
We consider that PE, as the study object of PP, may be interpreted from several perspectives: • as a field of science, addressed to the study of the factors and mechanisms of prospective construction and the development of a prospective and competent personality for the present and the future; • as a dimension of the education system and of other systems; • as a continuous process of gathering and forming values and a prospective personality; • as a study discipline of the educational process aimed at forming prospective skills; • as a component/method integrated into and applied within other disciplines; • as an infusional element in the area of different disciplines, etc. [23] 2. To elaborate its own conceptual and explanatory system, composed of concepts, judgments and laws (statements), capable of going beyond the descriptive phase and allowing access to explanation and prediction. PP in the Republic of Moldova is currently in the phase of constituting its own conceptual and explanatory system, which requires the development of an exact and stable pedagogical language with respect to meanings. Although its degree of conceptualization does not yet achieve the rigor and precision of other sciences, the scientific status of prospective pedagogical concepts and statements cannot be questioned, as it is validated by formal and non-formal educational practice, and several concepts (prospective, prospective character, prospective education, prospective pedagogy, etc.) have been emphasized in the specialty literature [5], [9], [10], [11], [36]. The conceptual analysis of the proposed terms requires delimiting the notions and their respective contents. Several terms are in use in the specialty literature: prospective pedagogy, prospective education, etc., each having, as we mentioned, a common spectrum of problems and a specific analysis of the educational field. This perspective leads toward the analysis of different approaches to PE. For these reasons, the analysis relies on three existing concepts: prospective education [36], prospective pedagogy [25] and the prospective character of education [32]. Although the delimitation of these concepts has been partially addressed, a comparative analysis of them has not been carried out at the theoretical-practical level. In our opinion, prospective education is an anticipative activity, oriented, through its outcomes, toward the future. We share the opinion of the scientist V. M. Cojocariu [8], who notes that under the conditions of today's society and its accelerated rhythm of evolution, it is necessary to encourage the training of a personality capable of resolving the problems of life and activity. As for the term prospective education, D. Todoran [36] defines it as training the individual for the future and in the future. The author also maintains that prospective education, broadly, covers any futurist research and construction and, narrowly, refers to research and studies on the possible future in this field. The expression Prospective Pedagogy was introduced into pedagogical language for the first time by G. Berger, who suggested the idea of a new direction but did not develop it. His ideas were taken up by the promoters of permanent education [5]. The analyzed definitions clarify the difference between them: prospective pedagogy aims at orienting education toward the requirements of the future, whereas prospective education represents a systematic study of models of educational systems, of future processes and education systems, highlighting conceptions of future education.
Significant in our research is the prospective character of education, which imposes a double conditioning: appropriate reference both to the characteristics of the future society and to the functioning of the present one. Due to its prospective character, education not only adapts to the specificity of anticipated changes, but also prepares the conditions leading to these changes and, through current actions, models the very specificity of the future society. S. Cristea [9] shows that one of the internal characteristics of education policy is its prospective character, emphasizing that educational activity always aims at a future situation, whether strategic, current or conjectural. The absence of professional and scientific terminology for the "prospective" category, which would correspond to the correlative notions of "perspective" and "proactive", leads to a terminological shortage. We wish to specify that the need to substantiate new notions in pedagogical science is due to several factors: 1. the significant increase in the importance of the prospective character and dimension of prospective education in training and evaluating the processes of social and personal development over time; 2. the need to establish within the education sciences a field that would substantiate the correlation of educational actions across past-present-future, related to the present and future needs of the individual personality and of society as a whole; the educational system has to provide its beneficiaries with possibilities to develop skills with a prospective accent; 3. the inadequacy of the traditional pedagogical notions used previously ("perspective education", "the prospect of the future", "the planning of education", "education through change and development", "education for tomorrow" and other variants), which do not reflect the essence of the object but, on the contrary, limit its comprehension; this is why the local culture attaches multiple and controversial connotations to the term "prospective". In essence, at the level of concept, PE was related to the current and future requirements of society by orienting education toward a new modality that provides the individual with the possibility of facing unforeseen events through anticipation and participation. Therefore, PE may be analyzed both broadly and narrowly. • Broadly, PE provides a new value-based organization of the personality's expectations through a value-driven and meaningful hierarchy of skills contributing to the achievement of the educational ideal. • Narrowly, PE represents an organized and designed process of developing the personality for the future from the biological, psychological and social points of view, and of training the consciousness and the proactive behavior required for active integration into a continuously changing social life [23]. For the development of the conceptual framework mentioned above, we propose the introduction of the prospective skill as a necessary functional category, in the following formulation: the prospective skill represents a finalized structure, generated by the mobilization of a quantum of the subject's internal resources within a delimited framework of significant situations (pedagogically deliberate or spontaneous, with a disciplinary or interdisciplinary character) and manifested through anticipation, planning, the assignment of meaning and the direction of action [23].
--- 3. To have its own investigation methods and techniques for the study object, brought together in a scientific methodology able to provide and produce true, verifiable and pertinent information about the reality studied by that science. PP has its own methods and techniques for researching its study object, brought together in a pertinent scientific methodology. Although many methods are taken from other sciences, especially from psychology, sociology and economics, these methods are integrated into a unitary pedagogical methodology, adapted to the objectives, requirements and peculiarities of researching the educational phenomenon through its prospective dimension and character. In fact, prospective pedagogical research contributes to the development of methods and to the validation of new research techniques, carrying out transfers with other sciences participating in the interdisciplinary research of PE. Many theories directly bear the imprint of prospective methods: prospective analysis, the Delphi method, alternative modelling of the future, and others. Alongside empirical and theoretical methods of studying the future (D. Todoran, p. 205), methods of designing and modelling the future, methods of learning and methods of assessment are emphasized. A varied series of prediction methods may be used in the prospective methodology and applied in the training-educative process. The project development method may be useful in several stages of the decision-making process. The scenario may serve as an approximate prediction technique, at the information stage, for a "fascicle" of possible evolutions of a problem or trend. As heuristic educational methods, we use in our research problematization, the project method, etc. --- 4. To have a praxeology of the field, i.e., principles, norms and rules of practical action, methods and tools for influencing, directing and controlling the phenomena it studies. PP has a praxeology of the field, with norms and principles of practical action. This is, without doubt, both a condition and, at the same time, a peculiarity of PP. Before becoming a scientific theory, PP was approached in practice: in the fields of planning, business, the environment, economics and policy studies [33]; in technology development, where, for instance, Virtual Reality currently opens new possibilities for the investigation and training of Mental Rotational Ability, an important factor in the development of technical skills [3]; and in anticipating directions of evolution at the level of human resources, applying experimentally verified principles and methods partially conceptualized by philosophy, pedagogy, sociology, etc. In this sense, PP is the theoretical conceptualization of PE experience. PP relies on the fundamental principles of the educational process, taking into account the prospective specificity. We consider that the functionality of the PE Paradigm may be ensured by respecting the following specific principles: the principle of social stringency and global approach, the principle of temporal perspective, the principle of social stringency, the social and individual axiological principle, the principle of learning by experience, and the principle of anticipating, orienting and designing education prospectively [23]. It is important to complete these principles with the principle of globalism and the constructivist principle. The formulated principles serve as theoretical and normative foundations for achieving the expected effect, meeting the PP objectives, and they represent the nucleus of the elaborated model.
If the PP principles are respected in education institutions, PE will make its presence felt, thereby fulfilling the desideratum that the educational process rethink the present from the perspective of the future and, implicitly, ensuring the quality and performance of human capital. Together with the other pedagogical sciences, PP may be considered a science with a theoretical, gnoseological character: answering the question of what PE is and, by elaborating the prospective pedagogical theory, contributing to the development of human knowledge in general. PP also has a praxiological character: relying on the laws of education and on pedagogical theory, together with the strategies and technologies (methods, forms, means) for training and educating the new generation, it manifests itself as a science with efficient educative action. PP is a dynamic science, open to change and innovation, prospectively planning the strategies appropriate for the future. In this sense, we emphasize the situation of educational systems which, due to the pandemic, placed special emphasis on Internet technologies in distance education; these need to be reorganized, revised and implemented so as to provide students with an opportunity to study [38] and to become prospective. In the context of the above, we propose a comprehensive definition of prospective pedagogy: Prospective Pedagogy (PP) represents a fundamental field of the education sciences which, based on general and specifically prospective education strategies and laws, studies and appropriately substantiates the process of training the prospective personality. Broadly, PP represents a fundamental field of the education sciences with a theoretical, praxiological and prospective character, which studies and manages the process of value adaptation and change and has as its outcome the training of an integrally developed, prospective personality, capable of facing social transformations, fully substantiating its potential and skills and contributing to the achievement of the educational ideal. Narrowly, PP studies the organized and planned educative process of developing the personality for the future, of training the consciousness and the proactive behavior needed for active integration into a continuously changing social life. The validation of the ideas presented led to new theoretical-applicative considerations that establish the theoretical and methodological fundamentals of PE. These will be pertinent as a distinctive field of PP if it is constituted as an educational paradigm with a theoretical-applicative character through: • the valorization of anticipation and planning as fundamental elements of PE. In this regard, we may assert that the PE paradigm represents a series of interconnected models, centered on training the prospective skill of specialists, organized on the basis of the general principles of learning, of the prospective approach to education, and of the psychological-physiological and social characteristics of the individual, having as its outcome the training of the prospective personality, expressed through anticipation, planning and direction toward the future. The essence of the PE Paradigm is shown in Figure 2. The process of training the prospective skills was ensured at two levels: • through the special discipline "Prospective Education" (30 hours) for pedagogues (cycle I);
• integrated (planning the prospective skill within the curriculum of the discipline Professional Ethics) for students in Engineering and Information Technologies (cycle I). When analyzing the experimental data, we observe different levels of training of the prospective skills within the discipline PE. At level I, the anticipation skill increased from 54% to 70% (specialty: Psychopedagogy); at level II, it increased from 40% to 49%; and at level III, it increased from 0% to 11%. Similar results were obtained through the infusion of prospective skills into the discipline Professional Ethics and Basics of Communication. In closing, we may mention the following: • the psychopedagogical conditions created (implementation of the PE model) contributed to raising the level of prospective skill training from levels I and II at the ascertainment stage to levels II and III at the training stage; • training of the prospective skills at level III was registered in approximately 20% of the experimental training subjects. As an important field of the education sciences, PP has elaborated and adjusted, and continues to elaborate, its specific categories, which make up the fundamental language of educational knowledge and action. The PP categories comprise different dimensions, aspects, elements and their relationships, such as: prospective education; prospective training; the ideal, objectives and principles of prospective education; the PE curriculum; methods and forms of didactic activity, etc. Hence, PP is more than a paradigm or a pedagogical norm; it presupposes the interplay of a specific social system, which includes all the learning experiences provided by society to the individual. The pedagogical experiment showed that the substantiation of PP as a field must not exist only as a scientific-theoretical construction for the development of the education sciences; it must, incontestably, also include a series of practical references ensuring its functionality. The basic condition is knowledge of the scientific fundamentals of PP and the familiarization of the beneficiaries of university education with its content. --- Conclusions In our opinion, the theoretical and methodological substantiation of PE presupposes the awareness that it matters for all fields of professional training. This situation requires the integration of PP into the university education system. 1. Analysis of the opinions of scientists in the field shows that PP is a science within the fundamental field of, or more generally within, the education sciences. The substantiation of PP as an important field of the education sciences relied on the main conditions that have to be met by a field of knowledge in order to acquire the status of science. In this context, given the deep changes of our times related to the introduction and use of new methods, prospective principles and technologies, accompanied by new forms of organizing the educational process, the prospective development of the personality becomes a valuable and strategic factor for every educational institution and, accordingly, for the labor market.
The theoretical and methodological foundation of Prospective Pedagogy has become one of the key issues of contemporary pedagogy when exploring the various aspects of the science of education related to personality formation and to dealing with the requirements of the present and the future. The analytical perspectives on prospective education revealed its degree of complexity in terms of the characteristics included in the research as conditions of its conceptualization. Studies of general epistemology state four main conditions that a field of knowledge must meet in order to acquire the status of science: to have its own research object; to develop its own conceptual and explanatory system; to have its own methods and techniques for investigating the object of study, brought together in a scientific methodology; and to have a praxiology of the field, in the sense of influencing, directing and controlling the phenomena it studies. This confirms the theoretical conceptualization of the paradigm of prospective education as a science of education.
Introduction During the COVID-19 pandemic caused by the novel coronavirus SARS-CoV-2, fear, rumors, and misconceptions about the novel coronavirus have placed Asian Americans in the spotlight of blame and harassment [1][2][3][4][5]. Instead of preventing discrimination and xenophobia, government officials repeatedly labeled the virus the "Wuhan coronavirus" or the "China virus," potentially accelerating COVID-19 related racial attacks on Asian Americans. In March 2020, the Federal Bureau of Investigation issued a warning about a potential surge of hate crimes against Asian Americans [6]. In April 2020, the Center for Public Integrity reported that 32% of Americans had witnessed someone blaming Asian people for the pandemic [7]. From March 19 through August 5, 2020, over 2,500 instances of anti-Asian discrimination were reported to the Stop AAPI Hate Tracker, an online tool for reporting incidents of hate, violence, or discrimination against Asian Americans and Pacific Islanders in the USA [8]. Such discrimination may contribute to long-term distress, including depression, trauma, anxiety, and posttraumatic stress disorder [3,9,10]. The experience and magnitude of race-based traumatic stress can further impact individuals' perceptions of their ability to cope with such events [11]. Coping with anti-Asian discrimination and stress may be particularly challenging for Asian-origin refugees. Refugees with limited English proficiency face difficulty reporting harassment or seeking assistance in their preferred languages [12]. Refugees with lower socioeconomic status have reduced access to support services and coping resources, both due to cost and due to competing demands, such as work schedules. Refugees from Asia are less likely to seek mental health services due to stigma regarding mental illness, concern about being perceived as "crazy," and decreased emphasis on psychological solutions for emotional stress [3,13,14]. These barriers may be further amplified among refugees who are fearful about speaking out and drawing attention to their experiences due to premigration political repression, including repression that targeted individuals who advocated for themselves and their communities [15]. Additionally, refugee communities often have large populations of essential workers who are unable to work from home, so they may be exposed to harassment at worksites or when traveling to and from work [16,17]. Further, refugees with a perceived risk of COVID-19 exposure through work, e.g., health care personnel, may experience discrimination and stigmatization by individuals fearful of infection [18,19]. Despite these commonalities, experiences with pandemic-related discrimination are also believed to vary across different Asian American subgroups. The US Asian population comes from more than 20 countries, each with a unique history, language, and cultural background. Socioeconomic and health status differ widely across Asian American subgroups, e.g., when comparing the experiences of English-proficient, white-collar professionals who migrate to the USA with the sponsorship of an employer versus those of predominantly working-class refugee communities. Though these differences significantly impact the risk of COVID-19 infection, very few states have included COVID-19 statistics for disaggregated Asian American subgroups in their public health reports. The majority of current research also ignores the heterogeneity of COVID-19 experiences across different Asian American communities.
The Bhutanese and Burmese refugee communities are two Asian American subpopulations with multiple risk factors for COVID-19 related discrimination. Bhutanese and Burmese refugee communities are also among the largest refugee communities resettled in the USA between 2000 and 2015, and they have among the highest foreign-born shares of any Asian-origin communities in the USA (Bhutanese 92%, Burmese 85%) [20]. Both communities have relatively high poverty rates (Bhutanese 33%; Burmese 35%; Asian American 12%; US population 15%). They also have lower rates of English proficiency (Bhutanese 27%; Burmese 28%; Asian American 70%) and are less likely to have a bachelor's degree relative to the general US population (Bhutanese 9%; Burmese 24%; Asian American 51%; US population 30%) [20,21]. The majority of Bhutanese refugees living in the USA are Nepali-speaking Lhotshampa who were forced to flee Bhutan due to political repression and ethnic violence culminating in the mass expulsion of Lhotshampa Bhutanese in the 1990s. After nearly two decades living in refugee camps in Nepal, this predominantly agrarian and multigenerational community was allowed to resettle in the USA beginning in 2007 [21]. Similarly, most Burmese migrants to the USA since 2006 are political refugees. Many come from rural regions where minority ethnic groups, such as the Karen and Chin, experienced recurrent repression and violence during armed conflicts between the national Burmese Army and ethnic opposition groups. More than a million people from Burma (now called Myanmar) have been displaced to neighboring countries, including Bangladesh, India, Malaysia, and Thailand. Most Burmese refugees in the USA lived in these areas prior to resettlement [20,21]. For these reasons, we hypothesize that Bhutanese and Burmese refugees are at high risk of pandemic-related discrimination and stress. However, to date, there has been limited data describing the experiences of these refugee populations during the pandemic. In this study, we measure the distribution of pandemic-related discrimination and stress, as well as identify predictors of these two measures among Bhutanese and Burmese refugees in the USA. --- Material and Methods --- Data Collection We conducted a cross-sectional study using a snowball sample. We limited participants to English-proficient individuals aged ≥ 18 years and currently living in the USA. From 5/15/20 through 6/1/20, we emailed or messaged an anonymous, online survey link to 19 bilingual Bhutanese and Burmese refugee community leaders identified through the study team's existing professional networks. These individuals were predominantly prior participants in community health leadership trainings or leaders of refugee-led community organizations. They were asked to complete the survey and share the link with peers who met inclusion criteria. To decrease potential selection bias, the survey invitation asked participants to share their experiences during the pandemic and did not specifically invite participants who had experienced discrimination. This study was approved by Ball State University's Human Research Protection Office (IRB#: 1605425).
--- Measures --- Outcome To assess pandemic-related discrimination, participants were asked to answer three questions adapted from the Understanding America Study, which asked if they had experienced the following at any time during the COVID-19 pandemic: (1) felt threatened or harassed by others because they think you might have the coronavirus, (2) felt others were afraid of you because they think you might have the coronavirus, and (3) been treated with less respect than others because people think you might have the coronavirus. Responses were coded as binary variables with 1 (Yes) or 0 (No). We then generated an ordinal variable to measure the number of types of discrimination experienced by adding the outcomes of these three measures of discrimination. The ordinal discrimination measure was used for bivariate and multivariate analyses. We measured pandemic-related stress by asking participants to rate the following stress experiences during the COVID-19 pandemic: (1) nervous about current circumstances, (2) worried about my health, (3) worried about my family's health, and (4) stressed about leaving the house. Response options ranged from 1 = "does not apply at all" to 5 = "strongly apply." We first coded these experiences as binary variables with 1 (strongly apply) or 0 (does not apply at all, somewhat does not apply, neither applies nor does not apply, or somewhat applies). We then generated an ordinal variable to measure the amount of stress experienced by summing these newly coded binary measures of stress. The ordinal stress measure was used for bivariate and multivariate analyses. --- Covariates Covariates included in the adjusted models for pandemic-related discrimination were having had COVID-19, having a family member who had COVID-19, being an essential worker during the pandemic, gender, age, education, and years spent in the USA, as these covariates are known from previous studies to be associated with discrimination and stress [2,[23][24][25][26]. COVID-19 infection was measured as a binary, self-reported outcome, using Yes/No responses to the following question, "Are you or have you been infected with the novel coronavirus?" Having a family member with COVID-19 was measured as a binary variable of whether anyone in the household is or has been infected with the novel coronavirus. Individuals working for pay at a job or business in the 7 days prior to survey completion were categorized as essential workers if their occupation corresponded to one described as providing "COVID-19 Essential Services" under Massachusetts Governor Baker's March 23, 2020 Emergency Order, updated on March 31 and April 28 [27]. Those whose occupation corresponded to essential services but who did not work in the past 7 days due to COVID-19 infection were also categorized as essential workers. Age was categorized as less than 31, between 31 and 40, and more than 40, considering the age distribution of our participants. Education was measured as secondary degree (junior high or senior high school), associate degree (community college, junior college, or technical school), and bachelor's degree. Years spent in the USA was a continuous variable and represents an approximate measure of acculturation. The model for pandemic-related stress included these covariates, pandemic-related financial crisis, and the ordinal measure for pandemic-related discrimination. Financial crisis was included because it is a common cause of emotional distress.
Financial crisis was a binary variable capturing whether the participant's family had experienced a financial crisis during the coronavirus pandemic. Since the relationship between discrimination and stress has been established in other contexts, pandemic-related discrimination was also included here [28,29]. --- Statistical Model We first examined the distribution of each outcome and covariate. We then conducted bivariate analyses to measure the association between participants' characteristics and pandemic-related discrimination and stress. We applied Fisher's exact tests and one-way analysis of variance (ANOVA) tests to measure differences in pandemic-related discrimination and stress across categorical variables and continuous variables, respectively. Finally, we identified characteristics associated with pandemic-related discrimination and stress by applying adjusted ordinal logistic regression models. We tested the proportional odds assumption of the ordered logistic regression models to verify that the coefficients were equal across outcome categories. Multicollinearity was tested and not found. Less than 5% of all measures were missing. Due to the small percentage, we considered all missing values to be missing at random. The significance level was set at 0.05 with a two-sided tail. Analysis was conducted using Stata/SE 15.1. --- Results Table 1 shows the characteristics of the study participants. In total, 218 Bhutanese and Burmese refugees from 23 states completed the survey. The majority were Bhutanese (86.2%), and a majority were male (60.1%). Approximately half were more than 30 years old (52.4%), had received a bachelor's degree or higher (50.0%), and had an annual household income less than $50,000 (52.3%). The average time participants had spent in the USA was 9.99 years. Over 40% of the participants were essential workers (41.7%). Nonetheless, pandemic-related job loss (46.3%) and family financial crisis (36.7%) were common. Nearly 7% of participants reported having been infected with the coronavirus. The same proportion of participants reported having family members infected with the coronavirus. Table 2 displays experiences with pandemic-related discrimination. Nearly one third of the participants (31.3%) reported experiencing at least one type of pandemic-related discrimination. A total of 15.1, 9.6, and 5.5% of the participants reported experiencing one, two, or three types of discrimination, respectively. Most often, participants reported feeling that other people were afraid of them (27.5%). Additionally, 12.8% of respondents reported feeling threatened or harassed, and 10.6% reported feeling as if they had "been treated with less respect than others as people think you might have the novel coronavirus." Table 2 also displays pandemic-related stress. More than two-thirds of participants (68.8%) experienced at least one type of pandemic-related stress. A total of 25.2, 17.4, 12.4, and 13.8% of the participants reported experiencing one, two, three, or four types of stress, respectively. Specifically, nearly one third of participants strongly endorsed feeling nervous about the current circumstances (33.9%), feeling worried about their health (28.0%), or feeling stress about leaving home (29.8%). Over half of participants strongly endorsed feeling worried about their family's health (60.6%). Table 4 shows the bivariate analysis of participants' characteristics and pandemic-related stress.
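As a rough, illustrative sketch of the outcome construction and proportional-odds modelling described above (the original analysis was conducted in Stata/SE 15.1; the Python translation below uses statsmodels, and the file name, column names and coding choices are assumptions made purely for illustration, not the study's actual variables):

```python
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

# Hypothetical survey extract; file and column names are invented.
df = pd.read_csv("refugee_survey.csv")

# Ordinal discrimination outcome: number of the three Yes/No (1/0) items endorsed.
discrim_items = ["threatened_or_harassed", "others_afraid", "less_respect"]
df["discrimination_count"] = df[discrim_items].sum(axis=1)

# Ordinal stress outcome: number of items rated 5 ("strongly apply") out of four.
stress_items = ["nervous", "worried_own_health",
                "worried_family_health", "stress_leaving_house"]
df["stress_count"] = (df[stress_items] == 5).sum(axis=1)

# Adjusted proportional-odds (ordinal logistic) model for stress, mirroring
# the covariate set described in the Methods.
covariates = ["discrimination_count", "financial_crisis", "had_covid",
              "family_had_covid", "essential_worker", "female",
              "age_group", "education", "years_in_usa"]
X = pd.get_dummies(df[covariates], drop_first=True).astype(float)

model = OrderedModel(df["stress_count"], X, distr="logit")
result = model.fit(method="bfgs", disp=False)

print(result.summary())
print("Odds ratios:\n", np.exp(result.params[X.columns]).round(2))
```

In practice, one would also check the proportional odds assumption, for example by comparing this model with separate binary logits across the cut points, consistent with the check described in the Statistical Model subsection.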
Those who experienced more types of discrimination (P value < 0.001), those who experienced financial crisis during the pandemic (P value = 0.013), and women (P value = 0.040) were more likely to experience more types of pandemic-related stress. Table 4 also displays the multivariate ordinal logistic regression model for pandemic-related stress. The results indicate a strong association between the amount of pandemic-related stress and the amount of pandemic-related discrimination (one type of discrimination: OR 2.70, 95% CI 1.31, 5.58; two types of discrimination: --- Discussion This study describes characteristics associated with pandemic-related discrimination and stress in two Asian refugee communities. Notably, the Understanding America Study reported that, as of May 23, 2020, 0.9, 5.9, and 4.0% of Asian Americans reported feeling threatened or harassed by others, feeling that others were afraid of them, or feeling that they were treated with less respect than others because others thought they might have the coronavirus in the prior 7 days [22]. While our survey did not use the same 7-day time frame, participants reported markedly high rates of discrimination. Our results are consistent with another online survey of Asian Americans during the pandemic [10]. We identify risk factors for experiences with discrimination in these communities, including having had COVID-19, having a family member with COVID-19, and being an essential worker. In addition to experiencing COVID-19 related discrimination from others, those who are infected, and their family members, tend to blame themselves or their family members for contracting the disease, which makes it harder for them to fight COVID-19 related stigma [30]. In other studies, essential and frontline workers have reported high rates of social isolation, stigma, and discrimination due to their heightened risk of COVID-19 and others' fear of infection [31,32]. In our study, around 40% of the participants were essential workers. In the USA, a large number of refugees work in healthcare settings, food supply chains, grocery stores, supermarkets, restaurants, and food service establishments, which may expose them to a high risk of pandemic-related discrimination [16,30]. However, there has been a lack of education, legislation, and policy to address this discrimination. Experiencing pandemic-related discrimination is associated with participants' experience of pandemic-related stress. While our cross-sectional study does not establish a causal relationship between pandemic-related discrimination and stress, this finding echoes previous studies showing that discrimination can lead to negative and long-term consequences for mental health [3,10,15,[33][34][35]. While societal strategies for decreasing discrimination are paramount, other researchers have also found that social support and coping strategies can buffer the immediate negative emotional impact of discrimination on Asian Americans [35]. Our study also suggests that experiencing financial crisis during the pandemic increases the likelihood of experiencing higher amounts of pandemic-related stress among Bhutanese and Burmese refugees. Among these two predominantly low-income populations, this is likely to be explained by the impact of financial crisis on individuals' access to basic necessities, such as food, shelter, or healthcare [35][36][37]. Women were more likely to experience higher amounts of pandemic-related stress than men.
This result corresponds with recent findings of high levels of stress and fear of COVID-19 among women [38][39][40]. This gender difference may be explained by the disproportionate responsibility that many women face in taking care of children and other family members during the pandemic, as well as the disproportionate impact of pandemic-related job losses on women [41]. The study has limitations. Chief among them is reliance on a small, non-representative sample of English-proficient respondents, especially given that English proficiency is reported by just 27 and 28% of the overall Bhutanese and Burmese populations in the USA, respectively [20]. Additionally, levels of annual household income and educational attainment among our respondents were higher compared with others in their communities. For this reason, results may not be generalizable to the entirety of the Bhutanese and Burmese refugee communities in the USA. We also speculate that people with higher levels of concern about COVID-19 would have been more likely than people with lower levels of concern to complete the survey, so our results may overestimate the prevalence of COVID-19 related stress in these communities. The distribution of COVID-19 cases may also be an underestimate considering the marked shortage of SARS-CoV-2 tests in the USA when data were collected in May 2020 [42]. The prevalence of essential workers may also be underestimated, as it was defined by participants' working status in the week prior to the survey. Those who worked during the pandemic but not during the required timeframe, for reasons unrelated to COVID-19 infection, were coded as non-essential workers. Finally, some of the measures of discrimination and stress have not been validated. We encourage other researchers to replicate our study with a representative sample and novel measures of key variables. --- Conclusions Reducing pandemic-related discrimination should remain a priority as we work to strengthen our public health response to the pandemic. Public officials should avoid terms such as "China Flu" and consistently condemn racism [1,43]. Public messaging should remain science-based. Because workplace incidents are potential civil rights violations and have been reported by multiple prior studies, we suggest that employers consider proactive and preventive actions [10,44]. Programs that enhance social support and teach coping skills may also buffer the immediate psychological impact of discrimination [10,35]. More importantly, policies, regulations, and education are needed to address pandemic-related stigma and discrimination. Finally, we recommend that larger national studies tracking experiences with discrimination and stress during the pandemic include Asian American subgroups with limited English proficiency [26,45,46]. --- Availability of Data and Material Available upon request. Code Availability Available upon request. --- Declarations --- Conflicts of Interest The authors declare that they have no conflict of interest. Publisher's Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Objectives To measure COVID-19 pandemic-related discrimination and stress among Bhutanese and Burmese refugees in the USA and to identify characteristics associated with these two measures. Methods From 5/15-6/1/2020, Bhutanese and Burmese refugee community leaders were invited to complete an anonymous, online survey and shared the link with other community members who were English-proficient, ≥18 years old, and currently living in the USA. We identified characteristics associated with pandemic-related discrimination and stress by applying ordinal logistic regression models. Results Among 218 refugees from 23 states, nearly one third of participants reported experiencing at least one type of discrimination, and more than two-thirds experienced at least one type of pandemic-related stress. Having had COVID-19, having a family member with COVID-19, and being an essential worker were associated with discrimination. Discrimination, financial crisis, and female gender were associated with stress. Conclusions Reducing pandemic-related discrimination should remain a priority, as should the promotion of social support and coping strategies. Noting that this is a nonrepresentative sample, we recommend that larger national studies tracking experiences with pandemic-related discrimination and stress include Asian American subgroups with limited English proficiency.
Introduction Social media represents a valuable source of information for understanding how people perceive and discuss events. Internet discourse has given voice to millions of users, creating flows of information populated by many different viewpoints (Dodds, et al., 2011; Stella, et al., 2018; Ferrara, 2020). Identifying and understanding users' knowledge and emotional content poses a research challenge with crucial implications. In a time of crisis like the current one, where the COVID-19 pandemic is revolutionising people's way of life all over the world, Internet discourse is key for understanding how large audiences are perceiving multiple aspects of the global emergency. With the right tools, online discourse can unlock perceptions of the pandemic, subsequent lockdowns and their aftermaths. This study adopts cognitive networks, tools at the interface of computer science and psycholinguistics (Siew, et al., 2019), as a compass for exploring social discourse around post-lockdown reopening. Focus is given to unraveling the emotional dimensions of social discourse debating the multiple facets of reopening a whole country under the threat of a global pandemic. Language as embedded in online messages is used to reconstruct how individuals perceived the reopening and emotionally coped with multiple aspects of it. The identified emotional trends, and the tools highlighting them, help in understanding the key issues faced by people during a reopening, their fears but also their hopes, all of which is useful for effective future policy-making. To pursue this aim and test the power of the above techniques, Italy is selected as a case study. --- Case study: Italy, COVID-19 and the lockdown Italy was the first European country to release its lockdown after being severely struck by the COVID-19 pandemic (Bonaccorsi, et al., 2020). In this way, the social dynamics taking place among Italian users on social media anticipated the discourse of other countries about reopening. The whole country was locked down one day before COVID-19 was declared a global pandemic by the WHO on 11 March 2020. Several studies investigated how Italians reacted to the sudden lockdown. Pepe and colleagues (2020) identified drastic drops in social mobility, which were confirmed also by other studies (cf., Bonaccorsi, et al., 2020). Stella and colleagues (2020) investigated the Italian Twittersphere in the first week of lockdown and found evidence of Italians expressing concern, fear and anger towards the economic repercussions of the lockdown. These fears became reality, as the lockdown strongly amplified social and economic inequality across the country, as recently quantified by Bonaccorsi and colleagues (2020). After two months of nationwide lockdown, the slowdown of the COVID-19 contagion and the pressure to restart the economy both motivated the Italian government to release the lockdown. On 4 May 2020, social mobility was almost completely restored. People could travel within their own regions, attend public places and enjoy a mostly normal lifestyle, all while the novel coronavirus still circulated within the population and hundreds of casualties were still being registered. This study investigates the emotions and ideas before, during and after the 4 May reopening.
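As a minimal, generic illustration of the kind of word-association network that such a cognitive-network lens relies on, the toy Python sketch below builds a simple co-occurrence graph from invented tweets; it is only a rough stand-in for the textual forma mentis networks described later, which also encode syntactic dependencies and emotional annotations.

```python
import itertools
import networkx as nx

# Toy tweets (invented); the authors' actual pipeline is richer than this
# plain co-occurrence sketch.
tweets = [
    "reopening brings hope for shops and workers",
    "fear of contagion remains despite the reopening",
    "workers return while contagion numbers slowly fall",
]

G = nx.Graph()
for tweet in tweets:
    words = set(tweet.lower().split())
    # Link every pair of words co-occurring in the same tweet.
    for u, v in itertools.combinations(sorted(words), 2):
        if G.has_edge(u, v):
            G[u][v]["weight"] += 1
        else:
            G.add_edge(u, v, weight=1)

# The network neighbourhood of a concept approximates its semantic frame.
print(sorted(G.neighbors("reopening")))
print(round(nx.degree_centrality(G)["contagion"], 3))
```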
--- Research questions By adopting a cognitive network science approach, considering text as data representative of people's mindsets (Stella, et al., 2019; Stella, 2020; Stella, et al., 2020), this work explores and compares how different ways of reconstructing knowledge and emotions can address the following research questions: RQ1: Which were the main general emotions flowing in social media about the reopening? RQ2: Were there emotional shifts over time highlighted by some emotion models but neglected by others? RQ3: Which were the most prominent topics of social discourse around the reopening? RQ4: How did online users express their emotions about specific topics in social discourse? RQ5: Were messages expressing different emotions reshared in different ways? The main contribution of this investigation is identifying the key topics of discussion around the reopening through a cognitive network science approach. Rather than focusing on COVID-19 or its specific hashtags, key ideas and their emotional perceptions are identified in language, within the social discourse taking place in Italy around the loose topical hashtag #fase2. This hashtag, which stood as a synonym for reopening in Italian news media, included a wide variety of topics of discussion, a constellation of facets of debate regarding the restart, each one associated with certain language, semantic frames and emotions. The investigation over time of these interconnected semantic frames and emotional perceptions is the main focus of this work. Key ideas and emotions in discourse are extracted through emotional profiling and sentiment analysis. These two approaches are compared in their ability to detect emotional fluctuations over time in the whole social discourse (RQ1-2), with emotional profiling highlighting more aspects of the social debate than mere sentiment. Word frequency and cognitive networks are merged together in order to identify ideas of prominence in social discourse over time (RQ3-4). Emotional profiling around these prominent concepts outlines microscopic patterns of trust formation around institutions and concern about the contagion that were not visible with the global-level emotional analysis. Behavioural trends towards messages containing different emotions are investigated and discussed in light of previously reported positive biases based on mere sentiment (RQ5). --- Background and related literature Stances in language. Identifying people's perceptions and opinions about something is a problem known as "stance detection" in computer science (Kalimeri, et al., 2019) and "authorial stance" in psycholinguistics (Berman, et al., 2002). The identification of a stance is crucial in every communication, in order to identify whether someone is in favour of or against a given topic, e.g., a person expressing support for the economic measures promoted by a government or giving voice to criticism about a given campaign of social distancing. Historically, stance detection has focused on speeches and written text, like books or pamphlets, and used language analysis in order to reconstruct a stance, e.g., using positive words. This task was performed by linguists and required human coding (Berman, et al., 2002). Stances in social media. The advent of social media and the huge volumes of texts produced by online users made human coding impractical, motivating automatic approaches to stance detection with limited human intervention (cf., Mohammad, et al., 2016).
The state of the art in identifying (dis)agreeing stances in social media is represented by machine learning approaches (Hassani, et al., 2020), which capture linguistic patterns from a training set of labelled texts, create an opaque representation of different stances and then use it for categorising previously unseen texts (Ferrara and Yang, 2015; Mohammad, et al., 2016). This approach is also powerful in detecting additional features of stances, like sentiment intensity (Kiritchenko, et al., 2014; Hassani, et al., 2020), e.g., how positive or negative a given stance is. The main limit of machine learning is that the reconstructed representation of different stances cannot be directly observed. This issue prevents access to how knowledge and sentiment were structured in different stances, e.g., which concepts were associated with each other in a specific stance? To provide a transparent representation of knowledge and stances embedded in text, recent approaches have adopted cognitive network science (Siew, et al., 2019; Stella, et al., 2019).
--- Cognitive networks as windows into people's minds.
Cognitive networks model how linguistic knowledge can be represented in the human mind (Aitchison, 2012). Recent approaches have consistently shown that the structure of conceptual associations in language is not only predictive of several cognitive processes, like early word learning or cognitive degradation (cf., Siew, et al., 2019), but is also useful for reconstructing different stances in social media discourse (Stella, et al., 2018) or in educational settings (Rodrigues and Pietrocola, 2020; Stella and Zaytseva, 2020). Relying on these approaches, this manuscript adopts cognitive networks of syntactic associations between concepts for reconstructing the stances promoted by social discourse around specific aspects of the lockdown. Among many successful approaches building complex networks from text (Arruda, et al., 2019; Brito, et al., 2020; Rodrigues and Pietrocola, 2020), this work adopts the framework of textual forma mentis networks, representing syntactic and semantic knowledge in combination with valence and emotional aspects of words (Stella, 2020). Reminiscent of the networked linguistic repository that people use for understanding and producing language, i.e., their mental lexicon (Aitchison, 2012), a forma mentis network opens a window onto people's minds (and mental lexica). This is achieved by giving structure to language, reconstructing the conceptual and emotional links between words in a text. In this way, forma mentis networks reconstruct a collection of stances expressed in a discourse, i.e., a mindset (in Latin, forma mentis).
Combining networks and emotions. Coupling syntactic/semantic networks with emotional trends makes it possible to understand how individuals perceived and directed their emotions towards specific entities. For instance, Stella, et al. (2019) found that high school students directed anxiety and negative sentiment towards math, physics and related concepts but not towards science. As a comparison, STEM researchers directed mostly positive sentiment towards all these topics. The interconnectedness between specific knowledge and the emotions surrounding and targeting it is the key element that enables forma mentis networks (FMNs) to better capture how people perceive events and topics.
--- Emotions in language.
The emotional profile of a portion of language can be considered an extension of its sentiment.
Whereas sentiment aims at reconstructing the valence of language, i.e., understanding its pleasantness, emotional profiling contains other dimensions like arousal, i.e., the excitement elicited by a given entity (Posner, et al., 2005), but also projection into the future, desires and beliefs (Plutchik, 2003; Scherer and Ekman, 2014). In cognitive neuroscience, the circumplex model of arousal and valence is one of the simplest yet most powerful models for reconstructing the emotions elicited by words in language through their combined pleasantness and excitement (for more details, see Methods and Posner, et al., 2005). The innovation brought by Big Data analytics to psycholinguistics also opened the way to alternative approaches mapping specific emotional states. The NRC Emotion Lexicon by Mohammad and colleagues identifies which words give rise to eight basic emotional states, like fear or trust among others (Mohammad and Turney, 2013). Relying on the theory of basic emotions from cognitive psychology (Plutchik, 2003; Scherer and Ekman, 2014), these eight states act as building blocks whose combinations can describe a wide range of emotions, like elation, contempt or desperation.
Emotions and behavioural trends in social media. On social media, understanding the emotional perception of different topics can also shed light on how knowledge with different emotional profiles spreads. Ferrara and Yang (2015) showed how messages with different emotions can be re-shared in different ways on social platforms. The authors identified a positive bias on Twitter, where online users were more likely to reshare messages with a stronger positive sentiment. Other studies found that not only sentiment but also the semantic content of tweets can boost message diffusion. For instance, Brady and colleagues (2017) found that content eliciting moral ideas was shared more by online users during voting events, linking this phenomenon not only to the sentiment expressed in tweets but also to their emotions. The importance of measuring emotional trends in social media motivated approaches like the Hedonometer, built by Dodds and colleagues (2011) in order to gauge people's happiness during massive real-world events.
Aim and manuscript outline. In the current study, semantic networks, valence, arousal and emotions are all investigated with the aim of understanding how online users waited for, perceived and discussed the lockdown release. The Italian Twittersphere is used as a case study. The element of novelty of this manuscript is providing a network of interconnected topics, mapping how individuals discussed a variety of concepts, as expressed in their tweets, around the loose topical hashtag #fase2 about the reopening. The Methods section outlines the novel methodological tools adopted for the above aim. The Results section investigates the individual research questions outlined above. Results are then combined and commented on in the Discussion section in view of the lockdown release. Current limitations and future research directions opened by this study are also outlined in that section. The Conclusions summarise the contributions of this work and its research questions.
--- Methodology
This part of the manuscript outlines the linguistic datasets and methods adopted and implemented in this work, also referencing relevant previous works and resources.
Twitter dataset. This work relied on a collection of 408,619 tweets in Italian, gathered by the author through Complex Science Consulting's Twitter-authorised account (@ConsultComplex). The tweets were queried through the command ServiceConnect[] as implemented in Mathematica 11.3. Only tweets including the hashtag #fase2 (phase 2) were considered. The flags "Recent" and "Popular" were both used in ServiceConnect in order to obtain either recent tweets produced on the same day as the query or trending tweets, produced on earlier dates but highly re-shared/liked. This combination led to a Twitter dataset including both: (i) large volumes of tweets produced by individuals and (ii) a small fraction of highly reshared/liked tweets. Almost 1.5 percent of the retrieved tweets received more than 100 retweets. These "popular" tweets received on average 286 retweets and 401 likes. Even though these numbers are considerably smaller than those in the Twittersphere of English speakers, they are still remarkable in a population as small as the Italian one (where only 2.85 percent of Internet users had an active Twitter account in 2020, cf., https://gs.statcounter.com/social-media-stats/all/italy, last accessed 1 July 2020). Tweets were gathered between 1 May and 20 May 2020 in order to evaluate how online users perceived the release of the national lockdown before, during and after its actual end on 4 May 2020. Tweets were ordered chronologically and categorised into each of the 20 considered days. Twitter IDs have been released on an OSF repository and are available for research purposes.
Language processing. Each single tweet was tokenised, i.e., transformed into a series of words. Links and multimedia content were discarded from the analysis, which focused on linguistic content. Emojis and hashtags were translated into words. Emojis were translated by using Emojipedia (https://emojipedia.org/people/, last accessed 1 July 2020), which describes emoticons in terms of simple words, and appended to tweets. Hashtags were translated by using a simple overlap between the content of the hashtag without the # symbol and Italian words (e.g., #pandemia became "pandemia", Italian for pandemic). Words in tweets were then stemmed by using SnowballC as implemented in R 3.4.4, called in Mathematica through the RLink function. Word stemming is important for getting rid of the linguistic suffixes that in Italian mark the plural and gender of a noun (e.g., "ministro" and "ministra" both indicate the concept of a minister) or the tense of a verb (e.g., "andiamo" and "andate" both indicate the concept of going). Previous evidence from psycholinguistics indicates that appending different suffixes to the same stem does not alter the semantic representation attributed to them (Aitchison, 2012), which is rather dependent only on the stem itself (e.g., "ministro" and "ministra" both elicit the same conceptual unit relative to minister). This flexibility of language in representing lexical units for denoting concepts has been shown to hold across multiple languages, including Italian (Aitchison, 2012). Stems and syntactic relationships between them were used in order to construct forma mentis networks.
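The preprocessing steps just described can be illustrated with a short sketch. The following is a minimal Python approximation of link removal, hashtag handling and Italian stemming; the original pipeline used Mathematica and SnowballC in R, so NLTK's Italian Snowball stemmer stands in here and the example tweet is invented.

```python
# Minimal sketch of the preprocessing described above (assumption: NLTK's
# Italian Snowball stemmer behaves closely enough to SnowballC in R).
import re
from nltk.stem.snowball import SnowballStemmer

stemmer = SnowballStemmer("italian")

def preprocess(tweet):
    tweet = re.sub(r"https?://\S+", " ", tweet)          # discard links
    tweet = tweet.replace("#", " ")                      # hashtags become plain words
    tokens = re.findall(r"[a-zàèéìòù]+", tweet.lower())  # simple tokenisation
    return [stemmer.stem(t) for t in tokens]             # strip gender/number/tense suffixes

print(preprocess("Domani inizia la #fase2! https://example.org"))
# e.g., ['doman', 'inizi', 'la', 'fas']  (exact stems may vary)
```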
--- Forma mentis network.
Textual forma mentis networks were introduced in Stella (2020) as a way of giving network structure to text. Forma mentis networks (FMNs) represent conceptual associations and emotional features of text as a complex network. In a FMN, nodes represent stemmed words. Links are multiplex (Stella, et al., 2017) and can indicate either of the following conceptual associations: (i) syntactic dependencies (e.g., in "Love is blissfulness" the meaning of "love" is linked to the meaning of "blissfulness" by the auxiliary verb "is") or (ii) synonyms (e.g., "blissfulness" and "happiness" overlapping in meaning in certain linguistic contexts). These links were built by using the TextStructure syntactic parser implemented in Mathematica 11.3 and the Italian translation of WordNet (Bond and Foster, 2013). Emotional features are attributed to individual words/nodes. Valence, arousal and emotion elicitation (e.g., does a given word elicit fear?) were attributed according to external cognitive datasets. Notice that the approach adopted here was mostly "bottom-up", as the considered forma mentis network was built through the command TextStructure, which extracted syntactic relationships directly from text. However, FMNs also used semantic associations from WordNet, whose adoption for meaning attribution is considered a "top-down" approach in natural language processing. In this way, the combination of syntactic and semantic associations makes FMNs a hybrid, multiplex approach to capturing meaning from text (Stella, 2020).
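As a rough illustration of this construction, the sketch below builds a small forma-mentis-style network from Italian sentences: dependency links from a parser become edges between stemmed words, and nodes are tagged with the emotions they elicit. It is an approximation, not the author's Mathematica/WordNet pipeline: spaCy (with the separately downloaded it_core_news_sm model), networkx and a tiny invented emotion dictionary stand in for TextStructure, WordNet and the NRC lexicon, and synonym links are omitted.

```python
# Minimal sketch of a forma-mentis-style network (syntactic edges only).
import spacy                     # requires: python -m spacy download it_core_news_sm
import networkx as nx
from nltk.stem.snowball import SnowballStemmer

nlp = spacy.load("it_core_news_sm")
stem = SnowballStemmer("italian").stem
toy_lexicon = {"fiduc": {"trust"}, "paur": {"fear"}}   # invented stem -> emotions mapping

def forma_mentis(texts):
    G = nx.Graph()
    for doc in nlp.pipe(texts):
        for token in doc:
            if token.is_alpha and token.head.is_alpha and token.head is not token:
                # link each word to its syntactic head, after stemming
                G.add_edge(stem(token.text.lower()), stem(token.head.text.lower()))
    # attach emotion labels to nodes (empty set when the toy lexicon has no entry)
    nx.set_node_attributes(G, {n: toy_lexicon.get(n, set()) for n in G}, "emotions")
    return G

G = forma_mentis(["Il governo ispira fiducia", "La paura del contagio resta"])
print(sorted(G.edges()))
print(nx.get_node_attributes(G, "emotions"))
```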
Cognitive datasets. This study used two different datasets for emotional profiling, namely the Valence-Arousal-Dominance (VAD) dataset by Mohammad (2018), including 20,000 words, and the NRC Emotion Lexicon by Mohammad and Turney (2013), including 14,000 words. Both datasets were obtained through human assessment of individual words, like rating how positively/negatively/neutrally a given concept was perceived or whether a given word elicited fear, trust, etc. Combinations of valence and arousal give rise to a 2D space known as the circumplex model of emotions (Posner, et al., 2005), which has been successfully used for reconstructing the emotional states elicited by single words and by combinations of them in text. In the circumplex model, emotions are attributed to words according to their locations in the 2D space (e.g., high valence/arousal corresponds to excitement). The NRC Emotion Lexicon enables a more direct mapping, indicating the specific words that elicit an emotional state in large audiences of individuals (Mohammad and Turney, 2013). The dataset includes six basic emotions (Joy, Sadness, Fear, Disgust, Anger and Surprise) and two additional emotional states (Trust and Anticipation). Whereas the six basic emotions are self-explanatory and identified as building blocks of more nuanced emotions by Ekman's theory in cognitive psychology (Scherer and Ekman, 2014), trust and anticipation include more complex dimensions. Trust can come from a combination of mere affect towards an entity (e.g., trusting a loved one) or rather from logical reasoning (e.g., trusting a politician who behaves rationally), see also Plutchik (2003). Anticipation is a projection towards the future that can be either positive or negative, like looking forward to meeting new friends or dreading the day of an exam (Scherer and Ekman, 2014). For this analysis, emotions and emotional states are used interchangeably. Valence/arousal scores and direct emotions were attributed to words in Italian, which were then linked in the forma mentis network according to the language used by social media users.
Representing language as a network defines semantic frames and emotional auras. Representing social discourse as a complex network is advantageous. In fact, this representation conveniently enables the adoption of many network metrics for detecting text features. The simplest example is using conceptual associations to understand which emotions permeated discourse around specific concepts. For instance, Stella and Zaytseva (2020) found that students associated "collaboration" mainly with positive concepts and thus attributed to it a positive aura, i.e., a positive perception, which was confirmed by independent feedback. This study uses a more general measure of conceptual aura combining emotions and semantic frames. In a FMN, the network neighbourhood of a concept C identifies which words were associated with C by online users through syntactic and semantic associations in messages. According to semantic frame theory (Fillmore, 2006), these associations extracted from language bring contextual information which specifies how C was perceived, described and discussed by individuals. Checking the semantic content elicited by words in the network neighbourhood of C can, therefore, characterise the meaning attributed to C itself in social discourse. Hence, network neighbourhoods in a FMN represent the semantic frames attributed by individuals to concepts in language. Extracting semantic but also emotional information from these frames/neighbourhoods gives insights into people's perceptions and perspectives, i.e., the auras attributed to concepts.
--- Quantitative measuring of emotional auras.
This study reconstructed the emotional aura, or profile, of a given concept by counting how many of its associates in the FMN elicited a given emotion, analogously to past approaches (Mohammad and Turney, 2013; Stella, et al., 2020). Words linked to a negation ("non", "nessun" and "senz" in Italian) were substituted with their antonyms as obtained from the Italian WordNet. This operation preserved the flipping in meaning expressed in text when negating words. The computed emotional richness was then compared against a random expectation preserving the same empirical number of emotion-eliciting associates of a word while randomising their emotions. A collection of 1,000 random samplings was performed for every empirical richness value reported in the main text, with error bars indicating standard deviations. A z-score indicating emotional richness higher or lower than random expectation at a significance level of 0.05 was also plotted in order to provide a clear visual clue about how individual concepts were perceived in social discourse. These z-scores were organised according to a flower layout and are referenced in the text as emotional flowers, with the centre being the region z ≤ 1.96 (richness compatible with random expectation) and the petals representing emotional z-scores. Emotional flowers give an immediate visual impression of which emotions populate a given semantic frame more than random expectation. In fact, all the bars falling outside of the inner semi-transparent circle indicate an emotional richness stronger than the random baseline. Notice also that in emotional flowers every ring outside of the semi-transparent circle indicates one z-score unit after 2, i.e., the first ring outside the flower centre corresponds to a z-score of 3, and so on, making it immediate to assess the strongest emotions in a semantic frame and to attribute a z-score to them.
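The z-score computation just described can be sketched as follows; the tiny lexicon and neighbourhood are invented, and 1,000 random draws of the same number of emotion-eliciting associates stand in for the randomisation used in the study.

```python
# Minimal sketch of the emotional-richness z-score against a random baseline.
import random

toy_lexicon = {"speranz": {"joy", "anticipation"}, "mort": {"fear", "sadness"},
               "fiduc": {"trust"}, "rabbi": {"anger"}, "success": {"joy", "trust"}}

def emotion_zscore(neighbours, emotion, lexicon, n_samples=1000):
    tagged = [w for w in neighbours if lexicon.get(w)]      # emotion-eliciting associates
    observed = sum(emotion in lexicon[w] for w in tagged)   # empirical richness
    pool = list(lexicon.values())
    richness = [sum(emotion in s for s in random.choices(pool, k=len(tagged)))
                for _ in range(n_samples)]                  # same count, randomised emotions
    mean = sum(richness) / n_samples
    std = (sum((r - mean) ** 2 for r in richness) / n_samples) ** 0.5
    return (observed - mean) / std if std > 0 else 0.0

print(emotion_zscore(["speranz", "fiduc", "success", "mort"], "joy", toy_lexicon))
```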
An example of the FMN extracted from online discourse around "govern" (to govern/government) is reported in Figure 1. Figure 1 reports the network neighbourhood of "govern", i.e., the frame of semantic/syntactic associations linked with "govern" in tweets. Nodes are stemmed words and links indicate syntactic or semantic relationships. Words are coloured according to the emotion they elicit. In case one word elicits multiple emotions, the colouring is attributed according to the strongest emotion permeating a given semantic frame (as in Figure 1) or the whole social discourse (as in Tables 1A and 2A in the Appendix).
Figure 1: Users' language in tweets reflects their mental perceptions (left), reconstructed here as a forma mentis network outlining the emotions (bottom right) and semantic frame attributed to a concept, e.g., "govern" (top right). Words are emotion-coloured (cf., Methods). The number of words eliciting different emotions is reported as "emotional richness". Z-scores between empirical and expected emotional richness are reported as an emotional flower (bottom right).
In the emotional flower in Figure 1, the bar of joy reaches the first ring outside of the semi-transparent circle, i.e., joy corresponds to a z-score of 3. Reading the words in the network and considering those emotions stronger than random expectation, i.e., with bars outside of the inner white circle in the emotional flower, makes it possible to assess that, across all tweets between 1 May and 20 May, Italians discussed "govern" with more trust-, anticipation- and joy-eliciting words than expected. Jargon of different emotions also co-existed. Figure 1 also illustrates the cognitive approach adopted by this study. As schematised in Figure 1 (left), each Twitter user produces messages according to their mental lexicon, i.e., a cognitive system storing and processing linguistic knowledge and emotional perceptions about the world. Users communicate their knowledge and perceptions through language in tweets. Hence, Twitter messages contain conceptual associations and emotions. Extracting and aggregating these types of information enables the construction of a knowledge network representing social discourse, i.e., a forma mentis network (Stella, 2020). Notice that words are clustered in network communities of tightly connected concepts, as identified with the Louvain method (Blondel, et al., 2008). Every network visualisation features words translated from Italian to English. The translation process relied on the English-Italian translations already provided by the NRC Emotion Lexicon (cf., Mohammad and Turney, 2013).
--- Beyond network neighbourhoods.
FMNs make it possible to study social discourse also in terms of network centrality. In this study, frequency and closeness centrality were compared and used at the same time in order to identify prominent concepts in social discourse. Frequency is based on repeated tweets and indicates how many times single words appeared in the dataset on each day, independently of other words. Closeness depends on the network distance, i.e., the number of syntactic/semantic links, separating a word from all the other words in the network (Siew, et al., 2019). A lower distance indicates that a word is more directly syntactically related/associated to other concepts, expressing prominence in the underlying discourse or texts. Stella (2020) showed that, on benchmark texts, high closeness centrality in FMNs was able to identify text topics by highlighting prominent concepts. In cognitive network science, syntactic/semantic distance and closeness have been shown to be highly predictive of word prominence also beyond topic detection, in contexts like early word learning (Stella, et al., 2017).
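A small example helps contrast the two prominence measures: the sketch below compares raw word frequency with networkx's closeness centrality on an invented word network, in the spirit of the comparison described above.

```python
# Minimal sketch comparing word frequency with closeness centrality.
from collections import Counter
import networkx as nx

frequency = Counter(["govern", "govern", "govern", "cas", "cas", "quaranten",
                     "misur", "riapertur", "fiduc", "contag"])
G = nx.Graph([("govern", "misur"), ("misur", "riapertur"), ("govern", "fiduc"),
              ("govern", "riapertur"), ("contag", "cas"), ("cas", "quaranten"),
              ("riapertur", "cas")])                 # invented co-occurrence links

closeness = nx.closeness_centrality(G)               # higher = closer to all other words
for word, freq in frequency.most_common():
    print(f"{word:10s} frequency={freq}  closeness={closeness[word]:.2f}")
```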
Temporal analysis. Emotional profiling and forma mentis networks are applied in order to reconstruct the main emotions and ideas around the lockdown release as discussed online on each day between 1 May and 20 May, in a fashion similar to the Hedonometer by Dodds and colleagues (2011). The stream of tweets is processed chronologically. When emotions are profiled, single tweets are considered. This means considering temporal trajectories of 400k points each: one for every emotional state (e.g., fear, trust, anticipation, etc.), one for the total valence scores and one for the total arousal scores of the words in a tweet. These noisy trajectories were averaged over time. An exponentially weighted moving average was used in order to smooth noisy outliers over a short time window. The smoothing factor was chosen as an average over 10,000 attempts at minimising the mean squared error of the 1-step-ahead forecasts, each using 10,000 tweets and starting from a random time between 00:00 on 1 May 2020 and 23:59 on 20 May 2020. An average smoothing factor of 0.00075 was identified for the emotional time series, indicating the ability of the smoothed signal to detect shifts in emotions determined by an average of 1/0.00075 ≈ 1,333 tweets. For valence and arousal, an average smoothing factor of 0.0006 was detected, corresponding to shifts involving roughly 1,667 tweets. This error minimisation technique was simple enough to preserve long-term changes and trends in the time series while also smoothing out short-term fluctuations.
Emotional fluctuations. Emotional deviations were operationalised as departures from the interquartile range of all detected signals in a given time window. Notice that the filtered signals and the observed deviations from the interquartile range were not used to make forecasts or to attribute statistical significance, but only to qualitatively highlight potential shifts in social discourse. These potential deviations were then cross-validated through a frequency analysis of words, retweet counts and forma mentis emotional auras in the considered time windows.
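The smoothing step can be sketched as below; the per-tweet signal is synthetic and the smoothing factor is the average value reported above for the emotional time series, so roughly 1/0.00075 ≈ 1,333 tweets are needed for the smoothed signal to register a shift.

```python
# Minimal sketch of the exponentially weighted moving average used for smoothing.
import random

def ewma(signal, alpha):
    smoothed, s = [], signal[0]
    for x in signal:
        s = alpha * x + (1 - alpha) * s       # one-step exponential update
        smoothed.append(s)
    return smoothed

per_tweet_fear = [random.random() for _ in range(10_000)]   # synthetic per-tweet richness
smoothed_fear = ewma(per_tweet_fear, alpha=0.00075)
print(smoothed_fear[-1])
```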
--- Results
--- RQ1: Which were the main general emotions flowing in social media about the reopening?
Figure 2 reports the emotional profile of social discourse over time. Remember that the emotional profile corresponds to how rich the overall social discourse was in each emotion (emotional richness, see Methods). Importantly, non-zero signals of all emotions were found across all time windows. This means that social discourse about the reopening was never dominated by a single positive or negative emotion, like trust or fear. The reopening was rather perceived as a nuanced topic of discourse, where positive and negative emotional texts co-existed, in agreement with other studies (Lima, et al., 2020; Gozzi, et al., 2020; Stella, et al., 2020). Figure 2 indicates that sentiment mostly remains stationary over time whereas emotional richness shows more complicated dynamics, with peaks and deviations. This is the core of RQ2. Figure 2 focuses on individual, non-cyclic deviations from stationary behaviour, like peaks or deviations featured on individual days.
Emotional fluctuations unveil social denouncement, trust and joy. Several deviations from the median emotional intensity are found in different time windows and for different emotions. Before the official reopening of 4 May, social discourse registered several fluctuations in terms of fear, anger, surprise and disgust. The morning of 2 May registered a progressive increase in anger co-occurring with a spike of fear. According to Plutchik's (2003) theory of emotions, the alertness against a threat caused by fear can give rise to anger as a reaction mechanism, so that the two emotions are not independent of each other. A closer investigation of the stream of tweets reveals the proliferation of highly retweeted tweets, in the morning of 2 May, mainly about: (i) political denouncement of how the Italian government could use EU investments for the reopening, expressing alarm about "vultures" preying on the misfortune of others, and (ii) denouncement of the gender gap, criticising the fact that only 20 percent of the policy-makers enrolled by the Italian government were women. The afternoon of 2 May also registered a decrease in surprise and a spike of disgust. The most frequent words and most retweeted messages in that time window indicated the continuation of negative, critical political debate, together with messages protesting against the security measures for public businesses like restaurants, hairdressers and beauty centres. These negative trends did not impact the average joy measured on the day, which remained fairly constant over time and was expressed in several tweets conveying hope and excitement about the incoming reopening. The observed decrease in surprise taking place on 2 May corresponded to the resharing of mass media articles, starting early in the morning and explaining the new measures concretely enabling the reopening, with jargon like "regol" (rules), "chiest" (ask), "intervist" (interview) and "espert" (expert). These articles explaining future events about the reopening also contributed to increasing anticipation, i.e., an emotional projection into the future (Plutchik, 2003).
A delayed positive contagion. On 4 May, the day of the lockdown release all over Italy, emotional trends remained fairly constant over time. A drastic drop in negative emotions, co-occurring with a rise in positive ones, was found on 5 May, starting around 10 AM. A closer look at the stream of Twitter messages reveals that this massive change in global emotions was due to news tweets reporting how the contagion had slowed down over the three previous days. Messages expressing excitement about the reopening ("let's enjoy phase 2!") were the most retweeted ones on 5 May. Interestingly, positive messages also included: (i) desire for travelling, (ii) appreciation for the newfound freedom, and (iii) trustful instructions about how to use personal sanitary tools, like facemasks, for living with COVID-19. Hence, the emotional effects of the lockdown release were not observed on the day of the reopening itself, 4 May, but were rather delayed by one day and enhanced the overall flow of positive emotions on 5 May. Such a delayed and drastic alteration in the emotional profile provides evidence for a collective emotional contagion, indicating how the reopening was collectively perceived with mostly positive emotions by online users.
--- Peaks of sadness and social distancing.
Emotional trends remained mostly constant in the aftermath of the reopening, with strong fluctuations present on 11 May and 12 May.
The sudden spike in sadness and disgust registered in the afternoon of 11 May and the early morning of 12 May is related to tweets of complaint. The most retweeted messages in this time window expressed concern and complaint about a lack of clear regulations about social behaviour, exposing critical issues like large crowds assembling in public spaces and the difficulty for restaurants to guarantee social distancing. The most frequent jargon in this time window was "misur" (measure), "distanz" (distance) and "tavolin" (table). At the same time, the Twitter stream also featured news of local COVID-19 outbreaks. A smaller peak in anger and disgust was featured on 20 May and was mostly related to Twitter messages of political denouncement. In order to better understand the above emotional shifts, in the next section the same Twitter stream is analysed with the valence-arousal circumplex model (see Methods). Results are compared against the above ones obtained with the NRC Emotion Lexicon.
--- RQ2: Were there emotional shifts over time highlighted by some emotion models but neglected by others?
The above emotional fluctuations indicate changes in the global perception of social discourse that were confirmed by a closer look at the Twitter stream, indicating the power of the NRC lexicon to identify emotional transitions over time.
Figure 2 (bottom row) reports the richness in valence and arousal of the words embedded in tweets. Despite the plot range being the same as in Figure 2 (top row), both valence and arousal remained mostly constant over time, hiding the emotional peaks and fluctuations observed with the NRC Emotion Lexicon. Notice that no fluctuations were observed even when manually tuning the smoothing factor of the valence/arousal curves. The only stronger deviation observed with the valence-arousal circumplex model is on 20 May, where the drop in valence and the increase in arousal are compatible with negative/alarming emotions like anger, in agreement with what was found with the NRC Emotion Lexicon.
Reopening was a positive event but no "happy ending". The above results also indicate that the reopening after the lockdown was met with a positive emotional contagion over social media. The deluge of trust and joy, in combination with anticipation, indicates a positive and hopeful perception of restarting after a lockdown. The restart itself was not a happy ending, though. Negative emotions indicated a deluge of complaints and social denouncement about gender disparities, risks of inappropriate behaviour and difficulties in keeping up with social distancing, as well as political denouncement. In order to better understand how social discourse was structured across days, conceptual prominence over time is investigated in the next section.
--- RQ3: Which were the most prominent topics of social discourse around the reopening?
Prominent words combine fears and hopes about restarting. Tables 1A and 2A (cf., Appendix) report the most frequent concepts and the words with the highest closeness centrality in FMNs, respectively, as extracted from daily social discourse around #fase2. Words are coloured according to the emotion they elicit (see Methods). The negator "not" was consistently ranked first in all cases and is not reported, for the sake of visualisation. Notice how on 3 May the most frequent word in social discourse was "doman" (tomorrow), indicating the anticipation expressed by online users towards the reopening on 4 May. The concept of "govern" (government, to govern) was highly ranked by both frequency and closeness centrality across all days.
This indicates that a substantial fraction of tweets was linked to the governmental indications and measures for the reopening, as also identified in the emotional profiling. Jargon related to the COVID-19 pandemic, like "cas" (case), "contag" (contagion) and "quaranten" (quarantine), remained highly central across the whole period, indicating that social discourse about the reopening was strongly interconnected with news about the contagion, as also indicated by Gozzi and colleagues (2020). Concepts like "nuov" (new) and "mort" (death) ranked high in both frequency and closeness on some days because of reported news about local COVID-19 outbreaks. Inspirational jargon like "respons" (responsible), "affront" (to face), "entusiasm" (enthusiasm) and "sper" (hopeful) was prominent across the whole time period and according to both measures. This quantitative evidence indicates that social discourse was strongly focused on a concretely positive attitude towards a responsible reopening.
Frequency captures more negative jargon. As indicated by emotional profiling, these prominent and positive concepts coexisted with prominent but negative concepts, like "vergogn" (ashamed), "critica" (criticism) and swearing. These concepts were captured mostly by frequency rather than by closeness centrality, indicating the proliferation of negative messages repeating these concepts with less contextual richness than positive concepts (which end up being more central in FMNs). An example of this trend is on 14 May, where frequency captures mostly blaming concepts whereas closeness identifies more general topics like "govern", "far" (do) and "misur" (measures). This difference calls for a more systematic comparison of frequency and closeness in identifying word prominence.
--- Closeness captures contextual diversity.
Frequency and closeness correlated positively across the whole period, with a mean Kendall tau of 0.67 ± 0.04 (p < 10^-6) averaged over all 20 days. This value indicates that words ranked highly by closeness centrality tended also to be ranked highly by frequency. As an example, a scatter plot of the log frequency and closeness centrality of individual words is reported in Figure 1A in the Appendix.
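The rank comparison reported above can be reproduced in spirit with the short sketch below, where Kendall's tau is computed between invented frequency and closeness scores for the same vocabulary.

```python
# Minimal sketch of the Kendall tau comparison between frequency and closeness.
from scipy.stats import kendalltau

freq_scores      = [120, 95, 80, 42, 30, 11, 9]                # invented word frequencies
closeness_scores = [0.41, 0.38, 0.30, 0.33, 0.21, 0.15, 0.12]  # invented closeness values

tau, p_value = kendalltau(freq_scores, closeness_scores)
print(f"Kendall tau = {tau:.2f}, p = {p_value:.3f}")
```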
The correlation between the two quantities is not perfect (i.e., not equal to 1). On the one hand, closeness better captures contextual richness (Stella, et al., 2017), i.e., the number of different semantic contexts and frames featuring a concept; an example is meaning modifiers that commonly occur in different contexts, like "non", which tend to have high closeness. On the other hand, high frequency but lower closeness identifies words with very narrow semantic frames, appearing always within the same context and bearing the same meaning, e.g., "shock" and "disordine" (disorder). Combining closeness and frequency can therefore highlight more nuances of the meanings attributed to words through a complex network approach. While frequency outlines that concepts like "govern" (government/to govern), "cas" (case) and "quaranten" (quarantine) remained highly ranked between 1 May and 20 May, closeness identified different dynamics for "quaranten". In the first half of May, "quaranten" became monotonically less central in the forma mentis networks of daily social discourse, registering a decrease of almost 200 positions in its rank. This difference indicates that quarantine kept being a frequent concept in social discourse but appeared in fewer and fewer contexts, gradually becoming more peripheral in the discussion. This decrease reached a halt and an inversion of tendency on 15 May, after which "quaranten" acquired a higher closeness. Investigating the Twitter stream reveals that the increase in the rank of quarantine registered after 15 May is due to many tweets reporting the decision of the Italian government to accept tourists with no obligation of self-quarantine.
--- Closeness highlights dynamics invisible to frequency.
Figure 3: Closeness and frequency ranks over time of "govern" (magenta), "case" (pink) and "quarantine" (green), together with other prominent concepts on 4 May.
--- RQ4: How did online users express their emotions about specific topics in social discourse?
The previous sections characterised social discourse across days. This section instead explores how online users described specific concepts on a single day.
What preoccupied online users on the vigil of the reopening? The emotional profiles and the semantic frame/FMN neighbourhood around "worried" ("preoccup"), as extracted from the stream, are reported in Figure 4. When talking about their preoccupations about #fase2, Italians displayed different emotional profiles between 3 May and 4 May. The day before the reopening, trust, anger and fear coexisted (see also Figure 2A in the Appendix). The semantic content of the FMN contains information about the main concepts eliciting these emotions. Negative emotions mainly targeted, and concentrated around, the difficulties of reopening ("difficulties", "complain", "fear"), which were projected onto and linked with "tomorrow". On the day of the reopening, 4 May, the anger of the previous day vanished and more hopeful words appeared (e.g., "success" and "hope"). On 4 May, preoccupation was linked to the institutions, featuring fear and sadness for their "absence". The links involving "not" contrasted the negative meaning of "worried" with positive, rather than negative, associations like "opportunity", "alive" and "respect". The links between "worried", "commerce" and "plight" also indicate that, even on the day of the reopening, social media expressed concern about the economic repercussions of the lockdown for commerce. Jargon related to the contagion ("coma", "case", "contagious") indicates a concern about the COVID-19 contagion present even on the day of the reopening.
Ending the quarantine was not a "happy ending". As reported above, the concept of "quaranten" (quarantine) became less and less prominent in social discourse in terms of closeness centrality, i.e., it became peripheral in the flow of social discourse by being presented in fewer and fewer different contexts. Did its emotional aura also undergo some transformation? Figure 5 compares the semantic-emotional frames of "quaranten" (quarantine) on 1 May (top) and 6 May (bottom). Before the reopening, social discourse around quarantine elicited trustful associations of anticipation towards the future, involving the government and celebrating the success of the quarantine in slowing down the contagion ("success", "volunteer", "gorgeous"). Traces of social denouncement were present too, with links towards anger-related jargon like "ashamed", "damage" and "rebel". However, the registered emotional richness of anger around "quaranten" on 1 May was compatible with random expectation (see also Figure 3A in the Appendix).
This positive perception of the quarantine did not last. Two days after the reopening, the threat of new cases of contagion was prominently featured in social discourse around the quarantine, as captured by the triad with "newcomer" and "contagion" and also by other negative associates like "isolated", "death", "coffin" and "long"-"forgotten". In a few days, positive emotions around the quarantine dissipated and were replaced by sadness. A closer look at the stream of tweets reveals that this flicker of sadness originated in news media announcements reporting local outbreaks of COVID-19. Reopening the country with COVID-19 still circulating among the population disrupted the positive "happy ending" perception of the (end of the) quarantine.
An unwavering yet nuanced trust in politics. Different emotions can coexist not only in the global social discourse but also around specific concepts. An example is "politics" ("polit"), which consistently featured trust in its semantic/emotional frame higher than random expectation between 1 May and 20 May (z-scores higher than 1.96). As reported in Figure 6, trust in politics on 2 May was mainly focused around the government ("govern"), its crew of experts ("expert") and its strategies for containing the contagion and countering the economic repercussions of the lockdown (see the links with "launch", "plan" and "economy"). Although persisting over time, trust also co-existed with other emotions surrounding "politics" (see also Figure 4A in the Appendix). For instance, on 11 May, "politics" featured several associations with anger-eliciting words, like "dictatorship", "garbage", "controversial" and "ashamed", all concepts expressing political denouncement against controversial political measures of the lockdown. The FMN on 11 May also reveals the source of this anger: as reported in the tweets registered during that time window, politics was considered "responsible" for, and expected to find, "money" for preventing small businesses from going "bankrupt". This burst of anger (with z-score > 1.96, cf., the emotional flower in Figure 6, bottom) is another example of a flickering emotion. In this case, notice that anger and sadness co-existed with trust, indicating a persistent perception of trust in politics in online users' discussions. It has to be underlined that, as reported in the emotional flower in Figure 1, "govern" (to govern/government) also featured a trustful emotional aura.
--- RQ5: Were messages expressing different emotions reshared in different ways?
Previous studies have already established that valence can influence the extent to which tweets are re-shared by online users (Ferrara and Yang, 2015; Brady, et al., 2017). In particular, Ferrara and Yang (2015) found a positive bias on Twitter, i.e., a tendency for users to share messages with a positive sentiment/valence. This section tests whether differences in tweet sharing also hold beyond valence, across the whole spectrum of emotions.
Considering the emotions of moderately and highly retweeted messages. Attention was given to the most retweeted messages and their emotional content. After distributing tweets according to their retweet count, focus was given to those above the 98.5th percentile, which included 5,942 tweets with a median of 205 retweets, a minimum of 100 re-shares and a maximum of 2,822 retweets. Tweets above the median of 205 retweets were considered highly retweeted (HR).
Tweets below the median of 205 re-shares were considered moderately retweeted (MR). Using the NRC lexicon, the emotional profile of each single HR and MR tweet was computed. For every emotion, the two distributions of emotional richness resulting from HR and MR tweets were compared. At a significance level of 0.05, highly retweeted messages about the Italian reopening exhibited:
1. A lower emotional richness in anger than moderately retweeted messages (mean HR: 0.0452, mean MR: 0.0488, Mann-Whitney stat. 2.09 × 10^6, p = 0.0124);
2. A higher emotional richness in fear than MR messages (mean HR: 0.0874, mean MR: 0.0925, Mann-Whitney stat. 1.92 × 10^6, p = 0.0225);
3. A higher emotional richness in joy than moderately retweeted messages (mean HR: 0.0874, mean MR: 0.0925, Mann-Whitney stat. 1.91 × 10^6, p = 0.0068).
For all the other emotions, namely disgust, sadness, anticipation, surprise and trust, no statistically significant difference was found between highly and moderately retweeted messages.
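A minimal sketch of this comparison is given below: per-tweet emotional richness for the HR and MR groups is compared with a Mann-Whitney U test, using synthetic richness values in place of the real ones.

```python
# Minimal sketch of the HR vs. MR comparison with a Mann-Whitney U test.
import random
from scipy.stats import mannwhitneyu

random.seed(1)
hr_joy = [random.betavariate(2, 18) for _ in range(3000)]   # synthetic HR joy richness
mr_joy = [random.betavariate(2, 20) for _ in range(3000)]   # synthetic MR joy richness

stat, p = mannwhitneyu(hr_joy, mr_joy, alternative="two-sided")
print(f"U = {stat:.0f}, p = {p:.4f}")
```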
Fear subverted the positive bias of resharing. In the current social discourse, tweets shared significantly more by online users elicited more joy, higher fear and less anger. These results provide evidence confirming and extending the positive bias previously identified only for sentiment by Ferrara and Yang (2015). According to the circumplex model (Posner, et al., 2005), joy is an emotion depending on positive sentiment, whereas fear and anger live in the space of negative sentiment. Finding that people tended to reshare more tweets with higher joy and lower anger represents additional confirmation of the positive bias whereby people tend to re-share content richer in positive sentiment. However, this tendency does not hold across the whole spectrum of emotions. Fear subverted the positive bias: online users tended to re-share messages richer in fear, and thus in negative sentiment.
--- Discussion
The main take-home message of this investigation is that the post-COVID-19 reopening in Italy was not a "happy ending", since social discourse highlighted a variety of semantic frames, centred around several issues of the restart and mixing both positive and negative emotions. This rich semantic/emotional landscape emerges as the main novelty of this approach, which transparently links together emotions (rather than more simplistic sentiment patterns) with the specific semantic frames evoking them, as extracted from the language of social discourse. This extraction relies on a fundamental assumption: text production, and therefore social media, opens a window into people's minds (Aitchison, 2012; Ferrara and Yang, 2015). In a time of crisis, like during a pandemic, being capable of seeing through such a window is fundamental for understanding how large audiences are coping with the emergency (Bonaccorsi, et al., 2020; Gallotti, et al., 2020; Gozzi, et al., 2020). This challenge requires tools that provide a transparent representation of knowledge and emotions as expressed in social discourse. This work used computational cognitive science for seeing through the window of people's minds with the semantic/emotional analysis of tweets (Stella, et al., 2020), without explicitly relying on machine learning. The analysis performed here on social discourse in 400k Italian tweets, including #fase2 (phase 2) and produced between 1 May and 20 May 2020, provides several important points for discussion.
The reopening was not a happy ending. As outlined within RQs 1, 3 and 4, emotional profiling provided evidence for a positive emotional contagion happening online after the day of the restart, with levels of trust, joy, happiness and anticipation all simultaneously higher than previously registered. This positive emotional contagion did not last, and it did not feature the complete disappearance of negative emotions, like fear or anger, which rather co-existed with the others in social discourse. The coexistence of different types of emotional trends was also found in previous works about COVID-19 (Stella, et al., 2020) and is not surprising, given the unprecedented range of socio-economic repercussions that the pandemic brought not only upon the health system but also upon social mobility and the economy (cf., Bonaccorsi, et al., 2020; Pepe, et al., 2020). What is more interesting is that such a constellation of different positive, negative and neutral emotions is not focused only on the concept of "reopening" but is rather distributed, or scattered, across circulating news and key topics of social discourse. This scattering creates a methodological challenge for understanding the targets and actors of these emotions. News flows and politicians were found to be relevant in driving emotions like disgust (see RQ1), which were invisible to sentiment analysis (see RQ2). Enhancing standard frequency-based lexical analysis with closeness (RQ3) highlighted a plurality of key concepts, brought by news and users' messages, being discussed in different ways across days. The semantic frames reconstructing how online users perceived such prominent concepts revealed a set of flickering emotions (RQ4), which were assessed in detail thanks to forma mentis networks. Notice that this approach focused on the cognitive structure of the language used by online users and not on their identity. The flickering emotions and conceptual prominence reported here might be the effect of a "topic drift" promoted by a handful of influential users, who brought attention to specific aspects of the reopening by launching additional hashtags or by simply targeting specific users while creating flaming content or trolling. The latter scenario has been frequently unearthed in previous studies focusing on the Twittersphere (Zelenkauskaite and Niezgoda, 2017; Bessi and Ferrara, 2016; Stella, et al., 2018; Ferrara, 2020), which showed how trolling and social bots might be capable of depicting the political climate in ways rich in negative sentiment and anger-related emotions. The specific identification of the exact actors enabling emotional contagion and topic drift represents a very interesting research direction for future work.
The limits of valence/arousal in social discourse analysis. On the methodological side, the results in RQs 1 and 2 indicate that the NRC Emotion Lexicon (Mohammad and Turney, 2013) is considerably more powerful than the circumplex model in detecting spikes and shifts in social discourse. This difference can be explained by the observation that social discourse is different from a single text or a book. In social media, multiple individuals can participate in a conversation, often reporting different angles, perspectives or stances about the same topic. Hence, whereas in a book a single author usually reports a stance with a predominant emotional tone (Berman, et al., 2002), in social discourse multiple tones can co-exist (Kalimeri, et al., 2019) and they could average out when considering valence/arousal.
For instance, anger in the circumplex model corresponds to high arousal (excitement) and negative valence (negativity), whereas trust corresponds to low arousal (calmness) and positive valence (positivity). The coexistence of anger and trust, as found in the current dataset with the NRC lexicon, would average out the opposing contributions of angry/trusting messages. Hence, the current results provide strong evidence for the necessity of adopting emotion-specific tools for the analysis of social discourse beyond valence/sentiment. While extremely useful for single-author texts, the valence-arousal circumplex model of emotions might not be suitable for the investigation of highly nuanced emotional profiles in social discourse, where multiple positive or negative emotions might co-exist. Exploring the eight basic emotional dimensions of Twitter discourse, in terms of fear, anger, disgust, anticipation, joy, surprise, trust and sadness (Mohammad and Turney, 2013), highlighted spikes in social and political denouncement of gender and economic inequality, as well as bursts of news media announcements about the COVID-19 pandemic. These phenomena went unnoticed when considering the valence and arousal of social discourse (Posner, et al., 2005), underlining the necessity to move from general sentiment/arousal intensity approaches to more comprehensive emotional profiling investigations of social discourse.
Cognitive networks and stance detection. This whole study revolves around giving structure to social discourse through complex networks. This procedure enabled a quantitative understanding of people's perceptions and stances towards various aspects of the nationwide reopening. To this aim, textual forma mentis networks were used, reconstructing syntactic, semantic and emotional associations between concepts as embedded in text by individuals (Stella, et al., 2019; Stella and Zaytseva, 2020; Stella, 2020). As explored within RQ3, closeness centrality in networks built from social discourse on each day consistently identified as central both positive concepts, related to the government, the willingness to restart and the necessity of establishing measures for rebooting the economy and social places, and negative words, related to attention towards the contagion and new cases. Word frequency captured analogous prominent concepts but also tended to highlight more negative words, expressing political and social denouncement. Closeness, based on conceptual distance, and frequency, based on word counts, did not perfectly correlate with each other and even offered different information about how conceptual prominence evolved over time. An example is "quarantine", which became progressively used in fewer and fewer different contexts, mainly related to local COVID-19 outbreaks and the decreasing epidemic curve, while remaining consistently highly frequent in discourse over time. Indeed, frequency neglects the structure of the language used for communicating ideas and emotions, so it is expected to provide different results when compared with closeness. Consider the simple example of a collection of 100 tweets, with 80 of them being the repetition of "I hate coronavirus" and the remaining 20 linking "coronavirus" with medical jargon in different ways (e.g., "One of the symptoms of the novel coronavirus affliction is cough", "The novel coronavirus is a pathogen originated in animals and transmitted to man", etc.).
Taking into account only frequency would identify social discourse as dominated by "hate" and "coronavirus", but it would miss the constellation of less frequent words giving meaning to and characterising "coronavirus" through medical links and contextual associations, which are instead captured by closeness. The empirical and methodological aspects outlined above underline the necessity of considering not only frequency but also other structural measures of language, like network closeness, in order to better assess opinion dynamics and online public perceptions over social media.
Flickering emotions surrounded specific facets of the reopening. As reported within RQ4, forma mentis networks also highlighted how people's perceptions of specific aspects of the reopening changed over time. On the day of the lockdown release, the main preoccupations of Italians focused on the economy but were strongly contrasted by hopeful messages, semantically framing the reopening as a fresh new start for getting back to normality. Hope did not embrace all aspects of the reopening. Announcements of local COVID-19 outbreaks altered the perception of "quaranten" (quarantine), which was previously perceived as successful in reducing contagion. Trust vanished and was replaced with a sad perception of quarantine, related to sudden, local outbreaks (see also Gozzi, et al., 2020). Italians also displayed an unwavering trust in politics and the government across the whole considered time window. Notice that a persistent trust in politics and governments can be beneficial in guiding a whole nation towards successfully reopening after a nationwide lockdown (Massaro, 2020; Lima, et al., 2020). However, trust co-existed with other, negative emotions on some days, indicating a nuanced perception portrayed in social discourse, combining trust in the institutions with political denouncement, anger and sadness about delays or a lack of clarity. The microscopic patterns observed in RQ4 indicate that the general analysis of social discourse in terms of emotional profiling is not enough for understanding the complex landscape of public perception in social media. A complex network approach, structuring concepts and emotions around specific events, represents a promising direction for future research on social media perception and dynamics (see also Arruda, et al., 2019; Brito, et al., 2020; Stella, 2020).
Resharing behaviours and fear. As reported in RQ5, this study also investigated user behaviour in sharing content with certain emotional profiles. Tweets richer in joyful concepts were found to be more frequently re-shared by users, while the opposite was registered for anger. Retweeting more joyful and less angry tweets is compatible with the positive bias for users to retweet tweets with positive sentiment found by Ferrara and Yang (2015). However, this bias was subverted by fear, as in the current analysis online users were found to re-share more those messages richer in negative, fearful jargon. This tendency might be due to the strong affinity of the considered tweets with the COVID-19 contagion, a phenomenon met with fear and panic over social media (Stella, et al., 2020; Lima, et al., 2020). The observed pattern might therefore be a symptom of panic-induced information spreading (Scherer and Ekman, 2014). These distinct behavioural patterns mark a sharp contrast in the way that different emotions work over social media.
Future research should investigate not only the number of retweets but also the depth of content spreading in order to better understand how different emotions pervaded the Twittersphere.
--- Limitations of this study
The current analysis presents four main limitations, which are discussed here in view of potential future research work.
--- Accounting for cross-linguistic variations in emotions. This study investigated tweets in Italian by considering cognitive datasets, like the NRC Emotion Lexicon and the VAD Lexicon, which were not built specifically from native Italian speakers. In fact, these datasets were obtained in mega-studies with English speakers and then translated across different languages (cf., Mohammad and Turney, 2013; Mohammad, 2018). From a psycholinguistic perspective, translation might not account for cross-linguistic differences in the ways specific concepts are perceived and rated (Aitchison, 2012). In the absence of large-scale resources mapping words to emotions directly from Italian native speakers, the above translations represent a valuable alternative, successfully adopted also in other studies (Stella, et al., 2020). With the advent of Mechanical Turk and other platforms for realising psycholinguistic mega-studies, future research should be devoted to obtaining emotional lexica specifically tailored to Italian and to languages other than English.
Focusing on user replies. The considered dataset mapped only tweets incorporating the #fase2 (phase 2) hashtag and did not consider user replies without that hashtag. This limitation means that the social discourse investigated here was mostly generated by individual users and was not the outcome of user replies. As a consequence, by construction, the considered dataset is more focused on reporting the plurality of individual perceptions about the reopening, without considering trolling or debates spawning from post flaming (Stella, et al., 2020) or from malicious social bots (Ferrara, 2020). Notice also that the dataset included retweets and user mentions, which contributed to discussions between users. Future studies might focus more on the conceptual/emotional profiling of users' storylines and discussions.
Combining networks and natural language processing. From a language processing perspective, this study focused on extracting the network structure of syntactic and semantic relationships; it included word negation and also tracked meaning modifiers. However, the current analysis did not amplify or reduce emotional richness according to other features of language like punctuation or adverbs (e.g., distinguishing between "molto gioioso"/very happy and "gioioso"/happy), as was done in other studies (Stella, et al., 2018). Despite this lack of fine structure, the emotional profiles built and analysed here were still capable of highlighting events in the stream of tweets, like the proliferation of messages about social/political denunciation or strong fluctuations in the perception of specific aspects of the reopening as promoted by news media, e.g., local outbreaks of COVID-19 cases or quarantine-less tourism. More advanced methodologies combining the network approach and natural language processing (cf., Vankrunkelsven, et al., 2018) would constitute an exciting development for a more nuanced understanding of emotions in social discourse.
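As a hedged illustration of the modifier handling that the authors note is missing from the current pipeline, the toy sketch below scales a word's emotion weight when it is preceded by an intensifier such as "molto"; the tiny lexicon, the weights and the scaling factors are invented for the example.

```python
# Toy sketch: amplify or dampen emotion weights based on a preceding modifier.
INTENSIFIERS = {"molto": 1.5, "poco": 0.5}             # invented scaling factors
EMOTION_LEXICON = {"gioioso": {"joy": 1.0}, "triste": {"sadness": 1.0}}

def emotion_profile(tokens):
    profile, boost = {}, 1.0
    for tok in tokens:
        if tok in INTENSIFIERS:
            boost = INTENSIFIERS[tok]                  # remember the modifier
            continue
        for emotion, weight in EMOTION_LEXICON.get(tok, {}).items():
            profile[emotion] = profile.get(emotion, 0.0) + weight * boost
        boost = 1.0                                    # modifier applies to the next word only
    return profile

print(emotion_profile("molto gioioso".split()))        # {'joy': 1.5}
print(emotion_profile("gioioso".split()))              # {'joy': 1.0}
```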
Profiling the emotions of different discourse dimensions. Another limitation of the study is that it does not explicitly relate emotions and concepts to specific aspects of social discourse like knowledge transmission, conflict expression or support. The coexistence of hopeful, angry and fearful patterns highlighted in this study indicates an overlap of these different dimensions of conversation in social discourse. A promising approach for uncovering these dimensions and identifying the emotions at work behind conflict, knowledge sharing and support for the reopening would be the application of recent approaches to text analysis relying on deep learning (cf., Choi, et al., 2020).
--- Conclusions
This study reconstructed a richly nuanced perception of the reopening after the national lockdown in Italy. The Italian Twittersphere was dominated by positive emotions like joy and trust on the day after the lockdown release (RQ1), in an emotional contagion dominated by hopeful concepts about restarting. It was not a happy ending, however (RQ3). Emotions like anger, fear and sadness persisted and targeted different aspects of the reopening, like sudden rises in the contagion curve, economic repercussions and political denunciation, even fluctuating from day to day (RQ4). Users' behaviour in content sharing was found to promote the diffusion of messages featuring stronger joy and lower anger but also expressing more fearful ideas (RQ5). This complex picture was obtained by giving structure to language with cognitive network science (Stella, 2020) and emotional datasets (Mohammad and Turney, 2013). Whereas the valence/arousal model of emotions was unable to detect emotional shifts (Mohammad, 2018), the NRC Emotion Lexicon and its eight emotional states coloured a richly detailed landscape of global and microscopic networked stances and perceptions (RQ2). Reconstructing and investigating the conceptual and emotional dimensions of social discourse is key to understanding how people live through times of transition. This work opens a simple quantitative way for accessing and exploring these dimensions, ultimately giving a structured, coherent voice to online users' perceptions. Listening to this voice represents a valuable cornerstone for future participatory policy-making, using social media knowledge as a valid tool for facing difficult times.
--- About the author
Massimo Stella is a lecturer in computer science at the University of Exeter and a scientific consultant and founder of Complex Science Consulting. His research interests include cognitive network science and knowledge extraction models for understanding cognition, language and emotions. He has published 33 peer-reviewed papers and has a Ph.D. in complex systems simulation from the University of Southampton (U.K.). E-mail: massimo [dot] stella [at] inbox [dot] com
--- This Appendix gathers tables and supporting information of relevance for the results reported in the main text.
--- Table 1a: Daily top-ranked concepts according to frequency. Higher frequency indicates higher occurrence in tweets. Words eliciting negative (positive) emotions are in warm (cold) colors (see also Figure 1 in the main text).
--- Table 2a: Daily top-ranked concepts according to closeness centrality. Higher closeness indicates higher richness of different contexts in tweets. Words eliciting negative (positive) emotions are in warm (cold) colors (see also Figure 1 in the main text).
Social discourse and reopening after COVID-19: A post-lockdown analysis of flickering emotions and trending stances in Italy
Although the COVID-19 pandemic has not yet been quenched, many countries lifted nationwide lockdowns to restart their economies, with citizens discussing the facets of reopening over social media. Investigating these online messages can open a window into people's minds, unveiling their overall perceptions, their fears and their hopes about the reopening. This window is opened and explored here for Italy, the first European country to adopt and then release a nationwide lockdown, by extracting key ideas and emotions over time from 400k Italian tweets about #fase2, the reopening. Cognitive networks highlighted dynamical patterns of positive emotional contagion and denunciation of inequality that were invisible to sentiment analysis, in addition to a behavioural tendency for users to retweet either joyous or fearful content. While trust, sadness and anger fluctuated around quarantine-related concepts over time, Italians perceived politics and the government with a polarised emotional perception, strongly dominated by trust but occasionally featuring also anger and fear.
Background
Mental disorders account for 16% of the global disease burden in adolescents. The onset of half of all cases of mental disorders occurs by the age of 14 years, and the onset of 75% of all cases occurs by the mid-20s [1]. Adolescence is a period of considerable physical, psychological, cognitive, and sociocultural change and an expected period of crisis [2]. The natural transition from childhood to adult life could mask some mental health symptoms. Most mental disorders go undetected, carrying their consequences into adulthood and causing functional impairment [1]. Mental health problems can be divided into externalizing and internalizing behaviour problems [3]. The first group is characterized by behaviours that target the environment and others. In internalizing problems, behaviours target the individual, including common mental disorders and post-traumatic stress disorder. Common mental disorders correspond to a group of symptoms, including anxiety, depression, and somatic complaints, but not necessarily a pathology; common mental disorders are highly prevalent [4]. A systematic review estimated the prevalence of past-year and lifelong common mental disorders worldwide as 17.6% and 29.2%, respectively [5]. A study conducted in Brazil with adolescents showed a prevalence of common mental disorders of 30.0% [6]. Post-traumatic stress disorder is also a significant health condition that affects children and adolescents. It consists of the presence of intrusive thoughts relating to a traumatic event, avoidance of reminders of the trauma, hyperarousal symptoms, and negative alterations in cognitions and mood [7]. A meta-analysis showed that the overall rate of post-traumatic stress disorder in this group was 15.9% (95% CI 11.5-21.5) [8]. Another meta-analysis that focused on delayed post-traumatic stress disorder found that the proportion of post-traumatic stress disorder cases with delayed onset was 24.8% (95% CI 22.6-27.2%) [9]. Understanding the determinants of mental disorders is not an easy task, since these disorders are considered multifactorial phenomena. The literature has pointed out that genetic characteristics, the history of child development, and contextual factors are the main drivers of the development of mental illness among adolescents [10]. Among contextual factors, those considered the most important are low socioeconomic level, family conflicts and victimization by different forms of violence [11]. Adolescents can be especially vulnerable to community violence and its consequences [12]. At this stage, youths' circulation outside the home and without their families increases [13]. Inexperience, emotional immaturity and the need to test limits, combined with this increase in circulation in community spaces, could lead to exposure to violence and maximize its mental health effects. The increase in community violence in recent years is a global problem, and such violence is most frequent in low- and middle-income countries [14]. This review will focus on one contextual factor influencing mental disorders in adolescence: community violence [15,16]. Community violence is a type of interpersonal violence that occurs among individuals outside of personal relationships. It includes acts that occur in the streets or within institutions (schools and workplaces) [17]. In addition, community violence can be experienced directly (victimization) or indirectly (witnessing or hearing about it).
Estimating the impact of exposure to community violence on adolescents' mental health has been at the core of a large body of research. Two previous meta-analyses showed a mild to moderate, positive effect of community violence on adolescents' mental health [18,19]. However, these associations need to be confirmed, since many primary studies have been published since 2009. Additionally, there are still significant gaps to be addressed. For instance, it is not clear whether different degrees of proximity to community violence (victimization, witnessing, or hearing about it) influence mental health outcomes (depression, anxiety, and post-traumatic stress disorder) at different magnitudes. Moreover, it is not clear whether gender, race and age can moderate this relationship, nor whether other factors such as family constitution and interpersonal relations do so. This review's main objective is to systematize the scientific literature that has estimated the impact of community violence on adolescents' mental health. Other goals are (i) to investigate whether different proximity to community violence is associated with different magnitudes of common mental disorders or post-traumatic stress disorder and (ii) to identify whether gender, age, and race moderate the association between community violence and internalizing symptoms.
--- Methodology
All methods were carried out in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analysis Protocols (PRISMA-P) 2015 checklist and the Joanna Briggs Institute Reviewers' Manual, Chapter 7: Systematic reviews of aetiology and risk [20,21]. The protocol is registered in the International Prospective Register of Systematic Reviews (PROSPERO), CRD42019124740. The review question was: 'Are adolescents exposed to higher levels of community violence at higher risk of developing internalizing mental health symptoms?'
--- Eligibility criteria
Population
Following the World Health Organization (WHO) classification for adolescence, studies were selected if adolescents in the sample were aged 10 to 24 years. To be included in the review, adolescents participating in the studies needed to be in this age group at the time of outcome measurement [22]. There were no exclusion criteria.
--- Exposure of interest
Our exposure of interest is community violence. Community violence events that occurred inside institutions, such as schools and workplaces, and events of a sexual nature, such as rape or other types of sexual aggression, were excluded. This choice was based on the fact that these types of community violence have different effects and magnitudes on adolescent mental health [23][24][25][26]. The inclusion criteria were original studies measuring community violence through questionnaires (answered by adolescents, parents, relatives, teachers, or professionals responsible for the child) or crime rates. The exclusion criteria were original studies that included other types of violence, such as domestic violence, bullying, or sexual violence, that could not be separated from community violence. Comparison groups included adolescents not exposed to community violence or exposed at a lower level. There were no exclusion criteria for comparison.
--- Outcomes/dependent variables
This review considered studies that included internalizing symptoms as the primary outcome, represented by post-traumatic stress disorder, common mental disorder symptoms, depression, and anxiety.
The inclusion criteria were studies that measured mental health symptoms through a questionnaire administered to the adolescents themselves, their parents, teachers, or professionals related to them and that reported an association measure for the outcome. Studies whose association measures came from unadjusted regression models were excluded.
--- Study design
This review included the following study designs: longitudinal, cross-sectional, and case-control. Case reports, case series, reviews, qualitative methodologies, interventions, descriptive studies, and methodologic studies were excluded.
--- Information sources
The search was performed in six allied health research databases: Medline (accessed through PubMed), PsycINFO, Embase, LILACS (Literatura Latino-Americana e do Caribe em Ciências da Saúde), Web of Science, and Scopus. Regarding grey literature, only theses and dissertations were included. These were identified in the databases above, and "ProQuest Dissertations and Theses" was used to search for full texts. The search was conducted on February 5th, 2019, and updated on January 14th, 2021; no filters for year of publication or language were applied. After the third phase of selection, all studies included in the review had their reference lists analysed by two independent researchers to search for additional works.
--- Search strategy
Search terms were based on the review question and were constructed with a librarian (APPENDIX I). The main concepts were as follows: "adolescents" OR "youth" OR "teenagers" AND "community violence" OR "urban violence" OR "neighborhood violence" AND "mental health" OR "anxiety" OR "depression" OR "post-traumatic" OR "internalizing" OR "psychological symptoms". A librarian also worked on obtaining the full texts, searching bibliographic databases and libraries and contacting authors.
--- Study selection
Study selection was carried out in three stages: title, abstract and full text. During all phases, two researchers performed critical readings to apply the pre-established inclusion and exclusion criteria. All stages were preceded by a pilot that included 10% of the total number of works in each phase (concordance rate 80-97%). In the first and second stages of selection, any records with disagreements were carried forward. At the second stage of selection, we decided to exclude externalizing outcomes. In the third stage, we discussed all the discrepancies; when discrepancies remained, a third researcher was called. All reasons for exclusion were registered. The authors of five studies were contacted for clarification. The corresponding authors of each work in which queries arose during the selection phase were contacted by e-mail. In cases where we did not receive a response, a new e-mail was sent 15 days later. The queries referred to the presence of questions about sexual and school violence in the violence questionnaires and to a lack of reported confidence intervals (CIs) in the studies.
--- Data extraction
Data were extracted using EpiData 3.1 with a standardized form tested in the pilot. Extracted information included the following: (i) study design, setting, times of measurement and recruitment; (ii) population demographics; (iii) exposure characteristics - classification subtypes and measurement instrument; (iv) comparison group; (v) outcomes - types and measurement instruments; and (vi) association measures. Again, two review researchers worked independently. All papers included at this phase were discussed.
In two studies, a third researcher was consulted to decide about discrepancies. At the end of the extraction phase, the 42 studies were divided into two groups: 21 studies with complete information that were included in the meta-analysis and 21 studies with incomplete information included only in the qualitative synthesis.
--- Assessment of methodological quality
The quality of the studies was also evaluated independently by two researchers. The forms used were adaptations, also tested in the pilot phase, of predefined quality assessment forms for cohort/case-control studies and descriptive studies published in the Joanna Briggs Institute Reviewers' Manual [27]. Studies were classified into three categories: low, intermediate, and high quality. The researchers defined the cut-off points; all questions had the same weight in the final score. Discrepancies were discussed, and a consensus was achieved in all cases. Critical appraisal tools are presented in APPENDIX IV.
--- Synthesis of the results
Results are presented as a qualitative synthesis. A subgroup of 21 studies underwent quantitative synthesis. Forest plots were displayed to visualize the results. Heterogeneity was evaluated with the I² statistic, which describes the proportion of variation across the studies due not to chance but rather to heterogeneity [28,29]. The higher the percentage, the higher the level of heterogeneity. Because heterogeneity was still high when adopting the random-effects model, reasons for this were investigated and subgroup analyses were conducted, with stratification by proximity to community violence (witness and victim) and by type of outcome (post-traumatic stress disorder, depression and internalizing symptoms). Because heterogeneity remained high in almost all forest plots, it was not possible to construct funnel plots to evaluate possible publication bias. We report our findings in accordance with PRISMA guidelines [30].
--- Results
After the database search, 2987 works were identified, and no additional papers were found through other sources. Of these, 1005 duplicates were removed, and the selection phase started with 1982 records. During stages 1 and 2 of selection, 1,119 records were excluded, leaving 863 for the third phase. After the eligibility phase, 42 works remained; of these, 21 were included in the quantitative synthesis. Details are presented in Fig. 1. The results are presented in the following manner: the 42 studies included in the qualitative synthesis have their main characteristics presented in Table 1, and their results are described according to the review objectives. Quality assessments are presented in Tables 2 and 3. A subgroup of 21 studies could be meta-analysed. The first forest plot generated included all 21 studies. For these, we worked with the concept of general community violence and only one type of outcome, so for the studies that had more than one association measure (for victim and witnessing, for example), a weighted average was calculated, and the same was done for the studies that had more than one outcome. The I² value was 53.8%, with a p value of 0.003, thus indicating substantial heterogeneity [72]. Subgroup analysis was conducted with stratification by proximity to community violence (CV) (witness and victim) and then by type of outcome (post-traumatic stress disorder, depression and internalizing symptoms).
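For readers unfamiliar with the pooling and heterogeneity machinery referred to here, the following minimal sketch applies the DerSimonian-Laird random-effects estimator and the I² statistic to hypothetical log odds ratios; it illustrates the general method only and is not a reanalysis of the included studies.

```python
# DerSimonian-Laird random-effects meta-analysis with I^2, on hypothetical data.
import math

log_or   = [0.05, 0.60, -0.10, 0.45, 0.20]   # hypothetical study effects (log odds ratios)
variance = [0.01, 0.02, 0.015, 0.03, 0.01]   # hypothetical within-study variances

w  = [1 / v for v in variance]                          # fixed-effect (inverse-variance) weights
fe = sum(wi * yi for wi, yi in zip(w, log_or)) / sum(w)
Q  = sum(wi * (yi - fe) ** 2 for wi, yi in zip(w, log_or))
df = len(log_or) - 1
I2 = max(0.0, (Q - df) / Q) * 100                       # % of variation beyond chance

C      = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2   = max(0.0, (Q - df) / C)                         # between-study variance
w_re   = [1 / (v + tau2) for v in variance]             # random-effects weights
pooled = sum(wi * yi for wi, yi in zip(w_re, log_or)) / sum(w_re)
se     = math.sqrt(1 / sum(w_re))

print(f"I^2 = {I2:.1f}%, pooled OR = {math.exp(pooled):.2f} "
      f"(95% CI {math.exp(pooled - 1.96 * se):.2f}-{math.exp(pooled + 1.96 * se):.2f})")
```

Running such a computation separately on each subgroup (victims vs. witnesses, or by outcome) yields the kind of subgroup summary measures discussed in the following paragraphs.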
The only graphs presented were those with heterogeneity smaller than 60%, which corresponds to the subgroups with post-traumatic stress disorder and internalizing symptoms as outcomes. The results of the summary measures must be interpreted with caution. Only some of the qualitative synthesis studies presented complete data that would allow inclusion in the quantitative synthesis. The first graph generated (Fig. 2) shows high heterogeneity, and the graphs presented for the outcomes of post-traumatic stress disorder (Fig. 3) and internalizing symptoms (Fig. 4) do not show high heterogeneity but represent a small group of studies compared to the total number included in the review. Nevertheless, it was possible to see a small but statistically significant greater effect for post-traumatic stress disorder than for internalizing symptoms.
--- Table 3: Quality assessment of the included cross-sectional studies. Answers: Y - Yes, N - No, U - Undefined. Score: N (1 point); U (2 points); Y (3 points). The studies were ordered according to their quality. Light grey colour - low quality; medium grey - intermediate quality; dark grey - high quality. The work by Grinshteyn et al. (2018) was evaluated as a longitudinal study because of its study design, but the results presented in Table 1 are classified as cross-sectional because they were statistically analysed using the procedures for cross-sectional studies. (Table columns: Study; Randomized sample; Sample definition; Confounders; Comparable groups; Losses; Outcome measurable; Statistical analysis; Exposure measurable; Score.)
--- Mental health symptoms and exposure to community violence
Twenty-eight studies did not consider different degrees of proximity to violence in their analysis [26, 31-43, 45-50, 53, 55, 56, 58, 61, 62, 64, 65, 69, 71, 73]. Of these, twenty-three found a significant association between exposure and outcome (Table 1). Five studies did not find community violence to be a risk factor for internalizing mental health symptoms [26,34,43,53]. Le Blanc et al. [26] justified the lack of association between community violence and the outcomes analysed by the fact that other types of violence (home and school) were considered in the statistical analysis and could have driven the null association. Farrell et al. [34] discussed their results in light of the desensitization hypothesis, since their sample had a high prevalence of community violence [74][75][76]. Goldman-Mellor et al. [53] compared their sample's perception of violence with objectively measured neighbourhood violence derived from crime statistics. Perception of violence in the neighbourhood is a different concept from exposure to community violence because the former relates to how adolescents see the environment in which they live (Fig. 2: forest plot of studies with general community violence as exposure and any type of internalizing mental disorder as outcome). The authors found that adolescents who perceived their neighbourhood as unsafe had a nearly 2.5-fold greater risk of psychological distress than those who believed their neighbourhood was safe. Adolescents who live in areas objectively characterized by high levels of violent crime measured by crime statistics were no more likely to be distressed than their peers in safer areas. Aisenberg et al.
[43] also did not find an association between community violence and PTSD, and they suggested that other factors, such as one's relationship to the victim and one's physical proximity to the violent event, may influence this association. It is important to underscore that this is the only study included in this review considered to be of low quality. Donenberg et al. [50] did not find an association between community violence and internalizing problems, only with externalizing problems in boys; factors that could have influenced these results are the small sample and the fact that the measurement of community violence considered only witnessing. The subgroup of 20 studies that were meta-analysed had a summary measure of 1.02 (95% CI 1.01-1.02), showing that there is a small but statistically significant higher risk of internalizing mental health symptoms for adolescents exposed to CV.
--- Differences in mental health according to the proximity of CV - victims of CV vs. witnesses of CV
Fourteen studies considered proximity to community violence in the statistical analysis [32,40,44,51,52,54,57,59,60,63,[66][67][68]70]. Three of these studies found a gradient of risk for mental health outcomes regarding proximity to community violence, meaning a larger risk for victims compared to witnesses and/or for witnesses compared to those merely knowing of violent events [60][61][62][63][64][65][66][67][68]. Six works found an association for victims of community violence but not for witnesses, and one found a positive association for different forms of victimization using witnessing as a control [23,32,44,52,57,59,70]. One study found an association between all community violence measures and mental health outcomes with the same magnitude, and three did not find an association either for victimization or for witnessing [32-44, 70, 71, 73] (Fig. 3: forest plot of the subgroup of studies that considered post-traumatic stress disorder as an outcome). The results indicate that higher proximity to violence was related to a higher risk of internalizing mental health symptoms. Grinshteyn et al. [54], in addition to a gradient of risk from victims to witnesses to those who merely knew about events, also found differences between violent events and non-violent events, with the former accounting for a higher magnitude. The authors who did not find an association discuss the possibility of desensitization and of other types of violence (school or family violence) softening the effects of community violence on mental health [59]. The meta-analysis graphs for the victim and witnessing subgroups were not considered because they presented high heterogeneity (61.1% and 67.6%, respectively).
--- Assessment of community violence by crime statistics
Six studies measured exposure to community violence with crime rates [31,35,48,53,54,71]. Grinshteyn et al. [54] defined crime rates using the crime rate per 1000 people in a given postal code. They also collected self-report data for comparison. Their results pointed to a decreasing gradient of risk from victims to witnesses to those who merely knew of violent events. When comparing crime statistics with self-report measures, the results were positively significant only for depression and at a smaller magnitude. The authors discussed the importance of constructing these area-level crime rates for smaller geographic units and of considering a larger variety of crimes. Goldman-Mellor et al.
[53] measured perceived neighbourhood safety with self-reported answers and objectively measured neighbourhood violence using a geospatial index based on FBI Uniform Crime Reports. Their results showed an association for the first measure but not for the second, suggesting that perception of neighbourhood violence matters more for mental health than objective levels. Velez-Gomez et al. [71] and Cuartas et al. [48] utilized both crime statistics and homicide rates. The first group found a positive association only for the outcome "ineffectiveness" in early adolescents (10-12 years), and the second group found a positive relationship for common mental disorders and post-traumatic stress disorder (Fig. 4: forest plot of the subgroup of studies that considered internalizing symptoms as outcomes). Gepty et al. [35] utilized crime statistics classifying violent and non-violent crimes and found a positive association with depressive symptoms for violent crimes but not for non-violent crimes. Da Viera et al. [31] worked with crime statistics related to adolescents' residence and school addresses and found that adolescents who live in low-crime areas and study in high-crime areas have a larger chance of presenting anxiety, probably related to feelings of insecurity on the way to school.
--- Influence of gender, race, and age on the association between CV and internalizing mental health symptoms
Thirteen studies analysed gender as a moderator in the relation above; four of them found gender to be a potential moderator. Bacchini et al. [44] and Boney-McCoy et al. [69] found that girls are more affected by negative experiences of community violence than boys, reacting with higher anxiety, depression, sadness and post-traumatic stress symptoms. Haj-Yahia et al. [55] found that girls had more internalizing problems than boys when they were victims of community violence but not when witnessing it, while Foster et al. [52] found a positive association between community violence and depressive and anxious symptoms only for witnessing but not for victims. The other seven works tested gender as a moderator and did not find differences between boys and girls in the association. Only two studies, one conducted in Israel [60] with Arab and Jewish subjects and another in Chicago [47] with Latinx, Black and White individuals, tested race as a moderator of the relationship between community violence and mental health symptoms. In the first study, Jewish subjects reported higher levels of witnessing community violence, while Arabs reported higher levels of victimization by community violence and of post-traumatic symptoms over the last year, but ethnic affiliation did not moderate the relationship between community violence exposure and PTSD. Chen et al. [47] worked with a large multi-ethnic sample in Chicago and found that Latinx and Black adolescents were more exposed to community violence, had higher levels of depression and delinquency, and had more risk factors, such as low family warmth, peer deviance, school adversity and community violence exposure. In addition, the results from regression models showed a higher chance of depression for White adolescents than for minority adolescents (Black and Latinx), which is explained in light of the desensitization hypothesis [77,78]. The only study that considered age as a moderator of the relationship above was the one conducted by Gomez et al. [71].
Even so, the stratification included an age group that did not fit our inclusion criteria (8-10 years), so the results were presented only for the interval 10-12 years.
--- Family support, communication skills, emotional regulation and contextual factors that affect adolescents' mental health when exposed to community violence
Other factors appear as moderators of the association between community violence and mental health symptoms in several studies [26,44,56,58,63,65], including those by Sun et al. [42], O'Leary [64] and Gepty et al. [35]. The most frequent were family characteristics such as mother and father support, parental monitoring, sibling support, and communication skills. Bacchini et al. [44], Howard et al. [58] and Ozer et al. [65] described that parental monitoring/support could reduce depression and symptoms of distress. Talking with their parents and expressing their fears could make young people feel protected, reducing feelings of isolation and danger. Ozer et al. [65] also found that sibling support was protective against post-traumatic stress disorder symptoms and depressive symptoms in adolescents exposed to community violence; teacher help did not have a protective effect on either outcome, and a tendency to keep their feelings to themselves was demonstrated to be a protective factor against post-traumatic stress disorder symptoms [65]. Haj-Yahia et al. [55] and O'Donnell et al. [63] did not find differences in the chances of depression and post-traumatic stress disorder among adolescents exposed to community violence when family support was present (or teacher support, in the former study). Individual characteristics of personality and emotional functioning also appear in some studies as moderators. Le Blanc et al. [26] found that good communication and problem-solving skills protect adolescents exposed to community violence from psychological stress. Sun et al. [42] found that internal dysfunction involving emotional dysregulation, such as self-harm, potentiates symptoms of post-traumatic stress disorder in adolescents exposed to community violence. O'Leary [64] found that expressive suppression, which refers to active inhibition of observable verbal and nonverbal emotional expressive behaviour, buffers the effect of community violence exposure on depression. Gepty et al. [35] studied the ruminative cognitive style, which is the tendency of an individual to be caught in a cycle of repetitive thoughts, and found that it also increases the chance of depression in adolescents exposed to violent crimes. Contextual factors were also evaluated as moderators. Cuartas et al. [48] studied the effects of living in a poor household, having been directly victimized or having witnessed a crime, perceiving the neighbourhood as unsafe, and social support, and found that the first three potentiate the chance of post-traumatic stress disorder in adolescents exposed to community violence and that perceiving the neighbourhood as unsafe also worsens the chances of common mental disorders. O'Donnell et al. [63] analysed adolescents from The Republic of Gambia, Africa, and found that a positive school climate functions as a protective factor between community violence exposure and post-traumatic stress disorder, and that this protection was stronger for witnesses than for victims. Cultural factors related to ethnicity were also evaluated. Henry et al.
[56] studied cultural pride reinforcement and cultural appreciation of legacy as potential moderators between community violence and depressive symptoms in a sample exclusively composed of African Americans. Cultural appreciation of legacy was found to be a protective moderator of this relationship, leading to the conclusion that teaching African American youth about their cultural heritage can help them cope with racial discrimination.
--- Different risks for different outcomes
Some studies analysed more than one outcome, with the following distribution: depression (20), internalizing symptoms (16), post-traumatic stress disorder (15) and anxiety/stress (1). Different outcomes are associated with community violence exposure at different magnitudes, as shown in Table 1, and factors analysed as moderators of this association also act differently. The subgroup meta-analysis graphs by outcome that showed heterogeneity below 50%, and that were therefore presented in this review, were those for post-traumatic stress disorder and internalizing symptoms. The summary measure for the post-traumatic stress disorder outcome was greater than 1 (1.12, 95% CI 1.05-1.19), while that for internalizing symptoms was borderline (1.02, 95% CI 1.00-1.04).
--- Discussion
The results of the qualitative synthesis reinforced the positive relation found in previous meta-analyses between community violence exposure and internalizing mental health symptoms in adolescents [18,19]. The summary measure from the 20 studies in the quantitative synthesis showed a small but positive association. Proximity to community violence appeared to be an essential factor contributing to the risk of mental health symptoms. Adolescents who are victims of community violence are at greater risk than those who witness it. The summary measures of the victim and witnessing subgroups could not be considered due to high heterogeneity. Regarding the outcomes analysed, studies showed different risk magnitudes for different outcomes. The summary measures for post-traumatic stress disorder were positive and small but larger than those for the subgroup of internalizing symptoms. Longitudinal studies provide stronger evidence than cross-sectional studies since they can establish cause-and-effect relationships [79]. Of the twelve studies with a longitudinal design included in this review, 10 showed at least one significant effect measure in the causal association between greater exposure to community violence and increased risk of developing internalizing mental disorders. This fact supports the idea that there is a causal association in this relationship. Regarding the moderators mentioned in the objectives (gender, age, and race), only female gender appeared to be a significant moderator, in 4 studies. These differences between genders are also found in studies that consider externalizing symptoms; however, for this outcome, boys have more risk than girls when exposed to community violence. A possible explanation for this distinction is the difference in upbringing between boys and girls, especially in more traditional societies, where girls are encouraged to keep their emotions to themselves and to have more socially acceptable behaviour, while boys are encouraged to reinforce their masculinity, sometimes through violent and deviant behaviour [80]. Age was also not tested as a possible moderator in the majority of studies.
In the previous meta-analysis conducted by Fowler [19], which included children and adolescents, differences were found between these two stages of the life cycle, with teenagers having the greatest risk. In regard to teenagers, on the one hand, greater circulation around the neighbourhood is expected among older adolescents than among younger adolescents, which can mean higher exposure to community violence in the former group. On the other hand, the emotional maturation expected over the years can protect against the effects of violence on mental health. Given the scarcity of studies that assess this influence, we can point out this gap as an area to be researched in future studies. Race was tested as a moderator in only two studies - one with Jews and Arabs and the other with White, Black and Latinx subjects. The latter study found that Latinx and Black adolescents had a higher chance of developing depression when exposed to community violence. It is important to highlight that thirteen of the forty-two studies included in this review did not have any information about the race of the participants. On the other hand, among the studies that reported participants' race, some samples were composed exclusively of African Americans. It must be pointed out that the lack of this information, as well as the homogeneity of the samples, is an important shortcoming of the studies. Previous meta-analyses could not evaluate race as a modifier because of these same problems [19]. As a counterpoint, a systematic review and meta-analysis showed that racism is linked to poor physical and mental health [81]. Since there is substantial gender inequality among victims of community violence, with boys more likely to be victims, and racism is a critical factor that can influence mental health, it is important to study the effects of race on the association between community violence and mental health symptoms, as well as possible protective factors and interventions for this population [14]. The study by Henry et al. [56] is an example of how maternal messages of positive reinforcement of Black culture can protect against depressive symptoms in adolescents of this ethnic group who are exposed to community violence. An important aspect that appears in our results and in previous meta-analyses is the phenomenon of desensitization. This phenomenon can occur in areas with high levels of community violence. With chronic and recurrent exposure, individuals do not present as many depressive and anxious symptoms beyond a certain degree of community violence, in a process of naturalization of barbarism [73,77,78]. This phenomenon should not be interpreted as beneficial, as this naturalization of violence may have negative effects on other outcomes. In relation to externalizing symptoms, for example aggressive behaviour and delinquency, the opposite effect is seen: these behaviours increase progressively and linearly with increasing violence. Most studies included in this review were conducted in the United States of America (27), followed by South Africa (4), Israel (3), Colombia (2), the Republic of Gambia (1), China (1), England (1), Switzerland (1), Italy (1), and Mexico (1). Globally, community violence varies according to region and country.
According to the World Health Organization [82], homicide rates were highest in Latin America (84.4/100,000 in Colombia, 50.2/100,000 in El Salvador, 30.2/100,000 in Brazil) and lowest in Western European countries (0.6/100,000 in France and 0.9/100,000 in England) and Asia (0.4/100,000 in Japan). In this review, exposure rates to community violence differed between studies. For example, four studies conducted in Africa reported that 83.4% to 98.9% of subjects were witnesses of community violence, while 40.1% to 83.5% of subjects were victims of community violence [59,63,66,70]; in contrast, studies in the United States of America showed greater variation, with 49-98% witnessing community violence and 10.3-69% being victims of community violence [26, 32-34, 36-39, 41, 43, 51-54, 56, 58, 62, 65, 68]. Part of this difference could be due to different methods of measuring community violence, but another part could be due to different population origins. Socioeconomic level, social inequalities, urban disorder, weather factors, and cultural factors can influence community violence exposure rates and can also influence how adolescents react to them [83][84][85]. Therefore, different territories can have different levels of community violence and different ways of dealing with it. Some studies in this review reinforced this aspect; for example, Cuartas et al. [48] studied the effect of contextual factors such as poverty in the neighbourhood and social support as potential moderators of the association of community violence with CMD and PTSD, confirming their hypotheses for the former. O'Donnell et al. [63] found that a positive school climate was a protective factor against post-traumatic stress reactions for youth who witnessed CV. The authors highlighted the high levels of self-reported hostile school climate, which may reflect structural factors of the school context. However, considering cultural aspects, none of the studies included in this review compared, for example, urban areas with rural areas, which would be an interesting comparison to examine. Considering these variations attributed to contextual and cultural factors, more studies conducted in different countries and cities would be relevant. In this review, some studies analysed the difference between exposure to violence measured by crime statistics and by community violence self-report questionnaires or perceived violence [53,54]. The authors found differences in their results, as described in section 3.3. The first methodology is relevant because it is less costly and simpler to conduct, which is especially important in countries where there are few studies in this area. Nevertheless, studies that compare the two forms of measuring violence (self-report and crime statistics) can contribute to a better understanding of the differences between them. There are strengths and limitations that should be considered in this systematic review. Strengths include an extensive search of databases, contact with authors for clarification and no filters applied for year or language in the search, all of which contributed to a larger body of literature. The selection and extraction phases were carried out by alternating pairs of reviewers to avoid selection bias and errors in extraction. Studies included in the review were composed mostly of adolescents from schools or population-based samples and not from mental health services or other types of institutions, leading to a more representative sample.
This review utilized a community violence concept that excludes sexual and school interpersonal violence, focusing on and estimating the effect of such violence on adolescents' mental health, which we considered a strength since it brings more specificity to the results. The main limitations were that different tools for exposure and outcome measures were used, leading to heterogeneous results and compromised pooling. Study designs and statistical analyses also differed between studies, which made comparison difficult.
--- Conclusion
This review confirmed a positive relationship between community violence, excluding sexual assault and school violence, and internalizing mental health symptoms in adolescents. Even though race and age did not appear to be moderators in most of the studies, girls were more sensitive to the effects of the exposure in some studies, showing that gender can be a possible moderator in this relationship. Other factors, such as family constitution, communication skills and emotional functioning, also seem to have an influence on this association. This review provides relevant information for the health and public safety fields and can serve to direct public efforts to build policies to address the prevention and treatment of both community violence and mental disorders. This review also contributes to knowledge of these issues among health and education professionals.
Purpose: Mental disorders are responsible for 16% of the global burden of disease in adolescents. This review focuses on one contextual factor, community violence, that can contribute to the development of mental disorders. Objective: To evaluate the impact of community violence on internalizing mental health symptoms in adolescents, to investigate whether different proximity to community violence (witness or victim) is associated with different risks, and to identify whether gender, age, and race moderate this association. Methods: Systematic review of observational studies. The population includes adolescents (10-24 years), the exposure involves individuals exposed to community violence, and the outcomes consist of internalizing mental health symptoms. Selection, extraction and quality assessment were performed independently by two researchers. Results: A total of 2987 works were identified; after selection and extraction, 42 works remained. Higher exposure to community violence was positively associated with internalizing mental health symptoms. Being a witness is less harmful to mental health than being a victim. Age and race did not appear in the results as modifiers, but male gender and family support appear to be protective factors in some studies. Conclusions: This review confirms the positive relationship between community violence and internalizing mental health symptoms in adolescents and provides relevant information that can direct public efforts to build policies for the prevention of both problems.
--- Supplementary Information
The online version contains supplementary material available at https://doi.org/10.1186/s12888-022-03873-8.
--- Additional file 1.
--- Additional file 2.
--- Additional file 3.
--- Additional file 4.
--- Authors' contributions
The listed authors conceived the project (CM, CL), developed the protocol (CM, CL), carried out the searches (CM), carried out the selection and extraction phases (CM, DF, VC), interpreted the findings (CM, JV, CL, WJ, DF, VC), drafted the manuscript (CM, JV, DF, VC), and approved the manuscript (CL, WJ). All authors have read and approved the manuscript.
--- Availability of data and materials
All data generated or analysed during this study are included in this published article (and its supplementary information files).
--- Declarations
Ethics approval and consent to participate: Not applicable.
Consent for publication: Not applicable.
Competing interests: The authors declare that they have no competing interests.
--- Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Background
Investigation into the contributions of specific causes of death and age groups to absolute socioeconomic inequalities in total mortality is important to understand the mechanisms of socioeconomic health inequalities and to establish policies and intervention programs to reduce socioeconomic inequalities in health. Many studies have reported the contribution of causes of death in specific age groups to socioeconomic mortality inequalities in Asia as well as in western countries [1][2][3][4][5]. They revealed that the pattern of the contribution by specific causes of death varied by country, which points to different policy priorities for different countries. Life expectancy is the expected number of remaining years of life at a given age and a summary measure of mortality determined by the probability of death at each age [6]. It has important strengths in that it can be more easily understood by the public than age-standardized mortality rates and more easily compared between countries or over time [7][8][9]. In addition, life expectancy can be decomposed by cause of death and by specific age group, which allows us to better understand the mechanisms of socioeconomic inequalities in mortality. Decomposition of socioeconomic inequalities in life expectancy by age or cause has mainly been performed in western countries [10][11][12]. Some studies showed age-specific contributions to socioeconomic inequalities in life expectancy over time [7,10,13,14], while other studies reported patterns of cause-specific contributions [6,[11][12][13][14]. However, there is still a paucity of studies investigating age- and cause-specific contributions to socioeconomic differences in life expectancy by socioeconomic position (SEP) with the use of national data covering the whole population. This study aimed to quantify age- and cause-specific contributions to socioeconomic differences in life expectancy at age 25 by educational level among adult men and women in South Korea (hereafter 'Korea') to provide evidence guiding intervention priorities.
--- Methods
--- Study subjects
We used national death certificate and census data for 2005 from Statistics Korea. The number of total deaths at age 25 and over was 239,166 in 2005. After excluding records with missing or inaccurate information on level of education, cause of death or age, the present study included 236,128 deaths (98.7% of total deaths; 129,940 men and 106,188 women). In the 2005 national census, 15,215,523 men and 16,077,137 women aged 25 and over were identified and included in this study. By law, all deaths must be reported to Statistics Korea within a month of their occurrence in Korea. Death registration in Korea is known to be complete for deaths occurring among those aged 1+ years since the mid-1980s [15]. Death certification by a physician was suggested as a very important factor to improve accuracy in reporting causes of death in Korea [16,17]. The proportion of deaths certified by physicians was 86.9% in 2005. The reliability of the educational level in death certificate data was reported to be substantial [18]. This study was approved by the Asan Medical Center Institutional Review Board, Seoul, Korea.
--- Socioeconomic position (SEP) indicator
The individual's own level of education was used as the SEP indicator in this study. Educational attainment was categorized into elementary school graduation or less, middle or high school graduation, or college graduation or higher.
Elementary school and high school in Korea correspond to the International Standard Classification of Education (ISCED) 1 and ISCED 3, respectively, whereas there is no schooling system in Korea relevant to ISCED 4 [19]. College is classified as ISCED 5. Educational achievement among the Korean population during the past decades has been remarkable, along with the huge economic development. The enrollment rate in elementary school was 69.8% in 1951 but reached 97.7% in 1980 and 98.6% in 2012 [20]. An explosive increase was observed for the enrollment rate in college or higher education, skyrocketing from 4.2% in 1965 to over 60% in 2005. Thus, a very different educational distribution across age groups can be found in Korea. For example, 61.5% of women aged 25-29 years were classified as college or higher graduates, while 76.1% of women aged 60-64 years were classified as elementary school graduates or less in 2005 [21].
--- Statistical analysis
For life expectancy at age 25, life tables were constructed using 5-year probabilities of death by educational level. The 5-year probabilities of death were calculated based on age-specific death rates, which were estimated from the number of deaths in the death certificate data and the number of people in the census data by age and educational level. Differences in life expectancy at age 25 by educational level were calculated. Age- and cause-specific contributions to the educational differences in life expectancy at age 25 were estimated using Arriaga's decomposition method [22]. The Arriaga method, which has been widely used to decompose differences in life expectancy, concerns a direct effect, an indirect effect, and an interaction effect of mortality differences on life expectancy. The direct effect reflects the consequence of a mortality difference within that age group. The indirect effect is due to the change in the number of survivors at the end of the age interval resulting from a mortality change within that age group. The interaction effect results from the combination of the changed number of survivors at the end of the age interval and the lower (or higher) mortality rates at older ages. The total contribution of each age group to the change in life expectancy can be calculated by adding the direct, indirect and interaction effects [22,23]. With Arriaga's decomposition method, the difference in life expectancy can be decomposed by age and cause of death, which enables us to explain life expectancy differentials in terms of the contribution of each factor. A higher mortality rate in low-SEP than in high-SEP groups makes a positive contribution to socioeconomic differences in life expectancy. In other words, a positive contribution refers to a contribution to the increase in educational differentials in life expectancy. The total life expectancy differential by SEP is the sum of the number of years contributed negatively or positively by deaths in each age group or cause. Life expectancy differences were also decomposed by cause of death. A total of 8 broad and 17 specific (15 for men and 14 for women) causes of death were selected based on the main causes of death in South Korea [24] (see Table 1). Causes of death were coded using the 10th revision of the International Classification of Diseases (ICD-10).
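As a rough, self-contained illustration of the life-table construction and Arriaga decomposition described above, the sketch below uses hypothetical 5-year death probabilities for two education groups, a radix of 100,000, deaths assumed at mid-interval and a crude open-ended age interval; none of these inputs are the study's data, and the age-group contributions sum to the life expectancy gap at age 25 by construction.

```python
# Hypothetical sketch of Arriaga's decomposition of an educational gap in e(25).
import numpy as np

ages = np.arange(25, 90, 5)    # 5-year age groups from 25-29 up to 85+ (open-ended)
n = 5.0

def life_table(qx):
    """Abridged life table from 5-year death probabilities (simplified conventions)."""
    lx = np.empty(len(qx))
    lx[0] = 100000.0                               # radix
    for i in range(len(qx) - 1):
        lx[i + 1] = lx[i] * (1 - qx[i])
    Lx = n * 0.5 * (lx + np.append(lx[1:], 0.0))   # deaths assumed at mid-interval
    Lx[-1] = lx[-1] * 5.0                          # crude exposure in the open interval
    Tx = np.cumsum(Lx[::-1])[::-1]
    return lx, Lx, Tx, Tx / lx                     # lx, Lx, Tx, ex

def arriaga(q_low_edu, q_high_edu):
    """Age-group contributions to e25(high education) minus e25(low education)."""
    l1, L1, T1, e1 = life_table(q_low_edu)         # group 1: lower education
    l2, L2, T2, e2 = life_table(q_high_edu)        # group 2: higher education
    k = len(q_low_edu)
    contrib = np.zeros(k)
    for i in range(k - 1):
        direct   = l1[i] / l1[0] * (L2[i] / l2[i] - L1[i] / l1[i])
        indirect = T2[i + 1] / l1[0] * (l1[i] / l2[i] - l1[i + 1] / l2[i + 1])
        contrib[i] = direct + indirect             # indirect term folds in the interaction
    contrib[-1] = l1[-1] / l1[0] * (T2[-1] / l2[-1] - T1[-1] / l1[-1])
    return contrib, e2[0] - e1[0]

q_high = np.linspace(0.004, 0.45, len(ages))       # hypothetical 5-year death probabilities
q_low  = q_high * 1.6                              # higher mortality, lower education
contrib, gap = arriaga(q_low, q_high)
print(f"e25 gap = {gap:.2f} years; sum of age contributions = {contrib.sum():.2f}")
print(np.round(100 * contrib / gap, 1))            # percentage contribution by age group
```

Dividing each contribution by the total gap gives age-specific percentage shares of the kind reported in the Results below.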
Middle or high school graduates accounted for about half of the total subjects among both men and women (50.3% of men and 52.1% of women), whereas the numbers of deaths were greatest among those with elementary school graduation or less (41.9% of men and 57.6% of women). Life expectancy at age 25 was 48.39 years in men and 54.75 years in women. Life expectancy increased stepwise with educational level. Differences in life expectancy at age 25 between college or higher education and elementary or less education were 16.23 years in men and 7.69 years in women. Figure 1 shows age-specific contributions to the educational gap in life expectancy among Korean men and women. In men, those aged 40-44 contributed most as a single age group (13.9%) to educational differences in life expectancy at age 25 between college or higher education and elementary or less education. Contributions of ages between 35 and 49 to the educational differences in life expectancy were greater than those of other age groups. This held both for the educational differences between college or higher education and elementary or less education and for the differences between middle or high school and elementary or less education. Meanwhile, older age groups aged 60-64 and over contributed significantly to the educational differences in life expectancy between middle or high school and college or higher education. Figure 1 also presents age-specific contributions among Korean adult women. Among women, younger age groups between ages 25 and 39 showed greater contributions than older age groups. This held both for the educational differences between college or higher education and elementary or less education and for those between middle or high school and elementary or less education. Meanwhile, older age groups aged over 65 contributed significantly to the educational differentials in life expectancy. Table 3 presents cause-specific contributions to the life expectancy gap by education in Korean men. Among broad causes of death, the contributions of cancers were greater than those of cardiovascular diseases in men, whereas in women the contributions of cardiovascular diseases surpassed those of cancers. This pattern held for all the comparisons between educational levels considered. In both men and women, the contributions of external causes were substantial, accounting for about 28-29% of the total educational differences in life expectancy in men and 20-24% in women. Table 3 also shows contributions by specific causes. Liver disease, suicide, transport accidents, cerebrovascular disease, and lung cancer played important roles in explaining educational differences in life expectancy in men. In particular, the most important contribution among specific causes was made by liver disease, explaining about 9-12% of the total educational differences in life expectancy between college or higher education and elementary or less education and between middle or high school and elementary or less education. Such large contributions were not found in women. In addition, suicide in men was the most important contributor to the educational differentials in life expectancy between middle or high school and college or higher education and the second most important contributor to the other educational differences. In women, cerebrovascular disease, suicide, transport accidents, liver disease, and diabetes mellitus were the main contributors to life expectancy differences by educational level.
Among those, the contributions of cerebrovascular disease and suicide were most important. The leading cancers in Korea (lung, stomach, and liver cancers) showed relatively higher mortality rates in the low education groups than in the high education group and thus contributed positively to the educational differences in life expectancy among both men and women. However, prostate cancer and colorectal cancer among men and breast cancer and colorectal cancer among women contributed negatively to the differences in life expectancy for some educational comparisons. Ill-defined causes were also important in accounting for educational differences in life expectancy in both men and women. Figure 2 presents patterns of contributions by major causes of death to educational differences in life expectancy by age group. In men, suicide and liver disease contributed significantly to the educational differences in life expectancy at younger ages between 35 and 49, while major contributions by lung cancer and cerebrovascular disease were found among men aged 60 or over. Similar findings were observed among women. Suicide and liver disease showed important contributions in younger age groups such as ages 35-39, while in older age groups of women diabetes mellitus, cerebrovascular disease, and ischaemic heart disease contributed significantly to the educational differences in life expectancy. Meanwhile, the magnitude of the contribution by ischaemic heart disease was small or negative in older-age men. --- Discussion Differences in life expectancy at age 25 between elementary or lower education (6 or fewer years of schooling) and college or higher education (13 or more years of schooling) in Korea were 16.23 years in men and 7.69 years in women. In Finland, the differences in life expectancy at age 30 between low education (9 or fewer years of schooling) and high education (13 or more years of schooling) were 6.96 years in men and 3.88 years in women in 1998-99, whereas the same educational differences in life expectancy at age 30 in Russia were 13.08 years in men and 10.21 years in women in 1998 [25,26]. The differences in life expectancy at age 25 between primary or lower education and university education in Lithuania were 16.75 years in men and 15.20 years in women in 2001 [27]. In Denmark, the educational differences in life expectancy at age 30 between primary or lower secondary education and tertiary education were 6.4 years in men and 4.7 years in women in 2011 [28]. Although it is hard to directly compare the magnitude of educational differences in life expectancy between countries because of the different educational categories and study periods, the results of this study suggest that the educational differences in life expectancy in Korea are relatively greater than those in northern European countries. Younger age groups were more important contributors to the educational differentials in life expectancy between elementary or less education and the other two higher educational groups, while older age groups were more important in explaining the difference between middle or high school education and college or higher education. This was generally true for both men and women in this study. This may mean that the adverse effects of a poor socioeconomic environment appear at younger ages among people with extreme social disadvantages (i.e., elementary or less education at young ages).
In Korea, where the enrollment rate in middle school increased from 42% in 1970 to over 90% in 1990 [20], only 0.4-13.0% of people aged 25-49 had elementary or less educational attainment [21]. People aged under 35 years with elementary or less education may therefore represent extreme social exclusion in Korea. This young and socially marginalized population might have experienced neo-liberal structural reforms, resulting in rising unemployment rates, increased labor market flexibility, and growing income inequality, as well as the lack of a generous social safety net, during the economic crisis of 1998 and the credit card crisis of 2003. The main causes of death contributing to the educational differences in life expectancy in those age groups were suicide and liver disease in both genders. Korea has recorded the highest suicide rates among the Organisation for Economic Co-operation and Development (OECD) member countries since 2003, with upsurges during Korea's economic crisis in the late 1990s and the credit card crisis in 2003 [29,30]. Suicide is the most frequent cause of death among Korean men and women in their 20s and 30s, although the elderly have higher suicide rates than younger age groups [31]. Prior Korean studies showed that men and women aged 35-44 had greater educational differentials in suicide mortality, in both relative and absolute terms, than older age groups [30,32]. The results of this study, as well as of other prior studies, suggest that the shift toward a harsher labor market environment might have had a greater impact on socioeconomically marginalized educational groups at younger ages, who did not have sufficient resources and skills to overcome socioeconomic difficulties in the late 1990s and early 2000s. The main risk factors for liver disease are viral hepatitis and alcohol abuse [33,34]. According to the 2007 National Health and Nutrition Examination Survey of Korea by the Korea Centers for Disease Control and Prevention, the prevalence of hepatitis B antigen positivity among Koreans aged 19-49 is 2.1-4.3% and the prevalence of hazardous alcohol use is 44.5-45.8% [35]. Considering the relatively high rates of hepatitis B infection and alcohol abuse in Korea, social inequalities in hepatitis B viral infection and hazardous alcohol use might well have contributed to a significant part of the socioeconomic inequalities in liver disease [36][37][38]. Cerebrovascular disease and lung cancer in older age groups were important causes of death for differentials in life expectancy at age 25, especially between middle or high school and college or higher education. Cerebrovascular disease may be related to adverse childhood living conditions, along with liver disease, liver cancer and stomach cancer [39,40]. Poor socioeconomic environments and their inequitable distribution during and after the Japanese colonial occupation and the Korean War (1950-1953) might have affected the socioeconomic inequalities in mortality from these causes. The prevalence of cigarette smoking, the main risk factor for lung cancer, remained above 50% among Korean men until the early 2000s, with the highest rate being about 79% in 1980 [41]. High smoking rates and large absolute differentials in smoking rates by educational level [42] might have contributed to the increase in mortality and mortality inequalities from lung cancer, especially in men.
The percent contribution of cerebrovascular disease to the educational difference in life expectancy was greater among women than men, while the percent contributions as well as the absolute contributions (in years) of lung cancer, stomach cancer, and liver cancer were greater in men than in women. These results are similar to those of a previous study showing that, in women, the contribution of cerebrovascular disease was greater than that of cancer in southern and eastern European countries [1]. The biggest difference between the results of this study and findings from northern or western European countries is the size of the contribution of ischaemic heart disease to socioeconomic inequalities in mortality, as indicated in prior Korean studies [32,43]. This study revealed that the contribution of ischaemic heart disease was relatively small in Korea, accounting for 1-2% and 3-6% of total educational inequalities in life expectancy in men and women, respectively. Meanwhile, ischaemic heart disease was the most important contributor to total mortality inequalities in northern and western Europe [1,4]. However, mortality rates and absolute socioeconomic inequality in ischaemic heart disease are increasing rapidly in Korea [43]. Considering the secular trend toward a westernized diet, a risk factor for ischaemic heart disease in Korea [44], thorough monitoring of changes in socioeconomic inequalities in ischaemic heart disease is needed. Our study has strengths and limitations. We presented age- and cause-specific contributions to the socioeconomic inequalities in life expectancy at age 25 using Arriaga's decomposition method, whereas most previous studies showed only age-specific and/or cause-specific contributions. Detailed quantification of age- and cause-specific contributions to socioeconomic inequalities in life expectancy allowed us to show how age-specific contributions vary by cause of death and to identify priority age groups and causes of death. However, we used unlinked death certificate and census data, which may produce numerator-denominator bias [45]. A prior Korean study examined this issue [18]. When educational level was categorized into three categories (elementary school or less, middle or high school graduate, college or higher), the percentage agreement between death certificate data and health survey data was 89.4% and the kappa value was 0.75 [18], which indicates substantial reliability [46]. Thus, we believe that the numerator-denominator bias would be minimal. --- Conclusions Educational differences in life expectancy were substantial in Korea. Liver disease and suicide were important contributors to the differences among younger age groups, while cerebrovascular disease and lung cancer were important among older age groups. The age-specific contributions of different causes of death to life expectancy inequalities by educational attainment varied across educational comparisons. Different age-specific distributions of educational levels, due to the remarkable improvement in education during the past decades, may explain the findings, as each level of educational attainment can have a distinct meaning as an SEP indicator in the context and history of Korean society. Exploring age- and cause-specific contributions to socioeconomic inequalities in life expectancy allows us to better understand the nature of socioeconomic mortality inequalities and to suggest specific priority areas for policy and intervention.
--- Competing interests The authors declare that they have no competing interests. Authors' contributions KJC participated in study design and drafted the manuscript. YHK conceived the original idea for the study and gave critical comments on the draft manuscript. HJC participated in study design and critical revision of the manuscript. SCY supervised study design, performed the statistical analysis and gave critical comments on the draft manuscript. All authors read and approved the final manuscript.
Background: Decomposition of socioeconomic inequalities in life expectancy by age and cause allows us to better understand the nature of socioeconomic mortality inequalities and to suggest priority areas for policy and intervention. This study aimed to quantify age- and cause-specific contributions to socioeconomic differences in life expectancy at age 25 by educational level among South Korean adult men and women. Methods: We used national death registration records for 2005 (129,940 men and 106,188 women) and national census data for 2005 (15,215,523 men and 16,077,137 women aged 25 and over). Educational attainment, as the indicator of socioeconomic position, was categorized into elementary school graduation or less, middle or high school graduation, and college graduation or higher. Differences in life expectancy at age 25 by educational level were decomposed into age- and cause-specific mortality differences using Arriaga's decomposition method. Results: Differences in life expectancy at age 25 between college or higher education and elementary or less education were 16.23 years in men and 7.69 years in women. Young adult groups aged 35-49 in men and 25-39 in women contributed substantially to the differences in life expectancy between college or higher education and elementary or less education. Suicide and liver disease were the most important causes of death contributing to the differences in life expectancy in these young adult groups. For older age groups, cerebrovascular disease and lung cancer were important in explaining the educational differential in life expectancy at age 25 between college or higher education and middle or high school education. Conclusions: The contribution of causes of death to socioeconomic inequality in life expectancy at age 25 in South Korea varied by age group and differed by educational comparison. The age-specific contributions of different causes of death to life expectancy inequalities by educational attainment should be taken into account in establishing effective policy strategies to reduce socioeconomic inequalities in life expectancy.
income is associated with such positive measures of child well-being as cognitive skills, educational attainment, and child behavior (Graham, Beller, and Hernandez 1994; Knox and Bane 1994; Hernandez, Beller, and Graham 1995; Knox 1996; Argys et al. 1998). Unfortunately, only a minority (approximately 20 percent) of unwed nonresident fathers pay formal child support (Nepomnyaschy and Garfinkel 2007), yet an overwhelming majority of these fathers are involved with their children in other ways. Examples of this involvement include informal and in-kind contributions, as well as regular contact with their children (Waller and Plotnick 2001; Huang 2006; Nepomnyaschy 2007; Garasky et al. 2010; Nepomnyaschy and Garfinkel 2010). Much less research focuses on how these contributions of time, money, and goods affect children's economic circumstances. This study examines the effects of these different types of father involvement on children's experience of material hardship. Although poverty is the most commonly used indicator of serious economic distress, indicators of material hardship are complementary, and now commonly used, alternative measures (Beverly 2000, 2001). Such indicators include going without food, being evicted from one's home, delaying needed medical care, and having heat, electricity, or phone service shut off. These are not only important mediators of the relation between poverty and child well-being; they are found to be directly related to child well-being, and the relations are independent of income (Beverly 2001; Gershoff et al. 2007). For example, results from analyses that control for household income or poverty status find that children who live with food insecurity have worse health, lower cognitive skills, worse academic performance, and more behavior problems than those who do not live with food insecurity (Alaimo, Olson, and Frongillo 2001; Alaimo, Olson, Frongillo, and Briefel 2001; Cook et al. 2004; Ashiabi 2005; Slack and Yoo 2005; Whitaker, Phillips, and Orzol 2006; Rose-Jacobs et al. 2008; Zaslow et al. 2009). Some studies control for other indicators of socioeconomic status, pointing to the deleterious effects that multiple and cumulative hardships have on child well-being (Ashiabi and O'Neal 2007; Cook et al. 2008; Yoo, Slack, and Holl 2009; Frank et al. 2010). --- How Father Involvement Affects Material Hardship Low income is obviously one principal determinant of material hardship. Although material hardship disproportionately affects children living in poverty, 65 percent of families living between 100 and 200 percent of the poverty threshold are estimated to experience one or more hardships (Boushey et al. 2001; Gershoff 2003). Furthermore, material hardship's correlations with income and poverty status are weaker than might be expected (Mayer and Jencks 1989; Cancian and Meyer 2004a; Short 2005; Sullivan, Turner, and Danziger 2008).1 There are at least two reasons for the modest size of these correlations. First, current income (the measure on which poverty status is most often based) is not a comprehensive indicator of a family's economic circumstances. In-kind transfers are not counted as income, nor is wealth or access to credit. All three of these resources may enable families to avoid hardship during periods of unemployment or other shocks to income (Shapiro and Wolff 2001; Sullivan et al. 2008). So too, Kathryn Edin and Laura Lein (1997) show that low-income mothers use a number of survival strategies to avoid hardship.
For example, they may rely on social programs, friends, family, and underground employment. None of these strategies is usually included in income measures. Second, material hardship may result not only from a lack of resources but also from difficulty managing those resources (Heflin, Corcoran, and Siefert 2007). For example, in analyses that control for income and other indicators of socioeconomic status, families in which there are members with drug problems, alcohol problems, depression, or other indicators of poor mental health are found to experience more material hardship than families in which members lack those characteristics (Heflin et al. 2007;Sullivan et al. 2008). Fathers' contributions of time, goods, and money can affect mothers' resources and their ability to manage such resources. --- Fathers' Material Contributions and Children's Hardship Nonresident fathers' material contributions consist of formal cash support (that is paid through the formal child support system), informal cash support (cash that is given outside the formal obligation), and noncash support (in-kind contributions). Edin and Lein (1997) describe the numerous ways in which mothers use contributions from fathers to improve the economic circumstances in their households. For example, nonresident fathers' material support, whether it is provided through formal support, informal cash payments, or in-kind contributions, can directly affect the level of hardship in the mothers' house and can increase the household's income. Because cash contributions supplement the mother's income, they can readily be used to pay rent, utilities, and other bills, as well as to purchase food, clothing, and other necessities. Fathers' in-kind contributions can directly reduce hardship (e.g., contributions of food or clothing) or can allow the mother to address other needs with the income she would have spent on those items. Fathers can also reduce hardship by offering to pay rent, utilities, or telephone bills directly. Although fathers' provision of material support (formal support, informal cash support, and in-kind support) can reduce hardship in the mother's household, these different types of support are not interchangeable and could have different effects on material hardship (Nepomnyaschy 2007;Garasky et al. 2010). Formal support, which usually arrives in the mail at regular monthly intervals, may be more stable than informal support. It may allow the mother to plan for expenses and to avoid hardships. However, high levels of unemployment, prior incarceration, and other factors may impede efforts by many lowincome fathers to make regular child support payments through the formal system (Mincy and Sorensen 1998;Cancian and Meyer 2004b;Geller, Garfinkel, and Western 2008;Swisher and Waller 2008). In addition, the conditions that state policies impose on welfare benefits require recipients to relinquish their rights to formal child support collected on their behalf; in the majority of states, mothers on welfare receive none of the support provided by fathers (Roberts and Vinson 2004). These conditions therefore provide fathers with an incentive to informally contribute. 
Finally, fathers may have more control over how their payments are spent if they pay informally, leading to reduced hardship (Weiss and Willis 1985); however, informal support could increase hardship if mothers feel they must spend these contributions on items that are visible to the father (e.g., clothes, toys, or furniture), rather than on such necessities as rent or phone bills. It is also possible that fathers' material contributions could increase hardship if their provision of support leads to declines in support from friends, relatives, new partners, or other people in the mother's life. The reduction in other support would have to be greater than the support the father provides, however, and this seems highly unlikely. --- Fathers' Visitation and Material Hardship Fathers' physical contact with their children can also affect material hardship. Regular visits from fathers may constitute a free source of child care and may substitute for paid child care. Regular visits may also allow mothers to spend time in the labor force, increasing their income. Visitation may make the father aware of his child's needs and may induce him to directly help the mother avoid certain hardships. Informal and in-kind support is often provided when fathers come to see the child (Nepomnyaschy 2007; Garasky et al. 2010). Irregular cash or in-kind support of this sort (e.g., when a father pays a utility bill or the rent for a month) is not likely to be captured in the Fragile Families data by the measure of informal cash support or the measure of in-kind support. Fathers also may loan the mother money. As Yoram Weiss and Robert Willis (1985) theorize, a father's visit allows him to monitor how money is spent in the mother's household. Thus, fathers' visits can reduce hardship if mothers are induced to use money in ways that improve the well-being of the child. Finally, a father's regular visits and involvement with his child can reduce a mother's level of stress and provide a sense of security and stability. This sort of intangible support may help her to manage the financial resources available to her. Fathers' visits could also increase hardship. If the parents' relationship is conflictual or violent, his visits could increase stress and contribute to an increase in hardship. Finally, hardship may increase if fathers consume resources (food) while in the household or if they discourage contributions from friends, relatives, new partners, or other sources. In sum, a father's time with his children, whether that time is spent in the mother's house or in his, could either reduce or increase material hardship in the mother's household. --- Empirical Evidence of the Effects of Fathers on Material Hardship In their landmark ethnographic study of the survival strategies of single mothers, Edin and Lein (1997) find that the overwhelming majority of mothers rely on informal support from their networks, particularly from the fathers of their children. Much qualitative research on low-income single mothers and fathers confirms these findings (Roy 1999; Waller and Plotnick 1999, 2001; Pate 2002, 2006; Heflin, London, and Scott 2009). Insofar as nonresident fathers' involvement can be considered an indicator of social support, there is much evidence to suggest that social support and social networks have protective effects that can reduce material hardship (Mayer and Jencks 1989; Lee, Slack, and Lewis 2004; Sullivan et al. 2008).
Little quantitative research focuses on the effect of nonresident fathers' involvement on material hardship in the mothers' household. The authors know of only two studies that examine this question, and this is the specific focus of only one study. In a study that controls for mothers' receipt of informal and formal child support, Bong Joo Lee and colleagues (2004) look at the effects of welfare receipt and work activities on four measures of material hardship among Temporary Assistance for Needy Families (TANF) recipients. They find that neither formal nor informal support is statistically significantly associated with rent, utility, or food hardship; they do find, however, that provision of formal support is statistically significantly associated with declines in the level of perceived hardship (a summary scale based on responses to four items that ask about feelings regarding one's own financial situation). Steven Garasky and Susan Stewart (2007) examine the effects of nonresident fathers' involvement (both financial and physical) on three measures of food insecurity in their children's households. They find that frequent visits (more than once per week) are consistently protective against food insecurity but that provision of child support is only statistically significantly protective against one measure of insecurity. They hypothesize that fathers make in-kind contributions while they are visiting and that such contributions are the mechanism through which fathers' visits affect hardship. However, they do not directly measure in-kind support and are not able to distinguish formal from informal cash support. Further, they measure hardship and fathers' involvement in the same time period. The analyses in the current study build on this work in a number of ways: by disaggregating financial support from fathers into formal cash support, informal cash support, and in-kind contributions; incorporating temporal ordering by using panel data; and employing eight indicators of material hardship. The analyses also consider mothers' individual attributes that may affect their ability to avoid hardships. These include physical health, mental health, impulsivity, cognitive ability, and access to social support. --- Data and Methods This article uses data from the Fragile Families and Child Wellbeing Study, a panel study of approximately 4,000 children born to unmarried parents between 1998 and 2000 in 20 large U.S. cities in 15 states. It takes advantage of four waves of panel data, starting with a baseline interview conducted when the children were born and following them up to age 5. Mothers and available fathers were interviewed at the hospital within a few days of the child's birth; fathers who were not at the hospital were interviewed elsewhere. Follow-up interviews with both parents were conducted by telephone when the child was approximately 1, 3, and 5 years old. Data in the Fragile Families study are representative of births to unmarried parents in the late 1990s in all U.S. cities with populations of 200,000 or more (see Reichman et al. [2001] for a detailed description of the study design). Of the unmarried mothers interviewed at baseline, 89 percent were reinterviewed at the 1-year follow-up, 86 percent were reinterviewed at the 3-year survey, and 84 percent were reinterviewed at the 5-year follow-up. At each wave, mothers were asked numerous questions pertaining to fathers' characteristics. 
Their responses provide detailed information about fathers, even if the fathers were not interviewed. The current study relies on mothers' reports about fathers' sociodemographic characteristics and involvement with their nonresident children. Although it would be ideal to have fathers' reports of their involvement with children, fathers were not asked about their child support payments to mothers at the 3- and 5-year interviews. In addition, though the Fragile Families survey was able to identify and interview a larger proportion of unmarried fathers than any other national survey, many fathers are missing from the data.2 Estimates suggest that fathers missing from the data are more likely to be nonresident (the group on whom this study focuses) and are more disadvantaged on socioeconomic characteristics than those who were interviewed (Teitler, Reichman, and Sprachman 2003). Therefore, relying on fathers' reports could introduce nonresponse bias and could substantially reduce sample sizes across waves. The sample in the current study consists of mothers who were not cohabiting with the focal child's father at each follow-up interview (1-, 3-, or 5-year follow-up) and who were reinterviewed at least at the 1-year survey. The majority of mothers (69 percent) participated in more than one follow-up interview. Stacking the three waves of follow-up data creates an unbalanced panel of 4,469 repeated observations on 2,180 unique mothers. The sample sizes are 1,373 at the 1-year interview, 1,478 at the 3-year interview, and 1,618 at the 5-year interview. The increase in sample sizes from wave to wave reflects the trend that unmarried parents' cohabiting relationships end over time; however, a small number of mothers are lost to attrition from wave to wave.3 Besides excluding mothers who were cohabiting with the focal father, the sample also excludes those on whom data are missing for variables of interest at each wave. Specifically, data from father involvement variables are missing for 213 mothers at the 1-year survey, for 203 mothers at the 3-year survey, and for 257 mothers at the 5-year survey. Data on hardship variables are missing for 14 cases at the 1-year survey, for 19 cases at the 3-year survey, and for 8 cases at the 5-year survey. Data on covariates are missing for 63 mothers at the 1-year interview, for 57 mothers at the 3-year interview, and for 59 mothers at the 5-year interview. These criteria exclude a total of 207 mothers for whom observations are missing at every wave. Supplementary analyses based on balanced panel data (i.e., data in which each mother appears in all three follow-up waves) examine a subsample of 2,337 observations of 779 unique mothers.

Notes: 2. Seventy-five percent of eligible unmarried fathers (those who were associated with an interviewed mother) were interviewed at the baseline survey. Their follow-up response rates were 65 percent at the 1-year survey, 63 percent at the 3-year follow-up, and 61 percent at the 5-year follow-up. 3. Of the 3,711 unmarried mothers in the baseline sample, 3,293 were reinterviewed at the 1-year follow-up. Of these, 50 percent (1,642) were not cohabiting with the father at that follow-up. At the 3-year interview, interviews were conducted with 3,009 mothers who were unmarried at baseline. Of these, 58 percent (1,731) were not cohabiting at that point. At the 5-year interview, follow-up interviews were conducted with 2,921 mothers who were unmarried at baseline. Of these, 66 percent (1,934) were not cohabiting at that time.
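As a rough sketch of the sample construction described above (the DataFrame and column names are hypothetical stand-ins, not the Fragile Families variable names), the three follow-up waves could be stacked into an unbalanced mother-wave panel along these lines:

```python
import pandas as pd

def build_unbalanced_panel(wave1, wave3, wave5):
    """Stack the 1-, 3-, and 5-year follow-up files into one panel,
    keeping mother-wave observations where the mother was not cohabiting
    with the focal father and key variables are non-missing."""
    frames = []
    for wave_label, df in [(1, wave1), (3, wave3), (5, wave5)]:
        df = df.copy()
        df["wave"] = wave_label
        # keep mothers not cohabiting with the focal father at this wave
        df = df[df["cohabiting_with_father"] == 0]
        # drop rows missing father-involvement, hardship, or covariate data
        df = df.dropna(subset=["formal_support", "informal_support",
                               "inkind_support", "contact_days",
                               "n_hardships"])
        frames.append(df)
    return pd.concat(frames, ignore_index=True)

# A balanced panel for the supplementary analyses would keep only mothers
# observed at all three waves, e.g.:
# counts = panel.groupby("mother_id")["wave"].nunique()
# balanced = panel[panel["mother_id"].isin(counts[counts == 3].index)]
```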
--- Material Hardship Material hardship, the outcome of interest, is measured using a series of questions posed in several national surveys, including the Survey of Income and Program Participation, the National Survey of America's Families, and the American Housing Survey (Beverly 2001). At all three follow-up surveys, mothers were asked whether they had to do any of the following things in the 12 months prior to the interview because there was not enough money: (1) receive free food or meals; (2) not pay the full amount of rent or mortgage payment; (3) not pay the full amount of a gas, oil, or electricity bill; (4) have gas or electric service turned off or oil not delivered; (5) have phone service disconnected; (6) be evicted from your home or apartment for not paying the rent or mortgage; (7) stay in a shelter, abandoned building, an automobile, or any other place that was not meant for regular housing, even for one night; (8) not seek medical attention for anyone in your household who needed to see a doctor or go to the hospital, because of the cost. The primary measure of material hardship is based on the number of hardships that a family experienced in the 12 months prior to each wave. The number of affirmative responses to these measures was used to create a variable with a possible range from zero to eight; zero indicates that the mother reports no hardships, and a score of eight indicates that she responded affirmatively to all eight items. However, prior research points to the fact that each of these measures of hardship may have different antecedents, may lead to different consequences, and may represent very different types of problems (Beverly 2000, 2001; Ouellette et al. 2004; Heflin, Sandberg, and Rafail 2009; Rose, Parish, and Yoo 2009). Each hardship indicator is therefore also analyzed separately. --- Father Involvement Both fathers' financial and physical involvement with their children are considered in this study. The research examines three types of contributions: formal child support, informal child support, and in-kind support. Formal child support is support received through an established child support order. Informal child support is any cash support received from the father outside of a formal order. In-kind support includes clothes, toys, medicine, food, or other noncash support provided by the father. Formal and informal cash support are measured by continuous variables that reflect the average amount of support provided per month at each wave for the period that the father was eligible to pay support. Fathers' eligibility to pay formal support is defined as the number of months elapsed since the start of the parents' child support order; for informal support, eligibility is defined as the number of months elapsed since the father stopped living with the mother. Eligibility for informal support is assessed at each wave (for fathers who never lived with the mother, it is the total reporting period at each wave). The authors choose to create a monthly amount of support received because the figure reported for the year preceding an interview is conflated with the length of time that a child support order has been in place or how long ago parents stopped cohabiting. For example, two mothers may report $1,000 of formal support received in the past year, but one obtained a child support order 2 months ago and therefore received $500 per month; the other mother has had an order for 10 months and therefore received only $100 per month.
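A minimal sketch of how these two measures could be constructed (hypothetical function and variable names, not the survey's actual item names) makes the rules concrete:

```python
def hardship_count(responses):
    """Sum of affirmative answers to the eight hardship items (range 0-8)."""
    return sum(1 for answered_yes in responses if answered_yes)

def monthly_support(amount_last_year, months_eligible):
    """Average support per month over the period the father was eligible to
    pay (months since the support order began, or since the parents stopped
    living together); fathers reported to have paid nothing are coded zero."""
    if months_eligible <= 0 or amount_last_year <= 0:
        return 0.0
    return amount_last_year / months_eligible

# Worked example from the text: both mothers report $1,000 of formal support
# in the past year, but the eligibility windows differ.
print(monthly_support(1000, 2))   # 500.0 (order obtained 2 months ago)
print(monthly_support(1000, 10))  # 100.0 (order in place for 10 months)
```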
The effect of child support on economic circumstances in these two mothers' households will be very different, and using the yearly report of receipt masks those differences. Fathers who are reported to have paid no support are coded zero. In-kind support is measured as a dichotomous variable. The variable is positive if the mother indicates that, in the year prior to the interview, the father bought clothes, toys, medicine, food, or other items for the child. The response is considered to be affirmative if the mother reports that he often or sometimes bought those items; it is considered to be negative if she reports that he rarely or never bought them. Fathers' physical contact with children is measured as the number of days on which he saw the child in the 30 days prior to the interview; the measure includes fathers who did not see their child (and, thus, whose days of contact is zero). Among the father involvement variables, the highest estimated correlation is between in-kind support and the number of days of contact (0.59), but the correlations are also high between the number of days of contact and informal support (0.34), as well as between in-kind and informal support (0.34). The lowest estimated correlation is between formal and in-kind support (0.06). That between formal support and days of contact is also estimated to be low (0.004). Formal support is estimated to be negatively and statistically significantly correlated with informal support (-0.07). --- Covariates The analyses consider three broad categories of covariates: sociodemo-graphic characteristics (of mothers, fathers, and children); measures of the father's commitment to the mother and child at the baseline survey; and indicators of the mother's ability to avoid hardship. Also considered is the unemployment rate in the city where the mother is interviewed at each wave. Unemployment is entered as a time-varying covariate. The mean rate of unemployment for the pooled sample is 5 percent. Family sociodemographic characteristics-The analyses include mothers' reports about characteristics of mothers, fathers, and children. Parents' race or ethnicity is measured as non-Hispanic white, non-Hispanic black, Hispanic, and other. Parents' education is measured in three categories: less than a high school or general equivalency diploma, a high school or general equivalency diploma, and more than a high school or general equivalency diploma. Parents' age is also represented in three categories: under age 21, ages 21-29, and age 30 or over. In addition, the analysis considers whether the mother was born in the United States, whether she received TANF or food stamps in the year prior to the child's birth, whether the father worked in the week prior to the child's birth, the sex of the child, and whether the child was low birth weight. All these variables are measured at the baseline survey and do not vary over time. The analysis also includes several time-varying sociodemographic variables that are measured at each wave of the survey: age of the child (in months), the number of children under age 18 in the mother's household, the number of adults in the household, whether the mother has a new married or cohabiting partner, and the average monthly household income (minus child support received). Previous research finds that many of these variables are related to both hardship and fathers' involvement with their children. 
The authors expect that children of more advantaged mothers will have more involved fathers (because these fathers are also more advantaged) and those mothers will therefore be more likely to avoid hardship. Father's commitment to mother and child at baseline-This set of variables includes four items drawn from mothers' reports: the parents' relationship at the baseline survey (cohabiting, romantically involved but not cohabiting, just friends, or no relationship); whether the father contributed cash or any other resource during the pregnancy; whether he visited the mother and child in the hospital; and whether he intended to contribute to the child in the future. The father's commitment to the mother and child at baseline is likely associated with his investment in the child, as well as with the likelihood that he will contribute financially and be involved with the child. These fathers may also select mothers who are likely to avoid hardships. Mother's ability to avoid hardship-Prior research uses many of the previously described variables as proxies for the mother's ability to avoid hardship. The extensive data in the Fragile Families survey enable this study to include explicit measures of such attributes. The baseline survey provides measures of the mother's access to social support and of her health. Access to social support is measured as the sum of the mother's responses (yes = 1, no = 0) to three questions about whether, in the year following the interview, she would be able to count on someone in her family to (1) loan her $200, (2) provide her with a place to live, and (3) help her with babysitting or child care. Possible scores range from 0 to 3; higher scores indicate more access to social support. Maternal health is measured as a dichotomous variable for whether the mother reports excellent health as opposed to very good, good, fair, or poor. To measure mothers' cognitive ability, the study uses an eight-item word similarities test that is based on the Revised Wechsler Adult Intelligence Scale (WAIS-R).4 A six-item scale measures mothers' impulsivity. 5 The study also measures mothers' reports on the mental health of their mothers (i.e., the focal child's maternal grandmother). 6 The variables for mothers' cognitive ability, mothers' impulsivity, and maternal grandmothers' mental health are only measured at the 3-year survey; however, because the variables are assumed to be fixed over time, the analyses treat them as baseline measures. Finally, the study includes a measure of whether the mother reported at the baseline survey that she had a drug or alcohol problem. Mothers who have more access to social support, who are less impulsive, have higher cognitive scores, and better mental and physical health are expected to have fewer hardships than mothers who do not have these characteristics. Prior research establishes a strong link between access to social support (particularly the ability to borrow money) and a reduction in hardship (Mayer and Jencks 1989;Lee et al. 2004;Sullivan et al. 2008). Mothers' physical health, mental health, and cognitive ability also are linked to hardship (Danziger et al. 2000;Kalil, Seefeldt, and Wang 2002;Heflin et al. 2007;Sullivan et al. 2008); however, these relations could be endogenous. For example, experiences of material hardship may lead to poor mental health, and poor mental health can affect the ability to manage resources. 
To minimize this specification problem, the analyses use the measure of grandmother's mental health as an exogenous proxy for mothers' own mental health. A mother's current problem with drugs or alcohol may also be endogenous to material hardship; therefore, the analyses include mothers' baseline report of a drug or alcohol problem. --- Analytic Strategy First, descriptive statistics are presented for all previously described measures for the full sample and disaggregated by whether the mother reports any hardships. Next, pooled, cross-sectional, ordinary least squares (OLS) models are presented, which regress the number of hardships in the mother's household on measures of fathers' involvement. Nested models first control only for sociodemographic characteristics and then add indicators of fathers' commitment to the mother and child. The models finally add measures of mothers' ability to avoid hardships (some of these measures were not available in prior research). Standard errors in all models are adjusted to account for multiple observations on each individual over time. Selection bias-As with all observational studies, a number of potential biases limit the ability to make causal inferences. Families with fathers who pay support and visit their children may differ in unobserved ways from families with fathers who do not, and such differences may bias the estimated relations between fathers' involvement (either financial or physical) with their children and hardship in the mothers' household. In the extreme case, the estimated relation could be fully attributed to these unobserved differences, and there would be no causal relation between the two. For example, fathers may contribute less time and money to a custodial parent who is in poor mental health, has problems with drugs or alcohol, or has low cognitive skills and is not capable of making good choices for her family. The study addresses this potential bias in three ways. First, as discussed previously, the analyses control for many characteristics that are generally unobserved in many prior studies. Second, the study takes advantage of the panel structure of the Fragile Families data by including a lagged dependent variable in the OLS model: the number of hardships at the prior wave. Third, the analyses estimate models with individual fixed effects, which only examine effects within individuals. Inclusion of a lagged dependent variable reduces the possibility that fixed, unobserved differences drive the results, because these differences should be reflected in the lagged dependent variable. However, effects are still estimated within and between individuals. Unobserved heterogeneity is therefore still possible. Individual fixed-effects models rely only on changes within individuals; using them eliminates the possibility that the results are driven by constant unobserved differences between individuals, though this method does not address unobserved within-person differences that change over time. One drawback of fixed-effects analysis is that results are estimated only for those individuals whose values on the dependent variable change over time and for those who are observed at least twice in the data. This leads to a less representative sample and one that is substantially smaller in size than the full analysis sample. Because the analyses hold constant all characteristics that do not vary over time (within individuals), these regressions only include the variables that change over time.
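As a rough illustration of this estimation strategy (not the authors' code; the software for these models is not reported, and the DataFrame and variable names below are placeholders), the pooled OLS, lagged-dependent-variable, and individual fixed-effects specifications might be set up in Python with statsmodels as follows:

```python
import statsmodels.formula.api as smf

involvement = "formal_support + informal_support + inkind_support + contact_days"
covariates = "hh_income + n_children + n_adults + new_partner + city_unemployment"
# baseline (time-invariant) controls would be added in the pooled models;
# they are omitted here for brevity and drop out of the fixed-effects model.

def cluster_fit(formula, data):
    """OLS with standard errors clustered on the mother, to account for
    repeated observations on the same mother across waves (assumes the
    panel has already been restricted to complete cases)."""
    return smf.ols(formula, data=data).fit(
        cov_type="cluster", cov_kwds={"groups": data["mother_id"]})

# Pooled cross-sectional OLS: number of hardships (0-8) on father involvement.
pooled = cluster_fit(f"n_hardships ~ {involvement} + {covariates}", panel)

# Lagged-dependent-variable model: add the hardship count from the prior wave.
lagged_sample = panel.dropna(subset=["lag_n_hardships"])
lagged = cluster_fit(
    f"n_hardships ~ {involvement} + {covariates} + lag_n_hardships", lagged_sample)

# Individual fixed effects via mother dummies (least-squares dummy variables):
# identified only from within-mother changes over time. For a panel with
# thousands of mothers, a within-transformation or a dedicated panel
# estimator would be more efficient, but the coefficients are equivalent.
fixed = smf.ols(f"n_hardships ~ {involvement} + {covariates} + C(mother_id)",
                data=panel).fit()
```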
Supplementary analyses present results based on a balanced panel of observations. This panel includes only those cases in which the mother is observed in the sample at all waves. This analysis addresses the possibility that the results are driven by the mothers with the greatest number of observations, since these mothers contribute the most data. Other supplementary analyses examine each type of material hardship separately. Because each indicator of material hardship is a dichotomous variable, these analyses employ pooled, cross-sectional logistic regression models and fixed-effects logit models. Reverse causality-Another potential source of bias is reverse causality. Specifically, material hardship in the mother's household may affect fathers' financial or physical involvement. For example, a mother may call on the father for help because she is having financial problems, and he may provide some financial assistance when he comes to see the child. These events would lead to a positive but spurious association between fathers' involvement and hardship; such an association could offset or dominate the true negative causal effect (if there is one). Reverse causality could also lead to a spurious negative association between involvement and hardship. For example, a mother's experience of hardship (phone disconnected or eviction) may prevent the father from visiting the child. One potential way to disentangle the temporal ordering of effects is to measure fathers' involvement and hardship at the prior wave, to explicitly test whether hardship at that prior wave affects fathers' involvement in the current wave, and to consider whether involvement at the prior wave affects hardship at the current wave. The analysis uses Mplus software (version 4) to estimate these cross-lagged models within a structural equation modeling framework. In these models, the mean of father involvement at the 1-and 3-year surveys is used to predict hardship at the 5-year survey. So too, mean hardships from the 1-and 3-year surveys are used to predict father involvement at the 5-year survey. The models control for baseline characteristics as well as the lagged measure of the dependent variable (father involvement and hardship at the 3-year survey). The structural equation modeling framework, which estimates these reciprocal effects simultaneously, allows for the estimation of the effects of earlier father involvement on future material hardship independently of the effects of earlier father involvement on future father involvement and vice versa. --- Results --- Sample Description Outcomes-Table 1 presents descriptive characteristics for the full sample of mothers who had nonmarital births and reported at each wave that they do not reside with the father of the focal child. Nearly half (49 percent) of the sampled mothers report that they experienced at least one of the eight hardships measured in these analyses. On average, sample mothers report experiencing 0.99 hardships in the year prior to the survey. Utility and phone bills account for the most commonly reported hardships; 27 percent of recipients report that they did not pay the full amount of a gas, oil, or electricity bill, and 23 percent report that their phone service was turned off. Other hardships are reported less frequently. 
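Schematically, and leaving aside the measurement details of the Mplus implementation, the cross-lagged system estimated simultaneously can be written as the following pair of equations, where H denotes the hardship count, F a father-involvement measure, X the baseline controls, and bars denote means of the 1- and 3-year reports (a sketch of the specification described above, not the authors' exact model):

```latex
\begin{aligned}
H_{i,5} &= \alpha_H + \beta_H \,\overline{F}_{i,\,1\text{--}3} + \gamma_H H_{i,3} + X_i'\delta_H + \varepsilon_{i,H},\\
F_{i,5} &= \alpha_F + \beta_F \,\overline{H}_{i,\,1\text{--}3} + \gamma_F F_{i,3} + X_i'\delta_F + \varepsilon_{i,F}.
\end{aligned}
```

Here the coefficient on mean earlier involvement in the first equation asks whether involvement predicts later hardship net of prior hardship, while the coefficient on mean earlier hardship in the second asks whether hardship predicts later involvement net of prior involvement; estimating both paths at once is what allows the reciprocal effects to be separated.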
Fifteen percent of participants report that they did not make the full rent or mortgage payment, 12 percent report that they received free food or meals, and 9 percent report that their gas or electric service was shut off or oil was not delivered. Six percent report that someone in their household needed to see a doctor or go to the hospital but did not go because of a lack of money. The least commonly reported hardships were eviction for failure to make rent or mortgage payments (3 percent) and staying in a place not meant for housing (3 percent). On average, mothers reporting any hardship (2,171 participants; second column of table 1) report 2.04 total hardships and are much more likely than the full sample to experience each of the individual hardships. More than half (56 percent) of those who reported any hardship indicate that they did not pay all of a utility bill (gas, oil, or electricity) that was due, and 47 percent indicate that their phone service was shut off. Nearly one-third (30 percent) of these mothers report that they did not make a full rent or mortgage payment, and one-quarter report that they received free food or meals. Father involvement-Nearly half of fathers (48 percent) reportedly made an in-kind contribution during the period they lived apart from their children. Slightly fewer (42 percent) reportedly made an informal cash contribution. Informal cash contributions average $53 per month across all fathers. Far fewer fathers (only 21 percent) reportedly made formal payments. These payments average $39 per month across all fathers. Not reported in the table is the shift over time from in-kind and informal support to formal child support. As the time from the child's birth (and the length of time since parents ceased cohabitation) increases, reported informal support (which is initially high) declines and formal support (which is initially low) increases. These amounts become approximately equal at 36 months after the child's birth. After the 36-month point, formal support is estimated to become greater than informal support. (For an analysis of the effects of child support enforcement on informal and formal support over time, see Nepomnyaschy and Garfinkel [2010].) Sampled mothers report that well over half (57 percent) of fathers had contact with their child in the 30 days prior to the interview. Across the sample, fathers are estimated to have contact with their child on an average of 7.5 of the 30 days prior to the interview. Children living in families that reportedly experienced at least one hardship are found to be less likely to receive in-kind support from fathers than are children in families that report no hardship (46 percent vs. 50 percent). So too, children in families that report any hardship are found to receive less informal and formal cash support. They are reported to have less contact with their father (6.9 of the 30 days prior to interview) than children living in families with no hardships (8.1 days). These differences are statistically significant. Family sociodemographic characteristics-The mothers in this sample report that they are mostly nonwhite (64 percent identify themselves as non-Hispanic black and 21 percent identify themselves as Hispanic), have low education (39 percent did not complete high school), and were relatively young at the time of the focal child's birth (38 percent were less than 21 years old). Most report that they were born in the United States (93 percent). 
Fathers' characteristics are reported to be similar to those of mothers, but fathers are older (80 percent were age 21 or older at the baseline interview; 62 percent of mothers were age 21 or older at that time). Only 59 percent of these fathers were reported to be employed in the week prior to the child's birth. Nearly half of the mothers (48 percent) report that they received TANF or food stamps at the time of the child's birth, and 12 percent report that the focal child was low birth weight (i.e., weighed less than 2,500 grams).
Children in single-parent families, particularly children born to unmarried parents, are at high risk for experiencing material hardship. Previous research based on cross-sectional data suggests that father involvement, especially visitation, diminishes hardship. This article uses longitudinal data to examine the associations between nonresident fathers' involvement with their children and material hardship in the children's households. Results suggest that fathers' formal and informal child support payments and contact with their children independently reduce the number of hardships in the mothers' households; however, only the impact of fathers' contact with children is robust in models that include lagged dependent variables or individual fixed effects. Furthermore, cross-lagged models suggest that material hardship decreases future father involvement, but future hardship is not diminished by father involvement (except in-kind contributions). These results point to the complexity of these associations and to the need for future research to focus on heterogeneity of effects within the population. Today, more than one in four U.S. children (26 percent) lives with only one parent (U.S. Census Bureau 2010). Moreover, half of all children born in the last several decades are predicted to spend some portion of their childhood in a single-parent family (Bumpass and Sweet 1989). Further, 41 percent of all births today are to unmarried mothers, and that figure is nearly 70 percent among black mothers (Hamilton, Martin, and Ventura 2009). Although some children in single-parent families live with their fathers, the overwhelming majority (84 percent) live with their mothers and have a living nonresident father (U.S. Census Bureau 2010). Research suggests that children growing up in single-parent families, particularly children born to unmarried parents, are much more likely to be poor and to experience more material hardships than those in two-parent families (Lerman 2002;DeNavas-Walt, Proctor, and Smith 2008). As a consequence, children in single-parent families also face disadvantage in a number of important domains: health, development, and educational attainment (McLanahan and Sandefur 1994;Magnuson and Votruba-Drzal 2009). Nonresident fathers' involvement in their children's lives, both through their financial contributions and their physical involvement, can ameliorate some of these disadvantages. Research suggests that child support payments from fathers increase income and reduce poverty in custodial mothers' households (Meyer and Hu 1999;Bartfeld 2000;Sorensen and Zibman 2000); however, other research suggests that payments from poor fathers are either too small or inconsistent to improve financial well-being in the mothers' household (Mincy and Sorensen 1998; Cancian and Meyer 2004b). Research also finds that child support
mothers (48 percent) report that they received TANF or food stamps at the time of the child's birth, and 12 percent report that the focal child was low birth weight (i.e., weighed less than 2,500 grams). On average and across the pooled years of data, mothers report that there are 2.4 minor children and two adults living in their households. Twenty-two percent of mothers report that they have a new married or cohabiting partner. On average, sampled mothers report approximately $1,700 of monthly household income across the pooled waves (this excludes income from child support). Sampled black and white mothers are more highly represented among those reporting experience of a hardship (65 percent of black mothers, 14 percent of white mothers) than among those reporting no hardship (63 percent black, 11 percent white). The percentage of Hispanic mothers in the subsample reporting a hardship (19 percent) is smaller than that of counterparts who reported no hardship (24 percent). In general, mothers who report material hardship are found to have lower levels of education; they are more likely to not have completed high school and less likely to have a diploma, though they are slightly more likely to have some post high school education than mothers reporting no hardship. Mothers with any hardship are also more likely to have been born in the United States than those without material hardship. Over half (52 percent) of mothers with any hardship report receiving TANF or food stamps at the birth of the child, but these benefits were received by only 44 percent of mothers who report no hardship. Fathers' race, ethnicity, and age are estimated to be statistically significantly associated with the mother's report of hardship, though neither paternal educational attainment nor work status at the child's birth is associated with hardship to a statistically significant degree. Mothers experiencing hardship report a greater number of children and a fewer number of adults in the household at each wave than mothers who report no hardship. As expected, mothers experiencing any hardship report $500 less in monthly household income (nearly 25 percent less) than those who report no hardship. These differences are statistically significant.
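As a rough illustration of how the hardship measures described above can be assembled, the sketch below builds the 0-8 hardship count from eight hypothetical yes/no indicator columns and compares father-child contact across mothers with and without any hardship. The file name, column names, and the use of pandas and SciPy are assumptions for illustration, not the authors' actual code.

```python
# Minimal sketch (hypothetical file and column names, not the authors' code):
# build the 0-8 hardship count from eight yes/no indicators and compare
# father-child contact between mothers with and without any hardship.
import pandas as pd
from scipy import stats

hardship_items = [
    "free_food", "rent_unpaid", "evicted", "utility_unpaid",
    "utilities_shut_off", "phone_shut_off", "no_doctor_visit", "improper_housing",
]  # hypothetical 0/1 columns, one per hardship indicator

df = pd.read_csv("mother_waves.csv")                        # hypothetical pooled mother-wave data
df["n_hardships"] = df[hardship_items].sum(axis=1)          # count ranges from 0 to 8
df["any_hardship"] = (df["n_hardships"] > 0).astype(int)

with_h = df.loc[df["any_hardship"] == 1, "days_contact"]
without_h = df.loc[df["any_hardship"] == 0, "days_contact"]
print(with_h.mean(), without_h.mean())                      # e.g., 6.9 vs. 8.1 days in the article
print(stats.ttest_ind(with_h, without_h, equal_var=False))  # a simple two-sample check
```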
Father's commitment to mother and child at baseline: At the baseline interview, one-third of mothers reported that they were cohabiting with the father, 42 percent reported that they were romantically involved but not cohabiting, 12 percent reported that they were friends, and only 14 percent reported that they had no relationship with the father. An overwhelming majority of fathers reportedly contributed cash or other items during the pregnancy, visited in the hospital, and intended to contribute to the child in the future. Mothers who reported any hardship are more likely to have cohabited with the father at the time of the child's birth but are less likely to have been romantically involved with him than mothers who reported no hardships. None of the other variables measuring fathers' commitment is found to be statistically significantly associated with report of any material hardship.

Mother's ability to avoid hardship: Mothers report a high level of access to social support (the average score is 2.72 out of a possible 3 on this index), although only 30 percent of mothers reported that they were in excellent health at the baseline survey. Participants have an average score of 2.09 on the impulsivity score (out of 4; higher is more impulsive) and a score of 6.41 (out of 16; higher is better) on the test of mothers' cognitive skills (the WAIS-R word similarities index). On average, mothers report that their mothers have 0.63 mental health problems (out of 4), and 6 percent reported at the time of the child's birth that they have their own problems with alcohol or drugs. The levels of these hardship avoidance variables all differ to a statistically significant degree by whether mothers reported any hardship at the follow-up interviews. Mothers with at least one hardship report lower levels of access to social support, lower likelihood of being in excellent health, and higher levels of impulsivity than mothers who report no hardship. Mothers who experience any hardship report more mental health problems for their own mothers than those with no hardship. The likelihood of having a drug or alcohol problem is estimated to be greater among mothers reporting a hardship than among mothers with no reported hardship. Surprisingly, results from the WAIS-R test suggest that mothers with at least one hardship have higher scores on the cognitive skills test than mothers with no hardships.

--- Father Involvement and Material Hardship

Table 2 presents results from pooled cross-sectional OLS models that regress the number of material hardships (range 0-8) on fathers' involvement. The analyses presented here examine the full sample of mothers and control for different sets of covariates. Model 1 controls for sociodemographic characteristics of the respondent's family, model 2 adds controls for measures of fathers' commitment to the mother and child at the baseline survey, and model 3 adds controls for explicit measures of the mother's ability to avoid hardships. The first point to consider in this table is that the father involvement coefficients remain relatively stable across the models as various controls are added. The measures of fathers' formal and informal cash support, as well as of the number of days of contact, are each negatively and statistically significantly associated with the number of hardships in the mother's household.
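The following is a minimal sketch of how pooled cross-sectional models of this kind could be estimated with statsmodels, adding the three control sets stepwise and clustering standard errors on the mother to account for repeated observations. Variable and file names are hypothetical, and the control lists are placeholders rather than the article's exact covariates.

```python
# Minimal sketch (hypothetical names): pooled OLS of the hardship count on father
# involvement, adding control sets stepwise as in models 1-3, with standard errors
# clustered on the mother. Assumes the listed columns have no missing values so
# the cluster groups align with the estimation sample.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("mother_waves.csv")  # hypothetical pooled mother-wave data

involvement = "in_kind + informal_support + formal_support + days_contact"
controls_m1 = "mother_age + mother_race + mother_educ + household_income + n_children + n_adults"
controls_m2 = controls_m1 + " + baseline_relationship + prenatal_support"
controls_m3 = controls_m2 + " + social_support + impulsivity + wais_r + grandmother_mh"

results = {}
for name, controls in [("model1", controls_m1), ("model2", controls_m2), ("model3", controls_m3)]:
    formula = f"n_hardships ~ {involvement} + {controls}"
    results[name] = smf.ols(formula, data=df).fit(
        cov_type="cluster", cov_kwds={"groups": df["mother_id"]}
    )

print(results["model3"].summary())  # compare involvement coefficients across the three fits
```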
The magnitudes of the coefficients are not reduced as additional controls are added for parent and child characteristics (except for a slight reduction in the size of the formal support coefficient). Results from model 3 suggest that a $100 increase in either monthly informal or formal cash support is associated with 0.05 fewer reported hardships, resulting in a 5 percent decline in the number of reported hardships (0.99, the mean number of hardships for the full sample, minus 0.05 gives 0.94, roughly a 5 percent reduction). Across all models, each extra day of father-child contact per month is associated with 0.01 (or 1 percent) fewer hardships in the mothers' household. These estimates find no statistically significant association between fathers' in-kind support and the number of reported hardships. Results in model 3 also suggest that the number of reported material hardships is not statistically significantly associated with the measures of maternal race, ethnicity, or age, once the other characteristics are included. In that model, mothers with more than a high school education are estimated to report 0.17 (or 17 percent) more hardships than those without a degree. The estimates for fathers' demographic characteristics reveal similar patterns, but there are a few differences. Fathers who are age 30 or older report 0.14 (or 14 percent) more hardships than fathers who are under age 21, and Hispanic fathers report 0.21 (or 21 percent) fewer hardships than non-Hispanic white fathers, but neither of these coefficients is statistically significant at conventional levels in the fully controlled model (model 3). Some previous research finds that age (mostly mothers' age) is positively associated with several measures of hardship (Short 2005; Heflin et al. 2007; Parish, Rose, and Andrews 2009). So too, in some studies that control for income, non-Hispanic white mothers are found to be more likely to experience some types of hardship than are mothers of other racial and ethnic groups (Gundersen and Oliveira 2001; Bauman 2002; Heflin et al. 2007; Sullivan et al. 2008). Several prior studies that control for income and other sociodemographic characteristics also find either that education is not associated with hardship or that the associations run in unexpected directions (e.g., higher education associated with more hardship; Lee et al. 2004; Garasky and Stewart 2007; Leete and Bania 2009). Mothers who report receiving TANF or food stamps in the year prior to the child's birth are estimated to experience 0.17 (or 17 percent) more hardships than do those who report receiving no such benefits. As expected, the estimates in all three models indicate that mothers' reported monthly income (minus child support) is negatively and statistically significantly associated with the number of reported hardships; every $100 of income per month is associated with 0.01 (or 1 percent) fewer hardships. Finally, the unemployment rate in respondents' cities is estimated to be positively and statistically significantly associated with the number of reported hardships; estimates in model 3 suggest that each percentage point increase in unemployment is associated with 0.04 (or 4 percent) more hardships. Mothers who reported that they were romantically involved or just friends with the father at baseline are found to have fewer hardships than mothers who reported that they were cohabiting at that time.
Perhaps mothers in these relationships have a more difficult time adjusting to the father's absence than mothers who have been living without the father since the birth of the child. Finally, the results suggest that most of the variables measuring mothers' ability to avoid hardships are statistically significantly associated with the number of reported hardships. Mothers' access to social support at baseline is associated with fewer reported hardships; mothers' impulsivity, grandmothers' mental health problems, and mothers' own drug and alcohol problems are associated with more hardships. Mothers who reported a drug or alcohol problem at baseline are estimated to experience 0.22 (or 22 percent) more hardships than mothers who do not report such a problem. As results in the bivariate models (table 1) suggest, mothers with higher scores on the WAIS-R measure of cognitive skills are estimated to report more hardships. One potential explanation for this puzzling result is that these mothers may be better at reporting hardship than mothers with lower scores on this measure. Another important result from table 2 is the finding that the strength of some sociodemographic associations declines if other controls are added to the models, particularly controls related to mothers' ability to avoid hardships. The magnitudes of the coefficients for parents' race, ethnicity, age, and education are reduced by nearly half if these other variables are added. This finding confirms the importance of including these types of variables in studies of material hardship. It also suggests that the effect of demographic characteristics may be overestimated in previous studies' predictions of material hardship. However, the size of the monthly household income coefficient remains quite stable across models; this suggests that income (and fathers' involvement, as mentioned previously) is highly protective against material hardship, even after models are expanded to include controls for mothers' ability to avoid hardship.

--- Unobserved Heterogeneity

To address potential selection bias, table 3 presents a number of alternative specifications of the effects of father involvement on material hardship. Each column in the table represents a separate regression in which all previously discussed covariates are controlled. To facilitate comparison, the first column (OLS, unbalanced panel) repeats results from model 3 in table 2. The second column presents results from a model that includes the number of hardships from the previous wave (lagged dependent variable). The use of a lagged dependent variable should reduce unobserved heterogeneity in the results, yet the effects are still estimated within and across individuals. As expected, the number of hardships at the prior wave is found to be strongly associated with hardship in the current wave; each additional hardship at the 3-year survey is estimated to be associated with 0.40 (or 40 percent) more hardships reported at the 5-year survey. In the lagged dependent variable model, the coefficient for in-kind support is estimated to be slightly larger than that obtained from the original OLS estimates. The use of the lagged variable reduces the size of the coefficients for formal and informal support, such that neither is statistically significant. The two models produce identical coefficients for the association between the number of days of contact and the number of reported hardships. This association remains highly statistically significant in both models.
These results suggest that families with different levels of father involvement may differ in unobserved ways. These differences could drive the observed associations between father involvement and reports of hardship. However, fathers' contact with children is estimated to have a protective effect on hardship, and the effect does not appear to be driven by unobserved differences. The third column in table 3 presents results from models that include individual fixed effects. These effects are estimated only within individuals, and the underlying analyses hold constant all of the within-individual characteristics that do not vary over time. These models should eliminate all possibility that static unobserved differences drive the results. In the fixed-effects model, the estimated coefficient for informal support is substantially smaller than that in the lagged dependent variable model, and the fixed-effects estimate for formal child support remains the same as that for the lagged model. The coefficient for days of contact remains unchanged and highly statistically significant across all three models. Results from the fixed-effects model again provide evidence that the negative association between the number of days of fathers' contact and the number of reported hardships is not driven by static unobserved differences between families. In the fixed-effects model, the association between in-kind support and the number of reported hardships becomes strongly positive and statistically significant. Fathers' provision of in-kind support is associated with 0.16 (or 16 percent) more reported hardships. This result may suggest that reverse causality is a factor in this relation, such that mothers who experience material hardship may ask fathers for help, and fathers may respond by providing noncash contributions. For example, if a mother does not have money for groceries, she may call the father and he may purchase groceries for the household. This relation is observed when effects are estimated only within individuals, but it is suppressed in previous models, because those analyses average effects between and within individuals. The last two columns of table 3 present results from a balanced panel of mothers. This panel includes only those mothers for whom information is available from all three follow-up waves. These balanced panel analyses are conducted to reduce the possibility that the results are driven by the mothers with the most observations, another form of selection bias. The sample for these analyses is smaller than that used in the study's other models, because this sample excludes mothers for whom observations are missing at any of the waves. The OLS column in the balanced panel presents results from pooled cross-sectional OLS models (comparable to results in the OLS column of the unbalanced panel). The fixed-effects column of the balanced panel presents results from models with individual fixed effects (comparable to results in the fixed-effects column of the unbalanced panel). In general, estimates for the balanced panel models are very similar to those for the original, unbalanced panel models. In the pooled cross-sectional OLS model from the balanced panel, the magnitudes of the coefficients for informal and formal cash support are somewhat larger (more negative) than those in all previous models from the unbalanced panel; the magnitude of the estimated coefficient for the number of days of paternal contact is unchanged, but it is not statistically significant. 
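As a rough sketch of the two alternative specifications discussed above, the code below adds the previous wave's hardship count as a lagged dependent variable and then applies an individual fixed-effects (within) transformation by demeaning each mother's time-varying variables. Names are hypothetical, and dedicated panel packages would handle degrees of freedom and clustering more carefully than this illustration does.

```python
# Minimal sketch (hypothetical names) of a lagged-dependent-variable model and an
# individual fixed-effects (within) estimator for the hardship count.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("mother_waves.csv").sort_values(["mother_id", "wave"])
rhs = "in_kind + informal_support + formal_support + days_contact + household_income"

# (1) Lagged dependent variable: control for hardship reported at the previous wave.
df["n_hardships_lag"] = df.groupby("mother_id")["n_hardships"].shift(1)
ldv_df = df.dropna(subset=["n_hardships_lag"])            # first wave has no lag
ldv = smf.ols(f"n_hardships ~ n_hardships_lag + {rhs}", data=ldv_df).fit(
    cov_type="cluster", cov_kwds={"groups": ldv_df["mother_id"]}
)

# (2) Fixed effects via the within transformation: demean every time-varying variable
# by mother, so only within-mother changes across waves identify the coefficients.
cols = ["n_hardships", "in_kind", "informal_support", "formal_support",
        "days_contact", "household_income"]
within = df[cols] - df.groupby("mother_id")[cols].transform("mean")
fe = smf.ols(f"n_hardships ~ {rhs} - 1", data=within).fit()   # no intercept after demeaning

print(ldv.params[["days_contact", "informal_support", "formal_support", "in_kind"]])
print(fe.params)
```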
The fixed-effects results in the balanced panel are estimated to be nearly identical to those of the unbalanced panel's fixed-effects model, though the coefficients from the balanced panel are not statistically significant because the samples are smaller in these models.

--- Individual Indicators of Hardship

Table 4 presents estimates of the association of father involvement with the eight individual, dichotomous (yes or no) indicators of material hardship. The top panel presents results from pooled cross-sectional logistic regression models, and the bottom panel presents results from fixed-effects logistic regression models. The figures in the table are odds ratios, and z-statistics are presented in parentheses. It is not surprising that the associations of father involvement vary across the different measures of hardship. In the pooled cross-sectional models, informal cash support, formal cash support, and the number of days of father-child contact are each negatively associated with most of the hardship indicators (seven of the eight), though not all of the coefficients are statistically significant. These results suggest that each of the three types of support protects mothers against the measured hardships. Neither informal cash support nor the number of days of father's contact is found to be related to whether a member of the mother's household did not see a doctor or go to a hospital because there was not enough money to do so. So too, the pooled cross-sectional estimates identify no relation between formal support and whether the family did not pay the full amount due for a utility (gas, oil, or electricity) bill. The results are less consistent for in-kind support. The results in the top panel suggest that in-kind support is positively associated with some hardships and negatively associated with others, although only two coefficients are found to be statistically significant, and they are only marginally so. In the fixed-effects models (bottom panel of the table), sample sizes are much smaller than those in the pooled models because, as mentioned previously, fixed effects can only be estimated for individuals who experience a change in the dependent variable from wave to wave. Because these dependent variables are dichotomous, the chance that they will not change from wave to wave is much greater than would be the case in models that use a continuous measure of the number of hardships. Therefore, few of the coefficients reach statistical significance at conventional levels; however, the magnitudes of many of the coefficients are similar to or larger than those in the pooled cross-sectional models. Following the pattern of the fixed-effects results for the continuous measure of the number of hardships (table 3), these fixed-effects models estimate that in-kind support is positively associated with each measure of hardship, though only three of the coefficients are statistically significant. The results suggest that receipt of in-kind support increases the odds that a mother will not make the full rent or mortgage payment owed (by 62 percent) and the odds that a mother's utilities will be shut off (by 89 percent). Mothers who receive in-kind support are estimated to have 3.5 times greater odds of staying in a place not meant for housing. The fixed-effects results for informal and formal cash support are much less consistent.
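A minimal sketch of the top panel's approach for one binary indicator is shown below: a pooled logistic regression whose exponentiated coefficients are the odds ratios reported in table 4. The bottom panel's fixed-effects logit would additionally condition on mothers whose indicator changes across waves; that restriction is only noted in a comment here. All file and variable names are hypothetical.

```python
# Minimal sketch (hypothetical names): pooled logistic regression for one dichotomous
# hardship indicator, with odds ratios obtained by exponentiating the coefficients.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("mother_waves.csv")  # hypothetical pooled mother-wave data

logit_fit = smf.logit(
    "evicted ~ in_kind + informal_support + formal_support + days_contact + household_income",
    data=df,
).fit()

odds_ratios = np.exp(logit_fit.params)   # e.g., an OR of 0.95 for days_contact would mean
print(odds_ratios)                       # each contact day cuts the odds of eviction by ~5%

# The article's bottom panel uses fixed-effects (conditional) logit, which in effect
# keeps only mothers whose indicator changes from wave to wave:
changers = df.groupby("mother_id")["evicted"].transform("nunique") > 1
print(changers.mean())                   # share of observations usable in that model
```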
Receipt of informal cash support is estimated to reduce a mother's odds of staying in a place not meant for housing by a statistically significant 42 percent. Receipt of formal support is estimated to reduce a mother's odds of receiving free food by 22 percent and her odds of being evicted by 57 percent. The fixed-effects coefficient for the number of days of father-child contact in the month prior to survey is negatively and statistically significantly associated with four indicators of hardship. Each day of father-child contact is estimated to diminish the odds that a mother will report experiencing that hardship. Specifically, each day is estimated to reduce the odds of having phone service turned off by 2 percent, the odds of having utilities turned off by 2 percent, the odds of being evicted by 5 percent, and the odds of staying in a place not meant for living by 6 percent. The results thus far suggest that fathers' contact with children is consistently and negatively associated with hardship in the mothers' household. These associations persist across different models and specifications. The estimates for informal and formal cash support are less robust. In-kind support, by contrast, is found to be positively associated with hardship, but the association may be attributable to reverse causation.

--- Reverse Causality

In order to establish a temporal ordering of events and to explicitly examine the possibility of reverse causality, cross-lagged models are estimated. The results of these models are presented in table 5. Each stub-column item represents a cross-lagged model, and each model controls for previously discussed covariates that do not vary over time. The table presents the results of estimates with five dependent variables; the first two columns present results of estimates for the number of hardships reported during the year prior to the 5-year follow-up survey, and the last two columns present results for the four measures of father involvement (also measured at the 5-year survey). The independent variables of interest (lagged variables) represent averages across the 1- and 3-year surveys. (Alternative analyses measured the lagged variables only at the 3-year survey and at both the 1- and 3-year surveys; the results were similar to ones that are presented in table 5 and that use an average across the 1- and 3-year surveys.) Each model includes a measure of the lagged dependent variable, which is always positive and highly statistically significant. Also presented are standardized coefficients that allow for comparison of effect sizes across models and across different measures. The first two columns examine the effect of the listed father-involvement measures, assessed at the prior waves, on hardship at the 5-year survey (the original direction of interest). The third and fourth columns present the effect of hardship at the prior waves on fathers' involvement at the 5-year survey (the reverse causal direction). The estimates identify no statistically significant association between the measure of lagged days of contact and hardship in the year prior to the 5-year interview. So too, no such association is observed between the measure of lagged hardship and days of contact in the month prior to the 5-year interview, but the standardized coefficient is 1.5 times larger for the reverse causation path.
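The cross-lagged setup just described can be illustrated roughly as two regressions on a one-row-per-mother file: later hardship on earlier (averaged) involvement plus lagged hardship, and later involvement on earlier hardship plus lagged involvement, with both sides standardized so the coefficients of the two paths are comparable. File and column names are hypothetical, and the article's time-invariant controls are omitted for brevity.

```python
# Minimal sketch (hypothetical names) of a pair of cross-lagged regressions with
# standardized (z-scored) variables so effect sizes are comparable across directions.
import pandas as pd
import statsmodels.formula.api as smf

wide = pd.read_csv("mother_wide.csv")   # hypothetical one-row-per-mother file
wide["contact_prior"] = wide[["contact_y1", "contact_y3"]].mean(axis=1)
wide["hardship_prior"] = wide[["hardship_y1", "hardship_y3"]].mean(axis=1)

for col in ["hardship_y5", "contact_y5", "contact_prior", "hardship_prior"]:
    wide["z_" + col] = (wide[col] - wide[col].mean()) / wide[col].std()

# Hypothesized direction: earlier contact predicting later hardship.
forward = smf.ols("z_hardship_y5 ~ z_contact_prior + z_hardship_prior", data=wide).fit()
# Reverse direction: earlier hardship predicting later contact.
reverse = smf.ols("z_contact_y5 ~ z_hardship_prior + z_contact_prior", data=wide).fit()

print(forward.params["z_contact_prior"])   # standardized "involvement -> hardship" path
print(reverse.params["z_hardship_prior"])  # standardized "hardship -> involvement" path
```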
Furthermore, in the panels for informal and formal cash support, lagged hardship is estimated to be negatively and statistically significantly associated with informal and formal child support payments made in the year prior to the 5-year interview. Neither lagged formal child support nor lagged informal support is found to be associated with hardship in the year prior to the 5-year interview. The standardized coefficients from these models, which estimate the reverse causal direction, are more than twice the size of those in the other (hypothesized) direction (-0.05 vs. -0.02). In contrast, results from the in-kind support panel suggest that such contributions reduce future hardship, although hardship is not found to predict future in-kind contributions. Taken together, these results suggest that causation is more likely to go from hardship to father involvement than from father involvement to hardship.

--- Summary and Conclusion

This article examines associations between different measures of fathers' financial and physical involvement with their nonresident children and material hardship in the mother's household. It takes advantage of longitudinal data that include multiple observations on each family over several waves of data. Estimates from cross-sectional pooled models suggest that fathers' formal cash support, informal cash support, and contact with their children each reduce the number of hardships reported by sampled mothers. These results persist in models that control for other types of involvement and for an extensive set of covariates. The estimated effects of father-child contact are more consistently robust in models with lagged dependent variables and individual fixed effects than are results involving cash supports. This finding suggests that fathers who are involved with their children may differ in unobserved ways from fathers who are not involved, and such differences may drive the results for formal and informal cash child support. The robustness of results for the association of father-child contact and hardship within both the models with lagged dependent variables and those with individual fixed effects suggests that this association is not driven by unobserved heterogeneity. These results are consistent with those of Garasky and Stewart (2007), who find that the effect of father visits is stronger than that of child support payments in reducing food insufficiency. An examination of the hypothesis that the associations might be due to reverse causation identifies stronger evidence that hardship decreases future father involvement than that father involvement decreases future hardship. As the preceding discussion notes, there are good reasons to believe that hardship diminishes levels of father involvement. If mothers have their phone service turned off, are evicted, or have to move to a shelter, fathers may find it difficult to visit and to contribute to their children (particularly through informal cash or in-kind contributions). The fixed-effects models looking at individual measures of hardship potentially point to this explanation. In those models, the strongest negative coefficients for days of father-child contact, informal cash support, and formal cash support are found for their association with eviction and staying in a place not meant for living. A related and more general explanation is that fathers' visits may decline when the mother experiences hardship. Arranging visits requires time and coordination on the mother's part.
Her ability to make such arrangements may be impeded by experience of hardship. Finally, this study finds that in-kind support is positively and statistically significantly associated with contemporaneous hardship in fixed-effects models and negatively associated with future hardship. Together, the in-kind results suggest a process that begins with reverse causation: the mother experiences hardship, and the father comes to her aid. This process ends with the originally hypothesized causal path: father involvement reduces future hardship for the mother and child. In short, the relations between nonresident father involvement and material hardship in Fragile Families are far more complex than previously imagined. For a given family, causation very likely goes both ways, albeit at different times. Future research should focus on heterogeneity within the population. The recent work of Jacob Cheadle, Paul Amato, and Valarie King (2010) uncovers a number of different patterns of involvement among nonresident fathers. It is an important example of the type of research that is necessary. This article has a number of limitations that point the way for further research in this area. First, as mentioned previously, getting the temporal ordering of events right is crucial when trying to understand how fathers' money and time spent with children affect economic well-being in the children's homes. Figuring out the appropriate lags and identifying data that measure these time periods are crucial tasks. Related to the issue of temporal ordering is the possibility that time-varying unobserved characteristics may drive the results. If unemployment increases, fathers' involvement and mothers' hardship may both be affected. Although both the fixed-effects and cross-lagged models include the unemployment rate at the time of the 5-year survey, it is not clear whether that is the appropriate time period or whether unemployment should be lagged. If unemployment should be lagged, it also is not clear how long the lag should be. Future research should also consider these questions. Second, this study cannot rule out the possibility of measurement error in the indicator of in-kind support. Mothers are asked about fathers' provision of food, clothes, toys, medicine, and other items. It is hard to know how the mother would classify a situation in which the father takes the child to the doctor or pays the mother's electric bill. It is very likely that when a mother anticipates financial hardship, she calls the father and he provides assistance. However, some of these types of contributions are not picked up in the measure of in-kind support. Future surveys should focus attention on improving assessment of fathers' noncash contributions to children. Third, it is possible that the amount of support provided by fathers is measured with greater error than the amount of fathers' contact with children. If this is the case, the estimates of the effects of support provided are less precise than the estimates of the effects of contact. This study also has a number of strengths. First, the study finds that other characteristics of mothers, such as mental health (proxied with grandmother's mental health), impulsivity, and access to social support, are very strongly related to hardship. Those findings confirm results from previous research and reaffirm the need to include such types of variables in studies of family economic circumstances.
Second, although the findings may provide more questions than answers, they clearly indicate that it is essential to look at these relations through a longitudinal lens. Results from prior research may be biased because they fail to consider the effects of fathers' involvement over time. Material hardship, such as food insufficiency, homelessness, utility shutoffs, and unmet medical needs, is known to be detrimental to children's health and well-being, over and above the effects of household income or poverty. In addition, these conditions are present in many households with incomes well above poverty thresholds. Children living in single-parent families are particularly at risk for hardship, especially those children born to unmarried parents. It is important to understand how nonresident fathers, through their payment of child support and time spent with children, can improve their children's lives. It is also important to understand how material hardship in the mother and child's household may disrupt father involvement. The results from this research point to the strong possibility that all types of father involvement are important for children, but the findings also underscore the difficulty of making causal statements in this type of research. Finally, these results highlight the gaps in knowledge and the need for further research in this area.
Children in single-parent families, particularly children born to unmarried parents, are at high risk for experiencing material hardship. Previous research based on cross-sectional data suggests that father involvement, especially visitation, diminishes hardship. This article uses longitudinal data to examine the associations between nonresident fathers' involvement with their children and material hardship in the children's households. Results suggest that fathers' formal and informal child support payments and contact with their children independently reduce the number of hardships in the mothers' households; however, only the impact of fathers' contact with children is robust in models that include lagged dependent variables or individual fixed effects. Furthermore, cross-lagged models suggest that material hardship decreases future father involvement, but future hardship is not diminished by father involvement (except in-kind contributions). These results point to the complexity of these associations and to the need for future research to focus on heterogeneity of effects within the population. Today, more than one in four U.S. children (26 percent) lives with only one parent (U.S. Census Bureau 2010). Moreover, half of all children born in the last several decades are predicted to spend some portion of their childhood in a single-parent family (Bumpass and Sweet 1989). Further, 41 percent of all births today are to unmarried mothers, and that figure is nearly 70 percent among black mothers (Hamilton, Martin, and Ventura 2009). Although some children in single-parent families live with their fathers, the overwhelming majority (84 percent) live with their mothers and have a living nonresident father (U.S. Census Bureau 2010). Research suggests that children growing up in single-parent families, particularly children born to unmarried parents, are much more likely to be poor and to experience more material hardships than those in two-parent families (Lerman 2002;DeNavas-Walt, Proctor, and Smith 2008). As a consequence, children in single-parent families also face disadvantage in a number of important domains: health, development, and educational attainment (McLanahan and Sandefur 1994;Magnuson and Votruba-Drzal 2009). Nonresident fathers' involvement in their children's lives, both through their financial contributions and their physical involvement, can ameliorate some of these disadvantages. Research suggests that child support payments from fathers increase income and reduce poverty in custodial mothers' households (Meyer and Hu 1999;Bartfeld 2000;Sorensen and Zibman 2000); however, other research suggests that payments from poor fathers are either too small or inconsistent to improve financial well-being in the mothers' household (Mincy and Sorensen 1998; Cancian and Meyer 2004b). Research also finds that child support
INTRODUCTION

Worldwide, employee diversity has become an issue of interest both in the workplace and in the market. Any company that wants to be more dynamic and profitable should take a view that has no borders and should embed employee diversity in the daily running of the business and in all of its activities (Childs & Losey, 2015). Globally, companies are trying to adjust themselves so that employees from different backgrounds are able to acquire the right skills and are supported to implement corporate strategies (Ramirez, 2016). Evidence of inclusion as a diversity strategy in the U.S. comes from a 2001 Human Resource Institute survey of a thousand privately and publicly owned organizations, which established that 56% offered diversity training on race, 68% on gender, 45% on ethnicity, 35% on age, 54% on disability, 57% on sexual orientation, and 24% on religion (Kelly, Ramirez & Brady, 2016). The performance index of these organizations rose by seven percent, with the private sector taking the bigger share of five percent. The public sector's lower performance index is attributed to its reluctance to integrate diversity into its management systems. According to Christian, Porter and Moffitt (2016), the minority workforce in the United States is expected to rise from 16.5% in 2000 to an estimated 25% in 2050. When the Review of Public Personnel Administration (ROPPA) was first published in 1980, White males accounted for 86% of all Senior Executive Service (SES) employees in the U.S. federal government. By 2008, that number had decreased to 65%. In addition to more racial/ethnic diversity, globalization has led to increases in cultural and linguistic diversity as well. About 18% of all households in the United States use a language other than English, and about 13% of U.S. residents were born in a different country (Rubaii-Barrett & Wise, 2018). In the wake of the apartheid system, equity policies were added to the constitution in 1998, which has made South Africa the leading country in Africa in embracing diversity. Although the country has advanced considerably as a democracy, employees still face discrimination and unequal treatment. The main indicators of persistent inequality in the system are the underrepresentation of black people in top positions in public institutions, the underrepresentation of women, and the near-total absence of people with disabilities (Nel, Gerber, Van Dyk, Haasbroek, Schultz, Sono & Werner, 2017). According to the Cross-Cultural Foundation of Uganda (2017), ethnic, political and religious diversity is posing a threat to diversity management in public organizations in Uganda. Diversity is manifested in, and perceived as a challenge to, workforce management; pluralism is reinforced by environmental changes, individual and community initiatives, and intermarriage. The dilemma is how diversity can be integrated into the management fabric of public organizations. There is also a need to lobby for implementation of the Equal Opportunities Act and for educational institutions, political parties and cultural institutions to champion diversity management. In regard to employee diversity, about half of the population of Nigeria is of working age, yet the employment rate is around twelve percent.
The interaction of foreign and local cultures arising from multinational operations, together with the impacts of globalization, has made employee diversity both a challenge and a resource. In Nigeria, 61% of FirstBank's employees are male and 39% are female, while at the managerial level 66% are male and 34% female, and at the board level 84% are male and 16% female. Currently, FirstBank has only nine women on its subsidiaries' boards (Waller, 2016). With the introduction of the new constitution, Kenya has put in place new provisions on demographic representation. The Kenyan Constitution of 2010 covers the provision of equal opportunities in various areas, such as the economic, cultural and social spheres (Namachanja & Okibo, 2015). There are conventions in Kenya calling for the inclusion of people from any societal context, including in public sector appointments. Under the old dispensation there were no policies that allowed some of these conventions and treaties to take effect. The effect was disproportionate representation in public institutions in terms of disability, gender and ethnicity. The lack of equality could be a result of various factors, such as practices, laws and policies that favored discrimination (Waiganjo et al., 2016). These inequalities were addressed by the 2010 Constitution under Articles 10 and 232 on national values and principles of governance. These articles emphasize a strong national identity; effective leadership and representation; equal opportunities and resources for all; sustainable development; good governance; and protection of vulnerable and marginalized individuals. It is therefore the responsibility of the management of public institutions to ensure that their staff members represent all citizens professionally and academically, and in terms of gender, age, disability, minority status, race, ethnicity, and so on. In Article 232 the constitution provides that the different communities in Kenya should be represented in the public service. Further, under Article 10 public organizations are required to ensure inclusiveness, protection of marginalized and vulnerable groups, and non-discrimination. The constitution is specific, in Articles 54-57, about the individuals qualified for special rights of application; they include older members of society, children, persons with disabilities, the youth, and marginalized and minority groups. To ensure representation in the public service, the constitution provides for the use of special measures and affirmative action to promote equal employment opportunities. This can be found in Article 27(4)(d), which emphasizes non-discrimination, while Article 27(6) provides that the government should take affirmative action to address the challenges faced by people who have faced discrimination at some point in their lives. Appointments of people with disabilities are addressed in Article 54(2), which provides that 5% of employment should go to these groups. Issues of youth employment are found in Article 55. Affirmative action on the employment of marginalized groups and minorities is emphasized in Article 56(c). The National Gender and Equality Commission was established by a 2011 Act; its roles include, inter alia, promoting equality and non-discrimination and mainstreaming gender issues, people with disabilities and marginalized individuals in national development. The Ethics Act provides for a business environment that supports diversity.
Public officers are required to discharge their duties professionally and to respect their colleagues in the public service. The 2015 Act focuses on values and principles. Public organizations are required to ensure that men and women, persons with disabilities and various ethnic groups form part of their workforce. According to KNBS (2015), the public sector has approximately 700,000 employees from various races and ethnic groups, including marginalized persons, people with disabilities and minorities. A PSC survey (2013/14) revealed that the constitutional requirement of the two-thirds gender rule has not been fully implemented. Regarding ethnic composition, PSC surveys have revealed that some communities are highly represented while others, especially those from marginalized regions, are underrepresented. Moreover, representation of people with disabilities is also low (1%). This study sought to establish the influence of workforce diversity on employee performance in constitutional commissions of Kenya.

--- Statement of the Problem

Based on the report provided by the Quality Assessment and Performance Improvement Strategy (2016), it was established that Kenya's constitutional commissions experience low levels of staff performance, which reduced employee satisfaction by 8% over the period 2015-2016. The unsatisfactory performance was attributed to employees' inability to meet deadlines and to poorly executed tasks resulting from the hiring of unqualified employees. To improve performance and productivity, the report recommended that the commissions overhaul their HR practices, mainly regarding training employees in new technology, empowering the youth, and eliminating discrimination, bias and favoritism in the work environment. An NCIC (2016) audit report established that the commissions displayed racial and ethnic inequality. The report established that, of the 42 tribes in the country, only 10% account for around 88% of the workforce, while twenty tribes combined do not constitute even 1%. This implies that public resources such as salaries benefit only a few communities, which greatly affects the growth and unity of the country and is a key cause of unfair delivery of services (NCIC, 2016). Various studies (Dessler, 2016; Bekele, 2015; Nyambegera, 2017; Barlow et al., 2016) have focused on various aspects of workforce diversity, and they acknowledge the issue of staff performance and the alarming rate of organizational nonperformance attributed to a diverse workforce. These studies were, however, conducted in different contexts and nations. This study sought to fill the research gap by establishing the influence of workforce diversity on employee performance in constitutional commissions of Kenya.

--- Objectives of the study

The general objective of this study was to establish the influence of workforce diversity on employee performance in constitutional commissions of Kenya.
The study was guided by the following specific objectives:
• To determine the influence of gender diversity on employee performance in constitutional commissions of Kenya
• To analyze the influence of age diversity on employee performance in constitutional commissions of Kenya

--- Research Hypotheses

The study was guided by the following hypotheses:
• HA1: Gender diversity has a positive and significant influence on employee performance in constitutional commissions of Kenya
• HA2: Age diversity has a positive and significant influence on employee performance in constitutional commissions of Kenya

--- LITERATURE REVIEW

--- Theoretical Framework

In its effort to streamline the interactions of a diverse workforce and harness its potential in organizations, the discipline of workforce diversity has borrowed a number of theories. This study was guided by social identification and categorization theory and similarity/attraction theory.

--- Social Identification and Categorization Theory

Social category diversity is the variation in the membership of a social category. It can be due to differences in members' gender, age or ethnicity (Jackson, 1992). Such differences within a group can lead to reduced cohesiveness or low levels of satisfaction among the members. If the differences are not managed, relationship conflict will arise, which has a negative impact on performance (Williams & O'Reilly, 1998; Tjosvold et al., 2004). According to this theory, people develop a personal identity based in part on the categories to which they belong (Hogg, Terry & White, 1995). Individuals tend to group themselves with other members who share the same behaviours, attitudes and attributes. Self-categorization is the term used to describe the process whereby an individual sees themselves as part of a group (Kulik & Bainbridge, 2006). The theory implies that when a perceiver encounters a new target, a comparison is made between the individual and the new target. People opt to find other groups when they discover that the group they targeted is different from what they perceived, and it is common for people to compare themselves with other groups (Ashforth & Humphrey, 1995). The main aspects used in making comparisons are age, race and gender, because these are the main characteristics that perceivers see and use to identify themselves, and they therefore apply the same characteristics in categorizing other people. The impact of self-categorization and social identity is that it leads to prejudice, conflict and stereotyping (Kulik & Bainbridge, 2006). The theory has been applied in predicting and understanding the way diversity affects people's attitudes and the way groups behave. In explaining the impact of diversity on individual outcomes, the main argument is that visibility and character affect feelings of identification (Tsui, Egan & O'Reilly, 1992). Within groups, identification depends mainly on individuals' demographics, and it is related to bias inside the group and conflict within the group. Through the expansion of theories explaining individuals' attitudes and traits, studies on diversity have established that decisions made on diversity are highly likely to influence the social activities of a group and of the institution as well (Jehn, Northcraft & Neal, 1999; Pelled, Eisenhardt & Xin, 1999).
Despite the fact that social categorization and social identity theories were created with the aim of explaining the impact of readily identifiable diversity, some scholars have applied these theories to explain the impact of personal and value-based diversity (Thomas, 1999). Employing individuals of different genders is important for an organization, because their interaction can create new knowledge and hence improve performance. The theory supports the gender diversity variable by linking social identification and categorization theory to employee performance in constitutional commissions of Kenya.

--- Similarity/Attraction Theory

The foundation of this theory is the notion that demographic homogeneity increases the chances that people will be attracted to and like each other. People from the same background may find that they have a lot in common compared with those from a different background, which makes it easy for them to work together and come up with products or solutions to problems. Similarity affirms one's values and ideas, while disagreement calls one's values and ideas into question and is unsettling. Studies have established that, in circumstances where people have the chance to interact with various individuals, they are highly likely to select those with whom they share the same characteristics (Berman et al., 2008; Cassel, 2001). Research based on the similarity/attraction concept has established that lack of similarity leads to less attraction among individuals, manifesting in reduced communication, distorted information, and communication errors (Cameron & Quinn, 2002). Research based on this theory has also established that high levels of diversity in organizations are likely to lead to faulty work procedures, and faulty work results in poor worker performance. Individuals of different age groups have diverse knowledge; therefore, incorporating employees of diverse ages promotes employee growth and improves their understanding of their tasks. The theory supports the age diversity variable by linking similarity/attraction theory to how employees perform in constitutional commissions of Kenya.

Figure 1: Conceptual Framework (independent variables: gender diversity and age diversity; dependent variable: employee performance)

In companies, gender-based inequality is reinforced and justified by stereotyping and bias that attribute positive characteristics to men, which leads to a higher preference for male employees (Leonard & Levine, 2016; Nkomo, 2016). This means that companies prefer male employees to female employees because of the perception that men perform better and are more able to manage their duties. Carrel (2016) stated that a significant amount of employee diversity is not effective if gender factors are not recognized and managed. The study also indicated that the greatest challenge to overcome is the perception that women and men are not equal. Kossek, Lobel, and Brown (2015) indicated that, worldwide, 80% of working-age men are employed, compared with only 54% of working-age women. Further, the position that women have been given in society is associated with caregiving and domestic duties.
Kochan, Bezrukova, Ely, Jackson, Joshi, Jehn, Leonard, Levine, and Thomas (2016) stated that it is very important for women to be provided with equal opportunities in a company because they are essential to improving the company's performance. Societal mandates eliminated policies that discriminated against some categories of workers, and companies that failed to implement fair employment opportunities faced increased costs. Because of discriminatory practices, organizations are forced to hire employees who are paid much more than the alternatives and who are not very productive (Barrington & Troke, 2017). Moreover, Wentling and Palma Rivas (2015) indicated that companies with a diversified workforce provide better services because they understand their clients better (Kundu, 2016).

Armstrong (2015) indicated that performance is determined by behaviour as well as outcome. The performer is the one who displays the behaviour and converts that behaviour into action. Behaviours are outcomes in their own right; they are the result of mental and physical effort directed towards a particular task. The performance of a worker is the combination of actual outcomes measured against the intended goal. Kenney (2016) stated that the way a staff member performs is determined by the standards set by the company. Employees of any company expect certain things from the company in return for their performance, and they are said to be good performers if they meet what the company expects of them and attain the company's goals and set standards. This implies that effective management and administration of staff members' tasks reflect the quality needed by the company and can be regarded as performance. Dessler (2017) stated that the performance of a staff member is behaviour that can be measured and that is relevant to achieving the goals of the company. Staff performance involves more than personal factors; it also includes external factors such as the office environment and motivation. Performance is measured mainly on four factors: quality, dependability, quantity and work knowledge (Mazin, 2015). As per Cole (2018), the performance of staff members is determined against the standards that the company sets. Performance refers to achieving specific tasks measured against standards that have already been determined in terms of cost, speed, accuracy and completeness. Apiah et al. (2015) indicated that the performance of staff members is determined during the review of work performance. Contextual performance consists of activities that do not contribute to the main agenda of the company but support the social and psychological environment through which the goals of the company are pursued (Lovell, 2017). Contextual performance is determined using other individual variables, including behaviours that establish the organization's social and psychological context and assist staff members in carrying out their main technical activities (Buchman et al., 2016).

--- METHODOLOGY

The research design adopted was a descriptive cross-sectional survey. Cooper and Schindler (2008) indicated that this type of study is conducted at a single point in time. This kind of study helps the researcher determine whether the variables are significantly related at a particular point in time (Mugenda & Mugenda, 2008).
The target population for this study was the staff of the 15 Kenyan constitutional commissions at their headquarters in Nairobi, comprising 623 managerial-level employees working at the commissions' head offices. Managerial-level employees were selected because they had the information needed for this study. The study used the Krejcie and Morgan (1970) formula to determine the sample size and a stratified random sampling technique to select the sample. A questionnaire was used as the main tool for gathering data. The study adopted a mixed-methods approach to data analysis in which both descriptive and inferential analyses were performed. Both quantitative and qualitative data were collected. Quantitative data were analysed using descriptive statistical techniques, and content analysis was used to analyse qualitative data. Before the data were analysed, coding, cleaning and grouping of the data were done according to the variables. Pearson's r correlation was used to measure the strength and direction of the linear relationships between variables. Multiple regression models were fitted to the data in order to determine how the predictor variables affect the response variable; specifically, a multiple linear regression model was used to measure the influence of workforce diversity on employee performance in constitutional commissions of Kenya and to assess any causal relationship. The overall model was:

Y = β0 + β1X1 + β2X2 + ε

where Y = employee performance, X1 = gender diversity, X2 = age diversity, β0 is the constant, β1 and β2 are the regression (beta) coefficients of the independent variables to be estimated, and ε is the error term.

--- RESULTS AND DISCUSSIONS

The study selected a sample of 244 managerial-level employees working at the constitutional commissions' head offices. All selected respondents were issued with questionnaires for data collection, but the researcher received back only 217 questionnaires, a response rate of 88.9%. Since the response rate was above 70%, it was considered excellent and was used for further analysis and reporting.

--- Descriptive Results

In this section the study presents findings on Likert-scale questions where respondents were asked to indicate their level of agreement or disagreement with various statements relating to the influence of workforce diversity on employee performance.

--- Gender Diversity

This study investigated whether there is a relationship between gender diversity and employee performance. The findings presented in Table 1 showed that the majority of respondents agreed with various statements relating to gender diversity. Regarding employment, 80.2% of respondents were in agreement that the organization employs both genders (M=3.982); 80.6% agreed that when it comes to employee treatment, all employees are treated fairly irrespective of their gender (M=3.889); and 75.6% that both male and female employees are given the opportunity to show their potential (M=3.777). On training, 75.1% of respondents agreed that both genders take part in decision-making (M=3.948); 77.4% agreed that the company encourages career development that involves all employees (M=3.738); and 77.4% that programs for training and development are created in a way that fulfills the needs of both genders (M=3.698).
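Before continuing with the descriptive results, the sketch below illustrates the sampling and estimation approach described in the methodology above: the Krejcie and Morgan (1970) sample-size formula with its conventional inputs, and the two-predictor regression model together with Pearson correlations, fitted on hypothetical questionnaire scores. With the conventional inputs the formula gives roughly 238 for a population of 623, so the study's reported sample of 244 presumably reflects slightly different inputs or rounding.

```python
# Minimal sketch (hypothetical data and names): Krejcie & Morgan (1970) sample size
# for a finite population, plus the two-predictor model Y = b0 + b1*X1 + b2*X2 + e.
import pandas as pd
import statsmodels.formula.api as smf

def krejcie_morgan(N, chi2=3.841, p=0.5, d=0.05):
    """Required sample size for population N using the conventional inputs."""
    return (chi2 * N * p * (1 - p)) / (d ** 2 * (N - 1) + chi2 * p * (1 - p))

print(round(krejcie_morgan(623)))   # about 238 with these inputs; the article reports 244

survey = pd.read_csv("commission_survey.csv")   # hypothetical composite scores per respondent
model = smf.ols("employee_performance ~ gender_diversity + age_diversity", data=survey).fit()
print(model.params)                 # b0 (Intercept), b1 (gender diversity), b2 (age diversity)

corr = survey[["employee_performance", "gender_diversity", "age_diversity"]].corr(method="pearson")
print(corr)                         # Pearson r matrix referred to in the methodology
```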
With regard to promotion, 78.8% of respondents agreed that the organization provides female employees with opportunities to grow (M=3.915); 72.8% agreed that both genders have an equal chance of being promoted (M=3.863); and 72.8% that promotion is a fair process in the organization (M=3.836). On gender evaluation, the study found that 80.6% of respondents agreed that the organization has an employee evaluation system used to evaluate both genders (M=3.714); 75.6% agreed that performance evaluation of both genders is reviewed against set performance standards (M=3.751); and 80.6% that the organization provides feedback after an evaluation process (M=3.856). The study further established, on fair treatment, that 75.1% of respondents agreed that the organization's rules and regulations apply to employees of both genders (M=3.915); 77.4% agreed that each employee is recognized and rewarded for their accomplishments (M=3.699); and 77.4% that the organization treats employees as equals (M=3.678). Respondents also indicated other ways in which gender diversity affects employee performance in the constitutional commissions of Kenya. They explained that when there is gender equality in the organization and equal opportunities for promotion irrespective of gender, employees are motivated to put more effort into their work. Diversification in organizations also allows the provision of better services because employees understand their clients better. The advantage of gender diversity is contingent on factors such as the company's strategy, culture, environment and people. The study findings concurred with Naqvi, Ishtiaq, Kanwal, Butt and Nawaz (2016) that an increase in gender diversity in a group prompts inventiveness and development; they added that the decision-making process improves and the final product is better, boosting the performance of the group. The findings also agree with Hoogendoorn, Oosterbeek and Praag (2013), who established that groups whose members were equally mixed in terms of gender performed better in sales and profitability than groups dominated by males.

--- Gender Evaluation (Table 1 extract: % strongly disagree, disagree, neutral, agree, strongly agree; mean; standard deviation)
The organization has an employee evaluation system used to evaluate both genders: 5.5, 5.5, 7.4, 74.2, 7.4; M=3.714; SD=1.251
Performance evaluation of both genders is reviewed against set performance standards: 4.6, 7.4, 4.6, 75.6, 7.8; M=3.751; SD=1.277
The organization provides feedback after an evaluation process: 2.8, 6.0, 2.8, 80.6, 7.8; M=3.856; SD=1.384

--- Fair Treatment (Table 1 extract, same columns)
The organization treats employees as equals: 6.0, 8.8, 2.8, 77.4, 5.5; M=3.678; SD=1.325
Each employee is recognized and rewarded for their accomplishments: 1.8, 5.1, 14.7, 77.4, 0.9; M=3.699; SD=1.331
In the organization the rules and regulations apply to employees of both genders: 2.8, 6.0, 2.8, 75.1, 13.8; M=3.915; SD=1.267

--- Age Diversity This study investigated whether there is a relationship between age diversity and employee performance in the constitutional commissions of Kenya. From the results presented in Table 2, respondents agreed with various statements relating to age diversity. Regarding Generation X, 85.3% of respondents agreed that baby boomers work to achieve organizational goals (M=3.994); 85.3% agreed that the organization employs individuals from Generation X (M=3.961); and 87.6% that Generation X work independently with minimal supervision (M=3.856).
On Generation Y, 78.8% of respondents agreed that Generation Y are highly focused on developing their careers (M=3.994); 88.9% that the organization employs individuals from Generation Y (M=3.955); and 82.9% that Generation Y prefer working as a team to achieve organizational goals (M=3.836). Regarding Generation Z, 85.3% of respondents agreed that Generation Z collaborate with other organization members to achieve organizational goals (M=3.988); 94.5% agreed that Generation Z are motivated by social rewards, mentorship and constant feedback (M=3.961); and 83.4% that the organization employs individuals from Generation Z (M=3.830). On an equitable workplace, 78.8% of respondents agreed that training in the organization is inclusive of diverse ages (M=3.935); 78.3% that the organization gives employees of different age groups equal opportunities (M=3.803); and 72.8% that equal opportunity brings together employees with diverse exposure (M=3.744). On inclusion of ages, 85.3% of respondents agreed that promotion in the organization is inclusive of diverse ages (M=3.994); 85.3% that a gender-diverse team produces higher quality decisions than a homogeneous team (M=3.961); and 87.6% that a gender-diverse team enhances the organization's overall creativity and innovation (M=3.889). Respondents gave other ways in which age diversity affects employee performance in the constitutional commissions of Kenya. They indicated that older employees have more experience and expertise and therefore assist the younger generation, which in turn enhances performance. Others were of the opinion that age differences make it challenging to work with others because of differing interests and preferred ways of performing tasks. The findings of the study disagree with Kunze, Boehm and Bruch (2017), who found that age diversity appears to be associated with the emergence of an age-discrimination climate in organizations, which adversely affects company performance through the mediation of affective commitment. The findings also disagree with Joseph (2018), who found that the age groups of workers and their performance were negatively correlated. --- Employee Performance This study investigated employee performance in the constitutional commissions of Kenya. The findings presented in Table 3 showed that 74.2% of respondents agreed that over the past five years the performance of employees had improved (M=4.021); 69.6% that age diversity in organizations has improved employee performance (M=3.988); 73.7% that highly performing workers get promoted more easily than lower performers (M=3.902); 73.7% that education diversity in the organization has helped to improve performance (M=3.902); 77.4% that social diversity has improved levels of employee performance in their organization (M=3.836); 69.1% that the company rewards employees for their good performance (M=3.810); and 70.5% that gender diversity in their organization has resulted in improved performance among employees (M=3.738). The study findings concurred with Sabwami (2018) that low performance and failure to accomplish set objectives may be experienced as frustrating or even as a personal failure, and that highly performing workers get promoted more easily than lower performers.
Table 3 (extract: % strongly disagree, disagree, neutral, agree, strongly agree; mean; standard deviation)
The company rewards employees for their good performance: 4.6, 5.1, 7.8, 69.1, 13.4; M=3.810; SD=1.142
Gender diversity in our organization has resulted in improved performance among employees: 6.5, 3.2, 9.7, 70.5, 10.1; M=3.738; SD=1.168

--- Inferential Results The relationships between the study variables were determined by computing inferential statistics; the study computed correlation and regression analyses. --- Correlation Results Pearson's correlation (r) was used to measure the strength and direction of the linear relationships between variables. The association was considered small if 0.1 ≤ |r| ≤ 0.29, medium if 0.3 ≤ |r| ≤ 0.49, and strong if |r| ≥ 0.5. The findings presented in Table 4 showed that gender diversity had a strong, positive and significant relationship with the performance of employees in the constitutional commissions of Kenya (r=0.793, p=0.000); age diversity was also found to have a strong, positive and significant relationship with employee performance (r=0.743, p=0.000). Based on these findings, both variables (gender diversity and age diversity) had a significant relationship with the performance of employees in the constitutional commissions of Kenya. --- Multiple Regression Analysis Multiple regression models were fitted to the data to determine how the predictor variables affect the response variable. The study used a multiple regression model to measure the influence of workforce diversity on employee performance in the constitutional commissions of Kenya; it was also used to test research hypotheses 1-2. --- Model Summary A model summary shows the amount of variation in the dependent variable that can be explained by changes in the independent variables. --- Analysis of Variance Analysis of variance is used to test the significance of the model. The significance of both the unmoderated and the moderated regression models was tested at the 5% level of significance. For the unmoderated regression model (model 1), the significance of the model was 0.000, which is less than the selected level of significance of 0.05; this suggests that the model was significant. The findings further show that the F-calculated value (21.515) was greater than the F-critical value (F(5,211)=2.257), suggesting that the variables age diversity and gender diversity can be used to predict employee performance in the constitutional commissions of Kenya. --- Beta Coefficients of the Study Variables The beta values obtained were used to fit the regression equations, both moderated and unmoderated. For the fitted regression equations, Y = employee performance, X1 = gender diversity and X2 = age diversity. The findings were also used to test the hypotheses of the study. From the findings of the first model (model 1), the following regression equation was fitted: Y = 0.920 + 0.388X1 + 0.784X2. From this equation, it can be observed that when the other variables (age diversity, gender diversity) are held at a constant of zero, employee performance in the constitutional commissions of Kenya will be at a constant value of 0.920. The first hypothesis of the study was: HA1: Gender diversity positively and significantly affects the performance of staff members in Kenyan constitutional commissions. The findings show that gender diversity has a significant influence on employee performance in the constitutional commissions of Kenya (p=0.029<0.05).
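To make the regression analysis above concrete, the sketch below first fits a two-predictor model of the form used in the study on a small hypothetical dataset with statsmodels, and then applies the coefficients reported for model 1 (0.920, 0.388, 0.784) to hypothetical composite scores. The dataframe, column names and input values are illustrative assumptions, not the study's data or code.

```python
# Illustrative sketch only; the data below are hypothetical composite scores.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "performance":      [3.8, 4.1, 3.2, 4.5, 3.9, 2.8, 3.5, 4.0],
    "gender_diversity": [3.6, 4.0, 3.0, 4.4, 3.7, 2.9, 3.3, 3.9],
    "age_diversity":    [3.9, 4.2, 3.1, 4.6, 3.8, 3.0, 3.4, 4.1],
})

# Fit Y = b0 + b1*X1 + b2*X2 + e and inspect coefficients and p-values,
# mirroring the hypothesis tests reported for HA1 and HA2.
fitted = smf.ols("performance ~ gender_diversity + age_diversity", data=df).fit()
print(fitted.params)
print(fitted.pvalues)

# Applying the coefficients reported for model 1 in the text:
def predicted_performance(x1: float, x2: float) -> float:
    return 0.920 + 0.388 * x1 + 0.784 * x2

print(predicted_performance(3.0, 3.0))  # hypothetical diversity scores
```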
The findings also show that the influence of gender diversity on employee performance is positive (β=0.388). These findings suggest acceptance of the alternative hypothesis HA1 and the conclusion that gender diversity positively and significantly affects the performance of staff members in Kenyan constitutional commissions. The study findings agree with Hoogendoorn, Oosterbeek and Praag (2013) that groups whose members were equally mixed in terms of gender performed better in sales and profitability than groups dominated by males. The second hypothesis was: HA2: Age diversity positively and significantly affects the performance of staff members in Kenyan constitutional commissions. The findings show that age diversity has a significant influence on employee performance in the constitutional commissions of Kenya (p=0.007<0.05). The findings also show that age diversity positively affects employee performance (β=0.784). These findings suggest acceptance of the alternative hypothesis HA2 and the conclusion that age diversity positively and significantly affects the performance of staff members in Kenyan constitutional commissions. The study findings agree with Backes-Gellner and Veen (2017), who examined whether age diversity within an organization's workforce influences organizational efficiency and found that increasing age diversity positively affects efficiency if, and only if, the organization engages in innovative rather than routine tasks. --- CONCLUSIONS AND RECOMMENDATIONS The study concluded that gender diversity positively and significantly affects the performance of staff members in Kenyan constitutional commissions. The study revealed that gender diversity has a significant influence on employee performance in the constitutional commissions, and the influence was found to be positive. Gender diversity had a strong positive correlation with performance. The study also concluded that age diversity positively and significantly affects the performance of staff members in Kenyan constitutional commissions. This conclusion was drawn from the findings that age diversity has a significant influence on employee performance, that the influence is positive, and that age diversity had a strong positive correlation with performance. There is a need to ensure gender diversity in the organization. When employing staff, it is important to ensure that they are diverse, as this encourages improved performance. Equal promotion of employees is important because it motivates employees to be dedicated to their work. It is also important for the organization to ensure age diversity among employees and to provide a favourable environment and working conditions for employees depending on their age. With age comes experience, while younger individuals are more innovative and adapt quickly to new technology; depending on its objectives, the organization should select employees of an appropriate age to suit the positions it has created. The constitutional commissions of Kenya should also ensure ethnic diversity in the organization, as this will increase employee performance. The organization should increase diversity and use work groups so as to fully utilize their participation and synergy in order to boost employee and organizational performance.
Policy makers in the constitutional commissions should set a strong example for diversity in the workplace by having policies that make management accountable for promoting inclusion. Managers should be hired on the basis of their accomplishments, showing staff that gender, age and ethnic background have nothing to do with succeeding at the organization. The study also recommended that policy makers establish a diversity policy that requires the board of directors to set measurable objectives for achieving greater gender diversity and to assess annually both those objectives and the progress made in achieving them.
The study's objective was to establish the influence of workforce diversity on employee performance in the constitutional commissions of Kenya. Specifically, the study sought to determine the influence of gender diversity and age diversity on employee performance in the constitutional commissions of Kenya. The study was guided by social identification and categorization theory and similarity/attraction theory. The study adopted a descriptive cross-sectional survey design. The target population was the 15 Kenyan constitutional commissions, and the study population comprised the 623 managerial-level employees working at their headquarters. A sample of 244 members was selected using a stratified random sampling method. A questionnaire was used as the data collection tool and was administered by the researcher to the entire selected sample. A pilot study was conducted to enable validation and pretesting. The data gathered were analysed using SPSS version 23, employing both descriptive and inferential statistics. Descriptive statistics were used to analyse the quantitative data, and the findings were presented in tables, figures, graphs and prose form. The study found that gender diversity positively and significantly affects the performance of staff members in Kenyan constitutional commissions, and that age diversity positively and significantly affects the performance of staff members in Kenyan constitutional commissions. Therefore, when employing staff, it is important to ensure that they are diverse, as this encourages improved performance. Equal promotion of employees is important because it motivates employees to be dedicated to their work. It is also important for the organization to provide a favourable environment and working conditions for employees depending on their age. The organization should increase diversity and use work groups so as to fully utilize their participation and synergy in order to boost employee and organizational performance. The organization should also ensure education diversity among its employees, both managers and junior staff.
INTRODUCTION Maternal and childhood mortality remain key health challenges in several low-income and middle-income countries (LMICs). In 2019, approximately 5.2 million children died before their fifth birthday; more than 80% of these deaths occurred in sub-Saharan Africa and Central and South Asia. 1 Sub-Saharan Africa and South Asia bore 86% of the estimated global burden of maternal mortality in 2017. 2 Sub-Saharan Africa's maternal mortality ratio of 546 per 100 000 live births is estimated to be the highest globally for any region. 3 In Maternal, Newborn and Child Health (MNCH), a vulnerable pregnant woman was defined as a woman who is threatened by physical, psychological, cognitive and/or social risk factors in combination with a lack of adequate support and/or adequate coping skills. 4 Resilience, on the other hand, has been described as the capability of the public health and healthcare systems, communities and individuals to prevent, protect against, quickly respond to and recover from health emergencies, particularly those whose scale, timing or unpredictability threatens to overwhelm routine capabilities. 5 Thus, in MNCH, vulnerability and resilience are two divergent terms that tend to complement each other by acting as risk or protective factors, respectively, both at the individual level and at the level of the health system. Pregnancy-related morbidity and mortality in LMICs are often preventable or treatable, but poverty, low maternal educational attainment and place of residence, among several other underlying factors, increase women's vulnerability to adverse maternal and child health outcomes. [6][7][8][9][10][11] Although multiple studies have examined these vulnerabilities, more attention needs to be paid to how they are patterned by gender to influence MNCH outcomes. Similarly, maternal resilience, evidenced in women's ability to sustain life satisfaction, self-esteem and purpose amidst the emotional, physical and financial difficulties associated with mothering and caregiving, has been studied extensively. 6 8 10-12 However, there has been limited focus on how gender roles and norms may shape these factors. 12 Institutionalised power and the social, political and economic advantages and disadvantages afforded to different genders influence power relations. Gender also intersects with other social determinants of health, including social class, race and ethnicity, 13 determines the hierarchy of social structure and power dynamics, and influences health outcomes. Health inequalities conditioned by gender are likely to put vulnerable populations at a further disadvantage. 14 Today, there is an increasing need for a critical and systematic assessment of the effect of gender norms and gender inequality on the constraints faced by, and opportunities available to, vulnerable populations regarding MNCH. Theoretical and conceptual advances in global health have highlighted the importance of gender expectations, roles and relations in health promotion interventions. [15][16][17] For example, different gender expectations may result in greater vulnerability for mothers and children. Promising gender-sensitive practices in health have also emerged to address the HIV/AIDS epidemic and influence maternal and child health outcomes.
[18][19][20][21] The Sustainable Development Goals (SDGs) aim to reduce maternal deaths to fewer than 70 per 100 000 live births by 2030 (SDG 3.1), the neonatal mortality rate to at least as low as 12 per 1000 live births, and the under-5 mortality rate to at least as low as 25 per 1000 live births (SDG 3.2). These maternal and child health targets may be impossible to achieve if the critical factors shaping maternal and child health vulnerability and resilience are not well articulated. The SDG agenda must treat gender as a cross-cutting aspect, integrated within design, resource allocation, implementation, measurement and evaluation. Specifically, understanding how health systems respond to the critical factors that shape the health and well-being of mothers, children and newborns is necessary. 22 This scoping review illuminates how gender differences and relations provide important insight into how power structures and roles aggravate vulnerability or strengthen resilience in maternal and child health in LMICs. It provides new evidence on gendered dynamics in MNCH research that must be considered as we strive to programme interventions aimed at achieving the SDG targets on maternal and child health. --- METHODS We conducted a scoping review in accordance with Arksey and O'Malley's framework to examine the gendered dimension of vulnerability and resilience in MNCH in LMICs. 23 24 A scoping review was appropriate for a broad and comprehensive analysis without consideration of publication quality. The review followed five stages: (1) identifying the research question; (2) identifying the relevant studies; (3) selecting the studies; (4) charting data and (5) collating, summarising and reporting results. --- Identification of relevant peer-reviewed literature This gender analysis was based on a larger scoping review aimed at developing a framework for vulnerability and resilience in MNCH in LMICs. The initial pool of literature was retrieved from major databases (ie, Medline, Embase, Scopus and Web of Science) based on a comprehensive and exhaustive search strategy that included appropriate keywords (see online supplemental appendix S1). This was supplemented by a grey literature search. The initial search was conducted on 15 January 2021 and updated on 1 March 2021. The search strategy was structured around three blocks: (1) population (ie, MNCH, health outcomes, healthcare utilisation and social capital), (2) exposure (ie, vulnerability, resilience and high risk) and (3) setting (ie, low-income and middle-income settings). Critical keywords and thesaurus heading terms were initially tailored to the Medline and Embase searches and then adapted to other sources as necessary. Online supplemental appendix S1 shows the full search strategies for Medline and Embase. We also reviewed reports and technical papers from multilateral and bilateral organisations, foundations, and international and local non-governmental organisations, such as the Bill & Melinda Gates Foundation, Jhpiego, Clinton Health Access Initiative, International Centre for Research on Women, Women's Health and Action Research Centre, Gender Watch and pharmacies. To gather as much evidence as possible, including high-quality literature regarding vulnerable populations in MNCH beyond the traditional sources, we incorporated research from the grey literature into this scoping review.
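The exact search strings are given in online supplemental appendix S1; as a hedged illustration of how a three-block strategy of this kind is typically assembled (OR within blocks, AND across blocks), consider the sketch below, whose keywords are placeholders rather than the authors' terms.

```python
# Illustrative only: building a block-structured boolean query.
population = ["maternal health", "newborn health", "child health",
              "healthcare utilisation", "social capital"]
exposure = ["vulnerab*", "resilien*", "high-risk"]
setting = ["low-income countr*", "middle-income countr*", "LMIC*"]

def or_block(terms):
    # Quote each term and join with OR inside parentheses.
    return "(" + " OR ".join(f'"{t}"' for t in terms) + ")"

query = " AND ".join(or_block(block) for block in (population, exposure, setting))
print(query)
```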
We supplemented the database search with a bibliography search of key articles but found no relevant articles beyond what had already been extracted. We did not apply language restrictions in our search parameters and thus engaged translators to translate non-English publications. --- Study selection We developed and validated a high-performance machine learning classifier (based on bidirectional encoder representations from transformers, BERT) to identify relevant studies focusing on vulnerability and resilience in MNCH from the initial pool of search results. Previous studies have reported the high predictive ability of machine learning models in title and abstract screening. [25][26][27] To train the machine learning algorithm, we randomly selected, screened and annotated the titles and abstracts of 500 records from the database. The performance of the model was evaluated against our classification based on precision, recall, specificity and accuracy scores. Subsequently, we applied the algorithm to the abstracts and titles of the remaining publications to generate predictions on whether to include or exclude them. Covidence, an online systematic review software, was used to manage the search outputs and the screening of eligible studies (https://www.covidence.org/). Two researchers screened the manuscripts retained from the machine-learning predictions using Covidence, and a third researcher reviewed and resolved all conflicts. Titles and abstracts were screened before a full-text review for possible inclusion in the study. We included studies based on four key criteria. First, studies had to focus on women (pregnant/lactating women and teenage mothers) and/or children (male and female) under 5 years. Second, they had to focus on LMICs. Third, we included studies that focused on vulnerability, frailty or high risk and resilience in LMICs. Lastly, we included all study types, including peer-reviewed publications, programmatic reports and conference abstracts. There were no language restrictions or exclusions based on the year of publication. --- Charting data To provide a holistic gender analysis, we adapted a conceptual framework for gender analysis in health systems research by Morgan et al. 28 The framework unifies several other frameworks focusing on health, health systems and development. 28 More importantly, the framework's unique focus on how power is constituted and negotiated makes it a valuable resource for understanding gender in terms of power relations and as a source of disparity in health systems. The framework had five focal areas, namely access to resources, division of labour, social norms, rules and decision-making, power negotiation, and structure/environment. All the articles that met the inclusion criteria for this study were further screened against these key gender dimensions. Relevant data were extracted into a data collection template developed on AirTable. Articles were screened and extracted if they fit any of the dimensions of gender and power identified in the framework. We extracted the publication metadata (ie, name of the first author, year of publication, publication title and publication country) and additional data (eg, publication type, research design and methods, study context, indices of vulnerability and resilience, and key findings from the research). Categories for the focal areas were not mutually exclusive, which means that a study could belong to, and be counted in, more than one category where evidence of such contributions existed.
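The study used a BERT-based classifier trained on 500 manually annotated records; purely as an illustration of the same screen-and-evaluate workflow, the sketch below substitutes a much simpler TF-IDF and logistic-regression model and shows how precision, recall, specificity and accuracy would be computed against manual labels. All records and labels here are hypothetical.

```python
# Simplified stand-in for the title/abstract screening classifier described above.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, confusion_matrix, precision_score, recall_score

# Hypothetical annotated titles (1 = relevant to vulnerability/resilience in MNCH).
titles = [
    "maternal vulnerability and household poverty in rural Kenya",
    "resilience of pregnant women after obstetric complications",
    "adult cardiology outcomes in high-income settings",
    "child health, social capital and caregiver resilience",
    "orthopaedic surgical techniques in elective knee replacement",
    "neonatal mortality and maternal education in LMICs",
]
labels = [1, 1, 0, 1, 0, 1]

features = TfidfVectorizer().fit_transform(titles)
classifier = LogisticRegression().fit(features, labels)
predictions = classifier.predict(features)  # in practice, evaluate on held-out annotated records

tn, fp, fn, tp = confusion_matrix(labels, predictions).ravel()
print("precision:", precision_score(labels, predictions))
print("recall:", recall_score(labels, predictions))
print("specificity:", tn / (tn + fp))
print("accuracy:", accuracy_score(labels, predictions))
```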
During the data analysis, we grouped the articles by their specific focus on the different dimensions of gender and power relations. Table 1 presents the details of the classifications. --- Collating, synthesising and reporting the results This review describes, first, the characteristics of the studies that met the inclusion criteria and, second, the findings. We report summary statistics describing data collection methods, vulnerability/resilience context (eg, maternal or child/newborn health) and gender dimension (eg, access to resources, division of labour, social norms, rules and decision making, power negotiation and structure/environment). We did not assess the quality or risk of bias of the included articles, as the objective of this review was to scope and describe the breadth of gender dimensions in vulnerability or resilience in MNCH in LMICs. This review followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR) statement guidelines to enhance transparency in reporting scoping reviews. 29 --- Patient and public involvement statement Patients were not involved in the conduct of this study. --- RESULTS We identified 76 656 records through the database search (figure 1). We excluded 57 duplicate records and 73 638 abstracts that were flagged as potentially irrelevant to this study. Thereafter, we screened the remaining titles and abstracts (n=2871) and considered only 96 studies relevant, selecting them for full-text review. Of these, 79 studies did not meet the inclusion criteria and were excluded because of incorrect population, outcomes, setting or study design, or the lack of a gender focus in the analysis. Subsequently, 17 studies met our inclusion criteria for a promising gender-sensitive analysis of vulnerability and resilience in MNCH in LMICs. Online supplemental material S2 provides the details of these studies, including the year of publication, country of publication, context of the study, study design and key findings related to gender as regards vulnerability and resilience in MNCH. --- Study characteristics A total of 17 studies met the inclusion criteria for a gender analysis. Of these, 13 focused on maternal health and four on child health (figure 2). Eleven studies focused on sub-Saharan African countries (figure 2), of which three were from Kenya. Resilience was the more dominant focus: eight studies on maternal health and two on child health (online supplemental material S2). Figure 3 presents the distribution of gender themes across maternal and child health contexts. Access to resources and decision making was the most common focus of the identified studies on both maternal (five) and child (three) health. Three studies examined power negotiation in relation to maternal (two) and child (one) health. Two studies also highlighted partner emotional or mental support in maternal health, and another two the decision-making ability of mothers. Only a few studies examined how social norms and the division of labour intersect with maternal health. --- Access to resources Access to resources emerged as the dominant gender-focused theme (8 of 17 studies). [30][31][32][33][34][35][36][37] In most studies, pregnant women or mothers lived in households characterised by low socioeconomic status and had lower levels of education, all of which are potentially related to poor access to maternal and child healthcare services.
Among pregnant women living in a community of metropolitan Santiago, Chile, 31 low socioeconomic status was found to be related to deteriorating reproductive, maternal and neonatal health. Warren et al supported this finding, reporting that most women affected by fistula had secondary education as their highest level of education and a very low monthly income. 36 Most primary caretakers, including mothers, were not income earners and often relied heavily on their spouses or other household members for money. Access to resources also emerged as an important barrier to child healthcare. For example, Johnson et al demonstrated that classification as orphans and vulnerable children (OVC) directly and indirectly influenced the risk of childhood morbidity (eg, diarrhoea, fever and acute respiratory infection). 30 This is because OVCs were more likely to be found in households headed by adults (40 years old), where the mother/caregiver had inadequate access to socioeconomic resources, such as inadequate education, and in urban areas. Many women were often in precarious positions, relying on their spouses for financial support to access healthcare services even during emergencies. A study in Kenya reported that, irrespective of marital status, having male support (eg, from a husband, brother or uncle), particularly financial support and help in securing transport to hospitals for care, was critical. 36 Some women failed to attend clinics because of a lack of support from their husbands, and most husbands did not provide their wives with adequate funds for their needs during delivery. 32 The lack of rapid access to money was another important contributing factor to a child's deteriorating condition; it influenced the initiation of a treatment-seeking action, including where and by whom (in all households) the action was performed. For instance, women in a study made many references to 'waiting to talk to my husband', 'waiting to be sent money from my husband' and waiting for 'his permission to pursue an action.' 37

Table 1 Dimensions of gender and power used to classify the included studies
Access to resources: To what extent do women and men have the same access to education, information, income, employment and other resources that contribute to improvement in maternal, newborn and child health? Do women have sufficient means to make decisions and access healthcare services without financial restrictions?
Division of labour (Who does what?), covering division of labour within and beyond the household and everyday practices: How do women's social roles, such as childbearing, childcare and infant feeding, affect their economic opportunities and access to health facilities?
Social norms (How are values defined?), covering social norms, ideologies, beliefs and perceptions: How does stigma inhibit women's access to maternal healthcare services, and are these services available to unmarried women and teenage mothers? How do cultural norms about motherhood put women at risk of adverse health?
Agency and decision making (Who decides?), covering agency and decision making (both formal and informal): To what extent are women able to advocate for their health needs and contribute to household decisions that shape their and their children's health?
Power negotiation (How is power enacted, negotiated or challenged?), covering critical consciousness, acknowledgement/lack of acknowledgement, agency/apathy, interests, historical and lived experiences, resistance or violence: How is power enacted and negotiated in relation to maternal, newborn and child health, and how do power dynamics or women's experience of intimate partner violence contribute to adverse health for women, children and their families?

Such gender-reinforced inequality in access to resources could subsequently affect the healthcare-seeking behaviour of mothers and, ultimately, affect childcare, especially in the context of costly maternal healthcare services. --- Division of labour Only one study examined the dimension of the division of labour and how it intersects with maternal and child health. 38 This study included 36 Ugandan women who were admitted with an obstetric near-miss and revealed that women's need to balance economic activities and reproduction often increased their vulnerability and limited their ability to recover from obstetric complications. In such circumstances, social networks or social capital were generally perceived as an essential component of women's resilience because they provide women with financial, material and emotional assistance, including assistance with household responsibilities such as childcare. 38 --- Social norms One study examined the dimension of social norms in maternal health. 39 It explored how values related to motherhood are defined and how this definition shapes or inhibits women's access to maternal healthcare services or places women at risk of adverse health. An in-depth case study of a woman from Burkina Faso suggested that structural impediments, including motherhood and childbearing, limit individual resilience. 39 This case study noted that the high level of social pressure on women to bear children as soon as possible, even when they are not physically or mentally capable, and the stigma associated with childlessness exacerbate maternal mortality and morbidity risks. 39 These conditions contributed to the death of the woman in the case study, who could not be saved from childbirth-related complications despite having access to skilled birth attendance and emergency obstetric care. 39 --- Agency and decision making Two studies underscored the ability of women and mothers to make informed choices and contribute to decisions related to maternal and child healthcare. 40 41 For example, Prates et al showed women's inability to adequately plan the timing of childbirth because of poor socioeconomic status and inequalities in gender power, all of which contribute to multiparity. 41 More importantly, the existing power imbalance motivates male partner resistance to condom use as a means of family planning. 41 Additionally, Den Hollander et al in Ghana underscored women's low negotiating ability and autonomy in healthcare decision making. 40 The study reported wide power differences between health providers and women, especially in a context shaped by authority. Women were generally uninformed about their basic health information, and a high level of therapeutic misconception was also observed. Women were also reported to rely more often on a medical professional's opinion rather than being guided by their own motivation. 40 --- Power negotiation Power negotiation also emerged as a dominant gender dimension of vulnerability and resilience in maternal and child health.
This dimension refers to how power is enacted and negotiated in relation to MNCH and how power dynamics or women's experience of intimate partner violence contribute to adverse health for women, children and their families. Our analysis found two studies that examined power negotiation in maternal health 42 43 and one in child health. 44 Although seropositive status disclosure is a crucial aspect of HIV programming, women living with HIV were generally reluctant to disclose their HIV status to their partner to avoid negative reactions, including intimate partner physical violence. 44 Men were often not in favour of having their wives tested, fearing the indirect disclosure of their own infection. 44 Nonetheless, partner involvement is crucial for prevention of mother-to-child transmission (PMTCT), especially because this might require mothers to use antiretroviral therapy and formula feeding for infants. The authors recommended couple counselling and partner involvement in PMTCT programmes, as testing only women can increase their susceptibility to violence despite careful counselling. Furthermore, women's exposure to intimate partner violence could also affect other aspects of their health. For example, Vivilaki et al observed that the lack of, or disappointment with, partner support, a poor marital relationship and emotional/physical abuse were associated with high levels of postpartum anxiety and depression. 43 Likewise, McNaughton Reyes et al found that women exposed to intimate partner violence may be likely to experience persistent poor mental health across the antenatal and postnatal periods. 42 --- Partner emotional or affective support The two studies on partner emotional or affective support were primarily related to maternal health. 45 46 Families and partners often reacted negatively by rejecting unwed pregnant teenagers or teenage mothers. 46 These rejections were expressed in different ways, including avoiding pregnant teenagers or verbal abuse. 46 The analysis suggested that low-resilience women with threatened premature labour reported greater pressure from child support concerns after delivery, less active coping, less positive affect and more negative affect. 45 --- DISCUSSION This scoping review illuminates the gendered dynamics of vulnerability and resilience in MNCH research. Based on the 17 studies reviewed, we found that gender norms, roles and relationships significantly influence and reinforce vulnerability and resilience in maternal and child health. The role of gender-transformative interventions cannot be overemphasised in addressing the societal structures and widely held social values that perpetuate the gender inequities identified in this review. Our work highlights some promising gender-transformative interventions that should be prioritised in addressing vulnerabilities in MNCH (see table 2 for a summary). These are potential interventions based on the problems identified. Most importantly, women should have unhindered access to maternal and child healthcare services regardless of education, level of wealth, age or marital status. As highlighted in this review, access to resources was a dominant theme in 8 of the 17 reviewed studies. [30][31][32][33][34][35][36][37] Mothers in most of the studies reported having to wait for their husbands or other relatives for funds before they could access healthcare services.
This could pose a significant threat to their own and their children's health and well-being, especially during emergencies. Women's access to healthcare services is further compounded by sociocultural stereotypes, including those related to marriage and adolescent motherhood. Multiple studies have highlighted how cultural stereotypes and stigma may hinder healthcare access for the very people who need the services most. 47 In some cultural settings, unmarried women and adolescent mothers are unable to access care, partly because of the emphasis on marriage and motherhood in many African societies. Many women in search of assistance have fallen victim to human trafficking rings operating 'baby factories', where their babies are sold and the women are held against their will, compounding their woes. 48 49 However, these barriers to healthcare access can be alleviated through multisectoral interventions that address sociocultural stereotypes and the high costs of access to health services, including the costs of registration, treatment and care. For example, in Nigeria, the removal of user fees and increased community engagement for the most vulnerable is associated with a higher level of maternal health-seeking behaviour. 50 Similar findings have been reported in other LMICs, including China, Zambia, Jamaica and India. 51 52 Although the abolition of user fees is necessary to achieve universal access to quality healthcare, multiple studies have underscored that such policies are not sufficient to improve maternal healthcare utilisation. 53 54 The removal of user fees may increase uptake but may not reduce mortality proportionally if the quality of facility-based care is poor. 55 This may be especially salient in settings where healthcare access is limited by structural barriers related to the distance to health facilities, the cost of transportation, waiting times and other additional costs. 56 57 Masiye et al emphasised that the cost of transportation is mainly responsible for limiting the protective effect of user fee removal on catastrophic healthcare spending among the poorest households. 57 This finding is supported by Dahab and Sakellariou, who identified transportation barriers as among the most important barriers to maternal health in low-income African countries. 56 In fact, one study in our review reported that receiving financial support and help in securing transport to hospitals for healthcare was critical. 36 Previous studies have also highlighted that poorly implemented user fee removal policies benefit well-off women more than poor ones, and that, where there are significant immediate effects on the uptake of facility delivery, the trend is not sustained over time. 58 59 Given these findings, there is an overarching need for comprehensive and multisectoral approaches to achieve sustainable improvements in maternal health. In some studies, women who received financial incentives as part of neonatal care or conditional cash transfers reported better healthcare-seeking behaviours than those who did not. 60 Morgan et al emphasised that financial incentives can increase the quantity and quality of maternal health services and address the health system and financial barriers that prevent women from accessing, and providers from delivering, quality and lifesaving maternal healthcare.
60 There is also increasing consensus on the need to engage community and religious leaders in challenging many of the cultural impediments to healthcare access. Countries in which this has been attempted have reported substantial successes in improving healthcare access and service utilisation. In several LMICs, women are tasked with the responsibility of childbearing and child-rearing, both of which can significantly affect women's economic productivity. Empowering women through skills acquisition could also offer a viable financial alternative and alleviate the high cost of accessing healthcare services, especially for women in low socioeconomic strata. Adequate incentives and support for mothers could also significantly ease the pressure on women to balance motherhood and economic activities. Some studies have reported the positive effects of programmes that help women with childcare. 61 62 Such empowerment programmes could also be extended to single women and women in sole-parent or female-headed households, because these family types are characterised by low levels of education and household wealth.

Table 2 Promising gender-transformative interventions (extract)
Provide adequate support and affordable childcare for mothers to enhance their productivity and participation in the labour force.
Incentivise programmes that motivate the involvement of men in childcare and household chores.
Social norms (How are values defined?): Address cultural stereotypes that impede maternal access to healthcare services, including those related to marriage and adolescent motherhood; this could take the form of providing a friendly and safe environment for adolescent and unmarried mothers to access healthcare. Engage community leaders in alleviating social norms that put women and girls at risk of poor health, including norms that limit the contributions of women beyond motherhood.
Agency and decision making (Who decides?): Provide universal access to safe and effective means of contraception, irrespective of level of education and wealth. Strengthen the capacity of women and girls through education and job creation to contribute significantly to household decision making. Empower women to make decisive decisions about whether they want to have a/another baby and when they want to do so.

Another important gender dimension is the need for women and mothers to make decisions about their health and well-being. As highlighted in our review, women have limited contribution to decision-making processes related to healthcare and family planning. 40 41 This limitation is complicated by power imbalances between women and their spouses and between women and healthcare workers. 40 41 One study found that women are only aware of condoms as a means of contraception and that their male partners resist using condoms. However, the women are unwilling to use other means of contraception, perhaps because of known or perceived side effects. Family planning services must be integrated into existing maternal and child health programmes so that women are adequately equipped with sexual and reproductive health information and have the autonomy to choose their preferred means of contraception with minimal effects on pleasure. Male partner involvement is also crucial for PMTCT of HIV, especially because this requires mothers to use antiretroviral therapy and to formula feed the child.
44 Although the involvement of the spouse during childbirth and child-rearing could alleviate some of the economic implications of motherhood, unfortunately many male partners are not usually involved in childcare. 63 A few studies in our review reported on women's experiences of intimate partner violence and how these intersect with maternal and child vulnerabilities. [42][43][44] Women's exposure to intimate partner violence is associated with high levels of postpartum anxiety and depression and with persistent poor mental health across the antenatal and postnatal periods. 42 43 The fear of intimate partner violence has also been reported to influence women's disclosure of their HIV status to their spouses. 44 This occurs especially because men often do not favour having their wives tested, fearing the indirect disclosure of their own infection. As recommended by Gaillard et al 44 and other scholars, 64 the continued counselling of women alone may not eliminate some of the maternal risks of intimate partner violence. However, MNCH programmes could alleviate these risks through couple counselling and partner involvement in PMTCT programmes. Aside from increasing male partner involvement to reduce the maternal risks of intimate partner violence, the development of effective systems and strategies for the reporting and management of intimate partner violence and abuse is important. Many LMICs have legal structures for seeking redress for intimate partner violence; however, reporting has not been effective. Multiple studies have examined women's motivations for remaining in violent unions. [65][66][67][68] The findings of these studies, among several others, have highlighted, among other factors, subsistence concerns and the stereotypes associated with being divorced. As a result, strong systems may be especially important for women of low socioeconomic status who must remain in violent marriages for survival. Altogether, these findings point to the need for a contextual and women-centric perspective in developing strategies to eliminate violence against women, as such strategies may be ineffective if they do not address some of the bottlenecks to combating violence against women. Some studies have reported the effectiveness of women's social empowerment combined with economic empowerment in reducing women's vulnerability to intimate partner violence. 69 Such interventions may also provide women with resources to access healthcare services and alleviate maternal experiences of intimate partner violence. However, these interventions could aggravate experiences of intimate partner violence, especially in settings where maternal empowerment is perceived to threaten established gender norms. [70][71][72] Nonetheless, multiple studies in Tanzania have reported that maternal empowerment has led to considerable reductions in physical intimate partner violence and posed no additional adverse health risks. 69 Watts and Mayhew 73 and García-Moreno et al 74 recommended a more active approach, that is, to integrate health system responses into maternal and child healthcare. Today, there is a global consensus on strengthening healthcare professionals' ability to identify victims of intimate partner violence and provide first-line supportive care and referral to other care services.
74 A functional and well-financed health system is also important to prevent violence against women and to respond to victims and survivors in a consistent, safe and effective manner that enhances their health and well-being. 74 Health providers could ask women about their experiences of violence or evaluate them for potential indicators of partner violence, such as a history of unexplained injury or maternal bleeding, preterm labour or birth, and fetal injury or death. 73 The healthcare system can also provide women with a safe environment in which they can confidentially disclose experiences of violence and receive a supportive response. Although our review addresses an important gap in the literature, it is not without limitations. The first is that the inclusion of articles in this review was based solely on their focus on vulnerability or resilience in LMICs; therefore, studies on vulnerability or resilience outside LMICs, including pockets of vulnerable populations in high-income nations, have not been captured. Additionally, while we made every attempt to find all accessible material, it is possible that we omitted some publications with distinct perspectives that were not represented in the review's evidence, particularly from the grey literature, given how broad it is. --- CONCLUSION Only a few studies have examined vulnerability and resilience in maternal and child health, especially in LMICs. We have identified some gendered dynamics of vulnerability and resilience in MNCH through this scoping review. The findings suggest that there is a great need to continue to empower women and mothers to access resources and contribute to decisions about their own health, and to eliminate structural or social stereotypes that limit their agency. --- Contributors OAM conceptualised the review, developed the initial search strategy for the study, screened studies for eligibility, reviewed the draft manuscript and supervised the overall research. OAU developed the search strategy and the machine learning programme for study screening and screened studies for eligibility. EOO and FAS drafted the manuscript. NKI, ICM and BO screened studies for eligibility, performed data extraction and reviewed the draft manuscript. All authors read the manuscript, contributed to revisions as required and approved the final manuscript. OAM takes responsibility for the overall content as the guarantor. Competing interests None declared. Patient and public involvement Patients and/or the public were not involved in the design, conduct, reporting or dissemination plans of this research. --- Patient consent for publication Not applicable. Ethics approval This study did not receive nor require ethics approval, as it does not involve human or animal participants. Provenance and peer review Not commissioned; externally peer reviewed. Data availability statement All data relevant to the study are included in the article or uploaded as online supplemental information.
Introduction Gender lens application is pertinent in addressing the inequities that underlie morbidity and mortality in vulnerable populations, including mothers and children. While gender inequities may result in greater vulnerabilities for mothers and children, synthesising evidence on the constraints and opportunities is a step towards accelerating the reduction in poor outcomes and building resilience in individuals and across communities and health systems. Methods We conducted a scoping review that examined vulnerability and resilience in maternal, newborn and child health (MNCH) through a gender lens to characterise gender roles, relationships and differences in maternal and child health. We conducted a comprehensive search of peer-reviewed and grey literature in popular scholarly databases, including PubMed, ScienceDirect, EBSCOhost and Google Scholar. We identified and analysed 17 published studies that met the inclusion criteria for key gendered themes in maternal and child health vulnerability and resilience in low-income and middle-income countries. Results Six key gendered dimensions of vulnerability and resilience emerged from our analysis: (1) restricted maternal access to financial and economic resources; (2) limited economic contribution of women as a result of motherhood; (3) social norms, ideologies, beliefs and perceptions inhibiting women's access to maternal healthcare services; (4) restricted maternal agency and contribution to reproductive decisions; (5) power dynamics and experience of intimate partner violence contributing to adverse health for women, children and their families; and (6) partner emotional or affective support being crucial for maternal health and well-being, both prenatally and postnatally. Conclusion This review highlights six domains that merit attention in addressing maternal and child health vulnerabilities. Recognising and understanding the gendered dynamics of vulnerability and resilience can help develop meaningful strategies that will guide the design and implementation of MNCH programmes in low-income and middle-income countries. ⇒ Socioeconomic inequalities place women and girls in precarious positions that adversely affect their vulnerability and resilience to health shocks. ⇒ Research on the gendered dimension of maternal and child health vulnerability and resilience is needed to fully evaluate how gender expectations may result in greater vulnerability for mothers, newborns and children or affect their resilience. ⇒ This study provides new evidence on the gender dynamics of vulnerability and resilience in maternal, newborn and child health (MNCH) and how this affects health outcomes.
Introduction Coronary heart disease (CHD) remains the leading cause of death and disability globally despite significant advances in its diagnosis and management over the past decades. In Australia alone, in 2017-2018 more than 580,300 adults (approximately 312 cases per 10,000 population) self-reported CHD, which, in turn, accounted for 12% of all deaths and more than 160,438 hospitalisations (approximately 166 admissions per 10,000 public and private hospital separations). 1,2 In Australia, as in the USA and UK, 3,4 CHD disproportionately affects the most socially disadvantaged and those living in more remote geographic locations. 5 For example, the corresponding rates for prevalence, hospitalisation and death from CHD in the lowest socioeconomic areas are 2.2, 1.3 and 1.6 times those of the highest socioeconomic areas. 2 Similarly, the rates for CHD hospitalisation and CHD death in remote or very remote areas are 1.5 and 1.4 times those of major cities. These differences are partly due to the socioeconomic gradient in the prevalence of cardiovascular risk factors such as smoking and obesity. 2 Moreover, geographical disparities in both access to treatment and its affordability are likely contributors to the variation in the CHD burden in the Australian and other populations. A recent survey in Australia reported that, of people who received a prescription for any medication in the past 12 months, 7% delayed getting or did not get the prescribed medication due to cost. 6 Moreover, a systematic review found that over half of the studies that focused on access to drug treatment for the secondary prevention of CHD reported lower treatment rates for patients with low compared with high socioeconomic status (SES). 7 Primary care is an important component of the secondary prevention of CHD. General practitioner (GP) visits, preparation of a chronic disease management plan and use of cardiovascular medications after hospitalisation for CHD have been shown to reduce the risk of emergency readmission and death from cardiovascular disease. 8,9 Guidelines for the management of all patients with CHD in primary care have been available in Australia since 2012. 10 However, as we have shown in a recent report, their adoption is not yet universal and significant disparities exist in their application, such that men are more likely than women to receive a general practice management plan from their GP. 11 The aim of the current study was to investigate, in a large national general practice dataset (MedicineInsight), whether disparities in the management of CHD exist based on socioeconomic indicators and remoteness of patients' residence. --- Methods MedicineInsight is a large-scale Australian national general practice database of longitudinal de-identified electronic health records established by NPS MedicineWise with core funding from the Australian Government Department of Health. [11][12][13] Adults (aged ≥18 years) with CHD who had had ≥3 encounters with their GPs, with the last encounter during 2016-2018, were included in this population-based study (Supplementary Material Figure 1 online). Patients with CHD were identified through an algorithm developed by NPS MedicineWise, 11 which utilised information from relevant coded entries or free-text terms recorded in at least one of three fields: diagnosis, reason for encounter, and reason for prescription (Supplementary Table 1).
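As an illustration of this selection logic, the sketch below expresses the inclusion criteria in Python/pandas. It is a hedged approximation only: the actual NPS MedicineWise algorithm and MedicineInsight schema are not reproduced here, so the column names (patient_id, age, encounter_date, dx_text, reason_text, rx_reason_text) and the CHD term list are hypothetical stand-ins.

```python
# Hedged sketch of the cohort selection described above; field names and the
# CHD term list are hypothetical, not the MedicineInsight schema or the
# NPS MedicineWise algorithm.
import pandas as pd

CHD_TERMS = ["coronary heart disease", "ischaemic heart disease",
             "myocardial infarction", "angina"]  # illustrative terms only


def mentions_chd(row: pd.Series) -> bool:
    """True if any of the three searched fields contains a CHD term."""
    fields = (row["dx_text"], row["reason_text"], row["rx_reason_text"])
    text = " ".join(str(f).lower() for f in fields if pd.notna(f))
    return any(term in text for term in CHD_TERMS)


def select_cohort(encounters: pd.DataFrame) -> pd.DataFrame:
    """Adults (>=18 y) with CHD, >=3 GP encounters, last encounter in 2016-2018."""
    adults = encounters[encounters["age"] >= 18].copy()
    adults["chd"] = adults.apply(mentions_chd, axis=1)
    per_patient = adults.groupby("patient_id").agg(
        n_encounters=("encounter_date", "size"),
        last_encounter=("encounter_date", "max"),
        any_chd=("chd", "any"),
    )
    eligible = per_patient[
        (per_patient["n_encounters"] >= 3)
        & per_patient["any_chd"]
        & per_patient["last_encounter"].dt.year.between(2016, 2018)
    ].index
    return adults[adults["patient_id"].isin(eligible)]
```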
The general practice management plan for CHD is a tool developed in Australia for the secondary prevention of CHD in primary care. 14 The recommendations that this study investigated have been published. 11 Secondary prevention prescriptions were considered if these were prescribed during the study period. Missing data or lack of documentation of the measurement of risk factors were considered as non-assessment during the study period. The SES was based on the Socio-Economic Indexes for Areas - Index of Relative Socio-Economic Disadvantage (SEIFA-IRSD), 15 which is a residential postcode-based composite score that ranks geographic areas across Australia according to their relative socio-economic advantage or disadvantage. This study's SEIFA-IRSD scores were based on patients' most recent residential addresses as recorded in the last patient-GP encounter during the two-year study period. We further categorised the Australian Bureau of Statistics SEIFA-IRSD deciles into five groups. --- Statistical analysis The proportions of patients (a) with secondary prevention prescriptions during 2016-2018; (b) assessed for risk factors; and (c) who had achieved treatment targets were reported by SEIFA-IRSD fifths (i.e. first (most disadvantaged), second, third, fourth and fifth (least disadvantaged)) and by residential remoteness (i.e. major city, inner regional, outer regional, and remote or very remote). The direct standardisation method was used to estimate age- and sex-standardised proportions, utilising the prevalence of CHD in the Australian standard population as reported in the National Health Survey 2017-2018. 1 Differences by SES and remoteness in the age- and sex-standardised figures were evaluated, respectively, using chi-square tests. Spearman's rho correlation coefficient tested for monotonic changes in the relationship between SEIFA-IRSD and other variables. Secondary prevention prescriptions and number of treatment targets achieved were each modelled using Poisson regression. To account for variations in achieving treatment targets during the study period, we ran the latter model using the Generalised Estimating Equations approach while accounting for three possible measurements of risk factors related to treatment targets, as shown in Supplementary Table 2. For each patient in the two-year study period, the baseline available, randomly selected and last available measurements were used. Single measurements per patient per study period were carried over to all three. The models adjusted for age, sex, residential remoteness, SES, Indigenous status, state and territory, body mass index (BMI), smoking status, acute myocardial infarction, heart failure, diabetes, hypertension, stroke, chronic kidney disease, depression, anxiety, lifetime years of follow-up and number of patient-GP encounters during the two-year study period. The standard errors were adjusted for correlation within 438 general practices using the cluster sandwich estimator. In the treatment targets model, diabetes, hypertension, BMI and smoking were excluded as these were incorporated in the targets. The dose-response effects of different levels of socioeconomic disadvantage on the number of secondary prevention prescriptions or the number of treatment targets achieved were tested using likelihood ratio tests, with nested regression models compared to determine whether a model was rich enough to capture data trends. The nested models that assessed treatment targets were based on the randomly selected measurements.
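To make the modelling strategy concrete, here is a minimal sketch of the two regressions in Python with statsmodels (the published analysis itself was performed in Stata). Variable names such as n_rx, n_targets, ses_fifth, remoteness, practice_id and patient_id, the abbreviated covariate list, and the exchangeable working correlation are all assumptions made for illustration, not the authors' exact specification.

```python
# Minimal sketch of the two outcome models described above (assumed variable
# names and an abbreviated covariate list; the published analysis used Stata).
import statsmodels.api as sm
import statsmodels.formula.api as smf

# (a) Number of secondary-prevention prescriptions per patient: Poisson
#     regression with standard errors clustered on general practice
#     (cluster sandwich estimator).
rx_fit = smf.poisson(
    "n_rx ~ C(ses_fifth) + C(remoteness) + age + C(sex) + C(diabetes) + n_encounters",
    data=patients,                       # one row per patient (hypothetical frame)
).fit(cov_type="cluster", cov_kwds={"groups": patients["practice_id"]})
print(rx_fit.summary())                  # exponentiated coefficients give IRRs

# (b) Number of treatment targets achieved: Poisson GEE over the three
#     measurement occasions (baseline / random / last) nested within patients.
tt_fit = smf.gee(
    "n_targets ~ C(ses_fifth) + C(remoteness) + age + C(sex)",
    groups="patient_id",
    data=targets_long,                   # long format: one row per occasion
    family=sm.families.Poisson(),
    cov_struct=sm.cov_struct.Exchangeable(),  # working correlation: an assumption
).fit()
print(tt_fit.summary())
```

A likelihood ratio test comparing nested versions of these models (for example, SES entered as a linear trend versus as categorical fifths) would correspond to the dose-response tests described above.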
--- Sensitivity analysis Sensitivity analyses were conducted by prevalent comorbidities. The forest plots, showing age-, sex- and SES-adjusted incidence rate ratios of study outcomes by condition, were constructed using random effects models. We further used multiple imputation by chained equations to generate the missing data on the randomly selected measurements using the mi Stata command, with 50 imputed datasets and final estimates obtained using Rubin's rules. 16 The Poisson regression modelling treatment targets was re-run using the imputed dataset. All analyses were performed using Stata/SE 15.0 (Stata Corp LP., College Station, Texas, USA). --- Ethics clearance --- Results General practice records for 137,408 patients with CHD (46.6% women) were analysed. Of these records, 81.8% were from 2016-2018, 15.8% from 2015-2017 and 2.3% from 2014-2016. --- Patient characteristics by SES and remoteness Patient characteristics varied by SES (Table 1). Patients belonging to the most disadvantaged fifth were the oldest (mean age 67.0, SD 16.1 years compared with 66.2, SD 16.8 years in all other groups combined, p < 0.001). This was reflected in a higher prevalence of comorbidities in this most disadvantaged fifth (Supplementary Table 3) and a higher number of patient-GP encounters during the study period (Table 1). Socioeconomic disadvantage also varied by residential remoteness. Approximately 75% of individuals living in 'outer regional locations' belonged to the two lowest SES fifths compared with 58.4% in 'remote or very remote locations' and 56.7% in 'inner regional locations' (Supplementary Table 4). Patients residing in major cities were the least socioeconomically disadvantaged, with approximately one-quarter of patients in the lowest two SES groups. The oldest patients resided in inner regional locations while the youngest were in remote or very remote locations. Prevalence of major comorbidities was lower in this latter subgroup (Supplementary Table 4). --- Prescription of medications by SES and remoteness Higher proportions of patients from the most disadvantaged group were prescribed any of the five recommended medications compared with other socioeconomic groups (Figure 1). A significant monotonic association between SES and being prescribed all four medications recommended for daily use (i.e. excluding short-acting nitrates) was observed, with the number of prescribed medications incrementally increasing as SES declined (Spearman rho = −0.106, p < 0.001). In the risk-adjusted model, patients in the most disadvantaged fifth were 8% more likely to be prescribed more secondary prevention medications compared with the least disadvantaged group (incidence rate ratio (IRR) 1.08, 95% confidence interval (CI) 1.04-1.12, p < 0.001) (Table 2). The highest proportions of patients prescribed any of the medications for secondary prevention were observed in inner regional areas and the lowest proportions were observed in remote or very remote areas (Supplementary Figure 2), aligning with the different respective ages of these groups. In the risk-adjusted model, prescriptions in major cities and in inner and outer regional locations were alike, whereas patients residing in remote or very remote areas were 12% less likely to be prescribed medications for secondary prevention than those in major cities (IRR 0.88, 95% CI 0.81-0.96, p = 0.003) (Table 2).
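Picking up the multiple-imputation sensitivity analysis described in the Methods above, the pooling step (Rubin's rules) is simple enough to show directly. The sketch below is a generic Python illustration rather than the Stata mi machinery the authors used; fit_model and imputed_datasets are hypothetical placeholders for refitting the treatment-target model on each of the 50 imputed datasets.

```python
# Hedged sketch of Rubin's-rules pooling of one coefficient across m imputations;
# fit_model and imputed_datasets are hypothetical stand-ins, not the study code.
import numpy as np


def pool_rubin(estimates, variances):
    """Combine per-imputation estimates and variances (Rubin, 1987)."""
    estimates = np.asarray(estimates, dtype=float)
    variances = np.asarray(variances, dtype=float)
    m = len(estimates)
    q_bar = estimates.mean()              # pooled point estimate
    w_bar = variances.mean()              # within-imputation variance
    b = estimates.var(ddof=1)             # between-imputation variance
    total = w_bar + (1.0 + 1.0 / m) * b   # total variance
    return q_bar, np.sqrt(total)          # pooled estimate and standard error


# Usage sketch (hypothetical):
# coefs, ses = zip(*(fit_model(d) for d in imputed_datasets))  # 50 datasets
# beta, se = pool_rubin(coefs, np.square(ses))
```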
--- Assessment of risk factors by SES and remoteness During the two-year study period, between 92% and 95% of individuals had their smoking status and blood pressure assessed by their GP, whereas approximately 75% had their blood lipid profile tested and only 18-27% of individuals had their waist circumference (as a measure of central obesity) measured. A negative association between SES and risk factor assessment was observed, with risk factors assessed less often as SES rose (p < 0.001 for all) (Supplementary Figure 3). In contrast, the pattern by remoteness varied with the risk factor assessed, with higher proportions assessed among patients living further away from major cities (Supplementary Figure 4). --- Achievement of treatment targets by SES and remoteness Of the patients who had their risk factors assessed, and using the last available measurements, targets were more likely to be achieved in patients belonging to higher socioeconomic classes (Figure 2), with similar patterns observed when treatment targets were based on the first, randomly selected or last available measurements, as shown in Supplementary Figure 5. In the risk-adjusted model that accounted for three possible measurements per patient, the likelihood of achieving treatment targets dropped incrementally as SES declined. Individuals residing in remote or very remote locations were least likely to achieve risk factor targets (Table 3). A dose-response effect between SES and the number of treatment targets achieved was found (likelihood-ratio test chi-square = 3.59, p = 0.309). In all models, interaction between socioeconomic disadvantage and residential remoteness was tested by introducing interaction terms into the regressions. No evidence of interaction was found, based on non-significant regression-derived p values for the interaction terms (p > 0.05 in all). --- Sensitivity analyses To test for consistency, we further separately examined the study outcome measures by prevalent comorbidities, comparing the low with the high SES half; the results consistently supported the study's main findings (Figure 3). Results obtained following multiple imputation supported the study's main conclusions (Supplementary Table 5). --- Discussion This nationwide study of general practices in Australia indicates that, among those living with CHD, secondary prevention management is influenced by levels of both SES disadvantage and patient residential remoteness, but in opposing ways. Individuals with CHD residing in remote or very remote locations were significantly less likely to be prescribed medications for secondary prevention compared with those living in major cities. They were also less likely to achieve treatment targets. Conversely, the most socioeconomically disadvantaged individuals were more likely to be prescribed medications for secondary prevention and were more likely to be assessed for cardiovascular risk factors (but less likely to achieve risk factor targets) compared with those who were the least socioeconomically disadvantaged. Australia provides universal health care, which includes subsidised healthcare services through the Pharmaceutical Benefits Scheme (PBS) and Medicare Benefits Schedule (MBS). Items listed on the PBS usually involve a co-payment, with a lower co-payment for low-income earners and for Indigenous Australians living with or at risk of chronic illness. 17 Despite these concessions, a higher proportion of patients in the most disadvantaged groups do not fill prescriptions due to cost.
SES-disadvantaged patients with chronic diseases often struggle with out-of-pocket expenses, which negatively affects their health outcomes. 18 This may have contributed to the lower proportion who achieved targets in comparison with those in the least disadvantaged group. Patients from more disadvantaged areas are also likely to carry higher cardiovascular morbidity. An Australian study reported a dose-response relationship between socioeconomic disadvantage and admission to a coronary care unit or intensive care unit among patients presenting with non-traumatic chest pain. 19 The socioeconomic disparities observed in the current study may be attributed to a range of socioeconomic determinants of health and health behaviours, 20 rooted in social rank as determined by knowledge of risk factors for disease, 21 SES-associated educational gradients, 22 health literacy and patient-physician communication, 23 occupational hierarchy and income. CHD is a multifactorial disease with clinical, genetic, behavioural and lifestyle risk factors often interacting and contributing to a higher level of coronary risk. 24 Of these, modifiable lifestyle and behavioural risk factors, such as poor diet, physical inactivity, smoking and obesity, disproportionately affect individuals from the most disadvantaged groups. Consistent with our findings, studies have reported such disparities in cardiovascular health even in countries with universal access to health care, and after stratifying by smoking, comorbidity and obesity. 25 An Australian study on utilisation of health services in adults aged ≥45 years reported that a higher proportion of people in less disadvantaged groups did not fill a script compared with more disadvantaged groups of the population. 26 Paradoxically, however, patients from the least disadvantaged group were more likely to have achieved more treatment targets compared with those from the most disadvantaged group. It is possible that patients in the least disadvantaged group had their CHD managed by specialists rather than GPs: the same health service utilisation study reported that a higher proportion of people in the least disadvantaged group claimed the MBS service for specialist treatment compared with other socioeconomic groups (55% versus 48-49%). 26 Alternatively, individuals in the least disadvantaged groups may have opted to reduce risk factor levels by non-pharmacological means, through the modification of lifestyle and behaviour. In regard to CHD management by level of remoteness, dispensing rates for cardiovascular medication were generally higher in inner regional areas and lowest in remote or very remote areas despite the higher burden of CHD in rural populations, consistent with earlier reports. 27 Notably, our data do not suggest that this dispensing pattern is due to lower SES among those living in the most remote areas of the country; although major cities had the lowest proportion of the most disadvantaged individuals, there was little relation between SES and remoteness. For example, in this sample, 75% of individuals living in 'outer regional locations' belonged to the two lowest SES fifths compared with 58% in 'remote or very remote locations' and 57% in 'inner regional locations'. A key strength of the current study is that we used a large and contemporary national GP dataset in Australia.
Nevertheless, our results may not be entirely representative at a regional level since general practices participating in MedicineInsight had to have computerised records. 12 GP practices in locations that rely on paper-based records are not represented in this study. Our study utilised routinely collected data that are not intended for research purposes; hence there may have been errors in reporting and/or coding, and validation concerns. Missing information on blood pressure, smoking status and weight could be due to lack of documentation rather than lack of assessment. 13 We had no information on contraindications, which may have accounted for a small proportion of under-prescribing. We lacked information on specialist care, which may have contributed to the relatively lower prescription rates, but higher rates of targets achieved, in the least disadvantaged group. We also lacked drug dispensing data, which could have informed whether medication non-adherence or ineffective treatment led to non-achievement of treatment targets. Furthermore, any residential address changes over time were unknown to us and were unaccounted for. This study identifies important implications for policy and clinical practice, notably that despite Australia's universal healthcare system, the level of CHD management received is influenced by SES and remoteness of residence, with the widest management gaps observed in individuals from disadvantaged backgrounds and in patients from remote or very remote locations. The documentation rates we report imply a continued need for programmes of support to increase screening for risk factors for CHD and documentation of related clinical information, in accordance with the recommendations in the National Health and Medical Research Council guidelines. 10 More research is needed to understand clinical and patient behaviours and to assess whether policy incentives may help drive change in health behaviours. --- Supplementary material Supplementary material is available at European Journal of Preventive Cardiology online. --- Author contribution GM analysed the data, co-drafted the manuscript and is guarantor of the study. CMYL conceived the design of the study, secured funding for the study, obtained the data and co-drafted the manuscript. FS and SR secured funding for the study. MW provided statistical oversight. CKC provided clinical advice. RRH conceived the design of the study and secured funding for the study. All authors
CHD patients (aged ≥18 years), treated in 438 general practices in Australia, with ≥3 recent encounters with their general practitioners and the last encounter during 2016-2018, were included. Secondary prevention prescriptions and the number of treatment targets achieved were each modelled using Poisson regression, adjusting for demographics, socioeconomic indicators, remoteness of patients' residence, comorbidities, lifetime follow-up, number of patient-general practitioner encounters and the cluster effect within the general practices. The latter model was constructed using the Generalised Estimating Equations approach. Sensitivity analyses were run by comorbidity.
Introduction On Being Cosmopolitan and Religious I remember once declaring to a group of acquaintances in London that I consider myself to be both very religious yet also cosmopolitan. I was unsure whether their surprised expressions and cynical reactions were caused by my association between being cosmopolitan and religious per se, or by my admission to being cosmopolitan alongside my specific identity as a Muslim. As a young veiled woman who abstains from alcohol and follows the main teachings of my Islamic faith, perhaps they could not comprehend what exactly I considered to be 'cosmopolitan' about myself. The fact that I was socialising with them (all non-Muslims representing three different countries) in a global café chain in London, speaking in English and discussing American foreign policy in the global south, did not detract from the fact that I am, underneath all this, still 'a Muslim'. In true Jihad vs McWorld style (Barber, 2003), Islam appeared to summon up images of parochial and intolerant groups, following the 'word of God' while closing themselves off from any other worldly forms of progress or development. In direct contrast, they equated being cosmopolitan with adopting an open and outward perspective; with being modern (secular?), globalized and hungry for cultural diversity. If such a binary outlook is taken for granted, then surely it presupposes that cosmopolitan and religious perspectives will remain segregated worlds, leaving the possibility of being a cosmopolitan Muslim no more than an unachievable oxymoron. To challenge such a linear and somewhat naturalized preconception of how Muslims articulate perceptions of self and others, this paper demonstrates the complexities characterizing identities in a modern world of trans-temporality and intense mediated connectivity, and the ways in which identities are formed in layers (Georgiou, 2006) and informed by multiple attachments and connections 'of different types and at different levels' (Morley, 2000: 232). Detailed ethnographic evidence from Egypt illustrates the ways in which young Muslim women negotiate their identities at the juxtaposition of age, class experiences, dominant discourses of gendered morality, religious values and a mediated articulation of global culture. As such, against a backdrop where mediated and non-mediated discourses represent inseparable spheres of influence in these women's lives, I analyze how an ongoing dialogue between the local and the global, self and others, distance and proximity, the secular and the religious coalesces around both the virtual routes and the grounded roots through which they articulate a divine cosmopolitan imagination. A comparative class-based analysis between the experiences of young working-class and lower-middle-class Egyptian women allows me to explore the different ways it becomes possible to reconcile a religious and specifically Muslim identity with a cosmopolitan openness towards the world. I bring to the fore the centrality of transnational media as primary cultural resources through which these young women articulate and assess the world around them, both immediate and faraway. While almost every female participant in this study has never travelled outside of Egypt and, in many cases, does not even own a passport, they rely heavily on the media as their only 'passport' onto the outside world, expanding their imaginative horizons and exposing them to the possibility of alternative realities, lifestyles and modes of expression.
I draw on Abu-Lughod's (1995) seminal and much celebrated analysis of female domestic servants' consumption of local televised serials in Egypt, and the ways in which the dramatic narrative became a reassuring private space in which these women could be excessively melodramatic, exploring other, more desirable situations and identities unavailable to them in their everyday lives. While broadening my own analysis to encompass transnational television, I bring to the fore evidence of how televised repertoires of globality function as dynamic multi-way channels of negotiation for my female informants that mutually reinforce and shape both cosmopolitan and religious identities. On the one hand, Egyptian women's highly mediated cosmopolitan orientations are negotiated and filtered in relation to values and moralities that stem from very grounded religious identities. In turn, these religious identities themselves are being constantly weighed and re-assessed in light of a mediated exposure to diverse cultural happenings. For young Egyptian women, therefore, the question has never been whether it is possible for one to be a pious Muslim and a modern cosmopolitan. For them, the dilemma is how such a delicate balance is best subjectively and physically struck on the ground, allowing them to conform to the divine values of their religion and the moral boundaries of their society, while also making full use of the potentials offered by a diverse array of cross-cultural connections. For many, the incompatibility between Islam and cosmopolitanism was compounded in July 2013 after the doomed fate of political Islam in Egypt was sealed when Mohammed Morsi - the country's first ever President to arise from an Islamic party - was ousted by the military after just one year in office following days of mass civilian protest. The sour collapse of Muslim Brotherhood rule in Egypt has paved the way for numerous voices openly questioning whether Islam can ever be accommodating to modern, progressive and cosmopolitan ideals such as democracy and individual liberties (Nawara and Baban, 2014; Rakha, 2013). Crucially, although religion may have failed many Egyptians in relation to electoral politics and democratic representation, this must not detract from the fact that Islam continues to capture the hearts and minds of ordinary Egyptian citizens, claiming a constant and very natural presence which is highly visible, manifest and inseparable from the fabric of daily life. This was illustrated in the fact that although millions of Egyptians went out onto the streets to demand the early exit of the Brotherhood from the seat of power in the summer of 2013, concurrently, the constitutional declaration that was announced soon thereafter responded to mass pressure to recognize Islam as the state's official religion, and for Islamic Sharia to be clearly pronounced as its main source of jurisprudence. Mahmood (2005) captures how the discord between private and official articulations of religious discourse shaping Egypt's socio-political landscape is nothing new and dates back to the Islamic Revival of the 1970s. While a popular piety movement that developed in the 1970s established religious knowledge as a vital means of organizing daily conduct for ordinary Egyptians, there were strong attempts to marginalize this under secular governance (Mahmood, 2005).
In the light of such a complex and often contradictory political and social backdrop that has long plagued Egypt, and while recent post-uprising times involve struggles to (re)define how best to establish a modern nation-state drawing on cosmopolitan democratic values while accommodating deep-seated religious principles, the timeliness of the discussion driving this paper is indisputable. I draw on nine months of rich ethnographic fieldwork conducted in Cairo and completed in the crucial few months immediately prior to the 2011 revolution. With access to such unique data, I hope to transfer the debate about religion's place in Egypt from formal discussion tables and parliamentary houses to the Egyptian people themselves, and especially women, whose opinion continues to be marginalized in the Egyptian public sphere. --- A Cosmopolitan Imagination and the Media in a Socially Divided Cairo The 2011 revolution was a very visible manifestation of the central role modern forms of media technology can play in helping shape the demands and social aspirations of Egypt's young generation. Women in particular emerged as central players within media space, fuelling academic interest in the question of gender within the 'Arab Spring' and the ways prominent female activists played a central role in using their broad social media presence to mobilize and push for grassroots action. Enlightening as such research undeniably is, it often creates an artificial chasm between the media's seemingly insignificant and invisible role before 2011 and their substantial political potentials that Egyptian women 'suddenly' discovered post-2011. 1 However, findings from my research illustrate that, beyond the Internet's role within the immediate moment of radical revolutionary change, the more long-term, yet less glamorous, banal ordinariness associated with television consumption as a daily practice should not disguise its potential as a vehicle often partly establishing the conditions for change or dissatisfaction. Such a thesis is supported by Morley (2006: 104) in his argument that the impetus for political transformation often comes from the many 'micro instances of "pre-political" attitude change' articulated through long-term media consumption. On that score, I suggest that although media use was much less politicized, radical or even noted in the Egyptian public sphere prior to 2011, it still played a vital role in the everyday lives of women, particularly functioning as a vital tool enabling them to assess, understand, negotiate and critique the world around them. Daily access to transnational television has allowed the young women participating in this research to become increasingly globally interconnected and aware of the presence of the distant other within media space, allowing even those with the least means to be included within this reality of (virtual) interconnection (Schein, 1999; Silverstone, 2007; Ong, 2009). As 22-year-old Dalia told me: We live in a society where everything is controlled, particularly if you're a woman. Your family control where you can go and how you should dress, and the state controls how you live and what you can say. But they forget that we are a generation who has grown up with the media and so we see and hear alternatives; we know that there are other places in the world where citizens are respected, regardless of their gender, colour or economic background. What stops us from being like them?
Life in Egypt has become unbearable and it's almost like a pressure cooker - we will explode at any moment. Eighteen months after this poignant assertion, Dalia and many of the young women who participated in this study flooded Cairo's streets in a momentous revolution supported to a large extent by the media and underpinned by a demand for 'bread, freedom and social justice'. In this light, I use the term cosmopolitan imagination with reference to how cosmopolitanism, for these young women, takes the form of a dynamic subjective space driven by a sense of connection and belonging to the outside world. Primarily through the media, such an imagination expands the cultural horizons of young Egyptian women, allowing them to engage in a re-imagination of local particularities and to adopt a more reflexive understanding of limits placed on the self (Elsayed, 2010). As such, a cosmopolitan imagination in Cairo cannot be understood through the linear categories of analysis often used in theories of cosmopolitanism that draw entirely on the experiences of Western secular and liberal contexts. In a situation where their own realities are so dismal, characterized by poverty and state repression, young Egyptians' cosmopolitanism is not about an ethical concern for a distant other (Silverstone, 2007; Chouliaraki, 2008). In a context where the majority of participants I worked with have never travelled outside of Egypt, cosmopolitanism is not about physical mobility, patterns of global travel and first-hand experience of the world (Hannerz, 1996). Furthermore, the centrality of the nation to their daily experiences means cosmopolitanism in Cairo is not about a rootless form of identification with a 'universal' common humanity, attributed mainly to Kant and his Enlightenment ideologies (Kant, 2010; Nussbaum, 1994). Even within Middle Eastern scholarship, the concept of cosmopolitanism has been deeply impoverished and underdeveloped (Hanley, 2008) and has often been attributed solely to elite circles able to sustain exclusively Westernized lifestyles and secular forms of practice such as alcohol consumption (e.g. Zubaida, 2002). In contrast to such predetermined categories that approach cosmopolitanism as a static or fixed criterion (Hanley, 2008), my understanding of cosmopolitanism arises out of sustained empirical work and ethnographic research. I argue that cosmopolitanism in Egypt is exercised through internal heterogeneity (Elsayed, 2010), where these young women, embedded within the specificities of daily life in Cairo, internalise national, religious and transnational discourses in unique ways that lead to new avenues for self-understanding. Thus, by shifting the emphasis away from what cosmopolitanism should be to what cosmopolitanism actually means to these women, I approach cosmopolitanism as an actually existing, practiced and lived identity that, although physically rooted in place, becomes a multi-node space where both inward- and outward-facing cultural connections are dialectically interlinked. I draw particularly on Beck, who argues that a cosmopolitanism existing in the real world is not an idealistic vision associated with a 'glittering moral authority' (2004: 135), but a deformed entity that organically takes shape in different forms in an everyday context. Indeed, in my engagement with young Egyptian women, I illustrate that even within the same national context cosmopolitanism takes on two different forms in relation to socio-economic differences.
Embarking from the above premise, it becomes possible to avoid necessarily positioning religious and cosmopolitan perspectives as dialogical counterpoints. Indeed, I argue that young Egyptian women's cosmopolitan imagination is not a way of abandoning or transcending local and religious ties; in fact, as I illustrate below, these young women's religious identity is a fundamental springboard from which the negotiation of their cosmopolitan imagination commences, and a moral filter against which their understanding of the world is constantly measured. This is similar to Diouf's (2000) investigation of the biography of a rural Senegalese Muslim Brotherhood network which engaged creatively with a governing Western colonial order in ways that corresponded to, complemented, and ultimately benefited their Islamic identity. The case of young Egyptian women illuminates how the media have provided creative spaces for the revaluation and reflexive interpretation of local identities and particular experiences, thus giving them access to alternative routes through which they can be at once modern Muslims and pious cosmopolitans. For many, these may seem like contradictory pairings, but in a situation where the Koran and the television set represent these women's two most important sources of information about the world, a divine cosmopolitan imagination could not be more natural. --- Television Consumption amongst Women in Cairo This research is based on the responses of 32 Egyptian Muslim women between the ages of 18 and 25, split equally between the lower middle and working class. Extensive changes that have befallen Egypt's social, political and economic fabric over the last five decades have rendered the idea of a single homogenous middle-class stratum increasingly redundant (Abdel Mo'ti, 2006; De Koning, 2009; Amin, 2000). What was once a sizeable, relatively coherent urban middle class (Abaza, 2006; Ibrahim, 1982) formed under Nasser's 1960s communist government soon began to divide after the introduction of liberal economic policies by Sadat in the 1970s. A small wealthy, privately educated upper middle class able to sustain standards of cultural and economic capital associated with free markets and a global modernity came into existence alongside a majority lower middle class who remained acquainted with more humble lifestyles affiliated with localized forms of belonging such as a public education and/or government sector jobs (Abdel Mo'ti, 2006; De Koning, 2009; Amin, 2000). Though my original research from which this paper is drawn involves comparisons between the upper middle, lower middle and working class, due to space restrictions the current discussion focuses on the latter two. Importantly, I draw on Ibrahim's (1982) model and approach class in Cairo as a complex socio-economic category defined through a range of interrelated indicators including income, education, occupation and lifestyle. Conscious of the fact that accurately defining social class categories in a complex society such as Egypt is a mammoth task (Abu-Ismail and Sarangi, 2013; Beshay, 2014), education became a particularly vital indicator in my study and a point of entry into the divergent lives of my two groups of women. Research has shown that type (public/private) as well as level (intermediate/higher) of education are effective measures of social class differences in Egypt as they underpin the distinct and segregated classed worlds of young Egyptians (Gamal El Din, 1995; Haeri, 1997).
Subsequently, I approached the working class through a further education college which offers intermediate diplomas in computing, and the lower middle class through the Faculty of Education at one of Cairo's public universities. Respondents were split into four focus groups: two in the lower middle class and two in the working class, allowing me to cross-check the validity of each group session. The dynamic and interactive nature of the group discussions allowed me to become instantly aware of the centrality of the media to the daily lives of my young participants. Indeed, all the women I questioned admitted to having at least one television set in their household, while they all owned mobile phones and usually had indirect access to the internet through their friendship networks and frequent presence in internet cafes. In most cases, it was asserted - across both classes - that at least three hours of their daily time is dedicated to television consumption. As a result of their limited financial means and thus inability to entertain themselves outside the home, and in the case of the working class - limited cultural capital acquired through basic education - it was clear to see how they depended greatly on television as an important vehicle of information, education and entertainment. Both groups of women had access to satellite broadcasting in their households, and thus terrestrial television was usually shunned in favour of regional Arabic channels. The MBC package 2, owned by prominent Saudi business tycoons, was especially well received. MBC2 - a 24-hour movie channel broadcasting contemporary Hollywood movies - and MBC4, dedicated entirely to the latest American serials and light entertainment programmes, were by far the most popular channels. Unsurprisingly, therefore, American movies and serials emerged as the two genres preferred the most across both classes, although Egyptian drama was also significantly popular. It was very interesting to learn how both groups of women stressed that they preferred to watch foreign movies on a regional broadcaster such as MBC rather than directly from a foreign source. This could be driven by the obvious fact that these women had little access to foreign channels in their households. Western (mainly American) channels in Egypt are predominantly available on exclusive satellite packages that carry a monthly subscription fee and, as such, are mainly accessible only to the wealthier upper classes. In contrast, most of my participants stated that they received the Eutelsat and Nilesat free-to-air satellites, which predominantly broadcast regional Arabic channels from across the Middle East at no ongoing monthly cost. Some of the women did mention that, occasionally, if a signal was strong enough, they were able to receive a limited number of European channels. However, the usual presence of one television set in the home, located in a communal area such as the living room, meant that their viewing practices were often monitored by parents and older (male) siblings and thus subject to censorship procedures that usually involved European channels being encrypted. One participant mentioned that her father referred to foreign channels as the 'devil' that annulled one's prayers, and thus he felt forced to ban such channels to ensure that the more vulnerable members of the household - particularly women and children - are protected from the 'corrupting influences' of uncensored Western material.
Other than the practicalities of access (or lack thereof) to foreign channels and the restrictions of monthly subscriptions, another two reasons drive these women's preference to follow Western programmes on regional Arabic broadcasters: firstly, translation services provided by MBC (either through subtitles or dubbing) mean these women can overcome their limited English-language abilities and enjoy movies in their native Arabic language. Secondly, nearly all women across both groups appreciated the censorship policies observed by MBC, and thus they felt much more comfortable watching these movies with the prior knowledge that any obscene language or overt sexual scenes would be removed. Indeed, being primarily owned by Saudi investors, and thus associated with 'one of the region's most tightly-controlled media environments' (BBC News Middle East, 2013), channels such as the MBC package are subject to strict self-censorship policies that avoid criticising the government or contradicting Saudi Arabia's ultraconservative Islamic Wahabi doctrine. Another noteworthy point is that foreign genres such as American movies were especially central in conversations related to these women's religious identity. Both groups claimed that Islamic TV channels - which were very abundant and popular at the time - formed an important part of their television consumption practices. Nevertheless, what was interesting was that their interaction with these religious channels was characterised by more targeted viewing practices, e.g. they would view them to follow a particular theological discussion or to listen to a fatwa on a particular issue, such as gender segregation. However, more generally, to be exposed to different cultures, and to explore how to be confident Muslims in touch with the rapid changes of contemporary times, Western drama genres represented more comprehensive and broadly informative windows onto the outside world. Thus, in essence - and as will be discussed further below - these young women are using secular, foreign media formats to partly negotiate what are very religious and locally rooted identities. This is a strong indicator of how these women's divine cosmopolitan imagination comprises multiple identity layers that shift continuously and very smoothly between mediated and non-mediated spheres of influence, allowing them to remain loyal to and observant of the moral boundaries of their faith while reaping the benefits of an open and accessible transnational media network. I will expand upon these points in the remainder of this paper, where I embark on a more focused class-specific discussion about the unique ways both groups of women merge religious and cosmopolitan perspectives. --- Lower Middle Class Women In my conversations with lower middle class women, their religious identity was accorded a position of central importance and was a primary factor in defining their sense of self. It was particularly interesting to hear how religion for these women was almost synonymous with, or interchangeable with, their sense of national identity; I was often told that patriotic sentiments cannot be divorced from the strength of one's devotion to their religion or connection to God. As 23-year-old Hadeel informed me: To be a truly patriotic Egyptian, you firstly have to be a good Muslim who is well aware of their religion and its main ethos.
Islam teaches people to live together despite their socio-economic or even religious differences, to respect their leader, to protect their nation against an enemy or intruder - and thus to be a loyal and respectful citizen. Such a strong religious assertion, which spanned the lower middle class, was very often in dialogue with a global articulation of culture. Indeed, many of the lower middle class women I engaged with talked about Islam as being a religion that is by default cosmopolitan, its main ethos strongly predisposed towards cross-cultural integration. According to Rowayda, 18, although Islam originated in the Arabian Peninsula, it obliges its followers to integrate with others of diverse backgrounds in order for its 'message of peace' to spread across the globe. Verses from the Koran were routinely quoted in proof of this, such as 'We have made you into nations and tribes that you may know one another' or 'Travel through the earth and see how Allah originated creation'. Despite this, the limited financial capabilities facing many of these women mean the majority of them see little hope of travelling beyond Egypt. In this context, therefore, their ability to see the world and experience different cultures via television, in the comfort of their own home, is vital. As 22-year-old Nadine told me: The world belongs to God; it is all His land and He has ordered us to travel, to integrate, to mingle and to explore. I believe that every Muslim should travel widely if they are able to do so, as seeing first-hand the wonders of the world, the rich diversity characterizing different peoples and the beauty of this earth will strengthen one's faith and love for God - who is ultimately the creator of all these miracles. In our modern times there is no excuse, as television means we do not need to exert time, effort or money to go out to the world; instead, the world comes to us as we sit comfortably in our chairs. This quote is indicative of a divine cosmopolitan imagination that, although it remains firmly grounded through an obligation to observe and fulfill very specific religious duties, is simultaneously driven by a reflexive desire to refashion such duties as part of a worldly and outward-looking perspective craving knowledge of and participation with the global other. Ironically, in what they experience as an organic fusion of the divine and the secular, a modern and mainly Western-inspired technological medium such as television - very often shunned by older Islamic scholars as being 'sinful' - has become these young women's primary means of integrating with other cultures, thus fulfilling what they consider to be a deep-rooted religious obligation. Importantly, television is not simply a means for these women to observe a faraway world; I quickly learned how the knowledge they glean from these mediated encounters becomes an intimate part of their self-assessment and the way they perceive, assess and make sense of their religion. The majority of these women were very keen to challenge the general misconception - usually amongst Muslims themselves - that the purpose of their religion is reduced to fulfilling specific duties such as praying or fasting. For them, Islam is a more wholesome religion that extends far beyond the mosque or prayer mat. Being a Muslim is about being a productive member of society, having a strong work ethic, and treating those around you with respect.
According to my female informants, such a holistic understanding of religion, which encourages one to be a better human being, is where most Muslims fail, and where there is a vast need to learn from the experiences of other more developed cultures. As a result, the Western world - accessed predominantly through television - was considered to be a rich cultural fountain, and represented an important reference and point of comparison against which local and religious particularities were being routinely measured. In particular, the West's commitment to basic cosmopolitan and humanitarian values such as individuality, democracy and women's rights is something that they respect deeply and wish for in Egypt. As 23-year-old Asmaa told me: The sad reality today is that it is the "unreligious" Western countries which respect and uphold basic human morals and values, while the Muslim world is a shame to us all. We have a lot we can learn from them (Western countries) and therefore, as long as you have the right intention, the media represent important tools that generations before us never had, allowing us to engage directly and learn from the model of these more developed cultures, thus always pushing ourselves to become better Muslims. This was illustrated in a long discussion I once had with a group of these informants about the unsatisfactory way a raped woman is dealt with in Egypt. When I probed them in order to discover what had provoked such intense and critical opinions about a matter considered to be taboo in Egypt, I discovered, to my surprise, that it was an episode of the American teen drama 90210 broadcast on the regional MBC4 channel (discussed above) and accompanied by written Arabic translation. In the few weeks prior to our discussion, there was a key storyline where a lead female character was allegedly raped by a school teacher, and this created much interest amongst my female informants. Twenty-two-year-old Amany was particularly impressed at how the rape victim within the dramatic scenario was treated respectfully and sensitively by those around her, while in Egypt, she believes: The girl would have been told to stay quiet so as not to lose the reputation of herself and her family. The sad thing is, even though we are a Muslim country, our response is very un-Islamic in its disrespect for the victim. However, by having insight into how other cultures deal with such a situation, we might one day learn to adopt their humility and dignity. In light of the above, although television only provides women such as Amany with a selective representation of Western culture - usually fictional - it is still a powerful tool allowing them to confidently engage in reflexive cross-national comparisons. Differences are often pointed out between their own tangible and everyday experiences of corruption and dishonesty in Egypt, and scenes in a film or serial which they perceive to point to the transparency and integrity of Western culture. This point was confirmed by 21-year-old Seham, who said that regular exposure to such media often makes her feel 'disappointed and upset' as it discloses the true extent of the dire reality of life in Egypt and the situation of Muslims. Nevertheless, she is willing to endure such temporary feelings in order to reap the 'long term benefits' of the media. In her words: 'Without the media we would be closed up on ourselves with no insight into alternative ways of life or what it could mean to be better people and better Muslims.
If this was the case, would we ever have anything to strive towards?' What we can observe so far is a situation where these young women are undergoing a dynamic and imaginative engagement with a mediated Western culture as an attempt to negotiate for themselves a position as worldly, humanitarian and culturally sophisticated Muslims. Importantly, articulating their understandings of the world primarily through the lens of religion means that although lower middle class women accept the West as an important fountain of cultural advancement in many respects, they simultaneously acknowledge it to be a potential source of immorality and religious laxness. What they have learned of Western culture through the media often confirms to them the 'spiritual ignorance' of Westerners, which has resulted in what they consider to be their excessive materialism, objectification of women and sexual promiscuity. According to 18-year-old Nesma, the West may be wealthy and scientifically advanced, but Westerners remain 'spiritually poor', and thus a potential danger of transnational media is that Egyptian youth may learn to be 'hedonistic' like Westerners, 'becoming slaves to money and consumer objects rather than a higher divine order'. As a result, Nesma concludes that 'one must equip themselves with strong faith to ensure that they are well aware of their moral boundaries and a sense of what external values are acceptable or not to adopt.' It seems, therefore, that by including the West in a backwardness which involves a disregard for religiosity, these women confidently reverse common perceptions of religious people as being ignorant, stagnant and unprogressive (Elsayed, 2010). Thus, what these female informants display is a hybrid form of cosmopolitanism that blends a fascination with the West with a critical attitude. Although many of these women believe that what they are able to learn about the outside world through the media can help them to be more productive, worldly and sophisticated Muslims, they also consider themselves in a superior position to teach the Western world a vital lesson: the significance of faith and piety. Hence, for these young women, the world does not involve a set of one-way connections from the West to the rest, but is a more complex shared space we all mutually make and influence (Gable, 2010). According to Heba, in a world of open and instant communication, Egyptians and Muslims need to avoid always being passive receivers of what other people choose to send, and instead should 'strive to become active instigators and senders of their own media messages as this is the best way to educate the world about the beauty and mercy of our religion'. The internet especially was considered to be important in this respect as it enables them to create their own messages - through blogs, websites or tweets - that can then be broadcast uncensored to millions of other users across the globe. Heba discussed how she volunteers for an English-language website called Islam Online, which aims to promote a modern, youthful and moderate image of Islam. In the context of the above discussion, therefore, while religion acts as a filter for how these women's cosmopolitan orientations take shape and for the contours of its moral boundaries, their religious identity in turn becomes more fluid, adapting and changing in relation to their exposure to the wider world.
The end result is a divine cosmopolitanism that is not static, but dynamic and constantly evolving as grounded religious and mediated secular spheres of influence remain in close dialogue and interaction. --- Working Class Women My discussions with working-class women revealed a discourse heavily reflective of a strong religious identification that placed great emphasis on the centrality of Islam to their daily lives. Beyond a verbal assertion of religious devoutness, however, I felt they were not comfortable with me probing too much into the details of Islamic discourse or teachings. Being a Muslim myself, I was able to comfortably talk to both groups about religious matters, and I discovered that, unlike the lower middle class, the working-class women's knowledge of fundamental Islamic teachings was often underdeveloped. Obviously, one's religious knowledge is associated with cultural capital and education. In a context where the majority of these young women have basic literacy and education skills, it should be no surprise that their familiarity with religious texts and their personal knowledge of Islamic discourse is often poor. Thus, it quickly became clear how their relationship to religion is based primarily on teachings and traditions passed down from their parents. In contrast, for the lower middle class, their educational capital has allowed them to comfortably engage with religious texts, so that through their own efforts of increasing religious understanding and perception, they are able to make more informed and reflexive decisions regarding religious practice. Perhaps here it is fitting to use Deeb's (2006) distinction between an 'authenticated Islam' (2006: 21) that persons may experience based on piety and personal understanding, and an unreflexive relationship to Islam underpinned by a conformity to religious folklore and heritage passed down through generations. This premise is succinctly captured by Mahmood's (2005) female interlocutors, who formed part of the Egyptian women's mosque movement she was studying in the 1990s. According to these women, a 'popular religiosity' (Mahmood, 2005: 45) which has become rife amongst ordinary Egyptians has reduced Islamic knowledge to a 'system of abstract values' (ibid.) that functions mainly as a public marker of a socially desirable 'religio-cultural identity' (ibid.: 48) rather than a true and honest realization of 'piety in the entirety of one's life' (ibid.: 48). Importantly, I do not aspire to make any judgments regarding which class is more religious or whose faith is more powerful. This is neither my place, nor does it fall within my research aims. What I am trying to say, however, is that while Islam is undeniably central to the lives of both groups, they have developed very different understandings of how religious discourse informs and shapes different aspects of their everyday lives. For the working class, I observed a strong need to abide by familial expectations and hegemonic social structures that impose Islamic discourse as a strict set of divine values defining the limits of acceptable conduct and physical appearance. In this context, submission to Islamic principles becomes an overbearing moral framework for ensuring inclusion and social conformity and for upholding what their immediate society dictates is a 'respectable' and 'honourable' reputation for women.
This is particularly illustrated in the way the veil takes centre stage within working-class locales as a highly visible and public expression of these women's 'embodied piety' and 'well preserved' honour. As my rapport with these women increased, they often discussed that although their faith is a vital part of their self-identity and the ways they made sense of the world, they still felt fervently bitter at the way their parents very often imposed aspects of religion upon them in
a very didactic way without making any effort to actually teach them the fundamental principles of Islam. In this context, I was often told that they felt the media played a central role in allowing them to undergo a process of self-exploration regarding what it means to be a young Muslim in the modern world. As 22-year-old Kariman told me: Women like me are led like sheep - you don't really have much control over your life. Every aspect of your existence is under the spotlight if you're a woman in Egypt - what you wear, how you walk, how you talk to men. The more religious you "appear" to be, the better your reputation will be and thus your marriage opportunities. However, our parents make little effort beyond this to actually educate us about our religion or its main ethos. As I struggle to read a religious book, television makes it much easier for me to increase my knowledge and awareness, especially when there are so many available channels. This quote highlights the significance of television as a cheap and readily accessible medium allowing Egyptians like these young informants, especially those with only basic literacy skills, to depend on it for education, information and entertainment. This was confirmed by another participant, Zeinab, who mentioned how she too sees television as a highly informative tool for education that helps broaden her horizons and knowledge about both worldly and religious matters. Zeinab discusses how she turns to religious channels in order to listen to fatwas or the opinions of prominent Islamic scholars on specific issues of importance to her such as praying or giving charity to the poor. However, simultaneously, a large part of Zeinab's viewing practices is also dedicated to regularly watching American movies and sitcoms. Zeinab focused particularly on the fact that although she wears her 'veil with pride' she also wants to be a 'smart, modern and fashionable Muslim woman,' and thus enjoys Western entertainment, particularly movies, as a means to remain in touch with the latest global fashion developments. In her own words: 'I watch and observe and then only take what suits me and complements my identity as a Muslim. My parents believe I'm too heavily influenced by what I see in the media, but I know my boundaries very well'.
Zeinab's underlying assertion that the different values these women are exposed to in the media often create a tension with the existing ideals of the older generation appeared to be a very common sentiment. Adding to this conversation, Mariam argued that women in Egypt only have to read the newspaper or switch on their television to be exposed to stories of women in the Western world taking up important social and political roles as prime ministers, judges and scientists. 'Meanwhile', she continued, 'our parents ban us from even talking to men!' (Elsayed, in press: 6). This demonstrates that, for working-class women, a feeling that their conduct is highly controlled by rigid parental expectations can be strengthened through their exposure to the media and an ability to witness alternative representations of gender roles. For Mariam, there is no contradiction between being a pious and devout woman as her religion dictates, whilst also being a 'modern', career-focused and fashion-conscious woman as is often the norm in the Western societies she observes on screen. The issue, according to Mariam and many of the other women, harks back to parents' narrow and parochial interpretation of religion. The previous section highlighted how the lower middle class have developed a very reflexive, rational and almost intellectual fusion between religious obligations and a cosmopolitan outlook. As we have seen, the working class experience more of a generational struggle to negotiate for themselves a third space within which they are able to conform to the essential teachings of their religion, while also challenging parental expectations through adapting and internalizing the cosmopolitan principles they are exposed to in the media. Interestingly, many women in the working class discussed how television - particularly Western programmes - became the source of numerous clashes between them and their parents. As touched on above, this often resulted in foreign channels being encrypted, television viewing being censored by family members, or even, in a few cases, television being banned from the home altogether. This generational chasm was confirmed very aptly in a discussion I once had with these women about pre-marital relationships. According to one of the participants, while she sees her parents as occupying a very sheltered, static, and inward-looking existence, she affiliates herself with a 'new', more culturally-mobile generation who, although remaining pious, are exposed to the outside world through the media and thus far more versed in contemporary ways of life. Consequently, young women in Egypt have come to formulate very different needs to their parents, particularly demanding love and romance as pre-conditions to marriage. For an older generation, however, who continue to regard wedlock as the only legitimate and permitted form of contact between a man and a woman, dating becomes an immoral 'Western' concept their daughters internalize through an unregulated exposure to media which are at odds with the essential values of their faith and society. I have argued elsewhere (Elsayed, in press) how transnational media become important catalysts fuelling a generationally-specific 'subcultural imagination' driving these young people to question and subvert hegemonic ideologies at the local level. In acquainting them with the possibility of alternative realities and ways of being, the media allow young Egyptians to develop a reflexive awareness of different sets of moralities informing social roles.
Thus, in their encounter with a mediated outside world, young Egyptians' sense of morality and the self-righteousness of dominant codes of practice in the nation come to be discussed, addressed and, as we will see, physically challenged (Elsayed, in press: 4). In essence, for the older generation - represented by parents, societal norms and traditional Islamic scholars - a mediated globalisation often becomes an uncontrollable culprit, synonymous with excessive Westernization, and thus primarily to blame for what they consider youths' lack of attachment to their religion. For these young women living in an age of intense mediation and global connectivity, the boundaries between the 'religious' and 'unreligious' are much more fluid and interchangeable, and thus television becomes a vital tool for exploring, defining and negotiating their identity as young Muslims in the 21st century. From an adult or outsider perspective it may appear that youth are caught between multiple contradicting cultural or religious repertoires (Nilan and Feixa, 2006). However, for a generation of technologically competent and media-savvy youth, the media are naturally embedded within their processes of self-understanding and part of a daily struggle to grasp and make sense of a highly complex, interconnected and rapidly changing world. --- Conclusion This paper has explored some of the many and complex ways youth identities - in a Global South context - are being articulated within a world of increased cultural interdependence and highly mediated cross-national connections. As documented by the case of Egyptian women, the mediation of everyday life expands the horizons of their cultural repertoires beyond national space and makes distant systems of meaning relevant to their lives and to their religious and national identities. In a situation where local affiliations and the particularities of geographical space remain central to these young women's identities, I have demonstrated how religious beliefs and national sentiment are not antithetical to cosmopolitanism. Instead, informed primarily by transnational television, these young women articulate a divine cosmopolitan imagination through which they form multiple allegiances to God, the nation and global culture simultaneously. The multi-layered nature of these young women's identities is captured in the way they display an intricate set of preferences towards the diverse media they have access to. As we have seen, such preferences do not fall neatly within a linear cultural proximity framework (Straubhaar, 1991), which assumes an automatic preference for local and national media. In a more recent reworking of this theory, Straubhaar and La Pastina (2005) maintain that media preferences must be recognized as more complex, taking place at multiple levels that conform to the different religious, cultural and political aspects that shape people's multilayered identities. As discussed in this paper, young Egyptian women's media preferences centre around the content and values represented by particular genres and programmes, rather than being reduced to the cultural origins of the media. We have seen this in the way secular Western media formats such as American movies become central to how these women negotiate their relationship to religion. In this context, in the articulation of both their cosmopolitan imagination and religious identities, young Egyptians have become skilled negotiators, moving within and between mediated and non-mediated discourses.
They move physically within a grounded place that sets the moral boundaries for bodily existence, yet shift subjectively between disembedded spaces of mediated representation, often providing new contexts for meaning and inclusivity. In light of this dialectical interplay between proximity and distance, television, in exposing young Egyptians to representations of different cultural worlds, often provides a sense of detachment from the immediate, although not as a way of transcending the local or the religious, but in providing a new lens and context for imagining and reimagining proximate social experiences. The result, for young Egyptian women, is a divine cosmopolitan imagination.
With a focus on young Egyptian women, this paper explores the different ways it becomes possible to reconcile a Muslim identity with a cosmopolitan openness towards the world. Informed primarily by transnational television, these women articulate a divine cosmopolitan imagination through which they form multiple allegiances to God, the nation and global culture simultaneously. Thus, a close analysis of their regular consumption of transnational television helps challenge linear and somewhat naturalized preconceptions of how Muslims articulate perceptions of self and others. In the articulation of both their cosmopolitan imagination and religious identities, young Egyptian women have become skilled negotiators, moving within and between mediated and non-mediated discourses. They move physically within a grounded place that sets the moral boundaries for bodily existence, yet shift subjectively between disembedded spaces of mediated representation, often providing new contexts for meaning and inclusivity. The result, for young Egyptian women, is a divine cosmopolitan imagination.
INTRODUCTION There is a high global burden of hypertension with an estimated 1.13 billion people worldwide reported to have hypertension, with most (two-thirds) living in low-and middle-income countries (LMICs) [1]. While in 1990, high systolic blood pressure (BP) was the seventh-leading risk factor by attributable disability-adjusted life-years (DALYs), in 2019, it had become the leading risk factor [2]. The African Region of the World Health Organization (WHO) has the highest prevalence of hypertension (27%) [1]. The increase in LMICs is due mainly to a rise in hypertension risk factors in their populations [1]. Several studies have reported the increasing prevalence of hypertension in Africa [3,4]. Nigeria, as the most populous country in Africa, is also a major contributor to the increasing burden of hypertension in the continent. Between 1995 and 2020, the estimated age-adjusted prevalence of hypertension increased from 8.5% to 32.5% [5]. A recent study also found a similar prevalence of 38% from a nationwide survey in Nigeria [6]. Current evidence shows that gaps in hypertension management were attributable to socio-demographic determinants [7][8][9] and lifestyle factors [10,11]. An earlier study had suggested that demographics and lifestyle variables determined racial differences in hypertension prevalence [12]. Nigeria has a rapidly growing population with increasing urbanization and numerous ethnic groups across the country's different regions. However, in Nigeria, the relationship between socio-demographic/lifestyle factors and hypertension is understudied. To address the existing gaps in evidence, this study was carried out as part of the Removing the Mask on Hypertension (REMAH) study, a nationwide survey of hypertension aimed at defining the true burden of hypertension in Nigeria. Previously published articles from the REMAH study focused on the study design [13], prevalence of hypertension [6], and prevalence of dyslipidemia [14]. This study intended to assess the socio-demographic and lifestyle factors associated with hypertension in a black population. The findings from this study may be useful for planning interventions and policies to prevent and control hypertension in Nigeria and other similar settings. --- METHODS --- Study design Data were derived from a subset of the REMAH study, a cross-sectional national survey on hypertension. The details of the study design have been reported in a previous study [13]. The study population comprised adults 18 years and older who lived in selected communities. A multi-stage sampling technique was used to select participants from 12 communities across six states of Nigeria. In the first stage, one state was selected from each of the six regions of the country. In the second stage, with the aid of the administrative data of the 2015 general elections of the Independent National Electoral Commission, we selected two local government areas (LGAs) in each state, consisting of urban and rural communities. For urban communities, we selected LGAs in state capitals including Abuja Municipal Area Council for Abuja (North-central), Gombe Municipal for Gombe (North-east), Gusau for Zamfara (North-west), Onitsha for Anambra (Southeast), Uyo for Akwa-Ibom (South-south), and Ibadan-North for Oyo (Southwest). Gwagwalada, Akko, Bungudu, Oyi, Nsit Ubium, and Akinyele LGAs were randomly selected for sampling the rural communities in these states. 
In the third and fourth stages, one ward was randomly selected from each of the rural and urban LGAs, and one polling unit was then randomly selected from each ward. Fieldwork was carried out between March 2017 and February 2018. Out of 4665 adults invited, 4197 consented to participate in the REMAH study; however, only 3782 of them had the required data on socio-demography and lifestyle used for this study. We complied with the Helsinki guidelines for conducting research on human participants, and the study was duly approved by the University of Abuja Teaching Hospital Human Research Ethical Committee. --- Data collection Socio-demographic characteristics. Data on various socio-demographic characteristics were collected using an investigator-administered questionnaire. Marital status was grouped into married, unmarried, divorced/separated, and widowed. The area of residence was either urban or rural. Work status was categorized into government-employed, non-government-employed, self-employed, non-paid, and unemployed. Educational status was classified into no formal education, primary, secondary, and tertiary education. Lifestyle measures. Trained fieldworkers administered a modified WHO STEPS questionnaire to obtain information on respondents' sociodemographic characteristics, physical activity, tobacco use, and alcohol consumption [15]. Physical activity was assessed using the International Physical Activity Questionnaire, which enquired about physical activity during work and leisure. Weekly physical activity was computed by multiplying time spent (in minutes) on a given activity in the reported week by the intensity in metabolic equivalents (MET units) corresponding to that activity: 8 METs for vigorous work or recreational activities; 4 METs for moderate work or recreational activities; and 3 METs for walking activities [16]. The total weekly activity was obtained by summing the weekly physical activity (expressed in MET-minutes/week) of the three kinds of activities. According to the global recommendation of the WHO on physical activity, respondents had high physical activity if total weekly activity was ≥600 MET-minutes/week or low physical activity if <600 MET-minutes/week. Tobacco use was defined as current tobacco use in any form of smoking, snuffing, and ingestion. Alcohol consumption was defined as current consumption of alcohol in any form and quantity. Blood pressure measurement. Blood pressure was measured by auscultation of the Korotkoff sounds at the non-dominant arm using a mercury sphygmomanometer, as previously described [6]. Participants rested in a seated position for at least five minutes, and observers obtained five consecutive BP readings at 30-60 s intervals. Systolic (phase I) and phase V diastolic BPs were measured to the nearest 2 mmHg. Standard cuffs with a 12 × 24 cm inflatable portion were used. In instances where the upper arm circumference exceeded 31 cm, larger cuffs with a 15 × 35 cm bladder were used. A participant's BP was the average of the five consecutive BP measurements. Quality control measures were applied to ensure good quality measurement of BP by training observers to avoid odd readings, consecutive identical readings and zero end-digit preference. At intervals, these parameters were examined and, when significant deviations were observed, observers were retrained.
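As a minimal illustration of the MET-minute computation and WHO cut-off described above (a sketch only, not the study's instrument or analysis code; the activity categories and the 600 MET-minute threshold come from the text, everything else is hypothetical):

```python
# Sketch: total weekly physical activity in MET-minutes/week from reported
# minutes per activity type, classified against the WHO 600 MET-minute cut-off.

MET_VALUES = {
    "vigorous": 8,  # vigorous work or recreational activities
    "moderate": 4,  # moderate work or recreational activities
    "walking": 3,   # walking activities
}

def weekly_met_minutes(minutes_per_week: dict) -> float:
    """minutes_per_week maps activity type -> total minutes in the reported week."""
    return float(sum(MET_VALUES[kind] * minutes_per_week.get(kind, 0) for kind in MET_VALUES))

def classify_physical_activity(total_met_minutes: float) -> str:
    """High physical activity if >= 600 MET-minutes/week, otherwise low."""
    return "high" if total_met_minutes >= 600 else "low"

# Example respondent: 90 min vigorous, 120 min moderate, 60 min walking per week.
total = weekly_met_minutes({"vigorous": 90, "moderate": 120, "walking": 60})
print(total, classify_physical_activity(total))  # 1380.0 high
```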
Hypertension was defined according to the 2013 guidelines of the European Society of Hypertension/European Society of Cardiology as systolic BP ≥ 140 mmHg or diastolic BP ≥ 90 mmHg or self-reported treatment of hypertension with antihypertensive medications [17]. Data management and statistical analysis. Data were managed and analyzed using SAS software version 9.4 (SAS Institute, Cary, NC). We employed the Kolmogorov-Smirnov test to ascertain the normality of continuous variables. We used the mean and standard deviation as measures of central tendency and dispersion for normally distributed continuous variables. We further analyzed differences between the means of independent binary groups using the t-test. Proportions were used to express all categorical variables, and the differences between independent groups were analyzed using the chi-square test. We used logistic regression models to assess the relation of various socio-demographic and lifestyle factors with hypertension. Statistical significance was set at p < 0.05. --- RESULTS --- Characteristics of study participants Table 1 summarizes the characteristics of the study participants. Of 3782 participants, 1654 (43.7%) were men and 2128 (56.3%) were women. The majority (2483, 65.8%) of the participants were married, 1985 (52.5%) resided in rural areas, and 1280 (33.9%) had tertiary education. Hypertensive patients were older than their normotensive counterparts. On lifestyle, 1160 (30.7%) of the participants had low physical activity, 156 (4.1%) consumed tobacco and 1340 (35.4%) consumed alcohol. Only 3.2% of the study participants consumed both alcohol and tobacco, 8.1% were physically inactive and consumed alcohol, and 1.0% were physically inactive and consumed tobacco. --- Association between socio-demographic variables and hypertension Figure 1 shows the increasing odds of hypertension with age in women and men. Table 2 shows the association of other socio-demographic variables with hypertension. After adjusting for age and sex, and in comparison to unmarried status, being married (OR = 1.88, 95% CI: 1.41-2.50) or widowed (OR = 1.57, 95% CI: 1.05-2.36) was positively associated with hypertension. After stratifying by sex, being married remained significantly associated with hypertension in women (OR = 1.80, 95% CI: 1.19-2.74) and men (OR = 2.14, 95% CI: 1.43-3.21) (Fig. 2). Unemployment/non-paid work was positively associated with hypertension (OR = 1.42, 95% CI: 1.07-1.88), while living in an urban area was not significantly associated with hypertension (OR = 1.11, 95% CI: 0.96-1.28). Compared with no formal education, primary (OR = 1.44, 95% CI: 1.12-1.85), secondary (OR = 1.37, 95% CI: 1.04-1.81), and tertiary education (OR = 2.02, 95% CI: 1.57-2.60) were associated with hypertension. --- Association between lifestyle variables and hypertension Table 2 shows the association between lifestyle and hypertension. Low physical activity was associated with 23% higher odds of hypertension (OR = 1.23, 95% CI: 1.05-1.42). Also, alcohol consumption was associated with hypertension (OR = 1.18, 95% CI: 1.02-1.37). --- DISCUSSION The key findings of our study showed that some socio-demographic and lifestyle factors were associated with hypertension. As the age of participants increased, there was an increasing association with hypertension. Being married, widowed, unemployed/non-paid, having higher education, low physical activity, and alcohol consumption were significantly associated with hypertension.
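The analyses above were run in SAS 9.4; purely as an illustrative sketch of the kind of age- and sex-adjusted logistic model reported here, the same structure can be written in Python with statsmodels. All file and column names below (remah_subset.csv, sbp_mean, dbp_mean, on_treatment, and so on) are hypothetical placeholders, not the study's actual data layout.

```python
# Illustrative only: logistic regression of hypertension on socio-demographic
# and lifestyle covariates, with odds ratios and 95% CIs from the fitted model.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("remah_subset.csv")  # hypothetical analysis file

# Hypertension: SBP >= 140 mmHg, DBP >= 90 mmHg, or self-reported treatment.
df["hypertension"] = (
    (df["sbp_mean"] >= 140) | (df["dbp_mean"] >= 90) | (df["on_treatment"] == 1)
).astype(int)

model = smf.logit(
    "hypertension ~ age + C(sex) + C(marital_status) + C(education)"
    " + C(work_status) + low_physical_activity + alcohol_use + tobacco_use",
    data=df,
).fit()

# Exponentiate coefficients to obtain odds ratios with confidence intervals.
ci = model.conf_int()
or_table = pd.DataFrame({
    "OR": np.exp(model.params),
    "CI_lower": np.exp(ci[0]),
    "CI_upper": np.exp(ci[1]),
})
print(or_table.round(2))
```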
Over the years, there has been an increase in the burden of hypertension in Nigeria. A recent systematic review reported an increase from 8.2% in 1990 to 32.5% in 2020 [5]. A previous publication from the REMAH study found that the prevalence of hypertension was 38% [6]. Findings from a meta-analysis in Africa showed an estimated prevalence of 57% in an older adult population aged ≥50 years, which may indicate the increasing burden of hypertension with increasing age [3], just as our study noted an increasing association of hypertension with age. In our study, we found that marital status was associated with the prevalence of hypertension. Being married and widowed increased the odds of having hypertension by 88% and 57%, respectively, in both men and women. In contrast, previous studies in Iran [18] and Poland [19] observed that married men have lower BP than their unmarried counterparts. The authors suggested that married men had better sleep, less stress, better moods and a healthier diet compared with unmarried men [18]. The study in Iran reported that married women have higher BP than unmarried women. It has been reported that married women get stressed from taking care of their families [20]. A recent study in Ghana also explored the association of marital status with hypertension within sub-Saharan Africa [21]. Its findings showed that marital status was an independent risk factor for hypertension in Ghana for women but not for men, after controlling for lifestyle and socio-demographic factors. Our study showed a reduced but significant association between marital status and hypertension for both women and men after adjusting for age. Possible explanations for this association among married men and women may be related to the social causation hypothesis. Within the Nigerian context, marriage is seen as an achievement that may influence one's socio-economic conditions. With improved socio-economic status, there is a tendency towards purchasing foods away from home, which are more likely to be processed foods [22], with an increased risk of hypertension. Also, roles in the marriage could put more pressure on both women and men. In Nigeria, a married woman has to combine work with her domestic responsibilities of catering for her spouse and children [23]. A married man may have to take more responsibility to provide for the needs of his family [24]. All these may contribute to stress that can increase the risk of hypertension, thereby offsetting the potential emotional benefits of marriage. Another socio-demographic factor we found to be associated with hypertension was education. Educational attainment is said to be a strong measurable indicator of socio-economic status, and it is usually fixed after young adulthood [25]. Previous studies from developed countries have reported that lower education tends to increase the risk of having hypertension [26,27]. These studies found that higher education may influence better awareness of hypertension and dietary and occupational choices. However, our study observed a stronger association of tertiary education with hypertension. In the Nigerian context, attaining tertiary education may be linked with better occupational and economic opportunities and a tendency towards urban lifestyles such as sedentary living and eating unhealthy foods, as well as engaging in more work to pay bills. Physical inactivity is a growing concern as a risk factor for cardiovascular diseases, including hypertension, owing to increasing urbanization and the tendency towards sedentary lifestyles.
Fig. 1 Odds ratio of hypertension by age group in men and women. The x-axis represents age group (in years) and the y-axis the odds ratio; squares represent the odds ratios for men and circles those for women.
We found an association of low physical activity with hypertension in our study. Recent studies continue to emphasize the beneficial effects of physical activity in the prevention and control of hypertension [28][29][30]. The WHO suggests that policies to increase physical activity should aim to ensure that, among other measures, walking, cycling and other non-motorized forms of transport are accessible and safe for all [31]. In Nigeria, there is a plan for a national non-motorized transport policy with a focus on improving access for walking and cycling, as most Nigerian roads lack walkways, with pedestrians and cyclists sharing the roadway with motorized transport [32]. The current road architecture greatly discourages walking and cycling as forms of physical activity due to the dangers posed by motorized transport. Furthermore, our study reported an association of alcohol consumption with hypertension. It has been well established in the literature that alcohol consumption increases the risk of hypertension. A recent systematic review confirmed that reducing alcohol intake lowers BP in a dose-dependent pattern [33]. This emphasizes the importance of alcohol policies to reduce alcohol consumption. It has been reported recently that Nigeria has few alcohol-related policies, with weak multi-sectoral action and funding constraints for their implementation and enforcement [34]. These policies address the need to limit access to alcohol, although tax increases on alcohol and the prohibition of alcohol advertising were not addressed. With these policy gaps, there is a need for more attention to alcohol control by developing a comprehensive policy to regulate its harmful use. Our findings may be generalised to other countries of sub-Saharan Africa, as most countries within the sub-region are undergoing a demographic transition with implications for health. There is an ongoing population increase, with a growing aging population alongside a still-large young population [35]. Although there is rapid urbanization in most countries within the sub-region, physical infrastructure that encourages physical exercise is lacking in most cities. This, coupled with poor regulation of the consumption of alcohol and sugar-sweetened beverages, may help fuel the epidemic of hypertension in the region. One prominent strength of our study is its large sample size, with participants recruited from the six regions of Nigeria. Hence, the findings of our study may be used to plan interventions or policies for the prevention and control of hypertension among similar populations. The results of this study should be interpreted within the context of its potential limitations. Our study was cross-sectional and hence the findings do not imply causation in relation to socio-demographic/lifestyle factors and the prevalence of hypertension. A repeat BP measurement at least two weeks apart would have ensured a true diagnosis of hypertension according to the guideline. We, however, averaged five BP readings, which may closely approximate an individual's usual BP.
Furthermore, we deployed a standardized methodology to ensure good quality of BP measurement throughout the entire period of the survey so as to appropriately identify cases of hypertension. Digital devices may be considered in future studies to improve the quality of BP measurement. Also, some of the variables were assessed through participants' self-reporting, and this might have biased the findings of this study. Variables such as physical activity, tobacco use and alcohol consumption were prone to self-reporting bias, even though we employed trained research assistants to interview participants. In addition, tobacco use and alcohol consumption were not quantified; quantifying them might have revealed dose-response associations with hypertension in this study. Another key limitation in our study is the lack of data on participants' income, an important socio-demographic variable. The lack of data on income may have limited our findings on the association of socio-economic status and hypertension, as we used education as an indicator of socio-economic status in our study.
Fig. 2 Odds ratio of hypertension by marital status in men and women (adjusted for age). The x-axis represents marital status and the y-axis the odds ratio; unshaded bars represent the odds ratios for men and shaded bars those for women.
--- CONCLUSION In conclusion, we have reported the socio-demographic and lifestyle factors associated with the prevalence of hypertension in Africa's most populous country. Marriage, education, low physical activity, and alcohol consumption were significantly associated with hypertension. These may be associated with more cases of hypertension presenting to health facilities, with a rising burden of the disease. Hence, there is a need for counselling, health education and policy formulation and implementation targeting these factors to prevent and control hypertension. Nurses and community health extension workers should be trained on counselling in line with the task-sharing policy. Also, the plan for a national non-motorized transport policy in Nigeria, with a focus on improving access for walking and cycling, should be expedited by both federal and state governments. On alcohol consumption, there is a need for more attention to alcohol control through the development of a comprehensive policy to regulate its harmful use and to improve multi-sectoral action and funding for enhanced policy implementation. Future research efforts include the use of religious bodies to raise awareness of hypertension and to serve as a medium for counselling and health education on hypertension. The focus will be on the findings of this study, which include marriage, education, physical activity, and alcohol consumption. --- DATA AVAILABILITY The dataset used in this study is available from the corresponding author on reasonable request.
--- Summary table
What is known about this topic
• Socio-demographic and lifestyle factors have been reported to be associated with hypertension in some studies in high-income countries
• Most Nigerian studies focused on the prevalence of hypertension at subnational levels or within small populations
What this study adds
• We identified a higher prevalence of hypertension among married people and those with higher educational status among adult Nigerians
• Low physical activity and alcohol consumption were also associated with hypertension among adult Nigerians
--- AUTHOR CONTRIBUTIONS ASA was responsible for extracting and analysing data, interpreting results, drafting, revising and approving the final manuscript. BSC was responsible for extracting and analysing data, interpreting results, revising and approving the final manuscript. DN was responsible for interpreting results, revising and approving the final manuscript. JES was responsible for interpreting results, revising and approving the final manuscript. ANO was responsible for extracting and analysing data, interpreting results, revising and approving the final manuscript. --- COMPETING INTERESTS The authors declare no competing interests. --- ETHICAL APPROVAL The study was duly approved by the University of Abuja Teaching Hospital Human Research Ethical Committee. --- ADDITIONAL INFORMATION Correspondence and requests for materials should be addressed to Azuka S. Adeke. Reprints and permission information is available at http://www.nature.com/reprints. Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
With the rising prevalence of hypertension, especially in Africa, understanding the dynamics of socio-demographic and lifestyle factors is key in managing hypertension. To address existing gaps in evidence of these factors, this study was carried out. A cross-sectional survey using a modified WHO STEPS questionnaire was conducted among 3782 adult Nigerians selected from an urban and a rural community in one state in each of the six Nigerian regions. Among participants, 56.3% were women, 65.8% were married, 52.5% resided in rural areas, and 33.9% had tertiary education. Mean ages (SD) were 53.1 ± 13.6 years and 39.2 ± 15.0 years among hypertensive persons and their normotensive counterparts respectively. On lifestyle, 30.7% had low physical activity, 4.1% consumed tobacco currently, and 35.4% consumed alcohol currently. Being married (OR = 1.88, 95% CI: 1.41-2.50) or widowed (OR = 1.57, 95% CI: 1.05-2.36) was significantly associated with hypertension compared with being unmarried. Compared with no formal education, primary (OR = 1.44, 95% CI: 1.12-1.85), secondary (OR = 1.37, 95% CI: 1.04-1.81), and tertiary education (OR = 2.02, 95% CI: 1.57-2.60) were associated with hypertension.
comorbidities, making it more challenging to care for them (2). To account for this, the term comorbidity was coined to represent the occurrence of other medical conditions in addition to an index condition of interest (3). Such comorbidity relationships occur whenever two or more diseases are present in the same individual more often than by chance alone (4,5). Multimorbidity is associated with the risk of premature death, loss of functional capacity, depression, complex drug regimes, psychological distress, reduced quality of life, increased hospitalizations and decreased productivity. It is also linked to an economic burden for health-care systems and society (6,7). There is evidence linking comorbidity with social determinants of health (SDHs) such as cultural issues, social support, housing, demographic environment and SES. Together, these factors create a source of complexity and potential vulnerability for those facing them, many of whom are disadvantaged by living in socioeconomically deprived areas. The association between SES and the prevalence of multimorbidity has recently been established (8-12). These complex phenomena also have profound implications for the delivery of high-quality care for chronic health conditions and underscore the necessity of complex interventions to tackle multimorbidity (2,13-15). Likewise, the non-random co-occurrence of certain diseases differs across life-stages: the prevalence of multimorbidity increases with age, with 60% of events reported among 65-74 year olds, higher than the prevalence of each individual disease (16,17). Sex is also increasingly perceived as a key determinant of multimorbidity in cardiovascular disease (CVD). Although CVD has been seen as a predominantly male disease, owing to men's higher absolute risk compared with women, the relative risk of CVD morbidity and mortality in women is increased by several medical factors (diabetes, hypertension, hypercholesterolemia, obesity, chronic kidney disease, rheumatoid arthritis and other inflammatory joint diseases) (18,19). Moreover, serious multiple chronic conditions are more common in older women (older than 65) and may limit treatment alternatives (10,20,21). Multimorbidity presents many challenges, which may at times seem overwhelming. In such a scenario, evidence-based treatment guidelines, designed for single diseases, may lead to serious therapeutic conflicts (1). To circumvent such limitations, a personalized approach to medicine may benefit from the inclusion of ideas from a relatively recent field of research, generally known as systems biology, or network medicine when applied to humans. This approach offers the potential to decipher and understand the relationships between comorbidities at a much deeper level by considering coordinated instances (systems) rather than single conditions. A theoretical 'diseasome' framework has indicated that most human diseases are interdependent. This concept led to another, the human disease network (HDN), a graph in which two diseases are connected if they share some biological, genetic, metabolic or even socioeconomic element (1). The present study aims to analyze the patterns of cardiovascular-associated multimorbidity stratified by life-stage, sex and socioeconomic status.
Studying such patterns at a large scale will be useful both to discover trends helpful for public health planning and to provide additional clues for understanding the complex interaction between genetic/molecular, clinical and social/environmental determinants of cardiovascular diseases across different age/sex/SES strata. Here, we will expand upon the outcomes derived from the various comorbidity networks under consideration. These networks were constructed based on previously outlined criteria, taking into account structural characteristics arising from relationships between pairs of diseases. The focus will be on the mutual information (MI) shared by diseases (depending on the frequency of their joint presence, as we will see later), acting as an indicator of co-occurrence between two conditions. This approach offers an additional perspective for discerning comorbidity patterns across diverse networks. To pinpoint comorbidities potentially linked to sex and/or SES within each network, we will employ the Page Rank Score (PRS), a network indicator of overall influence. This scoring system enables the numerical identification of diseases with greater relevance within each network (1). By factoring in the MI between pairs of diseases, the PRS enhances precision, providing richer insights into comorbidity and multimorbidity phenomena in individuals. The main research question that we address in this work is: how does the interplay between life-stage and socioeconomic status influence comorbidity patterns in cardiovascular diseases among the Mexican metropolitan population, and what are the subtle associations and differences in comorbidity prevalence across age groups and socioeconomic strata? 2 Materials and methods --- Data acquisition (electronic health records) The National Institute of Cardiology 'Ignacio Chávez' (NICICH), one of Mexico's National Institutes of Health, is the reference hospital for specialized cardiovascular care in Mexico. The NICICH is also a third-level hospital receiving in-patients with related ailments such as metabolic, inflammatory and systemic diseases, whose treatment may involve immunology, rheumatology, nephrology, and similar specialities in addition to cardiology-related treatments (1). In this work we used the NICICH Electronic Health Record (EHR) database entries as recorded between January 1, 2011 and June 30, 2019. The EHR database contains information on socioeconomic factors as well as the main clinical diagnosis that led to hospitalization; it also reports other diseases, disorders, conditions or health problems that the individuals may present. The SES as recorded in the institutional file is a well-defined construct that involves the weighting of variables related to education, employment status, family monetary income, access to public services (water, electricity, drainage) and housing conditions (rural or urban). The EHR management procedures of the institution are set to provide up to five main comorbidities. The International Classification of Diseases, tenth revision (ICD-10) was used to identify and classify them. The full set of discharged hospital patients, with all types of diagnoses, ages, sexes and SES, was considered for the time period under study, with the exception of those with incomplete information or erroneous coding. The study population included 47,377 discharged cases. The cardiovascular comorbidities assessed included any disease registered in each case (see Figure 1).
--- Data processing (ICD-10 coding) Once the EHR data had been pre-processed into tabular format, disease and comorbidity relationships could be investigated. Mining, processing and cross-transforming ICD-10 data were performed using the icd (v. 4.0.9) R library (22) (https://www.rdocumentation.org/packages/icd/versions/4.0.9). While ICD codes are increasingly becoming useful tools in the clinical and basic research arenas, their use is not free of caveats and limitations (for a brief discussion of some of these in the context of current norms, please refer to the relevant paragraphs in the discussion section). --- Statistical analysis A database of 47,377 electronic health records (EHRs) was used as this study's corpus. The analysis was stratified by age and sex group. Descriptive statistics were used to summarize overall information. The chronic conditions with the highest prevalence, stratified by SES, and the number of chronic conditions associated with each disease were computed. --- Cohort stratification For the purpose of statistical and network analysis, patients were stratified based on age and sex. The age groups were defined as follows: the 0-20 years old age bracket has 9,782 individuals (20.65%), of which 4,921 are women and 4,861 men; the 21-40 years old range has 6,939 individuals (14.65%), split into 3,593 women and 3,346 men; the 41-60 years old range included 13,690 persons (28.90%), with 5,095 women and 8,595 men; 14,537 (30.68%) individuals comprised the 61-80 years old group, with 5,695 women and 8,842 men; lastly, the 81 years and older group had 2,429 (5.13%) registered patients: 1,187 women and 1,242 men. These strata were used to build the different comorbidity networks that will be presented and discussed later. --- Cardiovascular comorbidity network (CVCnetworks) Electronic health record data were processed using in-house developed code (in the R programming language) for the design and analysis of comorbidity networks, as previously reported (1). Programming code for this study is available in the following public access repository: https://github.com/CSB-IG/Comorbidity_Networks. Once the mining of the medical cases was carried out, a set of undirected networks (one network for each age/sex/SES bracket combination, see Subsection 2.4) was built based on the significantly co-occurrent diseases coded according to ICD-10. Briefly, the nodes in these networks are diseases identified by their respective ICD-10 codes. A link was drawn between two nodes whenever the corresponding diseases co-occur in the same person within the group more often than by chance alone (hypergeometric test, with a false discovery rate (FDR) multiple-testing correction, FDR < 0.05). The strength of each comorbidity association was determined using the MI calculated for each pair of diseases in the CVCnetworks, using a custom-made script (available at https://github.com/CSB-IG/ICD_Comorbidity/blob/main/Disc_Mut_Info.py) based on the mutual_info_score function of the sklearn.metrics Python package. --- Network statistics and visualization In network theory, one of the parameters used to evaluate the connections in a graph is the degree centrality (DC), the total number of links on a node or the sum of the frequencies of the interactions. The degree of a disease node is the number of ICD-10 codes associated with that disease.
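The authors' released code lives in the repositories linked above; the following is only a compact sketch of the pairwise procedure just described, under the assumption that each stratum's EHR data are available as a patients-by-ICD-10 indicator matrix (that layout and the function name are ours): a one-sided hypergeometric test per disease pair, Benjamini-Hochberg FDR control at 0.05, and mutual information as the edge weight of the resulting undirected network.

```python
# Sketch of comorbidity network construction for one age/sex/SES stratum.
from itertools import combinations

import networkx as nx
import pandas as pd
from scipy.stats import hypergeom
from sklearn.metrics import mutual_info_score
from statsmodels.stats.multitest import multipletests

def build_comorbidity_network(diag: pd.DataFrame, alpha: float = 0.05) -> nx.Graph:
    """diag: 0/1 DataFrame, rows = patients, columns = ICD-10 codes (assumed layout)."""
    n_patients = len(diag)
    pairs, pvals = [], []
    for a, b in combinations(diag.columns, 2):
        k = int(((diag[a] == 1) & (diag[b] == 1)).sum())  # observed co-occurrences
        if k == 0:
            continue
        n_a, n_b = int(diag[a].sum()), int(diag[b].sum())
        # One-sided test: probability of observing >= k overlaps by chance alone.
        pvals.append(hypergeom.sf(k - 1, n_patients, n_a, n_b))
        pairs.append((a, b))

    reject, _, _, _ = multipletests(pvals, alpha=alpha, method="fdr_bh")

    G = nx.Graph()
    for (a, b), significant in zip(pairs, reject):
        if significant:
            mi = mutual_info_score(diag[a], diag[b])  # edge weight = mutual information
            G.add_edge(a, b, weight=mi)
    return G
```

Each stratum from the cohort stratification above would yield one such graph.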
Aside from the node degree, a relevant centrality measure is the PRS (23), which captures the relative influence of a given node in the context of network communication. The Network Analyzer plugin (24) in the Cytoscape open-source network analysis suite was used to explore and visualize the networks (25), and the CytoNCA package was used to calculate further network centrality measures (26). The betweenness centrality (BC) measure is used to assess the relevance of a given condition in the context of a node's influence on global network information flow. Weighted network analytics, PRS calculations and visualization were performed using Gephi (27). In brief, MI will be used to assess the strength of comorbidity relations (i.e., a higher MI value represents a stronger comorbidity association between two disease conditions). PRS, on the other hand, will be used to assess the relevance of a given disease in the context of the comorbidity network given its vicinity (i.e., a higher PRS value represents a higher potential to become a multimorbid condition). In the context of this vicinity, we will often refer to the set of diseases directly connected to a given disease as its comorbidity nearest neighbors (CNNs). In this study, a double-circle layout visualization was implemented, where nodes were arranged according to their PRS in a counter-clockwise direction, and the top 10 highest-ranking diseases were placed in the center of the graph. Nodes were colored on a gradient scale from red (higher closeness centrality) to blue (lower closeness centrality). Additionally, node size was determined by betweenness centrality, with larger nodes indicating higher values. --- Results --- Cardiovascular comorbidity networks general results Comorbidity networks were built for the specific age/sex/SES strata as previously described, and a general topological analysis was conducted prior to a detailed analysis of each network. Table 1 presents the main topological features of these networks. By examining the connectivity and structural patterns, significant relationships can be identified, which will be discussed later (a set of tables containing the full connectivity information for all the networks can be found in the Supplementary Materials). The analysis of the various networks shown in Table 1 revealed that, overall, individuals with low SES exhibited a higher diversity of diseases, reflected in the larger number of nodes, often double or more compared to high SES networks. This phenomenon is attributed to health inequalities arising from constraints faced by this population, making them more susceptible to developing diseases not prevalent in high SES individuals, or to diseases manifesting and being treated differently owing to varying access to necessary resources, ranging from adequate nutrition to healthcare. The clustering coefficient showed notable uniformity across all networks, with the network corresponding to men aged 61-80 with low SES exhibiting the highest clustering level. This suggests a higher likelihood that individuals in this network, starting from an initial disease, will readily manifest other diseases within the network. While this observation does not sharply differentiate this network from the others, it raises concerns about disease interactions and, consequently, about treatment in terms of pharmacological interactions. The higher prevalence of disease diversity in low SES may be associated with the greater density observed in high SES networks.
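The paper computes these measures with Gephi, Cytoscape and CytoNCA; as a hedged equivalent in Python, the PRS and betweenness rankings can be reproduced with networkx on the MI-weighted graph from the previous sketch. Inverting MI to obtain the distances that betweenness expects is our assumption, not a step stated in the text.

```python
# Sketch: rank diseases by Page Rank Score (PRS) on the MI-weighted network,
# reporting betweenness centrality alongside, as in the double-circle layouts.
import networkx as nx

def rank_comorbidity_burden(G: nx.Graph, top: int = 10):
    # PageRank with MI edge weights: higher scores flag diseases with a
    # larger potential multimorbidity burden in this stratum.
    prs = nx.pagerank(G, weight="weight")

    # Betweenness treats weights as distances, so invert MI
    # (stronger comorbidity = shorter distance); this inversion is an assumption.
    for _, _, d in G.edges(data=True):
        d["distance"] = 1.0 / max(d["weight"], 1e-12)
    bc = nx.betweenness_centrality(G, weight="distance")

    ranked = sorted(prs, key=prs.get, reverse=True)[:top]
    return [(code, round(prs[code], 4), round(bc[code], 4)) for code in ranked]

# Example usage on a graph G built with build_comorbidity_network():
# for icd_code, prs_value, bc_value in rank_comorbidity_burden(G):
#     print(icd_code, prs_value, bc_value)
```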
This increased density in high-SES networks implies more interconnections among all the diseases they contain. However, this does not necessarily indicate a higher propensity for comorbidity in high SES individuals, as evidenced by a considerably higher number of connections in low SES, particularly among men over 80. Furthermore, the higher network centralization found in the graphs for the 0-20 years age range, with even greater centralization in high SES, may result from comorbidities influenced by factors related to birth. Additionally, there is a lower disease diversity in high SES, while the higher number of diseases in low SES diversifies the conditions centralizing comorbidity relationships. The average number of comorbidities, measured through the average number of neighbors in the various networks, tends to be higher at older ages compared to young individuals. This trend is more pronounced in men than women. Notably, the decrease in the average number of comorbidities in the population over 80 contradicts the existing literature. A closer analysis of the different CVC networks reveals that some pairs of diseases are prevalent as the most relevant comorbidities in more than one age/sex/SES group. Some of the most relevant of these pairs of diseases are shown in Figure 2. Understanding disease pairs that transcend demographic (e.g., age/sex/SES) boundaries may help us to provide a holistic view of health challenges and opportunities for intervention. It may contribute to more effective public health strategies and policies that consider the interconnected nature of diseases across diverse populations. Let us examine some of the implications. --- Cardiovascular comorbidity networks based on socioeconomic status in men and women aged 20 years or younger Upon visual inspection of the distinct networks depicted in Figure 3, discernible structural differences in their connectivity patterns come to light. A more nuanced examination of the network statistics could provide additional insights into shared features and commonalities. For example, Table 2 highlights the top 5 diseases with the highest comorbidity burden, as indicated by their respective PRS within the specified network.
FIGURE 2 Presence of disease pairs in different stages of life by sex and socioeconomic status.
FIGURE 3 Comorbidity networks for patients aged 0-20 years old, for both sexes in the low (LSES) and high (HSES) socioeconomic status. Nodes are ordered according to their Page Rank Score (PRS). High PRS nodes appear in the center. The size and color intensity (red implies higher values, blue lower values) of the nodes are also given by the PRS as a measure of relative importance in the network. The size and color of the edges represent the mutual information weight among disease pairs.
Several commonalities emerge among these highly comorbid diseases. Regardless of sex or SES, the notable presence of Other specified congenital malformations of the heart (Q24.8) and Generalized and unspecified atherosclerosis (I70.9) is observed. Additionally, Other forms of chronic ischemic heart disease (I25.8) appears in three out of four networks, the exception being men with high SES, where it ranks 9th according to its PRS. A similar scenario unfolds for Atrial septal defect, which moves to the 7th rank in men of low SES.
Notably, most highly prevalent and comorbid diseases in this age group exhibit a strong genetic risk component, likely explaining their consistently high rankings across all four networks, irrespective of sex or SES. Equally noteworthy, albeit for divergent reasons, are instances such as Unspecified cardiac insufficiency (I50.9), which holds a high rank solely among individuals of both sexes with high SES, and Chronic kidney disease, unspecified (N18.9), appearing exclusively in the top 5 for men and women of low SES. The association of unspecified chronic kidney disease with low SES in children and young adults (up to 20 years old) suggests a probable link to environmental factors. Consequently, we opted to investigate its network neighborhood. Intriguingly, robust comorbidity relationships with M32.1 Systemic lupus erythematosus with organ or system involvement were identified in networks corresponding to different age/sex/SES categories. It is pertinent to note that the presence of what has been termed lupus nephritis in children is well documented (28-31). Notably, lupus nephritis can be specifically reported using the ICD-10 code M32.14 Glomerular disease in systemic lupus erythematosus, rather than the more general code M32.1. Nevertheless, juvenile systemic lupus erythematosus (JSLE) has been reported as a more active disease in children and young adults, characterized by faster progression and worse outcomes, including progressive chronic kidney disease, compared to its adult-onset counterpart, leading to poorer long-term survival. Studies indicate that lupus nephritis may affect up to 50%-75% of all children with JSLE. Consequently, analyzing the comorbidity landscapes associated with concurrent N18.9 and M32.1 (or M32.14) may offer valuable insights for determining optimal diagnostic and therapeutic strategies to enhance patient outcomes. --- Cardiovascular comorbidity networks based on socioeconomic status in men and women aged 21-40 years Examination of the comorbidity networks for individuals aged 21-40 years, encompassing both sexes and SES, reveals similar trends in highly comorbid conditions as observed in children and young adults (aged 0-20 years). Noteworthy diseases, including Other specified cardiac arrhythmias (I49.8), Other forms of chronic ischemic heart disease (I25.8), Chronic kidney disease, unspecified (N18.9), and Other specified congenital malformations of the heart (Q24.8), consistently rank among the top 5 conditions with high PRS in their respective networks, irrespective of sex or SES (see Table 3 and Figure 4). It is evident that, up to this age bracket, the most highly morbid conditions are largely shared across different SES. Notably, Other and unspecified atherosclerosis (I70.9), which does not appear in the top 5 for women with high SES in Table 3, is nonetheless ranked 6th in that particular subgroup. --- Cardiovascular comorbidity networks based on socioeconomic status in men and women aged 41-60 years In examining the networks for the current age range (depicted in Figure 5), notable diseases consistently rank among the top five.
TABLE 2 Top 5 diseases with a higher comorbidity burden in networks for men and women patients of low and high SES aged 20 years old or less, as well as their PRS value.
FIGURE 5 Comorbidity networks for patients aged 41-60 years old, for both sexes in the low (LSES) and high (HSES) socioeconomic status. Visualization parameters are as in Figure 3.
--- FIGURE 4 Comorbidity networks for patients aged 21-40 years old, for both sexes in the low (LSES) and high (HSES) socioeconomic status. Visualization parameters are as in Figure 3. --- Consequently, a more in-depth analysis of these latter two diseases was undertaken. Regarding Unspecified heart failure (I50.9) in the low SES group, it maintains a substantial position, ranking eighth among both men and women based on their PRS. In contrast, Unspecified chronic kidney disease (N18.9) continues to feature prominently in the high SES group, ranking sixth among men and seventh among women according to their PRS. As no discernible SES differences were observed on the basis of the PRS alone, a first-neighbors analysis was conducted on the diseases listed in Table 4. This analysis considered the MI between pairs of diseases and examined the relationships forming between the diseases mentioned in the table. Notably, the relationship between Unspecified atherosclerosis (I70.9) and Unspecified chronic kidney disease (N18.9), sharing an MI of 0.018057, exhibited a distinction with respect to SES in the women's networks. --- Cardiovascular comorbidity networks based on socioeconomic status in men and women aged 61-80 years In this population, Table 5 highlights consistent representation of the same diseases among the top positions in all graphs, including Other forms of chronic ischemic heart disease (I25.8), Unspecified atherosclerosis (I70.9), Other specified congenital malformations of the heart (Q24.8), and Other specified cardiac arrhythmias (I49.8). Notably, Unspecified chronic kidney disease (N18.9) ranks sixth for high-SES men. Similarly, Unspecified rheumatic diseases of endocardial valve (I09.1) appears in sixth place for low-SES men and women across both strata. In the analysis of nearest neighbors, it was found that Other specified congenital malformations of the heart (Q24.8), consistently positioned in the networks from early stages of life, is linked to Unspecified rheumatic diseases of endocardial valve (I09.1) exclusively in low-SES men, sharing an MI of 0.015235 and occupying the fifty-second position among the relationships in this population. This phenomenon appears solely in this age range and in low-SES men (see Figure 6). For women, the relationship between Other specified congenital malformations of the heart (Q24.8) and Unspecified rheumatic diseases of endocardial valve (I09.1), absent in the present age range, is evident between 21 and 60 years old, exclusively in the low-SES group. --- Cardiovascular comorbidity networks based on socioeconomic status in men and women aged 80 and older In the population aged 80 and older, Table 6 reveals a consistent top three diseases across both sexes and SES (see Figure 7): Other forms of chronic ischemic heart disease (I25.8), Unspecified atherosclerosis (I70.9), and Other specified cardiac arrhythmias (I49.8). These conditions maintain their prominence throughout the lifespan of the study population, alongside Unspecified heart failure (I50.9) and Unspecified chronic kidney disease (N18.9). Regarding the latter two conditions, it is noteworthy that Unspecified heart failure (I50.9), absent among the top five diseases in the low SES group, ranks ninth for men and seventh for women in this stratum. On the other hand, Unspecified chronic kidney disease (N18.9), exclusive to men in the low SES group, is positioned nineteenth for men in the high SES group.
For women, it appears sixth in the low SES group and twelfth in the high SES group according to their PRS. As for the remaining two diseases, they exhibit a relationship that is exclusive to women in the low SES group within the present age range. In this co-occurrence, the pair ranks ninety-sixth out of 1,634 disease pairs, with an MI of 0.004605. It is noteworthy that, at the individual level, Acute transmural myocardial infarction of anterior wall (I21.0), which, according to the IPR (Importance Page Rank) analysis, is absent from the top five in women of high SES and men of low SES, takes the seventh place in the former case and the sixth place in the latter. Conversely, Unspecified rheumatic diseases of endocardial valve (I09.1) holds the thirteenth place of relevance for men of high SES. The relationship between Acute transmural myocardial infarction of anterior wall (I21.0) and Unspecified rheumatic diseases of endocardial valve (I09.1), observed in previous age ranges, is limited to the low SES group. Specifically, it appears only between the ages of 61 and 80 for men and from 41 years up to the present age range for women. --- FIGURE 6 Comorbidity networks for patients aged 61-80 years old, for both sexes in the low (LSES) and high (HSES) socioeconomic status. Visualization parameters are as in Figure 3. --- FIGURE 7 Comorbidity networks for patients aged 81 years and older, for both sexes in the low (LSES) and high (HSES) socioeconomic status. Visualization parameters are as in Figure 3. --- Discussion In this section, we will delve deeper into the outcomes derived from the various comorbidity networks, which were constructed based on the previously described criteria, considering the structural characteristics arising from relationships between pairs of diseases. The analysis incorporates mutual information as an indicator of co-occurrence between two diseases, offering a supplementary perspective for discerning comorbidity patterns within these networks. Shared pairs of diseases across several age/sex/SES strata (recall Figure 2) may provide information along several dimensions. The identification of comorbidities potentially associated with sex and/or SES in each network involved the utilization of the Page Rank Score. This numerical measure allows us to pinpoint diseases with greater relevance within each network (1). The PRS enhances precision by considering the MI between pairs of diseases, thereby providing more detailed insights into comorbidity and multimorbidity phenomena in individuals. --- Comorbidity networks: general observations In examining the disparities across the analyzed networks, a noteworthy observation emerges: individuals with low SES generally exhibit a greater diversity of diseases, often double or more, compared to their high-SES counterparts. This pattern suggests that individuals with low SES face health inequalities, making them more susceptible to a broader spectrum of diseases. These diseases may either not occur in high-SES individuals or manifest differently, influenced by varying access to essential resources for their care, ranging from nutrition to healthcare services (32-34). Moreover, the heightened diversity of diseases in the low-SES group may be linked to the greater density observed in high-SES networks. This increased density results in more connections between all diseases in high-SES networks.
However, despite the higher number of connections, individuals with low SES exhibit a significantly higher number of comorbidity relationships, as evident from Table 1. The finding of greater network centralization, particularly in the age range of 0-20 years, accentuated in high SES at these ages, may be attributed to specific comorbidities mediated by factors related to birth. The narrower diversity of diseases in high SES at these ages suggests distinct comorbidity patterns. Additionally, the greater number of diseases in low SES contributes to diversifying the conditions centralizing comorbidity relationships. Factors unique to low SES, such as overcrowding, nutrition, and structural conditions during infancy, expose individuals to different health challenges, potentially leading to varied patterns of comorbidity and multimorbidity in the short or long term (5,35,36). Similarly, our analysis revealed that the average number of comorbidities, measured by the average number of neighbors in the different networks, is higher in older age groups compared to younger individuals (refer to Table 1). However, this trend is more pronounced in men than in women, as indicated by our results. Notably, in the population aged 80 and older, a decrease in average comorbidities is observed, which contradicts the prevailing literature on multimorbidity phenomena (37-39). Among the most prevalent and clinically relevant conditions in various age groups, we consistently find Chronic kidney disease, unspecified (N18.9), Other specified congenital malformations of the heart (Q24.8), Other specified forms of chronic ischemic heart disease (I25.8), Heart failure, unspecified (I50.9), Unspecified atherosclerosis (I70.9), and Other specified cardiac arrhythmias (I49.8). This prevalence may stem from the interconnected nature of these conditions within the network, where several relationships involve equally significant diseases, influencing various physiological processes (1). However, it is crucial to note that the relevance of some of these conditions within the network may diminish or be absent in certain age groups, contingent on the SES of the patients, as we will discuss later. The observation that younger patients of low socioeconomic status are more likely to have comorbidities than older subjects of higher SES raises relevant questions. Some of these issues may be related to limited access to healthcare, social determinants of health in early life, nutrition and lifestyle factors, environmental factors, educational attainment and healthcare utilization patterns, among other constraints (40,41). Since individuals with lower SES often face barriers in accessing healthcare services, including preventive care and early diagnosis, this may result in undiagnosed or untreated health conditions, contributing to the development of comorbidities. Also, early childhood experiences and social determinants of health (SDHs), such as nutrition, access to quality education, and living conditions, significantly influence health outcomes later in life (42,43). Younger patients from low SES backgrounds may have experienced adverse childhood conditions that contribute to the development of health issues and comorbidities. Younger individuals with lower SES may have limited access to healthy food options, leading to dietary habits that increase the risk of conditions such as obesity, diabetes, and cardiovascular diseases (44,45).
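The per-stratum statistics discussed above (density, centralization and the average number of neighbors reported in Table 1) can be recomputed from the stratified graphs. The following is a minimal sketch under stated assumptions: it takes an undirected networkx graph per stratum, such as one built in the earlier sketch, and uses Freeman's degree-centralization formula, which is not necessarily the exact convention used to produce Table 1.

```python
# Minimal sketch (assumed inputs): per-stratum descriptive statistics for an
# undirected comorbidity graph - density, Freeman degree centralization and
# the average number of neighbors.
import networkx as nx


def network_summary(g: nx.Graph) -> dict:
    n = g.number_of_nodes()
    degrees = [d for _, d in g.degree()]
    avg_neighbors = sum(degrees) / n if n else 0.0
    max_deg = max(degrees, default=0)
    # Freeman degree centralization: sum of (max degree - degree_i),
    # normalized by the maximum possible value, attained by a star graph.
    centralization = (
        sum(max_deg - d for d in degrees) / ((n - 1) * (n - 2)) if n > 2 else 0.0
    )
    return {
        "nodes": n,
        "edges": g.number_of_edges(),
        "density": nx.density(g),
        "avg_neighbors": avg_neighbors,
        "degree_centralization": centralization,
    }


# Example: contrast two strata, e.g. men over 80 in low vs high SES.
# print(network_summary(g_men_lses_80plus))
# print(network_summary(g_men_hses_80plus))
```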
Living in socioeconomically disadvantaged neighborhoods can expose individuals to environmental factors that contribute to poor health outcomes. Environmental stressors, pollution, and lack of recreational spaces may impact the overall health of younger individuals from low SES backgrounds (46). In summary, a complex interplay of socioeconomic, environmental, and lifestyle factors contributes to the observation that younger patients of low SES are more likely to have comorbidities. These factors highlight the importance of addressing social determinants of health and implementing interventions that promote health equity and access to comprehensive healthcare services for all individuals, regardless of socioeconomic status. Let us examine these complex comorbidity patterns in more detail. --- Comorbidity networks in individuals aged 0-20 years In this initial age range, our analysis of the PRS initially highlighted two diseases that could be associated with low SES. First, Unspecified heart failure (I50.9), listed only among the top five in high SES according to Table 2, was revealed through a deeper analysis to maintain a prominent position in low SES, ranking among the top ten most important. This suggests that it is not exclusive to the high SES population (47). In contrast to Unspecified heart failure (I50.9), Chronic kidney disease unspecified (N18.9) predominantly affects men with low SES according to our results. There is evidence linking low SES to a predisposition to chronic diseases, including Chronic kidney disease unspecified (N18.9), either directly or as a consequence of preceding chronic diseases, with social determinants of health playing a crucial role (48,49). While further investigation is necessary, factors such as education may be related, suggesting that in this age range the influence of this factor could stem from the family nucleus where infants and adolescents develop (50,51). Additionally, habits related to nutrition and lack of physical activity can impact the development of conditions closely related to N18.9, such as obesity (52,53), a significant health issue in Mexico from an early age (54,55). It is worth noting that the presence of Chronic kidney disease unspecified (N18.9) in the early years of life is related to congenital malformations and glomerulopathies as the main known causes (56,57). These conditions may be linked to factors inherent to urbanization, overpopulation, and hygiene, which can negatively impact certain biological processes and increase the risk of developing these diseases (58). Given that Chronic kidney disease unspecified (N18.9) predominantly affects low-SES men in this age range, its comorbidities are likely to be specific to this population. Therefore, through an analysis of its first neighbors, we decided to explore its relationship with Systemic lupus erythematosus with organ or system involvement (M32.1), as they have a somewhat direct relationship (59). Moreover, M32.1 is also a disease more commonly found in low-SES men (60,61) according to our findings. Regarding this pair of diseases, lupus erythematosus tends to affect various vital organs, and although less frequent in children, it is more severe than in adults, with kidney disease present in 50%-90% of patients. Therefore, the close relationship between these conditions within the network is not surprising. The association of M32.1 with low SES may be attributed to the condition's multifactorial nature, involving genetic and environmental factors. Recurrent infections are also known risk factors for triggering the onset of the disease, and these types of infections are more prevalent in families where young children live with school-aged children. Such situations are characteristic of overcrowded environments where multiple families coexist, a scenario
more common among individuals with low SES. Premature or low birth weight babies, found more frequently in low-SES settings, are also significant factors in recurrent infections (62). Conversely, in women, Chronic Kidney Disease unspecified (N18.9) does not exhibit different impacts by SES, according to our data. This suggests that differences in its occurrence by SES may be less pronounced in this population, although further research is needed to confirm this. Additionally, the relationship with Systemic Lupus Erythematosus with organ or system involvement (M32.1) is present irrespective of SES. This may be related to the fact that women are more predisposed to developing M32.1. However, it's essential to consider that both diseases are linked to cardiovascular system issues, influenced by biological and lifestyle factors, aligning with the context of the data from which these networks were modeled. Therefore, other associated variables need to be considered to ascertain the significance of SES in the concurrent occurrence of these conditions in men. --- Comorbidity networks in individuals aged 21-40 years In the networks specific to individuals aged 21-40 years, we observe that although the same diseases maintain their top positions according to their Importance Page Rank, the relationships some diseases have with their first neighbors vary. An illustrative example is the case of Other specified congenital malformations of the heart (Q24.8), which, exclusively in low SES for both men and women, retains a first-neighbor relationship with Chronic kidney disease, unspecified (N18.9) and Other forms of chronic ischemic heart disease (I25.8), a relationship suggested in previous literature (64,63). The multifactorial etiology of Q24.8 as a congenital malformation implies potential influences from genetic and maternal factors during pregnancy, including maternal health conditions such as diabetes, hypertension, and obesity (67,65,66,68,69). On the other hand, N18.9 and I25.8 share common factors associated with chronic diseases, such as unhealthy diets and sedentary lifestyles, believed to be more prevalent in low SES populations (70)(71)(72). Thus, the shared social determinants of these three diseases in low SES, including the presence of unhealthy habits, a family history of chronic diseases, and limited access to healthcare, could contribute to their co-occurrence in this population. While further research is necessary to confirm this relationship, the current study highlights the significant association between the concurrent occurrence of these diseases and the mutual information they share in their respective networks. A more specific and noteworthy case pertains to the association between Chronic Kidney Disease, unspecified (N18.9) and Other specified chronic ischemic heart disease (I25.8), observed exclusively in the men network of low SES individuals within this age range. This relationship is characterized by a significant MI score of 0.021196, indicative of its clinical relevance as a comorbidity in this population (74,73). The co-occurrence of these two diseases is anticipated due to the well-established predisposition of kidney disease to cardiovascular conditions, with I25.8 being a notable example. Furthermore, the incidence of I25.8 is known to be ageand sex-related, with a lower likelihood of development among women of childbearing age due to the protective effect of sex hormones (75). 
In this context, an intermediate disease, Other specified congenital malformations of the heart (Q24.8), could partially explain the observed association between N18.9 and I25.8. However, additional research is necessary to confirm this hypothesis. Notably, the exclusive appearance of this comorbidity in the low SES men network is likely linked to shared structural and lifestyle factors discussed earlier. --- Comorbidity networks in individuals aged 41-60 years The most relevant conditions within the networks for high SES remain consistent in the top five positions for both men and women. There are striking similarities in the lower SES, where Chronic kidney disease, unspecified (N18.9) replaces Heart failure, unspecified (I50.9). Nevertheless, the latter remains among the top ten most important conditions (i.e., high Page Rank Score) according to our results. Hence, among the most significant differences observed in this age range, it is notable that men, irrespective of SES, exhibit a first-neighbor relationship among all diseases included in the top five networks for men in this age range (see Table 4). In contrast, women, as per our data, manifest different configurations contingent on their SES. Notably, only a direct relationship between Atherosclerosis, unspecified (I70.9) and Chronic kidney disease, unspecified (N18.9) is evident in women of high SES. This pairing becomes noteworthy because it is the sole age range where a discrepancy surfaces concerning sex and SES. It suggests that solely women of high SES exhibit this comorbidity at earlier ages than those of low SES (76), unlike men who experience this comorbidity in both SES. Regarding this pair of diseases, it is established that Chronic kidney disease (N18.9) tends to foster the development of cardiovascular diseases, including Atherosclerosis (I70.9), due to deficiencies intrinsic to renal deterioration and its association with the cardiovascular system. This association becomes more prominent in advanced stages of renal disease (78,77), underscoring the close relationship between both conditions. The differentiated appearance in women based on SES, affecting initially women of high SES, is a counterintuitive phenomenon. Typically, chronic diseases are anticipated to emerge earlier or with more substantial impact in low SES populations due to the interplay of various factors inherent in low SES (81,82,80,79). Further research is warranted to comprehensively understand this phenomenon. --- Comorbidity networks in individuals aged 61-80 years old Regarding this age range, in general, the diseases that occupy the top positions in each network remain constant, according to their PRS, presenting some changes in terms of their level of relevance according to Table 5. That being said, we analyzed how the most relevant diseases were organized with others, present in the aforementioned table with the highest PRS, which showed us that Rheumatic diseases of endocardial valve unspecified (I09.1) only appears significantly connected to Other specified congenital malformations of heart (Q24.8) in low SES men (83,84). While this association may be expected given that cardiac malformations could contribute to the development of I09.1, the exclusive appearance of this relationship in men of low SES is noteworthy. 
In addition to the presence of a cardiac malformation, other risk factors that are associated with I09.1, such as poor oral health and hygiene (85,86) or injectable drug use (87), may increase the likelihood of co-occurrence between these diseases in individuals of low SES. Studies have linked poor dental hygiene to low SES, which may result from limited access to education or health services (88-90). Likewise, low SES has been associated with higher drug consumption, possibly due to factors such as education, family background, place of residence, and social relationships (91). Injectable drug use has been specifically linked to poverty and unemployment, although this relationship requires further investigation (92). Taken together, although the joint presence of these conditions is influenced by biological factors, much of the weight in the occurrence of Unspecified rheumatic diseases of endocardial valve (I09.1) may fall on the circumstances that people experience throughout their development. This argues for comprehensive interventions in the treatment and care of patients with congenital malformations, interventions that focus not only on medical treatment but also on the care of vulnerable groups, in order to reduce the inequality gaps that could explain why both conditions affect more people with low SES (84,93). --- Comorbidity networks in individuals aged 80 years and older In this population, we found that, despite the fact that the most relevant diseases in the different networks remain constant, the relationships they form between them can differ. An example is the relationship between Acute transmural anterior wall myocardial infarction (I21.0) and Unspecified rheumatic valve diseases (I09.1), which appears only in low-SES women in this age range (93-95). Regarding this pair, the literature generally indicates that Acute transmural anterior wall myocardial infarction (I21.0) is a rare or infrequent complication in patients with Unspecified rheumatic valve diseases (I09.1), occurring in the acute phase of the disease, where coronary embolism is related to bacterial endocarditis, thus causing an acute myocardial infarction (96). This helps confirm that there may be biological processes involved in the co-occurrence of these two conditions, but it leaves their relationship with low SES unexplained, so it is necessary to continue investigating this topic and also to analyze why it appears more frequently in women since, as mentioned above (97), both diseases have a strong and well-positioned relationship according to our results. This could become more important if we take into account that mortality from Acute transmural anterior wall myocardial infarction (I21.0) increases directly with age (98), and, according to our results, both are very well-positioned diseases in the network of women over 80 years old according to their PRS (Table 6). --- Cardiovascular comorbidity in the context of social determinants of health Social determinants of health are the conditions in which people are born, grow, live, work, and age, and they play a crucial role in shaping health outcomes. These determinants are influenced by the distribution of money, power, and resources at global, national, and local levels.
The main SDHs include socioeconomic status, education, employment and working conditions, social support networks, healthcare access and quality, physical environment, social and economic policies, cultural and social norms, early childhood experiences and behavioral factors. Understanding and addressing these social determinants is essential for developing effective public health policies and interventions aimed at improving overall health and reducing health disparities (99). In the present context, the findings just presented point to some general trends. More specific issues may be found by using the comorbidity networks to navigate local hospital EHRs. It is relevant, however, to point out that in this work statistically significant associations are presented, but no causal or mechanistic explanations have been developed. Rather, our study aims to be a starting point for studying these, as well as a tool to inform hospital management and public health officials in planning and policy development. --- Summary of findings In what follows we summarize the more relevant, general results observed by examining the comorbidity and multimorbidity patterns. These global trends (in the context of our analyzed populations) may help contextualize the highly variable landscape of cardiovascular comorbidity presented in this study and available in the Supplementary materials (i.e., the whole set of CVC networks).
• Comorbidity networks in people aged 0-20 years old
  • Unspecified Heart Failure (I50.9)
    • Initially associated with high SES but remains important in low SES.
    • Indicates it is not exclusive to high SES populations.
  • Chronic Kidney Disease Unspecified (N18.9)
    • Primarily affects low-SES men.
    • Linked to low SES through factors like education, nutrition, and lifestyle.
• Comorbidity networks in people aged 41-60 years old
  • Chronic Kidney Disease Unspecified (N18.9)
    • Replaces Unspecified Heart Failure (I50.9) in the low SES top five.
    • Key comorbidity with Atherosclerosis (I70.9) in high-SES women.
--- Relation to other studies Mapping the comorbidity and multimorbidity landscape of cardiovascular diseases has been an issue of interest in the international medical and biomedical research community for some time. Different approaches, parallel and complementary to the one we have just presented, have been developed. These efforts span from the highly specific to the very broad. One quite relevant example of the latter is MorbiNet, a Spanish study that analyzes a very large population consisting of 3,135,948 adult people in Catalonia, Spain. This work also mined EHRs but focused exclusively on the relationship between common chronic conditions and type 2 diabetes (100). MorbiNet, like the present work, is a network-based approach; there, the authors build networks from odds-ratio estimates adjusted by age and sex, reporting associations such as pancreas cancer (OR: 2.4). Though their methods are in some sense similar to ours, there are some noticeable differences. Perhaps the most evident is that, due to the large scale of their study, they focus on common chronic diseases, somewhat regardless of the outcomes and mainly in relation to one (admittedly extremely important) condition, type 2 diabetes. Also, their networks are unweighted, meaning that every comorbidity relationship above the significance threshold contributes to the comorbidity landscape in a similar fashion, whereas in our case every comorbidity relationship is characterized by a mutual information value representing the relative strength of this association.
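To illustrate the methodological contrast just drawn, the sketch below, which is not taken from either study, computes both quantities for a single pair of hypothetical binary diagnosis indicators: a MorbiNet-style odds ratio kept only if statistically significant, which yields effectively unweighted edges, and the mutual-information weight used in the present work.

```python
# Minimal sketch (not from either paper): odds-ratio edge with a significance
# filter vs. mutual-information edge weight for one pair of diagnoses.
# `x` and `y` are hypothetical binary presence vectors for two diseases.
import numpy as np
from scipy.stats import fisher_exact
from sklearn.metrics import mutual_info_score


def pair_association(x: np.ndarray, y: np.ndarray):
    table = np.array(
        [
            [np.sum((x == 1) & (y == 1)), np.sum((x == 1) & (y == 0))],
            [np.sum((x == 0) & (y == 1)), np.sum((x == 0) & (y == 0))],
        ]
    )
    odds_ratio, p_value = fisher_exact(table)  # edge present only if p < threshold
    mi = mutual_info_score(x, y)               # edge weight = strength of co-occurrence
    return odds_ratio, p_value, mi


# Example with synthetic, positively associated indicators:
# rng = np.random.default_rng(0)
# x = rng.integers(0, 2, 1000)
# y = x & rng.integers(0, 2, 1000)
# print(pair_association(x, y))
```

The design difference matters downstream: with a significance filter alone, all retained edges contribute equally to the network, whereas MI weighting lets centrality measures such as the PRS distinguish strong from weak comorbidity relationships.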
Though not exactly a comorbidity analysis, the framework for studying cardiovascular diseases from the standpoint of network science presented by Lee and coworkers is worth mentioning (101). There, the authors establish a set of basic network theory principles that allowed them to examine disease-disease interactions, uncover disease mechanisms, and even enable clinical risk stratification and biomarker discovery. A similar approach is sketched by Benincassa and collaborators (102), though its scope is more limited, to uncovering disease modules. A hybrid network analytics/classical epidemiology approach is presented by Haug et al. (103). They analyzed multimorbidity patterns, representing groups of included or excluded diseases that delineate the health states of patients, in a population-wide analysis spanning 17 years and encompassing 9,000,000 patient histories of hospital diagnoses (a data set provided by the Austrian Federal Ministry for Health, covering all approximately 45,000,000 hospital stays of about 9,000,000 individuals in Austria during the 17 years from 1997 to 2014). These patterns encapsulate the evolving health trajectories of patients, wherein new diagnoses acquired over time alter their health states. Their study assesses age- and sex-specific risks for patients to acquire specific sets of diseases in the future based on their current health state. The population studied is characterized by 132 distinct multimorbidity patterns. Among elderly patients, three groups of multimorbidity patterns are identified, associated with low (yearly in-hospital mortality of 0.2%-0.3%), medium (0.3%-1%), and high in-hospital mortality (2%-11%), respectively. Combinations of diseases that significantly elevate the risk of transitioning into high-mortality health states in later life are identified. For instance, in men (women) aged 50-59 with diagnoses of diabetes and hypertension, the risk of entering the high-mortality region within one year is elevated by a factor of 1.96 ± 0.11 (2.60 ± 0.18) compared to all patients of the same age and sex, respectively. This risk increases further to a factor of 2.09 ± 0.12 (3.04 ± 0.18) if they are additionally diagnosed with metabolic disorders. This study is similar to ours in the sense that it was not limited to a particular diagnosis (though it only considered 1,074 codes from A00 to N99, grouped into 131 blocks as defined by the WHO, which excludes congenital diseases that are quite relevant for children and young individuals) and it was based on mining ICD-10 codes from the EHRs. Their emphasis, however, is different from ours, since they are more interested in patient trajectories, which describe the health state of a patient at different points in time, rather than in general trends useful for hospital management.
The validity of ICD codes to identify specific conditions depends on the extent to which the condition contributes to health service use, as well as the time, place, and method of data collection (108). Diagnostic accuracy tests of ICD-10 codes have been conducted to evaluate features such as sensitivity, specificity, positive predictive values (PPV), and negative predictive values (NPV) for specific major diagnoses, major procedures, minor procedures, ambulatory diagnoses, co-existing conditions, and death status. These studies have generally found good-to-excellent coding quality for ICD-10 codes in these areas (1). Given these considerations, when using these codes for clinical purposes, careful evaluation is necessary since the actual subjects of interest may not be accurately defined. This may be critical in the assessment of chronic conditions. Moreover, ICD codes perform better with sets of diseases enriched for frequent, well-known conditions. It is noteworthy that in the specific case of Electronic Health Records in the NICICH, the administrative database coding, archiving, and retrieval procedures have been certified and validated by the World Health Organization (WHO) through the local 'Collaborating Center for WHO International Classification Schemes -Mexico Chapter' (CEMECE, for its Spanish acronym). These procedures are in agreement with ISO 9001:2000, ISO/IEC 27001 certifications, and with the Official Mexican Norm (NOM for its Spanish acronym): NOM-004-SSA3-2012 (1). --- Concluding remarks In conclusion, the analysis of comorbidity networks across different age groups and socioeconomic status reveals interesting patterns in disease co-occurrence. There are consistent associations between certain diseases, and these associations may vary based on age and SES. Moreover, the presence of certain comorbidities differ between men and women and across different age and SES, as expected. Some diseases, such as chronic kidney disease and specific cardiac conditions, consistently appear among the most relevant comorbidities across age groups and SES. Additionally, the study highlights specific associations, such as the relationship between unspecified heart failure, chronic kidney disease, and systemic lupus erythematosus with organ involvement, which may have implications for diagnostic and therapeutic strategies. Notably, the findings also suggest that individuals with low SES tend to exhibit a greater diversity of diseases, potentially indicating disparities in health outcomes and access to healthcare resources. The importance of social determinants of health in shaping comorbidity patterns is evident, emphasizing the need for comprehensive interventions that address not only medical aspects but also social and environmental factors. Overall, this study provides valuable insights into the complex landscape of comorbidities, shedding light on how age, sex, and SES contribute to the interconnected web of diseases. Further research and ongoing investigation are crucial to deepen our understanding of these relationships and inform more targeted and effective approaches to healthcare and disease prevention. --- Data availability statement The data analyzed in this study is subject to the following licenses/restrictions: Data was taken from annonymized Electronic Health Records from the National Institute of Cardiology Ignacio Chavez. Data summaries are available upon request. Requests to access these datasets should be directed to [email protected]. 
--- Author contributions EHL conceived the project; EHL and MMG directed and supervised the project; MMG and EHL designed and developed the computational strategy; EHL, MMG, FRA and HACA implemented the code and database search procedures; MMG, HACA, FRA and EHL conducted the calculations and validation; MMG, HACA, FRA and EHL analysed the results. MMG and EHL wrote the manuscript. All authors contributed to the article and approved the submitted version. --- Conflict of interest The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. --- Publisher's note All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher. --- Supplementary material The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fcvm.2024.1215458/full#supplementary-material
Cardiovascular diseases stand as a prominent global cause of mortality, their intricate origins often entwined with comorbidities and multimorbid conditions. Acknowledging the pivotal roles of age, sex, and social determinants of health in shaping the onset and progression of these diseases, our study delves into the nuanced interplay between life-stage, socioeconomic status, and comorbidity patterns within cardiovascular diseases. Leveraging data from a cross-sectional survey encompassing Mexican adults, we unearth a robust association between these variables and the prevalence of comorbidities linked to cardiovascular conditions. To foster a comprehensive understanding of multimorbidity patterns across diverse lifestages, we scrutinize an extensive dataset comprising 47,377 cases diagnosed with cardiovascular ailments at Mexico's national reference hospital. Extracting sociodemographic details, primary diagnoses prompting hospitalization, and additional conditions identified through ICD-10 codes, we unveil subtle yet significant associations and discuss pertinent specific cases. Our results underscore a noteworthy trend: younger patients of lower socioeconomic status exhibit a heightened likelihood of cardiovascular comorbidities compared to their older counterparts with a higher socioeconomic status. By empowering clinicians to discern non-evident comorbidities, our study aims to refine therapeutic designs. These findings offer profound insights into the intricate interplay among life-stage, socioeconomic status, and comorbidity patterns within cardiovascular diseases. Armed with data-supported approaches that account for these factors, clinical practices stand to be enhanced, and public health policies informed, ultimately advancing the prevention and management of cardiovascular disease in Mexico.
INTRODUCTION Previous evidence consistently shows disadvantaged socioeconomic position (SEP) in childhood and adult life is associated with increased premature mortality risk. 1 However, the magnitude of the inequalities is likely context-specific and may therefore change across time. Evidence on these changes in the UK, however, is inconsistent. Inequalities in all-cause mortality by area-level measures of deprivation in adulthood appear to have increased from the 1980s to 2010s in Britain. 2 This contrasts with reports of narrowing inequalities over the same period by educational attainment 3 -observed trends may therefore be sensitive to the specific SEP indicator used. Existing studies investigating lifetime SEP and mortality associations have typically been limited to older cohorts (born in the 1930s-1950s), nonrepresentative samples, and are limited to single indicators of SEP-with childhood indicators recalled in adulthood. 1 The current study uses three comparable national British birth cohorts -born in 1946, 1958 and 1970-to investigate changes in inequalities in all-cause mortality risk across adulthood and early old age of three generations. The cohorts benefit from multiple prospectively ascertained SEP indicators. Previous evidence has examined the 1946 birth cohort in midlife 4 and found childhood SEP was associated with premature mortality risk. Given persisting inequalities in multiple diseases and other mortality risk factors across the studied period (1971-2016), 4 5 and the persisting inequalities in social and health outcomes in subsequent birth cohorts, 6 7 we hypothesised that inequalities, according to both childhood and adult SEP, in premature mortality would have persisted. --- METHODS --- Study design and sample We used data from three British birth cohort studies, which have reached midadulthood-born in 1946 (MRC National Survey of Health and Development (1946c)), 8 9 1958 (National Child Development Study (1958c)) 10 and 1970 (British Cohort Study (1970c)). 10 These cohorts have been described in detail elsewhere. 6 7 Analyses in the 1946c were weighted as this study consists of a social class-stratified sample. Participants were included in the current analysis if they were alive at age 26 years, had a valid measure of parental and/or own SEP and known vital status and date (from age 26 onwards). Paternal occupational social class at birth was used in 1958c and 1970c and at age 4 in 1946c (birth data were not used to avoid World War IIrelated misclassification); occupation was classified using the Registrar General's Social Class (RGSC) scale: I (professional), II (managerial and technical), IIIN (skilled non-manual), IIIM (skilled manual), IV (partly skilled) and V (unskilled) occupations. Maternal education collected at birth (1958-1970c) --- Mortality Death notifications were supplied from the Office for National Statistics and/or via participants' families during fieldwork. 11 12 --- Statistical analysis To aid cross-cohort comparisons, analyses were carried out across the following age ranges: 26-43 years (all cohorts), 26-58 years (1946c-1958c) and 26-70 years (1946c). For each SEP measure, cumulative death rates were calculated for each group. Cox proportional hazard models were used to estimate associations between each SEP indicator and all-cause mortality, following checks that the proportional hazard assumption held by calculating Schoenfeld residuals (online supplemental table S1). 
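As a concrete illustration of this modelling step, the following is a minimal sketch with assumed variable names rather than the authors' Stata code: it converts an ordinal SEP indicator to ridit scores (the scoring described in the next paragraph), so that the resulting hazard ratio can be read as a Relative Index of Inequality, fits a sex-adjusted Cox model, and applies a Schoenfeld-residual-based test of the proportional hazards assumption.

```python
# Minimal sketch (assumed variable names; not the authors' code): ridit-scored SEP
# in a sex-adjusted Cox model, with a proportional-hazards check.
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.statistics import proportional_hazard_test


def ridit(series: pd.Series) -> pd.Series:
    """Ridit score of an ordinal variable: cumulative proportion up to the
    midpoint of each category, giving values between 0 and 1."""
    freq = series.value_counts(normalize=True).sort_index()
    midpoints = freq.cumsum() - freq / 2.0
    return series.map(midpoints)


# df: one row per cohort member, with follow-up time from age 26 ("time"),
# a death indicator ("died"), numeric sex (0/1), and an ordinal SEP measure,
# e.g. paternal social class coded 1 (least disadvantaged) to 6 (most disadvantaged).
# model_df = df.assign(sep_ridit=ridit(df["paternal_class"]))[
#     ["time", "died", "sep_ridit", "sex"]
# ]
# cph = CoxPHFitter()
# cph.fit(model_df, duration_col="time", event_col="died")
# print(cph.hazard_ratios_)  # HR for sep_ridit approximates the RII
# proportional_hazard_test(cph, model_df, time_transform="rank").print_summary()
```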
Follow-up was from age 26 to date of death, or was censored at date of emigration or at the end of each follow-up period for those still alive (age 43, 58 or 70). To provide single quantifications of inequalities, all SEP indicators were converted to ridit scores, resulting in an estimate of the Relative Index of Inequality. Cohort differences were formally tested using SEP×cohort interaction terms. Models were adjusted for sex, and also conducted separately to examine whether findings differed by sex. To investigate whether associations of SEP across life and premature mortality were independent of each other and thus cumulative in nature, (1) mutually adjusted models were conducted including paternal and own social class, and additionally housing tenure, given the suggested importance of wealth 4; (2) a composite lifetime SEP score was used in models, by combining these two or three indicators together and rescaling. 4 Multiple imputation was conducted to address missing data in SEP indicators (N=481 (1946c), N=514 (1958c), N=1236 (1970c)); complete case analyses yielded similar findings. Ten imputed data sets were used. Finally, to investigate whether results were similar when examined on the absolute scale, models were repeated using logistic regression (dead/alive at the end of each follow-up period, with those who emigrated excluded); absolute differences in predicted probabilities of mortality were calculated. All analyses were conducted in Stata, version 16.0 (StataCorp LP, College Station, TX, USA). --- RESULTS More disadvantaged SEP in both childhood (paternal social class) and early adulthood (educational attainment, own social class and housing tenure) was associated with higher mortality risk, with 21 of 24 hazard ratios (HRs) being between 1.6 and 3.1 (figure 1). As anticipated, associations were least precisely estimated at 43 years, where there were fewer deaths, and in the 1946c, which is smaller than the 1970c and 1958c. Across each age period, HRs were generally larger in 1946c than in the two later-born cohorts, but the CIs for 1946c at younger ages were wide (figure 1 and online supplemental table S3; all cohort×SEP interaction term p values were >0.4). For example, HRs of early death from 26 to 43 years comparing most to least disadvantaged paternal social class were 2.74 (95% CI 1.02 to 7.32) in 1946c, 1.66 (95% CI 1.03 to 2.69) in 1958c and 1.94 (95% CI 1.20 to 3.15) in 1970c. Associations were weaker for maternal education as an alternative indicator of childhood SEP, particularly for 1946c (online supplemental table S4). Housing tenure in adulthood was also associated with mortality: renters, compared with homeowners, had a consistently higher risk of death; HRs from 26 to 43 years were 2.06 (95% CI 1.03 to 4.12) in 1946c, 1.30 (95% CI 0.87 to 1.94) in 1958c and 1.61 (95% CI 1.00 to 2.60) in 1970c. In models including both paternal and own social class, associations were typically partly attenuated, but generally both variables were still associated with premature mortality. Additionally, there was some evidence that composite lifetime SEP scores had larger magnitudes of association with mortality than each indicator in isolation (particularly in later periods of follow-up; online supplemental table S5a). Findings were similar when housing tenure was included in models (online supplemental table S5b).
Findings of persistent inequalities in premature mortality across each cohort were also found when examining on the absolute scale (online supplemental tables S6), and when conducted separately among men and women (online supplemental tables S7 and S8). There was suggestive evidence for stronger associations among females in the 1946c and among males in the 1970c. Figure 1 Associations between socioeconomic position and adult mortality risk: evidence from three British birth cohort studies. --- Short report DISCUSSION Despite declining mortality rates across the studied period (1971-2016), inequalities in premature mortality appear to have persisted and were consistently found for multiple SEP indicators in early and adult life. Our findings build on prior investigations which used 1946c but not younger cohorts 4 or repeated follow-up of adult cohorts 1 13 ; and seminal reviews which focus on area-based SEP indicators. 2 14 The persistence of inequalities, even in a period of marked changes to cultural, social, economic and population-wide health (eg, declines in CVD mortality rates) is suggestive of multiple time-depending pathways between SEP and mortality. 15 It is possible that, despite their overlap, each SEP indicator captures different pathways, resulting in their independent associations with mortality. For example, child SEP is associated with many mortality risk factors such as BMI independently of adult SEP, 6 and housing tenure may specifically capture wealth given the increasing value of housing in Britain -wealth is increasingly suggested to be an important healthrelevant SEP indicator. 16 The main causes of death within these cohorts were likely to have been cancers, coronary heart disease and unnatural causes. 17 Strengths of the study include the use of three large nationally representative studies, enabling long-run investigation mortality risk trends, and use of multiple SEP indicators across life. While we use multiple indicators of SEP, they are likely to be underestimates of socioeconomic inequality-wealth for example is only crudely approximated by home ownership, we lack comparable data on income and lacked power to investigate highest attained social class in midlife. Further, while RGSC is widely used in historic samples and official statistics (pre-2000), there is uncertainty in the criteria with which jobs were classified. While there were a small number of participants with missing outcome data, reassuringly the mortality rates in each cohort corresponded with the expected population at the time. 18 Our study was limited to all-cause mortality; however, trends in inequalities may differ by health outcome, for example, absolute inequalities in coronary heart disease appear to have narrowed in 1994-2008, 19 20 but inequalities in stroke remained unchanged. 20 Future studies with larger sample sizes are warranted to investigate trends in cause-specific premature mortality. Our findings reaffirm needs to address socioeconomic factors in both early and adult life to reduce inequalities in early-mid adulthood mortality. In contemporaneous and future cohorts, inequalities in premature mortality are likely to be significant barriers to a necessary component of healthy ageing: survival into older age. Twitter Meg Fluharty @MegEliz_. Contributors MEF, RH and DB were involved in the conception and design of the study; MF conducted the analyses and drafted the manuscript; and MEF, RH, GP, BP and DB revised the manuscript and approved for submission. 
Provenance and peer review Not commissioned; externally peer reviewed. Supplemental material This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peerreviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any error and/ or omissions arising from translation and adaptation or otherwise. Open access This is an open access article distributed in accordance with the Creative Commons Attribution 4.0 Unported (CC BY 4.0) license, which permits others to copy, redistribute, remix, transform and build upon this work for any purpose, provided the original work is properly cited, a link to the licence is given, and indication of whether changes were made. See: https://creativecommons.org/licenses/by/4.0/. What is already known on this subject <unk> Disadvantaged socioeconomic position in early and adult life is associated with increased premature mortality risk. Relative inequalities in all-cause mortality by area deprivation have increased from the 1980s to 2010s in England, Wales and Scotland. However, this contrasts with reports of narrowing relative mortality inequalities by educational attainment, and these differences in trends by SEP assessment suggest differences in social stratification according to different measures. Therefore, while there is a known association of SEP with mortality, there is little evidence on how different SEP indicators are associated with mortality risk, and how these associations have changed across time. --- What this study adds
Introduction Disadvantaged socioeconomic position (SEP) in early and adult life has been repeatedly associated with premature mortality. However, it is unclear whether these inequalities differ across time, nor if they are consistent across different SEP indicators. Methods British birth cohorts born in 1946, 1958 and 1970 were used, and multiple SEP indicators in early and adult life were examined. Deaths were identified via national statistics or notifications. Cox proportional hazard models were used to estimate associations between ridit scored SEP indicators and all-cause mortality risk-from 26 to 43 years (n=40 784), 26 to 58 years (n=35 431) and 26 to 70 years (n=5353). Results More disadvantaged SEP was associated with higher mortality risk-magnitudes of association were similar across cohort and each SEP indicator. For example, HRs (95% CI) from 26 to 43 years comparing lowest to highest paternal social class were 2.74 (1.02 to 7.32) in 1946c, 1.66 (1.03 to 2.69) in 1958c, and 1.94 (1.20 to 3.15) in 1970c. Paternal social class, adult social class and housing tenure were each independently associated with mortality risk. Conclusions Socioeconomic circumstances in early and adult life show persisting associations with premature mortality from 1971 to 2016, reaffirming the need to address socioeconomic factors across life to reduce inequalities in survival to older age.
Introduction Tuberculosis (TB) is the second leading cause of mortality in Sub-Saharan Africa and remains a major worldwide public health problem despite the discovery of highly effective drugs and vaccines [1,2]. The HIV/AIDS pandemic further exacerbates the burden of TB. For example, 23.1% of patients diagnosed with HIV/AIDS in Sub-Saharan Africa are reportedly co-infected with TB [1,3]. Unfortunately, patients with TB are at risk of poor mental health and lower health-related quality of life (HRQoL) [4]. For instance, between 40 and 70% of patients with TB suffer from various common mental disorders such as depression and anxiety [4][5][6]. Regrettably, patients with poor mental health are unlikely to adhere to treatment regimens, and this decreases treatment efficacy [6,7]. Further, non-compliance leads to the development of drug-resistant TB, which is expensive to treat and has an increased mortality rate [8]. Therefore, poor mental health perpetuates a vicious cycle of adverse health outcomes [5,6]. However, there is established evidence showing that patients who receive an adequate amount of social support (SS) are likely to have optimal mental health outcomes such as lower psychiatric morbidity [9] and increased HRQoL [10]. Social support is defined as the amount of both perceived and actual care received from family, friends and/or the community [11]. Furthermore, SS is an essential buffer against adverse life events (e.g. a diagnosis of TB), and higher SS leads to increased treatment adherence and improved treatment outcomes [12,13]. It can therefore logically be hypothesised that SS may improve the HRQoL of patients facing adverse life events such as TB. Unfortunately, there is a lack of evidence on the mental health of TB patients residing in low-resource settings such as Zimbabwe, yet the burden of the disease is quite high. The present study therefore sought to establish how SS influences the HRQoL of patients with TB in Harare, Zimbabwe. --- Main text --- Study design, research setting and participants A descriptive, cross-sectional study was carried out on adult patients with TB in Harare, Zimbabwe. Participants were conveniently recruited from one low-density suburb primary care clinic and two infectious disease hospitals. These three settings were selected as they have the highest catchment of patients with TB of varying socioeconomic status. Applying the following parameters: a TB prevalence rate of 28.2% (p = 0.282 and q = 0.718) [2], a 95% confidence interval, and an expected 10% of incomplete records, the minimum sample size according to STATISTICA software was 347. We recruited patients with a confirmed diagnosis of TB according to doctors' notes, aged ≥ 18 years, fluent in either English or Shona (a Zimbabwean native language), and with no other chronic comorbid conditions such as HIV/AIDS. --- Study instruments Social support and HRQoL were measured using the Multidimensional Scale of Perceived Social Support (MSPSS) and the EQ-5D, respectively. The MSPSS is a 12-item outcome which measures the amount of SS received from family, friends and a significant other [14]. The MSPSS-Shona version is rated on a five-point Likert scale with responses ranging from strongly disagree = 1 to strongly agree = 5, and the scores are interpreted such that the higher the score, the greater the SS [15]. The EQ-5D is a generic HRQoL outcome measuring participants' perceived HRQoL in the following five domains: mobility, self-care, usual activities, pain, and anxiety/depression [16].
The severity of impairments is rated on a three-point Likert scale, i.e. no problem, some problem and extreme problem. The responses are converted to a utility score which ranges from zero to one, with a score of one representing perfect health status. Respondents also rate their health on a linear visual analogue scale with a score range of 0-100; the higher the score, the higher the HRQoL [16,17]. The MSPSS and EQ-5D were selected for the present study as they are standardised, generic outcome measures with robust psychometrics, are very brief, and have been translated and validated in Shona [14][15][16][17]. --- Procedure Institutional and ethical approval for the study was granted by the City of Harare Health Council and the Joint Research and Ethics Committee for the University of Zimbabwe, College of Health Sciences & Parirenyatwa Group of Hospitals (Ref: JREC/362/17). This study adhered to the ethical principles of the Declaration of Helsinki. Participants were approached as they were waiting for services at the respective research sites, and recruitment was done over 4 consecutive weeks. The principal researcher explained the study aims, and interested participants were requested to give written consent before participating. The questionnaires were self-administered by the identified participants, and completed questionnaires were collected on the same day. --- Data analysis and management Data were entered into Microsoft Excel and analysed using SPSS (version 23), STATISTICA (version 14) and Stata (version 15). Normality was checked using the Shapiro-Wilk test, and participants' characteristics and EQ-5D and MSPSS outcomes were summarised using descriptive statistics. Correlation coefficients, Chi-square/Fisher's exact tests, analysis of variance (ANOVA) and t-tests were used to determine factors influencing patients' social support and HRQoL. Subsequently, patients' characteristics (age, marital status, educational level, employment status, perceived financial status and place of residence) and the MSPSS and EQ-5D were entered into the structural equation model (SEM) as endogenous and exogenous variables, respectively. The following parameters were set as minimum criteria for model fit: Likelihood Ratio Chi-squared Test (χ²), criterion value p > 0.05; Root Mean Square Error of Approximation (RMSEA), criterion value ≤ 0.06; Comparative Fit Index (CFI), criterion value ≥ 0.90; Tucker-Lewis Index (TLI), criterion value ≥ 0.90; and Standardized Root Mean Square Residual (SRMR), criterion value ≤ 0.06 [18,19]. --- Results The mean age of the participants was 40.1 (SD 12.5) years. Most patients were male (53%), married (57.8%), educated (97.3%), unemployed (40.7%), stayed in high-density suburbs (46.4%), stayed in rented accommodation (44.9%), stayed with family (74.4%), and reported less-than-average levels of income (51.5%). Further, as shown in Table 1, patients received the least social support from friends (mean 2.8, SD 1.2) and the most from family (mean 3.7, SD 1.0); frequencies of MSPSS responses are shown in Additional file 1. Patients frequently reported pain, anxiety and depression (see Additional file 2 for frequencies of EQ-5D responses), and the mean HRQoL (EQ-5D VAS) score was 51 (SD 18.1). The final model (Fig. 1) revealed that patients who received an adequate amount of SS had greater HRQoL, r = 0.33, p < 0.001.
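Returning briefly to the sample-size figure reported in the study design above, the short Python sketch below reproduces the calculation with a standard single-proportion formula. The 5% margin of error and the exact rounding convention are assumptions, since the text does not state them; only the prevalence (p = 0.282, q = 0.718), the 95% confidence level and the 10% allowance for incomplete records come from the study description, and whether STATISTICA applies exactly this convention is not confirmed.

```python
# Minimal sketch of a single-proportion sample-size calculation.
# Assumed: 5% margin of error (d) and ceiling rounding; the prevalence and the
# 10% allowance for incomplete records are taken from the study description.
import math

z = 1.96          # z-value for a 95% confidence interval
p = 0.282         # reported TB prevalence
q = 1 - p         # 0.718
d = 0.05          # assumed margin of error (not stated in the text)

n_base = math.ceil(z**2 * p * q / d**2)        # base sample size (~312)
n_adjusted = math.ceil(n_base / (1 - 0.10))    # inflate for 10% incomplete records

print(n_base, n_adjusted)  # prints: 312 347
```

Under these assumptions the adjusted figure matches the reported minimum of 347.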
Further, increased age, being unmarried, lower educational attainment, lower SES and residing in urban areas were associated with poorer mental health. The model displayed adequate fit: except for the likelihood ratio, most of the goodness-of-fit indices were within the acceptable thresholds (see Table 2), and the model accounted for 68.8% of the variance (see Additional file 3). --- Discussion The main finding of the present study was that patients who received an adequate amount of social support had greater HRQoL, which is congruent with previous studies [4,9,20]. However, patients reported lower HRQoL (mean EQ-5D VAS 51, SD 18.1) when compared with healthy urban dwellers residing in the same research setting, who previously reported a mean score of 77.5 (SD 17.4) [17]. The HRQoL outcomes were, however, similar to those of Zimbabwean patients with HIV/AIDS [21], which demonstrates the impact of long-term conditions on patients' HRQoL. Invariably, pathological processes/changes, e.g. persistent coughing, peripheral neuropathy, haemoptysis, fatigue and chest pain, and medication side effects such as excessive tingling sensations, have been reported to contribute strongly towards lower HRQoL [4,22]. Additionally, external/environmental factors such as cultural beliefs/myths and stigma are also likely to contribute towards depression, lower self-efficacy, and lower emotional well-being, which ultimately results in lower HRQoL [1,7,23,24]. Evidence from a systematic review evaluating the HRQoL of South African patients with TB suggests that psycho-social burdens, e.g. isolation and stigma, dramatically impact patients' HRQoL when compared to the effects of clinical symptoms [25]. This is unfortunate given that stigma precludes patients from receiving an adequate amount of SS [7,23,25]. Several studies concur that patients with greater SS are likely to promptly initiate diagnosis and treatment [24], comply with treatment regimens [12,13], and have lower psychiatric morbidity [9], which, in turn, leads to increased HRQoL [4]. Discrepancies in the amount of SS received from family and friends are suggestive of societal stigma and/or cultural influences. For instance, in the African context, it is often the responsibility of the immediate family and spouses to care for a sick relative [1]. This could explain differences in SS sources, as most participants were married. Further, the present study also demonstrated the impact of contextual factors on patients' mental health, as reported elsewhere [5,20,26]. For example, patients who were educated, formally employed and had higher levels of income had higher levels of SS and HRQoL. Patients with more financial resources are likely to afford specialist support services and medications with fewer side effects, and are thus likely to have higher HRQoL [27]. This sharply contrasts with more impoverished patients, who are likely to develop anxiety and/or depression because of financial pressure [24,28]. Malnutrition and non-compliance with treatment regimens (e.g. poor medication intake and failure to attend scheduled follow-up appointments), as well as a lack of funds for purchasing drugs and investigative tests, have been previously reported in patients residing in low-resource settings [1,7,24,27,28]. --- Conclusion The current study suggests that TB patients who receive a higher amount of social support are likely to have higher HRQoL in the Zimbabwean context.
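To make the model-fit screening described in the Data analysis section concrete, the brief Python sketch below checks a set of fit indices against the thresholds stated there (χ² p > 0.05, RMSEA ≤ 0.06, CFI ≥ 0.90, TLI ≥ 0.90, SRMR ≤ 0.06). The example index values are hypothetical placeholders; this is not the authors' code, nor their reported Table 2.

```python
# Minimal sketch: screen SEM fit indices against the thresholds named in the
# Data analysis section. The `fit` values below are hypothetical placeholders.
CRITERIA = {
    "chi_square_p": lambda v: v > 0.05,   # likelihood ratio test, p > 0.05
    "rmsea":        lambda v: v <= 0.06,
    "cfi":          lambda v: v >= 0.90,
    "tli":          lambda v: v >= 0.90,
    "srmr":         lambda v: v <= 0.06,
}

fit = {"chi_square_p": 0.03, "rmsea": 0.05, "cfi": 0.93, "tli": 0.91, "srmr": 0.05}

for index, passes in CRITERIA.items():
    status = "meets" if passes(fit[index]) else "fails"
    print(f"{index}: {fit[index]} {status} the criterion")
```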
Also, given that patients reported lower mental health, there is a need to develop and implement patient wellness interventions. Further studies should utilise longitudinal and qualitative study designs and recruit patients residing in rural areas to fully understand the mental health of Zimbabwean patients with TB. Efforts should also be made to formally validate mental health outcome measures in this population. --- Limitations Although this is the first large-scale study to evaluate the impact of SS on the HRQoL of tuberculosis patients in Zimbabwe, the study outcomes need to be interpreted with caution given the following limitations: • Participants and the research settings were conveniently selected. However, the settings represent the largest catchment areas of patients with TB in Harare. • The duration of TB diagnosis and treatment were not extracted, and these may have influenced the reported mental health. • Participants were only recruited from an urban setting, thus outcomes may not be generalisable to all Zimbabwean patients, given that more than 67% of Zimbabweans reside in rural areas [29]. • We only recruited participants who were proficient in either English and/or Shona; Zimbabwe is a multilingual country. However, the study instruments were only adapted, translated and validated in the Shona language. • The psychometric properties of the study instruments were not formally tested in patients with TB. • Although we applied SEM, causality could not be inferred given the cross-sectional nature of the data. • Confounding variables, such as the length of treatment and type of TB, among others, were not documented, and this may partly account for the 31.2% of the variance which was not explained by the final model. --- Availability of data and materials The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request. --- Additional files Additional file 1. Frequencies of responses on the MSPSS, N = 332. The table denotes frequencies of responses on the MSPSS, a 12-item social support outcome measure. Responses are rated on a five-point Likert scale, ranging from strongly disagree = 1 to strongly agree = 5. --- Additional file 2. Frequencies of responses on the EQ-5D, N = 332. The table denotes frequencies of responses on the EQ-5D, a generic health-related quality of life measure. Respondents indicate whether they had problems with self-care, usual activities, mobility, pain/discomfort and anxiety/depression on a three-point scale. Responses are rated as "no problem", "some problem" and "extreme problem". --- Additional file 3. Variance explained by the model. The table denotes the variance accounted for by the variables and the total model expressing the relationship between contextual factors, levels of social support and health-related quality of life.
Abbreviations ANOVA: analysis of variance; CFI: Comparative Fit Index; EQ-5D: EuroQol five-dimension scale; HIV/AIDS: human immunodeficiency virus/acquired immunodeficiency syndrome; HRQoL: health-related quality of life; TLI: Tucker-Lewis Index; MSPSS: Multidimensional Scale of Perceived Social Support; RMSEA: Root Mean Square Error of Approximation; SD: standard deviation; SEM: structural equation model; SRMR: Standardized Root Mean Square Residual; SS: social support; TB: tuberculosis. --- Authors' contributions CZ, MC, CT and JMD developed the concept and design of the study. CZ collected the data and drafted the first version of the manuscript with the assistance of DM. JMD conducted the data analysis and statistical interpretation, extensively revised the first version of the manuscript, prepared all prerequisite processes for article submission, submitted the manuscript and is the corresponding author. MC, CT and DM revised and contributed to the drafting/revision of the third and fourth versions of the manuscript in preparation for submission to the journal. All authors read and approved the final manuscript. --- Author details 1 Department of Rehabilitation, College of Health Sciences, University of Zimbabwe, P.O Box A178, Avondale, Harare, Zimbabwe. 2 Department of Psychiatry, College of Health Sciences, University of Zimbabwe, P.O Box A178, Avondale, Harare, Zimbabwe. 3 School of Health and Rehabilitation Sciences, Faculty of Health Sciences, University of Cape Town, Observatory, Cape Town 7700, South Africa. 4 Department of Psychology, University of Cape Town, Rondebosch, Cape Town 7701, South Africa. 5 Department of Physiotherapy, School of Therapeutic Sciences, Faculty of Health Sciences, University of the Witwatersrand, Johannesburg, South Africa. --- Acknowledgements The manuscript is a product of the manuscript writing and systematic review workshops facilitated by Dr. Helen Jack (Harvard University/Kings College London). Further, the manuscript is also a practical application of the Academic Career Enhancement Series (ACES) program led by Dr. Christopher Merritt (Kings College London). The senior author utilized the skills acquired through the ACES program in both thesis supervision and mentoring of the first author in producing the first draft of the manuscript. Statistical skills learnt from the data analysis workshops by Dr. Lorna Gibson and Professor Helen Weiss (London School of Hygiene and Tropical Medicine) were also fundamental in enhancing the senior author's statistical analysis and interpretation skills. --- Competing interests The authors declare that they have no competing interests. --- Consent for publication Not applicable, as the manuscript does not contain any data from any individual person. --- Ethics approval and consent to participate Ethical approval for the study was granted by the City of Harare Health Department and the Joint Research and Ethics Committee for the University of Zimbabwe, College of Health Sciences & Parirenyatwa Group of Hospitals (Ref: JREC/362/17). Participants were treated as autonomous agents and were requested to sign written consent before participation. Pseudonyms were used to preserve confidentiality, data were stored securely, only the researchers had access to the information gathered, and participants could voluntarily withdraw from the study at any time without any consequences. --- Publisher's Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Objective: Tuberculosis (TB) is the second leading cause of mortality in Sub-Saharan Africa and remains a major worldwide public health problem. Unfortunately, patients with TB are at risk of poor mental health. However, patients who receive an adequate amount of social support are likely to have improved health outcomes. The study was done to establish how social support influences the health-related quality of life (HRQoL) of patients with TB in Harare, Zimbabwe. Data were collected from 332 TB patients and were analysed through structural equation modelling. The mean age of the participants was 40.1 (SD 12.5) years, and most were male (53%), married (57.8%), educated (97.3%), unemployed (40.7%), stayed with family (74.4%), and reported less-than-average levels of income (51.5%). Patients received the greatest amount of social support from family. Patients also presented with lower HRQoL, as they frequently reported pain, anxiety and depression. The final model accounted for 68.8% of the variance. Despite methodological limitations, the study findings suggest that social support optimises patients' HRQoL. Given that patients presented with lower mental health, there is a need to develop and implement patient wellness interventions.
How do inner-city mothers in Belfast, particularly those raising young children in the first decade of the post-conflict era, negotiate a shifting normative landscape? How do they seek affirmation concerning the quality of their mothering in a changed context? These questions are explored in what follows through a focus on the social logic of maternal anxiety, the evaluative responses it generates (Burkitt 2012; Kemper 1978: 41), and its significance as a guide to mothers' orientations towards the neighbourhoods where they live in Belfast's inner city, which continues to be divided and strongly marked by sectarian hostilities, as well as sporadic violence (Shirlow 2008). This paper aims to contribute to our understanding of the significance of emotions for status claims. In so doing, social emotions, such as anxiety, are treated neither as psychic pathologies, nor as reflections of the strain of structural imperatives to conform (Barbalet 2001; Hochschild 1979). Instead, they are understood to be important aspects of claims for social recognition, that is, claims for verification of the actor's authoritative status (Crossley 2011; Honneth 1995; McBride 2013), what Weber described as a 'social estimation of honor' (1948: 186-7). Emotions are consequently understood as a central feature of agency, providing feedback to the self as a guide to further action (Burke and Stets 2009). Anxiety, for instance, is treated as a sign of insufficient power and status (Kemper 1978: 49), signaling actor unease both over the authority of specific actions (Denzin 2007; Kemper 1978; Lynd 1958), and more generally over the validity of the claim to be recognized as an authoritative actor. The classed and gendered character of maternal anxiety is examined in what follows through a focus on the relationship between the social dynamics of this emotion in the inner city, and what Hirschman (1970) identified as 'exit, loyalty and voice' types of attitudes, in this case towards segregated, multiply deprived residential neighbourhoods. What follows firstly considers the gendered character of contemporary parenting, and particularly the relationship between motherhood and anxiety, before examining the character of maternal recognition claims, notably through affirming boundaries of respectability and stigma, and through orientations to neighbourhoods. --- The Social Politics of Motherhood: Norms, Conflict, Anxiety Parenting remains a strongly gendered practice, with distinct social expectations attached to motherhood and fatherhood (Craig et al. 2014; Rose et al. 2015; Thomas and Hildingsson 2009). Doucet (2015) argues, drawing on feminist debates about the ethics of care, that parental responsibility, understood as a sense of obligation not only to provide practical care, but also to assume a generally attentive and responsive attitude towards those being cared for, remains largely gendered. This is despite changes in how caregiving tasks and time are shared between mothers and fathers (e.g. Kaufman 2013). As she argues, gendered parenting is not simply a matter of equally sharing household and care tasks, but more broadly reflects a 'state of mind', or orientation towards the role of parent, an effect of gender norms. Mothers continue to be positioned as the primary parent, both in law and in social life, with the consequence that motherhood tends to be associated with distinct emotional dynamics (Hildingsson and Thomas 2013; Warner 2006).
Anxiety in particular, an anticipatory emotion reflecting one's confidence in one's competence as an agent, is a significant aspect of motherhood, an effect of the gendered quality of social power and status (Kemper 1978: 66-7). Indeed, maternal anxiety is the focus of much sociological and psychological research (e.g. Glasheen et al. 2010; Hays 1996; Longhurst 2008; Warner 2006). By contrast, the lack of attention to a phenomenon of 'paternal anxiety' suggests that fatherhood may involve fewer anticipatory self-feelings and more 'consequent' emotions, those retrospective evaluations of one's specific actions, rather than of one's self (Kemper 1978: 49). This may explain why men are able to opt out of essential care-giving without compromising their sense of themselves as good, involved fathers (Craig 2006; Rose et al. 2015; Thomas and Hildingsson 2009). The effort to parent well against this background of gendered role expectations draws mothers in particular into what Scott et al. (1998) describe as ever-increasing 'risk anxiety' about their children. This involves endlessly monitoring and responding to perceived threats to safety, especially those posed to one's children from others and the wider environment, as well as those posed by one's children to other people, including other children (Scott et al. 1998: 689). Responsibility for assessing and preventing harms to children is increasingly borne by parents, rather than by experts and state agencies, as trust in these institutions has faded (Reich 2014; Warner 2006). Indeed, the effort to protect children from potential harm often focuses on sexual risk, as indicated by the emergence of 'stranger danger' education campaigns, as well as the politicisation of child sex abuse (Bell 2002; Lorentzen 2013). As Scott and colleagues argue, while the risk of sexual harm to children is actually posed primarily by familiar people rather than strangers, parental anxiety about 'stranger danger', and more recently paedophilia, although disproportionate to the risk, nevertheless does have a social logic (1998: 693). The moral norms associated with parenting tend to generate morally oriented actions as the role is activated in specific situations (Stets and Carter 2012: 124). The presence of unsupervised children in public places does tend to generate moral concerns about the quality of parenting (Wyness 1994). When parental duties are highlighted in this way, this guides the turn towards increased anxiety about child safety and surveillance of children's activities, however apparently disproportionate or irrational. The significance of gender as a source of unequal social status (Ridgeway and Bourg 2004) means that mothers are particularly susceptible to anxiety, as they feel the authority of their actions as competent parents to be continuously in question. Maternal anxieties and fears are neither primal nor entirely personal, but instead reflect the strains and recognition conflicts of the context where they take shape (Robin 2004: 11). Our interest in feeling at ease and claiming status as competent social actors involves the habitual effort to interpret signs of risk and danger successfully (Bourdieu 1977: 4; Goffman 1971: 249). Goffman argues that feeling at ease depends on being able to read and respond appropriately to relevant social cues, a skill which is only mastered through long-term familiarity with the context (1971: 249).
That this sort of competence is more practical than cognitive explains why people tend to feel most at ease not in the most objectively safe places, but in those places where they have greatest experience, where they are able to cope best with the world around them, namely their homes, neighbourhoods, schools and places of work (Warr 1990: 893-5). This takes on a particular intensity in contexts such as Belfast, marked by a history of intergroup hostility and violence (Shirlow and Murtagh 2006). Despite the peace agreement reached in 1998, and the subsequent establishment of a relatively non-violent society (Mac Ginty and du Toit 2007), segregation remains, and conflicting neighbourhoods are physically separated by 'peace walls' in some instances (Leonard and McKnight 2011). A sense of personal safety in those areas of the city most marked by sectarian violence depends on being able to 'tell' or read a person's ethno-nationality from indicators such as where they are located in public space, as the fear of breaching spatial boundaries tends to support ongoing segregation (Burton 1978). The detailed quality of this spatial segregation, which can change from one end of a street to another, is important in sustaining wider social divisions (Peach 2000). While often not immediately obvious to the casual observer, it nevertheless acts as a crucial, if imperfect, clue about the identity of those in specific places, especially when more overt signals, such as colour-coded clothing referring to the flags of one or other nationality, Irish or British, are missing. Feeling that one is doing a good job as a mother is not easy in such a context, and is distinct from mothering during times of conflict, where women are often raising children as lone parents, possibly with extended female family members to assist, following the deaths or executions of men. The struggle to simply survive, often in the face of fear, poverty, trauma, poor health and dispossession, tends to define mothering during periods of political conflict (McElroy et al. 2010; Robertson and Duckett 2007). Motherhood also tends to become explicitly politicized during conflicts, symbolizing collective struggle, sacrifice and hopefulness, whether in radical, nationalist or religious terms (e.g. Aretxaga 1997; Peteet 1997; Zaatari 2006). Mothering in post-conflict situations is distinct from this, as the emphasis turns towards securing long-term stability (e.g. Taylor et al. 2011). Nevertheless, the experience of violence does continue to influence post-conflict mothering (Merrilees et al. 2011). Women currently raising children in Belfast's inner city are no longer doing so in a situation where their husbands, fathers, brothers or boyfriends have been injured, killed or imprisoned. The intensity of wartime collective emotions, especially fear, anger and hatred, has abated to some extent, and the risks associated with sectarian activity, including long-term imprisonment, severe injury or death, have reduced. Thus, fear of possible direct involvement in violence has been replaced by a less focused set of anxieties (Barbalet 2001: 156), such as the worry that one's children may be exposed to or become involved in lower-level sectarian encounters or general anti-social behaviour (Taylor et al. 2011). Anxiety about the potential stigma of being perceived oneself as a bad mother, for instance by raising sectarian or 'anti-social' children, is also not insignificant.
The effort to feel that one is mothering well in post-conflict contexts tends to involve responding with caution to the possibility of outbreaks of inter-group hostility in everyday life, alongside the more 'ordinary' anxieties about threats from reckless motorists and sexual predators. Furthermore, when parents find themselves worrying not about risks from strangers or long-standing adversaries, but about the threat posed by 'anti-social' children and young people in their own neighbourhood, the job of parenting seems yet more difficult, and attitudes to the neighbourhood, whether those of loyalty and 'voice', or detachment and exit, are activated. --- Responding to Risk Anxiety: Status and Stigma, Exit and Voice What follows explores the ways in which inner-city mothers in Belfast experience and respond to typical anxiety about their ability to protect their young children from risk (Wyness 1994). These anxieties are somewhat intensified in Belfast by anxiety about sectarianism, particularly for those living in what are referred to as 'interface' areas, that is, those 'locations where Catholics and Protestants live side by side in mutually exclusive social worlds [...] in such a way that difference is sustained' (Leonard 2006: 227). The research focuses on residents of these segregated areas, which are characterised by multiple and high levels of deprivation (Northern Ireland Statistics and Research Agency 2010). The emphasis is on the quality of maternal anxiety and the evaluative responses to the social dynamics of these neighbourhoods that it generates (Kemper 1978: 47). --- The Study What follows draws on qualitative interviews carried out during 2009-10 with 39 Catholic and Protestant mothers of preschool-aged children living in segregated areas of inner north and east Belfast. The aim was to examine the everyday urban lives of mothers raising very young children in those areas of the city which had been central to decades of political conflict. The focus on perceptions of urban transformation meant that interviews did not gather detailed information about personal lives or family arrangements, but instead concentrated on perceptions of change in the experience of living in and moving about the inner city. Participants were recruited through voluntary and community organisations, including state-sponsored early years support centres, parent and toddler groups, and primary schools. Respondents were on average aged 26, with two children, at least one of whom was of preschool age (under four). Nine respondents combined mothering with paid work, six in part-time and three in full-time employment, typically in community, care, retail or catering jobs. Research material was gathered by a fieldworker who had both social proximity to and distance from the research context. While she had grown up in another troubled part of the city, her status as a middle-class university researcher generated a social distance which seemed to take priority over that of ethno-nationality. Respondents spoke to her principally as a fellow mother, albeit from a different generation and social class, and took little interest in her ethno-nationality. Multiple methods were adopted to maximise participation. These included non-participant observations, participant-directed photography, and semi-structured interviews with individuals (24), friendship pairs (five) and one group of four friends. Pair and group interviews were carried out with those who indicated a preference for this method, for both ethical and practical reasons.
Interviews mostly took place in community settings, often in spaces provided by gatekeepers. The material from interviews is the focus of analysis in what follows. --- Managing Anxiety: Status and Stigma I'm always scared of someone coming round and kidnapping them or something. (Laura) Anxiety about their own and their children's safety was commonly expressed by mothers in interviews. Concerns, typical of many urban contexts, focused on risks from traffic or sexual predators. The specific character of the place added a concern about keeping children away from sectarian rioting and police attention, and, above all, keeping them away from involvement with the 'anti-social' activities of young people in the neighbourhood. The post-conflict context has changed the quality of these specific risk anxieties. Molly, for instance, a Protestant, worried that her son would be subjected to paramilitary violence from one of the various loyalist organizations, which tend to operate like urban gangs, competing within and between themselves for power in specific areas (Hamill 2011: 140-141): It's not Protestants fighting with Catholics so much anymore, but if you mess with one person in an organisation that's it, you've got them all after you, you know. That would worry me about [my son] growing up... Nevertheless, mothers struggled to accurately evaluate potential threats to their children's safety in a changed, post-conflict context, and so to feel that they were good, responsible parents. The worry that children could be abducted by strangers echoes parental anxiety in other contexts, generated by repeated moral panics (Bell 2002; Pain 2008). Carol's anxiety prompted her to exercise more physical surveillance of her children's activities than she feels is common in her neighbourhood: I was born in this area [Catholic, Inner North] and I know a lot of people in it and I do feel safe in it and not as safe as I used to do and I don't feel safe for my children's point of view you know. [...] [N]ow they're allowed to play in the street in the summer, but I'm at the door like a stalker. Now I know they are young obviously anyway, but the rest of the parents in the street aren't like that, and in the other streets the kids are out running up and down [...] and I am like parked on the doorstep with a mug of tea watching them on their wee bikes, cos I don't feel safe for them with the traffic and I would always be afraid of somebody trying to put them into a car, and drugs is a big issue round here at the moment, which wasn't when I was growing up. Carol feels herself to be acting differently from her neighbours, whom she perceives as practising a 'free range', or what Annette Lareau describes as a 'natural growth', approach to parenting. This affords children a lot of spatial freedom and control over their time, in contrast with the 'concerted cultivation' approach adopted by middle-class parents, involving the detailed management of children's time and activities (Lareau 2003: 5). The more intensive approach that Carol adopts is important for her claim to be a good mother, protecting her children from harm as she allows them to play on the street. As her comments suggest, anxiety is a primary driver of her actions as a mother. Dawn's effort to manage her anxiety, so that she can allow her children a degree of independence, is not easy, and depends on setting up supervision networks, so that she can allow them more access to the world beyond the front door, a common parental strategy (Pain et al. 2005).
It also depends on making a recognition claim about the quality of her parenting, in comparison with mothers who allow their children to be 'street reared': ... I know people will say 'Oh mine go to the park [by themselves]'. Well if my kids need to go to the park, I'll take them myself. 'Oh I just let them go on down', and I say 'Aye, you're just too bloody lazy to take them yourself, that's why'. Now don't get me wrong, that's just my perspective. My kids aren't street reared, by no means. [My italics] The distinction Dawn draws between her own careful supervision and those mothers whose children are 'street reared' involves making a stronger claim than Carol for recognition of the 'respectable' quality of her role performance. As McLaughlin has argued, a concern with respectability 'appears to be particularly significant in situations where prestige through occupational attainment is difficult to achieve' (1993: 563). June similarly drew a distinction between her own area and a neighbouring housing estate, commenting that '[i]t's like kitchen reared here, [...] I just think there is a wee bit more decency [in comparison to the housing estate].' June is arguing here that children reared in their own kitchens in her area learn how to act 'decently', in contrast to the 'street reared' children of the housing estate, where attacks on strangers would not be unheard of: '... you have to be rough over there, there are stereotypes [...] to live up to.' In this way, June and Dawn manage their risk anxiety by claiming recognition for the quality of their mothering, in June's case through direct comparison with the housing estate. This contrast between 'street' and 'kitchen' child-rearing reflects similar distinctions found elsewhere. Mitchell and Green's working-class respondents in the North East of England distinguished between those children who play 'out the front' of their houses, on or in close proximity to the street, and those who play, more respectably, 'out the back', in a more secure and supervised context (2002: 16). The prevalence of these sorts of status claims is not incidental to the wider politics of parenting. As Skeggs argues, '[r]espectability embodies moral authority: those who are respectable have it, those who are not do not' (1997: 3). The claim to respectability, moral authority and consequently status here, articulated by Dawn in terms of a more intensive style of mothering than her neighbours seem to employ, and by June as a more 'decent', domestically focused style characteristic of her neighbourhood rather than specifically of herself, is an important anxiety management strategy which is caught up with broader responses to the 'hidden injuries' of class stigma (Sennett and Cobb 1972); the politicization of unsupervised children in public places; and the privatization of risk (Beck 1992). It isn't surprising then that Dawn's recognition claim depends on affirming this contemporary version of the moral character of parenthood. Such condemnations of the 'irresponsible' parent, whose 'street reared' children engage in anti-social behaviour, are caught up in what Goffman describes as the two-role process through which the 'stigmatized' and the 'normal' circulate in unrealized ways, prompting actors either to try and align themselves with one or other role, or to detach from the situation (1963b: 163-4).
That these are interaction roles, rather than simple characteristics of persons, means that participants in a situation can be perceived as performing one or other, regardless of how they might be perceived in other contexts. The potential stigma of being evaluated as a bad, irresponsible mother, as a result of the public behaviour of one's children, is an extremely painful experience, to be avoided if at all possible (Lynd 1958: 64). Feeling that one is regarded as a good mother appears to require responses to risk anxiety, for instance through surveillance and management of children's social interactions. Laura and Jessica, living in a Protestant area, regretted to some extent the loss of paramilitary control over public order: Years ago you wouldn't have got that much anti-social behaviour so you wouldn't have, but now it's just wild, cos they're getting away with it basically. [...] I think it's because all the paramilitaries have died down. Round here would be mainly UDA [Ulster Defence Association] and it's really died down now, where they can't, you know, maybe go and beat people, [...] and I think that's why the kids are running about going mad to be honest with you. (Laura) Karen, a Catholic, also worries about parents letting children 'go mad', as she puts it, in this changed context. For her, however, the rise in anti-social activity is caused not by a reduction in paramilitary control, but instead by a loosening of parental supervision in response to the ending of political violence. In Karen's view, mothers have abandoned their duty to protect children, in response to the emergence of peace, resulting in 'mad' anti-social behaviour. For some, anti-social activity has become the primary focus of maternal anxiety, as well as resentment: ...you see kids there, teenagers, and they're standing at the corner with joints and that in their hands, cannabis and stuff, and you're [thinking] like 'where's their mummies?' and stuff. Cos like, if my son had done it, like you'd murder [punish] him probably, though it probably does no better. But I would definitely not want him to go down that road, never. (Jade) The ambiguity Jade expresses about how to actually prevent young people from getting involved in drug taking doesn't detract from her strong conviction that good mothers should do their utmost to keep their children away from such activity, and that the mothers of these young people are failing in their duty to their children, as well as to their community. Parents of 'anti-social' children and young people are very much the focus of resentment and stigma, as they carry the blame for adding to the burden of respectable, responsible parenting in these neighbourhoods. The condemnation of bad parents is captured in a conversation amongst a small group of mothers whose children all attend the same school in a Protestant area: Kathy: ... it's different now, they're being cheeky to their own. They're terrorising their own people in their own areas. Vicky: Well my [son] was in bed at the weekend [...] and there was two kids from that area out at 3 in the morning, and I think one was thirteen and one was fourteen or fifteen, calling my one to see if he could get out. And one of them couldn't get into his own house... [...] Sharon: But where were their parents? A norm of parental responsibility is strongly affirmed here.
While Sharon and Vicky have teenage as well as pre-school-aged children, and are sharply aware of the difficulties of managing their behaviour, the group nevertheless agrees that good parents know where their children are, make every effort to keep them safe, and prevent them from becoming a threat to others. Failure to do this is regarded as injuring the wider community, including other parents, who then must intensify their efforts to prevent their own children from getting involved. The mothers here agree that parents are primarily responsible, over all other authorities, for the actions of their offspring, a poignant conclusion for Vicky in particular, whose teenage son had died as a result of a drug overdose. Nevertheless, the conversation confirms the normative expectation that the 'good' parent, implicitly the mother, bears responsibility for children's actions and characters. As Goffman reflects, 'stigma processes seem to [...] enlist [...] support for society among those who aren't supported by it' (1963a: 164). The dynamics of maternal anxiety in the post-conflict era have shifted to some extent, as political and sectarian violence has declined. Fear that children could get caught up in violence appears to have been transformed into a more generalized anxiety about potential exposure to a variety of safety risks. This anxiety can be understood as the 'emotional tone' (Elster 1989: 128) of competing social norms: firstly, that children should have more physical freedom than would have been possible during the Troubles, and at the same time, that good parents should protect their children from sexual predators, reckless motorists, and 'anti-social' young people. Maternal efforts to position themselves as 'normal' by reproducing stigmatising processes can be understood as an important strategy for responding to anxiety (Goffman 1963b: 163-4), and for claiming status recognition. --- Responding to Anxiety: Exit, Loyalty and Voice The perception of risks outlined above, particularly of children becoming involved in sectarian and/or anti-social behaviour, tends to result in what Hirschman (1970) famously described as either an effort to 'exit', and find somewhere more 'respectable' to live, or a 'voice' response, whereby those who either have few exit options, and/or who feel loyal to the community, try to manage their risk anxiety by engaging in activities intended to improve the quality of social life in the area. --- Exit The decision to exit inner-city neighbourhoods was not easy, involving a decisive move away from close-knit, face-to-face communities, where reciprocal social support and solidarity is a vital resource in the face of political conflict and multiple forms of deprivation (Shirlow and Murtagh 2006: 20-21). [...] sectarianism, for instance by sending her youngest son to a mixed play-scheme, she is doubtful that this will be enough to keep him, or his older sibling, safe from involvement. She views the cross-community contact that the children's play-scheme provides as valuable. However, she seems unsure that 'contact' schemes such as this, which aim to resolve inter-group tensions by bringing people from each side together in an organised way, offer a long-term solution, despite common claims that they do (see, e.g., Amir 1969; Hewstone and Brown 1986). In her effort to feel that she is mothering well, anxiety about sectarianism has taken priority. An important factor in Kylie's decision to exit is the continued presence of her ex-partner and his family in the area, which has reduced her sense of status and consequently her loyalty to the neighbourhood, making a move away more feasible by lowering the emotional costs of exit (Hirschman 1970: 78). Kylie's plan to exit the inner city was not common amongst the mothers in this study. The strong sense of belonging to an urban village, characterised by close and frequent interaction and support, particularly with extended family members, constituted a strong incentive to remain, despite commonly expressed risk anxiety. As Hirschman put it, 'loyalty holds exit at bay and activates voice' (1970: 78). Consequently, mothers commonly sought to exercise 'voice' to influence the social character of their neighbourhoods, an important way of claiming recognition for their responsible mothering. This may explain why Kathy claimed that 'the women are the voice of the community really'. --- Loyalty June had a similar attitude and sought to improve the quality of life in the inner city in quite direct ways. She had moved away from Northern Ireland as a young woman: I remember thinking [that] if I had children I didn't want them here, cos [...] you were forced to join all these paramilitary [youth organisations], [...] every other child used to be involved in it and you either had to become a Christian or get out of the country to get out of it. However, she had returned to live and raise her young family in the post-conflict era, and, among other things, had become involved in setting up a cross-community play scheme for toddlers in her area: We have people coming in every week and when people get to know this, that it isn't that big [paramilitary mural] on the wall, [...] whatever. It isn't that kind of hardness in the toddler group. [my italics] While somewhat surprised that women from diverse backgrounds were not put off by the prominent paramilitary murals on the external walls of the playgroup building, she nevertheless affirms the possibility of reducing the 'hardness' of sectarian attitudes through running such schemes for very young children. The second focus of 'voice' amongst these mothers was on local anti-social activity. During interviews, various efforts to provide young people with places to go and activities to get [...] The effort to do a good job as a mother, against a background of both community loyalty and risk anxiety, plays a crucial role in motivating these mothers' efforts to change the social dynamics of their neighbourhoods. As Kathy commented, 'I think if you're a parent too [...] you do want to [...] make this place a better place [...]. And not everybody wants to help, but there's that handful that say 'Well, we'll change the community', you know?' --- Conclusion The analysis presented here contributes to debates in sociology concerning the social significance of emotions, arguing that they do not simply indicate either a form of social conformity, or the strain of such conformity (e.g. Hochschild 2003), but that they are important aspects of interactive struggles for status recognition. Anxiety, understood as an indication of insufficient status and power, can provoke efforts to claim recognition for the former, in this context through the explicit affirmation of non-sectarian mothering. In other contexts which are free from sectarianism and a history of violent political conflict, claims for social status are likely to take a different form. The relative absence of alternative sources of power and status for those inhabiting Belfast's inner city makes non-sectarian mothering an important focus for making these recognition claims. Consequently, while Belfast mothers in the inner city, like their counterparts elsewhere, worry about risks posed to children from motor traffic passing through residential streets, or from sexual predators, the concern about protecting children and young people from sectarian or anti-social activity is intensely felt. The quality of parenting has become the focus of much attention, as the classed boundaries of respectability are reinforced and resentment builds against those who seem to let their children 'run mad'. For mothers raising young children in these circumstances, preferences about whether to remain or try to leave are not simply calculated in relation to objective measures of safety and danger. Instead, a sense of community loyalty, combined with the extent to which they feel at ease in these areas and able to claim recognition as good mothers, shapes attitudes to neighbourhoods. As Boal has argued, the combination of exit and voice responses to the social dynamics of residential segregation does reinforce social homogeneity (1976: 71). Although the 'voice' responses of women in this study tend to focus on non-sectarian mothering, for example by reducing the bitterness and hardness of inter-group attitudes, this may contribute towards softening the boundaries between communities. At the same time, a hardening in normative expectations concerning the moral duties of mothers is evident, not least through condemnations of those whose children grow up on the streets, rather than in their mothers' kitchens. --- Author Biography Lisa Smyth is a Senior Lecturer in Sociology at Queen's University Belfast. Her research focuses on the normative and interactive quality of social status, with a particular focus on gender and families. She is the author of The Demands of Motherhood: Agents, Roles and Recognition (Palgrave Macmillan, 2012), and Abortion and Nation: The Politics of Reproduction in Contemporary Ireland (Ashgate, 2005). She has also worked on the social politics of breastfeeding, abortion and sex education, as well as motherhood and social change.
This paper considers the social logic of maternal anxiety about risks posed to children in segregated, post-conflict neighbourhoods. Focusing on qualitative research with mothers in Belfast's impoverished and divided inner city, the paper draws on the interactionist perspective in the sociology of emotions to explore the ways in which maternal anxiety drives claims for recognition of good mothering, through orientations to these neighbourhoods. Drawing on Hirschman's model of exit, loyalty and voice types of situated action, the paper examines the relationship between maternal risk anxiety and evaluations of neighbourhood safety. In arguing that emotions are important aspects of claims for social recognition, the paper demonstrates that anxiety provokes efforts to claim status, in this context through the explicit affirmation of non-sectarian mothering.
The optimization of athletes' wellbeing has been increasingly considered essential in both the academic and practical fields of high-performance sport. Various organizations, such as the International Olympic Committee, have highlighted its importance, particularly mental health. Moreover, the increased attention to athlete wellbeing in sport policy debates at the national level has led to the development and implementation of support systems for athletes' mental wellbeing in some countries. Nevertheless, little literature is available for understanding the case of Japan. Interestingly, the Japanese-language literature on "athlete" and "wellbeing" published up to 2019 amounts to only 0.8% of that in English-language journals. Therefore, the purpose of this study was to identify (a) the current state of wellbeing of Japanese university student-athletes, (b) their level of knowledge about athlete wellbeing, (c) the athletes' perception of the availability of wellbeing support in the national sports federations, and (d) the athletes' experience of support services, and to develop the types of national support athletes expect and need from the government and national sports federations in the future. As a pilot study, a total of 100 Japanese university student-athletes (43 male, 57 female) from 17 Olympic and seven Paralympic sports completed an online survey. Consequently, the state of their wellbeing was self-perceived as good in all dimensions (i.e., physical, mental, educational, organizational, social, and financial). Moreover, the results showed low recognition of the term "athlete wellbeing" and a lack of knowledge of the availability and accessibility of appropriate support services. The results also showed that Japanese university student-athletes rarely seek help from experts, while 45% indicated having "no one" to talk to. Interestingly, however, most athletes considered each dimension of wellbeing important in relation to their performance development. Based on the results, it is necessary to develop an education program, guidelines, and detection --- Introduction The optimal and holistic development of athletes as human beings is considered important for them to achieve their maximum potential both in performance and in life after their athletic careers (Wylleman, 2019). Although participation in sports and physical activity benefits one's health and mental wellbeing in many ways (Biddle et al., 2015), pursuing excellence in high-performance sports is associated with various factors that may pose threats to the holistic wellbeing of athletes (MacAuley, 2012; Gouttebarge et al., 2019; Giles et al., 2020). Given those risks in a highly competitive environment, optimizing athlete wellbeing, particularly mental health, has received considerable attention in the academic, political, and practical fields of high-performance sport. The increase in interest might have been triggered by some high-profile athletes openly and publicly discussing their challenges with mental health and wellbeing (Heaney, 2021). In the period between 2018 and 2020, several sporting organizations published consensus statements on athletes' mental health, including the International Olympic Committee (IOC) (Moesch et al., 2018; Schinke et al., 2018; Gorczynski et al., 2019; Reardon et al., 2019; Van Slingerland et al., 2019; Henriksen et al., 2020).
At the same time, several national governments and sports organizations have conducted investigations and developed policies to guide national efforts to promote and support athletes' mental wellbeing at a system level (Canadian Olympic Committee, 2015; Department for Digital, Culture, Media and Sport, 2018; English Institute of Sport, 2019; Australian Institute of Sport, 2020; High Performance Sport New Zealand, 2021). To operationalize policy into practice, some leading countries have launched teams responsible for establishing and implementing national support systems and programs, mostly at their high-performance sports centers, as an integral part of athlete development. Those support frameworks appear to include some of the common approaches proposed by Purcell et al. (2019): (a) providing support for athletes to equip them with a range of skills to self-manage distress, (b) educating key stakeholders (e.g., coaches, science and medicine practitioners, support service providers, etc.) in a high-performance environment to better understand and respond to symptoms regarding mental health and wellbeing, and (c) establishing multi-disciplinary teams and/or professionals to better support and manage prevention of, and reaction to, athletes' problems with mental health and wellbeing. Despite the mounting literature and practical implementation of policies to support athlete wellbeing, there are several limitations associated with research on athlete wellbeing. First, the majority of research has focused on athletes' physical and psychological/mental wellbeing, even in the last 2 years (Biggins et al., 2020; Schary and Lundqvist, 2021; Jovanovic et al., 2022). Thus, little is known about athlete wellbeing from a holistic perspective. Furthermore, Giles et al. (2020) argued that evidence-based intervention in athlete wellbeing is limited due to methodological and conceptual issues. Lundqvist (2011) also claimed that "wellbeing is treated as an unspecific variable, inconsistently defined and assessed using a variety of theoretically questionable indicators" (p. 118). These methodological and conceptual issues associated with athlete wellbeing, therefore, make it difficult to carry out evidence-based interventions in practice (Giles et al., 2020). Moreover, as most studies have been conducted in Western countries, there is still little information available about other regions, including Asia (Reardon et al., 2019). Additional research would therefore contribute to knowledge in this area, particularly in developing support policies and frameworks that could be operationalized in practice. Japan earned 27 gold medals and 58 total medals at the Tokyo 2020 Olympic Games, placing it among the top three nations for gold medals, its best result ever. Since the development of sport became the responsibility of the government with the enactment of the Basic Act on Sport in 2011 (Ministry of Education, Culture, Sports, Science and Technology, 2011), the landscape of Japanese high-performance sport has changed dramatically at all levels, including policies, systems, structures, and programs. However, there had been little discussion about athlete mental health and/or wellbeing until the COVID-19 pandemic struck, resulting in the Tokyo 2020 Games being postponed by 1 year. In fact, Kinugasa et al. (2021) reported that only 14 articles on "athlete" and "wellbeing" were available in the Japanese language, only 0.8% of the number in English-language journals up to 2019.
However, more focus is gradually being directed toward athletes' mental health, that is, a state of mental wellbeing. For example, Tsuchiya et al. (2021) argued for the need to support athletes' mental health by reporting a positive correlation with psychological stress responses to COVID-19. To contribute to Evidence-Based Policy Making (EBPM) in the high-performance sports field, the Japan Sport Council (JSC) launched a new research group in social sciences at the Japan Institute of Sports Sciences (JISS), a part of the Japan High Performance Sport Center (HPSC) (Kukidome and Noguchi, 2020). Given the limited evidence available in the field of athlete wellbeing in Japan, the group initiated a research project to provide evidence to support policy development and operationalization in Japan: a pilot study with university student-athletes aiming to reveal (a) the current state of wellbeing of Japanese university student-athletes, (b) the level of knowledge about athlete wellbeing, (c) the student-athletes' perception of the availability of wellbeing support in the national sports federations, and (d) the student-athletes' experience of support services on wellbeing, and to develop the types of national support student-athletes expect and need from the government and national sports federations in the future. --- Materials and methods --- Participants The participants for the pilot study included 100 Japanese university student-athletes (43 male, 57 female) aged from 20 to 25 years (M = 21.3, SD = 1.2). The sample was limited to student-athletes who attended either undergraduate or postgraduate programs, belonged to their university's Athletic Department, and participated in sports that were official events of the Tokyo 2020 Olympic and Paralympic Games. The participants represented 18 Olympic sports (baseball and softball, basketball, athletics, volleyball, football, badminton, tennis, swimming, table tennis, archery, handball, judo, rhythmic gymnastics, rugby sevens, artistic gymnastics, karate, surfing, and water polo) and seven Paralympic sports (para-table tennis, para-badminton, para-swimming, para-archery, boccia, para-athletics, and para-judo). The participants were grouped into two categories: "elite" for those who had competed in international competitions representing Japan, including five serial medalists (36.0%), and "sub-elite" for the rest (64.0%). Eleven percent of the participants were carded athletes in national (n = 1), senior (n = 4), youth (n = 3), and junior (n = 1) categories, for less than 1 year (33.3%), 1-3 years (44.4%), and 4-6 years (22.2%). --- Measures Given that this pilot study was specifically designed as an initial investigation to capture the general trends of student-athlete wellbeing in Japan, with the aim of providing evidence for developing the support system within the country, the instrument was self-developed in the Japanese language. To maintain the holistic nature of wellbeing, we developed the instrument in accordance with the Holistic Athlete Career Model (Wylleman, 2019). To validate this 48-item instrument, we used the Delphi method (Hsu and Sanford, 2007) with eight psychologists and social scientists with an excellent understanding of athlete wellbeing. The instrument was resurveyed until the experts reached a consensus (100% agreement by the eight experts), and the content validity and feasibility of the instrument were verified through this process.
The reliability of the instrument was tested by administering the same instrument twice, within 1 week, to the same 38 respondents from the participant pool and calculating the intraclass correlation coefficient (ICC). Test-retest reliability of the instrument was found to be good (r = 0.7 ± 0.3) (Hopkins, 2000).
--- Demographic information
The measurement consisted of 11 items gathering demographic information about the participants, including gender, age, place of living, working/educational status, sport type, the number of years played in their main sport, organizational type, carded category, the number of years in their carded category, and their best performance record in their sport.
--- Awareness of and state of athletes' wellbeing
As "athlete wellbeing" is a relatively new concept in Japan, one item was included to gauge the level of awareness among student-athletes. In addition, seven Likert-scale items measured the state of wellbeing in each dimension (i.e., physical health, psychological health, balance with education and/or work, interpersonal relationships, organizational environment, financial security and stability, and legal security and safety). The state of wellbeing in each dimension was asked about over the past 3 years to account for the spread of COVID-19, which occurred mostly in 2020 in Japan, and a 5-point scale was used for most items (e.g., 1 = very good, 2 = somewhat good, 3 = not so good, 4 = not good at all, 5 = not sure). Furthermore, to take the degree of influence of COVID-19 into consideration, another seven items were added (e.g., Does COVID-19 have more influence on your wellbeing than before the pandemic?).
--- Influence and importance of wellbeing in relation to athlete performance
A total of 12 items were included to reveal student-athletes' perspectives on how much each dimension of wellbeing would influence performance and how important they perceived a state of wellbeing to be for their performance development. These items were scaled from 1 (very much) to 5 (not at all).
--- Availability, experience, and expectation of support services
Two items collected information about the availability of guidelines and support programs on athlete wellbeing and/or mental health within the national sports federations. A further 25 items investigated the student-athletes' experience of receiving support services related to their wellbeing. In addition, one item identified the level of expectation for the development of national support services by the government and/or national sports federations. These items were developed from the perspective of general service provision in relation to information, detection, proactive and/or reactive support services, tools, and networking.
--- Life satisfaction
Overall satisfaction with life was scored as in the national wellbeing and quality of life survey, on an 11-point scale from 0 (not satisfied at all) to 10 (very satisfied), to allow comparison of the participants' scores with those of the general population in Japan (Cabinet Office, 2018).
--- Procedures
Ethical approval for this study was granted by the authors' sports science institute ethical review committee (Reference #047) in accordance with the Declaration of Helsinki.
A written informed consent form describing the aim, methods, risks associated with participation, confidentiality considerations, and the data ownership and management procedures of the study was provided to the student-athletes before they filled out the web-based questionnaire. They could withdraw from participation at any time, even after they had agreed to take part in the study. After informed consent was obtained, the participants completed the survey using a web-based questionnaire system (Tokyo: Cross Marketing Group Inc.), taking approximately 15-20 min, on a confidential and voluntary basis. The survey was conducted from February to March 2021.
--- Analysis
Chi-square tests were used to determine the presence and magnitude of deviations from the expected distributions, with the significance level α set at 0.05. Correlation analysis was applied to identify relationships between items, using the following magnitude thresholds: < 0.1, trivial; 0.1-0.3, small; 0.3-0.5, moderate; 0.5-0.7, large; 0.7-0.9, very large; and 0.9-1.0, almost perfect (Hopkins et al., 2009). The Statistical Package for the Social Sciences (SPSS) for Windows version 27 (Armonk, NY: IBM Corp.) was used for these analyses. Welch's t-test was conducted for group comparisons using RStudio statistical computing software version 1.4.1717 (Boston, MA: RStudio), again with the significance level α set at 0.05. Uncertainty in the true (population) effects was expressed as 90% confidence limits.
--- Results
--- The current state of student-athlete wellbeing
The state of the participants' wellbeing over the past 3 years was perceived as somewhat good in all dimensions: physical (M = 1.91, SD = 0.94), mental (M = 2.05, SD = 0.97), educational (M = 2.10, SD = 1.07), organizational (M = 2.42, SD = 1.26), social (M = 2.06, SD = 1.04), financial (M = 2.19, SD = 1.06), and legal (M = 2.53, SD = 1.36). Among the seven dimensions, legal wellbeing received the highest mean score, indicating a relatively less favorable self-evaluation, whereas physical wellbeing received the lowest mean score, indicating a relatively favorable one. The correlations between the overall satisfaction with life scores and the seven dimensions of wellbeing were not significant for the sample as a whole (p > 0.05). However, when examined by group, the relationships between overall satisfaction with life and the wellbeing scores showed some significant associations (Table 1). In particular, moderate and small correlations were observed between overall satisfaction with life and the organizational and financial dimensions in the elite group only (r = -0.51, p = 0.001; r = -0.36, p = 0.031, respectively). No significant correlations were observed in the sub-elite group. Furthermore, based on chi-square tests between the states of wellbeing and the independent variables, no significant differences were found for gender, place of living, or Olympic versus Paralympic sports. Performance level, however, showed significant differences in the organizational (p = 0.002), financial (p = 0.004), and legal (p = 0.004) dimensions of wellbeing. Compared with the sub-elite group, the elite athlete group was more likely to indicate "not good at all" for organizational wellbeing (p = 0.02) and "not so good" for social wellbeing (p = 0.01), but "somewhat good" for legal wellbeing (p = 0.01).
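The group comparisons reported here rely on the procedures listed in the Analysis subsection. Purely as an illustration, and not as the authors' actual analysis code (the paper's analyses were run in SPSS and RStudio), the following minimal Python sketch reproduces the same kinds of computations on synthetic data: an ICC for the test-retest check, Welch's t-test with a 90% confidence interval for the mean difference, a Pearson correlation graded against Hopkins' magnitude thresholds, and a chi-square test of association. All values, group sizes, and variable names in the sketch are hypothetical, and ICC(2,1) (two-way random effects, absolute agreement) is shown only as one common choice, since the specific ICC form used in the study is not reported.

```python
# Illustrative sketch only (not the authors' scripts). Synthetic data stand in
# for the survey responses.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)


def icc_2_1(test, retest):
    """ICC(2,1): two-way random effects, absolute agreement, single measurement."""
    x = np.column_stack([test, retest]).astype(float)  # n subjects x k = 2 occasions
    n, k = x.shape
    grand = x.mean()
    ss_total = ((x - grand) ** 2).sum()
    ss_rows = k * ((x.mean(axis=1) - grand) ** 2).sum()  # between-subject variation
    ss_cols = n * ((x.mean(axis=0) - grand) ** 2).sum()  # between-occasion variation
    ms_r = ss_rows / (n - 1)
    ms_c = ss_cols / (k - 1)
    ms_e = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)


def welch_ci(a, b, conf=0.90):
    """Confidence interval for the difference in means under Welch's t-test."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a.mean() - b.mean()
    va, vb = a.var(ddof=1) / len(a), b.var(ddof=1) / len(b)
    se = np.sqrt(va + vb)
    df = (va + vb) ** 2 / (va**2 / (len(a) - 1) + vb**2 / (len(b) - 1))  # Welch-Satterthwaite
    t_crit = stats.t.ppf(0.5 + conf / 2, df)
    return diff - t_crit * se, diff + t_crit * se


# Test-retest reliability on 38 hypothetical respondents answering a 5-point item twice.
first = rng.integers(1, 6, 38)
second = np.clip(first + rng.integers(-1, 2, 38), 1, 5)
print(f"ICC(2,1) test-retest: {icc_2_1(first, second):.2f}")

# Welch's t-test comparing hypothetical life-satisfaction scores (0-10) by group.
elite = rng.normal(5.4, 2.0, 36)
sub_elite = rng.normal(5.9, 2.0, 64)
t_stat, p_val = stats.ttest_ind(elite, sub_elite, equal_var=False)  # Welch's t-test
lo, hi = welch_ci(elite, sub_elite, conf=0.90)
print(f"Welch t = {t_stat:.2f}, p = {p_val:.3f}, 90% CI for mean difference [{lo:.2f}, {hi:.2f}]")

# Pearson correlation between two hypothetical scores, graded with Hopkins' thresholds.
wellbeing = rng.normal(2.4, 1.2, 36)
r, p_r = stats.pearsonr(elite, wellbeing)
labels = ["trivial", "small", "moderate", "large", "very large", "almost perfect"]
magnitude = labels[int(np.digitize(abs(r), [0.1, 0.3, 0.5, 0.7, 0.9]))]
print(f"Pearson r = {r:.2f} ({magnitude}), p = {p_r:.3f}")

# Chi-square test of association between performance level and a categorical rating.
counts = np.array([[10, 18, 8],    # hypothetical elite responses per category
                   [30, 24, 10]])  # hypothetical sub-elite responses per category
chi2, p_chi, dof, _ = stats.chi2_contingency(counts)
print(f"Chi-square = {chi2:.2f}, df = {dof}, p = {p_chi:.3f}")
```

In practice, data frames of the actual item responses would replace the synthetic arrays; the rest of the pipeline would stay the same.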
Interestingly, only the sub-elite athlete group expressed uncertainty (i.e., "not sure") about their wellbeing in the organizational (p = 0.03), social (p = 0.004), financial (p = 0.04), and legal (p < 0.001) dimensions. There was no significant difference in the overall satisfaction with life score between the elite and sub-elite athlete groups [p = 0.26 (90% confidence limits -1.47 to 0.27)].
TABLE 1. The relationship between the overall satisfaction with life and wellbeing scores of the participants in the elite athlete group (represented Japan in senior competition at the international level) and the sub-elite athlete group (competed at the national level).
Given that this study was conducted in early 2021, the influence of COVID-19 on the participants' wellbeing was also examined. The COVID-19 pandemic was perceived to have had an impact on the state of student-athletes' wellbeing to some degree, as approximately half of the participants indicated being either greatly or somewhat influenced in the physical (57%), mental (61%), educational (52%), organizational (48%), social (48%), financial (49%), and legal (44%) dimensions.
--- Athletes' perception of the influence and importance of wellbeing for performance
--- Influence on their performance
About half of the participants considered that their performance was greatly or somewhat influenced by physical (56%), mental (53%), educational (50%), social (47%), financial (42%), and legal (38%) wellbeing (Table 2). Moreover, significant differences were observed between the elite and sub-elite groups for social, financial, and legal wellbeing [p = 0.03 (90% confidence limits -0.88 to -0.13), p = 0.004 (90% confidence limits -1.03 to -0.29), and p = 0.003 (90% confidence limits -1.15 to -0.44), respectively]. Thus, the student-athletes in the elite group perceived their state of wellbeing to have more influence on their performance than did the athletes in the sub-elite group.
--- Importance for their performance
Many participants considered the physical (83%), mental (80%), educational (72%), social (78%), financial (76%), and legal (71%) dimensions of wellbeing to be very or somewhat important for improving their own performance (Table 2). No significant difference was observed between the elite and sub-elite athlete groups (p > 0.05), meaning that most Japanese student-athletes consider wellbeing important for their performance development regardless of performance level.
--- Availability of support policy, guidelines, and programs in national sports federations
The results revealed that support systems and programs were rarely available to student-athletes in Japan. First, 11.0% of the participants indicated that guidelines on athlete wellbeing and/or mental health were available from their national sports federation, whereas 35.0% responded "No" and 54.0% answered "I do not know." Second, only 18.0% reported that their national sports federation had some kind of policy, or an implementation of one, to support athletes' wellbeing and/or mental health, while some national sports federations were reported to have policies but no implementation (11.0%). Third, 21% of the participants indicated that there was no policy or action within their national sports federation, whereas 50.0% did not know whether any was available.
--- Student-athletes' experience of support for their wellbeing
The results indicated that most of the student-athletes (85.0%) had never received support for their wellbeing.
The reasons were identified as (a) a lack of knowledge about how to access such services (49.4%), (b) a lack of information about the services available to them (43.5%), (c) a lack of understanding of the need to receive such support (11.8%), and (d) the lack of a service provider from whom they could receive support (10.6%). Interestingly, nine of the 15 participants (60.0%) who had experienced athlete wellbeing support in the past reported that they received it from educational institutions (i.e., high schools and universities) rather than from national sports federations (n = 2) or the Japanese Olympic and Paralympic Committees (n = 1). The support services these 15 participants had received comprised educational programs to gain knowledge and information (46.7%), programs to develop athletes' skills such as resilience and/or coping (40.0%), and mental health-related services (40.0%). Individualized consultation (26.7%), as well as information delivery and education programs, appeared to be necessary.
--- Information
Only 12.0% of the student-athletes knew the term "athlete wellbeing" and its meaning. In fact, 99.0% of them stated that, in their perception, the national sports federations had never delivered information about their wellbeing to them. Moreover, 67.0% indicated that they had never obtained and/or gathered information about "athlete wellbeing." For the remaining participants, the information sources varied: online videos (e.g., YouTube, SNS, etc.) (18.0%), national sports federations (12.0%), literature (7.0%), information delivered by the entourage (support staff = 4.0%, coach = 3.0%, teammates = 3.0%, retired athletes = 2.0%), and the websites of the IOC and/or International Sports Federations (IFs) (2.0%).
--- Detection
The results demonstrated the lack of a detection and monitoring system for student-athlete wellbeing. First, 77.0% of the participants responded that they had no experience of a national sports federation approaching them to understand their state of wellbeing. For the relatively few student-athletes with such experience (23.0%), the detection methods used by the national sports federations were specified as (a) conversation with the coach and/or experts (11.0%), (b) informal daily conversation (9.0%), (c) the use of measurement tools (8.0%), (d) individual follow-up prompted by behavior such as continuous absence from training (5.0%), and (e) clinical diagnostic tests (3.0%). Interestingly, however, no participants indicated that they themselves had used any tool for detection.
--- Help-seeking behavior when faced with a threat or risk
Most participants indicated that they had never witnessed or experienced behavior that could be considered a threat or risk to a student-athlete's wellbeing and/or mental health (84.0%). Among the 16 participants who had witnessed or experienced inappropriate behavior, 31.2% shared or reported it to someone else, such as teammates or team staff (n = 6), or to the national hotline set up by the national sports federations, the Japanese Olympic Committee, the Japanese Paralympic Committee, or the JSC (n = 4). The reasons the majority of these student-athletes (68.8%) did not share or report the case were that they (a) did not want to make it a big deal (45.5%), (b) were afraid of being identified as the person who reported it (36.4%), (c) did not know whom to report to (18.2%), or (d) did not want to get involved (18.2%).
Of those who shared or reported the incident to someone else, however, 60.0% described a positive experience, expressing satisfaction with how the issue was handled. Finally, the results demonstrated a lack of information and knowledge about the availability of a hotline, as 75.0% of the participants responded that they had never heard of or been aware of one.
--- Help-seeking behavior when anxious or distressed
The results also showed that 55.0% of the participants had someone they could talk to whenever they were anxious or distressed, including parents (61.8%), friends (60.0%), teammates (30.9%), significant others (25.5%), senior athletes (23.6%), brothers and sisters (18.2%), coaches (10.9%), and/or support service staff (5.5%). However, only 19.0% chose to approach experts to seek help. Those experts included psychiatrists (26.3%), clinical psychologists (21.1%), other psychological specialists (e.g., industrial and school counselors) (15.8%), sports counselors (15.8%), and so on. Interestingly, 31.6% of those who sought help identified a coach as the expert they consulted. Their experience of working with the experts tended to be somewhat positive, as 47.4% indicated satisfaction, while the same proportion of participants were not sure whether they were satisfied or not. The barriers that kept student-athletes from seeking help from experts were identified as (a) a lack of knowledge about where to find appropriate experts (37.0%), (b) uncertainty about the cost of receiving support (35.6%), (c) disbelief in the ability of experts to solve their problems (30.1%), (d) no clarity about whom to talk to (23.3%), (e) worries about how they would be seen by those around them (17.8%), and/or (f) a feeling of embarrassment about seeking help (12.3%).
--- Athletes' expectations for the national support system and service programs for their wellbeing
If the government and national sports federations were to develop a support system and service programs in Japan, 38.0% of the participants expressed a willingness to receive support, while 31.0% were reluctant to use such a service in the future. The majority of the participants, however, agreed on the importance and necessity of the government and national sports federations developing systems and programs to promote and support athlete wellbeing in Japan (Figure 1). Based on the results, "coach education" was the most expected action (77.0%), followed by "develop a guideline" (76.0%), "a clear statement in the strategic plan or policy of national sports federations" (75.0%), and "set up a system to react when any problem occurs (investigation, measures, penalties, etc.)" (75.0%). These results may indicate a need for coaches to understand the field of wellbeing, together with an expectation that the government and national sports federations provide guidance. Considering that all items were supported to a roughly equal degree and that even the least expected item obtained 66.0%, it can be concluded that a wide range of actions could potentially be taken to develop national support systems and programs in the future.
--- Discussion and practical implications
As there is convincing evidence that pursuing excellence in high-performance sport is associated with various factors that may threaten the holistic wellbeing of athletes (MacAuley, 2012; Gouttebarge et al., 2019; Giles et al., 2020; Bennie et al., 2021), several countries with a high profile in the Olympic and Paralympic Games, such as Canada, Australia, the Netherlands, New Zealand, the United Kingdom, and the United States, have in recent years started developing their own support systems and programs for athletes to pursue excellence in both performance and wellbeing. Japan is considered one of the world's leading countries in high-performance sport, having placed in the top three at the Tokyo 2020 summer Olympic Games. However, little literature is available in the Japanese context (Kinugasa et al., 2021). As an initial investigation, this pilot study aimed to reveal the general trends of athlete wellbeing in Japan, particularly from the perspectives of university student-athletes.
FIGURE 1. The participants' expectations for the national support system and service programs on wellbeing, rated on a 5-point scale (1 = strongly agree, 2 = relatively agree, 3 = relatively disagree, 4 = strongly disagree, 5 = not sure).
In the following, the discussion is organized around the four specific objectives of this study. First, this study aimed to investigate the current state of student-athlete wellbeing from a holistic development perspective (Wylleman, 2019). Based on the results, the Japanese university student-athletes demonstrated a relatively good state in all seven dimensions of wellbeing (i.e., physical, mental, organizational, social, educational, financial, and legal), despite the observed influence of COVID-19 to a certain degree. In fact, the overall satisfaction with life scores of the participants and of the general population in Japan were similar (5.7 and 5.9, respectively) (Cabinet Office, 2018). The lower organizational and social wellbeing scores in the elite group lend some support to the idea that elite athletes need more support than non-elite athletes, as they face higher demands that may threaten their wellbeing. In addition, the finding that only the sub-elite athlete group indicated uncertainty (i.e., "not sure") about their wellbeing in the organizational, social, financial, and legal dimensions suggests lower awareness of their own wellbeing at the non-elite level. These results imply that elite athletes need more support for their wellbeing, and that a holistic approach addressing not only the physical and mental but also the social and organizational dimensions of wellbeing is preferable. The second objective of this study was to understand the level of knowledge about athlete wellbeing among university student-athletes. Consistent with the limited information available in Japanese, the results showed that the student-athletes were not previously familiar with the term "athlete wellbeing," and the majority did not know exactly what it meant. Nevertheless, given the written description attached to the survey, approximately half of the student-athletes perceived that their performance was greatly or somewhat influenced by their physical, mental, educational, social, and financial wellbeing. Moreover, more than 70.0% of the participants considered athlete wellbeing in all dimensions to be very or somewhat important for improving their performance. These results have implications in two ways.
One is that it is essential to raise awareness of athlete wellbeing in Japan so that athletes recognize the importance of self-care for their wellbeing, which, in turn, influences their performance. The other is that those involved in the field of wellbeing should not treat wellbeing as separate from performance, understanding that the two are interrelated, at least from the perspective of student-athletes. In other words, support for athlete wellbeing should be designed to align with athletes' performance development plans and progress. The third objective of this study was to reveal athletes' perceptions of the availability of a support system within their national sports federation. Regarding the availability of policies, guidelines, and programs, the results suggested that (a) only a few national sports federations have so far incorporated support policies, programs, and guidelines into their systems, (b) information may not be delivered appropriately to athletes even where support is available, or (c) athletes may not be eligible to access the services and information because of their performance level. As only 2% of the participants indicated that they had received support services for their wellbeing from a national sports federation, it could be argued that few national sports federations have a support system within the organization, supporting point (a) above. Given that these results were derived from athletes' perceptions, however, further investigation of the national sports federations themselves is necessary before concluding that they have not developed policies, guidelines, and service programs for their athletes. The fourth objective was to investigate athletes' experiences of support services from various points of view, including information, detection, and help-seeking behavior in response to a threat and/or risk as well as to feelings of anxiety and/or distress. Overall, the results showed that most of the university student-athletes had never, at least as far as they were aware, received support services for their wellbeing in the past. In terms of information, 67.0% indicated that they had never obtained and/or gathered information about "athlete wellbeing." Interestingly, however, the lack of information about available support services and where to access them, rather than rejection of the services, was the number one reason cited by the student-athletes. Although the subsample who had obtained information about wellbeing was small (33.0%), their information-seeking behavior suggests that online platforms such as YouTube and/or social networking sites (18.0%) could be considered as information delivery channels, in addition to the national sports federations (12.0%) and the entourage (e.g., coaches, teammates, support staff, former athletes) (12.0%). Nevertheless, caution is needed regarding the accuracy of such information, as only 2.0% indicated that they had sought information on the official websites of the IOC and/or IFs. To enable student-athletes to systematically access reliable information in the Japanese language, a "one-stop-shop" resource center could be a possible action, although further research in the Japanese context is necessary to provide evidence for policy-makers and practitioners. In this regard, Kinugasa et al.
(2021) have proposed a definition of athlete wellbeing in Japanese, which could be used in policy and practice in the future. Regarding the detection of problems associated with athlete wellbeing, the results showed that 77.0% of the participants had no experience of receiving this kind of service from their national sports federation. As for detection techniques, communication and/or interaction was more commonly used than measurement tools and/or clinical tests. Furthermore, concerning help-seeking behavior, 84.0% reported no experience of facing or witnessing inappropriate behavior that could be a threat or risk to an athlete's wellbeing. Among the 16.0% with such experience, approximately 70.0% did not share or report it because they did not want to make it a big deal (45.5%) and/or were afraid of being identified as the person who reported it (36.4%). Despite the availability of a hotline for wellbeing in a broader sense, only four participants had used it to report a problem, probably owing to a lack of awareness, as 75.0% of the participants indicated that they had never heard of or been aware of the hotline. It was evident that the student-athletes tended to report problems first to their entourage rather than to the official hotlines set up by organizations. Finally, the results indicated that 45.0% of the student-athletes did not have anyone to talk to about their anxiety or distress. Among the 55.0% who did, approximately 60.0% would initially talk to their parents or friends rather than to coaches or support staff. This implies that information and education should not be limited to athletes and coaches but should also reach parents and the wider entourage so that they can better understand athlete wellbeing. Additionally, although only a low proportion of student-athletes sought expert help (19.0%), coaches (31.6%), psychiatrists (26.3%), and clinical psychologists (21.1%) were the top three experts from whom student-athletes had sought help in the past, while non-psychology experts such as medical doctors and athletic trainers/physiotherapists (10.5%) were also indicated as options. These results suggest that it is essential for the organization to consider developing a network with experts in mainstream psychology and medicine, as well as involving coaches within the support system in Japan. It should, however, be noted that only 8.0% of the student-athletes indicated a willingness to talk to experts, while 43.0% did not feel the need, and 30.0% could not seek help despite wanting to do so. Interestingly, the main barriers for the student-athletes were a lack of knowledge about where to find appropriate experts, uncertainty about the cost and whether they could afford it, and distrust in the ability of experts to solve their problems. Accordingly, to change athletes' help-seeking behavior toward experts, information and education, together with a referral network that guides athletes to the appropriate experts for their issues, appear necessary, as the barriers did not seem to be the stigma often associated with athletes. These findings lead to a discussion of the implications associated with this study's last objective, which was to identify the types of support student-athletes expect from the government and/or national sports federations in the future.
It was interesting that most student-athletes strongly or relatively agreed with all of the proposed actions, including clear guidance on direction, information gathering and delivery, athlete and coach education, the development of detection and monitoring tools, the setting up of a system to react when problems occur, the employment of experts, the development of a collaborative network with experts, expert organizations, private companies, the government, and national sports federations, and the development of a referral network. These results lend some support to the argument that, to implement policy in practice, increasing awareness and knowledge through information delivery is essential but not sufficient to address athletes' various mental health and wellbeing needs (Purcell et al., 2019). The development of such support frameworks can be considered a common approach in national systems worldwide (Department for Digital, Culture, Media and Sport, 2018; Moesch et al., 2018; Australian Institute of Sport, 2020; High Performance Sport New Zealand, 2021). As these approaches were agreed with to a roughly equal degree (Figure 1), however, it was difficult to prioritize among the actions in this pilot work. Interestingly, more than one in three athletes showed reluctance to receive support services even if the government and/or national sports federations were to establish such support frameworks in the high-performance sport system. These attitudes might be associated with a lack of knowledge and information, as observed in their experiences of receiving support from experts, rather than with cultural stigma. Therefore, when promoting athlete wellbeing, it is necessary to consider these obstacles when designing and planning policies, systems, and programs to support athlete mental health and/or wellbeing, so that they are used in better ways. In summary, this pilot study of university student-athlete wellbeing in Japan revealed general trends from a broader, holistic perspective, given that little information had previously been available. Based on the results, the current state of student-athletes' wellbeing was relatively positive despite the influence of COVID-19. Reflecting the lack of information related to athlete wellbeing in Japan, the student-athletes demonstrated low recognition of the term "athlete wellbeing" and its meaning. They indicated, however, that they perceived their state of wellbeing to influence their performance and, therefore, to be important for their performance development. Nevertheless, in the perception of the student-athletes, few national sports federations have policies, guidelines, and support programs in place for athletes. It was also evident that most of the student-athletes had never experienced wellbeing support services in terms of information, detection, or help-seeking behavior.
Despite their uncertainty about using the support provided, the student-athletes agreed that it is necessary for the government and/or national sports federations to take actions such as providing clear guidance on direction, information gathering and delivery, athlete and coach education, the development of detection and monitoring tools, the setting up of a system to react when problems occur, the employment of experts, the development of a collaborative network with experts, expert organizations, private companies, the government, and national sports federations, and the development of a referral network. Given these results, further investigations are required, particularly targeting athletes in high-performance sports (i.e., Olympic and Paralympic athletes) and the national sports federations.
--- Limitations and future direction
There were some limitations associated with this pilot study. First, the COVID-19 pandemic affected the findings, as the study was conducted during a State of Emergency in Japan; indeed, approximately half of the participants perceived an influence of the pandemic on their wellbeing. To account for the pandemic, the state of athlete wellbeing in each dimension was asked about over the past 3 years. Because this investigation focused on the general trends of student-athletes' perceptions of their state and environment of holistic wellbeing, the instrument included only one set of items specifically capturing the influence of COVID-19. Second, in terms of methodology, a sample size of 100 is limited for subgroup analysis. Further studies with larger samples could therefore carry out in-depth analyses of athlete wellbeing by subgroup, such as gender, length of time in the sport, and status of physical limitations, to support the generalizability of the findings. Finally, as an interval of 7 days may not be sufficient for test-retest reliability, a minimum gap of a fortnight may be necessary in future investigations. Building on the findings of this pilot study, further investigation should be carried out to develop the national support system in Japan. First, future studies could target elite athletes (i.e., Olympic and Paralympic athletes) on a larger scale. Second, as the findings were derived only from athletes' perspectives, investigating the national sports federations' point of view regarding the availability of athlete support systems and/or programs is suggested. Third, researchers could consider studying the wellbeing of the entourage, because the issues and challenges associated with wellbeing are not necessarily limited to athletes, as the entourage also spends considerable time in a highly demanding environment (Breslin et al., 2017). Given the evident lack of information on Asian populations in the field of athlete wellbeing and mental health (Reardon et al., 2019), international collaborative research in the Asian region is necessary. Furthermore, comparing Asian and Western countries could help inform the cultural considerations involved in developing each country's policies, systems, and programs. As the JSC, the parent organization of the JISS, is the only national sports agency responsible for sport from the grassroots to the high-performance level in Japan, the social science research group of the JISS will continue to study this field, providing further evidence and information to support policy implementation in athlete wellbeing in collaboration with researchers in Asia and around the world.
--- Data availability statement
The datasets generated for this study will not be made available in order to protect the privacy of the participants. Please contact the corresponding author for further information and for any requests to access the datasets.
--- Ethics statement
The studies involving human participants were reviewed and approved by the Japan Institute of Sports Sciences Ethical Review Committee. The participants provided their written informed consent to participate in this study.
--- Author contributions
YN conceptualized the study, developed the instrument, supervised the analysis, and drafted the manuscript from the initial to the final version. CK conducted data analysis and drafted parts of the methods, measures, and results. TK supervised the whole process as the project leader, recruited participants, conducted the survey and data analysis, and drafted parts of the methods, measures, and results. All authors contributed to the article and approved the submitted version.
--- Conflict of interest
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
--- Publisher's note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors, and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
Introduction
The aftermath of disasters is not measured entirely in terms of physical destruction; it is equally defined by the resilience and preparedness of the affected communities. This study recognizes the significance of community-level initiatives in disaster risk reduction, aligning with the Sendai Framework for Disaster Risk Reduction 2015-2030, which emphasizes the need for localized action and community engagement (UNDRR, 2015). Understanding the social dynamics that shape vulnerability and resilience is essential for developing targeted interventions that resonate with the diverse contexts of Indonesian villages. Recent studies have emphasized the role of social capital in improving community resilience (Aldrich & Meyer, 2015; Norris et al., 2008). Social capital, encompassing social networks, trust, and shared norms, has been identified as a critical asset in post-disaster recovery and preparedness (Aldrich, 2012). In the context of Indonesia, where communal ties often form the backbone of daily life, investigating the impact of social capital on disaster preparedness is particularly pertinent. The rise of climate change and its association with the increased frequency and intensity of natural disasters has prompted a re-evaluation of existing risk reduction strategies (IPCC, 2021). Recognizing this, our research aims to explore the adaptation of traditional practices and indigenous knowledge within Indonesian villages as valuable coping mechanisms. These localized strategies, deeply rooted in cultural contexts, can offer unique insights into sustainable disaster preparedness. In summary, this study seeks to bridge the gap between theoretical frameworks and practical application by conducting a nuanced examination of the social factors influencing vulnerability and the coping mechanisms deployed by Indonesian villages. In doing so, we aspire to provide actionable recommendations for policymakers, local authorities, and humanitarian organizations to bolster the resilience of communities facing the ever-present risk of natural disasters. The socio-economic disparities within Indonesia further amplify the challenges faced by vulnerable communities in the wake of disasters. Recent reports from the National Disaster Mitigation Agency (BNPB) highlight the disproportionate impact of natural disasters on marginalized groups, including those with lower income levels and limited access to education (BNPB, 2023). Addressing these disparities is not only a matter of humanitarian concern but also an essential element of building sustainable and inclusive disaster resilience. As we embark on this exploration of community resilience and disaster preparedness, it is critical to recognize the dynamic nature of vulnerability. The effects of disasters are not uniform across communities, and the capacity to cope and recover is shaped by a complex interplay of socio-cultural, economic, and environmental factors (Adger, 2006). This study seeks to unravel these complexities by adopting a qualitative case study approach, allowing for an in-depth understanding of the specific challenges faced by different villages across Indonesia. The findings of this research are expected to contribute to the ongoing discourse on disaster risk reduction in the Asia-Pacific region, aligning with the Hyogo Framework for Action (UNISDR, 2005).
By contextualizing global frameworks within the specific socio-cultural landscape of Indonesian villages, this study aspires to offer nuanced insights that can inform not only local policies but also contribute to the wider international understanding of community resilience in the face of natural disasters.
--- Methods
A qualitative case study approach was employed to delve into the intricacies of community resilience and disaster preparedness in Indonesian villages. This approach was deemed suitable for its ability to offer a nuanced understanding of the socio-cultural dynamics shaping vulnerability and the coping mechanisms adopted by communities. Two geographically diverse villages were purposefully selected to represent different regions of Indonesia, ensuring a comprehensive exploration of experiences of, and responses to, natural disasters. The data collection process combined in-depth interviews, focus group discussions, and an analysis of local government disaster management plans. Semi-structured interviews were conducted with key community leaders, local authorities, and people with expertise in disaster management, providing insights into the social dynamics influencing vulnerability and community leaders' perceptions of effective coping strategies. Separate focus group discussions were organized within each village, involving a diverse group of residents. These discussions facilitated the exploration of community-level perspectives, experiences, and the communal strategies employed for disaster preparedness and recovery. Additionally, existing disaster management plans from the selected villages were analysed to contextualize the formal frameworks in place. This analysis aimed to uncover the interface between community-based initiatives and formal disaster management systems. Thematic analysis, following the guidelines proposed by Braun and Clarke (2006), was employed for data analysis. Transcriptions of interviews and focus group discussions were systematically coded, and emergent themes were iteratively refined through a process of constant comparison. The research team prioritized ethical considerations throughout the study. Informed consent was obtained from all participants, and measures were taken to ensure the confidentiality and anonymity of their responses. This qualitative methodology provided the basis for a holistic exploration of the social dimensions of vulnerability and resilience, offering valuable insights into the coping mechanisms that emerged in the selected Indonesian villages.
--- Results and Discussion
--- Social Factors Influencing Vulnerability
The analysis revealed a prominent theme related to socio-economic status and its impact on vulnerability. In Village A, where economic disparities were more pronounced, community members expressed concerns about limited resources for disaster preparedness. A resident remarked, "Many households here struggle to make ends meet, so investing in disaster kits or evacuation plans often takes a back seat." This sentiment was echoed across multiple interviews, highlighting the role of socio-economic factors in shaping vulnerability.
--- Community-Based Coping Mechanisms
A recurring theme that emerged from the data was the reliance on traditional practices as coping mechanisms.
In Village B, residents emphasized the efficacy of community-organized drills based on indigenous knowledge. One participant shared, "Our ancestors passed down ways of predicting floods. We organize drills to make sure everyone knows what to do when those signs appear." This community-driven approach showcased the adaptive nature of traditional practices in improving disaster resilience.
--- Interplay between Formal and Informal Networks
The analysis underscored the intricate relationship between formal and informal disaster management systems. Local government plans in both villages outlined specific roles for community participation. A community leader in Village A emphasized, "We work closely with the local authorities. They provide resources, and we implement strategies that fit our community." This collaborative approach highlighted the importance of integrating formal and informal networks for effective disaster preparedness.
--- Educational Initiatives and Awareness
Educational initiatives emerged as a crucial theme influencing vulnerability. In Village C, where educational levels were comparatively higher, residents demonstrated greater awareness of disaster risks and preparedness measures. An interviewee stated, "Our schools frequently conduct drills, and children are taught about the local geography and potential hazards." This theme emphasized the role of education in fostering a proactive approach to disaster preparedness.
--- Social Capital and Trust
The qualitative data highlighted the importance of social capital and trust in community resilience. In all villages, close-knit social networks played a pivotal role in sharing information and coordinating efforts during disasters. In summary, the results reveal a complex interplay of social factors shaping vulnerability and a diverse range of coping mechanisms within Indonesian villages. The findings emphasize the importance of context-specific strategies that integrate local practices, leverage social capital, and bridge the gap between formal and informal disaster management systems.
--- Gender Dynamics in Disaster Preparedness
A nuanced theme that emerged was the role of gender in disaster preparedness. In Village C, women often took the lead in organizing and participating in community drills. A female resident noted, "Women are usually the ones at home during the day. We make sure our families are aware and prepared for any emergency." This theme highlighted the particular contributions of women in fostering community resilience and challenged conventional gender roles in disaster management.
--- Impact of Cultural Beliefs on Coping Strategies
Cultural beliefs significantly influenced coping strategies in Village B, where a strong connection to nature and spiritual practices prevailed. Residents shared accounts of seeking guidance from local spiritual leaders during times of heightened disaster risk. "Our beliefs are deeply tied to the land. Before making any decisions, we consult with our spiritual leaders to interpret signs from nature," explained a community member. This theme emphasized the need to understand and integrate cultural perspectives into disaster preparedness initiatives.
--- Challenges in Communication and Information Dissemination
Challenges in communication emerged as a crucial theme affecting vulnerability.
In Village A, where communication infrastructure was limited, residents faced difficulties in receiving timely information about impending disasters. "We rely on word of mouth, and sometimes the message does not reach everyone in time," expressed a participant. This theme highlighted the need for improved communication strategies, especially in areas with limited technological resources.
--- Adaptive Capacity through Community Training
The analysis identified community training programs as a key element in improving adaptive capacity. In Village C, a proactive community-driven training initiative was credited with empowering residents to respond effectively to disasters. "We have regular training sessions on first aid, evacuation methods, and even basic search and rescue skills," shared a participant. This theme emphasized the positive impact of ongoing training programs in building the adaptive capacity of communities.
--- Government Support and Infrastructure
The degree of government support and infrastructure emerged as a significant theme influencing community resilience. In Village B, where government initiatives were more pronounced, residents expressed a sense of security derived from well-maintained evacuation routes and designated shelters. "The authorities have invested in infrastructure that makes us feel safer during disasters," said a community leader. This theme underscored the significance of governmental contributions in bolstering community resilience efforts. In conclusion, the qualitative analysis illuminated a diverse range of themes, each contributing to the intricate tapestry of community resilience and disaster preparedness in Indonesian villages. These findings provide valuable insights for policymakers, local authorities, and humanitarian organizations seeking to tailor interventions that address the specific needs and dynamics of communities facing the constant threat of natural disasters.
Social Factors Influencing Vulnerability: The identification of socio-economic disparities as a significant theme aligns with broader global discussions on the disproportionate impact of disasters on marginalized groups. Numerous studies, including Cutter et al. (2016), emphasize the link between socio-economic status and vulnerability, noting that economically disadvantaged populations often face higher risks and slower recovery. In the Indonesian context, this underscores the urgent need for targeted interventions that address socio-economic disparities and ensure that vulnerable communities are not left disproportionately burdened by the consequences of disasters. The World Bank's emphasis on inclusive policies and social protection programs becomes particularly relevant in light of our findings, highlighting the importance of comprehensive strategies that address underlying socio-economic factors. The socio-economic theme also prompts reflection on the interconnectedness of disaster risk reduction and broader development goals. The Sendai Framework advocates for the incorporation of disaster risk reduction into development planning, emphasizing the need to build resilient communities through inclusive and sustainable development (UNDRR, 2015). Our findings underscore the importance of not only addressing immediate vulnerabilities but also tackling systemic issues related to poverty, access to education, and employment opportunities to enhance long-term resilience.
Community-Based Coping Mechanisms: The emergence of traditional practices as a coping mechanism aligns with the global recognition of the importance of indigenous knowledge in disaster risk reduction. The Sendai Framework recognizes the potential of traditional knowledge and practices in enhancing resilience and calls for the incorporation of such wisdom into national strategies (UNDRR, 2015). The demonstrated efficacy of community-organized drills based on indigenous knowledge in Indonesian villages reinforces the idea that community-driven, culturally rooted approaches can play a pivotal role in building resilience. This highlights the need to preserve and integrate traditional practices into formal disaster management plans. Moreover, the theme of community-based coping mechanisms calls for a re-evaluation of the dichotomy between "traditional" and "modern" approaches to disaster resilience. Integrating traditional practices into contemporary disaster risk reduction strategies not only respects cultural heritage but also leverages the strengths of local communities. As discussions around the world emphasize the importance of context-specific strategies, our findings point to potential synergies between age-old practices and modern approaches in creating resilient communities capable of withstanding the evolving challenges of climate-related disasters.
Interplay between Formal and Informal Networks: The collaborative approach between local communities and formal disaster management systems aligns with global efforts emphasizing the importance of community engagement in disaster risk reduction. The International Federation of Red Cross and Red Crescent Societies (IFRC) highlights the need for community-led initiatives and partnerships with formal systems to enhance resilience (IFRC, 2018). Our findings reinforce the Hyogo Framework's principle of integrating community-based initiatives into formal structures for effective disaster resilience (UNISDR, 2005). This interplay between formal and informal networks emphasizes the importance of flexible frameworks that recognize the strengths of both local and formalized strategies. The discussion of the interplay between formal and informal networks also prompts consideration of power dynamics and inclusivity. It is essential to ensure that community voices are not only heard but also actively integrated into decision-making processes. Recognizing the particular knowledge and strengths that communities bring to the table is critical for the success of collaborative initiatives. As the global discourse on resilience shifts toward participatory approaches, our findings underscore the importance of fostering genuine partnerships that empower local communities as active agents in disaster risk reduction.
Educational Initiatives and Awareness: The association between education and disaster awareness aligns with international efforts to prioritize education as an essential component of disaster risk reduction. UNESCO acknowledges the role of education in building a culture of safety and resilience, promoting knowledge dissemination, and fostering informed decision-making in the face of disasters (UNESCO, 2019). Our findings reinforce the idea that informed communities are better prepared to respond to and recover from disasters, emphasizing the need for comprehensive educational initiatives that extend beyond formal school settings.
Education and awareness also prompt reflection on the role of knowledge dissemination in promoting a culture of preparedness. The International Federation of Red Cross and Red Crescent Societies (IFRC) advocates for community-based education programs that empower individuals to take ownership of their safety (IFRC, 2017). In the Indonesian context, our findings underscore the need for targeted initiatives to enhance awareness and preparedness, especially in areas with lower educational access. This highlights the interconnectedness of education, community resilience, and sustainable development, reinforcing the importance of fostering a culture of continuous learning and preparedness at all levels.
Social Capital and Trust: The theme of social capital and trust resonates with the global recognition of the role of social networks in enhancing resilience. Aldrich and Meyer (2015) emphasize the significance of social capital in post-disaster recovery, highlighting how strong social bonds contribute to community resilience. The United Nations Office for Disaster Risk Reduction (UNDRR) likewise recognizes the significance of social cohesion in withstanding and recovering from disasters (UNDRR, 2017). Our findings confirm the intangible yet vital role of social bonds in improving disaster resilience, calling for interventions that strengthen community ties to foster collective resilience. The discussion of social capital also prompts consideration of social equity and inclusivity. It is essential to acknowledge and address existing social inequalities that may affect the distribution of social capital within communities. Vulnerable groups may face additional barriers in accessing and benefiting from social networks, potentially exacerbating existing disparities during disasters. As global discussions increasingly emphasize the importance of leaving no one behind in disaster risk reduction efforts, our findings underscore the need for strategies that promote social inclusion and equal access to social capital, ensuring that the benefits of strong community ties reach all members.
Gender Dynamics in Disaster Preparedness: The exploration of gender dynamics in disaster preparedness aligns with global calls for gender-responsive approaches in disaster risk reduction. The Sendai Framework emphasizes the importance of gender equality in building resilience and highlights the particular vulnerabilities and strengths of different genders (UNDRR, 2015). The International Federation of Red Cross and Red Crescent Societies (IFRC) advocates for gender-sensitive strategies that recognize and address the distinct needs of women, men, girls, and boys in disaster risk reduction (IFRC, 2016). Our findings underscore the importance of recognizing and empowering the diverse contributions of women in fostering community resilience, challenging traditional gender roles. The theme of gender dynamics also prompts consideration of intersectionality and the interplay between gender and other social factors. Vulnerable groups, including women with lower socio-economic status, may face compounded challenges in disaster situations. Recognizing the intersectionality of vulnerabilities is critical for developing inclusive strategies that address the diverse needs of all community members.
As the global discourse on gender equality in disaster risk reduction evolves, our findings highlight the need for intersectional approaches that consider the complex interplay of gender, socio-economic factors, and cultural dynamics in shaping resilience.

--- Impact of Cultural Beliefs on Coping Strategies: The influence of cultural beliefs on coping strategies aligns with broader discussions on the cultural dimensions of disaster risk reduction. The Centre for Research on the Epidemiology of Disasters (CRED) recognizes the significance of cultural heritage in shaping resilience strategies and calls for the preservation of cultural practices in the face of changing risk landscapes (CRED, 2019). The Sendai Framework underscores the value of cultural diversity in enhancing resilience and advocates for strategies that respect and integrate local beliefs (UNDRR, 2015). Our findings emphasize the need for culturally sensitive approaches that acknowledge and honor local beliefs in disaster preparedness initiatives, reinforcing the idea that cultural heritage is a valuable asset in building resilient communities. The discussion of the impact of cultural beliefs prompts considerations of cultural preservation and the potential tension between modernization and traditional practices. As communities evolve and face growing exposure to global influences, preserving cultural heritage becomes essential for maintaining resilience. Striking a balance between integrating traditional practices and adapting to contemporary risk landscapes is crucial. Our findings underscore the importance of recognizing and valuing cultural diversity as an integral component of community resilience. This aligns with worldwide efforts to develop strategies that honor cultural identities while simultaneously addressing contemporary challenges in disaster risk reduction.

Challenges in Communication and Information Dissemination: The identified challenges in communication resonate with international concerns about the digital divide and information access in disaster-prone regions. The International Telecommunication Union (ITU) highlights the importance of enhancing communication infrastructure and ensuring equitable access to information in disaster risk reduction (ITU, 2020). Our findings align with international calls for overcoming barriers to information dissemination in order to strengthen early warning systems and community response strategies. Addressing these challenges is vital for building effective communication networks that reach all community members, regardless of their geographical location or technological resources. The discussion of communication challenges prompts considerations of inclusivity and the need for diverse communication channels. Recognizing that different segments of the population may have varied access to communication platforms is vital for developing inclusive strategies. Leveraging a combination of modern technologies and traditional communication methods can ensure that information reaches a wider audience. As the global discourse on communication in disaster risk reduction advances, our findings underscore the importance of tailored approaches that take into account the specific characteristics of each community and prioritize inclusivity in information dissemination.

The emphasis on community training and education programs calls for sustained efforts to ensure that communities are equipped with the knowledge and skills necessary to respond effectively to disasters.
The discussion of adaptive capacity through community training prompts considerations of local empowerment and the role of communities as active agents of their own resilience. Empowering communities to take ownership of their safety and well-being is essential for building sustainable resilience. As international discussions increasingly focus on the shift from a response-oriented approach to a proactive, preparedness-focused strategy, our findings underscore the importance of fostering a culture of continuous learning and skill development within communities. This aligns with worldwide efforts to promote community-driven initiatives that enhance local adaptive capacity and contribute to overall disaster resilience.

Government Support and Infrastructure: The theme of government support and infrastructure echoes global discussions on the role of governance in disaster resilience. Cutter et al. (2016) emphasize the importance of governance and institutions in reducing disaster risk, highlighting the need for effective policies and infrastructure. The Sendai Framework recognizes the critical role of governance in building resilience and calls for the integration of disaster risk reduction into national development policies (UNDRR, 2015). Our findings affirm the positive perceptions of government initiatives and underscore the importance of continued investment in infrastructure and policy frameworks that bolster community resilience. The discussion of government support and infrastructure prompts considerations of accountability and the need for transparent and inclusive governance. Ensuring that government initiatives are responsive to the needs of local communities and are implemented transparently is critical for building trust. As the global discourse on governance in disaster risk reduction advances, our findings highlight the importance of fostering collaborative partnerships between governments and communities. This collaborative approach ensures that policies and infrastructure investments align with the particular characteristics of each community, contributing to the development of resilient societies.

--- Conclusion The multifaceted exploration of community resilience and disaster preparedness in Indonesian villages has illuminated essential insights into the intricate interplay of social factors, coping mechanisms, and governance systems. The identified themes underscore the overarching importance of addressing socio-economic disparities, integrating traditional practices, fostering collaboration between formal and informal networks, prioritizing education and awareness, recognizing the role of social capital, promoting gender-responsive strategies, honoring cultural beliefs, overcoming communication challenges, and empowering communities through training and government support. These findings contribute to the global discourse on disaster risk reduction, emphasizing the need for context-specific, inclusive, and holistic approaches that empower communities as active agents in building resilient societies. Recognizing the interconnectedness of these issues, our study advocates for integrated strategies that acknowledge the diverse dynamics within Indonesian villages, ultimately paving the way for more effective, sustainable, and community-centered disaster resilience initiatives.
This research explores the complexities of community resilience and disaster preparedness in Indonesian villages through a qualitative case study approach. The analysis reveals several interconnected themes, including socio-economic disparities, the efficacy of traditional practices, collaborative dynamics between formal and informal networks, the impact of education and awareness, the role of social capital, gender-responsive strategies, cultural influences, communication challenges, and the empowerment of communities through training and government support. These findings, aligned with global frameworks, emphasize the necessity of context-specific, inclusive, and holistic approaches to address vulnerabilities and enhance resilience. The study advocates for integrated strategies that empower communities as active agents in building resilient societies.
INTRODUCTION While substantially contributing to human wellbeing, the ocean is increasingly threatened by local human action and climate change 1. Marine protected areas (MPAs) are advocated as a key strategy for simultaneously protecting biodiversity and supporting coastal livelihoods 2,3. They are now part of the United Nations Convention on Biological Diversity and the Sustainable Development Goals. Their level of protection ranges from fully protected areas, where all activities are prohibited, to "partially protected" MPAs that allow activities to different degrees 4,5. The former are known to deliver ecological benefits through the exclusion of human activities [6][7][8], whereas the latter assume that conservation will be achieved through cooperation in the social space that leads to sustainable use 9. While scientific evidence shows that most benefits, including biodiversity conservation, food provisioning and carbon storage, stem from fully or highly protected areas, most established MPAs are of lower protection levels because of lobbying from current users and a political bias towards creating many, rather than highly protected, areas [6][7][8]10. Also, it has been argued that excluding people who depend on those areas for their livelihood might not be socially equitable 11, and that cultural and historical assessments should be part of MPA design. Potential benefits and beneficiaries must also be highlighted and understood at a local level to discuss trade-offs and address the ecological, social and economic requirements of sustainability 9. However, guiding principles are lacking on how to manage trade-offs in specific social-ecological systems (SES) 12. Indeed, while conceptual models of SES have been elaborated to characterize human-nature interactions and inform decision-making [13][14][15][16][17][18][19], and related approaches have been developed previously [20][21][22], effective science-policy interfaces in marine environments are scant 8. There is, therefore, room for more effective and inclusive science-policy frameworks, including dedicated modeling approaches. Each step of collaborative prospective modeling, from elaborating narratives to interpreting simulation results, including model conception, may help explore the ecological, social and economic consequences of management alternatives at a local level and in the context of ongoing climate change. For decision-makers, there is a growing awareness that integrating valuable scientific knowledge and stakeholders during the management process can offer better outcomes [23][24][25][26][27][28][29][30] and is less likely to result in resources' collapse 31,32. However, such integration raises three main challenges for science. First, how to collaboratively develop narratives that break with the usual approach based on ongoing trends, which has failed to mobilize transformative change 33, by including stakeholders and scientists from a diversity of disciplines. Second, how to shift from resource-based toward ecosystem-based management and address interactions among scales within SES 34 by using ecosystem-based modeling. Third, how to better align the modeling practice and the illustration of trade-offs with the decision-making process, ultimately setting management rules 23, by fitting the modeling to MPA management plans.
In this paper, we argue that bridging the gap between what the literature recommends and what is done in the field requires an innovative science-policy framework that identifies potential benefits, tackles necessary trade-offs and promotes collective deliberation on management measures and rules. To test this hypothesis, we hybridized research and decision-making through collaborative prospective modeling in the case of a French Mediterranean MPA (the Natural Marine Park of the Gulf of Lions), in the context of climate change. Climate change impacts on the ocean (e.g., sea level rise, temperature increase, pH decrease, and to a lesser extent, moisture decrease) are expected to alter the functioning of marine ecosystems 35. In the semi-enclosed Mediterranean Sea, climate change effects on ecosystems are already visible, with the most noteworthy impacts reported being oligotrophication and changes in diversity composition 36,37. Hence, scientists, policymakers and stakeholders involved in the management of this MPA took part in the present transdisciplinary and multi-actor research. We followed a three-step process (Fig. 1) over a three-year period (2015-2019), which entailed: (i) conducting three workshops in stakeholders' groups (see Supplementary Material Note 1); (ii) developing a social-ecological model through agent-based modeling; (iii) collectively exploring the simulation results. The study adds novelty relative to previous work [13][14][15][16][17][18][19][20][21][22] by combining participatory narrative-building with modeling to shape a deliberation tool in the marine environment. Although an economic analysis would be necessary to identify potential benefits and beneficiaries of different scenarios, such an analysis was not developed, as it was beyond the scope of our study. Here, we describe how aiming for sustainability requires a framework for continued work that allows us to (i) build contrasting narratives for the future addressing biodiversity conservation, food provisioning and economic activity in the context of climate change; (ii) explore the resulting strategies with a science-based SES model illustrating trade-offs; (iii) deliberate about the results in order to adjust strategies. --- RESULTS --- Building disruptive narratives to open the range of possible futures Recent scientific works suggest that we need to move beyond classical scientific studies depicting future trajectories of decline that have failed to mobilize transformative change 23. Exploring different futures through narrative scenarios proves to be helpful to address MPA management issues in a constructive manner 36. Lubchenco and Gaines notably emphasize how narratives help in framing our thinking and action 38. Indeed, as in mythology or literature, narratives act as a reference framework to which one can refer to make decisions adapted to unpredicted but pictured contexts.

Fig. 1 Key steps of the framework proposed. A three-step process over a three-year period that consists in conducting workshops in stakeholders' groups for building disruptive narratives (step 1), developing a social-ecological model through agent-based modeling for implementing narratives translated into scenarios (step 2), and collectively exploring the simulation results, eventually leading to modifying scenario hypotheses and re-shaping scenarios (step 3). A prerequisite to the three-step process is agreeing collectively on the main issues to be addressed.
In the present context, the challenge was to extend or amend our reference scheme by imagining transformative futures. Here, we did so by inviting scientists, stakeholders, and decision-makers to participate in three workshops led by a specialist in building prospective scenarios (see Methods). Each time, participants were split into three groups to progressively write a narrative about the Natural Marine Park of the Gulf of Lions by 2050 (see Supplementary Notes 1-2). This led to the writing of three original and transformative narratives (Table 1). 2050 was considered close enough to fit with the real political deadline, i.e., the completion of two management plans, and far enough to deal with some expected effects of climate change, such as the decline of primary production in marine ecosystems. --- Ecosystem-based modeling to address SES complexity Sustainably managing the ocean requires MPA managers to adopt integrated ecosystem-based management (EBM) approaches that consider the entire ecosystem, including humans (Fig. 2). While fishing affects target species, marine food webs and habitats (depending on fishing and anchoring gear), climate change is expected to influence the dynamics of all marine organisms in terms of growth and spatial distribution (including primary production). EBM focuses on maintaining a healthy, productive, and resilient ecosystem so that it provides the functions humans want and need. It requires a transdisciplinary approach that encompasses both the natural dimension of ecosystems and the social aspects of drivers, impacts and regulation 39. While "end-to-end models" are recommended by marine scientists to study the combined effects of fishing and climate change on marine ecosystems, using one of these tools was beyond the scope of the project (see Methods, Overview of end-to-end models). We therefore looked for alternative approaches and built on knowledge and data from the park management plan and on past research conducted on the area: ecosystem-based quality indexes (EBQI) describing the functioning of specific ecosystems and mass-balance models analyzing the overall ecosystem structure and fishing impacts (Ecopath with Ecosim) (see Methods, Ecosystems description). We mapped four major park habitats (see Fig. 3): "sandy & mud" (31 species), "rock" (18 species), "posidonia" (17 species), and "coralligenous" (15 species). Here, (groups of) species are represented in aggregate form (biomass density) and linked together with diet ratios (see Supplementary Tables 1-4). This ecosystem-based representation is at the core of our modeling exercise. To simulate ecosystem dynamics, we used the ecosystem food webs as transmission chains for the type of controlling factors described in the narratives 40: bottom-up control (climate, management) and top-down control (fisheries, management). For each (group of) species, biomass variation results from the equal combination of two potential drivers on a yearly basis: the abundance of prey (bottom-up control, positive feedback) and the abundance of predators (top-down control, negative feedback) (see Methods, Food-web modeling). To link this food-web modeling with the driving factors described in the narratives, we adopted an agent-based modeling framework. Agent-based models (ABMs) are already used for SES applications and science-policy dialog (see Methods, Rationale for ABM). We then developed a spatially explicit model for the main dimensions of the MPA described in the narratives.
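One compact way to write this yearly rule, offered only as an illustration of the verbal description above and detailed in the Methods (the notation is ours, not the authors' published equations), is:

$$
B_s(t+1) \;=\; B_s(t)\left[\,1 + \tfrac{1}{2}\,\Delta_s^{\mathrm{prey}}(t) - \tfrac{1}{2}\,\Delta_s^{\mathrm{pred}}(t)\,\right],
$$
$$
\Delta_s^{\mathrm{prey}}(t) \;=\; \sum_{p} d_{s,p}\,\frac{B_p(t)-B_p(t-1)}{B_p(t-1)},
\qquad
\Delta_s^{\mathrm{pred}}(t) \;=\; \sum_{q} m_{s,q}\,\frac{B_q(t)-B_q(t-1)}{B_q(t-1)},
$$

where $B_s$ is the biomass density of (group of) species $s$, $d_{s,p}$ the share of prey $p$ in the diet of $s$, and $m_{s,q}$ the share of predator $q$ in the predation mortality of $s$; the two terms carry equal weight, the prey term acting as a positive feedback and the predator term as a negative feedback.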
To set up agents and their environments, we used data from the ecosystem-based representation and geographic information systems (GIS) layers provided by the MPA team. To model space, we used a regular grid, the size of each cell being related to the average size of an artificial reef village (0.25 km²). In accordance with our prospective horizon, simulations were run up to 2050 with an annual time step. The food-web model is located at the cell level, with the previous year's outputs as input data for each new year. Other human and non-human agents are also represented at the cell level. At this stage, we modeled temporal dynamics but lacked important spatial dynamics, such as adaptive behaviors of human and non-human agents relocating their activities as a result of management measures. For now, interactions between agents are mostly made of spatial-temporal co-occurrence with restricted mobility. Despite this, we were able to simulate the variation in any group of species in terms of biomass density in the case of a change in primary production, fishing effort, artificial reef planning or reintroduction of species. To disentangle the efficacy of the MPA's management measures from climate change impacts, we ran each scenario with and without climate change (see Fig. 4). Indeed, the variation in primary production is the only difference among scenarios that does not depend on management choices at the MPA level. We could capture some of their propagation and final effects on indicators similar to those of the park management plan and the ecosystem functions and natural resources targeted by the narratives: total biomass, harvested biomass, and diving site access (see Methods, Modeling of drivers and indicators of ecosystem status). For now, all indicators are expressed in biomass quantity and number/share of accessible diving sites (physical units), not in economic value (monetary units). This would require an accurate economic analysis, which is to be developed in a future experiment. --- Informing management choices based on simulation results No scenario perfectly reached the objectives it was designed for (Fig. 4). However, they all open interesting perspectives, such as the occurrence of unexpected co-benefits. In effect, the developed framework allows us to look at the building blocks of the scenarios and the combination of variables to explain the obtained results, as well as proposing explanations and suggesting new hypotheses for enhancing the efficacy of each scenario. Table 2 summarizes the major assumptions of the three scenarios developed by the project team based on the narratives. Scenario 1, "Enhancing total biomass", aimed at increasing biodiversity. Simulation results showed that undersea biomass varied little (-0.11%) despite the primary production decrease under climate change (see Supplementary Table 5). However, the trophic chain structure changed, with a large increase in species important to local fisheries (see the biomass variation of each group in Supplementary Tables 6-10). For example, mackerel, whiting, hake, tuna, octopuses, and soles notably increased in muddy and sandy ecosystems; octopuses, seabass, echinoderms, bivalves, and gastropods in the coralligenous ecosystem; echinoderms, octopuses, and conger in the rocky ecosystem; suprabenthos, echinoderms, octopuses, conger, and scorpion fish in the Posidonia ecosystem.
The increase in the above-listed species is balanced, due to the double prey/predator constraint, by a decrease in the biomass of other existing species: benthic invertebrates and fish feeding on benthic crustaceans in muddy and sandy ecosystems; benthic macrophytes, scorpion fish, suprabenthos, and lobsters in the coralligenous ecosystem; suprabenthos, salema, seabass, and scorpion fish in the rocky ecosystem; and, worse, Posidonia itself, salema, and crabs in the Posidonia ecosystem. Simulation results also showed that fished biomass drops by 36%, which is consistent with the high share of fully protected areas (FPAs) in the absence of spatial dynamics and fishing effort relocation. Also, most diving sites that are currently appealing will no longer be accessible (-98%), which is expected to support habitat and species biomass regeneration but would mark the end of an attractive activity.

Table 1. Co-designed visioning narratives for the Natural Marine Park of the Gulf of Lions by 2050 built at the experts' workshops.

Narrative 1: Protecting the ecological heritage and strengthening the marine food web
Starting point: this narrative starts from the progressive deficiency of top predators and keystone species (e.g., groupers, sharks) and its corollary, the impoverishment of the whole trophic chain 74. But this scenario considers the uncertainties surrounding the idea of good ecological status 75 and shifting baselines [75][76][77]. Hence, specifying an ideal ecological state to achieve did not make much sense for the participants, who focused on preserving key habitats and keystone species and enhancing the actual food chain 78. This strategy was inspired by the ecological concept that the more diversity there is, the greater the resilience of the system 79,80.
Management rules: the participants imagined extending full protection up to 30% of the MPA. This ratio was chosen to echo the most ambitious existing target worldwide: the International Union for the Conservation of Nature recommendation that at least 30% of the entire ocean should benefit from strong protection. Participants also proposed stabilizing fishing effort and reintroducing top predators such as groupers in suitable habitats.
Climate change: decline of primary production in marine ecosystems.

Narrative 2
Starting point: this narrative starts from a strong awareness among the members of the group of the expected consequences of climate change on marine primary production, the first level of the food chain 81,82: less nutrient availability for plankton development, through a limitation of river inflows and a reduction of coastal upwelling. Coupled with the actual decrease of nutrient flows due to dams on rivers and the partial closure of estuaries, this would cause a decline in primary production and then affect the upper compartments of the ecosystems, including fished species. In order to avoid this global decline and to maintain the biomass of commercial species, stakeholders proposed actions to be taken on land that are likely to restore good nutrient availability for plankton development and so on*. To create new sources of income, they suggested aquaculture could be developed in the lagoons in the form of multi-trophic farms (fish/oyster/algae or shrimp/oyster/algae). They were also inspired by "slow food" movements and invented a "slow fishing" style, in the sense that fishing should respect the life cycles of different species and marine habitats, in terms of harvesting gears and anchoring systems. It would still be profitable enough for fishermen because the products would be eco-labeled and valued as such.
Management rules: only this narrative allows increasing the fishing effort, while artificial reefs for productive purposes are favored and commercial species are reintroduced. The share of fully protected areas is kept at the current level (2% of the MPA). Climate change leads to a decline of primary production in marine ecosystems, which would be counterbalanced by a spatial development improving the circulation between lagoons, rivers and sea.
Climate change: decline of primary production in marine ecosystems, counterbalanced by permaculture-type farming and improved circulation between lagoons, rivers and sea.
*Management measures to be taken upstream: permaculture-type farming would improve soil quality; thus, the water runoff would supply rivers with good nutrients that would be transported to the sea and enhance plankton development. To ensure the good quality of water and nutrients, monitoring at the lagoon level should be performed. To avoid any eutrophication phenomenon, nutrients should not be blocked near the coast by facilities, so the channels of the lagoon should be left open and the undeveloped river mouths should be kept free. Aquaculture in the lagoons would also limit this risk.

Narrative 3
Starting point: this narrative starts from the expected consequences of climate change on the coastline and the consideration of a possible radical transformation in coastal livelihoods due to the loss of biomass of the sea induced by a primary production decrease 82. Even if the consequences of sea level rise exceeded our time frame, participants considered it a major driver of change. They presumed management would fail to prevent sea level rise and decided to put their efforts into making the best of the new resulting land/sea-scape. They invented a new economic model for the park area, valuing marine underwater seascapes, eco-friendly tourism around artificial reefs and wind turbines, or even an underwater museum around aesthetic artificial reefs.
Management rules: participants assumed a commercial wind farm would be created, allowing for a multifunctional exploitation of the water column, including educational sea trips. Artificial reef villages would be densified to create a relief zone for the rocky coast diving sites. These reefs would have a cultural function, like an underwater museum. Their design would rely on ecological and aesthetic requirements. An intermediary target for fully protected areas was set after the Member States Parties to the Convention on Biological Diversity (CBD) agreed to cover 10% of their coastal and marine areas with MPAs by 2020 (CBD Aichi target 11).
Climate change: decline of primary production in marine ecosystems.

Hence, scenario 1 proposed an extension of FPA up to 30% and localized it on the richest areas in terms of biodiversity, which leads to a sharp drop in the potential fished biomass indicator. While this strong protection may not be sufficient to trigger system recovery as a whole, it greatly changes the trophic chain structure, improving the biomass of some very important targeted fishing species (see Supplementary Tables 6-10). This improvement could be seen as a co-benefit aligned with the analysis by Sala et al. 10.
It opens avenues for moving forward in the search for "win-win" strategies and a perspective of co-benefits for local fisheries, provided spillovers occur and adequate fisheries management rules are defined. Moreover, if coupled with the same kind of measures that allow us to cancel the negative effect of climate change on primary production (as in scenario 2), scenario 1 would exhibit the best results in terms of total and living biomass variation, although these two indicators are insufficient to assess the quality of the ecosystem. Two hypotheses could be further tested: (i) the time horizon may not be sufficient, and/or (ii) the intensity of the reintroduction of grouper as a keystone species is insufficient given its low reproduction rate and longevity. Nevertheless, it would be interesting to review this scenario in search of co-benefit strategies. A new version of the model could test pairing spatial use rights and different levels of protection within strategic zoning and a connected MPA network. It could also consider the spillover of marine organisms and the relocation of human activities due to FPA. In this case, it would be important to determine whether the spillover of marine species would be enough so that the relocation of the fishing effort would not significantly affect the ecosystem functioning of unprotected areas. In due course, additional measures regulating the fishing effort from a strategic planning/zoning perspective should complement the framework. Scenario 2, "Enhancing harvested biomass", aimed at increasing food provisioning. Simulation results showed that total fished biomass increases by 2% with or without considering climate change impacts on primary production, which matches the guideline of the narrative. However, fished biomass increases only in the muddy habitat, by >3%, while it decreases by between -3 and -32% in the other habitats, as a result of the counterbalancing effect of keeping the 2% share of FPA. Interestingly, the total biomass in the rocky habitat decreases less (with climate change) or even increases (without climate change) in scenario 2 compared to scenario 1. At the same time, while living biomass seems stable when climate change is not included (-0.03%), it will decrease with primary production (-0.89%), in contrast with scenario 1. Indeed, when compared to scenario 1, few species showed significant downward variation, except crabs in the Posidonia ecosystem. Also, even with the smallest FPA share, currently appealing diving sites are reduced by 63%, which confirms that most existing diving spots are concentrated in areas of high natural value in or around the existing MPA. Scenario 2 favors fishing by increasing fishing effort (5%) and limiting FPA (2%). It also supports fishing with the reintroduction of target species and the densification of the species' habitats. This scenario notably avoids the negative effects of climate change on primary production due to ecological measures taken at the watershed level. However, comparative simulation results illustrate that marine park management measures alone would not generate such an effect.

Fig. 2 The social-ecological system (SES) of the Gulf of Lions marine protected area. Representing the Natural Marine Park of the Gulf of Lions as a social-ecological system outlining the main interactions on the territory to be addressed when talking about managing economic activities and environment protection. This representation has been issued through a workshop held around a chronological matrix summarizing the main features of the territory.

In view of the results, the fishing effort may have been increased too early, thereby canceling out the efforts made elsewhere. Moreover, catches might have been higher if the model had considered a shift of fishing activities from FPA to areas where fishing is allowed. Here, FPAs are located on rocky, Posidonia, and coralligenous habitats, which are areas of greatest natural value (GIS layer). Even if the share of FPA is the lowest in this scenario, almost all of the rocky habitat (excluding artificial reefs) is considered, which is one reason explaining the biomass increase in this habitat. This shows the importance of precise and strategic zoning in determining access rules in MPAs. This is also due to the densification of existing villages of artificial reefs and the creation of new villages in the rocky habitat. Three new hypotheses could be further tested: (i) maintaining the fishing effort at its 2018 level, (ii) increasing the introduction of target species, and (iii) enhancing the functioning of the trophic chain by reintroducing keystone species rather than fished target species. Finally, scenario 3, "Enhancing diving site access", aimed at increasing eco-tourism. Simulation results indicated that the main objective of the scenario is not achieved, since diving access is restricted by 100% and 91%, respectively, in the coralligenous and rocky habitats, which host most of the currently appealing diving sites. At the same time, living biomass (and total biomass) decreases more than in scenario 1 (-0.6%) and less than in scenario 2 in the same climatic context of primary production reduction, reflecting the difference in FPA cover of the different scenarios. Interestingly, despite taking for granted the loss of historical ecosystems and traditional economic activities, and including primary production reduction, the total biomass increases by 0.12% in the rocky habitat, which is again a better score than what scenario 1 reached. Finally, fished biomass lowers by 14%, due to a 10% FPA share, which is in accordance with a narrative that promotes the creation of alternative economic activities. Scenario 3 produces the most striking results, since diving site access decreased sharply even though the scenario was supposed to favor it. The explanation for these results lies in a contradiction among the assumptions of the narrative. In fact, by placing 10% of the territory under full protection and locating these areas on sites of high biodiversity, FPAs are located on the very sites favored by divers. This contradiction between the goal of this narrative and the restricted access to FPA proves to be a determining factor in the success of the scenario. Retrospectively, this may seem obvious, but the exact delimitation of access rules to protected areas remains a hot topic. This scenario is of high interest because it illustrates an actual dilemma and confirms scenario 2's analysis that access rules need to be aligned and defined with precise and strategic zoning. Other hypotheses to be tested include allowing recreational diving access to FPA while extractive activities remain prohibited.

Fig. 3 The Gulf of Lions marine protected area food web. A snapshot of the trophic flows in the ecosystem during a given period describing the ecological functioning of the Natural Marine Park of the Gulf of Lions.
--- DISCUSSION Our analysis highlights the usefulness of a three-step (plus one) framework, hybridizing a collaborative modeling approach and a decision-making process (Fig. 1), as a way to identify both the future desired for an MPA and the pathways to get there. Similar collaborative approaches have been developed by the Commod community 18. A Commod-type project can focus on the production of knowledge to improve understanding of the actual SES, or it can go further and be part of a concerted effort to transform interaction practices with the resource or forms of socio-economic interactions 18. Ours is original as it aims not only to share a common understanding of the SES at present and help solve current challenges, but also to anticipate and create a shared future. Indeed, the proposed framework allows discussion of hypotheses concerning the future of the management area, which enables the reshaping of our thinking and the potential framing of new strategies. The framework acts as a dialog space for people concerned with the SES and willing to support the implementation of management plans. This dialog space offers the possibility to realize that there is a difference between the expectations or likely effects of management options and the complexity of reality. Indeed, the simulation results only sometimes illustrated the expected effects of the narratives. In this respect, our method paves the way for questioning beliefs, which did not occur in previous similar studies 10. It contributes to moving toward better-informed strategies, as recommended by Cvitanovic et al. 41. The science-policy future experiments we conducted considered place-based issues, participants' knowledge, and imaginaries. Scientists coming from ecology and the social sciences, decision-makers, and other MPA stakeholders all found the approach to be groundbreaking; by opening the box of scriptwriting, the stakeholders involved experienced a way to construct new narratives and broaden solutions for ocean use, as advocated by Lubchenco and Gaines 38. However, such an approach must be taken cautiously, as it is time-consuming for all participants. At the beginning of the project, participants shared concerns about the usefulness of a prospective approach not connected to a real political agenda. The mobilization of tools during the workshops (see Methods, Prospective workshops) was beneficial to show how much the approach was anchored in reality, and allowed for creating common ground. In the end, most participants underlined how instructive it was to meet with each other and exchange viewpoints on challenges concerning the future of the MPA rather than being consulted separately, as usually happens. Another interesting point is that the proposed framework fosters anticipatory governance capacity by testing assumptions, understanding interdependencies, and sparking discussions. It helps avoid situations in which policymakers, acting within their own jurisdictions, generate spillovers that modify the evolutionary pathways of related SESs or constrain the adaptive capacity of other policymakers 42. Lack of coordination between policy actors across jurisdictions and incomplete analysis of potential cascading effects in complex policy contexts can lead to maladaptation 42. In this regard, our framework can contribute to understanding the marine space as a "commons" 43 and to resolving issues facing an MPA as a decentralized governance institution.
Marine parks are social constructs that must build on historical legacy and be invested with new commonalities to become legitimate and formulate acceptable, sustainable policies (see Supplementary Note 1). The framework also allowed us to collaboratively explore the impacts of alternative management scenarios on marine SESs considering climate change, identifying benefits and beneficiaries, and the resulting trade-offs among the ecological functions supporting them. This experience led to interesting conclusions from the simulation results themselves. The latter showed that co-benefits may arise and be favored by a precise and coherent system of rules of access and use complementing a more physical, biological, and ecological set of measures. Our findings showed that some trade-offs might satisfy several objectives, even if not those targeted first, opening the way to potential co-benefits, as shown by Sala et al. 10. For instance, the strong protection extension in scenario 1 changed each species' biomass distribution within each ecosystem, improving the biomass of some important fished species and opening avenues to search for "win-win" strategies. Similarly, the measures allowing us to cancel the negative effect of climate change on primary production proposed in scenario 2 would increase the total biomass while maintaining biodiversity in scenario 1. More generally, this research developed a companion modeling framework that would enable us to move forward in the search for win-win strategies by pairing strategic zoning of high protection and access rules. As far as we know, the co-designed model we developed is the only agent-based model combining collaborative and ecosystem-based modeling that can be used as a lab experiment to identify co-benefiting strategies in marine spaces. Nevertheless, some improvements are needed, insofar as the model suffers from shortcomings. The first difficulty faced in the modeling exercise was the mismatch between the spatial scales of ecological and climate modeling. While the former operates at the habitat scale (1 km²), the latter provides smoothed environmental variables at a resolution coarser than 50 km², and cannot resolve, for instance, thresholds leading to life-cycle bottlenecks. This mismatch points to the need to downscale climate projections to scales relevant for ecosystem functioning. Another concern relates to improving the modeling tool by describing spatiotemporal dynamics arising from the spillover of marine organisms 44, the resilience brought by population connectivity 45, and the relocation of human activities 46. Conducting a sensitivity analysis, or building alternative output indicators, would help disentangle and clarify the different modeled effects within each scenario. There is a tension here between rewriting scenarios and preserving the collaborative scriptwriting that led to the scenarios as implemented, which reflects the richness of the stakeholders' engagement. Finally, marine management should be an inclusive, iterative process, where modeling acts as an ongoing exploratory experiment to identify the conditions under which co-benefits and win-win strategies can be realized. Hence, the modeling process facilitates interactions between participants in a transparent and open process. One can thus imagine working sequentially until satisfactory results are obtained for every stakeholder involved.
This search for a hybridized collaboration framework in the construction of policies proves particularly fruitful in creating a shared future and looking for sustainability.

Fig. 4 Simulation results for scenario 1 (S1), scenario 2 (S2) and scenario 3 (S3) in each ecosystem. These three scenarios correspond to the narratives that emerged from the stakeholders' groups (SG) (see Table 1). Scenarios 1 and 3 make no special provision for the effects of climate change and therefore include an assumption about the effects of climate change (CC) in the form of reduced primary production; they are rated "+ CC". On the other hand, narrative 2, and thus scenario 2, provides for combating climate change; it therefore does not include an assumption regarding the impacts of climate change and is scored "no CC". S1 + CC: enhancing total biomass with primary production decreasing due to climate change. S2 no CC: enhancing harvested biomass without decreasing primary production. S3 + CC: enhancing diving site access with primary production decreasing due to climate change. a Evolution of the total biomass, representing the evolution of the sum of the biomass of all species in each ecosystem between 2018 and 2050. b Evolution of the living biomass, representing the evolution of the difference between the total biomass and the fished species biomass in each ecosystem between 2018 and 2050. c Evolution of the fished biomass, representing the evolution of the sum of the biomass of all fished species in each ecosystem between 2018 and 2050. d Evolution of diving site access in each ecosystem between 2018 and 2050. e Evolution of the share of fully protected areas in each ecosystem between 2018 and 2050.

--- METHODS --- Prospective workshops Each of the three groups focused on fostering one of the three ecosystem functions considered: production of total biomass, fish stock level for fishing activities, or potential access to diving sites. These functions allowed us to work on interactions between biodiversity conservation and economic development. The proxies used for these ecosystem functions are also aligned with those used in the park management plan, which helps science-policy dialog. To reach the objectives of the narratives, participants were especially requested to give indications about whether or not to consider climate change impacts, the evolution of fishing effort, spatial sea-users' rights (FPA), facilities planning (artificial reefs, floating wind turbines, harbors and breakwaters, multipurpose facilities) and ecological engineering (reintroduction of species), i.e., the main features of the social-ecological representation on which we all agreed (see Fig. 2). To help envision disruptive changes, we decided to draw on possible future land/sea-scapes of the MPA. Here, land/sea-scapes are understood in several respects: coastal viewpoint, marine natural or artificial habitats, and above/undersea marine space occupation by humans and non-humans.
To do so, we introduced visual tools during the prospective workshops (see Supplementary Note 1): (i) an archetypal map of the MPA including typical features to recall the main territorial issues without being trapped in overly specific considerations: a city by the sea, the mouth of a river, an estuary, a rocky coast, a sandy coast; (ii) tokens related to the available means to reach the narratives' objectives: ecosystem status (primary production), fisheries evolution (fishing effort), facilities planning & ecological engineering (aesthetic artificial reefs, floating wind turbines, harbors and breakwaters, reintroduction of species), and sea-users' access and regulation (recreational uses and fully protected areas). Tokens were used to inform participants about the localization and intensity of each item, which helped shape the participants' vision of the future and link it with the simulation model; (iii) cards describing real-world examples of what the tokens stand for. They were used to broaden the participants' thinking scope by introducing stories set in foreign places and at different times. Here, they helped illustrate alternative options among the scenarios.
Projecting the combined effect of management options and the evolving climate is necessary to inform shared sustainable futures for marine activities and biodiversity. However, engaging multisectoral stakeholders in biodiversity-use scenario analysis remains a challenge. Using a French Mediterranean marine protected area (MPA) as a marine social-ecological case study, we coupled co-designed visioning narratives at horizon 2050 with an ecosystem-based model. Our analysis revealed a mismatch between the stated vision endpoints at 2050 and the model predictions for the narratives' objectives. However, the discussions that arose from the approach opened the way for previously unidentified transformative pathways. Hybridizing research and decision-making with iterative collaborative modeling frameworks can enhance adaptive management policies, leveraging pathways toward sustainability.
--- Overview of end-to-end models End-to-end models represent the different ecosystem components from primary producers to top predators, linked through trophic interactions and affected by the abiotic environment 47. They allow the study of the combined effects of fishing and climate change on marine ecosystems by coupling hydrodynamic, biogeochemical, biological and fisheries models. Some are suited to explore the impact of management measures on fisheries dynamics with an explicit description of fishing stocks' spatial and seasonal dynamics, fishing activities and access rights (ISIS-Fish) [48][49][50] but they do not represent environmental conditions or trophic interactions, so their capacity to simulate the impact of fisheries management on ecosystem dynamics and possible feedbacks is limited. Others explicitly model trophic interactions between uniform ecological groups with biomass flows based on diet matrixes (Ecopath with Ecosim 51, Atlantis). They rely on the assumption that major features of marine ecosystems depend on their trophic structure; thus, there is no need to detail each species to describe the state and dynamics of the ecosystem. They can be used to explore the evolution of the system under variations in biological or fishery conditions but may lack flexibility to simulate regime shifts due to radical variations in such conditions. Some others do not set a priori trophic interactions, which are considered too rigid to explore the nonlinear effects of both fishing and change in primary production. They describe predation as an opportunistic process that depends on spatial co-occurrence and size adequacy between a predator and its prey (OSMOSE). Due to the simulation of emergent trophic interactions, it is particularly relevant to explore the single or combined effects of fishing, management and climate change on ecosystem dynamics. However, they do not properly describe fisheries dynamics (fixed fishing mortality) and must be coupled with fleet dynamics models (dynamic effort allocation) 52. --- Ecosystems description First, we selected three publications describing the specific ecosystem functioning associated with marine park habitats: the Mediterranean seagrass ecosystem 53, the coralligenous ecosystem 54 and the algae-dominated rocky reef ecosystem 55. Second, we selected two publications using the same mass-balance model (EwE) to analyze the overall ecosystem structure and fishing impacts in the Gulf of Lion 56 and the northwestern Mediterranean Sea 57. They both provide a snapshot of the trophic flows in the ecosystem during a given period, which is based on a consistent set of detailed data for each group of species: biomass density, food requirements (diet matrix), mortality by predation and mortality by fishing.
The former focuses on the Gulf of Lion but depicts a larger area than that of the park in terms of distance to the shore and especially depth (-2500 m against -1200 m). Thus, the rocky reef ecosystem that exists within the park is "masked" by the prevalence of sandy/muddy habitats. The latter depicts a wider part of the Mediterranean Sea but is comparable to the park in terms of depth (-1000 m against -1200 m) and provides useful information on the rocky reef ecosystem. Each ecosystem represents the following proportion of the whole system: muddy = 85.57%, sandy = 12.23%, rocky = 1.75%, posidonia = 0.23% and coralligenous = 0.22%. For "rocky", "posidonia" and "coralligenous", we selected corresponding ecological groups and associated data (EwE) from functional compartments (EBQI). For "sandy&muddy", we created an ad hoc conceptual model of the ecosystem functioning from the Gulf of Lion trophic chain (EwE). --- Food-web modeling For each related group of species, the variation in the average density results from the equal combination of two potential drivers on a yearly basis: the abundance of prey (bottom-up control, positive feedback) and the abundance of predators (top-down control, negative feedback). To do so, we use data from the EwE publications listed above: biomass density, food requirements (diet matrix), mortality by predation and by fishing (see Supplementary Tables 11-14). For one species, the white gorgonian (Eunicella singularis), we use site-specific data produced during the RocConnect project (http://isidoredd.documentation.developpementdurable.gouv.fr/document.xsp?id=Temis-0084332). To model the effect of prey abundance on their predators, the biomass of each group of species is described as the sum of its annual food requirements, detailing each prey (see Supplementary Tables 1-4). As long as nothing happens to a prey species, there is no change in prey abundance, and the biomass of each predator species remains the same. If anything happens to a prey species, this translates into that species' density, which then reflects its availability for feeding predators and eventually affects the biomass of predator species. The effect on the biomass of predator species is proportional to the change in prey species density and to the specific weight of the prey species in each predator's diet. In other words, the more prey there is at the beginning of the period, the more of its predators there could be at the end. To model the effect of predator abundance on their prey, we follow the reciprocal reasoning of the above mechanism. Here, the biomass of each group of species is described as the sum of its annual catches by each other species (see Supplementary Tables 1-4). Here again, as long as nothing happens to a predator species, there is no change in predator abundance, and the biomass of each prey species remains the same. If anything happens to a predator species, this translates into that species' density, which is then reflected in its food requirements and eventually affects the biomass of prey species. However, this time, the effect on the biomass of prey species is inversely proportional to the change in predator species density and to the specific weight of the predator species in each prey's mortality. In other words, the more predators there are at the beginning of the period, the less prey there could be at the end. There are only two exceptions to this rule: phytoplankton and detritus.
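Before turning to those two exceptions, the yearly update rule described in this subsection can be sketched in a few lines of code. This is an illustrative reading of the verbal description above, with our own function and variable names rather than the authors' implementation: the prey-driven term is weighted by diet shares, the predator-driven term by predation-mortality shares, and the two are combined with equal weight.

```python
# Illustrative sketch of the yearly food-web update described above (our
# naming, not the authors' code). Biomass responds positively to the relative
# change in prey (weighted by diet shares) and negatively to the relative
# change in predators (weighted by predation-mortality shares), with the two
# drivers combined in equal proportion.

def relative_change(current: float, previous: float) -> float:
    """Relative variation of a biomass density between two consecutive years."""
    return (current - previous) / previous if previous > 0 else 0.0


def update_biomass(prev, curr, diet, pred_mort):
    """prev, curr: dicts species -> biomass density for years t-1 and t.
    diet[s][p]: share of prey p in the diet of species s.
    pred_mort[s][q]: share of predator q in the predation mortality of s."""
    updated = {}
    for s, biomass in curr.items():
        bottom_up = sum(share * relative_change(curr[p], prev[p])
                        for p, share in diet.get(s, {}).items()
                        if p in prev and p in curr)
        top_down = sum(share * relative_change(curr[q], prev[q])
                       for q, share in pred_mort.get(s, {}).items()
                       if q in prev and q in curr)
        # Equal combination: prey availability pushes biomass up,
        # predator abundance pushes it down.
        updated[s] = max(0.0, biomass * (1.0 + 0.5 * bottom_up - 0.5 * top_down))
    return updated


if __name__ == "__main__":
    prev = {"phytoplankton": 100.0, "zooplankton": 10.0, "sardine": 2.0}
    curr = {"phytoplankton": 95.0, "zooplankton": 10.0, "sardine": 2.0}
    diet = {"zooplankton": {"phytoplankton": 1.0}, "sardine": {"zooplankton": 1.0}}
    pred_mort = {"phytoplankton": {"zooplankton": 1.0}, "zooplankton": {"sardine": 1.0}}
    print(update_biomass(prev, curr, diet, pred_mort))
```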
The production of phytoplankton relies on photosynthesis, which requires water, light, carbon dioxide and mineral nutrients. These elements are beyond our representation, so we impose the value of the phytoplankton biomass density at each time step. Additionally, the value of phytoplankton biomass density is the variable used to represent the expected effect of climate change on primary production. The production of detritus comes from three sources: natural detritus, discards, and bycatch of sea turtles, seabirds and cetaceans. In other words, the amount of detritus depends on the activity of other marine organisms. Here, we model the amount of detritus as a constant share of the total annual biomass. --- Rationale for ABM Most studies on MPAs analyze how they succeed from an ecological point of view 56. Few others argue about the conditions under which they succeed from a socio-economic or cultural point of view (refs. 3,[58][59][60][61][62][63] ). Little work embraces both aspects of MPAs [64][65][66]. Currently, agent-based models (ABMs) are convenient methods to integrate ecological and socioeconomic dynamics and are already used by researchers in ecology or economics for ecosystem management [67][68][69]. ABMs allow the consideration of any kind of agent with different functioning and organization levels 69,70, including human activities, marine food webs and facilities planning. ABMs are also usually spatially explicit, which favors connecting with narratives that are spatially explicit too. Basically, an agent is a computer system that is located in an environment and that acts autonomously to meet its objectives. Here, environment means any natural and/or social phenomena that potentially have an impact on the agent. For these reasons, ABMs are convenient methods to deal with SESs. The possibility of providing each kind of agent with a representation of the environment, according to specific perception criteria, is particularly interesting for applications in the field of renewable resource management 19. ABMs developed for SES management usually integrate an explicit representation of space: a grid with each cell corresponding to a homogeneous portion of space. Time is generally segmented into regular time steps. The simulation horizon (total time steps) corresponds to the prospective horizon. --- Modeling of drivers and indicators of ecosystem status In the Mediterranean Sea, the current scientific consensus outlines a reduction in primary production and changes in species composition in the ecosystems as an effect of climate change. However, the trophic network re-organization linked to these species' composition changes is still an open debate. Hence, to model the expected effect of climate change on the ecosystems of the Natural Marine Park of the Gulf of Lions, we build on IPCC projections that consider a 10% to 20% decrease in net primary production at low latitudes by 2100 due to reduced vertical nutrient supply 71,72. Indeed, combined consequences of climate change such as water temperature increase and hydric stress act synergistically to reduce primary production. The former reinforces the stratification of surface waters, resulting in a reduction in the supply of nutrients, which leads to a decrease in primary production. The latter also leads to a decrease in nutrients delivered from the rivers to the sea, impacting primary production. Applied to our simulation horizon, this can be translated into a steady annual decrease in phytoplankton biomass density amounting to a reduction of up to 4% between 2018 and 2050.
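A minimal sketch of these two imposed compartments follows, under our reading of the text: phytoplankton biomass density is forced to decline linearly so that it is reduced by up to 4% between 2018 and 2050 in the climate-change runs, and detritus is set to a constant share of total annual biomass. The function names and the detritus share are placeholders, not values from the study.

```python
# Hedged sketch (our reading, placeholder values) of the two imposed
# compartments: a linearly declining phytoplankton forcing under climate
# change, and detritus as a constant share of total annual biomass.

START_YEAR, END_YEAR = 2018, 2050
TOTAL_PHYTO_DECLINE = 0.04   # up to a 4% reduction by 2050 in climate-change runs
DETRITUS_SHARE = 0.05        # assumed constant share of total biomass (illustrative)


def phytoplankton_density(year: int, baseline: float, climate_change: bool) -> float:
    """Imposed phytoplankton biomass density for a given simulation year."""
    if not climate_change:
        return baseline
    progress = min(max((year - START_YEAR) / (END_YEAR - START_YEAR), 0.0), 1.0)
    return baseline * (1.0 - TOTAL_PHYTO_DECLINE * progress)


def detritus_biomass(total_annual_biomass: float) -> float:
    """Detritus modeled as a constant share of the total annual biomass."""
    return DETRITUS_SHARE * total_annual_biomass


if __name__ == "__main__":
    for yr in (2018, 2034, 2050):
        print(yr, round(phytoplankton_density(yr, 100.0, climate_change=True), 2))
```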
To model fisheries, we use the same rule as for the effect of predator abundance on their prey, but here it represents the effect of fishing effort on harvested species. As our entry point is traditional small-scale multispecies fisheries, we do not directly modify fishing effort by species but rather by fishing gear 56. A change in the fishing effort of a given fishing gear first affects the total biomass of its harvested species and is then allocated among species according to the fishing ratios of the base year. Thanks to the EwE publication on the Gulf of Lion and the included data on landings by gear and by species 56, we were able to distinguish four fishing gears: trawls, tuna seiners, lamparos (a traditional kind of night-time fishing using light to attract small pelagics), and other artisanal fishing gear. This does not include recreational fishing. To spatialize fisheries, we do not associate each fishing gear with specific locations or habitats: fishing effort by fishing gear is the same all over the area, with two exceptions. The first refers to FPAs, where any kind of fishing is forbidden (Cerbère-Banyuls Natural Marine Reserve). The second refers to trawls and artisanal fisheries, whose activity is constrained by practical or legal concerns. First, it is known that artisanal fisheries work mostly near the coast, up to a maximum distance of 6 nautical miles and a maximum depth of -200 meters. Second, trawling is prohibited between 0 and 3 nautical miles (2013 Trawl Management Plan). Here, we do not model transfer effects between sites or towards new sites.
To model diving, we use a GIS layer indicating the most popular diving sites in the park. With each diving site, we associate an annual number of visitors that fits known trends. Here, changes in diver attendance depend on the extent of fully protected areas prohibiting this practice. Here again, we do not model transfer effects between sites or towards new sites.
To model FPAs and access rights, we use a GIS layer indicating the boundaries of the existing FPA (Cerbère-Banyuls Natural Marine Reserve), where fishing is prohibited. To model the creation of new FPAs, we target important natural areas. To do so, we use a GIS layer corresponding to a map from the park management plan that indicates important natural areas (see Supplementary Fig. 1a,b). More precisely, the map scales areas according to their natural value using a "heat gradient" (see the management plan for details). To reach the level of protection expected in each scenario, we lowered the level of natural value required to be designated an FPA every 5 years between 2020 and 2030. Here, these levels of natural value are chosen to get closer to the expected level of protection. Areas to be protected are designated according to their natural value, but the rules of attribution change slightly among scenarios. When protecting a large portion of the MPA (scenario 1, Supplementary Fig. 2), there is no need to first target a specific area: one is sure that all areas of great natural value will be included in the protected perimeter. Here, we seek to make progress on the overall MPA, and the only criterion for being designated a protected area is the level of natural value. When protecting a small portion of the MPA (scenario 2), one may want to make sure to protect consistent areas of great natural value rather than scattered micro hotspots. To do so, we target the existing Marine Reserve and let new protected areas develop in its surroundings.
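Before turning to the medium-protection scenario, here is a minimal sketch of the gear-based fishing rule described at the start of this section: a change in a gear's effort scales that gear's total catch, which is then allocated among species according to base-year fishing ratios. The gear names follow the text, but the landings values and species are illustrative placeholders, not the EwE data.

```python
# Minimal sketch of effort change by gear, then allocation by base-year fishing ratios.
BASE_LANDINGS = {                      # tonnes per year, by gear and species (illustrative)
    "trawls": {"hake": 120.0, "mullet": 30.0},
    "artisanal": {"seabream": 15.0, "octopus": 10.0},
}

def catches_after_effort_change(effort_multiplier):
    """effort_multiplier: gear -> relative effort (1.0 = base year). Returns catch by species."""
    catches = {}
    for gear, landings in BASE_LANDINGS.items():
        gear_total = sum(landings.values())
        new_total = gear_total * effort_multiplier.get(gear, 1.0)
        for species, base_catch in landings.items():
            share = base_catch / gear_total          # base-year fishing ratio
            catches[species] = catches.get(species, 0.0) + new_total * share
    return catches

# Example: trawling effort halved, artisanal effort unchanged.
print(catches_after_effort_change({"trawls": 0.5, "artisanal": 1.0}))
# {'hake': 60.0, 'mullet': 15.0, 'seabream': 15.0, 'octopus': 10.0}
```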
When protecting a medium portion of the MPA (scenario 3), we use a combination of the two previous rules: in 2020, we target the surroundings of the Marine Reserve to be sure to protect this area of greatest natural value, while in 2025 and 2030, we also let protected areas develop elsewhere, according to the local level of natural value. Concerning access rights, fully protected areas were intended as "no go, no take" zones/integral reserves during the workshops. Thus, we prohibit fishing and diving in the corresponding perimeters.
To model facilities planning, we select artificial reefs and floating wind turbines. We do not represent harbors and breakwaters, as during the workshops they were mostly associated with sea level rise. This is a major issue but beyond the scope of this ecosystem-based modeling.
To model ecological engineering and artificial reef implementation, we use a GIS layer indicating their location, and we assume that they are comparable to natural rocky reefs 73. Thus, existing artificial reefs are associated with the same food web as the Rock ecosystem cited above. According to expert opinion, the occupancy rate of existing artificial reef villages inside the park is approximately 12%. To model their densification, we impose a steady annual increase in the biomass of each species until it reaches the equivalent of a 50% occupancy rate by 2050. To model the installation of new reefs in new villages, we replace a portion of sandy habitat with rocky habitat corresponding to an occupancy rate of 50%. Then, we describe a three-step colonization by marine organisms: (i) a pioneer phase of 1 year with the development of phytoplankton, zooplankton, detritus, macroalgae and worms; (ii) a maturation phase of 2-5 years with the development of suprabenthos, gorgonians, benthic invertebrates, sea urchins, octopuses, bivalves and gastropods; (iii) a completion phase after 5 years, with the development of salema, sparidae, seabream, conger, seabass, scorpion fish, and picarel 73.
To model floating wind turbines, we create a GIS layer from a map used by the management team of the park to initiate debates with stakeholders on possible locations of already approved experimental turbines and of possible new commercial ones. During the workshops and the project team meetings, two possible adverse effects of floating turbines on the ecosystem were discussed. Some participants considered that the floating base and the anchorages would have a sort of "fish aggregating device" effect, while fishing would be prohibited in the location area. Others thought that antifouling paint would prevent such an effect, while ultrasound from the operating turbines would disturb cetaceans. Here, we do not model these alternative effects because of time constraints and, to our knowledge, a lack of scientific evidence and data. We model their possible progressive development every five years between 2020 and 2045 around the "overall" and "most acceptable" areas designated by the map, using a propagation rule in the surroundings of the already approved experimental turbines.
To model multipurpose facilities, we add attendance indicators to artificial reefs and floating turbines in some cases. In scenarios 2 and 3, the development of a commercial wind farm is associated with the development of a dedicated tourist activity consisting of boat visits to the area that explain its purpose and possible effects on ecosystems.
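Before detailing the attendance assumptions for these multipurpose facilities, here is a minimal sketch of the densification ramp for existing reef villages described above. The approximately 12% current occupancy and the 50% target by 2050 come from the text; the 2018 start year and the compound (multiplicative) form of the "steady annual increase" are our assumptions.

```python
# Minimal sketch: scale reef-species biomass from ~12% occupancy to the 50% equivalent by 2050.
START_YEAR, TARGET_YEAR = 2018, 2050
CURRENT_OCC, TARGET_OCC = 0.12, 0.50

# Constant annual growth factor taking occupancy from 12% to 50% over the period.
ANNUAL_FACTOR = (TARGET_OCC / CURRENT_OCC) ** (1.0 / (TARGET_YEAR - START_YEAR))

def densified_biomass(base_biomass, year):
    """Reef-species biomass, increased each year and capped at the 50% occupancy equivalent."""
    scale = min(ANNUAL_FACTOR ** (year - START_YEAR), TARGET_OCC / CURRENT_OCC)
    return base_biomass * scale

print(round(ANNUAL_FACTOR, 4))                 # ~1.0456, i.e. roughly +4.6% per year
print(round(densified_biomass(10.0, 2050), 1)) # ~41.7, i.e. 10.0 * (0.50 / 0.12)
```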
With each turbine, we associate an annual number of visitors deduced from assumptions on the number of opening days per year, the number of visits per day, and the number of passengers per visit. Here, visitor attendance follows from the development of a commercial wind farm. In scenario 3, a few artificial reefs are developed with both ecological and esthetic aims and are associated with the development of a dedicated diving activity. With each such reef, we associate an annual number of divers deduced from assumptions on the number of opening days per year, the number of visits per day, and the number of divers per visit. Here, visitor attendance follows from recreational reef development. Two esthetic artificial reef villages are developed in 2025 and 2035.
To model the reintroduction of species, we focus on one heritage species in scenario 1 (grouper) and on two commercial species in scenario 2 (seabass and dentex). Concerning sites of reintroduction, we targeted rocky ecosystems and specifically existing artificial reef villages. Each year between 2020 and 2025, we repopulate with juveniles and adult individuals expressed in biomass equivalents. Here, priority is given to meeting the food needs of the reintroduced species, corresponding to their estimated biomass levels, even if at the expense of already established species. As the biomass levels of reintroduced species are of the same order as those of the top predators already represented in the rock ecosystem, this hypothetical situation calls for a more complex representation of their competition for food in later work.
--- DATA AVAILABILITY
The data that support the findings of this study are available in the Supplementary Materials.
--- CODE AVAILABILITY
The code that supports the findings of this study is available on GitHub at: https://github.com/elsamosseri/SAFRAN.
--- AUTHOR CONTRIBUTIONS
All authors wrote and reviewed the main text. C.B., A.S. and X.L. designed Fig. 1. E.M. and X.L. designed Figs. 2 and
--- COMPETING INTERESTS
The authors declare no competing interests.
--- ADDITIONAL INFORMATION
Supplementary information The online version contains supplementary material available at https://doi.org/10.1038/s44183-023-00011-z. Correspondence and requests for materials should be addressed to C. Boemare.
Projecting the combined effect of management options and the evolving climate is necessary to inform shared sustainable futures for marine activities and biodiversity. However, engaging multisectoral stakeholders in biodiversity-use scenario analysis remains a challenge. Using a French Mediterranean marine protected area (MPA) as a marine social-ecological case study, we coupled codesigned visioning narratives at horizon 2050 with an ecosystem-based model. Our analysis revealed a mismatch between the vision endpoints stated for 2050 and the model projections of the narrative objectives. However, the discussions that arose from the approach opened the way for previously unidentified transformative pathways. Hybridizing research and decision-making with iterative collaborative modeling frameworks can enhance adaptive management policies, leveraging pathways toward sustainability.
Introduction
Childhood overweightness and obesity prevalence have increased at an alarming rate and have become one of the most serious global health challenges [1]. South Africa is not immune to these childhood health challenges: according to the outcomes of the South African National Health and Nutrition Examination Survey (SANHANES-1), the prevalence of overweightness and obesity in children aged 10 to 14 years is 12.1 and 4.2 percent, respectively, while in children aged 15 to 17 years it is 13.3 and 4.8 percent, respectively [2]. Unhealthy food and beverage advertising on television (TV) has been implicated in the development of childhood overweightness and obesity [3]. For instance, TV food and beverage advertising exposure has been shown to influence the amount of food that children who watch a lot of TV consume [4,5]. The most advertised food and beverages on TV tend to be high in fat, sugar and salt and low in essential minerals, vitamins, amino acids and fibre [6]. Children also seem to be more susceptible than adults to the persuasive approach used by TV marketers when advertising food and beverages [4,7]. Hence, children who watch food and beverage advertisements (ads) tend to choose these foods and beverages thinking they are healthy, with little interest in knowing their nutrient content [7,8]. Food marketers have typically used a mixture of techniques to increase children's desire for unhealthy food and beverages [9,10]. An example of these techniques is the use of misleading claims that portray specific food and beverages as bringing about enhanced performance (e.g., in sport, at school) [9]. Others have utilised cartoon-related characters that are known to increase brand recognition among children [11]. Advertisements also portray people who make unhealthy food choices as appearing to have desirable outcomes [12]. Given the extensive evidence of the negative impact of food advertising on children, the World Health Organisation (WHO) has advocated control of TV food marketing, especially marketing directed towards children [13,14]. This could support the creation of a food environment that promotes healthy dietary choices. The WHO also proposed that countries should develop and adopt policies to control the marketing of food to children, with a specific emphasis on the reach, frequency, creative content, design and execution of the marketing message [14]. The WHO has argued that policies initiated against the negative influence of marketing, such as TV food advertising, need to be comprehensive to be effective [14]. In South Africa, the number of peer-reviewed studies investigating children's exposure to unhealthy food ads is limited. A study conducted by Van Vuuran between 2003 and 2005 [15] estimated that children were exposed to a daily average of 24 minutes of advertising. Subsequently, the South African government proposed better control of TV food advertising to children in response to the call by the WHO [16]. This led to the development of a code for advertising by the Department of Health and the major food corporations' consortium [17,18]. The food and beverage advertising code was formally initiated by the Advertising Standards Authority (ASA) in 2008. This code led to a pledge (i.e., the South African Marketing to Children pledge) to adhere to the code, signed by members of the major food corporations in 2009.
The core principle was to publicly pledge "to commit to marketing communications to children who are twelve years old and under, to promote healthy dietary choices and healthy lifestyles" [17]. This pledge, as the sole form of regulation to control food ads, put South Africa in the group of countries where food industries regulate themselves (i.e., self-regulation). This type of regulation is widely known for non-compliance by food marketers [10,19]. It is therefore not surprising that, even after the signing of the pledge, studies found poor adherence to the TV advertising guidelines in South Africa, with major infringements of the pledge identified [20-22]. According to the aforementioned studies, unhealthy food advertising continues to be prevalent in South Africa. The most frequent ads shown during periods when children are likely to be watching TV are for desserts and sweets, fast foods, hot sugar-sweetened beverages, starchy foods and sweetened beverages, as found by Mchiza et al. [21]. Additionally, 67% of alcohol-related ads are shown during family viewing time [21]. Based on these findings by Mchiza et al. [21], the following recommendations were made to the Department of Health (DoH) and the ASA of South Africa in 2014 [23]:
1. The prohibition of the advertising of foods and beverages high in fat, sugar and salt, following the WHO recommendations;
2. The prohibition of alcohol ads, especially when children are watching;
3. The restriction of the use of advertising techniques that appeal to children. Ads should not use cartoon characters and/or animations or include promotional offers and gifts or tokens.
No research has assessed the rate of advertising of unhealthy food and beverages to children since the enactment of the 2014 food advertising recommendations by the South African DoH and ASA. Furthermore, no study has investigated the compliance of food marketers with the South African Marketing to Children pledge [17]. To address these gaps, this study investigated the extent and nature of advertising of unhealthy versus healthy food and beverages to children by the major South African TV broadcasting channels.
--- Methods and Procedures
The categories and techniques employed in this study were adapted from the International Network for Food and Obesity/Non-Communicable Diseases Research, Monitoring and Action Support (INFORMAS) module relating to the monitoring and benchmarking of unhealthy food promotion to children [24]. This approach was evaluated as adequate to assess the frequency and level of exposure of population groups (especially children) to food promotions, the persuasive power of techniques used in promotional communications, and the nutritional composition of promoted food products. The South African Nutrient Profiling Model (SA-NPM) was used for the contextual adaptation of the tool [25].
--- Channels and Time of Broadcasting
Food ads were recorded from the four major South African TV channels. Recordings were done from 15:00 to 19:00 (i.e., 4 h) for seven consecutive days from 23 April 2017 to 29 April 2017. This resulted in a total of 112 broadcasting hours for the four stations together.
--- Description of the Target Audience of Broadcasting Channels
Television in South Africa is funded from license fees and advertising and broadcasts on four free-to-air channels (South African Broadcasting Corporation (SABC) 1, 2 and 3, and Enhanced Television (e-TV)) with a mixed entertainment and public service mandate.
According to the SABC segmentation, all four TV channels focus on the same target audience during the following intervals. From 15:00 to 17:00 they target children; during this time, the child-focused programs shown are infomercials, educational programs and cartoons. Following this time, from 17:00 to 19:00, the target becomes the whole family, including children. During this period, talk shows and soap operas are shown (Table 1). For the purposes of the current study, these two time periods form the stipulated period when children are expected to be part of the TV audience.
--- Selection and Coding Procedures
The data were collected manually by recording the live video broadcast on the four TV stations concurrently within the stipulated period. A TV tuner (WinTV tuner), a Windows Media Centre compatible with Windows 10 and storage devices were the tools employed to carry out the task. Coding was done by two independent researchers (a nutrition expert with a PhD in nutrition and dietetics and a postgraduate researcher with a Master of Public Health specializing in nutrition). Both researchers independently viewed a playback of the recorded videos, one TV station at a time. In the case of any disagreement, recoding was done until 100% agreement was reached. No distinction was made between unique and repeated ads. Ads were selected if they fell into one of the following categories and were coded accordingly: (i) healthy food or beverage, (ii) unhealthy food or beverage, (iii) neutral (Table 2). Healthy foods were defined as core foods that are nutrient-dense and recommended for daily consumption. Unhealthy foods were non-core foods that are high in undesirable nutrients such as fat, refined sugars and salt. The neutral category consisted of food and beverage-related items that could not explicitly be labelled as healthy or unhealthy, such as baby and toddler milk formula, tea and coffee. For each ad, the following information was collected: (i) television channel (TV station being recorded); (ii) name and type of program in which the ad was shown; (iii) date and time of the day when the ad was shown; (iv) assumed target audience of the ad; (v) company placing the ad; (vi) description of the product advertised; (vii) brand benefit claim (claims other than those relating to health that were directed towards developing positive perceptions about a company's product that might influence brand attachment), if any; (viii) description of health claim (any claim of the food or its constituent having an effect on health or being healthy or having a nutritional property) [24], if any; (ix) power strategy (a promotional character, event or person employed to increase the persuasive power of an advertisement) [24], if any; (x) duration of the ad. Brand benefit claims, health claims and power strategies constituted the persuasive techniques that were studied. This study focused only on child and family viewing times.
--- Statistical Analysis
A descriptive analysis was done using measures of central tendency, standard deviations (SD) and ad rates for different ad subgroups (e.g., TV channel, viewing time). The differences between the healthy and unhealthy categories were calculated using a 1-sample proportions test with Yates's continuity correction. For small samples, an exact binomial test was used. The analysis was done using R software [26].
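As an illustration of these calculations, the sketch below reproduces the overall ad rate and the healthy-versus-unhealthy comparison using the counts reported later in the Results (582 food ads over 112 channel-hours; 342 unhealthy versus 90 healthy ads). It uses Python's scipy as a stand-in for the R routines actually used, and the Yates-corrected one-sample proportion test is written out explicitly.

```python
# Minimal sketch of the descriptive rates and the 1-sample proportion tests.
from scipy.stats import chi2, binomtest

channel_hours = 4 * 4 * 7                  # 4 channels x 4 h/day x 7 days = 112 ch-h
food_ads, unhealthy, healthy = 582, 342, 90

print(round(food_ads / channel_hours, 1))  # overall food-ad rate: ~5.2 ads/ch-h

# 1-sample proportion test with Yates's continuity correction: is the share of
# unhealthy ads among healthy + unhealthy ads different from 0.5?
n, x, p0 = unhealthy + healthy, unhealthy, 0.5
stat = ((abs(x - n * p0) - 0.5) ** 2) / (n * p0) + \
       ((abs((n - x) - n * (1 - p0)) - 0.5) ** 2) / (n * (1 - p0))
print(chi2.sf(stat, df=1))                 # p-value, far below 0.001

# Exact binomial test, as used for small samples.
print(binomtest(x, n, p=p0).pvalue)
```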
--- Ethical Considerations
The Humanities and Social Science Research Ethics Committee of the University of the Western Cape approved the methodology and granted the current research an ethics exemption (Reference Number: HS19/6/6). The project is also registered with the University of the Western Cape's Higher Degrees Committee. The data did not include any personal information.
--- Results
--- General Description
A total of 1629 ads were shown on the four TV channels combined, of which 582 (35.7%) related to food and beverages. This corresponded to an average advertisement rate of 5.2 ads per channel-hour (ads/ch-h). Unhealthy food/beverage items constituted more than half (342: 58.8%) of the total food/beverage-related ads, followed by neutral ads (150: 25.8%) and healthy food/beverage ads (90: 15.5%). The mean duration of the ads was 29.4 s and ranged from 6 to 45 s. The highest ad rate (6.1 ads/ch-h) was recorded during family viewing time (Table 1). Unhealthy foods were advertised more than three times as often as healthy foods during child viewing time and more than four times as often during family viewing time. The rates of ads for unhealthy food and beverages were significantly higher (p < 0.001) than those for healthy food and beverages during both the child and family viewing times.
--- Food and Beverage Advertisements during Child and Family Viewing Time
During child and family viewing time (Table 2), supermarket-related ads, in which only unhealthy foods were advertised, appeared most often (0.66 ads/ch-h). This was followed by fast food-related ads with unhealthy and neutral options advertised (0.55 ads/ch-h). Alcohol was advertised at 0.25 ads/ch-h. In the neutral category, ads about vitamin/mineral or other dietary supplements and sugar-free chewing gum had the highest rate (0.43 ads/ch-h) (Table 2). South African Broadcasting Corporation 3 and e-TV had higher advertising rates than the other channels (7.4 ads/ch-h and 5.3 ads/ch-h, respectively) (Table 3). Unhealthy foods were advertised significantly more often than healthy foods, especially by e-TV. The proportion of ads with a brand benefit claim was 96%. Moreover, ads frequently used more than one brand benefit claim. As shown in Table 4, overall, brand benefit claims were used at an average rate of 3.1 claims per channel-hour (claims/ch-h) for healthy foods and 10.1 claims/ch-h for unhealthy foods. Claims promoting children or family as the users of the product, emotive claims and claims using sensory-based characteristics were the top three claims amongst the healthy and unhealthy categories (Table 4). The rates of using brand benefit claims were significantly higher (p < 0.001) for unhealthy versus healthy products in all categories, with the exception of the Suggested use (great for lunchboxes) category. The proportion of food ads with health claims was 45%. A few of these ads made more than one health claim. The total rate of health claims recorded for the unhealthy food category (1.7 claims/ch-h) was significantly higher than the rate for the healthy food category (1.2 claims/ch-h) (Table 5). The claim of the product containing a health-related ingredient was most frequently used (0.7 claims/ch-h) in unhealthy foods. The rates of using health claims were significantly higher (p < 0.001) in unhealthy than healthy products for the Health-related ingredient and Nutrient comparative claim categories. Finally, 34% of ads used power strategies. A few of these ads used more than one power strategy (Table 6).
Power strategies were used significantly more often in ads for unhealthy food (2.2 power strategies per channel-hour) than in ads for healthy foods (0.3 power strategies per channel-hour). Celebrity endorsement, cartoon/company-owned characters and promoting the food as being child-tailored (e.g., using an image of a child) were the most used power strategies. The rates of using power strategies were significantly higher in unhealthy than healthy food products where celebrated individuals and sports events were used (p < 0.01).
--- Discussion
This study investigated the exposure of South African children to unhealthy food and beverage ads. We identified 582 ads for food and beverages within the child and family viewing time. The overall rate of food and beverage-related ads was found to be 5.2 ads/ch-h. The four free-to-air TV channels advertised unhealthy foods at significantly higher rates than healthy foods. Brand benefit claims and power strategies had significantly higher rates of use in unhealthy than healthy food ads.
--- Advertisements for Unhealthy Food and Beverages during Child and Family Viewing Times
There were almost four times as many ads for unhealthy foods (342: 58.8%) as for healthy foods (90: 15.5%). This may predispose children who watch TV to choose unhealthy foods that are high in fat, salt and sugar, due to their vulnerability to TV ads [5,6]. This violates the South African Marketing pledge, which suggests a commitment by industry not to market food to children unless the aim is to advocate healthy dietary choices [17]. This is an indication that children in South Africa are at an increased risk of exposure to unhealthy food ads. Studies conducted in other countries have also found unhealthy foods to be proportionally more advertised than healthy foods. In Turkey, for instance, the number of ads for fast foods and beverages was found to be significantly higher than that for healthy food products [27]. In Thailand, the average ad rate for unhealthy food was also shown to be 2.9 ads/ch-h compared with 0.2 and 0.9 ads/ch-h for the healthy and neutral categories, respectively [28]. Of particular concern were alcohol ads, which occurred at a rate of 0.25 ads/ch-h during this period. This outcome is in violation of the SA DoH and ASA guidelines, which clearly state that no alcoholic beverage ads are to be shown when children are supposed to be viewing television [23]. Anderson [29] also argues that young people may be particularly susceptible to alcohol ads as they shape their attitudes, perceptions and expectancies about alcohol use. Indeed, Austin and Nach-Ferguson [30] found that children aged 7 to 12 years who enjoyed the alcoholic beverage ads to which they were exposed were more likely to try these beverages. Showing alcoholic beverage ads that may be appealing to children (e.g., by using celebrities and popular individuals) makes it more likely that they will become alcoholic beverage drinkers [30]. Another source of concern was the high rate of ads for sugar-sweetened beverages (SSB), because of the well-documented detrimental effects they can have on children [31-34].
--- Persuasive Techniques
Persuasive techniques found in ads for all three food categories included power strategies, brand benefit claims and health claims. These techniques may be misleading, as they may suggest benefits that do not exist or mask harmful effects, which is of specific concern when it comes to unhealthy food [9,10].
Ads identified in the current study also carried various brand benefit claims (e.g., emotive claims, puffery) and power strategies (such as referring to famous sportspersons) that may make them more appealing. Power strategies were markedly employed to promote unhealthy foods more than healthy foods. Ads used, for example, the image of a child (child-tailored) and non-sports celebrities as power strategies to promote unhealthy foods during this study. The use of cartoon characters and celebrated individuals to promote unhealthy food and alcohol is not a new phenomenon in South Africa. Mchiza et al. [21] had previously noted that, in 2010, 10% of alcoholic beverage ads were shown on South African TV when children and families were supposedly watching. Mchiza et al. [21] also reported that these ads were promoted with the help of celebrated individuals such as movie actors, sportsmen and TV personalities. Delport [22] highlighted that techniques such as the use of cartoon characters are employed to create imagery of fun and excitement that appeals to children. Oyero and Salawo [9] assert that the use of health claims when advertising unhealthy food represents a derogation of the importance of healthy foods. Lacking the intellectual capacity and skills to deal with the appeal of these messages [35], children are even more likely to fall for this deception and may more easily accept these false health claims as the truth. This may shape the way they see what is healthy and unhealthy and may ingrain misconceptions about what is healthy while fostering unhealthy eating habits. Brand benefit claims were another persuasive technique utilised to advertise both healthy (3.1 claims/ch-h) and unhealthy food (10.1 claims/ch-h). Brand benefit claims have previously been used in South Africa, particularly those that portray fun [21,36]. Mchiza et al. [21] found ads for desserts, sweets and sugar-concentrated beverages to contain portrayals of exaggerated pleasure sensations, such as depictions of lovely taste, fun and addictive sensations. Pengpid and Pelzer [36] found similar claims and others, such as improving one's social worth and status. According to Harris et al. [12], the use of fun and excitement imagery in food ads has increased food consumption among those exposed. Repetitive exposure to these brand benefit claims tends to lead to the development of a relationship with the brand [14], which can be exploited by marketers of unhealthy foods.
--- Policy Implications
The South African Marketing to Children pledge makes it clear that there should be no use of celebrities and licensed characters (such as cartoons) in advertising unhealthy foods to children [15]. The food and beverage advertising codes (to which the food companies submitted through the pledge) assert that children are easily influenced and so should not be misled with false or exaggerated advertising claims [17]. Signees of the pledge are admonished to be honest in their ads and not to take advantage of children's lack of experience or knowledge when advertising foods to them. Thus, claims such as the emotive claims recorded in this study go against the social values of advertising under the food and beverage advertising codes [17]. The common use of these strategies to advertise unhealthy foods, as identified in our study, violates the South African Marketing to Children pledge [17].
With the many violations of the food and beverage advertising codes and the South African Marketing to Children pledge, it appears that the outcome of the self-regulatory approach adopted by South Africa is unsatisfactory. The persistent flouting of these codes, as revealed in the current study, by Mchiza et al. [21] and by Delport [22], comes as no surprise, as self-regulation around the world has proven to be ineffective in limiting unhealthy food advertising to children [26]. This unsatisfactory outcome emanates from laxity in the enforcement of self-regulation codes [19], which could be attributed to the drive among industry players to make profits. Self-regulatory policies make it appear as though advertising is being controlled, while in reality all these policies seem to do is stifle change [19]. The introduction of statutory regulations in South Africa would signify a refreshing change in the food advertising environment. Additionally, strict monitoring and the enforcement of significant penalties may serve as a deterrent to companies and television stations that disregard the policies. Such policies have been shown to be effective in reducing unhealthy food ads to children [37]. New regulations should strictly control the use of persuasive strategies in unhealthy food ads. Educating food marketers on the importance of adhering to the policies for controlling food advertising may help bring about attitudinal change. A watershed period, after which unhealthy food ads would be allowed, could also be considered.
--- Strengths and Limitations of the Current Study
The strengths of the current study included the thorough and systematic assessment of ads based on a structured guide developed for international monitoring and benchmarking. The assessment also covered several domains of persuasive techniques. The limitations included the limited scope of the current research, in that the data captured were from the free-to-air TV channels only (those channels accessible to most children from disadvantaged communities) and were collected at a single point in time. As such, ads shown on other South African TV channels (especially those accessible to more affluent communities, i.e., pay/subscription TV) were missed. It may therefore not be possible to generalise these data to other South African populations that have access to pay/subscription TV channels. This study was carried out on TV stations and, as such, may not adequately represent the nature of food ads in the wider food advertising space in South Africa, which also includes social media ads, radio ads, etc. While the duration of the study captured periods when children are likely to be watching TV, there is the potential for children to be exposed to TV food ads outside the hours included in this study. Therefore, the overall potential exposure to TV food ads could be higher than reported in this study. This study also did not investigate the causal effect of food advertising on South African children. For instance, in this study, only potential exposures could be assessed, without accounting for the number of child viewers of these ads. As such, new research is needed to investigate how South African children respond to food ads. The findings can be utilised for the specific regions or African countries that have access to these South African TV stations but cannot be extrapolated to countries outside this group unless they have a similar context and TV ad regulations.
Lastly, our results could not be compared with earlier findings, as these studies used different classification systems. We think that the classification used in the current study can serve as a benchmark for future comparisons.
--- Conclusions
This study suggests a high exposure among children to unhealthy food and beverage advertising, including alcohol ads. Cartoons, celebrities, brand benefit claims and health claims were used more often in unhealthy than healthy food ads. These techniques may foster children's craving for unhealthy food while making unhealthy food consumption a part of their value pattern. These findings indicate breaches of the South African Marketing to Children pledge and represent an unsatisfactory outcome of the self-regulation system practiced in South Africa. There is, therefore, an urgent need for tighter control of the TV food advertising space. Options include statutory regulations and a watershed period for unhealthy food ads.
--- Data Availability Statement: Not applicable
Television (TV) is a powerful medium for marketing food and beverages. Food and beverage marketers tend to use this medium to target children in the hope that children will in turn influence their families' food choices. No study has assessed the compliance of TV marketers with the South African Marketing to Children pledge since the enactment of the 2014 food advertising recommendations by the South African Department of Health and the Advertising Standards Authority. This study investigated the extent and nature of advertising of unhealthy versus healthy food and beverages to children on South African TV broadcasting channels. The date, time, type, frequency and target audience of food advertisements (ads) on four free-to-air South African TV channels were recorded and captured using a structured assessment guide. The presence of persuasive marketing techniques was also assessed. Unhealthy food and beverage advertising was recorded at a significantly higher rate compared with healthy food and beverages during the time frame when children were likely to be watching TV. Brand benefit claims, health claims and power strategies (e.g., advertising using cartoon characters and celebrated individuals) were used as persuasive strategies. These persuasive strategies were used more in unhealthy than healthy food ads. The findings are in breach of the South African Marketing to Children pledge and suggest a failure of the industry self-regulation system. We recommend the introduction of monitored and enforced statutory regulations to ensure a healthy TV food advertising space.
found at end of treatment (SMD -0.38, 95% CI -0.58 to -0.18, p = 0.0002) but not at follow-up, from only one study. No significant improvement emerged for quality of life at end of treatment (SMD 0.38, 95% CI -0.28 to 1.05, p = 0.26), with no data available at follow-up. The main study limitations were the difficulty in this field of being certain of capturing all eligible studies, the lack of modelling of maintenance of treatment gains, and the low precision of most SMDs, making findings liable to change with the addition of further studies as they are published.
--- Conclusions
Our findings show evidence that psychological interventions improve PTSD symptoms and functioning at the end of treatment, but it is unknown whether this is maintained at follow-up, with a possible worsening of PTSD caseness at follow-up in one study. Further interventions in this population should address broader psychological needs beyond PTSD while taking into account the effect of multiple daily stressors. Additional studies, including of social and welfare interventions, will improve the precision of estimates of effect, particularly over the longer term.
--- Author summary
Why was this study done?
• Torture occurs in the majority of countries around the world, often leaving survivors with prolonged physical and psychological problems. We still do not know what treatment for psychological problems is effective.
• This review aimed to calculate the effects of psychological, social, and welfare interventions on the mental health, functioning, and quality of life of torture survivors.
What did the researchers do and find?
• Published data from 15 randomised controlled trials (RCTs)-all of psychological interventions, including 1,373 participants across 10 countries-were systematically reviewed and analysed.
• Compared to control conditions, psychological interventions significantly reduced symptoms of post-traumatic stress disorder (PTSD) and improved functioning at the end of treatment, but not at follow-up.
• Psychological interventions did not significantly improve depression symptoms or quality of life.
• Psychological interventions did not significantly reduce the incidence of PTSD diagnosis, and one study, with 28 participants, showed an increase in PTSD diagnosis at follow-up compared to control conditions.
--- Introduction
Despite 156 countries having signed the United Nations Convention Against Torture and Other Cruel, Inhuman or Degrading Treatment and Punishment [1], torture is widespread, and Amnesty International documented torture and other forms of ill treatment in 141 countries in 2014 [2]. Long-standing and ongoing armed conflict has likely led to the increased use of torture since. Worldwide, 352,000 fatalities resulting from organised violence were identified between 2014 and 2016 alone [3]. The prevalence of torture and resulting fatalities is likely higher but difficult to estimate, given that perpetrators often obscure the use of torture and there are multiple barriers to disclosure for survivors. Torture has psychological, physical, social, and spiritual impacts that interact in diverse ways. Psychological effects are well documented; predominantly post-traumatic stress, depression, anxiety, and phobias [4,5]. Physical effects are also diverse (for reviews, see [6,7]). In addition, torture survivors' disrupted lives can bring social and financial problems that contribute to and maintain psychological distress, whether as a refugee or in the country of origin [5,8,9].
Torture often occurs against a backdrop of national and international power imbalances, war, civil unrest, and the destruction or erosion of medical and other welfare services. Arguably, treatment needs to incorporate wider conceptualisations of damage and distress than are represented in standard Western psychological treatments for psychological trauma [10,11]. A review conducted in 2011 described a limited range of interventions for torture survivors, tested in studies with significant limitations such as small sample sizes and unvalidated outcomes [6]. Given the scant literature, greater understanding of what works in treatment and rehabilitation for torture survivors is crucial in order to obtain maximum benefits from scarce resources. A Cochrane systematic review and meta-analysis [12] aimed to summarise psychological, social, and welfare interventions for torture survivors but found eligible studies only of psychological treatment. The 9 randomised controlled trials (RCTs) included provided data on 507 adults and found no immediate benefits of psychological therapy for psychological distress (as measured by depression symptoms), post-traumatic stress disorder (PTSD) symptoms, PTSD caseness, or quality of life. At follow-up, 4 studies with 86 participants showed moderate effect sizes in reducing psychological distress and PTSD symptoms. Conclusions were tentative, given the low quality of evidence, with underpowered studies and outcomes assessed in nonstandard ways, and no study assessed participation in community life or social and family relationships. More recently, a meta-analysis of 18 pre-post studies of interventions for survivors of mass violence in low- and middle-income countries showed a large improvement in PTSD and depression across treatment [13] but smaller effects from controlled studies. Another recent review [14] concluded that cognitive behavioural therapy (CBT) interventions produced the best treatment outcomes for PTSD and/or depression. However, both reviews recruited more widely than torture survivors alone. No recent systematic reviews or meta-analyses have focused on interventions for torture survivors. We conducted this systematic review and meta-analysis to assess the reported benefits or adverse outcomes in the domains of PTSD symptoms, PTSD caseness, psychological distress, functioning, and quality of life for psychological, social, and welfare interventions for torture survivors.
--- Methods
--- Search strategy and selection criteria
A systematic review was performed using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement [15], which is available in S1 PRISMA Checklist. To be included, studies had to be RCTs or quasi-RCTs of psychological, social, or welfare interventions for survivors of torture against any active or inactive comparison condition; the same criteria were used as in the previous review [12], and the full protocol is provided in S1 Text. Quasi-RCTs, in which the method of allocation is known but not strictly random (such as the use of alternation, date of birth, or medical record number [16]), were included, considering the difficulties of conducting RCTs in this population.
We extracted RCTs from searches of PsycINFO, MEDLINE, EMBASE, Web of Science, the Cumulative Index to Nursing and Allied Health Literature, the Cochrane Central Register of Controlled Trials, the WHO International Clinical Trials Registry Platform, ClinicalTrials.gov, PTSDpubs, and the online library of the Danish Institute Against Torture (DIGNITY) from 1 January 2014 (1 January 2013 in the case of Web of Science, the Cumulative Index to Nursing and Allied Health Literature, and PTSDpubs) through 22 June 2019, using key search terms including combinations of "torture," "randomised," "trial," and "intervention" with Boolean operators (S1 Text). There was no language restriction. We also searched the reference lists of torture-specific reviews published in or after January 2014 and those emerging from the final set of included studies. We contacted corresponding authors when full texts were unavailable.
--- Data extraction
We initially screened titles and abstracts against the inclusion criteria, with the aim of identifying potentially eligible studies for which the full paper was obtained. One author (AH) initially screened titles and abstracts to select full papers; another author (AW) checked a subsample of the excluded papers and agreed with all exclusions. Full papers were screened and selected for inclusion by 2 authors independently and agreed upon after discussion (AH and AW). Descriptive data, including participant characteristics, treatment mode, and setting, were collected. The primary area of interest for this review was outcomes in the domains of PTSD symptoms and caseness, psychological distress, functioning, and quality of life. PTSD symptoms were defined as the primary outcome given that the majority of identified reviews measured this. Psychological distress was measured as a secondary outcome, in the form of depression symptoms. Depression was chosen to define psychological distress because it is more distinct from PTSD than alternative scaled constructs of psychological distress, particularly anxiety. As in Patel and colleagues' review [12], functioning was measured by engagement in education, training, work, or community activity, and quality of life was defined as a change (positive or negative) in quality of life or well-being as measured by global satisfaction with life and extent of disability.
--- Statistical analyses
Studies in which a psychological, social, or welfare intervention was an active treatment of primary interest were investigated. When studies included more than one arm within a trial, it was decided that, where both arms represented the same content of intervention, data from those arms would be combined. The respective control arms associated with these intervention arms were also combined, given that the main area of interest of this research is the impact of intervention relative to control. In studies in which both adjusted and unadjusted treatment effects for specific covariates were reported, the adjusted treatment effects were used. Due to varying data collection and reporting methods, this review included both continuous and dichotomous scales. Meta-analyses were conducted using Review Manager (RevMan version 5.3) software [16]. It was anticipated that there would be considerable heterogeneity in the data, measured as I², so a random-effects model was applied. For continuous scales, treatment effects were estimated using standardised mean differences (SMDs).
This requires the extraction of mean scores, standard deviations, and sample sizes for each arm. When standard deviations required for the analyses were not available, they were calculated from confidence intervals (CIs), as suggested in the Cochrane handbook [16]. For dichotomous data, treatment effects were estimated using odds ratios by extracting the number of events and sample sizes. All analyses were conducted as planned. The newly included studies were added to the 9 previous studies in each analysis. Analyses were run for end of treatment and follow-up when available. End of treatment was defined as data collected within 3 months or less from the end of treatment; follow-up was defined as more than 3 months after the end of treatment.
--- Quality of studies
The risks of bias were assessed using the Cochrane guidance [16]. Each study was classified, for each of the categories, as low risk, high risk, or unclear risk, with justifications. This quality assessment was completed by 2 authors independently (AH and AW), and disagreements were resolved by reference to the data in question. We related the risk of bias categories to the interpretation of effect sizes for the outcomes of studies.
--- Results
From an initial screen of 1,805 abstracts and titles, 6 RCTs since 2014 met our inclusion criteria [17-22] and were combined with the 9 RCTs identified in the previous meta-analysis (Fig 1) [23-31]. The characteristics of the 15 included studies are summarised in S1 Table. All eligible studies were of psychological interventions. Trials included 1,373 participants at the end of treatment (mean per study = 92) of the 1,585 who started treatment; a mean study completion rate of 86.6%, with a range from 50% to 100%. Studies included 589 females and 784 males. Seven trials were conducted in Europe, 5 in Asia, and 3 in Africa. The most commonly used intervention was narrative exposure therapy (4 studies) or testimony therapy (3 studies), both of which draw on creating a testimony of traumatic events. Of the 6 new studies, all provided analysable data after calculation of the standard deviation from CIs or standard errors. When neither CIs nor mean scores were available [14,21], the author was contacted, and the mean scores and standard deviations were obtained.
--- Quality of studies
According to the Cochrane risk of bias assessment [16], one study had a high risk of bias in random sequence generation, 2 had a high risk of bias in allocation concealment, all 15 had a high risk of performance bias (inevitable in psychological treatment trials), 2 had a high risk of detection bias, 6 had a high risk of attrition bias, and no studies had a high risk of reporting bias. Therapist allegiance, treatment fidelity, therapist qualifications, and other biases were also included. Four studies had a high risk of bias due to therapist allegiance, 2 had a high risk of bias due to treatment fidelity, and 2 had a high risk of bias due to therapist qualifications (Fig 2). Other biases included varying content and length of treatment as judged by the therapist according to need, as well as the absence of a protocol for adaptation and translation of measures. A full breakdown of the risk of bias in each study is available in S1 Table.
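To illustrate the effect-size preparation described under Statistical analyses and used for the outcome analyses that follow, here is a minimal sketch of recovering a standard deviation from a reported 95% CI of a group mean (the Cochrane handbook formula under a normal approximation) and of computing a standardised mean difference from two arms. All numbers are illustrative and are not taken from any included trial.

```python
import math

def sd_from_ci(lower, upper, n, z=1.96):
    """SD of one arm recovered from the 95% CI of its mean: SD = sqrt(n) * (upper - lower) / (2 * z)."""
    return math.sqrt(n) * (upper - lower) / (2 * z)

def smd(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    """Standardised mean difference (Cohen's d form) using the pooled standard deviation."""
    pooled_sd = math.sqrt(((n_t - 1) * sd_t ** 2 + (n_c - 1) * sd_c ** 2) / (n_t + n_c - 2))
    return (mean_t - mean_c) / pooled_sd

# Illustrative example: the control arm reports only a mean and its 95% CI.
sd_control = sd_from_ci(lower=2.3, upper=2.7, n=40)
print(round(sd_control, 2))                               # ~0.65
print(round(smd(2.3, 0.6, 38, 2.6, sd_control, 40), 2))   # negative SMD, i.e. lower symptom scores with treatment
```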
--- PTSD symptoms
Twelve trials, with a total of 1,086 participants, reported data for PTSD symptoms no more than 3 months after the end of treatment [17-21,24-31], using several scales but all based on a similar formulation of PTSD. They were analysed for the effect of psychological intervention on PTSD at end of treatment using SMDs (Fig 3). There was a small to moderate reduction in PTSD symptomatology at the end of treatment (SMD -0.31, 95% CI -0.52 to -0.09, z = 2.79, p = 0.005). Between-study heterogeneity, I², was 55% (95% CI 0.38-0.68), indicating substantial heterogeneity [16]. The confidence in these results is limited overall, as unblinding of assessors may have contributed to detection bias in all but one study [30]. Seven trials, with 569 participants, reported data for PTSD symptoms more than 3 months after the end of treatment [19,21-24,26,27]. All used the Harvard Trauma Questionnaire (HTQ) to measure symptoms, with the exception of Esala and Taing [19], who used the PTSD Checklist for the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5). They were analysed for the effect of psychological intervention on PTSD at follow-up using SMDs (Fig 3). There was no difference between the intervention group and the control group (SMD -0.34, 95% CI -0.74 to 0.06, z = 1.68, p = 0.09) in PTSD symptoms at follow-up. Given the large CI, the precision of the estimate was low, and all but one study [22] appeared to be underpowered. Heterogeneity was substantial at 66% (95% CI 0.49-0.77).
--- PTSD caseness
Four trials with 82 total participants, classifying participants using caseness as meeting criteria for PTSD no more than 3 months after the end of intervention [21,23,24,30], were analysed for the effect of psychological intervention on PTSD caseness at end of treatment (Fig 4). There was no overall benefit, with an odds ratio of 0.44 (95% CI 0.14-1.31, z = 1.48, p = 0.14). A heterogeneity of I² = 0% was noted for this comparison (95% CI 0-0.61), and a number of sources of bias in methodology were observed. Only one trial compared PTSD caseness in intervention and control groups, at 6-month follow-up for 28 participants [21]. Caseness was significantly higher at 6-month follow-up in the intervention group compared with the control group, with an odds ratio of 7.58 (95% CI 1.2-48, z = 2.15, p = 0.03).
--- Psychological distress
Ten trials reported data for psychological distress, measured as depression, no more than 3 months after the end of treatment, with 988 participants [17-22,24,25,27,30]. They were analysed for the effect of psychological intervention on psychological distress at the end of treatment (Fig 5). There was no benefit of treatment over control (SMD -0.23, 95% CI -0.50 to 0.03, z = 1.71, p = 0.09), with a substantial heterogeneity of I² = 68% (95% CI 0.56-0.77). Seven trials reported data for psychological distress, measured as depression using the Hopkins Symptom Checklist-25 (HSCL-25), more than 3 months after the end of treatment, with a total of 569 participants [19,21-24,26,27]. They were analysed for the effect of psychological intervention on psychological distress at follow-up using SMDs (Fig 5). There was no benefit of treatment over control for psychological distress at follow-up (SMD -0.23, 95% CI -0.70 to 0.24, z = 0.96, p = 0.34), and heterogeneity was considerable (I² = 76%, 95% CI 0.65-0.80).
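For readers unfamiliar with how pooled SMDs, their CIs and the I² values reported in this section are obtained, the sketch below implements the inverse-variance random-effects approach with the DerSimonian-Laird between-study variance estimator used by RevMan for random-effects analyses. The per-study effects and variances are illustrative, not the actual trial data.

```python
import math

def random_effects_pool(effects, variances, z=1.96):
    """Pool per-study SMDs with a DerSimonian-Laird random-effects model; return (pooled, CI, I^2 %)."""
    w = [1.0 / v for v in variances]                                 # fixed-effect (inverse-variance) weights
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))    # Cochran's Q
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                                    # between-study variance
    w_star = [1.0 / (v + tau2) for v in variances]                   # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return pooled, (pooled - z * se, pooled + z * se), i2

# Illustrative per-study SMDs and variances (not the included trials' values).
pooled, ci, i2 = random_effects_pool([-0.5, -0.1, -0.4, 0.1], [0.04, 0.02, 0.05, 0.03])
print(round(pooled, 2), tuple(round(x, 2) for x in ci), round(i2))
```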
--- Functioning
Three trials reported data for functioning at the end of treatment, for 584 participants [17,18,21], and were analysed for the effect of psychological intervention on functioning at the end of treatment (Fig 6). There was a moderate benefit of intervention over control for functioning (SMD -0.38, 95% CI -0.58 to -0.18, z = 3.72, p = 0.0002). A heterogeneity of I² = 15% was observed (95% CI 0-0.73). Only one study (28 participants) provided analysable data showing effects at 6-month follow-up [21], with no statistically significant benefit of treatment over control (SMD 0.63, 95% CI -0.13 to 1.40, z = 1.62, p = 0.11).
--- Quality of life
Two trials [20,30], with 36 participants, assessed quality of life after treatment. Their scales were constructed with opposite directions for improvement; the trial by Puvimanasinghe and Price [20] was reversed so that a positive effect size represented improvement. There was no effect of intervention over control on quality of life (SMD 0.38, 95% CI -0.28 to 1.05, z = 1.14, p = 0.26), with a low precision of estimate. No study assessed quality of life at follow-up.
--- Adverse events and dropout
Two studies reported on adverse effects of treatment. Weiss and colleagues [22] reported that one participant attempted suicide after the first therapy session. The authors related this to the participant being related to the therapist and the therapist failing to notify the supervisor due to stigma concerns in the family. Another participant was hospitalised with severe depression and received therapy in the hospital but did not return to the study, and one participant died of a heart attack with no apparent relationship to participation in the study. In Wang and colleagues' [21] study, the intervention group increased in PTSD caseness over follow-up, a statistically significant finding, but the authors were not able to explain this result. All but 2 trials [23,29] reported dropout during treatment. Of these, 4 reported greater than 20% dropout in the intervention arm [19,24,27,30], and one trial reported a 28% exclusion of participants overall, with no further detail given [28]. Four studies provided detailed reasons for dropout [18,24,27,30].
--- Clinical meaning of changes
Calculation of the SMD assumes that differences in standard deviations among studies reflect differences in assessment scales and not real differences in variability among study populations [16]. We chose Wang and colleagues' study [21] to calculate differences in PTSD symptoms using the HTQ, and in psychological distress (depression) using the HSCL-25. The HTQ uses a 4-point severity response scale. Respondents endorse how much each symptom has bothered them in the past week, as follows: not at all (1), a little bit (2), quite a bit (3), or extremely (4). The total score is the mean of the item scores, with 2.5 suggested as the clinical cut-off score, above which a respondent has a high likelihood of PTSD [32]. The small to moderate effect size in reduction of PTSD symptoms for intervention over control represented a reduction of the mean pretreatment HTQ score of 2.49 to 2.37 post treatment. That is, participants fell slightly below the clinical cut-off both before and after treatment, so the clinical significance of this change is negligible. The HSCL-25 assessed depression, with 1.75 suggested as the clinical cut-off score, with high scores indicating depression.
Again relating these scores to the study by Wang and colleagues [21], mean scores at pretreatment assessment (3.02), post-treatment assessment (2.77), and follow-up (2.55) all fell within the clinical range for depression. --- Discussion This systematic review and meta-analysis of 15 studies of interventions for torture survivors included 1,373 participants from 10 countries. Six of the 15 studies were published since the previous review, but the sample size increased 3-fold. The range of treatments was somewhat wider, but treatments were still most often compared with inactive controls rather than with other treatments. The problems of torture survivors were largely conceptualised in terms of PTSD symptoms, which constituted the focus of treatment and, often, the primary outcome. Meta-analysis demonstrated few benefits of treatment: a statistically significant but clinically small decrease in PTSD symptoms at the end of treatment (from varied psychological interventions compared with mostly inactive controls) that was not found at follow-up. Other outcomes (PTSD caseness and psychological distress, usually depression and often of clinical severity) were not significantly different either at the end of treatment or at follow-up, with the exception of a worsening of PTSD caseness at follow-up, a poorer outcome than in the previous review [12] and clinically very disappointing. Few studies assessed functioning or quality of life, so results must be interpreted with caution, but they showed no improvement in quality of life, and an improvement in functioning only at the end of treatment, not at follow-up. Outcomes representing broader health and participation in society were neglected, as was the context of social, economic, and political uncertainties survivors face: threats to civil and legal status, accommodation, safety, connections with family and friends, and other assaults on well-being [8,33,34]. Because refugees have a high rate of life events that can facilitate or undermine treatment gains, it would be helpful for studies to monitor these changes across the timescale of treatment and follow-up [35]. It was disappointing to find these shortcomings persisting despite comment in our previous review [12] and in others [36,37]. Although it should be interpreted with caution, the finding of worsening at follow-up in the study by Wang and colleagues [21], using CBT with prolonged exposure, should alert researchers to the importance of studying long-term outcomes and the potentially harmful effects of psychological interventions and other contextual factors post treatment. Furthermore, 4 out of the 15 trials reported over 20% dropout in the intervention arm. It is possible that this is a function of greater social instability of the participant population and understandable preoccupation with meeting basic human needs and rights. However, more investigation of treatment expectations and acceptability is required. Conducting and analysing follow-up interviews, using nonaligned and nonbiased interviewers, would lead towards a better understanding of what may work and for whom. Other reviews of psychological treatments for torture survivors [36,38] or for traumatised refugees [39] have produced more optimistic accounts of the benefits of therapy, although they raise similar concerns regarding methodology and cultural appropriateness of interventions.
By contrast, Salo and Bray [37] reviewed interventions in relation to what they described (drawing on Bronfenbrenner [40]) as the 'ecological' needs of torture survivors: the microsystem life domain, such as family, social, legal, and occupational domains; the macrosystem domain, mainly consisting of cultural and language features of the trials; and the chronosystem domain, represented by the timing of follow-up assessments. They found relatively scant recognition of needs in any of these areas, either in assessment or intervention. This appears to be a very promising framework for reconsidering therapeutic interventions in the field. Methodological quality of the included studies was largely similar to that in our previous review. Apart from the absence of blinding of therapists or patients to treatment allocation, rarely possible in trials of psychological treatment, bias arose mainly from incomplete reporting of outcomes, dropping noncompleters from outcome analysis, and uncertainty about whether the intended treatment had been delivered as designed, largely because of a lack of therapist qualifications to deliver it. Whether training volunteer therapists, with no existing clinical competences, in the specific therapeutic techniques for the trial is adequate to produce treatment fidelity is an open question and should be addressed within trials. The same comment applies to cultural adaptation of treatment that originated in Western healthcare. Studies gave little detail of what they meant by 'cultural adaptation', beyond translation of outcome scales and treatment materials, but effective cultural adaptation involves extensive work between people from all the main cultures represented in a study, who understand the context and content of treatment. Similar methods are required for true validation of translated scales in the languages of the cultures in which they will be used [41]. Even when these procedures are followed, it is by no means clear how a treatment is established as culturally adapted beyond the claims of its authors. The review has some limitations that potentially affect conclusions. Our search could have been widened by including the grey literature, but a zero yield from around 1,500 chapters, reports, and other articles accessed for the previous review persuaded us against doing so. It is possible that in the grey literature, or even in the peer-reviewed literature, our relatively broad search nevertheless missed a trial labelled in a way we did not anticipate, since the nomenclature is not well standardised. While we did not exclude studies in other languages, the majority of the databases searched have shown varied and incomplete coverage of non-English material [42], particularly from low- and middle-income countries [43], indicating a potential database coverage bias. A possible further analysis would have been to fit a model to all effect sizes of each outcome, including time (end of treatment versus follow-up) as a moderator; because we did not, we cannot draw conclusions about maintenance of treatment gains at follow-up. We interpreted our findings according to dichotomous notions of statistical significance and recognise that some overall effect sizes could change (for better or worse) with the addition of one or more studies. Heterogeneity among studies was substantial and arose from multiple sources: participants, therapists, therapeutic methods, outcomes, delivery, and setting.
This produced generally high levels of between-study heterogeneity (I²) that made estimates of effect sensitive to the inclusion or exclusion of single studies. Given the weakness and lack of precision of the I² statistical test [44], we also calculated the CIs as suggested by Higgins and Thompson [45]. While CIs were generally narrow in cases of high heterogeneity, where low heterogeneity was indicated (I² = 0% for PTSD caseness at end of treatment and I² = 15% for functioning at end of treatment), wide CIs were produced, ranging from 0% to 61% and from 0% to 73%, respectively, indicating caution in inferring low heterogeneity in these cases. We did not anticipate having the power available for subanalyses, but these could be planned in a further update, to investigate each source of heterogeneity. Although widening our scope to refugee studies would have included some family and community interventions, heterogeneity would likely have been even greater, exacerbating problems of interpretation. Given the complexity of torture survivors' needs and the obstacles they face in reconstructing a meaningful life, the emphasis of interventions on symptoms of PTSD is strikingly narrow, unless reducing or resolving these symptoms is seen as a priority or as the key to other improvements; none of the studies asserted this. It is not even clear that basic security and financial needs are addressed before offering specialised psychological interventions [46]. Thus, integration of interventions addressing the needs and priorities of torture survivors (which were not assessed or stated) seemed largely lacking. Recruitment into the trial was assumed to mean that survivors' PTSD symptoms were their priority as the target of intervention, though the fact that 4 out of 15 included studies reported greater than 20% dropout in the intervention arm raises questions about the relevance and appropriateness of the interventions for survivors. Furthermore, the development of interventions in terms of cultural and language appropriateness may require more fundamental exploration and questioning of Western models of psychological problems and treatment than was evident in these trials. Recent models of collaborative care [47] go some way towards this but still fall short of the ecological scope described by Salo and Bray [37]. Last, where resources are scarce and far outstripped by needs, as in many low- and middle-income countries, the model of training local volunteers or healthcare staff in Western methods of intervention delivered mostly as individual therapy may mean that interventions are more culturally embedded (depending on what the trainers or study researchers allow by way of adaptation). This, however, needs empirical support, as well as an assessment of the potential harmful impacts on the volunteer therapists and on (other) survivors they work with. Perhaps it is the restriction of this review to RCTs that explains why the newer trials largely resembled the older ones, except in combining a wider range of interventions; there was little evidence of more collaborative and integrated interventions such as those developing for refugee populations [48] or envisaged in a social-ecological framework [37].
It might be that single case methods [49] are more applicable to assessing psychological, social, welfare, and other interventions for the complex and diverse needs of many torture survivors, for whom distress stems not only from the violent and traumatic experiences endured but also from current social, material, and legal conditions [34]. Evaluation of interventions needs to match this breadth of difficulties; at a minimum, interventions require assessment of quality of life and follow-up over realistic time frames. Qualitative studies could helpfully inform more participant-focused assessment of treatment outcome, with the addition of observed events such as improved overall health; enrolment in further education, training, or work; and participation in community or society. In conclusion, all RCTs we found in this systematic review and meta-analysis were of psychological interventions. Small improvements for intervention over control were found for PTSD symptoms and functioning after treatment but not at follow-up, nor was any improvement evident for psychological distress at either time point or for quality of life at the end of treatment. The overall confidence in these results and the precision of the estimates are still less than satisfactory, and further studies are likely to change the estimates of effect, but the differences between our findings and the impression of treatment effectiveness from narrative reviews are substantial and suggest that more survivor-focused conceptualisation of problems and improved methodology are needed. --- Data relating to analyses are within the manuscript and Supporting information. Further data relating to methodology can be found in University College London's open access repository at http://discovery.ucl.ac.uk/10056876/. --- Supporting information
Torture and other forms of ill treatment have been reported in at least 141 countries, exposing a global crisis. Survivors face multiple physical, psychological, and social difficulties. Psychological consequences for survivors are varied, and evidence on treatment is mixed. We conducted a systematic review and meta-analysis to estimate the benefits and harms of psychological, social, and welfare interventions for torture survivors. We updated a 2014 review with published randomised controlled trials (RCTs) for adult survivors of torture comparing any psychological, social, or welfare intervention against treatment as usual or active control, from 1 January 2014 through 22 June 2019. The primary outcome was post-traumatic stress disorder (PTSD) symptoms or caseness, and secondary outcomes were depression symptoms, functioning, quality of life, and adverse effects, after treatment and at follow-up of at least 3 months. Standardised mean differences (SMDs) and odds ratios were estimated using meta-analysis with random effects. The Cochrane tool was used to assess risk of bias. Fifteen RCTs were included, with data from 1,373 participants (589 females and 784 males) in 10 countries (7 trials in Europe, 5 in Asia, and 3 in Africa). No trials of social or welfare interventions were found. Compared to mostly inactive (waiting list) controls, psychological interventions reduced PTSD symptoms by the end of treatment (SMD -0.31, 95% confidence interval [CI] -0.52 to -0.09, p = 0.005), but PTSD symptoms at follow-up were not significantly reduced (SMD -0.34, 95% CI -0.74 to 0.06, p = 0.09). No significant improvement was found for PTSD caseness at the end of treatment, and there was possible worsening at follow-up from one study (n = 28). Interventions showed no benefits for depression symptoms at end of treatment (SMD -0.23, 95% CI -0.50 to 0.03, p = 0.09) or follow-up (SMD -0.23, 95% CI -0.70 to 0.24, p = 0.34). A significant improvement in functioning for psychological interventions compared to control was found at the end of treatment (SMD -0.38, 95% CI -0.58 to -0.18, p = 0.0002), but not in the single small study that assessed functioning at follow-up.
Introduction Social relationships play a key role in a variety of public health problems [1][2][3], including alcohol and other drug (AOD) use and homelessness [4][5][6]. AOD use spreads through networks [7] due to a variety of network mechanisms, such as social comparison, social sanctions and rewards, flows of information, support and resources, stress reduction, and socialization [8][9][10]. Homelessness is often precipitated by AOD use problems [11] and continued AOD use among people experiencing homelessness is influenced by continued exposure to AOD use in their social networks [12][13][14][15]. Continued AOD use impedes transitioning out of homelessness and into housing assistance, such as when AOD abstinence is a requirement for housing. Therefore, addressing the interrelated problems of AOD use and homelessness requires a focus on social networks, which play a wide range of positive and negative roles in assisting and impeding the transition out of homelessness [12][13][14][16][17][18][19][20][21][22][23][24][25][26]. Many behavior change interventions informed by social network analysis (SNA) have been developed recently [27][28][29][30], have addressed AOD use in a variety of populations [30], and can potentially address AOD use among people experiencing homelessness. Four styles of incorporating networks into interventions have emerged [27]: 1) identifying groups in a network to target based on structural position ("segmentation"), 2) identifying and intervening with key individuals based on their structural location ("opinion leaders"), 3) activation of new interactions between people without existing ties in a network ("induction"), and 4) changing the existing network ("alteration"). For the most part, network intervention approaches use methods informed by diffusion of innovation theory [31] and aim to maximize the effects of a behavior change intervention through its spread within a well-defined and clearly bounded network (such as students in the same school). There are challenges in applying diffusion-based SNA behavior interventions to assist people to reduce AOD use while transitioning out of homelessness. The segmentation, opinion leader, and induction approaches are inappropriate because they assume a static, bounded network [27]. However, people transitioning out of homelessness and into housing programs do not belong to a clearly defined network. They can experience heightened social volatility due to loss of contact with people they interacted with on the street, coupled with sudden and ongoing contact with new neighbors. Distancing themselves from AOD using network members may help them decrease AOD use by reducing exposure to high-risk behavior. At the same time, these individuals may have developed strong and supportive ties while living on the street and may have reservations about ending these relationships, even with members of their network who they realize hamper their efforts at positive behavior change and stability. Those who transition into housing programs may experience increased opportunities to develop new pro-social connections and reconnect with positive network ties who can provide key social support necessary to reduce AOD use. On the other hand, transitioning into a housing program that uses a harm reduction model [15,26,32,33] may result in continued exposure to AOD because these programs do not require residents to abstain from AOD use. 
This social upheaval experienced by individuals transitioning out of homelessness suggests that an AOD reduction behavior change intervention informed by SNA that assumes a static and bounded network is inappropriate. Network "alteration" intervention approaches, on the other hand, do not make this assumption and appear to be a better fit for addressing the social volatility associated with transitioning out of homelessness. Our team recently developed a Motivational Interviewing Social Network Intervention (MI-SNI) designed to reduce AOD use among adults with past-year problematic AOD use who recently transitioned from homelessness to residing in a housing program [34][35][36]. The MI-SNI targets alterations of the "personal" networks of independently sampled individuals, rather than individuals who are members of a static, bounded network [37][38][39]. This approach is appropriate for people transitioning out of homelessness because each person experiencing this transition is at the center of a unique and evolving group of interconnected people who play a variety of roles in assisting or hampering their transition. The MI-SNI combines visualizations of personal network data with Motivational Interviewing (MI), an evidence-based style of intervention delivery that triggers behavior change through increased self-determination and self-efficacy while reducing psychological reactance [40,41]. Results from a pilot randomized controlled trial of the MI-SNI on AOD-related outcomes found that formerly homeless adults who recently transitioned to a housing program and received the intervention experienced reductions in AOD use, and increased readiness to change AOD use and abstinence self-efficacy, compared to those who were randomly assigned to the control condition [34]. Examining whether the MI-SNI is associated with actual changes to participants' social networks, a hypothesized mechanism through which it is expected to affect AOD-related outcomes, is an important next step in this line of research. The present study compares personal network composition and structure data collected before and after the intervention period to explore whether the MI-SNI was associated with longitudinal changes in the personal networks of MI-SNI intervention participants compared to participants who received usual case management services. This study provides a preliminary test of several hypotheses. Our primary hypothesis was that the intervention would be associated with a change in network composition, primarily a decrease in the number of network members who influence the participants' AOD use, such as those who are drinking or drug use partners. We also hypothesized that receiving visualization feedback that highlighted supportive network members would prompt participants to take steps to retain supportive network members, drop unsupportive members of their networks, and add new network members who provide support, leading to an overall increase in supportive ties. For alters who remained in the network after the intervention period, we hypothesized that MI-SNI recipients would be more likely to change their relationships with these network members, resulting in fewer AOD risk behaviors with them. Finally, we tested an exploratory hypothesis that MI-SNI recipients would make changes to their networks that would result in them having significantly different overall network structures (size and connectivity among network members) and more network member turn-over between waves.
Finding such intervention effects on network structure and turn-over would suggest that the MI-SNI influenced how participants interacted with their social environments during the intervention period. --- Material and methods --- Intervention design, setting, and participants The complete and detailed plan for the conduct and analysis of this Stage 1a-1b randomized controlled trial (RCT) is available elsewhere [36], and the clinical trial has been registered (ClinicalTrials.gov Identifier: NCT02140359). Detailed descriptions of the development and beta testing of the Stage 1a computer interface, feasibility tests of the intervention procedures, pilot test participant characteristics, and initial pilot test results are also available elsewhere [35,36]. Participants were new residents of a housing program for adults transitioning out of homelessness in Los Angeles County, recruited between May 2015 and August 2016. The primary analytic sample comes from the initial pilot test site, Skid Row Housing Trust (SRHT), which provides Permanent Supportive Housing (PSH) [42][43][44][45][46][47] services in Skid Row, Los Angeles. PSH programs do not require AOD abstinence or treatment, but do provide case management and other supportive services such as mental health and substance abuse treatment. SRHT residents and staff participated in project planning and beta testing prior to recruitment [35,36]. The intervention procedures were designed to be delivered during typical case manager sessions with new residents to supplement and improve the support they provide residents by raising both the case manager's and the resident's awareness of the role that the resident's social environment plays in the transition out of homelessness. Beginning in February 2016, an additional supplemental sample was recruited from SRO Housing Corporation (SRO Housing), which is a similar housing program also located in Skid Row. This additional recruitment was in response to slower than expected monthly recruitment rates from SRHT and a projected shortfall in our targeted recruitment sample size of 15-30 subjects per intervention arm, which is a rule-of-thumb recommendation of the National Institute on Drug Abuse for funding Stage 1b Pilot Trials [48]. Residents were recruited through SRHT and SRO Housing leasing offices prior to receiving the assignment of a housing unit. Eligible participants were English speakers aged 18 years or older who had been housed within the past month and who screened positive for past-year harmful alcohol use (Alcohol Use Disorders Identification Test (AUDIT-C) score of 4 or more for men and 3 or more for women) [49] or drug use (Drug Abuse Screening Test (DAST) score greater than 2) [50][51][52]. Of the 149 residents contacted by the research team, 49 were eligible and were randomized into the intervention arm (N = 25) or the control arm (N = 24) using a permuted block randomization strategy stratified by gender. Full recruitment details and results are provided in the Fig 1 CONSORT diagram, along with a CONSORT checklist in S1 Appendix. Eligible residents were informed of their rights as research participants and provided written consent. Retention in the study was excellent, with 84% of participants (n = 21 intervention, n = 20 control) completing the follow-up assessment three months later.
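As a minimal sketch of the screening rule described above (hypothetical function and argument names, not the study's instruments or code), eligibility combined age, language, recent housing, and either the gender-specific AUDIT-C threshold or the DAST threshold:

```r
# Hypothetical illustration of the eligibility logic (not the study's code)
is_eligible <- function(age, speaks_english, housed_within_past_month,
                        gender, audit_c, dast) {
  audit_cut <- if (gender == "male") 4 else 3   # AUDIT-C cut-off by gender
  age >= 18 && speaks_english && housed_within_past_month &&
    (audit_c >= audit_cut || dast > 2)          # harmful alcohol use OR drug use
}

# Example: a 52-year-old woman housed last week, AUDIT-C = 3, DAST = 1
is_eligible(age = 52, speaks_english = TRUE, housed_within_past_month = TRUE,
            gender = "female", audit_c = 3, dast = 1)   # TRUE
```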
Participants averaged 48 years of age, were primarily male (80%), African American (56%), had a high school education or less (68%), were never married (66%), had children (59%), and received an average of $471 in monthly income. Full details about participant demographics and AOD use are available elsewhere [34]. All procedures were approved by the authors' Institutional Review Board (IRB) (Study ID: 2013-0373-CR02), and the complete and detailed plan for the conduct and analysis of the trial that was approved by the IRB before the trial began is available in S2 Appendix. A Federal Certificate of Confidentiality was obtained for this study, which provided additional privacy protection from legal requests. --- Baseline and follow-up data collection procedures The purpose of the baseline and follow-up network data collection assessments was to measure participants' personal network characteristics when they first moved into their supportive housing unit and again 3 months later, to provide measures of network change and to test whether those who were offered the intervention experienced significantly different network changes compared to those randomly assigned to the control condition. Personal network assessment interviews were conducted through one-on-one, in-person interviews (approximately 45-60 minutes) by independent data collectors who did not have access to the assignment of IDs to study arm and were therefore blind to study condition. Interviews were conducted using the social network data collection software EgoWeb 2.0 (egoweb.info) installed on a laptop computer. Participants were paid $30 to complete the baseline interview and $40 for the follow-up. We followed common procedures for collecting personal network data [37] used in previous studies of AOD use and risky sex among homeless populations [12][13][14][53][54][55][56][57]. Respondents (referred to as the "egos" in personal network interviews) were first asked questions about themselves, including demographic questions (baseline only) and a series of questions about their own AOD use. After these questions, the egos were asked the following standard question prompting them to name up to 20 people in their network (referred to as a "name generator" question): "Now I'd like to ask you some questions about the people that you know. First, I'd like for you to name 20 adults, over 18 years old, that have been involved in your life over the past year. We do not want their full names; you can use their first names, initials or descriptions. These should be people you have had contact with sometime in the past year, either face-to-face, by phone, mail, e-mail, text messaging, or online. Start by naming the people who have been the most significant to your life, either in a positive way or a negative way. You can decide for yourself who has been significant, but consider those who have had a significant emotional, social, financial, or any other influential impact on your life. We'll work outwards toward people who have less significance. You can name any adults you have interacted with no matter who they are or where they live or how much time you have spent with them." Once each ego provided a list of names (referred to as network "alters"), they were asked a series of questions about each person (referred to as "name interpreter" questions). Respondents were also asked, for each unique alter-alter tie, if these two people knew each other.
These personal network questions were asked at both baseline and follow-up, and these responses provided the raw data for measurements of change in personal networks. --- Intervention procedures Residents who completed baseline interviews were randomly assigned to either the intervention or control arms. Those assigned to the intervention arm were offered four biweekly in-person sessions with an MI-trained facilitator. Full details about the intervention delivery, including examples of the visualizations presented to participants during the session, are available elsewhere [34]. Briefly, facilitators conducted a brief personal network interview (approximately 15 minutes) focusing on recent network interactions (past 2 weeks). Name generator and name interpreter question wording were selected to generate a series of visualizations of the resident's recent interactions with their immediate social network. These visualizations highlighted different aspects of the network (network centrality, AOD use, social support) and were used to guide a conversation about the participant's social network in an MI session that immediately followed the personal network interview. --- Measures Network outcomes: AOD use/influence. We constructed four types of network AOD use/influence measures from three name interpreter questions. Participants identified which alters they drank alcohol with and whether they engaged in this behavior over the past 4 weeks. Based on this question, alters were categorized as drinking partners and recent drinking partners. A similar question was asked about other drug use with each alter, which was used to classify alters as drug use partners and recent drug use partners. Participants were also asked if they drank more alcohol or used more drugs than usual when they were with the alter and if this happened recently. This question was used to classify each alter as an AOD use influence alter and a recent AOD use influence alter. These variables were combined to produce overall "any risk" and "any recent risk" dichotomous variables, set to true if any of the above variables was true. For each of these dichotomous variables, an overall network proportion variable was constructed by summing the number of alters with the characteristic and dividing by the total number of alters in the ego's personal network. Network outcomes: Social support. We constructed four types of network social support variables from three name interpreter questions. Respondents were asked if they received three different types of support from each alter: emotional support (e.g., encouragement), information support (e.g., advice), and tangible support (e.g., money, transportation, food), and whether this support happened in the past 4 weeks. Alters were classified as having given each of these types of support both ever and recently. Also, alters who provided at least one of these types of support were classified as any support and any recent support. For each alter social support variable, an overall network support proportion variable was constructed by summing the number of alters with the support characteristic and dividing by the total number of alters in the ego's personal network. AOD risk relationship change outcomes. We constructed four types of AOD risk relationship measures to test if the intervention was associated with egos changing their AOD-related behavior with alters who remained in their networks (in contrast to alters who were removed or added to their networks across assessments).
To construct these measures, we first identified which network members were named at both assessments by matching the names listed at the baseline and follow-up interviews for each respondent to identify unique alters. Next, we compared the responses about retained alters' AOD risk at the baseline and follow-up assessments to identify those who changed status as drinking partners, drug use partners, AOD use influence partners, or any AOD risk partners. For each of these four types of status changes, we constructed: (a) stopping measures, indicating alters who had the characteristic at baseline but did not have it at follow-up; and (b) starting measures, indicating alters who did not have the characteristic at baseline but had it at follow-up. We constructed overall measures of each of these variables for each ego by counting the number of alters who had the relationship change characteristic. Network structure and network member turnover outcomes. We constructed measures of overall cross-wave network structure to explore associations of overall network size and interconnectivity with intervention status. Matching alter names across waves enabled construction of a cross-wave network that included all alters named at either wave. Next, based on these cross-wave networks, we constructed common measures of personal network structure [38], including a measure of network size (i.e., total unique alters named) and two measures of network connectivity: cross-wave density (the ratio of existing ties between network members to the total possible number of ties) and cross-wave components (the number of groups of network members with no connections to other members of the network). To measure network turnover, alters were classified as dropped alters (named at baseline only), added alters (named at follow-up only), or retained alters (named at both waves). For each respondent, we constructed counts of dropped, added, and retained alters in the cross-wave network. Background variables. Demographic and AOD use variables were used to inform the construction of model weights to adjust for participants who did not complete both assessments. Demographic variables captured in the baseline assessment included age, gender, race/ethnicity, education, number of children, marital status, and income. AOD use variables included the quantity and frequency of alcohol use, days using marijuana in the past 4 weeks, and an assessment of readiness to change AOD use [58]. Also included in the construction of weights were variables assessing housing program (SRHT vs. SRO) and intervention arm.
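The measures described in this section reduce to straightforward operations on the ego-alter data. The sketch below is a minimal illustration with invented data and column names (not the study's code), showing a network proportion for one alter attribute, the turnover counts, and the cross-wave density and component measures computed with the igraph R package:

```r
# Minimal illustration with invented data and column names (not the study's code)
library(igraph)

# Alter-level rows for one ego: one row per alter named at each wave
alters <- data.frame(
  alter = c("A", "B", "C", "D",  "A", "B", "E"),
  wave  = c(1, 1, 1, 1,  2, 2, 2),
  drinking_partner = c(TRUE, FALSE, TRUE, FALSE,  FALSE, FALSE, TRUE)
)

# Network proportion: alters with the characteristic / total alters named, per wave
prop_drinking <- tapply(alters$drinking_partner, alters$wave, mean)

# Turnover: dropped (baseline only), added (follow-up only), retained (both waves)
base_names <- alters$alter[alters$wave == 1]
fu_names   <- alters$alter[alters$wave == 2]
dropped  <- setdiff(base_names, fu_names)
added    <- setdiff(fu_names, base_names)
retained <- intersect(base_names, fu_names)

# Cross-wave structure: union of all alters named at either wave, plus the
# alter-alter ties reported by the ego (invented here)
ties <- data.frame(from = c("A", "B", "E"),
                   to   = c("B", "C", "A"))
g <- graph_from_data_frame(ties, directed = FALSE,
                           vertices = data.frame(name = union(base_names, fu_names)))
edge_density(g)    # observed alter-alter ties / possible ties among named alters
components(g)$no   # number of disconnected groups of alters
```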
We constructed and tested a series of regression models with each AOD use/influence proportion and each social support proportion from the follow-up assessment as the dependent variable and the intervention group indicator as the predictor variable, while controlling for the baseline measure of the dependent variable. We also constructed regression models with each of the AOD risk relationship count variables and each network structure and turnover variable as the dependent variable and intervention group as the predictor variable, while controlling for network size at baseline. We used linear regression for continuous outcomes and Poisson regression for count outcomes. The models were fitted using the "survey" package in R version 3.3.1 to include nonresponse weights. These weights enabled computation of accurate standard errors and accounted for the potential bias caused by unit non-response missing data [61] due to participants skipping the follow-up assessment or dropping out of the study. Of the 49 eligible study participants who completed a baseline assessment, 41 also completed the 3-month follow-up assessment, and responders differed from non-responders on a few characteristics, such as income and whether they were housed in SRHT or SRO Housing. The nonresponse weights were estimated using a non-parametric regression technique called boosting [62,63], instead of logistic regression, as implemented in the TWANG R package [64], including baseline outcome and demographic variables in the model. We calculated Cohen's d effect sizes from the regression parameter estimates and the pooled standard deviation at baseline [65,66]. For each model, we conducted two stages of analysis, similar to our previous approach [34]. First, we analyzed data from our primary sample, the 28 participants from SRHT only, because the MI-SNI was developed for SRHT residents with input from SRHT staff and the SRHT case management process. Second, we conducted a secondary analysis on the full sample of 41, which included the small number of residents from a different housing program (SRO Housing). Additional details about the justification of this two-stage approach, including details about the differences between the participant samples, are available elsewhere [34]. --- Results --- Network composition Table 1a presents descriptive statistics (i.e., means and standard deviations) for the baseline and follow-up network proportion measures for the SRHT intervention and control group participants. In addition, the table presents results from the regression models for the SRHT sample analysis predicting the intervention effect on proportions of types of network members at follow-up, controlling for the same baseline network composition measures. Each row presents the results of one model. Table 1b presents these same findings for the full sample. The intervention effect was significant at the 95% confidence level, with a large effect size, for the proportion of drinking partners in the network at follow-up for the SRHT residents only. On average, intervention recipients had 13% fewer recent drinking partners in their networks at follow-up compared to participants in the control arm, controlling for baseline personal network composition (p = .042, d = .81). The average change in the proportion of recent drinking partners in the overall sample was not significantly different between intervention and control recipients (p = .145). There was also a decrease in the proportion of alters with "any risk" for the SRHT residents only.
This decrease, averaging 13% in the SRHT-only sample for intervention participants compared with control participants, was only marginally significant, with a medium to large effect size (p = .063, d = .74). The model for the full sample did not reach significance (p = .21). Tables 2a (SRHT only) and 2b (full sample) provide descriptive statistics for counts of alters who changed their AOD use and risk influence relationship status with egos between the baseline and follow-up assessments, and results of models testing whether intervention status was significantly associated with these counts. Each model with count outcomes controlled for size of the network at baseline (number of alters named). The tables also present results of exploratory tests of the intervention effects on network structure and turnover. Each model estimate and its 95% CI were converted to incident rate ratios (IRRs) (excluding network density), because model estimates of count outcomes can then be easily interpreted as a predicted % increase or decrease [67]. Models testing for associations between intervention arm and counts of changing relationships identified several medium-sized effects for the full sample. First, intervention participants had an average of 2.68 times more retained alters who stopped being drinking partners (i.e., alters whom the respondent reported as a drinking partner at baseline but not at follow-up) compared to control participants (p = .03, d = .61). Second, when considering those who influenced AOD use with respondents at follow-up but not at baseline (i.e., classified as starting AOD use influence), intervention participants had only 13% as many of these retained alters in their networks as similar control participants (p = .02, d = .59). Third, intervention recipients had an average of 42% fewer retained alters who changed from not being rated as having any of the three risk characteristics at baseline to having at least one at follow-up compared to control participants (p = .05, d = .52). These associations were not significant within the SRHT-only sample (see Table 2a), except for a marginally significant decrease in alters who started influencing AOD use between waves: SRHT intervention participants averaged only 10% as many of these retained alters in their networks as similar control participants (p = .07, d = .49). --- Network structure and turn-over For overall network structure, several significant effects of medium-to-large magnitude were found. Average cross-wave network density was 0.18 higher for intervention participants compared to control participants in the SRHT-only sample (p = .02, d = .82), although this association was not significant in the full sample (p = .14). For the cross-wave number of network components, intervention networks had on average 55% as many components as the control arm for the full sample (p = .02, d = .74) and 42% as many for the SRHT-only sample (p < .01, d = 1.01). Overall network size did not significantly differ between treatment conditions. (Table notes: baseline and follow-up means and SDs are weighted from the full intent-to-treat sample (N = 49) to account for non-response at follow-up; estimates are weighted intervention effects with 95% CIs from regression models predicting the follow-up measure controlling for baseline; Cohen's d effect sizes are interpreted as small at .20, medium at .50, and large at .80.)
(Full results are shown in Table 1, https://doi.org/10.1371/journal.pone.0262210.t001, and Table 2, https://doi.org/10.1371/journal.pone.0262210.t002.) However, the average number of alters dropped from the network between baseline and follow-up was marginally lower for intervention participants compared to control participants in the SRHT-only sample, with a small effect size (p = .10, d = .39), but this association was nonsignificant in the full sample (p = .14). The average number of alters retained in the network between baseline and follow-up was marginally higher for intervention participants than for control participants in the SRHT-only sample, with a medium to large effect size (p = .07, d = .71), but non-significant in the full sample (p = .12). The number of new alters added to the network between baseline and follow-up did not significantly differ across treatment conditions. --- Discussion The goal of this project was to conduct a pilot evaluation of an innovative MI-SNI using exploratory analyses to determine if the intervention was associated with changes in personal network composition and structure. Building on previous results that demonstrated promising changes to participants' AOD use, readiness to change, and abstinence self-efficacy [34], the results presented here also demonstrate significant associations between participation in the intervention and changes in network characteristics. These findings suggest that the MI-SNI may help individuals experiencing homelessness and risky AOD use positively restructure their social networks while transitioning into supportive housing. In terms of network composition, we found evidence from the SRHT sample that intervention participants had smaller proportions of risky network members from baseline to follow-up, namely drinking partners and network members who had any risk influence, compared to participants in the control condition. However, contrary to our expectations, we did not find any significant intervention effect on changes in the overall proportion of supportive network members. Another important finding is that intervention participants experienced more positive changes in their relationships with retained alters compared to control participants. For example, compared to control participants, those who received the intervention had a greater number of ties to alters with whom they had a drinking relationship at baseline but did not drink with in the two weeks prior to the follow-up assessment. Intervention participants also had fewer ties to alters who were rated as not being influential over their AOD use at baseline but were rated as having AOD risk characteristics at follow-up. Finally, when examining network turnover, we found that SRHT intervention participants had fewer dropped alters and more retained alters between the baseline and follow-up assessments compared to control participants, resulting in significantly denser networks with fewer components among intervention participants.
The full sample analysis showed a similar result for change in components. Therefore, these results demonstrated that the MI-SNI recipients had significantly higher retention of members of their existing networks over the 3 months between assessments compared to participants in the control arm. These findings provide preliminary evidence that intervention recipients were more likely to positively adjust their relationships with network ties they retained over the first three months after transitioning into housing compared to those who received usual case management. The findings suggest that presenting a series of network visualizations that highlighted network centrality, AOD risk, and social support may have helped MI-SNI recipients recognize both the potential for AOD risk in their networks and the network strengths that were worthy of maintaining. Although the intervention was not associated with increased social support, those who received the intervention had greater network stability and did not differ significantly in their network social support compared to those in the control condition, while reducing their AOD network risk both overall and within retained relationships. Taken together, these findings suggest that the intervention may have triggered recipients to adjust their relationships strategically. For example, participants may have increased their awareness of risky network members, but instead of dropping them from their network, participants may have identified ways to avoid risky interactions when with these members. It is possible that combining personal network visualizations with Motivational Interviewing triggered intervention recipients to articulate active steps they could take to minimize exposure to AOD influence from network members they did not want or were not able to completely avoid. It is possible that the MI-SNI triggered network-specific "change talk" that led to behavior changes in their interactions with their networks [41]. --- Limitations Although this study provides some promising results that this innovative MI-SNI design coupling Motivational Interviewing and personal network visualizations can help restructure networks in positive ways, there are several limitations worth noting. First, while our sample size is appropriate for an exploratory, small pilot study of a novel intervention approach [48], it was too small to control for factors that may have influenced the results. Also, the large number of exploratory tests run in this study is appropriate for Stage 1 behavioral therapy research development, but may have produced significant findings due to chance. Our predominantly male sample drawn from only 2 housing providers limits generalizability to other housing programs in other geographic regions with different demographic characteristics. A limitation to our tests of network change is that we were only able to collect network data immediately after the intervention period, and we have no assessment of the longer-term impact of the intervention on the networks of participants. This study also relied on self-reports of network characteristics at baseline and follow-up. Due to the high respondent burden of completing personal network interviews [68,69], we had to limit our standard questions to only a few relationship characteristics. There are likely many other important relationship qualities that may be impacted by the MI-SNI intervention that we did not measure.
As in other AOD use interventions, social desirability may have impacted the self-reported network AOD use outcomes, particularly for those who were invited to receive the intervention sessions and discussed their networks with MI facilitators. However, the findings showing network changes are consistent with individual-level AOD use change outcomes [34], and self-reports by egos of their alters' AOD use using a personal network approach have been found to be accurate when compared to alter self-reports [70]. Another important limitation of this study is the mixture of results that were significant for our primary sample of residents of SRHT only, the original program that contributed to the design of the intervention, and results that were significant for models based on the entire sample. These mixed findings are similar to the results of the analysis of individual-level changes in AOD-related outcomes for MI-SNI recipients compared to control participants [34]. These mixed findings make it difficult to draw conclusions because there were too few SRO Housing residents (n = 13) to conduct a sub-sample analysis. Different housing programs that follow a harm reduction model operate in different ways [71], and it is possible that differences in how these two programs provide services and case management to residents impacted these mixed findings. Because of these limitations, many of the results of these exploratory analyses are preliminary and will require a larger RCT to fully test the intervention impact. --- Conclusions Despite these limitations, these results met our initial objective to conduct a pilot test of a novel personal network-based intervention approach. The findings suggest enough promise to justify a larger RCT that enables more robust tests of hypotheses. These results provide some evidence that the intervention had an impact on intervention recipients that went beyond changes to their own personal AOD risk behavior. We believe that the findings of this pilot test suggest that coupling MI with visualizations of personal network diagrams that highlight AOD risk and support characteristics may help residents who have recently transitioned to housing to take steps to change their immediate social environment to achieve AOD use reduction goals. These findings suggest that the intervention may have prompted actions by participants to reduce the prominence of network members who had the potential to influence their own AOD risk. In addition to conducting a larger RCT to provide sufficient power to control for potential confounding factors, such as demographics or housing program characteristics, we recommend that future studies of this approach include a complementary, qualitative investigation of the network change process for MI-SNI recipients compared to control participants to better understand how the intervention triggers a pattern of choices regarding which network members to retain, which to drop, and the development of relationship change strategies. This would possibly shed light on the mechanisms of network change that are triggered by coupling MI with visualizations of personal networks and key relationship characteristics related to beneficial network reconfiguration. The development of the MI-SNI and interpretation of these RCT results benefitted from qualitative data collected during beta tests of the MI-SNI interface [35] as well as other studies of formerly homeless people in substance abuse recovery [19].
Continued collection of qualitative data can provide context to better understand how people actively modify their networks to achieve behavior change outcomes. A better understanding of the context of network change would also help guide the selection and construction of personal network measures to track changes for both control and intervention participants. We have presented one approach to measuring personal network change that met the goals of this small sample pilot test. A larger sample would enable other analytic approaches for measuring personal network change [37,39,72], including multilevel models that can test for participant-alter relationship outcomes controlling for participant, alter, and personal network characteristics while accounting for the non-independence of ego-alter observations [53-55,73,74]. Although most examples of SNA-informed behavior change interventions use a personal network approach, few have been rigorously tested with RCTs and longitudinal network data [30]. Therefore, this is clearly a developing field and in need of more examples to help identify best practices for measuring and testing network change. Another modification of the design used in this pilot test would be to have residents' case managers deliver the MI-SNI rather than external intervention facilitators. The visualizations resulting from the personal network interviews may help case managers understand the starting point of new residents' social environment as they transition out of homelessness and may improve their ability to understand their social challenges and recommend appropriate services. People transitioning away from homelessness and attempting to reduce their AOD use appear to recognize the importance of the social environment in their continued AOD use. The MI-SNI may be a tool that provides them with an easy-to-understand personal overview of their current social environment. The four sessions that MI-SNI recipients were invited to receive may trigger them to take preliminary steps towards changing aspects of their networks while seeing tangible evidence of how these efforts impacted their networks. This progress towards social network change may encourage changes in the participants' own AOD use behavior. Therefore, changing social networks may make achieving change in AOD use more attainable and may lead to better AOD use outcomes over time. These preliminary findings suggest the need for a larger trial with a
longer follow-up. Although the MI-SNI was customized for new residents of a harm reduction housing program, the results of this pilot test also suggest that this intervention approach could have impact beyond the housing context. The MI-SNI intervention approach can be adapted for other populations (e.g., adolescents) and other health outcomes where social networks are influential (e.g., smoking). --- All data files are available in a GitHub repository: https://github.com/qualintitative/EgoWeb-Project-Data/tree/main/PONE-D-19-36073R1 (ClinicalTrials.gov Identifier: NCT02140359). --- Supporting information S1 Appendix. CONSORT checklist. (PDF) S2 Appendix. Study protocol. This document includes the exact text describing the RCT procedures approved by the authors' IRB prior to the trial beginning. The document includes the original study plan, human subjects protection plan, and data safeguarding plan provided to the IRB in the initial ethics application, as well as the final text uploaded into the human subjects review system, which was discussed and approved in a full committee meeting prior to the trial starting. (PDF) --- Author Contributions Conceptualization: David P. Kennedy
INTRODUCTION Sexual wellbeing is a human right. The Declaration of Sexual Rights, endorsed by the World Association for Sexual Health (2014), states "the following sexual rights must be recognized, promoted, respected, and defended" regardless of age, race, sexual orientation, health status, social and economic situation, and so forth: the right to sexual autonomy (including choices about one's body, sexual behaviours, and relationships), the right to sexual freedom (including both the freedom to sexual expression and freedom from all forms of violence, stigma, and oppression), and the right to pleasurable, satisfying, and safe sexual experiences, which can be an important source of overall health and wellbeing. These rights, however, often go unacknowledged and unsupported in research, policy, and discourse regarding the sexuality and sexual health of women living with HIV (Carter, Greene, et al., 2017). For decades, sex in the context of HIV has been synonymous with danger, resulting in a lack of pleasure in discussions and programs about women and HIV (Higgins & Hirsch, 2007; Higgins, Hoffman, & Dworkin, 2010). This narrative, combined with gendered cultural norms, has produced expectations that women living with HIV ought not to have sex, or, if they must, that they need to do so safely, with no acknowledgment of the satisfaction, pleasure, or other benefits that women may be deriving from sex (Gurevich, Mathieson, Bower, & Dhayanandhan, 2007; Lawless, Crawford, Kippax, & Spongberg, 1996). Importantly, however, women living with HIV have, for many years, fought back against these negative sexual scripts. From Mariana Iacono's (2016) tips on how to go down on a woman living with HIV, to queer artist-activist Jessica Whitbread's (2011, 2016) "Fuck Positive Women" poster and "I Don't Need a Space Suit to Fuck You" retro lesbian sci-fi fantasia, to the policy statement of the International Community of Women Living with HIV/AIDS (2015) opposing laws that criminalize intimacy between consenting adults, women living with HIV have been at the forefront of efforts to end sexual oppression and promote sexual liberation for themselves and their communities. This kind of sex-positive feminist dialogue is largely absent from HIV research, as most studies concerning HIV-positive women's sexual health continue to focus on the sexual health of others. The emphasis on HIV prevention is evident in the large literature on: safer sex, which has primarily interrogated (male) condom use practices (Carvalho et al., 2011); safer conception (Matthews et al., 2017) and prevention of vertical transmission (Ambia & Mandala, 2016); and, more recently, treatment-driven prevention strategies, for which the latest science shows that people who are adherent to combination antiretroviral therapy (cART) and achieve and maintain an undetectable viral load (VL) have effectively no risk of sexually transmitting the virus to HIV-negative partners (Rodger et al., 2016). While important inequities in treatment access and adherence exist owing to a myriad of social factors (e.g., substance use, violence, poverty) (Carter, Roth, et al., 2017), researchers are beginning to theorize that this biomedical science may have the unintended positive consequence of freeing people living with HIV from repressive discourses of sexual risk and opening up new possibilities for sexual pleasure (Persson, 2016).
To draw attention to the need for research, policy, and discourse to support the sexual rights of women living with HIV, as set forth in the Declaration, the purpose of this study was to explore sexual satisfaction and pleasure among women living with HIV in Canada. Consistent with critical feminist theory (Carter, Greene, et al., 2017), we were concerned with how these experiences relate to issues of power, looking specifically at women's intimate relationships and the larger social realities in which women enact their sexual lives. By studying positive aspects of sexuality, and understanding the relational and social conditions under which women are most and least likely to enjoy them, we aim to shift the focus in HIV to women's rights and help change the dominant narrative from risk to pleasure. --- Definitions and conceptual underpinnings Sexual satisfaction and pleasure. Sexual satisfaction is often defined with regard to positive emotions. For example, Sprecher and Cate (2004) conceptualized it as "the degree to which an individual is satisfied or happy with the sexual aspect of his or her relationship" (p. 236). Early theories of sexual satisfaction stem mainly from social exchange models that posit that feeling sexually satisfied (or sexually unsatisfied) arises from a perceived balance between the presence of sexual rewards (e.g., joy, pleasure) and absence of sexual costs (e.g., anxiety, inhibition) as exchanged between partners (Byers, Demmons, & Lawrance, 1998). These descriptions, however, focus on satisfaction within relationships, while others have measured satisfaction in relation to how happy one is with one's sexual life more broadly (Bridges, Lease, & Ellison, 2004). The Sexual Satisfaction Scale - Women's version (SSS-W) was developed to capture both the relational and personal dimensions of this concept (Meston & Trapnell, 2005), and several other new scales assessing sexual satisfaction have been developed and recently reviewed by Mark, Herbenick, Fortenberry, Sanders, and Reece (2014). Although sexual pleasure plays an important role in satisfaction (Pascoal, Narciso, & Pereira, 2014), it also has distinct meanings. Broadly defined, Abramson and Pinkerton (2002) described sexual pleasure as the "positively valued feelings induced by sexual stimuli" (p. 8). Other definitions emphasize both physical and emotional sensations arising from intimate touch of the genitals or other erogenous zones, such as breasts and thighs (De la Garza-Mercer, 2007). Yet sex and sexual gratification can also encompass broader experiences such as kissing, hugging, or fantasizing (Fahs & McClelland, 2016), which women living with HIV themselves report are important aspects of a pleasurable sexual life (Taylor et al., 2016). --- Subjectivity, Agency, and Entitlement. Cutting across these literatures is the notion that sexual satisfaction and pleasure are subjective experiences. Indeed, when people are asked to reflect on these concepts, the individual and dyadic factors they describe as contributing to sexual fulfillment and enjoyment are highly diverse and personal in nature. Yet sexuality is also political, and is "moderated by and unfolds within a particular and cultural milieu" (Abramson & Pinkerton, 2002, p. 10).
A key feature, then, of critical sexuality research is attention to the ways in which disparate socio-political conditions may shape not only how women experience but also how they evaluate their sexual lives within specific social contexts (Fahs, 2014; Fahs & McClelland, 2016; McClelland, 2011, 2013). Feminist scholars have taken up this cause in recent studies by theorizing outcomes in relation to sexual agency and entitlement. Agency has been defined as "the ability of individuals to act according to their own wishes and have control of their sexual lives" (including the choice to have or not have sex) (Fahs & McClelland, 2016, p. 396). In empirical research on the subject, higher agency has been associated with greater sexual satisfaction and excitement (Fetterolf & Sanchez, 2015; Kiefer & Sanchez, 2007; Laan & Rellini, 2011; Sanchez, Kiefer, & Ybarra, 2006), while lower agency has been linked to a reduced likelihood of declining unwanted sex (Bay-Cheng & Eliseo-Arras, 2008) and feeling pleasure (Sanchez, Crocker, & Boike, 2005). Beyond deciding to have sex and pursue pleasure is the issue of feeling entitled to it. Sara McClelland (2010) elaborated on this in her "intimate justice" framework to guide sexual satisfaction research among marginalized populations. After methodically reviewing decades of sexual and life satisfaction research, she argued that external contexts (e.g., pressure to conform to gender roles, stigma against sexuality) can lower what a person feels they deserve sexually and thereby inflate satisfaction ratings (McClelland, 2010). --- Research on sexual satisfaction and pleasure among women living with HIV Both qualitative studies (Carlsson-Lalloo, Rusner, Mellgren, & Berg, 2016) and women's own personal testimonies (Becker, 2014; Caballero, 2016; Carta, 2016; Fratti, 2017; Whitbread, 2016) reveal how several social, political, emotional, and relational factors can affect women's experiences of sex. Common concerns reported in the literature include disclosure and its consequences (e.g., rejection, violence), fears of transmitting HIV and challenges discussing safer sex, and external (e.g., HIV non-disclosure laws) and internal (e.g., low self-esteem) HIV-related stigmatization (Beckerman & Auerbach, 2002; Crawford, Lawless, & Kippax, 1997; Gurevich et al., 2007; Lather & Smithies, 1997; Siegel, Schrimshaw, & Lekas, 2006; van der Straten, Vernon, Knight, Gomez, & Padian, 1998; Welbourn, 2013). For some women, such stressors contribute to feelings of loss of sexuality (Balaile, Laisser, Ransjo-Arvidson, & Hojer, 2007; Gurevich et al., 2007). Studies thus suggest that many women (though not all) report less satisfaction with their sex lives (Balaile et al., 2007; Hankins, Gendron, Tran, Lamping, & Lapointe, 1997; Siegel et al., 2006) and reduced enjoyment of sex (Closson et al., 2015; Lambert, Keegan, & Petrak, 2005; Siegel et al., 2006) after an HIV diagnosis. Evidence from large-scale, quantitative studies is relatively limited, however; and, of significance, most findings come from gender-aggregated data.
One of the most consistent predictors of sexual satisfaction in the context of HIV has been stigma-related constructs, with lower satisfaction ratings found among those reporting greater sex-negative attitudes, perceived responsibility for reducing the spread of HIV, discrimination in a relationship, and internalized stigma (Bogart et al., 2006; Castro, Le Gall, Andreo, & Spire, 2010; Inoue, Yamazaki, Seki, Wakabayashi, & Kihara, 2004; Peltzer, 2011). Researchers have also explored the role of age, depression, and education and employment (Bouhnik et al., 2008; Castro et al., 2010; Peltzer, 2011), though only socioeconomic factors have been found to consistently promote satisfaction. Quantitative studies have not explored relationships well. Studies have focused narrowly on women's relationship status (i.e., married vs. single) and report conflicting findings (Castro et al., 2010; Inoue et al., 2004; Peltzer, 2011). In contrast, results from non-HIV literature emphasize a clear connection between sexual satisfaction and pleasure and numerous indicators of relationship quality such as physical intimacy, emotional closeness, commitment, and gender power relations, among other factors (Haavio-Mannila & Kontula, 1997; Henderson, Lehavot, & Simoni, 2009; Sánchez-Fuentes, Santos-Iglesias, & Sierra, 2014). These studies, however, have failed to account for the multidimensional nature of sexual and intimate partnering, and it is the interaction between relationship dimensions that may be critical to experiences of sexual satisfaction and pleasure. --- Study objective In a previous paper, we used latent class analysis (LCA) to model patterns of sexual and intimate relationship experiences among women living with HIV in Canada, uncovering five multi-dimensional latent classes (i.e., no relationship; relationships without sex; and three sexual relationships: short-term, long-term/unhappy, and long-term/happy), which differed on seven indicators of sex, intimacy, and relationship power (Carter et al., 2016). The current paper represents a follow-up to this analysis and is guided by the following objective: to describe women's feelings of sexual satisfaction and pleasure and compare such experiences across these five latent classes, critically examining and adjusting for social and health factors associated with relationship types and predictive of sexual outcomes. --- METHOD --- Study design Data for this analysis came from the baseline questionnaire of the Canadian HIV Women's Sexual and Reproductive Health Cohort Study (CHIWOS, www.chiwos.ca). CHIWOS is a community-based research project of self-identified women living with HIV aged 16 years or older from British Columbia, Ontario, and Quebec (Loutfy et al., 2017). The study is committed to the meaningful involvement of women living with HIV as Peer Research Associates (PRAs) and academic researchers, care providers, and community agencies as allied partners throughout all stages of the research, from the design of data collection tools, through participant outreach and recruitment, to knowledge dissemination activities including scientific co-authorship. Women living with HIV were recruited into CHIWOS between August 2013 and May 2015, using a comprehensive strategy designed to oversample women from communities traditionally marginalized from research (Webster et al., In Press).
After a brief screening interview, PRAs administered FluidSurveys™ questionnaires to women in English (n = 1081) or French (n = 343). Interviews were completed either in-person (at community agencies or women's homes) or via telephone or Skype for those living in rural or remote areas, and lasted an average of 2 hours (interquartile range (IQR): 90-150 minutes). Participants provided voluntary informed consent and were given $50 to honour their time and contributions. We received ethical approval from Simon Fraser University, the University of British Columbia/Providence Health Care, Women's College Hospital, McGill University Health Centre, and community organizations where necessary. --- Study variables --- Outcome variables. Sexual health questions were informed by women living with HIV and aimed at minimizing participant burden. Sexual satisfaction was assessed among all women using one item from the personal contentment domain of the SSS-W (Meston & Trapnell, 2005): "Overall, how satisfactory or unsatisfactory is your present sex life?" Responses were on a five-point scale ranging from "completely," "very," or "reasonably" satisfactory to "not very" or "not at all" satisfactory. The final two categories were collapsed due to low numbers. Sexual pleasure was assessed using one item from the Brief Index of Sexual Functioning for Women (BISF-W) (Taylor, Rosen, & Leiblum, 1994), which read: "During the past month, have you felt pleasure from any forms of sexual experience?" Responses included: "always felt pleasure," "usually, about 75% of the time," "sometimes, about 50% of the time," "seldom, less than 25% of the time," "have not felt any pleasure," and "have had no sexual experience during the past month." Those with no recent sexual experience were excluded from analyses, with the remainder collapsed into three groups (i.e., always vs. usually/sometimes vs. seldom/none). --- Explanatory variables. The main explanatory variable was relationship latent class, derived via LCA. A detailed description of LCA methodology and these relationship types is available elsewhere (Carter et al., 2016). Briefly, LCA is a person-centred approach capable of identifying clusters of individuals that share a common set of characteristics using structural equation modelling of categorical data (Lanza, Bray, & Collins, 2013). In our analysis, we modelled seven indicators: 1) sexual relationship status (a cross of recent consensual oral, anal, or vaginal sexual activity with a regular partner and current relationship status), 2) (dis)contentment with their frequency of sexual intimacy (e.g., kissing, intercourse, etc.), 3) (dis)contentment with the amount of emotional closeness experienced, and, of those with a regular partner (i.e., spouse, common law partner, long term relationship, friend with benefits, or partner seen on and off for some time): 4) relationship duration, 5) couple HIV serostatus, 6) sexual exclusivity, and 7) relationship power (i.e., the Relationship Control sub-scale of the Sexual Relationship Power Scale, developed by Pulerwitz, Gortmaker, and DeJong (2000)). Two items (i.e., emotional closeness and sexual intimacy) came from the SSS-W (Meston & Trapnell, 2005), and bivariable analyses revealed a strong association with reporting a completely satisfactory sex life (data not shown). However, LCA groups women according to their response patterns on multiple variables, which together contribute to the underlying meaning of the latent class.
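As a methodological aside, the minimal Python sketch below illustrates the core estimation idea behind LCA for binary indicators: an EM algorithm that alternates between posterior class memberships and updates to class prevalences and class-conditional item probabilities. It is a toy illustration on simulated data only; it is not the authors' code, and the study fitted its model with dedicated LCA software rather than this hand-rolled routine.

```python
import numpy as np

def fit_lca(X, n_classes, n_iter=200, seed=0):
    """Minimal EM for a latent class model with binary indicators.
    X: (n, d) array of 0/1 responses. Returns class prevalences,
    class-conditional item probabilities, and posterior class memberships."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    pi = np.full(n_classes, 1.0 / n_classes)               # P(class)
    theta = rng.uniform(0.25, 0.75, size=(n_classes, d))   # P(item = 1 | class)
    for _ in range(n_iter):
        # E-step: posterior P(class | response pattern), computed in log space.
        log_post = X @ np.log(theta).T + (1 - X) @ np.log(1 - theta).T + np.log(pi)
        log_post -= log_post.max(axis=1, keepdims=True)
        resp = np.exp(log_post)
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: update prevalences and item probabilities from the responsibilities.
        pi = resp.mean(axis=0)
        theta = np.clip((resp.T @ X) / resp.sum(axis=0)[:, None], 1e-6, 1 - 1e-6)
    return pi, theta, resp

# Toy run: 7 binary indicators (as in the paper) for 500 simulated cases, 5 classes.
rng = np.random.default_rng(1)
X = (rng.random((500, 7)) < 0.5).astype(float)
pi, theta, resp = fit_lca(X, n_classes=5)
print(pi.round(3))                # estimated class prevalences
print(resp.argmax(axis=1)[:10])   # modal class assignment for the first 10 cases
```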
Thus, while we acknowledge strong intercorrelations, we questioned whether these two indicators were perfectly aligned with the outcome of overall sexual satisfaction and sought to uncover this in our analysis, exploring how varying levels of physical and emotional intimacy may impact global satisfaction ratings. As the resulting latent classes are described elsewhere (Carter et al., 2016), we offer a brief description here along with a figure illustrating the latent class structure (Figure 1). The most prevalent class within the entire sample (which we called no relationship [46.5%]) was comprised entirely of women who reported being single, separated, widowed, or divorced and had not engaged in any consensual oral, anal, or vaginal sexual activity with a regular partner in the past 6 months. The second class (relationships without sex [8.6%]) consisted of women who had similarly not had any recent sex but reported their current legal relationship status as married, common-law, or in a relationship but not living together. Forty-three per cent of the women in this class were content with the amount of physical intimacy in their life (or lack thereof), while 27% felt they had enough emotional closeness. The final three latent classes represented distinct types of consensual sexual relationships with a regular partner (short-term [15.4%], long-term/unhappy [6.4%], and long-term/happy [23.2%]). Relative to women in short-term relationships, women in the two longer-term latent classes had much higher probabilities of reporting that they were in a sexually monogamous relationship, were married, common-law, or non-cohabiting, and had been with their partner for ≥ 3 years. These sexual relationships diverged, however, on contentment with physical intimacy (97%-happy vs. 44%-unhappy vs. 46%-short-term) and emotional closeness (86%-happy vs. 24%-unhappy vs. 16%-short-term), high power equity (93%-happy vs. 52%-unhappy vs. 51%-short-term), and the presence of an HIV-negative partner (71%-happy vs. 59%-unhappy vs. 81%-short-term). Further, in bivariable analyses, we found that women in long-term/happy sexual relationships (66.8%) and relationships without sex (50%) were most likely to report "feeling love for and wanted by someone all of the time", compared to women in long-term/unhappy relationships (33.3%), short-term relationships (24.8%), and no relationship (23.5%) (p < .0001). --- Confounders. Factors associated with latent class membership in the previous analysis and theorized to be determinants of sexual satisfaction and pleasure were considered as potential confounders (see tables for full derivations and cited literature for scoring instructions). These included: age; annual personal income; education; children living at home; transactional sex; illicit drug use; any physical, verbal, sexual, or controlling violence as an adult or child; use of cART; discussed with a provider how VL impacts HIV transmission risk; post-traumatic stress disorder (PTSD) (score range = 6-30, ≥ 14 indicating likely PTSD; Cronbach's α = .91) (Lang & Stein, 2005); depression (score range = 0-30, ≥ 10 suggesting probable depression; Cronbach's α = .74) (Zhang et al., 2012); sexism/genderism and racism (score range = 8-48; Cronbach's α = .94) (Williams, Yan, Jackson, & Anderson, 1997); and HIV stigma (score range = 0-100; Cronbach's α = .84) (Berger, Ferrans, & Lashley, 2001).
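Several of the scales above are summarized with Cronbach's α, the usual internal-consistency statistic: α = k/(k-1) * (1 - sum of item variances / variance of the total score). For readers who want the formula made concrete, here is a small self-contained sketch on toy data; the items and scores are invented and are not CHIWOS items.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

# Toy example: six items scored 1-5 (roughly the shape of a 6-item screener with a
# 6-30 total score range); items are made to correlate via a shared base score.
rng = np.random.default_rng(0)
base = rng.integers(1, 6, size=(200, 1))
items = np.clip(base + rng.integers(-1, 2, size=(200, 6)), 1, 5).astype(float)
print(round(cronbach_alpha(items), 2))
```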
Although not independently associated with relationship types (and thus, not meeting confounding criteria), we also examined the following factors in relation to sexual outcomes in bivariable analyses: gender; sexual orientation; ethnicity; time living with HIV; most recent VL; most recent CD4 cell count; and physical and mental health-related quality of life, assessed via the SF-12 (score range = 0-100, Cronbach's α = .82) (Carter, Loutfy, et al., 2017). --- Analysis plan --- Final analytic sample. Overall, 1,424 women living with HIV were enrolled in CHIWOS, but only 1,334 were included in the previous LCA owing to missing relationship data. Of these 1,334 women, 1,230 responded to the aforementioned question about sexual satisfaction, while 675 reported on pleasure from any forms of sexual experience in the past month. For regression analyses of sexual satisfaction, we excluded another 163 women who responded "don't know" or "prefer not to answer" to confounders, resulting in a final analytic sample of 1,067 for both unadjusted and adjusted analyses (80.2% of the total sample). For pleasure, the final sample size for multivariable comparisons was 567 (41.6% of the total sample). --- Descriptive, bivariable, and multivariable analyses. Baseline characteristics were reported on all 1,334 women comprising the LCA, using frequencies (n) and percentages (%) for categorical variables, and medians (M) and interquartile ranges (Q1, Q3) for continuous variables. Bivariable analyses were conducted of the explanatory variable (relationship types) and confounders by both sexual satisfaction (n = 1230) and pleasure (n = 675). Crude associations were tested using the Pearson χ² test or Fisher's exact test for categorical variables and the Kruskal-Wallis test for continuous variables. Those with a p-value of < 0.2 (Kaida et al., 2015) and previously associated with relationship types (Carter et al., 2016) were examined in further analyses. Binomial and multinomial logistic regression (the latter adjusting for factors meeting confounding criteria) were used to investigate how relationship types were associated with increased odds of feeling completely, very, or reasonably satisfied with one's sexual life, using not very/not at all satisfied as the referent, with unadjusted and adjusted odds ratios (ORs and AORs) and 95% confidence intervals (CIs) reported. Procedures were repeated to explore the link between relationship types and an increased odds of always or usually/sometimes feeling sexual pleasure, using seldom/not at all as the referent. To compare all latent classes, we ran multiple models, each time using a different latent class as the reference group. Analyses were conducted using SAS® version 9.3 (SAS, North Carolina, United States). --- RESULTS --- Social and health circumstances of women's lives The 1,334 women living with HIV included in baseline analyses were diverse in gender (4.3% trans), sexual orientation (12.5% LGBTQ), ethnicity (22.3% Indigenous; 28.9% African/Caribbean/Black; 41.2% White), socio-economic status (71.4% personal income < $20,000 CAD, 18.1% current illicit drug use, 6.2% current sex work), age (median: 42.0 years; IQR: 35.0, 50.0; range: 16-74), and time living with HIV (median: 10.8 years; IQR: 5.9, 16.8; range: 1 month to 33.7 years). Nearly one-quarter (22.8%) had biological children living at home with them. Nearly half had depression and PTSD symptoms, and 80.4% reported lifetime experiences of violence.
Most were taking cART (82.7%) and had an undetectable VL (81.5%). About two-thirds (68.8%) had talked to their doctor about its impact on transmission. Table 1 shows other social and health factors as well as levels of sexual satisfaction and pleasure. --- Experiences of sexual satisfaction and pleasure Of women with sexual satisfaction ratings (n = 1,230), 21.0% and 17.1% reported being completely and very satisfied with their sex lives, respectively, with the remainder feeling reasonably (30.9%) or not very/not at all satisfied (30.9%). Overall, 51.8% of the cohort stated they had some form of sexual experience in the past month (n = 675), including 22.5% of women in no relationship and 21.7% of women in relationships without sex. Of these 675 women, 41.3% always and 38.6% usually/sometimes felt pleasure from sexual experience, while 20% reported seldom/no pleasure. Satisfaction and pleasure were correlated but not identical constructs: among those who always felt pleasure, 47.6% were completely and 28.9% were very satisfied with their sex life (vs. reasonably [14.4%] and not very/not at all satisfied [9.0%]; data not shown). --- Patterns of sexual satisfaction and pleasure by relationship types As highlighted in Table 2, approximately half (48.7%) of the women in long-term/happy sexual relationships (defined by the highest levels of love, physical and emotional intimacy, shared power, and mixed HIV status) were completely satisfied with their sexual life, while 32.0% were very and 17.3% reasonably sexually satisfied; just 2% (n = 6) said not at all/not very satisfied. The opposite pattern was found for women in no relationship, of whom 44.4% (n = 237) were not very/not at all satisfied; although the remainder were satisfied at some level with their sexual life (i.e., 30.9% reasonably, 12.4% very, and 12.4% completely). Of the three remaining latent classes (all with similar levels of physical intimacy), women in relationships without sex were more likely to report that overall, their present sex life was completely satisfactory (20.4%) than women in short-term (7.6%) and long-term/unhappy (8.2%) sexual relationships. In terms of sexual pleasure (Table 3), 64.2% of women in long-term/happy sexual relationships reported that they always felt pleasure from any forms of sexual experience during the past month, while 33.9% usually/sometimes felt pleasure and 2.8% experienced seldom/no pleasure. Reports of always feeling pleasure were much lower among women in short-term sexual relationships (30.7%), and even lower among those in long-term/unhappy sexual relationships (16.2%, characterized by longer duration and more HIV-positive partners). For women in no relationship or relationships without sex, about one-quarter reported always feeling pleasure during their sexual experiences. As seen in both tables, sex did not equate with satisfaction or pleasure, as some women were completely satisfied without sex (i.e., 12.4% no relationship, 20.4% relationships without sex), while others were having sex without reporting pleasure (i.e., 24.2% short-term, 21.6% long-term/unhappy). 
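The adjusted odds ratios and 95% CIs reported in the following subsections come from the multinomial logistic regressions described in the analysis plan. As a hedged, self-contained illustration of that workflow (synthetic data with made-up variable names, fitted in Python rather than the SAS 9.3 the study used), the sketch below exponentiates the model coefficients and their Wald limits to obtain AOR-style estimates relative to the referent outcome level.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 1000

# Synthetic stand-in data (NOT CHIWOS data): one exposure dummy and two confounders.
df = pd.DataFrame({
    "longterm_happy": rng.integers(0, 2, n),
    "depression":     rng.integers(0, 2, n),
    "violence":       rng.integers(0, 2, n),
})

# Outcome coded 0-3: 0 = not very/not at all satisfied (referent), 1 = reasonably,
# 2 = very, 3 = completely. Generated from a multinomial-logit process.
lin = 0.8 * df["longterm_happy"] - 0.6 * df["depression"] - 0.3 * df["violence"]
scores = np.column_stack([np.zeros(n), 0.5 + lin, 0.2 + lin, lin])
probs = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)
df["satisfaction"] = [rng.choice(4, p=p) for p in probs]

X = sm.add_constant(df[["longterm_happy", "depression", "violence"]])
fit = sm.MNLogit(df["satisfaction"], X).fit(disp=False)

# AOR-style estimates: exponentiate coefficients and Wald 95% limits.
params, bse = np.asarray(fit.params), np.asarray(fit.bse)
aor  = np.exp(params)
lo95 = np.exp(params - 1.96 * bse)
hi95 = np.exp(params + 1.96 * bse)
print(np.round(aor, 2))
print(np.round(lo95, 2))
print(np.round(hi95, 2))
```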
--- Patterns of sexual satisfaction and pleasure by social and health factors In terms of social and health covariates, sexual satisfaction was crudely associated with age, sexism/genderism, annual personal income, education, PTSD and depressive symptoms, violence as an adult and as a child, cART, discussed with provider how VL impacts transmission risk, and HIV stigma, all of which were associated with relationship types in our previous LCA paper (Carter et al., 2016). With the exception of income, these same factors showed crude associations with sexual pleasure, along with three additional influences (i.e., transactional sex, illicit drug use, and children at home). Gender and sexual orientation were not associated with relationship types or sexual satisfaction and pleasure, while ethnicity was only associated with sexual satisfaction: specifically, Indigenous women were more likely to be completely sexually satisfied (27.8%) compared to women of all other ethnicities (18.1-20.5%), while African, Caribbean, and Black women reported the highest rates of sexual dissatisfaction (38.5%) versus their peers (range: 19.9-33.7%). Since, however, ethnicity was not a determinant of relationship types (the second criterion for confounding), it was excluded from the multivariable confounder analyses. Clinical factors (e.g., VL, CD4 count) were not examined further for the same reason. --- Multivariable confounder analysis of sexual satisfaction In adjusted analyses, women in long-term/happy sexual relationships had much greater odds of reporting satisfaction with their sexual life than women in all other latent classes, with the greatest effects seen relative to no relationship, and the weakest in relation to relationships without sex (Table 4, n = 1,067). Additionally, the effect estimates were generally strongest at the highest level of sexual satisfaction ("completely") and gradually decreased in strength through to the middle ("very") and lowest level of satisfaction ("reasonably"), all relative to "not very/not at all" satisfied. For instance, after adjusting for confounders, the odds of feeling completely satisfied with one's sex life (vs. not very/not at all) were 94 times greater among women in long-term/happy relationships than women in no relationships (AOR = 94.05, 95% CI = 35.75, 247.44). The extremely large estimates and wide CIs indicate a strong predictor and reflect the fact that very few women in long-term/happy relationships were not very/not at all satisfied (n = 6 [2.0%]) versus many women in no relationship (n = 237 [44.4%]). Much lower effect estimates (i.e., less than 2) were observed for all other relationship comparisons. For instance, women in relationships without sex also had increased adjusted odds of reporting that their sex life was completely satisfactory, relative to women in no relationship (although the 95% CI included the null value) (AOR = 1.88, 95% CI = 0.98, 3.63). There were no differences when comparing short-term and long-term/unhappy relationships to no relationships (referent) at the highest outcome level (i.e., completely satisfied), but higher AORs were seen at the remaining two outcome levels (i.e., very and reasonably satisfied). Likewise, there were also no differences when women in relationships without sex were used as the referent. In terms of confounding factors, women with depression (AOR = 0.32, 95% CI = 0.20, 0.53) and currently experiencing violence (AOR = 0.38, 95% CI = 0.18, 0.82) had reduced odds of reporting a completely satisfactory sex life.
Older age (AOR = 0.89, 95% CI = 0.73, 1.09, per 10-year increase in age) and HIV stigma (AOR = 0.98, 95% CI = 0.87, 1.09) also had reduced effects on sexual satisfaction, though the estimates were smaller and patterns non-significant (i.e., the 95% CI included the null value). Women with higher than high school education also had lower AORs for being completely satisfied relative to women with lower than high school education (AOR = 0.46, 95% CI = 0.24, 0.86), as did women who had discussed with their provider how VL impacts transmission risk (AOR = 0.67, 95% CI = 0.43, 1.05). --- Multivariable confounder analysis of sexual pleasure In regards to sexual pleasure (Table 5, n = 567), women in long-term/happy sexual relationships had greater adjusted odds of reporting that they always felt pleasure during any sexual experiences versus seldom/no pleasure, relative to those in long-term/unhappy relationships (AOR = 41.02, 95% CI = 11.49, 146.40) and those in short-term relationships (AOR = 11.83, 95% CI = 4.29, 32.59). The strength of association was reduced at the outcome level of "usually/sometimes" felt pleasure but nonetheless elevated (i.e., referents: long-term/unhappy: AOR = 4.84, 95% CI = 1.66, 14.09; short-term: AOR = 6.48, 95% CI = 2.40, 17.47). In comparing women in long-term/unhappy relationships versus short-term relationships, the adjusted odds of always feeling pleasure during sexual experiences were reduced for the former group by 71% (AOR = 0.29, 95% CI = 0.10, 0.87). No significant differences in the experiences of pleasure were observed when comparing those in no relationships to those in relationships without sex. In terms of confounders, as with sexual satisfaction, women experiencing depression (AOR = 0.46, 95% CI = 0.24, 0.91) and current violence (AOR = 0.21, 95% CI = 0.06, 0.73) had lower adjusted odds of reporting that they always felt pleasure. Current transactional sex, while not included in the satisfaction model, was also associated with a significant reduction in always feeling pleasure (AOR = 0.16, 95% CI = 0.05, 0.52). Similar to the previous model, small and non-significant associations with pleasure were seen for older age (AOR = 0.82, 95% CI: 0.59, 1.13) and HIV stigma (AOR = 0.88, 95% CI = 0.74, 1.03). On the other hand, two contrasting findings were seen in relation to higher than high school education (AOR = 2.22, 95% CI = 0.94, 5.22) and having discussed with a provider how VL impacts transmission risk (AOR = 1.87, 95% CI = 1.00, 3.50), with higher (i.e., above 1) AORs for always reporting pleasure observed rather than lower (i.e., below 1) AORs as seen with satisfaction. --- DISCUSSION This analysis revealed positive dimensions of sexual health for women living with HIV in
Canada: 69% of women in our cohort were satisfied, to some extent (i.e., reasonably, very or completely), with their sexual life (or lack thereof), and among those with recent sexual experiences, 41.3% reported always feeling sexual pleasure. This finding disrupts narratives of sexual danger in the context of HIV and demonstrates to women living with HIV, and to society, that many women can and do enjoy their sexual lives following a diagnosis of HIV. Yet access to a satisfying and pleasurable sex life was not equal amongst women in our cohort. A key finding was that women in long-term/happy relationships (characterized by higher levels of love, greater physical and emotional intimacy, more equitable relationship power, and mainly HIV-negative partners) had the highest degree of sexual satisfaction and pleasure. It is noteworthy, however, that some women in this cohort were sexually satisfied despite being in no relationship or a nonsexual relationship. Our analysis also highlighted how social status and mental health are related to sexual satisfaction and pleasure. These findings fill important knowledge gaps pertaining to how relational dynamics, social inequities, and trauma impact positive and rewarding aspects of sexuality for women living with HIV, an under-studied population in the field of sexual science. The overall prevalence of sexual satisfaction in our analysis is similar to that reported for other HIV cohorts (Castro et al., 2010; Lambert et al., 2005), but lower than some general population estimates (i.e., 75-83%) (Colson, Lemaire, Pinton, Hamidi, & Klein, 2006; Dunn, Croft, & Hackett, 2000). The differences may be due to the effects of living with HIV or other social factors that disproportionately impact women living with the virus, such as violence and chronic depression (Machtinger, Wilson, Haberer, & Weiss, 2012). However, it remains difficult to draw conclusive interpretations and to compare to other, more recent studies (Heiman et al., 2011; Henderson et al., 2009; Schmiedeberg & Schröder, 2016; Velten & Margraf, 2017), as researchers have used various single- and multi-item instruments (with slight differences in question wording and response scales) and have commonly focused exclusively on sexually active individuals in relationships (del Mar Sánchez-Fuentes, Santos-Iglesias, & Sierra, 2014). Conversely, our prevalence of sexual pleasure is higher than that reported by one previous HIV study (Hankins et al., 1997), conducted early in the epidemic.
Thirty-three per cent of women living with HIV in that study reported feeling little to no sexual pleasure during recent sexual activity, compared to just 20% of women in our analysis. As both scales used the same time frame, phrasing, and study population, this improvement over time could reflect the repositioning of HIV as a chronic disease today, which may reduce fears of transmission and maximize women's enjoyment of sex. The finding that women in long-term/happy relationships were more likely to feel that their present sex life was, overall, either completely, very, or reasonably satisfactory compared to women in all other relational contexts is consistent with other results showing the quality of a relationship with a partner can impact the quality of women's sex life, both within (Castro et al., 2010; Inoue et al., 2004; Peltzer, 2011) and outside the HIV field (Haavio-Mannila & Kontula, 1997; Henderson et al., 2009; Sánchez-Fuentes et al., 2014). Previous studies, though, focused on singular dimensions. For example, some reported longer relationship duration predicts lower sexual satisfaction due, in part, to more familiar, routine sex (Carpenter, Nathanson, & Kim, 2009; Liu, 2003; Pedersen & Blekesaune, 2003; Schmiedeberg & Schröder, 2016). Yet, within long-term committed relationships, women can have varying experiences of sexual satisfaction based on other critical subtleties of relationships, as seen with the long-term/happy and long-term/unhappy latent classes in our analysis (of which, the latter had lower levels of love, power, intimacy, and HIV-negative partners and were less likely to be satisfied sexually). This finding underscores the importance of considering the interaction of several relationship variables. It also highlights how partaking in sex does not universally mean a woman is enjoying a satisfying sex life, adding to previous literature among women without HIV (Fahs & Swank, 2011). With regard to sexual pleasure, we found that women in long-term/unhappy relationships also had significantly reduced odds of always feeling pleasure compared to women in short-term and long-term/happy relationships. The former comparison (i.e., long-term/unhappy to short-term) may indicate that, when indicators of intimacy and power are equal, newer relationships are more sexually gratifying, as observed in past HIV research (Hankins et al., 1997). It may also point to a role of couple HIV serostatus, as HIV-positive partners were more common in long-term/unhappy relationships and previous research suggests some women may stay in these relationships simply because of shared status, fearing that no HIV-negative person would want to be with them (Keegan, Lambert, & Petrak, 2005; Lawless et al., 1996; Nevedal & Sankar, 2015). Yet relationships and pleasurable sex are possible with HIV-negative people, as seen for women in the long-term/happy latent class (of which 71% had HIV-negative partners and 64.2% always felt pleasure), corroborating past research linking pleasure to power equity (Holland, Ramazonoglu, Sharpe, & Thomson, 1992), physical and emotional intimacy (Muhanguzi, 2015), and other relational factors (Carpenter et al., 2009).
This finding subverts a common assumption that couples with differing HIV statuses are plagued by sexual challenges (Beckerman & Auerbach, 2002; Bunnell et al., 2005; Lawless et al., 1996; Rispel, Metcalf, Moody, Cloete, & Caswell, 2011; Siegel et al., 2006; van der Straten et al., 1998). Clearly, HIV "serodiscordance" does not necessarily mean sexual discord. In fact, serodiscordance may even enhance intimacy for women through the process of partner acceptance and validation (Persson, 2005), which may reduce internalized stigma and facilitate self-acceptance, all leading to more capacity for trust, intimacy, and pleasure. Beyond relationships, our findings highlight how sexual experiences are also shaped by a number of important social factors. Women living with HIV experience high rates of violence (Logie et al., 2017), depression, and trauma (Machtinger et al., 2012). Our results show that these stressors can greatly affect experiences of both sexual satisfaction and pleasure, consistent with findings outside the HIV field (del Mar Sánchez-Fuentes et al., 2014). Involvement in transactional sex is also more common among women living with HIV, though it negatively affected reports of sexual pleasure only. Conversely, factors associated with increased sexual pleasure included higher education and provider communication about the science of transmission, while these same factors predicted lower odds for sexual satisfaction. The former findings are consistent with previous research linking higher social status to sexual pleasure (Sanchez et al., 2005), likely through enhanced sexual agency (Bay-Cheng & Eliseo-Arras, 2008). They may also signify the sexually liberating potential of the prevention benefits of cART for some women (Persson, 2016), though important inequities in awareness of this science and in treatment remain (Carter, Roth, et al., 2017; Patterson et al., 2017). Regarding the latter finding (on satisfaction), one interpretation may be that women who are more highly educated and have talked to their doctor about this global strategy are less satisfied because they have higher internal expectations for their sex lives (McClelland, 2010). Collectively, these findings expand the literature on the sexuality of women living with HIV, while also making a number of contributions to the broader science of women's sexuality. First and foremost, critical sexuality researchers have emphasized the importance of centering discussions of abject bodies within the sexuality field (Fahs & McClelland, 2016). This study constitutes an important example of how to engage with this goal. By reframing the sexual experiences of women who are living with HIV away from contagion, as women with other sexually transmitted infections (Nack, 2008) and severe mental illness (Davison & Huntington, 2010) have done, we can build an evidence base that de-stigmatizes sexuality for marginalized and excluded groups. The findings also make visible the relational and social powers that influence women's sexuality. Many of these factors (e.g., sex work, drug use, violence at war, PTSD) are invisible in current literature, as psychological studies often rely on university samples. Finally, from a methodological point of view, this paper demonstrates the utility of feminist quantitative approaches in understanding and supporting women's sexual lives. LCA, in particular, offers a rich area of study for measuring dynamic patterns of sex and relationship experience.
--- Limitations A significant limitation of this study is that the measures used to assess sexual satisfaction and pleasure were broad, whereas the underlying concepts are comprehensive and multifaceted (Opperman, Braun, Clarke, & Rogers, 2014; Pronier & Monk-Turner, 2014). Choice of measurement should be informed by the research question; however, this study was a tertiary objective of the larger parent study. Our questionnaire had a total of nine sections (Abelsohn, 2014), just one of which was specific to sexual health. Of relevance to feminist community-based research, we prioritized questions that were most important to women with HIV and sought to balance participant burden with scientific rigor, a frequent challenge in research with vulnerable populations (Ulrich, Wallen, Feister, & Grady, 2005). While our single-item assessments precluded us from understanding the multiple dimensions of these constructs, it is worth noting that a recent review of sexual satisfaction tools found that just one question can meet some psychometric criteria and is enough if cost or participant burden is a concern (although this item was not from the SSS-W) (Mark et al., 2014). Nonetheless, future research should examine these experiences using the full range of items included in validated scales. We also acknowledge that we did not assess how women were interpreting "sexual satisfaction" and "sexual pleasure." While these experiences may be quite personal in nature (i.e., what brings one woman sexual enjoyment may not pleasure another woman), appraisals may be subject to gender norms, social stigma, and other factors (McClelland, 2010, 2011, 2013). For instance, some women may consider their partner's satisfaction in their own self-ratings (McClelland, 2011), or pleasure may be experienced or interpreted differently across age groups (Taylor et al., 2016). Data may also be affected by social desirability bias, such that sexual satisfaction and/or pleasure were over-reported. We aimed to minimize the effect of such biases through the involvement of women living with HIV in the design and administration of the survey, as well as intensive survey training and piloting procedures. Another important limitation is that we provided no definition for "sexual experiences," which, depending on the person, may include oral, vaginal, and/or anal sex as well as a broader range of activities such as kissing, touching, masturbation, and so forth (Peterson & Muehlenhard, 2007; Sanders et al., 2010). Given the varied meanings of the same construct, it remains difficult to make conclusions about the kinds of activities that are eliciting pleasure as well as reports of pleasure among women in no relationships and relationships without sex. Future HIV studies should assess these constructs in surveys more carefully. Future work should also explore how physical health (e.g., vaginal pain, disabilities, general ill-health) may influence sexual enjoyment, as these data were not collected in our survey. Some effect estimates for sexual satisfaction were extremely large with wide CIs, chiefly for the long-term/happy versus no relationship comparison, because of high correlations with two LCA indicators (i.e., physical intimacy and emotional closeness). These results should be interpreted cautiously.
Interestingly though, these measures were not perfectly correlated in our study, since three classes (i.e., relationships without sex, short-term, and long-term/unhappy) had similar levels of physical intimacy but differed in terms of overall satisfaction, perhaps owing to differing emotional closeness, couple HIV serostatus, or other unmeasured factors (e.g., trust, communication). Future work should assess additional aspects of relationships (including non-sexual dynamics) and explore their relative importance. This topic is particularly ripe for qualitative exploration, and studies should explore women's narratives about feeling sexually happy and having great sex to help increase possibilities for women living with HIV to enjoy their sexuality. While this research has limitations, it focuses on a much-needed area of sexual health for women living with HIV. Additional critical studies on sexual rights and social justice in the context of HIV are necessary. --- Implications Sexual satisfaction and pleasure were greatest in long-term/happy relationships, underscoring the centrality of love, intimacy, and power to positive sexual outcomes. However, it is important to acknowledge that all consensual relationship types are valid, and to avoid discourses that position women's pursuit of pleasure as proper only in the context of committed, long-term relationships (Fahs, 2014; Holland, Ramazanoglu, Scott, Sharpe, & Thomson, 1990). Women deserve to have the type of relationship they want (inclusive of no sex and both serious and casual relations), and they should be free to pursue pleasurable and satisfying sexual experiences regardless. Thus, we advocate for interventions that 1) address unequal sexual power within all relationships and between different socio-demographic groups, 2) promote sexuality and HIV education (including the right to autonomy, mutual pleasure, and the science of HIV transmission), and 3) address the social impediments to women's sexual wellbeing, especially stigma, violence, and trauma of various kinds. By doing so, all women living with HIV may be able to more easily negotiate and fight for sexual satisfaction and pleasure in their lives. --- Conclusions This research provides an alternative, pleasure-focused narrative that is largely absent in quantitative research on sexuality among women living with HIV, one that supports women's right to sexual satisfaction and pleasure while simultaneously uncovering the factors that can deny women these rights. In making perspectives like these more visible, and through disseminating positive accounts of sexuality, we hope women living with HIV will feel less alone and more empowered to lead the sexual lives they really want. We call on providers and researchers to support women in this endeavour by talking about and studying the rewarding aspects of sexuality and relationships, including non-sexual relationships that can bring joy to women's lives. Not only is researching and promoting sexual satisfaction and pleasure important for pleasure's sake, but it may also contribute to positive outcomes across multiple dimensions of well-being and sexual health (Herbenick et al., 2009; Higgins, Mullinax, Trussell, Davidson Sr, & Moore, 2011; Hogarth & Ingham, 2009; Smiler, Ward, Caruthers, & Merriwether, 2005).
In the context of HIV, a focus on protecting others has overridden concern about women's own sexual wellbeing. Drawing on feminist theories, we measured sexual satisfaction and pleasure across five relationship types among women living with HIV in Canada. Of the 1,230 women surveyed, 38.1% were completely or very satisfied with their sexual life, while 31.0% and 30.9% were reasonably or not very/not at all satisfied, respectively. Among those reporting recent sexual experiences (n = 675), 41.3% always felt pleasure, with the rest reporting usually/sometimes (38.7%) or seldom/not at all (20.0%). Sex did not equate with satisfaction or pleasure, as some women were completely satisfied without sex while others were having sex without reporting pleasure. After adjusting for confounding factors, such as education, violence, depression, sex work, antiretroviral therapy, and provider discussions about transmission risk, women in long-term/happy relationships (characterized by higher levels of love, greater physical and emotional intimacy, more equitable relationship power, and mainly HIV-negative partners) had increased odds of sexual satisfaction and pleasure relative to women in all other relational contexts. Those in relationships without sex also reported higher satisfaction ratings than women in some sexual relationships. These findings put the focus on women's rights, which are critical to overall well-being.
Introduction Major Depressive Disorder (MDD) is experienced by approximately 16% of Americans in the course of their lives (Kessler, Chiu, Demler, & Walters, 2005) and is expected to be the leading cause of disability among all diseases by the year 2030 (World Health Organization, 2008). Although recent research has reported that rates of MDD are lower for African Americans than for the general population (Breslau et al., 2006;Williams et al., 2007), depression is significant for African Americans for several reasons. When African Americans experience MDD, the disorder is often more severe and poses a greater burden than observed with other ethnic groups (Williams et al., 2007). African Americans with depression are also less likely to utilize treatment services (Garland et al., 2005;Neighbors et al., 2007). Specifically, a recent study found that 40% of African Americans with MDD received treatment compared to 54% of non-Latino Whites with MDD, suggesting a significant health disparity (González et al., 2010). Maternal depression is also a significant issue for African American families, as demonstrated by a recent study, which found that the lifetime prevalence of MDD for African American mothers was 14.5% (Boyd, Joe, Michalopoulos, Davis, & Jackson, 2011). Despite the substantial amount of research on maternal depression, African American families with maternal depression are understudied. This is a critical area of research because African American women and their children are disproportionately confronted with environmental and life stressors that may increase their vulnerability to depression (Goodman et al., 2011;Riley et al., 2009). The children of mothers with depression are at risk for a range of negative developmental and psychological outcomes. For example, they are more likely to be depressed or anxious themselves, and more likely to have problems with disruptive and oppositional behavior (Goodman et al., 2011;Luoma et al., 2001). Longitudinal research has shown that the negative effects of maternal depression begin in childhood and continue into adolescence and adulthood (Campbell, Morgan-Lopez, Cox, McLoyd, & National Institute of Child, Health and Human Development Early Child Care Research Network, 2009;Lewinsohn, Olino, & Klein, 2005;Weissman et al., 2006). A 2011 meta-analysis of 193 studies found significant small-magnitude effects of mothers' depression on children's outcomes, including both internalizing and externalizing behavior (Goodman et al., 2011). Although only a small number of the studies assessed ethnic minorities, the relationship between maternal depression and negative child outcomes was shown to be even stronger among these populations. --- Mechanisms for transmission of depression In order to prevent or reduce depression in African American children, it is important to consider the processes by which depression develops. Hammack's (2003) integrated theoretical model for the development of depression in African American youth outlines a potential pathway starting with social and environmental stress, leading to parent psychopathology and subsequent impaired parenting, which then results in youth depression. Several studies have found evidence that family environment and parent-child interactions impact the transmission of depression (Carter, Garrity-Rokouys, Chazen-Cohen, Little & Briggs-Gowan, 2001;Jones, Forehand, & Neary, 2001). 
Specifically, mothers' depressive thoughts and behaviors may prevent them from engaging in more positive parenting behaviors that would better meet children's emotional and developmental needs (Goodman, 2007;Goodman & Gotlib, 1999). Depression has also been shown to interfere with effective parenting by making mothers less responsive to their children or less supportive, and by increasing the use of negative or harsh parenting behaviors (Mitchell et al., 2010). In a metaanalysis of 46 observational studies of the relationship between depression and parenting, Lovejoy, Graczyk, O'Hare, and Neuman (2000) found that mothers with depression showed significantly higher levels of negative parenting behaviors, were significantly more disengaged, and demonstrated significantly less positive parenting behavior. On the other hand, there is recent research suggesting that the use of positive parenting practices (e.g., praise, encouragement of appropriate behavior) buffers children from the impact of their mother's depression. This is an understudied area; however, it has been found that maternal symptoms of depression impact a young child less when the mother is more responsive and affectionate (Leckman-Westin, Cohen, & Stueve, 2009). Other studies with African American and Caucasian adolescents have found that positive or supportive parenting is associated with lower rates of depression and anxiety currently and six and twelve months later (Compas et al., 2010;Jones, Forehand, Brody, & Armistead, 2002;Zimmerman, Ramirez-Valles, Zapert, & Maton, 2000). Although there is some good evidence of the beneficial impact of positive parenting practices, further examination of their potential protective role in families with maternal depression is needed. --- Protective Factors Children's social skills are another source of resilience for children at risk for negative outcomes (Luthar, Cicchetti, & Becker, 2000). There is evidence that children's social competence is linked to positive psychosocial and educational outcomes (Ladd, 1990;McClelland, Morrison, & Holmes, 2000;Welsh, Parke, Widaman, & O'Neil, 2001). At the same time, studies of pre-adolescent and adolescent depression have determined that poorer social skills and deficits in social problem-solving are significantly related to youth depression symptoms (Becker-Weidman, Jacobs, Reinecke, Silva, March, 2010;Frye & Goodman, 2000;Ross, Shochet, & Bellair, 2010). Unfortunately, social skill development may be impeded for children whose mothers have depression. These children are less likely to be exposed to enriching social situations with peers and positive adults, and more likely to observe and learn their mother's negative cognitive style as it relates to social interactions (Hipwell, Murray, Ducournau, 2005;Silk, Shaw, Skuban, Oland, & Kovacs, 2006;Taylor & Ingram, 1999;Wu, Selig, Roberts, & Steele, 2010). On the other hand, coping efficacy, emotion regulation skills, and social skills have been shown to foster resiliency among children exposed to maternal depression (Beardslee & Podorefsky, 1988;Riley et al., 2008;Silk et al., 2007). As such, we hypothesize that children's social skills buffer them against the lower levels of positive parenting behavior often associated with maternal depression. Kinship support is another factor proven to buffer children from negative psychosocial outcomes. 
Research with urban African Americans has shown that kinship support moderates the effect of negative family interactions on children's and adolescents' internalizing and externalizing behavior (Li, Nussbaum, & Richards, 2007;Taylor, 2010). Higher levels of kinship support have been found to be associated with greater maternal warmth, emotional support, and better maintenance of routines within the family (Taylor, 2011). In this same study, Taylor found that the beneficial impact of kinship support on mothers' supportive parenting behavior was smaller for mothers with more depression symptoms. In a different sample of mothers with depression, mothers' lower satisfaction with their social support networks was associated with more internalizing disorders in their children one year later (McCarty, McMahon, Conduct Problems Research Group, 2007). While there is good preliminary evidence for the protective function of kinship support in families with maternal depression, its role, along with other protective factors, in preventing children's depression merits closer examination. In the present study, we examine the effects of positive parenting behaviors on child depression and the potential protective effects of social skills and kinship support among low-income African American children whose mothers are depressed. Specifically, we will test whether kinship support and child social skills moderate the impact of positive parenting skills on children's symptoms of depression. We hypothesize that more positive and involved parenting practices will be associated with less child depression. We also hypothesize that both kinship support and child social skills will serve as protective factors and moderate the impact of positive parenting skills on child depression. --- Method Participants The participants were 77 mother-child dyads. The children ranged in age from 8 to 14 years with a mean age of 11.1 (SD = 2.0) years. Their school grade ranged from second to tenth with a mean of grade 5.6 (SD = 2.1). Slightly more than half (58%; n = 45) of the children were female. All mothers identified their children as African American; however, 7.8% (n = 6) also identified with other races (i.e., White, Native Hawaiian/Pacific Islander, Asian, American Indian/Alaskan Native). Five children (6.5%) were also of Latino ethnicity. The mothers ranged in age from 23 to 63 years with a mean age of 38.6 (SD = 7.4) years. All mothers identified their race as African American, with 6.5% (n = 5) also identifying with other races (i.e., White, Native Hawaiian/Pacific Islander, Asian, American Indian/Alaskan Native) and 1.3% (n = 1) also identifying with Latino ethnicity. The majority of the mothers were never married (63.6%, n = 49), while 15.6% (n = 12) were married or living with a partner and 20.8% (n = 16) were separated, divorced, or widowed. The majority of the mothers received public assistance (59.2%, n = 45). Total household income for the sample was as follows: 33.8% (n = 26) between $0-$10,000; 28.6% (n = 22) between $10,001-$20,000; 9.1% (n = 7) between $20,001-$30,000; 11.7% (n = 9) between $30,001-$40,000; 6.5% (n = 5) between $40,001-$50,000; and 6.5% (n = 5) $50,000 or greater. Data were not available for three households. In terms of education level, approximately 72% (n = 55) of the mothers had at least a high school degree or its equivalent.
Specifically, 22% (n = 17) were high school graduates or obtained a GED, 32% (n = 24) attended some college or vocational school, 9% (n = 7) graduated from vocational school, and 9% (n = 7) graduated from college or higher. In the majority (90.9%; n = 70) of the mother-child dyads, the mother was the child's biological parent. --- Procedures Participants were drawn from two related studies focusing on maternal depression within African American families. Mothers were eligible for the study if they: 1) were African American; 2) had a primary current or past-year psychiatric diagnosis of MDD, Dysthymic Disorder, or Depressive Disorder Not Otherwise Specified; and 3) were the primary caregiver of a school-age child who resided with them on at least a part-time basis. Mothers could not have: 1) a history of Bipolar Disorder or any psychotic disorder; 2) a current or past-year diagnosis of substance dependence; or 3) mental retardation (determined by mothers stating that they had been diagnosed with mental retardation within their lifetime). Children reported by their mothers as having a diagnosis of mental retardation were also excluded from the study. Study participation involved three steps. First, mothers completed a telephone screening to assess their preliminary eligibility for the study. If appropriate, the diagnostic eligibility of the mothers was then determined by a clinical interview (Structured Clinical Interview for DSM-IV-TR Axis I Disorders; First, Spitzer, Gibbon, & Williams, 2001) conducted by the primary author (a licensed clinical psychologist). Finally, eligible mothers and one of their children completed a battery of questionnaires read aloud by research staff. Mother and child were each paid $20 for the assessment interview. The consent process was conducted in person by the principal investigator or another member of the research staff such that the study team obtained written consent for participation from the mother and verbal assent from the child. The studies were approved by the Institutional Review Boards of the Children's Hospital of Philadelphia, the University of Pennsylvania, and the Philadelphia Department of Public Health. --- Recruitment The principal investigator and research staff developed relationships with staff at clinic and community sites throughout a large metropolitan area in order to recruit study participants. These recruitment sites included outpatient mental health agencies, other research studies, homeless shelters, schools, and health fairs. Recruitment flyers were also given to community site staff for dissemination and put on public display at participating sites. Additionally, recruitment advertisements were placed in several local newspapers. Interested participants contacted research staff via telephone or by completing consent-to-contact forms at recruitment sites. The largest recruitment sources were newspaper advertisements and other research studies. To facilitate recruitment, childcare was provided and participants received bus tokens or reimbursement for parking costs. --- Measures To assess positive parenting skills, mothers completed the Parenting Practices Scale (Tolan, Gorman-Smith, & Henry, 2000), which has four scales: Positive Parenting, Extent of Involvement in the child's life, Discipline Effectiveness, and Avoidance of Discipline. The Positive Parenting scale assesses the use of rewards and encouragement of appropriate behavior.
The Extent of Involvement scale assesses parents' involvement in the child's daily activities and routines. For the current study, the Positive Parenting and Extent of Involvement scales were summed for a Positive Parenting Skills total score. The Discipline scales were not utilized in the current study as they are more relevant for delinquent youth and do not assess positive parenting skills. Confirmatory factor analyses demonstrated a latent construct representing both positive parenting and extent of involvement (Gorman-Smith, Tolan, Henry, & Florsheim, 2000;Gorman-Smith, Tolan, Zelli, & Huesmann, 1996), supporting the validity of the Positive Parenting Skills total scale. The scales of the Parenting Practices Scale have previously demonstrated adequate internal consistency (.78-.84) with caregivers of urban youth (Gorman-Smith et al., 1996;Tolan et al., 2000). In the current sample, the Cronbach alpha coefficient was .77 for the overall Positive Parenting Skills total score, .85 for the Positive Parenting scale, and .63 for the Extent of Involvement scale. The Social Skills Rating System (SSRS; Gresham & Elliott, 1990) was used to assess children's social skills (i.e., cooperation, assertion, responsibility, empathy, and self-control). The SSRS has child-report and parent-report versions for different developmental levels. For purposes of the present study, total standard scores were combined across elementary and secondary levels. The standard scores are based on normative data for gender and grade and provide an equivalent metric across the multiple versions of the SSRS. The child-report version has good internal consistency (α = .83) and adequate four-week test-retest reliability (r = .68). The child-report and parent-report versions for children from kindergarten to 12th grade demonstrate adequate reliability and validity (Gresham & Elliott, 1990). The Cronbach alpha coefficients for the child-report version with the current sample are .87 for elementary-age children and .91 for secondary school-age children. The Cronbach alpha coefficients for the parent-report version are .70 for elementary-age children and .72 for secondary school-age children (Gresham & Elliott, 1990). The Kinship Support Scale (Taylor, Casten, & Flickinger, 1993) was completed by mothers and children in order to assess each individual's perception of the amount of social and emotional support received from extended family members. Construct validity of this measure is demonstrated by positive correlations with measures of family routines and informal kinship support (Jones, 2007;Taylor, Seaton, & Dominquez, 2008). The Kinship Support Scale has adequate internal consistency (.72-.86) for African American youth (Hall, Cassidy, & Stevenson, 2008;Jones, 2007;Kenny, Blustein, Chaves, Grossman, & Gallagher, 2003;Taylor et al., 1993). Strong internal consistency (α = .88) has been found in a sample of low-income African American mothers (Taylor & Roberts, 1995). The Cronbach alpha coefficients for the current sample are .74 for the children and .89 for the mothers. The Children's Depression Inventory (CDI; Kovacs, 1992) is a self-report scale of depressive symptoms suitable for youth ranging in age from 7 to 17 years. It has demonstrated good concurrent validity with other measures of depression, cognitive distortions, and self-esteem (Myers & Winters, 2002).
The CDI has adequate internal consistency (.82 to .87) for African American youth (Cardemil, Reivich, Beevers, Seligman, & James, 2007;DuRant, Cadenhead, Pendergrast, & Slavens, 1994). The Cronbach alpha coefficient for this measure in the current sample is .83. The Beck Depression Inventory-II (BDI-II; Beck, Steer, & Brown, 1996) was used to measure the severity of mothers' depressive symptoms in areas such as mood, pessimism, sense of failure, and somatic symptoms. There is strong evidence of the reliability, validity, and utility of the instrument (Dozois, Dobson, & Ahnberg, 1998;Steer, Ball, Ranieri, & Beck, 1999). It has excellent internal consistency (α = .90) with African American samples (Gary & Yarandi, 2004;Grothe et al., 2005). The Cronbach alpha coefficient for this measure in the current sample is .89. --- Data Analytic Plan The goals of the analyses were to assess the effect of positive parenting skills (as measured by maternal report on the Parenting Practices Scale) on child depression (as measured by the CDI) and to test whether child social skills (as measured by maternal and child reports on the SSRS) and kinship support (as measured by maternal and child reports on the Kinship Support Scale) moderate that effect. We analyzed maternal depression severity (as measured by the BDI-II) as a covariate, as it is potentially an important variable in the transmission of depression from a mother to her child. Preliminary analyses included descriptive statistics, including means and standard deviations, as well as bivariate associations measured with Pearson correlations for all study variables. Two multiple linear regression analyses were performed as the primary analyses. The first regression used maternal reports of child social skills and kinship support as the moderating variables. The second regression used child reports of their social skills and kinship support as the moderating variables. The positive parenting skills, kinship support, and maternal depression severity variables were standardized by calculating z-scores to be used in the regression analyses. The independent variables were entered in three blocks. In the first step, maternal depression severity was entered in a block as a covariate. In the second step, positive parenting skills, child social skills, and kinship support were entered in a block to test for main effects. In the third step, the interaction between positive parenting skills and child social skills and the interaction between positive parenting skills and kinship support were entered in a block to test for moderation effects. Additionally, we conducted post-hoc analyses consisting of two regression analyses separately examining the effects of the two positive parenting skills scales (Positive Parenting and Extent of Involvement) on child depression. --- Results --- Descriptive Analyses and Correlations Means, standard deviations, and Pearson correlations of all study variables are presented in Table 1. The mean of child-reported depression symptoms was within the normative range. Similarly, means of maternal and child reports of child social skills were within the average range. The mean of maternal depression symptoms was in the clinical range, indicating moderate severity of depression in this sample. Maternal depression symptoms were negatively correlated with maternal report of kinship support (r = -.28, p = .02), but positively correlated with child report of kinship support (r = .30, p = .01).
Maternal report of positive parenting skills was positively correlated with maternal report of child social skills (r = .42, p < .001) and child report of kinship support (r = .25, p = .03), but was negatively correlated with child depression symptoms (r = -.26, p = .02). Child report of kinship support was also positively correlated with child-reported social skills (r = .39, p = .001) but negatively correlated with child depression symptoms (r = -.23, p = .04). --- Regression Analysis using Parent-Report Measures Table 2 displays the results of the final regression model using parents' reports of the moderators. In the first step, the covariate, maternal depression severity, was not associated with child depression symptoms. In the second step, parent report of child social skills was negatively and significantly associated with child depression symptoms. In the third step, the interaction of positive parenting skills and parent-reported child social skills was significant. To explicate this interaction, separate regression analyses were conducted testing the association between parent report of child social skills and child depression symptoms, using median splits to classify positive parenting skills as low or high. Results showed that higher parent-reported child social skills were associated with lower depression symptoms in children of parents with lower positive parenting skills (B = -0.33, t = -3.23, p = .003); however, this association was not significant when positive parenting skills were high. The interaction was plotted in graphical form (Figure 1), displaying positive parenting skills (low and high) and child social skills (low and high). There was no significant interaction between positive parenting skills and parent-rated kinship support. --- Regression Analysis using Child-Report Measures Table 3 displays the results for the final regression model using children's reports of the moderators. In the first step, the covariate, maternal depression severity, was not associated with children's depression symptoms. In the second step, none of the main effect variables were significantly associated with children's depression symptoms. In the third step, neither the interaction between positive parenting skills and child-rated social skills nor the interaction between positive parenting skills and child-rated kinship support was associated with children's depression symptoms. --- Post-Hoc Analyses of Parenting Scales To further explore the moderation of the relationship between parenting and child depression by child social skills, we conducted separate post-hoc regression analyses for the Positive Parenting and Extent of Involvement scales. In each case, the regression analysis included a three-step model with maternal depression severity added in the first step, positive parenting skills and parent report of child social skills added in the second step, and the interaction between positive parenting skills and parent report of child social skills added in the third step. The interactions of both positive parenting and parent report of child social skills (B = 0.10, t = 2.07, p = .042) and extent of involvement and parent report of child social skills (B = 0.17, t = 2.72, p = .008) were significant. To explicate these interactions, separate regression analyses were conducted to test the association between parent report of child social skills and children's depression symptoms using median splits to classify positive parenting as low or high; similar analyses were conducted using median splits to classify extent of involvement as low or high. For children exposed to low levels of positive parenting, parent report of child social skills was negatively associated with children's depression symptoms (B = -0.33, t = -3.23, p = .003). Similarly, for children exposed to low levels of extent of involvement, parent report of child social skills was negatively associated with children's depression symptoms (B = -0.31, t = -3.20, p = .003).
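To make the analytic strategy concrete, the sketch below illustrates, on synthetic data, how a three-block hierarchical regression with z-scored predictors, interaction terms, and a median-split follow-up of the kind described in the Data Analytic Plan could be implemented. This is not the authors' code: the variable names (bdi, pps, ssrs, kss, cdi) are hypothetical, and statsmodels is assumed as the estimation library.

```python
# Illustrative sketch (not the authors' code): three-block moderated regression
# with z-scored predictors and a median-split follow-up of a significant interaction.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 77  # sample size reported in the study
df = pd.DataFrame({
    "bdi":  rng.normal(25, 9, n),    # maternal depression severity (covariate)
    "pps":  rng.normal(50, 10, n),   # positive parenting skills (maternal report)
    "ssrs": rng.normal(100, 15, n),  # child social skills (standard score)
    "kss":  rng.normal(30, 6, n),    # kinship support
})
df["cdi"] = 10 - 0.05 * df["ssrs"] + rng.normal(0, 4, n)  # child depression (synthetic)

# Standardize predictors (z-scores), as done before entering the regressions.
for col in ["bdi", "pps", "ssrs", "kss"]:
    df[col + "_z"] = (df[col] - df[col].mean()) / df[col].std(ddof=0)

# Block 1: covariate only; Block 2: main effects; Block 3: the two interaction terms.
step1 = smf.ols("cdi ~ bdi_z", data=df).fit()
step2 = smf.ols("cdi ~ bdi_z + pps_z + ssrs_z + kss_z", data=df).fit()
step3 = smf.ols("cdi ~ bdi_z + pps_z + ssrs_z + kss_z"
                " + pps_z:ssrs_z + pps_z:kss_z", data=df).fit()
print(step1.rsquared, step2.rsquared, step3.rsquared)  # R-squared by block
print(step3.summary().tables[1])

# Follow-up for a significant interaction: estimate the social-skills slope
# separately in low vs. high positive-parenting groups defined by a median split.
for high, grp in df.groupby(df["pps"] > df["pps"].median()):
    slope = smf.ols("cdi ~ ssrs_z", data=grp).fit()
    print("high parenting" if high else "low parenting", slope.params["ssrs_z"])
```

The third-step model simply adds the two product terms to the main-effects model, and the median-split regressions probe a significant interaction by estimating the social-skills slope within each parenting group.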
--- Discussion The present study examined the interrelations of positive parenting, child social skills, and kinship support in determining child depression in a sample of African American children whose mothers have depressive disorders. This is a unique and understudied population that may be vulnerable to a host of mental health difficulties (Boyd, Diamond, & Ten Have, 2011). The findings point to factors that may protect against the development of depression in this population. Positive parenting practices and child social skills appear to be associated with lower depression symptoms in children, while the impact of kinship support is less clear. As hypothesized, our results demonstrated a significant interaction effect of parenting and child social skills on child depression. Social skills were negatively associated with child depression symptoms only for those children exposed to poorer parenting skills, suggesting that social skills are a protective factor in these circumstances. There is substantial evidence demonstrating the deleterious effects of negative parenting on child and adolescent behavior (e.g., Goodman, 2007;Lovejoy et al., 2000); however, evidence of social skills weakening this impact is not as well documented. In a study with predominantly African American 2nd to 6th graders, negative parenting behavior was no longer associated with higher levels of depression symptoms once children's perceived competence was added into the model (Dallaire et al., 2008). Further research on this topic is needed, as social skills have been identified as a potential protective factor for children experiencing overall adversity (Luthar, Cicchetti, & Becker, 2000) and maternal depression in particular (Beardslee & Podorefsky, 1988). Surprisingly, maternal depression severity was not associated with child depression symptoms. This may be the case because there was a limited range of depression for both the mothers and the children. All the children in the sample had been exposed to significant levels of maternal depression symptoms, as demonstrated by a moderate clinical level of depressive symptoms on the BDI-II. However, the children's depression scores were in the normative range. Another explanation could involve depression in the context of other adversity. For example, Silk et al. (2007) found that low maternal depression was associated with positive child functioning only for those children who had low to moderate neighborhood risks. This may have occurred in our study as well, given that the majority of the women in the sample were single, low-income mothers. Economic stressors have been found to compound the impact of maternal depression and parenting on child outcomes (Barnett, 2008;Boyd, Diamond, & Bourjolly;Murry, Bynum, Brody, Willert, & Stephens, 2001); however, we cannot determine if this was the case in our study since we did not explicitly assess the conditions of economic stress or neighborhood disorganization.
Nonetheless, it is important to recognize that a number of risk and protective factors interact in very complex ways to determine whether children will develop depression (Li et al., 2007;McCarty et al., 2003). The finding that positive parenting skills were negatively correlated with child depression suggests that positive parenting skills may serve as a protective factor against child depression. Parenting has been identified as a major mechanism in the transmission of depression from a mother to her child (e.g., Goodman, 2007;Goodman & Gotlib, 1999). Much of the maternal depression research has focused on the impact of negative parenting behaviors. Importantly, our findings suggest that positive parenting can be beneficial for families affected by maternal depression. The results of the current study are in line with other research demonstrating positive parenting to be associated with less depression in youth (Compas et al., 2010;Jones et al., 2002) and to protect against psychological problems among children exposed to interpersonal violence, children in Head Start, and children whose mothers are HIV positive or have AIDS (Graham-Bermann, Gruber, Howell, & Girz, 2009;Koblinsky, Kuvalanka, & Randolph, 2006;Murphy, Marelich, Herbeck, & Payne, 2009;Riley et al., 2009). Contrary to the study hypotheses, kinship support was not significantly related to child depression, either as a main effect or through an interaction with parenting skills. Further examination of this finding using the correlation matrix reveals that maternal depression severity was negatively correlated with maternal report of kinship support, but positively correlated with child report of kinship support. One interpretation of this finding is that the children of mothers with depression in this study were receiving good support from their extended family, even if their mothers did not perceive this to be the case. This is an interesting finding since it contradicts the theory that the increased social isolation resulting from maternal depression can limit the social support available to children (Coyne et al., 1987;Riley et al., 2008). Child report of kinship support was negatively correlated with child depression symptoms, which mirrors the finding for mothers. These results were expected, as several studies have shown that weaker social support is associated with greater depression and psychological distress within African American populations (Ceballo & McLoyd, 2002;McKnight-Eily et al., 2009;Thompson et al., 2000). For instance, kinship support has been found to be negatively correlated with adolescent depression symptoms and behavior problems in single-parent households (Hall et al., 2008;Taylor et al., 1993). Also, in a study with both African American and Caucasian mothers with depression, lower satisfaction with support networks was associated with higher rates of internalizing disorders in their children (McCarty et al., 2003). Given the empirical evidence for the protective role of kinship support in multiple domains, the lack of significant findings for kinship support as a moderator or protective factor against depression in this sample was unexpected. It may be that child social skills are more important than kinship support in protecting children against depression.
A possible explanation is that having strong social skills can enable a child to enlist the support they need from adult friends and family, given that children in our sample who rated themselves as having good social skills also rated themselves as having good kinship support. This is consistent with Beardslee and Podorefsky's (1988) description of resilient children of parents with depression as possessing characteristics that promote positive interpersonal relationships. Another possible explanation for the lack of findings related to kinship support is that child social skills are more proximal determinants of child depression, while kinship support may be important but more distal. There are several limitations to the present study. First, although we detected statistically significant interaction effects in our regression analyses, the sample size is relatively small. The sample size may limit the power to detect significant associations in the multiple regression analyses, thereby increasing the likelihood of Type II error. Second, without a non-clinical control group, we cannot compare African American children with and without exposure to maternal depression to determine how the interplay of kinship support and social skills may differ. Third, the study does not include child report of the mother's parenting behaviors. There may be differences in how mothers and children evaluate and perceive the mothers' parenting, and depressed mothers may not be the most accurate reporters of their own behavior. Fourth, the cross-sectional design of the study limits our ability to establish the direction of effect among the variables. Finally, the sample was predominantly low-income and thus the results may not generalize to middle- and high-income African American families. Overall, findings from the current study highlight valuable areas for future research and intervention. Investigation of these protective factors in a longitudinal study with a larger and more economically diverse sample of African American families is needed to confirm these initial findings. Such a study should assess additional processes in the development of depression over time, such as life stressors, exposure to racism and community violence, and biological markers. Furthermore, qualitative research on protective factors for African American families with maternal depression could supplement the quantitative data and could help with hypothesis generation to better understand these processes. Our findings also suggest that improving parenting and child social skills are important elements to include in preventive intervention programming. Intervention research for families with maternal depression is lacking in general (Boyd & Gillham, 2009), and is especially scarce for African American children whose mothers have depression. For example, Compas et al. (2009) tested a cognitive-behavioral family intervention focusing on parenting skills, psychoeducation, and stress coping skills with positive findings; however, only a small number of African Americans were included in the trial. There is a clear need to include more African American families in these preventive interventions, and also to examine cultural adaptations of already empirically-supported interventions to better address the needs of this population. --- Interaction of parent report of child social skills and positive parenting skills on child depression
Maternal depression has a deleterious impact on child psychological outcomes, including depression symptoms. However, there is limited research on the protective factors for these children and even less for African Americans. The purpose of the study is to examine the effects of positive parenting skills on child depression and the potential protective effects of social skills and kinship support among low-income African American children whose mothers are depressed. African American mothers (n = 77) with a past-year diagnosis of a depressive disorder and one of their children (ages 8-14) completed self-report measures of positive parenting skills, social skills, kinship support, and depression in a cross-sectional design. Regression analyses demonstrated a significant interaction effect of positive parenting skills and child social skills on child depression symptoms. Specifically, parent report of child social skills was negatively associated with child depression symptoms for children exposed to poorer parenting skills; however, this association was not significant for children exposed to more positive and involved parenting. Kinship support did not show a moderating effect, although greater maternal depression severity was correlated with more child-reported kinship support. The study findings have implications for developing interventions for families with maternal depression. In particular, parenting and child social skills are potential areas for intervention to prevent depression among African American youth.
Introduction Relocation, in which farmers leave their original land, is an effective means to reduce poverty, lessen vulnerability, and promote regional development. It profoundly impacts the natural, physical, financial, social, human, and cultural fields, and is a necessary way to achieve sustainable development [1]. The Sustainable Development Goals (SDGs) propose balancing the sustainability of the economy, environment, and society while pursuing sustainable development [2]. Poverty eradication is considered the primary goal of sustainable development. As a global problem, although poverty can be measured by income, expenditure, and other dimensions, from the perspective of sustainable development, sustainable livelihood is considered to be the most effective and reasonable way to measure poverty because it can track poverty in multiple dimensions [3]. A sustainable livelihood is the ultimate goal of poverty reduction, which can provide people with comprehensive development programs based on different backgrounds and economic and political conditions [4]. When people face external pressures and shocks, if they can recover, maintain, or even increase their livelihood capital, their livelihood will be sustainable [5,6]. To study livelihood issues, the United Kingdom Department for International Development (DFID) has formulated a sustainable livelihood analysis framework, which is the most widely used and accepted tool for analyzing sustainable livelihood [7,8]. Livelihood capital is the core and foundation of this framework, including natural, physical, financial, social, and human capital [9]. Promoting livelihood capital will help low-income families escape from poverty, while people with insufficient livelihood capital struggle to get out of the poverty trap. Therefore, improving livelihood capital is vital for all countries, especially developing countries, to eliminate poverty and achieve sustainable development [10]. For farmers, realizing the sustainable development of livelihood capital is the fundamental purpose and significance of the SDGs. On the one hand, the more livelihood capital farmers have, the more able they are to resist risks and the more choices they have. On the other hand, a reasonable structure and allocation of livelihood capital can broaden farmers' livelihood channels and enable farmers to switch between different livelihood strategies [11]. Thus, farmers' sustainable livelihood is not only reflected in the increase in the absolute value of livelihood capital but also requires the coupled and coordinated development of the various capitals. Governments worldwide have made plans to improve the sustainability of people's livelihoods. For developing countries, relocation is considered the most effective approach. China's precision poverty alleviation strategy has ensured the elimination of absolute poverty through five significant measures: supporting production and employment, poverty alleviation relocation, ecological protection, developing education, and providing minimum living security [12]. Poverty alleviation relocation, as the "first project" of precision poverty alleviation, aims to realize the sustainable development of relocated farmers, helping farmers move out of areas with a harsh environment and attain lasting development. Since poverty alleviation relocation began, about 35,000 resettlement communities have been built nationwide, and more than 9.6 million poor people have been resettled.
The relocated farmers can escape the poverty trap through improved infrastructure construction, industrial development, and strengthened education and social security in the resettlement area [13,14]. As the country with the largest poverty reduction task, China has contributed more than 70% of the global reduction in the number of people living in poverty and has made remarkable achievements [12,15]. However, the factors that restrict people's development still exist, the risk of returning to poverty has not been eliminated, and poverty governance still has a long way to go [16,17]. In particular, the COVID-19 pandemic has negatively impacted the economy, reduced people's livelihood capital, and hindered the realization of the SDGs [18,19]. In addition, poverty alleviation relocation is not only the migration of a population but also a complicated process of significant change in the social system, economy, and politics, and of the disintegration and reconstruction of farmers' livelihood capital [20,21]. If the relevant departments fail to effectively implement follow-up integration and assistance for the relocated farmers, these farmers will be marginalized, poverty and inequality will be aggravated, and sustainable development will be difficult to achieve, which runs counter to the original intention of the policy [22,23]. In particular, farmers in minority areas have formed unique religious beliefs, living customs, and cultural forms after long-term development. After relocation, they must adapt, often passively, to a rapidly changing external environment. Their original social relations and economic models disintegrate, so it is difficult for them to reconstruct their national culture and social relations and to adapt to the new livelihood model. Thus, poverty alleviation and sustainable development for farmers in minority areas are even more arduous [3]. This study improved the traditional analysis framework of sustainable livelihoods, combined it with the characteristics of minority areas, and added cultural capital to the evaluation system of livelihood capital. Based on data from Menglai Township in Yunnan Province from 2015 to 2021, it was found that farmers' livelihood capital and its coupling and coordination level improved after relocation, which meets the requirements of sustainable livelihood development. Finally, a theoretical framework of the internal and external factors affecting farmers' livelihood capital was constructed, and the influencing factors of livelihood capital were identified through empirical analysis. This study can help break the development dilemma of livelihood capital after the relocation of farmers in minority areas and help the relocated farmers achieve the goal of sustainable development. This study makes contributions to both the theoretical framework and policy practice. First, it provides a new tool for evaluating livelihood capital in minority areas. It improves the DFID's sustainable livelihood analysis framework, constructs an evaluation system of farmers' livelihood capital in minority areas, and further emphasizes the importance of national culture, which provides ideas for future research on livelihood capital according to regional characteristics. Second, it obtains new findings on the sustainable development of farmers' livelihood capital after relocation. Poverty alleviation relocation is a remarkable feat in the history of human migration and poverty reduction worldwide.
Evaluating the livelihood capital and its coupling and coordination level of relocated farmers provides a basis for policy implementation and promotes the realization of sustainable development goals. Third, the study expands a new perspective for studying influencing factors of livelihood capital. It constructs the theoretical framework of internal and external factors that affect relocated farmers' livelihood capital, breaks the limitation that the existing research mainly relies on external forces to improve livelihood capital, and realizes the complementarity of endogenous motivation and external assistance. The remainder of this study is organized as follows. Section 2 introduces the materials and methods. Section 3 lists the measurement results of livelihood capital and its coupling and coordination level, and verifies the internal and external factors affecting livelihood capital through regression analysis. Section 4 presents discussions of this study. The final section summarizes the study. --- Materials and Methods Based on the SDGs and sustainable livelihood analysis framework, this study analyzes the livelihood issues of relocated farmers in Menglai Township, Yunnan minority areas, to realize the sustainable development of farmers' livelihood capital. To carry out the research effectively, it is necessary to construct an evaluation system of farmers' livelihood capital in minority areas, which is the basis of any quantitative analysis on livelihood capital, and further measure and compare the stock of livelihood capital and the coupling and coordination level between livelihood capitals before and after relocation. Hereafter, based on theoretical analysis, this study constructs a theoretical framework of internal and external factors affecting the livelihood capital of relocated farmers in Yunnan minority areas and explores the influencing factors of livelihood capital to realize accurate policies and the sustainable development of livelihood capital. The framework and design of the study are shown in Figure 1. --- Livelihood Capital Evaluation --- Construction of Livelihood Capital Evaluation System Farmers' livelihood capital includes natural, physical, financial, social, human, and cultural capital. Natural capital is the natural resources, environmental services, and biodiversity that people enjoy, including all kinds of land, forests, wildlife, and water resources [4]. For poor farmers, natural capital is the basis of their productive activities and is most closely associated with livelihood vulnerability [24], in which land is the most significant capital [11,25]. The primary function of physical capital is to meet the basic needs of farmers and improve their productivity, including safe housing, vehicles, roads, transportation, and production equipment and tools. Financial capital usually refers to the funds raised or controlled by people to achieve their livelihood goals, including relief, lending, savings, and income. For farmers, the most crucial financial capital is their income. The richer the sources of income, the more they can accumulate financial capital. Social capital is embodied in the participation of social groups, social contact, social trust, and public health support [26,27]. The level of farmers' social capital is greatly influenced by the quality and scale of the social network, and it will also affect the realization of the functions of the rest of the livelihood capital. 
Through people's interaction, social capital can bring farmers more resources and social support [28]. Human capital usually exists in the form of skills, health, and education [29]. On the one hand, the external manifestation of poverty can be reflected in the lack of human capital; on the other hand, the lack of human capital will further lead to poverty. Cultural capital is the element that best reflects regional characteristics, including norms, values, rules, indigenous customs, traditional knowledge, and activities [30]. Cultural factors often impact farmers' agricultural practices, production and consumption patterns, family decisions, and attitudes toward new agricultural technologies [31][32][33]. Thus, for farmers in minority areas, cultural capital, like the other five capitals, greatly influences farmers' livelihood strategies and results. As shown in Figure 2, this study comprehensively summarizes the relevant literature and combines the characteristics of minority areas to build an evaluation system of farmers' livelihood capital in Yunnan minority areas based on the principles of scientificity and objectivity, comprehensiveness and representativeness, and comparability and operability. --- Measurement of Livelihood Capital Based on the evaluation system of livelihood capital constructed above, the weight of each index was obtained by using the global entropy method, and the comprehensive evaluation value of livelihood capital was calculated, which avoids the interference of people's subjective factors and fully considers the characteristics of the three-dimensional spatio-temporal data composed of farmers, indicators, and time [34,35]. The specific steps are as follows. First, a global evaluation matrix is constructed to evaluate m farmers' livelihood capital in t years with n indicators:

X = \begin{bmatrix} X_{11}^{1} & \cdots & X_{1n}^{1} \\ \vdots & \ddots & \vdots \\ X_{m1}^{t} & \cdots & X_{mn}^{t} \end{bmatrix} \quad (1)

Second, the range method standardizes the data to eliminate differences in units and scale [36]. If the indicator is positive,

X'_{ij} = \frac{X_{ij} - \min X_{ij}}{\max X_{ij} - \min X_{ij}} \times 0.9 + 0.1, \quad (1 \le i \le mt,\ j = 1, 2, 3, \ldots, 17) \quad (2)

If the indicator is negative,

X'_{ij} = \frac{\max X_{ij} - X_{ij}}{\max X_{ij} - \min X_{ij}} \times 0.9 + 0.1, \quad (1 \le i \le mt,\ j = 1, 2, 3, \ldots, 17) \quad (3)
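As a minimal numerical sketch of Equations (1)-(3) (not the authors' code), the snippet below stacks synthetic observations for m farmers over t years into a global evaluation matrix and applies the range standardization; the split into positive and negative indicators is assumed purely for illustration.

```python
# Illustrative sketch: global evaluation matrix and range (min-max) standardization,
# Eq. (1)-(3), on synthetic data. m farmers over t years on n indicators are stacked
# into an (m*t) x n matrix.
import numpy as np

rng = np.random.default_rng(1)
m, t, n = 144, 7, 17                      # farmers, years (2015-2021), indicators
X = rng.random((m * t, n)) * 100          # stand-in for the raw indicator values
is_positive = np.ones(n, dtype=bool)      # True for positive indicators
is_positive[-2:] = False                  # assume the last two indicators are negative

x_min, x_max = X.min(axis=0), X.max(axis=0)
span = x_max - x_min
X_std = np.where(is_positive,
                 (X - x_min) / span,      # Eq. (2): positive indicators
                 (x_max - X) / span)      # Eq. (3): negative indicators
X_std = X_std * 0.9 + 0.1                 # shift away from zero, as in the paper
```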
Third, the weight of each index is calculated with the entropy method:

w_j = \frac{1 - e_j}{\sum_{j=1}^{17} (1 - e_j)}, \quad e_j = -k \sum_{i=1}^{mt} p_{ij} \ln p_{ij}, \quad p_{ij} = \frac{X'_{ij}}{\sum_{i=1}^{mt} X'_{ij}}, \quad k = \frac{1}{\ln(mt)} \quad (4)

Fourth, the comprehensive evaluation value of livelihood capital is calculated:

LC_i = \sum_{j=1}^{n} w_j X'_{ij} \quad (5)

--- Measurement of Coupling Coordination Level More importantly, the sustainable development of livelihood capital is manifested not only in the increase in its absolute value but also in the improvement in the level of coupling and coordination among the various capitals. (1) Coupling degree model. "Coupling" refers to the interaction and mutual influence between several systems. The coupling degree describes the strength of this interaction, and benign coupling is measured by the coordination degree. The higher the level of coupling and coordination, the more harmonious and orderly the development of each subsystem [37]. The coupling degree of multiple systems is calculated as follows:

C_n = \left\{ \frac{u_1 \times u_2 \times \cdots \times u_n}{\left[ (u_1 + u_2 + \cdots + u_n)/n \right]^{n}} \right\}^{1/n} \quad (6)

where u_i (i = 1, 2, \ldots, n) is the comprehensive evaluation function of each subsystem. The number of subsystems in this study is n = 6, so the coupling degree of the six kinds of livelihood capital is:

C = \left\{ \frac{NC \times PC \times FC \times SC \times HC \times CC}{\left[ (NC + PC + FC + SC + HC + CC)/6 \right]^{6}} \right\}^{1/6} \quad (7)

where C is the coupling degree of the six capitals, and NC, PC, FC, SC, HC, and CC represent the evaluation values of the six subsystems, that is, natural, physical, financial, social, human, and cultural capital, respectively. (2) Coupling coordination model. The coupling degree can only reflect the level of interaction between subsystems and cannot capture their degree of coordination. The coupling coordination degree comprehensively considers the two dimensions of "development" and "coordination" between systems, and is calculated as follows:

D = \sqrt{C \times T} \quad (8)

where C is the coupling degree between the capitals, T is the total amount of livelihood capital, and D is the degree of coupling and coordination among the six capitals; its levels and classification are shown in Table 1 [38].
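The following sketch shows how the entropy weights of Equation (4), the comprehensive livelihood capital value of Equation (5), and the coupling and coupling coordination degrees of Equations (6)-(8) could be computed. It is not the authors' code: the data are synthetic and already standardized, and the grouping of the 17 indicators into the six capitals is assumed for illustration only.

```python
# Illustrative sketch: global entropy weighting (Eq. 4-5) and coupling coordination
# degree (Eq. 6-8) on a synthetic standardized matrix.
import numpy as np

rng = np.random.default_rng(2)
X_std = rng.uniform(0.1, 1.0, size=(144 * 7, 17))   # standardized indicators in [0.1, 1.0]

# Entropy weights (Eq. 4): low-entropy (more informative) indicators receive larger weights.
p = X_std / X_std.sum(axis=0)
k = 1.0 / np.log(X_std.shape[0])
e = -k * (p * np.log(p)).sum(axis=0)
w = (1 - e) / (1 - e).sum()

# Comprehensive livelihood capital value per observation (Eq. 5).
LC = X_std @ w

# Sub-capital scores: sum the weighted indicators belonging to each capital (assumed split).
groups = {"NC": [0, 1, 2], "PC": [3, 4, 5], "FC": [6, 7, 8],
          "SC": [9, 10, 11], "HC": [12, 13, 14], "CC": [15, 16]}
caps = np.column_stack([(X_std[:, g] * w[g]).sum(axis=1) for g in groups.values()])

# Coupling degree C (Eq. 7) and coupling coordination degree D (Eq. 8).
n_sub = caps.shape[1]
C = (caps.prod(axis=1) / (caps.mean(axis=1) ** n_sub)) ** (1 / n_sub)
T = caps.sum(axis=1)        # total livelihood capital, as defined in the text
D = np.sqrt(C * T)
print(f"mean C = {C.mean():.3f}, mean D = {D.mean():.3f}")
```

By the arithmetic-geometric mean inequality, C lies between 0 and 1 and reaches 1 only when all six sub-capital values are equal, so D = \sqrt{C \times T} rewards both a high total level of capital and an even distribution across the six capitals.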
The theory of internal and external factors suggests that, in the process of the development and change of a subject, external and internal factors complement each other, are indispensable, and jointly affect the evolution and development of the subject. A comprehensive consideration of the internal and external factors that affect the subject is conducive to determining their respective correlations, interactions, and possible complementary or substitutive relationships, so as to realize an in-depth analysis of the subject [39]. Thus, the characteristics of farmers are the essential factors that affect their livelihood capital after poverty alleviation relocation, and they determine the primary trend and subjective initiative of livelihood capital development. Moreover, the change in environment, as an external factor that affects livelihood capital, is an indispensable condition for realizing an improvement in livelihood capital. If farmers rely only on external forces and ignore the critical role of internal factors, they will strengthen their dependence and reduce their initiative. On the contrary, improving their livelihood capital will be challenging if they focus only on internal factors and lack external help. Therefore, farmers can form a complementary mechanism of internal self-development and practical external assistance by fully considering the internal and external factors affecting livelihood capital. In terms of internal factors affecting farmers' livelihood capital, the family life cycle theory describes the process of a family from emergence, development, and maturity to extinction [40]. The characteristics of the farmers' family population will change with the different family life cycles, affecting the family's livelihood strategy and livelihood capital [41,42]. In terms of external factors affecting farmers' livelihood capital, location theory integrates human activities and space and holds that areas with abundant cultivated land resources, low transportation costs, and convenient transportation are more conducive to the development of farmers, providing a scientific basis for poverty alleviation relocation [43]. Therefore, geographical location is the most basic external feature of farmers, and the advantages and disadvantages of location conditions determine farmers' development foundation and conditions, playing a decisive role in their sustainable development. At the same time, with the gradual improvement in the theory of sustainable development and the increasing demand for tourism, the sustainable development theory of tourism poverty alleviation has risen rapidly. The theory puts forward that, by developing tourism, the natural, economic, social, and cultural fields will be fully developed, thus reducing or eliminating the poverty of local farmers. In addition, relocation can help farmers achieve sustainable livelihoods by creating employment opportunities and increasing income [44,45]. This theory provides an action guide for the sustainable development of the livelihood capital of relocated farmers. The cumulative causation theory holds that, in a developing society, a change in one factor will make other factors change accordingly, further strengthening the original factor and eventually forming a circular, self-reinforcing pattern of development [46]. The causes of poverty often play a leading role in the sustainable livelihood of farmers. As the economy and society develop, these poverty-causing factors tend to reinforce themselves and deepen farmers' poverty. On the contrary, if farmers have some development advantages from the beginning, they will realize sustainable development based on their existing advantages. The causes of poverty include not only external factors such as water shortage, land shortage, and backward traffic conditions, but also internal factors such as lack of self-development motivation, disability, and illness, which are the primary concerns of sustainable livelihood. Based on the above analysis, the theoretical framework of internal and external factors affecting the livelihood capital of relocated farmers in Yunnan minority areas was constructed, as shown in Figure 3.
--- Variables and Data The study area is Menglai Township, Cangyuan Wa Autonomous County, Yunnan Province. The township is dominated by the Wa nationality, and its ethnic structure is complex and diverse. It is a typical representative of minority areas because of its relatively high altitude difference and harsh natural environment. Since the "Thirteenth Five-Year Plan", Menglai Township has implemented the poverty alleviation relocation project, and the relocated farmers have eliminated poverty. There are seven resettlement sites in Menglai Township, namely: Haibie resettlement site in Manlai Village, Mangmajie resettlement site in Menglai Village, Gonggaji resettlement site in Yong'an Village, Yonggongchadi resettlement site in Gongnong Village, Gongbobo resettlement site in Dinglai Village, Gongyalong resettlement site in Banlie Village, and Gongwang resettlement site in Banlie Village, involving 324 households with 1265 people. The data in this study were obtained from the continuous and in-depth field investigation in Menglai Township, Cangyuan County, from 2015 to 2021.
Moreover, we referred to the Statistical Bulletin of National Economic and Social Development, the Yearbook of Lincang, the Yearbook of Cangyuan Wa Autonomous County, and related government documents in Cangyuan County from 2015 to 2021 to provide a sound database for this study. Taking the calculated livelihood capital value as the explained variable and based on the theoretical analysis, the number of domestic and foreign tourists, family population, administrative village (coded as 1 for farmers in Menglai Village, 2 for Yong'an Village, 3 for Yingge Village, 4 for Minliang Village, 5 for Manlai Village, 6 for Gongnong Village, 7 for Gongsa Village, 8 for Dinglai Village, and 9 for Banlie Village), and cause of poverty (divided into capacity loss, increased burden, factor shortage, accidental impact, and lack of self-development motivation, coded 1 to 5, respectively) were selected as the explanatory variables to explore their influence on the livelihood capital of relocated farmers. Based on the data on livelihood capital and its influencing factors for 144 relocated farmers in Menglai Township from 2015 to 2021, the descriptive statistics of each variable are listed in Table 2. Before the empirical analysis, multicollinearity needs to be tested. If the explanatory variables exhibit multicollinearity, the regression will suffer from spurious results and estimation bias. Thus, the variance inflation factor (VIF) is used to test for multicollinearity and improve the accuracy of the regression results. The greater the VIF, the more serious the collinearity problem. The results of the multicollinearity test are shown in Table 3. The maximum VIF is 1.03, the VIF value of each variable is far less than 10, and the average VIF is far less than 5; that is, there is no multicollinearity among the influencing factors selected in the study, which meets the requirements of the data analysis [47]. --- Model Construction and Regression Method To explore the influence of the number of domestic and foreign tourists, family population, administrative village, and cause of poverty on the various capitals and on total livelihood capital, the following regression model is constructed:

capital_{it} = \beta_0 + \beta_1 tourist_{it} + \beta_2 population_{it} + \beta_3 village_{it} + \beta_4 cause_{it} + \varepsilon_{it} \quad (9)

where i is the farmer, t is the year, capital_{it} represents the farmer's natural, physical, financial, social, human, cultural, or total livelihood capital value, tourist_{it} represents the number of domestic and foreign tourists, population_{it} represents the family population, village_{it} represents the administrative village to which the farmer belongs, and cause_{it} reflects the farmer's cause of poverty; \beta_0 is a constant term, and \varepsilon_{it} is a random error term.
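As an illustration of the multicollinearity check and the panel estimation of Equation (9) (not the authors' code), the sketch below computes VIFs with statsmodels and fits a random-effects model with the linearmodels package on synthetic data; all variable names, values, and the choice of packages are assumptions made for the example.

```python
# Illustrative sketch: VIF check and a random-effects panel regression of Eq. (9)
# on synthetic data. statsmodels is used for the VIF and linearmodels (a third-party
# panel-econometrics package) for the random-effects estimator.
import numpy as np
import pandas as pd
from statsmodels.stats.outliers_influence import variance_inflation_factor
from linearmodels.panel import RandomEffects

rng = np.random.default_rng(3)
farmers, years = 144, list(range(2015, 2022))
idx = pd.MultiIndex.from_product([range(farmers), years], names=["farmer", "year"])
df = pd.DataFrame({
    "tourist":    rng.normal(50, 10, len(idx)),   # tourist arrivals (stand-in values)
    "population": rng.integers(1, 9, len(idx)),   # family population
    "village":    rng.integers(1, 10, len(idx)),  # administrative village code (1-9)
    "cause":      rng.integers(1, 6, len(idx)),   # cause-of-poverty code (1-5)
}, index=idx)
df["capital"] = 0.3 + 0.002 * df["tourist"] + rng.normal(0, 0.05, len(idx))

# Multicollinearity check: VIF for each explanatory variable (with a constant included).
X = df[["tourist", "population", "village", "cause"]].assign(const=1.0)
vif = {c: variance_inflation_factor(X.values, i) for i, c in enumerate(X.columns[:-1])}
print(vif)

# Random-effects estimation, the specification selected by the Hausman test in the paper.
re_results = RandomEffects(df["capital"], X).fit()
print(re_results.summary)
```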
Finally, the null hypothesis of the Hausman test is that the random effect model is superior to the fixed effect model; the p value of the Hausman test is 0.9968, which is far greater than 0.05, so the null hypothesis is accepted; that is, the random effect model is superior to the fixed effect model. Therefore, to make the analysis results more realistic and reasonable, a random effect model is used for the regression. --- Results --- Measurement of Livelihood Capital The livelihood capital of relocated farmers from 2015 to 2021 is shown in Figure 4. Before the relocation, farmers' livelihood capital increased only slightly in 2015-2016, and the change was not noticeable. In 2017-2018, with the acceleration of poverty alleviation relocation and the improvement in various support policies, the livelihood capital of farmers increased significantly, reaching a maximum of 0.6451 in 2019 after relocation. Meanwhile, after the relocation was fully completed, the various subsidy policies were weakened. Moreover, affected by the COVID-19 pandemic, farmers' livelihood capital declined slightly in 2020-2021, but it was still greatly improved compared with the level before the relocation. Specifically, the distribution of farmers' livelihood capital from 2015 to 2021 is shown in Figure 5. It can be found that all kinds of livelihood capital improved and developed steadily. In terms of natural capital, farmers' natural capital was 0.0232 and 0.0257 in 2015 and 2016, respectively. After the relocation, natural capital increased, with an average value of 0.0422. In terms of physical capital, farmers' physical capital before the relocation was 0.0313 and 0.0477, respectively, and the average value of physical capital after relocation was 0.1775, which improved the safety and convenience of farmers' production and life. In terms of financial capital, farmers' financial capital in 2015 and 2016 was 0.0205 and 0.0227, respectively. After the relocation, the average financial capital was 0.0322; farmers had more opportunities to increase their income and obtain employment, and their income sources were more stable and diversified. In terms of social capital, farmers' social capital in 2015 and 2016 was 0.0225 and 0.0295, respectively. After the relocation, the average social capital was 0.1688, making it the capital with the most significant increase. In terms of human capital, farmers' human capital before the relocation was 0.0237 and 0.0284, respectively. After relocation, the average human capital was 0.1292, and farmers' knowledge and skills were improved.
In terms of cultural capital, farmers' cultural capital in 2015 and 2016 was 0.0167 and 0.0307, respectively. After the relocation was completed, that is, in 2019-2021, the average cultural capital was 0.0795. By carrying out various cultural activities to enhance local cultural attraction, the cohesion of farmers has been continuously improved, and cultural activities have been further transformed into productive forces, becoming a source of vitality for promoting the sustainable development of farmers' livelihood capital. --- Coupling and Coordination Level of Livelihood Capital Figure 6 describes the coupling and coordination level of the various capitals of relocated farmers in Menglai Township from 2015 to 2021. Before the relocation, farmers' capital was on the verge of imbalance. With the promotion of poverty alleviation relocation and the implementation of comprehensive support policies, the coupling and coordination level of farmers' livelihood capital was significantly improved. After the relocation, from 2019 to 2021, farmers' livelihood capital was upgraded to a moderately coordinated state. Although the coupling and coordination level has been significantly improved, the six capitals have yet to reach an extremely coordinated state due to differences in the initial level and growth rate of each capital. It is necessary to promote the coupled and coordinated development of the various capitals, which is not only conducive to the increase in livelihood capital but can also break down the barriers to transformation among the capitals and promote the sustainable development of livelihood capital. --- Influencing Factors of Livelihood Capital --- Regression Result Based on the random effect model, the effects of the various factors on each type of livelihood capital and on total capital of relocated farmers in minority areas are examined, and the results are shown in Table 5.
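As a concrete illustration of the estimation procedure described above, the following sketch performs the VIF check and fits a random-effects specification corresponding to Eq. (9). This is not the authors' code: the file name, column names, and the choice of Python's statsmodels and linearmodels packages are assumptions made for illustration only, and the paper does not state which software was used for this part of the analysis.

```python
# Hedged sketch: VIF screening and random-effects estimation of Eq. (9).
# Assumes a long-format panel "menglai_panel.csv" (hypothetical file) with
# columns: farmer, year, capital, tourist, population, village, cause.
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor
from linearmodels.panel import RandomEffects

df = pd.read_csv("menglai_panel.csv")
df = df.set_index(["farmer", "year"])   # entity-time index required by linearmodels

exog = sm.add_constant(df[["tourist", "population", "village", "cause"]])

# Multicollinearity check: VIF for each regressor (constant included in the
# design matrix but not reported); values far below 10 indicate no serious collinearity.
vif = pd.Series(
    [variance_inflation_factor(exog.values, i) for i in range(1, exog.shape[1])],
    index=exog.columns[1:],
)
print(vif)

# Random-effects panel regression of livelihood capital on the four factors
re_results = RandomEffects(df["capital"], exog).fit()
print(re_results.summary)
```

The same call can be repeated with each sub-capital (natural, physical, financial, social, human, cultural) as the dependent variable to reproduce the seven columns reported in Table 5.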
(1) Number of domestic and foreign tourists The regression results show that when the number of domestic and foreign tourists increases by one percentage point, farmers' natural, physical, financial, social, human, cultural, and total livelihood capital increase by 0.0273, 0.2191, 0.0191, 0.2311, 0.1388, 0.0207, and 0.7486 percentage points, respectively, at the 1% significance level, which shows that tourism promotes the livelihood capital of farmers. Among these, the growth in tourists has the most obvious influence on social capital, followed in decreasing order by physical, human, natural, cultural, and financial capital. Farmers' social networks can be expanded by vigorously developing tourism, through which they can obtain more social support and a greater sense of belonging and satisfaction. In addition, through skills training and being "driven by capable people", farmers' labor skills are enriched and their human capital is improved. Moreover, with tourism development in minority areas, various cultural tourism products with ethnic characteristics have appeared. Farmers' awareness of environmental protection and of the idea that "lucid waters and lush mountains are invaluable assets" has deepened, gradually promoting cultural and natural capital. For farmers, as the number of domestic and foreign tourists increases, the most intuitive change is the improvement in income and basic living security, that is, the growth of financial and physical capital; the development of tourism has improved farmers' quality of life and living standards. Finally, farmers' livelihood capital can be improved by accumulating the human, physical, and financial resources that are conducive to development. (2) Family population The regression coefficient of family population on natural capital is 0.0016, significant at the 1% level, indicating that family population promotes natural capital. Specifically, for each one-unit increase in family population, the natural capital of farmers increases by 0.16%. Furthermore, the influence of family population on farmers' financial capital is significant at the 1% level: for each one-unit increase in family population, farmers' financial capital increases by 0.17%, so family size has a positive impact on farmers' income growth. (3) Administrative villages The regression coefficient of administrative village on natural capital is 0.0005, significant at the 5% level, and the coefficients on physical, financial, and social capital are -0.0018, -0.0006, and -0.0024, respectively, significant at the 1% level. Thus, the development of livelihood capital differs considerably among farmers in different administrative villages. There are often significant differences in the geographical conditions, infrastructure, road traffic conditions, economic development level, and social relations of farmers in different administrative villages, which further affect farmers' livelihood capital. (4) Causes of poverty The regression coefficients of the causes of poverty on financial and social capital are -0.0010 and 0.0031, respectively, significant at the 1% level, indicating that farmers affected by capacity loss, increased burden, factor shortage, accidental impact, and lack of self-development motivation perform differently in terms of financial and social capital.
Therefore, to improve farmers' livelihood capital and realize sustainable livelihoods, it is necessary to attach importance to the orderly connection between the various policies and poverty alleviation relocation and to implement differentiated assistance and development measures for farmers with different causes of poverty. --- Robustness Test (1) Replacing the estimation method To verify the robustness of the research conclusions, OLS and FE estimation methods are used to re-examine the influence of the number of domestic and foreign tourists, family population, administrative villages, and causes of poverty on farmers' livelihood capital. The regression results are shown in Table 6: the significance and direction of most variable coefficients are stable, and the conclusions are consistent with the benchmark regression results, indicating that the empirical analysis is robust. The results of a further robustness check are reported in Table 7. Although the absolute values of the regression coefficients of the various influencing factors differ, the sign and significance level of the coefficients remain unchanged, which further confirms that the benchmark regression results are robust. --- Discussion Since the concept of sustainable livelihood was put forward, it has become a core issue in poverty and sustainable development research, focusing on ability, fairness, and sustainability [30]. Livelihood capital is the core of sustainable livelihood, and scholars have made useful explorations and summaries in the evaluation and promotion of livelihood capital and in the study of livelihood capital in specific events. Most existing studies evaluate livelihood capital in terms of five aspects: natural, physical, financial, social, and human capital, following DFID's sustainable livelihood analysis framework [48]. Many studies promote the development of livelihood capital through the intervention of external factors and seldom explore the impact of farmers' own characteristics on livelihood capital [49][50][51][52][53][54]. Moreover, research on livelihood capital in specific events focuses mainly on climate change [55][56][57]. However, the existing research rarely investigates the influence of relocation on farmers' livelihoods [12], especially farmers' livelihood capital after poverty alleviation relocation in minority areas, and insufficient attention is paid to cultural capital in minority areas. Owing to the particularity of their social history, cultural traditions, and living customs, minority areas need to fully consider and respect local characteristics and development laws, choose development methods based on local conditions, take into account the internal and
external influencing factors of livelihood capital, and promote both the improvement in the stock of livelihood capital and the coordinated development of the various capitals. Ecological, economic, and social factors, such as natural disasters, environmental pollution, climate change, deterioration of land tenure, lack of rural employment opportunities, lack of educational resources, and inadequate health and social welfare, are the leading causes of the relocation of farmers. Based on the factors affecting farmers' livelihood capital identified in this study, several measures can improve the livelihood capital of relocated farmers. Farmers can be organized to move to areas with tourism resources and increase their income by developing homestays and rural tourism. Healthy childbearing and child-rearing should be promoted, family members' education and employment levels should be improved, and their capacity for self-development should be enhanced. In addition, cooperation between different administrative villages can be strengthened to jointly carry out planting and breeding projects, share resources, and improve production efficiency. Furthermore, government departments need to deeply understand the causes of farmers' poverty and formulate specific assistance programs; for example, they can encourage young people in impoverished households to start businesses in their hometowns if the family is impoverished due to a lack of labor. This study has both theoretical and practical significance for academic research and policymaking. On the one hand, by supplementing cultural capital, the original sustainable livelihood analysis framework is improved, which provides a scientific theoretical reference for the study of sustainable livelihood issues. Furthermore, the theoretical framework of internal and external factors affecting livelihood capital is constructed, making it possible to pay attention not only to the importance of external assistance but also to farmers' own characteristics and endogenous motivation.
On the other hand, this study helps the relevant departments to recognize that, after relocation, farmers need not only physical and economic support but also cultural integration, which calls for continuously enriching the carriers of cultural support, building cultural facilities, enriching ethnic cultural activities, and meeting the diverse cultural needs of relocated farmers. Moreover, the internal and external factors that affect farmers' livelihood capital are considered comprehensively, so that relocated farmers can be given targeted policies based on the different influencing factors. The limitation of this study is that only one area was taken as an example for the field investigation and empirical analysis, and whether the index system and empirical results are suitable for farmers in other minority areas remains to be discussed. In the future, it will be necessary to expand the research area and add comparative analyses of different regions to enhance the generality of the research conclusions. --- Conclusions As the "first project" in the battle against poverty, poverty alleviation relocation is the most effective way to alleviate poverty for farmers in regions where "one's soil and water cannot support one's people". It is also a great feat in the history of human migration and world poverty reduction and an essential part of the "China Plan" for poverty alleviation in the new era. As a main battlefield of poverty alleviation, Yunnan Province combines frontier, ethnic-minority, mountainous, and impoverished characteristics. To further consolidate the achievements of poverty alleviation and enhance the livelihood capital of relocated farmers, this study takes the relocated farmers in Menglai Township, Cangyuan County, Yunnan Province, from 2015 to 2021 as the research object, evaluates their livelihood capital, and explores its influencing factors. The aim is to provide decision support for the sustainable development of the livelihood capital of relocated farmers, promote the effective connection between poverty alleviation achievements and the rural revitalization strategy, prevent farmers from returning to poverty, and realize the sustainable development goals. The main research contents and conclusions are as follows: (1) Construct a livelihood capital evaluation system for farmers in Yunnan minority areas. The evaluation system of farmers' livelihood capital includes 17 indexes: four third-level indexes of natural capital, three of physical capital, four of financial capital, two of social capital, two of human capital, and two of cultural capital. (2) Measure the value of livelihood capital and its coupling and coordination level. Farmers' total livelihood capital and all types of capital increased significantly after relocation, and the level of coupling and coordination among the six types of capital improved. However, there is still a significant gap from the level of extreme coordination. (3) Construct the theoretical framework of internal and external factors affecting the livelihood capital of relocated farmers.
Integrating the internal and external factors theory, family life cycle theory, location theory, sustainable development theory of tourism poverty alleviation, and the cumulative causation theory, the empirical analysis shows that the number of domestic and foreign tourists, family population, administrative villages, and causes of poverty have different degrees of influence on farmers' livelihood capital. --- Data Availability Statement: Not applicable. --- Conflicts of Interest: The authors declare no conflict of interest. --- Author Contributions: Conceptualization, J.W. and H.Y.; methodology, J.W.; software, J.W.; validation, J.W., H.Y. and J.Z.; formal analysis, J.Z.; investigation, J.W.; resources, H.Y.; data curation, H.Y.; writing-original draft preparation, J.W.; writing-review and editing, J.W.; visualization, J.W.; supervision, J.Z.; project administration, H.Y.; funding acquisition, H.Y. All authors have read and agreed to the published version of the manuscript.
As an essential regional planning policy, poverty alleviation relocation has a significant impact on the regional economy, environment, and social well-being and is critical for sustainable development. Focusing on the development of minority areas in Yunnan, this study improves the traditional sustainable livelihood analysis framework and constructs a livelihood capital evaluation system comprising natural, physical, financial, social, human, and cultural capital. Furthermore, a measurement standard for sustainable livelihoods is proposed, which requires not only the enhancement of livelihood capital but also the coupled and coordinated development of all capital components. Based on data from Menglai Township from 2015 to 2021, this study finds that farmers' livelihood capital increased after relocation and that the level of coupling and coordination improved, although it has yet to reach extreme coordination. Thereafter, the theoretical framework of internal and external factors affecting livelihood capital is constructed, and the influencing factors of livelihood capital are identified through regression analysis. This study provides a new tool for evaluating livelihood capital in minority areas, obtains new findings on the sustainable development of farmers' livelihood capital after poverty alleviation relocation, and opens a new perspective for studying the influencing factors of livelihood capital.
Background The proportion of women of reproductive age whose need for family planning is satisfied by a modern contraceptive method has increased steadily worldwide, from 73.6% in 2000 to 76.8% in 2020 [1,2]. Reasons ascribed to this modest change include limited access to services as well as cultural and religious factors [3]. However, these barriers are being addressed in some regions, which accounts for an increase in demand for modern methods of contraception [2]. According to the World Health Organization, the proportion of women whose need for modern contraception is met remained stagnant at 77% from 2015 to 2020 [2]. Globally, the number of women using modern contraceptive methods increased from 663 million in 2000 to 851 million in 2020 [2], and it is projected that an additional 70 million women will be using a modern contraceptive method by 2030 [2]. In low- and middle-income countries, 214 million women who wanted to avoid pregnancy were not using any method of contraception as of 2020 [2]. Low levels of contraceptive use have mortality and clinical implications [3], and about 51 million women of reproductive age have an unmet need for modern contraception [1]. Maternal deaths could be reduced from 308,000 to 84,000 and newborn deaths from 2.7 million to 538,000 if women with intentions to avoid pregnancy were provided with modern contraceptives [4]. Low prevalence of modern contraceptive use has been linked to negative events such as maternal mortality and unsafe abortion in Africa [5][6][7]. Women with low fertility intentions in sub-Saharan Africa record the lowest prevalence of modern contraceptive use [8]. The use of modern contraceptives remains a pragmatic and cost-effective public health intervention for reducing maternal mortality, averting unintended pregnancy, and controlling rapid population growth, especially in developing countries [3,9]. Beson, Appiah and Adomah-Afari [3] highlight that knowledge and awareness per se do not result in the utilization of modern contraceptives, and cultural and religious myths and misconceptions tend to undermine their use [10][11][12]. Ensuring access to and utilization of contraceptives has benefits extending beyond the health of the population [3], including sustainable population growth, economic development, and women's empowerment [2,3]. Nonetheless, predominantly in SSA, women often lack the capacity to make decisions pertaining to their own health [13], even though such decision-making capacity has proven to be an efficient driver of improved reproductive health outcomes for women [14,15]. To improve contraceptive uptake in Africa, people are encouraged to make positive reproductive health decisions to prevent unintended pregnancies and sexually transmitted infections, since these steps would reduce maternal mortality and early childbirth among women [14]. At an estimated 3.5% per year, Chad's population is growing at a relatively fast pace [16]. This trend may be ascribed to the country's high fertility and low use of contraceptives [16]. Chad has been found to have the lowest prevalence of modern contraceptive use in sub-Saharan Africa, despite recorded growth from 5.7% in 2015 to 7.7% in 2019 [8,17].
UN Women [18] data also show that much remains to be done in Chad to achieve gender equality, with about 6 out of 10 women aged 20-24 years married before age 18 and a sizeable share of women of reproductive age reporting having been subjected to physical and/or sexual violence by a current or former intimate partner in 2018. Studies indicate that social and religious norms have undermined women's rights and self-determination in Chad [19][20][21], which negatively affects their health decision-making capacity. This situation makes it challenging for women of reproductive age in Chad to exercise independent judgment when making decisions about the use of modern contraception, even though the use of contraception in many parts of the world has been found to yield immense benefits, such as lower maternal mortality and morbidity, and to a larger extent to influence economic growth and development [2,3]. It has therefore become necessary to investigate health decision-making capacity and modern contraceptive utilisation among sexually active women in Chad. Findings from this study will provide stakeholders and decision-makers with evidence to guide policymaking aimed at improving access to and utilisation of modern contraceptives in Chad. --- Plain language summary The use of modern contraceptives remains a pragmatic and cost-effective public health intervention for reducing maternal mortality, averting unintended pregnancy and controlling rapid population growth, especially in developing countries. Although there has been an increase in the utilization of modern contraceptives globally, it is still low in Chad, with a prevalence rate of 7.7%. This study assessed the association between the health decision-making capacities of women in Chad and the use of modern contraceptives. We used data from the 2014-2015 Chad Demographic and Health Survey. Our study involved 4,113 women who were in sexual union and had complete data on all variables of interest. We found the prevalence of modern contraceptive utilization to be 5.7%. Women's level of education, ability to refuse sex, and employment status were found to be significantly associated with the use of modern contraceptives. Whereas those who reside in rural settings are less likely to use modern contraceptives, those who have at least primary education are more likely to use them. Our study contributes to the efforts being made to increase the utilisation of modern contraceptives. There is a need to step up contraceptive education and improve adherence among Chadian women in their reproductive years. In developing interventions to promote contraceptive use, significant others, such as partners and persons who make health decisions with or on behalf of women, must be targeted as well. --- Keywords Women, Chad, Modern contraception, Reproductive health, Demographic and Health Survey --- Materials and methods --- Data source The study used data from the most recent Demographic and Health Survey (DHS) conducted in Chad in 2014-2015. The 2014-2015 Chad Demographic and Health Survey (CDHS) aimed to provide current estimates of basic demographic and health indicators. It captured information on health decision making, fertility, awareness and utilization of family planning methods, unintended pregnancy, contraceptive use, skilled birth attendance, and other essential maternal and child health indicators [22]. The survey targeted women aged 15-49 years.
The study used DHS data to provide holistic and in-depth evidence of the relationship between health decision-making and the use of modern contraceptives in Chad. The DHS is a nationwide survey conducted every five years across low- and middle-income countries. A stratified dual-stage sampling approach was employed: the selection of clusters (i.e., enumeration areas [EAs]) was the first step in the sampling process, followed by systematic household sampling within the selected EAs. For the purpose of this study, only women (15-49 years) in sexual unions (marriage and cohabitation) who had complete cases on all the variables of interest were used. The total sample for the study was 4,113. --- Study variables --- Dependent variable The dependent variable in this study was "contraceptive use", which was derived from the 'current contraceptive method' variable. The responses were coded 0 = "no method", 1 = "folkloric method", 2 = "traditional method", and 3 = "modern method". The existing DHS variable excluded women who were pregnant and those who had never had sex. The modern methods included female sterilization, intrauterine contraceptive device (IUD), contraceptive injection, contraceptive implants (Norplant), contraceptive pill, condoms, emergency contraception, standard days method (SDM), vaginal methods (foam, jelly, suppository), lactational amenorrhea method (LAM), country-specific modern methods, and other modern contraceptive methods mentioned by respondents (e.g., cervical cap, contraceptive sponge). Periodic abstinence (rhythm, calendar method), withdrawal (coitus interruptus), and country-specific traditional methods of proven effectiveness were considered traditional methods, while locally described methods and spiritual methods (e.g., herbs, amulets, gris-gris) of unproven effectiveness were classified as folkloric methods. To obtain a binary outcome, all respondents who reported using no method, a folkloric method, or a traditional method were put in one category coded "0 = No", whereas those who were using a modern method were put in the other category and coded "1 = Yes". --- Explanatory variables Health decision-making capacity was the main explanatory variable. For health decision-making capacity, women were asked who usually decides on the respondent's health care. The responses were: respondent alone, respondent and husband/partner, husband/partner alone, someone else, and others. This was recoded as respondent alone = 1, respondent and someone (respondent and husband/partner, someone else, and others) = 2, and partner alone = 3. Similarly, some covariates were included based on theoretical relevance and conclusions drawn about their association with modern contraceptive utilisation [13,14,23]. These variables are age, place of residence, wealth quintile, employment status, educational level, marital status, age at first sex, and parity. --- Analytical technique We analysed the data using STATA version 13. We started with descriptive computation of modern contraceptive utilization with respect to health decision-making capacity and the covariates. We presented these as frequencies and percentages (Table 1). We conducted chi-square tests to explore the level of significance between health decision-making capacity, the covariates, and modern contraceptive utilization at a 5% margin of error (Table 2).
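To make the outcome recoding described above and the regression models described in the next subsection concrete, the sketch below derives the binary outcome and fits crude and adjusted logistic regressions. This is not the authors' code: the authors used STATA version 13 with its survey (svy) commands, whereas this Python sketch uses statsmodels, does not replicate the complex survey design or sampling weights, and uses an illustrative file name and column names (the DHS method-type variable is assumed to be coded 0-3 as described in the text).

```python
# Hedged sketch: binary outcome construction and crude/adjusted logistic models.
# File name and column names are hypothetical placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

women = pd.read_csv("chad_dhs_2014_15_women.csv")

# 0 = no/folkloric/traditional method, 1 = modern method (method type coded 0-3)
women["modern"] = (women["method_type"] == 3).astype(int)

# Crude model: health decision-making capacity only
crude = smf.logit("modern ~ C(decision)", data=women).fit()

# Adjusted model: add the covariates used in the paper
adjusted = smf.logit(
    "modern ~ C(decision) + C(education) + C(residence) + C(wealth) "
    "+ C(employed) + C(age_group) + C(parity) + C(can_refuse_sex)",
    data=women,
).fit()

# Report odds ratios with 95% confidence intervals
or_table = np.exp(adjusted.conf_int())
or_table.columns = ["2.5%", "97.5%"]
or_table["aOR"] = np.exp(adjusted.params)
print(or_table)
```

Point estimates from such an unweighted fit will generally differ from the survey-weighted estimates reported in Table 3; Stata's svy prefix (or a dedicated survey-analysis package) is needed to account for the sampling design exactly.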
In the next step, we employed binary logistic regression analysis to determine the influence of health decision-making capacity on modern contraceptive utilization among women of reproductive age, as shown in the first model (Model I in Table 3). We presented the results of this model as crude odds ratios (cOR) with their corresponding 95% confidence intervals. We further explored the effect of the covariates to ascertain the net effect of health decision-making capacity on modern contraceptive utilization in the second model (Model II in Table 3), where adjusted odds ratios (aOR) were reported. Normative categories were chosen as reference groups for the independent variables. Sample weights were applied when computing the frequencies and percentages so that the results are representative at the national and domain levels. We used STATA's survey command (SVY) in the regression models to account for the complex sampling procedure of the survey. We assessed multicollinearity among our covariates with the Variance Inflation Factor (VIF) and found that no multicollinearity existed (mean VIF = 3.7). --- Results --- Socio-demographics and prevalence of modern contraceptive use Among the respondents who participated in this study, 91% are married and about three-quarters (73.3%) have partners who are the sole decision-makers regarding health issues (Table 1). Most of the respondents (79.8%) reside in rural settings, 69.5% have no education, and about half (56.4%) are between the ages of 20 and 34 (Table 1). More than half of the respondents (56%) are not able to refuse sex when it is demanded. The prevalence of modern contraceptive use is 5.7% [CI = 5.46-5.91] (see Fig. 1). --- Association between use of modern contraceptives and the predictor variables As shown in Table 3 (multivariate logistic regression results on the predictors of modern contraception utilisation among women in Chad), in both the adjusted and the unadjusted models, respondents who take health decisions with someone are more than twice as likely (aOR = 2.71; 95% CI = 1.41, 5.21 and cOR = 2.38; 95% CI = 1.28, 4.42, respectively) to decide on using contraceptives as respondents who decide alone. It was also observed that having at least a primary education positively affects the likelihood of using modern contraceptives: primary education (aOR = 2.34; 95% CI = 1.56, 3.50); secondary or higher education (aOR = 4.02; 95% CI = 2.44, 6.60) (see Table 3). Likewise, people who reside in rural areas are 53% less likely to use modern contraceptives (aOR = 0.47; 95% CI = 0.27, 0.82) than their counterparts who live in urban areas (see Table 3). Furthermore, women who are employed have higher odds of using contraceptives than those who are unemployed (aOR = 2.24; 95% CI = 1.54, 3.28). Along with that, women who have given birth at least four (4) times are more likely to use modern contraceptives (aOR = 2.71; 95% CI = 1.41, 5.21) than those with no birth experience. It was observed that women who can refuse sex have higher odds of using modern contraceptives (aOR = 1.61; 95% CI = 1.14, 2.27) relative to those who are unable to refuse sex (see Table 3). --- Discussion This study was essential since the ability of sexually active women to make significant decisions about their health, including choices of modern contraceptive use (e.g., condom use), can lead to good reproductive health [24].
We observed that the prevalence of modern contraceptive use in Chad among women in sexual union was 5.7%. Generally, about three-quarters (73.3%) of the respondents, 91% of whom were married, had partners as the sole decision-makers regarding their health issues, a finding similar to that of a study conducted in Ghana where only a quarter of women took healthcare decisions single-handedly [25]. However, in a multi-country assessment, it was revealed that about 68.66% of respondents across the 32 nations studied could make decisions on their reproductive health [14]. The discrepancy might be attributable to the diverse research populations and the number of nations investigated. Furthermore, parties to decision-making regarding the health of women play an important role in the usage of contraceptives. It was found that respondents who took health decisions with someone were more than twice as likely to decide on using contraceptives as respondents who decided alone. Similar results were seen in Burkina Faso [26] and Mozambique [27], where spousal decision-making together with women had a positive influence on the utilization of contraception. Where decisions were taken solely by partners, women were 18% less likely to report an intention to use contraceptives [27]. Again, it was revealed in Pakistan that women whose partners were sole decision-makers were less likely to use contraception. This shows that a woman's inability to discuss and make decisions on health, especially on family planning issues, can negatively affect the use of modern contraception. Education has been recognised as a strong determining factor of contemporary contraception use: it exposes women to factual information and helps convince their partners of the need for contraception [28]. This is relevant, as we also observed that having at least a primary education is associated with higher odds of using modern contraceptives. Although our study establishes education to be significantly associated with contraceptive use, a study conducted to measure trends in the use of contraception in 27 countries in sub-Saharan Africa reported that an increase in the proportion of study participants with secondary education did not affect the use of contraception [29]. In agreement with our finding, a high level of education has been found to increase the likelihood of using modern ways of delaying birth among women living in Uganda [30]. A plausible explanation is that as women's level of education increases, they are more empowered to take charge of their health decision-making, since education empowers women to have autonomy over their reproductive rights [23]. With regard to place of residence, we found that urban women were more likely to use modern contraceptives than their counterparts in rural areas. A possible explanation is that women in urban areas may have better access to information and are more likely to be interested in education and, hence, in the use of modern contraceptives to delay childbirth. Other reasons may be poor transportation access, long distances to health facilities, and shortages of contraceptives in rural areas compared with urban areas [31]. This corroborates the findings of Apanga et al. [32], which they ascribed to the higher prevalence of late marriage in urban areas compared with rural areas [33]. Hence, there is a possibility that women in urban areas are more likely to use modern contraceptives to avoid unwanted pregnancies.
Consistent with prior studies in Ghana [3,34], women who are working had a higher likelihood of utilizing contraceptive methods than those who are unemployed. A reason for this is that working women may be willing to do what it takes to maintain their employment and devote more time to their occupations instead of having children, especially given their greater capacity to acquire contraceptives compared with their non-working peers. Working women are also expected to have the financial backing to make health decisions concerning their reproductive health. It is, therefore, no surprise that we found women within the richest wealth quintile to have the highest likelihood of utilizing modern contraceptives. Finally, women who have given birth to at least four (4) children are more likely to use contraceptives than those with no birth experience. A plausible explanation is that multiparous women may not want more children and hence resort to the use of modern contraceptives to either delay the next pregnancy or stop childbearing. This finding is similar to previous reports from Ethiopia and Tanzania, which found that as the number of living children increases, so does the usage of modern contraceptives [35,36]. --- Strength and limitations We used a large dataset comprising 4,113 women aged 15-49, which makes our results compelling. Findings from this study are also based on rigorous logistic regression. Despite these strengths, the study had some notable shortcomings. To begin with, the study's cross-sectional design limits causal inferences between respondents' individual factors and modern contraceptive use. Second, because most questions were answered using a self-reporting approach, there is a risk of social desirability and recall bias in the results. Furthermore, because this study only included women, the conclusions do not incorporate the perspectives of spouses. Finally, we believed that variables such as cultural norms and health-care provider attitudes would be relevant to investigate in the context of this study; however, such variables were not included in the DHS dataset. --- Conclusion The study revealed that modern contraceptive utilization is very low among sexually active women in Chad. We conclude that health decision-making arrangements, education, employment status, higher parity, and women's ability to refuse sex have a positive association with modern contraceptive utilization among sexually active women in Chad. There is a need to step up modern contraceptive education and improve adherence among women in their reproductive years. In the development of interventions aiming at promoting modern contraceptive use, broader contextual elements must be prioritized; for instance, significant others such as partners and persons who make health decisions with or on behalf of women need to be targeted. --- Data availability Data used for the study are freely available to the public via https://dhsprogram.com/data/available-datasets.cfm. --- Declarations Ethical approval This study used publicly available data from the DHS. Informed consent was obtained from all participants prior to the survey. The DHS Program adheres to ethical standards for protecting the privacy of respondents. ICF International also ensures that the survey processes conform to the ethical requirements of the U.S. Department of Health and Human Services.
No additional ethical approval was required, as the data is secondary and available to the general public. However, to have access and use the raw data, we sought and obtained permission from MEASURE DHS. Details of the ethical standards are available on http://goo.gl/ny8T6X. --- Consent for publication Not applicable. --- Competing interests The authors declare that they have no competing interest. --- Publisher's note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Background Globally, there has been an increase in the percentage of women of reproductive age who need modern contraceptives for family planning. However, in Chad, the use of modern contraceptives is still low (with a prevalence of 7.7%), and this low use may contribute to the country's rapid annual population growth of 3.5%. Social, cultural, and religious norms have been identified as influencing the decision-making abilities of women in sub-Saharan Africa concerning the use of modern contraceptives. The main aim of the study is to assess the association between the health decision-making capacities of women in Chad and the use of modern contraceptives. The 2014-2015 Chad Demographic and Health Survey data involving women aged 15-49 were used for this study. A total of 4,113 women who were in sexual union, with information on decision making, contraceptive use and other sociodemographic factors such as age, education level, employment status, place of residence, wealth index, marital status, age at first sex, and parity, were included in the study. Descriptive analysis and logistic regression were performed using STATA version 13. The prevalence of modern contraceptive use was 5.7%. Women who take health decisions with someone are more likely to use modern contraceptives than those who decide alone (aOR = 2.71; 95% CI = 1.41, 5.21). Education, ability to refuse sex, and employment status were found to be associated with the use of modern contraceptives. Whereas those who reside in rural settings are less likely to use modern contraceptives, those who have at least primary education are more likely to use them. Neither age, marital status, nor age at first sex was found to be associated with the use of modern contraceptives. Educating Chadian women of reproductive age on the importance of contraceptive use will go a long way toward fostering uptake; the study has shown that when women make decisions with others, they are more likely to opt for modern contraceptives, so a well-informed society will most likely have a higher prevalence of modern contraceptive use.
immigrant men report paying women primarily of Latino ethnicity for sex [17][18][19][20][21][22][23][24]. Sex industry typologies have described how sex work in the Latino immigrant population operates, identifying the places and manner in which transactional sexual encounters occur [25,26]. In a study from North Carolina, for example, interviews with service providers concluded that the existing public health infrastructure is not well suited to meet the health needs of highly mobile, unauthorized immigrant Latina sex workers [25]. There are few studies, however, eliciting information about the Latina sex work industry from Latina FSWs themselves [27]. We conducted in-depth interviews with Latina immigrant women living in Baltimore City who exchange sex for money and goods, and with their clients, and found that most of these women engaged in indirect sex work. Although these women engaged in transactional sex to earn extra money, they were not considered sex workers or prostitutes by themselves or the community [26,28]. In this study, we evaluate how syndemic risks and resiliency impact the health risk of Latina immigrant FSWs engaging in indirect sex work. Syndemic theory postulates that assessing the overall impact of co-occurring factors, such as substance use and mental health issues, provides a better assessment of HIV risk than considering the additive effects of separate factors [29]. This theory provides a useful framework to explore the complex and multiple challenges faced by female sex workers [30]. However, focusing only on deficits and challenges faced by vulnerable populations minimizes the strength and potential of individuals and communities living in difficult situations. There is, therefore, a compelling argument to focus on resiliency, or "the process of overcoming the negative effects of risk exposure, coping successfully with traumatic experiences, and avoiding the negative trajectories associated with risk" [30][31][32]. Evidence suggests that increased resilience may be associated with protective factors that improve health outcomes [33]. In this study, we evaluate how syndemic factors and resilience influence the behavior of immigrant Latina FSWs in an effort to identify strategies that may mitigate HIV risk in this population. --- Methods --- Recruitment and Data Collection We conducted 32 in-depth interviews with Latina sex workers and their Latino immigrant clients. All interviews were conducted between July 2014 and April 2015. Eligibility criteria were: 1) being at least 21 years old; 2) being born in a Spanish-speaking Latin American country; and 3) having engaged in transactional sex with a Latino immigrant man (if a sex worker) or a Latina woman (if a client) within the past year in Baltimore, Maryland. Transactional sex was defined as exchanging vaginal and/or anal sex for money (i.e., cash, rent, or payment of bills), material goods (i.e., presents, drugs), and/or housing. Participants could therefore be engaged in "direct" sex work, in which the primary purpose of the interaction is to exchange sex for a fee, or "indirect" sex work, in which sex is exchanged for a fee but not recognized as sex work. After learning from clients that street-based FSWs are most likely to be U.S.-born Latinas, we expanded recruitment to include two U.S.-born Latina FSWs.
The analysis for this manuscript focuses on indirect sex work; 11 of the 14 FSWs interviewed had experience with indirect sex work in the past year, and all of the male clients had engaged in transactional sex with a Latina indirect sex worker. Participants were recruited through snowball sampling with coupons for referrals. Initial participants were identified through our community network. At the end of each interview, participants were asked if they knew of another person who might be eligible and interested in completing an interview. If the interviewee referred an eligible person who completed an interview, they were provided $50. Two trained Latina immigrants with extensive experience conducted the interviews. Interview questions covered migration history, local social support, perceptions of sex work in the local Latino community, sex work history, current sex work practices, experiences with violence, perceptions of HIV/STI risk, and access to health care. Interviews took place in a private location convenient to and trusted by the participant (e.g., local restaurants, public parks). Interviews were audio recorded with participant consent and lasted 45-90 minutes each. Sex workers were compensated $100 USD for their time, and clients were compensated $50 USD for their time. --- Data Analysis The audio recording of each interview was transcribed verbatim. Transcripts were then cleaned of any possible identifiers, translated into English, and reviewed for accuracy. Spanish and English transcripts were then imported into Atlas.ti qualitative software. Transcripts were reviewed as the research was conducted so that the analysis of the early interviews could inform those that occurred later. Data analysis of the text was conducted using an iterative, constant comparison coding process. A team of two coders independently coded the cleaned transcripts (one in Spanish, one in English), generating as many concepts as possible before moving on to selective coding. These concepts were then consolidated into themes and subthemes [34,35]. Thematic codes were compared within a single interview and between interviews [34]. The Johns Hopkins University School of Medicine Institutional Review Board (IRB) and the Baltimore City Health Department approved all protocols. --- Results Ten themes emerged, reflecting either syndemic risk factors or resiliency. The themes address the lived experience and impact of indirect sex work on Latina immigrant sex workers. Participant demographics are presented in Table 1. --- Syndemic risk factors Difficulty finding work due to undocumented status.-The women were overwhelmingly living in the U.S. without documentation. As a result, the women expressed great difficulty finding employment that paid adequately. One Honduran woman who worked through an agency that would place undocumented workers in jobs described it this way: "Latina women are heavily exploited here. Heavily...The problem is that you work there through an agency and the agency keeps a percentage of each person. You are paid a miserable pittance...They keep the rest of the money." Other women found work through family members or acquaintances, but these jobs were low paying and often offered few hours a week. Said another Honduran woman: "Why do we do this [sex work]? [Because] it's difficult to find a job." Shame and mental health hardship.-Many of the women interviewed wanted to find an alternative way to make money and expressed shame about needing to sell sex. One woman from Costa Rica who had lived in the U.S.
for 8 years said, for example: "I know that even though I am paid for my body, it will never be enough of a price because you must have values, must have dignity...I say well he can give me this much. I know it isn't right." Almost all of the women interviewed commented that they were very discreet in these interactions: "No, no one [knows]. No one... we all play it like we are proper and decent ladies." For some women, this shame influenced their mental health and wellbeing. Said one Honduran who worked in a bar: "I get depressed. I cry a lot. Sometimes I get drunk every day because I don't want to know anything. A lot of depression. Once I tried to commit suicide. I don't want to get drunk. I don't want this life. I want to be someone." Lack of social support.-Although the women initially came to Baltimore because they knew someone there, they reported minimal to no social support. Described one woman from Costa Rica: "Support here? No. Trust? Just me. But I consider [a former roommate] a friendship....[I can get help from her if needed] depending on the help." One woman from El Salvador described being able to get help if needed from two men who were her sex work clients: "When I want someone to talk to, I have friends. I call a guy, a man named [removed], another one called [name removed]. They help me." Many of the women, however, were unable to identify someone who could help them if they needed support of any kind, and the women who could identify support recognized it as limited and/or conditional. Alcohol Use.-Alcohol use was most prominent among bar workers. In the bars, women are hired by Latino immigrant men frequenting the establishment to serve them drinks and provide company. The beer, typically around $3 USD, may cost up to $20 with the woman's company. A Salvadorian bartender explained this: "There is an obligation to drink because otherwise the tips are little... [So you have] to be invited for a beer, because a beer costs $20. Half is for me and half is for the owner." The number of alcoholic drinks consumed in a night ranged from 5 to 20 among participants. Said one Honduran worker: "You have to drink a lot. If you don't drink, you don't make much...I think I drink too much in this country, from working in here. [I drink] beer. Sometimes 15, 12 [a night]." Violence.-Women in all types or venues of sex work experienced violence or threats of violence from their clients. One Honduran sex worker who met clients through the bar described how frequent this is: "Every women, as I told you, we are mistreated but we don't say anything because we are ashamed...I was badly hit [by a client] and said it was an accident. I still have marks on my body and I said I had fallen down. But no, a guy hit me." The violence or threats of violence women faced largely resulted from disputes over the woman's willingness to engage in the transactional sexual activities the man wanted or over the amount of money to be paid. Described one client from Honduras: "Imagine, I've invited a girl, she is sitting here on my lap or next to me and I am spending money on her... Then another guy comes and just because she wanted, she leaves [me for] him. Of course I won't like that situation. I am spending on her...so a violent person gets angry and the quarrel begins."
--- Resiliency: Protective Factors to Mitigate Risks Rationalizing sex work.-The women interviewed expressed that many Latina immigrants in Baltimore occasionally engaged in transactional sex. One Honduran woman who worked in a bar and sold sex in addition to working in a factory and providing cleaning services stated: Almost 5 [out of 10], something like that. It's quite common, quite common. If women's wages were different, we wouldn't need to do this...The thing is that Latin women do it [sell sex] here. Yes. They do it because they need to. They don't have a husband who pays for all they need. Identifying as a "decent woman."-Despite feeling shame, the indirect sex workers maintained a standing of "decent" or "respectable" women through their typically slow approach to gaining male clients. As defined by one client, when discussing the indirect sex workers he meets at the bars, "A respectable woman is one with whom it's not so easy to have sex. You need... 'who's that person? What's their name?' and all that." Described another man from Honduras: "I [am there] today, I invite the girl. Tomorrow, I invite her again. And that's it, talking. Where are you from, do you have children... 'not now, wait' [they say] but they usually will [eventually] say yes." The indirect sex workers take pride in being "decent" women, and include in this definition only engaging in vaginal sex. Selling sex as needed to fulfill immigration goals.-As indirect sex workers working independently, the woman all decided when to sell sex and to whom. For many of the women, selling sex, only when independently decided, provided an opportunity to gain a sense of empowerment and success while attempting to survive as an immigrant in a new environment that is overwhelmingly difficult and without a supportive network. Specifically, this was tied to their ability to do what they sought to do by coming to the U.S. -provide for their family. Said one woman from El Salvador: Reducing alcohol consumption.-Many of the bar workers recognized that their level of alcohol consumption was not healthy or conducive to their safety, either physically from a man or during sex by control of condom use. As a result, many of the women shared strategies to reduce alcohol consumption while still earning money through selling drinks and company with men at the bars. For example, one woman described fooling a man: "One of my co-workers taught me that I could discreetly throw it away [by pouring some of the beer on the floor or in the trash]." --- Creating rules to maintain control and reduce risk of violence and HIV/STIs.- In an effort to reduce their risk of violence and HIV/STIs, many indirect sex workers have rules they follow to "vet" a potential client. These include getting to know the potential client first, or having other people they know and trust vouch for the potential client. Women who sell sex independently when needed, for example, rely on gaining clients from men they already know and trust -often previous or current clients they consider friends. One client from Honduras described how women in the bars do this: In the bars, you meet the women, you talk with them, they ask you to invite them some drinks, beers...Once you know them and began talking with them, you have a kind of friendship, you tell them, "You are beautiful." And that you like her...So you invite her to go out to eat one day, you can go for a ride with her, and after that... you do it. If not, you have to reactivate the relationship until she accepts. 
Not every woman accepts it; and those who do accept want to be motivated and you have to gain their trust." --- Discussion In this study, we used syndemic theory and a resiliency framework to document the experiences of Latina immigrant FSW who participate in indirect sex work. We demonstrate high levels of resilience among these women, even as they faced multiple co-existing syndemic risk factors, many of them at the structural or community level and out of their locus of control. Specifically, the women understood their sex work as a means for economic independence and altered the behaviors under their control to reduce risk of HIV/STIs and violence. The FSW in this study experienced multiple and severe syndemic risk factors. For most, limited work opportunities because of their documentation status led to sex work, but at a high price. The women expressed high levels of guilt and shame leading to depression, increased alcohol use, and even a loss of agency to confront partner violence. In addition, the women had very limited social support. Smaller social networks and isolation among Latino immigrant is associated with depressive symptoms and poor physical and mental health [36]. Despite these challenges, the women exhibited important elements of resilience that help them cope with their situation and gave them a sense of control and self-efficacy. Resilience operates on three levels: 1) the social environment (e.g., neighborhoods and social supports), 2) the family (e.g., attachment and parental care), and 3) at the individual level (e.g., attitudes, social skills, and intelligence) [37]. Among the FSWs in this study, social and family support were limited or non-existent, and, therefore, they demonstrated resilience primarily at the individual level. For example, despite pressures to drink heavily as part of the job, the women found ways to mask or fake consumption of alcohol as a means of retaining control of the situation and to protect their health. They also established rules of engagement with their clients in order to reduce their risk of violence and HIV/STIs. In addition, the women justified their work as necessary to complete immigration goals, with emphasis on the sacrifice done for the wellbeing of their families. These types of adaptive coping strategies have been shown to reduce emotional distress [38]. Self-efficacy, or a sense of control over thoughts, feelings, and environment through action, as demonstrated by these women, can protect against stress and promotes physical and mental well-being [39][40][41][42]. Understanding the resilience of the FSW as described through their narratives can help develop strength-based interventions to reduce co-existing risk factors, including HIV/STIs. For example, social isolation and shame were prominent syndemic factors that the women discussed individually without recognizing that these feelings were a common thread throughout the interviews. This suggests that there may be an opportunity for women who do this work to share experiences and learn from each other. In Baltimore, for example, group therapy sessions for undocumented immigrants reduce social isolation by providing an opportunity for people with shared background, experience, and beliefs to discuss issues and coping strategies related to the migration experience [43]. A similar approach, adapted for women who engage in sex work, could reduce social isolation and help FSW recognize their strength and resilience. 
Other approaches could include interventions that address the women's priorities and health concerns, and training that builds on the skills they currently utilize to reduce risk. Strategies to reduce alcohol consumption can be adapted for the context of bar work, and partnerships with police (in cities where police do not cooperate with immigration authorities) may encourage women to report and seek help for gender-based violence. Interventions to improve job opportunities, such as English language classes, and partnerships with legal services to gain documentation if eligible, would address a top priority for these women by providing them with more options for economic independence. This study has several limitations that are important to recognize. It utilized a relatively small sample of indirect FSW who engaged in sex with Latino men, and the findings cannot be generalized to Latina sex workers who engage in traditional direct sex work. The findings are specific to women and not generalizable across genders, sexual identities or orientations, or other ethnic/racial groups.
Background: Female sex workers (FSW) constitute a highly vulnerable population challenged by numerous co-existing, or syndemic, risk factors. FSW also display resilience to these, and some evidence suggests that resilience may be associated with protective factors that improve health outcomes. Methods: We conducted in-depth interviews with indirect sex workers (n = 11) and their clients (n = 18). Interviews were coded utilizing an iterative, modified constant comparison method to identify emergent themes. Results: We identified five syndemic risk factors (difficulty finding work due to undocumented status, shame and mental health hardship, lack of social support, alcohol use, and violence) and five resiliency factors (rationalizing sex work, identifying as a "decent" woman, fulfilling immigration goals, reducing alcohol consumption, and creating rules to reduce risk of violence and HIV/STIs). Discussion: Understanding the syndemic risk factors and resiliency developed by FSW is important to develop tailored, strength-based interventions for HIV/STIs and other risks. Female sex workers (FSW) are a highly vulnerable population, whose estimated risk of HIV is 13.5 times higher than among similarly aged women [1]. Additionally, FSW are likely to experience violence [2-4], suffer from depression and other mental health issues [5-8] and substance use disorders [9-11], and face stigma and discrimination [12,13]. Many FSW experience more than one of these factors at a time. Latina migrant women who exchange sex are particularly vulnerable to health-related issues, including HIV and other STIs, as a result of their unique political, economic, social, and
Introduction Both academics and HR specialists recognize that keeping workers happy is important for the organization because satisfied workers-being more productive [1][2][3][4][5], more loyal, and less likely to leave their jobs [6][7][8][9][10][11]-can positively impact company performance [12][13][14][15]. Not only does a comprehensive review study find a significant correlation between job satisfaction and job performance, especially in complex jobs [16], but other research associates low levels of job satisfaction with higher levels of absenteeism and counterproductive behavior [17,18]. The extent to which workers consider their jobs satisfying is thus now a major focus in many disciplines, including psychology, economics, and management [19][20][21][22][23][24]. China offers a particularly interesting case study for job satisfaction because its Confucianbased work ethic of hard work, endurance, collectivism, and personal networks (guanxi) expects Chinese employees to devote themselves to and take full responsibility for the job, work diligently, and generally align their values and goals with those of the organization [25]. Deeply rooted in this Confucianism is the construct of Chinese individual traditionality reflecting "a moral obligation to fulfill the normative expectations of a prescribed role to preserve social harmony and advance collective interests" [26]. Hence, for the traditionalist Chinese, self-identity is defined by role obligations within networks of dyadic social relationships, which may imply less relevance for the job satisfaction determinants that matter in Western countries. Yet one of the rare nationwide studies that examined job satisfaction in China [27], found not only that job satisfaction among employees aged 16-65 is relatively low-with only 46% explicitly satisfied-but also that worker expectations differ significantly from what their jobs actually provide. In particular, many jobs are less interesting than expected, which prevents workers from realizing their perceived potential, creating an expectations gap that is a strong determinant of job satisfaction. Unlike research for Western countries, however, their study finds no link between job satisfaction and turnover, an outcome they attribute to China's unique Confucian-based work ethic. Despite this clear documentation of relatively low job satisfaction in China, however, few extant studies systematically and comprehensively compare such satisfaction with that in other countries. To begin filling this void, this present analysis draws on data for 36 countries, including China, from one of the most comprehensive cross-national surveys on job satisfaction ever conducted. One unique aspect of this survey is that it collects information not only on actual job characteristics but also on worker perceptions of what an ideal job should entail. As pointed out by Locke, "Job satisfaction is the pleasurable emotional state resulting from the appraisal of one's job as achieving or facilitating the achievement of one's job values. Job dissatisfaction is the unpleasurable emotional state resulting from the appraisal of one's job as frustrating or blocking the attainment of one's job values or as entailing disvalues. Job satisfaction and dissatisfaction are a function of the perceived relationship between what one wants from one's job and what one perceives it as offering or entailing" [23]. It is thus this expectations gap which is fundamentally driving job satisfaction. 
Unfortunately, much of the job satisfaction literature focuses solely on job attributes, and not on how these are evaluated. Hence, in addition to decomposing job satisfaction differences between China and other country clusters (using the Blinder-Oaxaca method), we are also able to determine the extent to which work-related expectations are being met and how they relate to low job satisfaction, thereby helping to explain its drivers. In doing so, we also provide additional evidence for a previous study [27] that found lower job satisfaction in China, particularly in relation to Western countries. Identifying the determinants of job satisfaction in China and understanding how these determinants differ from those in other countries is important from a management perspective. Western countries are investing billions in China and many multinational companies have set up major manufacturing and distribution facilities there. These companies not only employ large numbers of Chinese workers, they are also frequently managed by international teams that often apply Western HR concepts. Yet considering China's very different social and cultural background, it is important to assess Chinese employees' responses to such Western HR concepts. In this paper we provide evidence on what Chinese workers value in a job and how these values differ from those of workers in other countries. This is an important precondition for a deeper understanding of the effectiveness of HR policies in China. --- Previous research Despite a large body of literature on the determinants of job satisfaction [6, 19-22, 24, 28-35], the research for China is restricted mostly to particular geographic areas [36-43] or specific occupations, including teachers [44-46], physicians [47,48], nurses [49-53], civil servants [54], and migrant workers [55,56]. To our knowledge, only four studies focus broadly on all employees across the nation. The first, based on 2002 China Mainland Marketing Research Company data for 8,200 employees in 32 cities, identifies age, education, occupation, and personal income as the main determinants of job satisfaction [57], while the second [58], drawing on 2008 Chinese General Social Survey (CGSS) data for urban locals, first-generation migrants (born before 1980), and new-generation migrants (born 1980 or thereafter), pinpoints income and education. The third study, based on 2006 CGSS data, not only identifies lower job satisfaction among female employees than among male employees, but positively associates job satisfaction with higher levels of education and Communist Party membership [59]. It also demonstrates that job tenure, job security, earnings, promotion, and having a physically demanding job are significantly and positively correlated with job satisfaction for both sexes [59]. The final study [27], already referenced above, uses a combination of 2012 China Labor-Force Dynamic Survey (CLDS) data and 2012-2014 China Family Panel Studies (CFPS) data to document the relatively low Chinese worker job satisfaction and a significant job expectation gap, which reduces workers' ability to reach their perceived potential and greatly determines (low) job satisfaction. Although the number of cross-national analyses in this area is limited, one study [24], using data from the 1997 International Social Survey Program (ISSP), documents that 79.7% of employees in 21 countries report being fairly satisfied or satisfied with their job.
Such satisfaction is significantly impacted by work-role inputs and outputs, with having an interesting job and good relations with management being the major determinants. Subsequent work [35], based on data from phase two of the Collaborative International Study of Managerial Stress (CISMS 2), reports a significantly lower average job satisfaction for their Asian country cluster (7.9) than for their Anglo-Saxon (9.6), Eastern European (9.2), and Latin American (9.6) country clusters, with a 2-item job satisfaction measure ranging from 2 to 12. Their results support the assumption that the linkages between work demands and work interference with family (WIF) and between WIF and both job satisfaction and turnover intentions are stronger in individualistic Anglo-Saxon countries than in more collectivistic world regions, including Asia, Eastern Europe, and Latin America. Other research focuses either on specific subpopulations of the workforce or particular aspects, such as skills and benefits. For instance, one of these previous studies [31], using 1994-2001 European Community Household Panel (ECHP) data, demonstrates that self-employed workers are more likely than paid employees to be satisfied with their present job type but less likely to be satisfied with the corresponding job security. More recent work [60], using 2005 ISSP data for 32 countries, shows that women and mothers occupy more satisfying jobs in countries with more extensive workplace flexibility. As regards job skills, another more recent study [61], using Programme for the International Assessment of Adult Competencies (PIAAC) data for 17 OECD countries, reports that the impact of labor mismatches on job satisfaction is generally better explained by skills mismatch, although educational mismatches have a greater effect on wages. Lastly, drawing on Global Entrepreneurship Monitor (GEM) data, some literature reveals that although entrepreneurial innovation benefits the job satisfaction, work-family balance, and life satisfaction of entrepreneurs globally, in China it benefits only satisfaction with work-family balance and with life, not job satisfaction [62]. As this brief review underscores, with the notable exception of the recent study [27] mentioned above, not only are representative investigations into job satisfaction determinants in China rare, but, more importantly for our study, so are cross-national studies, especially ones addressing China's relatively low level of employee job satisfaction. We are also unaware of studies which explicitly assess job attributes, that is, the extent to which certain attributes are both present and valued. Hence, to expand understanding of this issue, we decompose the job satisfaction differences between China and several other country clusters to assess the universality and generalizability of particular determinants of job satisfaction and, importantly, the extent to which differing expectations about a job explain China's job satisfaction level.
Drawing on this 2015 data set, we analyze a sample of 17,938 individuals in 36 countries and regions: Australia, Austria, Belgium, Chile, China, Croatia, Czech Republic, Denmark, Estonia, Finland, France, Georgia, Germany, Great Britain, Hungary, Iceland, India, Israel, Japan, Latvia, Lithuania, Mexico, New Zealand, Norway, Philippines, Poland, Russia, Slovakia, Slovenia, South Africa, Spain, Suriname, Sweden, Switzerland, Taiwan (province of China), and the United States. The ISSP survey is usually included in other large surveys (with only a handful of countries conducting single surveys). In most countries, face-to-face interviews with multi-stage sampling were conducted (in some countries, such as Poland, questionnaires were self-completed with interviewer involvement). All surveys were conducted in the national language(s). Translations were evaluated by experts and, in some countries, by back-translation. Each country used a specific stratification strategy, with China, for example, using education, GDP per capita, and urbanization [63]. Our final sample excludes all self-employed workers to cover only those currently in paid employment (see S1 Table for summary statistics for the entire sample and for China only). Note that the sampling procedure differs somewhat in each country, and the main sampling quotas (i.e., age, gender and education) are based on the composition of the whole population of a respective country, not just on the labor force. Defining clusters of countries: When comparing China's job satisfaction with the job satisfaction in other nations, one must decide how to construct a comparison group. Several options are possible, including country-by-country comparisons, comparing China with "the rest of the world", or grouping countries according to some characteristics. In order to take account of the heterogeneity in job characteristics and job expectations among countries, and yet to provide insights in a summarized and tractable way, we have opted for clustering countries according to a few economic and sociodemographic characteristics. We partition the remaining 35 countries and regions into 3 clusters by using the k-means clustering algorithm [64]. The algorithm begins by randomly assigning each observation to one of the clusters; these random assignments serve as the initial clustering. For each of the clusters the algorithm then computes the cluster's centroid (the vector of the cluster's variable means) and assigns each observation to the cluster whose centroid is closest (where closest is defined using Euclidean distance). This process continues until the assignments no longer change [65]. To obtain a valid assignment of each country to a specific cluster, we run the algorithm 400 times, with each run starting from different initial cluster assignments. We base our cluster analysis on the country-specific mean values of certain variables within the data set, namely working hours, income in US dollars, age, years of education, family size and marital status. The obtained clusters are the following: • Cluster 1: Chile, Croatia, Czech Republic, Estonia, Georgia, Hungary, India, Latvia, Lithuania, Mexico, Philippines, Poland, Russia, Slovak Republic, Slovenia, South Africa, Spain, Suriname, Taiwan (province of China). • Cluster 2: Australia, Denmark, Iceland, Norway, Switzerland. • Cluster 3: Austria, Belgium, Finland, France, Germany, Israel, Japan, New Zealand, Sweden, Great Britain, United States.
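As a point of reference, the following Python sketch illustrates how a k-means partition of country-level means of this kind could be computed. The country_means DataFrame, its column names, and the standardization step are illustrative assumptions rather than the paper's actual code or variable names.

```python
# Illustrative sketch of clustering countries on their mean characteristics.
# Assumes a pandas DataFrame `country_means` with one row per country and
# hypothetical columns for the six clustering variables named in the text.
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

CLUSTER_VARS = ["working_hours", "income_usd", "age",
                "years_education", "family_size", "share_married"]

def cluster_countries(country_means: pd.DataFrame, k: int = 3) -> pd.Series:
    """Assign each country to one of k clusters via repeated-start k-means."""
    # Standardizing is an added assumption (not stated in the text); without it,
    # Euclidean distance would be dominated by the income variable.
    X = StandardScaler().fit_transform(country_means[CLUSTER_VARS])
    # init="random" with n_init=400 approximates the 400 runs with different
    # initial assignments described above; the best solution is kept.
    km = KMeans(n_clusters=k, init="random", n_init=400, random_state=0)
    labels = km.fit_predict(X)
    return pd.Series(labels, index=country_means.index, name="cluster")
```

In this sketch, scikit-learn's repeated initializations play the role of the 400 manual restarts described in the text, retaining the partition with the lowest within-cluster sum of squares.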
The largest Cluster 1 includes all Eastern European countries, Russia, the Baltic states, as well as a few other countries from Asia, Western Europe and South America. Cluster 2 primarily captures the Nordic countries, as well as Switzerland and Australia. Cluster 3 is made up of primarily Western European countries and the United States. Summary statistics for each cluster are presented in S2 Table. As can be seen in the summary statistics, Cluster 1 is characterized by a higher number of average working hours, as well as a larger average family size, compared to Clusters 2 and 3. The average monthly income and the educational level are substantially lower in Cluster 1 than in the other two clusters. The mean age in Cluster 2 is the highest among all three clusters. Cluster 2 also exhibits the most educated and (in terms of income) wealthiest population, while having the lowest number of weekly working hours. Only minor differences between Clusters 2 and 3 exist with regard to the average marital status and family size of the population. Although our objective is to construct homogeneous clusters based on economic and sociodemographic variables, we also conducted an analysis using the GLOBE country classification, which groups nations by cultural characteristics; however, the main conclusions remain unchanged. In a further sensitivity analysis, we also clustered countries according to their Human Development Index, a composite index of life expectancy, education, and per capita income at the country level. The main conclusions of this paper, however, remain unchanged. Country differences: Although the 7-point scaling of our job satisfaction measure might suggest a latent-variable estimation approach as the most appropriate, the bias introduced by an OLS analysis is relatively small [66], so we employ the standard OLS regression method applied in the majority of SWB studies [67]. Hence, to pinpoint the differences among countries, we estimate a series of linear regressions (OLS) of the following form: JS_i = β_0 + β_1 C_i + β_2 S_i + β_3 A_i + β_4 E_i + ε_i (1) where JS_i denotes the job satisfaction of individual i and C_i is the country dummy variable (with Germany as the reference group because its job satisfaction mean falls roughly mid-sample). S_i, A_i and E_i represent socioeconomic and demographic characteristics, work attributes, and work expectations, respectively, while ε_i is the error term. Job satisfaction is measured by the question, "How satisfied are you in your (main) job?" with responses measured on a 7-point scale from "1 = completely satisfied" to "7 = completely dissatisfied." For convenience of interpretation, we recode the values so that 7 reflects the highest job satisfaction and 1 the lowest. This job satisfaction measure, although based on only a single item, is empirically documented to be acceptable [68]. Work attributes. Based on prior literature and data availability, we use seven variables to capture work attributes: • Hours worked per week (including overtime). • Work time conditions: Based on the response to "Which statement best describes how your work hours are decided? 1 = fixed time, 2 = decide with limits, and 3 = free to decide," we create two dummy variables for 1 and 3, with 2 as the reference. • Daily work organization: Using responses to "How is your daily work organized?
1 = not free to decide, 2 = with certain limits, 3 = free to decide," we again formulate two dummy variables for 1 and 3, with 2 as the reference. • Work schedules: Based on responses to "Which statement best describes your usual working schedule in your main job? 1 = decided by the employer, 2 = scheduled with changes, 3 = regular schedule," we generate dummies for 1 and 3, with 2 as the reference. • Employer-employee relations: From responses to the question, "In general, how would you describe relations at your workplace between management and employees? 1 = very bad, 2 = quite bad, 3 = neither good nor bad, 4 = quite good, 5 = very good," we derive a 3-category coding of 1 = bad, 2 = neither good nor bad, 3 = good, from which we create dummies for 1 and 3, with 2 as the reference. • Relations between colleagues: We similarly recode the responses to "In general, how would you describe relations at your workplace between workmates/colleagues? 1 = very bad, 2 = quite bad, 3 = neither good nor bad, 4 = quite good, 5 = very good" as 1 = bad, 2 = neither good nor bad, 3 = good, and generate the two dummies for 1 and 3, with 2 as the reference. • Work pressure: From responses to "How often do you find your work stressful? 1 = never, 2 = hardly ever, 3 = sometimes, 4 = often, 5 = always," we derive a 4-category recoding of 1 = never, 2 = sometimes, 3 = often, 4 = always, and define three dummy variables for 1, 3, and 4, with 2 as the reference. Work expectations. We assess work expectations based on the discrepancy between personal importance (what is wanted) and perceived outcome (what is obtained) of a given work facet; namely, job security, income, job interest, promotion opportunities, work independence, usefulness to society, helping others, and contact with other people. We derive our variables from responses to related survey questions, all measured on a 5-point scale. Specifically, individuals are first asked to assess the importance of these job attributes on a scale ranging from 1 to 5 (from 1 = not important at all to 5 = very important). Thus, for example, the question related to job security is formulated as follows: "How important is job security?" Individuals are then asked to assess their current job on a corresponding 5-point scale (from 1 = strongly disagree to 5 = strongly agree). In the case of job security, the question is as follows: "How much do you agree or disagree that it applies to your job: my job is secure". We then calculate work expectations by subtracting the value depicting a characteristic's actual presence in the job from the value assigned to its importance, thereby capturing unmet expectations with variables valued from -4 to 4. Clearly, a negative value has a conceptually different meaning than a positive value. More specifically, a negative value indicates that a characteristic of the current job is more pronounced than the importance given to it, whereas a positive value is more akin to unmet expectations. In order to take these different concepts into account, our regressions include dummy variables for each characteristic that are equal to one if the difference is negative or zero, and zero otherwise. It should be noted that, for most job characteristics, values are seldom negative (less than 10% of observations). Only with regard to "contact with other people" do we have 36% negative values, indicating that about a third of workers have contact with other people but do not value this characteristic highly.
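To make the expectation-gap construction concrete, here is a minimal Python sketch under the definitions above; the DataFrame and the paired column names (for example, important_job_security and applies_job_security) are hypothetical placeholders, not the actual ISSP variable names.

```python
# Sketch of the work-expectation gap variables: for each facet, the gap is the
# stated importance (1-5) minus the perceived presence in the current job (1-5),
# so positive values indicate unmet expectations. A dummy flags gaps <= 0.
import pandas as pd

FACETS = ["job_security", "income", "job_interest", "promotion",
          "independence", "useful_to_society", "help_others", "contact_people"]

def add_expectation_gaps(df: pd.DataFrame) -> pd.DataFrame:
    out = df.copy()
    for facet in FACETS:
        importance = out[f"important_{facet}"]  # 1 = not important at all ... 5 = very important
        presence = out[f"applies_{facet}"]      # 1 = strongly disagree ... 5 = strongly agree
        out[f"gap_{facet}"] = importance - presence                        # ranges from -4 to 4
        out[f"gap_nonpos_{facet}"] = (importance <= presence).astype(int)  # dummy for gap <= 0
    return out
```

Both the gap variable and the non-positive-gap dummy would then enter the regression in Eq (1) as part of the work-expectation block E_i.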
--- Socioeconomic and demographic variables Our socioeconomic and demographic controls are those usually included in job satisfaction regressions [6,24]; namely, age, gender (a dummy equal to 1 for males, and 0 for females), education (measured by years of schooling), and family size. Marital status is recoded into three dummies for married, divorced, and widowed (with single as the reference). To capture personal income, we convert income data into a categorical variable based on a 3-point scale from 1 = low to 3 = high, with the top and bottom 25% of personal income defining a country's high and low levels, respectively, and the middle 50% designating the mid-level (with low as the reference category). Decomposing job satisfaction differences: To identify which specific determinants account for the job satisfaction gap between China and other countries, we employ a mean-based Blinder-Oaxaca (BO) decomposition [69,70] that assumes a linear and additive nexus between job satisfaction and a given set of characteristics. One advantage of BO decomposition over regression analysis is that it quantifies the contribution of specific factors that account for job satisfaction differences between China and a specific cluster. In our case, the total difference in mean job satisfaction can be decomposed as follows: Ȳ_C − Ȳ_Cl = (X̄_C − X̄_Cl)′ β̂_C + X̄_Cl′ (β̂_C − β̂_Cl) (2) where X̄ is a vector of the average values of the independent variables and β̂ is a vector of the coefficient estimates, for China (denoted by C) and a specific cluster (denoted by Cl). In Eq (2), the first (explained) term on the right indicates the contribution of a difference in the distribution of the determinants X, while the second (unexplained) term refers to the part attributable to a difference in the determinants' effects [71]. The second term thus captures all the potential effects of differences in unobservables. In keeping with the majority of previous research using decomposition [72], we focus on the explained terms and their disaggregated contribution for individual covariates, with a variable's contribution given by the average change in the function if that variable changes while all other variables remain the same. It is important to note that this decomposition does not reveal causal relations but rather decomposes the difference in job satisfaction between China and a given cluster by assessing differences in the observables associated with job satisfaction. These are merely associations and cannot establish the direction of a relationship. Thus, it is conceivable that certain expectations not only affect job satisfaction, but that job satisfaction may in turn affect expectations and the general assessment of a job. Hence, although we follow common practice in speaking of the "explained" part of the decomposition, we do so in full awareness that the analysis is not causal. --- Results As Table 1 shows, average levels of job satisfaction range from 5.786 in Austria to 4.342 in Japan, with China, at 4.745, ranking second worst and substantially lower than the sample mean of 5.322. It is interesting to note that the two Confucian Asia countries (Japan and China) are ranked last among the 36 countries. Japan's low average job satisfaction is particularly striking, being 0.403 points lower than that of China. The third Confucian Asia region, Taiwan, is also ranked quite low, yet its job satisfaction is a significant 0.430 points higher than that of China.
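As a purely illustrative aside on the method just described, the two-fold decomposition in Eq (2) can be sketched in a few lines of Python: separate OLS models are fitted for China and one comparison cluster, and the mean gap is split into its explained and unexplained parts. The data frames, outcome name and predictor list below are placeholders, not the study's actual code.

```python
# Rough sketch of a two-fold Blinder-Oaxaca decomposition (Eq 2): the mean gap
# in job satisfaction between China and a comparison cluster is split into an
# explained part (differences in average characteristics, weighted by China's
# coefficients) and an unexplained part (differences in coefficients).
import pandas as pd
import statsmodels.api as sm

def oaxaca_two_fold(df_china: pd.DataFrame, df_cluster: pd.DataFrame,
                    outcome: str, predictors: list):
    Xc = sm.add_constant(df_china[predictors])
    Xk = sm.add_constant(df_cluster[predictors])
    beta_c = sm.OLS(df_china[outcome], Xc).fit().params
    beta_k = sm.OLS(df_cluster[outcome], Xk).fit().params
    xbar_c, xbar_k = Xc.mean(), Xk.mean()
    explained = (xbar_c - xbar_k).dot(beta_c)    # (X̄_C − X̄_Cl)' β̂_C
    unexplained = xbar_k.dot(beta_c - beta_k)    # X̄_Cl' (β̂_C − β̂_Cl)
    total_gap = df_china[outcome].mean() - df_cluster[outcome].mean()
    return total_gap, explained, unexplained
```

Because each OLS model includes a constant, the explained and unexplained parts sum exactly to the total mean gap, and the per-covariate terms of the explained part correspond to the disaggregated contributions reported for individual variables.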
Taiwan also has a higher average job satisfaction than non-Confucian countries such as France, Australia and Poland. Taking Germany as the reference country, our Fig 1 comparison shows its average job satisfaction to be 0.7 points higher than that of China. When we then run a series of regressions to assess the extent to which socioeconomic and demographic variables, job attributes, and job expectations affect this ranking, we find that the socioeconomic and demographic variables make little difference, but job attributes and job expectations substantially reduce the size of China's coefficient (Fig 2). More specifically, whereas job attributes reduce the coefficient by 33%, adding in job expectations lowers it by 72%, meaning that these two sets of variables explain about two-thirds of the job satisfaction gap between Germany and China. Even after we control for these three variable sets, China's coefficient remains a significant -0.19, indicating that cultural differences (probably in answering subjective questions on well-being) may play a certain role in job satisfaction differences among countries. For a more in-depth explanation of China's markedly low levels of job satisfaction, we decompose the satisfaction differences between China and our three country clusters. The differences in average job satisfaction are presented in Fig 3; they reveal a substantially lower average for China, but relatively small differences among the three clusters. The results of the BO decomposition are presented in Table 2; they show that 30%-46% of the job satisfaction differences between China and the three clusters are associated with differences in socioeconomic and demographic characteristics, job attributes, and job expectations. More specifically, 31% of the gap between China and Cluster 1 is associated with differences in job attributes and job expectations, while 44% of the gap between China and Clusters 2 and 3 is associated with differences in job expectations. Thus, unmet job expectations appear to be a major driver of China's low levels of job satisfaction. We summarize the five variables that account for most of the job satisfaction differences between China and the three clusters in Table 3, and graph the job expectations gap in Fig 4. What is evident from both graphics is that unmet expectations for an interesting job are by far the most important variable, accounting for 19-34% of the job satisfaction difference (Table 3). In fact, as can be seen in the descriptive statistics in S1 and S2 Figs, although about 82% of the Chinese workers believe that having an interesting job is important, compared to 91%, 95%, and 93% of workers in Clusters 1-3, respectively, only 36% consider their jobs interesting, compared to 65%, 78%, and 74% in Clusters 1-3, respectively. Unmet expectations for income also matter, with about 94% of Chinese workers thinking it important to earn a high income, versus only about 90%, 70%, and 77% of workers in Clusters 1-3, respectively (S1 Fig).
Again, however, only about 23% of the Chinese sample agrees that the current position offers a high income, compared with 34% and 30% for Clusters 2 and 3, respectively (S2 Fig). Unmet expectations for income thus appear to have a greater influence on job satisfaction in China than in more Western economies. Another aspect that contributes to job satisfaction is the freedom to organize one's own daily work, which only 15% of Chinese workers report having, compared to 27%, 25%, and 26% in Clusters 1-3, respectively (S3 Fig). Even good relationships with colleagues, which 78% of Chinese workers report having, are significantly less common than the 85%, 90%, and 86% reported by workers in Clusters 1-3, respectively (S3 Fig). Good relations with the employer (at 66%) are also slightly lower in China than in other countries (74%, 73%, and 72% in Clusters 1-3, respectively). --- Discussion and conclusions Given the scarcity of cross-national job satisfaction research that includes China, the present analysis of 2015 ISSP data is most probably the first comprehensive comparison of job satisfaction in China with that in a large sample of other countries. As anticipated by a previous study [27], our results confirm that job satisfaction in China is substantially lower than in most of the other countries studied, ranking second to last of 36. By clustering these countries into three homogeneous groups based on observable economic and sociodemographic characteristics, we are able to identify several reasons for this relatively low job satisfaction, three of which are particularly important. The most notable driver of low job satisfaction across all comparison clusters is unmet expectations for how interesting a job should be. Although Chinese workers' expectations for this attribute are similar to those of workers in other countries, they consider their jobs substantially less interesting. This finding supports the claim that a large proportion of jobs fail to satisfy worker interests [27]. One possible cause may be the vertical relations (i.e., rigid top-down hierarchy and paternalism) that still dominate Chinese business organizations, which may hamper workers' ability to organize their own daily activities and stifle self-initiative, making the job less interesting. At the same time, however, as S1 Table shows, the share of workers who value the importance of job security (95%) and high income (94%) is larger than the share that values job interest (82%). Thus, Chinese workers, unlike those in our country clusters, value job security and a good income more than having an interesting job, implying that they would rather sacrifice personal interest for a well-paid, guaranteed position. It is therefore not surprising that most young people attending college in China today choose their majors based mainly on future job security and income considerations, and less on intrinsic interest [73]. The importance that Chinese workers place on a well-paying job, which is higher than in most Western countries, generates a second driver of dissatisfaction: the tendency for workers to judge their own current wage as inadequate. In fact, according to CLDS data, the financial aspect has become the most important job characteristic in China [27], an observation that contradicts the widely held belief that earnings are less of an intrinsic motivator in Confucian societies.
Of course, the per capita annual disposable income of residents in China was approximately 21,966 yuan (equivalent to US$3,527) in 2015, which is indeed lower than in most developed countries [74]. Nonetheless, individuals in all countries tend to assess their own incomes relative to those of their peers, which, given the dramatic increase in income disparity at various levels, could be contributing to the relatively low income satisfaction [75]. A third reason for job dissatisfaction identified by our decomposition analysis is the perception of relatively poor advancement opportunities, which is particularly pronounced in China, even though the amount of importance attributed to it differs little from that in other countries. Yet despite the importance attributed to advancement opportunities, only about 1 in 5 workers reports having a job that actually offers such development prospects (S2 Fig). Even though these unmet expectations for an interesting, well-paid job with attractive advancement opportunities can explain part of the job satisfaction gap between China and the other countries, a significant part remains unexplained. One briefly mentioned social aspect that should be emphasized here is that ways of responding to subjective questions on well-being may be culturally specific, making the Chinese workers' low job satisfaction ranking no more than an artefact unassociated with actual job characteristics. Although we cannot refute this argument, which is seemingly supported by the considerable share of the satisfaction gap that our variables cannot explain, the markedly higher levels of job satisfaction reported by Taiwanese workers (a frequent proxy for the Chinese because of a common language and Confucian philosophy) are compelling evidence against it. In fact, the difference in average job satisfaction between Taiwanese and Chinese workers of over 0.4 points is substantial (see Table 1). Our results do make a useful contribution to the economic convergence or divergence literature [76-78], which examines whether, as economies develop, work attitudes converge irrespective of cultural context into a universal stance or whether underlying values and belief systems engender significant differences in employee expectations and attitudes. Our results provide ample evidence for convergence in that Chinese workers attribute similar importance to most job attributes as workers in other countries (S1 Fig). Interestingly, one of the few notable intercountry differences concerns income, with Chinese workers placing more importance on a well-paying job than their Western counterparts. Nonetheless, even though Chinese workers expect an interesting job, higher pay, and advancement opportunities, this expectation stems less from differing work attitudes or values than from perceptions of what the current job offers. This convergence is further underscored by the importance of developing good relations with coworkers, deemed as important in China as elsewhere despite a lower probability of Chinese workers having a job that allows such development. In fact, relationships with both colleagues and employers in China are not as good as those reported in all three clusters (S3 Fig), a somewhat surprising finding given the group orientation and participative decision-making encouraged by China's collectivistic society.
Finally, cross-national studies such as ours are invaluable, "even indispensable," to the valid interpretation and generalizability of findings from research that, like the job satisfaction literature, tends to focus on Western countries and to test assumptions specific to a single culture or society. Not only does cross-national investigation ensure that "social structural regularities are not mere particularities, the product of some limited set of historical or cultural or political circumstances," it also forces researchers to "revise [their] interpretations to take account of cross-national differences and inconsistencies that could never be uncovered in single-nation research" [79] (p. 77). --- Data availability The data underlying the results presented in the study are available from http://issp.org/menu-top/home/. --- Author contributions Writing - original draft: Xing Zhang, Peng Nie, Alfonso Sousa-Poza. Writing - review & editing: Alfonso Sousa-Poza.
Using data from the 2015 International Social Survey Program (ISSP), this study conducts a multinational comparison of job satisfaction determinants and their drivers in 36 countries and regions, with particular attention to the reasons for relatively low job satisfaction among Chinese workers. Based on our results from a Blinder-Oaxaca decomposition analysis, we attribute a substantial portion of the job satisfaction differences between China and the other countries to different job attributes and expectations; in particular, to unmet job expectations for interesting work, high pay, and opportunities for advancement. We also note that, contrary to common belief, Chinese workers value similar attributes as Western workers but perceive their work conditions as very different from those in the West.
Background Prenatal care has the potential to address many pregnancy complications, concurrent illnesses and health problems [1]. An essential aspect of prenatal care models concerns the content of prenatal care, which is characterized by three main components: a) early and continuing risk assessment, b) health promotion (and facilitating informed choice) and c) medical and psychosocial interventions and follow-up [2,3]. Another essential aspect of prenatal care models concerns the number and timings of prenatal visits. While there is overall agreement on the importance of early initiation of prenatal care, the number of prenatal visits has led to a great deal of discussion. A Cochrane review of ten RCTs among mostly low-risk women concluded that the number of prenatal visits could be reduced without increasing adverse maternal and perinatal outcomes, although women in developed countries might be less satisfied with this reduced number of prenatal visits [4]. Despite universal healthcare insurance coverage in most industrialized western countries, studies in these countries have shown that non-western women make inadequate use of prenatal care. They are less likely to initiate prenatal care in good time [3,[5][6][7], attend all prenatal care appointments [8] and attend prenatal classes [9]. Furthermore, non-western women have also been shown to be at increased risk for adverse perinatal outcomes. A meta-analysis by Gagnon et al. showed that Asian, North African and sub-Saharan African migrants were at greater risk of feto-infant mortality than'majority' populations in western industrialized countries, with adjusted odds ratios of 1.29, 1.25 and 2.43 respectively. This study also found that Asian and sub-Saharan African migrants are at greater risk of preterm birth, with adjusted odds ratios of 1.14 and 1.29 respectively [10]. Besides an increased risk for adverse perinatal outcomes, non-western women are also at increased risk of adverse maternal outcomes, in terms of both mortality [11,12] and morbidity [13]. A few studies have implied a relationship between non-western women's higher risk of adverse pregnancy outcomes and their use of prenatal care. In a Dutch study conducted by Alderliesten et al., late start of prenatal care was one of the maternal substandard care factors of perinatal mortality that were more common among Surinamese and Moroccan women [14]. In a French study conducted by Philibert et al., the excess risk for postpartum maternal mortality among nonwestern women was associated with a poorer quality of care, suggesting attention should be paid to early enrolment in prenatal care [15]. This relationship emphasizes the importance of proper use of prenatal care to address pregnancy complications, concurrent illnesses and health problems. Two previously conducted reviews provide relevant insights into the factors affecting prenatal care utilization [16,17]. The first review focused on women, irrespective of origin, in high-income countries. Ethnicity, demographic factors, socioeconomic factors at the individual and neighbourhood level, health behaviour and provider characteristics were found to be determinants of inadequate prenatal care utilization [16]. The second review focused on first-generation migrant women of western and non-western origin in western industrialized countries. In this review, being younger than 20, poor or fair language proficiency and socioeconomic factors were reported to affect prenatal care utilization [17]. 
A review specifically focused on factors affecting prenatal care utilization by non-western women, irrespective of generation, was still lacking. Furthermore, qualitative studies -, which are well suited to exploring the experiences and perceptions that play a role in women's prenatal care utilization -were not included in previously conducted reviews. Also, these reviews were not restricted to countries with similar accessibility to healthcare, which complicates generalization of the results found. In this review, we therefore aimed to identify and summarize all reported factors, irrespective of study design, affecting non-western women's use of prenatal care and prenatal classes in industrialized western countries with universal insurance coverage. Prenatal (or antenatal) care was defined as all care given by professionals to monitor women's pregnancy. All courses preparing pregnant women for birth or teaching them how to feed and take care of their baby were defined as prenatal or antenatal classes. 'Factors' were defined as all experiences, needs, expectations, circumstances, characteristics and health beliefs of non-western women. --- Methods --- Search strategy The following databases were searched: PubMed, Embase, PsycINFO, Cochrane, Sociological Abstracts, Web of Science, Women's Studies International, MIDIRS, CINAHL, Scopus and the NIVEL catalogue. The search was limited to articles published between 1995 and July 2012. The search strategy consisted of a number of Medical Subject Headings (MeSH) terms and text words, aiming to include as many relevant papers as possible (Additional file 1). It was devised for use in PubMED, and was adapted for use in the other databases. The search was performed in all fields of PubMed (the main database) and in titles, abstracts and keywords for the other databases. No language restriction was applied. --- Methods of screening and selection criteria The initial screening of articles was based on titles, and the second based on titles and abstracts. Finally, the full texts of the articles were assessed for inclusion. Screening was done by five reviewers (WD, AF, TW, JM, AB). Each article was screened by two reviewers: one of the first four reviewers plus the fifth reviewer. For each article, any discrepancy between the two reviewers was resolved through discussion. The aim was to identify studies analysing or exploring factors affecting the use of prenatal care by non-western women in industrialized western countries. We therefore included studies if they (a) concerned prenatal care; (b) concerned factors affecting the use of prenatal care; (c) did not concern specific diseases during prenatal care, with the exception of pregnancy-related or postpartum conditions; (d) concerned industrialized western countries (high-income OECD countries except for Japan and Korea) with universal insurance coverage (resulting in exclusion of the USA); (e) concerned non-western women as clients (women from Turkey, Africa, Latin-America, Asia), with results presented at subgroup level; (f) did not concern illegal immigrants, refugees, asylum seekers, students or migrant farm workers (seasonal workers, internal migration); (g) were based on primary research (qualitative, quantitative, mixed methods or case studies). We have used the term 'non-western' women to mean immigrant women from the countries mentioned above, as well as their (immediate) descendants. Studies focusing on women from non-migrant ethnic minority groups (e.g. Aboriginals) were excluded. 
In the first two screening stages (titles and titles plus abstracts), studies were included when both reviewers agreed they were eligible for inclusion, or if there was doubt about whether or not to exclude them. In the final screening stage (full texts), studies were included when both reviewers felt they met all the inclusion criteria. --- Data extraction and quality appraisal The following information was abstracted from the included studies: (a) general information: authors, journal, publication date, country, language; (b) research design: qualitative, quantitative or mixed-methods design; (c) research population: ethnic group, immigrant generation, sampling method, sample size; (d) analytical approach; (e) all possible factors affecting the use of prenatal care; (f ) results and conclusions. The quality of the studies was assessed by two reviewers, using the Mixed Methods Appraisal Tool (MMAT-version 2011) [18]. This quality appraisal tool seems appropriate as it was designed to appraise complex literature reviews consisting of qualitative, quantitative and mixed-methods studies. Quantitative and qualitative studies are each appraised by four criteria with overall scores varying from 0% (no criterion met) to 100% (all four criteria met). For criteria partially met, we decided to give half of the criterion score. For mixed methods studies, three components are appraised: the qualitative component, the quantitative component and the mixed methods component. The overall score is determined by the lowest component score. --- Synthesis Because of the heterogeneity in terms of countries, nonwestern groups and methods of analysis, we chose not to conduct a meta-analysis for the quantitative results. Instead, we chose to produce a narrative synthesis of the results of the studies included. For that synthesis, we used the conceptual framework of Foets et al. 2007, an elaborated version of Andersen's healthcare utilization model (Figure 1) [19]. As this conceptual framework integrates the possible explanations for the relationship between ethnicity and healthcare use, it seemed the most appropriate. In this elaborated model the predisposing, enabling and need factors of Andersen are explained by two groups of underlying factors: individual factors and health service factors. The individual factors are subdivided into several categories: demographics and genetics, migration, culture, the position in the host country and social network. The health service factors are subdivided into: accessibility, expertise, personal treatment and communication, and professionally defined need. To fit the factors emerging from the data extraction, the category "demographics and genetics" was expanded to include pregnancy. 
This finally resulted in the following categories: Individual factors 1) Demographics, genetics and pregnancy: women's age, parity, planning and acceptance of pregnancy, pregnancy related health behaviour and perceived health during pregnancy 2) Migration: women's knowledge of/familiarity with the prenatal care services/system, experiences and expectations with prenatal care use in their country of origin, pregnancy status on arrival in the new industrialized western country 3) Culture: women's cultural practices, values and norms, acculturation, religious beliefs and views, language proficiency, beliefs about pregnancy and prenatal care 4) Position in the host country: women's education level, women's pregnancy-related knowledge, household arrangement, financial resources and income 5) Social network: size and degree of contact with social network, information and support from social network Health service factors 6) Accessibility: transport, opening hours, booking appointments, direct and indirect discrimination by the prenatal care providers 7) Expertise: prenatal care tailored to patients' needs and preferences 8) Treatment and communication: communication from prenatal care providers to women, personal treatment of women by prenatal care providers, availability of health promotion/information material, use of alternative means of communication 9) Professionally defined need: referral by general practitioners and other healthcare providers to prenatal care providers --- Results A total of 11954 articles were initially identified, of which 4488 were duplicates. Title screening of the remaining 7466 non-duplicate references resulted in 1844 relevant articles being selected for abstract screening. After abstract screening, 333 articles were selected for full text screening, either because they were relevant (230) or no abstract was available (103). Finally, full text assessment resulted in 16 peer-reviewed articles being included and their methodological quality being assessed (Figure 2). --- Characteristics of the included studies Additional file 2 provides an overview of the articles included. Three articles described quantitative observational studies: 2 cohort studies [20,21] and 1 crosssectional study [22] with methodological quality scores varying between 75% and 100%. Twelve articles described qualitative studies: seven individual interview studies [23][24][25][26][27][28][29], two focus group studies [30,31], two studies combining individual interviews and focus group interviews [32,33] and one study combining individual interviews and observations [34]. The methodological quality scores of eleven of these twelve qualitative studies varied between 50% and 100%, with the twelfth study scoring 25%. One study used mixed methods -combining a retrospective cohort design with focus groups [35]. Only the focus group yielded relevant information for this review. The methodological quality score of this study was 25%. The studies were conducted in various industrialized western countries. Nine studies were conducted in a European country [20,21,23,28,29,[31][32][33]35], four in Canada [22,25,27,30] and three in Australia [24,26,34]. Fourteen articles were published in English [20][21][22][24][25][26][27][28][29][30][31][32][33][34], one in German [23] and one in Italian [35]. The studies included women from different regions of the world. 
Three studies reported factors for sub-Saharan African women: Somali or Ghanaian [29,32,33]; eight for Asian women: South Asian [22], Sri Lankan [23], Filipino [26], Vietnamese [27], Indian [30], Thai [34] or a mixture of Asian origins [24,28]; and two for Turkish women [21,31]. One study reported factors for Muslim women not further specified [25]. Some studies reported factors for various non-western ethnic groups. One study reported factors for sub-Saharan African women (Ghanaian), North African women (Moroccan), Turkish women and other non-western women not further specified [20]. Another study reported factors for North African women (Northwest African women) and Asian women (Chinese) as part of a group of migrant women [35]. --- Barriers to prenatal care utilization All factors impeding the use of prenatal care were classified as barriers. The first column of Table 1 gives an overview of these factors according to the conceptual framework of Foets et al. Both quantitative and qualitative studies reported factors impeding non-western women's use of prenatal care. Demographic, genetic and pregnancy-related factors were only described in one quantitative study and in none of the qualitative studies. In this study multiparity, being younger than 20 and unplanned pregnancy were associated with late prenatal care entry [20]. On the other hand, expertise factors as well as personal treatment and communication factors were only described in qualitative studies. Care providers with a lack of knowledge of cultural practices were described as being unable to provide knowledgeable health guidance and more likely to display insensitive behaviour [25]. Interviews with caregivers revealed that Somali women perceiving themselves as being treated badly by a care provider would not return for antenatal care [33]. Poor communication complicated women's access to prenatal care [35], prevented attendance of prenatal classes [23] and was reported as an underlying problem in understanding maternity reproductive services [32]. Factors reported in both qualitative and quantitative studies concerned: migration, culture, position in the host country, social network and accessibility of prenatal care. -Migration-related factors: For Asian, Somali and Turkish women, as well as Muslim women otherwise unspecified, lack of knowledge of or information about the Western healthcare system was reported to deter utilization of prenatal care [26,27,30-32,35] or prenatal classes [22,23,25]. Arriving in the new country late in pregnancy was reported as another reason for not attending prenatal classes [22]. -Cultural factors: Adherence to cultural and religious practices was reported to impede prenatal care utilization by Asian and Muslim women.
Table 1 Overview of the factors according to the conceptual framework of Foets et al.

Individual factors

Demographics, genetics and pregnancy
- Barriers: being younger than 20 [20]*; multiparity [20]*; unplanned pregnancy [20]*
- Facilitators: none reported

Migration
- Barriers: lack of knowledge of or information about the Western healthcare system [22,23,25-27,30-32,35]; arriving in the new country late in pregnancy [22]*
- Facilitators: recognition of prenatal care as an important issue in the community [30]

Culture
- Barriers: adherence to cultural and religious practices [23,25,34]; poor language proficiency [20,22,24,26,27,30,31]; lack of assertiveness [24]; dependency on husband [22,34,35]; perceiving pregnancy as a normal state [29]; belief that prenatal care is more a burden than a benefit [25]; belief that prenatal classes are not necessary [22,34]
- Facilitators: care provider of the same ethnic origin [27]; belief that prenatal care ensures baby's well-being [23,34]; belief in looking after your own health for a healthy baby [34]

Position in host country
- Barriers: financial problems [22,23,31]; unemployment [21]*; low or intermediate educational level [20,21]*; social inequality (education, economic resources and residence (rural or urban)) [35]; lack of time [22,23,27,30]; lack of childcare [23,25]; no medical leave from work [31]
- Facilitators: better socio-economic follow-up [31]

Social network
- Barriers: no support from family [35]; acquiring or following advice from family and friends [22,23]; isolated community [35]
- Facilitators: husband with a good command of the industrialized country's official language [34]

Health service factors

Accessibility
- Barriers: inappropriate timing and incompatible opening hours [23,35]; transport and mobility problems [22,26,27,35]; indirect discrimination [32]
- Facilitators: none reported

Expertise
- Barriers: care provider lacking knowledge of cultural practices [25]
- Facilitators: a mature, experienced healthcare provider with a command of the native language [30]; care provider showing interest and respect [23]; care provider alleviating worries and fears [23]

Personal treatment and communication
- Barriers: poor communication [23,32,35]

Women entered prenatal care late because of shame about being undressed during consultations [23]. Prenatal classes were not attended because of feelings of fear and embarrassment about watching a video of the act of giving birth [34] and because classes were not exclusively designed for women [25]. Poor language proficiency was another cultural characteristic described as an impeding factor for prenatal care [20,22,24,27,30,31] and prenatal classes [26]. Lack of assertiveness appeared to make it difficult for Asian women to access maternity services and information. These women were too reluctant or ashamed to enquire about services or ask for information [24]. Dependency on the husband was described as complicating access to both prenatal care [35] and prenatal classes [22,34]. Pregnancy was perceived as a normal state by Somali women and some of them therefore did not understand the necessity of prenatal care [29]. Prenatal care was perceived as a burden more than a benefit because the same procedure is performed every time and doctors are too busy to provide pregnancy-related information [25]. Prenatal classes were perceived as not being necessary as women had already experienced birth [22,34] or attended classes previously [22]. -Factors related to women's position in the host country: Financial problems impeded the ability to pay for health insurance [31], access to medical care during pregnancy [22] and attendance of prenatal classes [23]. Unemployment was another characteristic that was identified.
In a Dutch study, enabling factors (including employment status) explained Turkish women's delayed entry into prenatal care [21]. In two studies, low or intermediate educational level was associated with late entry into prenatal care [20,21]. Social inequalities in education, economic resources and residence (rural or urban) among those who have immigrated were found to affect access to prenatal care [35]. Lack of time was reported as a reason for not attending prenatal classes [22,23,30] and as a barrier to accessing prenatal support from public health and community nurses [27]. Another reason for not attending prenatal classes was lack of childcare [23,25]. Turkish women in a Swiss study reported problems obtaining medical leave from work [31]. -Social network factors: Little or no support from family was described as complicating access to prenatal care [35]. Acquiring or following advice from family and friends was reported as a reason for not attending prenatal classes [22,23]. Isolation of the community was described as complicating Chinese women's access to prenatal care [35]. -Accessibility factors: Inappropriate timing was reported as a reason for not attending prenatal classes [23] while incompatible opening hours (incompatible with women's own working hours or those of their husband or accompanying persons) were reported to affect their access to prenatal care [35]. Transport and mobility problems were reported to complicate access to medical care during pregnancy [22], prenatal care [35] and prenatal classes [26,27]. Indirect discrimination also affected access to care. Somali women in a UK study reported that general practitioners would sometimes refuse to see them if they did not bring along an interpreter, and that they had to book appointments for secondary care three days in advance if interpretation services were needed [32]. --- Facilitators of prenatal care utilization All factors facilitating the use of prenatal care were classified as facilitators. The second column of Table 1 gives an overview of these factors according to the conceptual framework of Foets et al. These factors were only reported in qualitative studies and concerned: migration, culture, socioeconomic status, social network, treatment and communication. -Migration-related factors: To improve prenatal class attendance, women suggested recognition of prenatal care as an important issue in the community through mobilisation within their communities by word of mouth, radio and television [30]. -Cultural factors: Women felt that prenatal support provided by health workers or peers of the same ethnic origin would be beneficial to them [27]. Believing that prenatal care ensures babies' wellbeing was another characteristic that facilitated prenatal care utilization. In one study, prenatal care was perceived as an important aspect of pregnancy that could assure women about their babies' wellbeing [34], while in another study regular consultations reduced women's uncertainty or fear about the pregnancy or their babies' health [23]. Believing in looking after your own health for a healthy baby was also described as a reason for not missing any prenatal check-ups [34]. -Factors related to women's position in the host country: Women suggested better socioeconomic follow-up by institutions because socioeconomic conditions affected their ability to pay for health insurance [31].
-Social network factors: Women with a husband who spoke the industrialized country's official language reported that their husbands told them to attend antenatal check-ups and arranged antenatal care because they did not speak the country's language themselves [34]. -Expertise factors: Women recommended that healthcare providers facilitating prenatal care sessions should be mature women with experience of childbirth [30]. Care providers were expected to show respect by being interested and allowing for women's sense of shame about nudity [23]. They were also expected to alleviate worries and fears by giving women a sense of security through careful monitoring, assessment, supervising and by acknowledging women's fears and reassuring them [23]. -Personal treatment and communication factors: One of these factors was the use of women's native language. Women proposed more information in their native language [31], prenatal classes being conducted in their native language [27] and healthcare providers with a command of their native language [30]. Group prenatal care was described as being more accessible when practice midwives spoke several community languages [28]. Another characteristic was improved communication. Care providers or institutions were expected to provide translation [23,31], conversation space [23], and to make up for women's experience and knowledge by asking specific questions and giving customized information, demonstrations and explanations [23]. In one study, women reported a preference for audio-visual material over written information [27]. Women explained that the term "classes" suggests that they are ignorant about childbirth, and that prenatal classes should be called prenatal sessions to improve their attendance [30]. --- Discussion --- Factors affecting prenatal care utilization This review gives an overview of factors affecting nonwestern women's use of prenatal care in western societies. Therefore, 'factors' were described in the broadest sense, comprising experiences, needs and expectations, circumstances, characteristics and health beliefs of non-western women. The results indicate that non-western women's use of prenatal care is influenced by a variety of factors, and that several factors may simultaneously exert their effect. The categories migration, culture, position in the host country, social network, expertise of the care provider and personal treatment and communication were found to include both facilitating and impeding factors for nonwestern women's prenatal care utilization. The category demographics, genetics and pregnancy and the category accessibility of care only included impeding factors. The only aspect of the conceptual framework of Foets et al. that was not found in the studies included in this review was 'professionally defined need'. In a systematic review conducted by Feijen-de Jong et al., ethnic minority was found to be one of the determinants of inadequate prenatal care utilization in high income countries [16]. As ethnic minority status does of itself not explain prenatal care utilization, our review adds relevant information to the review by Feijen-de Jong and colleagues, and gives more insight into the factors behind these women's prenatal care utilization, at least for those of non-western origin. The demographic and socioeconomic factors found in our review are largely in line with the results of Feijen-de Jong et al.. 
However, we did not find any factors concerning pattern or type of prenatal care, planned place of birth, prior birth outcomes and health behaviour. Our results are also in line with the review by Heaman et al., who reported that demographic, socioeconomic and language factors affected prenatal care utilization by first generation migrant women [17]. In addition to these two reviews, we found several other factors at the individual and health service levels that impeded or facilitated nonwestern women's prenatal care utilization. To our knowledge, this is the first review of prenatal care utilization by non-western women that has combined quantitative, qualitative and mixed-methods studies. By doing this, we were able to find a very wide range of factors affecting non-western women's prenatal care utilization. This is clearly evident from the barriers. A comparison shows that the quantitative studies made a full contribution to inadequate users' demographic, genetic and pregnancy characteristics. All three factors in this category: namely being younger than 20, multiparity and unplanned pregnancy were derived from one quantitative study. The qualitative studies contributed fully to expertise factors as well as personal treatment and communication factors. Care providers lacking knowledge of cultural practices, poor communication and perceiving yourself as having been badly treated by a care providers were only derived from qualitative studies and the qualitative part of the mixed methods study. Besides providing all the barriers in a specific category, quantitative and qualitative studies also complemented each other by both providing barriers in the same category (migration, culture, position in the host country, social network, accessibility), sometimes even by means of the same barrier. The factors: lack of knowledge of or information about the Western healthcare system, poor language proficiency, dependency on husband, belief that prenatal care is not necessary, financial problems, lack of time, acquiring or following advice from family and friends, and transport and mobility factors were all reported in quantitative as well as qualitative studies. By combining different study designs, we were also able to provide more in-depth insight into the mechanisms of some factors. For instance, we obtained more insight into the mechanisms of the factor multiparity reported in two previous quantitative studies. Qualitative studies showed that multiparous women did not perceive prenatal classes as necessary because they had already given birth. Furthermore, multiparous women reported lack of childcare as a reason for not attending prenatal classes. Perhaps these two reasons also play a role in multiparous women's utilization of medical care during pregnancy. In the introduction, non-western women's risk for adverse pregnancy outcomes was described according to region of origin. By placing this review's findings in a regional perspective, some noteworthy insights were gained about factors affecting these high risk groups' health care utilization. As to individual barriers, lack of knowledge of the Western healthcare system was described among all four regional groups distinguished in this review (sub-Saharan African, North African, Asian and Turkish). Health beliefs were reported among sub-Saharan African (Somali) and Asian women. Dependency on husband was reported among Asian and North African women. 
However, adherence to cultural practices, acquiring or following advice from family and friends, lack of assertiveness and lack of time were only described in studies conducted among Asian women. As to health service barriers, accessibility factors were reported in studies conducted among Asian and North African women. On the other hand, expertise and personal treatment factors were only found among sub-Saharan African (Somali) women. These insights can be used to develop a more targeted approach towards specific groups, for example by placing emphasis on 'dependency on husband' for Asian and North African women, and 'personal treatment' for sub-Saharan African women. However, this should be done carefully. Some factors may seem to play no role for certain ethnic groups, while they were simply not included or discussed in these studies. The individual and health service facilitators were all derived from qualitative studies conducted among Asian women and Turkish women. Nevertheless, these facilitating factors can be applicable to other ethnic groups, as they relate to difficulties also reported by these groups (e.g. improved communication). Several factors such as lack of knowledge of or information about the western healthcare system, poor language proficiency and poor communication applied to women of various ethnic origins. On the other hand, some factors were highly specific to a country, culture or religion. Muslim women, for example, were found to refuse combined sessions with males while other women might have fewer gender issues. Extrapolation of such results to other groups is therefore less appropriate. The factors reported to facilitate prenatal care utilization were mostly suggestions made by women. As women based these suggestions on their own experiences with prenatal care, we decided to include these in our review. In a systematic review conducted by Simkhada et al., perceiving pregnancy as a normal state and seeing little direct benefit from antenatal care were reported as barriers to antenatal care utilization in developing countries [36]. In our review, we found somewhat similar impeding beliefs about prenatal care in two studies conducted among first generation women. Furthermore, Simkhada and colleagues reported unsupportive family and friends as a barrier to antenatal care utilization which was also found in our review. These similarities between non-western women in industrialized western countries and women in developing countries indicate that some women seem to continue to have certain beliefs, attitudes and needs they had prior to migration. A comparison between first and second generation non-western women would be very useful, but was not possible. Only one study included second generation women but presented the results in combination with first generation women. Even though we included only high-income countries with universally accessible healthcare, we found that financial factors did affect non-western women's prenatal care utilization. One explanation for this finding might be that women may not be aware of the universal accessibility of care, and therefore perceive lack of money as a barrier to prenatal care. It might also be that, even though women are currently legally resident (which was an inclusion criterion of our review), they reflect back on periods when this was not the case. --- Methodological reflections One noteworthy point is the large number of qualitative studies included in this review, as compared to quantitative studies.
During the review process, we identified several quantitative studies focusing on factors affecting prenatal care utilization by non-western women among their study population. Regrettably, we had to exclude most of these studies as they lacked a sub-analysis specifically for non-western women. By doing a sub-analysis specifically for non-western women in future quantitative studies on prenatal care utilization, more insights can be gained on factors affecting their use of prenatal care. The studies included in this review all considered different subgroups of non-western women. However, the immigrant generation of the women was not reported in five studies and factors were not specified according to generation in the only study that included first and second generation women. The factors found in the qualitative studies were mostly part of women's experiences, needs and expectations with prenatal care. These studies did not specifically focus on inadequate users, and therefore did not include a definition. On the contrary, two of the three quantitative studies defined inadequate use, but did so differently (Additional file 3). This difference in definition between the quantitative studies and the lack of definition in qualitative studies complicates comparison and integration of the study results. The included studies showed a large variance in methodological quality. Nevertheless, we decided not to exclude studies with a low quality score, in order to prevent loss of any relevant factors in this review. Instead we compared the results of the high and low methodological quality studies against each other, and did not find any contradictory results. Two main strengths of this study are the use of a broad search string and not applying a language restriction, to minimize the chance of missing relevant studies. Also the inclusion of quantitative, qualitative and mixedmethods studies adds to the strength, as this increases the chance of finding different types of relevant factors affecting prenatal care utilization. Another strength is the restriction to countries with universally accessible healthcare. Therefore, results are more comparable and generalizable to other countries with a similar organization of their healthcare system. The use of a theoretical framework to sort the factors found is another strength of the study, as this gives a clear overview of the factors and the level at which they exert their effect. --- Conclusions Sixteen studies heterogeneous in methodological quality were included in this review. A variety of factors at the individual and health service levels were found to affect non-western women's use of prenatal care. Lack of knowledge of the western healthcare system and poor language proficiency were the most frequently reported impeding factors, while provision of information and care in women's native language was the most frequently reported facilitating factor. The factors found could all be classified according to the conceptual framework of Foets et al., and covered all categories with the exception of 'professionally defined need'. The factors reported were mainly derived from qualitative studies, and more detailed quantitative research with sub-analyses for non-western women is needed to determine the magnitude of these factors' effects on prenatal care utilization. Furthermore, more qualitative studies specifically aimed at non-western women making inadequate use of prenatal care are necessary. 
The factors found in this review provide specific indications for identifying non-western women at risk of inadequate use of prenatal care, and developing interventions and adequate policy aiming at improving their prenatal care utilization. --- Additional files Additional file 1: Search strategy in PubMed. Additional file 2: Overview of the study characteristics. Additional file 3: Additional information of the included studies. --- Competing interests The authors declare that they have no competing interests. --- Authors' contributions All authors have made substantial contributions to this study. AB and WD developed the review with the support of TW, JM and AF. AB conducted the search, and all authors contributed to the screening, data extraction and quality assessment. The final version of the manuscript was read and approved by all authors.
Background: Despite the potential of prenatal care for addressing many pregnancy complications and concurrent health problems, non-western women in industrialized western countries more often make inadequate use of prenatal care than women from the majority population do. This study aimed to give a systematic review of factors affecting non-western women's use of prenatal care (both medical care and prenatal classes) in industrialized western countries. Methods: Eleven databases (PubMed, Embase, PsycINFO, Cochrane, Sociological Abstracts, Web of Science, Women's Studies International, MIDIRS, CINAHL, Scopus and the NIVEL catalogue) were searched for relevant peer-reviewed articles published between 1995 and July 2012. Qualitative as well as quantitative studies were included. Quality was assessed using the Mixed Methods Appraisal Tool. Factors identified were classified as impeding or facilitating, and categorized according to a conceptual framework, an elaborated version of Andersen's healthcare utilization model. Results: Sixteen articles provided relevant factors that were all categorized. A number of categories (migration, culture, position in host country, social network, expertise of the care provider and personal treatment and communication) were found to include both facilitating and impeding factors for non-western women's utilization of prenatal care. The category demographic, genetic and pregnancy characteristics and the category accessibility of care only included impeding factors. Lack of knowledge of the western healthcare system and poor language proficiency were the most frequently reported impeding factors. Provision of information and care in women's native languages was the most frequently reported facilitating factor. Conclusions: The factors found in this review provide specific indications for identifying non-western women who are at risk of not using prenatal care adequately and for developing interventions and appropriate policy aimed at improving their prenatal care utilization.
INTRODUCTION Mental health problems are prevalent among young people, and preventive approaches have gained traction to improve their mental health [1]-[3]. Adolescents are particularly vulnerable to mental health difficulties, and there are barriers to support, including capacity difficulties, stigma, and lack of tailored services [4]. Research shows weaknesses in young people's knowledge and beliefs about mental health and mental health support, as well as the historical accumulation of stigmatizing attitudes. Research is also lacking on young people's desire for support [5]. Preventive psychiatry is a potential transformative strategy to reduce the incidence of mental disorders in young people [6], [7]. Selective approaches mostly target familial vulnerability and exposure to non-genetic risks. Selective screening and psychological/psychoeducational interventions in vulnerable subgroups may improve anxiety/depression symptoms, but their effectiveness in reducing the incidence of psychotic/bipolar/general mental disorders has not been proven [8]. Psychoeducational interventions can universally improve anxiety symptoms but do not prevent depression/anxiety disorders, while physical exercise can universally reduce the incidence of anxiety disorders [4]. The COVID-19 pandemic has highlighted the link between education and health, and school closures are most likely associated with significant health disruptions for children and adolescents [9], [10]. A systematic review [11] of the available evidence was conducted to inform policy decisions regarding school closures and reopenings during the pandemic. The review found that mental health was significantly impacted by school closures, with 27 studies identifying a considerable impact. A growing number of digital health treatments have been created to address a variety of mental health disorders. Digital health technologies are seen as promising for treating mental health among adolescents and young people. In comparison to usual care or inactive controls, a systematic review [12] of recent evidence on digital health interventions aimed at adolescents and young people with mental health conditions found that they were effective in addressing mental health conditions. However, the quality of evidence is generally low, and there is a lack of evidence on the cost-effectiveness and generalizability of interventions to low-resource settings. According to a systematic review [12] it is estimated that 1 in 5 adolescents experience a mental health disorder each year. The most common mental health problems studied in young people are depression and difficulties related to mood, anxiety, and social/behavioral problems [4]. The study [12] also found that digital health technologies are considered promising for addressing mental health among adolescents and young people, and there are a growing number of digital health interventions targeting this population. Preventive approaches have gained traction for improving mental health in young people, and there is evidence supporting primary prevention of psychotic, bipolar, and general mental disorders as well as promotion of good mental health as a potential transformative strategy to reduce the incidence of these disorders in young people [4]. In a review [13], the most common mental health problems investigated in adolescents with physical disabilities beginning in childhood were depression and difficulties related to mood, anxiety, and social/behavioral problems.
Adolescents believe that mental health concerns are a typical occurrence, and the rise in these issues is linked to pressures regarding academic success, social media, and more candor regarding mental health issues [14]. Prejudices, preconceptions, hearsay, and gender norms are all regarded as significant risk factors for mental health issues. Prejudice towards persons with mental health issues is thought to stem from ignorance [15], [16]. In young Australians, harmful alcohol use is linked to mental health issues and other risky behaviors [17]. Indonesia is also experiencing a rise in mental health issues among young people, brought on by expectations connected to academic success, social media, and more candor regarding mental health issues [10], [18], [19]. Prejudice, stereotyping, and gender norms are all key risk factors for mental health issues [14]. Having a physical disability during adolescence and young adulthood increases the risk of developing mental illness [13]. Selective approaches mostly target familial vulnerability and non-genetic risk exposure, while universal psychoeducational interventions can improve anxiety symptoms but do not prevent depression/anxiety disorders [20]. Approaches that target school climate or social determinants of mental disorders have the greatest potential to reduce the risk profile of the population as a whole [4]. Social media can offer a space for people to share stories of times they are experiencing difficulties and seek support for mental health issues [21]. However, the use of social media can cause young people to experience conditions such as anxiety, stress, and depression [22]. The detrimental effects of social media use on young people's mental health can be caused by a variety of factors, including increased screen time, cyberbullying, and social comparison [23]. The impact of COVID-19 on young people's mental health has also been discussed in relation to social media use [9]. Mental health practitioners have recommended that the use of digital technology and social media be explored routinely during mental health clinical consultations with young people [12]. It is important to identify barriers to effective communication and examples of good practice in talking about young people's web-based activities related to their mental health during clinical consultations. Several studies have demonstrated that using social media can have detrimental effects on mental health, including depression, anxiety, and suicidal ideation, particularly for those who spend more than 2 hours per day on social networking sites [24], [25]. Bullying on social media can also fuel the development of mental health problems and feelings of sadness [26]. The usage of social media, however, has been linked to positive effects on young people's mental health, including social support and a decrease in feelings of loneliness [25]. Research into the connection between highly visual social media and young people's mental health is ongoing, but the results are conflicting and there are few studies that focus just on highly visual social media [21], [27]. Schools, parents, social media and advertising companies, and governments have a responsibility to protect children and adolescents from harm and educate them on how to use social media safely and responsibly [28].
Parents can educate their children on how to use social media safely and responsibly, including setting boundaries such as limiting access to technology in bedrooms and at mealtimes [29]. Parents can also be good role models by not engaging in excessive social media use themselves and by modeling positive habits [30]. In addition, parents can help their children choose reputable sources of support that can be accessed through social media, such as groups for parents and caregivers of children with cancer [29], [31], [32]. Schools, social media and advertising companies, and governments also have a responsibility to protect children and adolescents from harm and educate them on how to use social media safely and responsibly [33]. The events that occurred in 2020/2021, such as the ongoing climate emergency, bushfires in Australia and the COVID-19 pandemic, reflect the human-caused environmental issues that young people are most concerned about and also exacerbate the mental health issues that young people had already reported as being at crisis point in 2019 [34]. A study found that environmental factors, such as perception of the surrounding environment, can significantly predict mental health indicators in young people aged 15 to 17 years [35]. Another study found that self-esteem mediates the impact of epilepsy-specific factors and environmental factors on mental health outcomes in young people with epilepsy [36]. It is very important to listen to adolescent views on mental health issues because these problems are common among young people, and exposure to stigmatization is an additional burden, leading to increased suffering [14], [17]. Social media is a huge force in young people's lives with far-reaching effects on their development, and little research has been done on the impact of social media on young people's mental illness [24]. The relationship between highly visual social media and young people's mental health remains unclear, and there is still little data exclusively examining highly visual social media [25]. Social media use can negatively impact mental health and lead to addiction, but it can also help people to stay connected with friends and family during the COVID-19 pandemic [9], [10]. The impact of COVID-19 on young people's mental health has been a concern, and young people's discussions on social media about the impact of COVID-19 on their mental health have been analyzed thematically [37]. Research on how social media affects young people's mental health is, however, lacking. Social media users have experienced both good and bad effects as a result of the COVID-19 pandemic [28], [37]. Although parents, social media and advertising companies also have a duty to protect children and adolescents from harm, schools play a significant role in teaching young people how to use social media safely and responsibly [11]. It is important to understand the psychological effects of COVID-19 on young people and how these effects fit into the pre-existing social environment. Therefore, there is a pressing need for more research on how social media use and the surrounding environment affect young people's mental health in Sukabumi. This will help us understand the potential risks and benefits of social media use and help us create the right kind of support for young people's mental health. --- LITERATURE REVIEW --- Social Media Use and Mental Health A number of studies have reported a correlation between social media use and poor mental health among young people.
Research conducted by [21], [30] found that participants who used social media (Instagram and Facebook) for a week reported decreased subjective well-being and increased feelings of loneliness and isolation. Similar findings were made by Woods and Scott (2016), who discovered that heavy social media use was linked to more severe anxiety and depressive symptoms. The link between social media usage and mental health consequences is complicated, though, and not all studies have shown adverse correlations. For instance, studies [16], [22] revealed that teen usage of social media was not linked to depressive symptoms. In addition, several studies have found that social media use can have a positive effect on mental health outcomes, such as increased social support and self-esteem [3]. --- Environmental Factors and Mental Health Environmental factors were also found to play an important role in mental health outcomes among young people. For example, a study by [38] found that exposure to green space is associated with lower stress levels and better mental health outcomes. Similarly, a study by [39], [40] found that exposure to the natural environment was associated with increased attention capacity and reduced ADHD symptoms among children. Conversely, exposure to negative environmental factors, such as air pollution and noise pollution, has been found to have a negative impact on mental health outcomes. [39], [41] found that exposure to air pollution was associated with an increased risk of depression and anxiety symptoms among adolescents. --- METHODS This study used a cross-sectional study design to examine the relationship between social media use, environmental factors, and mental health outcomes among young people in Sukabumi. A cross-sectional study is a type of observational research design that collects data at a single point in time. This research design is useful for investigating the prevalence of a particular phenomenon, as well as examining relationships between variables [42]. The participants for this study were 400 young people aged between 18 and 24 years living in Sukabumi. We used convenience sampling to recruit participants from local universities and community organizations. The inclusion criteria for participating in the study were:
1. Aged between 18 and 24 years
2. Residing in Sukabumi
3. Regularly using social media platforms
4. Willing to participate in this research.
--- RESULTS AND DISCUSSION --- Sample Characteristics A total of 400 young people between the ages of 18 and 24 participated in the study. The mean age of the sample was 21.2 years (SD = 1.5), and 60% of the sample identified as female. The majority of the sample (78%) were college students, and 68% reported living in urban areas. Participants reported using social media platforms an average of 3.6 hours per day (SD = 1.8). The most used platform was Instagram (78%), followed by Facebook (67%), and WhatsApp (52%). Participants reported moderate levels of environmental exposure to air pollution (M=3.4, SD=0.8) and noise pollution (M=3.3, SD=0.9), and low levels of exposure to green space (M=2.1, SD=0.6). Participants reported moderate levels of depression (M = 12.6, SD = 6.7), anxiety (M = 10.8, SD = 6.2), and stress (M = 13.1, SD = 7.1) over the past week. --- Multiple Regression Analysis To test the association between social media use, environmental factors, and mental health outcomes, multiple regression analyses were performed.
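As a rough illustration of the analysis reported in this section, the sketch below specifies one such regression with the statsmodels formula API. The column names (e.g., social_media_hours, air_pollution) and the input file are hypothetical; this is not the authors' actual code or data.

```python
# Minimal sketch of the multiple regression described in this section.
# Column names and the example outcome (depression) are assumptions for illustration.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("sukabumi_survey.csv")  # hypothetical file with one row per participant

model = smf.ols(
    "depression ~ social_media_hours + air_pollution + noise_pollution"
    " + green_space + age + C(gender)",
    data=df,
).fit()
print(model.summary())   # coefficients, overall F test, R-squared
# The same specification would be refit with anxiety and stress as the outcomes.
```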
Age and gender were included as control variables in the analysis. The overall regression model was statistically significant (F(5, 194) = 23.87, p < .001), indicating that the predictors explained a large amount of variance in mental health outcomes. According to the results of the regression study, social media usage, air pollution, noise pollution, and green open spaces were all significant predictors of mental health outcomes. In particular, more frequent use of social media was linked to higher levels of stress, anxiety, and depression. Higher exposure to air and noise pollution was also linked to higher levels of depression, anxiety, and stress. On the other hand, more exposure to green space was linked to reduced levels of stress, anxiety, and depression. In addition, gender played a significant role in predicting mental health outcomes, with women reporting higher levels of stress, anxiety, and depression than men. Age did not significantly predict mental health outcomes. --- Discussion According to the study's findings, social media usage, contextual variables, and gender are significant indicators of young people's mental health in Sukabumi. In particular, more frequent use of social media was linked to higher levels of stress, anxiety, and depression. These results are in line with earlier studies that found social media usage to be a risk factor for young people's poor mental health outcomes [10], [16], [21], [22], [26], [30]. The findings also revealed that environmental variables, including noise pollution, air pollution, and green open spaces, were significant predictors of mental health outcomes. More specifically, increased exposure to green space was linked to lower levels of depression, anxiety, and stress, whereas higher exposure to air and noise pollution was linked to higher levels of these symptoms. These results are in line with other studies that found environmental variables to be significant predictors of outcomes related to mental health [38], [41], [43]. Another important predictor of mental health outcomes was found to be gender, with women reporting greater levels of stress, anxiety, and depression than men. These results are in line with other research that found gender variations in mental health outcomes [2]. Overall, these findings emphasize the significance of taking social media usage, contextual variables, and gender into account when analyzing mental health outcomes among young people in Sukabumi. The results suggest that interventions aimed at promoting mental health among young people should consider addressing social media use, environmental factors, and gender-related factors. --- CONCLUSION In conclusion, this study provides evidence that social media use, environmental factors, and gender are important predictors of mental health among young people in Sukabumi. These findings suggest that interventions aimed at promoting mental health among young people should consider addressing social media use, environmental factors, and gender-related factors. Future studies using longitudinal designs may provide more definitive evidence of causal links between these factors and mental health outcomes.
This study looked into how social media use and outside influences affected young people's mental health in Sukabumi, Indonesia. 400 young individuals between the ages of 18 and 25 participated in a cross-sectional survey in which information on social media use, environmental exposure, and mental health outcomes (such as depression, anxiety, and stress) was gathered. According to the findings, increased social media use was linked to greater levels of stress, anxiety, and depression, but exposure to environmental elements including noise, air pollution, and green open spaces was found to be a significant predictor of mental health outcomes. In particular, increased exposure to green space was linked to lower levels of sadness, anxiety, and stress whereas higher exposure to air and noise pollution was linked to higher levels of these emotions. Gender was also found to be a significant predictor, with women reporting higher levels of depression, anxiety, and stress than men. These findings highlight the importance of considering the role of social media use, environmental factors, and gender in understanding mental health outcomes among young people in Sukabumi. Interventions aimed at promoting mental health among young people should consider social media use, environmental factors, and gender-related factors. Limitations of the study include a cross-sectional design and limited generalizations to other populations.
Introduction Systematic socioeconomic inequalities in health persist and continue to widen within many economically prosperous countries across the globe [1,2]. The socioeconomic gradient in health remains one of the main challenges for public health as socioeconomically disadvantaged individuals have a lower life expectancy and a higher risk of developing life-limiting illnesses, such as diabetes and cardiovascular disease, compared to their advantaged counterparts [3,4]. The theories and frameworks developed to understand the causes of and solutions to the socioeconomic gradient in health are undoubtedly complex. For example, the World Health Organization's (WHO) Commission on the Social Determinants of Health (CSDH) developed a conceptual framework to illustrate the relationship between the social determinants of health and equity in health and wellbeing, which was multi-level and contained feedback loops [5]. The CSDH framework highlights the multi-faceted nature of inequality from the impact of the socioeconomic and political context to psychosocial factors and biology. Thus, there is an increasing recognition that health inequality is a complex or 'wicked' problem and systems simulation models are a useful tool to understand the underlying causes and mechanisms [6]. Complex systems are systems which consist of interacting parts or subsystems. Key characteristics of complex systems include dynamics that result in adaptation to change, non-linear relationships, feedback loops, tipping points, and the emergence of macro-phenomena from interactions at the micro level (see, e.g., CECAN 2018) [7]. It is difficult to capture these relationships using a traditional epidemiological "risk factor" approach which uses linear reductionist models to test the relationships between decontextualised dependent and independent variables [8]. Agent-based modelling (ABM), a well-established methodological approach used widely in the field of social science, has been highlighted as an approach that can be used to address this problem [6]. ABM involves simulating the actions and interactions of individual agents with other agents and their environment based on a set of specified rules and observing emergent phenomena [9]. Agents may adapt their own behaviour in response to previous behaviour, their social network, or environmental stimuli [9]. Not only can ABMs be used to understand complex phenomena, they can also be used to test the impact of policy interventions and inform policy decisions, and they have been successfully applied in other areas of public health, particularly for the control of infectious diseases [10]. ABM has been used successfully to understand the causes of inequality more broadly outside the field of public health. Famously, the Schelling model of segregation showed that residential segregation is generated in the presence of relatively simple nearest-neighbour preferences, and could be used to understand racial segregation patterns in the USA [11]. Additionally, the Sugarscape model developed by Epstein and Axtell has offered insights into the generation of wealth inequality using a relatively simple model which simulates a landscape on which sugar grows and can be harvested by individuals to become their wealth [12,13].
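As a loose illustration only, and not Epstein and Axtell's actual specification (which the next paragraph describes further), a toy harvest-and-accumulate rule of this kind already tends to produce an unequal wealth distribution from equal starting conditions. All parameters below are arbitrary.

```python
# Toy sketch of a Sugarscape-like dynamic: equal starting wealth, simple local
# harvesting on a heterogeneous landscape, and a skewed wealth distribution emerging.
import numpy as np

rng = np.random.default_rng(0)
n_agents, n_cells, n_steps = 100, 500, 200

sugar = rng.uniform(0, 2, size=n_cells)        # heterogeneous landscape
position = rng.integers(0, n_cells, size=n_agents)
wealth = np.zeros(n_agents)                    # every agent starts with equal (zero) wealth

for _ in range(n_steps):
    for i in range(n_agents):
        # look at the current cell and its neighbours, move to the richest one
        nearby = (position[i] + np.array([-1, 0, 1])) % n_cells
        best = nearby[np.argmax(sugar[nearby])]
        position[i] = best
        wealth[i] += sugar[best]               # harvest and accumulate
        sugar[best] = 0                        # the cell is depleted
    sugar += 0.05                              # sugar slowly grows back everywhere

top_share = np.sort(wealth)[-(n_agents // 10):].sum() / wealth.sum()
print(f"richest 10% of agents hold {top_share:.0%} of total wealth")
```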
Individuals in the simulation are programmed to harvest the sugar closest to them; strikingly, even when the wealth available to all individuals at the beginning of the simulation is equal, trends in wealth inequality are produced even after a short simulation period. Additionally, only a very small proportion of individuals have high levels of wealth, while a much larger proportion have low levels of wealth. These models, alongside many others developed in the field of social science, have illustrated the benefits of using ABM to understand complex observable phenomena. A review by Speybroeck and colleagues, covering research published before January 2013, explored how simulation models had been used in the field of socioeconomic inequalities in health specifically [14]. They found only four ABM studies, which focused on understanding differences in health behaviour or infectious disease transmission between socioeconomic groups. Speybroeck and colleagues concluded that ABM is the most appropriate computational modelling method to examine health inequalities as they can incorporate all the characteristics of a complex system such as the heterogeneity, interactions, feedback, and emergence [14]. However, while the four identified models contained many of the expected features of ABM (e.g., multi-level, dynamic, and stochastic), the Speybroeck review concluded that to better understand the complex mechanisms underlying health inequalities, more ABM that features feedback loops, temporal changes, and agent-agent and agent-environment interactions are required. Since the Speybroeck review, there has been a methodological shift towards using complex system methods in public health and public policy, much supported by large investments in data accessibility and computing power. In the UK, this is also reflected in the Medical Research Council's updated guidance for the development and evaluation of complex interventions [15] and the Her Majesty Treasury's Magenta Book Annex "Handling Complexity in Policy Evaluation", both published in 2021 [16]. This methodological turn has resulted in a significant increase in computational modelling papers in the public health literature in recent years; therefore, it is now timely to update and deepen the previous review. Here, we focus on the contribution of ABM to understand the socioeconomic inequalities in health specifically, by reviewing the application area (e.g., the inequality mechanisms studied, the choice of the health outcome(s), and the measure of socioeconomic position), and the details of the ABM approach (e.g., the represented complexity features and whether models have been validated). The aim of this review was to synthesise the growing evidence based on the use of ABM in the field of health inequalities research. --- Materials and Methods We followed the guidelines of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) [17]. The protocol for this review was developed and registered on the International Prospective Register of Systematic Reviews (protocol registration PROSPERO 2022 CRD42022301797). PubMed, Scopus, and Web of Science were searched from 1 January 2013 to 15 November 2022. The Scopus search was limited by subject area to Medicine; Social Sciences; Computer Science; Multidisciplinary; Mathematics; Nursing; Economics, Econometrics and Finance; Neuroscience; Health Professions; Psychology; Decision Sciences; and Engineering. 
For Web of Science, searches were made of the editions of Science Citation Index Expanded and Social Sciences Citation Index. For both Web of Science and PubMed, only the titles and abstracts were searched. An extensive list of search terms was used (see Table S1 in Supplementary Materials) to capture the themes of simulation modelling, socioeconomic inequality, and health. The search strategy was validated against that used in the Speybroeck review [14], confirming that all ABM studies included in that review also appeared using our search strategy. --- Eligibility Criteria Table 1 lists the inclusion criteria for this review; these criteria cover the population, exposures, comparisons, outcomes, and study designs (PECOS) required for a study to be eligible for inclusion. Studies were included if they: (i) were full papers published in English, and (ii) described an ABM study with the purpose of understanding the emergence and/or persistence of health inequalities in relation to either non-communicable disease or the differential response of different socioeconomic groups to health-related interventions. Papers were only included if they simulated human individuals or groups and investigated within-country socioeconomic inequalities (using measures such as the socioeconomic position, income, and education) in health, restricted to the differences in the health status, health behaviour, or access to healthcare. Papers in which healthy food access was modelled as a proxy for the consumption of healthy food were also included. Studies that developed ABM in combination with system dynamics or population-based models were included. There were no geographical restrictions. Papers that modelled communicable diseases or water or food access/security as the health outcomes were outside the scope of this review and were therefore excluded. The studies published before 2013 were also excluded as these studies were covered in the Speybroeck review [14]. --- Screening Searching returned a total of 2533 records. All the records were downloaded to EndNote X9 and imported to the EPPI-Reviewer. The total records were reduced to 1436 following the removal of duplicates. An initial screening was carried out by one reviewer (RW). Following title screening, 477 records were identified for abstract screening. A second reviewer (JB) independently double-screened a randomly selected subset of abstracts (20%). After title and abstract screening, 51 records were selected for full-text screening and 18 of these met the eligibility criteria for data synthesis (Figure 1). The second reviewer (JB) also independently screened all the selected full-text studies to validate that the included papers met all the eligibility criteria. Any disagreements were recorded and discussed to ensure consistency. Two further reviewers (CE and AH) assisted with the screening for papers queried on methodological grounds (n = 29), in instances where it was uncertain whether a simulation model met the inclusion criteria. Manual reference searching identified two additional papers which met the inclusion criteria, giving a final sample of 20 included studies.
--- Data Extraction Data from the papers were extracted by one reviewer (RW). A second reviewer (JB) assessed the accuracy of the data extraction for all the included studies. In the case of a disagreement, both reviewers referred to the paper in question, and a consensus was reached. A data extraction matrix was developed which included the basic characteristics of the studies (the year, location, and study's aims), variables modelled (socioeconomic measure and health outcome), model characteristics (multi-level, dynamic, feedback loop, stochastic, spatial, heterogeneous, agent-agent interaction, and adaptation to environment), if and how the model was validated, the model's function (framework development and/or to test an intervention/scenario), and the relevant findings. The model's characteristics were not always explicit but could be derived from the methods section. The relevant findings were defined as those related to health or intervention outcomes stratified by a measure of the socioeconomic position. --- Quality Assessment Given the lack of an appropriate quality assessment or a risk of bias assessment tool to assess ABM, a quality assessment was not conducted, but we recorded the compliance with the reporting guidelines of the ODD (the overview, design concepts, and details) [18]. --- Analysis Descriptive summary statistics were used to describe the search results and study characteristics. We describe the specific modelling details of the included studies using a narrative synthesis in which we group models based on the health outcome. --- Results --- Descriptive Analysis The study characteristics for the 20 included papers are displayed in Table 2. The most common geographical settings for the models were the USA (n = 7) and the UK (n = 4). The other models were set in the Netherlands, Mexico, India, South Korea, Canada, and Japan. Only two models were abstract and did not have a geographical setting. Most of the included models were set at the city level (n = 10), other settings included the national (n = 5), state (n = 2), and district level (n = 1). Most of the included papers described the ABM of the socioeconomic differences in health behaviour (n = 14). Three papers focused on explaining the socioeconomic differences in the physical health outcomes and three papers modelled a mental health outcome. The measures of the socioeconomic position covered the income (n = 14), educational attainment (n = 4), social grade (n = 2), and wealth (n = 1). All of the included models were multi-level (they represented both individuals and structural entities), dynamic (captured changes over time), stochastic (based on probabilities), and had heterogeneous agents.
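The extraction matrix and model characteristics described above lend themselves to a simple structured record. The sketch below is one possible encoding; the field names mirror the items listed in the text but are not the authors' actual schema.

```python
# Illustrative encoding of the data extraction matrix described above (hypothetical schema).
from dataclasses import dataclass

@dataclass
class IncludedStudy:
    year: int
    location: str                      # e.g., city, state, or national setting
    aims: str
    ses_measure: str                   # income, educational attainment, social grade, wealth
    health_outcome: str                # health behaviour, physical or mental health outcome
    multi_level: bool = True           # complexity features recorded for each model
    dynamic: bool = True
    stochastic: bool = True
    feedback_loop: bool = False
    spatial: bool = False
    heterogeneous_agents: bool = True
    agent_agent_interaction: bool = False
    agent_environment_interaction: bool = False
    validated: bool = False
    functions: tuple = ("framework",)  # and/or "intervention"
    findings: str = ""

def feature_counts(studies):
    """Tally how many included studies exhibit each complexity feature."""
    flags = ["feedback_loop", "spatial", "agent_agent_interaction",
             "agent_environment_interaction", "validated"]
    return {f: sum(getattr(s, f) for s in studies) for f in flags}
```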
Most models represented both the individuals and the environment, with environmental features such as shops, green spaces, and public transport. Often, in the models, agents could age, die, and change their behaviour over the course of the simulation. Only three papers used the ODD reporting guidelines when writing descriptions of their ABM [18]. [Table 2 abbreviations: ML = multi-level; D = dynamic; St = stochastic; FL = feedback loop; Sp = spatial; HtI = heterogeneous individuals; AI = agent-agent interactions; EI = agent-environment interactions; V = validation; F = framework; I = test an intervention.] --- Health Behaviours Most models with a focus on health behaviour modelled dietary behaviours (n = 7). Four of the models were concerned with physical activity and access to green space, and three modelled substance use, specifically the consumption of alcohol and tobacco or their purchase as a proxy for consumption. --- Dietary Behaviour Papers that used ABM to model socioeconomic differences in dietary behaviours tested the impact of interventions on the consumption of sugar-sweetened beverages [19], the purchase of ultra-processed food [20], the consumption of fruits and vegetables [21,22], and access to healthy food outlets [23]. The interventions were educational campaigns (e.g., nutrition warnings and school-based programmes), advertising campaigns, changes to tax, increasing access to vegetables, and reducing the cost of vegetables. In contrast, two papers focused on the impact of residential segregation on access to healthy food outlets as an explanation for socioeconomic differences in dietary behaviours [24,25]. All models used individual or household income, educational attainment, or both as the measure of socioeconomic position. The only paper that did not include a spatial component was set at the national level and explored the impact of tax, nutrition warnings, and advertising on the purchase of ultra-processed food in Mexico [20]. The other six models used an artificial grid space [24], a one-dimensional linear township [25], a raster map representing the spatial distribution of income [21], or actual geographic space, including GIS modelling of real-life cities [19,22,23]. Six of the models included agent-environment interactions, which often captured how individual agents engage with food outlets [21][22][23][24][25]. Only two of the included papers modelled agent-agent interactions, through dietary social norms operationalised via a social network, which influenced taste preferences and health beliefs [22] and the purchasing of ultra-processed foods [20]. Five of the models featured feedback loops; these included the updating of social norms based on behaviour over the course of the simulation [20,22], food outlets responding to agents' behaviour by closing and opening [23] or changing the type of food available for sale [24], and increasing appetite and overeating following the consumption of foods high in fat, sugar, and salt [21]. Only two of the papers attempted validation by comparing the simulated outcomes to the 'observed' outcomes in real-world data [19,22]. --- Physical Activity and Use of Urban Green Space All the models that investigated socioeconomic differences in physical activity simulated intervention scenarios.
These scenarios included additional physical education in schools, the promotion of active travel, educational campaigns, increasing the availability and affordability of sports activities, improving neighbourhood safety, and increasing the expense associated with driving [26][27][28]. All the models focusing on physical activity used individual or household income as the measure of socioeconomic status and explored a range of physical activity-related outcomes, including minutes of physical activity per day [26], sports participation [27], and walking [28]. Models concerning physical activity involved a spatial component operationalised as either a representation of the actual geographical space [26,27] or an artificial grid [28]. All the models simulated both agent-agent interactions (e.g., social interactions modelled via a social network that impacts behaviour) and agent-environment interactions (e.g., playing outdoors or engaging with sports facilities in the environment). Two models contained feedback loops, including the updating of social norms regarding exercise and travel preferences [27,28] and environmental feedback, in which the safety and traffic levels of travel routes influenced attitudes towards transport modes [28]. Two models were validated by comparing the simulated outcomes to the outcomes observed in pre-existing data [26,28]. One paper modelled intra- and inter-city inequalities in visiting urban green spaces, specifically testing the mechanism that the decision to visit these spaces is influenced by an individual's assessment of who had previously visited the space [29]. Given conflicting evidence, the model explored two possibilities: (1) that agents prefer to visit spaces visited by people like themselves (homophilic preference) and (2) that individuals with a lower SES (socioeconomic status) prefer to spend time in areas which those of a high SES visit (heterophilic preference). This model used occupational grade to classify agents as either high or low SES. The model spatially represented the cities of Edinburgh, Glasgow, Aberdeen, and Dundee, and simulated both agent-environment interactions in the form of visiting urban green spaces and agent-agent interactions via agents assessing the similarity of other agents visiting the green space. The feedback loop in this model was the updating of who visited green spaces as a function of whether 'in-' or 'out-'group members were present in those spaces. Given a lack of directly observed data, the model was validated using a pattern-matching approach: it could reproduce the patterns of urban green space visitation in a spatial microsimulation of Glasgow. --- Substance Use Two of the models that focused on socioeconomic differences in substance use tested the impact of interventions, including alcohol taxation [30] and the restriction of menthol cigarette sales and tobacco retailer density [31]. One paper simulated several counterfactual scenarios which varied the degree of socioeconomic disparity and gender-related susceptibility to social influence in the context of smoking [32]. All the models used individual or household income as the measure of socioeconomic position and investigated substance use in the form of smoking prevalence [32], tobacco purchasing [31], and the average number of alcoholic drinks per day [30].
Two models simulated agent-agent interactions, including the influence of gender and socioeconomic social norms on an individual's own smoking [32] and social network influences on drinking behaviour [30]. Two models were spatial; one represented the city of New York [30] and the other an abstract town called 'Tobacco Town' [31]. Two models simulated agent-environment interactions, such as travelling to and from locations and engaging with tobacco and alcohol retail outlets [30,31]. One paper not only focused on the consumption of alcohol but also examined the interaction between neighbourhood characteristics, social networks, sociodemographic characteristics, drinking, and violence [30]. Two models featured feedback loops in the form of updates to norms based on drinking and smoking behaviour [30,32]. One model validated the simulated outcomes by comparing these to the outcomes observed in real-world data on the prevalence of smoking in Japan [32]. --- Physical Health Of the three models that focused on physical health outcomes, one examined the incidence of severe neonatal morbidity and the number of deaths averted per 1000 live births [33], one looked at health status and care need [34], and the other investigated the impact of exposure to air pollution on health status [35]. Two of the papers modelled the effect of potential interventions on physical health outcomes [33,34]: in one, the intervention was altering the eligibility criteria for government-funded social care; in the other, it was increasing the responsibilities and coverage of community health workers. Each of the models used a different measure of socioeconomic position: wealth quintile [33], approximated social grade [34], and educational attainment [35]. All three models included the individual and household levels, and two included additional levels such as kinship networks and the regional level. One study represented space using a grid based on the geography of the modelled country [34] and two represented the actual geographic space [33,35]. Interactions with the environment took the form of migration, seeking treatment at facilities, and exposure to pollution. Only two models included a feedback loop: one between parental income level and childhood educational attainment, and one between the level of existing disease and the probability of developing a further disease [34,35]. Only one model involved agents interacting with each other, in the form of a kinship network consisting of familial relationships [34]. None of these models validated their results using real-world data. One of the models was used to create a complex theoretical framework to represent the social care system; the geographical and population data input into this framework could then be adjusted to model and understand the drivers of unmet social care need in different countries [34]. --- Mental Health Two of the three papers focusing on a mental health outcome examined the impact of transport on depression among older adults. The first examined the impact of multiple transport interventions [36], and the second examined that of a free bus policy on public transit use and depression [37]. An individual's income was used as the measure of socioeconomic status in both papers. One model carried out three experiments: increasing the walkability and safety of neighbourhoods to promote walking; decreasing bus fares and bus waiting times; and adding bus lines and stations [36].
The second transport model carried out four experiments, altering mean attitudes towards the bus, bus waiting times, the cost of parking, and fuel prices; each experiment was also run with and without the free bus policy [37]. Both models captured the individual and neighbourhood levels. A feedback loop resulted in improved attitudes towards a mode of travel following a positive experience of that mode. The spatial element was applied to income segregation patterns. In one model, agents interacted with each other through social networks that influenced travel behaviour [37]. In both models, agents interacted with the environment by using transport. Both models were validated against empirical data on the prevalence of depression in the United States by gender, age, and income level. The third paper examined the impact of reducing income inequality on depression among expectant mothers [38]. Four interventions to increase income were tested: two child benefit programs (ACB and CCB), universal basic income (UBI), and increasing the minimum wage. This model focused on individuals, and while it captured the neighbourhood characteristics for each individual (e.g., a sense of safety and the prenatal services available), the environment was not spatially represented in the model. Agents could decide to make or break social connections with other agents, including whether to break ties with agents with depression. This model was not validated. --- What Can ABM Tell Us about Socioeconomic Inequalities in Health? Studies investigating explanations for socioeconomic differences in health found that those of a higher socioeconomic position were more likely to be exposed to healthier environments and therefore to engage in healthier behaviours and have better health outcomes. For example, one model found that greater income segregation in communities led to decreased access to healthy food for lower-income households [25]. Another study, which modelled agents' movements between work and home, found that regardless of the level of air pollution, those with a lower level of education consistently had the highest risk of developing an illness [35]. Models which tested the impact of interventions on socioeconomic inequalities in health found that some interventions increased inequalities. For example, those of a high socioeconomic position improved their health behaviour more in response to educational campaigns concerning nutrition [22,23]. It was argued that nutritional education campaigns may be ineffective for those of a lower socioeconomic position due to a sensitivity to food prices and a lack of access to healthy alternatives [22]. Similarly, it was found that the promotion of active travel had greater benefits for those of a high socioeconomic position, as they were more likely to travel by car prior to the intervention, including more frequent car travel to extra-curricular activities [26]. However, there were some modelled interventions that decreased socioeconomic inequalities in health. For example, one model tested the impact of a sugar-sweetened beverage tax and found that, at a 25% tax, the reduction in the consumption of sugar-sweetened beverages was greater among low-income populations [19]. This finding was largely the result of price increases which made sugar-sweetened beverages less affordable to low-income households.
Another study, which modelled expanded responsibilities and increased coverage for accredited social health activists who perform postnatal check-ups, found that these interventions resulted in greater decreases in neonatal morbidity and mortality among those of a low socioeconomic position [33]. Yang and colleagues also showed that, among older adults, when attitudes towards bus use improved and waiting times decreased, the estimated decreases in depression were greater among low-income groups [37]. This larger benefit arose because those on a low income are less likely to own cars and are therefore more responsive to an intervention that increases the uptake of public transport, which in turn increases the number of non-work trips they take, benefiting their mental health. --- Discussion This review included 20 papers describing ABMs of socioeconomic inequalities in health published since January 2013, the end point of the Speybroeck review, which found only four ABM studies on the topic [14]. Using ABM in the context of socioeconomic health inequalities was most common in the USA and UK (n = 11). The included studies illustrated that ABM is a useful tool for understanding complex problems and has been used flexibly to represent dynamic, multi-level processes, often in physical space, and to capture interactions between individuals and with their environment. These models can tell us about the causes of health inequalities, potential interventions to reduce health inequalities, and which interventions may inadvertently increase health inequalities. Typically, ABM has been used to explore socioeconomic differences in health behaviours (n = 14), including diet, physical activity, access to green space, and substance use, but few studies have approached socioeconomic differences in physical and mental health outcomes. Additionally, only one paper modelled access to healthcare as a potential explanation for socioeconomic inequalities in health [33]. To an extent this is unsurprising, given a historic focus on health behaviours in public health [39] coupled with the fact that ABM as a method captures how behaviours at the micro-level give rise to emergent phenomena at the macro-level [40]. Most ABMs were used to test a range of interventions (n = 14), from educational campaigns to taxation, and were underutilised for other purposes, such as testing the explanatory value of theories or mechanisms proposed to explain the generation or persistence of socioeconomic inequalities in health. This is consistent with the Speybroeck review, which found that all ABM studies were used to test an intervention or scenario [14], and highlights that a valuable feature of ABM is the ability to experiment and test a range of interventions in silico [40]. Less than half of the included studies (n = 9) attempted to validate their models, and they did so to varying degrees, some using observational data or pattern-matching methods. However, none of the included studies used structural validation techniques, which would ensure that it is the intended "structure of the model that drives its behaviour" [41]. This finding is consistent with the Speybroeck review, which found that only one ABM had been validated using observational data [14]. Additionally, only three of the included papers explicitly used and referred to the ODD protocol, guidelines intended to ensure that an ABM is described fully enough to facilitate its replication [18].
It is clear from the findings of this review that most existing ABM studies investigating socioeconomic inequalities in health have focused on health behaviour. This individualistic focus is not reflective of ABM in the social sciences more generally, where the method has been used to understand broader social phenomena such as racial segregation and the generation of wealth inequality at the societal level [11][12][13]. While these patterns are generated by individual-level behaviours, such models do not seek to explain those behaviours. Reducing health inequality to differences in health behaviour is problematic given that research has shown that, for the same level of any given behaviour, health outcomes remain worse for the most socioeconomically deprived [42]. --- Limitations Currently there is no available tool to assess the quality of ABM studies, and therefore we could not ensure that the models included in this review were of a high quality. There are a variety of quality assessment tools available for other study types, for example, the appraisal tool for cross-sectional studies (AXIS), which can be used to assess a study's design, reporting quality, and risk of bias [43]. Given the increase in ABM studies in public health, it is critical to consider how we will assess the quality of these studies going forward. While the Speybroeck review considered a breadth of simulation models [14], we chose to focus on ABM only, given the particular promise of ABM applied to health inequality research and the rapid increase in the use of simulation modelling techniques since 2013 [10]. The application of alternative simulation modelling techniques (e.g., microsimulation and system dynamics) to socioeconomic inequalities in health in recent years awaits further examination. --- Future Research Efforts thus far to use ABM to understand socioeconomic inequalities in health have focused on the contribution of health behaviour. However, this focus on health behaviour is at odds with calls from researchers to "move beyond bad behaviours" [44] and with the position of influential public health organisations. For example, the WHO concluded that it is the underlying social and economic factors that determine health and health inequalities, as opposed to health behaviours [45]. We are increasingly aware that health inequalities are not only the result of differences in health behaviour, yet little has been done using ABM to understand the complex relationships between the social and economic environments people live in and their health via pathways other than health behaviour. There are explanations for socioeconomic inequalities in health that shift the focus from individual-level behaviours to the social determinants of health, which themselves shape health and, to an extent, behaviour [45]. Existing ABMs have started to look at the social drivers of health behaviours (e.g., the role of social networks and social norms) [20,22]; however, they do not address alternative pathways through which social and economic factors directly or indirectly affect health. It has been argued that ABM could be used to investigate the mechanisms specified in social and economic explanations for health inequality [46]. An existing hypothetical example of how this may be done is the operationalisation of psychosocial theory [46].
Instead of focusing on health behaviours, operationalising psychosocial theory would involve simulating support seeking and giving within friendship networks, which mediate health outcomes via stress pathways. Future research should consider how ABM can be used to simulate alternative mechanisms that could explain socioeconomic inequalities in health and that are not exclusively focused on health behaviour. --- Conclusions In recent years, ABM has increasingly been used to explain socioeconomic inequalities in health. ABM allows us to develop a deeper understanding of the complex consequences of individual heterogeneity, spatial settings, feedback, and adaptation resulting from agent interactions with each other and their environment. However, to date, much of the focus has been on understanding the role of health behaviours. The features of ABM provide the opportunity to investigate alternative, more complex explanations for socioeconomic health inequalities. Therefore, an important next step in public health is to attempt to operationalise explanations for the causes and consequences of health inequalities beyond representations of health behaviour. --- The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/ijerph192416807/s1, Table S1: Systematic Search Strategy. --- Data Availability Statement: Not applicable. --- Conflicts of Interest: The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.
There is an increasing focus on the role of complexity in public health and public policy fields which has brought about a methodological shift towards computational approaches. This includes agent-based modelling (ABM), a method used to simulate individuals, their behaviour and interactions with each other, and their social and physical environment. This paper aims to systematically review the use of ABM to simulate the generation or persistence of health inequalities. PubMed, Scopus, and Web of Science (1 January 2013-15 November 2022) were searched, supplemented with manual reference list searching. Twenty studies were included; fourteen of them described models of health behaviours, most commonly relating to diet (n = 7). Six models explored health outcomes, e.g., morbidity, mortality, and depression. All of the included models involved heterogeneous agents and were dynamic, with agents making decisions, growing older, and/or becoming exposed to different health risks. Eighteen models represented physical space and in eleven models, agents interacted with other agents through social networks. ABM is increasingly contributing to our understanding of the socioeconomic inequalities in health. However, to date, the majority of these models focus on the differences in health behaviours. Future research should attempt to investigate the social and economic drivers of health inequalities using ABM.
INTRODUCTION While early research has conceptualized emotions as largely intrapersonal experiences that take place within individuals, emotions are also social (Parkinson, 1996) and emerge from dynamic interactions between individuals and their social environment (Campos et al., 1989; Lazarus, 1991; Mesquita, 2010). Because the social environment is culturally constructed, the interaction between individuals and their social environment can lead to variations in emotional experiences across cultures (Markus and Kitayama, 1991; Mesquita and Frijda, 1992). At one level, this cultural --- Emotional Fit and Individual Well-Being There is growing evidence to support the notion that experiencing patterns of emotions similar to those of others within the same culture is important for individual well-being (De Leersnyder et al., 2014, 2015). In a series of studies, De Leersnyder and colleagues directly measured, rather than inferred, emotional fit with culture by using a profile correlation approach, correlating each individual's pattern of emotions in response to different situations with the average emotional pattern of the group. They then assessed the association between emotional fit and well-being in three different cultures (United States, Belgium, and Korea). Their results revealed that having higher emotional fit in relationship-focused situations (i.e., situations that involve relationships with others) was associated with greater relational well-being (i.e., having good interpersonal relationships) across all cultures (De Leersnyder et al., 2014). Emotional fit also predicted psychological well-being across cultures, although the specific contexts in which emotional fit mattered varied depending on culture (i.e., relationship-focused situations in Korea, and self-focused situations in the United States; De Leersnyder et al., 2015). These findings suggest that although there may be some cultural variability in how emotional fit relates to individual well-being, emotional fit is generally important for well-being at some basic level across cultures. Evidence from research examining cultural norms and well-being further supports this point. Being in alignment with the normative practices of one's own culture is important for individuals' adjustment and well-being (Oishi and Diener, 2001; Kitayama et al., 2010). While the cultural mandates for well-being may vary across cultures, it is universal for people to achieve well-being through actualizing their respective cultural mandates. For example, actualizing values of autonomy and personal control would lead to well-being in Western culture, whereas actualization of the values of interdependence and relational harmony leads to well-being in East Asian culture. In a cross-cultural study comparing Americans and Japanese, it was indeed shown that personal control was the strongest predictor of well-being in the United States, but the absence of relational strain was most predictive of well-being in Japan (Kitayama et al., 2010). Similarly, attaining relational goals, and thus actualizing the cultural mandates of interdependent cultures, was closely associated with well-being among Asian Americans and Japanese, but not among European Americans. In contrast, attaining independent goals was related to well-being in European Americans but not among Asian Americans or Japanese (Oishi and Diener, 2001).
In sum, these studies suggest that fitting with the norms of one's culture is important for achieving individual well-being regardless of one's cultural orientation, even if those norms vary from culture to culture. --- Emotional Fit and Collective Identity Parallel to the individualistic focus on the conceptualization and study of emotions as an intra-individual phenomenon, studies of well-being and adjustment have also traditionally emphasized the individualistic, personal aspects of well-being (e.g., personal self-esteem). Yet, individuals' well-being and adjustment are also closely related to the collectivistic aspects of the self. For example, having a positive collective identity, indexed via collective self-esteem (the tendency to have a positive view of one's group identity), has been found to be associated with psychological well-being (Crocker et al., 1994). This relationship was especially evident in Asians (vs. European Americans) even after controlling for the effect of personal self-esteem, reflecting the greater emphasis on the group and group experiences in Asian culture. Given that collective identity may be an important index of well-being that complements the index of individualistic well-being, the current study focuses on the relationship between emotional fit and collective identity (i.e., collective self-esteem and identification with one's group) in addition to the individualistic indices frequently used in studies of well-being (i.e., life satisfaction and depression). Previous research suggests that the experience of shared emotions with group members is important for constructing a positive group identity (Livingstone et al., 2011; Páez et al., 2015). For instance, Páez et al. (2015) found that the perception of emotional synchrony while participating in collective gatherings (i.e., folkloric marches and protest demonstrations) led to greater collective self-esteem and increased identity fusion with the group. Similarly, in a laboratory study that employed an experimental manipulation of emotional fit with pre-existing and arbitrary groups, participants with increased emotional fit with the group indicated greater identification with the group, even when the group was created arbitrarily and carried no real meaning (Livingstone et al., 2011). On the other hand, some research suggests that group identification may also lead to shared emotional experience (Weisbuch and Ambady, 2008; Tanghe et al., 2010). For example, Tanghe et al. (2010) showed that increasing group identification through a laboratory manipulation led to greater similarity in emotional experience among group members. While these studies suggest that emotional fit may be generally important for achieving a positive collective identity (higher collective self-esteem and stronger identification with a group), studies have not yet examined cultural differences in how emotional fit relates to collective identity. However, cross-cultural theorists have long discussed how one's sense of self is closely tied to others in interdependent cultures, whereas it is construed more independently in independent cultures (Markus and Kitayama, 1991). Thus, it follows that collective identity should be affected by the degree of shared experiences with group members to a greater extent in interdependent cultures than in independent cultures, making the link between shared emotional experiences (i.e., emotional fit) and collective self-esteem and group identification especially pronounced in East Asian culture.
--- Broadening the Assessment of Emotional Fit Previous work on emotional fit has primarily focused on similarity in the patterns of subjective (i.e., self-reported) emotional responses between an individual and a reference group. The current study takes a multi-method approach to the assessment of emotions, and therefore to the measurement of emotional fit. We see emotion as a multi-componential construct comprising subjective, behavioral, and physiological responses. Although some theories of emotion assume response coherence across the various components of an emotional response (e.g., Ekman, 1992; Levenson, 1994), empirical support for response system coherence is largely inconsistent. Recently, a dual-process perspective on emotion response coherence has been proposed to reconcile this inconsistency (Evers et al., 2014). This framework suggests two relatively independent emotion systems: an automatic system that is relatively unconscious and fast (e.g., physiological responses) and a reflective system that is relatively conscious and deliberate (e.g., subjective and behavioral responses). While the two emotion systems are thought to work together to promote adaptive behaviors (Baumeister et al., 2007), the response coherence between the two systems tends to be weak or nonexistent, in contrast to the coherence evident between varying indicators within each system (Evers et al., 2014). This lack of coherence suggests that emotional fit in one of these response domains may not necessarily be associated with emotional fit in another. The potential variability in emotional fit across emotional response domains (subjective, behavioral, and physiological) may also carry important implications for how emotional fit plays out in different cultures. According to Levenson's biocultural model of emotion (Levenson, 2003; Levenson et al., 2007), self-reports of subjective experience are highly susceptible to cultural influences, facial expressions are somewhat susceptible to cultural influences, and physiological response tendencies are relatively uninfluenced by culture. Because self-reports and behavioral expressions of emotions are visible and can directly influence social interactions, these may need to be modulated according to cultural norms more than physiology. Therefore, emotional fit with culture may be more likely in the subjective and behavioral response domains than in physiological responses. These ideas have yet to be examined empirically, however, because of the narrow interpretation of emotional fit in the literature. Given the complexity of emotional experiences and varying cultural influences on emotion systems, the current study sought to broaden the concept of emotional fit by using assessments of both automatic and reflective emotion systems. We assessed individuals' subjective (self-report), behavioral (facial expression), and physiological (cardiovascular) responses to emotional stimuli to determine indices of self-reported, behavioral, and physiological emotional fit. Self-report measures of emotion are thought to capture the reflective emotion system, and physiological arousal associated with an emotional response is believed to reflect the automatic system. Facial expressions likely represent a combination of both reflective and automatic processes given evidence for both universal and culturally variable components of facial expressions (Levenson et al., 2007).
--- The Present Study The present study examines the associations between emotional fit and individual and collective aspects of well-being among a sample of East Asians/Asian Americans (henceforth, Asian Americans) and European Americans. Because we were interested in capturing representatives of two broad cultural groups whose traditional values regarding self and relationship are quite different, we employed stringent criteria that made use of behavioral markers of cultural orientation, family origin criteria, and self-identification to operationalize our cultural groups. These criteria are outlined in the methods and are meant to increase the likelihood that the cultural groups studied reflect the traditional norms and values associated with their respective cultural heritages, which include differential emphasis on social contexts in determining well-being. In measuring the construct of emotional fit, we used a method from De Leersnyder et al. (2014) that considers the patterns of emotional experience in relation to those of the same cultural group. Here, we measured emotional fit objectively by taking the correlation between the individual's emotional pattern and the average pattern of the group (see the section "Materials and Methods" for details). Thus, rather than reflecting a subjective awareness of one's fit with one's cultural group, this conceptualization of emotional fit reflects an objective measure of normative emotional responding. While it is possible that subjective awareness of emotional fit may also provide valuable information about the relationship between emotional fit and well-being, a subjective measure of fit may be susceptible to demand characteristics. The objective measure of emotional fit, on the other hand, allowed us to explore the direct link between normative emotional responding and well-being while separating out the effect of demand characteristics (De Leersnyder et al., 2014). To test our research question, we reanalyzed data originally collected as part of a large multi-method project investigating cultural differences in emotional reactivity and regulation. Results of the rest of the experiment are reported elsewhere (Soto et al., 2016). Although these data were not collected for the purposes of analyzing emotional fit, and therefore constituted a convenience data set, they did afford several opportunities to advance the emotional fit work and expand it in novel ways. This was an experimental study that collected self-report, behavioral (facial expression), and physiological responses to varying emotional stimuli, with participants being asked to regulate their emotional behavior (i.e., suppress or amplify) for a subset of the trials. Assessing various components of emotions in this study allowed us to explore emotional fit at multiple levels and in multiple ways. Thus, in the present study we examined emotional fit based on self-reported emotions (henceforth, self-report emotional fit) as well as emotional fit based on behavioral and physiological responses (behavioral emotional fit and physiological emotional fit, respectively). We were also able to look at emotional fit in different emotional response contexts (baseline emotional responding, in response to neutral stimuli, and in response to negative stimuli). We tested two primary hypotheses in the present study.
Based on previous studies supporting the relationship of individual well-being with self-report emotional fit (De Leersnyder et al., 2015) and with the actualization of cultural norms (Oishi and Diener, 2001) across cultures, we hypothesized that self-report emotional fit would be associated with greater individual well-being (as indexed via higher life satisfaction and lower depression) in both Asian Americans and European Americans. In addition, we hypothesized that self-report emotional fit would be associated with a more positive collective identity (as indexed via greater collective self-esteem and increased identification with one's group), based on previous evidence supporting this link (Livingstone et al., 2011; Páez et al., 2015). Importantly, we also predicted that culture would moderate this relationship, because in many East Asian cultures the self is construed in relation to others (Markus and Kitayama, 1991), and thus being in alignment with others may have a greater impact on the collective identity of Asian Americans than European Americans. Thus, we expected the positive association between self-report emotional fit and collective identity to be stronger in Asian Americans than in European Americans. In addition to testing these hypotheses, we conducted a series of exploratory analyses to test whether or not the hypothesized patterns of results for self-report emotional fit would replicate with the behavioral and physiological emotional fit indices. Lastly, the design of the original experiment allowed us to investigate emotional fit across different emotional contexts. It is becoming increasingly important to recognize the contextualized nature of emotions (Scherer, 2009; Izard, 2010; Aldao, 2013). Emotion researchers have called for increased attention to the cultural and social context of emotions at the collective level in order to enhance our understanding of emotions as a whole (Goldenberg et al., 2017). This view also calls for the need to understand emotions in the context of particular emotional situations, because cultural differences in emotional experience occur in part as a function of varying situation selections across cultures (De Leersnyder et al., 2013). Findings from cultural investigations of emotions may therefore vary depending on what emotional situation has been examined, which highlights the importance of studying and understanding emotions in relation to particular emotional situations. Thus, in this study, we examined participants' emotional fit at three different experimental time points: prior to the introduction of any emotional stimuli (Time 0), in response to a neutral film (Time 1), and in response to a disgust-inducing film (Time 2). Previous studies on emotional fit mostly examined participants' broad emotional patterns in a particular environment (e.g., family or work settings; De Leersnyder et al., 2015). We thought that this approach would be most comparable to self-report emotional fit at baseline (Time 0), where participants were in the same setting prior to the presentation of any laboratory emotional stimulus. Thus, our primary hypotheses relating self-report emotional fit to well-being are specific to the measurement of emotional fit at Time 0. However, we also explored whether or not any of the findings observed at Time 0 are also seen at Times 1 and 2, when specific emotional stimuli are introduced.
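As a schematic of the moderation test these hypotheses imply (our notation, not the authors'; the hierarchical analytic strategy is described under the Results), the full model at the final step can be written as a single regression with a dummy-coded culture term and a fit-by-culture interaction:

```latex
% Hypothetical notation (ours, not the authors'):
%   Outcome_i : a well-being or collective-identity score for participant i
%   Fit_i     : the Fisher z-transformed emotional fit score
%   Culture_i : dummy-coded culture (e.g., 0 = European American, 1 = Asian American)
\mathrm{Outcome}_i = \beta_0 + \beta_1\,\mathrm{Fit}_i + \beta_2\,\mathrm{Culture}_i
                   + \beta_3\,(\mathrm{Fit}_i \times \mathrm{Culture}_i) + \varepsilon_i
```

Under this coding, the first hypothesis corresponds to a positive fit slope for life satisfaction (and a negative one for depression) in both groups, while the predicted cultural moderation of collective identity corresponds to a positive interaction term (a steeper fit slope for Asian Americans); a significant interaction would then be decomposed with simple slopes.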
--- MATERIALS AND METHODS --- Participants The final sample consisted of 127 undergraduate students recruited at a large university in the northeastern United States. Fifty-two participants (29 females; 23 males) were identified as East Asians or Asian Americans (referred to as Asian Americans throughout the paper) and 75 participants (49 females; 25 males; 1 missing gender information) were identified as European Americans. Among the total of 127 participants, age information was missing for 24 participants due to experimenter errors. The average age of the remaining 103 participants was 19.50 (SD = 2.86). A demographic screener survey was used to determine participant eligibility for both groups (see below). All participants were either recruited from introductory psychology classes and compensated with course credit or recruited from the general campus community and paid $18 for their participation. All procedures were approved by the university's institutional review board and conducted in accordance with the American Psychological Association's ethical standards. --- Eligibility Criteria We relied on several pieces of culturally relevant information, including behavioral information such as language preferences, to go beyond racial or ethnic self-identification in characterizing our groups, based on criteria employed in previous studies of culture and emotion [see Soto et al. (2005) and Soto and Levenson (2009) for a full discussion of the rationale behind the criteria]. European Americans must have been born and raised in the United States and had to self-identify as White or European American. Participants also had to report that their parents and grandparents were born in the United States and identified as White or European American. In addition, European American participants had to report being of Christian or Catholic religion, or growing up with these religions being practiced in their households. Finally, participants had to report that, while growing up, over 50% of their friends and over 40% of their neighborhood were of European American background. Asian American participants had to self-report their ethnicity as Asian or East Asian (e.g., Chinese, Korean, Japanese, and Vietnamese) and have been born either in an East Asian country or in the United States. South Asian participants from countries such as India, Pakistan, or Bangladesh were not eligible. In addition, participants' parents and grandparents also had to meet the same birth-country requirements. Furthermore, participants had to be conversant, though not necessarily fluent, in both English and the Asian language of their culture of origin. There were no religious criteria for the Asian American participants. The criteria around childhood friends and neighborhood were also not applied to this group. While the original criteria were developed for participants living in a large metropolitan area where exposure to culturally similar others is common, this assumption would have been an unrealistic standard for the East Asian and Asian American participants in the community from which participants in the current study were sampled (University Park, PA, United States). --- Procedure Data used for the present study were collected as part of a large multi-method project investigating cultural differences in the experience and regulation of physiological, behavioral, and self-reported responses to emotional stimuli.
Upon arriving at the lab room, participants signed the informed consent form and sat in a comfortable chair 3 feet away from a 19-inch LCD monitor. Participants completed a series of questionnaires including measures of emotion, depression, life satisfaction, collective self-esteem, the importance of their racial group membership to their identity (see below), and other measures outside of the scope of the present study. After this point, an experimenter of the same gender applied the physiological sensors to participants. Participants then watched a total of five film clips previously used in emotion regulation research (Gross and Levenson, 1993; Kunzmann et al., 2005) while their facial and physiological responses were collected. After each film, participants completed a self-report measure of emotion. All films were between 52 and 62 s in duration, with the exception of the first film, which lasted 22 s. Film 1 was the same across all participants and was a neutral film (seagulls flying over a beach). Films 2-4 were disgust films. The first disgust film (Film 2) always depicted an eye operation and was not associated with any specific emotion regulation instructions. The next two films were of a burn victim's skin graft and an arm amputation, and participants were asked to either amplify or suppress their emotional expression while viewing the films. The order of regulation instructions and the actual film presentation for Films 3 and 4 were counterbalanced. Film 5 was a slightly positive film (nature scenes) used to help participants recover from negative emotions induced by previous films [see Soto et al. (2016) for more detailed information about the methods and procedures]. The fact that this convenience dataset consisted only of neutral, relaxing, and disgust elicitors limited the scope of our emotional fit variable. However, given that disgust reactivity does not tend to vary greatly across cultures (Rozin et al., 2008), we also thought this would provide a more conservative test of our research question pertaining to cultural moderation. In addition, examining emotional fit in response to neutral stimuli may provide important information that has been hitherto unexamined, given that neutral stimuli are often processed similarly to negative stimuli (Codispoti et al., 2001; Lee et al., 2008), especially among clinical populations (Felmingham et al., 2003; Leppänen et al., 2004). Thus, responses to the neutral stimuli could reflect individual differences in responding that could lead to variability in emotional fit that may be meaningfully related to well-being outcomes. The present study examined emotional fit at the first three time points prior to the introduction of emotion regulation instructions: emotional fit at baseline (Time 0), emotional fit in response to the neutral film (Time 1), and emotional fit in response to the disgust film (Time 2). We did not include time points after emotion regulation instructions were presented because the impact of these instructions on emotional fit is outside of the scope of the present study. Because the collection of behavioral and physiological data began with the introduction of the neutral film, the baseline response (Time 0) consisted of the self-report measure of emotion only. Responses to the neutral film (Time 1) and the disgust film (Time 2) consisted of self-report, behavioral, and physiological responses. --- Measures Satisfaction With Life Scale Participants completed a five-item measure of life satisfaction.
The SWLS assesses global judgments of satisfaction with one's life (SWLS; Diener et al., 1985). Participants are asked to rate their responses to items such as "in most ways my life is close to my ideal" and "the conditions of my life are excellent," using a 7-point Likert scale (1 = strongly disagree to 7 = strongly agree). Higher scores indicate greater satisfaction with life. The SWLS has shown good internal consistency in previous studies, with alpha coefficients ranging from 0.79 to 0.89 (Pavot and Diener, 1993). Cronbach's alpha coefficients in the current sample were 0.79 for Asian Americans and 0.84 for European Americans, indicating acceptable to good reliability. --- Center for Epidemiologic Studies Depression Scale The CES-D is a 20-item self-report inventory of depressive symptoms (CES-D; Radloff, 1977). Participants use a 4-point Likert scale (0 = rarely or none of the time to 3 = most or all of the time) to rate the degree to which they experienced, over the past week, major symptoms of depression including depressed mood, feelings of guilt and worthlessness, feelings of helplessness and hopelessness, psychomotor retardation, loss of appetite, and sleep disturbance. Higher scores indicate greater depressive symptoms. The CES-D has shown good internal consistency, with alpha coefficients ranging from 0.85 to 0.90 in previous studies (Radloff, 1977). In the current study, the CES-D also indicated good internal consistency, with an alpha coefficient of 0.85 for both Asian Americans and European Americans. --- Collective Self-Esteem Scale - Private Collective Self-Esteem and Importance to Identity Subscales The 4-item private collective self-esteem and 4-item importance to identity subscales of the CSES were used to measure participants' positive collective identity and identification with their group (CSES; Luhtanen and Crocker, 1992). Private collective self-esteem refers to one's evaluation of how good one's ethnic group is. Importance to identity (henceforth, identity) assesses how important one's ethnic group is to one's self-concept. The public collective self-esteem (one's perception of how others evaluate one's ethnic group) and membership esteem (one's perception of how good a member one is of one's ethnic group) subscales were not included because they were less relevant to the focus of the present study. Participants use a 7-point Likert scale (1 = strongly disagree to 7 = strongly agree) to rate their collective self-esteem. Higher scores indicate greater collective self-esteem. The original validation study (Luhtanen and Crocker, 1992) reported alpha coefficients ranging from 0.73 to 0.85, indicating acceptable to good internal consistency. In the current sample, the private collective self-esteem subscale indicated acceptable internal consistency, with alpha coefficients of 0.79 for Asian Americans and 0.72 for European Americans. The alpha coefficients for the identity subscale were 0.79 and 0.86 for Asian Americans and European Americans, respectively, indicating acceptable to good internal consistency. --- Multidimensional Inventory of Black Identity - Centrality Subscale To assess the degree to which participants identify with their ethnic group (referred to as racial centrality hereafter), we used the 8-item centrality subscale of the MIBI (MIBI; Sellers et al., 1997). The centrality subscale of the MIBI assesses a broader concept of group identification than the CSES identity subscale.
In addition to assessing the degree to which ethnic group membership is central to one's core self-concept, the MIBI centrality scale also captures participants' sense of connection/belonging to other members of their ethnic group. Because the items in the original MIBI were developed for African Americans only, we modified the wording of items to accommodate other ethnic groups as well. Items include "overall, being of my racial group has very little to do with how I feel about myself" and "I have a strong sense of belonging to people of my racial group." This modification has been used previously with ethnic minority groups other than African Americans (Perez and Soto, 2011). Participants rated their responses using a 7-point Likert scale (1 = strongly disagree to 7 = strongly agree), and higher scores indicated greater importance of racial group membership to their identity. The internal consistency of the centrality subscale of the MIBI was 0.77 in the original validation study, which indicates acceptable consistency (Sellers et al., 1997). The current sample also indicated acceptable consistency, with alpha coefficients of 0.79 and 0.77 for Asian Americans and European Americans, respectively. --- Self-Reported Emotional Experience At six different time points throughout the experiment (i.e., at the beginning of the experiment and after each of the five films), participants were asked to use a 9-point Likert scale (0 = none and 8 = the most in my life) to rate their current experience of 16 different emotions: interest, happiness, surprise, amusement, contentment, relief, anxiety, sadness, annoyance, disgust, embarrassment, boredom, fear, anger, contempt, and stress. This rating scale has been used to measure the experience of specific emotions in previous emotion research (Ekman et al., 1980; Soto et al., 2005). --- Facial Emotional Expression Participants' facial expressions during the presentation of the films were video recorded and then coded into six discrete emotions (happiness, sadness, anger, surprise, fear, and disgust) using the commercial face-reading software FaceReader v. 6.1 (Noldus, 2014). FaceReader objectively estimates the presence of emotion expressions by utilizing over 500 facial landmark cues typically present in emotion expressions, as well as specific action units as defined by Paul Ekman's Facial Action Coding System. For each video frame (image), FaceReader supplies a "confidence score" between 0 and 1 representing the likelihood that each discrete emotion is present. FaceReader was trained on over 10,000 expert-coded images and has demonstrated high accuracy for emotion expression classification (Lewinski et al., 2014). For the present study, we averaged confidence estimates for the presence of each emotion expression over the 1-min film presentation period. This resulted in six scores per film clip per participant, representing the average likelihood that each of the emotions was present over the film's presentation. --- Physiological Response Electrocardiography (EKG) and skin conductance level (SCL) were recorded using a Mindware impedance cardiograph (MW2000) in conjunction with the Biopac MP150 device, consisting of an eight-channel polygraph and a microcomputer. All physiological data were collected second-by-second using AcqKnowledge software.
EKG, which provides a measure of cardiac activity, was measured through three Biopac pre-gelled, self-adhering, disposable electrodes placed at three places on the torso: the right clavicle at the midclavicular line, just above the last bone of the ribcage at the left midaxillary line, and just below the last bone of the ribcage at the right midaxillary line. Cardiac impedance was collected with four self-adhering electrodes: one placed at the suprasternal notch (jugular notch), one at the inferior end of the sternum (xiphoid process), and two on the back (one located roughly at the fourth cervical vertebra and one located roughly at the eighth thoracic vertebra). MindWare Impedance Cardiography and MindWare HRV 2.51 software (MindWare Technologies Ltd., Gahanna, OH, United States) were used to clean the raw data and extract the systolic time intervals (PEP, LVET) and heart rate variability (RSA) using spectral analysis. Clear artifacts in the EKG data were deleted and excluded from analyses. In addition, SCL was measured using two disposable electrodes filled with isotonic recording gel that were placed on the middle phalange of the second and fourth fingers of the non-dominant hand. While indicators of both sympathetic (SNS) and parasympathetic nervous system (PNS) arousal can be obtained from analysis of physiological data, the present study focused on the pattern of SNS arousal. SNS indices include heart rate (HR), cardiac output (CO), stroke volume (SV), left ventricular ejection time (LVET), cardiac impedance (Zo), pre-ejection period (PEP), and SCL. HR is the number of contractions of the heart per minute. CO is a measure of the overall volume of blood being pumped by the heart per minute. SV represents the volume of blood ejected by the left ventricle of the heart in one beat. LVET is a measure of myocardial contractility. Zo is an indicator of blood flow through the thoracic cavity. PEP is an indicator of sympathetic myocardial drive and indicates the interval between the onset of the EKG Q-wave and the onset of left ventricular ejection. SCL is an index of sweat gland activity at the surface of the skin. --- Emotional Fit Indices Following a calculation method used in previous studies of emotional fit with culture (De Leersnyder et al., 2014, 2015), three types of emotional fit with individuals' own culture (i.e., Asian American and European American) were calculated using self-report emotion ratings (self-report emotional fit), behavioral responses (behavioral emotional fit), and physiological responses (physiological emotional fit). The means and variances of all variables used to calculate emotional fit are presented in Table 1. In order to calculate self-report emotional fit, we first calculated the group's average rating for each of the 16 different emotions, excluding the respondent's own scores, which constituted the group's average emotional profile. We then correlated each individual's emotional profile, consisting of the 16 emotions, with the group's average emotional profile. The derived correlation coefficients were Fisher z-transformed in order to achieve a normal distribution of the data. The final correlation coefficient for each individual served as the self-report emotional fit score: the degree to which an individual's emotional profile resembles the normative emotional profile of their group. This process was repeated for each of the three time points (baseline, Films 1 and 2), resulting in three separate self-report emotional fit scores for Times 0, 1, and 2.
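As a minimal sketch of this profile-correlation computation (our own code; the array names and toy ratings are hypothetical), the routine below returns one Fisher z-transformed fit score per participant and would apply equally to the behavioral and physiological profiles described next:

```python
import numpy as np

def emotional_fit_scores(ratings):
    """Profile-correlation emotional fit, following the procedure described above.

    ratings: 2-D array of shape (n_participants, n_emotions), e.g., the 16
    self-reported emotion ratings of one cultural group at one time point.
    Returns one Fisher z-transformed fit score per participant.
    """
    ratings = np.asarray(ratings, dtype=float)
    n = ratings.shape[0]
    fits = np.empty(n)
    for i in range(n):
        # Group-average emotional profile, excluding the respondent's own ratings
        group_profile = np.delete(ratings, i, axis=0).mean(axis=0)
        # Correlate the individual's profile with the group's average profile
        r = np.corrcoef(ratings[i], group_profile)[0, 1]
        # Fisher z-transform to normalise the distribution of correlations
        fits[i] = np.arctanh(r)
    return fits

# Hypothetical example: 4 participants rating 5 emotions on a 0-8 scale
example = [[2, 5, 0, 1, 7],
           [1, 6, 0, 2, 6],
           [3, 4, 1, 1, 8],
           [0, 2, 5, 6, 1]]
print(emotional_fit_scores(example))  # the last, dissimilar profile fits least well
```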
Behavioral emotional fit was calculated using the facial expression data. The six emotions used for behavioral emotional fit were happiness, sadness, anger, surprise, fear, and disgust. Following the same procedure as for self-report emotional fit, the group's average behavioral emotional profile was derived from the group's average score on each of the six different emotions, excluding the respondent's own scores. Then the group's emotional profile was correlated with each individual's emotional profile, and the Fisher's z-transformation was applied. This process was repeated twice, using the responses to Films 1 and 2, resulting in two separate behavioral emotional fit scores for each individual at Times 1 and 2. For calculating physiological emotional fit, we used seven different indices of sympathetic activation collected during the first two films. These were HR, CO, SV, LVET, Zo, PEP, and SCL. Among these, Zo and PEP decrease as SNS activity increases. Thus, the Zo and PEP indices were reverse coded by multiplying them by -1, so that an increase in the number would indicate greater SNS arousal. In addition, each of these indices was originally on a different scale. Therefore, we standardized the scores using the formula (x - x_min)/(x_max - x_min), which transformed the data onto a 0-1 scale. The rest of the process of calculating emotional fit was identical to that of self-report and behavioral emotional fit. We first calculated the group's average scores for each of the seven sympathetic indices while excluding the respondent's own score and used these as the group's average emotional profile. This was correlated with each individual's profile of physiological responses. The correlation coefficients were then Fisher's z-transformed. The process was repeated twice for each individual using the responses to Films 1 and 2, which resulted in two separate physiological emotional fit scores for each individual at Times 1 and 2. --- RESULTS --- Data-Analytic Approach To test the link between participants' well-being and emotional fit and whether culture moderates this link, we conducted a series of multiple regression analyses. In these analyses, Emotional Fit variables were always entered at Step 1, followed by Culture at Step 2, and the interaction between Emotional Fit and Culture at Step 3 to test for the hypothesized moderation by culture of the link between emotional fit and well-being. When significant interactions between emotional fit and culture emerged, the identified interaction effects were decomposed using a simple slopes analysis (Aiken et al., 1991). In addition, based on prior evidence suggesting gender differences in response to disgust (e.g., Schienle et al., 2005; Rohrmann et al., 2008), we examined the effects of gender on (a) the emotional responses to the disgust film and (b) our indices of emotional fit. Some gender differences emerged across specific facial expressions in response to disgust, and behavioral emotional fit also varied significantly by gender.¹
¹ We explored gender differences in self-reported, behavioral (facial expressions), and physiological responses to the disgust film. Self-reported emotions in response to the disgust film did not differ by gender, ps > 0.05. Similarly, there were no significant gender differences in facial expressions of disgust, anger, and fear in response to the disgust film, ps > 0.05. However, males showed more happiness expressions than females, t(55) = -2.35, p = 0.023, while females showed more expressions of surprise, t(89) = 2.91, p = 0.005, and sadness, t(103) = 2.96, p = 0.004, relative to males. Looking at physiological responses, males showed greater SCL responses than females, t(81) = -2.44, p = 0.017, but there were no other significant gender differences across the remaining physiological indices, ps > 0.05. We also examined whether emotional fit differed by gender. There were no gender differences in self-report emotional fit at any of the three time points, nor in physiological emotional fit at the two available time points, ps > 0.05. However, there were significant gender differences in behavioral emotional fit at both Times 1 and 2, such that males showed greater behavioral emotional fit than did females, t(120) = -2.24, p = 0.027, and t(118) = -2.78, p = 0.006, for Times 1 and 2, respectively. Given these gender differences in facial expressions in response to the disgust film, as well as in behavioral emotional fit, we included gender as a covariate in the regression models testing the effect of behavioral emotional fit on the outcome variables. This did not change any of the reported patterns of results, and these analyses are therefore not included in the manuscript, given that examination of gender was outside the scope of the present study.

As a result, we re-ran our regression models controlling for gender, and this did not change any of our reported findings. Therefore, we report the models without gender for the sake of parsimony. In reporting the results, we focus on the main effect of emotional fit at Step 1 and the interaction between emotional fit and culture at Step 3. Correlations between emotional fit and well-being variables and descriptive statistics are presented in Table 2. For our primary analyses (self-report emotional fit at Time 0), we chose not to correct the alpha level (0.05), in order to preserve power and because we were testing a priori hypotheses (confirmatory analyses) and conducted only five regressions to test two questions (Rothman, 1990; Proschan and Waclawiw, 2000; van Belle, 2008; Rubin, 2017). For the exploratory analyses, we employed the Bonferroni correction given the large number of tests conducted. In all, we tested how three types of fit (self-report, behavioral, and physiological) relate to two types of outcomes (individual well-being and collective aspects of well-being) using a total of 30 regressions across the specific outcome variables and time points considered. Thus, an adjusted p-value of 0.002 (0.05/30) was used to re-evaluate any significant findings that emerged from analyses using the uncorrected p-value. We chose to present the results both before and after the Bonferroni correction, given the recommendation that corrections for multiple comparisons also have the drawback of reducing power (Rothman, 1990).
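As a concrete illustration of the three-step moderated regression and simple-slopes decomposition described above, the following sketch uses statsmodels on simulated data. The variable names, the data frame, and the built-in effect are hypothetical stand-ins, not the study's data or code; the Bonferroni threshold for the exploratory tests appears as a final comment.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-in data: one row per participant (names and effects are invented).
rng = np.random.default_rng(1)
n = 120
df = pd.DataFrame({
    "fit_t0": rng.normal(0.8, 0.3, n),                     # Fisher z fit score
    "culture": rng.choice(["Asian", "European"], size=n),  # cultural group
})
df["collective_self_esteem"] = (
    4 + 1.5 * df["fit_t0"] * (df["culture"] == "Asian") + rng.normal(0, 1, n)
)

# Step 1: emotional fit; Step 2: add culture; Step 3: add the fit x culture interaction.
step1 = smf.ols("collective_self_esteem ~ fit_t0", data=df).fit()
step2 = smf.ols("collective_self_esteem ~ fit_t0 + C(culture)", data=df).fit()
step3 = smf.ols("collective_self_esteem ~ fit_t0 * C(culture)", data=df).fit()
print(step3.params)

# Simple slopes: refit with each group as the reference level; the fit_t0
# coefficient is then the slope of emotional fit within that group.
for ref in ("Asian", "European"):
    m = smf.ols(
        f"collective_self_esteem ~ fit_t0 * C(culture, Treatment(reference='{ref}'))",
        data=df,
    ).fit()
    print(ref, "slope =", round(m.params["fit_t0"], 2), "p =", round(m.pvalues["fit_t0"], 4))

# Bonferroni-adjusted alpha for the 30 exploratory regressions: 0.05 / 30.
print("adjusted alpha:", round(0.05 / 30, 4))
```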
--- Self-Report Emotional Fit We first examined the link between self-report emotional fit at Time 0 (EF_T0-SR) and individual well-being variables, and whether culture moderated this relationship. There was a significant main effect of EF_T0-SR on depression, with higher emotional fit predicting reduced depression, β = -5.45, t(1, 125) = -3.91, p < 0.001. As predicted, the interaction between EF_T0-SR and culture on depression was not significant. Similarly, a significant main effect of EF_T0-SR was found in predicting life satisfaction, such that higher emotional fit predicted greater life satisfaction, β = 3.29, t(1, 125) = 3.05, p = 0.003. As hypothesized, culture did not moderate this relationship either. Next, we tested the link between self-report emotional fit at the remaining time points and individual well-being variables. The results were largely consistent with the Time 0 findings. There was a significant main effect of self-report emotional fit at Time 1 (EF_T1-SR) on depression, such that higher emotional fit predicted reduced depression, β = -4.26, t(1, 125) = -3.21, p = 0.002. There was a significant main effect of EF_T1-SR on life satisfaction, with higher emotional fit predicting greater life satisfaction, β = 2.50, t(1, 125) = 2.44, p = 0.016. The same pattern of results emerged with self-report emotional fit at Time 2 (EF_T2-SR). There were significant main effects of EF_T2-SR on both depression and life satisfaction, β = -3.56, t(1, 124) = -2.19, p = 0.03, and β = 2.75, t(1, 124) = 2.24, p = 0.027, respectively. After applying a Bonferroni correction to these exploratory analyses at Times 1 and 2, only the relationship between EF_T1-SR and depression remained significant. Culture did not moderate any of the associations between self-report emotional fit at Times 1 and 2 and individual well-being. Next, looking at the effects of emotional fit on collective aspects of well-being, there was a significant main effect of emotional fit at Time 0 on collective self-esteem (i.e., one's evaluation of how good one's ethnic group is), with higher emotional fit predicting greater collective self-esteem, β = 1.64, t(1, 125) = 2.43, p = 0.017. As hypothesized, this main effect was qualified by a significant interaction between EF_T0-SR and culture, β = 2.79, t(3, 123) = 2.08, p = 0.04. A follow-up simple slopes analysis revealed that the simple slope of the regression of collective self-esteem onto EF_T0-SR for Asian Americans was significant (simple slope = 3.05), t(123) = 3.20, p = 0.002, with higher EF_T0-SR predicting greater collective self-esteem (Figure 1). In European Americans, the relationship between collective self-esteem and EF_T0-SR was non-significant (simple slope = 0.27), t(123) = 0.28, p = 0.779. These findings were specific to Time 0 emotional fit. There were no significant main effects of EF_T1-SR and EF_T2-SR on collective self-esteem, and no cultural moderation was found at these additional time points. The effects of emotional fit on measures of how important one's ethnicity is to one's own self-concept (CSES identity and racial centrality) were non-significant across all three time points. That is, EF_SR at Times 0, 1, and 2 did not predict either CSES identity or racial centrality, and there was no cultural moderation, all ps > 0.05. --- Additional Indices of Emotional Fit Next, we explored whether behavioral and physiological indices of emotional fit predicted individual and collective aspects of well-being.
Neither behavioral emotional fit at Time 1 (EF_T1-BEH) nor at Time 2 (EF_T2-BEH) predicted any of the outcome variables, and there was no interaction between EF_BEH and culture. Looking at physiological indices of emotional fit, there was no main effect of physiological emotional fit at Time 1 (EF_T1-PHY) on any of the outcome variables, and no cultural moderation was found. Similarly, there was no main effect of physiological emotional fit at Time 2 (EF_T2-PHY) on any of the outcome variables. However, there was a marginally significant interaction effect between EF_T2-PHY and culture in predicting racial centrality, β = 4.03, t(3, 91) = 1.92, p = 0.058. A follow-up simple slopes analysis indicated that the simple slope of the regression of racial centrality onto EF_T2-PHY for Asian Americans was significant (simple slope = 3.67), t(91) = 2.09, p = 0.04, with higher EF_T2-PHY predicting greater racial centrality (Figure 2). In contrast, the simple slope was non-significant in European Americans (simple slope = -0.36), t(91) = -0.32, p = 0.753. This marginally significant interaction became non-significant when the Bonferroni-corrected p-value was applied. --- DISCUSSION The present study examined the association between emotional fit and individual and collective aspects of well-being, and the role of culture in this relationship. Emotional fit based on self-report ratings of emotions significantly predicted individual well-being, including reduced depression and greater life satisfaction, in both Asian Americans and European Americans. In contrast, self-report emotional fit in the absence of laboratory stimuli predicted collective aspects of well-being, particularly collective self-esteem, only in Asian Americans. In addition, emotional fit based on physiological response to a strong negative stimulus predicted greater identification with one's group only in Asian Americans, though this cultural moderation was only marginally significant in the initial test and disappeared when the Bonferroni correction was applied. --- Self-Report Emotional Fit Emotional fit based on self-reported emotions at all three time points was associated with individual well-being (i.e., lower depression and greater life satisfaction) across cultures. This finding is in line with the view that, while there may be different cultural mandates for well-being in interdependent and independent cultures (e.g., social harmony in Japan and personal control in the United States; Kitayama et al., 2010), being in alignment with one's own cultural norms around emotion is generally important for individual well-being across cultures. It has been shown that even though different emotions are preferred in Japan and the United States, the experience of culturally preferred emotions was associated with happiness in both cultures (Kitayama et al., 2006). In a similar vein, experiencing a culturally normative pattern of emotions has been found to be important for psychological well-being in both independent and interdependent cultures, although the specific contexts in which emotional fit becomes crucial vary depending on the respective cultural values (De Leersnyder et al., 2015). Because people's emotions are shaped by how they perceive and appraise their environment (Ellsworth and Scherer, 2003), their fit with the average emotional pattern of others in the same culture may represent their level of sharing and participating in the predominant world-view of that culture.
Thus, emotional fit to a certain extent may reflect a general level of social adjustment (De Leersnyder et al., 2011), which may have universal implications for one's psychological well-being. While we have conceptualized the above relationship as one where emotional fit with one's group might lead to increased well-being, we can also consider the pathway in which individual well-being leads to increased emotional fit. For instance, the cultural norms hypothesis of depression (Chentsova-Dutton et al., 2007) suggests that the symptoms of depression (i.e., impaired concentration, low energy, and anhedonia) may impair individuals' abilities to attend to and enact cultural norms and ideals regarding emotion and emotional expression. Indeed, it has been demonstrated that depressed individuals showed lower emotional fit with their cultural group than did non-depressed individuals (Chentsova-Dutton et al., 2007). These findings demonstrate that perhaps individuals who have lower well-being and greater depression may have more difficulty responding in a culturally concordant manner. As such, more research is needed in order to establish the directionality of the relationship between emotional fit and well-being. In contrast to the individual well-being findings, culture moderated the relationship between self-report emotional fit and collective identity, particularly, individuals' evaluation of their own cultural group (collective self-esteem). In Asian Americans, greater emotional fit predicted more positive evaluation of their own cultural group, whereas such a relationship was not present in European Americans. People generally experience similarity as safe and comforting, and similarity leads to greater liking (Montoya et al., 2008). This may be especially so in cultures where social harmony and conformity are greatly valued and practiced. Previous research has shown that people in collectivistic societies conform more than those in individualistic societies (Bond and Smith, 1996). It is possible that this greater importance of similarity in East Asian cultures leads to greater liking or more positive evaluation of the group that one also shares an emotional response pattern with. Alternatively, individuals may be more motivated to behave consistently with the group when they feel positively about their own cultural group. It is possible that we see this pattern only in Asian American individuals because conformity, in general, is practiced more in collectivistic than individualistic societies (Bond and Smith, 1996). On the other hand, the inconsistency between one's own emotions and the modal emotional pattern of one's culture may be more self-threatening in interdependent culture. Negative evaluation of a group that is seen as dissimilar to oneself may represent an attempt to reconcile this threat to self by degrading dissimilar others and in turn preserving or enhancing the self. Alternatively, however, the experience of dissimilarity may lead to negative evaluation of both the individual and group in interdependent cultures. Extensive research on interdependent self-construal in interdependent cultures (e.g., Markus and Kitayama, 1991) suggests that there may be a greater overlap between individual and collective selves in Asian cultures. 
Although the evaluation of the individual self (e.g., personal self-esteem) was not measured in the current study, it is possible that reduced fit with other Asian Americans led to more negative evaluations of the individual self, which in turn spilled over to the evaluation of their collective self. In addition to the possible role of interdependence and collectivist values in the present findings, the role of Asian Americans' position as a racial minority group in the United States cannot be ignored. For instance, the status of a racial minority and the repeated experience of being marginalized may have led Asian Americans to seek belonging and to place a greater value on the group through which they can fulfill such a need. As such, Asian Americans who share emotional similarity with the members of their cultural group may be able to more readily satiate their need for belonging through their group membership and, in turn, evaluate their group more positively. Additionally, because members of a minority often experience being perceived as representing their broader minority group, Asian Americans may be more aware of and sensitive to how their individual behavior reflects on outside perceptions of their group as a whole. In the presence of this heightened sense of prescribed connection between their own behaviors and the outside perception of their group, Asian Americans may experience the group with which they share emotional similarity (i.e., greater emotional fit) as less effortful to represent, thus leading to greater liking or a more positive evaluation. Interestingly, the results relating self-report emotional fit to collective self-esteem were specific to emotional fit at baseline, before any specific laboratory stimuli were presented. This could be because reflective responses to a strong emotional stimulus may override individual or cultural variability in emotional patterns, leading to too little variability in emotional fit indices, which in turn may limit the possibility of identifying any meaningful patterns between emotional fit and outcome measures. In fact, the variance in self-report emotional fit was lowest at Time 2, when fit was measured in response to a strong negative stimulus. The pattern of results regarding individual well-being is somewhat consistent with this point as well. While the effect of self-report emotional fit on individual well-being was observed at all three time points, the magnitude of the effect decreased from emotional fit at Time 0, to Time 1 (in response to the neutral film), and to Time 2 (in response to the disgust film), and some of the Time 1 and Time 2 effects were eliminated when employing the Bonferroni correction. --- Additional Indices of Emotional Fit Another aim of this study was to explore whether any of the effects found with self-report emotional fit would be replicated with other indices of emotional fit, such as behavioral and physiological emotional fit. We did not find comparable patterns of results with the other indices of emotional fit, which is consistent with the dual-process perspective suggesting that there is little response coherence between reflective and automatic emotion systems (Evers et al., 2014). In addition, indices of emotional fit at different levels were largely uncorrelated with each other, although emotional fit indices within the same level (e.g., self-report, physiology) were generally related to each other.
Behavioral emotional fit in response to both the neutral and disgust films did not predict any individual or collective aspects of well-being. Similarly, physiological emotional fit in response to the neutral film did not predict any of the outcome variables. However, a marginally significant interaction pointed to a pattern consistent with our prediction, such that higher physiological emotional fit in response to the disgust film was associated with greater racial centrality in Asian Americans, whereas there was no such relationship in European Americans. In other words, the perceived level of group identification (racial centrality) was mirrored in greater individual-group synchrony in automatic responses to a strong emotional situation in Asian Americans. It is conceivable that when members of an interdependent culture identify with their group, their collective identity gets deeply internalized, to the point that it is reflected in greater physiological concordance with their group members. This result, however, became non-significant after employing the Bonferroni correction. Given the small sample size, we believe this finding may nevertheless be worth testing in future studies, especially since we observed a pattern similar to that found in the primary analyses (emotional fit relating to collective aspects of well-being for Asian Americans only), although only in response to a strong negative stimulus (Time 2). Future studies aiming to measure physiological emotional fit may note that in the absence of a stimulus to respond to (no stimuli or neutral stimuli) there may be too much variability/physiological noise across subjects to be able to calculate a meaningful fit index. However, the introduction of a punctate stimulus may organize the physiological system enough to be able to calculate the fit indices discussed. The variance in physiological emotional fit at Time 1 was considerably greater than that at Time 2, which further supports this possibility. Thus, while these findings are not robust, they are suggestive of a possible future direction to pursue when there is adequate power to test the hypothesis. --- Limitations and Future Directions The current study has a few important limitations that are worth noting. First, while we used data from a previous study that allowed us to explore behavioral and physiological emotional fit in addition to self-report emotional fit, we did not have behavioral and physiological emotional fit indices at Time 0. Thus, we cannot know whether our self-report emotional fit findings from Time 0 would be corroborated with behavioral and physiological emotional fit measured in the same context. In addition, the choice of emotion elicitors was restricted by the nature of the convenience dataset. In particular, given that disgust may be an emotion with the least cultural variability, the use of the disgust film at Time 2 allowed for a more conservative test of our research question but also may have underestimated the impact of emotional fit. Future studies employing varying indices of emotional fit across diverse emotional contexts are needed for a more in-depth investigation into the effects of emotional fit. Second, our study is cross-sectional, and thus cannot answer questions regarding the directionality of the observed links between emotional fit and well-being.
Additionally, the design of the current study does not allow us to explore the specific mechanisms underlying the relationship between emotional fit and well-being, or the cultural moderation observed in predicting collective aspects of well-being. Important next steps would be to examine the causality of the link between emotional fit and well-being, through a longitudinal design or a laboratory experiment in which emotional fit is manipulated (e.g., Livingstone et al., 2011), and to examine through what processes such causal effects emerge. Third, it will be important to replicate these results in East Asians residing in East Asian countries in order to disentangle the potential role of interdependence from that of minority experience in the current findings. Fourth, careful studies examining gender effects on emotional fit would also be a fruitful avenue of future research. Based on the observed gender differences in behavioral emotional fit, it may be worth examining gender-specific emotional fit (emotional fit calculated using a same-gender reference group) and how it relates to well-being. Lastly, prior studies examining emotional fit using the same profile correlation approach have used relatively larger samples (e.g., N = 266 in Study 3 of De Leersnyder et al., 2015) than the current study. The relatively small size of the current sample, especially in regard to the exploratory analyses with physiological emotional fit (Asian American n = 39, European American n = 56), may have limited our ability to detect significant relationships between the primary variables of interest. Although this preliminary result is interesting, future studies using a larger sample should further examine this finding to draw more meaningful conclusions. --- CONCLUSION Individuals must constantly navigate their social worlds while paying simultaneous attention to both their own needs and behaviors and the needs and behaviors of those around them. However, the extent to which individual and group behaviors fit with each other can vary meaningfully across cultural groups, as can the relationship between this fit and well-being. The present study revealed that emotional fit based on individuals' subjective emotional experience predicted individual well-being across cultures, but predicted collective self-esteem only in Asian Americans. As the first study to examine the relationship between emotional fit and collective aspects of well-being, the current work adds to the growing body of research attempting to understand emotions as social and interpersonal processes that are naturally embedded in cultural contexts. We believe this underscores the need to consider not only how emotions may conform to normative patterns in one's cultural milieu, but also how this degree of fit may impact members of different cultures in different ways. --- ETHICS STATEMENT This study was carried out in accordance with the recommendations of the American Psychological Association's ethical standards, with written informed consent from all subjects. All subjects gave written informed consent in accordance with the Declaration of Helsinki. The protocol was approved by the Penn State University Institutional Review Board. --- AUTHOR CONTRIBUTIONS SC contributed to the conception of the work and the collection, cleaning, analysis, and interpretation of data, and she was responsible for drafting and revising the manuscript. NVD contributed to the conception of the work, cleaning of physiological data, and revising the manuscript.
MM contributed to collection and cleaning of physiological and behavioral data. DA and RA contributed to the cleaning of behavioral data and revising the manuscript. JS supervised the project and contributed to all aspects of the work. --- Conflict of Interest Statement: The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
The present study examined how emotional fit with culture - the degree of similarity between an individual's emotional responses and the emotional responses of others from the same culture - relates to well-being in a sample of Asian American and European American college students. Using a profile correlation method, we calculated three types of emotional fit based on self-reported emotions, facial expressions, and physiological responses. We then examined the relationships between emotional fit and individual well-being (depression, life satisfaction) as well as collective aspects of well-being, namely collective self-esteem (one's evaluation of one's cultural group) and identification with one's group. The results revealed that self-report emotional fit was associated with greater individual well-being across cultures. In contrast, culture moderated the relationship between self-report emotional fit and collective self-esteem, such that emotional fit predicted greater collective self-esteem in Asian Americans, but not in European Americans. Behavioral emotional fit was unrelated to well-being. There was a marginally significant cultural moderation in the relationship between physiological emotional fit in a strong emotional situation and group identification. Specifically, physiological emotional fit predicted greater group identification in Asian Americans, but not in European Americans. However, this finding disappeared after a Bonferroni correction. The current findings extend previous research by showing that, while emotional fit may be closely related to individual aspects of well-being across cultures, the influence of emotional fit on collective aspects of well-being may be unique to cultures that emphasize interdependence and social harmony, and thus value being in alignment with other members of the group.
BUILDING A HUMAN REFERENCE ATLAS Andreas Bueckle, Indiana University, Bloomington, Indiana, United States The Human Reference Atlas (HRA, https://humanatlas.io) is a comprehensive, high-resolution, three-dimensional atlas of all the cells in the healthy human body. The HRA provides standard terminologies and data structures for describing specimens, biological structures, and spatial positions linked to existing ontologies. In this talk, we will present a high-level overview of the major components of the HRA--including 67 anatomically correct 3D Reference Objects for 29 organs and 31 Anatomical Structure, Cell Types, and Biomarker (ASCT+B) Tables--and the tools to explore, use, author, and review the HRA--including the Registration User Interface, the Exploration User Interface, the ASCT+B Reporter, and the HRA Organ Gallery in virtual reality. We welcome experts and practitioners to join the monthly WG meetings (sign up at https://iu.co1.qualtrics.com/jfe/form/SV_bpaBhIr8XfdiNRH), to explore and contribute to this effort, and to provide feedback on the evolving HRA from diverse perspectives. --- SESSION 4205 (SYMPOSIUM) Abstract citation ID: igad104.1557 --- FINDINGS FROM NSHAP: SOCIAL CONNECTEDNESS, HEALTH INDICATORS, MEDICATION EFFECTS, AND PREDICTING MORTALITY Chair: Lissette Piedra Discussant: Amelia Karraker The National Social Life, Health, and Aging Project's broad range of social measures and objective and self-reported health measures enables detailed analysis of the intersections between these fundamental aspects of older adults' lives. The papers in this symposium explore various aspects of these topics from different angles. The first explores employment as an important form of social participation, establishing that full-time employment among respondents is associated with better cognitive function and fewer ADL and IADL difficulties. The second examines how social isolation affects men and women differently. Social networks are the focus of the third paper, which compares family and friendship ties. Using NSHAP's unique medication log, Wilder examines sleep disturbances and the prevalence of respondents taking medications with somnolence as an adverse event, demonstrating the need for more research into how this might affect older adults' health and well-being. Li uses NSHAP data to develop machine learning models to predict 10-year mortality of older adults in the US, which perform with better accuracy than logistic regression. Abstract citation ID: igad104.1558 --- EMPLOYMENT AS A FORM OF SOCIAL PARTICIPATION AMONG OLDER ADULTS: LINKS TO COGNITIVE AND FUNCTIONAL HEALTH Peilin Yang 1, Linda Waite 2, and Ashwin Kotwal 3, 1. University of Michigan Ann Arbor, Ann Arbor, Michigan, United States, 2. University of Chicago, Chicago, Illinois, United States, 3. University of California San Francisco, San Francisco, California, United States Within the active aging literature, studies on social participation and health concur that people who are better socially integrated and engage in social activities tend to have better physical, mental, and cognitive health.
This study revisits the literature by aiming to address three primary knowledge gaps: 1) we explicitly examine the change over a 5-year interval in cognition, activities of daily living (ADL), and instrumental activities of daily living (IADL) associated with social participation five years prior; 2) we examine the association of diversity in participation with not only cognitive function, but also ADL and IADL, about which little is known; 3) we conceptualize employment in later life as a kind of social participation, a part of older adults' lives that is overlooked in the social participation literature. We also examine whether the relationship between social participation and cognition, ADL, and IADL is the same for men and women, and for those employed and those not employed. The study finds that neighborhood participation at a high level indicates worse cognitive, ADL, and IADL outcomes 5 years later, and a higher level of neighborhood participation is more indicative of worse cognitive outcomes for men than for women. Full-time employment predicts better cognitive, ADL, and IADL outcomes 5 years later. We also find evidence that full-time work creates a stronger buffer against cognitive decline and against developing ADL and IADL difficulties, even among older adults who socialize with family and friends, participate in the community, and participate in the neighborhood at a high level, respectively. Abstract citation ID: igad104.1559 --- THE RELATIONSHIP OF SOCIAL ISOLATION TO SELF-NEGLECT AMONG OLDER ADULTS: RESULTS OF A NATIONAL SURVEY Self-neglect among older adults is characterized by inattention to hygiene and one's immediate living conditions, and may reflect unmet needs from social relationships. We therefore determined whether social isolation was associated with self-neglect and how the association differed by gender. We used data from the National Social Life, Health, and Aging Project (NSHAP) Wave 3 (2015), a nationally representative survey of 3,677 community-dwelling older adults. Social isolation was determined using a 12-item scale assessing household contacts, social network interaction, and community engagement. Self-neglect was assessed in person and included 1) body neglect (lowest quintile of bodily self-presentation related to clothes and hygiene) and 2) household neglect (lowest quintile of household building condition, cleanliness, odor, and clutter). Logistic regression was used to determine the adjusted probability of self-neglect by social isolation, with interaction terms for gender. Results indicated that the association between social isolation and self-neglect differed by gender (p-values for interaction: body neglect: 0.02, household neglect: 0.20). Among women, social isolation was associated with a higher risk of body neglect (social isolation: 26% vs no isolation: 14%, p=0.001) and household neglect (23% vs 17%, p=0.05). For men, social isolation was not associated with body neglect (27% vs 23%, p=0.2) or household neglect (23% vs 22%, p=0.8). In summary, social isolation was associated with body and household neglect among women, but was not associated with neglect among men. Future work should investigate mechanisms for gender differences and interventions to address or prevent self-neglect through enhancing social connectedness.
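For readers who want to see the shape of the analysis described in this abstract, the sketch below is a hypothetical illustration (invented variable names and simulated data, not the NSHAP analysis code): a logistic regression of body neglect on social isolation, gender, and their interaction, followed by adjusted probabilities computed by averaging model predictions over the sample.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Invented illustration data: 1 = body neglect present, 0 = absent.
rng = np.random.default_rng(2)
n = 1000
df = pd.DataFrame({
    "isolated": rng.integers(0, 2, n),   # 1 = socially isolated
    "female": rng.integers(0, 2, n),     # 1 = female
})
logit_p = -1.5 + 0.2 * df["isolated"] + 0.8 * df["isolated"] * df["female"]
df["body_neglect"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

# Logistic regression with a social-isolation x gender interaction term.
model = smf.logit("body_neglect ~ isolated * female", data=df).fit(disp=0)
print(model.summary().tables[1])

# Adjusted probabilities: average the model's predictions over the sample
# after setting isolation and gender to each scenario of interest.
for female in (0, 1):
    for isolated in (0, 1):
        scenario = df.assign(isolated=isolated, female=female)
        print(f"female={female}, isolated={isolated}: {model.predict(scenario).mean():.2f}")
```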
Abstract citation ID: igad104.1560 --- LOCAL FAMILY AND FRIEND TIES AND THEIR RELATIONSHIP TO SOCIAL SUPPORT AND STRAIN AMONG OLDER ADULTS Won Choi, University of Chicago, Chicago, Illinois, United States Family members and friends who live nearby are likely valuable sources of support for older adults. At the same time, local family and friend ties may also be a source of strain, as spatial proximity to close ties can generate more intense interactions. Using data from Round 3 (2015-2016) of the National Social Life, Health, and Aging Project (NSHAP) (N=3,615), this study examines how local family and friend ties reported in older adults' social network rosters are associated with instrumental and emotional support and social strain among community-dwelling older adults aged 50 and older. Results from ordered logistic regression models show that having a local friend tie is associated with higher levels of instrumental and emotional support from friends and lower levels of instrumental and emotional support from family. Having a local family tie, on the other hand, is associated with higher levels of instrumental support from family and lower levels of emotional support from friends. Having a local family tie is not related to emotional support from family or instrumental support from friends. Results also indicate that having a local friend tie increases the odds of reporting that friends make too many demands (i.e., higher friend strain), whereas having a local family tie is not a predictor of family strain. Together, the results suggest that spatial proximity to friends and, to a lesser degree, family members is linked to how older adults experience social support and strain. Abstract citation ID: igad104.1561 --- USE OF PRESCRIPTION MEDICATIONS WITH SOMNOLENCE AS A POTENTIAL ADVERSE EFFECT AMONG OLDER ADULTS IN THE UNITED STATES Jocelyn Wilder, NORC, Chicago, Illinois, United States Over half of community-dwelling older adults experience sleep disorders, with approximately 40% reporting somnolence and/or excessive daytime sleepiness, which is associated with an increased risk for cognitive impairment and premature mortality. The use and concurrent use of prescription medications with somnolence as an adverse effect may be an overlooked contributor to this growing problem. This study aims
The NIH SenNet consortium aims to dissect the heterogeneity of senescent cells (SnCs) and map their impact on the microenvironment at single-cell resolution and in the spatial tissue context, which requires the implementation of an array of omics technologies to comprehensively identify, characterize, and spatially profile SnCs across tissues in humans and mice. These technologies are broadly categorized into two groups - single cell omics and spatial mapping. To achieve single cell resolution and overcome the scarcity of SnCs, high-throughput single-cell and single-nucleus transcriptomic techniques have become a mainstay tool for surveying tens of thousands of cells to identify transcriptional signatures in rare cell populations, enabling discovery of potential new SnC biomarkers. Novel single cell mass spectrometry methods are being developed for unbiased discovery of proteomic signatures of SnCs. A hallmark of SnCs is the senescence-associated secretory phenotype (SASP), which requires the use of proteomics, secretomics, metabolomics, and lipidomics, especially for SASP-associated extracellular vesicles, for comprehensive characterization of the SASP. High-resolution molecular and cellular imaging of gene expression (e.g., MERFISH) or protein markers (e.g., CODEX) is critical for the study of SnCs in the large-scale tissue context. NGS-based spatial omics sequencing is poised to bridge the gap to realize both genome scale and cellular resolution in mapping SnCs in tissue. Novel technologies such as Seq-Scope and Pixel-Seq developed within SenNet further enable subcellular resolution. SenNet investigators have also developed spatially resolved epigenome and multi-omics sequencing techniques to link the transcriptional or proteomic phenotype of SnCs to epigenetic mechanisms. Further integration with high-resolution imaging makes spatial omics the crucial linchpin in connecting mechanistic underpinnings and molecular signatures with morphological features and spatial distribution. All these are critical for the construction of a map of SnCs and associated niches in the native tissue environment implicated in human health, aging, and disease, which is one of the main goals of the SenNet consortium.
The reproductive approach has achieved astounding successes very quickly and promises to continue to advance exponentially (think about the development of mRNA vaccines against COVID-19 (Pizza et al. 2021), where being able - thanks to AI tools - to do reprogramming as fast and in as coordinated a way as possible helped manage the deluge of data associated with the project). In so many areas, reproductive AI is better than and tends to replace human intelligence because it is faster, more reliable, and more consistent in its results. The absolute reliability of AI in standardized tasks is probably the main difference with a human operator. While the latter may manifest greater degrees of freedom in task execution (something to do with the cognitive and productive aspects of AI), automated systems guarantee unambiguous, infinitely repeatable performance without fluctuations. This is also a cultural feature that cannot be overlooked when considering the labour market and industrial production. Indeed, one can identify a characteristic sought by both the supply side and the demand side; namely, the desire for a product that is "perfect" insofar as it is not limited or influenced by human "imperfections". The delegation to a "dumb" - as Floridi calls it - but exceptionally effective AI allows us to make our lives much easier and less tiring. AI as a reservoir of capabilities can therefore tackle any number of problems and tasks for which the human-intelligence characteristics of understanding, awareness, sensitivity, semantics, and meaning are not needed. And this happens, as proposed by Floridi (2013, 2014, 2022) among others, as the world adapts to reproductive AI and not vice versa. Industrial automation follows this paradigm. The introduction of robots or devices that carry out production and distribution processes with reduced human intervention or diminishing participation is done by circumscribing the work environment to the limited capabilities of simple machines. We don't try to build a humanoid robot to wash clothes in a bathtub but build a microenvironment (such as a washing machine) that takes advantage of available technology. The same happens with automated ironing. This changes not only the way people work towards the realization of these activities, but also the products for which the services are designed. We are talking here about technologies that are not cutting-edge, where AI plays a limited role. Consider, however, other procedures, such as house cleaning. Robot vacuum cleaners take advantage of AI to move with increased effectiveness in complex environments. However, it is clear that it will soon be the design of homes that adapts to automated service systems, especially with the needs of the elderly in mind, if robotic assistants become more prevalent for lonely people. The self-driving car may be one example among the high-tech ones, where engineering AI is the absolute protagonist (Bonnefon 2021). The self-driving car does not start out as a classic car that is adaptable to different road locations and can, if need be, travel on unpaved terrain or in adverse environmental conditions, such as a blackout of lighting and electronic signage. The self-driving car comes with specific requirements due to the AI technology that allows the vehicle to move without a human driver.
It must move in an environment that allows it to have all the feedback necessary for the efficient execution of its task, which is to move from point A to point B with maximum safety and comfort for the passengers and all who may be in its path. This can be accomplished by engineering the roads, making them suitable for the self-driving car (Birdsall 2014). It is not the car that has to adapt to the environment; rather, it is the environment that is wrapped around a tool that we find particularly useful in terms of saving effort, time, and traffic accidents (Borenstein et al. 2019). Paradoxically, at an early stage, self-driving cars will have a narrow range of available destinations and thus condition the mobility of those who want to rely on them. For instance, robotaxis can only circulate on a few streets in San Francisco (cf. Heaven 2022) or in very small cities (such as Innopolis). In general, wrapping the environment in an infosphere has become an increasingly common practice to exploit the potential of AI, where "the infosphere is the whole system of services and documents, encoded in any semiotic and physical media, whose contents include any sort of data, information and knowledge (...) with no limitations either in size, typology, or logical structure. Hence it ranges from alphanumeric texts (i.e., texts, including letters, numbers, and diacritic symbols) and multimedia products to statistical data, from films and hypertexts to whole text-banks and collections of pictures, from mathematical formulae to sounds and videoclips" (Floridi 1999). Connected to the infosphere is the onlife dimension, i.e., the activity that everyone performs while being connected to digital devices, which are also embedded in the wrapping-around logic we referred to above. Environments are changing so that artificial agents (robots, bots, algorithms) can move with greater ease than humans can now do. In highly digitally wrapped environments, all relevant data are collected (or at least potentially collected) and analysed without the need for other interventions. Thus, decisions and actions can be made automatically by applications and actuators. In this context, consider the process of datafication, which is illustrative of many of the ideas discussed above. Datafication, according to Mayer-Schoenberger and Cukier (2013a, b), is the transformation of social action into online quantified data, a procedure that allows for real-time tracking and predictive analysis of consumers' behaviors. Simply stated, datafication is all about accessing - with the help of AI tools - previously inaccessible processes or activities and turning them into data that can subsequently be monitored, tracked, analyzed and optimized, or even sold (Cukier and Mayer-Schoenberger 2013). To be sure, the exploitation of Big Data can unlock significant value in areas such as decision making, customer experience, market demand predictions, product and market development, and operational efficiency (Yin and Kaynac 2015), and many of the technologies we use in our daily life have enabled different ways of 'datafying' our basic activities and behaviors (Da Bormida 2021). Social networks (such as Facebook or Instagram) notoriously collect and monitor data to market products and services with the intent to produce recommendations to potential buyers (Chamorro-Premuzic et al. 2017).
Yet, datafication is a much more pervasive phenomenon than it may prima facie appear to the naïve eye, as it is actively pursued (with different goals and aims) by many industries (Pybus and Coté 2021), for example:
• by insurance companies, where the data gathered is used to update risk profile development and business models;
• by banks, to establish the trustworthiness of a certain individual requesting - for example - a loan;
• by human resources and hiring managers at various levels, who use datafication to identify risk-taking profiles or even to spot potential personality issues;
• by governments and institutions, where datafication and digitalization are often pursued with the intent of minimizing bureaucracy and optimizing transparency in both decision making and resource allocation;
• (in general), by investors worldwide to boost business opportunities, credentials, and productivity.
For example, very successful companies (such as Netflix, Amazon, Uber, Fitbit) typically merge the resourcefulness of big data with the power of AI to offer their users products that are smart and reliable. In short, one can argue that datafication - especially if pursued in an infosphere - can make our lives smoother and, in doing so, fundamentally change our societies, how people interact with each other and with their institutions, and probably even transform people's understanding of the concept of community as a whole (Skenderija, 2008). Nevertheless, in the face of these positive effects, any data-driven endeavor that takes place in an infosphere must also be considered (and therefore properly assessed) against the backdrop of the complex and multidimensional issues or challenges that it may contribute to form, concerning - for instance - decision-making processes, social solidarity, privacy, security, the management of public goods, civil liberties, or even sovereignty (Da Bormida 2021). For example, in the health care sector, concerns about the datafication of the infosphere relate to the difficulty of respecting ethical boundaries relating to sensitive data (e.g., Ruckenstein and Schüll 2017). Datafication, it has been argued, has the potential to erode goal orientation and the room for professional judgement (Hoeyer and Wadmann 2020), favoring varieties of neoliberal subjectification (Fotopoulou and O'Riordan 2016; Foucault 1991) in the form of tools that may accelerate the withdrawal of the welfare state from citizens' lives, which can eventually turn health care into self-care (Ajana 2017). In the education sector, the major risk involved is that students may feel constantly under 'liquid surveillance' (Bauman and Lyon 2013; Zuboff 2019), due to the continuous collection and processing of their data at all levels of their learning trajectory in the educational system (from the classroom to the school, from the region to the state and internationally [Jarke and Breiter 2019]). This, it has been observed, can potentially lead to a reduction in their creativity and/or to higher levels of stress (Williamson et al. 2020). Thus, while wrapping up environments to harness the potential of AI represents a good way to improve the human condition, the future of our lives is (and will increasingly be) marked by datafication, which may actively modify our environments in the attempt to achieve more effectiveness and efficiency. The modification of work processes pursued within the infosphere of an increasingly datafied society has several consequences.
While many researchers have investigated the consequences of datafication in separate fields (e.g., Da Bormida 2021), not much work has been done - so far - to bring all these insights together in one research paper. This is what we propose to do in our contribution. Specifically, we show that datafication in a rich infosphere may determine that: (a) the full protection of privacy may become structurally impossible, thus leading to undesirable forms of political and social control; (b) workers' degrees of freedom and security may be reduced; (c) creativity, imagination, and even divergence from AI logic might be channeled and possibly discouraged; (d) there will likely be a push towards efficiency and instrumental reason, which will become preeminent in production lines as well as in society. All this encourages reflections on the ways in which digital technologies may foster or hinder decision-making processes in future societies and on how increasingly automatized algorithms, based on machine learning, may gradually take over certain roles that were previously uniquely attributed to humans.

This does not necessarily mean that the development of AI to improve working conditions should be resisted; rather, we should reflect on how to better organise the process to achieve social and moral good. The first concern of ethics in the face of the advance of AI is with workers and their condition. The goal is therefore to identify the risks that individuals and society at large may face and to find regulatory remedies to those risks. In the next four sections, we will look at areas where the spread of AI in workplaces and processes may require conceptual clarification and both ethical and legislative regulation.

--- Privacy Issues

Several researchers working on datafication (e.g., Van Dijck 2014) argue that surveillance is 'too optically freighted and centrally organized a phenomenon to adequately characterize the networked, continuous tracking of digital information processing and algorithmic analysis' (Ruckenstein and Schüll 2017, p. 264) that occurs in the world in which we nowadays live. On these grounds, such researchers propose to replace the term 'surveillance' with the term 'dataveillance' (Gitelman 2013; Ruppert 2011), by which they mean that the act of surveillance in today's world does not take place directly from above, but rather becomes distributed across multiple parties and several domains (covering much of our activities and potentially spanning from business to education, from medicine to justice, from governance to management). These researchers (e.g., McQuillan 2016) also notice a different telos (or end goal) between surveillance and dataveillance. Where the end goal of surveillance might be defined as the ability to constantly 'see' something or someone, the telos of dataveillance is rather concerned with the capability of continuously tracking information across multiple domains to capture emergent patterns capable of predicting people's behaviors (not only of observing them). Yet, algorithms and tracking AI tools are not only used to detect and predict one's behavior but also to shape and actively modify it (Beer 2009; Mackenzie 2005). For example, the data that users generate might be gathered and processed to give digital feedback capable of indirectly modulating and orienting someone's action, in a way that subtly departs from direct panoptic forms of discipline but could be argued to be even more effective. An illustration of this claim is the growing usage of wellness programs in corporate settings (Till 2017). Such programs typically encourage employees - through incentives or rather penalties - to engage in self-tracking activities, with the intent of gathering data that employers (in various forms and at various levels) can then analyze, by using proprietary algorithms (Christophersen et al. 2015). As Kennedy et al. (2015, p. 1) brilliantly put it: 'the advent of big data brings with it new and opaque regimes of population management, control, discrimination and exclusion', something very much akin to what Foucault (1997) called biopolitics: a pervasive mode of power that attempts to understand, control, influence and even regulate the vital characteristics of any given population (Farina and Lavazza 2021b). In agreement with Lupton (2016), we believe that we are now entering an era in which biopolitics may be enforced through datafication; that is, through the joint combination of extensive datasets of digital information gathered synchronously across multiple domains. All this raises crucial issues surrounding the privacy of individuals as well as their basic civil liberties (such as freedom of movement and freedom of association) that are now - it seems to us - more than ever under threat (Farina and Lavazza 2021b; Pietrini et al. 2022; Lavazza & Farina 2021). Consider the following example as a paradigmatic illustration of this claim. It involves the collection of biometric data through face recognition algorithms based on machine learning (Gray and Henderson 2017; Ball et al. 2012). This is just an example of a more general trend (involving the application of biometrics in society). We note that the rolling out of this technology is taking place as we write this paper in many countries, especially in those in which there is a rich infosphere that supports widespread technological advancements (such as the development of 5G). Biometrics can be defined as 'the science of automatic identification or identity verification of individuals using [unique] physiological or behavioral characteristics' (Vacca 2007, p. 589). Roughly speaking, biometric systems can be divided into two main categories: hard biometrics and soft biometrics. Hard biometrics include traditional biometric identifiers (such as faces, iris scans, DNA markers, and fingerprints) that are normally used for identity verification technologies (Benziane and Benyettou 2011). Soft biometrics are instead parameters (such as gender, ethnicity, age, height, weight, voice accent, birthmarks, etc.) that can complement hard biometrics and be used to increase the precision or the accuracy of the recognition system (Nixon et al. 2015). Soft biometrics typically provide information about a person without - on their own - necessarily providing sufficient evidence to precisely determine the identity of that person. The process of biometric identification is quite complicated and can be summarized in four basic steps (Hu 2017): (1) Enrollment (biometric data are gathered from the individual); (2) Recognition (a template of the individual's identity is created on an artificial system for monitoring purposes); (3) Comparison (future biometric data are gathered from individuals); and (4) Decision (a possible match is found or not found among the data collected, based on specific algorithms that cross-check all biometric data obtained on the individual). We note that biometric database screening technology is increasingly employed in this fourth step, as it is believed to remove the human element from the matching process, thereby maximizing objectivity and efficacy in decision-making (Ellerbrok 2011). Biometric technology is also increasingly considered as an effective tool for dealing with security matters (such as terrorism prevention).
Face recognition systems typically utilize the spatial relationship among the locations of facial features (such as eyes, nose, lips, chin, and the global appearance of a face [Jain 2007]) in conjunction with rapidly developing artificial intelligence (AI) technologies, to provide information that can be used for security and law enforcement purposes. See Ali et al. (2021) and Boutros et al. (2022) for surveys of recent face recognition technologies. For example, western countries (such as United Kingdom, United States, and Australia), being at the forefront of the development of comprehensive surveillance systems, increasingly use such technologies for security purposes (without getting into unnecessary technicalities, anyone walking around London can easily get a feeling of that). The expanding use of this technology therefore raises pressing ethical and social concerns regarding its adoption in society. 'Central to the ethical, legal and policy issues is the tension that exists between the legitimate collection of biometric information for law enforcement, national/ international security, and government service provision, on the one hand; and the rights to privacy and autonomy for individuals on the other' (Smith and Miller 2022, p.168). Descending from this point there are also issues concerning potential violations of individuals' privacy in search of wrongdoings that can lead to imbalance between a state and its citizenry and that need to be carefully evaluated. In modern societies, it is normally agreed that the state has no right to engage in selective monitoring of any citizen, unless that citizen raised strong suspicions of unlawful behaviors. Yet, the development of facial identification technology invites the active monitoring and even the full-scale mapping of law-abiding citizens; in essence, the pervasive wrapping of technology around innocent civilians, which may contribute to undermine the basic universal right of not being investigated selectively (Gstrein and Beaulieu 2022). Of course, face recognition technology is also used for good things. For example, it is widely deployed in airports, where it has contributed to speed up the processing on incoming passengers by customs authorities. Legislation to facilitate the usage of facial recognition programs capable of integrating pictures from passports and various forms of IDs (such as drivers licenses) into a national database, which can then be consulted by law enforcement and other government agencies are being introduced in several countries across the globe; however, the average reader is probably less aware that such technology is also being actively rolled out in many countries, especially in connection with the development of 5G networks. 5G networks, which possess extremely high computational power combined with the huge storage capability of modern clouds (we are talking about zetta possibly match is found or not found among the data collected based on specific algorithms that cross-check all biometrics data obtained on the individual). We note that biometric database screening technology is increasingly employed in this fourth step, as it is believed to remove the human element from the matching process, thereby maximizing objectivity and efficacy in decision-making (Ellerbrok 2011). Biometric technology is also increasingly considered as an effective tool for dealing with security matters (such as terrorism prevention). 
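To make the four-step pipeline just described (enrollment, recognition, comparison, decision) concrete, the short sketch below walks through it in Python. It is only an illustrative toy, not a description of any deployed system: it assumes that some upstream face-recognition model has already turned each face into a fixed-length embedding vector, and the function names, the 128-dimensional embeddings and the 0.6 decision threshold are all hypothetical choices.

import numpy as np

gallery = {}  # enrolled identity -> stored template (step 2)

def enroll(identity, embedding):
    # Steps 1-2: gather biometric data and store a normalized template
    # of the individual's identity for later monitoring.
    gallery[identity] = embedding / np.linalg.norm(embedding)

def identify(probe_embedding, threshold=0.6):
    # Step 3: compare newly gathered biometrics against every template
    # using cosine similarity.
    probe = probe_embedding / np.linalg.norm(probe_embedding)
    scores = {name: float(np.dot(tpl, probe)) for name, tpl in gallery.items()}
    best = max(scores, key=scores.get)
    # Step 4: decide whether a match is found, based on the threshold.
    return (best, scores[best]) if scores[best] >= threshold else (None, scores[best])

# Hypothetical usage with random stand-in embeddings.
rng = np.random.default_rng(seed=1)
enroll("person_a", rng.normal(size=128))
enroll("person_b", rng.normal(size=128))
print(identify(rng.normal(size=128)))  # unrelated probe: typically no match

Real systems differ mainly in how the embeddings are produced and in how the operating threshold is tuned, which is precisely where the accuracy, security and bias concerns discussed in this section arise.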
Because of this, the last decade has seen a very rapid development of biometric technologies (Alsaadi 2021). 'Biometric dataveillance programs', as we may call them, are proliferating under preemptive strategies to combatting crime and terrorism and to ensure homeland as well as international security. We shall note that the U.S. Department of Defense (DoD) has called such approaches -perhaps in a Freudian slip -'population management'2, which suggests that their potential applications may well stretch -to put it mildly-to much wider realms, quite possibly along the lines envisaged by Foucault (1997) 3. Anyhow, major recent trends in biometrics typically focus on individuating behavioral kind or towards the development of'multimodal biometrics' (Ryu et al. 2021), a procedure which involve the combination of sensor and computing capabilities endowed with enhanced connectivity with the intent to apply such technologies in a broad variety of sectors and for a broad variety of purposes, far beyond law enforcement or prevention of crimes (Hu 2017). For example, latest breakthroughs in the field include the development of sensors that can capture new types of bio-signals (such as heart beats and brain waves via -for instance-EEG or ECG), or brain-computing-interfaces (BCI). Such interfaces are reported to be able to measure neuro activity and translate it into machine-readable inputs (Anumanchipalli et al. 2019), which suggests that these devices could -in the future-allow for the detection of thoughts, possibly opening to the possibility of influencing operations of the human brain. We won't focus on such technologies on this paper, as they are mostly covered by state secrets (and are currently under development); however, we would like to spend the remainder of this section on analyzing the case of face recognition technology through machine learning algorithms, which is equally significant and perhaps is the one that poses -at this stage at least, especially given its widespread application in society-the most significant ethical and social challenges. Humans are very good at recognizing fellows based on facial appearance. Naturally then, face can be considered --- Freedom Issues Increasingly datafied working environments are geared toward efficiency and, therefore, in general terms toward reducing worker discretion. In this sense, a certain loss of worker's 'freedom' is inherent -and perhaps even acceptable -in any process, not only of automation but also of standardization and compartmentalization of resources and procedures. The shift from the craftsman performing the whole process of pin production to the division of labour among workers performing different tasks was famously described by Adam Smith in the 18th century. We should now consider the peculiarity of working in an environment that extensively relies on datafication and is richly wired and interconnected (infospheric) across multiple domains and dimensions. In such an environment, the human being must be a facilitator of processes that automated systems are not yet able to do or will never be able to do. For example, in warehouses this happens with the substantial homologation of workers to the procedures, rhythms, and forms of control and evaluation that have been introduced for processes carried out entirely by industrial robots (Delfanti 2021; Engstrom and Jebari 2022). 
It is not intended here to make a social and political critique of this kind of evolution of the work environment decoupled from technical considerations referring to productivity gains that translate into concrete benefits for consumers in terms of product availability and low costs. In our societies, all workers are also consumers, and this cannot be underestimated. However, the more we make the working environment wrapped around robots, the more the risk grows that even human employees will be totally absorbed in this new production procedure, which may have strong repercussions for workers. This could lead to new forms of exploitation, as some are afraid of. Yet, it is not necessarily the case that this will happen. In any circumstance though, the logic of quantification and automation entails a modification of the worker's spaces of freedom. Indeed, it should be emphasized that two of the basic criteria of AI-based approaches are predictability and certainty. These criteria are structurally opposed to the classical idea of freedom, which is understood as the possibility of choosing from time to time between alternative courses of action based on reason (Lavazza and Inglese 2015). There are several areas in which workers' freedom might be diminished because of the widespread implementation of AI tools and datafication in society. Personnel selection is one of such areas, where the hiring process is progressively being managed by algorithms capable of evaluating candidates in accordance to predefined criteria, which are set against the perceived compatibility of a subject for a yottabytes of images and even videos), represent the ideal companion for this facial recognition technology in as much as they allow to fully exploit its potentials in richly datafied and infospheric environments. In brief, current face recognition technologies allow to store huge amounts of personal data coming from multiple domains and timespans, to reliably access them at will at any point in time, with fast algorithms specifically designed to selectively checking all the information gathered for 'desired' purposes. Yet, facial recognition programs are -to date, at least-quite vulnerable to deepfake-based attacks (see Ramachandra and Busch 2017, for a helpful review), for example-with static facial images4, which raise concerns about the security as well as the effective trustability of those data. In addition, facial recognition technologies might be combined with AI tools preprogrammed for spotting specific emotions (e.g., anger) to target minorities (e.g., prone to rebellion) based on ethnicity (so on automated analyses of morphological traits); hence, they could be massively deployed to discriminate and even oppress -given the pervasivity of such systems in modern infospheric societies-certain strata of any given population (those that -for instance-do not adhere to a state religion due to different cultural backgrounds). Furthermore, given the storage capabilities of modern clouds, which are set to increase dramatically over the next decades, who could guarantee that the biometric data stored in archives now, through the extensive process of datafication, wouldn't become compromising -say-30 years from now, when certain moral values or virtuous might have changed, partly or entirely? 
Who could then assure that lawabiding citizens couldn't be prosecuted in 30 or 40 years for behaviors, words, or actions that are completely acceptable now but may not be deemed as 'convenient' in the future if a track record of their actions associated with their morphological traits is permanently stored (and readily accessible) somewhere? Given current trends on cancel culture and the corresponding emergence of 'dataveillance', this possibility shouldn't be too hastily ruled out. These are very crucial issues underlying the usage of facial recognition in biometrics mapping that promise to bear a significant ethical and legal impact on the future of our societies. Having briefly reviewed them, we now look at another application of datafication in rich infospheric environs, which include industrial automation. Another consequence for the workers within the wrapped datafied/infospheric environment in which we increasingly live could be the progressive loss of the freedom to change the rules that govern the environment itself. This is a discretionary activity that does not violate quality standards but allows for changes and improvements in the production process, both technically and in terms of working relationships and conditions. For example, introducing a moment of confrontation between workers can improve both productivity and employee motivation. If, however, the procedures do not allow this, any momentary slowdown in the process will be evaluated negatively, even though it may yield better results in the long run. An efficiency-bound environment that monitors all processes in real time and intervenes to make them homogeneous and smooth cannot tolerate unanticipated deviations and tends to discourage or suppress them. In this vein, it can be considered another form of freedom: the idea of self-government (Pettit 2011). This latter entails an overall ability to do and not to be governed by alien forces, and the self-mastery ability that sustains the full optionality (or the freedom to do otherwise as specified so far). These two accounts of freedom are logically separated and can vary independently of one another. Now suppose a situation in which we have a high level of optionality, but an environment in which there is a predominance of heteronomy (freedom in the self-governing sense is not respected). In this situation options would be left open to agents, but the agents would not be free by simply having a set of options open, since the algorithms would be in a position to filter the "choice environment" (Danaher 2019). Thus, in wrapped datafied/infospheric working environments there may be cases with high optionality but with low autonomy. For example, soft control mechanisms over workers' routine, including persuasive pop-ups ads or nudging techniques, have been employed by Uber for steering drivers to have diverse booking options and more flexibility (Scheiber 2017;Webster 2020). Some have noted that this can lead to power asymmetries and structure control over workers. This is not only the case in strictly structured fields of work such as logistics, but also in fields where AI is only now appearing: such as medical diagnoses, marketing, or the entertainment industry. In all these cases, workers' freedom in decision-making might be reduced in parallel with the possibility to exercise their creativity, as we shall see in the next section. 
If in many areas, the human contribution cannot be (yet) dispensed with, one issue related to the progressive depersonalisation of the worker within a datafied environment is that of the loss of the possibility of cultivating and exercising those characteristics in humans that have been shaped specific task. There is already a rather large literature on the possible biases introduced by such programs (Tippins at al., 2021;Goretzko and Israel 2021). These biases depend on how the programs were designed and on the type of data on which they were fed and trained. Typical examples of bias introduced by personnel selection programs trained on time series or previous informal criteria adopted by companies involve decisions unfavourable to women, ethnic minorities, or social groups that have been historically disadvantaged or excluded. To be sure, discrimination in the workplace has always existed, and power relations between firms and individual workers have always been highly unbalanced and asymmetrical. A new sensibility in recent decades, however, has brought new attention to the issue and has made it possible to reduce systematic bias in selection and various types of abuses (Woods et al. 2020). Yet, the introduction of algorithms that are considered more efficient and unbiased may -if not properly supervised-risk introducing the very same systematic discrimination that we strove to fight in recent decades (Farina et al. 2022a, b;Bakare et al. 2022;Bugayenko et al. 2023). In addition, this new sort of potential systemic discrimination may be far less detectable (as based on mathematical data, which are difficult to interpret for the non-experts) than the one which has historically affected less advantaged groups. Contributing to this trend will be the growing need to adapt all procedures related to the entire production processes to the automation typical of increasingly AI-managed work environments. In this sense, control, surveillance and the system of incentives and sanctions (as discussed in Sect. 2 above) will also have to conform to quantification and datafication. The worker's margins of freedom will then likely be reduced as result of the need to conform to strictly quantitative criteria in their actions and in light of the need to be evaluated with tools that prioritize objectivity and efficiency. Ironically, such algorithms are already actively used in the criminal justice system of certain countries (Custers et al., 2022). Indeed, it is hard to see why we could rely on programs that assess the appropriate sentence for an offender, or the possibility of recidivism of the same, after a certain period of imprisonment and not do so for labour disputes. Another issue relevant to the economic field and to the workers' freedom concerns the possibility of being evaluated and judged by peers and not by AI algorithms (Ernst and Young. 2018;Keystone Consulting, 2017). It is generally agreed that there is a duty of dignity to be accorded to human beings, who should be treated as unique individuals defined by personal traits, and not as a set of data unified by the attribution to a first and last name. This duty of dignity seems to be threatened by the widespread adoption of such technologies. 1 3 text) on most of Dennett's corpus, with the aim of seeing whether the resulting program could answer philosophical questions similarly to how Dennett himself would. 
The result was that philosophy experts were unable to clearly discriminate between the answers given by Dennett and answers given by GPT-38. The two examples we discussed above are just paradigmatic instances of an ongoing revolution focussing on the "creative" possibilities of AI (Miller 2019). The topic of creativity and its definition is one of the most complicated in the field of psychology, but it has to do with the ability to produce something that is new (original and unexpected) and useful (appropriate to the performance of a task) (Sternberg and Lubart, 1999, p. 3). In other words, what is creative is the result of a process that is not necessarily reducible to the mechanics of deterministic reasoning. Usually within creative acts, one cannot identify a precise concatenation of stages but rather perceives holistically the emergence of the result (Koestler, 1964). In contrast, as far as AI creativity is concerned, the operation of the algorithm is potentially "transparent"; that is reducible to a finite number of steps, and its success relates either to the direct liking of a human viewer (as in the case of the art contest mentioned earlier) or in its appropriateness (as in the case of the Dennett-like responses of GPT-3; or other reproductive applications of AI, which thanks to huge databanks and vastly superior computational power to humans can produce a very large number of solutions to a problem, among which to find the most appropriate one). One may wonder whether we will end up delegating all creative tasks to algorithms, especially in wrapped and datadriven environments, where AI can deploy its engineering capability to the fullest degree. And -if this is the goalwhether human creativity at work will be less and less used. Is this a likely scenario? And what consequences might it entail? Firstly, one may ask whether low-cost, AI-produced creativity is sufficient to meet the needs of consumers (of goods and cultural products) and the resolution of problems that may arise from time to time. Today's computers are composing music that sounds "more Bach than Bach," turning photographs into paintings in the style of Van Gogh's Starry Night, and even writing screenplays (Miller 2019). The key point; however, seems to be this: every relevant problem that is more than just a procedural query has to do with humans and their complexity. For example, there is a need to not only save energy and reduce climate-altering emissions, but there is a necessity to do this in tune with the desires and goals of people living in that specific area with a specific culture and specific values. by natural evolution and that AI tends to counteract or suppress (Malinetsky and Smolin 2021). For example, sociality and relationships; the ability to frequent natural and not just artificial environments; other activities including those oriented to a relevant, concrete, and visible purpose. These aspects are related to physical and mental wellbeing, which go beyond the immediate gains that the new AI-based economy may bring about in terms of physical security, education, income, or general wealth (even assuming an optimistic scenario, on which many don't necessarily agree). Humans are proactive creatures who deeply fear loneliness, boredom, and feelings of worthlessness. In general, the sense of agency and being held accountable for their actions is something that underlies freedom as a value, as a property that gives meaning to existence from a phenomenological point of view (Farina et al. 2022a). 
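As a concrete illustration of the worry about bias in algorithmic personnel selection raised in this section, the following minimal sketch shows one common type of audit: comparing the selection rates an automated screening tool produces for different applicant groups. The column names, the toy data and the four-fifths rule-of-thumb are illustrative assumptions, not a method taken from the studies cited.

import pandas as pd

# Hypothetical screening decisions produced by a hiring algorithm.
decisions = pd.DataFrame({
    "group": ["A", "A", "A", "A", "B", "B", "B", "B"],
    "hired": [1,   1,   0,   1,   0,   1,   0,   0],
})

# Selection rate per group: the share of applicants the tool accepts.
selection_rates = decisions.groupby("group")["hired"].mean()

# Adverse-impact ratio: lowest selection rate divided by the highest.
impact_ratio = selection_rates.min() / selection_rates.max()

print(selection_rates.to_dict())
print(round(impact_ratio, 2))  # ratios below roughly 0.8 are commonly flagged for review

Checks of this kind do not remove bias, but they make the otherwise hard-to-detect systematic discrimination discussed above visible to non-experts, which is part of the supervision the text calls for.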
A recent interpretation of AI developments proposes to consider AI as a form of acting that does not have to be intelligent to be successful (Floridi 2013, 2022). The basic idea is to return to how the problem of intelligence was framed by the initiators of contemporary cognitive science (McCarthy et al. 2006). According to Floridi, it is sufficient to have recourse to a counterfactual, which concerns human behaviour. In this sense, the problem of artificial intelligence is only that of making a machine act in ways that would be called intelligent if a human being behaved in the same way. Thus, there is no issue of comparison between human intelligence and machine intelligence. The only relevant issue is to perform a task successfully, such that the result is as good as or better than what human intelligence would be able to achieve. How this happens is not the central issue (although it may have important consequences); the outcome is. This approach to AI is called engineering or reproductive. It aims to reproduce the results or successful outcomes of our intelligent behaviour by nonbiological means. In contrast, the cognitive or productive approach to AI aims to produce the nonbiological equivalent of our intelligence; that is, the source of the behaviour that the engineering approach aims to reproduce (cf. Floridi 2011a, b).

--- Creativity

Recently, an American artist won first place in the emerging artist division's "digital arts/digitally-manipulated photography" category at the Colorado State Fair Fine Arts Competition [5]. His winning image, titled "Théâtre D'opéra Spatial," was made with Midjourney [6], an artificial intelligence system that can produce detailed images when fed written prompts. The affair caused controversy because the (human) jury evaluated the work without considering that it was produced with an AI system, even though the artist had openly declared upon submitting his work that he had used an AI tool to generate the image. After the artist got the prize, he was inundated with criticism from numerous colleagues, who deemed it inappropriate to compete with a work made that way. It's like admitting robots to the Olympics, was one of the comments. There are numerous programs that allow people to create images based on verbal instructions (such as DALL-E 2 [7]). Such programs draw on vast image repositories and modify or mix pre-existing figures based on users' inputs. Until now, they were considered curious pastimes, but their entry into competitions and the art market could revolutionise the criteria of creativity, the way it is evaluated, and the role of human beings in contributing to society's creative processes. Another experiment sparked discussion in early 2022. Two scholars have, with Daniel Dennett's permission and cooperation, "fine-tuned" GPT-3 (the autoregressive language model that uses deep learning to produce human-like text) on most of Dennett's corpus, with the aim of seeing whether the resulting program could answer philosophical questions similarly to how Dennett himself would.

5. https://arstechnica.com/information-technology/2022/08/ai-winsstate-fair-art-contest-annoys-humans/, Last Accessed April 2023.
6. https://www.midjourney.com/home/, Last Accessed April 2023.
7. https://openai.com/dall-e-2/, Last Accessed April 2023.

The logic inherent in the increasingly pervasive application of AI across wrapped and data-driven (infospheric) environments invites an evaluation of criteria of efficiency, timeliness, and replicability as central to the production process and as particularly valued for what they entail on the wealth and welfare side of consumers and of society as a whole. The so-called instrumental reason; that is, adjusting means to predetermined ends to achieve the best possible outcome, may thus become the benchmark for the entire economic sector (Acemoglu and Restrepo 2020). In principle, humans are still responsible for decisions concerning the ultimate goals and ultimate choices, but easily find that in the wrapped and datafied microenvironments the whole process revolves around the optimal management of quantitative aspects that can be handled by AI. Speculations about algorithms taking over and altering the purposes for which they were created currently remain science fiction scenarios (Floridi 2022). However, what we may witness in the short term is a culture that may be affected by being increasingly placed in the onlife dimension typical of personal devices, characterized by speed, real time, ever-better performance, and minimization of waiting time or expectations.
This has its counterpart in the impatience with slowness and qualitative aspects, with a prevalence of phenomenal aspects over cognitive ones, which distinguish each individual. Consciousness qua basic feeling of existence, as a background that qualifies all our waking states, seems to be exhibited by at least some living species and, as far as we know, especially by human beings. This is a feature that cannot be replicated or simulated -to date-in artifacts, which however, can be partially exhibited in software as a selective intelligence, sometimes superior, to that of human beings. This is demonstrated by the ability of computers to defeat humans in chess (such as the case with Deep Blue and Kasparov) and even in GO (an abstract strategy board game, where two players play in the attempt to surround more territory than the opponent). These examples show that appreciation for highly developed forms of intelligence also favours the illusion of seeing consciousness where there is none (as in some types of software, e.g., the one in the movie Her, with which the protagonist falls in love) and not seeing consciousness where instead it exists (as in non-responsive individuals) (Lavazza and Massimini 2018). If we pursue forms of intelligent functionalism (cf. López-Rubio 2018), we might end up morally devaluing the criterion of the presence of consciousness in favour of the presence of intelligence-or at least of full consciousness associated with the ability to exercise intelligent functionalism (Lanier 1995). One can, of course, argue in favour of an ethical position of this kind, but it is not easy to do so without completely giving up moral intuition, even in rationally supervised forms. In fact, moral intuition is what seems to And the same goes for creativity. If we delegate the entire creation and all marketing of -say-a business to a highly efficient algorithm, will so-called creative workers lose their role and over time we will have no more reserves of human creativity? This seems to be related to a certain approach maintaining (perhaps naively) that a "parallel computer" (such as our brain) is capable to produce in ways that are not yet well understood and that exceed the serial capabilities of an analogic computer. However, recent progress on evolutionary computation, especially those grounded on population-based search techniques, seem to suggest the possibility for AI tools (based on parallel processing) to find creative solutions to very practical problems of the real world (Miikkulainen 2021). Evolutionary computation, especially if complemented by deep learning (Schmidhuber 2015;LeCun et al. 2015) can process data both synchronically (in parallel) and diachronically (evolutionarily). It has been observed that population-based search methods based on evolutionary computation can scale better than other machine learning approaches (Miikkulainen 2021, p.163). This suggests that soon we should see many applications of these AI tools to problems directly involving human creativity in numerous fields, such as engineering (Dupuis et al. 2015), healthcare (Miikkulainen et al. 2021), finance (Buckmann et al. 2021), or even in agriculture (Johnson et al. 2019). There is thus a question of whether the gradual reduction of the creative roles entrusted to humans in highly data-driven environments will lead to an increase in overall system efficiency and increasing consumer satisfaction. 
Or whether, instead, it may leave uncovered an important part of the innovation that proceeds with the single, unpredictable insights of a few individuals of genius. In addition, the relative untapping of the creativity of workers, who have become executors of the new ideas produced by automated systems, could induce a lowering of the motivation and mood of workers themselves, who will become less and less involved in the production (and decision-making) processes; therefore unable to devise answers even to decisions that for now are still entrusted to humans. --- Efficacy and Instrumental Reason Issues The goal of efficiency, as mentioned above, drives the creation of new environments in which AI-based technology may prevail. It is not necessary to refer to Marx's works to consider how relevant the means of production and the relationships between workers and production processes typical of a given era can be in shaping culture and other types of relationships in society. The logic inherent in the increasingly pervasive application of AI across wrapped and 1 3 of algorithmic (vs. human) management reduces prosocial behavior (e.g., the tendency to help other workers)". In addition, "negative effect (i) occurs because the use of algorithms to manage workers leads to greater objectification of others, (ii) also occurs when algorithms perform tasks together with human managers, and (iii) depends on the type of management task algorithms perform". Being caught up in the apparent gamification of an increasing number of tasks and functions through digital technology may lead to an overvaluing of instrumental reason at the expense of a search for ends and values to which one can give motivated and thoughtful personal adherence. Muldoon and Raekstad (2022) proposed the concept of "algorithmic domination", where an individual "is subjected to a dominating power, the operations of which are (either in part or in whole) determined directly by an algorithm". Also gamification permits employers "to intervene at a more minute level in ways that are not feasible if required to be undertaken by a human supervisor". In this scenario, the business sector seems to be destined to be increasingly pervaded by AI. Producing quantifiable, guaranteed, and predictable results is one of the main goals of deeply wrapped datafied environments, a goal that tends to leave no room for uncontrollable and uncontrolled personal paths. Such a scenario, we maintain, requires careful ethical evaluation and constant scrutiny to avoid that a single efficientistic view (incapable of an inclusive look at every human being) may prevail. --- Conclusion As AI becomes ubiquitous in society, possibly leading to the formation of increasingly intelligent bio-technological unions, there will likely be a coexistence of a plethora of micro-environments wrapped and tailored around robots and humans. The key element of this pervasive process will be the capacity to integrate biological realms into an infosphere suitable for the implementation of AI technologies. This process will likely require extensive datafication. This trend can help to meet an increasing number of needs of a growing share of the population by improving the efficiency of production processes and introducing into them elements of quantification, predictability, reproducibility, and minimization of error and imperfection. All this, however, can also trigger unintended and suboptimal consequences. 
In this paper we have considered four such consequences that seems to be crucial for decision-making processes in future human societies dominated by AI technologies. The datafication required to realize the quantification and application of AI resources implies increasing control of the provide us with the basic preconditions of moral reasoning; that is, the fact of sharing at least some of the basic values of the subjects involved (Audi 2015). The latter fact is mainly due to the fundamental quality of living beings: consciousness. And consciousness is something that intelligent artifacts seem to lack, even though they can mimic moral reasoning at a cognitive level. One consequence of this shift toward quantification, efficiency, speed, and continuous connection is the projection of these machine characteristics to which we have become increasingly accustomed onto our fellow human beings. Tolerance for those who are lower performing, less able to keep up with the pace of the AI systems of which we are gradually becoming a part may be diminishing, starting precisely in workplaces built around automation and possibly extending to the wider society (for instance, in terms of systems revolving around social credit, which may also be based on work performance) (Shew 2020;Nakamura 2019). In those contexts, predictability and reliability are prioritized and measurement ranks first among the system's capabilities. What does not fit within the parameters, what slows down or hinders the flow of the process will tend to be pushed aside, expelled, or not even recruited. There are several levels at which this selection based on efficiency and instrumental reason can take place. There is a more trivially physical one: those who cannot handle the pace of automation cannot participate in the work process. People affected by different forms of illness or disability, the elderly, and those who fall below minimum performance standards will have difficult access to the labour market and, more importantly, may be seen as less useful to society at large, reversing a trend toward inclusion that has been taking hold recently Stypinska 2022;Farina and Lavazza 2022a, b,c;Farina and Lavazza 2021a). The same, and perhaps to a greater extent, may happen at the cognitive level, as pointed out earlier. The inability, for various reasons, to keep up and be deeply attuned to the wrapped around and datafied environment could lead to the marginalization of those who manifest such detachment from the new AI-colonized context. This is not an inevitable outcome, but it is a risk that can already be glimpsed in a push for a "digital uniformity" that comes from the now compulsory reliance on electronic devices and indeed even social media with varying forms of indirect control and public exposure. Heßler and colleagues (2022) noticed that "increased importance of empathy and autonomy leads to a higher degree of algorithm aversion. At the same time, it also leads to a stronger preference for human-like decision support, which could therefore serve as a remedy for an algorithm aversion induced by the need for self-humanization". In recent lab experiments Fuchs (in press) found "that the use are quintessentially social beings, who are bound to have contacts with their peers to find satisfaction, often in free and unstructured interactions. 
The cancellation of these interactions can trigger a reduction in their well-being far greater than the support they could get from the intelligent tools located in increasingly datafied environs. So, as mentioned above, potential risks exist that need to be addressed pre-emptively as they seem to be inherent in structural trends (and aspects of decision-making processes) based on the widespread diffusion of AI in society. It is the task of philosophy and ethics to help analyse these risks, highlight their contours, and propose solutions so that artificial intelligence may become a valuable complement to human activities, favouring (rather than hampering) social harmony and moral good. individual involved in the production process and quite possibly over her life. This loss of privacy is typical of new datafied and infospheric environments, where it is not necessarily pursued with the explicit purpose of monitoring the individual (surveillance) but rather of actively predicting her behaviour (dataveillance), by having the individual herself interacting effortlessly with a wide range of integrated technological tools across multiple domains and dimensions. This need for control may also result in a loss of freedom understood in the classical sense as the possibility of deciding between alternative courses of action. Indeed, the worker must manifest maximally predictable behaviour for her contribution to be as effective and integrated as possible. Freedom, in this context, may become structurally endangered as an end-product, especially in production processes, which are increasingly oriented toward maximal certainty (which is the opposite of freedom). Another consequence of this production arrangement is the delegation of creativity to algorithms, which are often presented as higher performing, hence preferable to humans because -unlike humans-they are not subject to quantitative and qualitative fluctuations. The risk here is a loss of the reserve in terms of qualitative resource on the part of workers, which -in the long term-could leave some creative areas uncovered, especially those where machines are not (yet) at the level of productive intelligence of humans. Finally, a more general cultural tendency to favour efficiency and instrumental reason might assert itself because of the structural constraints that environments wrapped around AI tend to produce. A less inclusive and tolerant society could be the result of our onlife characterized by immediacy and absence of expectations, a world -in briefwhere common-sense will leave space to pure objectivity and absolute neutrality based on algorithmic efficiency. In this vein, datafication points toward an automation of decision-making that makes it primarily efficiency-driven toward predetermined goals. One strategy to rebalance this trend could be to create areas of decision-making that are removed from extreme datafication to allow a process of decision making driven by the choices of individuals without the close guidance of AI. For we know that a strong sense of agency is inherent in human beings, consisting of (presumed) conscious control over their choices and courses of action. The deprivation of this sense of agency usually leads individuals to a reduction of their own well-being (Creed and Klisch 2005). 
If, therefore, the efficiency of economic organization is not to become the first and only goal of the social system, with the consequences just highlighted, it is necessary to prevent a form of automated decision-making (based on datafication) from becoming the only method for choices in working environments and in societies in general. Humans are quintessentially social beings, who are bound to have contacts with their peers to find satisfaction, often in free and unstructured interactions.

--- Data Availability
N/a.

--- Author contributions
The authors contributed equally to the writing of this paper.

--- Funding
N/a.
Introduction During the past few years, we have witnessed a remarkable increase in the number of users in virtual worlds. According to KZero [54], there were 1.921 billion registered users in virtual worlds in the first quarter of 2012, more than triple the number of users in 2009. The largest segment of users (802 million) is between the ages of 10 and 15 [54]. Despite the growing popularity of virtual worlds, there is no agreement on the definition and/or typology of virtual worlds [20], [71]. The numerous contextual descriptions provided by academics, industry professionals and the media, have further complicated agreement on a common understanding about virtual worlds [91]. One of the earliest definitions of a virtual world was that of Schroeder [74] p.25 who defined the virtual environment or virtual reality as "a computer-generated display that allows or compels the user (or users) to have a sense of being present in an environment other than the one they are actually in, and to interact with that environment." Years later, Koster [52] suggests a definition which contains many essential characteristics of a virtual world: "a virtual world is a spatially based depiction of a persistent virtual environment, which can be experienced by numerous participants at once, who are represented within the space by avatars." Castronova [25] adopts a more technologically oriented viewpoint and defines virtual worlds as "crafted places inside computers that are designed to accommodate large numbers of people." Building on the definitions provided by Bartle [16], Koster [52] and Castronova [25], and including an emphasis on the people and their social network, Bell [20] defines virtual world as "A synchronous, persistent network of people, represented as avatars, facilitated by networked computers." Against this backdrop, social networking sites, such as Facebook and LinkedIn are not virtual worlds. Although not without its critics [19], social networking sites (SNSs) are defined as "web-based services that allow individuals to (1) construct a public or semi-public profile within a bounded system, (2) articulate a list of other users with whom they share a connection, and (3) view and traverse their list of connections and those made by others within the system" [22]. Thus, SNSs constitute virtual communities which have persistence, but no sense of synchronous [20]. Keeping the Bell's [20] definition of virtual worlds in mind, massively multiplayer online role-playing games (MMORPG) like World of Warcraft or Ultima Online are virtual worlds. This applies also for MMO games. However, there is a discussion about whether a distinction should be drawn between game-based worlds and non-game worlds. Some researchers [51], [77] argue that virtual worlds are essentially non-game environments where divergent games can be present but are not the defining characteristics of the world. Instead, MMORPGs are subject to precise gaming rules, and therefore, they are essentially games. Even though, some MMORPGs provide opportunities for social networking, the game element is central to their functioning [47]. The growing number of Internet users and popularity of virtual worlds mean that more and more people are becoming involved in different types of virtual environments. 
This also provides new opportunities for businesses to market products and services in these virtual worlds [38], especially if it can be shown that product placements in virtual worlds are more effective at generating sales and brand loyalty than static marketing channels, such as print and web-based advertisement [92]. Even though little is known about how to effectively market to virtual world participants through avatar-oriented activities, organizations and marketers should consider the online opportunities of marketing to the inhabitants of virtual worlds, as the avatars of users represent prospective targets of current and future business. This raises a number of interesting research and practical questions about how companies can market themselves, their products and services within virtual world environments by making sense of the unique features offered by this new medium [92]. Previous research has investigated online interaction in different types of virtual communities, such as text-based [10] and network- and small group-based virtual communities [11], [30]. Research has also investigated several high-interactivity online venues (real-time chat systems, web-based chat rooms and networked video games) and low-interactivity online venues (e-mail lists, website bulletin boards and usenet newsgroups) [13]. Participation has also been examined in special contexts like software user groups [12], and from educational perspectives [93]. Viewing the phenomenon through the lens of social psychology, this study examines the underlying motives of users for participating in virtual worlds, utilizing an applied version of the frameworks presented by Dholakia et al. [30] and Bagozzi and Dholakia [11], [12]. These frameworks were developed to examine user motivations and behaviors in virtual worlds, and are related to the model of goal-directed behavior [69]. Participating in virtual worlds is treated as intentional social action influenced by several social determinants such as attitude, subjective norms, perceived behavioral control, enjoyment, entertainment value, ease of use and social identity. In the current study, the authors adopt Bell's [20] view of virtual worlds, which builds on synchronicity, persistence, a network of people, avatar representation and facilitation of the experience by networked computers. The authors investigate the users of a 2-D virtual world called Moipal, aimed at users between the ages of 10 and 15. Moipal is not an MMORPG in the sense of a user's story or narrative unfolding within the strict constraints of the rules and goals set by the designers. Instead, Moipal has the elements of both a fictional and a physical world and exists primarily as a place for social interactions to occur. However, Moipal is not based on a social platform like Facebook, and therefore it is not a social game. The authors identify Moipal as a virtual world environment which can be classified within the broad domain of massively multiplayer online games (MMOG). It can also be tagged with the label multi-user virtual environment (MUVE) [62]. Moipal offers its players a virtual world environment in which to do everything from playing minigames and meeting new and existing virtual friends to exploring the many public spaces available to them. The Moipal experience consists of many parts, which are all inextricably linked. Apparently, The Sims Online [60] has been a role model for Moipal. Moipal was launched in October 2007. There were around 120 000 users in Moipal at the end of 2008.
Moipal was shut down in September 2011. Moipal is free to play, but registration is mandatory. At the initial sign-up, each player selects the look and style of an avatar, called Pal, from a wide range of options, including gender, hair and skin color, clothing, facial characteristics and body type. Pals are automatically given a personal home upon sign-up and invited to personalize it with a variety of furniture and accessories like rugs, lamps, posters and plants. Pals' residences are located in the virtual world called Pal City. The City provides Pals dozens of different places to visit and opportunities to carry out wide variety of tasks. Pals can visit, for instance, a horse stable, library, cinema, film studio, radio station, city hall, restaurants, museums, art gallery, and holiday resort. By completing tasks related to different places, Pal can earn Pal-money to buy new furniture or clothing from divergent shops located in a shopping mall called Pal Store. The tasks are extremely diverse, ranging from eating pizza at Joe's pizzeria, dancing at Cube Club, having snowball fights in Iceland, training karate at Dojo to feeding dinosaurs at Museum of natural sciences. For nurturing social interactions, Moipal provides communication opportunities such as chatting and sending PalMail to others. The number of friends is not limited in Moipal. Many Pals also create a group or community around a certain topic such as horse riding, rock star or fashion. Like minded friends were then invited to the group. Non-members are able to request invitation by MoiMail. Besides the parties Pals could arrange for their friends, plenty of attractive events are organized around Pal City. These include a fashion event at the beach, silent movie festival at Kino Lumiere (cinema), Cross stich exhibition (pixel art created by Pals) at Art Gallery 44, and Palympics sport events in sport field. Pals could play several minigames in Moipal, such as Moipal Racing where a player can drive a car with a side scrolling view. The car can be driven across a track and the driver has to avoid hitting pumpkins and other obstacles on the track. Other minigames include Karate, MoiBand (several instruments), Jump rope, PalPing (ping pong), Locomotion (dancing), MoiPets (virtual dogs), just to mention a few. In the next section we review the relevant literature to support the development of our hypotheses. This is followed by a discussion of the study's methodology. We then continue with the presentation of the results. Finally, we draw conclusions from the study, outline its main limitations and offer ideas for further research in this area. --- Goal-directed Behavior vs. Experiential Service Use Although Bagozzi and Dholakia [9] state that consumer behavior is predominantly goal-directed because goods and services are purchased with a certain goal in mind, it is important to note that not all consumer behavior is based on this utilitarian and information-processing view. As noted by Holbrook and Hirschman [39], using the information processing perspective to explain consumer behavior might not always be the appropriate choice in settings which include playful leisure activities, such as gaming [67]. According to this experiential view of consumer behavior, consumption is viewed as a subjective state of consciousness that includes various symbolic meanings and hedonic responses. 
As pointed out by Holbrook and Hirschman [39], it is important to recognize and also to contrast the two views of consumption: the information-processing and the experiential view. As this paper is interested in volitional behavior in an experiential service setting (gaming) in which consumer behavior is driven by pleasure-seeking, enjoyment and fun, intrinsic motivational factors such as enjoyment are expected to have a stronger effect on intention and behavior than extrinsic motivational factors like perceived utility. Prior research has modeled participation in virtual communities and the associated behavior from the viewpoint of goal-directed behavior [10]- [12], [47], [69] which suggests that desires predict intentions, and the traditional antecedents of the theory of planned behavior (TPB), namely attitudes, perceived behavioral control and subjective norms influence intention through desires too. The model of goal-directed behavior [69] has since been revised and applied in many studies. In this case, we consider applications that discuss intentional social action in the context of groups [8], virtual communities [10], [30] and online venues [13]. Nysveen et al. [67] p. 336, who studied antecedents to mobile service usage, argue that experiential services are characterized by "ritualistic orientation and hedonic benefits derived from the use of the service, whereas goaldirected services are characterized by instrumental orientation and utilitarian benefits related to the use of the service". On this basis, we now present a framework combining aspects of goal-directed behavior and experiential service use. --- Conceptual Model and Hypotheses Building on the research on both goal-directed behaviors [10]- [12], [30] and experiential service use [67], we propose the following framework (Figure 1) to capture the antecedents of intention and behavior in the context of virtual worlds characterized by hedonic pleasure-seeking motives. In the next sections we discuss the model in more detail, develop the hypotheses and review relevant literature to support them. --- Ease of Use Perceived ease of use refers to the degree to which a potential user of a certain technology expects the target system to be free of effort [28], [29]. Ease of use is one variable introduced by Davis [28] under the technology acceptance model (TAM), an adaptation of the theory of reasoned action [34]. However TAM focuses precisely on explaining purposive behavior in the context of technology use. TAM also posits that two beliefs, perceived usefulness and perceived ease of use, influence computer acceptance through attitude in the following sequence: first, the design features of a certain technology affect a person's perceptions of its usefulness and ease of use. Consequently the person forms a certain attitude toward using the technology. Finally, attitude produces behavioral response, that is, actual system use. The effect of perceived ease of use on information system acceptance and use has been studied extensively in the TAM research domain (for a review see [50]. Ease of use has been found to explain a considerable amount of the variance in attitude. In experiential service settings ease of use has been found to have a significant association with attitude toward use and intention to use, but its explanatory power is not very strong with regards to either [67]. In this study, the concept of ease of use is a somewhat complicated because ease-of-use may not exactly reflect the motivation of online games users. 
Authors acknowledge that "without usability no one can play a game; make it too usable and it's no fun" [55] p. 319. However, in the case of online gaming acceptance, Hsu and Lu [42] found that ease of use, rather than usefulness, appeared to be the key determinant of online game play. In addition, Hsu and Lu [43] have shown that perceived ease of use appears to have significant effects on both perceived enjoyment and the preference to participate in online game communities. They found in their study that an easy-to-use interface enhances enjoyment and encourages people to re-participate. On the contrary, difficulties of use make people resist participation. On this basis, we propose that: H1: Ease of use is positively related to attitude toward use. H2a: Enjoyment is positively related to ease of use. H2b: Enjoyment is positively related to attitude toward use. The relationship between enjoyment and intention is supported by many studies, particularly with reference to hedonic information systems [1], [46], [81], [84]. Davis et al. [29] argue that users who get enjoyment from using an information system are more likely to form behavioral intentions than users who do not experience as much enjoyment. Perceived enjoyment has also been shown to be a significant predictor of the intention to use virtual worlds [66], [75]. Therefore, we propose that: H2c: Enjoyment is positively related to intention. --- Attitude Toward Use and Attitude Toward Advertising In general, attitude toward a certain behavior, such as using a system or service, is positively related to intention to engage in that behavior [2]. In computer-mediated environments, many studies have found attitude toward using a system to be the strongest determinant of intention to use that system [28], [67]. With respect to social communication behavior online, Chang and Wang [26] show that a more positive attitude toward the use of online communication tools corresponds to a greater behavioral intention to use them. Their results show that behavioral intention is influenced by perceived usefulness, flow experience and attitude toward use. These factors jointly explain 80 percent of the total variance in behavioral intention, of which attitude alone explains 56 percent. In the same vein, Nysveen et al. [67] propose that attitude toward using mobile services is a strong determinant of intention and usage. In addition, Moon and Kim [63] argue that attitude toward using the Web has a strong influence on behavioral intention. On this basis we propose that: H3: Attitude toward use is positively related to intention to use. Attitude toward advertising can be defined as "a learned predisposition to respond in a consistently favorable or unfavorable manner to advertising in general" [57] p. 53. Research on attitude toward advertising has concentrated mainly on three areas: attitudes toward ads [57], [59], perceptions of ads in general [33] and brand attitude [58], [64]. Scholars have shown increasing interest in attitudes toward online advertising since its emergence on the Internet. Studies have investigated, for instance, the perceived value of Web advertising [32], different online advertising formats [24] and attitudes toward online advertising [73]. Attitudes toward online advertising have been found to be related to the informativeness and enjoyment of the advertisements [32], [73]. Attitude toward advertising is a strong determinant of, for instance, purchase intentions [57], [59]. Attitudes toward advertising have also been found to determine behavioral responses in online [24] and mobile environments [48], [82]. Empirical evidence from prior studies about advertising in virtual worlds is virtually non-existent.
However, some studies have been conducted on social networking sites. For instance, Kelly et al. [49] examined attitudes toward advertising in an online social networking environment. In their study, many participants indicated that advertising on their online social networking sites was acceptable because it kept the use of the site free of charge. This may also apply to advertising in virtual worlds. Thus, we suggest the following hypotheses: H4: Attitude toward use is positively related to attitude toward advertising. H5: Attitude toward advertising is positively related to intention. --- Social Identity Social identity theory is a social-psychological perspective developed by Tajfel and Turner [79], [80]. It defines how people classify themselves and others into various social categories. This social classification serves two functions. First, it gives a person the means to define others by cognitively segmenting and ordering the surrounding social environment. Second, it helps individuals to define themselves in the social environment [6]. Originally, the model of goal-directed behavior [69] included only one social variable, namely subjective norm. However, the construct of social identity was added to the model by Bagozzi and Dholakia [10]. The purpose of adding the variable was to make the model suitable for examining group actions. Dholakia et al. [30] state that social identity captures the main aspects of the individual's identification with the group, in the sense that the person comes to view himself or herself as a member of the community and feels that he or she belongs to it. Bagozzi [8] states that social identity evolves through self-categorization processes that define how members think and feel about themselves, how other in-group and out-group members are perceived, and how one acts in relation to in-group and out-group members. Bagozzi divides social identity into three components: self-categorization, affective commitment and group-based self-esteem. These were later redefined as cognitive, affective and evaluative social identity [11], [12]. Cognitive social identity refers to self-awareness of membership in a social group (self-categorization); affective social identity captures the emotional feeling of belonging within the group; and evaluative social identity refers to a person's positive and negative value connotations related to group membership, that is, collective self-esteem. Research has tested the validity of these measures [14], [21]. Dholakia et al. [30] completed a study of social identity in the context of network- and small-group-based virtual communities. Their model tested the motivational antecedents and mediators of group norms and the social identity forms (cognitive, affective and evaluative). They hypothesized that higher levels of value perceptions lead to a stronger social identity with regard to the virtual community. The results of their study supported the hypothesis and revealed that purposive and entertainment value determined social identity in the relevant context. Against this backdrop, we propose that social identity comprises cognitive, affective and evaluative social identity [11], [12] and hypothesize that: H6: Social identity is positively related to intention. H7: Social identity is positively related to behavior. --- Subjective Norms, Intention and Behavior The second determinant of intention in the theory of planned behavior is subjective norm, which refers to the influence of one's personal community on the specified behavior [2].
Bagozzi and Dholakia [11] note that group norms might be an important aspect of social influence in small group brand communities, and therefore call for research on the effect of subjective norms on intention. In a virtual community context, the subjective norms affecting a member's intention to perform a certain behavior might be the approval or disapproval of the other members. According to Ajzen [3], normative beliefs are the antecedents of subjective norms. If a person assumes that his or her referents think he or she should perform a certain behavior, the person will perceive social pressure to do so. On the other hand, if a person supposes that his or her referents would disapprove of the behavior, the person will have a subjective norm applying pressure not to perform the behavior in question. Therefore, subjective norm is a social factor that affects a person's intention to behave in a certain manner. A number of studies indicate that the influence of peers on behavioral intention related to entertainment services is stronger than the influence of other subjective norms, such as parents or comparative referents [35], [61]. Peer influence has been a significant predictor of intention and behavior in the mobile entertainment services setting [17], [48], [72]. In addition, subjective norms have been found to predict user behavior in online games [42], blogs [41] and virtual communities [27]. Recent literature has also found subjective norms to be a significant factor in the user adoption of virtual worlds [18], [45]. As a result, we put forward the following hypotheses: H8: Subjective norms, especially peers, are positively related to intention. H9: Intention is positively related to behavior. --- Methodology The data were collected from the users of a virtual world called Moipal. The survey was promoted via a banner advertisement in the gaming world. The players were encouraged to click on the banner and complete the questionnaire. As an incentive for answering the survey, the respondents were entered into a lottery for a gaming console. Notes on the questionnaire form advised respondents that the purpose of the study was to examine behavior and attitudes in the context of virtual communities. The respondents were asked to devote about ten minutes to completing the survey form. As regards research ethics, the fact that a majority of the Moipal users are underage was taken into account when designing the survey. First, the survey was completely anonymous. To further ensure anonymity, the respondents' Moipal user names (i.e., Pal names) were not requested at any point in the survey. Second, with the exception of the background questions on gender and age, no questions about the respondents' offline lives were included in the survey. A total of 319 acceptable responses were received. In evaluating the response rate in this kind of online survey setting, we compared the number of those who clicked the link with the number of completed questionnaires. By this measure, the response rate was close to 90 percent. A total of 86 percent of the respondents were female. The mean age of the respondents was 14.3 years. These demographics are in line with those of the registered gamers. Potential nonresponse bias was also examined by comparing early to late respondents [5]. In terms of demographics, the groups do not differ from each other (p > .01), but in terms of the study constructs, the early and late respondents differ in their intention and behavior (p < .01).
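As a concrete illustration of the early-versus-late comparison used here to probe nonresponse bias, the sketch below runs Welch's t-tests on a simulated data set. The data frame, column names, split rule and all values are hypothetical illustrations and are not the study's data.

```python
# Hypothetical illustration of an early vs. late respondent comparison for
# nonresponse bias; the data are simulated, not the survey responses.
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(0)
n = 319
responses = pd.DataFrame({
    "response_order": np.arange(1, n + 1),        # order in which answers arrived
    "age": rng.normal(14.3, 1.5, n).round(),
    "intention": rng.normal(5.5, 1.2, n),          # simulated 7-point scale scores
    "behavior": rng.normal(5.0, 1.4, n),
})

# Split into early and late respondents (first vs. last third; the cut-off is an assumption).
early = responses[responses["response_order"] <= n // 3]
late = responses[responses["response_order"] > 2 * n // 3]

for construct in ["age", "intention", "behavior"]:
    t, p = stats.ttest_ind(early[construct], late[construct], equal_var=False)
    print(f"{construct}: t = {t:.2f}, p = {p:.3f}")
```

Welch's version of the test is used here because the two groups need not have equal variances.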
The results of the mean tests indicate that early respondents have higher intentions to use and are more active users of virtual worlds than late respondents. This finding was expected, as those who answer surveys first usually represent the most enthusiastic user groups. On this basis, we argue that the survey reached the majority of the active users of the virtual world, and that nonresponse occurred mostly among less active gamers. Therefore, nonresponse bias should not be considered a major weakness of the study. Potential common method variance bias was reduced and examined in various ways, as suggested by Podsakoff, MacKenzie, Lee and Podsakoff [70]. First, at the data collection stage the respondents' identities were kept confidential, item ambiguity was reduced and the items were mixed in the questionnaire. Second, in the data analysis stage, we examined common method variance bias through Harman's (1967) one-factor test and the partial-correlation technique. The one-factor solution (χ² = 4451.6, df = 464, p < .00; RMSEA = .146) was inferior to the hypothesized factor structure. In addition, the partial-correlation technique was used to further assess method bias. As a marker variable we used the item 'There should be no advertising in the virtual world'. Adding the marker variable to the model showed no effects on the observed relationships. On the basis of these two tests, it seems that common method variance bias is not a problem in this study. --- Measurement Scales All the items were measured on seven-point scales with a 'do not know' option. In some questions, a semantic differential scale was used instead of a Likert-type scale. In measuring attitudes, items were adapted from Bagozzi and Dholakia [10], [11]. The cognitive, affective and evaluative social identity constructs were each measured with two items adapted from Bagozzi and Dholakia [11] and Dholakia et al. [30]. In measuring ease of use we adapted a three-item scale from Davis [28] and Davis et al. [29]. Enjoyment was measured with a three-item scale taken from Nysveen et al. [67]. In measuring attitudes toward advertising in the virtual world, we used a semantic differential scale adapted from Ajzen [4]. Subjective norms were measured on a three-item scale taken from Ajzen [4] and Bagozzi and Dholakia [11]. Intentions and behavior were both measured with items adapted from Bagozzi and Dholakia [10] and Dholakia et al. [30]. --- Convergent and Discriminant Validity The measurement model showed acceptable fit (χ² = 707.6, df = 332, p < .00; RMSEA = .060; SRMR = .043; CFI = .987; IFI = .987; RFI = .971). The fit indices (Table 1) associated with the CFA exceeded acceptable thresholds [23], [44]. Only the chi-square value was problematic, but researchers have suggested looking at other fit indices, such as the RMSEA value, if the chi-square test is not passed [31], [83]. The RMSEA statistic for the measurement model was below the cut-off criterion of .08, indicating a relatively close fit of the model [23]. The Cronbach's alphas were larger than or equal to .72. Following Dholakia et al. [30], composite reliabilities (CR) were calculated for the two-item scales. All CRs were larger than the recommended cut-off criterion of .60 [15]. Therefore the scales show sufficient internal consistency. The indicators in the model loaded highly on their hypothesized constructs, and the loadings were significant. In addition, all the average variance extracted (AVE) values were over .50 (ranging from .61 to .76).
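As a worked illustration of the reliability and validity quantities used in this and the following paragraph, the sketch below computes composite reliability (CR) and average variance extracted (AVE) from standardized loadings and then applies the Fornell-Larcker comparison of AVE square roots against inter-construct correlations. The loadings, construct names and correlation value are hypothetical, not the estimates reported in Tables 1 and 2.

```python
# Minimal sketch of CR, AVE and the Fornell-Larcker check with made-up loadings.
import numpy as np

def composite_reliability(loadings):
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)."""
    lam = np.asarray(loadings)
    errors = 1.0 - lam ** 2            # error variance for standardized loadings
    return lam.sum() ** 2 / (lam.sum() ** 2 + errors.sum())

def average_variance_extracted(loadings):
    """AVE = mean of the squared standardized loadings."""
    lam = np.asarray(loadings)
    return np.mean(lam ** 2)

# Hypothetical standardized loadings for two constructs.
loadings = {
    "enjoyment": [0.85, 0.82, 0.78],
    "affective social identity": [0.88, 0.84],
}
for name, lam in loadings.items():
    print(name,
          "CR =", round(composite_reliability(lam), 2),
          "AVE =", round(average_variance_extracted(lam), 2))

# Fornell-Larcker criterion: each construct's sqrt(AVE) should exceed its
# correlations with the other constructs (illustrative correlation below).
corr_between = 0.55
sqrt_aves = [np.sqrt(average_variance_extracted(lam)) for lam in loadings.values()]
print("discriminant validity holds:", all(s > corr_between for s in sqrt_aves))
```

With standardized loadings, each item's error variance is taken as one minus its squared loading, which is the convention the CR formula above relies on.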
On this basis, the confirmatory factor analysis shows acceptable convergent validity. Discriminant validity was assessed by examining the correlations among the constructs (Table 2) and the square roots of the AVE values. All the AVE square root values were higher than the correlations among constructs, indicating acceptable discriminant validity [36]. --- Structural Model Assessment and Hypotheses Tests The structural model fit was acceptable (χ² = 835.1, df = 360, p < .00; CFI = .984; NFI = .972; NNFI = .982; IFI = .984; SRMR = .07; RMSEA = .064) [44], [23]. Hypothesized path loadings, their respective t-values and R² values are shown in Figure 2 (e.g., R² = .32 for ease of use and R² = .73 for attitude). Of the nine hypothesized relationships, six turned out to be statistically significant. H1 contended that there is a positive and direct relationship between ease of use and attitude. No support for the relationship was found. There are two possible explanations for this. First, this insignificant path might be explained by the strong relationship between enjoyment and attitude. Studies have found that in experiential settings, enjoyment plays a stronger role than ease of use in determining attitudes and behavioral intentions [67]. Second, in technology acceptance research the effect of ease of use on attitude and intention is often weaker than the effect of usefulness, as the effect of ease of use is mediated through usefulness [50]. In line with the literature [1], [85], [86], [94], we find strong support for H2a, which proposed that enjoyment is positively related to ease of use. To test the reversed path (PEOU→PE), a competing structural model was estimated. The competing model showed a significantly worse fit than the hypothesized model. On this basis it seems that, in experiential settings, perceived enjoyment has a significant impact on perceived ease of use, and not vice versa. With respect to H2b, the path shows that enjoyment is positively related to attitude (β = .87, t = 13.8). This path is extremely strong and indicates that enjoyment is a stronger determinant of attitude than is perceived ease of use. This finding is supported by the literature, which has found that enjoyment plays an important role in user acceptance of technology, especially in the case of hedonic systems [78]. There is no evidence to support H2c, which proposed that enjoyment is positively related to intention. This finding echoes Venkatesh et al. [90], who found no support for the direct relationship between perceived enjoyment and behavioral intention. However, that study supported the view that the effects of enjoyment are fully mediated by perceived usefulness and perceived ease of use. H4, arguing that attitude toward use is positively related to attitude toward advertising, was supported (β = .68, t = 12.6). No support was found for H3, which argued that attitude toward use is positively related to intention to use. In contrast with the findings of prior studies on virtual world usage [45], [76], there was a non-significant effect of attitude in predicting the intention to participate in the virtual world environment. However, Mäntymäki and Salo [66] made similar findings in their study conducted in the social virtual world Habbo Hotel.
In line with Mäntymäki and Salo [66], we suggest that a potential reason for the non-significant effect may be that, since attitudes develop over time, their role is less salient among young people. Alternatively, it is also possible that intentions to use virtual worlds are driven by affective, emotional, impulsive or habitual factors rather than by attitudes. H5 contended that there is a positive and direct relationship between attitude toward advertising and intention to use. No support for the relationship was found. One potential explanation for this may be advertising avoidance. It may be that young people pay little or no attention to advertising in virtual worlds, just as they do on online social networking sites [49]. In such a setting, attitudes toward advertising may be less established and thus not exert a strong influence on behavioral intention. The next hypotheses proposed that social identity is positively related to intention (H6) and behavior (H7). Both hypotheses receive significant support from the data and are thus confirmed. We found no support for H8, which argues that subjective norms are positively related to intention. Finally, there is strong evidence supporting H9, which contended that intention is positively related to behavior. That path is strong and significant (β = .38, t = 4.6). The non-significant direct effect of subjective norms on the intention to participate in virtual worlds was counterintuitive and contrary to recent literature, which indicates that subjective norms are a significant factor in the user adoption of virtual worlds [18], [45]. However, the effect of subjective norms on intention has been found to be somewhat inconsistent [40], [87], [88]. For instance, Liang and Yeh [56] found in their study that subjective norm had no significant effect on the continuance intention to use mobile games. In addition, in their examination of e-commerce adoption, Pavlou and Fygenson [68] did not find that subjective norms predicted either the intention to seek information online or the online purchase intention. Recently, using data gathered from 3265 survey participants in the social virtual world Habbo Hotel, Mäntymäki [65] found no effect of subjective norms on continuous use intention in a social virtual world. Interestingly, the research setting and profile of respondents in his study were very similar to those of the current study: respondents were female-dominated and the majority were between the ages of 10 and 15. In line with Mäntymäki [65], we suggest that a potential reason for the non-significant effect of social norms may be that normative influence is not particularly salient in predicting virtual world use. Empirical studies have rather consistently found the influence of subjective norms to be less significant in the continuous phase of technology diffusion, or where the use of the technology is voluntary [53], [89]. Alternatively, participants in virtual worlds can interact with other people who just happen to be present in the virtual environment, without knowing them in real life and without necessarily forming personal relationships. As a result, anonymity inside the virtual world may reduce the salience of normative influence. --- Competing Models Two competing models were tested. Competing model #1 measured social identity as first-order constructs. Competing model #2 was run without the social identity constructs.
--- Competing Model #2 The second competing model was run without the social identity constructs (Figure 3). The model fit was acceptable (χ² = 553.3, df = 220, p < .00; CFI = .983; NFI = .972; NNFI = .980; IFI = .983; SRMR = .07; RMSEA = .069). This model confirms the links between enjoyment and intention to use, between attitude and intention, and between subjective norms and intention, which were not established in the hypothesized model but have been proposed in the literature [50], [63], [67]. Hence, our three models show that adding the social identity construct to technology acceptance models affects the other established causal relationships, for example the attitude-intention and subjective norms-intention relationships. --- Discussion Consumers are increasingly using virtual online games to spend time and interact with other users. The objective of the study was to examine this issue from the viewpoint of users' intentions to use experiential virtual game services. The developed framework showed that social identity is the strongest determinant of intention and behavior in the study setting. Social identity outweighs the effect of attitudes, enjoyment and subjective norms in explaining intention to use a gaming service. Furthermore, the empirical test of the model successfully validated the multidimensional view of social identity. Our findings further indicate that affective social identity is the strongest indicator of a person's social identity, outperforming the effects of cognitive and evaluative social identity. Affective social identity also has the strongest association with intention to use a game service and with behavior. --- Theoretical Contributions In line with the theory [8], [11], the most notable finding of this study is that social identity is a strong antecedent of intention and behavior in the social virtual world context. Our findings also demonstrate that social identity outweighs the effects that enjoyment, attitude toward use and subjective norms have on intention. We showed that social identity consists of three components, and these components are important in determining a person's intention and behavior in a gaming world. In line with the theory [11], the most influential component was found to be affective social identity, followed by evaluative and cognitive social identity. Previous studies have identified similar results. Bagozzi and Dholakia [11] found in their study of both Harley-Davidson brand communities and non-Harley-driving club members that affective social identity was the strongest part of social identity, while the evaluative component was somewhat less strong and the cognitive component the least strong. They also noticed that customer communities organized around small groups resulted in greater social identification than similar communities of customers organized around a more general topic. In line with Bagozzi and Dholakia [11], then, it can be concluded
that customers in small group brand communities are more homogeneous in their psychographic characteristics and therefore have greater social identification. Thus, the strength of social identity in this study may be explained by the psychographic similarity of the examined virtual world participants. In summary, it seems that the finding that social identity is a strong antecedent of intention is more robust when interaction in the group is dense and/or organized around a specific theme or setting [11]-[13]. The strength of affective social identity indicates that a person's intentions to use a virtual world may be predicted from his or her feelings of belonging to the group. Thus, if a person feels that he or she belongs to a group in the virtual world, he or she is more likely to visit that world. Affective social identity also showed a direct relation to behavior. This suggests that a person attached to the group to which he or she belongs is more likely to perform direct behaviors. Evaluative social identity is another important antecedent of behavior. Therefore, the more important and valuable a member of the group a person perceives him or herself to be, the more likely he or she is to perform behaviors in the group. In contexts in which social identity is not present, behavioral intentions can be predicted from attitudes, subjective norms and enjoyment. In other words, these constructs become significant predictors of intention and behavior when social identity is not included in the models, or when its role is minimal. This kind of situation may occur when a person interacts with people that he or she does not know very well, for example when joining a new discussion or interest group within the virtual world. Because the group members are just starting to get to know each other, social identity, and especially the emotional attachment to the group, has not yet strengthened. Instead, the members' attitudes toward using the service and their perceptions of enjoyment in using it may be better predictors of whether they take part in discussions in the future. Subjective norms may also influence a person's intentions.
Thus, if an individual member of the group supposes that the other members think that he or she should, for example, take part in later group discussions, he or she will perceive social pressure to do so. Another important finding is the role of enjoyment as an antecedent of attitudes. In line with the literature [1], [29], [46], [47], [63], [84], [90], [94], the links between enjoyment and ease of use and between enjoyment and attitude were strong, suggesting that attitude is influenced by perceived enjoyment. Thus, a person who finds participating in the discussion group enjoyable, for example, is more likely to have a positive attitude toward the service. The link between intention and behavior was strong in all model tests. This link has been studied extensively in the prior literature [3], [11]-[13]. This study confirms that intention is also an important antecedent of behavior in the social virtual world context. --- Managerial Contributions Our study shows that participation in virtual worlds can be predicted from intention, which can, in turn, be predicted from social identity. The importance and dominance of social identity were evident, and this construct outweighs all the other constructs tested. Moreover, comparing this finding with the prior literature shows that the role of social identity as an antecedent of intentions seems to be greater when interaction in the group is dense and organized around a specific theme or setting. From a managerial viewpoint, this implies that developers of virtual worlds should consider building theme-based virtual worlds that are designed to promote a particular type of content within a community, or should provide more opportunities for theme-based group formation among the participants of virtual worlds. We have identified the following important characteristics for developing virtual worlds and a person's social identity within them. First, developers of virtual worlds should promote the development of social identity among users, that is, the part of one's self-concept deriving from the knowledge, attached value and emotional significance of membership of a certain social group [79]. In other words, developers should enable and encourage users to get to know each other, make friends and form communities and teams that work together on solving a problem or completing a certain task. To support the feeling of belonging within the groups, which refers to the affective side of social identity, the developers and administrators of virtual worlds should allow groups to interact without restrictions, for example by allowing the users to interact vividly both verbally (text-based) and nonverbally (gestures and expressions). In addition, a highly personalizable graphical user interface and the possibility to design group logos, for example, would support social identity formation in the virtual world context. The results can also be viewed in the light of marketing communications. The hypothesized link between attitude toward advertising and intention was not significant, which indicates that a person's intention is not affected by his or her attitude toward advertising in the virtual world. From the advertising point of view, this finding suggests that regardless of how disruptive, sensitive, harmful or beneficial advertising in the virtual world is, it has no direct relationship with the intention to use the service. For advertisers, this finding could have both positive and negative implications.
As the users do not change their intentions on the basis of advertising on the service, marketers may launch ads that are perceived as disruptive, such as pop-ups or floating ads. However, effective marketing in virtual worlds might call for more sophisticated forms of advertising. As social identity was the central determinant of intention and behavior, marketing should support the development of the users' social identity by reinforcing the users' perceptions of their belonging to and importance in a group. This type of advertising may involve games that require players to form social groups. Another important finding for marketing communications is that perceived enjoyment affects both the perception of ease of use and the attitude toward using the service. --- Study Limitations and Future Research The empirical assessment of our framework should be interpreted in light of several limitations arising from our sample, common method bias and the direction of causality. First, our study used a convenience sampling method which yielded a sample that is very much dominated by females (86 percent) and the young. One might expect preteens' responses to surveys to be superficial; however, we found no biases in the answers due to respondents' age. Nevertheless, the results cannot be generalized to other populations. To be more certain about possible answer bias, a comparison sample should be collected. Second, although common method bias was minimized, its impact on survey results could only be completely ruled out if longitudinal data were used. Third, the direction of our causal relationships is based on theory rather than on mathematical caveats. However, we were also able to contribute to the discussion about the direction of causality between the constructs enjoyment and ease of use. The limitations and findings presented offer important opportunities for further research. We propose that researchers further validate the links between social identity and the other constructs considered in the study. Specifically, prior studies have not conceptualized, and therefore not tested, the association between social identity and enjoyment, or between social identity and attitude. As theories have not examined these aspects before, further work is needed to capture the links between social identity and the other constructs. Research has mostly concentrated on modeling the links between social identity and intention [11], [13]. In addition, we propose more research on the concept of social identity in experiential service settings. Previous studies have merely modeled social identity in the context of goal-directed behavior and not in experiential services involving pleasure-seeking and hedonic user experiences [11], [13]. Moreover, although this research has incorporated a variety of constructs into the developed framework, it seems that other factors may also exert an influence. As such, the exploration of differentiated service dynamics in alternative contexts seems a potentially fruitful avenue for research. Finally, it would be interesting to develop a better understanding of how to grow the user base in virtual worlds: how can new users be attracted, and what are the key issues at this stage? This calls for more research on the stage prior to exposure to the virtual world and prior to the social influence exerted by its other users.
This study develops a framework for understanding user intentions and behaviors within a virtual world environment. The proposed framework posits that the intention to participate in a virtual world is determined by a person's 1) social identity, 2) attitude toward using the service, 3) subjective norms, 4) attitude toward advertising on the service and 5) enjoyment. The proposed model is tested using data (n = 319) from members of a virtual world environment. The results support the multidimensional view of social identity and show a strong positive association between social identity and intention and between social identity and behavior, and further confirm the intention-behavior link. Moreover, the results indicate that social identity outweighs the significance of a person's attitude and relevant subjective norms in explaining intention and behavior. The results also indicate that enjoyment strongly explains both ease of use and attitude.
Introduction The rapid increase in global mobility that has characterized the mature phase of the globalization process over the past couple of decades has also led to the escalation of 'overtourism' problems in many global tourism destinations, most notably in major art and heritage cities. Although massive flows of tourists clearly benefit the local economy, they also pose a major threat to the livability and, in some cases, even the sustainability of cities that are literally consumed by a level of human occupancy they were not designed or intended to host. In Barcelona, where the number of overnight stays escalated from 1.7 million in 1990 to more than 8 million in 16 years, overtourism is one of the key causes of an environmental pollution emergency (Ledsom 2019). In addition to the most renowned tourist locations, the geography of overtourism is also rapidly expanding due to the global visibility acquired by some cities for having been the shooting locations of successful TV series, as in the case of Dubrovnik for Game of Thrones (Wiley 2019). However, an increasing number of critical voices are questioning this trend, locally as well as internationally (Economist, 2018). For residents, overtourism may have dramatic consequences. Housing for permanent residential use becomes increasingly scarce and expensive. Services catering to the needs of locals become rarer, more difficult to reach and, again, more expensive. The constant noise and the overcrowding of streets and local transport can be a source of considerable stress for working people, families with small children and the elderly. In cities like Venice, the number of bed-and-breakfasts and flats for short-term tourist occupancy has nearly doubled in the space of just one year (Tantucci 2018). As a consequence, residents are evicted by landlords who find it far more profitable to rent to tourists. In Florence, for instance, between October 1, 2017, and June 30, 2018, as many as 478 residents who could not keep up with rising rents had to leave their homes, including lifelong ones: 209 living in the historical center, 71 in the Unesco area and 198 in other areas of the city (Conte 2018). More generally, so-called airification (Picascia et al. 2019) has been identified as a disruptive force that is literally 'hollowing out' cities (Hinsliff 2018). Such a state of things does not come as a complete surprise to the tourism studies literature. Although early warnings were raised, as in the seminal paper by van den Borg et al. (1996), they did not succeed in convincing local policy makers to devise appropriate countervailing strategies and to take action. Now that the negative effects of the phenomenon are becoming indisputable, however, some cities are starting to react aggressively. Amsterdam has banned the concession of new licenses to businesses within the historical city core that offer goods and services targeting tourist demand (O'Sullivan 2017), as a way to curb the 'Disneyfication' of the city (Boztas 2017). Bruges has strictly limited the maximum number of cruise ships that may be hosted at its port's docks on a daily basis and has limited its own tourism-related advertising in major nearby cities (Marcus 2019). Venice has implemented a very severe set of restrictions on many different kinds of tourist misbehavior, sanctioned with heavy fines (Spinks 2018).
Ten major European heritage cities (Amsterdam, Barcelona, Berlin, Bordeaux, Brussels, Krakow, Munich, Paris, Valencia and Vienna) have jointly signed a letter to the new EU Commission asking for severe limitations on the further expansion of Airbnb and other holiday rental websites (Henley 2019). However, it is not easy to go against such a powerful trend, even though the current COVID-19 crisis, which has caused a temporary collapse of the tourism industry worldwide, will probably provide overcrowded tourist cities with an unexpected opportunity to prevent a return to the 'old normal' once the pandemic is over (Higgins-Desbiolles 2020). The vested interests that rely upon the extractive logic of the mass tourism economy constitute a major pool of local consensus and exert powerful political pressure (Benner 2019). On the other hand, the needs of tourists and residents differ significantly, and this is likely to spark conflict between different local stakeholders, depending on the extent to which they benefit from tourism (Concu and Atzeni 2012). Whether a city eventually gets colonized by the tourism economy or manages to find a reasonable compromise can therefore be the result of a very complex interplay of factors. It is therefore of particular importance to study under what conditions such interplay leads to different long-term scenarios, thus enabling public decision makers to better understand not only the nature of the problem, in order to imagine and test possible solutions, but also the critical conditions that regulate the emergence of possible outcomes. Merely proposing 'plausible' or 'just' solutions is not enough. We also need to assess whether such solutions would work, and under what circumstances, once they are actually implemented. In principle, the solutions that are more desirable in abstract terms need not be the ones that work best. As cities are very complex dynamical systems, the pursuit of the public interest, which in this case identifies to a significant extent with that of city residents, whose 'right to the city' (Lefebvre 2010) should be the object of special consideration and protection, needs to be supported by evidence-based policies building upon a sound understanding of the underlying economic and social dynamics. The aim of this paper is to study a simple dynamic model that analyzes the effects of the tension between residents and tourists in the social usage of city resources. We focus on the interplay of the essential factors behind such tension: the substitution between resident-oriented and tourist-oriented facilities and shops, the congestion of city space from overtourism, but also the experience value of cities as related to the effective presence of residents as a source of authenticity. Given that the escalating tourist flows are literally preying on the city's resources from the residents' viewpoint, it is natural to model such dynamics with the predator-prey framework in mind. We introduce an expanded variant of the predator-prey dynamics, which yields more complex dynamic behavior than the original one and allows a better analytical treatment of the main factors at play. The model's structure is easily interpretable, but the corresponding dynamics are not obvious. In particular, we show that the actual dynamic trajectories of the system may be very different for relatively small changes in the key parameters.
This implies that even relatively small differences in local conditions and in policy actions may cause divergent outcomes, with substantial differences in terms of their social desirability. Our results should be read as a cautionary tale against delayed or unsystematic action in curbing the social costs of overtourism: intervening too little or too late, or not focusing on the truly critical parameters, might lead to disappointing results. The remainder of the paper is organized as follows. Section 2 offers a brief review of the main issues discussed in the overtourism-related literature. Section 3 presents the model. Section 4 contains the main results. Section 5 discusses the results and concludes. A technical Appendix closes the paper. --- Literature review One vastly debated issue that clearly relates to overtourism is that of residents' attitudes toward tourists. There is a rich literature that explores this topic, but most of it has dealt with minor or even marginal tourist destinations rather than with overcrowded tourist attractors. Lin et al. (2017) focus upon the process of value co-creation through social interaction between tourists and residents in a Chinese sample and find that the positive economic benefits from tourism may also positively affect the life satisfaction of residents. Mathew and Sreejesh (2017), working on a sample of three Indian tourism destinations, highlight the relationship between responsible tourism and perceptions of sustainability of the tourist destination in promoting the perceived quality of life of residents. On the other hand, Boley et al. (2017) show that although destinations that place more emphasis on sustainability tend also to be the more sustainable, perceptions of actual sustainability by residents tend to be low. Rasoolimanesh et al. (2017) show that, for two UNESCO Heritage sites in Malaysia, one of which is located in an urban context and the other in a rural one, there are nuanced differences between the urban and the rural site in terms of the impact of residents' perceptions on the support for tourism development or lack thereof, but also substantial homogeneities. Therefore, when tourism is still in a developing phase, the evidence of the benefits from tourism development can be a main driver of support from residents, and this effect may even cut across major territorial divides such as the urban/rural one. As shown by Stylidis et al. (2014) and Wang and Chen (2015), a central mediating role in residents' perceptions of the impacts of tourism is played by perceived place image, a dimension that is, tellingly, significantly compromised in destinations affected by overtourism but can be improved by an increased tourist presence in developing destinations. It is no surprise that literature reviews of this research field lament the excessive narrowness of focus of most research, as well as its reliance on specific quantitative techniques that are good at highlighting specific effects but often fail to deliver the big picture (Sharpley 2014). For instance, Almeida Garcia et al. (2015) argue that the current literature on residents' perception of tourism significantly underplays the role of key historical, cultural and social factors in shaping a specific destination and its response to tourism. Moreover, the nature of the 'ecological' interactions between the resident and tourist populations may make a big difference and is intrinsically dynamic (Vargas-Sanchez et al. 2011).
And if this is true in general, it is even truer for overcrowded destinations, and the possible scenarios may be very different from one another. For instance, touristic congestion may be the result of a sudden boom or of a gradual, steady increase; the pervasive presence of tourists in the urban space may have become a deeply ingrained feature of the local culture, or be an outcome of recent tourism development strategies; the availability of space and the impact of building density may not be particularly problematic for urban livability or, rather, extremely critical and exacerbated by tourism flows, and so on, just to limit ourselves to a few obvious examples. Segota et al. (2017) show, for instance, that the informedness and involvement of residents in the local management of tourism-related issues significantly impact their perceptions in the expected direction (the more involved and informed, the more positive). A rare example of a study on the acceptability of crowding perceptions by residents in a global tourist destination such as Bruges, carried out by Neuts and Nijkamp (2012), moreover, shows that the actual negative perception of crowding varies widely across residents depending on individual characteristics and is not found in the majority of the sample. However, the situation might have changed now, in the light of further recent accelerations of tourism flows in many heritage cities, as possibly signaled by Bruges' current de-advertising on the tourism market. Despite this, overtourism and its policy implications are still relatively poorly covered in the literature, with the consequent risk of failing to fully appreciate the complex social conflict issues that can emerge and escalate in the absence of proper policy strategies and management at the city level. The key critical aspect, which is amplified by overtourism but already apparent in developing destinations even in the case of positive residents' perceptions, is the impact of tourism on local culture and behaviors, whose effects can only be appreciated in full in the medium-long run. Of course, culture and behaviors are inevitably bound to change anyway, independently of tourism. But the changes induced by tourism might eventually clash with the developmental priorities and goals of local communities (Simpson 2009). There is a need to strike a balance between the benefits of tourism as a local developmental driver and potentially negative effects, e.g., in terms of long-lasting impacts on cultural identity and authenticity (Lacy and Douglass 2002; Cole 2007; Zhu 2012), on socioeconomic inequalities (Lee 2009; Alam and Paramati 2016), on community empowerment (Cole 2006; Aref and Redzuan 2009; Chen et al. 2017), and so on. Especially critical is the evaluation of residents' perceptions in developing countries affected by substantial socioeconomic issues (Truong et al. 2014). Analyses that rely on an exclusively tourism-centric perspective are likely to overlook the most critical dimensions (Easterling 2004). The analysis by Ribeiro et al. (2017) of the development of pro-tourism behaviors among Cape Verde Islands residents is an example in this regard.
Nunkoo and Gursoy (2012) instead consider, in the case of Mauritius, the role of local identity in the orientation of residents' support for tourism, but interestingly point out how even the emergence of a supportive orientation need not translate into a significant shift in attitudes, thus underlining the complex functioning of community identity as a regulator of cultural and social change. On the other hand, tourism itself is constantly raising the bar as to the level and depth of interaction with local social life and customs that tourists expect to reach as a quintessential aspect of their experience, to the extent of becoming co-creators of the experience itself. Prebensen and Xie (2017) show, for example, that the level of tourists' participation, in the form of mastering and co-creation in experience tourism, significantly enhances their value perception. Paulaskaite et al. (2017) highlight how tourists increasingly expect to spend their time at the destination 'living like the locals,' thereby transforming local identity itself into a commodity that can be purchased at will. Such issues are relevant for all kinds of tourist destinations, but they are especially problematic in overcrowded ones. Overtourism shifts the focus of residents' perceptions onto critical aspects such as the pressure of tourism flows on the local system (Muler Gonzalez et al. 2018), the threats to ecological sustainability (Cheer et al. 2019) and the role of media, and social media in particular, in causing tourist congestion peaks on an almost instantaneous basis (Jang and Park 2020). In other words, the aspect of overtourism that is seen as the most socially alarming is its capacity to put the homeostatic mechanisms of local systems under stress at an unprecedented scale and pace, on many different levels: economic, social, cultural, logistical, and so on. Overtourism magnifies many of the most critical features of tourism to an extent that strains local governance and regulatory capacity; however, its effects may be more critical on certain dimensions than on others (Carvalho et al. 2020). When such impact is perceived as disruptive by local communities, social protest ensues (Alexis 2017; Pinkster and Boterman 2017; Seraphin et al. 2018). Once a perceived saturation level is reached, a vicious circle can take over as residents classify as threatening by default any tourism event that causes local congestion, irrespective of its quality, importance and expected long-term benefit for the city (Lemmi et al. 2018). This kind of vicious circle may reinforce, and be reinforced by, others, e.g., the one causing the erosion of local service quality in overcrowded tourism destinations (Caserta and Russo 2002). Such social dynamics are difficult to manage at all levels, and even large digital tourism platforms may find it hard to function well (e.g., in rewarding quality in their rankings of local businesses) when the effects of digital influencing upon spatial patterns of tourism congestion spark social controversy (Ganzaroli et al. 2017). Such new, system-wide challenges may be effectively tackled only through tailored, sophisticated forms of local cooperation between key stakeholders (Kuscer and Mihalic 2019) and of smart governance (Agyeiwaah 2019).
--- The model The literature briefly discussed in the previous section shows how the problem of overtourism, in the more general context of residents' perceptions of the social and economic impacts of tourism, has generally been addressed through the analysis of specific case studies and the measurement of perceptions and attitudes by means of suitable psychometric tools. In this paper, we take a different route as a contribution to a comprehensive approach to the smart governance of overtourism dynamics: that of characterizing such dynamics in terms of an explicit mathematical model. The ambition of the model is not to provide a detailed, realistic representation of overtourism in all of its multifaceted dimensions, but to examine the basic conditions that may favor, or prevent, its onset, paying special attention to a basic phenomenon: the competition between resident-oriented and tourist-oriented services for the limited spatial and material resources of the city. As we have seen from the literature review, a detailed modeling of such dynamics would involve many different variables (place identity, social perceptions, local culture, historical trajectories and many more), and this would easily make an explicit dynamic analysis intractable due to the number of variables involved. However, simplifying the model to its essentials has the advantage of providing some insight that may help focus upon the possible dynamic regimes that may prevail, providing policymakers with some important indications for policy design. We choose as our conceptual benchmark the classical Lotka-Volterra predator-prey model, which has been the object of countless applications in a variety of different fields, and of mathematical generalizations of all kinds, due to its optimal combination of simplicity of structure and richness of dynamic behaviors. In our case, the predator-prey logic is somewhat ingrained in the nature of the problem we want to analyze, as one thinks of overtourism as the process through which tourism flows literally 'capture' the local system, reshaping it according to their needs. On the other hand, even a basic description of the overtourism problem urges us to depart from the basic formulation of the predator-prey model to better take account of some essential specificities. In particular, the model we propose has the following structure: $\dot{x} = r + ax - b(y - \bar{y})x + c(x - \bar{x})$, $\dot{y} = s + dxy + e(y - \bar{y})x - fy$, where x is the level of the resident population and y is the level of the tourist population. All parameters are positive. Let us now see in some detail the rationale behind the equations. The basic premise of the model is that there is an implicit competition between residents and tourists for the availability of services and resources that respond to their specific, and partly mutually incompatible, needs. In particular, there are two threshold values of x and y, denoted $\bar{x}$ and $\bar{y}$, respectively, beyond which the local level of residents (tourists) is large enough to warrant a satisfactory provision of resident- (tourist-) specific services and resources. We call such thresholds the relevance thresholds. When one population crosses its relevance threshold, the local economy becomes increasingly responsive to that population's needs, and this positively influences the dynamics of that population. These two effects are captured, respectively, by the two terms $x - \bar{x}$ and $y - \bar{y}$.
So, the level of the resident population depends positively on whether the residents are above their relevance threshold, and negatively on whether the tourists are above theirs. In this latter case, however, the size of the effect is scaled by the level x of the resident population: the larger the pool of residents, the more an above-threshold level of tourists makes competition for scarce space and resources more sustained, increasing the negative impact of tourists on the resident population. Parameters b and c measure the relative size of the two effects. Moreover, the dynamics of the resident population also depend linearly (according to the parameter a) on the actual level of the resident population, as the choice to live in a city is characterized by some amount of inertia, due to a variety of factors such as relocation costs, habit, cultural and affective reasons, job-related reasons, and so on. As to the tourist population, it benefits from the crossing of its own relevance threshold, as already anticipated, and the effect is measured by the parameter e. Moreover, its dynamics are negatively influenced by tourism congestion, an effect whose size is measured by the parameter f. Finally, the tourist population's persistence in the destination also depends, positively, on the level of residents, as measured by the parameter d. Insofar as the resident population is small, the city basically turns into a 'theme park' devoid of any specific authenticity and vitality, becoming a mere entertainment district that maximizes tourism-related profit. This effect, as already hinted at in the discussion of the previous section, therefore captures the 'experience economy' dimension, as tourists do not simply ask for entertainment, but also value opportunities for meaningful interaction with locals. Finally, parameters r and s measure the exogenous components of the rates of growth of the resident and tourist populations, respectively. The system of equations above can be conveniently rewritten as follows: $\dot{x} = r - c\bar{x} + (a + b\bar{y} + c)x - bxy$ (1) and $\dot{y} = s - e\bar{y}x + (e + d)xy - fy$ (2). In our analysis, we will refer to (1)-(2) as the default formulation of the model. --- Existence and stability of the stationary states To analyze the dynamic behavior of the model, we start by posing: $A := \frac{a + b\bar{y} + c}{b}$, $B := \frac{c\bar{x} - r}{a + b\bar{y} + c}$, $C := \frac{e\bar{y}}{e + d}$, $D := \frac{s}{e\bar{y}}$, $E := \frac{f}{d + e}$. The complete taxonomy of possible dynamic regimes is illustrated in the following proposition. We will see how even a relatively simple model like the present one can generate a rich array of dynamic behaviors depending on the prevalence of certain constellations of conditions rather than others. Proposition 1 Under the assumption that all parameters of the system (1)-(2) are strictly positive, the following dynamic regimes can be observed: (1) If $B > 0$ (i.e., $\bar{x} > r/c$), then at most two stationary states exist. In particular, (1.a) if either $D < B < E$ or $E < D < B$ holds, a unique repelling stationary state P exists (Fig. 7b, c in the Appendix); (1.b) if $B < \min\{D, E\}$, two stationary states $P_1 = (x_1^*, y_1^*)$ and $P_2 = (x_2^*, y_2^*)$, with $x_1^* < x_2^*$ and $y_1^* < y_2^*$, may exist, where $P_1$ is always a saddle point, whereas: (1.b.1) if $D < E$, then $P_2$ is a repeller (Fig. 7a, e in the Appendix); (1.b.2) if $D > E$, then $P_2$ is either a repeller or an attractor (Fig. 7d in the Appendix).
(2) If B ≤ 0 (i.e., x̄ ≤ r/c), a unique stationary state exists; if D < E, it is either a repeller or an attractor (Fig. 7f in the Appendix), while, if D > E, it is an attractor (Fig. 7g in the Appendix).
--- Proof See Appendix
To explain the meaning of Proposition 1, let us start by better understanding the interpretation of the new composite parameters A, B, C, D and E. The parameter A measures the relative size of the parameters that positively regulate the growth of the resident population vs. the parameter b that negatively affects it. In particular, the growth of the size x of the resident population depends positively on the parameter a (measuring the persistence effect), on the parameter c (representing the reactivity to the difference between x and the threshold x̄) and on the threshold ȳ for the tourist population. We can therefore intuitively interpret A as a measure of residents' resilience. As for B, it depends positively on the relevance threshold of the resident population (measured by c x̄): the larger it is with respect to r, and with respect to the parameters that positively influence residents' resilience, the larger B. B can therefore be intuitively interpreted as a measure of residents' susceptibility: the higher B, the more demanding it is for the residents' community to fulfil the conditions for the prevalence of a resident-oriented local economy. Likewise, C can be interpreted as a measure of tourists' susceptibility, as C is larger the higher the relevance threshold for tourists e ȳ, and the smaller the combined strength of the experience value parameter d from visiting the city plus the impact e of crossing the relevance threshold on the availability of tourist-oriented services and resources. D can be seen as the city's intrinsic attraction value for tourists, as it equals s (the exogenous growth rate of tourists) scaled by the relevance threshold for tourists. Finally, E measures the tourists' relative congestion effect, expressed by the congestion parameter f scaled by the combined strength of the experience value and resource and service availability effects for tourists.
At this point, we are ready to illustrate the findings in Proposition 1. The results are organized around the sign of B, that is, whether or not the residents' susceptibility problem occurs, which implies a relatively high relevance threshold for the resident-oriented local economy to kick off. In the case of a positive level of residents' susceptibility, that is, B > 0, we have at most two stationary states that can be potential equilibria for the dynamics. A first sub-regime relies on two possible conditions under which, of the two possible stationary states, only one exists and is repulsive, that is, the dynamics never settle down to a given state. The two conditions are D ≤ B ≤ E and E ≤ D ≤ B. In the first case, we have a condition where the congestion effect is particularly high with respect to residents' susceptibility and to the intrinsic attraction value for tourists. This is, for example, the case of a relatively small city where, despite the comparatively modest attraction value, congestion is a problem and tourists can crowd out residents relatively easily. In the second case, we have on the contrary a situation where congestion is relatively unimportant and residents' susceptibility is comparatively high in the presence of a relatively substantial attraction value.
This is for instance a scenario that could describe a relatively large city with high carrying capacity and cultural/amenity value, where there is real competition for local resources and services between residents and tourists. These two conditions may therefore span very different cases.
A second sub-regime again contemplates the existence of two possible stationary points, one of which is always a saddle, that is, a state to which a unique converging trajectory exists while all the other ones diverge. The key condition for the second sub-regime is that B be smaller than both D and E. Given that B is constrained to be positive, the condition requires that both the congestion effect and the intrinsic attraction value are relatively high. An example here is that of an established tourism destination with severe congestion problems, where however the issue of resource accessibility for residents is relatively less binding, possibly due to a large, diversified local economy that can accommodate local demand. An extra condition regulates the dynamic properties of the second possible steady state, according to the relative size of the two potentially dominating effects. If the dominant effect is congestion, the second stationary state is locally unstable. If instead the dominant effect is the intrinsic attraction value, the second stationary state may be either locally unstable or locally stable, that is, it may attract all local trajectories and emerge as a stable state.
When residents' susceptibility is not a major concern (i.e., B ≤ 0), the dynamic regime is much simpler. In this case, the stationary state is always unique. Moreover, if congestion prevails upon intrinsic attraction, this state may be either attracting or repelling (locally stable vs. unstable). If the opposite is true and intrinsic attraction prevails, the stationary state is always attractive. An example of this latter condition is a world-renowned tourist destination, with a large carrying capacity that can manage congestion, and where the competition between residents and tourists for local services and resources is not binding.
Proposition 1 tells us, among other things, that in many cases the dynamics we are studying is not conducive to a stable equilibrium state and is rather characterized by more complex long-run behaviors. The stability properties of the stationary states do not give us enough information to understand what such dynamic behaviors will look like, as they only provide insight about what happens close to them. However, the structure of stationary states is an important piece of information, and in particular, it is interesting to ask how the number and stability properties of the stationary states vary depending on the levels of specific pairs of parameters, such as the relevance thresholds for residents and tourists, given our focus on overtourism and its possible impacts. In all the analysis that follows, the choice of parameter values for the simulations has been made so as to select cases that enable us to illustrate clearly and compactly the dynamic properties of the model.
Figure 1 illustrates the bifurcation diagrams obtained by varying the relevance threshold ȳ. Panels (a) and (b) show how the coordinates x and y (on the horizontal axis) of the stationary states vary in response to variations in ȳ (on the vertical axis). The LP point separates the interval of ȳ values where no stationary state exists from that in which two stationary states exist.
The point H indicates the Hopf bifurcation value of ȳ. Dashed, continuous and dotted lines represent saddle points, attractive and repulsive stationary states, respectively. The conditions under which a Hopf bifurcation occurs by varying the parameter ȳ, computed according to the criterion proposed by Liu (1994), are given in the Appendix. In Panel (c) of Fig. 1, we show how, through the Hopf bifurcation, a family of limit cycles emerges. Notice that an increase in the parameter value ȳ leads to an increase in the magnitude of the limit cycles.
In Fig. 2, we show, for a specific set of parameter values, the bifurcation diagram that illustrates the existence and stability of the stationary states as the two relevance thresholds vary. Figure 2a provides the full diagram, whereas Fig. 2b presents an enlargement of the rectangular area where the most fine-grained structure is found. As we can see, the bifurcation diagram contains all seven possible scenarios for the stationary states, where, in Fig. 2, the labels (S, A), (S, R), A, R denote, respectively: regions where two stationary states exist, of which one is a saddle (S) and the other an attractor (A); regions where two stationary states exist and are, in particular, a saddle and a repeller (R); regions where one stationary state exists and is an attractor; and regions where one stationary state exists and is a repeller. The H curve is the Hopf bifurcation curve, whereas the LP curve is the one that separates the region without stationary states from the region where at least one stationary state exists. The Hopf bifurcation curve H separates the regions where an attractive stationary state is found (to the left of the curve) from those where a cycle emerges, as shown in more detail in Fig. 2b. In the simulations below, we find that the attractive cycle is stable and the corresponding stationary state consequently becomes unstable. Figure 2c, instead, reports the bifurcation diagram in the (c, e) space, where we study how the structure of the stationary states varies with the parameters that measure the strength of resource provision when residents (respectively, tourists) cross their relevance threshold. Again, the bifurcation curve H and the LP curve delimit the areas where one of the stationary states (or the only one, if unique) changes its local behavior from repulsive to attractive, and where stationary states exist vs. fail to exist.
From these figures we see how, in the case of the bifurcation diagram for the relevance thresholds, there is a vast region where stationary states do not exist for most values of the relevance threshold for residents if the relevance threshold for tourists is small enough. That is, when tourists are substantially favored in their capacity to access local resources with respect to residents, the dynamics fail to settle on a stationary state. However, when the relevance threshold for residents is very low, a stable stationary state emerges even for relatively high levels of the relevance threshold for tourists. That is, when residents succeed in getting access to the local resources, the system has a chance to stabilize itself. But when the relevance threshold for tourists, or even both thresholds, become very high, so that it is difficult for both populations to gain easy access to local resources, there is no chance that the system may settle down to a stable equilibrium.
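As an illustration of how this taxonomy can be explored numerically, the sketch below computes the composite parameters A-E, applies the case distinctions of Proposition 1, then locates the stationary states of system (1)-(2) by root-finding and classifies each one through the determinant and trace of its Jacobian, as in the Appendix. The parameter values are hypothetical and are not those used to produce the figures.

```python
import numpy as np
from scipy.optimize import fsolve

# Hypothetical parameter values, chosen only to illustrate the procedure.
p = dict(r=0.1, s=0.1, a=0.2, b=0.5, c=0.4, d=0.1, e=0.3, f=0.6,
         x_bar=2.0, y_bar=1.0)

def composite_parameters(p):
    """Composite parameters A, B, C, D, E as defined in the text."""
    denom = p["a"] + p["b"] * p["y_bar"] + p["c"]
    A = denom / p["b"]
    B = (p["c"] * p["x_bar"] - p["r"]) / denom
    C = p["e"] * p["y_bar"] / (p["e"] + p["d"])
    D = p["s"] / (p["e"] * p["y_bar"])
    E = p["f"] / (p["d"] + p["e"])
    return A, B, C, D, E

def classify_regime(p):
    """Coarse regime classification following the cases of Proposition 1."""
    A, B, C, D, E = composite_parameters(p)
    if B > 0:
        if D <= B <= E or E <= D <= B:
            return "case (1.a): a unique repelling stationary state"
        if B < min(D, E):
            if D < E:
                return "case (1.b.1): up to two stationary states, a saddle and a repeller"
            return "case (1.b.2): up to two stationary states, a saddle and a repeller or attractor"
        return "parameter combination not covered by the sub-cases of Proposition 1"
    return ("case (2): unique stationary state, repeller or attractor" if D < E
            else "case (2): unique stationary state, attractor")

def rhs(v):
    """System (1)-(2): returns (dx/dt, dy/dt)."""
    x, y = v
    dx = p["r"] - p["c"] * p["x_bar"] + (p["a"] + p["b"] * p["y_bar"] + p["c"]) * x - p["b"] * x * y
    dy = p["s"] - p["e"] * p["y_bar"] * x + (p["e"] + p["d"]) * x * y - p["f"] * y
    return [dx, dy]

def jacobian(x, y):
    """Jacobian of (1)-(2), used to classify local stability."""
    return np.array([
        [p["a"] + p["b"] * p["y_bar"] + p["c"] - p["b"] * y, -p["b"] * x],
        [(p["e"] + p["d"]) * y - p["e"] * p["y_bar"], (p["e"] + p["d"]) * x - p["f"]],
    ])

print(classify_regime(p))

# Root-finding from a grid of initial guesses; keep distinct nonnegative solutions.
stationary = []
for x0 in np.linspace(0.1, 6.0, 10):
    for y0 in np.linspace(0.1, 6.0, 10):
        sol, info, ok, msg = fsolve(rhs, [x0, y0], full_output=True)
        if ok != 1 or not np.allclose(rhs(sol), 0.0, atol=1e-9) or sol.min() < 0:
            continue
        if any(np.allclose(sol, s, atol=1e-5) for s, _ in stationary):
            continue
        J = jacobian(*sol)
        det, tr = np.linalg.det(J), np.trace(J)
        kind = "saddle" if det < 0 else ("attractor" if tr < 0 else "repeller")
        stationary.append((sol, kind))

for (xs, ys), kind in stationary:
    print(f"stationary state ({xs:.4f}, {ys:.4f}): {kind}")
```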
In the case of the bifurcation diagram in the (c, e) space, the pattern is more complicated, and the existence of stable stationary states here relies on more specific combinations of the two parameters. In general, when c is very high, that is, when access to resources beyond the relevance threshold has a big positive impact on the population of residents, no equilibrium exists, whereas for smaller values of c a stable stationary state can emerge. Again, when both parameters are large, no stable stationary state can be found. Remember that these bifurcation diagrams are drawn for a given choice of numerical values of all the other parameters, and that they change as any one of the other parameters varies.
To get a better understanding of what the actual trajectories of the system look like, we report a few examples of phase diagrams for a specific choice of parameter values in Fig. 3. In particular, we keep the values of all the parameters other than the relevance thresholds as in Fig. 2, and we set a specific value x̄ = 4, letting ȳ vary. The four cases correspond, respectively, to points from the white, yellow, indigo and orange regions of Fig. 2b. For ȳ = 0.8 (white region in Fig. 2b), for most initial conditions the system converges toward states where the resident population goes extinct and only a stable level of tourists is observed: this is a full 'Disneyfication' scenario where the city turns into a tourist theme park, and where the eventual level of tourists depends on initial conditions. As could be expected, this is due to the fact that the relevance threshold for tourists is very low with respect to that for residents, and consequently, tourists take over local services and resources, expelling the residents. However, for very low initial levels of tourists and high enough levels of residents, there are also trajectories where residents take over the city, letting tourists go extinct or remain present at very low levels. As the relevance threshold ȳ grows to 1.1 (yellow region in Fig. 2b), making access to resources more demanding for tourists, we witness the emergence of a stable attractor where residents and tourists stably coexist in the long term, approaching this state through a cyclical adjustment path, whose basin of attraction is delimited by the yellow region. Outside this basin, depending on the initial level of tourists vs. residents, we find as before that either tourists take over entirely, or residents do, entirely or partially (that is, with a more or less high level of tourists observed in the long term). As ȳ is brought further up to 1.3151 (indigo region), the stationary state becomes unstable and cyclical behaviors emerge within the yellow region, whereas outside the region one still observes, as before and depending on initial conditions, the eventual takeover of tourists or the emergence of a state with high levels of residents and some tourists. Finally, with ȳ at 1.35 (orange region), the system is destabilized, the stationary state is unstable and the trajectories may entail big oscillations where, although both the no-residents and the prevailing-residents long-term states can materialize as before, it is also possible that the limit state is reached through expanding fluctuations.
In particular, it is interesting to observe that, as the conditions for accessibility of resources for tourists become more demanding when ȳ increases, the resulting dynamic behaviors do not simply favor residents; rather, what we observe is an increase of the system's dynamic variability, with the eventual emergence of cyclically diverging behaviors where big changes in the levels of residents vs. tourists are observed over time.
In Fig. 4 we highlight a different phenomenon, namely how the size of the basin of attraction of the stable stationary state varies with ȳ for a given value of x̄. We now fix x̄ = 2.2 and choose the values of ȳ so as to always remain within the yellow region of Fig. 2b, where a stable stationary state (attractor) exists. As we see in Fig. 4a, as ȳ increases, the size of the basin of attraction of the stable stationary state (denoted by a black dot) significantly increases. In Fig. 4b, we analogously set ȳ at the constant value 1.2 and let x̄ vary. In this case, as access to resources becomes less and less easy for residents with the increase in x̄, the size of the basin of attraction of the stable stationary state gradually shrinks. Maintaining viable access to resources for residents therefore causes, as one might expect, a dynamic stabilization of the system.
Figure 5 reports yet another angle of analysis, namely how the coordinates of the stationary state vary with ȳ for a given level of x̄ and of e. The other parameters are still kept at the usual values. We see that, as ȳ increases, the stationary state entails smaller equilibrium levels of both tourists and residents. However, for a given ȳ, increases in x̄ imply lower levels of tourists at the stationary state. This pattern of course only informs us about the composition of the stationary state, but not about its stability properties or, if it is attractive, about the size of its basin of attraction. In Fig. 5b, as could be expected, as c grows we see that the stationary state entails lower and lower levels of tourism, all other things being equal. Beyond a certain threshold for c, the steady-state level of tourists keeps declining even when e increases, whereas below the threshold an increase in e causes a corresponding increase in the level of tourists at the steady state. We have checked the robustness of our simulation results through further, extensive numerical tests that are not reported here for brevity and which confirm our analysis.
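The phase-diagram explorations described above can be reproduced, at least qualitatively, with a standard ODE integrator. The following sketch integrates system (1)-(2) from a few initial conditions and plots the resulting trajectories in the (x, y) plane; the parameter values are again hypothetical and are not the calibration behind Figs. 3-5, so the qualitative regime may differ.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import solve_ivp

# Hypothetical parameter values; x_bar and y_bar play the role of the relevance thresholds.
p = dict(r=0.1, s=0.1, a=0.2, b=0.5, c=0.4, d=0.1, e=0.3, f=0.6, x_bar=4.0, y_bar=1.1)

def rhs(t, v):
    """System (1)-(2) in the form required by solve_ivp."""
    x, y = v
    dx = p["r"] - p["c"] * p["x_bar"] + (p["a"] + p["b"] * p["y_bar"] + p["c"]) * x - p["b"] * x * y
    dy = p["s"] - p["e"] * p["y_bar"] * x + (p["e"] + p["d"]) * x * y - p["f"] * y
    return [dx, dy]

# Integrate a few trajectories from different initial mixes of residents and tourists.
fig, ax = plt.subplots()
for x0, y0 in [(1.0, 0.5), (2.0, 2.0), (4.0, 1.0), (0.5, 3.0)]:
    sol = solve_ivp(rhs, (0.0, 150.0), [x0, y0], max_step=0.1)
    ax.plot(sol.y[0], sol.y[1], label=f"x0={x0}, y0={y0}")
ax.set_xlabel("residents x")
ax.set_ylabel("tourists y")
ax.legend()
plt.show()
```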
--- Discussion and conclusions
We have built a simple model to study the conditions for the emergence of overtourism through the mathematical simulation of a predator-prey-inspired dynamical system. The core element that drives our dynamics is the competition for the accessibility of resources and services between residents and tourists, a feature that is typical of overtourism and is mainly responsible for its most disruptive effects. The model has been further enriched with a few elements that capture effects such as tourist congestion or the experience value that tourists derive from the interaction with residents or from the intrinsic attractiveness of the city. Even when the model is studied in its most essential form, the dynamic analysis is challenging. Our model shows that, under suitable conditions, overtourism may emerge, to the point of causing a full 'Disneyfication' of the city, with the eventual extinction of all residents and its final transformation into a tourist theme park. However, the reverse outcome is also possible, with tourists disappearing from the city or reaching a stable level without taking over the local economy. Of course, in addition to these extreme cases, a stable coexistence of residents and tourists is also possible, but equally possible are more complex dynamics that may entail stable cyclical oscillations or wide variations in the relative levels of the populations of residents and tourists. The outcome that is eventually reached depends on a very complex constellation of parameters, each of which plays a specific role that can, however, be fully understood only by means of a thorough analysis.
What we have learnt from this study is that, in a nonlinear setting, acting on specific parameter values may cause counterintuitive effects. As we have seen, some cities have decided to tackle overtourism by restricting tourists' access to local services and resources. In our model, this basically amounts to raising the relevance threshold of tourism, as it makes the conditions for access to tourist-specific resources more demanding. However, this does not necessarily entail the eventual reduction of the number of tourists, or even the reaching of a stable stationary state where the number of tourists is under control. It may happen instead that the main effect of raising the threshold ȳ is to destabilize the system, for instance by causing the emergence of large oscillations in the levels of residents and tourists. This means that, contrary to commonsense approaches, it is important to understand how certain measures affect the whole structural organization of the local economy. The interplay with factors such as congestion, intrinsic attractiveness, or experience value can generate complex dynamic effects that influence the existence and stability of stationary states, and more generally the dynamic behavior of the system.
It is interesting to notice that, in determining the existence and stability properties of the stationary states of the model, certain composite parameters play a more substantial role than others. In particular, residents' susceptibility (B) is the key parameter in determining the dynamic regime that prevails, whereas residents' resilience (A) and tourists' susceptibility (C) play practically no role, although it is far from excluded that they may play a role in the dynamic behavior of the system far from equilibrium. The central point therefore seems to be the conditions for access to local services and resources by residents.
Promoting residents' access does not merely amount to restricting tourists' access to the same resources. Lowering residents' susceptibility might be a better strategy and also a source of stabilization of the system. This goal may be reached, for instance, by providing better social and welfare services to residents, by supporting social entrepreneurship that better addresses critical local needs, by improving the quality of key resident services such as kindergartens or retirement homes, and so on. What is important to stress is that, in a nonlinear system, even relatively small changes may make a big difference, for better or for worse. Therefore, building models that allow us to estimate the likely impact of policy measures becomes crucial as an essential support tool for public decision making. It is unlikely that cities will successfully deal with overtourism through the implementation of occasional measures without a clear evidence-based strategy that is informed by a solid knowledge of the underlying system of structural interdependencies, not unlike what happens in the management of ecological systems.
Our study has clear limitations, due to the extreme simplicity of the model, which disregards many potentially relevant factors. In particular, the role of residents' and tourists' expectations and attitudes, which as we have seen is an important aspect in the current evaluation of the social and economic impacts of tourism, could also be modeled, with all the ensuing complexities arising from cultural transmission effects, misperceptions and biases, manipulation of consensus, and so on. Another important limitation is that an empirical estimation of the values of the model parameters is not simple and would call for a sophisticated nonlinear econometric analysis. Data availability is also demanding, as, ideally, very long time series of the resident/tourist populations of cities with significant or potential overtourism issues would be required. The nonlinearity features of the model imply that even relatively small estimation errors might have big consequences on the projected dynamics, yielding potentially misleading indications. The present paper therefore has mainly a conceptual value in drawing attention to the dynamic complexity of the socioeconomic dynamics of overtourism, and the ensuing necessity to carefully assess the long-term effects of policy changes even when they intuitively seem to respond effectively to outstanding issues. Curbing tourist congestion through the reduction of commercial licenses for tourism-related businesses, for instance, looks like an appealing solution, but its long-term consequences might be more complex than one could expect, depending on the overall structure of the local economy and its 'ecosystemic' interdependencies. In its current form, our model is not tailored to guiding policy design choices, a task that requires suitably calibrated empirical models. But we hope that this first study may inspire further, more sophisticated analyses that will serve in turn as a guide for the construction of policy-oriented tools. We look forward to this promising perspective.
--- Appendix
--- Existence and stability of stationary states
In order to study the existence of the stationary states, we rewrite the system (1)-(2) as:

ẋ = G(x, y) := b [A (x - B) - x y]   (4)
ẏ = H(x, y) := (f / E) [(x - E) y - C (x - D)]   (5)

so that the isoclines (i.e., G(x, y) = 0 and H(x, y) = 0) of the dynamical system become:

y = g(x) := A (x - B) / x   (6)
y = h(x) := C (x - D) / (x - E).   (7)

It is easy to check that the above functions are two hyperbolas with the following properties:
i. the function y = g(x) (Fig. 6a, b) presents a horizontal asymptote at y = A, a vertical one at x = 0, and its graph crosses the x-axis at x = B (sign(B) = sign(c x̄ - r));
ii. the function y = h(x) (Fig. 6c, d) presents a horizontal asymptote at y = C, a vertical one at x = E, and its graph crosses the x-axis at x = D.
Remark 1 The graphs of g(x) and h(x) can have at most two intersection points and, therefore, at most two stationary states exist. Furthermore, under the assumption that all parameters of the system (1)-(2) are strictly positive, it is easy to check that the inequality A > B is always satisfied.
Overlapping the pairs of Fig. 6a-c, a-d and b-d, we obtain all possible intersections between the two isoclines, as shown in Fig. 7a-g. This proves the claim about the existence of the stationary states of Proposition 1. In order to study the stability properties of the stationary states, we compute the Jacobian matrix of the system (4)-(5), evaluated at the stationary state P*:

J(P*) = [ b(A - y*)   -b x* ; (f/E)(y* - C)   (f/E)(x* - E) ]   (8)

We know that the signs of the determinant D(J(P*)) and of the trace T(J(P*)) of the matrix (8) give us the stability properties of the stationary state. In particular, if D(J(P*)) < 0, then the stationary state is a saddle point; if D(J(P*)) > 0 and T(J(P*)) > 0 (< 0), the stationary state is a repeller (an attractor).
We prove the result for the sub-regime 0 < B < E < D (see claim (1.b) in Proposition 1) shown in Fig. 7d. The claims for the other sub-regimes can be proven in the same way. We observe that the slopes of the curves G(x, y) = 0 and H(x, y) = 0 are given by:

m_G(x, y) = -(y - A) / x,   m_H(x, y) = -(y - C) / (x - E)

at any given stationary state P*. In this respect, we rewrite the determinant as

D(J(P*)) = (b f / E) x* (x* - E) (m_G(x*, y*) - m_H(x*, y*)).

Since y* - A < 0, y* - C > 0 and x* - E < 0, the stability analysis can be developed as follows:
i. At the stationary state P1, the curves G = 0 and H = 0 are both increasing and the slope of G = 0 is greater than that of H = 0. Then, the determinant D(J(P1)) is strictly negative and the stationary state is a saddle.
ii. At the stationary state P2, the curves G = 0 and H = 0 are both increasing and the slope of H = 0 is greater than that of G = 0. Then, the determinant D(J(P2)) is strictly positive and the stationary state is either a repeller or an attractor, depending on the sign of the trace T(J(P2)).
--- Hopf bifurcation
The Jacobian matrix of the system (1)-(2) evaluated at a stationary state P* = (x*, y*) is given by the matrix J(P*) in (8). Liu (1994) derived a criterion to prove the existence of a Hopf bifurcation without using the eigenvalues of the matrix J(P*). According to Liu's criterion, if the stationary state P* depends smoothly upon a parameter p ∈ (0, p̄), and there exists a parameter value pH ∈ (0, p̄) such that the characteristic equation of J(P*), λ² + T(p)λ + Δ(p) = 0, satisfies the conditions (1) T(pH) = 0, (2) Δ(pH) > 0 and (3) dT(p)/dp ≠ 0 at p = pH, then a Hopf bifurcation occurs at p = pH. For the parameter values used in our simulations, taking p = ȳ, we obtain Δ(ȳH) = 91.73884257 > 0, and therefore condition 2 holds. Finally, at ȳ = ȳH, we have dT(ȳ)/dȳ = -12.92610372 ≠ 0, so that a Hopf bifurcation occurs at the parameter value ȳH = 1.309562086.
Publisher's Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Overtourism is an increasingly relevant problem for tourist destinations, and some cities are starting to take extreme measures to counter it. In this paper, we introduce a simple mathematical model that analyzes the dynamics of the populations of residents and tourists when there is a competition for the access to local services and resources, since the needs of the two populations are partly mutually incompatible. We study under what conditions a stable equilibrium where residents and tourists coexist is reached, and what are the conditions for tourists to take over the city and to expel residents, among others. Even small changes in key parameters may bring about very different outcomes. Policymakers should be aware that a sound knowledge of the structural properties of the dynamics is important when taking measures, whose effect could otherwise be different than expected and even counterproductive.
Background
Women's high-risk fertility behaviour (HRFB), which is defined by narrow birth intervals, high birth order, and young maternal age at birth, has been associated with negative health outcomes for both the mother and the child [1,2]. Maternal HRFB is a bio-demographic risk factor that impedes efforts to reduce maternal and child morbidity and mortality [3][4][5][6][7]. Demographic variables such as women's age, parity, and birth spacing are the crucial parameters for measuring HRFB, which includes too-early (< 18 years) or too-late (> 34 years) childbearing, short birth intervals (< 24 months) and a higher number of live births (4 or higher) [3,4,7,8]. Although the total fertility rate (TFR) of Bangladesh declined from 3.7 in 1995 to 2.04 in 2020 [9], the rate of teenage pregnancy remains about 35%, and 15.1% of women gave birth less than 24 months after a previous birth. Compared with many developing countries, Bangladesh has one of the highest adolescent fertility rates, with 82 births per 1,000 women as of 2019, and over 50 percent of adolescents gave birth between the ages of 15 and 19 [10].
Several studies have identified that early or late motherhood is associated with hypertension, premature labor, anemia, gestational diabetes, diabetes, obesity, pregnancy-related complications, higher rates of caesarean and operative deliveries, and unsafe abortions [11,12]. Childbearing at an early age (< 18 years) is connected to a growing risk of intrauterine growth restriction, child undernutrition, preterm birth, and infant mortality. On the other hand, late motherhood (> 34 years) is related to preterm births, intrauterine growth restriction, stillbirths, amniotic fluid embolism, chromosomal abnormalities and low-birth-weight newborns [12,13]. Maternal HRFB is also associated with neonatal mortality: a study in India identified a causal effect of birth spacing on neonatal mortality [14], and teenage childbearing has also been found to be linked to neonatal mortality [15]. Some previous studies established a relationship between numerous HRFB-related parameters and their detrimental effects on maternal and infant health [7,8,16,17]. Women who start having children at an early age often have more children [18], and this is also associated with adverse maternal, infant and child health outcomes [19]. On the other hand, short birth intervals (< 24 months) [20] and higher birth order [21] may also aggravate infant and child mortality.
Although such evidence supports the consideration of different exposures to high-risk fertility behaviours as a high-priority maternal and child health concern, very few studies in Bangladesh have evaluated factors related to HRFB in women of reproductive age. Therefore, in order to develop effective prevention programs for the region, a clear understanding of the determinants and potential risk factors for maternal high-risk fertility behaviour among Bangladeshi women is required. There is, however, a dearth of literature examining the risk factors for HRFB in Bangladesh. To date, most of the studies on HRFB in Bangladesh have focused on identifying the relationship between HRFB in women and maternal and child health outcomes [7,17,22]. Based on these considerations, this study aimed to identify the factors associated with HRFB in women. Identifying such determinants will be crucial for formulating evidence-based programs in Bangladesh, especially targeting the significant risk factors.
--- Methods
--- Data sources
The study relied on data from the Bangladesh Demographic and Health Survey (BDHS) 2017-18. The National Institute of Population Research and Training (NIPORT) of the Ministry of Health and Family Welfare of Bangladesh used a two-stage stratified sampling approach to conduct this cross-sectional survey. The outcomes of our study were assessed using a total sample of 7757 women (aged 15 to 49). The study included ever-married women aged 15-49 who were not currently pregnant and had at least one child before the survey. Unmarried women, currently pregnant women, and mothers with incomplete BMI information were excluded from the sample. The data collection procedures and sampling frame are described in detail in the original BDHS 2017-18 report [23].
--- Outcome variable
The outcome variable for this study was maternal "high-risk fertility behaviour", developed using the definition of the BDHS [23]. The study considered three variables to define high-risk fertility behaviour: (a) maternal age at the time of delivery, (b) birth order, and (c) birth interval. The presence of any one of the following conditions was termed a single high-risk fertility behaviour: (i) mother's age less than 18 years at the time of childbirth; (ii) mother's age over 34 years at the time of childbirth; (iii) latest child born less than 24 months after the previous birth; and (iv) latest child's birth order 3 or higher. Multiple high-risk categories are made up of two or more of the aforesaid conditions. High-risk fertility behaviour was defined as the presence of any of the four conditions listed above (coded as 1, and 0 otherwise) for the final analysis.
--- Independent variables
The researchers reviewed the most recent relevant articles to determine the independent variables. The selected sociodemographic and economic (independent) variables included in the analysis are: place of residence (urban and rural), administrative division (Barishal, Chottogram, Dhaka, Khulna, Mymensingh, Rajshahi, Rangpur, Sylhet), religion (Islam, Hindu and other), age (15-24, 25-34 and 35-49 years), age at marriage (< 18 and ≥ 18 years), education (no education, primary, secondary and higher), access to television (no and yes), body mass index (according to WHO [24]; underweight: < 18.50 kg/m², normal: 18.50-24.99 kg/m², overweight/obese: ≥ 25.00 kg/m²), current working status (currently working and not working), partner's education (no education, primary, secondary, higher); and partner's occupation (agricultural, business, non-agricultural, other). Reproductive factors: birth order (1-2, > 3), antenatal care (ANC) seeking (no, yes), current use of contraceptive methods (yes, no), type of childbirth (normal, caesarean), place of childbirth (home, facility birth), and pregnancy wanted (then, later, no more).
--- Statistical analysis
The frequency and percentage of the selected attributes were determined using descriptive statistics. Pearson's chi-square test was performed to show the association between the outcome variable and the specified independent variables at the bivariate level.
Finally, the factors related to high-risk fertility behaviour were determined using logistic regression analysis, retaining significant components (p-values < 0.05) in the multivariate model. These analyses included both unadjusted odds ratios (UORs) and adjusted odds ratios (AORs), along with 95% confidence intervals (CIs). Multicollinearity among covariates was checked for all models using variance inflation factors (VIFs), which were modest (VIF < 2) for all covariates. The Statistical Package for the Social Sciences (SPSS, version 25.0) was used to conduct all statistical analyses.
--- Ethical consideration
DHS data are available in the public domain and freely available to anyone who makes a reasonable request. The entire study protocol was approved by the Bangladesh Ethics Committee and ICF International; thus, we did not need any additional ethical approval. The BDHS 2017-18 report contains details about the ethical approval [23].
--- Results
--- Background characteristics and prevalence of HRFB
The final study included 7757 women who had given birth within the previous five years. The median (IQR) age of the respondents was 25.0 years (25.0-75.0). More than half (56.8%) of the women were aged 15 to 24 years. Most women (71.6%) lived in rural areas, and an overwhelmingly large number (90.5%) of them were Muslims. Over half of the women had finished secondary education, and 62.8% were unemployed (Table 1). The worst situation was found in rural areas for both single and multiple HRFB. About 46.7% of respondents from rural areas had single HRFB, compared to 11.7% from urban areas. Similarly, 24.5% of women from rural areas were at multiple HRFB, compared with only 4.4% among women from urban areas (Fig. 1). Figure 2 demonstrates the prevalence of HRFB across the different administrative divisions of Bangladesh. The highest prevalence of single-risk fertility behaviour was found in Dhaka (10.5%), followed by Chottogram division (9.7%). However, the highest rate of multiple HRFB was found in Chottogram division.
--- Reproductive characteristics and high-risk fertility behaviour
Most women (63.8%) had a recent normal childbirth, and 54.3% had given birth at a healthcare center. Of the total mothers, a significant portion (91.9%) completed ANC follow-up for their recent pregnancy (Table 2).
--- Factors associated with high-risk fertility behaviour
Both univariate and multivariate logistic regression models were used to identify potential risk factors; however, because the multivariate model was controlled for the confounding effects of covariates, we used only the adjusted results to interpret the findings. Muslim women had a higher risk of HRFB (Adjusted Odds Ratio [AOR] = 5.52, 95% Confidence Interval [CI] 2.25-13.52, p < 0.001) than women of other religions. HRFB was less likely among younger women (15-24 years; AOR = 0.19, 95% CI 0.10-0.30, p < 0.001) and 6.42 times more likely among women over 35 years (AOR = 6.42, 95% CI 3.95-10.42, p < 0.001). Women who had normal childbirths had higher odds of HRFB (AOR = 1.47, 95% CI 1.22-1.69, p = 0.003) compared to those who had a caesarean section. Women who had unwanted pregnancies were 10.79 times more likely to have high-risk fertility than women whose pregnancies were desired (AOR = 10.79, 95% CI 5.67-18.64, p < 0.001). Women who did not currently use contraceptive methods were 1.37 times more likely to have HRFB compared to their counterparts (AOR = 1.37, 95% CI 1.24-1.81, p < 0.001).
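To make the outcome coding and the derivation of adjusted odds ratios concrete, the sketch below reproduces the logic on a small toy dataset in Python (the original analysis was run in SPSS). The column names, values and reduced covariate set are hypothetical; the sketch only illustrates how the four high-risk conditions can be combined into the binary HRFB indicator and how AORs with 95% CIs are obtained by exponentiating logistic-regression coefficients.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Toy data with hypothetical column names; the BDHS recode files use different variable codes.
df = pd.DataFrame({
    "age_at_birth":     [17, 29, 36, 24, 40, 22, 26, 19, 28, 30],
    "birth_interval":   [30, 20, 36, 30, 48, 26, 18, 40, 30, 36],  # months since preceding birth
    "birth_order":      [1, 3, 4, 2, 5, 1, 2, 1, 2, 2],
    "muslim":           [1, 1, 0, 1, 1, 0, 0, 1, 0, 1],
    "no_contraception": [1, 0, 1, 0, 1, 0, 1, 0, 1, 1],
})

# Outcome coding: any one of the four conditions marks high-risk fertility behaviour.
df["hrfb"] = ((df["age_at_birth"] < 18)
              | (df["age_at_birth"] > 34)
              | (df["birth_interval"] < 24)
              | (df["birth_order"] >= 3)).astype(int)

# Adjusted odds ratios: fit a logistic regression and exponentiate the coefficients.
X = sm.add_constant(df[["muslim", "no_contraception"]])
fit = sm.Logit(df["hrfb"], X).fit(disp=0)
aor = np.exp(fit.params).rename("AOR")
ci = np.exp(fit.conf_int())
ci.columns = ["2.5%", "97.5%"]
print(pd.concat([aor, ci], axis=1))
```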
The odds of HRFB were disproportionately distributed across the divisional regions. On the other hand, being aged 25 to 34, having a secondary or higher education level, and a partner's higher-level education reduced the odds of HRFB (Table 3).
--- Discussion
This study showed that 67.7% of women had HRFB, of which 45.6% were in the single high-risk category and 22.1% were in multiple high-risk categories. This high prevalence demonstrates that HRFB is all too common in Bangladesh, potentially endangering the health of the country's women. We found that women who were Muslim, were aged above 35 years, had a normal childbirth, had a low literacy level, had unwanted pregnancies, or did not use birth control methods were at increased risk of HRFB.
When compared to women who had never had any formal education, those with a higher level of education had a lower likelihood of high-risk fertility behaviour. This result was supported by previously conducted studies [22,[25][26][27]. The reason for this could be that having no formal education affects work status and leads to lower income and independence, all of which may influence fertility behaviour.
In this study, visiting ANC was found to be a facilitating factor for reducing the odds of HRFB. This is probably due to the fact that antenatal care provides opportunities to reach pregnant women with a variety of interventions that may be essential to their health and well-being [28,29]; thus they were more likely to receive information regarding the importance of routine check-ups, maternal nutrition, delivery complications and the risk of HRFB. On the other hand, women who did not have ANC follow-ups for their recent children were more likely to engage in risky reproductive behaviours. Family planning for extending the time between births is discussed during postnatal care counselling. As a result, decreased ANC seeking during pregnancy may play a role in HRFB.
Another important finding from this study is that women who had a history of caesarean delivery were less likely to have high-risk fertility behaviour. Other studies on the association between type of delivery and subsequent fertility [30,31] have reported similar results. The reason may be that women who have their babies by caesarean section are less likely to have more children than women who give birth vaginally, and caesarean delivery is also followed by a higher likelihood of actively using contraception after that birth, which may lead to lower odds of HRFB.
This study revealed that HRFB was more likely to occur among women who had never used contraception compared to those who had, which is in line with previous studies conducted elsewhere [32,33]. One of the goals of contraception is to increase the birth interval and reduce unplanned pregnancies. Women who had unwanted pregnancies were more likely to engage in high-risk reproductive behaviour than those whose previous pregnancies were desired. This may be the result of not using contraceptive methods by the women who experienced unwanted pregnancies. This result also corroborates the findings of a study conducted in Nigeria [25]. Moreover, religious belief also affected maternal HRFB. Our study revealed that Muslim women had increased odds of HRFB compared with women of other religions.
This finding is in line with an Indian study [34], in which the author argued that Muslim women are less willing to use contraceptive methods and family planning and prefer temporary methods over sterilisation; these could be plausible reasons why Muslim women in Bangladesh were at higher risk of HRFB. Evidence suggests that mothers aged 35-49 have higher odds of HRFB than their counterparts. A similar result was found in another study, which concluded that pregnancy at a later age is associated with significant increases in maternal risks and complications [35,36], which lead to adverse outcomes for both the mother and the child.
Furthermore, high-risk fertility behaviours were found to be more than twice as common among women in Rangpur, a northern region of Bangladesh, compared to women who live in Sylhet. This is probably due to the fact that women in remote locations may lag behind in utilizing reproductive health services, such as ANC, have poor family planning adoption rates related to religious beliefs and community attitudes, and have poor literacy levels. However, this inequity in utilizing reproductive health facilities among different regions of Bangladesh should be minimized to reduce the odds of HRFB. This analysis may lead to important inferences that may help to lower maternal high-risk fertility behaviour and can be useful and relevant in areas where HRFB is ubiquitous.
This study has both strengths and limitations. The study employed the recently published BDHS 2017-18 data, which provided a large, country-representative sample size, allowing the findings to be more generalisable. Moreover, the appropriate statistical techniques applied in the analysis can be used to identify probable factors and their relationships. However, the study has some limitations. For instance, because the data are cross-sectional, the outcome and predictor variables were collected at a single point in time; therefore, causality cannot be established. In addition, some important factors, such as dietary factors, physical activity and maternal comorbidity histories, were not taken into consideration because they were unavailable in the original dataset, although these factors may be associated with HRFB.
--- Conclusions
This study highlighted the pervasiveness of maternal high-risk fertility behaviour among Bangladeshi women of reproductive age. Several significant protective factors, such as maternal and partners' higher education, were associated with lower HRFB. In contrast, being Muslim, being aged 35 to 49 years, having a normal childbirth, having unwanted pregnancies, and not using any birth control tools may increase women's risk of HRFB. Thus, the findings of the study identify the need to develop an intervention, especially focusing on Bangladeshi Muslim women aged 35-49 years, to reduce high-risk fertility behaviour. Furthermore, the government of Bangladesh and stakeholders (e.g., NGOs, INGOs) should work jointly to prevent early marriage of women and to enhance awareness and proper education to reduce high-risk fertility behaviour.
--- Availability of data and materials
This study used publicly available Demographic and Health Surveys Program datasets from Bangladesh, which can be freely obtained from https://dhsprogram.com/. As third-party users, we do not have permission to share the data publicly on any platform.
--- Authors' contributions
MHH and MAR conceptualised the research idea and study design. MAR explored the data and performed the analysis under the guidance of MHH.
MHH, SK, HRH checked and validated the results. MHH, MAR, HOR drafted the manuscript with support from MHH. SK, HRH, SKC critically reviewed the manuscript for scientific coherence. MAR supervised the whole study. All authors read and approved the final manuscript.
--- Declarations
Ethics approval and consent to participate
The current study involved analyzing secondary data, which are publicly accessible at www.dhsprogram.com free of cost upon appropriate application. The ICF Institutional Review Board and the Ethical Review Board of the Ministry of Health approved the data collection and survey process; therefore, further ethical approval was not needed. The current study relied on publicly available data sources that had already been ethically approved for the primary investigations, so no additional ethical approval was required.
--- Consent for publication
Not applicable.
--- Competing interests
None of the authors declares any conflict of interest.
--- Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Background: We aimed to determine the factors that increase the risk of HRFB in Bangladeshi women of reproductive age (15-49 years). Methods: The study utilised the latest Bangladesh Demographic and Health Survey (BDHS) 2017-18 dataset. Pearson's chi-square test was performed to determine the relationships between the outcome and the independent variables, while multivariate logistic regression analysis was used to identify the potential determinants associated with HRFB. Results: Overall, 67.7% of women had HRFB; among them, 45.6% were at single risk and 22.1% were at multiple high risk. Women's age (35-49 years: AOR = 6.42, 95% CI 3.95-10.42), being Muslim (AOR = 5.52, 95% CI 2.25-13.52), having a normal childbirth (AOR = 1.47, 95% CI 1.22-1.69), having an unwanted pregnancy (AOR = 10.79, 95% CI 5.67-18.64) and not using any contraceptive methods (AOR = 1.37, 95% CI 1.24-1.81) were significantly associated with an increased risk of HRFB. Alternatively, women's and their partners' higher education was associated with reduced HRFB. Conclusions: A significant proportion of Bangladeshi women had high-risk fertility behaviour, which is quite alarming. Therefore, public health policy makers in Bangladesh should emphasise this issue and design appropriate interventions to reduce maternal HRFB.
Introduction
Over the last two decades, online practices of death, mourning, and memorialization have grown into a vibrant field of interest and research. Studying the intersection of death and digital media sheds light on novel commemorative practices, affective performances, and oscillations between personal and public spheres. As such, it contributes to our understanding of social practices of remembrance, underlined by the questions: Who is worthy of remembrance, and why and how are they remembered? A major site of online mourning and memorialization is the social networking site Facebook. The oscillations between personal and public spheres are integrated into Facebook's internal logic and infrastructure. As a multifunctional platform, Facebook brings together several distinct communication channels, also known as "subplatforms" (Navon & Noy, 2021). Facebook's three main subplatforms are Profiles, Groups, and Pages. Profiles are personal accounts that represent the user as an individual; Groups allow interaction among several users, usually around a specific shared interest; and Pages serve as public channels, allowing more unidirectional communication with broad audiences. The official aim of Pages is to serve businesses, communities, organizations, and public figures who seek to increase their digital presence and connect with audiences and fans. It is an essentially public subplatform that is visible to anyone on Facebook by default (as opposed to Profiles or Groups) and may have an unlimited number of followers. Pages' administrators (admins) manage the interaction with the followers and the content on the Page. Interestingly, users commonly employ Pages in a memorial capacity and create Pages to memorialize and publicize ordinary people. In this article, we look at memorial Pages that are dedicated to ordinary people who died in nonordinary circumstances (terror attacks, murder, suicide, etc.). Our data consist of 18 cases, all of whom are Israeli. The aim of this study is to examine the practices involved in creating and maintaining memorial Pages from the theoretical perspective of the social capital approach. We explore how creators of memorial Pages view the role of the Page, their motivations, and their relations with different audiences, such as strangers (Marwick & Ellison, 2012). We further analyze how admins interact with their network of followers and the various resources they accumulate through this process, from economic capital and practical support to solidarity and emotional support.
--- Literature review
--- Online mourning and memorialization
Online practices of remembrance and memorialization emerged in the mid-1990s, initially in the form of virtual cemeteries and private Web memorials (Roberts, 1999). This "first generation" of digital practices, as Walter (2015) terms it, "changed surprisingly little" compared to earlier offline practices (p. 10). It was only in the early to mid-2000s, with the rise of social media, that things significantly changed. Social media, and social network sites (SNSs) in particular, afford new means for grieving and commemorating, and influence the experience of death both online and offline. Brubaker et al.
(2013) identified three expansions of death and mourning that SNSs afford and facilitate: a spatial expansion in which physical barriers to participation are dissolved; a temporal expansion that refers to the immediacy of information enabled by SNSs; and a social expansion that results in a context collapse and the inclusion of the deceased within the social space of mourning (see also Marwick & Ellison, 2012). The intense social nature of social media is shaped by its inherent features of sharing, performance, and interaction. Sharing can be appreciated through different logics, while on SNSs the primary logic is communicative and not distributive (when someone shares her feelings or beliefs, she is not left with less). On SNSs, sharing is telling, where "fuzzy objects of sharing" are nonetheless associated with giving and caring (John, 2013). These sharing-telling practices involve multiple modes and strategies of self-presentation, identity negotiation, and performance (Papacharissi, 2010), which inevitably lead to intensified engagement and participation. This is true also of mourning and memorialization contexts, where engagement and participation can be viewed as a demonstration of communality and social support (Döveling, 2015; Walter, 2015). Alternatively, engagement and participation may also indicate social pressure and competition over who has the most significant contributions or the right to portray the deceased (Carroll & Landry, 2010; Marwick & Ellison, 2012; Walter, 2015). Social dynamics and engagements are complex, determined in part by the specific media (sub)platform and its affordances. Studies of mourning and memorialization examine various social media platforms, including MySpace (Carroll & Landry, 2010), YouTube (Harju, 2015), Instagram (Gibbs et al., 2015), Twitter (Cesare & Branstad, 2018), and TikTok (Eriksson Krutrök, 2021). However, the most dominant platform, both in terms of research and of user practices, is Facebook. The vast number of dead users, along with the various practices and rituals that living users perform, qualifies Facebook as a "current center of gravity" for the discussion of online mourning and memorialization (Moreman & Lewis, 2014, p. 4).
--- Mourning and memorialization on Facebook
Studies of mourning and memorialization on Facebook point to multiple practices and uses. Such practices may chronologically commence with death announcements and the posting of information about memorial services (Babis, 2021; Carroll & Landry, 2010, respectively), and continue with subsequent and more continuous practices, such as visiting the deceased's Profile and posting messages as a way to commemorate, express emotion, and remember special occasions (Pennington, 2013; Moyer & Enck, 2020, respectively). An additional common practice, which lies at the heart of our current study, is the creation of memorial Pages. These Pages may be dedicated to individual subjects, groups of people, animals, and things such as places (Forman et al., 2012; Kern et al., 2013). They enable "para-social copresence" and continuing bonds (Irwin, 2015), as well as public presence of the deceased and engagement with strangers (Kern et al., 2013; Kern & Gil-Egui, 2017). Rossetto et al. (2015) point to three themes or functions that mourning and memorialization on Facebook possess: news dissemination, preservation, and community. News dissemination describes sharing or learning information about a death through Facebook.
Preservation refers to the continued presence of the deceased and maintaining communication and connection with them. Lastly, the community theme refers to the connection and communication with people other than the dead. It includes connecting with other mourners, seeking and offering social support, and expressing one's feelings and thoughts, while at the same time facing a challenge to privacy. One way to face this privacy challenge and negotiate boundaries is through Facebook subplatforms. In a study of mourning and memorialization practices across Facebook's subplatforms, Navon and Noy (2021) outline a spectrum that ranges from private to public and, accordingly, from a more personal sphere of mourning to a larger and more institutional sphere of memorialization. Located on one side of the spectrum, Profiles are characterized by expressive and emotive communication, turning with time into personal mourning logs on the bereaved's Profile and online mourning guestbooks on the deceased's Profile. Located on the spectrum's other side, Pages possess a distinctly public quality and serve as online memorialization centers where the deceased becomes an icon and is portrayed in one dominant way. Finally, Groups are positioned in between, possess a hybrid nature, and combine self-expression and emotional sharing along with more public aspects. This results in Groups affording the revival of once-prevalent bereaved communities (Navon & Noy, 2021). The triadic spectrum we outlined corresponds with the three levels of social death that Refslund and Gotved (2015) have put together. First, the individual level focuses on the personal loss (Profiles); second, the community level revolves around an extended network: relatives, neighbors, colleagues, and other acquaintances of the deceased (Groups); and third, the cultural or public level (Pages) refers to the death of people not personally known. According to the authors, this level "generates memorial practices that relate to the way of death (e.g., murder and traffic) or how they were appreciated in life (e.g., celebrities)" (p. 5). Similarly, Walter (2015) suggests the concept of public mourning, pertaining either to high-status figures or to ordinary people who die in tragic circumstances. In this study, we examine the latter. We look at public memorial Pages created in memory of ordinary people that nonetheless generate public mourning. Analyzing Facebook memorial Pages, Marwick and Ellison (2012) discuss the publicizing of the deceased in terms of impression management strategies and conflicts among users. They focus on context collapse, negotiation of visibility, and the four characteristics of social media: persistence, replicability, scalability, and searchability (see boyd, 2010). They conclude with a recommendation for future research that will employ qualitative methods to explore how creators of memorial Pages view the role of the Page, their motivations, and their view of different audiences, such as strangers (p. 398). Our current research does precisely that and seeks to provide answers to these questions. However, while Marwick and Ellison (2012; also Sabra, 2017) frame their investigation in terms of context collapse, we suggest viewing Facebook memorial Pages via the social capital approach. We focus on admins' (Page creators') practices, which result in the accumulation of social capital.
--- Social capital and social media Defining the term social capital is challenging, in part because it has received multiple definitions during the last few decades. Kritsotakis and Gamarnikow (2004) observe that "defining social capital is rather problematic" (p. 43); Williams (2006) adds that it is a "contentious and slippery term" (p. 594), and Xu et al. (2021) conclude that it is an "encompassing yet elusive construct" (p. 362). One of the influential formulations of social capital was proposed by Bourdieu (1986), as part of his conceptualization of different types of capital and related systems of exchange. For Bourdieu (1986), "the distribution of the different types and subtypes of capital at a given moment in time represents the immanent structure of the social world" (p. 242). In line with his practice-centered approach and his dialectic view of the structure-agency relations (like Giddens, 1984), Bourdieu puts much emphasis on the role of constant social interaction (micro) in maintaining social structures (macro). He accordingly sees social capital as "potential resources which are linked to possession of a durable network of more or less institutionalized relationships" (Bourdieu, 1986, p. 248). Another oft-cited and productive definition emerges from Putnam's (2000) view of social networks from the perspective of political science and civic engagement. Putnam (2000) draws a distinction between two main types of social capital: "bridging" and "bonding." The first describes broader, more diverse, and inclusive relations, which are often more tentative, while the latter concerns more exclusive relations, which are less diverse and more cohesive. The two concepts echo Granovetter's famous (1973) observation concerning "weak ties" versus "strong ties" (which, again, are wittingly or not, goal-oriented). In a related manner, Williams (2006) notes that strong ties supply a "getting by" type of network (e.g., family and close friends), while weak ties supply a "getting ahead" social environment (e.g., distant acquaintances, social movements). He further suggests that different types of social networks can predict different types of social capital. More recently, Xu et al. (2021) conclude that "social capital consists of both social networks and resources derived from social networks" (p. 363, emphasis in original). Hence, we now turn from describing network characteristics to describing measurements of their outcomes or resources. Williams (2006) operationalizes measures of assessing social capital outcomes by addressing the two types of social capital Putnam (2000) discerned. As per bridging social capital, he developed a questionnaire based on several criteria, one of which is contact with a broad range of people; as per bonding social capital, he builds on several dimensions, including emotional support and the ability to mobilize solidarity. Xu et al. (2021) found that network features, specifically tie strength and communication diversity, result in different levels of emotional, practical, and informational support. Theories of social capital have been studied extensively in relation to social media, so much so that they are recognized as a leading area of interest in the field (Stoycheff et al., 2017). One stream of scholarship has explored the effect of social media affordances on social capital outcomes. 
The term "socio-technical capital" (Resnick, 2002), captures these relations, arguing that individual users enjoy a greater ability to accrue social capital in the age of social media, as it becomes easier to maintain and create new connections. Indeed, studies have found a positive association between the usage of SNSs and perceived access to social capital resources (Ellison & Vitak, 2015). Ellison and Vitak (2015) observe that recent studies further examined the "specific kinds of activities that are predictive of social capital" (p. 210, emphasis added) and not only general measures of use. They point to two main factors that appear to be most significant to social capital gain: the size and composition of the network and how users communicate with that network, that is, patterns of interaction. They stress that, "social capital is derived from interactions with one's network" (p. 210, emphasis in original). In this article, we view social capital as potential resources that are produced through interactions in a structured social network. These resources may possess bonding or bridging social capital qualities, including emotional, practical, and informational support (Bourdieu, 1986;Putnam 2000;Xu et al., 2021). Rather than looking separately at resources or network characteristics, our focus is on social capital processes, that is the relations between the social network and the outcomes or resources that emerge from it. While some existing literature examines these relations and processes, we add a third element, namely the social network platform. This concerns how specific affordances enable and motivate social capital processes, and how users utilize affordances to position themselves and others in ways that encourage accumulation. Positioning is constitutive of social capital processes (Basu et al., 2017), and in line with our platform-centered approach, we take it to include users' practices of discursive positioning and the positioning that the platform itself performs. Within this framework we look at Pages' affordances (Page category, About, followers' count, etc.) as well as users' practices and activities, discourse, and patterns of interaction. Together, the findings provide fruitful insights into social capital processes, memorialization practices, and public remembrance on SNSs. --- Method --- Sampling The research sample includes 18 Facebook Pages, which we observed during three years. 1 All the Pages were created in memory of ordinary people who died in nonordinary circumstances. Typical examples include a woman who was murdered by her male partner, a high-school student who committed suicide as a result of cyberbullying, a female backpacker who died in a bus accident during a trip to Nepal, victims of terrorist attacks, and fallen soldiers (males and females). Table 1 presents key details of all the cases, including the cause of death. To stress, none of these commemorated individuals were public figures or known publicly. The cases include 12 men and 7 women (one case refers to the death of female and male spouses), ranging in age between 15 and 55, with an average of 25.6 (Table 2). The Pages were created between January 2011 and October 2016, and are all in Hebrew. All the translations are ours. Data collection procedures employed Facebook's search bar (Marwick & Ellison, 2012;Navon & Noy, 2021). We looked for keywords and phrases related to death and memorialization while using Facebook's filter to specifically reach Pages (and not Groups or Profiles). 
Because the display of Facebook's search results is managed by unclear criteria (alphabetical order, date of creation, followers count, etc.; see Kern et al., 2013; Kern & Gil-Egui, 2017; Navon & Noy, 2021), we conducted multiple searches, which led us to different lists of Pages. To further offset Facebook's unknown algorithmic preferences, we did not always sample the Pages from the top of the result list. After collecting the data, we selected Pages for analysis based on the "intensity sampling" method. Intensity sampling focuses on the relevance of specific cases, their expected contribution to the research, and the extent to which they offer insights into our field of research (Suri, 2011). In order to strengthen the data's heterogeneity, we selected diverse cases in terms of age, gender, cause of death, socio-cultural background, etc. As indicated earlier, Pages are visible and available to anyone on Facebook, which made the work of accessing all the contents, posts, and comments on each Page relatively easy. Since we are particularly interested in the admins' roles and communicative practices, our analysis focuses on posts and not on comments. Still, the comments provided complementary material that enabled a better understanding of the larger picture, including the dynamics among users and between the users and the admins. --- Analysis Between June 2018 and March 2021, following the data collection phase, we conducted ethnographic fieldwork based on the principles of digital ethnography (Varis, 2016). Siding with Varis, we see ethnography not primarily as a data collection practice and not so much as a set of methods and techniques, but as an approach that "is methodologically flexible and adaptive: it does not confine itself to following specific procedures, but rather remains open to issues arising from the field" (Varis, 2016, p. 61). As such, we do not employ a pre-structured qualitative analysis procedure (such as content analysis), but address discursive concepts such as positioning and participatory dynamics (Giaxoglou, 2015; Harré, 2015; Navon & Noy, 2021). Following Romakkaniemi et al. (2021), we link the frames of positioning and social capital, taking positioning as both a theoretical and methodological framework (p. 5). We identified and analyzed positioning levels (for instance, positioning of the deceased, of the Page, of the admins, and of the followers) and positioning strategies, which we refer to as techno-discursive practices (from choosing the most beneficial Page category to describing the deceased and the death story in a collective/heroic manner). We were sensitive to relations between actors as established through positioning, keeping in mind that the "way individuals are positioned in social structure can be an asset in itself, and social capital is conceptualized as that asset" (Basu et al., 2017, p. 782). Examining positions and positioning, we highlighted different roles, different acts of participation, and levels of engagement and commitment, which together amount to participatory dynamics between the network members (Navon & Noy, 2021). We applied these discursive concepts more closely to several dozen posts from each Page, from which the examples below are taken. In terms of research ethics, we now turn to address the processes of accessing, analyzing, and representing data from these memorial Pages. In a scoping review of 40 empirical papers, Myles et al. (2019) aimed to situate ethics in online mourning research.
They suggest that "terrain accessibility constitutes a determining factor" (p. 293) in ethical decisionmaking, including data anonymization. They further refer to the difference between a Facebook Page and a Facebook Profile in terms of "the nature of the online setting" (p. 292). Yet, they emphasize that ethical decisions should not rely on technological arguments and affordances, but rather on an actual ethical reflection. In line with Markham (2015), they invite researchers to think contextually about ethics and conclude that ethical judgments could only be made in context. In the context of the current study, we believe that the activity on the memorial Pages in our sample possesses a distinct public quality. Nevertheless, to ensure anonymity, we changed the names of the deceased/the memorial Pages, and since we translated all the quotes from Hebrew, they could not be located via search. Varis (2016) highlights the difference between early research of technologically mediated communication that centered around "things" or "texts" (collected randomly, detached from their social context) and later research that examined "actions" and situated practices. This shift builds on a new understanding of discourse as a socially contextualized activity. In this perspective, context and contextualization are critical issues that "should be investigated rather than assumed" (p. 57). Varis suggests two contextual layers that digital ethnographies of communication need to investigate. The first is media affordances and the second is online-offline dynamics. We implemented these two layers as part of our analysis. As for the first layer, we pursued an ethnography of affordances, identifying and investigating different affordances that admins use when creating and maintaining a Page, and how features such as Likes, Shares, and Following shape discourse and dynamics on the Page. In line with Klastrup's (2015) and Kern and Gil-Egui's (2017) study of Facebook memorial Pages, we also examined the About section of each Page and analyzed the textual data therein. The About section serves as the Page's visiting card. It displays its basic information, in part provided by Facebook and in part by the admins. This includes the current number of people who like and follow the Page, its category, contact info, and a short introductory text. Up to May 2020, during our data mining stage, the About section also included the Page creation date and, in most cases, a "Team Members" title which shows the admins. In the updated version, this information was removed. A new section called "Page Transparency" was added "in an effort to increase accountability and transparency of Pages" (Facebook Help Center, n.d.). Yet, the information provided in this new section is actually more limited when compared to the previous version. A "Page History" title presents the creation date; however, a new title "People Who Manage This Page" does not reveal names and Profiles as it used to in the past, but rather only the primary country location of the admins. This update reinforces previous observations about the vagueness and anonymity of Pages' administrators (Gro€mping & Sinpeng, 2018;Kern et al., 2013;Kern & Gil-Egui, 2017;Poell et al., 2016), and puts into question Facebook's declared efforts to increase transparency. Specifically examining memorial Pages, Marwick and Ellison (2012) likewise observed the difficulty "to ascertain who created the page and their motivations for doing so" (p. 388). 
In the current study, we try to answer these questions based on the Team Members data we collected before the Facebook update, a textual analysis of the About texts, and the ethnography we conducted. As for the second layer, online-offline dynamics indeed turned out to be an important part of our analysis. The memorial Pages in our sample are all created in memory of actual people and the life they lived offline or their offline death story, as opposed to studies of memorial Pages that included Pages dedicated to fictional characters, places, or things (Forman et al., 2012;Kern et al., 2013). Moreover, the activity on these memorial Pages involves production, promotion, and documentation of a rich variety of offline events, as will be discussed shortly in the findings section. Below we discuss the Pages' names, their categories, About texts, admins, and followers' count along with the analysis of admins-followers interaction and the activity on the Pages. --- Findings and discussion --- Page name The first finding we discuss corresponds with the first step of creating a Page: supplying the Page name. Typically (72% of the cases), the Page name consists of two verbal elements. The first element concerns one of the following phrases: "In memory of..." or "Remembering...," which is formative because it designates the meaning of the Page as a memorial site. The second element is the deceased's full name, which appears in all the cases. Supplying the deceased's full name contributes to a more formal and respectful tone. It accords well with cases in which the dead have served in the military or in a police unit, where their rank appears next to their name as part of the Page title ("In memory of ACOP Eytan Bar," or "In memory of Cpl. Hodaya Cohen"). In this vein, several titles include an English translation or the ending "the official Page of...," serving to establish a sense of formality, authority, and recognition of the Page and of the deceased. Supplying the deceased's full name may also suggest that Page creators do not expect or assume that all visitors know the deceased personally or beforehand. One way or another, a norm seems to be emerging in regard to naming memorial Pages, which is based on the evocation of one of the two phrases together with the deceased's full name. --- Page category: community, public figure, interest Right below the Page name, the Page category appears in grey and in smaller letters. A Page category "describes what type of business, organization or topic the Page represents" (Facebook help text). When creating a Page, Facebook affordances allow users to type freely in the Page category text box while receiving "help" from Facebook in the shape of prompting or introducing to the user existing categories according to the letters she types. These potential categories may seem like helpful suggestions, but in fact, the user must choose one of these pre-existing options. In other words, defining a category is a necessary step in creating a Page on Facebook, which can be done only according to a pregiven list that the platform provides. The list of available categories that Facebook offers (as of July 2021) comprises over 1,500 possibilities, which include, for example, 12 types of Tour Agencies, 15 subcategories of Pet Services, 28 different types of Chinese Restaurants, and a similar number of Beverage shops (from Sake Bar to Tiki Bar). 
However, none of the categories or subcategories offered in the detailed list relates to memorialization or to related terms (commemoration, remembrance, death, dying, mourning, grief, etc.). This raises interesting questions: How carefully does Facebook select, form, and shape Pages' categories and uses? How do users act within this framework of affordances? And more practically, which categories do users employ in memorial Pages when such elementary categories are missing altogether? Our data show that admins most commonly resort to three pre-existing categories: Community, Public Figure, and Interest; the Interest category includes Sports, Visual Arts, and the like. In the context of memorial Pages, however, the meaning of these categories is negotiated. They do not reflect the meaning Facebook provides, but instead the interpretations that users/admins ascribe in line with their goals: to engage interest in the Page, to create a large community that recognizes and remembers the deceased, and to turn her/him into a public figure. This finding demonstrates an interactional view of affordances as a relationship and negotiation between the interface and the user, rather than merely a property, or a feature, of the interface itself-an "entanglement of policy and practice," in the words of Arnold et al. (2018, p. 52). Users take the freedom to interpret Facebook categories creatively in order to contend with restrictions put forth by the platform. They might not have absolute freedom to choose the Page category, but they enjoy the freedom, which they exercise, to choose how to interpret and use it. This finding also reveals an emerging norm concerning the socially accepted way of naming and categorizing memorial Pages. This norm hints at admins' underlying motivations for creating and maintaining a memorial Page (more on this below). --- Admins' concealed identity Four cases in our sample (22%) appeared with users (linked Profiles) as Team Members. In three of these four cases, the surname of the admin user was identical to that of the deceased, yet the specific kinship was unspecified. This relation was not revealed in the About text either, but a review of the posts across the Pages tells us that two are mothers of the deceased, one is a sister, and one is a cousin (hence the different surname). In two other cases (11%), the introductory text in the About section refers to the Page admins, though in an unspecified way: "The Page is moderated by the loving family," and "The Page is moderated by friends from Na'ariah [city] and by Nurit's brothers." Finally, in three additional cases (16.7%), we were able to deduce who runs the Page by looking at the posts over time, as the information was not provided in the About section (neither as Team Members nor in the introductory text). In one case, the admin is the deceased's daughter who often signs posts as "daddy's girl"; in a second case, it is the sister who frequently mentions her name, uploads photos of herself, and shares posts from her personal Profile on the Page. In the third case, family pictures appear frequently, and most of the posts end with the designation "the family," but no further information is provided about the specific admins. Overall, in all the nine cases we detailed above (50% of our sample), the admins are self-identified as relatives of the deceased, and in the rest of the cases (50%), it is unclear who created or manages the Page. In other words, in most cases it is difficult to determine who the admins are (again, cf. Marwick and Ellison, 2012).
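The shares reported in this subsection are simple proportions over the 18 sampled Pages. As a purely illustrative aid, the short Python sketch below tallies a hypothetical coding of admin identifiability; the category labels and per-Page codes are ours and merely reproduce the counts given in the text (4 + 2 + 3 + 9 = 18), not data released with the study.

```python
# Illustrative arithmetic only: proportions of admin-identifiability codes
# across the 18 sampled Pages. The per-Page codes below are hypothetical.
from collections import Counter

codes = (
    ["linked Team Member Profile"] * 4   # admin appears as a linked Profile
    + ["described in About text"] * 2    # e.g., "moderated by the loving family"
    + ["deduced from posts"] * 3         # e.g., posts signed "daddy's girl"
    + ["unidentifiable"] * 9             # no usable cue on the Page
)

counts = Counter(codes)
total = len(codes)
for label, n in counts.items():
    # Prints, e.g., "linked Team Member Profile: 4/18 = 22.2%"
    print(f"{label}: {n}/{total} = {100 * n / total:.1f}%")
```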
--- Admins' motivations and collective discursive positioning of the deceased In more than half of the cases (61%), admins describe in the About section the reason(s) for which they have created the Page. The accounts they supply share similar motifs: "We opened this Page to keep the spirit of... alive," "This Page was created for the memory of...," and "This Page is in his memory and to inspire his legacy." In other words, the goal of the Page, as stated by the admins, is to have the deceased remembered publicly. More precisely, it is to make the deceased remembered and recognized by as many people as possible, beyond the circles of relatives and acquaintances who knew her when she was alive. In most of the cases (83%), admins use the About text as a space to write about the deceased and provide basic information that should presumably be known to acquaintances. For example, age at death, date and cause of death, a list of family members who are left behind, or a short biography. In line with the formal register, these brief biographies are often written in an informative and factual manner. Such texts typically supply a brief overview of the deceased's life story, highlighting, for instance, exemplary military service and a heroic death. The death story is charged with a deeper meaning relating to honor, sacrifice, patriotism, and recognition, which aims at transforming it from a personal death story to one that is collective and anchored in the public sphere. Hence the detailing of the (large) number of people who attended the funeral. Stressing the deceased's contribution to the state or society adds both moral and collective values to the act of remembrance, to those publicly and collectively engaging it, and thereby also to the memorial Page itself. This finding echoes Harju's (2015) observation of "a stance of moral superiority" (p. 130) that users construct in relation to public mourning of a celebrity on YouTube (Steve Jobs). The question at heart is a moral and sociocultural one, namely: who is worthy of public remembrance? According to Harré (2015), moral questions are integral to discursive positioning. Positioning theory claims that every thought, expression, and social action in and among groups "take place within shared systems of belief about moral standards," and about the distribution of roles, rights, and duties (p. 266). Similarly, Giaxoglou (2015) describes affective positioning as "semiotic and discursive practices whereby selves are located as participants... producing one another in terms of roles" (p. 56). Our findings point to heroic and sacrificial discursive positioning in all the cases in which the deceased served in the military or the police. For example: "Her death saved many lives," and "...Taking the shot in his own body, Yoni prevented multiple deaths." These and similar texts portray the ultimate sacrifice paid by the deceased, evoking a sense of patriotic gratitude (Noy, 2015). In one case, in a Page dedicated to the memory of Shlomo Levi, the About text opens with this brief introduction: "Gal Levi-a son, brother, friend, warrior." Here the discursive positioning reflects a scale that ranges from the personal, through the familial (first "son," then "brother"), to the social and the institutional. In another case, even though the deceased's cause of death was suicide and he did not fall in the line of duty, his rank plays a salient role.
The About text says: "This Page intends to commemorate the legacy of the officer, the policeman, and the beloved person, Major General Eytan Bar." The admin of the Page is self-identified as the deceased's daughter, who regularly signs her posts with the words "daddy's girl," yet the focus lies with his public role and contribution. The goal is to form his memory as a respectable individual who has served the country and society well. Collective and often heroic discursive positioning also appears when the deceased was neither a soldier nor the holder of a formal institutional role. In these cases as well, admins highlight the social importance of the deceased or the death story and the collective values it embodies. Such is the case in the Page in memory of Talya Nadav. Talya died in a car accident abroad caused by two Israelis driving under the influence, who avoided prosecution. The admin stresses the relevance of the tragic death story to the general public: We need your support: after a year and ten months they [the perpetrators] are still walking free... They deserted Talya who died and fled Mexico. We begin a struggle to bring them to justice. Enter the link and donate for "Justice for Talya Nadav." For us. For everyone. Because we all travel abroad. Us, our children, our friends. We might all find ourselves in a similar situation. [The Page Remembering Talya Nadav with a smile, March 9, 2017] This example demonstrates how the admin discursively positions the deceased and the death story as a matter of collective interest. She builds on the shared value of justice to mobilize social engagement and support in the form of crowdfunding. The deceased becomes a symbol, yet this is achieved not by appealing to themes associated with national sacrifice and gratitude, as in the case of the soldiers, but by appealing to a sense of social responsibility. These dynamics resonate with Walter's (2015) observation that in "contemporary culture's celebration of vulnerability... victims are now as or more likely to be commemorated as heroes" (p. 13). Furthermore, even when death is not framed in terms of victimhood, admins still position the deceased as a valuable collective symbol. They do so by portraying her special virtues and unique character. In the case of Osnat Shemesh, a backpacker who died in a weather-related bus crash in Nepal, the admins state: "We've chosen to take 'the life according to Osnat' and turn it into legacy, into a will." Here, too, the deceased is elevated, as her life is presented in hindsight, as embodying shared values with which the audience can identify.
Themes concerning national sacrifice and collective gratitude are altogether absent, yet the deceased is framed as a collective symbol. Admins supply quotes by the deceased, which they frame as a motto or a legacy (a practice employed in the mourning of celebrities; see Harju, 2015, p. 137), and share stories about her life and highlight her virtues. These acts of discursive positioning serve to supply an account of why that specific person is worth remembering. --- The Page followers' count: admins' efforts to increase the network If someone is worth remembering, her/his memorial Page should be worth following. The followers count shows how many users are following a Page. In quantitative terms, this index measures circulation and exposure, i.e., the size of the network (Ellison & Vitak, 2015). In qualitative terms, the followers' index helps assess the Page's popularity and social impact, and the social capital that Page admins have come to possess. The average number of followers on the Pages we sampled is 13K, ranging from 1.2K to 40.3K. Efforts to gain followers, Likes, and Shares appear in all 18 cases, pursued through repeated and explicit requests by the admins ("Please share the Page with your friends. Thank you"). In one case, the admin offers a small token in the shape of bracelets to new followers: Enter the "Osnat's Butterflies" Facebook Page, Like the Page, and you can get free bracelets... [We] warmly ask that you "Like" the Page "Osnat's Butterflies," and request that the bracelets will be sent to you. [The Page Osnat Shemesh -The sun will never set, July 11, 2015] The butterfly bracelets are part of a social initiative propelled by this Page's admins for promoting good deeds and giving, in the spirit of "Pay it Forward." This initiative was established in memory of Osnat Shemesh, the backpacker who died in Nepal and who had a butterfly tattoo on her shoulder. Osnat's Butterflies Page has over 33K followers, and the social initiative it promotes has reached over 40 countries worldwide. The goal of the Page is to increase public awareness and support for the project by documenting and posting butterfly paintings and bracelets around the world, alongside moving stories about Osnat Shemesh (the deceased).
In this way, the Page accomplishes the goals of memorial Pages, as implied in the categories we discussed above: building a community, engaging interest, and turning the deceased into a public figure. The idea of publicizing the deceased is exemplified quite clearly also in the following post, taken from a Page in memory of Police Assistant Commissioner Eytan Bar. As indicated earlier, Bar ended his own life in the wake of an investigation he was under. The admin of the Page is his daughter ("daddy's girl"): You're all invited to Like and Share the Page in memory of our father, so no one will forget this angel who wholeheartedly gave his life to the country. [The Page In Memory of ACOP Eytan Bar, July 26, 2015] This short message performs the transition from a personal loss ("our father") to one that is collective and public ("gave his life to the country"). Such texts hint at the perceived connection that users make between online participatory acts, such as Follow, Like, and Share, and participatory acts of a cognitive or emotive nature like memory, recognition, and esteem. The admin uses the conjunction "so" ("so no one will forget") to form a causal connection between Like/Share and a public memory, between visible online engagement and cognitive or emotive implications. --- Offline Page activity: initiatives and events The Page Likes that we examined offer only a "glimpse" (Bernstein et al., 2013, p. 21) of how admins can evaluate the activity of their audience. In the case of Facebook memorial Pages, the activity extends beyond the online sphere and involves production, promotion, and documentation (uploading and posting pictures) of offline initiatives and events. The initiatives vary, reflecting the sociocultural differences found in our sample, which, in turn, reflect pre-existing memorial practices in Israeli society. Some cases have a more spiritual or religious orientation-inauguration of a Torah scroll and other Jewish rituals; in other cases, there are sporting events-races, soccer tournaments, mass Zumba workouts; and yet others take the shape of intellectual or educational activities-talks at schools, Mind Sports Olympiad, etc. We can therefore see that, in most cases, the admins promote multiple events and initiatives throughout the year, which keep the Page constantly active, rather than active only around a single, annual memorial event. The frequent activity on the Page serves to maintain an ongoing interaction with the network and to establish the Page as an appealing and vibrant site. Despite the differences in terms of content, we found several similarities in the ways admins communicate and promote these initiatives. First, the format: in 89% of the cases, event announcements (information about an event) take the shape of a photo-a professionally designed flyer-and not a textual post. The flyers are visually stylized and convey an impression of a formal invitation. The second similarity concerns addressivity, or who the posts address. These invitation posts are directed at the general public, calling for as many people as possible to join the community-turned-network and partake in its activities. Third, in most of these posts, similar keywords are used, evoking themes that concern respect, recognition, and togetherness. Consider these examples: We invite you all to come, watch and participate in the heritage of our father. It is important for us that a large crowd will show up, so that it will be respectable.
The tournament will take place in Modi'in, and silicone bracelets will be sold for 10 shekels [3 USD] with the inscription "Love thy friend as thyself -in the spirit of Eytan Bar's path." We will donate the money to the same places that our father used to support. We are looking forward to seeing you. Spread the word! [The Page In Memory of ACOP Eytan Bar, June 12, 2017] Both examples include the words "honor" and "respect" (which in Hebrew are the same word, kavod). The notion of respect is significant, and the crowd plays an important role in its amassment. Indeed, the presence of a large crowd is the very mechanism through which respect and honor are generated and publicly assessed. Hence the address is directed at "the general public" and "you all." Moreover, the second example ends with the directive "Spread the word!," explicitly seeking to reach beyond the Page followers. The admin makes use of the network (followers) alongside the platform affordances in order to reach as many people as possible, to generate large attendance, and to amass respect. As part of the production of multiple memorial events and initiatives, admins often address the followers with various requests for resources and participatory actions. These actions range from physical attendance, through volunteering and contributing one's skills and knowledge (video editing or teaching Zumba), to purchasing memorial merchandise and donating money. Requests for resources include, as we can see, economic capital (Bourdieu, 1986), but also practical support, which is appreciated as a form of social capital resources (Xu et al., 2021). The extensive memorial activity, and the involvement or harnessing of wide crowds (representing "the public") in its production, fulfill the three goals of memorial Pages we detailed earlier: Community, Interest, and Public Figure. These three common categories of memorial Pages are interrelated and can be seen as reflecting three aspects, or stages, of a single process in which users create and maintain memorial Pages that come to serve as mechanisms for the accumulation of social capital resources. This process entails the assumption that at stake here is a collective interest, which results in attempts at creating a community and establishing the deceased as a public figure. In fact, turning the deceased into a public figure builds on the size of the community and the degree of its members' engagement and interest. The ongoing activity on memorial Pages, including the positioning of the deceased and the various online and offline events, is put in the service of the same three goals or stages in this process: building a community (with high levels of involvement and commitment), engaging interest, and ultimately positioning the deceased as a matter of public interest, in other words, as a public figure worth remembering (Figure 1). --- Memorial merchandise: economic and social support The second example above draws attention to another widespread practice we observe in our data, namely selling merchandise in memory of the deceased (in this case silicone bracelets). This practice appears in 61% of the cases in our sample, and in some cases several types of merchandise are sold through the Page. The examples vary: T-shirts, baseball hats, bumper stickers, memorial candles, and recently also face masks (due to the COVID-19 pandemic).
All the products are imprinted with the full name of the deceased alongside an image, a slogan, or a quote that is made to be associated with her/him. Wearing this memorial merchandise means carrying the deceased's memory offline in an embodied fashion, continuing and honoring his legacy. As Harju (2015) puts it, apropos her discussion of celebrity commemoration on YouTube, "materiality anchors meanings" (p. 136). This finding is significant because branded merchandise is generally associated with celebrities and not with ordinary people. Designing and selling merchandise in memory of an individual, therefore, conveys a message that he/she was a famous person or should be famous posthumously. Furthermore, since nearly all the merchandise sold is wearable, it promotes offline public display of the deceased and enhances her status as a public figure. This closely relates to our earlier finding about the frequent employment of the Public Figure category and confirms our argument about the underlying motives and goals of admins of memorial Pages. The admins use the memorial Page to promote, distribute, and sell this merchandise, and in line with the moral discursive positioning on the Page, they add a moral value to those products and to the act of buying and using them. Consider these two examples: "Wearing the bracelet commits the wearer to maintain the values you [the deceased] represent and in this way to become a better person," and "10 Shekels for your contribution and involvement in the Observatory project in memory of Ofek. So friends, share and get yourself a new bracelet for a worthy cause." 2 The purchase of memorial merchandise emerges as a value-laden moral action because (1) the deceased is consistently portrayed as a special person whose story carries social meaning and significance and bears moral value; (2) the money that is collected is directed to worthy and charitable causes, such as donations; and (3) participatory actions such as buying memorial merchandise support the (often bereaved) admins. The support that admins receive is twofold: it is financial (economic capital, Bourdieu, 1986), but also social and emotional (social capital, Williams, 2006). The act of purchasing memorial merchandise reflects both interest and involvement, and enhances the sense of recognition with regard to the social significance of the deceased. The admins are well aware of these meanings and, in response, express their gratitude readily and frequently, as we show below. --- Expressing gratitude: from followers to partners Alongside the multiple repeated requests and invitations that Page admins direct at the Page followers, they also make sure to thank them devotedly. In doing so, the admins' tone is rather informal, personal, friendly, and enthused. They routinely express gratitude and show their appreciation to followers for their engagement. Every action counts. From Like and Share, through money donations, to physically attending events-admins show that no activity goes unnoticed. They highlight the importance of these actions as not merely helpful, but truly vital for the memorial Page and its moral goal. Followers thus become an integral part of the Page and its activity, or in other words, they are repositioned as partners. Indeed, sometimes admins note this explicitly: "We are grateful for having such partners as yourselves."
At stake here is a significant "status promotion" for the followers, which is pursued vis-à-vis Facebook's affordances and hierarchical terminology: admins, who manage, approve, curate, edit and produce content, and followers, who consume it. By symbolically "upgrading" the followers to the status of partners, admins imbue the followers with a sense of importance and enhance their commitment and engagement with the Page. In this way, they encourage the followers to contribute more: more Likes, content, resources, and engagement. The following example nicely captures this circle of encouragement-engagement, here in relation to the inauguration of a Torah scroll. Wow, how exciting.... Thanks to you we achieved the goal!!!... with every passing day, the hug we received grew greater and greater. Thanks to you... to your shares... to your devotion... more than 100,000 shekels were raised in the last couple of days for the commemoration of Ofek!! [The Page In the Memory of Ofek Noy H.Y.D, September 8, 2016] Communication here seems spontaneous and informal, and while the accomplishment is framed as mutually achieved ("we achieved the goal!!!"), gratitude is clearly expressed and extended to the followers ("Thanks to you" and "to your shares"). Followers' engagement is described as a "hug," a nonverbal act that indexes affection, support, and closeness. Thus, admins discursively position the followers, who are otherwise strangers, as helpful in extending love and support. This finding resonates with Stage and Hougaard's (2018) discussion of "caring crowds" (p. 79), in which love and care are not only expressed through words but also through "material practices" (p. 94). In the cases they observe-two public Facebook Groups that were created for two children diagnosed with cancer-crowdfunding was a dominant practice that was motivated and energized by sharing the personal stories of illness and suffering, alongside gratitude expressions by the parents who run these Groups. In the following example, the mother of Osnat Shemesh (mentioned earlier) illustrates how admins direct attention to the followers. When you experience the most excruciating pain possible, you hold on to any bit of light, like a wounded animal. It seems like this is the only way to survive. In the last couple of months, family and friends have completely embraced us, and I will forever owe them my life and my sanity. I want now to talk about the people we don't know; about bits of light that radiate from people who never knew us or Osnat. These people, who send us comforting messages, strangers who took time off their everyday routine... to all these beautiful souls... we wish to say thank you. Thanks for seeing us. Thanks for taking time off for us. Thanks for helping us regain our faith in goodness. [The Page Osnat Shemesh-The sun will never set, December 25, 2014] Emphasizing the pain this admin is experiencing ("most excruciating pain possible") enhances the moral value of the followers' benevolent participatory actions. The admin, a bereaved mother, mentions and thanks family and friends, then directs special attention to other people. She pursues this through the meta-discursive statement "I want now to talk about...," with which she signals a thematic shift to what will be the focus of her message, namely those who deserve the utmost gratitude.
These are "strangers" -users with whom she is not familiar, who showed showed interest and "took time off," and who served as an audience ("Thanks for seeing us") and a network. The admin describes the visitors and the followers of the Page as radiating "bits of light" and as "beautiful souls" who help restore faith in goodness. Posts of this type (re)confirm the moral value that engaging the Page carries, framing it as a socially valued action. This is a result of the social solidarity and support that followers direct at the admins, often bereaved users in pain, and of the fact that memorialization is generally held as a socio-moral project (Noy, 2015, p. 39) -more so when the deceased is consistently portrayed as a hero, a special person, a respectable public figure worth remembering. Such expressions point at how admins acknowledge having received emotional support from their network of followers. Recall that Putnam (2000) and Williams (2006) associated emotional support and mobilization of solidarity to bonding social capital, that is, to interactions that are typical of strong ties and closer relationships. Interestingly, our findings suggest that such resources may also be obtained through what we can call "bridging relations" and interactions with a broad network of mostly strangers. Admins explicitly and repeatedly link emotional support to such parameters as engagement with the Page, the economic capital gained through the network, and the practical support followers provide. --- Conclusion In this article, we explored Facebook Pages created in memory of ordinary people with the aim of raising social awareness and public remembrance of their death. We offered a new perspective on these memorial Pages and suggested viewing them through the scope of the social capital approach. In line with existing literature (Ellison & Vitak, 2015), our findings demonstrate that the most significant factors of social capital processes are the size and composition of one's network and the patterns of interaction. We identified different communicative practices that admins pursue in the aim of reaching an audience, increasing the size of their network (i.e., followers count), and enhancing its activity and engagement. In addition, we analyzed how admins interact with their network-a multi-layered communication that serves the multiple functions they seek to accomplish. On the one hand, admins use a formal register, and the notion of respect is salient as they try to establish a sense of formality, authority, and recognition towards the deceased and the Page. On the other hand, they use a highly personal, enthused, and emotional register partly because of the engaging effect of affective performances, and partly because of the affect-laden quality of digital mourning practices (Giaxoglou & Do€veling, 2018). When a user performs an increased emotional sharing, it activates reactions of the networked audience in the shape of an exchange of emotional and support resources (Baym, 2010, in Giaxoglou et al., 2017), which have been shown to reinforce tie strength (Xu et al., 2021). In a discussion on networked emotions and sharing loss online (Special Issue of Journal of Broadcasting & Electronic Media, 2017), Giaxoglou et al. (2017) observe the "increasing mobilization of emotion as a commodity" (p. 7), and Sabra (2017) further notes that the potential for economic and emotional capitalization is integrated into the Facebook platform (p. 31). 
Our findings flesh out these observations by showing how admins carefully and strategically select where and when to use a formal and factual register (e.g., the About section, biographical and informative posts), and when to use a more personal and friendly emotional tone (e.g., posts extending gratitude and appreciation that yield an encouragement-engagement circle). --- Like, share, and remember The expected forms of engagement, and with them the requests for resources that admins post, range from online participatory acts to purchasing memorial merchandise, donating money, physically attending events, contributing one's skills to the production of initiatives, and so on. The accumulation of resources through the Page is a social capital process par excellence, a process in which ordinary users become admins and create their own network, gradually expand it, and harness it by employing platform affordances to achieve their goals. Network members are mostly strangers. While previous studies note that strangers are unwelcome on Facebook memorial spaces (Rossetto et al., 2015; Walter, 2015), our study suggests that strangers are more than welcome and are deeply appreciated. In an effort to portray the deceased as a public figure and to establish a state of public remembrance, admins address the largest audience possible. Pages, as opposed to other Facebook subplatforms, afford this publicity and capitalization, which users acknowledge and take advantage of from the very early stages of creating and naming the Page. This complements studies that have examined social capital processes on Facebook and relationship maintenance behaviors of existing connections (i.e., Facebook friends). Here, we examined the creation, maintenance, and strengthening of new connections with strangers (i.e., Facebook followers), or parasocial relations, corroborating previous observations of memorial Pages, which "gather strangers rather than friends" (Klastrup, 2015, p. 147). However, while existing literature links strangers and "weak ties" with bridging social capital outcomes (Putnam, 2000; Williams, 2006), in the case of the memorial Pages we studied, broad networks of followers consisting mostly of strangers in fact facilitate bonding social capital outcomes, such as solidarity and emotional support. Admins recognize this support and pursue a circle of encouragement-engagement that motivates participatory activities. --- Limitations and future directions This study has several limitations that future studies can address. First, due to the relatively small sample size, we could not draw conclusions relating to the connections or correlations between the cause of death and the activity or dynamics on the Page. Second, future studies may examine the collectivization of personal mourning and related social capital processes on other platforms with different affordances and dynamics (such as visual versus textual platforms). We believe that much of the transferability of these insights rests on platforms' public quality or publicity. Finally, while we focused on memorial Pages, future research can explore social capital processes on Pages in different contexts and themes. Future research may also focus on social capital in relation to "special users," such as admins (rather than ordinary users), who employ distinct affordances and pursue distinct practices. --- Data availability The data underlying this study will be shared on reasonable request to the corresponding author.
This study focuses on users' practices involved in creating and maintaining Facebook memorial Pages by adopting the theoretical perspective of the social capital approach. It examines 18 Pages in Israel, which are dedicated to ordinary people who died in nonordinary circumstances. We employ qualitative analysis based on a digital ethnography conducted between 2018 and 2021. Our findings show how memorial Pages serve as social capital resources for admin users. Admins negotiate Facebook affordances when creating, designing, and maintaining such Pages. They discursively position the deceased as a respectable public figure worth remembering, and their followers, who are otherwise strangers, as vital partners in this process. The resources followers provide range from economic capital and practical support to solidarity and emotional support. Finally, we point to the perceived connection users make between visible/measurable online engagement (Like, Share, Follow) and cognitive or emotive implications: public memory, recognition, and esteem.
Introduction HEALTH: i. The state of an animal or living body, in which the parts are sound, well organized and disposed, and in which they all perform freely their natural functions; in this state the animal feels no pain; this word is also applied to plants. ii. Sound state of the mind; natural vigor of faculties. iii. Sound state of the mind in a moral sense; goodness. Health as defined in Scientific Dictionary, 1863 [1] Viewed through the prism of life (Greek; bios) and ways of living (Greek; biosis), health is an expansive term which has long-since defied concrete definition. In 1946, the World Health Organization's constitutional statement [2] maintained that health is 'complete physical, mental and social well-being and not merely the absence of disease or infirmity'. Figure 1. High-level wellness is applicable to organizations, communities, nations, and humankind as a whole. In an era of gross environmental concerns and a crisis of non-communicable diseases, personalized medicine must be increasingly viewed in the context of planetary health [image by author, S.L.P.]. Remarkably-even without our current, sophisticated understanding of biodiversity losses, environmental degradation, climate change, and resource depletion-Dunn underscored that high-level wellness is predicated upon the health of the Earth's natural systems [5]. In other words, discussions of high-level wellness-whether for person or civilization-must always consider the environment, and this must include broad aspects of the natural environment on which humans depend. Dunn was underscoring the principles of what is now termed 'planetary health'. The term planetary health, popularized in the 1980s-1990s, underscores that human health is intricately connected to the vitality of natural systems within the Earth's biosphere. Coincident with the rise of environmentalism, preventive medicine and the self-care movements of the 1970s, the artificially drawn lines between personal, public, and planetary health began to diminish [6,7]. Dunn's concept of high-level wellness was referenced in articles which discussed "a different philosophical framework through which individual, community, environmental and planetary health can be better understood in a broad and integrated fashion" [8] (see Figure 2).
As the global health burdens have shifted from infectious diseases to non-communicable diseases (NCDs), greater emphasis has been placed on the health-mediating role of social determinants, lifestyle, and the total lived environment. The health implications of anthropogenic threats to life within the biosphere cannot be uncoupled from discussions of individual, community, and global health.
Recent endeavors such as the Lancet Commission on Planetary Health [9] and The Canmore Declaration [10] have re-emphasized that public health, biopsychosocial medicine, and planetary health are one-and-the-same. --- Roadmap to the Current Review Here in our narrative review, we will revisit Dunn's high-level wellness and explore its place in the emerging planetary health paradigm. First, we discuss some of the origins of the high-level wellness concept and describe how it manifests in contemporary clinical care. Next, we examine the concept of planetary health, its historical origins, and the global movement which now considers the health of civilization and the Earth's natural systems as inseparable. With this background in place, we argue that the concept of high-level wellness provides an essential framework for health promotion and clinical care in the modern landscape; it allows scientists of diverse fields-no matter how reductionist the scope of their inquiry-to see the large-scale relevancy of their work; it provides healthcare providers a broader vision of human potential with individuals as living embodiments of accumulated experiences shaped by natural and anthropogenic (i.e. social, political, commercial, etc.) ecosystems-rather than a vision limited to a neutral disease-free set point. Dunn's high-level wellness and planetary health (which we argue are synonymous) require discourse concerning values, our connectedness to one another, our sense of purpose/meaning, and our emotional connections to the natural world. High-level wellness also demands discussion of authoritarianism, social dominance orientation, narcissism, and other barriers to vitality of individuals, communities and the planet. Finally, we emphasize that experts in environmental health promotion and lifestyle medicine are ideally positioned to educate and advocate on behalf of patients and communities (current and future generations), helping to promote vitality and safeguard the health of person, place, and planet. --- High-Level Wellness "Wellness is conceptualized as dynamic-a condition of change in which the individual moves forward, climbing toward a higher potential of functioning. High-level wellness for the individual is defined as an integrated method of functioning which is oriented toward maximizing the potential of which the individual is capable, within the environment where (they) are functioning. This definition does not imply that there is an optimum level of wellness, but rather that wellness is a direction in progress toward an ever-higher potential of functioning... high-level wellness, therefore, involves (1) direction in progress forward and upward towards a higher potential of functioning, (2) an open-ended and ever-expanding tomorrow with its challenge to live at a fuller potential, and (3) the integration of the whole being of the total individual-(their) body, mind, and spirit in the functioning process... high-level wellness is also applicable to organization, to the nation, and to (humankind) as a whole". Halbert L. Dunn, MD, PhD. Canadian Journal of Public Health, 1959 [11] In two notable papers-both published in 1959 [3,11]-biostatistician and public health physician Halbert L. Dunn conceptualized the idea of 'high-level wellness' (Box 1) for humankind and civilization at-large, maintaining that "wellness is not just a single amorphous condition... but is rather a fascinating and ever-changing panorama of life itself, inviting exploration of its every dimension" [3].
In this context, he included population pressures, rising rates of mental and functional illnesses, and the rapid speed of technological growth (especially in communications). Moreover, he stated: "it is probably a fallacy for us to assume, as so many of us have done, that an expansion in scientific knowledge can indefinitely counterbalance the rapidly dwindling natural resources of the globe" [3]. In other words, Dunn was acutely aware, even in 1959, that the ability to obtain high-level wellness-at individual and civilization-wide scales-was predicated on the health of the planet. "High-level wellness is applicable not only to the individual but also to all types of social organizations-to the family, to the community, to groups of individuals, such as business, political or religious institutions, to the nation and to (humankind) as a whole. For each of these aggregates, it implies a forward direction in progress, an open-ended expanding future, interaction of the social aggregate and an integrated method of functioning which recognizes the interdependence of (humans) with other life forms". Halbert L. Dunn, MD, PhD. 1966 [12] Dunn's context for high-level wellness was beyond even national boundaries; in the era of rapid change, no longer could health be viewed as exclusively a local phenomenon: "The effects of these (environmental/social) changes ripple outward to all parts of the physical environment, affecting the entire ecology on which man is dependent, and also penetrating into the deepest recesses of his inner world" [13]. The search for high-level wellness in life (Greek: bios) cannot be separated from our individual and collective mode of living (Greek: biosis) or lifestyle; to understand such connections, Dunn advocated for educational efforts to "develop interest in biology on a vast scale, so that it would become of major interest to all. This would mean acquiring a deep interest in life-in the life process itself" [14]. Related to this, Dunn emphasized a need to understand how human attitudes to other forms of life (and the natural environment in general) are formed. The prerequisite to individual and societal high-level wellness, Dunn contended, is the maintenance of a sense of purpose and opportunities for creative expression. On the other hand, he argued that the barriers to high-level wellness include authoritarianism, clinging to dogma, and lack of critical analysis skills. He encouraged health and medical bodies to self-reflect. Barriers to high-level wellness, Dunn argued, are manifest in uncritical allegiance to "teams" in political, economic, occupational, academic, and other professional and social spheres; in particular, the inability to adjust beliefs and communication based on advancing knowledge is a major impediment. Dunn maintained that global wellness in the modern era is predicated upon providing opportunities (especially early in life) to see common ground, teaching children critical appraisal skills, and learning the value of listening to opposing views while 'searching for points of mutual agreement'. Dunn proposed a 'universal philosophy of living' which focused not on what individuals were 'against', but rather what they would be 'for': "a philosophy which will permeate the minds and hearts... a philosophy which men and women of good will, regardless of race, creed and nationality, can be for. A unifying type of philosophy which can be embraced and lived by all, within their own cultural background" [15].
He also called for greater research investments to be directed toward an understanding of the social, biochemical, physiological, and psychological pathways to the goal of high-level wellness; Dunn maintained that high-level wellness was itself a way of life-a lifestyle which involved a sense of purpose and meaning-one which maximized the odds of achieving the fullest potential. In its simplest form, high-level wellness equates to vitality; humans can experience the upper ranges of wellness when there is a feeling of 'zest in life', abundant energy, a tingle of vitality, and a feeling of 'being alive clear to the tips of your fingers'. However, Dunn cautioned that in 20th century modernity, zest was being confused with 'something that gives us a very momentary "lift"' [14]. In the 21st century, the iron pyrite of zest and aliveness is all-too-often sold to the public in the form of "energy" drinks [16]. Vitality has since become a measurable psychological construct and the subject of intense research scrutiny. Several vitality scales have been validated (as well as vitality subscales within larger assessments such as the Profile of Mood States and the SF-36), and researchers have linked vitality to various health-related outcomes; for example, vitality is emerging as a surrogate marker of reduced risk of NCDs, psychological wellbeing, and better life-course health [17][18][19][20][21]. In line with Dunn's commentaries, vitality is captured on scales as 'approaching life with excitement and energy, feeling vigorous and enthused; living life as an adventure; feeling alive and activated; zest for life'. It is unclear if vitality is a cause or consequence of a healthy diet, exercise, social support, and other lifestyle habits such as spending time outdoors in natural environments-it is likely both a contributor and a consequence [22][23][24][25]. The concept of high-level wellness may be identified in so-called blue zones where longevity, chronic disease resilience, and quality of life are found in tandem. --- Preventive Medicine, Public and Planetary Health The term planetary health emerged from the annals of preventive medicine, health promotion and the environmental health movement; in 1972, physician-ecologist Frederick Sargent II, MD, advocated for a greater understanding of the interrelations between the 'planetary life-support systems' and health (not simply the absence of disease) [26]. In 1974, Soviet bio-philosopher Gennady Tsaregorodtsev called for novel and integrative approaches to 'planetary public health' [27]. He also advocated for a greater understanding of the biopsychosocial needs of humans in the context of ecosystems at micro- and macro-scales. Both writers underscored the urgent need for information-gathering and actionable steps in relation to the human health sequelae of environmental degradation, with a focus on preventing unanticipated consequences (those corrosive to wellness) of human-induced changes to the natural environment. On the environmental side of health, the work of scientists from multiple disciplines (especially ecology, toxicology, geography, and other environmental sciences) was folded into definitions of health by environmentalists and various advocacy groups.
For example, in 1980, the environmental group Friends of the Earth expanded the World Health Organization definition of health to include ecological and planetary health inputs: "health is a state of complete physical, mental, social and ecological well-being and not merely the absence of disease-that personal health involves planetary health" [28]. At the same time, these sentiments were echoed within the growing holistic health movement of the 1980s which argued for: "greater attention to prevention... (and) a different philosophical framework through which individual, community, environmental and planetary health can be better understood in a broad and integrated fashion" [8]. Nursing, a profession which has been unified by deeper understandings of the words 'health' and 'care', was progressive in underscoring planetary health: "the health of each of us is intricately and inextricably connected to the health of our planet" [29]. By the early 1990s, leaders in nursing advocated for a need to "understand health as a reintegration of our human relationships with nature... (and maintain) openness to nature's healing power" (and a) "broader ecologically-informed perspective on health" [30]. By the mid-1990s, the 'wellness movement' had, according to experts in health education, "added a sixth dimension of health (that is, in addition to physical, social, emotional, intellectual, and spiritual), environmental or planetary, health. This dimension involves both micro (immediate, personal) and macro (global/planetary) environments" [31]. Health education textbooks maintained that we must "now view health as the presence of vitality-the ability to function with vigor and live actively, energetically, and fully. Vitality comes from wellness, a state of optimal physical, emotional, intellectual, spiritual, interpersonal, social, environmental, and even planetary wellbeing" [32]. Viewed this way, the word health cannot be disassociated from the words equity, access, and opportunity. It is also important to point out that the 'planetary health' movement which began in the 1980s was an extension of indigenous knowledge and ideation: scholars have underscored that indigenous cultures have long-since understood that "human health and planetary health are the same thing" (or "to harm the Earth is to harm the self") [33]. For example, Lori Alvord, MD, the first conventionally-trained female Navajo surgeon in the United States, stated: "I cannot think of a single thing that would be more important to us (North American indigenous peoples) than to have a pure environment for our health... human health is dependent upon planetary health and everything must exist in a delicate web of balanced relationships" [34]. An understanding of the links between human and planetary health among indigenous peoples is a product of emotional bonds with the natural environment and effective, trans-generational knowledge transfer [35,36]. Indeed, the ecopsychology movement of the early 1990s advocated for "a planetary view of mental health... to live in balance with nature is essential to human emotional and spiritual well-being, a view that is consistent with the healing traditions of indigenous peoples past and present" [37]. In sum, the environmental health, preventive medicine, and wellness movements of the late 20th century often included a planetary health perspective.
However, it must be recognized that the foundations of the contemporary planetary health concept are a product of indigenous science and medicine, and longstanding awareness that human health (that is, wellness) is dependent upon the vitality of the natural environment [38]. In the context of high-level wellness, preventive medicine is tasked not only with helping to prevent the path to specific diseases, but also with preventing departure from vitality. We turn now to examine the accelerating pace at which the term planetary health has moved into the glossary of science and medicine. --- Planetary Health Moves to Mainstream "Even with all our medical technologies, we cannot have well humans on a sick planet. Planetary health is essential for the well-being of every living creature. Future healthcare professionals must envisage their role within this larger context, or their efforts will fail in their basic objective. Although until recently healthcare providers could ignore this larger context, such neglect can no longer be accepted". Thomas Berry, 1992 [39] Although the term planetary health was used frequently by various experts, researchers, clinicians, academics, and advocates, only recently has the concept entered the lexicon of mainstream science and medicine. In 2015, the Rockefeller-Lancet Commission on Planetary Health published its landmark report; the expansive document-which covered political, economic, and social systems-formally defined planetary health as "the health of human civilization and the state of the natural systems on which it depends", with its stated goal to find 'solutions to health risks posed by our poor stewardship of our planet' [9]. As a crude measure of the report's impact, results of a PubMed search for "planetary health" demonstrate that over 70% of the citations have been published post-2014. The Commission report, financially supported by the Rockefeller Foundation, has already been cited over 300 times on Google Scholar; it has also spawned a dedicated Lancet Planetary Health journal. There is little doubt that the Commission report and the efforts of other groups have moved planetary health into widespread discussion. The contemporary planetary health concept is meant to break down silos and galvanize research efforts so that there is greater awareness of how specific pieces of research work toward solving the (interrelated) grand challenges of our time; planetary health is, of course, the terrain of environmental impact assessments and strategic environmental assessments, climate indicators, and toxin-based units of analysis; however, in 2018, one of the leading voices in the current planetary health movement-Lancet Editor-in-Chief, Dr. Richard Horton-underscored that it is so much more: "Planetary health, at least in its original conception, was not meant to be a recalibrated version of environmental health, as important as environmental health is to planetary health studies. Planetary health was intended as an inquiry into our total world. The unity of life and the forces that shape those lives. Our political systems and the headwinds those systems face. The failure of technocratic liberalism, along with the populism, xenophobia, racism, and nationalism left in its wake. The intensification of market capitalism and the state's desire to sweep away all obstacles to those markets. Power. The intimate and intricate effects of wealth on the institutions of society. The failure of social mobility to compensate for steep inequality.
The decay of a tolerant, pluralistic, well informed public discourse. The importance of taking an intersectional perspective. Rule of law. Elites. The origins of war and the pursuit of peace. Problems of economics-and economists" [40]. We agree with this sentiment. Indeed, the future of planetary health in the context of preventive medicine and environmental health requires a greater understanding of a 'planetary health psyche'; by this we mean deeper insight into the ways in which emotional bonds are developed between person and place, and the collective cognitions and behaviors which have resulted in environmental degradation and 'Anthropocene Syndrome' in the first place [41]. This goes far beyond the now extensive research showing the health benefits-physical, emotional, cognitive, social, and spiritual health-of contact with natural environments [42,43]. The preventive form of planetary health is now an imperative; as stated by Harvard psychiatrist John E. Mack (1929-2004), we must develop a relational psychology of the Earth which allows us to "tell unpleasant or unwelcome truths about ourselves... to explore our relationship with the Earth and understand how and why we have created institutions that are so destructive to it... we in the West have rejected the language and experience of the sacred, the divine, and the animation of nature. Our psychology is predominantly a psychology of mechanisms, parts, and linear relationships. We have grown suspicious of experiences, no matter how powerful" [44]. The development of emotional connections with the natural world-and health-related associations with such emotional bonds-is now a measurable construct in the form of nature relatedness (see also related validated instruments such as nature connectedness or nature connectivity scales) [45]. Nature relatedness scales are a means for researchers to evaluate individual levels of awareness of, and fascination with, the natural world; nature relatedness scores encapsulate the degree to which individuals have an interest in making contact with nature. While this body of research is far from robust, the available evidence indicates that nature relatedness is positively associated with general health, mental wellbeing, empathy, pro-environmental attitudes/behaviors, and humanitarianism (and negatively with materialism) [46][47][48][49][50][51]. The challenge for global researchers is to develop a more sophisticated understanding of how nature relatedness fits into the planetary health imperative; how is nature relatedness fostered and how is it influenced by cultural experience and socioeconomic variables [52,53]? What are the biological underpinnings of nature relatedness in relation to non-communicable disease [54]? How does it influence environmental behaviors and the political-economic viewpoints outlined by Horton [55]? Are high levels of nature relatedness a 'burden' in some cases? For example, in cases where environmental degradation and biodiversity losses are immediately apparent [56], it might be expected that rapidly changing environmental conditions would provoke distress (Box 2). Humanity is facing colossal, interconnected global challenges. It is now abundantly clear that human-caused climate change represents a threat to all of humanity. Extreme temperature and weather events, degraded air quality, and the spread of diseases via food, water, and alterations to the life of vectors (such as ticks and mosquitoes) are now a reality [57].
Climate change does not stand alone as a looming public health threat. It is coupled with environmental degradation (through industry and invasive species), biodiversity losses, grotesque health disparities, the global spread of ultra-processed foods, and what has been described as a 'pandemic' of non-communicable diseases [41,56,58,59]. The burden of these global threats is shouldered by the socioeconomically disadvantaged. Only recently have researchers begun to tabulate the ways in which environmental degradation takes its toll on mental health. In areas where environmental degradation has already been significant, researchers see a worsening of mental health-described by some as 'ecological grief' [60]. There is an urgent need to study the ways in which climate change and environmental degradation not only contribute to NCDs, but also how they contribute to mental stress and diminish vitality [61][62][63]. --- Planetary Health vs. Authoritarianism More than ever before, medicine, science, and health (at all scales) are political discussions [64][65][66][67]. A rapid change in communication technology and social media has accelerated the ability of misinformation to spread globally. We have now entered a strange era dubbed 'post-truth' [68], a time when it is no longer tenable to be on the sidelines as a health 'care' spectator. However, in comparison to other professions and even the general population, US physicians show low levels of civic participation [69,70]. Recent elections in North America and Europe have underscored the ways in which public health is threatened by political authoritarianism [71,72]; however, authoritarianism and social dominance orientation are not constrained to the political arena and politicians. Rather, they can be found in many contemporary social structures, including those associated with westernized medicine [73] and science [74]. In his writings on wellness, Dunn underscored that authoritarianism is a significant barrier to global wellbeing; in order to remedy this, he encouraged greater inclusion of political science in health research and education. He also advocated for a greater understanding of leadership styles as an influence on the health of groups, and broader awareness of the ways in which scientific findings are selectively misused. In particular, he was concerned about the abuse of science by socially-dominant political elites and those with biased interests in the outcomes. During Dunn's time, the research on authoritarianism (as a psychological construct) was still in its infancy. Today, this area of research is far more robust, and it is much easier to determine the ways in which authoritarianism interferes with health. Authoritarianism is described as expecting or requiring people to obey, favoring a concentration of power, and limiting personal freedoms. Scores on authoritarianism scales are associated with stigmatization of out-groups, a rigid adherence to mainstream convention, and broad aspects of prejudice [75][76][77]. Authoritarianism predicts intolerance to diversity and differing cultures, aggression toward out-group members, and hyper-vigilance to threats against non-conformism. It is also associated with a cognitive style devoid of fine-grained discourse and nuance; out-groups are labeled in simplistic, all-or-none fashion [78]. Social dominance orientation (SDO) is a related psychological construct that is characterized by attraction to hierarchy and areas of prestige found within social systems.
SDO scales capture beliefs regarding the acceptability or entitlement of high-status groups to dominate other groups, and attitudes toward maintaining social and economic inequality. Higher scores on SDO scales are associated with lower empathy, and less concern for matters of social justice and inequalities [79]; conversely, these individuals are hyper-vigilant to threats-real and perceived-that might compromise privileged status and its benefits [80]. Researchers have shown that higher SDO predicts prejudice and diminishes awareness that power gained from the dominant social position is being used for personal gains [81,82]. The overlaps between SDO and authoritarianism have been consistently noted, such that researchers refer to the combination of SDO and authoritarianism as the "lethal union". The relevancy of authoritarianism and SDO to planetary health is now obvious. Authoritarianism and/or SDO predict denial of the seriousness of climate change, lower levels of environmental concern, and a hierarchical anthropocentric view of nature [83][84][85][86][87]. Many public health professionals are keenly aware of the threats posed by political authoritarianism. Indeed, recent elections in North America and Europe have been a catalyst in (re)emphasizing the importance of political science in personal, public, and planetary health [88]. Empathic, caring, civil-minded professionals that fill the ranks of global healthcare are obligatory humanists; because so many health threats-those linked to ecosystems and the biosphere, and infectious/NCDs alike-are oblivious to national boundaries, humanist healthcare professionals are, in turn, obligatory anti-nationalists. Thus, public, preventive, and environmental health is built upon vigilance for political authoritarianism. It is understood that the misguided actions of any one nation, or even one individual, can conspire against all of humanity. However, this does not mean that SDO or institutional authoritarianism is a problem to which science and medicine are immune. On the contrary, research shows that authoritarianism and/or SDO may be uncomfortably high among students at entrance to medical schools, increased through medical education, and reinforced at the institutional levels of medicine [89][90][91][92][93][94]; medicine in general, and the technical medical disciplines such as surgery in particular, maintain high levels of perceived status [91]. That is a problem not only for clinical care, but also for building (and maintaining) the public trust in science and medicine at-large. Research is beginning to tease out the motivations of students who enter medical school as they relate to money and status, and connect these to characteristics such as low agreeableness and intolerance of opposing views [95]. Since experimental studies show that manipulating social status and power (in an upward direction) increases social dominance, and that SDO can be provoked by status reminders and cues such as money [81,96,97], medicine may need to look inward and examine its commitment to the principles of planetary health. Indeed, contemporary research supports Dunn's contention that individual (and in-group) authoritarianism is a barrier to the collective action required to support the core tenets of planetary health-that is, it blocks social rights-based movements (civil, gender, environmental, and otherwise) [98].
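The authoritarianism and SDO measures discussed above are typically short Likert-type questionnaires, and their scores are usually derived by averaging (or summing) item responses, with some items reverse-keyed. The following is a minimal, purely illustrative sketch in Python, assuming a hypothetical six-item measure rated on a 1-7 agreement scale; it does not reproduce the items or scoring rules of any published instrument.

from statistics import mean

# Hypothetical responses to a six-item measure rated on a 1-7 agreement scale.
responses = [6, 5, 2, 7, 3, 6]

# Zero-based positions of the (hypothetical) reverse-keyed items.
reverse_keyed = {2, 4}
scale_min, scale_max = 1, 7

def score_scale(items, reverse_positions, lo, hi):
    # Reverse-code flagged items (1 becomes 7, 2 becomes 6, ...) and average.
    adjusted = [(lo + hi - v) if i in reverse_positions else v
                for i, v in enumerate(items)]
    return mean(adjusted)

print(round(score_scale(responses, reverse_keyed, scale_min, scale_max), 2))
# A higher mean indicates stronger endorsement of the construct being measured.

Published instruments differ in item counts, response anchors, and keying, so this sketch only illustrates the general reverse-code-and-average logic behind the scores referred to above.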
As discussed in detail elsewhere [73], entering medical school with a high desire for social status, or with higher baseline levels of authoritarianism and social dominance orientation than societal norms-and to have such characteristics amplified through medical training and institutional structures-is at the heart of Horton's plea [40] for a planetary health agenda designed for meaningful change. How can science and medicine challenge an unhealthy status quo if they are unwilling or unable to confront their own contextual power hierarchies [99,100]? These are concerns which permeate healthcare-at-large. Higher SDO (even among healthcare professionals who are not medical doctors) is associated with an unwillingness to engage in inter-professional education [101]. This is likely to reflect more generalized shifts in societal goals and value systems away from meaningful life philosophy towards an emphasis on financial wealth as the dominant measure of success [102]. --- Conclusions The contemporary concept of planetary health-which has its roots in the late-20th century preventive medicine and environmental health movements-emphasizes that health equates to vitality at scales of person, place, and planet. It asserts that preventive medicine is a broad term, one which extends to the planet's natural systems-the ecosystems and biodiversity upon which our own vitality depends. Planetary health is an adisciplinary unifying concept which allows researchers working in seemingly disparate branches of science and medicine to understand the relevance of the work contributed by each group. Specifically, we must advance the cause of planetary health by demonstrating a willingness to engage with and promote other disciplines. To this end, there are now encouraging examples of collaborative initiatives between health providers, regenerative agriculturalists, and local communities-notably in developing regions of the world-with demonstrated community-wide benefits for health, wealth, employment, and environmental sustainability [103]. These integrative models provide a path forward for ensuring the health of people and planet. In the context of planetary health, the urgent task for preventive medicine and environmental health is to provide deeper insight into the ways in which we develop relationships with nature, and how we feel, think, and respond to the natural world. This includes the biological, social, political, and economic underpinnings of nature relatedness (and related psychological constructs) and its impact on vitality at all scales. It includes a more fine-grained understanding of what prevents attainment of the planetary health goals set forth by the WHO and the Lancet Commission on Planetary Health report [9]. From our perspective, this means further study of authoritarianism and social dominance orientation (at individual, institutional and other scales) vis-à-vis the structures-including those found in politics, science, medicine, and elsewhere-which either support the status quo, or provide meaningful solutions to planetary health objectives. This applies equally to the injustices and inefficiencies of global systems, such as food and international trade systems, which also serve to undermine health and equality through biased authoritarian and neoliberal ideologies [104,105]. The idea that threats to the health of the person, the place (community), and the planet are distinct from each other is a mirage; this false notion has been challenged by environmental health and preventive medicine for decades.
We have moved past the point at which such discourse is merely intellectual fodder. We argue that in 2019, one simply cannot claim to be a 'health' care professional without advocating forcefully for the planet. There are no healthy people on an uninhabitable planet, and we are fast heading there. If its true goals are realized, environmental health and preventive medicine at the planetary scale will, as Jonas Salk implored in 1984, place emphasis on the idea that we should want "those who follow us to look back on us as having been wise ancestors, good ancestors" [106]. --- Author Contributions: S.L.P. developed the commentary, project oversight and research analysis. A.C.L. provided the research analysis and developed the historical aspects of the manuscript. D.L.K. is responsible for the commentary oversight, research interpretation, critical review of manuscript, and input of public health perspectives. All authors contributed to the development and review of the manuscript. All authors read and approved the final manuscript. The artwork was created by S.L.P.
Experts in preventive medicine and public health have long-since recognized that health is more than the absence of disease, and that each person in the 'waiting room' and beyond manifests the social/political/economic ecosystems that are part of their total lived experience. The term planetary health-denoting the interconnections between the health of person and place at all scales-emerged from the environmental and preventive health movements of the 1970-1980s. Roused by the 2015 Lancet Commission on Planetary Health report, the term has more recently penetrated mainstream academic and medical discourse. Here, we discuss the relevance of planetary health in the era of personalized medicine, gross environmental concerns, and a crisis of non-communicable diseases. We frame our discourse around high-level wellness-a concept of vitality defined by Halbert L. Dunn; high-level wellness was defined as an integrated method of functioning which is oriented toward maximizing the potential of individuals within the total lived environment. Dunn maintained that high-level wellness is also applicable to organizations, communities, nations, and humankind as a whole-stating further that global high-level wellness is a product of the vitality and sustainability of the Earth's natural systems. He called for a universal philosophy of living. Researchers and healthcare providers who focus on lifestyle and environmental aspects of health-and understand barriers such as authoritarianism and social dominance orientation-are fundamental to maintaining trans-generational vitality at scales of person, place, and planet.
Background Men tend to eat less healthfully than women, eating fewer fruits and vegetables [1][2][3], more red and processed meat [2,4], and greater amounts of processed discretionary foods [3][4][5]. These differences contribute to gender inequalities across a range of adverse health outcomes including obesity [6], diabetes mellitus [7] and coronary heart disease [8]. Socioeconomic inequalities in diet are well established [9][10][11]. Men and women experiencing socioeconomic disadvantage (e.g. those with low education, low income, or residing in deprived areas) tend to have eating behaviours not conducive to good health [9][10][11]. Compared with more advantaged adults, those who are disadvantaged tend to eat fewer fruit and vegetables [9,12], and less fibre [9]. Disadvantaged adults also consume more fat, skip breakfast [9] and eat fast food more frequently [13]. Education was selected as an indicator of socioeconomic position (SEP) because it is a strong determinant of future occupation and income, reflects knowledge-related assets and other intellectual resources, and has been strongly associated with dietary intake in previous studies [14]. The social ecological model, which recognises that individuals are embedded within larger social systems, provides a useful framework for investigating determinants of behaviour. According to the model, behaviours are determined by the interactions of individuals and their social and physical environments [15]. While correlates of women's eating behaviours are well characterized [16][17][18][19][20], influences on men's eating behaviours are less well understood, and are likely to differ from those that influence women [21,22]. While some sex differences in intakes may be attributable to biological factors, it is likely that a range of other factors at the individual, social and environmental levels are also implicated. For instance, social norms related to masculinity may lead men to perceive that consumption of certain healthy foods, and activities such as meal planning and cooking, are feminine [23], and hence 'unmasculine' [24]. Several potential drivers of socioeconomic inequalities in men's eating behaviours have been identified in studies focussed on singular domains of the social ecological model. Intrapersonal factors including nutrition-related knowledge [25][26][27], self-confidence, problem-solving skills and the ability to process information are important for helping individuals overcome obstacles to adopting more favourable eating behaviours [25]. Socioeconomically disadvantaged men may be less likely to use nutrition information and may also lack the skills or confidence to prepare healthy meals [27,28]. Social norms, particularly those related to masculinity, may also contribute to socioeconomic differences in eating behaviours. Men who endorse dominant norms of masculinity have been shown to adopt less optimal eating behaviours than their peers who endorse less traditional norms [23]. Young blue-collar male workers tended to show little consideration for being health-conscious, resulting in consumption of diets high in saturated fats and sugars [29]. These men's food practices reflected gender identity, with food preparation commonly viewed as "women's work". Blue-collar workers' food choices were also influenced by poor dietary role models, including peers, co-workers, and supervisors [29].
Environmental factors may also explain socioeconomic differences in men's eating behaviours, such as differential access to stores selling both healthy and less healthy foods [30]. Disadvantaged men may be less likely to make optimal food choices due to limited access to affordable nutritious foods within the local environments where they work and live. Danish men with low education believed their weight gain was partly attributable to the types of foods available in their work environment [31]. In New Zealand, the least deprived areas had 76% fewer fast food outlets than the most deprived areas, and fast food outlet exposure was negatively associated with individual-level SEP indicators (highest educational attainment and relative income) [30]. To our knowledge, potential explanations of socioeconomic differences in men's eating behaviours across intrapersonal, social and environmental domains have not previously been investigated simultaneously. Examining these influences across multiple domains together may yield a better understanding of the interactions between factors from different domains, and may identify factors that have been overlooked when domains were investigated in isolation. How these factors may influence socioeconomic inequalities in eating behaviours among men therefore remains unclear. The present investigation aimed to qualitatively explore potential explanations for socioeconomic differences in eating behaviours among men with tertiary and non-tertiary education. --- Methods This study is reported according to the consolidated criteria for reporting qualitative research (COREQ) guidelines [32], and was conducted in conjunction with an independent social and market research agency, Market Solutions P/L (http://www.marketsolutions.com.au/). The agency was selected to assist with the study given their strong track record in conducting social science research [33,34], and their familiarity with qualitative methodology and research, particularly amongst socioeconomically disadvantaged groups. The agency is accredited to the international ISO standard for market, social and opinion research (AS ISO 20252) and is a member of the Association of Market Research Organisations (AMSRO). Market Solutions P/L was responsible for recruitment, conducting interviews, recording and transcribing data, and transmitting de-identified data to the study investigators. The study investigators were responsible for all other aspects of the study. The study was approved by the Deakin University Faculty of Health Human Ethics Advisory Group (HEAG-H; approval HEAG-H 95_2015). All men provided informed, verbal consent to participate. This was recorded by interviewers at the point of first contact with the men in a password-protected project database stored on a secure server. --- Participants The sample comprised 30 men of working age (18-60 years): 15 with a non-tertiary level of education (i.e. completed Year 9 or less, Year 10, Year 11, Year 12 (final year of high school in Australia), or Certificate/Diploma/Advanced Diploma) and 15 with a tertiary level of education (i.e. a Bachelor degree or higher), from Melbourne, Victoria, and Newcastle, New South Wales (large metropolitan regions in two Australian states).
To reflect SEP in nutrition research, education is often stratified as described above (high SEP is indicated by having achieved tertiary level qualifications, while low SEP is reflected by achieving non-tertiary level qualifications) [35][36][37]. The current qualitative data can be used to generate hypotheses that could be followed up in future research [38]. Education was employed as the measure of SEP in this study as it is a relatively stable indicator of SEP [14,39]. Seven or eight men from each education group participated at each site. Men of working age were the focus of the present investigation as different factors influencing eating behaviours might be reflected among older men (e.g. those who are retired), given substantial lifestyle changes that come with older age (e.g. income, available time, household structure, health issues [40]). --- Recruitment procedure Market Solutions P/L accessed telephone directories of community members in both target catchment areas, including mobile and landline numbers, and randomly selected men's numbers to be called by one of three male interviewers (agency employees trained in qualitative methodology). Male interviewers were chosen to maximise the potential to build reciprocity between the interviewer and participant, which may yield richer data than might have been gathered by female interviewers [41]. Men were invited to complete a telephone-based interview either immediately or at a more convenient time. Purposive sampling [42] based on educational attainment and city of residence was used to recruit a total of 30 men (15 from each target catchment, and 15 each with tertiary and non-tertiary education). Interested participants received study information via telephone and were assessed for eligibility (i.e. 18-60 years of age, were tertiary or non-tertiary educated as defined above, and could communicate clearly in English). Men were offered an AUS$20 voucher to a leading retailer as compensation for their time (mailed post-interview). --- Semi-structured interview schedule and procedure Development of questions for the semi-structured interviews was informed by the social ecological model [15], and previous research examining determinants of men's eating behaviours [21,23,24,26,28,29,31,43-46]. Questions were primarily open-ended and aimed at assessing participants' usual eating behaviours and perceived influences on these (Additional file 1: Table S1). Men were prompted to discuss food task responsibilities; influences on eating behaviours and eating choices (including an exploration of trade-offs between health, convenience, peer modelling, price, accessibility, and taste); body weight; masculinity; social influences; perceptions of other men's eating behaviours (social norms); and neighbourhood availability of healthy foods. --- Interview procedure Interviews were conducted by telephone in 2015. A one-on-one telephone interview was chosen as men resided across a wide geographical area, making face-to-face interviews less feasible. The interview schedule was pilot tested and refined with the first two men (one with tertiary education from Melbourne, one with non-tertiary education from Newcastle). Piloting showed no major issues with timing or questions, with only minor changes made for clarification. Pilot data were not included in further analyses. Before commencing, interviewers asked for permission to digitally record the interview, and participants answered sociodemographic questions.
Interviews lasted between 25 and 35 min, and once complete, were transcribed verbatim from the recordings. --- Sociodemographic characteristics Men provided their age (five response categories ranging from 18-24y to 55-60y) and highest attained level of education (six categories ranging from Year 9 or less to Bachelor degree or higher). Employment status (working full-time/part-time, studying, unemployed, retired, home duties, or other), annual household income (eight response categories ranging from less than AUS$20,000 to AUS$150,000 or more, plus 'don't know' and 'refused' options), household structure (couple with children, couple without children, single parent, single person, or flatmates) and occupation (comprising professional, technician/trades worker, community and personal services worker, manager, clerical and administrative worker, machinery operator/driver, sales worker, labourer, and other) were also established. --- Data analysis Qualitative description was used to build a comprehensive understanding of socioeconomic differences in influences on men's eating behaviours. Qualitative description aims to maximise descriptive and interpretive validity by providing an account of events (including meanings participants attribute to those events) that both participants and researchers would agree is accurate [47,48]. This methodology is more appropriate than those requiring a greater degree of researcher interpretation, given the goal of the present investigation to discern potential influences on socioeconomic differences in men's eating behaviours [48]. Data were analysed by the lead author (LS) using thematic analysis, which comprised four key steps [49]: immersion in the data, line-by-line coding, creating categories, and generation of themes. LS read and re-read transcribed interviews to build familiarity with the data (data immersion), and then performed abductive thematic analysis [50] to code data using descriptive labels. Categories were formed by linking coded data together that related to similar concepts, while keeping categories for tertiary and non-tertiary educated men separate [49]. Based on these categories, LS identified key emerging themes that were salient for men within each education level group. Individual influences were each classified into a separate theme (e.g. 'cost', 'convenience', etc.). Findings were generated via an iterative, abductive cycle, moving back and forth between inductive and deductive reasoning. Where relationships between themes and/or sub-themes were identified, such interactions were classified under the predominant theme that united those factors (e.g. the interplay between cost, convenience, taste, and healthfulness of food was described within the 'cost' theme; of these factors, cost was determined to be most predominant as participants typically described cost before discussing consideration of the other factors). Rigour was maintained via researcher reflexivity (i.e. ensuring one's own perspectives are left out of the coding process as much as possible), development of an audit trail by recording steps taken in the development and reporting of findings, linking interpretations with the raw data by presenting participant quotes, and peer debriefing with the study's co-authors throughout the analytical process. An independent researcher (non-author) double-coded a subsample of interviews (20%; n = 6, three from each education group).
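As a purely illustrative aside, the stratified double-coding subsample described above (20% of the 30 transcripts; three drawn from each education group) could be selected reproducibly with a few lines of code. The sketch below is an assumption-laden illustration only: the transcript identifiers and the fixed random seed are hypothetical, not details reported by the study.

import random

# Hypothetical transcript identifiers: 15 tertiary and 15 non-tertiary educated men.
transcripts = {
    "tertiary": ["T%02d" % i for i in range(1, 16)],
    "non_tertiary": ["N%02d" % i for i in range(1, 16)],
}

random.seed(2015)  # fixed seed so the draw can be repeated

# Draw three transcripts per education group (6 of 30 = 20%) for double coding.
double_coding_subsample = {
    group: sorted(random.sample(ids, 3))
    for group, ids in transcripts.items()
}

print(double_coding_subsample)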
Each coder independently and systematically employed the iterative, abductive cycle described above to create categories from the data. The purpose of double coding was to explore potential alternative interpretations of the data, as the iterative process of cross-checking coding strategies and data interpretation by the researchers enables potential alternative interpretations to be identified and discussed, serving to create a more thorough examination of the data [51]. Data analysis was conducted using raw transcripts entered into NVivo software (version 10, QSR International, Melbourne, Australia). --- Results Sociodemographic characteristics of the sample are shown in Table 1. A range of age groups were represented, with the majority aged 45-54 years, and employed in full- or part-time work (80% of non-tertiary educated men, 87% of tertiary educated men). Very few men were studying (n = 2), unemployed (n = 1), or retired (n = 1); and none were engaged in home duties or other forms of employment (data not shown). Most tertiary educated men worked as professionals (77%). Among non-tertiary educated men, 50% worked as technicians and trades workers, 17% worked as managers, and 17% in clerical and administrative roles. Only one man was employed as a machinery operator/driver, and none were employed as sales workers, labourers, or in other roles (data not shown). Major emerging themes and exemplary quotes are presented below, with results presented stratified by education level. Themes found to be equally prominent across both groups of men included the intrapersonal-level influences of attitudes relating to masculinity, nutrition knowledge and awareness, and 'moralising' consumption of certain foods; and the social influence of children. Environmental themes discussed by both groups included availability of and access to healthy and unhealthy foods; convenience; and the interplay between cost, convenience, taste and healthfulness when choosing foods (discussed within the cost theme). Intrapersonal influences more frequently discussed by tertiary educated men within the themes identified included having greater food-related skills (e.g. cooking involving multiple, complex steps), but less involvement in food-related tasks (e.g. menu-planning, purchasing) because of time constraints. Almost all tertiary educated men with partners identified their partners as a positive influence on eating behaviours. Environmental influences more dominant among tertiary educated men included accessibility of healthy foods; and perceiving healthy foods as expensive and unhealthy foods as inexpensive. A number of influences within themes were more frequently discussed by non-tertiary educated men, including having less developed cooking skills but regular involvement in food-related tasks such as shopping, preparing, and cooking meals when compared to discussion by tertiary educated men. While men from both groups recognised nutrition knowledge as an influence on their eating behaviours, non-tertiary educated men reported lower perceived levels of nutrition knowledge, and sometimes described misperceptions related to nutrition and body weight. A theme identified only among non-tertiary educated men was the perception that no-one influenced their eating behaviours. Non-tertiary educated men also identified mobile worksites (i.e.
moving from one work location to another during the day/week as necessitated by their job, common among those working as tradesmen) as an unhealthy influence on eating, and discussed the need to adhere to a food budget. --- Intrapersonal influences Intrapersonal influences included attitudes related to masculinity; food-related tasks and skills; nutrition knowledge and awareness; and moralising consumption of certain foods. --- Attitudes related to masculinity Men from both educational groups reported that they did not believe that preparing and consuming healthy food were negatively associated with principles of masculinity, but rather were important for good health. Some tertiary educated men thought perceptions that it was unmasculine to eat healthfully had become less common over time, while others from both groups thought eating healthfully actually enhanced masculinity. "Things have changed. It might just be a reflection of my own friends, but I think a lot of guys I know cook more and want to eat a greater range of foods. I think there is a change where guys are picking up more responsibility at home." Tertiary educated man. "I tend to think if you eat healthy it would give you a greater sense of masculinity from a male point of view." Non-tertiary educated man. --- Food-related tasks and skills Food-related tasks and skills were discussed as an influence on men's eating behaviours by almost all men from both education groups. Tertiary educated men reported taking part in meal planning, food purchasing and preparation (although to a lesser degree than non-tertiary educated men), and adding extra vegetables to a dish to make it healthier. Some non-tertiary educated men described themselves as expert cooks, while others felt they had sufficient skills to put simple meals together. Both groups of men also frequently prepared their lunch for work. "Tonight I've got leftover pasta... I just added frozen peas and some fresh asparagus, which I just boiled quickly and I added it in..." Tertiary educated man. "If I do prepare a meal I might make myself some bacon and eggs on toast or I might make myself a burger if the materials are here at the time." Non-tertiary educated man. Men from both groups identified several reasons for cooking, including sharing the workload with their partner or spouse and/or because they enjoyed cooking. A few of the non-tertiary educated men described eating at home because it was cheaper to cook at home than to eat out. Some non-tertiary educated men also described sharing the food preparation workload due to time constraints, such that whoever in the household arrived home earliest after work, or had more time, did the cooking. "[Dinner time is] the time that my wife sort of works a bit later and I'm working days and I've got time to cook 'til she comes home... I like the taste and I like experimenting with cooking and making a few different things." Non-tertiary educated man. --- Nutrition knowledge and awareness Men from both groups were aware of the importance of eating healthfully and thought people, particularly other men, were far more aware of the importance of eating healthfully than in the past, and that awareness was continuing to grow over time. Both groups of men considered healthfulness when making food choices, with many choosing foods specifically because they felt they were healthy.
Nutrition knowledge was not determined by skill-testing questions and men were not asked to directly compare their knowledge to other men's knowledge; however, non-tertiary educated men perceived that they had lower nutrition-related knowledge than men with a tertiary education, and sometimes described misperceptions related to nutrition and body weight. "Steaks are probably... better for me than any of the other fatty food. Even with sausages sometimes they can be real fatty where at least I know if a steak's done properly there's not much chance of a lot of fat still being inside of it". Non-tertiary educated man. "My understanding is that fat is only stored to a point and then your body won't take anymore. What we assume is eating too much fat is actually carbohydrates stored as fat... In actual fact [people] are not fat. They're just carrying an enormous amount of carbohydrate that they're not using." Non-tertiary educated man. --- 'Moralising' consumption of certain foods Men in both groups moralised consumption of certain foods based on their perceived healthfulness, particularly snack foods. In 1999, Rozin described moralisation as the act of accreting moral value to activities or objects (such as food) that were previously without moral value [52]. Moralising food consumption can be regarded as translating food judgements into corresponding behavioural rules. For example, men associated choosing 'good' food with good health or high self-control, while 'bad' food choices were linked with poor health and low self-control. Such food judgements can be taken further to imply that certain food choices are righteous/sinful, or moral/immoral [53]. Men in both groups often described healthy food as 'right' or 'sensible', while consumption of unhealthy foods was construed negatively, associated with feelings of guilt, or viewed as 'terrible'. "I always favour seafood because I tend to think it's a more sensible choice... I think seafood's invariably a healthy choice..." Non-tertiary educated man. --- Social influences Social influences on eating behaviours identified included the influences of partners/spouses and children, and the perception that no-one influenced eating behaviours. --- Partners and spouses Among those men with partners, more tertiary educated men than non-tertiary educated men described their partner as a healthy influence on their eating behaviours. In the majority of cases, partners' main mechanism of influence was acting as gatekeepers of the home food environment, controlling the healthfulness of foods purchased and preparing nutritious meals. Some tertiary educated men thought their partners also verbally encouraged them to eat healthfully, or that their partner was a healthy role model. "[My wife helps me eat more healthfully]... by positive reinforcement, by actively seeking and assisting in healthy choices, healthy recipes and healthy food" Tertiary educated man. --- Children Among both groups of men, most who had children thought their children influenced them to eat healthfully. A number of fathers described choosing healthier foods in order to make them available to their children, as well as to role-model healthy eating for their children. --- No-one influences eating behaviour Several non-tertiary educated men stated they did not believe anyone else exerted influence on their eating behaviours, despite many of these men having partners and/or children. This view was not identified by tertiary educated men.
--- Environmental influences Environmental influences identified by both groups of men included availability of, and access to, healthy and unhealthy foods as well as convenience and cost. --- Availability of and access to healthy and unhealthy foods All men discussed availability of, and access to, healthy and unhealthy foods at home, work, and in the local neighbourhood as affecting food choice. Almost all men from both groups felt healthy food was readily available (e.g. where they did their weekly grocery shopping) and accessible in the local neighbourhood (e.g. at local markets and supermarkets that could be reached either on foot or by car in a short amount of time). Tertiary educated men thought access to particular foods increased the likelihood those foods would be eaten; therefore, ready access to healthy foods would result in eating more healthfully in general. A few non-tertiary educated men also chose foods at home, particularly snack foods, simply because they were readily accessible. "If I'm in the right frame of mind when I'm shopping I'll buy better things... I'll buy more vegetables and more fruit... And if I buy it, I eventually will eat it. I don't like wasting stuff... Just making sure that you buy more fruit and vegetables than you think you need... because they're there, you can think of things you can do with them." Tertiary educated man. "[For snacks, I eat] anything I can get my hands on really. I'm a bit of a human garbage disposal, so there's fruits and biscuits and nuts and whatever, chocolate. Anything I can get. Chips. Anything I can get a hold of. Anything in front of me." Non-tertiary educated man. Some non-tertiary educated men had mobile worksites, and so work lunch choices were influenced by what was available in the neighbourhood surrounding their workplace, i.e. they purchased food wherever they were located for a job. "Not an actual workplace cafeteria. I'm self-employed. I'm sort of all over the place so it'd be just like a shop [where I buy my lunch when at work]. Yeah, just whatever's closest." Non-tertiary educated man. --- Convenience Almost all men from both groups cited convenience as a major influence on food choice, selecting foods, particularly breakfast and lunch foods, because they were close to hand, and quick to purchase and consume. Among men who purchased work lunches, several from both groups reported that the convenience of food and the time it took to access it influenced their choices, often leading to less healthy food purchases. "There's always a lot more temptation to eat junky food [for work lunch], because it's really easy and it's there, and it's just about everywhere that you go. You can just grab it and eat it, you don't have to think about it. And I've noticed if you have to wait and think about it, you generally change your mind." Tertiary educated man. --- Cost Cost influenced men's food choices. All tertiary educated men considered cost when choosing food, and the perception that healthy food was expensive was prominent among tertiary educated men, but not among non-tertiary educated men. Tertiary educated men thought the cost of healthy food was prohibitive when doing the grocery shopping, and that unhealthy food items available in supermarkets were often cheap, or on special. "It's so much easier, in particular this country, to buy cheap take-away than it is to buy what's often not so cheap healthy food and then do the groundwork of preparing. It's easier and often cheaper...
You walk into a supermarket and you're going to pay AUS$3.00 for a bottle of [high-calorie beverage] and AUS$3.50 for a bottle of water. How is that possible?" Tertiary educated man. Almost all non-tertiary educated men also considered price when choosing foods, with some households having to stay within a budget when they shopped for food. "Generally [we cook] the cheaper cuts of meat, mince and sausages... because we're on a budget." Non-tertiary educated man. Men from both groups talked about considering cost along with other influences when choosing foods. Consistently, the interplay between cost, convenience, taste and healthfulness of foods was considered together before a choice was made. Among men from both groups, those who prioritised health tended to consider cost as a secondary influence after health, followed by convenience, with taste being less important; among those who did not prioritise health, cost and convenience were more important than health and taste considerations. "Probably convenience, cost and health would be the main three [influences to consider when choosing lunch] for me. It's just with my work and home life, [having a] schedule where we're home, with the little one at lunch time [and] she's having a sleep during my lunch [I choose what is convenient], and then other times cost. It's more cost effective for me to take [my lunch to work with me], something that I like to eat rather than have to pay $8 for a salad roll when I can make one and bring one from home and don't have to go looking for it as well." Tertiary educated man. "[Food] definitely has to be filling because the price of food these days out is usually expensive. Definitely filling... You need to be content. You don't want to have one hot dog and go, 'Gee, I'm still hungry.' At the end of the day you might get to a place and there's only two options [available]. So you look at that and convenience, what's easy, what's simple. Price does come into it. Again, it's hard to judge because everything that you buy these days is pricey anyway." Non-tertiary educated man. --- Discussion The present investigation aimed to examine potential explanations of socioeconomic differences in men's eating behaviours by qualitatively exploring influences on eating among men of tertiary and non-tertiary education levels. Salient themes among men from both education groups included influences from intrapersonal, social, and environmental domains. Influences more predominant among tertiary educated men included having more advanced food-related skills but relatively less involvement in food-related tasks compared with non-tertiary educated men; partner/spouse support for healthy eating; access to healthy foods; and views relating to food cost. Prominent influences among men with non-tertiary education levels included having limited cooking skills (e.g. being able to prepare simple dishes with few steps and uncomplicated techniques) but more frequent involvement in food-related tasks, and perceiving that they had limited nutrition knowledge compared with tertiary educated men. These men also more often identified that no-one influenced their diet; they had mobile worksites; and adhered to a food budget. Neither group perceived food preparation or healthy eating to be at odds with the concept of masculinity, a finding which diverges from previous studies showing that men, irrespective of education level or occupation, considered healthy eating to be feminine [21,54,55].
It may be that with increasing global recognition of the importance of diet for chronic disease prevention, eating for good health has become more acceptable and normative among men since those earlier studies were published. Men's perceptions about masculinity described in the present investigation may also be attributed to workforce and societal changes in women's careers, with fewer men being the family's primary income provider, and fewer women staying home to perform all food-related tasks than previously. Further, the majority of participants in the present investigation were aged 45 years or older, and may have greater awareness of the importance of health behaviours as they age and face increased risk of diet-related disease. When discussion about food-related tasks and skills was examined, tertiary educated men's cooking skills were more developed, but they had less involvement in food-related tasks than non-tertiary educated men, who had more limited cooking skills but regular involvement in food-related tasks. These findings correspond with those reported previously. For example, low income US men were nearly three times more likely to be involved in meal planning and preparation compared to their wealthier counterparts [56], and Norwegian men working in blue collar occupations (carpenters) were more likely to share food shopping and preparation with their partner/spouse compared to men in white collar occupations (engineers) [57]. Consistent with our findings regarding education level and cooking skills, when self-described cooking skills were compared between Swiss men, those with high education levels had more elaborate cooking skills than less educated men [58]. Social influences on men's eating behaviours included those in their family unit (i.e. partner/spouse, and/or children), or, as for several non-tertiary educated men, no other individuals. Partner/spousal support for healthy eating was recognised as important by tertiary educated men in our study, but not among those with non-tertiary education. Conversely, low income British men previously identified female figures (e.g. spouses/partners, mothers, grandmothers) as positive influences on their eating behaviours [59]. Similarly, Dutch men with lower vocational education or below stated they would eat healthfully if their spouse/partner did [60]. A previous Australian nutrition and physical activity intervention incorporating social support by partners resulted in significant decreases in total and saturated fat consumption, and significant increases in fibre intake among men and women [61], implying that greater social support from spouses/partners would encourage men to eat more healthfully. It is unclear why our findings diverged from these previous studies; however, it may simply be a function of studying different samples. Fathers from both education groups acknowledged the importance of role-modelling healthy eating for their children, and how this encouraged their own healthy eating. Previous research showed that Australian children's total fruit consumption was positively associated with that of their father [62], and thus supports observations in the present investigation. That some non-tertiary educated men in the present investigation thought no-one influenced their diet was novel, and contradicts previous research suggesting that social support for healthy eating encouraged less educated or low income men to adhere to healthier eating behaviours [59,60].
On balance, findings from the present investigation and previous research suggest that role-modelling and social support are important factors for supporting men to eat healthfully, and have the potential to be powerful mechanisms through which improvements in men's diets could be achieved if incorporated into future nutrition promotion initiatives, for example, by engaging men along with their partners in intervention strategies such as nutrition education and cooking classes. Tertiary educated men in our study considered healthy foods to be expensive; however, although non-tertiary educated men reported having to adhere to a food budget, they did not generally describe healthy foods as expensive. One potential explanation for this paradoxical finding is that only six of the non-tertiary educated men had low incomes, so most may have been able to afford healthy foods. However, previous research among socioeconomically disadvantaged men showed they did not consider healthy foods prohibitively expensive [59,60]. The present investigation also revealed that men chose foods by considering a number of influences in conjunction at multiple socioecological levels (e.g. cost, taste). The observed interplay between influences on men's eating behaviours implies multiple factors shape men's dietary behaviours. It also suggests that employing a qualitative approach to explore influences on men's eating behaviours across the domains of the social ecological model in unison, as in the present investigation, is advantageous. This can yield a deeper understanding of how influences across domains interact and can be utilised in future to further inform research and interventions aimed at improving men's eating behaviours. Factors identified as potential influences on socioeconomic inequalities in men's diets in this study need confirmation in larger samples using quantitative methods. Acknowledging this, the present investigation has elucidated key levers that could, if confirmed, be targeted in initiatives aimed at reducing inequalities in eating behaviours, in turn ameliorating the socioeconomic 'gap' and adverse health and economic outcomes associated with these inequalities. For example, strategies to promote healthy eating among non-tertiary educated men could focus on developing greater nutrition knowledge, improving cooking skills, identifying key social supports for healthy eating, and providing skills and strategies to purchase healthy foods, particularly whilst at work, whether at a fixed or mobile worksite, and on a budget. Strategies that could support tertiary educated men to eat healthily could include promoting greater involvement in food-related tasks and education about choosing low cost healthy foods. Previous programs incorporating some strategies identified above have successfully promoted healthy eating among women and men [63] including those experiencing socioeconomic disadvantage [64]. However, given challenges in engaging men in such programs [65], policy and practice should not only focus on developing nutrition promotion initiatives aimed at improving men's diet that are custom-made to specific socioeconomic groups, but also incorporate specific tailoring to engage men. Study limitations should be acknowledged. Participating men may have been more interested in nutrition and health than non-participants, resulting in possible participation bias. Transferability of findings may be limited by a single measure of SEP being used to define the sample.
Almost all participating men were employed, and had professional occupations; and only half of non-tertiary educated men had low incomes. More sensitive measures of
education (beyond the binary categorisation applied in the present investigation) could be considered in future research. Further, education is only one of many possible measures of SEP; SEP is better characterised by considering multiple factors, such as income, education and occupation, simultaneously rather than singly. As no data about men's ethnicity or culture were gathered in the present investigation, it was not possible to make any observations about possible cultural variations in views between men. Exploring cultural differences in conjunction with socioeconomic differences may be considered in future. Also, as more than half of participants were aged 45-54 years, the generalisability to men of other age groups may be limited. Men who identified as having a partner were not asked to disclose the sex of their partner. It is unclear whether study findings would vary between same-sex and opposite-sex couples, and this is therefore acknowledged as a limitation. Nevertheless, qualitative studies do not intend to focus on general sample representativeness, but rather aim to generate a range of responses and hypotheses for potential follow up in future research [38]. Men may also have provided socially desirable responses, such as stating they had more favourable eating behaviours than in reality; yet participants also identified challenges faced in consuming healthy foods and openly discussed barriers to doing so, suggesting that socially desirable responses were minimised. Further, participants' responses might have been influenced by being interviewed by another male; views presented may have inadvertently been driven by participants' perceptions of shared masculine identity with, or reciprocal enactment of masculinity by, the male interviewer, resulting in a more idealised cultural notion of masculinity [41]. However, as this was not reflected in the responses observed (e.g.
healthy eating was not perceived to be unmasculine), the use of male interviewers here could be interpreted as a strength, as there may have been reciprocity between the interviewer and interviewee, resulting in richer data than may have been gathered by female interviewers [41]. Also, using a one-on-one telephone interview methodology may have reduced some response bias, as participants may have been less affected by cues from facial expressions or perceived social desirability from the researcher (e.g. in face-to-face interviews) or other participants (e.g. in a focus group setting) [66,67]. While using a telephone method also has disadvantages, including lack of visual cues and difficulty building rapport [68], this method was deemed necessary as participants were recruited across a wide geographical area. Finally, data analysis occurred after data collection was complete, and therefore emerging themes could not be checked during the data collection process. Study strengths include the qualitative design, which provided in-depth, comprehensive insights into socioeconomic differences in influences on men's eating behaviours, with perspectives provided by men living in two regions of Australia, drawn from different educational strata. A further notable strength of the study is that it provided unique insights into men's eating behaviours overall, irrespective of SEP. --- Conclusions To conclude, the present investigation provided insights into individual, social and environmental influences on the eating behaviours of men with divergent education levels, expanding the knowledge base around this important topic. Key potential drivers of socioeconomic inequalities in men's eating behaviours were identified, with potential to inform novel strategies to encourage men to eat healthfully. Future quantitative research is required to examine how factors identified in the present investigation are associated with men's dietary intakes across socioeconomic strata; how they might explain socioeconomic differences in men's diets; and the feasibility of adopting various strategies to support healthy eating among men in different socioeconomic groups. --- Availability of data and materials The dataset generated and analysed during the present investigation is not publicly available due to ethics requirements to maintain confidentiality but is available from the corresponding author on reasonable request. --- Additional file Additional file 1: Table S1. Semi-structured interview questions investigating influences on men's eating behaviours. Summary of semi-structured interview questions used in the present investigation. (DOCX 27 kb) --- Abbreviation SEP: Socioeconomic position --- Authors' contributions DC, LT, and KB designed the research; DC, LT, DLO, PJM, FJvL, and KB developed measures; LDS performed data analyses; LDS, DC, DLO and KB drafted the manuscript; all authors contributed to revising the manuscript; LDS had primary responsibility for final content. All authors read and approved the final manuscript. --- Ethics approval and consent to participate The study was approved by the Deakin University Faculty of Health Human Ethics Advisory Group (HEAG-H; approval HEAG-H 95_2015). All men provided informed, verbal consent to participate. The ethics committee approved the procedure for verbal consent, and waived the requirement for written consent to reduce the participant burden associated with obtaining consent in written form.
--- Consent for publication Not applicable --- Competing interests The authors declare that they have no competing interests. --- Publisher's Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Background: Men of low socioeconomic position (SEP) are less likely than those of higher SEP to consume fruits and vegetables, and more likely to eat processed discretionary foods. Education level is a widely used marker of SEP. Few studies have explored determinants of socioeconomic inequalities in men's eating behaviours. The present study aimed to explore intrapersonal, social and environmental factors potentially contributing to educational inequalities in men's eating behaviour. Methods: Thirty Australian men aged 18-60 years (15 each with tertiary or non-tertiary education) from two large metropolitan sites (Melbourne, Victoria; and Newcastle, New South Wales) participated in qualitative, semi-structured, one-on-one telephone interviews about their perceptions of influences on their and other men's eating behaviours. The social ecological model informed interview question development, and data were examined using abductive thematic analysis. Results: Themes equally salient across tertiary and non-tertiary educated groups included attitudes about masculinity; nutrition knowledge and awareness; 'moralising' consumption of certain foods; the influence of children on eating; availability of healthy foods; convenience; and the interplay between cost, convenience, taste and healthfulness when choosing foods. More prominent influences among tertiary educated men included using advanced cooking skills but having relatively infrequent involvement in other food-related tasks; the influence of partner/spouse support on eating; access to healthy food; and cost. More predominant influences among non-tertiary educated men included having fewer cooking skills but frequent involvement in food-related tasks; identifying that 'no-one' influenced their diet; having mobile worksites; and adhering to food budgets. Conclusions: This study identified key similarities and differences in perceived influences on eating behaviours among men with lower and higher education levels. Further research is needed to determine the extent to which such influences explain socioeconomic variations in men's dietary intakes, and to identify feasible strategies that might support healthy eating among men in different socioeconomic groups.
Introduction Although Asian American adolescents are commonly perceived to be model minorities, there has been a growing concern about delinquent behaviors in this group. Indeed, studies have found that Asian American adolescents are at least as likely to engage in delinquency (e.g., graffiti painting, shoplifting or stealing a car) as their European American counterparts (Choi and Lahey 2006; Willgerodt and Thompson 2006). The literature on this topic suggests that Asian American adolescents' delinquent behaviors are tied to the challenges of adapting to life in the US, such as dealing with family and peer relationships in potentially conflicting mainstream and heritage cultures (Le 2002). Thus, it is important to consider the psychosocial predictors of delinquency, such as acculturation, in order to inform future intervention efforts. Acculturation refers to a process through which immigrants gradually adapt their language, behaviors, beliefs, and/or values as a result of contact with the mainstream culture (Yoon et al. 2011). A significant body of work has shown that discrepancy in acculturation levels between parents and children is a significant risk factor for child maladjustment, as indicated by decreased academic performance, depression, and delinquency (Costigan and Dokis 2006a; Kim et al. 2009; Unger et al. 2009). Longitudinal research on this link's underlying mechanism, however, is limited. Also limited are studies examining within-family variations in the effects of parent-child acculturation discrepancy on child maladjustment. The present study explores how parents' knowledge of children's daily experiences (as perceived by the adolescents) and adolescents' association with deviant peers, two important constructs related to delinquency, operate sequentially to mediate the relationship between parent-child acculturation discrepancy and adolescent delinquency in an understudied population of adolescents in Chinese immigrant families. Within each family, the mediating pathway is tested separately for two groups of parent-adolescent dyads: those that are more discrepant in their acculturation levels, and those that are less discrepant. --- Parent-Child Acculturation Discrepancy as a Risk Factor for Adolescent Delinquency Acculturation is a bi-dimensional construct, consisting of orientations toward two cultures, heritage and mainstream, which are independent from each other (Ryder et al. 2000). Children of immigrants tend to be more acculturated to the mainstream culture, while immigrant parents tend to be more oriented toward their heritage culture (Portes and Rumbaut 1996). Although the alternate scenario occurs with less frequency, some immigrant parents are more acculturated to the mainstream culture, while their children are more oriented toward the parents' heritage culture (e.g., Lau et al. 2005). Regardless of direction, however, discrepancies in family members' acculturation levels have been linked to externalizing behaviors in children, such as substance use in Latino youth (Unger et al. 2009), conduct problems in Mexican youth (Lau et al. 2005), and violence in Asian American youth (Le and Stockdale 2008). This might be due to the fact that as long as parents and children have discrepant beliefs, values and behaviors, family functions are likely to be disrupted. Birman (2006) found that parent-child acculturation discrepancy leads to family disagreement regardless of the direction of discrepancy.
Therefore, although the current study controls for the direction of discrepancy, any form of acculturation discrepancy is considered to have a similar effect on adolescent adjustment. One limitation of previous studies on this topic is their tendency to rely on concurrent data and to examine only direct correlational relationships between acculturation discrepancy and adolescent delinquency. By using longitudinal data, the current study takes into account the temporal ordering of variables to test the long-term effect of parent-child acculturation discrepancy on youth delinquency and to explore in greater depth the underlying mechanisms of this relationship. --- Parental Knowledge and Deviant Peers as Potential Mediators Parent-child acculturation discrepancy in immigrant families disrupts family functioning by increasing the incidence of miscommunication and misunderstanding (Hwang 2006). Theories on communication have highlighted the disruptive effect of not sharing beliefs, values and behaviors; people with divergent points of view can experience difficulty gaining information from each other (Berger and Calabrese 1975). Similarly, parents and children who hold culturally discrepant beliefs, values and behaviors may be discouraged from communicating and interacting effectively. Therefore, adolescents may come to feel that their parents do not know or understand their daily activities, whereabouts and companions. The extant literature does not directly examine the link between parent-child acculturation discrepancy and perceived parental knowledge. However, previous research does provide some support for such a link. For example, using generational status as a proximal measure of acculturation, Tasopoulos-Chan et al. (2009) found that second generation Chinese American youth more frequently avoided discussing their activities with their parents than did first generation Chinese American youth. In a case study of Chinese immigrant families, Qin (2006) found that both parents and children report that parents do not know about their children's friends and school activities and children do not tell their parents about their experiences due to the fact that parents and children adhere to the heritage and mainstream cultures to different degrees. Weaver and Kim's study (2008) on Chinese American families, which used parent and adolescent reports of parental knowledge as one of several indicators of supportive parenting, suggested that a high level of parent-child acculturation discrepancy may be related to less parental knowledge about children's whereabouts, companions, and bedtime. Therefore, it is possible that in families with a high level of acculturation discrepancy between parents and children, adolescents perceive a lack of parental knowledge. In addition, within a two-parent immigrant family, the child may perceive that the parent who is more culturally discrepant knows less about the child's activities, whereas the other parent, whose acculturation level more closely matches that of the child, knows more. Parental knowledge has been consistently connected to fewer adolescent problem behaviors because such knowledge reduces the likelihood that the child will affiliate with deviant peers (for a review, see Crouter and Head 2002). Although this link between parental knowledge and adolescent delinquency has been demonstrated in the literature, few studies on immigrant families have examined the factors that set this process in motion. 
Parent-child acculturation discrepancy may be an ongoing obstacle for immigrant parents when it comes to obtaining knowledge about their children, which in turn places adolescents at risk for affiliating with deviant peers and engaging in delinquent behaviors. --- Within-Family Variations on the Hypothesized Model Studies on parent-child acculturation discrepancy usually sample only one parent within a family, even though two-parent families are the most common family form in the immigrant population (Hernandez 2004). Examining the effect of parent-child acculturation discrepancy without considering the family context may yield inconclusive results, as the dynamics in each of the parent-child dyads within a family are interdependent (Costigan 2010; Minuchin 1985). For example, an acculturation discrepancy with one parent may not influence family functioning if there is a great deal of tension between the child and the other parent. Within a family, there are likely to be differences between parents in terms of how similar their acculturation level is to that of their child. In fact, Costigan and Dokis (2006b) found that father- and mother-child acculturation discrepancy differed significantly from each other in both Chinese and American orientations. Thus, the two parent-child dyads within a family can be categorized as the dyad with a greater acculturation discrepancy versus the dyad with a smaller acculturation discrepancy. A contrast effect is likely to take place: acculturation discrepancy in the less discrepant parent-child dyad becomes less important, whereas acculturation discrepancy in the more discrepant dyad becomes more problematic. Indeed, literature on social judgment suggests that one's evaluation of a target is based on its relative characteristics, that is, in comparison to the reference, whatever the reference might be (Mussweiler 2003). Therefore, the link between parent-child acculturation discrepancy and adolescents' perceptions of lack of parental knowledge may be stronger among more discrepant parent-child dyads than it is among less discrepant dyads. --- Control Variables Several control variables are theoretically related to the main study variables of parent-child acculturation discrepancy, perceived parental knowledge, adolescents' contact with deviant peers and delinquency. First, the current study controls for family income and parental education level, as the risks posed by parent-child acculturation discrepancy may be especially strong in families in which parents have fewer resources (Portes and Rumbaut 1996). Second, parent gender is controlled, as some parent and child characteristics (e.g., maternal working hours and children's temperament) are more consistently related to paternal knowledge than they are to maternal knowledge (Crouter et al. 1999). Third, empirical studies have found that second- or later-generation adolescents engage in more delinquent behaviors than their first-generation counterparts (Choi and Lahey 2006), and that boys engage in more delinquent behaviors than do girls (Moffitt et al. 2001). In addition, delinquent behaviors tend to increase from early to middle adolescence (Moffitt 1993). Therefore, the present study also includes adolescents' generational status, sex and age as control variables. We also control for the direction of parent-child acculturation discrepancy and whether the more/less discrepant designation remains the same across waves.
--- Present Study The present study is part of a longitudinal project on Chinese immigrant families. Data were collected first when children in these families were in their early adolescent years (middle school), and again when they were in their middle adolescent years (high school). The current study has two aims. First, we examine the proposed mediating pathways separately among the more and less discrepant parent-adolescent dyads. We hypothesize that parent-child acculturation discrepancy will be related to adolescents perceiving that their parents know less about their daily experiences. The perception of less parental knowledge will be associated with adolescents affiliating with more deviant peers, which in turn will be related to adolescents engaging in more delinquent behaviors. Second, we compare model paths between more and less discrepant dyads. We hypothesize that model paths may be stronger for more discrepant dyads than they are for less discrepant dyads. The conceptual model to be tested is shown in Fig. 1, which depicts both concurrent and longitudinal paths between model constructs. Concurrent relationships from parent-child acculturation discrepancies to parental knowledge to adolescent delinquency are tested among all Wave 1 variables as well as among all Wave 2 variables (a paths). Data on deviant peers were collected only at Wave 2, and thus are tested as a Wave 2 construct only. Auto-regressive influences are controlled through paths of the same constructs across waves (b paths). In addition, cross-lagged paths are specified for distinct constructs from Wave 1 to Wave 2 (c paths). Alternative cross-lagged paths (d paths) are also specified to test for a potential alternative causal direction of the proposed relationships in the model. --- Method Participants Participants were drawn from a two-wave longitudinal study conducted in Northern California. Immigrant parents in the current study hail from mainland China, Hong Kong and Taiwan. As the study targets both parents in a family, all families have two foreign-born parents who are married to one another, both of whom participated in the study. The current sample consists of 201 families in the first wave and 183 in the second wave. Adolescents were between 12 and 15 years of age (M = 13.0, SD = 0.71) at Wave 1, and 16-19 years of age (M = 17.0, SD = 0.72) at Wave 2. Females accounted for 61.2% of the adolescent sample at Wave 1 and 60.1% at Wave 2. Median family income was in the range of $30,001-$45,000 at Wave 1 and $45,001-$60,000 at Wave 2. Median education level was high school graduate for both fathers and mothers across waves. --- Procedure At Wave 1, participants were recruited from seven middle schools in major metropolitan areas of Northern California. With the aid of school administrators, Chinese American students were identified, and all eligible families were sent a letter describing the research project. Participants received a packet of questionnaires for the mother, father, and target child in the household. Participants were instructed to complete the questionnaires alone and not to discuss answers with friends and/or family members. They were also instructed to seal their questionnaires in the provided envelopes immediately following completion of their responses. Within approximately 2-3 weeks after sending the questionnaire packet, research assistants visited each school to collect the completed questionnaires during the students' lunch periods.
Of the 47% of families who agreed to participate, 76% returned surveys. Approximately 79% of families participating at Wave 1 completed questionnaires at Wave 2. At each wave, the entire family received nominal compensation ($30 at Wave 1 and $50 at Wave 2) for their participation. Questionnaires were prepared in English and Chinese. The questionnaires were first translated to Chinese and then back-translated to English. Any inconsistencies with the original English version of the scale were resolved by bilingual/bicultural research assistants with careful consideration of culturally appropriate meaning of items. Attrition analyses were conducted to compare whether demographic variables differed between families that participated at only one wave and those that participated at both waves. Only adolescent sex was marginally significantly related to attrition: boys were more likely to have dropped out than girls (χ2(1) = 3.86, p = .051). --- Measures Acculturation: The Vancouver Index of Acculturation follows the bi-dimensional model of acculturation and was developed for use with Chinese Americans (Ryder et al. 2000). Using a scale ranging from (1) "strongly disagree" to (5) "strongly agree," mothers, fathers, and adolescents responded to 10 questions about their American orientation and 10 questions about their Chinese orientation. Questions asked about a range of generic behaviors without listing specific traditions or attitudes (e.g., "I often follow Chinese cultural traditions"). The American orientation items were the same as the Chinese orientation items, except that the word "Chinese" was changed to "American." Only those items that conformed to the common factor structure across informants and waves were used (Kim et al. 2009). Across informants and waves, the internal consistency was high for both orientations (α = .76-.82). Parental Knowledge: Parental knowledge was assessed through a measure adapted from the Iowa Youth and Families Project (Ge et al. 1996). Using a scale ranging from (1) "never" to (5) "always," adolescents rated three items on parents' knowledge of adolescents' daily activities (e.g., "During the day, does your parent know where you are and what you are doing?"). Across waves, the internal consistency was acceptable (α = .62-.74). Deviant Peers: Adolescents reported on their association with deviant peers at Wave 2 only, using an abridged 7-item version of a peer deviance measure previously used with Asian American adolescents (Le and Stockdale 2005). Adolescents rated the proportion of their close friends who had exhibited problem behaviors (e.g., gone joyriding) during the past 6 months using a scale ranging from (1) "almost none" to (5) "almost all." The internal consistency was high (α = .83). Delinquent Behaviors: Delinquent behaviors were assessed through measures adapted from the "rule-breaking behaviors" subscale of the Child Behavior Checklist (Achenbach 2001). One additional item, "is part of a gang," was added. Using a scale ranging from (0) "not true" to (2) "often true or very true," adolescents rated their own problem behaviors during the past 6 months. Two items ("feel guilty after doing something I shouldn't do" and "would rather be with older kids than kids my own age") were dropped from factor analysis due to low factor loading. The internal consistency was between .57 and .60 across waves. Given the low levels of delinquent behaviors reported, each delinquent behavior was dichotomized, such that a score of 0 reflected no delinquent behavior and a score of 1 indicated delinquent behavior, whether occasional or frequent.
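The internal-consistency coefficients reported for each scale follow the standard Cronbach's alpha formula, alpha = k/(k-1) * (1 - sum of item variances / variance of the scale total). A minimal illustrative sketch is given below; the item responses and column names are invented for the example, and the paper does not state which software was used for reliability estimation.

```python
# Illustrative sketch only: the paper reports Cronbach's alpha for each scale but does not
# say how it was computed; the item data and column names below are hypothetical.
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for one scale (rows = respondents, columns = items)."""
    items = items.dropna()
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical responses to the three parental-knowledge items (1 = never ... 5 = always).
knowledge_items = pd.DataFrame({
    "knows_whereabouts": [4, 5, 3, 4, 2, 5],
    "knows_companions":  [4, 4, 3, 5, 2, 4],
    "knows_activities":  [5, 4, 2, 4, 3, 5],
})
print(round(cronbach_alpha(knowledge_items), 2))  # prints 0.87 for this toy data
```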
Control Variables: Fathers and mothers reported on their family income before taxes and highest level of education attained. Family income was assessed using a scale ranging from (1) "below $15,000" to (12) "$165,001 or more." The highest level of education attained by parents was assessed using a scale ranging from (1) "no formal schooling" to (9) "finished graduate degree (e.g., Master's degree)." Adolescents also reported their age, sex, whether they were foreign- or US-born and whether their parents were married to one another. --- Conceptualizing More/Less Discrepant Parent-Child Dyads Acculturation scores of adolescents and parents were first standardized. The parent-child discrepancy score was the absolute value reached by subtracting the standardized parent score from the standardized adolescent score. The discrepancy scores of the two parent-adolescent dyads in the same family were then compared with each other. The dyad with a higher discrepancy score was assigned to the more discrepant group, whereas the dyad with a lower discrepancy score was assigned to the less discrepant group. These designations were done separately for each wave and separately for Chinese and American orientations. For the entire sample, there were slightly more father-adolescent dyads (50.8-54.2%) than mother-adolescent dyads (49.2-45.8%) placed in the more discrepant group for all the designations. This issue was addressed by controlling for parent gender as a covariate in the following analyses. --- Results --- Analyses Plan Data analyses proceeded in three steps. First, we conducted descriptive and correlational analyses for model constructs and control variables. Second, we tested our first hypothesis on the mediating pathway separately among more and less discrepant groups using structural equation modeling. We examined the hypothesized paths depicted in Fig. 1 and the indirect effects from parent-child acculturation discrepancy to adolescent delinquency. Third, we tested our second hypothesis on the difference between more and less discrepant groups. We conducted invariance tests to compare the strength of the model parameters for more and less discrepant dyads. All the steps were conducted separately for Chinese and American orientations. --- Descriptive Statistics and Correlational Analyses Among Model Constructs Table 1 displays the descriptive statistics for the raw scores from participants' original reports. Tables 2 and 3 display the descriptive statistics and correlations among the study variables for models involving Chinese and American orientations, respectively. Consistent with the hypotheses, concurrent relationships and auto-regressive relationships between model constructs are generally significant. One notable exception is that parent-child acculturation discrepancy is significantly correlated with parental knowledge only among the more discrepant parent-adolescent dyads in American orientation.
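The more/less discrepant designation referred to throughout these results follows the standardise, difference and compare procedure described under "Conceptualizing More/Less Discrepant Parent-Child Dyads" above. The sketch below expresses that procedure as a short data-manipulation routine; it is an illustration rather than the authors' code, the table layout and column names are assumed, and the same steps would be repeated per wave and per cultural orientation.

```python
# Minimal sketch of the dyad-designation procedure described above; not the authors' code.
# 'df' is a hypothetical family-level table holding one acculturation orientation
# (Chinese or American) for one wave, one row per family.
import pandas as pd

def assign_discrepancy_groups(df: pd.DataFrame) -> pd.DataFrame:
    """Expects columns: adol_accult, mother_accult, father_accult."""
    out = df.copy()
    # 1. Standardize each acculturation score across the sample.
    for col in ["adol_accult", "mother_accult", "father_accult"]:
        out[col + "_z"] = (out[col] - out[col].mean()) / out[col].std(ddof=1)
    # 2. Absolute standardized parent-child discrepancy for each dyad.
    out["mother_disc"] = (out["adol_accult_z"] - out["mother_accult_z"]).abs()
    out["father_disc"] = (out["adol_accult_z"] - out["father_accult_z"]).abs()
    # 3. Within each family, the dyad with the larger discrepancy is the 'more discrepant' one.
    out["more_discrepant_parent"] = (
        out[["mother_disc", "father_disc"]].idxmax(axis=1).str.replace("_disc", "")
    )
    return out
```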
In addition, only two cross-lagged relationships are significant among the more discrepant parent-adolescent dyads in American orientation: a high level of parent-child acculturation discrepancy at Wave 1 is related to compromised parental knowledge at Wave 2, and a high level of parental knowledge at Wave 1 is significantly related to less contact with deviant peers at Wave 2. A potential alternative cross-lagged relationship emerged (Path d3 in Fig. 1), as adolescent delinquency at Wave 1 is significantly related to deviant peers at Wave 2 for both Chinese and American orientations. This is the only alternative path included in the analyses of the hypothesized models described below. --- Analyses of Hypothesized Models Structural Equation Modeling (SEM) was used to examine the hypothesized model using Mplus 6.11 (Muthen and Muthen 2011). Both concurrent and longitudinal links, as well as direct and indirect effects among the model constructs, were tested simultaneously. Mplus uses the full information maximum likelihood (FIML) estimation method to handle missing data, so that all the available data can be used to estimate model parameters (Muthen and Muthen 2011). Four separate models were tested, separately for more and less discrepant parent-adolescent dyads, for both Chinese and American orientations. For all models, the endogenous variable was adolescent delinquent behaviors, and the mediating variables were parental knowledge and deviant peers. Adolescents' age, sex, and place of birth, as well as family income, parental educational level, the direction of the parent-child acculturation discrepancy, and whether the assignment to the more or less discrepant group switched from Waves 1 to 2, were included in all models as covariates. The model fits are displayed in the last set of rows in Table 4. The four models showed a fair to good fit to the data. Each model explained 9.2-15.9% of the variance in Wave 1 adolescent delinquency, and 43.5-47.7% of the variance in Wave 2 adolescent delinquency. The coefficients and confidence intervals for our hypothesized paths are also shown in the first set of rows in Table 4. All the hypothesized concurrent relationships among parent-child acculturation discrepancies, perceived parental knowledge, adolescents' contact with deviant peers and adolescent delinquency (a paths) are significant in the models for more discrepant dyads in American orientation. In contrast, parent-child acculturation discrepancy is not significantly related to less parental knowledge (Paths a1 and a3) in the models for less discrepant dyads in Chinese or American orientation, nor for more discrepant dyads in Chinese orientation. Auto-regressive influences are generally significant for parent-child acculturation discrepancy and parental knowledge (Paths b1 and b2). However, with the exception of Model 4, the auto-regressive influence of adolescent delinquency (Path b3) is not significant. In addition, with the exception of the significant relationship between W1 delinquency and W2 deviant peer association (Path d3 in all four models), none of the other cross-lagged paths is significant. Indirect effects are shown in the second set of rows in Table 4. Concerning our first hypothesis, on mediating effects, only the models for more discrepant parent-adolescent dyads in American orientation yielded significant indirect effects from parent-child acculturation discrepancy to adolescent delinquency. 
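As an aside on the analytic logic, the three-path indirect effect examined here (discrepancy -> parental knowledge -> deviant peers -> delinquency) can be approximated outside of Mplus by chaining regressions and bootstrapping the product of the path coefficients. The sketch below is a simplified stand-in, not the authors' SEM: covariates, auto-regressive paths, FIML handling of missing data and the dichotomous delinquency items are all omitted, and the variable names are hypothetical.

```python
# Simplified stand-in for the Mplus SEM described above (not the authors' model):
# it chains ordinary regressions for the three-path mediation and bootstraps the
# product of the path coefficients to obtain a percentile confidence interval.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def indirect_effect(df: pd.DataFrame) -> float:
    a = smf.ols("knowledge ~ discrepancy", data=df).fit().params["discrepancy"]
    b = smf.ols("deviant_peers ~ knowledge", data=df).fit().params["knowledge"]
    c = smf.ols("delinquency ~ deviant_peers", data=df).fit().params["deviant_peers"]
    return a * b * c

def bootstrap_ci(df: pd.DataFrame, n_boot: int = 2000, seed: int = 0) -> np.ndarray:
    estimates = [
        indirect_effect(df.sample(len(df), replace=True, random_state=seed + i))
        for i in range(n_boot)
    ]
    return np.percentile(estimates, [2.5, 97.5])  # 95% percentile bootstrap interval
```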
Concurrently, the effect of parent-adolescent acculturation discrepancy on adolescent delinquency was mediated by parental knowledge at Wave 1 (Pathway 1), and by both parental knowledge and contact with deviant peers at Wave 2 (Pathway 2). Longitudinally, the indirect effect of parent-adolescent acculturation discrepancy at Wave 1 on adolescent delinquency at Wave 2 was significant via two pathways. The first pathway was via parental knowledge at both waves and contact with deviant peers at Wave 2 (Pathway 3). The second was via parental knowledge at Wave 1, adolescent delinquency at Wave 1, and contact with deviant peers at Wave 2 (Pathway 4). --- Comparing Models for More and Less Discrepant Parent-Adolescent Dyads Concerning our second hypothesis, on the difference between more and less discrepant parent-child dyads, invariance tests were used to determine whether the model paths (Paths a, b, c and d3) were significantly different between the two groups; these were conducted separately for American and Chinese orientations. For each orientation, data for more and less discrepant dyads were modeled within the same covariance matrix to account for within-family dependence (Benner and Kim 2009). A model was first fitted allowing all structural paths to be freely estimated between more and less discrepant dyads. Individual paths of the structural model were then constrained, one at a time, to determine if they were significantly different across groups. The chi-square test was used to determine whether a more constrained model fitted the data significantly worse than a less constrained one. For American orientation only, invariance tests showed that three paths are stronger in the model for more discrepant parent-adolescent dyads than in the model for less discrepant dyads: the path from parent-child acculturation discrepancy to parental knowledge at Wave 1 (Path a1, χ2(1) = 4.47, p < .05), the path from parental knowledge to adolescent delinquency at Wave 1 (Path a2, χ2(1) = 5.68, p < .05), and the path from parental knowledge to contact with deviant peers at Wave 2 (Path a4, χ2(1) = 7.25, p < .01).
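The significance thresholds attached to these invariance tests follow directly from the chi-square distribution, since each is a chi-square difference with 1 degree of freedom. A quick check using the reported values:

```python
# Quick check of the reported invariance tests: each is a chi-square difference with 1 df.
from scipy.stats import chi2

for path, delta_chi2 in [("a1", 4.47), ("a2", 5.68), ("a4", 7.25)]:
    p = chi2.sf(delta_chi2, df=1)  # survival function = upper-tail p-value
    print(f"Path {path}: delta chi2 = {delta_chi2}, p = {p:.3f}")
# Approximate output: a1 p = 0.034, a2 p = 0.017, a4 p = 0.007,
# consistent with the reported p < .05, p < .05 and p < .01.
```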
--- Discussion Parent-child acculturation discrepancy has mostly been studied using cross-sectional data from the adolescent and just one parent in the family, usually the mother (Costigan 2010). The current study used longitudinal data to examine parent-child acculturation discrepancy as an ongoing risk factor for adolescent delinquency, and explored possible variations of this effect between more and less discrepant parent-adolescent dyads in terms of how their different acculturation levels might affect the functions within each family group. The mediating mechanism of this relationship was examined both concurrently and longitudinally. For more discrepant parent-adolescent dyads in American orientation, the relationship between parent-child acculturation discrepancy and adolescent delinquency is mediated by adolescents' perception of parental knowledge and contact with deviant peers, both concurrently and longitudinally. In the current study, parent-child discrepancies in American orientation, but not Chinese orientation, are indirectly related to adolescent delinquency. The extant literature has been inconsistent on the question of whether orientations towards the mainstream and heritage cultures influence delinquent behaviors in adolescents from immigrant families. For example, Le and Stockdale (2005) found that Asian American adolescents' endorsement of both orientations was related to their delinquent behaviors. In comparison, Juang and Nguyen (2009) found that adolescents' misconduct (i.e., damaging school property, threatening a teacher or hurting a classmate) was not significantly related to orientations towards either American or Chinese culture, but instead to specific cultural values (i.e., autonomy expectations). This finding suggests that the effects of acculturation-related factors on adolescent adjustment may vary according to the specific area being examined. It is possible that only a parent-child discrepancy in American orientation affects adolescent delinquency through the mediating pathway of parental knowledge and contact with deviant peers, whereas a discrepancy in Chinese orientation affects adolescent adjustment through other mediating mechanisms. This possibility seems especially likely considering that the construct measured in the current study, namely parental knowledge about children's daily experiences, is more likely to be associated with the mainstream culture than with the heritage culture. Future studies are needed to explore whether and how parent-child discrepancy in Chinese orientation may be related to adolescent delinquency in Chinese immigrant families. The existing literature considers lack of parental knowledge, especially adolescents' perceptions that their parents lack knowledge, to be a risk factor for adolescent delinquency (Crouter and Head 2002). The current study adds to this literature by identifying parent-child acculturation discrepancy as one possible origin of this particular risk factor in immigrant families. Further, this link between parent-child acculturation discrepancy and parental knowledge may take different forms, depending on the various dynamics operating within a given family. In our study, we compared the more and less discrepant parent-adolescent dyads within each family. Generally, the parent who is more discrepant from the child in orientation towards the mainstream culture presents more of a risk factor than does the less discrepant parent. Only among dyads in the more discrepant group is parent-child acculturation discrepancy related to deterioration in adolescents' perceptions of parental knowledge, which in turn is linked to more adolescent delinquency. Studies have found that parental knowledge comes from different sources, such as parents' active surveillance and adolescents' voluntary disclosures (Stattin and Kerr 2000). Studies measuring perceived parental knowledge (Soenens et al. 2006) also support this notion. It is possible that both processes, surveillance and disclosure, are compromised for the more discrepant parent-child dyad. In comparison, the less discrepant parent may assume more responsibility for actively tracking the child's activities, because he or she relates to the child better. For their part, adolescents may be more willing to share their daily experiences with their less discrepant parent, as they may feel that this parent understands them. An interesting finding in the current study is that adolescent delinquency in early adolescence is consistently related to contact with deviant peers in middle adolescence, but not as consistently to delinquency in middle adolescence. In fact, contact with deviant peers during middle adolescence seems to bridge delinquency in early and middle adolescence.
This result suggests that it may be ideal to time an intervention for reducing delinquency before early adolescence, when it may be most effective at reducing the long-term consequences of problem behaviors. Early onset of delinquent behaviors is a sign of a life-course-persistent pattern, whereas adolescence-limited delinquent behaviors are more likely to exist only in middle adolescence (Moffitt 1993). As the life-course-persistent pattern of delinquency clearly poses more of a developmental risk, it is important to develop early intervention programs aimed at preventing this persistent pattern from developing. --- Implications The current study demonstrates that acculturation discrepancy in parent-child dyads is implicated in child maladjustment. Moreover, it suggests that the parent who is more discrepant poses the greater risk to child outcomes. Intervention programs usually target mothers, or whichever parent in a family signs up for the program (Ying 1999). However, this may not be a good strategy if the participating parent happens to be the less discrepant parent in the family. Rather, it may be more fruitful for future interventions to use a baseline measure to identify and target the parent whose acculturation level is more discrepant from that of the child. The current study also identifies parental knowledge as a proximal mediator of the relationship between parent-child discrepancy in American orientation and adolescent delinquency. A lack of shared values, beliefs and activities may create misunderstanding and precipitate disagreements among family members. Intervention programs need to facilitate effective communication by providing approaches such as active monitoring and encouraging adolescents' disclosure. --- Limitations There are some limitations of the current study. First, families in which only one parent participated, including all single-parent families in the project, were not included in the sample. Thus, our findings may not be applicable to those families. In a similar vein, given the low participation rate, future studies with different samples are needed to examine whether the current findings can be replicated. Second, there are few significant cross-lagged relationships between study variables. This lack of significance may be attributed to the gap of 4 years that occurred between data collection waves. Third, although the direction of parent-child acculturation discrepancy was included as a covariate, the current study could not compare model parameters between families with different discrepancy directions. Future studies with larger sample sizes are needed to examine whether the direction of the parent-child acculturation discrepancy has an effect on how it impacts child adjustment. Finally, the current study assumes that a high level of parental knowledge and a low level of adolescent delinquency are adaptive. It is possible, however, that an extremely high level of parental knowledge indicates an overly controlling parenting style, and an extremely low level of adolescent delinquency indicates poor peer relationships, both of which are indicators of adolescent maladjustment. Future studies are needed to examine how various levels of parental knowledge and adolescent delinquency are related to adolescents' long-term developmental outcomes.
--- Conclusion The current study explored the possible mediating mechanism of the relationship between parent-child acculturation discrepancy and adolescent delinquency, and compared the mediating pathways between more and less discrepant parent-adolescent dyads in Chinese immigrant families. For parent-adolescent dyads more discrepant in American orientation, acculturation discrepancy in early adolescence is an ongoing risk factor for adolescents' engagement in delinquent behaviors, in both early and middle adolescence. These results suggest that future intervention programs need to include the parent whose acculturation level is more discrepant from that of the child. Facilitating better communication between parents and children, thereby increasing parental knowledge during early adolescence, may be the most promising strategy for interventions aiming to reduce adolescents' affiliation with deviant peers and subsequent engagement in delinquent behaviors.
Figure caption: Conceptual longitudinal model linking parent-child acculturation discrepancy, parental knowledge, deviant peers, and adolescent delinquency in Chinese immigrant families. a paths: concurrent relationships between model constructs within Wave 1 or Wave 2; b paths: auto-regressive relationships between the same constructs across Wave 1 and Wave 2; c paths: cross-lagged relationships between distinct constructs from Wave 1 to Wave 2; d paths: alternative cross-lagged relationships between distinct constructs from Wave 1 to Wave 2. Table captions: Descriptive statistics for raw scores of study variables; descriptive statistics and correlations among study variables in the Chinese orientation models; descriptive statistics and correlations among study variables in the American orientation models.
Parent-child acculturation discrepancy has been considered a risk factor for child maladjustment. The current study examined parent-child acculturation discrepancy as an ongoing risk factor for delinquency, through the mediating pathway of parental knowledge of the child's daily experiences relating to contact with deviant peers. Participants were drawn from a longitudinal project with 4 years between data collection waves: 201 Chinese immigrant families participated at Wave 1 (123 girls and 78 boys) and 183 families (110 girls and 73 boys) participated at Wave 2. Based on the absolute difference in acculturation levels (tested separately for Chinese and American orientations) between adolescents and parents, one parent in each family was assigned to the "more discrepant" group of parent-child dyads, and the other parent was assigned to the "less discrepant" group of parent-child dyads. To explore possible within-family variations, the mediating pathways were tested separately among the more and less discrepant groups. Structural equation modeling showed that the proposed mediating pathways were significant only among the more discrepant parent-adolescent dyads in American orientation. Among these dyads, a high level of parent-child acculturation discrepancy is related to adolescent perceptions of less parental knowledge, which is related to adolescents having more contact with deviant peers, which in turn leads to more adolescent delinquency. This mediating pathway is significant concurrently, within early and middle adolescence, and longitudinally, from early to middle adolescence. These findings illuminate some of the dynamics in the more culturally discrepant parent-child dyad in a family and highlight the importance of examining parent-child acculturation discrepancy within family systems.
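To make the within-family grouping step described in the summary above concrete, the sketch below assigns each family's two parent-adolescent dyads to "more discrepant" and "less discrepant" groups by the absolute difference between parent and adolescent acculturation scores for a single orientation. It is an illustrative sketch only: the data frame, column names and scores are hypothetical and are not drawn from the study's data.

```python
# Illustrative sketch (hypothetical data): assign each family's two
# parent-adolescent dyads to "more" vs. "less" discrepant groups based on the
# absolute parent-child difference in acculturation scores for one orientation.
# The study repeats this separately for American and Chinese orientations.
import pandas as pd

dyads = pd.DataFrame({
    "family_id":      [1, 1, 2, 2],
    "parent":         ["mother", "father", "mother", "father"],
    "parent_acc":     [2.1, 3.4, 4.0, 2.8],
    "adolescent_acc": [3.9, 3.9, 3.1, 3.1],
})

# Absolute parent-child discrepancy for each dyad.
dyads["abs_discrepancy"] = (dyads["parent_acc"] - dyads["adolescent_acc"]).abs()

# Within each family, the parent with the larger absolute discrepancy forms the
# "more discrepant" dyad and the other parent the "less discrepant" dyad
# (a tie-breaking rule would be needed in practice; ties are ignored here).
is_max = dyads.groupby("family_id")["abs_discrepancy"].transform("max") == dyads["abs_discrepancy"]
dyads["group"] = is_max.map({True: "more discrepant", False: "less discrepant"})

print(dyads)
```

The mediation models described above are then fitted separately within the "more discrepant" and "less discrepant" groups.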
Introduction HIV pre-exposure prophylaxis (PrEP) has transformed the landscape of HIV prevention. It forms part of a series of behavioural and biomedical interventions of varying levels of efficacy that have disrupted the normative power of condoms in HIV prevention discourse from the 1990s onward. Other interventions in this series have included negotiated safety [1], postexposure prophylaxis [2], strategic positioning [3], serosorting [4] and treatment-as-prevention [5]. PrEP is highly effective at preventing HIV [6], and has the advantages of not being coitally dependent and providing receptive sexual partners with an intervention they can use without requiring the insertive partner's cooperation [7]. Despite these advantages, the use of PrEP in populations of gay, bisexual and other men who have sex with men (GBMSM) was initially problematised by some prominent figures in the United States gay community when it was first approved there. Michael Weinstein, president of the AIDS Healthcare Foundation, dismissed PrEP as a 'party drug'; Larry Kramer, founding member of both the Gay Men's Health Crisis and activist organisation ACT-UP, described taking a pill to prevent HIV rather than using a condom as 'cowardly'. Freelancer David Duran wrote disapprovingly that PrEP gave 'gay men who prefer to engage in unsafe practices' a way to 'bareback' without having to worry about HIV in a piece memorably titled 'Truvada whores?', referencing the brand name of the medication used for PrEP [8]. (Ironically, PrEP advocates then adopted 'Truvada whore' as a cultural meme promoting PrEP use.) The stark community divisions between those advocating for PrEP and those warning that it could do more harm than good signal the cultural significance of condom-protected sex as normative in HIV prevention discourses for GBMSM, despite the raft of other interventions listed above that had to some extent already displaced condoms [1][2][3][4]. 'Safe sex' (or 'safe(r) sex') was a concept generated from the very earliest days of the HIV epidemic. The development of safe sex culture-which included, but was not confined to, condom use-focused on articulating and promulgating menus of sex practices that enabled rich expression and enjoyment of sex while precluding HIV transmission between partners. There are examples of 'safe sex' materials developed even prior to there being certainty that a sexually transmissible virus was the cause of AIDS [9]. Taking collective responsibility for sexual health and the avoidance of HIV transmission among gay men was described by Weeks as a concrete exercise in sexual citizenship, and he suggested that men who failed to do this risked moral pariah status [10]. For many years in Australia, the term 'safe sex' was synonymous with condom use, even though other forms of safe sex were articulated and practiced [1]. Maintaining high prevalence of condom use was deemed critical to controlling HIV incidence by community-based organisations and public health experts alike [11][12][13]. By 2010, however, there was emerging evidence of the effectiveness of new antiretroviral strategies to reduce or prevent HIV transmission to sexual partners, either by suppressing the viral loads of people living with HIV, or through the use of antiretroviral drugs as prophylaxis by HIV negative people-PrEP [14].
One of the normative challenges that PrEP brought to HIV prevention discourse was that it required individuals to acknowledge a risk (condomless or 'bareback' sex) that gay men had been told to avoid for three decades, outside of relationship sex [15]. Although the efficacy of treatment-as-prevention also allowed for consideration of 'bareback' sex, it was premised upon the use of antiretroviral drugs in people living with HIV. Suppression of the infective agent is a time-honoured strategy in infectious disease control and is less contentious in that context, though in practice some HIV negative men remain nervous despite the strong evidence of effectiveness [16]. With PrEP, the focus shifted to the routine use of antiretroviral drugs in HIV negative people potentially for protracted periods of time, an approach analogous to malaria prevention in travellers but on a far greater scale. This shift was described by Thomann as 'the pharmaceuticalisation of the responsible sexual subject' and is connected to 'end of AIDS' discourses that posit HIV prevention as a medical and technological problem [15]. Recent research has also shown both that taking PrEP is associated with lowered anxiety in gay and bisexual men who would otherwise be at risk of HIV [17][18][19], and that clinicians will prescribe PrEP to gay men where there is no clear clinical risk of HIV acquisition, speculating that there might be undisclosed risk factors [20]. To date there has been considerable qualitative research on the willingness of GBMSM to use PrEP, its acceptability [21][22][23][24][25][26], and community perceptions of its value in HIV prevention [27]. Research in Canada and the U.S. has also explored the impact of PrEP with respect to sexual health, communication and behaviour, and social and community issues among gay and bisexual men [19,28]. However, there has been little Australian research that explores the meaning of PrEP and how men in gay male sex cultures see it shaping evolving norms of 'safe sex'. This study investigated perceptions of PrEP and conceptualisations of 'safe sex' during the period of incrementally increasing access in Australia (2015-2018), drawing predominantly upon the perspectives of GBMSM, and also on those of stakeholders comprising HIV community staff and healthcare providers. At the beginning of the study, PrEP was only available through very limited trials and through personal importation. Access changed dramatically in March 2016, when large-scale implementation studies commenced, with more than 10,000 GBMSM enrolled in New South Wales (NSW) [29]. In April 2018, subsidised access under Australia's Pharmaceutical Benefits Scheme made PrEP available nationwide at a standard, subsidised price [30]. Thus, this study spanned a period of rapid change in PrEP access and uptake, with data collection beginning in October 2015 and continuing until December 2018. The study aimed to explore how PrEP was impacting on sex cultures-how GBMSM saw PrEP as affecting their sex practices, as well as perspectives on how PrEP affected existing cultural norms for HIV prevention. --- Methods The Sydney In-depth PrEP study (SIn-PrEP) was a qualitative study that explored evolving norms of 'safe sex' during the introduction of PrEP in Australia. SIn-PrEP drew on participatory action research methods with respect to data collection, analysis and communication of results [31]. Prior to data collection, a reference group was established to guide the research.
This comprised representatives from the local LGBTIQ, HIV positive and transgender community organisations, and two researchers with extensive experience in research on gay male sexuality. This group met regularly in the early period of data collection to discuss initial findings and developments in PrEP access. As data collection progressed, the first author met periodically with representatives of the local community organisation ACON (formerly known as the AIDS Council of New South Wales), to discuss how findings could inform health promotion campaigns under development, and participated in information sessions with the community organisation to discuss implications. Study findings were reported to and discussed with community organisations prior to presentation or publication so that findings could inform development of health promotion campaigns. --- Recruitment Data collection commenced in October 2015 and ceased in December 2018. Study participants were drawn from three distinct populations-sexually active GBMSM, clinicians involved in PrEP prescribing, and staff working in HIV and LGBTIQ+ community organisations-each with different recruitment strategies. Sexually active GBMSM community participants (n = 31) (hereafter 'gay community participants', as these participants identified as gay) were recruited primarily through the social media channel of a local community-based LGBTQ+ organisation, ACON, supplemented by fliers distributed at gay community organisations, events, venues and word of mouth. This group included HIV negative men taking PrEP, HIV negative men who chose not to take PrEP and men living with HIV. Both cis and trans identified gay men were eligible for the study, and participants were recruited from Sydney, NSW. In 2016 and 2017, there was further targeted recruitment through Kirby Institute research databases, purposively inviting transgender gay men and gay men on PrEP access studies who reported they had ceased taking PrEP. Only people who had given permission to be contacted for research participation opportunities were contacted using this method. Clinicians from public sexual health clinics and general practice with high caseloads of GBMSM (n = 6) were purposively selected. Community-based staff (n = 4) were recruited through invitations to major LGBTIQ+ organisations, which passed the invitations on to key personnel who then decided whether to participate. Data collection. Data were collected in the form of in-depth semi-structured interviews for clinicians (n = 6) and gay community participants (n = 31), and a focus group of community-based professionals (n = 4). Interviews were audio recorded and transcribed verbatim by a professional transcriber. Interviews lasted approximately 60 minutes, while the focus group ran for 90 minutes. Interviews were usually held face-to-face, although three gay community participants were interviewed by phone. Participants in the gay community group chose their own pseudonyms. Health care providers were assigned numbers (1-6), as were focus group participants. Gay community participants were interviewed individually as they were discussing very personal issues. Data were collected from community-based professionals in a focus group, as this allowed for a rich discussion in which participants built on each other's views and compared experiences, without privacy risks, as they were not discussing their own private behaviour.
All data were collected by the first author, who is a queer-identified woman with extensive networks in the LGBTIQ+ and HIV communities. Domains of interviews and focus groups. Gay community participants were asked questions about how and why they saw PrEP as relevant to their sexual lives, whether or how it was changing their sexual lives, and how they rated the importance of sex in their lives. HIV negative men were also asked about the importance of remaining HIV negative, in addition to other questions about access to PrEP and adherence for those taking PrEP. Health care providers and community-based professionals were asked about emerging issues in the provision of PrEP, their views on optimal implementation and the challenges of health communication. Community-based professionals in focus groups were asked about the impacts of PrEP on 'safe sex' health promotion, complexities of access and observed changes in community norms. --- Research ethics This study was approved by the University of New South Wales Human Research Ethics Committee (approval number HC15305) and the ACON Research Ethics Review Committee (RERC 2015/08). All participants who took part in face-to-face interviews or focus groups provided written informed consent. Participants interviewed by telephone provided formal verbal informed consent. Participants were not remunerated for their participation. --- Analysis Transcripts from interviews and the focus group were reviewed and then coded using NVivo (v11-12) software. Coding was initially inductive and comprised descriptive (e.g. 'condom use-kills erection') and conceptual codes (e.g. 'citizenship'). Codes were reviewed and mapped in relation to each other, and developed into key themes by the first author, in discussion with reference group members, study investigators and stakeholders, and at formal presentations of preliminary findings. Descriptive themes (e.g. 'STI testing and communication' and 'advocating/explaining PrEP through social media') were further compared and analysed, leading to higher order concepts (e.g. 'Responsibility and care'), drawing on Braun and Clarke's six-step process of reflexive thematic analysis [32,33]. --- Results A total of 24 HIV negative gay men currently or recently on PrEP, seven gay men who had never taken PrEP (two HIV positive, five HIV negative), and six healthcare providers took part in semi-structured, in-depth interviews. One focus group was conducted with four community HIV sector staff. Two of the HIV negative men currently or recently taking PrEP were transgender and 22 were cisgender. Gay community participants were aged between 18 and 53 years (median 38 years; community-based staff and healthcare providers were not asked their ages). All gay community participants described themselves as sexually active. Many had primary relationship partners or husbands, but also had other regular and/or casual partners. Among those with primary relationship partners, relationship agreements included complete openness, 'don't ask don't tell' agreements, monogamy with exceptions (such as other partners allowed when travelling), playing together (having sex with other partners together) and monogamy. This article draws predominantly on the interview data with gay community participants. Three major cross-cutting themes were identified.
'Changing norms and clashing symbols', encompassed the decreasing centrality of condoms in risk reduction and participants' responses to that, and has a sub-theme on negotiation where the emergent norms are discussed in the specific context of sexual negotiation. 'Stigma' encompassed both stigma related to HIV and stigma related to not taking PrEP. 'Responsibility and care', comprised participants' accounts of their views of activities as seemingly disparate as regular STI testing, promotion of PrEP and/or other risk reduction in their social circles, and contribution to research, which were nevertheless linked conceptually in participants' discourse to 'giving back to' or promoting the wellbeing of their communities. --- Changing norms and clashing symbols Participants across all three groups strongly endorsed the idea that established norms of'safe sex' had changed, and that condom use was no longer central. Although most of the men in the gay community participant group had been having at least some condomless sex before PrEP, nearly all these men, whether on PrEP or not, reported that their own sexual practice had been affected directly or indirectly by increasing PrEP access. This impact was in the form of reduced condom use in casual sex. Among the sexually active men not on PrEP, there was a minority view that PrEP could not and arguably should not replace condom use, as they deemed condom use to be central to STI control. Many of the men on PrEP or those living with HIV, however, deemed curable STIs a minor annoyance only, as can be seen in the following quote. STIs are not as of concern for me, you know. For the sake of the argument, you go in and get a jab. You go and take a couple of pills, you know, and, and we're fine. HIV's the big one that we don't have a cure for. Teddie, 32, on PrEP For many participants, a shift away from a condom-based norm while remaining protected from HIV brought a new sense of freedom, regardless of the lack of protection from other STIs. I feel like shackles have been loosened a little. Chukki, 43, on PrEP This freedom was connected to the physical pleasures of condomless sex, as indicated by Mannie, a 35 year old gay community participant who expressed this as "I don't like being fucked by a plastic bag". Some men however perceived that there were socially valuable aspects of 'condom culture' which they feared were being lost. For these men, condom use had a symbolic value as a marker of caring either specifically for a sex partner or more broadly for 'community' by adopting tangible sexual practices that prevented the transmission of HIV. For men who perceived that condom use could indicate care, there was some concern that PrEP could symbolically erode this. If someone only wants to fuck you without a condom, then are they actually thinking about the bigger consequences of the act? Steve, 53, on PrEP Other men however used advocacy for PrEP in their virtual and real-life social circles as a way of protecting and promoting community values. I made like some Facebook post about it... My words were: it's a way for HIV negative people to be active in fighting HIV. Mark, 24, on PrEP With regard to how PrEP impacted on the concept of an inclusive community, again there were clashing perspectives. HIV positive participants suggested that PrEP was diminishing what they perceived as a sexual division between HIV negative and positive men. There's quite a big split between condoms, people that use condoms consistently and people that use PrEP. 
What's sort of happening I think is that people that are on PrEP are a lot more open to sleeping with people that are positive. Mike, 38, HIV+ There were two facets identified in this-firstly, that taking antiretroviral drugs opened HIV negative men up to understanding social issues related to taking a medication associated with HIV, and secondly, that negative men taking PrEP were less likely to serosort (proactively choose partners known or assumed to be the same serostatus) [4]. One of the HIV positive participants, however, who only had condomless sex, said that he still serosorted. I will not choose someone that's, that is HIV negative. [Okay] Yeah. [Yeah] I'd only, I only have sex with people that are HIV positive. Ron, 40, gay community participant, HIV+ Notably, not all HIV negative participants, whether on PrEP or not, were accepting of having known HIV positive men as sexual partners, and in particular were troubled by the idea of condomless sex with a known positive partner despite other risk-reduction interventions such as PrEP or the potential partner having an undetectable viral load. I understand that someone who, has an undetectable viral load is, you know, safe. But, nevertheless, it just kind of plays on your mind. Josh, 45, takes PrEP periodically, such as when travelling. One HIV negative participant not on PrEP was adamant that he would only have condomless sex with an HIV positive partner if he could see their viral load test results. Like there's guys I've met on-line who, one of them's positive and he wants to do it without the condom. And I said, "I wanna see your [viral load] blood test [results]." Nick, 57, not on PrEP While almost all participants were very clear that they understood that an undetectable viral load meant'safe sex' from the perspective of HIV risk, several said they would expect a positive person with an undetectable viral load to use a condom. Others admitted that they avoided known HIV positive men as sex partners, though recognising that they probably had had unacknowledged HIV positive sex partners. --- Negotiation How risk reduction was negotiated for casual sexual encounters was a major issue of debate regarding changing norms. In sexual negotiation, the massive changes caused by the increasingly pervasive role of on-line sex applications (hereafter 'hook up apps') was as much an issue as the changes in HIV risk reduction occasioned by PrEP and treatment-as-prevention, particularly for older men who were veterans of gay bars and sex-on-premises-venues. PrEP-taking participants were divided as to whether they would list 'on PrEP' on their profiles, as this set up the presumption of condomless sex-on the one hand, this was seen as increasing the attractiveness of a profile (hence increasing sexual capital), but on the other, it would shut down the potential for negotiation. I figure that the only people who need to know that are the people who are naked next to me... if you wanna have sex with me, I actually want to have some connection with you as a human being. Steve, 53, on PrEP Hook up apps were also a medium for discussion of PrEP-both for providing information about it to curious others, and also for heated and sometimes polarised debate about the social and community value of PrEP. Having PrEP listed on a hook-up app was widely seen as something that forestalled negotiation about HIV risk reduction. If you do have it on, they take that as like, "Oh, he's going to like be into like bare-back. Like no condoms. 
Calvin,18, on PrEP Another participant, who was taking PrEP but had to stop due to unmanageable side effects, noted the difference in both volume and quality of responses he got on hook up apps from when he had 'on PrEP' in his profile and when he subsequently removed it. The minute you put [PrEP] out there [on your profile] people would get straight to the point with what they wanted to do with you. And like, "Oh, okay. This is kind of cool." And then you'll get a lot more of on-PrEP guys message you as well.... I'm like, "Whoa! Okay. No! No! Can I have a conversation with you first? See your face first? That'd be nice." You just don't get that [when it's not on the profile]. Sussman, 30, former PrEP user. For some men, particularly those who expressed some difficulty with negotiating with sex partners, PrEP was a way of protecting themselves without any need for communication about HIV risk. Basically, I really didn't know how to navigate conversations a lot or I just forgot about conversations in the moment. So this was something... I like to think I'm pretty organised so for me being able to do something daily is a lot easier than one thing like when you're with somebody. Lance, 34, on PrEP Several men in the study, including negative men not taking PrEP, talked about having condomless sex with a range of regular fuckbuddies with whom they had established trust relationships. The people that I do have sex with without a condom who are on PrEP I know are tops. I know that they test regularly and I've, I had a long history with them before. Long-ish history. Max, 39, not on PrEP. Several of the HIV negative men-both those on PrEP and those not-reported some experience of 'vicarious PrEP' [34] where one partner was on PrEP and the other relied on that for risk management by proxy. While several participants thought that this was an adequate strategy with known and trusted fuckbuddies, it was also strongly criticised by other participants. Thus, while there was a consensus that the sex culture had changed particularly with respect to how sex is negotiated, there were differing views about the meaning of that change in this theme, and whether it was just about more freedom for condomless sex, or whether there was social value in the change. --- Stigma Participants spoke about stigma in a range of different ways, and these accounts illustrated some of the many contradictions associated with the arrival of PrEP on the HIV prevention landscape. Some men described how the deliberate avoidance of men with HIV as sexual or relationship partners, which has been well documented [35], still persists even among PrEP users. Many participants also described how they either excluded-or were excluded by-other men because they were not using PrEP. Despite some consensus that PrEP should have contributed to reducing the serodivide between HIV positive and HIV negative men, the stigma associated with an HIV diagnosis was frequently spoken about as a primary reason for wanting to stay HIV negative, and sometimes for avoiding sex with known HIV positive partners even when on PrEP. I do know that there's like medication and it's like manageable, but the stigma scares me...I think that's part of the reason I haven't been with an openly positive partner because I'm like even on PrEP I wouldn't wanna take that risk. Calvin, 18, on PrEP. Many participants perceived that with increased uptake of PrEP, many within gay male sex cultures had become less accepting of HIV negative men who opted not to take it. 
I understand for some people there's a lifestyle decision around using PrEP but it's not for everyone and the stigma is that, if you're against PrEP or you don't think you need to take it up, that you're somehow an idiot. So that's the new stigma in the community. That, if you're on PrEP, you're a responsible, socially considerate, golden gay. And, if you're not on it, you're somebody who can be poo-hooed and dismissed, and attacked. Justin, 40, not on PrEP. This idea that not using PrEP and wanting condom-protected sex diminished sexual capital was echoed across the different groups of participants. Some participants openly acknowledged that they would reject a potential sex partner if he wanted to use a condom. If I'm at a sex party... if I turn around [and] somebody's put a condom on, I will roll my eyes and get up, and walk away. Jack, 39, on PrEP Jack's reported actions convey not just a 'no, thank you' to prospective partner, but a pointed act of rejection. Other participants reported filtering out prospective partners who wanted to use condoms by positively selecting partners on the basis of PrEP use. What's your name? Are you on PrEP? Marc, 32, gay community participant on PrEP. Other gay community participants confirmed that expressing an interest in using condoms was likely to result in rejection. To be honest with you, if it's in Sydney or Melbourne, you could almost guarantee that a condom's gonna be a deal-breaker for the other person. David, 40, on PrEP. This perception that wanting to continue to use condoms could adversely affect a man's sexual capital was also predicted by one of the health providers. The sexual, social milieu is going to change and, if you want to have sex, you're going to have to adapt to the new flavour. Unless you're the cutest boy on earth, negotiating condom use is going to become harder. Healthcare provider #3. HIV community professionals working in a community-based HIV testing site also noted that some men who had previously been condoms users were turning to PrEP due to peer pressure: These days I'm seeing more and more people come with, have been using condoms until today but they find that they, when meeting people who are on PrEP and they don't want to use condoms, they find that conversation a bit of an issue. So eventually they feel like they are missing out because the guy on PrEP ends up not necessarily having sex with them because they don't want to use a condom. So some people have decided to go on PrEP because they find that their casual partners don't want to have sex with them 'cause they won't use a condom. HIV community professional #4 In this thematic area, there was little evidence of PrEP use or PrEP users being shamed or stigmatised; rather it was men who chose not to use PrEP who reported feeling that their social and sexual capital was diminished. Regarding HIV stigma, many participants accepted that it was a given. While some reflected on how their PrEP use could potentially reduce HIV stigma, one of the key reasons that HIV negative participants gave for wanting to remain HIV negative was to avoid the perceived social burden and loss of sexual capital attached to an HIV positive diagnosis. --- Responsibility and care From a range of domains including condom use, sexually transmissible infections (STI), testing, and participating in research, we identified the cross-cutting theme of responsibility and care. 
That is, participants framed their responses on these issues in terms of either interpersonal responsibility or responsibility at a broader social level. Several participants framed frequent STI testing and subsequent communication of positive results to partners as a considered strategy of "stopping the spread of them as much as I can" (Jack, 39,PrEP). This strategy included testing more regularly than the recommended three months, and testing after significant risk events (such as after a sex party of 20, as cited by one participant). For some participants, this sense of responsibility also extended to wanting to ensure that their sex partners had the skills to reduce their HIV risk. For one participant on PrEP, this meant resisting partners who wanted to rely on vicarious PrEP (that is, assuming that condomless sex is safe because a partner is on PrEP, when not on it oneself). I think you have a moral responsibility to ensure that the person you're actually having sex with is-if you actually have some knowledge and some ability to prevent that person from catching HIV, then, then you need to reinforce it in some sort of way and that's either condoms or PrEP. And, if you can't have the discussion and know that person's gonna be on PrEP in the near future, then you need to reinforce with the condoms. Gordon, 53, on PrEP Two other participants talked at length about how they promoted regular STI and HIV testing in their social circles, particularly to younger friends. I spend a lot of time just checking in on my friends... "Hi, how are you?... Hey, have you had your tests recently? Mannie, 35, on PrEP. Several participants talked about the importance of PrEP being available for men in serodiscordant relationships, even if the HIV positive partner had an undetectable viral load and the couple was monogamous, meaning that there was no HIV transmission risk. The rationale for this was so that the HIV negative partner was taking responsibility for his own safety, not relying on his partner's adherence to medication to manage HIV risk. It may be doubling-up but then it gives the person capacity to, to be responsible for their own safety. Josh, 45, taking PrEP periodically In addition to wanting to take responsibility for their own sexual health, there was also an element of distrust of a partner's undetectable viral load as being a reliable form of safe sex. As noted earlier, some participants voiced nervousness of condom-free sex with known positive partners. Many men also talked about responsibility in terms of their participation in research to generate data for the good of the community. One of the reasons I'm happy to do this [interview] however long this takes out of the day is I just think it's a very good thing. [PrEP] has been very good for me and, if I can do things that encourage it to be more readily available and more accessible, I'm happy to do that. --- Ian, 53, on PrEP The concept of being a responsible sexual subject was important to the gay community participants in this study, regardless of whether they were HIV negative or positive and whether or not they took PrEP. While for some condoms remained important both practically and symbolically, others were actively reframing practices such as STI testing as ways of taking responsibility. 
This concept of research participation as a way of enacting a responsible attitude to community was also raised repeatedly by participants-this was not related to a question asked by the interviewer but volunteered spontaneously by several participants. --- Discussion This study explored the impact of PrEP on evolving gay male sex cultures focusing on the perceptions of gay men in Sydney, Australia, and included perspectives from health service providers and community-based stakeholders. The findings reflect that the meaning of PrEP in the lives of these men needs to be understood in the context of sex cultures deeply inflected with norms that arose in response to the risk of HIV. Taking PrEP can provide access to the pleasure of condomless sex without HIV risk, but it also disrupts decades of community norms where practices of risk reduction-condom use, serosorting [4], negotiated safety [1], strategic positioning [3]-all required negotiation and had to some degree become associated with a demonstration of care for self and other, sometimes described as'sexual citizenship' [10]. The displacement of older'safe sex' norms did not, however, indicate that participants were less invested in community. Many of the PrEP-taking men in this study talked about how other practices related to PrEP such as frequent STI testing and proactive partner notification of diagnoses, advocating for and educating others on PrEP, and participating in research could also be construed as acts of care for partners and community [36], or a new form of 'citizenship'. In considering the impacts of PrEP uptake on the sexual culture, we explored how discourses about PrEP contributed to shaping a normative goal of a new'safe sex' culture that embraces a much broader menu of options [37]. We contend that the aspirational social norms articulated by the participants and discussed herein comprise a sex culture in which risks are minimised, participants have a fair chance of finding sexual satisfaction regardless of HIV serostatus or choice of HIV risk reduction intervention, free from stigma and discrimination, with community practices that sustain and promulgate these norms. In each of these three areas-minimising risk, having discrimination-free satisfying sex, and developing and sustaining community practices that support these norms-there were areas of contention. Nearly all the gay community participants reported that their own sexual practice had changed with increasing community uptake of PrEP, in that they were less likely to use condoms in casual sex. This echoes findings of Newman et al and Pantalone et al [19,28] but contrasts with a 2017 U.S. study that found that participants reported that while PrEP brought a feeling of relief or reprieve from HIV stress, it did not directly impact their practice [38]. The difference with the 2017 study may reflect increasing community confidence with the effectiveness of PrEP. Confidence in PrEP did not, however, necessarily mean that participants were comfortable having sex with known HIV positive partners. While some participants-particularly those in serodiscordant relationships-were very clear that such sex would be'safe', others expressed avoidance of sex with known positive partners despite taking PrEP. These participants themselves recognised this avoidance as irrational, given that the point of PrEP is to prevent HIV acquisition and that they had likely had sex with undisclosed HIV positive partners. 
Thus, while some of the HIV positive men saw PrEP use as dissolving some of the barriers to sex between people of different serostatus-'bridging the serodivide' [39]-some HIV negative men continued to have discriminatory attitudes towards known HIV positive partners. This contrasts with results from two separate U.S. based studies [18,28], which both found that PrEP uptake helped to diminish feelings of stigma toward men with HIV. Again, this difference may be due to increased confidence with PrEP efficacy, as the U.S. studies recruited later than our cohort. Within our cohort, there was also evidence of a significant bias against men who opted to use condoms as their primary risk reduction method, echoing findings of both Newman et al and Pantalone et al, who noted increased pressures for condomless sex and increased challenges in negotiating condom use [19,28]. This finding in three separate studies leads to a disquieting conclusion: that opting to use condoms as primary risk reduction and/or making a disclosure of HIV positive status could diminish an individual's sexual capital and limit opportunities for satisfying sex. With regard to supportive community practices that respect diversities and different choices, some men saw the combination of PrEP and hook-up apps as decentring communication around sexual practice and eroding the community building that some associated with sexual negotiation around condom use.
Nevertheless, they reported enjoying the sexual freedoms afforded by PrEP. The finding that non-use of PrEP could be stigmatised was also seen in a Canadian study [40]. Orne and Gall used a model of 'PrEP citizenship' to explain how widespread PrEP uptake produced a culture of conformity to PrEP-centred regimens. This model included taking up PrEP ('conversion'), advocating it to others ('evangelising'), adherence ('self-governance') and repeat testing ('surveillance'), and posited non-users as 'potentially infectious' and 'stigmatised and irresponsible people' (p. 657), as distinct from the 'good citizens' taking PrEP. This model has parallels with Thomann's neoliberal sexual subject who acknowledges HIV risk [41], takes pre-emptive pharmaceutical action against it, and becomes 'biomedically responsibilised'. Both Thomann's and Orne and Gall's analyses foregrounded how 'PrEP advocacy' or 'demand creation'-as distinct from advocacy for a choice of HIV prevention interventions available to all-can marginalise those who make different choices, such as the choice to use condoms. Evidence from this study supports that contention, in that some participants took up both PrEP use and PrEP advocacy as 'the' response to HIV prevention, which alienated men who did not want to take antiretrovirals preventatively. Of note, however, some PrEP takers in this study resisted discourses of conformity to universal PrEP use and continued to champion a range of options depending on circumstances. In particular, some participants discussed PrEP use in the context of travel as distinct from during everyday life, given that for some travel was an opportunity for non-relationship sex, including within the context of a relationship agreement. This phenomenon further breaks down the binary of 'PrEP user' and 'non-user' [19], and documents a new form of risk-reduction adaptation. The qualitative approach of this study enabled a rich and nuanced analysis of the evolution of safe sex norms concomitant with the advent of PrEP.
While the specific impacts of PrEP on HIV risk reduction practice were one focus, our other focus on normativity within these sex cultures illuminated how care can be demonstrated between casual sex partners when the problem of HIV risk has been largely dealt with by a daily pill, and how differences in values could or should be accommodated in a sex culture that aspires to not discriminate on the basis of serostatus or choice of HIV risk reduction method. PrEP access in Australia was at least four years behind the U.S. approval in 2012, as the first large-scale implementation study in Australia began in 2016 [29] and subsidised national access began in 2018 [30]. This time lag between Australia and the U.S.-and the fact that Australian community-based HIV organisations had to work hard to achieve subsidised access [42]-may in part explain why there was a less severe anti-PrEP backlash once the intervention was available. The Australian HIV community sector, health care providers and sexually active gay men had seen the 'Truvada whore' controversy [8]-which stereotyped PrEP users as promiscuous and irresponsible-play out in the U.S. before PrEP was widely available. The context of having no nationally accessible, funded mechanisms for PrEP access in Australia some four years after the FDA approval arguably contributed to heightening pro-PrEP sentiment [41], because the global connectedness of gay male communities allowed men in Australia to witness the sexual freedom that PrEP facilitated in the U.S. and recognise the advantages it could bring. This study has some limitations. Gay community participants had to contact the researchers to take part in the study, so those with strong views on the impacts of PrEP may have been more likely to volunteer. The majority of participants were white, but we did not collect data systematically on ethnicity. Accordingly, the study may overrepresent the views of white gay men. Data were also collected over three years during a period of rapid change, so they are not a snapshot of a point in time, but a collection of perspectives that were in the process of evolution. Most of the study participants were taking PrEP, and a significantly smaller number of HIV negative men not on PrEP and HIV positive men were included, so while the sample includes perspectives from a range of different actors, they are not equally sampled. Finally, as this paper is about the impacts of PrEP on a sex culture, the voices of the gay community participants have been privileged over those of the healthcare providers and HIV community-based professionals. --- Conclusion The impacts of PrEP are complex and need to be considered in the context of evolving gay male sex cultures in which PrEP is only one element. PrEP was not the catalyst for condomless sex for most of the men in this group, but the introduction and scale-up of PrEP access arguably enabled men to talk about condomless sex more openly, and to consider what matters in gay male sex cultures where condom use is decentred. This study has important implications for health promotion. It reveals how new community conversations about HIV prevention can promote PrEP use as the single best option, constructing it as a rigid new standard to which men 'should' adhere, instead of promoting and promulgating choice and genuine acceptance that different values can mean that different options may work better for some individuals.
The identification of a potentially damaging emerging norm in these data-that of PrEP use being positioned prescriptively as the 'best' form of HIV prevention for HIV negative men, with stigma attaching to non-use-informed the development of ACON's 2017 campaign 'How do you do it?', in which the importance of individual choice from a range of effective options was emphasised with respect to HIV prevention [43]. While recognising the great importance of PrEP for many men, this study suggests that, rather than promoting PrEP as the new 'safe sex' orthodoxy, there is a need to ensure that there is a range of HIV prevention options that have both high efficacy and high acceptability. Accordingly, health promotion should focus on building community attitudes that respect diversity and challenge the primacy of any one prevention tool. --- Data cannot be shared publicly because it contains sensitive information that the study participants did not consent to have shared. Data access queries may be directed to the UNSW Human Research Ethics Coordinator (contact via [email protected]. au or via + 61 2 9385 6222). --- Author Contributions Conceptualization: Bridget Haire, Dean Murphy, Lisa Maher, Iryna Zablotska-Manos.
While HIV pre-exposure prophylaxis (PrEP) is highly effective, it has arguably disrupted norms of 'safe sex' that for many years were synonymous with condom use. This qualitative study explored the culture of PrEP adoption and evolving concepts of 'safe sex' in Sydney, Australia, during a period of rapidly escalating access from 2015-2018, drawing on interviews with sexually active gay men (n = 31) and interviews and focus groups with key stakeholders (n = 10). Data were analysed thematically. Our results explored the decreasing centrality of condoms in risk reduction and new patterns of sexual negotiation. With regards to stigma, we found that there was arguably more stigma related to not taking PrEP than to taking PrEP in this sample. We also found that participants remained highly engaged with promoting the wellbeing of their communities through activities as seemingly disparate as regular STI testing, promotion of PrEP in their social circles, and contribution to research. This study has important implications for health promotion. It demonstrates how constructing PrEP as a rigid new standard to which gay men 'should' adhere can alienate some men and potentially create community divisions. Instead, we recommend promoting choice from a range of HIV prevention options that have both high efficacy and high acceptability.
Background The low appeal of General Practice and primary care as a career option is a recurrent problem for healthcare systems throughout Europe, the USA and other countries in the Organization for Economic Cooperation and Development (OECD) [1,2]. A high-performing primary healthcare workforce is necessary for an effective health system. However, the shortage of health personnel, the inefficient deployment of those available, and an inadequate working environment contribute to shortages of consistent and efficient human resources for health in European countries. The European Commission projects the shortage of health personnel in the European Union to be 2 million, including 230,000 physicians and 600,000 nurses, by the year 2020, if nothing is done to adjust measures for recruitment and retention of the workforce [3]. Research has shown that a strong workforce in General Practice is needed to achieve an efficient balance between the use of economic resources and effective care for patients [4]. Most research on the GP workforce has concentrated on negative factors. The reasons why students did not choose this career, or why GPs were leaving the profession, have been widely explored. Burnout was one of the most frequently highlighted factors [5]. In many OECD countries, apart from the United Kingdom, the income gap between GPs and specialists had expanded during the last decade, increasing the appeal of other specialties for future physicians [6]. Health policy makers, aware of the problem of a decreasing General Practice workforce, tried to change national policies in most European countries to strengthen General Practice. Health professionals respond to incentives, but financial incentives alone are not enough to improve retention and recruitment. Policy responses need to be multifaceted [7]. Dissatisfaction was associated with heavy workload, high levels of mental strain, managing complex care, expectations of patients, administrative tasks and work-home conflicts. Focusing on these issues created a negative atmosphere [5][8][9][10]. In the above-mentioned European Commission report on recruitment and retention of the health workforce in Europe, the authors used a model by Huicho et al. as a conceptual framework to analyze the situation [11]. Attractiveness and retention are two outputs used in the model. Retention is determined by job satisfaction and duration in the profession. The concept of job satisfaction is complex, as it changes over time according to social context. "Job satisfaction is a pleasant or positive emotional state resulting from an individual's assessment of his or her work or work experience" [12]. There is a weak relationship between enjoyment and satisfaction, suggesting that other factors contribute to job satisfaction [13,14]. Furthermore, general practice is a specific field, and theories on job satisfaction in this field are not fully explained by theories on human motivation in general. According to the research group's hypothesis, it was important to investigate the positive angle separately in order to understand which factors give GPs job satisfaction. That was the focus chosen by the research team. The literature highlighted the poor quality of the research about job satisfaction within European General Practice. Most studies were carried out by questionnaire [15], focusing on issues of health organization or business, and did not reach the core of GPs' daily practice.
Some studies were biased by the authors' preconceptions about the attractiveness of General Practice [16]. Surprisingly few qualitative studies explored the topic of satisfaction [17,18]. The literature did not provide an overall view of GPs' perceptions of their profession. It was not certain that these positive factors were similar across different cultures or in different healthcare contexts. Consequently, research into positive factors, which could retain GPs in practice, would help to provide a deeper insight into these phenomena. The aim was to explore the positive factors supporting the satisfaction of General Practitioners (GPs) in primary care throughout Europe. --- Method This research is a descriptive qualitative study of the positive factors for the attractiveness and retention of General Practitioners in Europe. --- Research network A step-by-step methodology was adopted. The first step was to create a group for collaborative research [19,20]. The EGPRN created a research group involving researchers from any country wishing to participate: Belgium (University of Antwerp), France (University of Brest), Germany (University of Hannover), Israel (University of Tel Aviv), Poland (Nicolaus Copernicus University), Bulgaria (University of Plovdiv), Finland (University of Tampere) and Slovenia (University of Ljubljana). Undertaking such a study in several different countries, with different cultures and different healthcare systems, presented a challenge. It was made possible by the support of the EGPRN and the various meetings held throughout Europe. Figure 1 gives an overview of the position of the general practitioner in each country, according to the different healthcare systems. The authors scored the importance of some specificities of practice in their own country from 0 (not important) to 5 (very important). The research team decided to conduct a descriptive qualitative research study, from the GPs' perspective, in each participating country [21,22]. The first interviews were completed at the Faculty of Brest, in France, in order to pilot the first in-depth topic guide. --- Participants GPs were purposively selected locally using snowballing in each country. Participants were registered GPs working in primary care settings. To ensure diversity, the following variables were used: age, gender, practice characteristics (individual or group practices), payment system (fee for service, salaried), and teaching or other additional professional activities. The GPs included provided their written informed consent. GPs were included until data saturation was reached in each country (meaning no new themes emerged from the interviews) [21,23,24]. Overall, 183 GPs were interviewed in eight different countries: 7 in Belgium, 14 in Bulgaria, 30 in Finland, 71 in France, 22 in Germany, 19 in Israel, 14 in Poland and 6 in Slovenia. In each country, the principle of obtaining a purposive sample was observed and GPs were recruited until data sufficiency was reached. Four qualitative studies were conducted in France, where it was always the intention to include more participants than in the other countries, with a view to exploring potential differences between practice locality, gender, type of practice and teaching activities. One study was carried out through five focus groups, which brought together 38 GPs; the three other studies used individual interviews (11 participants, 6 participants, and 14 participants). The other countries conducted one qualitative study each. 
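The stopping rule described above (GPs included until "no new themes emerged from the interviews") can be pictured with a small sketch. This is purely illustrative and not part of the EGPRN protocol: the data structures, the code labels and the three-interview window are hypothetical assumptions, not the study's data.

```python
# Minimal sketch only, assuming hypothetical data structures; this is NOT the
# EGPRN study's actual procedure, just an illustration of the bookkeeping that
# a "no new themes in the last few interviews" saturation rule implies.
from dataclasses import dataclass, field

@dataclass
class Interview:
    country: str
    gender: str
    practice_type: str        # e.g. "individual" or "group" practice
    payment_system: str       # e.g. "fee for service" or "salaried"
    codes: set = field(default_factory=set)   # codes assigned during analysis

def saturation_reached(interviews, window=3):
    """True if the most recent `window` interviews introduced no new codes."""
    if len(interviews) <= window:
        return False
    earlier = set().union(*(i.codes for i in interviews[:-window]))
    recent = set().union(*(i.codes for i in interviews[-window:]))
    return not (recent - earlier)

# Hypothetical example: the last three interviews only repeat earlier codes.
sample = [
    Interview("France", "F", "group", "fee for service", {"autonomy", "teaching"}),
    Interview("France", "M", "individual", "fee for service", {"doctor-patient relationship"}),
    Interview("France", "F", "group", "salaried", {"autonomy"}),
    Interview("France", "M", "group", "fee for service", {"teaching"}),
    Interview("France", "F", "individual", "salaried", {"doctor-patient relationship"}),
]
print(saturation_reached(sample))  # True
```

In the study itself this judgement was made qualitatively by each local team; the sketch only shows the kind of bookkeeping such a rule implies.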
The research activities were undertaken by focus groups in Germany, by focus groups and individual interviews in Israel, and by individual interviews in the other countries. --- Study procedure and data collection The research team discussed every step of the study in two annual workshops during EGPRN conferences, throughout the duration of the study. As there were few examples in the literature, and as the existing models of job satisfaction were oriented more towards employees working in a company, the international research team developed an interview guide based on their previous literature review [16]. The guide was piloted in France and was adapted and translated to ensure a detailed contribution from the GPs interviewed and, subsequently, a rich collection of qualitative data in each country. Local researchers conducted the interviews in their native language. In accordance with the research question, interviewers looked for positive views. Overall, the interviewers were GPs working in clinical practice and in a university or college, except in Belgium, where the interviewer was a female psychologist working in the department of General Practice. The GPs were first asked to give a brief account of a positive experience in their practice (ice-breaker question) [21]. The interview guide (Table 1) was used to encourage participants to tell their personal stories, not to generate general ideas but to focus on positive aspects. To ensure maximal variation in collection techniques, and in order to collect both individual and group points of view, both interviews and focus groups took place. Saturation (no new themes emerging from the data) had to be reached in each country [21]. --- Data analysis A thematic qualitative analysis was performed following the process described by Braun and Clarke [25]. In each country, at least two researchers inductively and independently analyzed the transcripts in their native language using descriptive and interpretative codes. They selected a verbatim excerpt of one particular part of, or sentence from, the interview to illustrate every code in the codebook. Each code was extracted in the native language and translated into English. The contextual factors were explored in each setting by the local team of researchers, and these factors were taken into account during the analysis. The whole team then discussed the codes several times in face-to-face meetings during seven EGPRN workshops. The research team merged the national codes into one unique European codebook. During a two-day meeting, the research team performed an in-depth exploration of the interpretative codes and a final list of major themes was generated. Credibility was verified by researcher triangulation, especially during data collection and analysis. During the EGPRN workshops, peer debriefings on the analysis and the emerging results were held. Interviewers and researchers from such diverse backgrounds as psychology, sociology, medicine and anthropology reflected on the data from their own researcher's perspective. --- Results Table 2 gives an overview of the characteristics of the participants. The mean age was high, indicating a long duration in the profession. Six main themes were found during the analysis. The results are summarized in Fig. 2 (International codebook on GP satisfaction). 
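Before turning to the themes, the codebook-merging step described in the Data analysis subsection above (national codes translated into English and merged into one European codebook) can be sketched as follows. This is a hypothetical illustration of the data structure such a merge produces, not the team's actual software or data; the codes and quotations are invented placeholders.

```python
# Hypothetical illustration only: merging per-country codebooks (code -> one
# illustrative quotation) into a single English-language codebook and recording
# which countries contributed each code. Codes and quotations are placeholders.
from collections import defaultdict

national_codebooks = {
    "France":  {"long-term doctor-patient relationship": "doctor for the whole family"},
    "Germany": {"long-term doctor-patient relationship": "saw them grow up"},
    "Finland": {"intellectual challenge": "stimulating and challenging work"},
}

european_codebook = defaultdict(lambda: {"countries": set(), "quotes": []})
for country, codebook in national_codebooks.items():
    for code, quote in codebook.items():
        european_codebook[code]["countries"].add(country)
        european_codebook[code]["quotes"].append((country, quote))

# Codes contributed by many countries are natural candidates for shared themes.
for code, entry in sorted(european_codebook.items(),
                          key=lambda item: -len(item[1]["countries"])):
    print(code, sorted(entry["countries"]))
```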
--- GP as a person The analysis of the data showed that the GP was a person with intrinsic characteristics, including an interest in people's lives and a strong ability to cope with different situations and patients. GPs loved to practice, and the passion for their job was more important than the financial implications. "I also work with a very heterogeneous population, ultra-religious and secular, from various countries of origin" (Israel). "Really pleasant to work with patients, it's not only the financial aspect" (Bulgaria). "I work for pleasure. I don't do it for the money. If I don't like it anymore I'll stop doing it" (Belgium). GPs said they wanted to stay ordinary people with a strong need to take care of their personal wellbeing. This was more than just having time for hobbies and leisure. GPs were looking for other intellectual challenges and personally enriching activities in their free time. "General practice is a beautiful profession but you are on your own too much, even in a group practice. You see the community from a limited perspective. It's important to keep in touch with the community. The fact remains that you are probably a father or mother or a partner, as well as being a physician. It's interesting to have a different perspective: it broadens your way of thinking. Reading books is the same. It's essential to read good books and to empathize with the characters. This is enriching for you as a human being, but also for your practice." (Belgium). GPs said they wanted to be there for their patients, to find common ground with them, but they also wanted to control the level of involvement with their patients. They described the ability to balance empathy with professional distance in their interactions with patients and being able to deal with uncertainty in the profession. The GP as a person theme was important, as all the above conditions were required in order to be a satisfied GP who wishes to remain in clinical work. --- GP skills and competencies needed in practice GPs reported satisfaction about making correct diagnoses in challenging situations, with low technical support, and being rewarded with patients' gratitude. The intellectual aspect of medical decision-making led to effective medical management and was a positive factor for GPs. General practice is the first point of care for the patient, and GPs felt themselves to be the coordinators and managers of care and the advocates for the patient. To be the key person in primary care requires strong inter-professional, collaborative skills and effective support from other medical specialties and from paramedics. GPs believed that it was highly important to be an efficient communicator to perform all these tasks. GPs were patient-centered and wanted to provide care using a comprehensive and holistic approach. A patient-centered approach is a WONCA core competency of General Practice, while efficient communication with the patient is a generic skill for all health workers. They wanted to bring together a broad medical knowledge with a high level of empathy, balancing the patient's concerns with official guidelines. Guiding the patient's education was an important role for the GP, who was also a coach for lifestyle changes. This theme was linked to the holistic model for General Practice, which is also a WONCA core competency. "To be both competent and do a bit of everything" (France). 
"This is intellectually extremely stimulating and challenging work" (Finland). "Happy and satisfied when making the correct diagnoses" (Bulgaria). "The patient arrives and thanks me for the good diagnoses" (Poland). "You don't just see common colds during the day. You get interesting cases and you have time to explore them. This makes general practice interesting. It's a 360°job. Variation is important". "It's our task to empower young Muslims to encourage them to study well, to become nurses or physicians". Belgian GP --- Doctor-patient relationships Patients are free to choose their GP and this is important because of the particular aspects of the doctorpatient relationship in primary care. There was a strong relationship between the GP as a person and the GP who enjoyed a rewarding, interpersonal relationship with patients. GPs had enriching human experiences with patients which was important to the physician's selffulfillment as a human being. Mutual trust and respect in their relationships were important dimensions. Being a patient-centered physician was a rewarding challenge. GPs felt they were a part of the patient's environment, but with the need to set their professional limits. GPs learned about life through their patients. GPs said they were ageing with their patients and had a long-term relationship with some of them. They were "real family doctors" and often cared for several generations. They saw babies grow up and become parents themselves. These unique doctor-patient relationships enhanced GP satisfaction. "I am the doctor for this whole family and in general practice that is something important" (France). "Some I got to know when they were small kids and they still come to see me at the age of 18 or older." (Germany). "We know much more about them than other doctors, because our patients have chosen us" (Bulgaria). "We accompany patients, throughout pregnancy, cancer and death and from the moment before birth until the age of 99 years and over" (Germany). "Patients asked for a home visit and insisted I join them at their meal and sometimes I did that but only when they were more like friends... I've had a lot of invitations to weddings..." (Belgium) GPs also liked to negotiate with patients, to help them to make decisions but also to motivate them to make lifestyle changes. --- Autonomy in the workplace Freedom in practice was closely related to work organization, which was important in all countries. GPs stayed in clinical work if they had chosen their own practice location. The living environment needed to be attractive for the family. GPs wanted to apply personal touches to their consulting rooms, to make choices in the technical equipment they used which suited their personal requirements. --- Fig. 2 International codebook on GP satisfaction Even more important was the possibility of choosing work colleagues who shared the same vision of General Practice. Satisfied GPs contributed to the organization of the practice and were influential in decisions about work and payment methods. Where there was a salaried system, GPs wanted to earn a reasonable salary to have a satisfying work-life balance. Flexibility at work was not to be interpreted as a demand from the management to be flexible in working hours but to have the flexibility to make one's own choices. Most GPs preferred additional career opportunities such as teaching, working in a nursing home and conducting research. 
To fulfil all these conditions, GPs wanted to work in a well-organized practice with a competent support team, with a secretarial service, practice assistants and the necessary technical equipment. Another condition was an organized out-of-hours service. GPs did not want to be disturbed outside practice hours without prior arrangement. "This is the most important in our practice that I decide when and how to work" (Bulgaria). "If someone says that a practice room must be completely impersonal, it has to be interchangeable. I understand this. It's respectful towards the others but a personal touch is important for communicating something about yourself to the patient. That is important." (Israel). "It is important to have one's own organizational systems and equipment" (France). --- "I didn't have to do night shifts" (Poland). --- Teaching general practice GPs reported that they wanted to acquire new medical knowledge and learn new techniques. They liked to transmit the skills of their job. They were proud of their profession and they wanted to teach and to have an effective relationship with trainees. Teaching contributed to feelings of satisfaction with the profession. GPs mentioned the importance of training in attracting junior colleagues and the positive aspect of the mutual benefit to GPs and trainees. Teaching gave GPs more incentives for their own continued professional development and enabled them to complete their competencies. GPs felt gratified when general medicine was recognized as a specialty at the university and by the public authorities. "Guiding younger colleagues is the most rewarding part of my job" (Finland). --- "I like to transmit what I have learned" (France). "I was a tutor for a seminar group, teaching, I like to do that, those people had to learn, that was very pleasant" (Belgium). "I am teaching General Practice to students and I have found I have a flair for it. It is really fun!" (Germany). "I feel good accompanying young trainees through the process of making their choices" (Belgium). "All that you do in teaching (trainees), transmitting your knowledge to another, improves your accumulated experience. You see yourself through the eyes of others" (Israel). --- Supportive factors for work-life balance Factors that supported an efficient work-life balance were the possibility of having a full family life, with a social support network, and the opportunity to benefit the whole family by enjoying holidays, money and free time. Money was not the most important issue, but income needed to be sufficient for a comfortable family life, meaning sufficient resources for a satisfying education for the children and the possibility of having regular holidays. GPs found they had job security, which enabled them to feel secure and free from unemployment worries. GPs explained that they wanted to choose how to separate professional and private life. They said they wanted to have social contacts in the community, which would give them a broader perspective in terms of their patients. Having relationships with patients outside the practice was important. GPs said they needed to be part of the social community if they were to stay in General Practice. GPs wanted to have a full family life and to keep free time for this. "Family Medicine is an opportunity to be with the family" (Israel). --- "My family supports me" (Bulgaria). "I try to keep work and leisure time away from each other... It is important in terms of coping. 
In my leisure time I have a different role from that of a doctor" (Finland). --- Country-specific themes Besides those international themes, there were some country-specific results. In Poland and Slovenia, even when they were prompted in the interviews, GPs did not mention the importance of teaching. Belgian GPs said how important it was to them to discuss the vision and mission involved in starting a group practice. They took time for this process and wanted junior colleagues in practice who would share their vision and their mission. Statements needed to be updated regularly to meet the needs of a changing society and the challenges in health care. Group practices used external coaching to overcome problems. "Vision and mission are important. We started from ten values as respect, diversity, the aim to train young GPs.... You have to renew the vision and mission regularly and to adapt at the changing community." (Belgian male GP). French GPs were very attentive to the need for organized continuity of care. The GPs wanted to be there for their patients, but they also wanted to protect their personal lives. The word "vocation" had a religious connotation that displeased some GPs. Finnish GPs appreciated the stimulating working community and multidisciplinary teamwork. In addition, they valued the set working hours and the professional development work available in the workplace. Israeli GPs were proud of their respected position. They preferred a private practice in their own style and stressed the importance of teamwork. "The clinics where I felt good were clinics where the staff was amazing and enlisted, the nurses were good and the secretaries did the work and there was a feeling that we were working for better medicine. There were weekly meetings where we really thought how to do better, a feeling of teamwork." For Polish GPs, there were some positive developments in the financing of medicine, which were providing better opportunities for an effective work-life balance. In Poland, there was also a theme which favoured having a strong union that could influence policy. It gave the GPs an identity as a group. "The fact that I work here as I work, my income is not too high, but still is, make it possible that my kids can attend private schools and don't have to go to normal state schools." (Polish female GP). --- Discussion --- Main results Throughout Europe, common positive factors were found for the satisfaction of GPs in clinical practice. One of the main characteristics of GPs was the need for specific competencies for managing care and communicating with patients. They needed to cope with problems during their career and with professional collaboration. GPs were stimulated by intellectual challenges, not only within the profession, but they also wanted enough time for personal development outside the workplace, to counterbalance the stress of daily practice. Positive GPs are persons with specific intrinsic characteristics (open-minded, curious). Participants described themselves as feeling comfortable in their job when they were trained in specific clinical and technical skill areas and had efficient communication skills. The long-term doctor-patient relationship is perceived positively by the GPs. They love teaching all these specific skills to younger GPs and appreciate the feedback and mutual benefit to be found in teaching activities. Finally, GPs need policy support for well-managed practices and out-of-hours services to maintain their optimal work-life balance. 
--- Strengths and limitations of this study To our knowledge, this multinational data analysis from 183 GPs is the first European multicentre qualitative study on this topic [16,26]. This study collected complete and complex data from eight countries. One of its strengths was the study of a diverse population of GPs, with different cultures and health systems. Despite these differences, the main satisfaction factors for becoming a GP and staying in clinical practice were found in all contexts. For instance, money is important, but only in relative terms: the idea of having enough to lead a comfortable family life with enough free time is crucial for every GP, although income might vary across Europe. --- Credibility and transferability Credibility was verified by researcher triangulation, especially during data collection and analysis. During the workshops, peer debriefings on the analysis and the emerging results were held. Interviewers and researchers from such diverse backgrounds as psychology, sociology, medicine and anthropology reflected on the data from their own researcher's perspective. As the results in several countries with different healthcare systems were very similar, the transferability of the data seems possible. The main weakness was a possible interpretation bias. The 183 GPs provided very rich data in several languages. This was a strength of the research, but also a difficulty. The analysis and interpretation of the verbatim material posed linguistic and cultural challenges. A different classification of themes could have been produced, but this risk was limited by the group meetings and the large number of emails, phone discussions and Skype® discussions required during the research process. The number of GPs interviewed varied between the countries, potentially leading to differences in the informational detail and in the depth of the analysis of the interviews/focus groups. However, data saturation was reached in all settings, limiting this possible bias. --- Discussion of the findings The theme "GP as a person" was highlighted in this study and in the literature review [16]. The studies found that this special identity for GPs was linked to their intrinsic characteristics. The theme of "GP as a person" was important in each of the European countries. A GP is, of necessity, someone with a specific personality, which is suited to General Practice. GPs like to take care of people [27]: "Feeling of caring" [28]; "I can have a big impact on people's lives" [27]. This is a strong personality characteristic in a GP which policy-makers might take into consideration when formulating policies which concern the medical workforce. The GP skills and competencies theme was found in the literature [16,29], but in a more restricted form, focused on effective medical management of the patient and the subsequent feeling of being competent. In a Scottish qualitative study, GPs highlighted the satisfaction derived from the perception of the consultation outcome. "Although clinical competence was an integral part of the doctors' satisfaction, they alluded to personal attributes that contributed to their individual identity as a doctor" [30]. "Take care of them and do the best you can" [27]. In our study we identified all the WONCA core competencies, and this is important [4]. Validation of WONCA's characteristics and competencies in hundreds of interviews across eight European countries shows the strength of the WONCA framework and the characteristics GPs have in common wherever they work. 
The analysis of the data demonstrated a strong link between competence and satisfaction. It is necessary to give general practitioners the opportunity to acquire and improve these skills. The doctor-patient relationship has been described as an important factor in job satisfaction for the General Practice workforce [31,32]. Nevertheless, previous studies concentrated less on the rewarding nature of the relationship, its long duration and the mutual interaction. Freedom to manage the workplace organization has been described and is confirmed here. It does not prevent long working hours but concerns the organization of the practice [33][34][35]. There was consistent evidence that GPs needed freedom for work satisfaction [36]. GPs wanted autonomy in their work [17]. Teaching and learning activities have been described, and this study confirmed their importance. Academic responsibilities provide positive stimulation and new perspectives for GPs [17,36,37]. They wanted to be recognized by the academic world. Clerkships in General Practice were seen as important for attracting students to a career in General Practice [38]. The influence on students was important for their career choice [39]. The practice of clinical teaching in initial medical education, with positive role modelling, was also important [40,41]. There was a strong link between the GP, his/her family and the community they were living in. This was especially true for those practising in rural areas [39,42]. The GP's family was sensitive to the fact that General Practice is a respected profession. Outside their professional role, other forms of satisfaction were important, such as having strong social support from schools, leisure activities and a satisfying quality of life in the residential environment [43], and, of course, an income in balance with their heavy workload. Finally, the results highlighted a particular theory to describe GP satisfaction which focuses on human relationships, specific competencies, patients and the social community. --- Implications for medical education and practice Learning the core competencies of General Practice in initial and continuing medical education is very important and should lead to extended educational programs in Europe. Mobilizing stakeholders is a necessary condition of success; however, it is not sufficient [7]. To improve the attractiveness of general practice, universities should organise a specific selection process for GPs, not just for specialists. This might engender greater respect for the profession. Roos et al. performed a questionnaire study on the "motivation for career choice and job satisfaction of GP trainees and newly qualified GPs across Europe" [15]. The most frequently cited reasons for choosing General Practice were "compatibility with family life", "challenging, medically broad discipline", "individual approach to people", "holistic approach" and "autonomy and independence". The current study focused on working GPs and not on trainees, but some of the results overlap with Roos' research. It remains essential to teach undergraduate medical students the bio-medical aspects of general practice, but it is also necessary to teach the management of primary care, interprofessional collaboration and communication skills. Trainees need to think about their own wellbeing and to learn to cope with problems in daily practice. The intellectual aspect of General Practice is important. 
Decision-makers should use all the means at their disposal to promote the profession by providing continuing professional development. GPs want to be involved in the management of their practice. Stakeholders should be aware of, and very cautious about, this topic, which is described as extraordinarily sensitive. Systems that try to administer GP practices without involving the GPs should be aware that they will experience difficulties. --- Implications for research Further studies would be useful, with the objective of identifying which satisfaction factors have the greatest impact on recruitment and retention in General Practice. This description of satisfied GPs will be disseminated throughout Europe to help implement new policies for a stronger GP workforce. This may assist the international research team in the design of further studies to investigate the links between these positive factors and the growth of the GP workforce. At this stage, the research team will test the usefulness of each positive factor in helping each country to design efficient policies to increase its workforce. --- Conclusion Throughout Europe, GPs experience the same positive factors which support them in their careers in clinical practice. The central idea is the GP as a person who needs continuous support and the professional development of special skills derived from the WONCA core competencies. In addition, GPs want the freedom to choose their working environment, to organize their own practice, and to work in collaboration with other health workers and patients. National policy arrangements on working conditions, income, training and official recognition of general practitioners are important in facilitating the choice of a career in general practice. Stakeholders should be aware of these factors when considering how to increase the GP workforce. --- Availability of data and materials Some data in this study are confidential. The data generated and analyzed during the current study are not publicly available, but the datasets are available from the corresponding author on reasonable request. --- Abbreviations EGPRN: European General Practice Research Network; GP: General practitioner; GPs: General practitioners; n/a: not applicable; UBO: Université de Bretagne Occidentale, France; WONCA: World Organization of National Colleges, Academies and Academic Associations of General Practitioners/Family Physicians. --- Authors' contributions B LF designed the study, collected data, drafted and revised the paper. H B designed the study, collected data and revised the paper. JY LR designed the study, collected data, drafted and revised the paper. H L collected data and revised the paper. S C collected data and revised the paper. A S collected data and revised the paper. R H collected data and revised the paper. P N revised the paper. R A collected data and revised the paper. T K collected data and revised the paper. Z K-K collected data and revised the paper. T M revised the paper. L P designed the study, collected data and revised the paper. All authors read and approved the final manuscript. --- Ethics approval and consent to participate The Ethical Committee of the "Université de Bretagne Occidentale" (UBO), France, approved the study for the whole of Europe: Decision N°6/5 of December 05, 2011. 
The Université de Bretagne Occidentale ethics committee provided ethical approval for the recruitment of doctors from overseas because of the low-risk nature of the study and the practical implications of obtaining ethics approval from multiple countries for the recruitment of small numbers of health professional participants using snowballing. Further, the participant recruitment strategy detailed above precluded us from knowing in advance with certainty which countries we would recruit from and from prospectively applying for ethical approval in each country. The participants provided their written informed consent to participate in the study. --- Consent for publication Not applicable, as no personal information is provided in the manuscript. --- Competing interests Zalika Klemenc-Ketis and Radost Assenova are members (Associate Editors) of the editorial board of BMC Family Practice. The other authors hereby declare that they have no competing interests in this research. --- Author details 1 EA 7479 SPURBO, Department of General Practice, Université de Bretagne Occidentale, Brest, France. 2 Department of Primary and Interdisciplinary Care, Faculty of Medicine and Health Sciences, University of Antwerp, Antwerp, Belgium. 3 Centre for Public Health and Healthcare, Hannover Medical School, Hannover, Germany. 4 Department of Family Medicine, Tel Aviv University, Tel Aviv, Israel. 5 Clinical Psychology Department, Nicolaus Copernicus University, Torun, Poland. 6 Department of Urology and General Medicine, Department of General Medicine, Faculty of Medicine, Medical University of Plovdiv, Plovdiv, Bulgaria. 7 University of Tampere, Faculty of Medicine and Life Sciences, Tampere, Finland. 8 Department of Family Medicine, Faculty of Medicine, University of Ljubljana, Ljubljana, Slovenia. 9 Department of Family Medicine, Faculty of Medicine, University of Maribor, Maribor, Slovenia. --- Publisher's Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Background: General Practice (GP) seems to be perceived as less attractive throughout Europe. Most of the policies on the subject have focused on negative factors. An EGPRN research team from eight participating countries was created in order to clarify the positive factors involved in the appeal and retention of GPs throughout Europe. The objective was to explore the positive factors supporting the satisfaction of General Practitioners (GPs) in clinical practice throughout Europe. Method: Qualitative study, employing face-to-face interviews and focus groups using a phenomenological approach. The setting was primary care in eight European countries: France
Introduction Australian Aboriginal and Torres Strait Islander people possess a rich and vibrant culture and have lived on and cared for the country for over 60,000 years [1]. The sudden disruption to lives and culture brought by British colonization in 1770 has created deep inequities and a high burden of poor health for Aboriginal and Torres Strait Islander people, which has been sustained until this day [2]. This inequity was sustained over the subsequent 200 or more years by a series of racist Australian policy eras resulting in marginalization, disadvantage, and extreme poverty [1]. One of the outcomes for Aboriginal and Torres Strait Islander people has been a decline in physical activity levels [3], contributing to poor health, including the development of chronic diseases such as type 2 diabetes [4]. Chronic diseases represent 70% of the gap in disease burden between Aboriginal and Torres Strait Islander people and non-Aboriginal Australians [5]. Over one-third of the total disease burden Aboriginal and Torres Strait Islander people experience could be prevented by modifying behavioral risk factors such as physical inactivity [6]. Here, we use the terminology 'Aboriginal' to refer to the Indigenous peoples of Australia (other than where Torres Strait Islander people are specifically mentioned in the references supporting this article), as this terminology is preferred by the communities participating in this study. Whilst nationally Aboriginal children participate in more physical activity than their non-Aboriginal counterparts, this difference has been shown to decrease as children transition to adolescence [7]. Two studies conducted in New South Wales (NSW) reflect this activity decline [8,9]. Gwynn et al. reported that, compared with their non-Aboriginal counterparts, rural Aboriginal children aged 10-12 years engaged in more physical activity [8]; however, by adolescence, physical activity participation rates were lower in a cohort aged 13-17 years (21% compared to 28%) [9]. A gender difference was also identified, with Aboriginal boys more likely to participate in physical activity than girls [9]. Aboriginal communities differ around Australia, not only by virtue of geographical location but also due to differences in factors such as language and culture [1]. It is therefore important to describe the experiences of Aboriginal children from different communities across the nation to gain insight into the breadth of experiences around participation in sport and physical activity and better inform relevant strategies and policies. Five studies have reported Aboriginal young people's perceptions about physical activity [10][11][12][13][14]. Of these, three (urban locations) explored children's views of their physical activity in relation to type, amount, and the role this plays in their community [11,13,14]. Only two (rural and remote locations) explored physical activity barriers, neither in NSW [10,12]. The barriers identified in the latter studies included poor community facilities, lack of transport, costs associated with participating in physical activity, and experiences of racism [10,12]. Aboriginal adolescent girls were reported as feeling 'shame' ('stigma and embarrassment associated with gaining attention through certain behavior or actions' [15] (p. 8)) and shyness about wearing swimming costumes in pools and wearing sports clothes to exercise [10,12]. 
An established relationship between schools and the community was identified as a key facilitator to physical activity participation, as was the involvement and support of family and friends [10][11][12][13][14]. A recent study conducted with Torres Strait Islander communities found that community role models had a positive effect on some barriers to physical activity participation [16]. None of these studies were conducted in NSW, and given the cultural diversity between Aboriginal communities, it is yet to be established how applicable these findings are to young people in that state [17]. A recent systematic review of barriers and facilitators of sport and physical activity among Aboriginal and Torres Strait Islander children and adolescents found limited research (only nine studies) with a number of Australian states not represented [18]. The only study from NSW was not peer-reviewed and reported adult community members' perceptions of the barriers and facilitators for children. This study was conducted as a sub-study of the Many Rivers Diabetes Prevention Project (MRDPP) in response to that study's findings regarding the physical activity of Aboriginal children [9,19]. The MRDPP aimed to improve the nutrition and physical activity of children living in the North Coast of rural NSW [19] and found physical activity among Aboriginal children declined over time with differences in patterns of decline existing between Aboriginal and non-Aboriginal children [9]. Despite tending to be more active in primary school [8], Aboriginal children from these communities recorded significant declines in non-organized, organized (winter only), and school activity over time when compared with their non-Aboriginal counterparts [9]. To gain insights into this finding and to inform future physical activity health promotion programs, the study team proposed exploring the Aboriginal children's perceptions of barriers in their communities to sport and physical activity participation [19]. This study aimed to explore rural NSW Aboriginal children's perceptions of the barriers and facilitators to their sport and physical activity participation. The first author of this paper (S.L.) is a non-Aboriginal woman who completed an undergraduate (Honors) degree at the University of Sydney. J.G. is a researcher and non-Aboriginal woman who co-led the MRDPP with N.T. and has worked with the participating communities of this study for 17 years. N.T. is an Aboriginal woman from one of the participating communities who was the Manager Health Promotion and Senior Project Officer of the MRDPP. J.S. is an Aboriginal woman who is also from one of the participating communities and was an Aboriginal Project Officer of the MRDPP. R.P., E.J., and N.A.J. are researchers and non-Aboriginal co-authors who contributed their expertise in physical activity to this research. --- Methods --- Study Design This study utilized a qualitative 'photovoice' methodology derived from the principles of participatory action research. The photovoice method requires participants to take photos which to them represent the topic or issue to be explored. Participants are then interviewed and asked to talk about the photos, typically discussing why these were taken and their meaning. The photos and interviews are the data used in the qualitative analysis. This method crosses cultural and linguistic barriers and enables participants to identify their community's strengths and concerns [20]. 
Photovoice has been shown to be suitable and culturally appropriate for research with Aboriginal communities exploring issues as varied as food insecurity [21] and the experiences of Aboriginal health workers [22]. In this study, the photovoice method allowed children to explore the environmental and contextual factors that they perceived to influence their sport and physical activity participation [20]. --- Aboriginal Governance Structure and Ethics The Aboriginal community governance structure and procedures that guided the MRDPP and this sub-study are described elsewhere [23]. Aboriginal Project Officers (APOs) employed in the MRDPP and from the participating communities led the design and implementation of this research, ensuring cultural safety [23]. The APOs also liaised with other organizations, contributed to the thematic analysis, and co-authored this publication. In writing this paper, the authors applied the consolidated criteria for reporting qualitative research (COREQ) checklist, to ensure transparency in the research methods and that the important aspects of the study process were reported [24]. Ethical approval was received from the Hunter New England Local Health District Human Research Ethics Committee (reference number 11/10/19/4.04) and the Aboriginal Health and Medical Research Council of NSW (reference number 824/11). --- Participants and Recruitment Aboriginal boys and girls aged 10-14 years, residing in two communities (Community A and Community B) on the mid-north coast of NSW, were invited to participate. Recruitment was undertaken using a 'snowball' approach [25], with APOs contacting parents through the Aboriginal Corporation Medical Services (ACMS) in both communities. Parents were asked to inform their children of this study, and the children who were interested consented to participate. Consenting children then invited their peers to participate. Snowball sampling continued until no further potential participants could be identified [25]. Informed consent was obtained from all participants involved in the study. A total of 26 Aboriginal children (12 girls and 14 boys) consented to take part in this study. Of these, 18 children attended the introductory session and were given cameras. A total of 17 children (9 girls and 8 boys) returned their cameras, and each participated in an individual yarn about their photos (Figure 1). The number of photos taken per child varied between 8 and 11. Thirteen yarns were audio-recorded, and hand notes were taken for the remaining four due to the community location of the yarn. In Aboriginal and Torres Strait Islander culture, a yarn is a relaxed and informal style of conversation that takes its own time, often flowing around a topic as information and stories are shared and then within the topic until the natural completion of the yarn [26]. 
Children who signed the consent forms were contacted by APOs via their parents and invited to attend an introductory group yarn in which the study aims, consent process, and study procedures were explained. Each child was provided with a digital camera and informed of its functions. Participants were given a week to take photos of the perceived barriers and facilitators to their physical activity participation in their community. The children also took photos of the physical activities that they enjoyed or wished to engage in. At the end of the week, yarning sessions were undertaken with each child in either a community location or the ACMS according to participant convenience and preference. These were conducted by APOs (J.S. and N.T.) or the lead investigator (J.G.), audio recorded or handwritten where the location was not conducive to audio recording, and audio recordings later transcribed for analysis. Children were invited to yarn about each of the photos they had taken, and these were uploaded to a secure location on the researcher's computer. Prompts were co-designed with the APOs from the participating communities [27]. Once all individual yarns were completed, participants were then invited to a follow-up group yarn to select photos for community posters. Nine children and two parents (who were also aunties to other participants) took part in the first group yarn in Community A, and five children and one parent took part in the second (follow-up) yarn to finalize their choices (Figure 2). 'Aunty' in Aboriginal culture is a term used to describe a respected female Elder in the community who may not necessarily be a family member [28]. In Community B, APOs reached consensus about which photos best reflected the themes arising from the individual yarns with children. Two children and two parents then met for a follow-up group yarn. 
A repeated reflexive approach was taken throughout the process of finalizing photos deemed suitable for inclusion on posters. In Community A, photos were printed out by the research team and brought to the first follow-up group yarn. Children considered their photos and selected those that best represented their views of barriers and facilitators of physical activity. A parent or caregiver of each participant was present for this process. In Community B, due to local community factors at the time, children did not meet as a focus group to identify their selection. Here, the APOs considered the transcripts and handwritten notes, discussed each child's photos, and reached consensus regarding those that best reflected the issues raised by the majority of participants in their interviews. Participants taking part in the group yarn concurred with the APOs' reasoning and choice. The final selection of photos (and related texts) was then considered for inclusion in several draft posters of differing designs by the research team. These posters were intended to be facilitators for community discussion of results. APOs invited all participants and their parents to take part in a poster design focus group in each community. Handwritten notes of the discussion were taken from these focus groups, which largely included parental feedback. To add richness to the findings, notes were cross-checked against key themes by the first author, and information relating to these themes was included. --- Data Analysis Yarning transcripts and photographs were entered into the qualitative research software package NVIVO Version 11 (QSR International, Melbourne, Victoria, Australia) [29] for thematic analysis. Thematic analysis was informed by Braun and Clarke's six stages, which involved data familiarization, initial coding and searching, and reviewing and defining themes [30]. To enhance the rigor of the thematic analysis, S.L. and J.G. independently coded the first three yarns before discussing their similarities and differences. This aimed to reduce the subjectivity that can occur when coding is completed by one researcher [31]. The remainder of the yarns were coded by the first author. 
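The independent double-coding step just described, in which two researchers code the same yarns and then discuss similarities and differences, can be pictured with a minimal sketch. The actual analysis was done in NVivo; the function, transcript names and code labels below are hypothetical placeholders, not the study's data.

```python
# Minimal sketch only: one way to lay out two coders' code assignments so that
# agreements and discrepancies can be discussed. Transcript names and code
# labels are hypothetical placeholders.
def compare_coding(coder_a, coder_b):
    """Print agreed codes and discrepancies for each transcript."""
    for transcript in sorted(set(coder_a) | set(coder_b)):
        a = coder_a.get(transcript, set())
        b = coder_b.get(transcript, set())
        print(f"{transcript}: agreed {sorted(a & b)}")
        if a ^ b:  # symmetric difference = codes only one coder applied
            print(f"  only coder A {sorted(a - b)}, only coder B {sorted(b - a)}")

coder_a = {"yarn_01": {"unsafe facilities", "cost"}, "yarn_02": {"family role models"}}
coder_b = {"yarn_01": {"unsafe facilities", "transport"}, "yarn_02": {"family role models"}}
compare_coding(coder_a, coder_b)
```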
Codes were grouped together by looking at the relationships and connections between them to create categories and, subsequently, subthemes and overarching themes [30]. Preliminary themes along with the original transcripts and photos were sent to the APOs for their review and feedback (written and verbal). This feedback informed the final themes. Posters containing participants' photos and final themes were co-created with the APOs. --- Feedback of Study Outcomes to Communities Results in the form of the posters and a verbal presentation with or without powerpoint slides were discussed at meetings with local city council representatives, key Aboriginal community members involved in the MRDPP, and members of the MRDPP Steering committee. Stakeholders were provided with a copy of the final MRDPP report to contextualize the conduct of this study [19]. Results were also presented for discussion at meetings of the Aboriginal Educational Consultative Groups (AECG) in both communities. Minor changes to wording in one poster were suggested and incorporated. --- Socio-Ecological Framework Physical activity participation is a complex behavior and is determined not only by the individual or their local environment but by 'broader socioeconomic, political and cultural contexts' [32] (p. ii10). A socio-ecological framework was applied to the barriers and facilitators identified by children to assist in understanding the scope of these complex factors and the 'levels' at which these exist in the participants' environment. 
We applied the framework used in a recent mixed-methods systematic review of the barriers and facilitators to Aboriginal and Torres Strait Islander children's participation in sport and physical activity [18] and coded the findings according to the levels they described: individual, interpersonal, community, and policy/institutional. In doing so, we aimed to align our findings and contribute to building evidence for practice. --- Results Thematic analysis revealed seven key themes (Table 1). Interviews and photos depicted a wide range of sports and physical activities enjoyed by the participants, including different types of football, bike-riding, basketball, soccer, running, and swimming. Photos largely reflected the barriers that participants experienced when accessing physical activity opportunities. --- Barriers The physical environment was a key barrier to physical activity, particularly for Community A's participants. Participants cited the littered and vandalized community facilities as a deterrent. Poorly maintained and run-down sporting venues were also reported, with tennis and basketball courts overgrown with grass and no usable equipment (Figures 3 and 4). The poor state of these facilities prevented children from playing there despite their desire to. "this is a photo of the basketball court. People used to drink there a lot and they used to like throw beer bottles and now it's all wrecked because of them an' the basketball nets are like, poles are like, falling, tilting, like it's about to fall..." (P6A.) Participants discussed their experience of a lack of safety when engaging in physical activity due to hazards in the surrounding physical environment. The presence of litter such as glass in local playgrounds was identified by children as 'dangerous'. During the follow-up yarns, most children described continuing to play in playgrounds and parks despite it being unsafe. "... and you can't really see if there's any glass or anything, so you never know when walking around in there. So, it's not very safe." (P2A.) The lack of designated space for children to engage in sports was identified by participants, who also described playing non-organized sports in spaces such as near main roads. This supports children's safety concerns around their physical environment and the lack of accessible and safe places to undertake physical activity. Children identified consumption of unhealthy foods, including processed foods and sugary drinks, as a barrier to engaging in an active lifestyle. They discussed this factor as related to the development of obesity and diabetes, which, in turn, they perceived as having a negative impact on being active. Photos captured unhealthy foods on participants' laps and signs of fast-food stores. "[Soft drink]...it can stop us from playing games outside and it could give you diabetes and you can't really like have what you want to eat sometimes..." (P7B.) "Well like junk food like would like stop you from a lot of sports, like putting on the weight and like things stuff like that." (P9A.) The follow-up yarns expressed the view that the proximity and exposure of unhealthy food and drinks was a contributor to the consumption of these discretionary items. Children would pass the corner shop on the way to school, and high schools would sell sugar-sweetened beverages to students. Participants acknowledged that engagement in excessive screen-based activities was sedentary behavior. 
In interviews, children acknowledged that screen-based activities displaced physical activity participation and recognized the impacts of this. Photos depicted different types of technology use, including iPads and computers.... sitting down... playing the play station or the phone instead of going out and being active... (P5B.) The cost to participate and access physical activity opportunities was noted by participants.
The high price of transport, sports registrations, equipment, and its maintenance were prohibitive for some parents. The cost barrier for parents hindered children from participating in their desired sport(s). In one photo (Figure 5), a participant held up a sign in front of a petrol station stating: Mum only has $5 left from her pay. I play at [a large regional city] that's not going to get me there and back. (P4B.) Handwritten notes from the second group yarns reported that parents were not aware of the funding and support that may be available to enable their children to participate in organized sport(s). Lack of access to transport, both public and private, was associated with limited parental finances and availability of public transport, particularly when children lived out of town. Participants were reliant on parents or extended family members for transport to regular sporting competitions or community facilities. The availability of transport depended on family routine and dynamics.
The issues with availability and affordability of transport were emphasized during the follow-up group yarns. Children discussed walking due to limited access to transport and this being the least-expensive option. Five community-level, three interpersonal-level, and two individual-level barriers (Table 1) were identified when the socio-ecological model was applied. Children perceived barriers to participating in physical activity around: the physical environment, particularly the availability of safe and accessible community facilities; lack of parental finances to support sports participation; consumption of an unhealthy diet; and participation in sedentary activities. --- Facilitators Family members' participation in sports and/or their sporting achievements were identified in both Community A and B as key factors facilitating physical activity, providing children with important role models for being active.... we started paddling out and I asked Dad if I could have a go. (P9A.)... my brother is surfin' an' we all love surfin'... --- (P3A.) Family activities such as fishing were enjoyed on a regular basis. Participants in Community A reported that school facilitated their engagement in regular physical activity. School events, such as the athletics carnival, encouraged children to engage in a variety of sports and to train for them in their own time. The provision of facilities such as the school oval gave children opportunities to engage in physical activity during lunch times. I don't do any sports after school but um every lunch time I'm normally playing touch footy or I'm doing basketball, basketball with my friends. (P1A.) Group yarns (Community A and B) reiterated these findings and discussed school as an important factor in helping children form an active lifestyle. The school was an environment that offered a wide range of opportunities to be active and an opportunity for children to engage in sport with their peers. Schools also enabled participation in physical activity through the provision of financial support and transport, both of which addressed factors described as barriers. Participants enjoyed regular physical activity when they had access to adequate equipment and opportunities. In the final group yarns, participants were enthusiastic about outdoor play/non-organized physical activity as it was enjoyable, there was free choice of activities, and anyone could participate. Despite experiencing the complex barriers that made it difficult for children to be active, including gender role perceptions for one child, participants still desired to engage in physical activity. I took that picture like that cos it's just saying that some kids actually wanna go in there and use it and stuff. (P2A.) Too old to play football because I am a girl, I still want to play football though. --- (P3B) Participants proposed several suggestions to improve opportunities for physical activity in their community. This included better facilities and improved use of space by building community facilities.... the council should put ah real basketball court out the ridge cos we have a lot of space there. (P3A.) Three interpersonal, and two each of individual, community, and institutional facilitators (Table 1) were identified when the socio-ecological model was applied. Facilitators were largely apparent at the individual and interpersonal level, with friends and family key facilitators. 
At the institutional level, schools were central to many children's ability to take part in sports and physical activity. Children's vision for improvements to their opportunities for physical activity was directed at the community level. They imagined facilities that better suited their community along with better use of space for community facilities. --- Discussion This study appears to be the first to explore rural NSW Aboriginal children's perceptions of the barriers to and facilitators of their sports and physical activity participation. We found that the key facilitators of Aboriginal children's physical activity exist at the interpersonal and institutional levels of the socio-ecological approach [18] and are physical activity engagement with friends, the strength of the family unit, and schools presenting opportunities for children to be active. The key barrier to physical activity participation identified by children was at the community level regarding poorly maintained community facilities and related safety issues. Other barriers perceived by participants included: intake of unhealthy foods, excessive screen time, inability to afford physical activity opportunities experienced as costly, and reliance on parents for transport. The strength of the family unit as a key facilitator for physical activity aligns with the perceptions of Aboriginal children elsewhere [10][11][12][13][14]. Children discussed their family members (parents or siblings) who participated in sport and their sporting achievements as supporting and encouraging their physical activity. This factor is also a prominent facilitator for Aboriginal and Torres Strait Islander adults' physical activity participation [3]. Aboriginal people view physical activity as a collective occupation providing connections with others and the wider community [33]. Aboriginal families (parents and siblings) play a crucial role in supporting children and young people's physical activity engagement through encouragement, role-modeling an active lifestyle, and facilitating activities involving exercise [12,13]. The lack of family involvement has been described as hindering children's physical activity engagement in the Torres Strait and surrounding country [10]. Friends enable physical activity participation through the inherent enjoyment and fun experienced by children being active together in play, general activity, and sport [12]. Participants' enjoyment and desire to participate in physical activity led them to hold aspirations for their community, including how space can be utilized to build community facilities such as a new basketball court. Enjoyment of sport and a desire to remain physically active have also been identified as facilitators to physical activity participation by Aboriginal adults [3,34]. As such, strategies to increase physical activity should explore options where children can also socialize with their peers or within an environment that encourages social connection. School is experienced by Aboriginal children in this study as an environment that not only has better access to facilities and equipment but fosters socialization with friends. This aligns with findings elsewhere that have identified that an established relationship between schools and the community positively influences young Aboriginal people's engagement in physical activity [12] and that Aboriginal children report school facilities and community events provide them with opportunities to be active [9,11,12]. 
Deteriorating community facilities and the resulting lack of safety reported by these NSW rural children expand on reports from studies in other Australian jurisdictions regarding rural Aboriginal children's perceptions [12]. These factors present a significant deterrent to physical activity [35]. NSW state government policy and legislation control the availability and quality of community facilities and the accessibility of neighborhoods, often through the actions of the local councils that the state funds [32]. Infrastructure in these communities is primarily funded by rates collected from residents [36]. As rates are calculated on property value [36], and property values are lower in the participating communities, fewer funds are available for infrastructure management. We suggest that the potential benefits of supplementing rates with additional funds be considered by local councils to ensure that infrastructure relevant for children's health and wellbeing is adequately maintained in disadvantaged areas. Participants in this study largely appeared to understand physical activity as engagement in organized sports, such as football, along with related non-organized sport/practice. The availability of relevant, accessible community facilities is therefore important. We note, however, that children did not consider the incidental exercise that takes place from day to day, such as walking to and from community facilities or walking as transport, to be physical activity. We call for local councils, communities, and schools to consider campaigns to promote alternatives to team sports, such as bike riding and walking, to support children's understanding that participating in such activities is also beneficial for their health. Such campaigns must be led by and co-designed with Aboriginal communities [27,37]. Children in this study identified the consumption of unhealthy foods and exposure to excessive screen-time as barriers to physical activity. Children described the association of these factors with low levels of physical activity and poor physical health, citing chronic diseases such as diabetes and obesity, both prevalent in their communities [2]. These barriers have not been identified by young people in previous studies exploring Aboriginal and Torres Strait Islander children's views on their physical activity [10,12] and should be addressed in the design of future strategies to improve physical activity participation. Sedentary behavior due to time spent on screen-based activities is an issue for all children; however, a national report has found that Aboriginal children spend 25 min more on technology per day than their non-Aboriginal counterparts [7]. This is, therefore, a barrier that also warrants inclusion in programs that address children's physical activity participation. Participants described parental circumstances around vehicle availability and sufficient finance to afford car-associated costs as barriers to accessing sporting competitions or community facilities. This has also been identified by other young Aboriginal people as a barrier
to accessing physical activity opportunities [12]. Transport disadvantage is common for Aboriginal people due to the lack of access to and affordability of private and public transport options [38], particularly for those living in rural and remote parts of Australia. Lack of transport has been identified as a key barrier to physical activity and sports participation by Aboriginal and Torres Strait Islander adults [3]. The costs of public bus services in rural NSW have been found to be substantially higher than in metropolitan areas and are more than residents are able to afford [39]. A lack of affordable and accessible transport places Aboriginal children at a further disadvantage when accessing physical activity opportunities. We suggest that local councils consider offering (or expanding) a community bus service to support weekend sport participation for children. The inability to afford to participate in physical activity, including organized sport, due to low income has been noted by young rural Aboriginal people [12]. Aboriginal adults have also stated that the high cost of sports participation relative to their income is a very significant barrier to accessing physical activity opportunities [3,34]. While costs are also cited as a top barrier for other Australian children [40], additional financial barriers exist for Aboriginal people, who experience socioeconomic disadvantage more often than other Australians and have a lower weekly household income compared to other households [5]. Associations between low physical activity levels and socioeconomic disadvantage have previously been identified [41], and the high costs associated with sport may contribute to low rates of physical activity for Aboriginal children and youth. In our study, parents indicated that they were not aware of local schemes through sports organizations or local councils to support the costs of children's participation in sports. It has been suggested elsewhere that better promotion of sporting opportunities through local agencies and clubs to young Aboriginal people may influence physical activity participation [12].
The enduring impact of colonization on Aboriginal communities is an overarching driver of the barriers to physical activity participation identified in this study and was identified as such by the APOs involved in this study when discussing the results. The socioeconomic disadvantage and lower weekly income evident in many Aboriginal communities [5] have been acknowledged as enduring impacts of colonial government policies, which also included regulating the income of Aboriginal people, forced disconnection from traditional land, forced removal of children, and marginalization of communities [42]. Marginalization included being required to live in settlements or missions 'out of town' and being either barred from entering a town or segregated if permitted to use facilities [1]. Poor community cohesion and racism were identified by Aboriginal parents from the participating communities as an ongoing barrier to their children being active [19], and also have their origins in colonial-government policies that disrupted and fractured communities [33]. Adopting the principles of co-design [27,36] when developing physical activity programs for Aboriginal children and ensuring that these programs are led and delivered by local Aboriginal community members [43] are recognized as imperative to improving the accessibility and cultural relevance of such strategies [23,33]. However, these approaches are yet to be widely implemented: What has been missing from these... (government policies since 1989)... commitments is the genuine enactment of the knowledges that are held by Indigenous Australians relating to their cultural ways of being, knowing and doing. Privileging Indigenous knowledges, cultures and voices must be front and centre in developing, designing and implementing policies and programs. The sharing of power, provision of resources, culturally informed reflective policy making, and program design are critical elements [44] (p. 1). Strengths of this study include the use of a novel method of investigating Aboriginal children's perceptions of physical activity participation, allowing their voices to be heard. The participatory action research approach enabled a flexible response to participant and community needs and supported their engagement at all stages of the study. A reflexive approach to the final selection of photos allowed careful consideration of those that best represented participants' views. The strong Aboriginal community governance structure enabled guidance on all aspects of the research process [23]. Community consultations allowed findings to be discussed with various Aboriginal community members who have been involved in the MRDPP and with local council representatives who wished for additional information. The posters distributed to community stakeholders allowed for further dissemination of results at a local level. A limitation of this study was that a number of community-level events and challenges unrelated to the study emerged in Community B over the time that the yarns took place. These impacted recruitment numbers and children's participation in follow-up yarns. However, the participation of APOs from the communities mitigated this issue to some degree, and feedback received from the community when the results were presented was positive. --- Conclusions This photovoice study enabled Australian Aboriginal children from rural NSW to describe their experiences of sport and physical activity participation in their communities for the first time.
Results extend the limited representation of Aboriginal children's voices on this topic nationally. The identification of key facilitators at the interpersonal and institutional level and of barriers at the community level offer guidance for future strategies to address improvements in enabling Aboriginal children to participate more fully in the sports and physical activities that they aspire to. Prioritizing the maintenance of community facilities is important in enabling access to physical activity opportunities, and children held strong aspirations for improved and accessible facilities. Transport accessibility, along with the costs of sports participation, continue to be barriers to Aboriginal children's engagement in sport and physical activity and require a whole-government response. The strengths of families and friendships should be harnessed to facilitate participation in sport and physical activity. Barriers and facilitators identified by Aboriginal children are a result of the enduring impact of colonization on families and communities. Aboriginal community co-design and leadership of all matters of relevance to their communities, including in public health and health promotion, are essential and widely recognized as central to improvements in health and wellbeing [45]. However, the development of policies and programs that embody these approaches is only emerging, and implementation is yet to be fully understood and accepted. Only once this occurs will Australian Aboriginal children be enabled to wholly engage with and benefit from the sports and physical activity that they desire. Author Contributions: Conceptualization, J.G., J.S. and N.T.; methodology, J.G. and S.L.; software, S.L. and J.G.; validation, S.L., J.G., J.S. and N.T.; formal analysis, S.L., J.G., J.S. and N.T.; investigation, S.L., J.G., J.S. and N.T.; resources, J.G.; data curation, S.L. and J.G.; writing-original draft preparation, S.L., J.G., J.S. and N.T.; writing-review and editing, S.L., J.G., J.S., N.T., R.P., E.L.J. and N.A.J.; visualization, S.L., J.G., J.S., N.T., R.P., E.L.J. and N.A.J.; supervision, J.G.; project administration, J.G., J.S. and N.T.; funding acquisition, J.G., R.P., E.L.J. and N.A.J. All authors have read and agreed to the published version of the manuscript. --- Data Availability Statement: Restrictions apply to the availability of these data. Data was obtained from the participating Aboriginal communities and are available from the authors with the permission of the representatives of these communities. --- Informed Consent Statement: Informed consent was obtained from all subjects involved in the study. --- Conflicts of Interest: The authors declare no conflict of interest.
Participating in physical activity is beneficial for health. Whilst Aboriginal children possess high levels of physical activity, this declines rapidly by early adolescence. Low physical activity participation is a behavioral risk factor for chronic disease, which is present at much higher rates in Australian Aboriginal communities compared to non-Aboriginal communities. Through photos and 'yarning', the Australian Aboriginal cultural form of conversation, this photovoice study explored the barriers and facilitators of sport and physical activity participation perceived by Aboriginal children (n = 17) in New South Wales rural communities in Australia for the first time and extended the limited research undertaken nationally. Seven key themes emerged from thematic analysis. Four themes described physical activity barriers, which largely exist at the community and interpersonal level of children's social and cultural context: the physical environment, high costs related to sport and transport, and reliance on parents, along with individual risk factors such as unhealthy eating. Three themes identified physical activity facilitators that exist at the personal, interpersonal, and institutional level: enjoyment from being active, supportive social and family connections, and schools. Findings highlight the need for ongoing maintenance of community facilities to enable physical activity opportunities and ensure safety. Children held strong aspirations for improved and accessible facilities. The strength of friendships and the family unit should be utilized in co-designed and Aboriginal community-led campaigns.
Introduction Adolescence, more than any other developmental stage, is characterized by heightened susceptibility to peer influence [1], which makes adolescents vulnerable to initiating or maintaining risky habits such as heavy drinking [2]. People are likely to engage in behaviors that match their perceptions of what is "normative," especially the perceived characteristics of those who represent idealized identities, such as high-status peers. Many deviant and risky behaviors are associated with high peer status, and it has been suggested that some adolescents strive to imitate their high-status peers through a process of social comparison [3]: adolescents contrast their own values, interests, beliefs, and behaviors with their perceptions of others and, in doing so, construct a sense of identity. Various risk factors for problem drinking among youth have been identified by researchers, with the emphasis across studies being on risk and protective factors [4,5]. There is increasing evidence that social environmental factors influence alcohol consumption and related harms among youth. Social capital is one contextual factor that has been related to binge drinking, defined as consuming five or more drinks on one occasion [6], among adolescents. Social capital is defined as the resources, such as social support, trust, and information channels, accessed by individuals through their social networks [7]. Social trust and social participation have each been protectively associated with alcohol use among high school students [8]. Binge drinking has a strong social component [9,10]. Adolescents are more likely to drink in social settings, allowing their drinking habits to be visible to peers. The combination of risk taking and the visibility of alcohol use in peer settings may allow adolescents to maintain their social network status and gain popularity [11]. In addition, some studies have shown that binge drinking varies by gender and socioeconomic status, although these associations are not always consistent. Because both alcohol use and peer influence increase during adolescence, it is critical to consider longitudinal influences of peer groups on the developmental trajectory of adolescent alcohol use [12]. Furthermore, studies that investigated the association between binge drinking and social capital have not attempted to identify differences among the sub-dimensions of the social capital construct [4,13]. The aim of the present longitudinal study was therefore to investigate the association of social capital with longitudinal changes in the frequency of binge drinking among adolescents at public and private high schools in the city of Diamantina, Brazil. --- Materials and methods --- Study design and sample To investigate the incidence of binge drinking, a survey was carried out involving all adolescents enrolled in the public and private schools of the city of Diamantina/MG, Brazil, who were 12 years old during the data collection months of the study. Data on school addresses and the number of students enrolled in each class were obtained from the State and Municipal Education Departments. Subsequently, 633 adolescents from all 13 public and private schools in Diamantina/MG were invited to participate in the study; the schools were notified by telephone in advance to schedule the researcher's visit, at which time the objectives of the research and the activities to be carried out at the school were explained.
The approval of the Ethics Committee and the authorizations of the State and Municipal Secretariats of Education were also presented. After obtaining the consent of the school management and teaching staff, the researcher contacted classes during class time, in the teacher's presence, for awareness raising. The inclusion criteria were: being enrolled in a public or private school in the urban area of the city of Diamantina; being 12 years old on the day of the assessment; being authorized by parents/guardians; and agreeing to participate in the research. Adolescents not authorized by parents or guardians, or who did not agree to participate, were excluded from the study. The researcher explained the purpose of the research and asked the students to answer the questionnaires, ensuring the confidentiality of the answers and of the evaluation of student participation. In the baseline survey (2013), the sample consisted of 588 students (participation rate: 92.89%). The reasons for dropouts were non-authorization from parents/guardians or adolescents (4.62%; n = 28) and failure to complete the questionnaires (2.9%; n = 17). In 2014, a new data collection procedure was carried out with these adolescents when they were aged 13 years. Again, all 13 public and private schools in Diamantina/MG were invited to participate in the study and were notified by telephone in advance to schedule the researcher's visit. Only adolescents who were authorized by their parents or guardians and who agreed to participate were included. Thus, the follow-up study involved a sample of 588 adolescents (100%). To achieve a 100 percent follow-up rate, the researchers responsible for the data collection made calls to the homes of students who were not present on the previously scheduled day, which led the researchers to return to some schools more than once. Furthermore, access was relatively easy because the researchers lived in the region and had close contact with the directors of the schools. --- Measures The Alcohol Use Disorder Identification Test (AUDIT C), validated for use in Brazil [14], was employed for the evaluation of alcohol intake. The AUDIT instrument can identify whether an individual exhibits hazardous (or risky) drinking, harmful drinking or alcohol dependence [15]. The AUDIT C (the first three questions on the AUDIT instrument, which relate to the frequency and amount of alcohol consumed) was used, as this version can be employed as a stand-alone screening measure to detect hazardous drinkers among adolescents [16,17]: a) "How often did you have a drink containing alcohol in the past year?" b) "How many drinks containing alcohol did you have on a typical day when you were drinking?" c) "How often do you have five or more drinks on one occasion?" The latter item was used to identify binge drinking [18]. The response options are never, less than monthly, monthly, weekly, and daily or nearly daily. Responses of "never" were coded as 0 in the analysis, "less than monthly" and "monthly" were coded as 1, and "weekly" and "daily or nearly daily" were coded as 2. Although the AUDIT C was used to measure alcohol involvement, the dependent variable was the change in alcohol consumption, calculated from the difference in consumption observed between 2013 and 2014 and categorized into "reduced or unaltered frequency intake" and "increased frequency intake"; this variable was based only on the AUDIT binge item (c).
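The coding rules above translate directly into a small data-preparation step. The sketch below is illustrative only: the column names (binge_freq_2013, binge_freq_2014) are hypothetical, since the paper does not publish its data dictionary, and the actual analysis was carried out in SPSS rather than Python.

```python
import numpy as np
import pandas as pd

# AUDIT C binge item (c) response coding described in the text.
FREQ_CODES = {
    "never": 0,
    "less than monthly": 1,
    "monthly": 1,
    "weekly": 2,
    "daily or nearly daily": 2,
}

def binge_change(df: pd.DataFrame) -> pd.Series:
    """Code item (c) at both waves and flag an increase between 2013 and 2014."""
    baseline = df["binge_freq_2013"].str.lower().map(FREQ_CODES)   # hypothetical column
    follow_up = df["binge_freq_2014"].str.lower().map(FREQ_CODES)  # hypothetical column
    return pd.Series(
        np.where(follow_up > baseline, "increased", "reduced_or_unaltered"),
        index=df.index,
        name="binge_change",
    )
```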
Our predictor variables included sociodemographic and economic characteristics (gender, type of school, mother's education, family income) and social capital. For the evaluation of social capital, we used the Social Capital Questionnaire for Adolescent Students (SCQ-AS), which was developed and validated by our research team [19]. The study population included in the development and validation of the instrument was a convenience sample made up of 101 students aged 12 years enrolled in the public and private school systems in the city of Diamantina/MG, Brazil. This questionnaire is composed of items selected from the national and international literature and has been submitted to face validation, content analysis and analyses of internal consistency (Cronbach's alpha: 0.71), reliability and reproducibility (kappa coefficient range: 0.63 to 0.97) [19]. The factor analysis grouped the 12 items into four subscales: Social Cohesion at School; Network of Friends at School; Social Cohesion in the Community/Neighborhood; and Trust at School and in the Community/Neighborhood. Social capital scores range from 12 to 36 points, with a higher score denoting higher social capital (Table 1). As the questionnaire was designed for children and adolescents, the decision was made to use a three-point Likert scale with response options of 'I agree', 'I neither agree nor disagree' and 'I disagree'. This procedure was based on the target age group and was chosen to avoid confusion during the filling out of the questionnaire. The findings confirm indications in the literature that networks of friends and neighborhood cohesion reflect experiences one shares with one's peers and underscore the importance of the present questionnaire as an assessment tool for measuring social capital. Based on its distribution, the social capital variable was dichotomized at the median into high (31 points or more) and low (less than 31 points). For each adolescent, the difference between the measures of social capital in the two evaluations was calculated: the total social capital score at follow-up (FSC) minus the total score at baseline (BSC) yielded three categories: increase in social capital (FSC > BSC), reduction (FSC < BSC) and unaltered (FSC = BSC). We treated sex, type of school (public or private), maternal education and family income as time invariant. --- Statistical analysis Data analysis was performed using the Statistical Package for the Social Sciences (SPSS for Windows, version 22.0, SPSS Inc., Chicago, IL, USA) and included frequency distributions and association tests. The chi-square test was used to determine the statistical significance of associations between binge drinking and the independent variables (p < 0.05). Given the high prevalence of the outcome (>20%), we used a log-binomial model to calculate prevalence ratios (PR) and 95% confidence intervals [20]; log-binomial models were used for both univariate and multivariable analyses [20]. The two-tailed p value was set at <0.05. --- Ethical considerations This study received approval from the Human Research Ethics Committee of the Federal University of Minas Gerais (Brazil) (COEP-317/11). All parents/guardians signed a statement of informed consent. --- Results The sample comprised 588 students (participation rate at one-year follow-up: 100%).
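As a rough illustration of the scoring and modelling described above, the following Python sketch derives the social capital change categories and fits a log-binomial GLM whose exponentiated coefficients are prevalence ratios. It assumes hypothetical variable names and uses statsmodels as a stand-in for the SPSS procedures the authors actually used; note that log-binomial models can fail to converge, a known practical caveat.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def social_capital_change(fsc: pd.Series, bsc: pd.Series) -> pd.Series:
    """Classify follow-up (FSC) minus baseline (BSC) total social capital (each 12-36)."""
    diff = fsc - bsc
    labels = np.select([diff > 0, diff < 0], ["increase", "reduction"], default="unaltered")
    return pd.Series(labels, index=fsc.index, name="sc_change")

def prevalence_ratios(y: pd.Series, X: pd.DataFrame) -> pd.DataFrame:
    """Fit a log-binomial GLM; exp(coefficients) are prevalence ratios with 95% CIs.

    y: 1 = increased binge-drinking frequency, 0 = reduced or unaltered.
    X: dummy-coded predictors (e.g., the social capital change categories).
    """
    X = sm.add_constant(X)
    model = sm.GLM(y, X, family=sm.families.Binomial(link=sm.families.links.Log()))
    res = model.fit()  # IRLS; may need starting values if convergence is an issue
    ci = np.exp(res.conf_int())
    table = pd.DataFrame({"PR": np.exp(res.params), "CI_low": ci[0], "CI_high": ci[1]})
    return table.drop(index="const")
```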
Boys accounted for 48.6% (n = 286) of the sample. The vast majority of participants attended public schools (92.2%; n = 542). A total of 75.2% (n = 442) of adolescents were from families that earned up to three times the Brazilian monthly minimum wage, and 61.6% (n = 361) of the mothers had less than eight years of schooling (Table 2). The prevalence of binge drinking was 23.1% in 2013 and had risen to 30.1% in 2014, i.e. there was a 7 percentage point increase in the prevalence of binge drinking over the period. Of the 452 adolescents who reported never consuming five or more alcoholic drinks at one time in 2013, 41 started to do so with some frequency in 2014 (Table 3). With respect to changes in the total social capital score between baseline (2013) and follow-up (2014), 340 adolescents (58.4%) showed no change, 184 (31.6%) showed an increase and 58 (10.0%) showed a reduction. Six students did not adequately answer the questionnaire. Table 4 shows the distribution of changes in the social capital subscales between baseline and follow-up and their association with the change in binge drinking over the same period. A total of 166 (28.3%) students increased their social capital on the 'Social Cohesion at School' subscale, and 457 (78.0%) showed a reduced or unchanged score on the 'Network of Friends at School' subscale between baseline and follow-up. Twenty-six (21.4%) adolescents who reported an increase on the 'Social Cohesion in the Community' subscale also showed an increase in binge drinking at follow-up, and 188 (95.4%) reported a reduction on the 'Trust' subscale together with a reduction in binge drinking (Table 4). The log-binomial model shows the incidence of binge drinking according to the background characteristics of the respondents. Gender (PR 0.67; 95% CI 0.40-1.13) and socioeconomic status (type of school and mother's education) were not associated with the increase in the frequency of binge drinking. However, social capital was significantly associated with an increase in binge drinking by students (Table 5). Table 6 shows the prevalence ratios of changes in the frequency of binge drinking according to the social capital subscales. Adolescents who reported an increase on the social cohesion in the community/neighborhood subscale were 3.3 times more likely (95% CI 1.83-6.19) to binge drink. In addition, adolescents who reported a decrease on the trust subscale were less likely (PR 0.4; 95% CI 0.21-0.91) to binge drink. However, the social cohesion at school and network of friends at school subscales were not associated with the outcome. --- Discussion The present study examined the frequency of binge drinking among adolescents at public and private schools in the city of Diamantina (southeastern Brazil). The increase in the prevalence of binge drinking over the follow-up period was 7 percentage points, and this increase was fivefold greater among adolescents who exhibited an increase in social capital. Our social capital questionnaire was designed to distinguish the influence of social capital in the different contexts to which adolescents are exposed, i.e. the school environment versus the neighborhood environment. We therefore analyzed the subscales separately.
Our findings suggest that adolescents' drinking behavior is more responsive to changes in the neighborhood context and trust than to the school context and the friendship network at school. The literature suggests that the concept of social capital can be broken down into 'structural' and 'cognitive' social capital [21]. Structural aspects of social capital refer to roles, rules, precedents, behaviours, networks and institutions. These may bond individuals in groups to each other, bridge divides between societal groups or vertically integrate groups with different levels of power and influence in a society, leading to social inclusion. By contrast, cognitive social capital taps perceptions and attitudes, such as trust toward others, that produce cooperative behaviour [22]. In contrast to the results of the present study, previous reports found that students from U.S. colleges with higher levels of social capital were at lower risk for binge drinking [5,23]. The discrepancy may be due to differences in the aspects of social capital examined in the different settings. Specifically, the study of binge drinking in U.S. colleges focused on the structural aspect of social capital, as measured by the participation of students in voluntary activities [23]. However, students in our Brazilian sample were at greater risk of binge drinking if they reported higher social capital in the cognitive dimension, i.e. feelings of more cohesion in their communities and neighborhoods, and they were less likely to binge drink if they reported a decrease on the trust subscale. The differences between these studies, including the age of the subjects, underscore the observation that social capital can have both positive and negative health implications, depending on the form it takes [24]. In samples of older adolescents, who binge drink more often, we may find a richer (e.g., expected gender effects) and possibly more intuitive pattern of results. Individuals who have higher levels of social support and community cohesion generally are thought to be healthier because they have better links to basic health information, better access to health services, and greater financial support with medical costs [7]. However, it is important to consider the impact of complex community factors on individual behaviors. Some factors, such as social stratification (i.e., the probability of living in certain neighborhoods, which is higher for certain types of persons) and social selection (i.e., the probability that drinkers are more likely to move to certain types of neighborhoods), may affect health risk behaviors, including alcohol use [7]. In addition, previous research has highlighted the importance of having trust in the peers with whom adolescents drank alcohol [25]. Young people usually drink more with peers whom they trust, probably because of a tacit acknowledgement that a friend understands unspoken rules and can be relied upon [25]. Past studies have found that binge drinking usually takes place in groups; therefore, peers play an important role in promoting binge drinking, perhaps due to peer selection or peer influence (socialization) [4,23]. Our results show that the social cohesion in the community/neighborhood subscale was significantly associated with an increase in binge drinking, and a decrease on the trust subscale was related to a decrease in the frequency of binge drinking among students.
Although the literature on peer influence on binge drinking is well established, the social cohesion at school and network of friends at school subscales were not associated with the outcome. Drinking is viewed by young people as a predominantly social activity which provides an opportunity for entertainment and bonding with friends [25]. Throughout life, friendships can direct development through support, modeling, and assistance, but the significance of friendships is heightened during adolescence [26]. A previous study showed that adolescents' baseline alcohol use status (drinker/nondrinker) strongly predicted the acquisition of friends exhibiting similar alcohol use patterns twelve months later [27]. Another study among young students [28], which analyzed individual and contextual risk factors for alcohol use (temperamental disinhibition, authoritarian and authoritative parenting, and parental alcohol use) assessed during childhood and adolescence, revealed significant variability in the association between alcohol consumption and deviant friends and found that having deviant friends was a significant covariate of alcohol consumption. Furthermore, this study revealed a significant Disinhibition × Parental Alcohol Use interaction; childhood disinhibition interacted with parental alcohol use to moderate the covariation of drinking and deviant friends [28]. The relationship between social environments and binge drinking is complicated in part because of reverse causality or simultaneity. Environmental factors (i.e. school and neighborhood characteristics) may be spuriously linked to binge drinking because, for example, adolescents who live in neighborhoods where violent crime is high and access to illicit substances is easy may be less likely to be socially connected and more likely to consume alcohol [29]. Despite being a well-established determinant, the influence of socioeconomic status on health is not well understood, and little research has focused on its effects on health during adolescence [30]. In the present study, socioeconomic status was not associated with the increase in the frequency of binge drinking among adolescents. Some studies have demonstrated that adolescents from higher socioeconomic status (SES) backgrounds have a greater propensity to use alcoholic beverages and to engage in binge drinking [4,31,32]. This may be because of higher discretionary income (pocket money) or easier access to alcohol in their homes. However, other studies have found an association between lower socioeconomic status and greater alcohol consumption [16,33], and still others have found no significant association between socioeconomic status and alcohol intake [34,35]. The literature highlights that differences in results may be partially explained by the use of different indicators, such as family income, social class, level of schooling and school type, as well as by considerable variation in cut-off points, the specific culture and the age of the drinker. In the present study, we did not find a statistically significant association between gender and the incidence of an increase in binge drinking. This may be explained by changing gender norms over time, which have made it more acceptable for girls to engage in risky behaviors [36].
In accordance with our results, a longitudinal study that used national data to describe gender differences in the health behavior of adolescents found that, in the case of binge drinking, girls' behaviors have converged with the rates among boys [36]. A first limitation of our study is that, as the data were derived from self-administered questionnaires, lack of attentiveness should be taken into consideration. Second, despite emphasizing the importance of giving honest responses, the findings may have been underestimated due to self-censoring and/or a suspicion that school authorities could gain access to the answers on the questionnaires. Third, information on the influence of friends and on characteristics of friendship networks, such as density, size, quality of contacts, proximity and centrality, was not collected in the present study, despite the fact that binge drinking has been associated with such factors [1-4, 12, 13]. The aim of the questionnaire was to provide an easily understood measure of social capital, applicable to adolescent students, that encompasses the different domains of social capital for this population. Even though the questionnaire did not measure characteristics of friendship networks, such as density, size, quality of contacts, proximity and centrality, it measures contexts that involve social relationships, such as experiences at school and in the local community, which can exert an influence on the behavior and decisions of adolescents, thereby reflecting health determinants. The Social Capital Questionnaire for Adolescent Students (SCQ-AS) has been shown to be an appropriate assessment tool for epidemiological studies involving samples of adolescents that investigate the association between social capital and risk factors or health determinants. Finally, we cannot generalize findings from this study to older adolescents within Brazilian culture. --- Conclusion Binge drinking involves groups of interconnected people who evince shared behaviors, and it is a public health and clinical problem. Targeting these behaviors should involve addressing groups of people and not just individuals [24]. Our results provide new evidence about the "dark side" of social cohesion in promoting binge drinking among adolescents. Social capital interventions must include school and community engagement, parental involvement, and peer participation components to address the complex array of factors that influence adolescent alcohol use. --- All relevant data are within the paper and its Supporting Information files.
Adolescence is characterized by heightened susceptibility to peer influence, which makes adolescents vulnerable to initiating or maintaining risky habits such as heavy drinking. The aim of the study was to investigate the association of social capital with longitudinal changes in the frequency of binge drinking among adolescents at public and private high schools in the city of Diamantina, Brazil. This longitudinal study used two waves of data collected when the adolescents were 12 and 13 years old. At the baseline assessment in 2013, a classroom survey was carried out with a representative sample of 588 students. In 2014, a follow-up survey was carried out with the same adolescents when they were aged 13 years. The Alcohol Use Disorder Identification Test-C (AUDIT C) was employed for the evaluation of alcohol intake. Our predictor variables included sociodemographic and economic characteristics (gender, type of school, mother's education, family income) and social capital. For the evaluation of social capital, we used the Social Capital Questionnaire for Adolescent Students (SCQ-AS). Descriptive and bivariate analyses were performed (p <0.05). The log-binomial model was used to calculate prevalence ratios (PR) and 95% confidence intervals. The two-tailed p value was set at <0.05. The prevalence of binge drinking was 23.1% in 2013 and had risen to 30.1% in 2014. Gender (PR 1.48; 95% CI 0.87-2.52) and socioeconomic status (type of school and mother's education) were not associated with the increase in the frequency of binge drinking. However, higher social capital was significantly associated with an increase in binge drinking by students. Adolescents who reported an increase on the social cohesion in the community/neighborhood subscale were 3.4 times more likely (95% CI 1.96-6.10) to binge drink. Our results provide new evidence about the "dark side" of social cohesion in promoting binge drinking among adolescents.
Introduction Racism emerges whenever the social and individual values, norms and practices of a given group are considered superior to others'. Racism occurs with the particular aim of creating, maintaining or reinforcing power imbalances, as well as the corresponding inequalities in opportunities and resources, along racial lines [1]. Similar to most contemporary societies, Australia is characterized by co-existing expressions of cultural diversity on the one hand, and the negative impacts of racism on social cohesion on the other [1]. In Australia, the mental health costs directly attributable to racism have been estimated at 235,452 disability-adjusted life years lost, which is equivalent to an average $37.9 billion in productivity loss per annum, or 3% of Australia's annual Gross Domestic Product (GDP), over 2001-2011 [2]. Such a strong relationship is an indication that racism may erode the very social fabric of Australian society by producing mental disorders and suffering, which unevenly impacts upon racially marginalized groups. Social conceptions that shape intergroup relations form the common ground upon which intergroup attitudes and discriminatory behaviour take place [3]. From an empirical viewpoint, findings suggest that racist attitudes are associated with racist behaviours and racial-ethnic minorities' experiences of discrimination [4]. Positive attitudes towards diversity, however, are negatively associated with discriminatory behaviour [5]. In this study, we propose to explore attitudes in relation to multiculturalism, a construct of special relevance to the social, economic and political fabric of contemporary Australia [6]. We focus on multiculturalism as an ideology of acknowledging and celebrating ethnic and cultural differences, in which the need for preserving cultural identities is recognized [7]. It reflects a "sensibility and [a] disposition towards cultural differences among large sections of the population" [8]. Data from the 2016 Australian Census revealed that one in three Australians were born overseas, and a similar proportion of individuals speak a language other than English at home. Nevertheless, assimilationist attitudes, i.e. expectations of conformity to the dominant culture, often prevail, as opposed to multiculturalist perspectives that accept and praise racial and ethnic-cultural diversity [9]. Understanding attitudes to multiculturalism can help to unveil the dynamics of racism and discrimination against minorities in the country, fostering public debate and policy formulation aimed at promoting positive intergroup relations [10]. Research on ethnic-racial intergroup attitudes draws from theories on ideological attitudes that explain group-based dominance and social cohesion [11][12][13]. Social Dominance Orientation (SDO), for example, reflects the degree to which respondents believe that hierarchy-based dominance between social groups is natural [14]. Discrimination against minorities, therefore, can be explained by the degree of endorsement of the notion that group-based hierarchies are natural and inevitable [14]. Endorsement of group-based dominance and out-group prejudice tends to increase among those who highly identify with the dominant group, as they represent a mechanism for maintaining the in-group status quo [12]. Research on ethnic-racial intergroup relations in contemporary societies has also explored the concept of Right-wing Authoritarianism (RWA) [15][16][17].
RWA is characterized by the endorsement of socially conservative values, morality, collective security, group-based social cohesion, and strict obedience to social authorities [15,17]. Those who endorse RWA values can be more sensitive to threats to social stability, being prone to conservative values so as to increase their perception of control and collective security [18]. Perception of threat has been shown to mediate the association between group identification and attitudes towards multiculturalism [11]. Those who consider immigrants or ethnic-racial minorities a threat to the control of resources or to the maintenance of dominant social values tend to endorse more conservative/assimilationist attitudes towards multiculturalism [11,19]. Sustaining the dominant group's status quo can also be achieved by not acknowledging ethnic-racial inequalities in the population. The so-called colour-blind racial ideology denies the existence of racism and justifies racial inequalities as the result of personal decisions, meritocratic achievements, and market forces [20,21]. By denying racist practices and racial inequalities, it provides the discursive tools to downplay policy proposals aimed at promoting racial justice and therefore maintains the power imbalance between ethnic-racial groups [20]. Following this perspective, public denial of racism has been pointed to as an obstacle to a deeper commitment to multiculturalism in Australia [13,22]. Although the existence of racism is acknowledged, most Australians fail to recognise the existence of Anglo-privilege, a necessary step in reducing the imbalance in resource distribution and political representation among ethnic-racial groups [13]. Taken together, the results mentioned above point to the centrality of properly assessing the different facets of intergroup attitudes towards multiculturalism so as to inform public debate and contribute to preventing and counteracting discrimination. It is important to note that the majority of the available scales used to assess race-related attitudes have been developed and psychometrically examined among U.S. populations [7]. These tools may not be relevant or provide valid/reliable estimates of race-related attitudes in non-US contexts, though, given the considerable contextual dependency of racism. Historiographic and sociological accounts of racial dynamics usually emphasize Australian specificities in terms of colonization, past and contemporary immigration policies, and patterns of cultural diversity as key aspects. Australia is a settler society that started with a policy of Anglo-Celtic migration only. This was later expanded to include migrants from other European backgrounds (e.g., Greeks, Italians), with the country only opening its borders to migrants of Asian and Middle-Eastern descent in the 1980s. These and other specificities (e.g., limited involvement in the slave trade) cast serious doubts on the idea of simply adapting tools developed in a range of different countries to the Australian context. As in other multiculturalist societies, including Canada and New Zealand, multiculturalism was debated in Australia at a national level as state policy in the 1970s. Backlashes from conservative sectors, nonetheless, contributed to prioritising an assimilationist perspective in the implementation of multiculturalist values in society. Australia has also historically dispossessed and oppressed the native Aboriginal Australians since British colonization, with effects that continue to the present [23].
Our study does not focus on colonisation and racism faced by Aboriginal Australians, as the unique features of these experiences can be diminished when considered under the umbrella of multiculturalism [24]. To the best of our knowledge, two measurement instruments that provide information on racial, ethnic, and cultural acceptance (i.e. race-related and multiculturalist attitudes) have been previously developed and assessed in Australia [7,25]. While the first focused on intercultural understanding among teachers and students in schools [25], psychometric evaluation of the second was carried out in relatively young convenience samples of primary and secondary school students (all younger than 15 years old and residing in Victoria) and community members (mean age of 23 years, with 70% residing mainly in Victoria), which limits their applicability at a national level and among older age groups. Therefore, neither an integrated picture of attitudes towards multiculturalism across the country has yet been delineated, nor has a range of strategies to advance racial equity based on this knowledge been proposed. The present study proposes the Race-related Attitudes and Multiculturalism Scale (RRAMS) as a measure of attitudes towards multiculturalism. The items were formulated to reflect social ideologies and collective beliefs that potentially influence ethnic-racial intergroup attitudes. The aim of this study was to verify its applicability to the Australian context by assessing the extent to which the RRAMS provides a valid and reliable measurement of multiculturalist attitudes in a sample of Australian adults across all states and territories. In particular, the internal validity of the RRAMS was assessed in terms of its configural structure (i.e., the number of underlying factors), its metric properties (the magnitude of factor loadings), as well as its measurement invariance (i.e., whether it allows meaningful comparisons across sociodemographic characteristics). External validity of the RRAMS was then assessed in terms of its construct validity. --- Methods --- Study design and participants This was an Australian population-based study, with data obtained from the 2013 National Dental Telephone Interview Survey (NDTIS), which includes a telephone-based interview and a follow-up postal questionnaire. The NDTIS has been carried out periodically by the University of Adelaide since 1994, and comprises a large national sample of Australian residents aged 5 years and over. The NDTIS is a random sample survey that collects information on the dental health and use of dental services of Australians in all states and territories. The survey also collects data on social determinants of oral health and wellbeing, which include detailed information on sociodemographic factors, such as household income, education, country of birth, remoteness of location and main language spoken at home. For the 2013 survey, an overlapping dual sampling frame design was adopted. The first sampling frame was created from the electronic product 'Australia on Disc 2012 Residential', an annually updated electronic listing of people/households listed in the White Pages across Australia. Both landline and mobile telephone numbers were provided on records where applicable. A stratified two-stage sampling design was used to select a sample of people from this sampling frame. Records listed on the frame were stratified by state/territory and region, where region was defined as Capital City/Rest of State.
A systematic sample of records was selected from each stratum using specified sampling fractions [26]. To include households that were not listed in the White Pages, a second sampling frame comprising 20,000 randomly generated mobile telephone numbers was used. This sampling frame was supplied by Sampleworx, and the mobile telephone numbers were created by appending randomly generated suffix numbers to all known Australian mobile prefix numbers. As the mobile numbers did not contain address information, the sampling frame could not be stratified by geographic region. A random sample of mobile numbers was selected from the frame and contacted to establish the main user of the mobile phone. This person was asked to participate in the telephone interview, provided that they were aged 18 years or over. All participants provided verbal consent to participate in the survey and datasets were de-identified to ensure anonymity [26]. Following the completion of the telephone interview survey, participants were invited to respond to the postal questionnaire component. Those who agreed were sent a covering letter with the questionnaire and a reply-paid envelope enclosed. A reminder postcard was sent two weeks later, with, if necessary, two additional follow-up letters/questionnaires sent subsequent to the postcard. A total of 6,340 Australian adults aged 18+ years took part in the 2013 NDTIS, with 2,935 (46.3%) completing the follow-up postal questionnaire. Sample characteristics are displayed in Table 1. Two thirds of the sample were 45 to 98 years old and had Technical and Further Education (TAFE) or university education. Women corresponded to 60.3% of the sample. The majority of participants were born in Australia (76.7%), 12.8% were originally from Europe and 10.5% from other continents (Asia, Africa and the Americas). --- Ethical approval Ethical approval for the study was granted by the University of Adelaide's Human Research Ethics Committee (approval number HS-2013-036). --- Statistical analysis Statistical analyses were conducted with R software [27] and the R packages lavaan [28] and semTools [29]. Phase 1: Item development. The RRAMS was developed by a group of researchers with expertise on the topics of racism, multiculturalism, and race-related attitudes in Australia. To ensure content validity [30], the scale was based on large surveys carried out in the country that were co-designed by the abovementioned group of researchers. These include the 2015-16 Challenging Racism Project [31] and the 2013 survey of Victorians' attitudes to race and cultural diversity [32]. The initial item development phase consisted of designing items that reflect the different social ideologies that encompass multiculturalism and race-related attitudes. Discussions among the panel of experts were held until consensus was reached that the items covered a varied range of theoretical perspectives underpinning the construct of interest. A second group of experts, not involved in the first development phase, was then consulted for feedback on the comprehensiveness and clarity of the items. The final RRAMS was proposed as comprising two subscales. The first subscale included six items reflecting theories and social ideologies in agreement with "Anglo-centric/Assimilationist attitudes."
It included items reflecting alignment with RWA (e.g., 'We need to stop spreading dangerous ideas and stick to the way things have always been done in Australia'), agreement with SDO ('It is okay if some racial or ethnic groups have better opportunities in life than others'), endorsement of colour-blind racial ideology (e.g., 'We shouldn't talk about racial or ethnic differences'), zero-sum racist thinking (e.g., 'Racial or ethnic minority groups take away jobs from other Australians'), and endorsement of assimilationist ideology (e.g., 'People from racial or ethnic minority groups should behave more like mainstream Australians'). The second subscale comprised six items assessing agreement with "Inclusive/Pluralistic attitudes." It included low compliance with RWA (e.g., 'Some of the best people in our country are those who are challenging our government and ignoring the 'normal' way things are supposed to be done'), low SDO (e.g., 'We should do what we can to create equal conditions for different racial or ethnic groups'), acknowledgment of racism (e.g., 'People from racial or ethnic minority groups experience discrimination in Australia'), acknowledgment of white privilege (e.g., 'Australians from an Anglo background (that is, of British descent) enjoy an advantaged position in our society'), and endorsement of multiculturalism (e.g., 'People from racial or ethnic minority groups benefit Australian society'). Besides their theoretical relevance, these constructs have been found to be acceptable and appropriate for assessing population race-related attitudes in previous national studies in Australia [31,32]. Response options for each item ranged from 'strongly disagree' (0), 'disagree' (1), 'neither agree nor disagree' (2), and 'agree' (3) to 'strongly agree' (4). Phase 2: Identification of a potential factorial structure. Since the RRAMS was conceptualized to measure both agreement with conformity to the dominant ethnoculture ("Anglo-centric/Assimilationist attitudes") and agreement with the promotion of ethnic diversity ("Inclusive/Pluralistic attitudes"), an Exploratory Factor Analysis (EFA) was initially run to empirically test this assumption (i.e., that a two-factor solution would underlie the set of items). The factorial solution suggested by the EFA was then confirmed by means of a Confirmatory Factor Analysis (CFA) [33] in an independent sample to avoid capitalization on chance [34,35]. We randomly divided the NDTIS sample into one group for the EFA and another group for the CFA; see Table 1 for the distribution of each subsample according to sociodemographic characteristics. Considering that a sample size of at least 200 participants is sufficient for EFA under normal conditions (medium communalities and at least three items loading on each factor) [36] and that CFA has higher sample requirements, 271 participants from the original survey were randomly selected for the EFA. Factor retention relied on Scree Plot [37] criteria and Parallel Analysis (PA) [38]. In the PA, 1,000 random and resampled datasets with the same number of RRAMS items and respondents were generated. The rationale of the PA is that meaningful factors extracted in the current study should account for more variance than factors extracted from random data [36]. Factor extraction was conducted with Maximum Likelihood [39] and oblique rotation ("direct oblimin") [40]. Items with non-salient factor loadings (< .40) were deleted.
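As an illustration of the Phase 2 workflow, here is a minimal sketch in R. It assumes the psych package (the paper names R, lavaan and semTools but not its EFA routine) and a placeholder data frame efa_dat holding the 12 RRAMS items for the EFA subsample, so it should be read as a sketch under assumptions rather than the authors' code.

```r
# Hypothetical sketch of parallel analysis and a two-factor ML solution with
# oblimin rotation; `efa_dat` and the use of the psych package are assumptions.
library(psych)   # fa.parallel() and fa(); oblimin rotation also needs GPArotation

# Parallel analysis: compare observed eigenvalues with random/resampled data
fa.parallel(efa_dat, fm = "ml", fa = "fa", n.iter = 1000)

# Two-factor maximum-likelihood EFA with oblique (direct oblimin) rotation
efa_fit <- fa(efa_dat, nfactors = 2, fm = "ml", rotate = "oblimin")
print(efa_fit$loadings, cutoff = 0.40)   # flag non-salient loadings (< .40)
```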
Additionally, 100 bootstrapped samples were used to generate 95% confidence intervals for the factor loadings [41]. Phase 3: Confirmation of the factorial structure in an independent sample. After a factorial structure was derived from the EFA, the instrument was assessed using CFA in an independent sample (n = 2,443). The estimation method was Weighted Least Squares [42] with a mean- and variance-adjusted (WLSMV) test statistic [43]. Missingness of individual item responses ranged from 0.9% to 2.2%, and this was handled with multiple imputation of 20 datasets using the fully conditional specification method [44]. We imputed information for individuals who responded to at least one item of the RRAMS (n = 2,714). Rubin's rules [45] were used to pool point estimates and standard errors (SE). To evaluate model fit, the scaled χ² was used to test the hypothesis of exact fit. Additionally, we used approximate fit indices, namely the scaled Comparative Fit Index (CFI) and the scaled Root Mean Square Error of Approximation (RMSEA); for simplicity, the term 'scaled' will be omitted from now on. Values of CFI ≥ 0.96 and RMSEA ≤ 0.05 indicate good model fit [46], while 0.05 < RMSEA ≤ 0.10 indicates acceptable fit [35]. Since factorial structures derived from EFA do not necessarily imply good-fitting CFA models (e.g. due to cross-loadings or residual correlations) [47], in case the factorial structure had a poor fit, model re-specifications were informed by standardized residuals, Modification Indices (MI) and the Standardized Expected Parameter Change (SEPC) [48]. Completely standardized solutions are reported throughout the paper. Phase 4: Analysis of measurement invariance. An initial Multigroup CFA [49] was conducted to check whether the same configural structure would hold for all sex-, age-, and education-based groups, i.e., whether configural invariance could be confirmed with the data at hand. The χ², CFI and RMSEA and their previously described cut-off points were used to evaluate configural invariance. The second level of measurement invariance, metric invariance, was assessed to ascertain whether factor loadings were similar across the same groups. The final test, scalar invariance, was used to determine whether item thresholds were equal across sex, age and education. Given that scalar models are nested within metric models, and metric models are nested within configural models, metric and scalar invariance were evaluated through a Likelihood Ratio Test (LRT), namely the Δχ² [50]. The Δχ² statistic was computed in each imputed dataset and pooled according to the recommendations of Li, Meng and colleagues [51] (i.e. the D2 statistic). When the Δχ² was statistically significant, the ΔCFI [52] was used to evaluate the magnitude of the difference. Models with ΔCFI ≤ -.002 indicated lack of invariance [53]. Whenever measurement invariance was not achieved, tests of partial invariance were conducted [54]. Phase 5: Reliability. Internal consistency was calculated with McDonald's ωH [55] and ordinal α [56]. McDonald's ωH has two advantages over the traditional and widely used Cronbach's α: it does not assume (1) tau-equivalence or (2) a congeneric model without correlated errors (i.e. locally independent items) [57]. Furthermore, ordinal α is reported given that Cronbach's α underestimates reliability in ordinal Likert scales. Adequate methods for calculating confidence intervals for ordinal α are not available [58].
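To make Phases 3 to 5 concrete, the following is a minimal sketch using lavaan and semTools (the packages named above). The item names, the grouping variable sex and the data frame cfa_dat are placeholders, the item-to-subscale assignments are illustrative rather than the published composition, and the multiple-imputation pooling described above is omitted for brevity.

```r
# Hypothetical sketch of the WLSMV CFA, measurement invariance tests and
# reliability estimates; all variable names below are placeholders.
library(lavaan)
library(semTools)

model <- '
  assimilationist =~ A1 + A2 + A3 + A4   # "Anglo-centric/Assimilationist attitudes"
  inclusive       =~ I1 + I2 + I3 + I4   # "Inclusive/Pluralistic attitudes"
'

# Phase 3: CFA on ordinal items with the WLSMV estimator
fit <- cfa(model, data = cfa_dat, ordered = TRUE, estimator = "WLSMV")
fitMeasures(fit, c("chisq.scaled", "df.scaled", "pvalue.scaled",
                   "cfi.scaled", "rmsea.scaled"))

# Phase 4: configural, metric (equal loadings) and scalar (equal thresholds)
# models, illustrated here for sex; the same logic applies to age and education.
configural <- cfa(model, data = cfa_dat, ordered = TRUE, estimator = "WLSMV",
                  group = "sex")
metric     <- cfa(model, data = cfa_dat, ordered = TRUE, estimator = "WLSMV",
                  group = "sex", group.equal = "loadings")
scalar     <- cfa(model, data = cfa_dat, ordered = TRUE, estimator = "WLSMV",
                  group = "sex", group.equal = c("loadings", "thresholds"))
lavTestLRT(configural, metric, scalar)   # scaled chi-square difference tests

# Phase 5: omega and alpha per subscale
reliability(fit)   # recent semTools versions offer compRelSEM(fit) instead
```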
Phase 6: Item reduction analysis. In the item reduction analysis, we evaluated inter-item correlations, corrected item-total correlations (CITC) and item difficulties. Inter-item correlations indicate the extent to which all items on a scale examine the same construct without redundancy. Thus, inter-item correlations should be moderate (i.e. items that measure the same construct but also have unique variances), and items with correlations lower than .20 were considered for deletion [59]. The next step was the evaluation of the CITC. One important aspect in instrument development is achieving a good balance between a small number of items (lengthy questionnaires can induce lower response rates [60]) and adequate reliability. A recent study by Zijlmans, Tijmstra et al. [61] showed that the CITC [62] performed better than other methods at identifying which items can be removed while maximizing reliability. Therefore, items with the lowest CITC should be the first to be considered for removal. The corrected item-total correlation needs to be calculated within subscales, since items can only be summed into a total score when they measure the same construct [63]. For this reason, CITCs were calculated after the factorial structure was established (i.e. we had no prior information about which item belonged to which subscale to calculate corrected total scores). Given the ordinal nature of the data, the inter-item correlations and CITCs were investigated with the non-parametric Kendall's τ [64]. Finally, due to the limitations of classical difficulty indices such as the p-value (i.e. the proportion of correct responses given the total score) [65], we evaluated item difficulty with the LI IRF, the location index based on the item-response function [66]. The LI IRF is calculated from the item locations (δi), which are a well-known reparameterization of the item thresholds (τi) of adjacent i and i + 1 response categories [67]. The LI IRF indicates the value of the latent trait at which respondents have an average score of half the maximum item score. For example, in a 5-point rating scale (items ranging from 0 = Strongly Disagree to 4 = Strongly Agree), the LI IRF indicates the level of inclusive/pluralistic attitudes required for participants to score on average 2 (2 = Neutral). In our study, the LI IRF was chosen over item thresholds (τi) to convey item difficulty because of two advantages: the interpretation of the LI IRF is (a) easier, since it is a single index compared to four thresholds per item; and (b) more substantive, since it is based on the latent trait ("Anglo-centric/Assimilationist attitudes" or "Inclusive/Pluralistic attitudes") rather than on the latent response variables [68]. Nonetheless, for the sake of completeness, we also reported the item thresholds (τi). Phase 7: Construct validity. To evaluate the RRAMS' construct validity, we investigated known-groups validity according to sex, education and age. Known-groups validity compares the levels of the constructs in different groups (e.g. men compared to women) and should be applied when it is known, theoretically or from previous empirical research, that these groups differ on the variable of interest. Therefore, known-groups validity can inform whether the instrument is able to discriminate between two groups that are known to be different regarding the construct (e.g. individuals with more education have more inclusive attitudes).
Investigation of known-groups validity is important in many instances, such as when there is no "gold-standard" method of measurement to which the instrument can be compared [69]. That is, since there is no "gold-standard" or established (based on robust psychometric evidence) instrument to measure race-related attitudes and multiculturalism in Australia, it is not possible to define what would constitute a good measure for the RRAMS to display convergent validity with. Furthermore, in our case, there is previous evidence of groups that are known to differ according to multiculturalism and race-related attitudes. For example, as multiculturalism can be perceived as identity-threatening by dominant group members [11,19], we expected men to have more conservative attitudes towards multiculturalism when compared to women [22,70]. The same pattern was expected for older participants (>45 years old) when compared to younger respondents [22,70,71]. Participants with a university degree, in turn, were expected to be more supportive of multiculturalism than those with lower educational attainment. This hypothesis is in accordance with previous findings showing that a sense of security (economic, personal, and cultural), higher education and younger age were associated with more positive attitudes towards multiculturalism and lesser exclusionary attitudes [22,70,71]. Therefore, sex, age and education were chosen as the exogenous variables for the evaluation of known-groups validity. To assess known-groups validity, latent mean differences were calculated by constraining the latent means in one of the groups (i.e. women and participants with higher education) to zero, so that this group would function as a reference group. Considering that latent variances were constrained to one in the completely standardized solution, latent mean differences are interpreted as effect sizes analogous to Cohen's d [72,73]. Finally, we employed the Empirical Bayes model [74] to estimate factor scores, which were plotted using kernel density plots [75] to inform not only the average but also the distribution of the latent trait according to groups. --- Results --- Identification of a potential factorial structure Investigation of the Scree Plot and PA indicated that two factors explained substantially more variance than factors extracted from randomly generated data (Fig 1). It should be noted that, although the third factor accounted for more variance than the third factor extracted from the random datasets, the difference was trivial. For this reason, only two factors were retained. The next step consisted of the evaluation of factor loadings (Table 2). Results showed that Item 2 ("Some of the best people in our country are those who are challenging our government and ignoring the 'normal' way things are supposed to be done"), Item 3 ("It is okay if some racial or ethnic groups have better opportunities in life than others") and Item 6 ("We shouldn't talk about racial or ethnic differences") did not have substantial factor loadings (> .40) and were therefore excluded. Of the retained items, Item 5 had the smallest factor loading (λ2 = 0.440; 95% CI [0.220, 0.610]). After deletion of these three items and EFA re-analysis, the two-factor solution achieved simple structure.
This time, however, Item 5 did not achieve a substantial factor loading (λ2 = 0.390; 95% CI [0.180, 0.590]) (S1 Table); that is, the factors explained only 19% of the variance of the item responses ("communality"), while 81% of the variance was explained by other sources ("uniqueness"), such as measurement error. For this reason, Item 5 was also excluded from the analysis. --- Confirmation of the factorial structure in an independent sample The two-factor model was then selected and its fit examined (χ²(19) = 341.070, p < 0.001, CFI = 0.974, RMSEA = 0.083; 90% CI [0.076, 0.091]). Since the null hypothesis of exact fit was rejected (χ²(19) = 341.070, p < 0.001), we proceeded with indices of approximate fit. The CFI indicated a good fit to the data (> .960), while the RMSEA was adequate (0.05 < RMSEA ≤ 0.10). Residual correlations are displayed in S2 Table. Considering the overall good fit of the model and that all items exhibited substantial factor loadings (Table 3), the two-factor model with 8 items was accepted. "Anglo-centric/Assimilationist attitudes" (e.g. "Racial or ethnic minority groups take away jobs from other Australians") was regarded as the first subscale, whereas the remaining four items formed the second subscale, assessing agreement with "Inclusive/Pluralistic attitudes." --- Analysis of measurement invariance Next, measurement invariance by sex, education and age was evaluated (Table 4). Regarding sex, the LRT indicated that the metric model was not statistically different from the configural model. When scalar invariance was evaluated, the pooled Δχ² was negative for both education- and age-based groups. Although a negative Δχ² is not interpretable (and, therefore, values were set to zero), such negative values can occur when the difference between models is small [76]. For this reason, the threshold constraints were regarded as tenable [77] and provided indirect support for scalar invariance. --- Reliability The first subscale, "Anglo-centric/Assimilationist attitudes" (ωH = 0.83, ordinal α = 0.85, α = 0.85; 95% CI [0.84, 0.86]), showed good reliability, while the "Inclusive/Pluralistic attitudes" subscale (ωH = 0.77, ordinal α = 0.79, α = 0.72; 95% CI [0.70, 0.73]) exhibited adequate reliability. --- Item reduction analysis Inter-item correlations ranged from 0.29 to 0.56 (S3 Table) and no correlations were lower than 0.20. The CITCs ranged from 0.39 to 0.58. Within the "Anglo-centric/Assimilationist attitudes" subscale, the easiest item was "We need to stop people spreading dangerous ideas and stick to the way things have always been done in Australia" (LI IRF = 0.00), while the hardest item was "Racial or ethnic minority groups take away jobs from other Australians" (LI IRF = 0.72) (Table 3). That is, with respect to Item 10, respondents needed to have 0.72 standard deviations more Anglo-centric/assimilationist attitudes than the average Australian to produce an expected score of 2 out of 4. Item 10 was the hardest item in the "Anglo-centric/Assimilationist attitudes" subscale since its endorsement required more Anglo-centric/assimilationist attitudes than the other items. Within the "Inclusive/Pluralistic attitudes" subscale, the easiest item was "We should do what we can to create equal conditions for different racial or ethnic groups" (LI IRF = -1.58), while the hardest item was "People from racial and ethnic minority groups experience discrimination in Australia" (LI IRF = -0.80).
The hierarchy of item difficulties was identical when average item thresholds (mean τi) were inspected (S4 Table). --- Construct validity Examination of latent mean differences indicated that education was the strongest correlate of both subscales, and that men and older participants reported stronger Anglo-centric/assimilationist attitudes than women and younger participants (see Discussion). --- Discussion The current study aimed to present the RRAMS as a measure of attitudes towards multiculturalism in Australia and to examine some of its psychometric properties using data from a nationwide sample. Results showed that the two subscales of "Anglo-centric/Assimilationist attitudes" and "Inclusive/Pluralistic attitudes" are initially valid and reliable for the Australian population. In the initial stage of psychometric assessment, we identified poorly performing items, and these were excluded. One of these was Item 2 ("Some of the best people in our country are those who are challenging our government and ignoring the 'normal' way things are supposed to be done"), an item originally designed to reflect RWA in relation to multiculturalism. Despite its original purpose, Item 2 might not reflect the cultural and race-related topic in question. This is one possible explanation for why responses to this item were not strongly influenced by respondents' Inclusive/Pluralistic attitudes towards multiculturalism (only 12% of the variance was explained by the supposedly corresponding factor). For instance, the wording "challenging our government" can be interpreted as referring to a general debate not necessarily reflecting ethnic-racial differences in political representation and resource distribution. Future studies might test the item fit by emphasizing 'challenging our government' as pressuring for a political agenda that prioritizes reducing social inequalities among ethnic-racial groups and the promotion of a pluralistic society. Items 3 ("It is okay if some racial or ethnic groups have better opportunities in life than others") and 6 ("We shouldn't talk about racial or ethnic differences") also performed poorly and failed to capture assimilationist views. Item 3 was designed to reflect respondents' SDO. It was hypothesized that participants with high SDO, and thus assimilationist views of multiculturalism, would endorse the item. Contrary to expectations, these respondents might have interpreted the phrasing 'some racial or ethnic groups' as a reference to ethnic-racial minorities. Conservatives might perceive affirmative action and social assistance policies as privileges and can endorse the notion that minorities 'have it easy.' Conservative attitudes such as those of RWA and SDO have been linked to social and economic conservatism, reflecting ideologies of competition and meritocracy [78]. The ambiguity left by the item wording can thus explain its failure in discriminating assimilationist attitudes. Item 6, in turn, might not have worked in its subscale because, again contrary to our hypothesis, respondents with high assimilationist views might be willing to discuss racial and ethnic differences with the intent of promoting assimilationist and racist views [79]. Therefore, the item performed poorly as respondents in different strata of assimilationist attitudes could be prone to endorse the item for different reasons. The last deleted item was Item 5 ("Australians from an Anglo background [that is, of British descent] enjoy an advantaged position in our society"). One possible explanation for the item's poor performance is that the recognition of privilege does not necessarily inform on inclusive/pluralistic attitudes.
For example, a previous study in the Australian states of Queensland and New South Wales showed these to be two independent dimensions [9]. The poor loading on the inclusive attitudes subscale suggests that respondents might not link acknowledgment of white privilege to notions of a pluralistic society. Taken together, these results potentially indicate that debates over multiculturalism in Australia need to promote awareness of the connection between Anglo-privilege and racism. Scholars advocate that challenging racism and privilege is a necessary step towards promoting the abandonment of assimilationist views in favour of more inclusive perspectives [9,13].
The subscales "Anglo-centric/Assimilationist attitudes" and "Inclusive/Pluralistic attitudes" achieved metric invariance and scalar invariance according to sex. Furthermore, the two subscales achieved metric invariance according to education, and the results also (indirectly) supported scalar invariance. That is, "Anglo-centric/Assimilationist attitudes" and "Inclusive/Pluralistic attitudes" influenced the item responses in the same way in each group (metric invariance) and the items were not more difficult for one group compared to another (scalar invariance). The RRAMS items can thus be used to compare men/women, participants with/without tertiary education and young/older participants, and the scores will reflect true differences regarding "Anglo-centric/Assimilationist attitudes" and "Inclusive/Pluralistic attitudes" rather than measurement bias [35]. After ensuring measurement invariance between subgroups, we compared the factor scores between men and women, participants with and without tertiary education, and participants up to and over 45 years of age. The strongest predictor of assimilationist and inclusive attitudes was education, while sex also influenced both constructs. Furthermore, older individuals were more likely to have higher assimilationist attitudes. The role of education in promoting inclusive/pluralistic attitudes has been previously established [22,70] and suggests education as an important target for future interventions aimed at promoting multiculturalism in Australia. The results also indicated that men and older individuals had stronger assimilationist attitudes in comparison with women and their younger counterparts [71]. In general, the associations of the two subscales with sex, education, and age conformed to theoretical expectations and provide further evidence of the RRAMS' construct validity. With regard to reliability, the "Anglo-centric/Assimilationist attitudes" and "Inclusive/Pluralistic attitudes" subscales showed adequate reliability (> .70) [80], since values between .70 and .80 are considered appropriate for research purposes [81]. In case the RRAMS is used in the future in high-stakes scenarios (i.e. where decisions need to be made based on scale scores) [82], new items should be developed to increase reliability. In the item reduction analysis, all items displayed moderate inter-item correlations and CITCs, so no items needed to be removed. The item with the smallest CITC was Item 7 ("People from racial or ethnic minority groups benefit Australian society"), followed by Item 4 ("We should do what we can to create equal conditions for different racial or ethnic groups."). Since reliability was only modest, we considered that further shortening the scale would be more detrimental in terms of reliability and content validity than beneficial as a means of creating a briefer measure. In addition, with the exception of Item 1 ("We need to stop people spreading dangerous ideas and stick to the way things have always been done in Australia.") and Item 12 ("People from racial and ethnic minority groups should behave more like mainstream Australians."), item difficulties were spread across the latent trait. Once again, although Item 1 or Item 12 could potentially be removed due to their similar difficulties, we believe removing additional items would be detrimental to content validity and the psychometric properties of the scale. One limitation of the current study was that we were not able to evaluate convergent and discriminant validity.
The RRAMS was originally applied in the 2013 NDTIS, a study that focused on collecting information on the use of dental services in Australia and did not include other psychosocial measures. For this reason, we considered known-groups validity to be the best strategy to investigate the RRAMS' construct validity. While the results from known-groups validity were in accordance with theoretical expectations (e.g. inclusive attitudes were more present in individuals with more education), future studies also need to investigate other forms of validity, such as convergent/discriminant and predictive validity. For example, future studies should evaluate whether the scores from the "Inclusive/Pluralistic attitudes" subscale are positively correlated (i.e. convergent validity) with scores from other instruments evaluating multiculturalist and inclusive attitudes. Our analyses did not account for sampling weights, meaning that our sample is not representative of the Australian population. It is important to highlight, however, that our study included Australians from all age groups and socioeconomic backgrounds across all states and territories of the country. Furthermore, to the best of our knowledge, this is the largest sample in which a measure of attitudes towards multiculturalism has been employed in Australia. Lack of representativeness and its implications for the validity of scientific findings are central to longstanding discussions in the literature [83]. Because the purpose of the current analysis was to assess the psychometric properties of the RRAMS, as opposed to purely describing prevalence estimates, we do not believe that the lack of representativeness of our sample limits the validity of the inferences made here. The fact that a study sample is representative of some larger population does not mean that the associations between variables in the sample will apply to every subgroup of the population. The overall association is simply an average value that has been balanced according to the distribution of people in these subgroups. If a sample is representative of the sex distribution in the target population, the results will not necessarily apply to both males and females, but only to a hypothetical participant that is "weighted" on sex. Subgroup analyses are necessary if one wishes to investigate relationships between variables by subgroups, which we have performed during the known-groups validity assessment stage. In conclusion, we successfully developed a comprehensive race-related attitudes and multiculturalism scale for the Australian context. We used robust, cutting-edge psychometric techniques and data from a large, nation-wide survey. The small number of items (eight) means the instrument can be readily used by policy makers and in ensuing research. Future studies should assess the scaling properties of the instrument using parametric and non-parametric Item Response Theory techniques. The instrument may, nevertheless, be useful to inform on multiculturalism attitudes across the country and hopefully contribute to a public debate aimed at promoting multiculturalist inclusive attitudes, with the potential to increase social cohesion in Australia. --- The authors do not have permission from the ethics committee to publicly release the datasets of the 2013 NDTIS in either identifiable or de-identified form. However, data are available to bona fide researchers provided that all privacy
The present study aims to develop the Race-related Attitudes and Multiculturalism Scale (RRAMS), as well as to perform an initial psychometric assessment of this instrument in a national sample of Australian adults. The sample comprised 2,714 Australian adults who took part in the 2013 National Dental Telephone Interview Survey (NDTIS), which includes a telephone-based interview and a follow-up postal questionnaire. We used Exploratory Factor Analysis (EFA) to evaluate the RRAMS' factorial structure (n = 271) and then proceeded with Confirmatory Factor Analysis (CFA) to confirm the proposed structure in an independent sample (n = 2,443). Measurement invariance was evaluated according to sex, age and educational attainment. Construct validity was assessed through known-groups comparisons. Internal consistency was assessed with McDonald's ωH and ordinal α. Multiple imputation by chained equations was adopted to handle missing data. EFA indicated that, after excluding 4 out of the 12 items, a two-factor structure provided a good fit to the data. This configural structure was then confirmed in an independent sample by means of CFA (χ²(19) = 341.070, p < 0.001, CFI = 0.974, RMSEA = 0.083; 90% CI [0.076, 0.091]). Measurement invariance analyses suggested that the RRAMS items can be used to compare men/women, respondents with/without tertiary education and young/older participants. The "Anglo-centric/Assimilationist attitudes" (ωH = 0.83, ordinal α = 0.85) and "Inclusive/Pluralistic attitudes" subscales (ωH = 0.77, ordinal α = 0.79) showed adequate reliability.
Why Are Middle Childhood and Early Adolescence So Important? While there may be many reasons why middle childhood is an important developmental period with respect to the relation between social status and psychological well-being, two likely reasons for its importance are (1) advances in cognitive development during this period that render one's social status(es) more personally relevant to one's sense of self and (2) increases in the size and instability of the peer network. --- Advances in cognitive development Research has shown that the relationship between social status and psychological well-being is an indirect one and is, in part, mediated by the messages (both positive and negative) one receives regarding one's social status(es) (Fordham & Ogbu, 1986; Mays & Cochran, 2001; McLeod & Owens, 2004; Van Laar, 2000). Importantly, the extent to which those messages are internalized is directly related to their influence on psychological well-being (Herek & Garnets, 2007; Steele, 1997; Williams & Williams-Morris, 2000). What is often overlooked is that the following three cognitive abilities must be acquired before the messages regarding one's social statuses can be internalized: (1) an awareness of the categories one belongs to; (2) the ability to perceive messages from others and society regarding the categories one belongs to; and (3) the ability to internalize membership in those categories as personally meaningful. Harter's (1996, 2006) extensive research on the self suggests that it is not until middle childhood that youth have acquired the last of these three abilities. During middle childhood and early adolescence (8-12 years of age), children's cognitive ability to use more objective criteria and inter-individual comparisons for self-evaluation increases (Harter, 1996; Stipek & MacIver, 1989). The self also becomes more objective and outward focused. This newfound cognitive capacity enables youth to more fully link their attributes, including the social categories to which they belong, to how they actually feel about themselves (Davis-Kean, Jager, & Collins, 2009). --- Increases in the size and instability of the peer network The size (Cairns, Xie, & Leung, 1998) and instability (Cairns & Cairns, 1994; Nash, 1973) of peer networks peak during middle childhood. Cairns, Xie, & Leung (1998) contend that the increases in the size and instability of peer networks may be, at least in part, driven by changes associated with the transition to middle school, which may result in greater opportunities for interaction with a wider range of peers. As a consequence, just when youth are beginning to base their sense of personal value on inter-individual comparisons and to use peers as a "social mirror" (Sullivan, 1953), they are also interacting with more peers and are more likely to be interacting with those peers for the first time. The combination of changes in cognition as well as changes in social context may underlie the emergence of disparities in psychological well-being across the levels of social status during middle childhood and early adolescence. --- Sexual-Minority Status In terms of the emergence of disparities in psychological well-being, it is not clear whether middle childhood is as important a time period for SM status as it is for race, gender, overweight status, and SES. An important question regarding SM status is as follows: When does one's awareness of his or her SM status emerge?
Is it early on in development, like one's awareness of race and sex, or is it later on in development, like one's awareness of being a college student or a parent? Retrospective reports indicate that SM individuals recall being treated differently by others, often as early as age 8, before they develop or are even aware of their attractions to the same sex (Bell, Weinberg, & Hammersmith, 1981; Zucker, Wild, Bradley, & Lowry, 1993). They recall feeling different from their peers, and often this sense of feeling different has a negative valence and is centered around atypical, gender-related traits (Savin-Williams, 2005; Troiden, 1989). Retrospective reports also indicate that around the age of 10 or 11, many SM individuals recall their first awareness of attraction to the same sex (D'Augelli & Hershberger, 1993; Floyd & Stein, 2002; Friedman, Marshal, Stall, Cheong, & Wright, 2008; Rosario, Meyer-Bahlburg, Hunter, & Exner, 1996; Savin-Williams & Diamond, 2000). Thus, there is some evidence to suggest that awareness of one's SM status may emerge during the middle childhood years. As such, just as was the case for race, sex, overweight status and SES, the years between middle childhood and early adolescence may prove quite formative with respect to the relation between SM status and psychological well-being. Though one may acquire a vague sense of SM status during middle childhood (i.e., a sense of difference or initial awareness of feelings of same-sex attraction), coming to grips with one's own sexuality does not end there. A subset of youth go on to realize during early adolescence that this attraction to the same sex is what society deems homosexual, and then an even smaller subset go on to actually identify themselves (as opposed to just their sexual attractions) as homosexual or bisexual (D'Augelli & Hershberger, 1993; Rosario et al., 1996; Savin-Williams & Diamond, 2000). Awareness of sexual-minority status is a prerequisite for others' messages regarding sexual minorities to be internalized as personally meaningful, and the period during which one's awareness of his or her SM status forms appears to extend into late adolescence or even early adulthood. Thus the relation between SM status and psychological well-being may itself be in flux through late adolescence/early adulthood. Complicating things further is the possibility that growing awareness of one's SM status during adolescence will be accompanied by social isolation as well as victimization and stigmatization. In both the school and the home, many SM adolescents report feeling invisible (Garofalo et al., 1998; Hershberger & D'Augelli, 1995) and have a difficult time finding other SM adults to confide in or other SM peers with whom to socialize (Herek & Garnets, 2007; Lewis, Derlega, Berndt, Morris, & Rose, 2001; Mills, Paul, Stall, Pollack, & Canchola; Morris, Waldo, & Rothblum, 2001). Beyond feeling a level of invisibility, SM adolescents also face a disproportionate amount of peer harassment, bullying, and aggression from their non-minority peers (Herek & Sims, 2007; Mays & Cochran, 2001; Russell & Joyner, 2001). Thus, at a time when SM adolescents are coming to grips with their status, they are typically doing so alone, perhaps in the face of heightened harassment and aggression. As a consequence, the influence of SM status on psychological well-being may prove stronger between mid- to late-adolescence than between middle childhood and early adolescence.
--- Moderators of Sexual-Minority Status and Psychological Well-Being Available cross-sectional research has identified three factors that moderate the relation between SM status and psychological well-being: (1) sexual identification or orientation, (2) age of first awareness/disclosure, and (3) gender status. Importantly, to date the extent to which, if at all, these factors moderate the relation between SM status and growth in psychological well-being is unknown. --- Sexual identification While all those who exhibit same-sex sexuality share the status of SM, they vary dramatically as to whether or not they hold a SM identity, as well as in the nature of that identity, if any. Among those exhibiting same-sex sexuality, some identify as heterosexual, some as homosexual, and others as bisexual (Diamond, 2006). This heterogeneity in identification among those who exhibit same-sex sexuality could have implications for the relation between SM status and psychological well-being. For example, researchers who conceptualize sexual-identity formation as a progression through a set of stages have found that, among SM, those in the later stages report higher psychological well-being than do those in the earlier stages (Brady & Busse, 1994; Halpin & Allen, 2004; Levine, 1997). Researchers have also found that, among those who identify as bisexual or homosexual, acceptance of one's sexual identity is positively related to mental health (Hershberger & D'Augelli, 1995; Miranda & Storms, 1989; Rosario, Hunter, Maguen, Gwadz, & Smith, 2001). Finally, there is some evidence to suggest that, relative to those who identify as homosexual, those who identify as bisexual may be at higher risk for deficits in psychological well-being (Balsam, Beauchaine, Mickey, & Rothblum, 2005; Jorm, Korten, Rodgers, Jacomb, & Christensen, 2002). --- Age of first awareness/disclosure The research above indicates that coming to terms with one's sexual orientation and integrating it within one's sense of self is associated with higher psychological well-being. However, the extent to which this is the case may vary with age. There are risks associated with disclosing one's sexual orientation to others, such as increased victimization, the disruption of close personal relationships, and heightened disapproval from others (Corrigan & Mathews, 2003; McDonald, 2008). For some, these risks can outweigh the benefits of coming to terms with one's sexual orientation (Corrigan & Mathews, 2003; Friedman et al., 2008). Emerging research suggests that one factor related to whether or not the risks outweigh the benefits is age of first awareness or disclosure. For example, relative to those who progress through these milestones at a later age, those who are aware of their same-sex attractions or disclose their sexual orientation at younger ages report experiencing more gay-related discrimination, bullying, and disrupted relationships during adolescence, and they generally have fewer resources, both interpersonal and intrapersonal, to cope with these threats (D'Augelli & Hershberger, 1993; Friedman et al., 2008; Remafedi, 1991; Savin-Williams, 1995). The increase in threats coupled with the decrease in sources of support is thought to translate into lower psychological well-being among those who progress through these milestones at an earlier age (Friedman et al., 2008; McDonald, 2008). In fact, Friedman et al.
(2008) found that, relative to those who were first aware of same-sex attractions at an older age (adolescence), those who were first aware at an earlier age (middle childhood) reported lower psychological well-being and physical health during adulthood. --- Gender The relation between gender and psychological well-being appears to be muted among the sexual-minority population. That is, relative to the general population, where females tend to report lower levels of psychological well-being than males (Twenge & Nolen-Hoeksema, 2002), the gender differences within the SM population are diminished or absent (Balsam et al., 2005; Cochran et al., 2003; Elze, 2002; Fergusson et al., 2005). --- Hypotheses and Key Questions Though this study was partly exploratory in nature, the following hypotheses guided our examination. 1a) By early adolescence, we expected SM youth to report lower levels of psychological well-being than those of sexual-majority status; 1b) Disparities in psychological well-being among SM and sexual-majority individuals were predicted to increase during adolescence. By comparing the size of disparities at early adolescence (i.e., hypothesis 1a) to the extent, if any, that those disparities increase over adolescence (i.e., hypothesis 1b), we evaluated the relative influence of middle childhood and adolescence on the relation between SM status and psychological well-being. 2) Among those of SM status, we expected those of bisexual status to report lower psychological well-being at the onset of adolescence as well as lower growth in well-being across adolescence. 3a) In terms of initial status differences and growth differences, we expected earlier awareness of same-sex attractions to be associated with lower psychological well-being; and 3b) we expected that the disparities in psychological well-being between SM and non-SM would be larger among those SM reporting earlier awareness of same-sex attractions. 4) In terms of both intercept differences and growth differences, we expected psychological well-being disparities between SM and non-SM to be more pronounced among males. --- Methods --- Sample The data for this study came from the National Longitudinal Study of Adolescent Health (Add Health; Bearman et al., 1997), a multi-wave, nationally representative sample of American adolescents. Using a clustered sampling design, 80 high schools were recruited for participation. The sample of schools was stratified by region, urbanicity, school type, ethnic mix, and size. At the point of initial assessment (Wave 1), the total sample was 20,745 7th-12th graders. Two additional waves of data are available, each taking place approximately one (Wave 2) and six years later (Wave 3). The sample sizes and retention rates for each wave are 14,988 (72%) and 15,170 (73%), respectively. For the present study, only those respondents who completed a sexual orientation measure at Wave 3, completed same-sex attraction measures at Waves 1, 2, and 3, had data for age, and were assigned a sample weight were included in the study (N = 7,733). With respect to psychological well-being, respondents included in the study (n = 7,733) reported slightly lower levels of depression at Wave 1, t(20,703) = 2.44, p < .05, and Wave 3, t(15,233) = 3.96, p < .001, than those not included in the study (n = 12,970). Those included in the study also reported slightly higher levels of self-esteem at Wave 1, t(20,681) = 2.99, p < .01, and Wave 2, t(14,726) = 3.10, p < .01.
In every case where differences in psychological well-being were found, effect sizes were small. (No R² was larger than .005.) Finally, males (χ²(1) = 75.28, p < .001) and those in the older cohort (χ²(1) = 18.08, p < .001) were underrepresented among those included in the study. Among those included in the study (n = 7,733), the amount of missing data on the psychological well-being indices was low (less than 0.1% at each wave). In order to maximize the data and include all possible cases, we used Full Information Maximum Likelihood (FIML) estimation, a missing data algorithm available within Mplus (Muthen & Muthen, 1998-2009). --- Procedure The first wave of data was collected during 1994 and 1995 via in-home questionnaires. The questionnaires covered a range of topics: health status, nutrition, peer networks, family composition and dynamics, romantic partnerships, sexual partnerships, and risk behavior. Approximately a year later, respondents completed a second in-home questionnaire that was similar in content. Approximately six to seven years after initial assessment, respondents completed a third in-home questionnaire, one that was similar in content to the first but also covered such topics as romantic relationships, child-bearing, and educational histories. --- Measures Psychological well-being-We focused on two indices of psychological well-being: depressive affect and self-esteem. Depressive affect was based on a 9-item, truncated version of the CES-D (Radloff, 1977). An example item is: "During the past week, have you been bothered by things that usually do not bother you?" The possible range was from 0 to 3, with higher responses indicating higher levels of depressive affect. Cronbach alphas were .79, .79, and .80 for Waves 1, 2, and 3, respectively. Self-esteem was based on a 4-item scale used previously by Regnerus & Elder (2003). An example item is: "You like yourself just the way you are." The possible range was from 1 to 5, with higher responses indicating higher levels of self-esteem. Cronbach alphas were .83, .81, and .79 for Waves 1, 2, and 3, respectively. Sexual orientation and sexual-minority status-Based on the distinction between SM status (those exhibiting versus those not exhibiting same-sex sexuality) and sexual orientation (those identifying versus those not identifying as a SM), we classified individuals into one of four groups. Classification was based on a single question that was asked at Wave 3 only: "Please choose the description that best fits how you think about yourself." The possible responses were: (a) 100% heterosexual (straight); (b) mostly heterosexual (straight), but attracted to people of your own sex; (c) bisexual, that is, attracted to men and women equally; (d) mostly homosexual (gay), but somewhat attracted to people of the opposite sex; (e) 100% homosexual (gay); and (f) not sexually attracted to either males or females. All those who indicated no sexual attraction (response f) were dropped from analyses (n = 74), as were those who refused to answer the question (n = 73). All who identified themselves as 100% heterosexual (response a) were classified as Heterosexual-identified/non-SM (n = 6,889). All who indicated some level of same-sex sexuality (responses b, c, d or e) qualified as SM (n = 844).
Of these individuals, those who identified as gay (responses d and e) were classified as Homosexual-identified/SM (n = 129), those who identified as bisexual (response c) were classified as Bisexual-identified/SM (n = 140), and those who identified as straight but indicated an attraction to the same sex (response b) were classified as Heterosexual-identified/SM (n = 575). Instability of reported same-sex attractions-At Wave 1 respondents were asked two yes/no questions: (1) "Have you ever had a romantic attraction to a female?" and (2) "Have you ever had a romantic attraction to a male?" For Waves 2 and 3 respondents were asked the same questions but were asked to indicate whether they had experienced these attractions since the last time they were interviewed. Using the reported same-sex attraction (or lack thereof) associated with one's Wave 3 sexual orientation as the reference point, we created three dummy variables to assess instability in same-sex attraction, one for each wave. Among those who indicated a sexual orientation at Wave 3 that included same-sex attractions (i.e., Heterosexual-identified/SM, Bisexual-identified/SM, and Homosexual-identified/SM), a report at any given wave (i.e., Waves 1, 2, or 3) of no same-sex attractions was coded as 1, and a report of same-sex attractions was coded as 0. The opposite pattern was true for Heterosexual-Identified/non-SM (the only group that reported a sexual orientation at Wave 3 that did not include same-sex attractions). For this group a report of same-sex attractions at any given wave was coded as 1, while a report of no same-sex attraction was coded as 0. In concrete terms, relative to the reported same-sex attraction (or lack thereof) associated with one's Wave 3 sexual orientation, these dummy variables were an indication of inconsistency in reported same-sex attraction, with 1 indicating inconsistency and 0 indicating consistency. The Wave 3 instability dummy variable likely reflected confusion or measurement error, either in the Wave 3 sexual orientation measure or the Wave 3 questions pertaining to attraction to each sex. In contrast, the Wave 1 and 2 instability dummies may have reflected developmental changes or instability in awareness of and/or willingness to report same-sex attractions. For example, among those reporting a sexual orientation at Wave 3 that includes same-sex attractions, those who also reported same-sex attractions at Waves 1 and/or 2 may have become aware of their same-sex attractions at an earlier age than those who did not report same-sex attractions at Waves 1 and 2. Consistent with previous research (Russell, 2006), preliminary analyses revealed that the independent influence of instability in reported same-sex attractions at Waves 1 and 2 on psychological well-being was modest and non-systematic. However, additional preliminary analyses indicated that (1) instability in same-sex attractions at both Waves 1 and 2 was strongly predictive of psychological well-being, (2) the influence of instability at Waves 1 or 2 (but not both) was modest, and (3) the influence of instability at Wave 3 was often nonsignificant. Based on these preliminary findings, we chose the three following dummy variables: (1) instability at Waves 1 and 2 versus all others; (2) instability at Waves 1 or 2 (but not both) versus all others; and (3) instability at Wave 3 versus all others.
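As a rough illustration only, the coding scheme just described can be sketched in a few lines of Python; the data frame and column names below (w1_ssa, w2_ssa, w3_ssa for reported same-sex attraction at each wave, and w3_sm for whether the Wave 3 orientation includes same-sex attraction) are hypothetical placeholders rather than actual Add Health variable names.

import pandas as pd

def code_instability(df: pd.DataFrame) -> pd.DataFrame:
    # Assumed coding: w3_sm = 1 if the Wave 3 orientation includes same-sex
    # attraction (responses b-e), 0 for 100% heterosexual; w1_ssa, w2_ssa,
    # w3_ssa = 1 if same-sex attraction was reported at that wave, else 0.
    out = df.copy()
    for w in ("w1", "w2", "w3"):
        # 1 = report at this wave is inconsistent with the Wave 3 orientation
        out[f"unstable_{w}"] = (out[f"{w}_ssa"] != out["w3_sm"]).astype(int)
    # The three dummies carried forward into the growth models:
    out["unstable_w1_and_w2"] = ((out["unstable_w1"] == 1) & (out["unstable_w2"] == 1)).astype(int)
    out["unstable_w1_or_w2_only"] = ((out["unstable_w1"] + out["unstable_w2"]) == 1).astype(int)
    # unstable_w3 is used as-is as the third dummy
    return out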
When each of these dummy variables was included as a control, the reference group became those who reported same-sex attractions (or lack thereof) over time that were consistent with their Wave 3 sexual orientation and the same-sex attractions (or lack thereof) that they reported along with that sexual orientation. Cohort-Although age at Wave 1 ranged between 12 and 20 years, over 95% of the sample ranged between 13 and 18 (M = 15.60, SD = 1.73). We dichotomized the sample so that we could more closely examine how the relation between SM status and psychological well-being varied across adolescence. A dichotomous cohort variable was created: those between the ages of 12 and 15 (51% of the sample) were classified as young, whereas those between the ages of 16 and 20 (49% of the sample) were classified as old. Gender status was based on self-report. Respondents indicated whether they were male (0) or female (1). --- Results --- Basic Descriptive Statistics The means, standard deviations, sample size, and relative percentage for each of the four sexual orientation groups (i.e., Heterosexual-identified/non-SM, Heterosexual-identified/SM, Bisexual-identified/SM, and Homosexual-identified/SM) are listed in Table 1. The percentages and frequencies for unstable and stable reports of same-sex attractions are listed in Table 2. Patterns of same-sex attraction across Waves 1 and 2 are listed in the first three columns. In the sample as a whole, 83.1% reported same-sex attractions at both Waves 1 and 2 that were consistent with the same-sex attractions (or lack thereof) associated with the sexual orientation that they reported at Wave 3. The remaining 16.9% of respondents reported Wave 1 and Wave 2 same-sex attractions that were inconsistent with the sexual orientation they reported at Wave 3: 8.7% were inconsistent at both waves and 8.2% were inconsistent at only a single wave. Generally, instability in these factors was higher among those of SM status. Wave 3 patterns of same-sex attractions are listed in the last two columns of Table 2. In the sample as a whole, 94.3% reported same-sex attractions at Wave 3 that were consistent with the sexual orientation that they reported at Wave 3. The remaining respondents (5.7%) reported same-sex attractions that were inconsistent. Generally, instability in these factors was higher among the Heterosexual-Identified/SM group. The last column of Table 2 lists those who reported same-sex attractions across all three waves that were consistent with the sexual orientation that they reported at Wave 3. --- Sexual Orientation at Wave 3 and Adolescent Trajectories of Psychological Well-Being In order to examine adolescent trajectories of psychological well-being, we used the growth curve model presented in Figure 1. The factor coefficients for the linear slope were set at 0, 1, and 6.5 because the average time between Waves 1 and 2 was 1 year, and the average time between Waves 1 and 3 was 6.5 years. The intercept factor measured initial (Wave 1) levels of psychological well-being, whereas the slope factor measured linear change in psychological well-being across Waves 1, 2, and 3. We used multiple-group analyses (Duncan, Duncan, Strycker, Li, & Alpert, 1999) to examine model differences across the four sexual orientation subgroups. All analyses were conducted within Mplus, Version 5.2 (Muthen & Muthen, 1998-2009).
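In equation form, the linear growth model just described can be sketched as follows; the notation is ours, added for clarity, and simply restates the stated factor loadings of 0, 1, and 6.5:

y_{ti} = \eta_{0i} + \lambda_t \, \eta_{1i} + \varepsilon_{ti}, \qquad \lambda_1 = 0, \; \lambda_2 = 1, \; \lambda_3 = 6.5,
\eta_{0i} = \alpha_0 + \zeta_{0i}, \qquad \eta_{1i} = \alpha_1 + \zeta_{1i},

where y_{ti} is depressive affect or self-esteem for respondent i at wave t, \eta_{0i} and \eta_{1i} are the intercept and linear slope factors, \alpha_0 and \alpha_1 are the factor means, and \zeta and \varepsilon are disturbance terms. In the multiple-group analyses, the factor means (and, where noted, other parameters) are either constrained to equality or allowed to vary across the sexual orientation groups.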
In order to account for Add Health's sampling design, we included a stratification variable and used a maximum likelihood estimator with robust standard errors, as suggested by the administrators of Add Health for analyses in Mplus (Chantala, 2003). All multi-group comparisons were based on χ² difference tests. When we conducted multi-group comparisons, only the model parameter of focus was constrained to be equal across the groups. Unless otherwise specified, all other model parameters (e.g., means, variances, and covariances) were free to vary across groups. Because ordinary χ² difference tests cannot be computed when using a robust maximum likelihood estimator (Muthen & Muthen, 1998-2009), differences in model fit were tested via the equations provided by Satorra and Bentler (1999). Due to space constraints, fit indices are not presented for each growth model, though in every case the fit was excellent (i.e., CFI > .95 and RMSEA < .05; McDonald & Ringo Ho, 2002). Depressive affect-Pertinent results are listed in the first two columns of Table 3. Among the entire sample, intercept levels of depressive affect were low (i.e., .638 on a scale of 0 to 3), and growth in depressive affect was negative (-.022). Intercept levels of depressive affect were equivalent across the three SM groups, χ²(2) = .342, p = .84. However, collectively the three SM groups reported higher intercept levels of depressive affect (.778) than Heterosexual-Identified/non-SM (.619), χ²(1) = 277.17, p < .001. Among the three SM groups, growth of depressive affect was more negative among the Bisexual-identified/SM (-.021) and Homosexual-identified/SM (-.029) groups than it was among the Heterosexual-identified/SM group (-.010), χ²(1) = 4.27, p < .05. Also, only the Heterosexual-identified/SM group differed from the Heterosexual-Identified/non-SM group (-.023), χ²(1) = 5.052, p < .05. In sum, at intercept the three SM groups did not differ from one another, but they collectively reported higher levels than Heterosexual-Identified/non-SM. For Heterosexual-identified/SM these initial differences increased over time, but for Homosexual-identified/SM and Bisexual-identified/SM these differences remained stable over time. Self-esteem-In the sample as a whole, intercept levels of self-esteem were high (i.e., 4.085 on a scale of 1 to 5), and growth in self-esteem was positive but moderate (.019). Intercept levels of self-esteem were equivalent across the three SM groups, χ²(2) = .790, p = .67. However, collectively the three SM groups reported lower intercept levels of self-esteem (3.910) than did Heterosexual-identified/non-SM (4.108), χ²(1) = 94.05, p < .001. With respect to growth in self-esteem, none of the four sexuality groups differed from one another. The influence of instability in reported same-sex attractions-The above analyses suggested that reported sexual orientation during early adulthood (i.e., Wave 3) was associated with psychological well-being during adolescence. Next we examined (1) whether instability in reported same-sex attractions was related to adolescent patterns of psychological well-being and (2) whether that instability influenced the relation between declared sexual orientation at Wave 3 and psychological well-being during adolescence.
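Because these group comparisons, and those that follow, all rest on scaled χ² difference tests, a brief numerical sketch may be helpful. The function below follows the commonly published Satorra-Bentler scaled difference formulas for robust (MLM/MLR-type) chi-squares; the fit values in the example are invented for illustration and are not taken from the article.

from scipy.stats import chi2

def scaled_chisq_diff(T0, df0, c0, T1, df1, c1):
    # T0, df0, c0: scaled chi-square, degrees of freedom, and scaling correction
    #              factor for the nested (more constrained) model.
    # T1, df1, c1: the same quantities for the comparison (less constrained) model.
    cd = (df0 * c0 - df1 * c1) / (df0 - df1)   # scaling correction for the difference
    TRd = (T0 * c0 - T1 * c1) / cd             # scaled difference test statistic
    return TRd, df0 - df1

# Illustrative, made-up values: a model constraining one parameter to equality
# across groups (T0) versus the same model without that constraint (T1).
TRd, ddf = scaled_chisq_diff(T0=152.4, df0=25, c0=1.21, T1=148.1, df1=24, c1=1.19)
p_value = chi2.sf(TRd, ddf)   # compare the statistic to a chi-square distribution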
We examined these two questions by repeating the analyses above but including the following instability dummy variables as exogenous predictors of each growth factor: (1) unstable at Waves 1 and 2 (column 1 of Table 2), (2) unstable at Wave 1 or 2, but not both (column 2 of Table 2), and (3) unstable at Wave 3 (column 4 of Table 2). By including these dummy variables in the growth model, the reference group among the SM groups became those who reported stable same-sex attractions across all three waves, and the reference group among the Heterosexual-identified/non-SM group became those who consistently reported no same-sex attractions (column 6 of Table 2). The influence of the three instability dummy variables on each psychological well-being growth factor is presented in Table 4. Based on multi-group analyses, the relation between the instability dummy variables and depressive affect did not differ across the three SM groups. However, the relation did differ between the SM groups and Heterosexual-identified/non-SM. The same was true for self-esteem. Consequently, in Table 4 the results are listed for Heterosexual-identified/non-SM and for the three SM groups combined, but they are not listed separately for each of the three SM groups. Focusing first on SM, in reference to those who persistently reported same-sex attractions at all three waves, those who reported no same-sex attractions at Waves 1 and 2 reported higher psychological well-being at intercept (i.e., lower depressive affect and higher self-esteem). However, they reported smaller increases in psychological well-being over time. Among Heterosexual-Identified/non-SM the relation between instability in reported same-sex attractions and psychological well-being was much more muted, with those reporting same-sex attractions at both Waves 1 and 2 reporting lower depressive affect at intercept. Controlling for instability in reported same-sex attractions did alter the relation between reported sexual orientation at Wave 3 and adolescent psychological well-being. Pertinent results are in the third and fourth columns of Table 3. Concerning depressive affect, intercept levels among the Heterosexual-identified/SM group and the Bisexual-identified/SM group were equivalent, χ²(1) = 2.65, p = .11. Collectively, however, they were higher than levels of depressive affect among both the Homosexual-identified/SM group, χ²(1) = 4.06, p < .05, and the Heterosexual-Identified/non-SM group, χ²(1) = 358.96, p < .001. In addition, the Homosexual-identified/SM group reported higher intercept levels than the Heterosexual-Identified/non-SM group, χ²(1) = 163.41. Taken together, at intercept the Heterosexual-Identified/non-SM group reported the lowest depressive affect, followed by the Homosexual-identified/SM group, followed by the Heterosexual-identified/SM and Bisexual-identified/SM groups, who reported equivalent levels to one another as well as the highest levels overall. Growth in depressive affect was equivalent across the three SM groups, χ²(2) = 1.141, p = .56. However, declines in depressive affect over time were more evident among the SM groups (-.072) than among the Heterosexual-Identified/non-SM group (-.023), χ²(1) = 38.79, p < .001. There were fewer group differences in self-esteem. At intercept the three SM groups reported equivalent levels of self-esteem, χ²(2) = 2.91, p = .23, but collectively they reported lower levels of self-esteem than the Heterosexual-Identified/non-SM group, χ²(1) = 67.84, p < .001.
There were no group differences in the growth of self-esteem. Summary-Wave 3 sexual orientation was associated with psychological well-being. It appeared to have a stronger relation with intercept levels than with growth, with SM reporting lower psychological well-being at intercept. Among the SM groups, early and stable reporting of same-sex attractions was associated with lower initial levels of psychological well-being but greater increases in psychological well-being over time. Within the Heterosexual-Identified/non-SM group, early and stable reporting of no same-sex attractions was associated with lower initial levels of depressive affect. Relative to cases of unstable same-sex attractions, the relation between Wave 3 sexual orientation and adolescent depressive affect was different among those who reported stable same-sex attractions. Specifically, after controlling for instability in reported same-sex attractions, the discrepancy between SM and Heterosexual-Identified/non-SM was larger at the intercept; however, SM also reported greater increases in psychological well-being over time relative to Heterosexual-Identified/non-SM. Thus, relative to those reporting unstable sexual attractions over time, among those reporting stable sexual attractions over time, the initial gap in psychological well-being between SM and Heterosexual-Identified/non-SM was larger; however, that gap also closed at a faster rate over time. --- Sexual-Minority Status and Psychological Well-Being: Cohort and gender differences Building on earlier analyses, we next examined whether the relation between same-sex sexuality and psychological well-being varied across cohort and gender. Preliminary analyses indicated that cohort differences and gender differences in psychological wellbeing were equivalent across the three SM groups. Consequently, for this portion of the analyses we did not distinguish between the individual SM groups but instead compared all SM to the Heterosexual-Identified/non-SM group. Cohort-In order to examine differences across cohort, we used a cohort-by-SM-status grouping variable that broke respondents into four groups: (1) young Heterosexual-Identified/non-SM; (2) old Heterosexual-Identified/non-SM; (3) young SM; and (4) old SM. When using this grouping variable, we used the model constraint command within Mplus (Muthen
& Muthen, 1998-2009), which allows for the creation of new model parameters based on mathematical operations involving already existing model parameters. Using the model constraint command we created four new model parameters: (1) a young intercept difference score [(intercept estimate for young SM) minus (intercept estimate for young Heterosexual-Identified/non-SM)]; (2) an old intercept difference score [(intercept estimate for old SM) minus (intercept estimate for old Heterosexual-Identified/non-SM)]; (3) a young growth difference score [(growth estimate for young SM) minus (growth estimate for young Heterosexual-Identified/non-SM)]; and (4) an old growth difference score [(growth estimate for old SM) minus (growth estimate for old Heterosexual-Identified/non-SM)]. Note that these difference scores represented the model factor for SM relative to the model factor for Heterosexual-Identified/non-SM. Thus a negative value indicated that the SM factor was lower, whereas a positive value indicated that the SM factor was higher. Through a series of focused model comparisons, we examined whether these difference scores varied across the young and old cohorts. Specifically, based on χ² difference tests, we compared the fit of a model where the young intercept difference score and the old intercept difference score were constrained to be equal to the fit of a model where they were not constrained to be equal. We conducted a similar model comparison for the young growth difference score and the old growth difference score. We used this approach because it allowed for the examination of a two-way interaction (cohort by SM status) while allowing the relation between instability in reported same-sex attractions and psychological well-being to vary across groups. We conducted analyses with and without controlling for instability in reported same-sex attractions. We examined differences in depressive affect and self-esteem in separate models. Results are listed in Table 5, where significant differences are indicated by a superscripted number. When not controlling for instability in reported same-sex attractions, the young growth difference score was larger than the old growth difference score, χ²(1) = 6.17, p < .05. Among the young cohort, growth in depressive affect was more positive among SM than among Heterosexual-Identified/non-SM (.017).
Among the old cohort, however, growth in depressive affect was equivalent across the two groups (-.007). Preliminary analyses revealed that the relation between instability in reported same-sex attractions and both depressive affect and self-esteem was equivalent across cohort for Heterosexual-Identified/non-SM. For SM we found that the relation between instability in reported same-sex attractions was equivalent across cohort for depressive affect, but it varied across cohort for self-esteem. The relation was more pronounced among the young cohort, as shown in Table 6. Based on these preliminary findings, we constrained the relation between instability in reported same-sex attractions and psychological well-being to be equal across cohort (except for SM and self-esteem, where the relation varied across cohort). As in earlier analyses, we allowed the relation between instability in reported same-sex attractions and psychological well-being to vary across Heterosexual-Identified/non-SM and SM. When controlling for instability in reported same-sex attractions, the relation between SM status and depressive affect did not vary across cohort. However, for self-esteem the intercept difference score, χ²(1) = 7.13, p < .01, and the growth difference score, χ²(1) = 5.14, p < .05, were much larger among the young cohort, and only among the young cohort were these difference scores significantly different from zero. More specifically, only among the young cohort did those of SM status have, relative to Heterosexual-Identified/non-SM, lower self-esteem at intercept (-.734) but greater increases in self-esteem over time (.100). Gender-In order to examine gender-by-SM differences, we used the same analytic strategy that we used to examine cohort-by-SM status differences, except that we used a different grouping variable. The gender-by-SM status grouping variable broke respondents into four groups: (1) male Heterosexual-Identified/non-SM; (2) female Heterosexual-Identified/non-SM; (3) male SM; and (4) female SM. Results are listed in Table 5. Again, significant differences in difference scores are indicated by a superscripted number in Table 5. When not controlling for instability in reported same-sex attractions, depressive affect growth difference scores were not equivalent among males and females, χ²(1) = 4.36, p < .05. More specifically, among females growth in depressive affect was more positive among SM than among Heterosexual-Identified/non-SM (.015). Among males, however, growth in depressive affect did not differ across Heterosexual-identified/non-SM and SM (-.002). The relation between SM status and self-esteem did not vary across gender. Preliminary analyses revealed that the relation between instability in reported same-sex attractions and both depressive affect and self-esteem was equivalent across gender for Heterosexual-Identified/non-SM. However, among SM the relation between instability in reported same-sex attractions and both depressive affect and self-esteem varied across gender. The relation was more pronounced among males, as shown in Table 6. The relation between instability in reported same-sex attractions and psychological well-being was thus constrained to be equal across gender for Heterosexual-Identified/non-SM and was allowed to vary across gender for SM. Again, we allowed the relation to vary across Heterosexual-Identified/non-SM and SM as well.
When controlling for instability in reported same-sex attractions, the relation between SM status and psychological well-being did not vary across gender. Summary-The relation between SM status and psychological well-being varied across both cohort and gender. In the case of depressive affect, patterns evident among the entire sample when instability controls were not included (i.e., greater increases in depressive affect over time among SM -Heterosexual-identified/SM in particular) were more evident among those in the young cohort and females. However, in the case of self-esteem, patterns found among the entire sample (i.e., intercept differences across SM and Heterosexual-Identified/non-SM) were more evident among the young cohort. A pattern that was not evident among the entire sample emerged as well: Among the entire sample there was no instance when growth in self-esteem varied across any of the sexual orientation groups. However, among the young cohort, growth in self-esteem was more positive among SM. Growth in self-esteem was equivalent across SM status among the old cohort. This differential growth pattern across cohort only emerged when controls for instability in reported same-sex attractions were included. Finally, the relation between early and stable reports of same-sex attractions and psychological well-being (i.e., lower initial levels but greater increases over time) was more pronounced among males. --- Discussion Overall, four main conclusions can be drawn from this study: (1) Psychological well-being disparities between SM and non-SM are in place by early adolescence, and then for many the remainder of adolescence is a recovery period when the disparities narrow over time. (2) Early and stable reporting of same-sex attractions is associated with a greater initial deficit in psychological well-being, but because it is also associated with a quicker recovery over time, the effects are often not long lasting. (3) Though the relation between sexual orientation during early adulthood (i.e., Wave 3) and adolescent psychological well-being was quite similar across gender, the negative relation between psychological well-being and early, stable awareness of same-sex attractions was more pronounced among males. (4) Relative to Bisexual and Homosexual-identified/SM, the understudied yet relatively sizable group of Heterosexual-identified/SM appeared to be at equal risk for deficits in psychological well-being. --- What Does Sexual Orientation during Early Adulthood Mean for Adolescence? Before discussing the findings, we will address some implications that the study's measure of sexual-minority status might have for the conclusions that can be drawn. The measure of sexual minority status was based on a measure of sexual orientation during early adulthood (Wave 3). Thus, the measure of sexual orientation was a static measure that failed to account for the fluidity of sexual identification over time (Diamond, 2006). Nonetheless, the measure was linked with indicators of psychological well-being that predated it by over six years. While one's declared sexual orientation during early adulthood may not be indicative of one's sexual orientation during adolescence, it is likely indicative of whether one dealt with same-sex sexuality during some point of adolescence. It is also likely indicative of the importance or primacy of that same-sex sexuality within one's overall sense of adolescent sexuality. 
For example, while both those who identified as homosexual and bisexual during early adulthood likely dealt with same-sex attractions during adolescence, for those who identified as homosexual during early adulthood those adolescent same-sex attractions may have been a more important or central component of their adolescent sexuality. Importantly, though a rough indication, the measures of same-sex attraction during adolescence help to narrow when during adolescence these individuals were first dealing with this same-sex sexuality. Thus, when paired together, the adolescent measures of same-sex sexuality and the early adulthood measure of sexual orientation provide among a large, national, longitudinal sample a meaningful account of sexuality as well as emerging awareness of that sexuality. --- The Emergence of the Negative Relation between SM Status and Psychological Well-Being The driving motivation for this study was to examine whether the negative relation between SM status and psychological well-being ( 1) is similar to that of other social statuses where differences are primarily in place by early adolescence; or (2) continues to emerge through the adolescent years when SM are thought to encounter unique developmental challenges. The findings suggest that the negative relation between SM status (based on the declaration of a sexual orientation that includes same-sex attractions during early adulthood) and psychological wellbeing is largely in place by early adolescence. This is evidenced by the fact that among both the young and old cohorts, and regardless of adolescent patterns of reported same-sex attractions, the discrepancies in psychological well-being were largest at the study's onset (when those among the young and old cohorts ranged between 12 and 15, and 16 and 19 respectively). Moreover, middle childhood and early adolescence appear to be more of a struggle for those who report early and stable same-sex attractions, since by early adolescence these individuals report the greatest deficits in psychological well-being relative to Heterosexual-Identified/non-SM. Across adolescence the negative relation between SM status (again based on declared sexual orientation during early adulthood) and psychological well-being either remained stable or decreased. Among those who reported early and stable same-sex attractions, the negative relation between SM status and psychological well-being decreased across time. Importantly, among the young cohort (12-15 years of age at Wave 1), this pattern held true for both depressive affect and self-esteem. This finding suggests that for those who reported early, stable same-sex attractions, the negative relation between SM status and psychological well-being decreased across time, even among those who were early adolescents at the onset of the study. When ignoring same-sex attractions and focusing on early adulthood sexual orientation, the relation between SM status and psychological well-being was stable across time except for two instances: The first exception was among the whole sample, where the negative relation between Heterosexual-Identified/non-SM and Heterosexual-identified/SM increased across time. This pattern held only for depressive affect, and it was likely due to the fact that Heterosexual-Identified/SM were the group most likely to report unstable samesex attractions. These types of attractions, in turn, were associated with less of an increase in psychological well-being across time. 
The second exception was among the young cohort, where the negative relation between SM status and psychological well-being increased across time. Again, this pattern held only for depressive affect and only for those reporting unstable same-sex attractions. As noted above, this pattern was reversed when controlling for instability in reported same-sex attractions. Taken together, the negative relation between SM status and psychological well-being generally did not become more pronounced across adolescence. To the contrary, it either remained stable or even decreased among those who reported early and stable same-sex attractions. --- Why Is the Negative Relation in Place by Early Adolescence? Most of the challenges associated with being a sexual minority (e.g., dealing with homophobia and bullying, trying to find other SM peers, navigating romantic relationships, coming out), are confronted over the course of adolescence, not prior to it. The relation between declared sexual orientation during early adulthood and psychological well-being seems to manifest by early adolescence and does not increase thereafter, which speaks to the deleterious effects of feeling different from others during middle childhood and early adolescence. Though individuals must deal throughout the lifespan with being members of devalued groups and the sense of difference that accompanies those memberships, middle childhood is the first time individuals are confronted with this sense of difference. After all, it is not until middle childhood that youth are cognitively capable of internalizing this sense of difference as meaningful to their own personal sense of value (Harter, 2006). Consequently, they likely have not yet acquired the tools for dealing with this sense of difference. As a result those in middle childhood may be more likely to have their sense of well-being negatively influenced by that sense of difference. Potentially compounding the deleterious effects of this sense of difference during middle childhood is the fact that unlike individuals of other stigmatized groups, SM often deal with this sense of difference in isolation, since those around them are predominantly, if not completely, of the sexual majority (D'Augelli & Hershberger, 1993). Contrast this to other youth of at-risk social status, such as females or members of racial minorities, who (1) are likely to have role models in the home or at school as well as peers and friends who share their status and (2) likely have parents or extended family members actively socializing them to deal with the challenges associated with their social status (Bowman & Howard, 1985;Cross, 1991;Thornton, 1997). Finally, the initial deficits may be larger among those SM reporting early and stable same-sex attractions because they are more likely to be dealing with this novel sense of difference at an even earlier age, an age at which they are even more likely to be isolated from others in the SM community (D'Augelli, 1996;Friedman et al., 2008). --- Who "Recovers" and Why? The negative relation between a declared sexual orientation during early adulthood that includes same-sex attractions and adolescent psychological well-being did decrease across adolescence, but only for a select group. The "recovery" or narrowing of psychological wellbeing deficits between SM and Heterosexual-Identified/non-SM was limited to those who reported early and stable same-sex attractions. 
In the case of self-esteem, the recovery was limited to the young cohort, those who ranged between 12 and 15 at the onset and between 18 and 23 at the conclusion of the study. Why the recovery was limited to those who reported early, stable same-sex attractions requires further examination, but we offer two possible explanations. First, SM who reported early, stable same-sex attractions had farther to recover. That is, relative to Heterosexual-Identified/non-SM, SM who reported early and stable same-sex attractions reported far lower initial levels of psychological well-being than did SM who did not report early and stable same-sex attractions. Second, SM who reported early and stable same-sex attractions may have benefited from having longer to adjust to their status and incorporate it into their sense of self (Floyd & Bakeman, 2006; Savin-Williams, 1995). Regardless of the reason, it seems that the earlier the awareness of same-sex attractions, the greater the initial deficit in psychological well-being, but also the steeper the recovery. This pattern of recovery among those reporting early, stable same-sex attraction is inconsistent with Friedman et al.'s (2008) findings that those progressing through gay-related developmental milestones at earlier ages tended to report lower functioning during adulthood. Respondents included in the Friedman et al. (2008) study were teenagers in the early to mid 1980s, whereas respondents in Add Health were teenagers in the mid to late 1990s. Perhaps historical increases in the acceptance of homosexuality (Savin-Williams, 2005) have contributed to reductions in the long-term consequences of an early awareness of same-sex sexuality. In cases where there was a recovery, such recovery was generally not complete. SM still reported deficits in psychological well-being during early adulthood; those deficits were simply smaller than they were during early adolescence. With and without controls for instability in reported same-sex attractions, post-hoc comparisons of Wave 3 psychological well-being revealed that each of the three SM groups still reported lower psychological well-being relative to the Heterosexual-Identified/non-SM group (results not tabled). The only exception was among Homosexual-identified/SM who reported early and stable same-sex attractions. This group reported Wave 3 levels of depressive affect that were equivalent to Heterosexual-Identified/non-SM. --- Overall Lack of Gender Differences The relation between sexual orientation during early adulthood (i.e., Wave 3) and adolescent psychological well-being was largely equivalent across gender. There was, however, a gender difference in the negative relation between early and stable reports of same-sex attractions and initial levels of psychological well-being, with the negative relation proving more pronounced among males. As noted in the Introduction, previous research has found that the negative relation between SM status and psychological well-being is more pronounced among males (Balsam et al., 2005; Cochran et al., 2003; Elze, 2002; Fergusson et al., 2005). This study's findings suggest a more nuanced pattern. Instead of the relation between sexual orientation and psychological well-being being more pronounced among males, it may be that an early awareness of one's same-sex attractions (and in turn one's sexual orientation) has a more detrimental impact on males than females.
For the most part the relation between early awareness and growth of psychological wellbeing did not vary across gender, suggesting that these effects persist into early adulthood. Early awareness may be more problematic for males because sexuality as well as gender roles are generally more rigid among males (Diamond, 2006;Langlois & Downs, 1980;Richardson, Bernstein, & Hendrick, 1980), and because relative to females exhibiting same-sex sexuality, males exhibiting same-sex sexuality are more likely to be victimized by members of their own gender (Dunkle & Francis, 1990;Russell & Joyner, 2001). --- Limitations This study has several important limitations, the first being the limitations of our measure of sexual orientation as discussed earlier. A second limitation is that the sample sizes of the SM-sub groups were likely not sufficiently large to capture small to modest effects. This may be why the present study found few psychological well-being differences among the three SM groups. Finally, the earliest data available in Add Health are from early adolescence. Ideally, the data would extend back into middle childhood. Unfortunately preadolescent data on the SM community are difficult to obtain, in part because parents and guardians tend to be wary of researchers asking their pre-adolescent children questions pertaining to sexuality. --- Conclusions and Next Steps Sexual minorities or those exhibiting same-sex sexuality are a heterogeneous group who vary not only in sexual orientation but also in the developmental course they follow in terms of their awareness and acceptance of their sexual orientation. Among those exhibiting samesex sexuality, there also is heterogeneity in terms of developmental patterns of psychological wellbeing. Across adolescence, trajectories of psychological well-being converge, such that by early adulthood those exhibiting same-sex sexuality look more similar to both one another and those not exhibiting same-sex sexuality. In developmental science this phenomenon is termed equifinality (Bertalanffy, 1968) -multiple pathways to the same (or similar) end point. This pattern of findings highlights the important contributions that developmental theory and longitudinal data can make to our understanding of same-sex sexuality, sexual orientation, and psychological well-being. More specifically, the pattern of results suggests that (1) the negative relation between SM status and psychological well-being is in place by early adolescence, and (2) the exact pathway or trajectory that one follows across adolescence is more a function of the timing of awareness of same sex attractions than it is of actual sexual orientation (as declared during early adulthood). These results raise the possibility that community resources and social support groups geared towards SM youth, now available in many high-schools, may benefit students in grade school and middle school as well. Finally, findings from this study are consistent with emerging research suggesting that relative to those who identify as a SM (i.e., bisexual or homosexual), Heterosexual-identified/SM, an understudied though sizable subgroup of the SM population who comprise about 8% of the overall population and about 80% of the SM population (Austin & Corliss, 2008;Remafedi, Resnick, Blum, & Harris, 1992), are at relatively equal risk (and in some cases greater risk than Homosexualidentified/SM) for deficits in psychological well-being. Future research should incorporate this subgroup when possible. 
Figure 1. Growth model examining psychological well-being across 3 waves. --- Table 5 notes: 1 χ²(1) = 6.17, p < .05; 2 χ²(1) = 7.13, p < .01; 3 χ²(1) = 5.17, p < .05; 4 χ²(1) = 4.36, p < .05. --- Table 6. Among SM, the relation between reported instability in same-sex attractions and psychological well-being, by cohort and gender.
Emerging research has shown that those of sexual-minority (SM) status (i.e., those exhibiting same-sex sexuality) report lower levels of psychological well-being. This study aimed to assess whether this relation is largely in place by the onset of adolescence, as it is for other social statuses, or whether it continues to emerge over the adolescent years, a period when SM youth face numerous challenges. Moreover, the moderating influence of sexual orientation (identification), early (versus later) reports of same-sex attractions, and gender were also examined. Using data from Add Health, multiple-group latent growth curve analyses were conducted to examine growth patterns in depressive affect and self-esteem. Results suggested that psychological well-being disparities between SM and non-SM were generally in place by early adolescence. For many, the remainder of adolescence was a recovery period when disparities narrowed over time. Early and stable reporting of same-sex attractions was associated with a greater initial deficit in psychological well-being, especially among males, but it was also associated with more rapid recovery. Independent of the timing and stability of reported same-sex attractions over time, actual sexual orientation largely failed to moderate the relation between SM status and psychological well-being. Importantly, the sizable yet understudied subgroup that identified as heterosexual but reported same-sex attractions appeared to be at substantial risk.
Introduction Health is defined not only as the absence of disease and disability but also as a state of complete physical, mental and social well-being [1]. Psychosocial, economic and cultural factors and adequate utilization of health services are important in achieving and maintaining well-being [2]. The number of refugees and asylum seekers in the world is increasing. In 2022, 112.6 million people in the world were in the group defined as refugees or asylum seekers [3]. More than 3.5 million Syrian refugees live in Turkey. Health problems are more common in migrants [4]. In addition, the psychosocial and economic conditions of migrants and the language barriers they face negatively affect their health status as their search for health services remains limited [2]. Migration experience and cultural factors affect migrants' perception of drugs and antibiotics, and unconscious drug use is common among migrants [5,6]. Antibiotic resistance is one of the most important global public health threats worldwide. In particular, unconscious and improper use of antibiotics accelerates the development of resistance, which affects the success of treatment of infectious diseases and the duration of hospitalization, leading to an increase in health-related costs and mortality rates [7,8]. Rational use of drugs, especially antibiotics, is an important factor that prevents morbidity and mortality related to diseases [9]. Inadequate health literacy, self-medication and over-thecounter medication supply are important factors leading to widespread and uncontrolled use of drugs [10]. Interventions in Turkey have shown that educational activities are effective in improving the prescription, distribution and utilization of antibiotics [11][12][13]. Health literacy is defined as the knowledge and cognitive and social competence required for individuals to access, understand, evaluate and use health-related information to protect and improve their health, make decisions about their health status and improve their quality of life [14,15]. Health literacy is an important public health goal that also refers to the state and competence of individuals to meet complex health needs [16][17][18]. Challenging living conditions, cultural factors, language barriers, the complex and multidimensional structure of the health system, and social and economic disadvantages negatively affect migrants' search for and utilization of health services and their health in general [15]. The fact that information sources in health are diverse and information is dense has made the internet an important resource for accessing the right information for health. The internet is a useful and effective tool for accessing accurate health-related information and developing various skills to protect and improve health [19]. E-health literacy refers to an individual's ability to search, find, understand and evaluate health-related information from digital sources and use it for any health condition and/or problem [1,18,19]. Various studies with migrants have shown that their health literacy levels (65.1-67.8%) are inadequate and problematic [20,21]. Immigration is an important social determinant of health related to access to health services, utilization of health services, health perception and health literacy [20,22]. Health literacy is of critical importance in eliminating health inequalities and increasing the health levels in society [21]. The health literacy levels of individuals is an important and determining factor in rational drug use. 
Therefore, efforts to increase the health literacy level of immigrants will contribute greatly to increasing their knowledge about rational drug use and developing positive attitudes. This study aimed to determine the rational drug use and health literacy levels of Syrian adults living in a district of Istanbul and to examine the related factors. --- Results The mean age of the research group was 39.19 ± 13.10 years. In this study, 52.2% (283 people) of the participants were female and 47.8% (259 people) were male. It was determined that 46.5% of the immigrants in the research group were in the age group of 40 and above, 76.9% were married, 53.0% had high school or higher education, 80.4% had low income, 64.0% had been living in Turkey for 7 years or more, 71.4% lived in the same house with 5 or more people, 36.5% had chronic diseases, 60.9% used regular medication and 87.1% consulted a physician first when they got sick. Data on the sociodemographic characteristics and disease-health status of the research group are shown in Table 1. In this study, 97.0% of the immigrants in the research group stated that medication should only be used when prescribed by a doctor. Furthermore, 93.7% stated that people should not keep antibiotics in their homes and then use them for other diseases. In total, 96.1% stated that physicians should prescribe antibiotics only when needed, and 75.8% stated that using enough medication, not too much, leads to recovery. Data on immigrants' attitudes and approaches to rational drug use are shown in Table 2. The mean eHEALS score of the immigrants was 20.57 ± 7.26. It was determined that 80.3% of the immigrant group had limited health literacy, and 19.7% had adequate health literacy. Data on the eHealth Literacy Scale scores are shown in Table 3. In the study group, the mean rank of eHEALS scores was significantly higher among immigrants who were in the 30-39 age group, were married, had low income, had been living in Turkey for 7 years or more, did not have chronic diseases, did not use regular medication and had a monthly out-of-pocket health expenditure of less than 500 TL (p < 0.05). The comparison of sociodemographic and health-disease status with eHEALS is shown in Table 4. Among the independent variables, age group (p = 0.019, OR = 2.83), gender (p = 0.048, OR = 1.60), education level (p = 0.003, OR = 3.96) and regular medication use (p < 0.001, OR = 0.18) were found to contribute significantly to the model. The regression analysis of health literacy level according to sociodemographic characteristics is shown in Table 5. --- Discussion Socially and economically disadvantaged migrants are one of the groups that should be prioritized for public health interventions. It is important to determine the knowledge and behaviors of migrants regarding rational drug use. Health literacy is an important tool to increase the health level of individuals and society. The level of health literacy among migrants is a critically important determinant of rational drug use. In this study, we aimed to determine the rational drug use and e-health literacy levels of Syrian migrants and to evaluate the associated factors. It was found that 76.9% of the migrants in the study group were married, 53.0% had high school or higher education, 80.4% had low income, 60.9% used regular medication, 87.1% consulted a physician first when they got sick, and 80.3% had limited e-health literacy. Approximately 3.5 million Syrian immigrants live in Turkey.
The average age of immigrants is 22.32 years. Overall, 72.68% are women and children. In total, 30.23% are under the age of 10. Furthermore, 2.23% live in temporary shelter centers and 97.7% live in cities (Istanbul: 531,996; Gaziantep: 434,045; Şanlıurfa: 317,786), and the ratio of Syrian immigrants to Turkey's population is 3.73% [23]. The fact that the individuals in the research group first consult a physician when they get sick shows that they care about their health and seek to protect their health. In addition, the 87.1% preference for consulting a physician when ill suggests that immigrants do not experience difficulties in accessing health services in Turkey. In a meta-analysis, it was shown that migration-related factors, as well as social and economic conditions, may affect the health of immigrants [24]. In this study, 97% of the immigrants in the research group stated that medication should only be used when prescribed by a doctor. Furthermore, 93.7% stated that people should not keep antibiotics in their homes and then use them for other illnesses. Moreover, 96.1% stated that doctors should prescribe antibiotics only when needed. In total, 75.8% stated that using enough medication, not too much, leads to recovery. In this study, 51.0% stated that people can stop taking medication if they feel well during treatment. In total, 38% stated that people can stop taking medication if they feel well during treatment, and 38% stated that people can stop taking medication if they feel well. In this study, 0% stated that there is no harm in recommending medication to their relatives with similar complaints. Of the sample, 38.7% stated that herbs can be used instead of medication. In this study, 62.5% stated that using herbal medication as much as desired is not harmful to health. Furthermore, 36.7% stated that the form and duration of medication use cannot be determined by the individual. In this study, 61.0% stated that medications cannot be used to the same extent in every age group. In total, 68.1% stated that the duration of use of medications is not the same, and 67.4% stated that expensive medications are not more effective. The fact that the majority of the immigrants in the study group have low income and about half of them have an education level below high school suggests that their knowledge and perceptions about antibiotics are not sufficient. However, the results of the study show that the general knowledge and perceptions of immigrants about antibiotic use are better than expected [25]. In addition, it is seen that medication compliance is low, and it is common to recommend medication to relatives with similar symptoms. On the other hand, the presence of positive perceptions of herbal medicines and their use among Syrian immigrants may be related to sociocultural factors and past experiences. Increasing antibiotic resistance is now considered a public health problem because it poses both a threat to human health and a serious economic cost [26,27]. In a study conducted with immigrants in the Netherlands, it was shown that immigrants had a more limited perception and knowledge of antibiotics compared to the native population [5]. Although physical and mental health problems are common in immigrants, their low socioeconomic status is associated with poor health outcomes [28]. There are studies showing that treatment compliance is low in immigrants [29].
Studies conducted in Turkey have shown that age, marital status, education level, income level, family structure, place of residence, employment status and health education status are associated with rational drug use [30][31][32]. In a different study, it was shown that giving importance to health and seeking healthy life behaviors positively affected the attitude toward rational drug use [33]. In another study conducted in Turkey, sociodemographic characteristics such as age, gender, employment status and education level were found to be associated with the level of rational drug use knowledge of Syrian immigrants [34]. In a meta-analysis, it was shown that factors such as previous similar symptoms and antibiotic experiences, perceived low severity of the disease, intention to recover quickly, difficulty in accessing a physician or health facility, lack of trust, low cost and ease of use affect/increase self-medication [35]. In another meta-analysis, a positive relationship was found between health literacy and medication adherence [36]. Today, the internet has become a frequently used source of health information because of its ease of access and use, low cost and ubiquity. People frequently use the internet for disease prevention, healthy living behaviors and general disease conditions [37]. However, there may be some difficulties for users to access useful and quality health and medical information online [38]. It is inevitable that individuals with low income and low levels of e-health literacy, such as immigrants, will experience difficulties in this situation. As a matter of fact, the eHEALS median (min-max) value of the migrants in our research group was found to be 21. The e-health literacy level of 80.3% of the immigrant group was found to be limited (insufficient + problematic), and 19.7% was found to be sufficient. In a recent study conducted in the same city, it was also observed that immigrant health literacy levels were insufficient [39]. Immigrants in the study group with low income levels may have limited internet access and use. In addition, low education level and sociocultural factors in the study group may have affected immigrants' access to accurate and reliable information about health on the internet and their ability to understand and use this information. In addition, immigrants' health perceptions, chronic disease status and health-information-seeking behaviors or habits may affect their e-health literacy levels. The level of education and health literacy of society affects the health status of individuals and their attitudes and perceptions towards medicines [40]. On the other hand, in disadvantaged groups such as immigrants and the elderly, technological applications can make a significant contribution to individuals' access to reliable health information and making the right health decisions [41]. It is important that the health services of immigrant-hosting countries are appropriate to the personal needs, living conditions, sociocultural characteristics and competence levels of immigrants. Improving health literacy plays a critical role at this point. In Turkey, Syrian immigrants can access health services free of charge [42]. In addition, health services are provided to these immigrants by Syrian healthcare professionals through reinforced immigrant health centers. 
In these centers, where specialist physicians in various branches work, preventive health services (immunization, family planning, education, screening programs) and outpatient diagnostic and therapeutic health services are provided without language barriers. This situation positively affects Syrian immigrants' access to and use of health services and contributes to the protection and improvement of their health. It should not be overlooked that it also contributes positively to their health literacy status. A meta-analysis has shown that the concept of health literacy is very important for protecting and improving the health of individuals and is an important determinant of the health level of society [43]. Basic health literacy facilitates individuals' access to health services, helps reduce health inequalities and contributes to the development of health services policies at the societal level [44]. In the research group, the mean ranks of e-health literacy were significantly higher in the 30-39 age group and among those who were married, had low income, had lived in Turkey for 7 years or more, did not have chronic diseases, did not use regular medication, did nothing for a while when they got sick and used medication according to their own experience, and had a monthly out-of-pocket health expenditure of less than 500 TL (p < 0.05). The presence of social support within the family among married individuals in the research group may have contributed to the well-being and better health of individuals. In a meta-analysis, the positive effect of education, income level and the presence of social support on individuals' health literacy was shown [45]. Immigrants who live in Turkey for longer periods of time overcome the language barrier to a large extent. Since they are in contact with the community, their children or siblings go to school, and their spouses or family members work, there is always someone in the family who speaks Turkish. This makes it easier for Syrian immigrants who live in Turkey for longer periods of time to follow official procedures and to access and use health services. As a matter of fact, Syrian immigrants who are registered in the city where they reside in Turkey can access public health services free of charge thanks to their temporary protection status, and they can also get their medicines free of charge or by paying co-payments. This is supported by the fact that the vast majority (84.1%) of immigrants in the study had a small out-of-pocket health expenditure (<500 TL). The high level of eHealth literacy of immigrants who do not have chronic diseases and do not use regular medication may be related to the fact that, because they care more about their health, they use digital resources more intensively to access reliable and accurate information about protecting their health and adopting healthy life behaviors. Systematic reviews have shown that education level is associated with eHealth literacy [46]. Individuals with low health literacy are likely to skip preventive health services, show poor treatment compliance and chronic disease management and, more generally, have poor health outcomes [47]. Factors such as cultural beliefs about health and illness, language problems and socioeconomic status affect immigrants' communication with healthcare providers and their understanding of and compliance with medical instructions [48]. 
In the logistic regression analysis established to predict the level of eHealth literacy according to sociodemographic characteristics, model fit was found to be good. Variables whose effects were expected to be most significant within the scope of the research (age, gender, education level, chronic disease, continuous medication use, etc.) were included in the regression analysis, in order to obtain a stronger prediction model with fewer variables. Among the independent variables, age group (p = 0.019, OR = 2.83), gender (p = 0.048, OR = 1.60), education level (p = 0.003, OR = 3.96) and regular medication use (p < 0.001, OR = 0.18) were found to contribute significantly to the model. Female gender, advanced age, low education level and regular medication use decrease the level of eHealth literacy. In the traditional sociocultural structure of Syrian immigrants, it is mostly men who have more contact with the outside social environment, attend school and have a job. For this reason, immigrant women are less likely to access the internet, as they lack both language skills and economic independence. This situation also limits immigrant women's ability to search for, understand and use health-related information on the internet. Immigrants who are older and have lower educational attainment have more problems accessing the internet and understanding and evaluating accurate and reliable health-related information online. In a study conducted with Syrian immigrants in Sweden, it was shown that immigrants with low educational levels had limited health literacy [2]. Providing education to individuals with chronic diseases has been shown to increase rational drug use and health literacy [49]. Prolonged length of stay, positive perception of social status and the educational level of immigrants in the country of migration affect the level of health literacy [20]. Immigrants are at high risk of having limited health literacy, yet health literacy plays an important role in achieving better health for themselves and their families [50]. In another meta-analysis, it was shown that providing accessible and reliable health information on the internet or in the media in simple and understandable language would contribute to improving individuals' health literacy levels [51]. --- Strengths and Limitations of the Research Given Turkey's significant standing in global migration statistics, research conducted on migrants within the country undeniably offers critical contributions to the literature. The present study was meticulously executed in an area densely populated by migrants, employing Arabic-speaking interpreters. This approach ensured direct engagement with the migrants, allowing for a more authentic representation of their voices and experiences. Specifically, by focusing on this distinct and often hard-to-reach migrant group, our research aims to fill a palpable gap in the literature by centering on their subjective evaluations. However, it is imperative to underscore certain limitations of our study. Conducting the research in a single region may impose constraints on the generalizability of the findings to the broader migrant population in Turkey. Additionally, the involvement of interpreters, while invaluable, could potentially raise concerns about the accuracy and impartiality of the translated responses from the migrants. 
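To make the reported odds ratios concrete, the following is a minimal sketch (not the authors' SPSS analysis) of how a binary logistic regression of this kind can be fitted and converted to odds ratios with confidence intervals. The variable names and the simulated data are illustrative assumptions only; Python's statsmodels is used here purely as an example.

import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical data: binary outcome (1 = adequate eHealth literacy) and
# illustrative binary sociodemographic predictors for n = 542 respondents.
rng = np.random.default_rng(0)
n = 542
df = pd.DataFrame({
    "adequate_ehealth": rng.integers(0, 2, n),
    "age_30_39": rng.integers(0, 2, n),
    "female": rng.integers(0, 2, n),
    "education_high": rng.integers(0, 2, n),
    "regular_medication": rng.integers(0, 2, n),
})

X = sm.add_constant(df[["age_30_39", "female", "education_high", "regular_medication"]])
model = sm.Logit(df["adequate_ehealth"], X).fit(disp=0)

# Exponentiated coefficients are the odds ratios reported in such analyses.
summary = pd.DataFrame({
    "OR": np.exp(model.params),
    "CI_low": np.exp(model.conf_int()[0]),
    "CI_high": np.exp(model.conf_int()[1]),
    "p": model.pvalues,
})
print(summary.round(3))

Read this way, an odds ratio above 1 (such as the 3.96 reported for education level) indicates higher odds of the outcome category coded as 1, while an odds ratio below 1 (such as the 0.18 for regular medication use) indicates lower odds.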
Moreover, as the study predominantly focuses on Arabic-speaking migrants, it does not encompass insights from migrants of other linguistic backgrounds. --- Materials and Methods --- Research Type and Research Population A cross-sectional study was conducted. The population of the study consisted of Syrian immigrants over the age of 18 who applied to the Sultanbeyli Strengthened Migrant Health Center. Sultanbeyli is a district with a total population of 358,201 and has the lowest socioeconomic level in Istanbul. Around 22,000 Syrian immigrants live in the district. Strengthened Migrant Health Centers are organizations that provide primary health care services to Syrian refugees who have settled in Turkey. These centers are staffed by specialist physicians, general practitioners, dentists, allied health personnel, psychologists and social workers, most of whom are Syrian healthcare professionals, so there is no language barrier [52]. There are 8 of these centers in Istanbul, 1 of which is located in Sultanbeyli district. All immigrants over the age of 18 who volunteered to participate in the study were included without sampling. --- Measurement Tools For the study, a questionnaire was prepared based on the literature and consisted of three sections. The first part of the questionnaire consisted of statements evaluating sociodemographic characteristics and health status. The second section includes statements on rational drug use prepared according to guidelines and other sources in the literature. The third section includes the Arabic form of the E-Health Literacy Scale. The survey was conducted face-to-face with immigrants through Arabic-speaking interpreters. --- Rational Drug and Antibiotic Use Survey The rational drug use questionnaire was prepared based on the World Health Organization's (WHO) public awareness survey on antibiotic resistance conducted in 6 different WHO regions in 2015, the rational drug use scale whose validity and reliability studies have been conducted in Turkey, and other sources in the literature. This section consists of statements aiming to obtain information about the rational drug use status and attitudes of immigrants [7,30,53]. The statements in this section are of a 5-point Likert type and consist of a total of 13 items. Each item has a response scale ranging from "Strongly Disagree" to "Strongly Agree". The section also includes negatively worded statements. The statements were compiled in order to learn the participants' level of knowledge about the use of medicines and antibiotics and to evaluate their attitudes. The items in the section provide a subjective assessment of the rational use of medicines and antibiotics by immigrants [7,30]. --- E-Health Literacy Scale (eHEALS) The eHEALS was developed by Norman and Skinner in 2006 and aims to measure literacy skills useful in assessing the effects of strategies for delivering online information and applications [1,18]. The eHEALS consists of 8 items, and participants are asked to rate each item on a 5-point Likert scale (strongly disagree, disagree, undecided, agree, or strongly agree). Total scores range from 8 to 40, with higher scores indicating higher self-perceived eHealth literacy [1,54]. eHEALS scores are divided into thresholds of inadequate (8-20 points), problematic (21-26 points) and adequate (27-40 points). The Arabic validity and reliability study of the eHEALS was conducted by Wangdahl et al. [19]. 
However, since the use of 3 thresholds in the Arabic eHEALS threatens the validity and reliability of the scale, the scale was divided into two categories: limited (insufficient + problematic = 8-26 points) and sufficient (27-40 points). In our study, the two-category version of the scale was used to identify those with eHealth literacy problems [1,19]. Psychometric tests show that the eHEALS is a valid and reliable instrument, and it has also been translated, adapted and validated in Arabic [2,54]. --- Statistical Analysis For statistical analysis, the eHEALS was accepted as the dependent variable. Statistical Package for the Social Sciences (SPSS) version 26.0 was used for statistical analysis. Continuous variables were expressed as mean ± standard deviation (SD) and median. Categorical variables were expressed as numbers and percentages (%). Kolmogorov-Smirnov and Shapiro-Wilk tests were performed for normality analysis of the data, and the skewness and kurtosis values of scales with p < 0.05 were examined. Values with skewness and kurtosis between ±1.5 were accepted as normally distributed, and values outside ±1.5 were considered not normally distributed. Since the data in the research group did not show normal distribution, the Mann-Whitney U test and Kruskal-Wallis test were used in data analysis. Chi-square and Fisher's exact tests were used to compare categorical variables between groups. Correlation (Spearman) analysis was used for the relationship between continuous variables. Logistic regression analysis was performed to predict the level of eHealth literacy according to the independent variables, model fits were evaluated, and the variables that contributed significantly to the model were examined. In statistical analyses, p < 0.05 was considered significant. --- Ethics Committee Permission Ethics committee permission was obtained from the Istanbul Medipol University Non-Interventional Clinical Research Ethics Committee on 24 November 2022 with decision number 991. The individuals included in the study were asked to participate after being informed about the research and permissions. A questionnaire was administered to individuals who agreed to participate in the study. --- Conclusions It was observed that Syrian immigrants have very good knowledge and attitudes about antibiotic supply and use. However, their knowledge and attitudes regarding drug use, treatment compliance and herbal medicines were not sufficient. The eHealth literacy level of 80.3% of the immigrants in the research group was found to be limited (insufficient + problematic) and that of 19.7% was found to be sufficient. The eHEALS level of Syrian immigrants was found to be associated with being married, having a low income level, living in Turkey for a longer period of time, chronic disease status, regular medication use and monthly out-of-pocket health expenditure. In addition, advanced age, low education level, female gender and regular medication use were associated with a low level of eHealth literacy. Interventions targeting disadvantaged groups such as immigrants are very important in preventing infectious diseases, reducing treatment costs and monitoring chronic diseases. At this stage, health literacy interventions play a critical role. In today's digital environment, eHealth literacy interventions for immigrants will help them access reliable health information online and make the right decisions about their health. 
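As a small illustration of the eHEALS scoring and categorization scheme described above (eight items scored 1-5, totals of 8-40, three thresholds collapsed into limited vs. sufficient), here is a minimal sketch in Python. The function names and the example responses are invented for illustration.

def eheals_total(responses):
    # Sum of 8 Likert items scored 1-5; valid totals range from 8 to 40.
    if len(responses) != 8 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("eHEALS expects 8 item responses scored 1-5")
    return sum(responses)

def eheals_category(total, dichotomous=True):
    # Classify a total using the thresholds described in the text.
    if dichotomous:
        return "limited" if total <= 26 else "sufficient"   # 8-26 vs. 27-40
    if total <= 20:
        return "inadequate"    # 8-20
    if total <= 26:
        return "problematic"   # 21-26
    return "adequate"          # 27-40

score = eheals_total([3, 4, 2, 3, 3, 4, 3, 2])   # 24
print(score, eheals_category(score), eheals_category(score, dichotomous=False))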
In addition, health promotion interventions such as eHealth literacy will enable immigrants to care about their health and improve their quality of life. The success of health policies will be enhanced if countries with a high concentration of immigrants plan and implement health services by taking into account immigrants' needs, learning competencies, language problems, living conditions and sociocultural characteristics. eHealth literacy interventions for immigrants will facilitate the provision of health services and contribute to the safe access of immigrants to health services. --- Data Availability Statement: All datasets and analyses used throughout the study are available from the corresponding author upon reasonable request. --- Institutional Review Board Statement: Prior to initiating the research, ethical approval was secured from the Ethics Committee of Istanbul Medipol University on 24 November 2022 with decision number 991. All individuals involved in the study were comprehensively informed about the research aims and procedures and were subsequently invited to participate. Our research was conducted in full accordance with the Declaration of Helsinki, and informed consent was obtained from every participant. Informed Consent Statement: Informed consent was obtained from all participants involved in the study. --- Conflicts of Interest: The authors declare no conflict of interest.
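To make the Statistical Analysis subsection above more concrete, the sketch below reproduces the described sequence (normality screening, non-parametric group comparisons, and Spearman correlation) outside SPSS. The groups, sample sizes and data are simulated assumptions, and SciPy is used only as an illustrative substitute.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical eHEALS totals (8-40) for two illustrative groups, e.g. married vs. not married.
group_a = rng.integers(8, 41, 120)
group_b = rng.integers(8, 41, 150)

# Normality screening (Shapiro-Wilk shown here) plus a skewness/kurtosis check;
# values within +/-1.5 were treated as approximately normal in the study.
print(stats.shapiro(group_a), stats.shapiro(group_b))
print(stats.skew(group_a), stats.kurtosis(group_a))

# Non-normal data -> non-parametric comparisons.
print(stats.mannwhitneyu(group_a, group_b))                        # two groups
print(stats.kruskal(group_a[:40], group_a[40:80], group_a[80:]))   # three or more groups

# Spearman correlation between two continuous variables, e.g. age and eHEALS total.
age = rng.integers(18, 70, 120)
print(stats.spearmanr(age, group_a))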
Rational drug use is a pivotal concept linked with morbidity and mortality. Immigration plays a significant role as a determinant affecting individuals' health-related attitudes, behaviors, and the pursuit of health services. Within this context, the study was initiated to assess the factors influencing health literacy and rational drug use among Syrian immigrants in Istanbul. A cross-sectional study was undertaken on 542 Syrian adults utilizing a three-part questionnaire encompassing sociodemographics, rational drug use, and the e-health literacy scale (eHEALS). With an average age of 39.19 ± 13.10 years, a majority of participants believed medications should solely be doctor-prescribed (97%) and opposed keeping antibiotics at home (93.7%). Yet, 62.5% thought excessive herbal medicine use was harmless. The mean eHEALS score stood at 20.57 ± 7.26, and factors like age, marital status, income, and duration of stay in Turkey influenced e-health literacy. Associations were seen between low e-health literacy and being female, being older, having a lower education level, and regular medication use. Syrian immigrants displayed proper knowledge concerning antibiotics yet exhibited gaps in their understanding of general drug usage, treatment adherence, and herbal medicines. Approximately 80.3% had limited health literacy, pointing to the need for targeted interventions for enhanced health and societal assimilation.
convinced of the need to raise full awareness of the risks and to take action. This is no longer the case since the entry of an issue such as global warming in the public discourse at national and world-wide level, where a lack of clarity, dissenting opinions or biases generated by special interests (the coal industry in the USA being a case in point) are widespread. Also, while more cost and benefit calculations are now being made publicly about a serious emissions cut (either through a cap and trade system or a carbon tax), what the public seems to be sensitive to is not just economic expediency, but also the fate of the earth and future generations. Hence, a more and more complex public discourse about global threats implies a role for philosophy, as does the necessity to motivate views that used to be self-evident to narrow and specialized audiences. Having said something about the intention of this Special Issue, I do not think comments or criticism of the chapters are what is required from the editor-at least in this case. I have also refrained from writing a Conclusion because I deem it more fruitful for the reader to look at the plurality of positions, vocabularies and research interests expressed by the authors rather than come to a somehow unifying conclusion. Everyone will pick out the stimuli emanating from this plurality that is most relevant to themselves. What I will try to do in the following is rather to signal five foci in the articles that compose this issue: (1) What links are there between risk and responsibility? (2) What novelties are highlighted by the authors? (3) Definitions and the history of 'risk'. (4) Why act responsibly? (5) How can we act thus? (1) Not all of the authors problematize the linkage between risk and responsibility, and those who do so give different versions of it. Pellizzoni sticks to the classical notion of responsibility as imputability and sees responsibility as structurally coupled with risk taking. Pulcini looks at the emergence of global risks such as nuclear war as the factor that redefines responsibility as an attitude towards others rather than the imputability of a certain type of behaviour to an actor. In my contribution only risks that can be managed by humans are seen as capable of being a source of responsibility, which is regarded as feeling responsible for something and towards somebody; this is not the only limit I set to the scope and meaning of 'risk'. (2) Many authors converge in underlining that the magnitude of the new risks, particularly in the environmental realm, and more precisely the new magnitude (disruption of world society or even civilization) of the eventual loss, creates new settings for reflection on responsibility. Jamieson sees the difficulties of interpreting climate change as a problem of individual moral responsibility, but concludes that particularly with an eye to this problem it is the very 'everyday understandings' of moral responsibility that should be changed. That the new situation requires a redefinition of our moral categories is a position largely shared by Pulcini, as we have just seen. On other terrains, other authors point out the effects of these new elements. Ferretti argues that in the case of risks of a possibly catastrophic dimension and affecting different generations, the compensation model based on tort law can no longer apply. 
Pellizzoni points at the epistemological novelty of risks which, due to their very radical nature, cannot be assessed by the usual scientific procedure of trial and error. (3) Most authors adhere to the classical definition of risk as what combines the possibility of harm or loss with the probability that it will actually happen. Many authors also cite the distinction between risk and uncertainty, but only in my contribution is this distinction taken as narrowly as to exclude extreme events such as nuclear war or catastrophic climate change from the category of risk. Pellizzoni, on the contrary, regards uncertainty as a special case of risk. With regard to climate change, Jamieson distinguishes between the risks represented by a large, but still linear change and an even larger, non-linear change. As uncertainties in forecasting future phenomena are sometimes taken as grounds for not taking action to contain them, it is particularly important that Dalla Chiara makes evident how much 'uncertainty' contemporary science contains as a fundamental and, so to speak, physiological category. This is done by reference to quantum mechanics along with Heisenberg's uncertainty principle and the emergence of 'fuzzy thinking' in logics. Further controversy is met in the historical assessment of the risk category. All those who tackle this issue converge in seeing risk as feature of modernity, but Ferretti and Cerutti underline the progressive side of risk taking as a widening of choice and therefore of liberty, while Pellizzoni tends to view this stance as a manifestation of neo-liberal ideology. (4) As for the reasons justifying the acceptance of responsibility for major risks or threats impending on humankind, both Jamieson and Cerutti concur in maintaining the insufficiency of what Jamieson calls 'prudential responsibility', based on the self-interest of the present generations. While Jamieson resorts to respect for nature as the ultimate reason for assuming responsibility for climate change, in my contribution the argument is based on an obligation towards human generations of the distant future. Pulcini's reasons are teleological rather than normative: acting out of responsibility for 'global risks', as mentioned sub 1), is the only way out of the 'pathologies of the global age'. (5) There are a wealth of proposals as to how to implement our responsibility towards the risks and threats that impend over human life. Some authors put at the centre the redressing of the unjust distribution of risk and harm, whose geography however-Jamieson warns-is shaped by the social divide (poverty, high levels of inequality, poor public services) within rather than between countries. Perhaps surprisingly, more participation is not seen as a significant factor in combating injustice: Ferretti claims the superiority of distributive justice itself, while Pellizzoni argues that the problem is the need to democratize society rather than science and knowledge. Outside the justice paradigm, Pulcini points at the importance of new sentiments capable of letting us feel the new severity of the human condition under global risks, while I argue that the survival of humankind is the primary problem in the context of which considerations of fairness make sense. 
The Special Issue closes with Turnheim and Tezcan's analysis of a case in point, that is, the functioning of the UN Framework Convention on Climate Change seen as an instance of complex governance defined by the relationship with science, an inbuilt reflexivity and forms of governmentality. Obviously the papers in this issue contain more than I can possibly summarize in this introduction, whose goal is to give a sense of the variety of positions and approaches. As for the latter, I wish to stress the attempt made here to give a joint voice to philosophical positions as different as the normative ethics relating to the theory of justice and the moral and political philosophy concerned with the fate of modernity. This multiplicity is intended to provide a variety of stimuli to those who are open to them, not to generate an unlikely synthesis.
It is no novelty to couple together risk and responsibility as scientific themes for joint reflection. What we have attempted to do in this Special Issue is primarily to investigate these two issues as categories, that is, as philosophical concepts that require clear-cut definitions as a starting point for examining their intertwinement and any ultimate shifts in their meanings under new circumstances such as the emergence of 'technological risks' or 'global challenges'. This intent has motivated a shift in the main role among disciplines: here political philosophy, ethics and philosophy of science are given the leading role in debating risk, whereas elsewhere this is given to decision theory, sociology (of risk) and political science, the latter represented in this issue by just one paper, that of Turnheim and Tezcan. This shift is, however, not just brought about by the disciplinary affiliation of this guest editor nor by a chance mix of authors. Its rationale lies rather in re ipsa, in the growing request for philosophical elaboration on themes that used to be confined to social or even hard sciences. There are two reasons for this: first, the amount of possible harm contained in technological development as a whole (global warming) or in its most lethal chapter (nuclear weapons) raises ultimate problems of life and death, well-being and extreme misery for the whole of humankind that can typically only be grasped by philosophy (ethics, metaphysics) or theology. Second comes a need currently emerging in public discourse about global risks or threats: as long as the reasoning about them took place in epistemic communities (for example of climatologists or public health specialists) or ecological advocacy groups, attitudes of scepticism or confusion rarely arose, and nearly every partner to the conversation was
Background Indigenous peoples around the world experience higher rates of poor health, poverty, poor diet, inadequate housing and other social and health problems relative to non-Indigenous people. These disparities are found in nearly all countries with Indigenous populations, including some of the wealthiest nations in the Organisation for Economic Co-operation and Development (OECD) [1,2]. The narrowing of these gaps in health and socio-economic outcomes has been a focus of successive governments in these nations since at least the 1970s. Understanding the complex historical, political and socio-economic factors that have led to the present situation has also been a key focus for medical and social sciences across the past four decades [1,2]. High-profile reviews published by the United Nations and others in recent years have documented the common factors underlying the continuation of health and social inequalities experienced by Indigenous populations across the globe, including systematic loss of culture and language, dispossession from traditional territories, and economic and social marginalization [2][3][4]. Indigenous inequality is a global health problem, but it is perhaps most surprising to witness its continuation in some of the world's most wealthy countries. A commonly used barometer for the comparison of health and socioeconomic development across countries is the United Nations' Human Development Index (HDI). Australia, Canada and New Zealand regularly place among the top 10 countries in the world on this annual measure, which combines education, income and life expectancy [5]. A previous study showed that these countries' Indigenous populations would rank far lower on the HDI league table than their total populations, revealing the relative disadvantage of Indigenous peoples [6]. Each of these countries has since demonstrated a commitment to improving outcomes for Indigenous peoples by signing the United Nations Declaration on the Rights of Indigenous Peoples [7], which specifically articulates Indigenous peoples' rights to "improvement of their economic and social conditions". The work of Marmot and others has demonstrated the existence of marked social gradients in health among the populations of wealthy nations [8]. In some cases the poorest groups in these societies have health and life-expectancy profiles similar to those living in developing nations. Much of this observed discrepancy in health outcomes has been attributed to so-called "social determinants of health", which we might define as those non-health indicators of life outcomes which influence an individual's health status across their life course. These can be socio-economic indicators such as education, employment status (including job type for those who are employed), income and wealth, property rights, justice system contacts, and social connections and supports, which impact a person's ability to: obtain preventive health knowledge; apply that knowledge to their own life; and access appropriate health services when treatment is required for a given condition. Marmot's observations around health outcomes for the poor in relation to the unequal distribution of resources in wealthy societies [8] have been placed into global Indigenous perspective by the work of Gracey, King and Smith [3,4]. 
Where Marmot suggests that improving education, employment and income among disadvantaged segments of society will have positive implications for health and general wellbeing [8], Gracey, King and Smith [3,4] point out that the health of Indigenous populations may also be affected by additional and unique factors, such as cultural security, connection to lands, language, and culturally defined notions of health and wellbeing [3,4]. Our focus is on Australia, Canada, and New Zealand. In 2006 the combined Indigenous populations for these developed nations was 2.7 million persons, from a total population of about 55 million people [9][10][11]. These countries share a common pattern of mainly British colonization over their Indigenous populations; however important factors have uniquely shaped Indigenous-settler relations in each. These include: geography; the relative size of Indigenous and settler populations; and, in Canada, the influence of other colonial powers [12]. Despite these differences, persistent social, economic, and health disparities between Indigenous and non-Indigenous populations exist in all three countries. Drawing on these perspectives, our study documents the relative progress made toward reaching equitable levels of socio-economic development among Indigenous citizens in Australia, Canada and New Zealand from 1981-2006, and looks at prospects for closing gaps in social determinants of health with non-Indigenous citizens in the coming 25 years. We focus on relative inequality in the human development domains of education, employment, and income, specifically among those aged 25 to 29 years. This is the age range by which most higher education has been completed, allowing us to more clearly see changes in educational attainment patterns. It is also the age by which a number of other important transitions have generally taken place, such as leaving the parental home, the transition from school to work, and the commencement of family formation, which have life-long implications for wellbeing and intergenerational transfers of human capability. Indeed, "closing the gap" likely requires particular attention to young people, and to the quality of these transitions. We believe this is the first time one study has brought together long-term data comparing these social determinants of health in the Indigenous populations of these three nations. --- Methods --- Study design This study reports results from an analysis of census data for Australia, Canada, and New Zealand. Census data were used in preference to other data sources because of: the long time series available; consistency in measurement of questions and concepts over time; the availability of data for the same time points for each country; the absence of sample size issues; and the coverage of both Indigenous and non-Indigenous populations. Any effects on Indigenous wellbeing of the recent global slowdown in economic activity are not represented, as 2006 is the most recent census year for which these data are available for comparison between all three countries. We measured progress of Indigenous persons aged 25-29 years relative to non-Indigenous persons aged 25-29 years over a 25 year period and across three human development domains: education; employment; and income. Information to support this investigation was obtained from the national statistics agencies of Australia, Canada and New Zealand for the census years 1981, 1986, 1991, 1996, 2001, and 2006, covering each domain of interest [13][14][15]. 
--- Indigenous populations Australia, Canada and New Zealand have all included questions in their population censuses to identify their Indigenous populations in each of the years 1981-2006. This has allowed the data for each of the domains examined in the analyses to be disaggregated by Indigenous status for the three countries. The term "Indigenous persons" is used interchangeably to refer to Australian Aboriginal and Torres Strait Islander peoples, Canadian Aboriginal peoples (including First Nations, Inuit and Métis), and New Zealand Māori. --- Data access and permissions Census data for Australia, Canada, and New Zealand were available to the authors via custom tabulations from their respective national statistical agencies. No special permissions or ethics committee approvals were required for this study as all research was undertaken using publicly available de-identified and confidentialised data, ensuring the anonymity of all persons represented by the data. --- Measures --- Education domain Our measure was the proportion of Indigenous and non-Indigenous persons aged 25-29 years who had achieved a highest qualification of 'bachelor degree or above' in each of the census years 1981-2006 for each country. 'Bachelor degree or above' includes bachelor degrees, plus all postgraduate degrees, graduate diplomas and graduate certificates that require a completed bachelor degree as a pre-requisite for enrollment. While there are some differences in the way overall education statistics have been classified on the census forms of the three countries, there is very good comparability across all three countries for the classification 'bachelor degree or above' used by our study. --- Australia The Australian Bureau of Statistics (ABS) provided us with a set of customized data tables from the Census of Population and Housing showing 'highest level of qualification' by Indigenous status for persons aged 25-29 years, calculated for all census years 1981 to 2006. We report data from these tables on persons with a classification of 'bachelor degree or above'. 'Highest level of qualification' is derived from responses to census questions on the highest year of school completed and level of highest non-school qualification. The data excluded overseas visitors for all years [13]. --- Canada Statistics Canada provided us with a set of customized data tables from the Census of Population showing 'highest level of schooling' by Aboriginal designation, for persons aged 25-29 years, calculated for all census years 1981-2001, and 'highest degree, certificate or diploma' for 2006 [14]. The data refer to the highest grade or year of elementary or secondary school attended, or the highest year of university or other non-university education completed. University education is considered to be above other non-university education. Also, the attainment of a degree, certificate or diploma is considered to be at a higher level than years completed or attended without an educational qualification. From these data we were able to calculate the proportion of Aboriginal and non-Aboriginal persons aged 25-29 years who had achieved a highest qualification of 'bachelor degree or above' in each of the census years. --- New Zealand Statistics New Zealand provided us with a set of customized data tables from the Census of Population and Dwellings showing 'highest qualification' by Māori ethnic group for persons aged 25-29 years, calculated for all census years 1981 to 2006 [15]. 
'Highest qualification' is derived for people aged 15 years and over, and combines responses to census questions on the highest secondary school qualification and post-school qualification, to derive a single highest qualification. The output categories prioritize post school qualifications over any qualification received at school. From this data we were able to calculate the proportion of Māori and non-Māori persons aged 25-29 years who had achieved a highest qualification of 'bachelor degree or above' in each of the census years. --- Labour force domain Our measure was a census-derived unemployment rate for each country. The census labour force variables were consistent for all three countries, with classifications of 'employed', 'unemployed' and 'not in the labour force' provided via custom tables from the statistical agencies of each country [13][14][15]. A person is said to be 'unemployed' if they had no job in the past week but were actively looking for work. A person is regarded as being 'in the labour force' if they are currently employed or actively looking for work. Persons in neither category are regarded as being 'not in the labour force' and are not included in unemployment calculations. Unemployment rates were produced for the Indigenous and non-Indigenous populations for each country using the following calculation: Unemployment rate = (unemployed persons / persons in the labour force) × 100. Additionally, there had been little change in the categories of labour force status at the broad level across the six censuses for any of the countries, making this variable suitable for analysis across multiple time points. --- Income domain Our measure was median Indigenous personal income as a proportion of median non-Indigenous personal income in each of the census years for each country. The information on annual personal median incomes for persons aged 25-29 years for each census year for Australia, Canada, and New Zealand was sourced from the statistical agency of each country [13][14][15]. --- Results For the indicator 'the proportion of those with a bachelor degree or higher qualification' the gaps in all countries were wide, and in fact grew wider over the period (Figure 1). For example, in Australia for those aged 25 to 29 years the gap rose from 8 to 25 percentage points between 1981 and 2006. Australia clearly fared the worst of the three countries in terms of the increase in the gap for this indicator, but even the best performer, Canada, showed a gap of 17.6 percentage points by 2006. This is not to say that educational outcomes for Indigenous people have worsened. The data for all three countries clearly indicate absolute gains in the proportion of Indigenous people with bachelor degree or higher qualifications (Table 1). However, in relative terms Indigenous people were increasingly behind the non-Indigenous populations on this measure. While Indigenous people had consistently higher unemployment, there was fluctuation in the unemployment rate gap over the period 1981 to 2006 for all three countries (Figure 2). By 2006, both Australia and Canada showed a narrower gap than that observed in 1981, while the gap for New Zealand had widened slightly. However, Australia maintained the widest unemployment rate gap of the three countries over the entire period, despite the gap reducing from 16.9 to 11.0 percentage points. Canada finished the period with the narrowest gap (6.6 percentage points). 
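The three measures defined above reduce to simple arithmetic on census aggregates. The following pandas sketch uses invented numbers, not the actual census tabulations, to show how the percentage-point gaps and the income ratio reported in the Results can be computed for one country and census year.

import pandas as pd

# Hypothetical aggregates for persons aged 25-29, by Indigenous status.
data = pd.DataFrame(
    {
        "bachelor_or_above": [900, 52000],
        "population_25_29": [18000, 480000],
        "unemployed": [2200, 19000],
        "in_labour_force": [11000, 400000],
        "median_income": [21000, 38000],
    },
    index=["Indigenous", "non-Indigenous"],
)

# Education: percentage-point gap in the share with a bachelor degree or above.
share = 100 * data["bachelor_or_above"] / data["population_25_29"]
education_gap = share["non-Indigenous"] - share["Indigenous"]

# Labour force: unemployment rate = unemployed / persons in the labour force x 100.
unemp_rate = 100 * data["unemployed"] / data["in_labour_force"]
unemployment_gap = unemp_rate["Indigenous"] - unemp_rate["non-Indigenous"]

# Income: Indigenous median income as a proportion of the non-Indigenous median (parity = 100%).
income_ratio = 100 * data.loc["Indigenous", "median_income"] / data.loc["non-Indigenous", "median_income"]

print(round(education_gap, 1), round(unemployment_gap, 1), round(income_ratio, 1))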
Median Indigenous income as a proportion of non-Indigenous median income (whereby parity = 100%) ranged from 77.2% (New Zealand) to 45.2% (Australia) in 1981, and improved slightly over the period to range from 80.9% (Canada) to 54.4% (Australia) in 2006. Overall, the gap remained steady for Australia, while for Canada and New Zealand there was some fluctuation over the period (Figure 3). Again, Australia fared the worst, with Indigenous median annual income barely reaching above half that of non-Indigenous people across the reference period, while Canada and New Zealand had made some improvements by 2006. --- Discussion Wealthy developed nations with a colonial past, such as Australia, Canada, and New Zealand, have typically underresourced the human development of their Indigenous populations for much of their post-colonial histories. Impact has been felt across most aspects of Indigenous life, including health, education, participation in the economy, legal rights to traditional lands and resources, cultural security, and wider issues of social inclusion. Though government-mandated reparations have been in place since at least the 1970s, long-standing inequality has left the Indigenous peoples of these countries behind their non-Indigenous counterparts on indicators of health, wealth, social justice, and general wellbeing [2]. This research comparing social determinants of health for Australia, Canada, and New Zealand suggests that such inequalities have persisted, in some cases barely improving across 25 years, with Australia the worst performer overall, despite concerted efforts by governments to close gaps in outcomes for Indigenous people in recent decades. (A note on the Australian unemployment figures: many Indigenous Australians have participated in the Community Development Employment Projects (CDEP) scheme, and doing so has meant being recorded as "employed" in official labour force statistics, reducing Indigenous unemployment and potentially distorting the true gap [16].) These countries are now challenged with finding new approaches to solving this social inequality issue, if health and socio-economic conditions for Indigenous people are to even approach parity with non-Indigenous persons within a generation. The social determinants of health observed in this study covered educational attainment, labour force activity and income. We specifically examined the gaps between Indigenous and non-Indigenous people using the proportion with a bachelor's degree or higher, unemployment rates, and median annual income. There are other indicators of "wellbeing" upon which these populations could be compared. However, people's connection to the labour force, higher formal educational attainment and income are critical aspects of participation and inclusion in these societies, and key social determinants of health. In the terms of Nobel laureate and HDI author Amartya Sen, being engaged in work and having sufficient income represent "functionings" that help one to make meaningful life choices in order to realize "capabilities" [17]. In the context of advanced economies, these capabilities have direct implications for wellbeing. While a persistent gap exists between Indigenous and non-Indigenous outcomes for these indicators, we hypothesized that this gap should have narrowed over time. Our results show that in absolute terms there was some improvement on all three indicators for all three countries, but no consistent narrowing of the relative gaps for any country (Figures 1, 2 and 3). 
As Table 1 shows, reductions in the indicator gaps for some time periods are due to fluctuation in the measures for non-Indigenous people, as opposed to improvements for Indigenous people. The increasing gap in educational attainment is largely due to rapid increases in the proportion of university qualified young people in the non-Indigenous populations of all three countries (Table 1). This expansion in higher education is closely linked to compositional shifts in developed economies away from manufacturing and into knowledge-based service industries, and each of these countries has experienced periods of macro-economic restructuring towards a more knowledge based economy [18]. As relatively fewer Indigenous people complete university education, they are largely excluded from this sector of the economy. With education becoming an increasingly critical component to accessing the employment and income benefits of advanced modern economies, the effects of these compositional changes may have offset any gains from social policy investments in closing socio-economic gaps. Reducing these gaps means addressing a complex set of issues. Increasing educational attainment requires appropriately resourced education support beginning in early childhood, sustained throughout regular schooling and into vocational and higher education settings. These programs should support Indigenous peoples' aspirations, including the maintenance of cultural integrity [19]. Factors beyond the school gate that support Indigenous engagement with the education process are also critical. We know that pathways to disadvantage in education begin in the early years, with high proportions of Indigenous children already behind their non-Indigenous peers in academic performance from their first year in school-a deficit that continues throughout primary and high school [20]. Higher rates of school absenteeism and lower levels of parental education may contribute to the widening disparity in academic performance over time for Indigenous children, and the resources and role models for scholastic learning that often exist in non-Indigenous homes may be largely absent in many Indigenous households [21]. Given these trends, closing the higher education gap between Indigenous and non-Indigenous young people will require a major change in policy approach, and patience. It must be recognized that changes made today to improve young people's readiness for school will take years to result in higher rates of university completion. This suggests that flow-on effects of higher employment and incomes may be even further away. Any suggestion that gaps in socioeconomic outcomes can be eliminated in the near future seems unrealistic. Without significant increases in the proportion of young Indigenous people completing higher education, these gaps will remain indefinitely. In developed economies, population-wide improvements in income are mostly related to improvements in educational achievement and opportunities for employment. Our study suggests both Canada and New Zealand are starting to improve income disparity issues for their Indigenous people, though each is still some way from achieving parity. The situation for Indigenous Australians is far less encouraging. The health and wellbeing of Indigenous populations in these countries has been a key aspect of national public policy for some time. 
In addition to important legal changes regarding the recognition of traditional rights, governments have engaged in various efforts to improve conditions for Indigenous peoples, including education, health and employment programmes, and policy changes. For example, most recently, the government of Australia has made "closing the gap" in human development outcomes between Indigenous and non-Indigenous people an explicit goal of national policy [22], and the Government of Canada and Assembly of First Nations' Joint Action Plan has a focus on increasing access to education and employment opportunity [23], while New Zealand has used a "closing the gaps" theme for policies aimed at social justice issues for Māori [24]. Adding to this already complex policy environment is the observation that these countries have seen some growth in Indigenous populations across the reference period, in addition to that from births, due to changing patterns of self-identification in their censuses [25][26][27]. --- Limitations There are several limitations to the methodology employed in this study. It is known that across time there has been a change in the propensity of people to identify as Indigenous in all three countries [25][26][27]. This means, for example, that the composition of the Indigenous population of 1981 is likely different to that of 2006 for all age groups, which may have influenced some of the results seen in this study. Another issue is that the scope of national census questions may be too limited to explain some of the differences in outcomes between Indigenous and non-Indigenous persons. For example, there may be sound cultural reasons for why an Indigenous person does not seek to participate in certain educational or employment spheres, but we can't measure that with census data. Lastly, as census data are only gathered once every five years we are unable to track economic and social change as closely as something like a longitudinal survey with annual follow-up. --- Conclusions Australia, Canada, and New Zealand represent nations with some of the highest levels of human development in the world, yet our research shows that their Indigenous populations were almost as disadvantaged in 2006 as they were in 1981, relative to their non-Indigenous populations, on three key social determinants of health. These ongoing disparities represent a major public policy concern, and a growing focus for science and human rights organizations. Given the breadth of scientific inquiry, the public spending and good intentions of successive Australian, Canadian and New Zealand governments regarding Indigenous health and social advancement since 1981, the fact that relative progress on key social determinants of health has been practically static for Indigenous peoples is alarming. Despite absolute improvements on these indicators, continuing disparities suggest that existing approaches to addressing Indigenous inequality are not as effective as they need to be. They also suggest that achieving equity may take several more decades, especially as the young adult populations described here are the ones in which more progress was expected to have occurred across these domains. Surely Indigenous peoples in these nations would be within their rights to expect a narrowing of these gaps to occur over the coming 25 years, along with improvements in health outcomes. Science and policy are yet to provide viable solutions to this enduring social equity issue. 
If "closing the gap" in health and socio-economic disparity between Indigenous and non-Indigenous people remains a goal, it would seem that completely new approaches are required to achieve success, otherwise Indigenous persons in these developed nations are being consigned to a future of entrenched inequality for generations to come. --- Competing interests The authors declare that they have no competing interests. Authors' contributions FM and MC had the original idea for the study, developed the analytic concept, and acquired the data. DP and EM compiled the data and performed the analysis. FM, MC and DP wrote the first draft. DL, EG, and SRZ contributed to all subsequent drafts and revisions. All authors read and approved the final manuscript.
Background: Australia, Canada, and New Zealand are all developed nations that are home to Indigenous populations which have historically faced poorer outcomes than their non-Indigenous counterparts on a range of health, social, and economic measures. The past several decades have seen major efforts made to close gaps in health and social determinants of health for Indigenous persons. We ask whether relative progress toward these goals has been achieved. Methods: We used census data for each country to compare outcomes for the cohort aged 25-29 years at each census year 1981-2006 in the domains of education, employment, and income. Results: The percentage-point gaps between Indigenous and non-Indigenous persons holding a bachelor degree or higher qualification ranged from 6.6% (New Zealand) to 10.9% (Canada) in 1981, and grew wider over the period to range from 19.5% (New Zealand) to 25.2% (Australia) in 2006. The unemployment rate gap ranged from 5.4% (Canada) to 16.9% (Australia) in 1981, and fluctuated over the period to range from 6.6% (Canada) to 11.0% (Australia) in 2006. Median Indigenous income as a proportion of non-Indigenous median income (whereby parity = 100%) ranged from 77.2% (New Zealand) to 45.2% (Australia) in 1981, and improved slightly over the period to range from 80.9% (Canada) to 54.4% (Australia) in 2006. Conclusions: Australia, Canada, and New Zealand represent nations with some of the highest levels of human development in the world. Relative to their non-Indigenous populations, their Indigenous populations were almost as disadvantaged in 2006 as they were in 1981 in the employment and income domains, and more disadvantaged in the education domain. New approaches for closing gaps in social determinants of health are required if progress on achieving equity is to improve.
Introduction Boundaries: fires don't understand them. We can't draw a line and say we did our part up to this point, and now we are good...It's just a bigger picture. This forest landowner from eastern Oregon recognizes that fire occurs on a landscape scale. Although he believes people need to manage fire risk beyond their property lines, he has not cooperated with any of his neighbors to address hazardous fuel conditions locally. ''We communicated with them...but they have their own balance of what they want to do,'' he explained, referring to gulfs in values and priorities for forest conditions and management. This landowner thins thickets of trees but leaves brush for deer forage. He is concerned that one of his neighbors eliminates too much habitat in his efforts to reduce fuel, while another does nothing. The importance of managing natural processes and biodiversity at the landscape scale to promote the health and productivity of forest ecosystems is widely recognized (e.g., Lindenmayer and Franklin 2002). Doing so, however-especially when it entails managing across ownership boundaries-remains challenging. Different land ownerships, public and private, are managed for different goals using different actions, with differing ecological effects (Landres and others 1998). In the case of fire, hazardous fuel reduction on one ownership can reduce the risk of fire on neighboring lands. Similarly, suppression activities on one ownership can cause fire to be excluded from another ownership, causing fuel buildups that can lead to uncharacteristically severe fires having dire social, economic, and ecological consequences. Where management activities have ecological, economic, or social consequences beyond ownership boundaries, and the efficacy of one landowner's actions can be limited or improved by those of nearby landowners, cooperation can be an important strategy for achieving landscape-scale management goals (Yaffee and Wondolleck 2000). Cooperation is also an alternative to regulation for the management of common pool resources such as forests; local residents who develop voluntary, selfregulating management institutions may have greater expertise and incentive for managing these resources effectively than regulatory agencies (Ostrom 1990). Yet the decision to cooperate with others hinges on a balance between altruism and self-interest, and in this case, on whether landowners are willing to accept the immediate burden of cooperating with others in exchange for the longer term, but less certain, benefit of buffering their properties against fire. In this paper we explore the relationship between nonindustrial private forest (NIPF) owners' perceptions of fire risk, including risk associated with conditions on nearby forestlands (landscape-scale risk), and their decisions to treat hazardous fuel in cooperation with others. Our study area is the ponderosa pine (Pinus ponderosa) ecotype on the east side of Oregon's Cascade Mountains, where a history of fire suppression, grazing, and timber harvest has led to a buildup of hazardous fuel and thus, fire risk (Hessburg and others 2005). Although this area is dominated by federal lands, NIPF owners own 1/6th of the forestland in the area. Much of their land borders or is near federal land, creating a mixed-ownership landscape in which their management practices affect the connectivity of fuel, and potential movement of fire, between federal wildlands and populated areas (Ager and others 2012). 
Given that fire does not observe ownership boundaries, and that fuel conditions on one ownership can affect fire risk on neighboring ownerships, we hypothesized that owners who perceive a risk of wildfire to their properties, and perceive that conditions on nearby forestlands contribute to this risk, are more likely to cooperate with others to reduce fire risk across ownership boundaries. We expected owners to be motivated by the rationale that cooperation would enable them to accomplish fuel reduction activities more efficiently together than alone. Yet we also expected that social beliefs and norms about cooperation and private property ownership would influence owners' decisions to treat fuel through cooperation with others. We investigated the relationship between risk perception and cooperation through statistical analysis of mail survey data. We used qualitative interview data to examine how NIPF owners perceive fire risk on their own properties and on the wider landscape, and communicate and cooperate with other private and public owners to address fire risk. Interview data also allowed us to explore the influence of individual beliefs, social norms, and institutions on cooperative fuel treatments, and to identify potential models of cooperation. After presenting our results, we discuss barriers to cross-boundary cooperation in hazardous fuel reduction and ways to potentially overcome them. The ecological and socioeconomic conditions prevalent in our study area are common throughout the arid West. Thus, this case from eastern Oregon may shed light on opportunities for managing fire-prone forests using an ''all lands approach'' elsewhere in the West. --- Literature Review --- Risk Perception Risk perception, defined as the ''subjective probability of experiencing a damaging environmental extreme'' (Mileti 1994), is considered an important antecedent to mitigation and adaptation behavior according to the natural hazards literature (Paton 2003). In the case of wildfire and other natural hazards, risk perception has been identified as a key variable influencing mitigation behaviors such as taking action to reduce hazardous conditions, preparing for a hazardous event, or moving to a less hazardous area (Dessai and others 2004;Grothmann and Patt 2005;Amacher and others 2005;Niemeyer and others 2005;Jarrett and others 2009;McCaffrey 2004;Fischer 2011;Winter and Fried 2000). People form perceptions of risk through interaction with friends, peers, professionals, and the media on the basis of norms, world views, and ideologies (Douglas and Wildavsky 1982;Berger and Luckmann 1967;Tierney 1999). The process of coming to agreement on the causes and consequences of risk, and acceptable levels of uncertainty and exposure, is influenced by the level of legitimacy and trust between people and institutions (Slovic 1999). Cognitive biases (e.g., discounting future events, giving disproportionate weight to vivid or rare events, and denying risk associated with uncontrollable events) also play a role in risk perception (Maddux and Rogers 1983;Slovic 1987;Sims and Baumann 1983), as can people's past experience and objective knowledge (Hertwig and others 2004). However, risk perception alone does not always compel mitigation behavior. Other important variables include believing one is capable of acting to effectively mitigate risk, holding oneself responsible for one's welfare, and feeling sentimental attachment to a vulnerable community or place (Paton 2003). 
Moreover, decisions to mitigate risk occur under complex socioeconomic conditions that both shape people's vulnerability to risk (Slovic 1999), and determine their efficacy at addressing risk (Slovic 1987;Maddux and Rogers 1983;Tierney 1999). --- Cooperation Cooperation refers to a spectrum of behaviors that range from communicating with others about shared interests to engaging in activities that help others, including sharing resources and work (Yaffee 1998). The theory of cooperation is based on the benefits of reciprocity to participating parties when combined efforts can achieve more than individual efforts. Disciplines ranging from evolutionary biology to political science have examined cooperation as a response to adverse and unpredictable environments, and as a strategy for hedging against and coping with environmental risk (Andras and others 2003;Ostrom 1990;Cohen and others 2001;Axelrod and Hamilton 1981). Social conditions that foster cooperation among individuals include the presence of common goals and motivations, a perception of common problems (including risks), the use of similar communication styles, high levels of trust, and expectations and opportunities for frequent exchanges of information and ideas (Yaffee 1998;Bodin and others 2006;Ostrom 1990). Policy environments, land tenure arrangements, and power relations must also be conducive to cooperation (Ostrom 1990;Bergmann and Bliss 2004). Three important antecedents to cooperation, including cross-boundary cooperation among private landowners, are shared cognition, shared identity and legitimacy (Rickenbach and Reed 2002;Gass and others 2009). Shared cognition refers to sharing a similar perspective or having consensus on a problem or task (Bouas and Komorita 1996;Swaab and others 2007). Shared identity means sharing membership in a community or social group (Tyler 2002;Tyler and Degoey 1995;Swaab and others 2007). Legitimacy is when people or organizations are viewed as fair and capable and are empowered by others (Tyler 2006). Social exchange theory provides a framework for understanding when cross-boundary cooperation by NIPF owners might occur. Social exchanges are interdependent interactions among people that generate mutual benefits and obligations. One type, ''reciprocal exchanges'', consists of interactions that lack terms or assurance of reciprocation (Blau 1964). Reciprocal exchanges are an informal form of cooperation that functions on the basis of reciprocity rules (an action by one party leads to an action by another party), beliefs (that people who are helpful now will receive help in the future), and norms of behavior (that people should reciprocate based on social expectations) (Molm 1994;Cropanzano and Mitchell 2005). Reciprocal exchanges entail risk and uncertainty because they occur in the absence of a contract. When they are successful, they yield trust and commitment, which in turn lead to stronger relationships (Blau 1964). When they are unsuccessful, cooperation breaks down. In contrast, ''negotiated exchanges'' are social exchanges that have known terms and binding agreements to provide some assurance against exploitation (Coleman 1990). Negotiated exchanges do not entail as much risk or require as much trust as reciprocal exchanges (Molm and others 2000). The risks associated with cooperation increase when ''mismatches'' occur between the nature of the relationship among the cooperators and the nature of the transaction between them (Cropanzano and Mitchell 2005). 
For example, when two landowners who have an interpersonal relationship (one that depends on obligations, trust and interpersonal attachment) engage in an economic exchange (an exchange of goods or services), there is a mismatch. In such cases, people who act to the economic benefit of others may feel betrayed if that economic benefit is not reciprocated, and may be reluctant to enter into another such relationship. Thus, neighboring landowners who have an interpersonal relationship and who cooperate in fire risk reduction activities-which are economic because they entail investment of one person's resources in the protection of another's property-have a mismatch, exacerbating the risks associated with cooperation. We return to these observations in our Discussion. --- Methods --- Definitions Our construct of wildfire risk perception among NIPF owners includes concern about a wildfire occurring on one's land, and concern about hazardous fuel conditions on nearby private or public land contributing to the chance of wildfire on one's land, based on Mileti's (1994) definition of risk perception as subjective probability. We also included awareness of the ecological role of wildfire in ponderosa pine forests, and past experiences with wildfire on one's property as elements of our risk perception construct based on Hertwig and others (2004). For purposes of our analysis, we define cooperation as jointly planning, paying for, or conducting activities that reduce hazardous fuel. We focus on cooperation among NIPF owners, and between NIPF owners and public agencies. --- Data Collection In September 2008, Oregon State University and Oregon Department of Forestry funded and administered a mail survey to owners of a random sample of NIPF parcels in eastern Oregon's ponderosa pine ecosystem. The goal of the survey was to learn more about NIPF owners' wildfire management practices, constraints on fire management, and how public agencies could design better assistance programs. The survey sample was selected by casting random points across a GIS polygon created using layers of pixels that represent historical and potential ponderosa pine forests (Grossmann and others 2008;Ohmann and Gregory 2002;Youngblood and others 2004) and an ownership layer (Fig. 1). The NIPF polygon comprised approximately 1.2 million hectares, about 50 % of all NIPF land and 15 % of all forestland east of the Cascade Range in Oregon, which is consistent with other estimates of the proportion of land in NIPF ownership in eastern Oregon (Oregon Department of Forestry 2006). The point layer was joined with a state tax lot layer obtained from the Oregon Department of Revenue to create a list of owner names, addresses and tax lot numbers. The survey asked about owners' past (2003-2008) and intended future (2008-2013) hazardous fuel reduction activities, including cooperation with public agencies, nonprofit organizations, private consultants or other private landowners. Survey questions also addressed owners' goals, experiences with wildland fire, concern about fire risk in general, concern about specific hazards and potential losses, and demographic characteristics. Respondents were asked to reference the parcel associated with the tax lot number on their survey. The survey was reviewed by 20 natural resource professionals, landowners, and social scientists and approved by the Oregon State University Institutional Review Board prior to implementation.
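The spatial sampling step described above (casting random points across the ponderosa pine NIPF polygon and joining them to a tax lot layer) can be illustrated with a minimal sketch. This is not the authors' code: the file names, the column contents, and the choice to draw exactly 1,244 points are placeholder assumptions for illustration only.

```python
# Illustrative sketch: draw random sample points inside a forest-ownership polygon
# and attach tax-lot attributes by spatial join. File names are hypothetical.
import numpy as np
import geopandas as gpd
from shapely.geometry import Point

def random_points_in_polygon(polygon, n, seed=42):
    """Rejection-sample n random points that fall inside `polygon`."""
    rng = np.random.default_rng(seed)
    minx, miny, maxx, maxy = polygon.bounds
    points = []
    while len(points) < n:
        candidate = Point(rng.uniform(minx, maxx), rng.uniform(miny, maxy))
        if polygon.contains(candidate):
            points.append(candidate)
    return points

nipf = gpd.read_file("nipf_ponderosa.shp")      # hypothetical NIPF ponderosa pine layer
tax_lots = gpd.read_file("tax_lots.shp")        # hypothetical state tax lot layer
study_polygon = nipf.unary_union                # merge polygons into one geometry

sample = gpd.GeoDataFrame(
    geometry=random_points_in_polygon(study_polygon, n=1244),
    crs=nipf.crs,
)

# Join each sampled point to the tax lot that contains it; the tax lot layer would
# carry the owner names, addresses, and tax lot numbers used for the mailing list.
mailing_list = gpd.sjoin(sample, tax_lots, how="inner", predicate="within")
print(mailing_list.head())
```

In practice the joined tax lot records would still need de-duplication by owner and screening for nonindustrial private ownership before producing a usable mailing list.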
The survey was administered to 1,244 owners using the total design method (Dillman 1978): an announcement card, followed five days later by the survey; a second survey to non-respondents two weeks after the first; and at week four, a thank you card that also served as a final reminder to non-respondents. Of the 1,244 surveys mailed, the completed returns form the sample analyzed here. The survey respondents consisted mostly of retirement-age males, similar to NIPF owners in the American West (Butler and Leatherberry 2004), but more had obtained bachelor's degrees, earned above the national median household income ($50 K), and were absentee (Butler and Leatherberry 2004). Also, a high proportion had treated their parcel to reduce the risk of wildfire compared to owners in the West generally (Brett Butler, unpublished National Woodland Owner Survey data 2006). They also owned relatively large holdings compared to other owners in the West (Butler and Leatherberry 2004). These disparities reflect the sampling approach (based on forestland, not forest owners), and the social and biophysical conditions in eastern Oregon where land use rules set large minimum tax lot sizes, and arid climate limits productivity, favoring forestry and grazing over large areas. These and other characteristics of the sample are presented in Table 1. We conducted semi-structured key informant interviews in 2007 and 2008 with a purposive sample of 60 NIPF owners owning forestland in three watersheds in the study area that are considered high priority for hazardous fuel reduction (Oregon Department of Forestry 2006): the Sprague, Upper Deschutes, and Upper Grande Ronde (Fig. 1). We identified owners having diverse fire experiences, management intensities, and ownership characteristics with help from local natural resource agencies and organizations. Each interview included a walking tour of the owner's property and averaged two hours. Questions addressed their management approaches, experiences and concerns with fire, ecological knowledge and values about fire and forest conditions, and perceptions of opportunities and constraints for hazardous fuel reduction. Most interview informants had treated some portion of their parcel to reduce the risk of wildfire. Digital recordings of the interviews were transcribed verbatim and entered into Atlas.ti, a software program that aids qualitative data analysis. The interview sample was similar to the survey sample in terms of demographic characteristics. --- Data Analysis To analyze the mail survey data we used frequencies to describe respondents' perceptions of fire risk and their cooperation behaviors, and logistic regression to identify the relationship between risk perception and cooperation on fuel reduction. We began the logistic regression analysis with a manual backward stepwise regression of the cooperation variables on the risk perception variables and a set of demographic control variables, and then built final models with the variables that were relevant to the hypothesis. Table 2 contains descriptions of the cooperation response variables and risk perception explanatory variables. To analyze the interview transcripts we followed a standard protocol of qualitative analysis (Patton 2002). We identified and coded quotations in the transcripts that provided evidence for how interview informants perceive fire risk, including the probability of fire, the hazardous conditions that contributed to the probability of fire, and what values they were concerned about losing in the case of fire.
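The manual backward stepwise logistic regression described in the Data Analysis paragraph above can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the data file, the variable names, and the 0.10 retention threshold are assumptions made for the example.

```python
# Minimal sketch of a manual backward stepwise logistic regression: a binary
# cooperation outcome regressed on risk perception and demographic variables,
# dropping the least significant predictor until all remaining p-values fall
# below a chosen threshold. The DataFrame and column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm

survey = pd.read_csv("survey_responses.csv")   # hypothetical cleaned survey file

response = "cooperated_with_agency"            # 1 = cooperated in the past, 0 = did not
predictors = [
    "concern_fire_on_parcel",                  # risk perception variables
    "concern_nearby_public_land",
    "concern_nearby_private_land",
    "aware_fire_ecology",
    "past_fire_on_parcel",
    "lives_on_parcel", "age", "parcel_size",   # demographic controls
]

def backward_stepwise_logit(df, y_col, x_cols, threshold=0.10):
    """Drop the predictor with the largest p-value until all are below threshold."""
    remaining = list(x_cols)
    while remaining:
        X = sm.add_constant(df[remaining])
        result = sm.Logit(df[y_col], X).fit(disp=0)
        pvals = result.pvalues.drop("const")
        worst = pvals.idxmax()
        if pvals[worst] <= threshold:
            return result, remaining
        remaining.remove(worst)
    return None, []

final_model, kept = backward_stepwise_logit(survey, response, predictors)
if final_model is not None:
    # Odds ratios are exponentiated coefficients.
    report = pd.DataFrame({
        "odds_ratio": np.exp(final_model.params.drop("const")),
        "p_value": final_model.pvalues.drop("const"),
    })
    print("Retained predictors:", kept)
    print(report.round(3))
```

The odds ratios printed here are exponentiated coefficients, which is how odds ratios of the kind reported later in Table 6 are conventionally derived from a fitted logistic model.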
We also coded quotations that provided evidence for how owners view the barriers and opportunities of cooperation. We linked these quotations with additional codes and wrote memos about how wildfire risk perceptions motivated owners to cooperate with others. --- Results --- Risk Perception and Hazardous Fuel Management We are always concerned about fire. Our fear every summer is where is the lightning strike going to be and are we going to be able to survive the fire? That is one of the reasons we created fire breaks throughout the property, and because our neighbors didn't have any. Comments like this one indicate that some landowners interviewed were aware of fire risk beyond their property boundaries, and responded by treating fuel. Survey responses corroborated this finding. 67 % of the survey respondents said they were concerned about a fire affecting their property. A majority (53 %) were concerned about conditions on nearby public lands contributing to the risk of wildfire on their property. Interview informants articulated similar concerns, although few were aware of which land management agency controlled nearby public lands. ''You want to see risk? There's risk,'' responded one interviewee when asked for an example of hazardous forest conditions. Like many owners we interviewed, he pointed to land on the other side of his fence line, in this case national forest land in the Sprague River Watershed. ''Here you can see where it is thinned and then it gets really thick; that is a piece of government ground. That is the difference between my place and the government ground; theirs is jungle.'' Figure 2 shows forest conditions we often encountered across property lines owners shared with federal land management agencies. Some owners were also concerned about fuel conditions on neighboring private lands, as evidenced in this comment by another interviewee from the Sprague River Watershed: ''That is an inferno waiting to happen...He's endangering my property, my structures, and also my forest''. However, owners were less concerned about conditions on nearby private lands than on nearby public lands. Only 37 % of survey respondents were concerned about fire risk from nearby private lands. Some interview informants believed that most private owners managed their forests enough (i.e., thinned and harvested) that little fuel was left to be of consequence. ''They are logging the living daylights out of that,'' exclaimed one interviewee, referring to the surrounding industrial ownership. ''It's going to be fine for a lot of years.'' Other interviewees were simply more forgiving about the risk associated with private lands than with public lands. One owner guessed that her neighbors ''are doing fine...doing it about the same way we are: thinning, logging it every few years...The cattle are keeping the brush down.'' 70 % of the survey respondents had treated portions of their parcels to reduce the risk of fire between 2003 and 2008. They used a range of forest management practices that can reduce fuel, presented in Table 3. The median treatment area was 20 acres (interquartile range = 1-120 acres). Many interviewees said that they treated their properties to compensate for the lack of hazardous fuel management by their neighbors. As one owner in the Sprague River Watershed explained, If we have a higher risk because of heavy fuel buildup on adjacent land...we look at our management philosophy a little bit differently. 
We would do more in our cutting, more than we like...to keep a crown fire from spreading. Indeed, in a different analysis of the survey findings we found that owners' concern about fire risk, and concern about conditions on nearby public land contributing to this risk explained their likelihood of treating fuel (Fischer 2011). --- Risk Perception and Cooperation Most owners worked either on their own or with family members, or with private contractors to conduct forest management activities. However, many had also worked in cooperation with others. Between 2003 and 2008, 34 % of the survey respondents cooperated with public agencies, 18 % cooperated with other private owners, and 15 % cooperated with nonprofit organizations to plan, pay for, and/or conduct practices that can reduce fuel (Table 4). Interview informants provided examples of cooperative fuel treatment, particularly with public land neighbors: participating in fire management planning with the Forest Service and the Bureau of Land Management for lands adjacent to their properties; communicating with agencies about the need to reduce fuel along shared property boundaries; coordinating forest thinning and brush-clearing with treatments on adjacent public lands to widen fuel breaks; and synchronizing prescribed burns with those on adjacent public lands to take advantage of agency fire fighters and equipment. Interview informants cited fewer examples of cooperation with private landowners. These included allowing neighbors to graze livestock on their properties to reduce grass and brush, and planning treatments along shared property boundaries to create wider, shared fuel breaks. More often they observed the use of new techniques or equipment on each other's parcels. A number of owners said they had referred interested neighbors to their consulting foresters or operators to request treatments similar to the ones performed on their properties. Thus, some portion of the 41 % of survey respondents who had worked with private contractors may have been influenced by, or influenced, other private owners, an indirect form of cooperation. Owners expressed a greater willingness to cooperate with other landowners in the future to reduce fire risk than they had in the past. Most survey respondents said they would cooperate with both public owners (68 %) and private owners (75 %) to reduce fuel in the future, especially if it would release them from liability for fires resulting from escaped controlled burns, reduce their share of the cost of treatments, or make more public funding available to them for treatments (Table 5). According to the logistic regression tests, perceived risk explained cooperation between NIPF owners and public agencies, but not cooperation between NIPF owners and other private owners. Concern about a fire occurring on one's parcel, and concern about conditions on nearby public land contributing to this risk were both associated (P ≤ .08) with whether owners reported having cooperated with public agencies in the past on forest management actions that can reduce fuel. Whether owners were aware of the historical role of fire in ponderosa pine ecosystems, and whether owners had experienced a fire on their land were also associated (P ≤ .05) with whether owners reported cooperating with public agencies in the past to reduce fire risk.
Owners' willingness to cooperate with public agencies in the future to reduce fire risk was also explained by the risk perception variables; specifically, whether owners were concerned about a fire occurring on their parcel (P ≤ .05), were concerned about conditions on nearby public lands and private lands (both at P ≤ .05), and were aware of the local fire ecology (P ≤ .05). None of the risk perception variables were associated with whether owners had cooperated with other private owners in the past. Only awareness of the local fire ecology was associated with their willingness to cooperate with other private owners in the future (P ≤ .01). P values and odds ratios for the risk perception variables are presented in Table 6. In addition, two demographic control variables were significant in preliminary manual backwards stepwise regression tests: living on one's parcel and age were associated (P ≤ .05) with whether owners had cooperated in the past and were willing to cooperate in the future with both public agencies and other private owners, whereas parcel size, ownership size, tenure length, income, education and gender were not. Our logistic regression test partially confirmed our hypothesis (owners who perceive a risk of wildfire to their properties, and perceive that conditions on nearby forestlands contribute to this risk, are more likely to cooperate with others to reduce fire risk across ownership boundaries). All of the variables included in our risk perception --- Barriers to Cooperation Although many of the owners interviewed acknowledged the potential benefits of cooperation in fuel reduction-particularly for achieving economies of scale in their efforts-they identified numerous reasons for not cooperating. Barriers related to patterns of rural social organization were most commonly cited. ''People in the timber sector are in an isolated spot,'' explained an owner of 2,500 acres in the Sprague River Watershed, referring to the sparsely populated and mountainous landscape of Oregon's east side, which impedes interaction. ''[They] don't have many neighbors [to cooperate with].'' Furthermore, the markets and other natural resource-based economic activities that once provided a basis for interaction and reciprocity despite this topography are now in decline. An owner of 10 acres who recently moved to Union County in the Upper Grande Ronde Watershed explained: When this place was small family ownerships primarily there was more talk between people and more helping each other out because they were all managing the land. Now people aren't really deriving a significant amount of their income off the land...So they don't tend to talk to each other or help each other out much. As a result of demographic change, many newcomers own forestland primarily for privacy and solitude (Kendra and Hull 2005) or recreation. The isolation such owners seek counters interaction. ''We're like two separate little icebergs...we may touch...but only by necessity...it's why we live out here,'' explained an owner of 200 acres in the Deschutes River Watershed. A high rate of absentee ownership (74 % in our survey sample), often associated with recreational use, is a barrier to developing the social relationships upon which cooperation is predicated. Our regression results indicated that owners who live on their parcels were more likely to have cooperated with their neighbors in forest management than those who did not.
In addition, gulfs in values, beliefs, and motivations regarding the management of fire risk, also attributable to demographic change, were seen as barriers to cooperation. Owners who manage for commodities or habitat tended to view fire as a historically important and persistent ecological force. They believed hazardous fuel needed to be managed to prevent fire from being overly destructive, but did not seek to eliminate fire from the ecosystem. In contrast, owners who hold land primarily for residential reasons tended to view fire as a threat to their homes and scenic views, defining hazardous fuel as anything in the forest that could carry fire. Differing perceptions of fire and fuel led to conflicting approaches to forest management. For example, the owners of a 200-acre parcel in the Deschutes River Watershed selectively treated the most hazardous fuels in order to preserve wildlife and scenic beauty, differentiating themselves from their neighbors who razed all vegetation (apart from large overstory trees) within a 150-yard radius of their future home. We understood their fire concerns, but we were also very concerned about how much they cleared out of the winter forage for the deer...We don't want to see our forests be safe for wildfire but good for nothing else. Conflict was especially apparent around fire treatments (conducting controlled burns, burning slash piles, and allowing naturally ignited fires to burn on one's property). Some interviewees viewed fire as a tool for reducing risk associated with brushy, overstocked stands; others viewed fire as the risk itself. An owner of 10 acres in the Sprague River Watershed who managed primarily for habitat had permission to clear and burn brush on the property of his absentee neighbor. However, another neighbor with less risk tolerance stymied his efforts. ''We had good conditions for burning,'' he explained. ''There were still snow drifts! Then these neighbors noticed what I was doing, got on the phone and threatened legal action. One guy threatened to kill me because they were so scared...And if you drive back there now you will see how much fuel there is; it's scary.'' Conflicting values and goals relating to fire risk also impeded cooperation between NIPF owners and public land management agencies. An owner of 2,500 acres in the Sprague River Watershed was disappointed about a prescribed burn he had jointly conducted with the Forest Service, and attributed the problem to differing scales of risk tolerance. He believed the Forest Service was comfortable losing more trees in the burn than he was: They were comfortable with a hotter controlled burn...than I was used to...For them this kind of mortality is nothing. They are dealing with thousands of thousands of acres. But when you [have] a limited number of acres, mortality has a different meaning. Social norms about private property ownership and appropriate behavior towards neighbors were also identified by owners as constraints to cooperation, despite concerns about hazardous fuel conditions on neighbors' lands. ''I kind of try to hint to them,'' said one interview informant, when asked why he hadn't encouraged his next door neighbor to address hazardous fuel on his property. 
''But that is about as far as you can go because people are set in their ways.'' The owner of 1,000 acres in the Upper Grande Ronde River Watershed was more direct: ''If you want to have good neighbors you don't mention things like that.'' Social norms about reciprocity, including the age-old challenge to collective action, free-ridership, also worked against cooperation. ''The trouble with our society,'' explained an owner in his 80s who controls hazardous fuel on his property despite being handicapped, ''is that one person can do the work...and other people will take the benefit.'' In other words, if your neighbors reduce fuel on their properties, the risk to your property will be reduced without you having to do anything. Owners were also concerned about potential risks to their autonomy as private property owners associated with participating in formal cooperative groups. For example, an owner of 650 acres in Klamath County recounted, I have seen people-good friends-who aren't speaking to each other today because they are in a big old group...It's no longer: 'Hey, Joe, come on over and help me fix my irrigation and I will come help you fix yours.' It's: 'No I can't come over because you have an inch more water than I do, and I don't want to sue you about it.'-I don't want to get into no organization. Owners were also worried about participating in formal groups that include public agencies because of bureaucratic or regulatory burdens that might be imposed on them, and the discomfort of unequal power relationships. An owner of 200 acres in the Deschutes River Watershed, who had experienced frustration cooperating with federal agencies on fuel reduction and fish passage activities, explained: ''it doesn't feel good when you are feeling the heavy hand of government coming in saying you shall do this!'' Nevertheless, about half of survey respondents declared membership in formal, natural resource-related groups (Table 7). Finally, some owners mentioned laws that counter cooperation. The risk of being legally liable for fires or injuries resulting from negligent conditions or activities on one's property discourages many owners from cooperating on fuel reduction work. ''The problem is the law and the way liability is written,'' explained one owner. ''Nobody wants to be responsible.'' --- Opportunities for Cooperation We asked interviewees to describe cooperative arrangements for fuel reduction that would be amenable to them, based on their observations or experiences, and grouped their responses into three informal and three formal models that we then named. In the informal, ''over the fence'' model, interviewees described landowners observing each other's activities and doing something similar, or encouraging other landowners (often public agencies) to do more. Interviewees also suggested that owners could jointly identify an issue that affects them and address it together (e.g., creating a fuel break). In the informal ''wheel and spoke'' model, contractors and other natural resource professionals help multiple nearby landowners learn indirectly from each other's experiences, leverage financial resources, and access markets and fuel reduction services, without negotiating terms of cooperation among the landowners involved. In the ''local group'' model, interviewees described local change agents creating a forum in which landowners come together to address a common problem (e.g., the accumulation of hazardous fuel on nearby public lands).
This informal process can lead to communication, cooperation, learning, and eventual leadership among members of the group. A number of interviewees claimed that informal models of cooperation are more effective than formal models because they don't impose terms or require reciprocation, which can create adversarial relationships by establishing expectations. Other landowners interviewed believed formal models of cooperation were more efficient and productive than informal models. In the ''agency-led'' model, interviewees described local natural resource management agencies providing education, technical, or financial support to help landowners learn from each other and interact around management activities; or providing public funds so that landowners can implement fuel reduction themselves. In the ''collaborative group'' model, participants commit to a process and a product, are organized by a coordinator, and are guided by policy documents. Few owners had experience with formal ''landowner cooperatives''. However, some proposed this model whereby groups of landowners would pool harvests and develop contracts with processors, working through a common contractor to increase their leverage in marketing biomass and small-diameter logs. --- Discussion Cooperation is predicated on the benefits of reciprocity. People's perceptions of risk can determine how they weigh the benefits and costs of working with others. This study finds that the majority of NIPF owners in Oregon east of the Cascade Mountains are concerned about fire risk to their properties, and beyond their property boundaries at a broad scale. Those who have cooperated with others in forest management activities that can reduce hazardous fuel are in the minority, however. Concern over fire risk did not appear sufficient to warrant cooperation with other private landowners in particular. Of course, some owners may lack concern about forest conditions on other private properties; a smaller proportion of owners were concerned about hazardous fuel conditions on nearby private lands than on public lands. And some owners felt protected by heavy management on nearby private ownerships, especially industrial holdings. Nevertheless, roughly one-third of owners were concerned about the fire risk associated with other private ownerships, and the majority were willing to cooperate with other private owners in the future to mitigate that risk. That they have not acted on their concern in the past by trying to influence fuel conditions around them through coordinated planning and treatments with neighbors highlights the importance of other forces that work against cooperation. Here we draw on the literature presented earlier in this paper to discuss possible reasons for the disjuncture between NIPF owners' ideals and behaviors regarding cooperation. --- Shared Cognition Shared cognition is an antecedent to cooperation because it reduces the risk of participation. When parties to a collective effort perceive consensus among group members about the nature of the problem being addressed, the goals of the effort, and their commitment to the group, they are less likely to defect (Bouas and Komorita 1996;Swaab and others 2007). Although most NIPF owners surveyed perceived fire risk, it was clear in interviews that they did not hold common perceptions of wildfire, risk, or hazardous fuel. This lack of perceived consensus around the constructs of risk and hazard may hinder joint planning and implementation of fuel reduction activities.
Some owners attributed their reluctance to cooperate to conflicting values and goals regarding forest conditions
and perceptions of fire hazard and risk. However, awareness of fire as an important local ecological process was a predictor of willingness to cooperate with other private and public forest owners, suggesting that owners who share this view are more likely to cooperate. Social exchange theory suggests that without shared beliefs about the probability and nature of fire risk, hazard, and the risk-reducing benefits of cooperation, owners may face difficulty rationalizing efforts to engage in potentially burdensome social relationships (Cropanzano and Mitchell 2005). This observation echoes what scholars of cooperation in the context of natural resources have argued: without a vision of a common problem or a common future, there is little reason to work together (Ostrom 1990;Yaffee 1998). Other studies of private forest owners have reached similar conclusions about the relationship between congruency of perceptions, attitudes and values, and joint planning (Rickenbach and Reed 2002;Jacobson and others 2000;Gass and others 2009). --- Group Membership The constraints to cooperation that NIPF owners described in interviews were predominantly related to social organization: spatial isolation, a dearth of integrating economic activities, and social norms that inhibit communication and reciprocity among neighbors about fuel reduction. Survey findings that three-quarters of owners do not live on their properties provide additional evidence that social organization is a constraint on cooperation. Rural sociologists documented early on how topographical relief and spatial isolation influence social organization, and how resulting social relations affect the development of sociability (Field and Luloff 2002). Rural residents in eastern Oregon are spread out and isolated from each other. Interview informants perceived this isolation as an impediment to sociability, and in turn, cooperation. Owners described the deterioration of rural, natural resource-based economies as a barrier to cooperation. Although formal cooperatives have never been pervasive among NIPF owners in the West (Kittredge 2005), agricultural cooperatives have served the practical need of connecting isolated rural residents with external markets, political processes, and each other (Hobbs 1995). With the decline in timber, cattle and other commodity markets, the basis for interaction and reciprocity among rural landowners in eastern Oregon has become scarce. Moreover, as communities of place are being incorporated into wider market economies and supplanted by social networks that are not geographically based, people may be less inclined to rely on local residents and resources (Brown 1993). Some theories suggest that less bounded contexts discourage cooperation because individuals are less likely to anticipate reciprocity due to remote relationships (Cohen and others 2001). The demographic change associated with this shift in the rural economy may be further alienating landowners. In some areas of Oregon's east side, affluent, retired, and otherwise mobile urbanites have migrated to rural areas for their amenities, bringing new values and expectations for land that can come into conflict with those of locals (Egan and Luloff 2000). The more recent rise of property individualism (Singer 2000) and increasing focus on privacy among forest owners (Butler 2008) also run counter to cooperation. Landowners' fears of losing autonomy or control of their properties have been well-documented (Ellefson 2000;Fischer and Bliss 2009). 
For some, sharing information or inviting people over to discuss forest conditions and management may contradict values for privacy. Even poking one's head over a fence to comment on conditions about which one is concerned is an invasion of privacy, as evidenced in the adage ''good fences make good neighbors.'' Without membership in a common community or social group, landowners lack the structural and cultural basis for developing norms of reciprocity. Without interaction, they lack capacity to communicate and social mechanisms for developing trust among individuals. These are key conditions for cooperation (Ostrom 1990;Yaffee 1998;Tyler and Degoey 1995). Lack of group identity not only reduces interaction among landowners but may also cause the lack of shared cognition about wildfire risk that owners said makes cooperation difficult. --- Legitimacy Although we found that some cooperation among private forest owners and public agencies occurs, many owners we interviewed reported cumbersome bureaucratic processes, corrosive expert-lay person relationships, and a lack of trustworthy leadership in natural resource management efforts that involved public agencies, which discouraged them from cooperating. Other research has shown that NIPF owners' concerns about allowing government representatives onto their property, and agreeing to accept agency assistance lead to struggles over private property rights and undermine cooperation (Fischer and Bliss 2009). These concerns arise from owners' perceptions of the legitimacy of public agencies. If people view an institution as legitimate they develop a voluntary sense of obligation to obey decisions, follow rules, or abide by social arrangements rather than doing so out of fear of punishment or anticipation of reward (Tyler 2006). This feeling of obligation is essential for successful cooperation. --- Risks and Benefits in Social Exchange Survey results indicated that cooperation in fire hazard reduction does not occur frequently among private owners, yet many of the owners we interviewed said they communicated and cooperated frequently with other owners to address other land management problems. This discrepancy provides evidence that cooperation on fuel reduction depends on the benefits of social exchange outweighing the costs. In reciprocal social exchanges, the risk of betrayal is high (Cropanzano and Mitchell 2005). The potential for misunderstanding or failure to meet expectations of reciprocity may explain why owners infrequently cooperated with each other, despite a future willingness to do so. Perhaps some forms of cooperation-such as moving cattle and equipment onto each other's property, and suppressing fires that have ignited-have benefits that outweigh the risk and inconvenience of working together. In contrast, the benefits of cooperation in fuel reduction are less certain given the mismatch in the nature of the transaction. Furthermore, it may be easier for parties to agree about things like relocating cattle and suppressing wildfires (shared cognition), than about fire risk mitigation, which invokes judgments about how well people manage land and protect others from risk. Although there are substantial risks associated with cooperation between NIPF owners and public agencies, these social exchanges are generally negotiated, with both parties agreeing to a set of rules regarding commitments and expectations.
In addition, substantial incentives exist for private-public cooperation, for example, when federal agencies offer cost-share monies, administrative and technical support, and other opportunities. In contrast, few policies or programs encourage or reward cooperation among private owners. These factors may help explain why owners have cooperated more frequently with public agencies than with each other. --- Models for Cooperative Wildfire Risk Management The fact that so many owners expressed a willingness to cooperate with other private and public owners in the future despite limited past experience and recognized constraints, and the fact that about half already belong to organized, natural resource-related groups, suggest the potential for cooperation in landscape-scale forest management. Perceived fire risk alone may not compel owners to cooperate, but other policy and institutional incentives might. Interview informants identified a range of potential formal and informal models for cooperation. The tension between the informal and formal models lies in the need for flexible, low-pressure arrangements as well as coordination and efficiency. Some owners were willing to cooperate on an ad hoc basis; others wanted cooperation to be formally organized so that it would be efficient and ensure a benefit. Owners suggested that among neighbors, informal models may be preferable because they are less likely to make people feel rigid and defensive. Although owners described ''over the fence'', ''wheel and spoke'' and ''local group'' models, we found only a few examples of these models operating in the context of fuel reduction in our study. Despite owners' beliefs about the importance of cooperation, and in light of the apparent lack of cooperation among owners, a less risky approach to cooperation among neighboring landowners may be one in which fuel reduction occurs through formal institutions (Cropanzano and Mitchell 2005). For example, the high cost of removing woody biomass and small-diameter logs, and lack of financial assistance and markets for this material are commonly identified barriers to fuel reduction (Fischer 2011). Formal institutional arrangements that enable owners to jointly apply for cost-share funds, coordinate treatments, and collectively offer biomass to the market could increase the economy of scale of management activities (Goldman and others 2007). Owners also identify liability and free ridership as drawbacks of cooperative fuel reduction. Formal institutions that coordinate management actions and pool risk can offer protection against liability and other risks associated with working with others (Amacher and others 2003). Evidence exists for the emergence of new institutions that may offer an alternative path to addressing fire risk in Oregon and elsewhere in the western United States. Local collaborative institutions can provide an organized process for increasing the efficiency and focus of collaborative efforts without the binding terms that seem to put NIPF owners on edge. For example, Community Wildfire Protection Plans (CWPPs), established under the Healthy Forest Restoration Act, are tools for involving communities in fire risk mitigation on federal and nonfederal lands. They are funded by states but developed and implemented locally.
While CWPP planning and implementation efforts don't always reach beyond wildland-urban interface (WUI) boundaries and engage rural forestland owners, they have brought together many stakeholders and built relationships among community members around the issue of fire risk (Jakes and others 2007). In California, Fire Safe Councils (that implement CWPPs in that state) have been recognized for their ability to promote innovative fire mitigation activities and build social capital in WUI communities (Everett and Fuller 2011). In Oregon, the nonprofit group Sustainable Northwest is working with landowner associations to expand processing facilities and develop merchandising yards for small-diameter wood, and to promote woody biomass heating systems (Sustainable Northwest 2011). Collaborative institutions such as these create the opportunity for frequent and sustained interaction among landowners having diverse motivations and values, a necessary foundation for building shared cognition, norms of reciprocity, and in cases where public agencies are involved, legitimacy (Bodin and others 2006). Other cooperative models that could involve NIPF owners include The Nature Conservancy's Fire Learning Network, and the U.S. Forest Service's Collaborative Forest Landscape Restoration Program (CFLRP). Fire Learning Networks are regional groups that bring together public agencies, tribes, and municipal governments (though not specifically private forest owners) to plan and coordinate fuel reduction and forest restoration activities across ownerships. The CFLRP provides funding to local collaborative groups to plan science-based, economically viable fuel reduction and ecological restoration activities on select national forest lands. Although focused on federal lands, these efforts may be attractive to private forest owners if they help reduce the costs of, or create returns on, treatments on other ownerships, or decrease the legal risks associated with treatments through Memorandums of Understanding and formal partnerships. Future research could explore such models and the opportunities they offer for collective action for landscape-scale ecosystem management across ownership boundaries. --- Conclusion In articulating his vision for America's forests, U.S. Secretary of Agriculture Tom Vilsack has emphasized an ''all lands approach'' to forest restoration that calls for collaboration in undertaking landscape-scale restoration activities. Cooperation across ownership boundaries in fire prone, mixed-ownership forest landscapes is desirable yet challenging. Most of the NIPF landowners interviewed and surveyed for this study were concerned about fire risk on their lands and hazardous fuel conditions on the properties around them (and on public lands in particular), and treated fuel on their properties to reduce this risk. Although NIPF owners indicated a substantial willingness to cooperate with others on fuel reduction activities in the future, their past behavior demonstrated limited cooperation. Perceived risk of fire occurring on one's property, and from nearby public forestlands were predictors of cooperation in fuel reduction with public land management agencies. Risk perception was not associated with cooperation among private landowners. 
The availability of funding and technical assistance from public agencies to help support fuel reduction on private lands, the greater social barriers to private-private cooperation than to private-public cooperation, and perceptions of more hazardous forest conditions on public lands relative to private lands may explain this difference. Interview data suggest that social values and norms about property ownership work against cooperation, especially among NIPF owners, even when they perceive a risk of fire to their properties. Nevertheless, cooperation does occur among private owners in arenas other than fuel reduction, and it may occur indirectly through third parties, such as private contractors. Furthermore, owners say they are willing to cooperate with one another in the future. Thus, given the benefits of cooperation for landscape-scale natural resource management, new institutional models of cooperation to manage landscape-scale fire risk may hold promise. From a policy standpoint, building a common understanding of fire risk among landowners, including fire risk on lands beyond their own property boundaries, may increase the likelihood that landowners will cooperate with others to reduce hazardous fuel. Promoting this awareness among landowners who reside on their properties may be particularly effective given the positive association between residing on one's parcel and cooperation. Nevertheless, in the absence of policies and institutions that improve the balance between the costs of cooperation and the benefits of protecting one's property from fire, cooperative landscape-scale management of natural hazards across ownership boundaries will be limited.
Managing natural processes at the landscape scale to promote forest health is important, especially in the case of wildfire, where the ability of a landowner to protect his or her individual parcel is constrained by conditions on neighboring ownerships. However, management at a landscape scale is also challenging because it requires cooperation on plans and actions that cross ownership boundaries. Cooperation depends on people's beliefs and norms about reciprocity and perceptions of the risks and benefits of interacting with others. Using logistic regression tests on mail survey data and qualitative analysis of interviews with landowners, we examined the relationship between perceived wildfire risk and cooperation in the management of hazardous fuel by nonindustrial private forest (NIPF) owners in fire-prone landscapes of eastern Oregon. We found that NIPF owners who perceived a risk of wildfire to their properties, and perceived that conditions on nearby public forestlands contributed to this risk, were more likely to have cooperated with public agencies in the past to reduce fire risk than owners who did not perceive a risk of wildfire to their properties. Wildfire risk perception was not associated with past cooperation among NIPF owners. The greater social barriers to private-private cooperation than to private-public cooperation, and perceptions of more hazardous conditions on public compared with private forestlands may explain this difference. Owners expressed a strong willingness to cooperate with others in future cross-boundary efforts to reduce fire risk, however. We explore barriers to cooperative forest management across ownerships, and identify models of cooperation that hold potential for future collective action to reduce wildfire risk.
Obesity, Hypertension, Social Determinants of Health, and the Epidemiologic Transition among Traditional Amazonian Populations The Amazon is one of the last ecological frontiers of the planet. In recent decades it has been the focus of intense social, economic and environmental changes which have led to important epidemiologic implications for the local populations (Piperata and Dufour, 2007;Piperata et al., 2011;Melo and Silva, 2015;Silva, 2004a,b, 2011). Studies about nutrition and health of non-indigenous traditional populations of the Brazilian Amazon such as caboclo/ribeirinhos and quilombolas are still limited. Only recently, due to new governmental policies, more attention has been given to social, economic, territorial and health aspects of these groups (Brasil, 2007a,b,c). However, because of logistic difficulties and the high costs involved with investigations of smaller and more geographically isolated populations, research reporting on their health and nutrition situation continues to be a challenge. In this article we present data on adult health, nutritional status and blood pressure for three different rural groups representing an important part of Amazonian social diversity. These groups are considered vulnerable due to their ethnic and socio-ecological conditions (Adams, 2002;Adams et al., 2006;Brasil, 2007a;Freitas et al., 2011;Gomes et al., 2013;Lima and Pereira, 2007;Silva, 2006), and here their situation is analysed from a Social Determinants of Health (SDH) perspective (CSDH, 2005;CNDSS, 2008;Marmot, 2001;Rose, 1985). According to Rose (1985) it is necessary to look at the "causes of causes" of disease, that is, to go beyond the disease of the individual to the reasons why people become sick. When this is done it becomes clear that the primary determinants of diseases, in any population, are social and economic rather than simply biologic. Even though genetic factors may have a strong influence on individual susceptibility, genetics alone has little explanatory power over population differences in incidence of diseases (Rose, 1985). Marmot (2001) argues that many causes of diseases are social and political, and looking only at differences between individuals often misses the point that major differences in the incidence of maladies occur between populations. Considering the impact of the social factors on disease occurrence, in March 2005, the World Health Organization created the Commission on Social Determinants of Health (CSDH), with the objective of making the world aware of the importance of social determinants in the health situation of individuals and populations, and the need to combat inequities in health created by social disparities (CNDSS, 2008). According to the WHO, SDH are "the conditions in which people are born, grow, work, live, and age, and the wider set of forces and systems shaping the conditions of daily life. These forces and systems include economic policies and systems, development agendas, social norms, social policies and political systems" (http://www.who.int/social_determinants/en/). Throughout this paper we will attempt to show how different environments and socioeconomic settings impact the health of traditional Amazonian populations, in order to call attention to the need for the implementation of public policies aimed specifically at these groups.
--- Research Location and Populations Data come from research projects developed between 2008 and 2014 in the Brazilian Amazon basin designed to provide subsidies for debates about the health of rural populations and public policies. Populations with different historic origins and socio-ecological settings are evaluated to compare how their lifestyles and body habitus are influenced by the region's Social Determinants of Health (SDH). Data collection was accomplished in areas that represent a large extent of the environmental diversity found in the Brazilian Amazon. Morán (1993) presents detailed description and analysis of the Amazonian ecosystems, and Dufour et al. (2016), in this issue, provide a general synthesis of the Amazon basin and its main geographic and ecological features; for this reason we will describe only the specific populations and ecosystems of interest to this research. The Mamirauá Sustainable Development Reservation (RDSM) is located in the municipal district of Tefé, Amazonas State (Figure 1). It was the first conservation unit of sustainable use implemented in Brazil (1990), which included the idea of environmental protection and shared administration of natural resources between users and the government (Queiroz, 2005;Moura, 2007). According to Moura (2007): "Mamirauá Sustainable Development Reservation (RDSM) has an area of 1,124,000 hectares, located in the confluence of the Solimões and Japurá rivers, and next to Amanã Sustainable Development Reservation (RDSA), in the Medium Solimões area, Amazonas State. It is recognized by the international conservationist organizations as the largest floodplain protection reservation of the world" (pg. 28). According to J. M. Ayres (1954-2003), biologist and creator of the proposal for the sustainable development reservation, the RDSM was created to reconcile the traditional mode of occupation of the Amazonian floodplain (Várzea) with the environmental conservation practices and possibilities of providing better living conditions to local populations. A recent census counted a total of 492 houses in the RDSM (IDSM, 2013). The population is divided into small communities with sometimes 4-5, and up to 30-40 houses, usually scattered along the margins of the main rivers of the region. As in other riverine areas, the exact number of communities is difficult to specify because they split frequently and new ones are created while the old ones are abandoned for several reasons such as religious differences among residents, family fights, and environmental circumstances such as insect infestations, changes in the floodplain geomorphology, or shifts in river and lake courses (Moura, 2007). The RDSM is located in a region characterised by extended periods of alternation between floods and dryness, and it is an extremely diverse environment in terms of biodiversity. The annual floods bring enormous amounts of sediment from the Andes which create a rich environment responsible for the high biomass productivity of the Amazonian floodplains. The alternation of wet and dry periods defines the geomorphology of the area, the abundance and endemicity of flora and fauna, and even the patterns of human occupation (Queiroz, 2005;Moura et al., 2016). Human and animal activities are driven by the rhythm of the waters and the seasonal variations. The alternation of periods determines access to resources and transit in the Reservation.
During the rainy/flood period fish are more abundant, and travel times between different locations and towards the urban centres are reduced. In the dry period everything is more difficult, from access to clean water and food to movement between the houses and the cities (Moura, 2007). The current occupation of the Mamirauá region began in the 19th century with people migrating from the northeast of Brazil during the Rubber Boom. The migrants integrated with local native populations and became today's caboclos or ribeirinhos (Lima-Ayres, 1992; Queiroz, 2005). The term caboclo has many meanings and connotations (see Lima-Ayres, 1992; Silva, 2001; Rodrigues, 2006). In this paper we adopt the concept presented in Silva and Eckhardt (1994), in which caboclos are tri-hybrid populations with European, African, and Amerindian ancestry living mainly in the rural areas of the Brazilian Amazon. The participant samples from the RDSM include 76 men and 73 women, all adults (≥ 18 years) and residents of 78% of the homes of eight communities representative of the socio-environmental diversity of that conservation unit. Data were obtained in a study designed to identify health and ecosystemic indicators of the Amazonian floodplain, involving about 550 residents of 88 houses in those localities (Moura, 2008). The Caxiuanã riverine/caboclo groups live in and around the Caxiuanã National Forest (FLONA), a protected area of 330,000 hectares covered mainly by upland (terra firme) tropical forests, located in the municipal district of Melgaço, Pará State, about 400 km from Belém, the State's capital (Figure 1). The FLONA is composed mainly of primary tropical rain forest (85%), flooded forests (12%), and secondary vegetation and non-forested areas (3%). This protected area belongs to a blackwater river system with relatively acidic pH in the Caxiuanã bay, and the daily tides have little influence on water level (MPEG, 1994). In Caxiuanã, houses are dispersed throughout the FLONA in clusters varying from 2 to 10 homes, but some families live in isolation, in houses that are from 500 metres to 5 or more kilometres away from one another. A total of 148 individuals were investigated (72 men and 76 women), representing about 65% of the adult population resident in the area. Mamirauá and Caxiuanã exemplify traditional rural populations, as they originated from and have lived in Amazonia since the middle of the 19th century. They are descendants of the encounter of Amerindians with European settlers, and of Africans brought to Brazil as slaves, some of whom escaped from urban centres and farms to distant places in the jungle (Lima-Ayres, 1992; Silva, 2001). They have lifestyles strongly dependent on subsistence activities such as the cultivation of manioc (Manihot esculenta), beans and corn, artisanal fishing for domestic consumption and sale, collection of forest products for consumption and sale in the local towns, and small animal husbandry. They also maintain regular contact with regional markets, take temporary jobs in ecotourism activities, and provide support to scientific research (Filgueiras and Silva, 2013; Lisboa et al., 2013; Moura, 2007, 2010; Piperata, 2007; Piperata et al., 2011, 2013; Silva, 2001, 2011; Silveira et al., 2013).
In the last decade, these groups have also benefited from several social programs of the federal government, such as retirement and rural pensions and Bolsa Família (a federal welfare program); the impact of these programs on health has not yet been fully evaluated (Brasil, 2009, 2010; Ivanova and Piperata, 2010; Moura, 2007; Piperata et al., 2011, 2013). The main differences between the two protected areas are related to their ecological and political settings. The former is in a floodplain ecosystem, and it was intended that the local communities would be involved in its management, with full access to the natural resources; the latter is mainly an upland forest ecosystem, legally a national protected area, where the families are considered intruders and their access to local resources is formally limited. Quilombos are settlements formed predominantly by African-descended populations that originated in Brazil from escaped slaves who survived in the Amazon basin and other regions, making use of common systems of land ownership and tenure (Arruti, 2008; Salles, 2005; Treccani, 2006). Although there are as yet no specific genetic studies of the groups discussed here, in general the quilombolas of Amazonia also present, in varied percentages, biological and cultural influences of Amerindian and European groups (Guerreiro et al., 1994, 1999; Santos et al., 2008). Even though there is great variation among communities, the rural quilombolas are organised in settlements varying from five or six to two dozen or more houses close to each other, usually arranged in a linear way and near rivers and other water sources. They practice mainly subsistence agriculture, fishing, extraction of natural products, production of handicrafts for sale, and small animal husbandry for survival (Brasil, 2007b; Oliveira, 2011). In recent years the quilombolas also started to receive the Bolsa Família, which became an important source of cash for many families (Oliveira, 2011; Guimarães and Silva, 2015). Overall, 351 people (154 men and 197 women) from five quilombola communities (Africa, Laranjituba, Santo Antonio, Mangueiras, and Mola), all in the State of Pará, were included in this analysis, encompassing at least 60% of the adults (≥ 18 years of age) in the participant communities (Figure 1). The investigated quilombo residents have subsistence patterns and socioeconomic situations similar to the riverine/caboclo groups, except that they are located predominantly in upland areas, closer to the largest regional urban centre (Belém), and some of them have better access to basic infrastructure, such as proximity to highways, electricity, health centres, telephones and primary schools, although they suffer constant discrimination due to their assumed slave ancestry (Cavalcante, 2011; Pinho et al., 2013). Quilombolas are included in this study because they encompass a large segment of the rural Amazonian populations and hence increase the diversity and range of the sample investigated, and because politically they are in the same situation of social and environmental vulnerability as the riverine/caboclo groups, being subject to most of the same SDH factors. From a biological point of view, the differences among the investigated populations are possibly smaller than the similarities because of the historical origin of the participant quilombolas.
Other information about the ecological situation, history, geography, social and economic conditions, subsistence and health aspects of the groups and areas investigated is available in previous publications (Borges, 2011; Cavalcante, 2011; Filgueiras and Silva, 2013; Guimarães and Silva, 2015; Lisboa et al., 2013; Melo and Silva, 2015; Moura, 2007, 2008; Moura et al., 2016; Pinho et al., 2013; Piperata and Dufour, 2007; Piperata et al., 2011, 2013; Silva, 2002, 2009, 2011; Silva and Padez, 2010; Silva et al., 2006; Silveira et al., 2013).
--- Methods
All the projects were approved by the institutional Committee of Ethics in Research and by the communities involved. All participants signed a research consent form following Resolutions CNS/Brazil 196/96 and 466/12 (Brasil, 2012). In all groups investigated, the sampling strategy involved a first contact with the communities to explain the research objectives, obtain group approval for participation and conduct an initial population survey. This was followed by one or more field trips during which individual consent was obtained and personal (including health and anthropometric), family and household information was collected, either at each home of the locality or, according to the wishes of the community and the time frame for data collection, at a central place, such as a community health centre or school, to which the families converged at a previously defined date and time. This research design, adapted from Silva (2001) and Moura (2007), made it possible to guarantee a high rate of participation of adults and children, men and women, representative of the overall population of each study area. The anthropometric measures were taken following procedures described by Weiner and Lourie (1981) and SISVAN (2008). The anthropometric measurements were made by the same individuals to reduce inter-observer error. The anthropometric variables analysed include height, weight, arm circumference, waist and hip circumferences, and triceps, subscapular and suprailiac skinfolds. Body Mass Index (BMI) was calculated from weight and height (WHO, 2011; Deurenberg et al., 1990; Martínez et al., 1993). Circumference measures were taken with a fabric anthropometric tape, following the protocols of the World Health Organization (2000). Skinfold measures were made with a Cescorf caliper, according to Frisancho (1999). The parameters adopted are described in Table 1. The percentage of general adiposity and the amount of fat-free mass were calculated from the skinfolds according to Durnin and Womersley (1974). The anthropometric measures were compared between the sexes using one-way analysis of variance (ANOVA), and differences were considered statistically significant at p ≤ 0.05. To analyse differences among populations, an analysis of covariance (ANCOVA) adjusting for the effect of age was performed. Statistical analyses were performed using SPSS® version 17.0. General health was evaluated through a clinical exam carried out by a physician. Blood pressure was measured at the left brachial artery using a certified aneroid sphygmomanometer, following procedures recommended by the Brazilian Health Ministry (Brasil, 2006), which follow the WHO parameters. The parameter values for blood pressure assessment according to the Brazilian Health Ministry (Brasil, 2006) are presented in Table 2.
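To make the derived measures concrete, the sketch below shows how BMI and the blood-pressure categories could be computed from the raw measurements. It is a minimal illustration in Python with hypothetical function names, not the authors' analysis code (the statistics were run in SPSS); the BMI cut-offs are the standard WHO adult categories, and the blood-pressure cut-offs are those of the Brazilian Health Ministry parameters referenced in Table 2 and spelled out in the following paragraph.

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body Mass Index: weight (kg) divided by the square of height (m)."""
    return weight_kg / height_m ** 2


def bmi_category(value: float) -> str:
    """Standard WHO adult BMI categories; the analysis pools overweight and obesity (BMI >= 25)."""
    if value < 18.5:
        return "low weight"
    if value < 25.0:
        return "normal"
    if value < 30.0:
        return "overweight"
    return "obese"


def blood_pressure_category(systolic: int, diastolic: int) -> str:
    """Classify a reading using the Brazilian Health Ministry cut-offs (Brasil, 2006).

    SAH: systolic >= 140 mmHg or diastolic >= 90 mmHg (defined for individuals not
    using anti-hypertensive medication); pre-hypertension: 120-139 / 80-89 mmHg.
    """
    if systolic >= 140 or diastolic >= 90:
        return "SAH"
    if 120 <= systolic <= 139 or 80 <= diastolic <= 89:
        return "pre-hypertension"
    return "normal"


# Example: a 1.62 m, 82 kg adult with a 134/86 mmHg reading.
print(round(bmi(82.0, 1.62), 1), bmi_category(bmi(82.0, 1.62)))  # 31.2 obese
print(blood_pressure_category(134, 86))                          # pre-hypertension
```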
Systemic Arterial Hypertension (SAH) is characterised as "systolic blood pressure higher or equal to 140 mmHg and diastolic blood pressure higher or equal to 90 mmHg in individuals not making use of anti-hypertensive medication" (Brasil, 2006, p. 14). Individuals with elevated blood pressure, between 120-139 mmHg systolic and 80-89 mmHg diastolic, tend to maintain pressure above the population average; they are potentially at higher risk of developing SAH and associated cardiovascular events and are considered to be in a stage of "pre-hypertension" (Brasil, 2006). Information about environmental risks, labour activities, subsistence strategies, living and housing conditions and geographic/land/social conflicts was also obtained through participant observation and interviews as part of the SDH assessment.
--- Results
The studied communities have a diverse set of economic and subsistence activities, from agriculture for domestic consumption and sale, artisanal fishing, small animal husbandry, forest management, handicraft manufacture and ecotourism, to formal work as teachers, health agents and municipal technicians. This range of activities is similar to that of other Amazonian rural populations (Brasil, 2007b, 2008; Guerrero, 2010; Lima-Ayres, 1992; Moura, 2007, 2010; Murrieta, 1994; Murrieta and Dufour, 2004; Nugent, 1993; Piperata and Dufour, 2007). In the last decade an important cash contribution from income distribution programs of the federal government (Bolsa Família) has been given to riverine and quilombola families. Together with rural retirement pensions and temporary work contracts, these have increased their access to consumer goods and affected their diet (Ivanova and Piperata, 2010; Moura, 2007; Pinho et al., 2013; Piperata et al., 2011, 2013; Silva, 2011). The investigated communities live in different ecological environments and, due to the historical combination of the main groups that contributed biologically and culturally to the formation of the current Amazonian population (Amerindians, Europeans and Africans), they represent a significant portion of the regional biocultural diversity, for which information about health, nutrition and the SDH is still very limited. All the communities present precarious environmental sanitation, lacking basic sanitary infrastructure and piped water, and have difficult housing situations, with most buildings made of wood, with a small number of rooms, and many of them without an internal water closet, which relates directly to the high intestinal parasite loads and other infectious and deficiency diseases found among these groups (Giatti et al., 2007; Lisboa et al., 2013; Moura, 2007, 2008; Pinho et al., 2013; Silva, 2001, 2009). Social and economic activities in the rural areas of Amazonia, mainly in the floodplain, are strongly marked by seasonality (flooding and falling water levels), which directly influences the rhythm of life and affects access to health and education, increasing the difficulty of reaching health centres and schools during some periods of the year (Filgueiras and Silva, 2013; Moura, 2007; Silva, 2001, 2006, 2011; Silva et al., 2006). The sociodemographic situation of these populations is presented in Table 3. Caboclos and quilombolas show similar socioeconomic conditions, particularly in relation to education, income and number of rooms in the house.
However, some particularities were noticed: greater access to and dependence on government programs among the quilombolas, likely related to closer political proximity to the urban centres and better social organisation in associations; more consumer goods in Mamirauá, due to the several projects generated by the Mamirauá Institute over the years and to access to cheaper goods from the "Zona Franca de Manaus" (Manaus Free Trade Zone); and more residents per house in Caxiuanã, due to the FLONA legislation that limits the establishment of new families. The difference in the frequency of kitchens inside or outside the houses is associated with the habits of the riverine populations, who traditionally keep their "girau" outside the house to facilitate food processing, mainly of fish, and the drainage of waste water. The quilombolas prefer the kitchen inside the house, especially if there is a faucet with water pumped from an open well by an electric pump, which indicates higher social status. Among the quilombolas, over 80% of the houses have access to electricity, while less than 20% of the riverine houses do. The high number of latrines outside the house in all groups, usually holes dug directly in the ground, indicates the lack of access to environmental sanitation and the potential contamination of the populations and water sources by faecal material.
** Includes Bolsa Família, retirement pensions, rural pensions and other support provided by the government in the form of cash.
*** Includes items such as motorboat, boat engine, gas stove, radio, TV, satellite dish, stereo, DVD player, bicycle, chainsaw, clock, sofa, sewing machine, washing machine, shotgun, mattress and electricity generator.
Table 4 presents the median values of BMI, adiposity percentage, fat-free mass and the subscapular/triceps index (STI) after adjusting for the effect of age. Among the quilombolas, all variables differ significantly between the sexes except upper arm circumference (UAC) and waist circumference. The three groups present a similar pattern in which men have significantly higher values for height, weight, fat-free mass and STI, and women have significantly higher values for skinfolds, adiposity percentage and waist and hip circumferences, except in Caxiuanã, where men present a significantly higher mean waist circumference than women. Comparing the three groups (F1), quilombola men are significantly taller than men from Caxiuanã and Mamirauá. Mamirauá men have significantly higher adiposity values (skinfolds), percentage of general adiposity, hip circumference and fat-free mass. Among the women, there are statistically significant differences (F2) between the groups in several variables: Mamirauá women present greater height, waist and hip circumferences and STI than the quilombola and Caxiuanã women, while the quilombola women present higher general adiposity values than the Caxiuanã and Mamirauá women. Insert Table 4. Figures 2 and 3 present the obesity values (including overweight) according to WHO intervals in men and women of the different age groups. As the study samples are small, because Amazonian rural settlements characteristically have a small total population, we evaluated overweight together with obesity, as both are associated with elevated morbidity and mortality rates (Hu, 2008; SISVAN, 2008). In men, in all age groups, the Mamirauá population presents higher values than the quilombola and Caxiuanã populations.
Caxiuanã men present lower values than quilombola and Mamirauá men, except in the 60-75 year age group (Figure 2). In women, the quilombola sample presents higher values than Caxiuanã and Mamirauá in all age groups except 18-29 and 50-59 years (Figure 3). In all age groups, Caxiuanã women present less overweight/obesity than quilombola and Mamirauá women. Insert Figures 2 and 3. The frequency of blood pressure status in the studied populations is shown in Table 5. SAH is more frequent among the quilombola men and women; Caxiuanã men have more systolic and diastolic pre-hypertension than the other men, and Mamirauá women have a higher frequency of systolic pre-hypertension than any other group, even though they present the lowest overall frequency of SAH.
--- Discussion
Obesity, arterial hypertension and type 2 diabetes are currently among the chronic-degenerative diseases with the greatest demographic, economic and social impact (Hu, 2008; SISVAN, 2008; SBEM, 2010; SBH, 2010). Several studies have shown that the prevalence of obesity, hypertension and their associated diseases varies among populations depending on their degree of contact with Western culture, their socio-ecological situation, the impact of the market economy and government policies on their diet and lifestyle and, perhaps, their biological ancestry (Blanes, 2008; Dressler, 1999; Liebert et al., 2013; Silva, 2001, 2011). According to the Brazilian Commission on the Social Determinants of Health, SDH are the social, economic, cultural, racial/ethnic, psychological and behavioural factors that influence the occurrence of health problems and their risk factors in a population (CNDSS, 2008). Nutritional and cardiovascular diseases are known to be associated with all these factors; hence, by using an SDH perspective it is possible to look for the causes of the causes of these diseases and propose more adequate public policies to deal with them. In Brazil, most of the epidemiologic studies of non-transmissible chronic diseases have been concentrated in urban areas and in the South and Southeastern regions. There are relatively few studies in the North, and those carried out among rural populations are fewer still (Adams, 2002; Alencar et al., 1999; Borges, 2011; Borges and Silva, 2010; Giugliano et al., 1981; Melo and Silva, 2015; Pinho et al., 2013; Silva, 2004b, 2006, 2009, 2011). More studies are still needed to understand the situation and distribution patterns of the "diseases of modernity" (hypertension, type 2 diabetes, obesity and metabolic syndrome, among others), and to identify the biological, environmental and social factors that determine the risk dynamics in these populations. Several investigations have shown that there is a relationship between infant malnutrition and overweight/obesity and their associated diseases in adult life (Bogin, 2010; Hu, 2008; Popkin, 2003). Recent long-term studies among Native Americans such as the Shuar and the Tsimane' have also shown that the impacts of socio-economic change on the health of traditional populations can be fast and dramatic, although varying in degree according to a number of factors (Rosinger et al., 2013; Liebert et al., 2013; Urlacher et al., 2016, in this volume). Research among caboclo and quilombola populations has already demonstrated high percentages of infant malnutrition and the epidemiologic transition in these groups (Brasil, 2007b; Lahr, 1994; Oliveira, 2011; Silva, 2001, 2009; Silva and Guimarães, 2015).
As has been established in other developing countries, the results presented here highlight that, more than any single biological factor, there is a direct relationship between socio-ecological vulnerability and the populations' health in relation to both infectious-parasitic and chronic non-transmissible diseases. The prevalence of overweight and obesity in the studied populations is high, especially among men from Mamirauá and quilombola women, while low weight (BMI < 18.5 kg/m²) is not significant in either men or women. Compared with other Brazilian populations (Brasil, 2009; Blanes, 2008; CNDSS, 2008; IBGE, 2010), men from Mamirauá present very high frequencies of overweight/obesity (51.3%), lower only than the Brazilian urban population (53.5%), while men from Caxiuanã present the lowest values (13.3%). On the other hand, quilombola women present values (53.4%) as high as those of the general female population of the country (53.1%), and of the urban (53.1%) and rural (53.4%) female populations considered separately. Women from Caxiuanã present lower values (26.3%) than other Brazilian groups. From a cultural point of view, among Amazonian rural populations used to food insecurity and all types of infrastructure deficiencies, there is a widespread perception that fat is healthy and that being a "chubby" child or adult is an indicator of a good quality of life. According to Hu (2008), differing perceptions among populations of the meaning of being "fat" (among African-Americans in the USA, or Samoans, for instance) create in some groups a greater social tolerance of overweight and obesity, as affected people are not seen as potentially sick. In the investigated groups, overweight is not considered a disease risk but a sign of health and of a family with financial resources and high social status, demonstrating the need to understand socio-cultural dynamics when investigating the epidemiologic situation of these populations. Besides diet, physical activity is one of the decisive factors in weight maintenance and gain (Hu, 2008). The traditionally different patterns of physical activity and eating habits between men and women in rural populations, the current reduction in women's physical activity associated with smaller numbers of children, and the acquisition of industrialised and frozen foods that require little preparation and of consumer goods such as gas stoves, televisions, DVD players and washing machines may play an important role in the differences observed in the frequency of obesity/overweight and hypertension among the investigated groups. As gender roles, work and daily activities are important SDH, more detailed research at the household level is necessary to elucidate the impact of the new consumption patterns on the health of Amazonian rural populations, particularly the women. In relation to blood pressure, the prevalence of SAH is higher in the quilombola population than in Caxiuanã and Mamirauá (Table 5). The overall prevalence of pre-hypertension and SAH correlates with the overweight/obesity patterns and is of particular concern among the women, who are an especially vulnerable segment of the rural populations (Brasil, 2009; Borges and Silva, 2010; Borges, 2011; Paixão and Carvalho, 2008; Silva, 2001).
Although the prevalence of SAH is not above that observed in other Brazilian rural and urban populations (Brasil, 2006; Silva et al., 2006), the values show that hypertension is already a public health problem among these Amazonian rural populations. There are still few investigations of SAH prevalence among the non-indigenous inhabitants of the rural areas of Northern Brazil, and direct comparison with other studies is difficult, as other works usually include older age groups (at least > 19 years old), whereas this study considered individuals from 18 years of age. In a general analysis, although the overall prevalence observed here is not above what has been reported elsewhere, when the high prevalence of pre-hypertension and of isolated systolic or diastolic hypertension in the three groups is also taken into consideration, together with their overweight/obesity situation and the socio-ecological precariousness in which they live, a complex picture arises that combines epidemiologic and nutritional transitions, reflecting the importance of the social determinants in the rural populations' health and requiring immediate action to avoid an SAH epidemic and its accompanying chronic manifestations. Generally, groups exposed to greater influence of Western culture and those more involved with the market economy present higher obesity and SAH levels and a stronger association between blood pressure and chronological age (Dressler, 1999; Hu, 2008; Rosinger et al., 2013; Silva et al., 1995). Although these effects have been observed regardless of where populations live, the patterns of association of blood pressure and obesity levels with age and ancestry, and the environmental factors that contribute to them, have been shown to be highly variable as a consequence of the economic, ecological, historic-cultural and biological characteristics of each population (Dressler, 1999; Wirsing, 1985), characterising a strong relationship between the socio-ecological situation (the Social and Environmental Determinants; Blanes, 2008; SBEM, 2010; SCDH, 2005) and the health/illness of the investigated populations. Among the riverine/caboclo and quilombola groups, difficulties related to access to potable water, environmental sanitation and health services, although they have improved in some areas in recent years, especially Mamirauá (Moura, 2007, 2008; Moura et al., 2016), are still a matter of concern, as they are involved in the origin of many conditions such as diarrhoea, anaemia, infant malnutrition and infant death, which are among the main morbidity factors related to the SDH in the Amazon region (Brasil, 2008, 2009; CNDSS, 2008; Lahr, 1994; Moura, 2008; Piperata et al., 2013; Silva, 2009). Although Brazil has gone through several periods of economic and social turbulence in the last 50 years, an accelerated process of nutritional transition is underway, which has increased overweight/obesity (and also SAH) prevalence, mainly among women, while malnutrition prevalence, mainly infantile, remains high although falling (Brasil, 2009, 2010; CNDSS, 2008; Ivanova and Piperata, 2010; Piperata et al., 2011). This puts rural Amazonian populations, such as the riverine and quilombolas, in the vulnerable situation of having a double burden of disease, characterising the epidemiologic transition taking place in the country, and particularly in Amazonia (CNDSS, 2008; Monteiro et al., 2010; Oliveira, 2010; Silva, 2006).
In the States of the North, circulatory system diseases are currently among the main causes of death in adults, while neonatal and infant mortality continues to be among the highest in the country (Brasil, 2010; CNDSS, 2008). On the other hand, the extent of underreporting of disease and death, and of deaths registered with poorly defined causes, makes the existing statistics unreliable, possibly underestimating the real health situation of the region (Lisboa et al., 2013; Silva, 2006). The population of Amazonia had the lowest Gini index in the country in 2013 (0.478) and the second smallest per capita income of the nation in that year (IBGE, 2015). The groups investigated here reflect that index. As in other areas of Brazil, poverty, the precariousness of environmental sanitation and of other basic infrastructure, illiteracy, unemployment and racism/discrimination mainly affect the self-declared "pardo and negro" (brown and black) and the poorer rural segments of the population (CNDSS, 2008; Paixão and Carvalho, 2008; Pinho et al., 2013), among whom the quilombolas and the riverine/caboclo can be included, further characterising their socio-ecological vulnerability. Studies indicate that in Northern Brazil obesity mostly affects the poorest and least educated segments of the population, mainly women; in some rural populations SAH prevalence is also higher among them, and there is higher mortality among black and brown women due to circulatory diseases (CNDSS, 2008; Oliveira, 2011; Silva et al., 2006). However, as data on SAH and obesity prevalence in rural populations are limited, their true impact on the several vulnerable groups of Amazonia remains unknown. The investigated populations fit all the social and environmental vulnerability descriptors, making it clear that they are especially vulnerable to the SDH and that specific public policies ought to be implemented urgently to improve their quality of life and health.
--- Conclusions
There are still few studies of the human biology of riverine/caboclo and quilombola populations. This group of investigations is pioneering in the simultaneous interdisciplinary study of chronic disease morbidity and the SDH of these combined populations. It was identified that, overall, the precarious socio-ecological situation in which the studied populations live exposes them to a double burden of disease. The Caxiuanã population, more physically isolated, with less access to financial resources and more precarious infrastructure, presents the shortest and thinnest individuals, and intermediate pre-hypertension and SAH levels compared with the quilombolas and Mamirauá. Quilombola men are taller and quilombola women present higher overweight/obesity prevalence; both men and women have the highest pre-hypertension and SAH prevalence among the three populations.
Mamirauá women are the tallest, Mamirauá men have the highest overweight/obesity, and the group has the lowest overall pre-hypertension and SAH prevalence. The differences observed among the groups can be attributed to factors such as psycho-social stress (racism/discrimination); cultural behavioural patterns; the greater access to cash and proximity to urban centres found among the quilombolas; the intense work of the Mamirauá Institute for Sustainable Development to improve the infrastructure, epidemiologic and income situation of the resident families in Mamirauá; and the particularly precarious conditions of survival, sanitation and health in general, and the almost total absence of the State, in Caxiuanã. Overall, there is a strong connection between what has been defined as the SDH and the epidemiologic situation of these groups. Further studies in these and other populations using an SDH framework will contribute to the proposition of future measures seeking to reduce the double burden of disease associated with the epidemiologic transition and to prevent, among the Amazonian rural populations, the high mortality rates due to cardiovascular disorders observed in the urban areas. In the development of our projects, dialogue among the communities, the local health and education professionals and the researchers has been prioritised, in order to promote knowledge exchange and local empowerment, as the riverine and quilombola populations have historically been left out of national public policies. These research endeavours also motivated discussion with community and municipal health managers about their health knowledge and needs, contributing to public policy planning aimed specifically at these groups (Silva, 2015). We believe the information presented here can also be of use to policy planners elsewhere in the Amazon basin, where some of the world's most vulnerable rural populations survive in different countries and are exposed to similar problems.
Background: The health and nutritional situation of adults from three vulnerable rural Amazonian populations is investigated in relation to the Social Determinants of Health (SDH) and the epidemiologic transition. Aim: To investigate the role of the environment and the SDH in the occurrence of chronic-degenerative diseases in these groups. Subjects and Methods: Anthropometric, blood pressure and demographic data were collected from adults in the RDS Mamirauá, AM (n=149), Flona Caxiuanã, PA (n=146), and quilombolas, PA (n=351), populations living in a variety of socio-ecological environments in the Brazilian Amazon. Results: Adjusting for the effect of age, quilombola men are taller (F=9.85; p<0.001), and quilombola women present higher adiposity (F=20.43; p<0.001) and are more overweight/obese. Men from Mamirauá present higher adiposity (F=9.58; p<0.001). Mamirauá women are taller (F=5.55; p<0.01) and have higher values of waist circumference and subscapular/triceps index. Quilombolas present a higher prevalence of hypertension in both sexes, and there are significant differences in rates of hypertension among the women (χ²=17.45; p<0.01). The quilombolas are more dependent on government programs, people from Mamirauá have more economic resources, and the group from Caxiuanã has the lowest SES. Conclusion: In these populations, the SDH play a key role in the ontogeny of diseases, and the "diseases of modernity" occur simultaneously with the ever-present infecto-parasitic pathologies, substantially increasing social vulnerability.
INTRODUCTION
The estimated number of new human immunodeficiency virus (HIV) infections among men who have sex with men (MSM) in the United States (US) in 2010 was 29,800 (Centers for Disease Control and Prevention [CDC], 2015). Black MSM accounted for the largest proportion of infections (38%) (CDC, 2016). Although the number of new HIV diagnoses among Black MSM increased 22% between 2005 and 2014, the upward trend appears to be slowing in recent years, increasing less than 1% between 2010 and 2014 (CDC, 2016). While estimating the rate of HIV among MSM has proven difficult, one study in New York City estimated that the case rate per 100,000 among non-Latino Black MSM was 8,781 between 2005 and 2008, compared with 3,221 among Latino MSM and 1,241 among non-Latino White MSM (Pathela et al., 2011). Unfortunately, of the estimated 647,700 MSM with HIV in the US at the end of 2011, only about 85% had been tested for and diagnosed with HIV (CDC, 2014a). This suggests that continued efforts to diagnose MSM with HIV promptly are needed to curb the HIV epidemic in this group. The general population of MSM with HIV experiences better outcomes along the HIV care continuum when compared with other risk groups (CDC, 2014a). However, Black MSM with HIV experience the lowest rates of linkage to HIV care, retention in care, prescription of antiretroviral therapy (ART), and viral suppression compared with MSM from all other racial/ethnic groups (CDC, 2014b), and compared with their male heterosexual counterparts (CDC, 2014c). While predictors such as Black race, homelessness, MSM disclosure (Nelson et al., 2010), life stressors (Nelson et al., 2014), and MSM-related stigma (Glick & Golden, 2010) have been associated with HIV testing and delayed diagnosis among MSM, less is known about predictors of delayed diagnosis among specific racial/ethnic groups of MSM, particularly factors at the neighborhood level. Therefore, the objectives of this study were to (a) examine racial/ethnic disparities in delayed HIV diagnosis among MSM, and (b) identify specific individual- and neighborhood-level determinants of delayed HIV diagnosis for each MSM racial/ethnic group in Florida.
--- METHODS
--- Datasets
De-identified HIV surveillance records were obtained from the Florida Department of Health enhanced HIV/AIDS reporting system (eHARS). Cases aged ≥13 who met the CDC HIV case definition (CDC, 2008) during the years 2000-2014 and had a reported HIV transmission mode of MSM were analyzed. Cases with missing or invalid data for ZIP code at the time of HIV diagnosis, cases missing the month and year of HIV diagnosis, and cases diagnosed in a correctional facility were excluded. Cases diagnosed in a correctional facility were excluded because they are not representative of the HIV population in the neighborhood where the facility is located, and because they have different access to care than the general population with HIV infection. The 2009-2013 American Community Survey (ACS) was used to obtain neighborhood-level data using ZIP code tabulation areas (ZCTAs) (ACS, 2015). ZCTAs are used by the US Census Bureau to tabulate summary statistics and approximate US postal service ZIP codes (US Census Bureau, n.d.).
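As a rough illustration of the cohort construction described above, the following Python/pandas sketch applies the stated inclusion and exclusion criteria and links cases to ZCTAs; all file and column names are hypothetical, since the actual eHARS and crosswalk field names are not given in the text.

```python
import pandas as pd

# Hypothetical file and column names; the real eHARS extract uses different field names.
cases = pd.read_csv("ehars_extract.csv")

# Inclusion: aged >= 13, diagnosed 2000-2014, MSM listed as an HIV transmission mode.
msm = cases[
    (cases["age_at_diagnosis"] >= 13)
    & cases["diagnosis_year"].between(2000, 2014)
    & (cases["transmission_mode"] == "MSM")
]

# Exclusions (not mutually exclusive): missing ZIP code, missing month/year of
# diagnosis, or diagnosis in a correctional facility.
analytic = msm[
    msm["diagnosis_zip"].notna()
    & msm["diagnosis_month"].notna()
    & msm["diagnosis_year"].notna()
    & ~msm["diagnosed_in_correctional_facility"]
]

# Link each case's ZIP code at diagnosis to its ZCTA so that 2009-2013 ACS
# neighborhood-level measures can be attached (crosswalk file assumed).
crosswalk = pd.read_csv("zip_to_zcta.csv")  # assumed columns: zip, zcta
analytic = analytic.merge(crosswalk, left_on="diagnosis_zip", right_on="zip", how="left")
```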
--- Individual-level variables
The following individual-level data were extracted from eHARS: ethnicity, race, HIV diagnosis year, sex at birth, age at HIV diagnosis, HIV transmission mode, birth country, HIV-to-AIDS interval in months (if the case progressed to AIDS), residential ZIP code at the time of HIV diagnosis, and whether the case was diagnosed at a correctional facility. Data on mode of HIV transmission were self-reported during HIV testing, reported by a health care provider, or extracted from medical chart reviews. Cases were coded as US-born if they were born in any of the 50 states, the District of Columbia, Puerto Rico, or any US dependent territory. Delayed HIV diagnosis was defined as an AIDS diagnosis within 3 months of HIV diagnosis (CDC, 2013).
--- Neighborhood-level variables
Thirteen neighborhood-level socioeconomic status (SES) indicators were extracted from the ACS to develop an SES index of Florida neighborhoods (ZCTAs) (Niyonsenga et al., 2013): percent of households without access to a car, percent of households with >1 person per room, percent of the population living below the poverty line, percent of owner-occupied homes worth ≥$300,000, median household income in 2013, percent of households with annual income <$15,000, percent of households with annual income ≥$150,000, income disparity (derived from the percent of households with annual income <$10,000 and the percent of households with annual income ≥$50,000), percent of the population age ≥25 with less than a 12th grade education, percent of the population age ≥25 with a graduate or professional degree, percent of households living in rented housing, percent of the population age ≥16 who were unemployed, and percent of the population age ≥16 employed in a high working class occupation (ACS occupation group: "managerial, business, science, and arts occupations"). Income disparity was calculated as the logarithm of 100 times the percent of households with annual income <$10,000 divided by the percent of households with annual income ≥$50,000, and was used as a proxy for the Gini coefficient (Niyonsenga et al., 2013; Singh & Siahpush, 2002). All neighborhood-level indicators were coded so that higher scores corresponded with higher SES; they were then standardized (Niyonsenga et al., 2013). To calculate the SES index, we started by conducting a reliability analysis. The Cronbach's alpha for all 13 indicators was 0.93. We selected 7 indicators based on the correlation of each indicator with the total index (high correlation) and the Cronbach's alpha if the item was deleted (low alpha). The 7 indicators selected were: percent below poverty, median household income, percent of households with annual income <$15,000, percent of households with annual income ≥$150,000, income disparity, percent of the population age ≥25 with less than a 12th grade education, and high working class occupation. The resulting Cronbach's alpha increased (0.94). Second, we conducted a principal component analysis with and without varimax rotation, which revealed one factor with an eigenvalue greater than 1 (5.14). This factor accounted for 73.49% of the variance in the indicators. Because all the factor loadings were high (between 0.80 and 0.93), we retained all 7 indicators. Finally, we summed the standardized scores for the 7 variables and categorized the scores into quartiles.
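The construction of the neighborhood SES index lends itself to a short worked sketch. The Python code below follows the steps described above (orienting indicators so that higher values mean higher SES, standardizing, summing the 7 retained indicators, and cutting quartiles); column names are illustrative, the base of the logarithm for income disparity is assumed to be 10, and the choice of which indicators to reverse is inferred from their direction, so this is a sketch of the procedure rather than the authors' exact code.

```python
import numpy as np
import pandas as pd

# Hypothetical ZCTA-level columns extracted from the 2009-2013 ACS.
acs = pd.read_csv("acs_2009_2013_zcta.csv")

# Income disparity: log of 100 * (% households < $10,000) / (% households >= $50,000),
# used as a proxy for the Gini coefficient (base-10 log assumed).
acs["income_disparity"] = np.log10(100 * acs["pct_income_lt_10k"] / acs["pct_income_ge_50k"])

# The 7 indicators retained after the reliability analysis (Cronbach's alpha = 0.94).
indicators = [
    "pct_below_poverty", "median_household_income", "pct_income_lt_15k",
    "pct_income_ge_150k", "income_disparity", "pct_lt_12th_grade_edu",
    "pct_high_working_class",
]

z = acs[indicators].copy()
# Reverse the disadvantage indicators so that higher scores correspond to higher SES
# (which indicators are reversed is inferred here), then standardize each column.
for col in ["pct_below_poverty", "pct_income_lt_15k", "income_disparity", "pct_lt_12th_grade_edu"]:
    z[col] = -z[col]
z = (z - z.mean()) / z.std()

# Sum the standardized scores and split neighborhoods into SES quartiles.
acs["ses_index"] = z.sum(axis=1)
acs["ses_quartile"] = pd.qcut(acs["ses_index"], 4, labels=[1, 2, 3, 4])
```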
To categorize ZCTAs as rural or urban, we used Categorization C of Version 2.0 of the Rural-Urban Commuting Area (RUCA) codes, developed by the University of Washington WWAMI Rural Research Center (WWAMI Rural Health Research Center, n.d.).
--- Statistical analyses
Individual- and neighborhood-level data were merged by matching each case's ZIP code at the time of HIV diagnosis with the corresponding ZCTA. We compared individual- and neighborhood-level characteristics by race/ethnicity. We used the Cochran-Mantel-Haenszel general association statistic for individual-level variables, controlling for ZCTA, and the chi-square test for neighborhood-level variables. Multi-level (Level 1: individual; Level 2: neighborhood) logistic regression modeling was used to account for correlation among cases living in the same neighborhood through a random intercept for ZCTA. Crude and adjusted odds ratios and 95% confidence intervals for delayed diagnosis were calculated comparing cases by race/ethnicity. First, we estimated crude odds ratios (Model 1). Then we controlled for individual-level factors (Model 2). Finally, we controlled for individual- and neighborhood-level variables (Model 3). To identify unique predictors of delayed diagnosis for each group, separate models were estimated, stratified by race/ethnicity, adjusting for year of HIV diagnosis, age, US/foreign-born status, injection drug use, socioeconomic status (the index of 7 indicators), and rural/urban status. SAS software, version 9.4 (SAS Institute, Cary, NC), was used to conduct the analyses. Multivariate models were adjusted for year of HIV diagnosis to control for likely changes in HIV testing behaviors and HIV testing strategies over the 15-year study period. The Florida International University institutional review board approved this study, and the Florida Department of Health designated this study to be non-human subjects research.
--- RESULTS
--- Characteristics of participants
Of 91,867 HIV cases reported in Florida during 2000-2014, 42,493 had MSM listed as a mode of HIV transmission. Of these, 1,311 were diagnosed in a correctional facility, 1,785 had missing data on ZIP code at the time of HIV diagnosis, and 176 had missing data on month of HIV diagnosis (categories are not mutually exclusive). No cases under the age of 13 reported transmission mode as MSM. Of the remaining 39,301 cases analyzed in this study, 27.3% were diagnosed late (see Table 1). This represented a downward trend that started at 38.4% in 2000 and decreased to 18.5% by 2014.
--- Racial/ethnic disparities in delayed HIV diagnosis
The proportion of cases diagnosed late decreased from 2000 to 2014 for all racial/ethnic groups (see Figure 1). In crude logistic regression models, Latino MSM had lower odds of delayed diagnosis compared with White MSM (see Table 2). After controlling for individual-level factors, Black MSM had higher odds of delayed diagnosis compared with White MSM, and the protective effect for Latino MSM disappeared. The higher odds of delayed diagnosis among Black MSM remained after controlling for neighborhood-level SES and rural/urban status.
--- Predictors of delayed HIV diagnosis by race/ethnicity
HIV diagnosis during 2000-2009 compared with 2010-2014 and diagnosis at 20 years of age or older compared with 13-19 were predictors of delayed diagnosis for Black, Latino, and White MSM (see Table 3). Among Black MSM, being foreign-born compared with US-born and living in a rural area compared with an urban area were additionally associated with delayed diagnosis.
Among Latino MSM, only residing in a rural area at the time of HIV diagnosis was independently associated with delayed HIV diagnosis. Among White MSM, being foreign-born compared with US-born was protective.
--- DISCUSSION
Twenty-seven percent of HIV cases diagnosed in Florida during 2000-2014 with a reported mode of HIV transmission of MSM were diagnosed late. After adjusting for individual- and neighborhood-level factors, Black MSM were at increased odds of delayed diagnosis compared with White MSM. Among Black MSM, being foreign-born and residing in a rural area at the time of HIV diagnosis were risk factors. Rural residence was also a strong predictor of delayed diagnosis for Latino MSM. Neighborhood-level SES was not associated with delayed HIV diagnosis among any racial/ethnic MSM group in Florida. The proportion of late HIV diagnoses among MSM in Florida for the years 2000-2014 was 27.3% (consistent with national estimates; CDC, 2013), decreasing from 38.4% in 2000 to 18.5% in 2014. The decline may be partially due to revised recommendations for HIV testing, such as the 2006 CDC (Branson et al., 2006) and 2013 US Preventive Services Task Force (Moyer & US Preventive Services Task Force, 2013) guidelines for opt-out screening of adolescent and adult patients in healthcare settings. While several studies have examined racial/ethnic disparities in delayed HIV diagnosis in the general HIV-infected population (Tang, Levy & Hernandez, 2011; Trepka et al., 2014; Yang et al., 2010), few studies have examined these disparities among MSM. One study of MSM diagnosed in 33 US states between 1996 and 2002 found significant differences in the proportions of Black MSM (23.1%, 95% CI 22.4-23.7) and Latino MSM (23.7%, 95% CI 22.6-24.7) who were diagnosed late compared with White MSM (18.4%, 95% CI 17.9-18.9) (Hall et al., 2007). However, that study included the earlier years of the epidemic and used AIDS diagnosis within 12 months of HIV diagnosis to define delayed diagnosis. In our study, Black MSM had higher odds of delayed diagnosis compared with White MSM after adjusting for individual-level factors. Black MSM tended to be younger, with over 70% diagnosed between the ages of 13 and 39, compared with 47% of White MSM. Differences in age, as well as year of diagnosis and nativity, appear to confound disparities in delayed diagnosis between Black and White MSM. Conversely, the apparent advantage among Latinos when compared with Whites in the crude model appears to be related to differences in individual-level factors. It remains unclear why Black MSM are more likely to be diagnosed late with HIV. Previous studies and a meta-analysis suggest that Black MSM have higher rates of HIV testing (Pathela et al., 2011; Millet et al., 2007). However, a population-based study suggested that MSM-related stigma among Blacks (72%) and Black MSM (57%) is high, and higher than among Whites (52%) and White MSM (27%), and that unfavorable attitudes toward MSM are associated with no prior HIV testing (Glick & Golden, 2010). A quantitative study comparing MSM who tested late for HIV with those who did not found that Black race, homelessness, disclosing male-male sex to 50% or less of one's social circle, having 1 sexual partner versus more than 1 sexual partner in the past 6 months (Nelson et al., 2010), and experiencing multiple life stressors (Nelson et al., 2014) were associated with delayed HIV testing and diagnosis.
Further, Black MSM experience more homelessness (Sullivan et al., 2014) and higher rates of depression (Richardson et al., 1997) than White MSM, may be less likely to disclose their MSM status to others (Gates, 2010), and may have or perceive less social support (Stokes, Vanable & McKirnan, 1996). Over 50% of Black MSM in our study resided in neighborhoods in the lowest quartile of SES, compared with 35% of Latino MSM and 20% of White MSM. The disparity in delayed diagnosis between Black and White MSM decreased but remained after adjusting for neighborhood SES and rural/urban residence. Our results suggest that a comprehensive index of neighborhood SES and rural/urban status explains a portion of the observed disparity between Black MSM and White MSM but does not account for the disparity that remains after controlling for individual-level factors. Being foreign-born was associated with delayed HIV diagnosis for Black MSM. Our results are similar to those from a national study of 33 US states that found a higher proportion of delayed HIV diagnosis (AIDS within 12 months of HIV diagnosis) among foreign-born Black MSM (44.1%) compared with US-born Black MSM (36.7%) (Johnson, Hu, & Dean, 2010). Our population of foreign-born Black MSM was primarily born in Haiti (49.1%), Jamaica (16.3%), and the Bahamas (5.6%). In the national study by Johnson and colleagues mentioned above, the proportion of Caribbean-born Blacks diagnosed late was 44.2%, higher than the proportion of African-born Blacks (42.1%). A study of 1,060 Blacks in Massachusetts found that foreign-born Blacks were less likely to report HIV testing than US-born Blacks (42% vs. 56%) (Ojikutu et al., 2013). Ojikutu et al. found that HIV-related stigma was higher, and HIV knowledge lower, among foreign-born Blacks compared with US-born Blacks, particularly among Caribbean-born participants compared with sub-Saharan African participants (Ojikutu et al., 2013). They also found that over 50% of foreign-born Blacks reported that their most recent HIV test was part of an immigration requirement. The HIV testing requirement for immigrants was lifted in 2010, which has likely affected testing patterns among immigrants (Winston & Beckwith, 2011). After adjusting for individual-level factors and neighborhood SES, rural residence was a predictor of delayed diagnosis among Black and Latino MSM. Forty-one percent of both Black MSM and Latino MSM who resided in rural areas were diagnosed late, compared with 27% and 25% of their urban counterparts, respectively. A previous population-based cohort study of Florida HIV cases reported that 35% of Blacks in rural areas were diagnosed late, compared with 29% in urban areas (Trepka et al., 2014). This suggests that Black MSM in rural areas have a higher risk of delayed HIV diagnosis not only when compared with Black MSM in urban areas, but also when compared with both the rural and urban general HIV-infected Black population. It is possible that high levels of HIV- and MSM-related stigma, and a higher risk of loss of confidentiality, in rural areas compared with urban areas are preventing MSM from routine HIV testing, particularly racial/ethnic minorities (Preston et al., 2002). Fear of being the target of a violent crime due to hostility against MSM has been reported in a qualitative study of MSM in rural Wyoming (Williams, Bowen & Horvath, 2005). Of note, rural areas in Wyoming are likely very different and more isolated from larger cities than rural areas in Florida.
A study in Europe found that MSM who resided in smaller cities reported higher internalized homonegativity compared with those who resided in larger cities, and that higher homonegativity was associated with a decreased likelihood of HIV testing (Berg et al., 2011). A limitation of this study is related to our definition of late diagnosis. It is possible that some individuals who developed AIDS within three months of their HIV diagnosis were not diagnosed with AIDS until after three months and were therefore misclassified. However, we believe that the possibility of misclassification is small, given that cases with AIDS likely had symptoms that encouraged prompt HIV care-seeking behavior. Furthermore, HIV reporting was not mandated in Florida until 1997. It is possible that cases diagnosed prior to 1997 were later reported as new HIV diagnoses and therefore mistakenly appear to have a shorter HIV-to-AIDS time interval. Nevertheless, it is worth noting that our rate of delayed diagnosis for MSM was nearly identical to national estimates (CDC, 2013). Additionally, our dataset did not allow us to examine important variables, such as individual-level SES, access to health insurance, and HIV testing patterns and barriers. Finally, the small number of rural cases limited our ability to stratify racial/ethnic groups by rural/urban status to identify unique predictors of delayed diagnosis in rural areas. Most cases of late HIV diagnosis can be prevented; it is estimated that only 3.6-13% of infections are due to accelerated disease progression (Sabharwal et al., 2011). Therefore, regular HIV testing, as per the current guidelines, offers an opportunity to diagnose individuals prior to their developing AIDS. However, barriers to the implementation of routine testing exist, creating disparities across racial/ethnic and other groups. Our findings warrant future investigations of potential cultural barriers to HIV testing among foreign-born Black MSM, as well as of the contextual differences between rural and urban culture that appear to affect HIV testing among MSM. Strategies such as using social networks to increase HIV testing have shown promising results among Black MSM (Fuqua et al., 2012) and may also be effective among foreign-born and rural populations of Black MSM.
Only about 85% of men who have sex with men (MSM) with HIV have been tested for and diagnosed with HIV. Racial/ethnic disparities in HIV risk and HIV care outcomes exist within MSM. We examined racial/ethnic disparities in delayed HIV diagnosis among MSM. Males aged ≥13 reported to the Florida Enhanced HIV/AIDS Reporting System 2000-2014 with a reported HIV transmission mode of MSM were analyzed. We defined delayed HIV diagnosis as an AIDS diagnosis within three months of the HIV diagnosis. Multilevel logistic regressions were used to estimate adjusted odds ratios (aOR). Of 39,301 MSM, 27% were diagnosed late. After controlling for individual factors, neighborhood socioeconomic status, and rural-urban residence, non-Latino Black MSM had higher odds of delayed diagnosis compared with non-Latino White MSM (aOR 1.15, 95% confidence interval [CI] 1.08-1.23). Foreign birth compared with US birth was a risk factor for Black MSM (aOR 1.27, 95% CI 1.12-1.44), but a protective factor for White MSM (aOR 0.77, 95% CI 0.68-0.87). Rural residence was a risk factor for Black MSM (aOR 1.79, 95% CI 1.36-2.35) and Latino MSM (aOR 1.87, 95% CI 1.24-2.84), but not for White MSM (aOR 1.26, 95% CI 0.99-1.60). HIV testing barriers particularly affect non-Latino Black MSM. Social and/or structural barriers to testing in rural communities may be contributing significantly to delayed HIV diagnosis among minority MSM.
Background The increasingly elderly population in many western countries has created an increased demand for high-quality medical and social care services. This includes nursing home (NH) care, referring to facilities providing 24-h functional support and care for persons who require assistance with activities of daily living and who often have complex healthcare needs [1]. Achieving quality in NH care is complicated by the fact that care quality is multifaceted, difficult to define and measure, and may be perceived differently by different stakeholders [2]. Regulatory agencies thus often struggle to identify factors most important in achieving high-quality NH care [3]. A particular challenge in regulating quality in NH care is that it is in many regards a 'soft' service in which the individual experiences of the NH residents are an important dimension of quality. While many aspects of quality (e.g., clinical quality and cost effectiveness) must be considered in order to achieve a well-rounded assessment of the care provided at a given nursing home, some scholars have argued that resident satisfaction may be the most appropriate assessment of quality in NH care [4,5]. In health care, investigations of patient satisfaction are abundant [6,7], while studies measuring NH resident satisfaction are less common. This may be due to the suggestion that elderly patients with cognitive weaknesses have difficulty reliably answering surveys [5], though studies have shown that patients in cognitive decline are capable of answering surveys, particularly if they are designed with their needs in mind [8][9][10][11]. Given that the satisfaction of residents is an important dimension of quality in NH care, the question becomes how this is achieved. That is to say, what factors are most important to focus on when seeking to improve the satisfaction of NH residents? The most commonly used analytical framework for understanding how quality is generated in health and social care is Donabedian's structure-process-outcome model [12,13]. A central distinction in Donabedian's model is that between structural and processual quality factors, which are seen as potential explanatory factors behind quality outcomes. Structural factors refer to the physical attributes of the setting in which care is provided, including the number and qualifications of staff, equipment, and physical facilities [13]. Processual factors denote the manner in which the care services are delivered, e.g. whether care routines follow set guidelines, and the extent to which residents are involved in decisions about their care. Quality outcomes can be measured in many ways, both objectively in the form of health status or subjectively in the form of patient/resident satisfaction [12]. A central unresolved question posed in Donabedian's work is whether structural or processual measures are most important for generating outcome quality, and precisely how these factors interact to produce the desired outcomes. The literature on medical quality in NH care in terms of, for instance, mortality and adverse event rates, has investigated numerous explanatory factors including staffing, ownership, care routines, and the size of facilities [14][15][16][17]. Such studies are particularly abundant in the United States, where collection of the Minimum Data Set provides a robust basis for performing broad studies of clinical outcomes. There are considerably fewer investigations of the determinants of resident satisfaction.
Previous studies have investigated structural factors including staff satisfaction [18] and job commitment [19], with both studies finding positive associations with resident satisfaction. A broader study of the influence of organizational factors found that NH ownership, staffing levels, and the provision of family councils were important predictors of NH resident satisfaction [20]. Others have investigated specific interventions related to processual quality factors such as improved mealtime routines [21], "person-centered care" initiatives [22], and social activity programs such as gardening [23]. While generally finding positive effects on resident satisfaction, these interventional studies are narrow, and differ in terms of setting and methodology, making them difficult to compare. Taken together, the prior literature on what factors are associated with resident satisfaction in NHs is largely limited to evaluations of specific interventions, and there are few studies investigating the relative influence of structural and processual factors, particularly in the European context. In Sweden, several public investigations have pointed to quality deficiencies, and a lack of systematic knowledge about factors leading to improved quality [24,25]. The issue of NH care quality has increased in significance in Swedish public debate as reforms have led to an increasing number of homes contracted out by local governments (municipalities) to private, often for-profit firms. In 2017, one study found that about one fifth of the Swedish NHs were run by for-profit providers [26]. This study, as well as another recent investigation of Danish NHs, found that overall, privately operated homes outperformed public and non-profit homes in terms of process measures, while underperforming in terms of structural measures [26,27]. Neither of these studies investigated resident satisfaction, however. In Sweden, there is good availability of data on various aspects of NH care due to comprehensive data collection efforts by the Swedish National Board of Health and Welfare (NBHW). Annual surveys measuring satisfaction are sent by the NBHW to all NH residents, and surveys assessing processual and structural measures of quality are sent to every NH in Sweden. So far, however, the use of these data for research has been limited. One exception is a study by Kajonius and Kazemi [28] which investigated differences in satisfaction among NH residents at the municipal level, finding that processual quality factors such as respect and access to information appeared to be more important for residents than structural factors such as staffing and budget. In this study, we aim to evaluate which structural and processual measures of quality have the strongest associations with overall NH resident satisfaction. In doing so, we hope to provide policymakers and researchers with a broader picture of the determinants of resident satisfaction at NHs than has previously been available. --- Methods --- Setting In Sweden, all citizens have access to publicly funded NH services at heavily subsidized rates. The eldercare system in Sweden is decentralized, with responsibility for service provision resting with the nation's 290 municipalities. Municipalities are obliged to offer NH care to those determined to have a need for such care based on national criteria. The municipality may provide services themselves, or contract out service provision to private entities [29]. In 2016, there were in total 88,886 individuals [30] living in ca.
2300 NHs in Sweden [31], with 20.5% of residents living in NHs operated by private providers [30]. While marketization reforms have led to an increase in the proportion of privately managed NHs, they remain publicly funded [32]. All NHs, both public and private, are subjected to the same national quality reporting requirements, user safety regulations, and auditing measures [33]. This study includes all NHs in Sweden providing care to individuals over 65 years of age in 2016, excluding facilities offering only short-term care. --- Data collection Two nationally representative surveys conducted in 2016, both developed and administered by the NBHW, serve as the primary sources of data. The first survey is a user satisfaction survey (Brukarundersökningen, or user survey) distributed yearly to all individuals over 65 years of age receiving elder care services including NH care. This survey consists of 27 separate items to be rated on a five-point Likert scale, relating to their satisfaction with a variety of aspects of elder care services, as well as their health status. Among those living in NHs, the survey had a response rate of 56% in 2016, resulting in a total of 40,371 responses [34]. The second data source is a survey sent directly to all NHs in Sweden by the NBHW, which assesses a number of processual and structural measures of quality. This survey (Enhetsundersökningen, or unit survey) is completed by administrative staff at each NH, and had a response rate of 93% in 2016, resulting in 2153 responses [35]. In addition to quality measures, the unit survey provides data on the type of services provided by the NH (general, dementia and/or assisted living), the number of residents in each home, and whether the NH is operated by a public or private entity. While the NBHW has long experience of developing and administering surveys, and assessments of loss to follow-up in the user survey have been performed [36], the psychometric properties of these surveys have not been published in the publicly available literature. Observations in the two NBHW survey datasets for 2016 were matched based on the NH name and municipality. This involved both an automated matching process and a subsequent manual review of unmatched records. Municipality-level variables were extracted from the national municipality and county council database Kolada [37] and merged into the dataset. --- Variables Variables for analysis were aggregated from the two surveys based on their conceptual meaning and the results of an exploratory factor analysis which may be found in Additional file 1, p 1-7. The extracted variables are detailed below, and a summary of the categorization is available as Additional file 2. --- Dependent variable Upon exploratory factor analysis, it was found that questions in the user survey were highly correlated (Cronbach's alpha = 0.92), making the survey a poor candidate for approaches based on the extraction of distinct latent variables. As such, we chose to extract a single composite measure of satisfaction from the user survey for use as the dependent variable, consisting of questions 5-19, 21-25, and 27. To generate a composite measure for use as the dependent variable, the percent of residents at a nursing home responding positively to a given survey question was normalized by subtracting the average percentage of residents responding positively to that question in the population, and dividing by the standard deviation of the population, resulting in a standardized z-score.
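Written out, this standardization step (a restatement of the construction just described, not an addition to it) is

$$ z_{hq} = \frac{p_{hq} - \bar{p}_q}{s_q}, $$

where $p_{hq}$ is the percent of residents at nursing home $h$ responding positively to question $q$, and $\bar{p}_q$ and $s_q$ are the mean and standard deviation of that percentage across all homes.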
Z-scores were then averaged across all included survey items to produce a composite score with equal weights for each question. --- Independent variables The NBHW divided the unit survey into 12 conceptual categories. A factor analysis showed that the individual questions generally loaded well onto the categories proposed by the NBHW, and we therefore chose, with a few exceptions, to retain this categorization as the basis for the independent variables used in the analysis. Based on the Donabedian model, the independent variables were divided into "structural" and "processual" variables. --- Processual variables The first seven variables related to different processual factors, such as meal-related routines or physical or social activities. Questions 1 and 1a in the unit survey related to the ability of residents to participate in "resident councils" where residents regularly meet to voice concerns in the NH. Issues raised during resident councils may for instance include the planning of common activities or menus for the coming weeks. These were aggregated and reported as the variable Participation in resident councils. Questions 2 and 3 in the unit survey concerned the existence of, and the residents' participation in, the creation of "action plans" concerning the care needs and wishes of the resident. These action plans contain information about how various care activities are to be carried out and should be updated every 6 months. The questions were combined into the variable Individualized action plans. Questions 4 and 5 addressed the existence of meal-related routines, and the documentation of meal preferences in the residents' action plans. Such meal routines are to be based on the Five Aspects Meal Model (FAMM) proposed by Gustafsson et al. [38], and should be updated every 24 months. The questions were combined into the variable Meal-related routines and plans. Questions 6a-c in the survey related to the existence of formal routines for handling resident safety issues such as threats, violence, and addiction. While the NBHW grouped question 7 (routines for cooperation with relatives) into this category, it did not load well onto a common factor and is conceptually quite distinct, and was therefore excluded. The remaining questions were combined into the variable Patient safety routines. Questions 8 and 8a-b in the unit survey related to facilities for, and availability of, exercise and social activities. We excluded question 8 (whether the NH residents have access to facilities for physical activity), which had a weak-to-moderate factor loading, so as to interpret this variable as a purely process-related measure. The remaining questions were combined into the variable Availability of exercise and social activity. Questions 9 and 10 related to the existence of routines for planning care in cooperation with other healthcare providers, and whether residents' involvement was documented. Similarly, questions 11 and 12 related to routines for medication reviews and whether resident participation is documented in the medical record. We reported these as the variables Care coordination routines and Medication review routines, respectively. --- Structural variables The structural variables included indicators of staffing, ownership and size.
Three staffing-related factors were identified from the unit survey: the ratio of nurses per resident (questions 13 and 14), non-nurse staff per resident (questions 15 and 16), and the proportion of staff with an "adequate education" for their position (questions 17 and 18). These are reported as the variables Nurses per resident, Staff per resident, and Staff with adequate education, respectively, and weekday and weekend staffing levels were weighted at a 5:2 ratio to represent average daily staffing levels. While staffing ratios are fairly straightforward to calculate, the definition of what constitutes an "adequate education" is more complex. Adequacy is determined by the amount of healthcare-related training completed by non-nurse staff based on a point scale established by the NBHW [39]. The number of beds available at each NH was reported as Size of nursing home. The NH's ownership status, i.e. whether it was run by a private or a public provider, was reported as the variable Private ownership. --- Controls Several variables were included in the analysis to control for population health differences between the NHs included in this study. Self-rated health has been found to be an excellent predictor of clinical outcomes [40,41], and we used questions 1-3 and 20 in the user satisfaction survey, which asked about the residents' physical and mental well-being, to control for health status. The type of facilities (general, dementia and/or assisted living) available at the NH was also controlled for. It was further deemed necessary to control for demographic factors for which data was only available at the municipal level. This refers to different demographic, economic, and political conditions which may vary significantly between the 290 municipalities. A set of controls were adapted from previous studies [26,42,43] including per capita income levels, population density, age profiles, political control, and expenditures, the details of which may be found in Table 1. Data at the municipality level was collected from the Kolada database [37]. --- Statistical analysis As the large number of quality measures made available by the NBHW was unsuited to direct inclusion in a regression-modelling framework, an initial exploratory factor analysis was performed to reduce the dimensionality of the dataset as described above. Data from the user satisfaction survey and the unit survey were aggregated at the NH level. We sought to minimize bias in the estimation of the effects of the investigated quality measures by drawing upon the approach to causal modelling first described by Pearl [44], using the assumptions of causal directionality described by the Donabedian model of healthcare quality [12,13]. The Donabedian model asserts that a causal relationship exists between structural and processual aspects of healthcare quality, and we assumed that the satisfaction of NH residents would be confounded by their health status. To control for confounding due to these causal relationships, the effects of processual measures of quality were modeled controlling for resident health and structural measures of quality. We present coefficient estimates for structural measures including controls for other measures of structural quality, though the direction of causality within the selected set of structural measures is in many cases unclear. In addition to these full models, we present additional nested models estimating bivariate associations, and models controlling only for resident health.
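As a minimal, hedged sketch of how such full and nested models might be specified in R (all variable and data-frame names below are illustrative placeholders rather than the NBHW survey codes; the estimation details actually used, OLS with Huber-White errors and multilevel models fitted with lme4, are described next):

library(lme4)

# Full model for one processual quality measure (here, availability of
# exercise and social activity): controlled for resident health, the
# structural measures, and a random intercept for municipality
# ("partial pooling"). Municipality-level covariates from Kolada would
# also enter as fixed effects. All names are hypothetical placeholders.
full <- lmer(
  satisfaction_z ~ activity_availability + resident_health +
    nh_size + nurses_per_resident + staff_per_resident +
    staff_education + private_ownership +
    (1 | municipality),
  data = nh_data
)

# Nested models: the bivariate association, and a model controlling
# only for resident health.
bivariate   <- lm(satisfaction_z ~ activity_availability, data = nh_data)
health_only <- lm(satisfaction_z ~ activity_availability + resident_health,
                  data = nh_data)

# Parametric-bootstrap confidence intervals for the multilevel model.
confint(full, method = "boot", nsim = 500)

Comparing the coefficient on the processual measure across the three fits mirrors the full-versus-nested comparison described in the text.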
In this framework, variations in the regression coefficients between the full and nested models allowed for the interpretation of the impact of health status and structural factors on the effect of the quality measures. The aggregated variables were first analyzed in a classical ordinary least squares regression framework using the Huber-White sandwich estimator to account for heteroscedasticity and clustering as implemented in the rms R package [45]. Hierarchical models including municipality-level controls with random intercepts for municipalities were implemented using a "partial pooling" approach to account for clustering and confounding due to municipal-level factors [46], as implemented in the lme4 R package [47]. Confidence intervals were generated using basic parametric bootstrap resampling. In this analysis, we report our results in terms of standardized regression coefficients. While this allows for direct comparison of the importance of each independent variable in predicting resident satisfaction, it makes interpretation in terms of absolute effects cumbersome. Given the low rates of missing data at the unit level, multiple imputation was not deemed to be necessary, and cases with missing values were deleted list-wise in the relevant models. All statistical analyses were performed using R version 3.5.0, and a reproducible accounting of our reported findings is included as Additional file 1. A number of sensitivity analyses investigating the impact of various model specifications, potential biases due to loss to follow-up, and assumptions made in the main analysis are also included in Additional file 1. Source code and the data necessary to reproduce these findings are available on Mendeley Data [48]. --- Results Data from both surveys (the user survey and the unit survey) were aggregated at the NH level, resulting in 1921 records in the user survey, and 2189 records in the unit survey. 1711 records could be automatically linked based on municipality and NH names, and an additional 87 records could be matched through manual review, resulting in a dataset containing 1798 NHs. An analysis of non-matched records may be found in Additional file 1, p 7-8. An analysis of the association between survey response rates and the investigated variables was performed. We found a positive association between response rates and resident satisfaction, as well as a negative association between response rates and nursing home size, and an effect indicating that private nursing homes had higher response rates (see dropout analysis in Additional file 1, p 8). Generally, residents of NHs were quite satisfied; in the 2016 survey, 83% answered that, overall, they were fairly or very satisfied with the care they received. --- Descriptive data Descriptive statistics were generated for each of the variables included in the analysis, and are presented in Table 1. We found that the average NH in Sweden has space for 43 residents, a resident-to-staff ratio of roughly 3.5:1, a resident-to-nurse ratio of 30:1, and that 83% of non-nurse staff had an adequate level of education as defined by the NBHW criteria. 19% of included NHs were operated by private providers. 80% of NHs offered general care services, while 60% offered dementia care services, and only 5% had assisted living facilities (these sum to over 100% because a single NH can offer more than one type of service).
With regard to municipality-level statistics, we see that about 21% of Swedes are over the age of 65, 4% of whom live in NHs, where the average age of residents is 83. The average annual per-resident cost for the municipality is 838 thousand SEK (around 80 thousand EUR), while average per capita taxable income is 188 thousand SEK (Table 1). --- Regression analysis Figure 1 presents the summarized results of each of the models developed to characterize the independent variables created from the unit survey. Figure 1a presents the results using a classical OLS regression framework, while Fig. 1b presents the results of hierarchical mixed-effects models controlling for municipal-level effects. In terms of overall predictive value, an OLS model including all covariates achieved an adjusted r² of 0.182, while the conditional r² value [49] of the multi-level model containing all predictor variables was 0.254. In the multi-level framework, we found that variation between municipalities accounted for 10% of the total variation found between NHs. A total of 12 processual and structural variables were extracted from the unit survey for analysis as independent variables. Upon analyzing the results, variable groupings were identified post hoc based on similarities with regard to effect sizes and conceptual meanings, which are used to simplify the discussion of our findings, and are labelled on the right-hand side of Fig. 1. The variables in the first group, labelled Individualized care, are all related to the individual care process. They include the variables Participation in resident councils, Individualized care plans, and Meal-related routines and plans. This group had an average effect size of 0.06 in our fully controlled models, and 95% confidence intervals in the main model consistently excluded zero after adjusting for municipality-level covariates. The significance of the variables in this group varied in sensitivity analyses, however (see Additional file 1, p 22-25). The next group, labelled Safe care, includes the variables Patient safety routines, Care coordination routines, and Medication review routines. They are all related to the existence of formal guidelines dealing with various aspects of care. As seen in Fig. 1, none of these variables displayed significant correlations with resident satisfaction. The final group in the processual category consists of only one variable, Availability of exercise and social activity. This variable, labelled Activity, displayed the highest degree of correlation with overall resident satisfaction among the process variables, with an effect size of 0.11 in our fully controlled model, and was robust across a range of sensitivity analyses. Turning to the structural variables, another three variable groups were identified. We identified no significant effects in the OLS model with regard to ownership status. Upon controlling for municipality-level variables, a significant positive correlation with a magnitude of 0.06 in the fully controlled model was found, though the significance of the association was sensitive to variations in model specifications. The Size of the NH was by a clear margin the most important predictor of resident satisfaction in this analysis, with the negative coefficient suggesting that smaller NHs are associated with more satisfied residents.
A small decrease in the effect of this variable could be noticed upon controlling for municipality-level effects, suggesting that larger NHs may be more common in municipalities where residents are, on average, less satisfied with their NH care. The effect of size was robust in our sensitivity analyses. The third group of structural variables included Nurses per resident, Staff per resident and Staff with adequate education, and was labelled Staffing. The group as a whole had an average effect size of 0.05 among the fully controlled models. With the exception of nurse staffing ratios, 95% confidence intervals consistently excluded zero in the main models, but the significance of the effect was sensitive to varying model specifications. Taken together, the results of the analysis presented in Fig. 1 show that the structural measure Size of the NH was the most important predictor of resident satisfaction, followed by the processual Availability of exercise and social activity variable. The effects of the processual Individualized care variables and the structural Staffing variables were similar in magnitude, as was the effect of Private ownership, upon controlling for municipality-level effects. These effects were also sensitive to alternate model specifications. The processual Safe care variables were not found to have any significant association with resident satisfaction. Finally, a comment on the significant effects found among our control variables is in order. In our fully controlled model, self-rated health was found to have a strong positive correlation with satisfaction (standardized regression coefficient of 0.34), suggesting that healthier residents reported considerably higher levels of satisfaction. Among the municipality-level controls, average NH resident age had a positive correlation with satisfaction, and average per capita taxable income had a negative correlation with satisfaction. Interestingly, no significant relationship between the amount spent per resident and satisfaction was identified. Full model summaries, along with a table reporting the data upon which Fig. 1 is based, may be found in Additional file 1, p 12-15. --- Discussion In this study, we investigated a total of 12 variables representing different aspects of care quality reported in the NBHW unit survey. Of these, seven were considered to represent process-related quality, and five to represent structural quality. Our main findings were that the Size of a NH (a structural measure) had the greatest impact on resident satisfaction, followed by the processual measure Availability of exercise and social activities. The processual variables concerning Individualized care and the structural Staffing and Private ownership all had similar, weakly positive, effects on resident satisfaction. The processual Safe care variables had no significant effect on resident satisfaction. We found no clear differences in terms of effect sizes between processual and structural variables. Below, we discuss these findings in order of the effect size identified in our results. The fact that NH size was the best predictor of resident satisfaction suggests that smaller NHs in Sweden had more satisfied residents than their larger counterparts. A recent literature review surveying studies examining the impact of NH size on quality outcomes showed size to be an important predictor of quality, with smaller homes generally having better quality outcomes [15].
None of the 30 studies investigated the relationship between size and resident satisfaction, though five investigated similar composite "Quality of Life" measures. There are, however, some indications that larger nursing homes may be associated with better clinical outcomes such as lower hospitalization risks [50] and lower rates of antipsychotic medication use [51]. NH quality is a multi-faceted concept, and it is not necessarily the case that the determinants of quality will affect all aspects of quality in the same way. As such, while this study does add to the evidence that smaller NHs are associated with the type of "soft" quality which resident satisfaction may be said to represent, the results should not be interpreted as saying anything regarding "harder" measures including clinical outcomes, the determinants of which may be quite different. While size may be an important predictor of satisfaction in and of itself, it is also likely that there are causal mechanisms behind this association which mediate the effect of size. Previous research has for instance indicated that staff turnover may be lower [52] and staff continuity higher [53] at smaller NHs. The findings of this study thus emphasize the importance of identifying the more proximal mechanisms by which smaller NHs generate higher levels of satisfaction. The interpersonal aspects of nursing home care which such measures reflect are, however, difficult to measure, and investigating the mechanisms behind these softer dimensions of nursing home care may require a more qualitative approach. The Availability of exercise and social activities was found to have the strongest association with resident satisfaction among the processual variables. Previous research has found that physical activity-related interventions can improve the subjective health status of NH residents [54], although other studies have found weaker or even negative effects [55]. Our results suggest that, overall, NHs which offer more frequent opportunities for exercise and social activity have higher levels of resident satisfaction. The effect of activity was not diminished by controlling for resident health or NH structure; rather, the effect increased slightly, suggesting that the provision of such activities may be even more important at NHs with poorer structural preconditions, particularly with regard to facility size. Three other variable groups had weaker effects with regard to resident satisfaction: Individualized care, Private ownership, and Staffing. The Individualized care variables included participation in resident councils, the use of individualized care plans and the use of meal routines. We identified no previous research regarding the impact of resident councils or the use of individualized care plans on satisfaction in the literature, though Lucas et al. [20] did identify a positive impact of similar "family councils". Our findings suggest that these quality improvement measures may indeed be associated with higher levels of resident satisfaction, although more directed studies are necessary to confirm this. There is some evidence that interventions to improve meal-related processes are effective [56,57], and our results are consistent with a positive impact of such improvements on resident satisfaction. The structural measures related to staffing had effect sizes similar to those found among the processual individualized care measures. Staffing as a determinant of care quality has been well researched.
In a review of 70 articles, Castle [58] found a preponderance of evidence suggesting that increased staffing levels are positively associated with several measures of NH care quality. More recent studies by Castle and Anderson [59], Hyer et al. [60], and Shin and Hyun [61] point to similar results. However, none of these studies investigated effects on resident satisfaction. We found that both non-nurse staffing ratios and education levels were associated with resident satisfaction in all models, while nurse-to-resident ratios were significant upon controlling for municipal-level factors, and effect sizes were reduced upon controlling for other structural factors. Our results are thus consistent with a positive relationship between staffing levels and NH care quality. Regarding the effect of ownership, the main results suggest a higher level of resident satisfaction among privately operated NHs after controlling for municipal-level covariates. That is to say, while there was no overall difference in absolute levels of satisfaction, a difference could be identified once we took into account that public and private NHs are not evenly distributed across Sweden and accounted for the effects of this non-uniform distribution (in effect comparing NHs within the same municipality). The somewhat counter-intuitive effect could, at least in part, be explained by the tendency of private care providers in Sweden to establish themselves in municipalities with higher income levels, where resident expectations may be higher. This supposition is supported by the finding that average per capita income had a significant negative association with resident satisfaction (see Additional file 1, p 17). The significance of ownership status was not robust in sensitivity analyses, however, and as such constitutes quite weak evidence for the superiority of private over public nursing homes with regard to resident satisfaction. While we found no association between measures of safe care and resident satisfaction, it stands to reason that the processes which these measures represent (e.g. the performance of regular medication reviews and the existence of care coordination plans) are not immediately visible to residents, and are thus less likely to influence satisfaction. Studies investigating the impact of these measures on clinical outcomes may well find that they do have an effect with regard to quality in that respect. Taken together, the findings of this study indicate that NH residents are more satisfied in smaller NHs and in NHs with frequent opportunities for physical and social activity. Only weak effects were identified with regard to processual individualized care measures, private nursing home ownership, and staffing levels. Formal routines had no detectable effect on the satisfaction of residents. Another contribution of the study is the comparison of the effect of structural and processual variables on satisfaction. In contrast to a previous study on Swedish NH care [28], this study did not lend support to any firm conclusions regarding the superiority of one type of quality measure over the other. Rather, it was demonstrated that both structural variables such as size, staffing and ownership, and processual variables including individualized care and activities play a role in determining resident satisfaction.
The difference in results between the two studies could be explained by the fact that the processual and outcome variables in the Kajonius and Kazemi study were both drawn from the resident survey (which we found upon factor analysis to be highly inter-correlated), while the structural variables they were compared with were drawn from a separate statistical database lacking this overall correlation. It is thus likely that the differential effects identified by Kajonius and Kazemi are an artefact of how the authors chose to operationalize the processual and structural measures. Furthermore, in that study, data were aggregated at the municipal level, meaning that only differences in resident satisfaction between municipalities were investigated, which we found to account for only 10% of the total variation in satisfaction between NHs. --- Strengths and limitations This study was a secondary analysis of two nationally representative surveys collected for quality improvement purposes. A strength of the study is thus that the results are likely to generalize well to other contexts similar to that of Sweden, and the wide scope of these surveys allowed us to investigate and compare a broad range of factors. A limitation of the study was that the validity and reliability of these surveys have not been established in the publicly available literature, although the NBHW has analyzed the impact of loss to follow-up in the user survey [62], and performs ongoing internal quality assurance of the surveys it conducts. Another risk involved in the secondary analysis of data is the proliferation of "researcher degrees of freedom" arising from the numerous decisions which must be made in transforming and analyzing such data [63]. To ameliorate these risks, we sought to define our analysis strategy a priori, and provide the resources necessary to fully reproduce our results [48]. Another limitation is that the aggregate data used in this study preclude the interpretation of results in terms of individual-level effects, and readers must be careful to not commit the "ecological fallacy" of interpreting effects operative at the NH level as applying to individuals. Among other simplifying statistical assumptions including those of additivity and linear effects, we assumed that each question in the survey was equally important to residents in generating the composite measure used as the dependent variable in our analysis. Weighting each question equally would seem to be a reasonable assumption to make in the absence of evidence regarding resident preferences, and the main findings regarding nursing home size and availability of activities were robust to a range of sensitivity analyses and alternate survey question weights. It was common for the satisfaction surveys to be completed with the assistance of third parties, which could potentially influence reported outcomes, and while the rate of missing data was too high to include this variable in the formal analysis, a subgroup analysis of homes reporting data on this variable may be found in Additional file 1, p 21-22. Based on our findings, we do not expect this factor to be a threat to the validity of our results. We also analyzed the associations present within the user survey data between NH-level response rates and the quality measurements reported in the study. We identified a positive correlation between response rates and satisfaction rates, as has been found in previous studies of this phenomenon [64,65].
We also identified effects suggesting that response rates were higher at smaller nursing homes, and at private nursing homes (see Additional file 1, p 8). Previous studies have suggested that low response rates are likely to result in an over-estimation of satisfaction [64]. As such, bias resulting from the systematic differences in response rates would likely be in the direction of underestimating the association of size and private ownership with satisfaction. --- Conclusions Of the quality factors investigated, NH size had the most prominent association with satisfaction, followed by the availability of exercise and social activities. Processual measures relating to individualized care, such as participation in resident councils and the formulation of individualized action plans, had a weak association with resident satisfaction, as did other structural factors such as staffing ratios and staff education. The results also suggested that privately managed NHs had a slightly higher level of resident satisfaction, though the effect was similarly weak and appeared only after adjusting for municipality-level covariates. The results in this study suggest that both structural and processual quality factors matter in determining resident satisfaction, with NH size and the availability of exercise and activities having the greatest impact. --- Implications for policy and practice While the findings in this study suggest a direct link between offering more activities and a higher rate of satisfaction, more research is needed to determine why residents appear more satisfied at smaller homes. It may be that the proximal causes of satisfaction at smaller NHs could be replicated at their larger counterparts, for instance by improving staff continuity and reducing turnover. If so, this could be a cost-effective alternative to building smaller nursing homes. Qualitative studies using methods such as interviews and participant observation may be most appropriate to investigate such effects in more depth. Another policy implication is that activities for residents should be a priority in NH care, and in cases where NH care is contracted out, offering physical and social activities should be a requirement. --- Availability of data and materials All data used in this study are publicly available. The data and code used to generate these results are available on Mendeley Data at https://doi.org/10.17632/y69zhgxym3.2
Background: Resident satisfaction is an important aspect of nursing home quality. Despite this, few studies have systematically investigated what aspects of nursing home care are most strongly associated with satisfaction. In Sweden, a large number of processual and structural measures are collected to describe the quality of nursing home care, though the impact of these measures on outcomes including resident satisfaction is poorly understood. Methods: A cross-sectional analysis of data collected in two nationally representative surveys of Swedish eldercare quality, using multi-level models to account for geographic differences. Results: Of the factors examined, nursing home size was found to be the most important predictor of resident satisfaction, followed by the amount of exercise and activities offered by the nursing home. Measures of individualized care processes, ownership status, staffing ratios, and staff education levels were also weakly associated with resident satisfaction. Contrary to previous research, we found no clear differences between processual and structural variables in terms of their association with resident satisfaction. Conclusions: The results suggest that of the investigated aspects of nursing home care, the size of the nursing home and the amount of activities offered to residents were the strongest predictors of satisfaction. Investigation of the mechanisms behind the higher levels of satisfaction found at smaller nursing homes may be a fruitful avenue for further research.
--- Supplementary information Supplementary information accompanies this paper at https://doi.org/10.1186/s12913-019-4694-9. Additional file 1. Analysis_notebook. This document provides additional details regarding the factor analysis undertaken to reduce the dimensionality of the data prior to regression analysis, additional details regarding the main analysis, and a number of post-hoc analyses undertaken to evaluate the sensitivity of the findings and to investigate a number of interesting findings suitable for pursuit in further research. Additional file 2. Survey_questions. This document details the specific questions from the two NBHW surveys constituting the aggregate variables included as independent variables in the regression analysis reported in this manuscript. Abbreviations FAMM: Five Aspects Meal Model; IQR: Inter-Quartile Range; NBHW: National Board of Health and Welfare; NH: Nursing Home; OLS: Ordinary Least Squares regression; SEK: Swedish Krona Authors' contributions DS, UW and PB conceived of and designed the study. DS performed the analysis and drafted parts of the manuscript. YL performed data cleaning, record matching, and drafted parts of the manuscript. All authors provided substantial input and revisions, and approved the final manuscript. --- Funding The study was funded by the Swedish Research Council for Health, Working Life, and Welfare (FORTE), dnr 2014-05134. The funding body had no role in the design of the study or collection, analysis, and interpretation of data or in writing the manuscript. Open access funding provided by Uppsala University. --- Ethics approval and consent to participate This study was approved by the Uppsala regional ethics review board (dnr 2017-342). A waiver of informed consent was granted by the review board. --- Consent for publication Not applicable. --- Competing interests The authors declare that they have no competing interests. --- Publisher's Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Introduction Worldwide, nearly 20 million children suffer from severe acute malnutrition (SAM). Every year, half of the global childhood mortality is caused by malnutrition and one-third of these deaths are caused by SAM alone [1]. South Asia and Sub-Saharan Africa show the highest rates of underweight and stunting [2,3]. Almost 78% of wasted children live in the three South-Asian nations of Pakistan, Bangladesh, and India [4]. As every sixth person in Pakistan lives in poverty [5,6], the rate of child malnutrition in the country is higher than that of other South-Asian nations [7]. Previous evidence around the globe shows that supplementary programs have reduced moderate and severe acute malnutrition in children [8][9][10][11]. Recently, governments across the world have adopted multisectoral strategies to address the problem of malnutrition. These strategies combine nutrition-specific and nutrition-sensitive indicators. However, evidence shows that multisectoral strategies have remained less successful in achieving the desired results [12,13]. In Pakistan, a nutrition-specific Community-based Management of Acute Malnutrition (CMAM) therapeutic program was set up in Southern Punjab's poverty-stricken and flood-affected districts to deal with SAM. Under the CMAM program, moderate as well as noncomplicated SAM cases are treated with Ready-to-Use Therapeutic Food (RUTF), whereas complicated SAM children are referred first to the nutrition Stabilization Center (SC) by Lady Health Workers (LHWs). Once complicated SAM cases are stabilized using specialized therapeutic milk (F-75/F-100), they can transition to RUTF. Ingredients in RUTF depend on local acceptability, availability, and cost, but a standard RUTF is made up of milk powder, peanut butter, vegetable oil, vitamins, minerals, and sugar. The advantage of the product is its long shelf life without refrigeration; its drawback is that it must be imported, as it is not produced locally. A sufferer's experience is a social product shaped by structural violence, which may be defined as violence "built into the (social) structure and shows up as unequal power and consequently as unequal life chances" [14] (p. 171). Kawachi et al. [15] observed that unequal health hazards for individuals are the product of social, economic, cultural, and political processes in society because health outcomes are curtailed by the exploitative apparatuses of resource distribution, power, and social control. Structural violence is indirectly exercised by different parts of the social machinery of oppression and is apparently "nobody's fault" [16]. Similarly, Quesada, Hart, and Bourgois [17] (p. 339) defined structural vulnerability as "a product of class-based economic exploitation and cultural, gender/sexual, and racialized discrimination and processes of symbolic violence and subjectivity formation that have increasingly legitimized punitive neoliberal discourses of individual unworthiness". State institutions and programs ignore certain individuals based on caste, gender, and class, thus subjecting them to indirect violence. This often results in the failure of development programs [18], and children and mothers with lower social and cultural capital bear the brunt of this structural violence [19]. As these development programs often fail to achieve their stipulated targets, a lasting impact of such programs is that the poor in the target population become indifferent to similar interventions in the future.
They tend to deprioritize health and normalize disease and malnutrition [20] rather than seeking to benefit from government intervention, owing to their negative experiences of these programs. The legacies of underdevelopment, stigma, and discrimination, along with insufficient public healthcare systems, lead to poorer health outcomes for rural poor and ethnically marginalized households. State institutions and development and poverty-alleviation programs often ignore individuals who are poor, rural, or of lower castes [21]. Inequalities based on caste, gender, and class in South Asia have undermined development programs by marginalizing poorer and weaker members [18,19], resulting in maternal and child health disparities [19]. In South Asia, the poor often face difficulties becoming beneficiaries; therefore, evidence [22] suggests that the area, gender, caste, and class determinants of social exclusion must be considered when defining program objectives, client eligibility criteria, and the selection process. Social capital is required to access medical settings [23]. In addition, studies have shown many parallels between corruption in government medical settings in Pakistan and India [24][25][26]. This study presents the narratives of healthcare providers and of mothers of SAM children seeking treatment from the therapeutic program in the Rajanpur district of Punjab province, Pakistan. This qualitative study contributes to the literature by describing the barriers and resources encountered when accessing nutrition-specific services. The study focuses on the issues of health sector corruption, structural inequalities, and the role of social capital. It adds to critical medical anthropology and the public health literature. It also investigates challenges and barriers to health and therapeutic coverage, why the government lacks interest in the implementation of the nutrition-specific program, and how the poor are generally excluded. --- Materials and Methods --- Data Collection The qualitative data for this study were collected during fieldwork in the Rajanpur district of South Punjab from January to May 2017. This area was selected purposively because it was flood-affected and poverty-stricken, and had the highest female illiteracy and maternal-child malnutrition rates in the whole province. Development infrastructure such as healthcare facilities was also scarce, and rural poor women faced disparities. This exploratory study was based on a purposive selection of key stakeholders involved in the CMAM program, including healthcare providers and mothers of malnourished children (Table 1). After reviewing the available literature and using keywords such as social barriers and structural challenges to therapeutic coverage [10,[27][28][29], a semi-structured interview guide was developed, which was pre-tested with a few respondents and also updated from time to time whenever more information about the issue was revealed during the fieldwork. Exploratory research, as a methodological approach, investigates those research questions that have not previously been studied in depth. Exploratory research is often qualitative, involving a limited number of respondents, but is in-depth in nature. Therefore, only the most relevant stakeholders were interviewed: healthcare providers first (supply side), because they were expected to be cooperative in introducing the other key stakeholders, i.e., mothers of malnourished children enrolled in the therapeutic program.
Thus, in the next phase, mothers of malnourished children (demand side) were interviewed for this study. First, Key Informant Interviews (KIIs) with key officials of the District Health Authority were conducted face to face by the principal author, who has experience in public health nutrition and knowledge of medical anthropology. Second, a Focus Group Discussion (FGD) with LHWs was conducted in a healthcare facility by the two qualitative researchers (F.A. and S.Z.). A maximum of 10 participants were allowed to take part in the group discussion. Participants in this discussion were asked about the major difficulties, barriers, and challenges that hampered therapeutic coverage at the district level. Finally, healthcare providers helped to identify and communicate with mothers of SAM children. The mothers of malnourished children were identified by the Nutrition Assistants appointed at SCs and by LHWs involved in the CMAM program. To seek consent to take part in this research, 30 mothers were informed about the nature of the study; however, only 20 mothers consented. We deliberately chose interview locations in which respondents felt safe and comfortable. Audio recorders were not used, owing to cultural sensitivity and to keep participants comfortable. In-Depth Interviews (IDIs) followed a flexible format, ranging from one to two hours. All interviews were conducted face-to-face in the local language (Seraiki). The open-ended in-depth interviews continued until experiences and essences were repeated and information saturation was achieved, after 10 mothers (Table 1). The majority of the mothers of SAM children were either uneducated or had a few years of schooling, along with minimal socio-cultural capital and disadvantaged economic status (i.e., less than USD 100/month). --- Data Analysis The researchers promptly translated verbatim all the qualitative data obtained from group discussions, semi-structured interviews, and field notes from the local language into English. Then, we reviewed all the available raw data and labeled sentences and text with different colors and codes to identify common meanings. After this, we grouped similar codes to create broader categories. Next, we cross-verified the narratives and removed inconsistencies, vagueness, and discrepancies. Lastly, codes and categories were analyzed and different themes that affected therapeutic coverage were identified using inductive methods. In total, seven prominent subthemes emerged from the exploratory qualitative data. In the end, all conspicuous challenges, barriers, and difficulties were assembled into five leading themes: (1) politico-economic or financial, (2) administrative and planning, (3) logistical, (4) social or cultural capital, and (5) behavioral or interactive (see Figure 1).
--- Ethical Considerations The ethical approval for this study was obtained from the Advanced Studies and Research Board (AS&RB) of Quaid-e-Azam University Islamabad at its 307th meeting, held on 20 October 2016. The board approved the endorsement of the Dean of the Faculty of Social Sciences to accept this qualitative and ethnographic research in the Department of Anthropology. In addition, the Department of Health, District Rajanpur, also approved the study protocols and tools. All participants were thoroughly informed about the nature and purpose of this study before their formal consent to be part of this exploratory qualitative research was obtained. As the majority of mothers were illiterate, oral consent was given according to their wishes and comfort. After taking informed verbal consent from all study participants, we promised to ensure their anonymity, privacy, and confidentiality. --- Results Our overall qualitative findings revealed multiple financial, administrative, logistical, and behavioral difficulties that challenged the CMAM therapeutic program for the treatment of severely malnourished children in the Southern Punjab region of Pakistan. --- Financial Barriers Health priorities at the micro-level are influenced by macro-level incentives. Funding for different national or provincial health or nutrition programs determines the focus of health staff. --- Funding and Priorities of Health Bureaucracy The national Polio Eradication Program, as the most favored program, was prioritized by the health bureaucracy. The health department devoted most of its energy to this program and deprioritized others. "Although the nutrition program has been functional for many years, the staff isn't free to run this at the district level. The health office gives importance to their routine matters and does not let this kind of vertical program be implemented in full scale and strength". (Nutrition Official, KII) --- Work Burden on LHWs LHWs act as the link between the community and the health department; therefore, they were involved in almost every program, whether provincial or national. They frequently complained that they faced extra work pressure and burden, particularly from the Polio eradication program. Their primary duty was to cover, and coordinate with, more than two thousand pregnant and lactating women in their assigned outreach areas. Over-involvement in other programs reduced their concentration on their original maternal and child health work and led the health department to deprioritize nutrition activities. "LHWs are involved in other programs, especially Polio. After working three to five days in the Polio campaign, an LHW would not go into the field because she is already tired. Similarly, in Measles, an LHW is fully engaged for 12 days, becomes so fatigued that she rarely visits the field for some days, and demands rest.
When the department demands such extensive work, how can she fill the large gaps created in the nutrition program? This pressure is constant; Polio and other activities are never-ending". (LHW, FGD) "The availability of funding in the Polio eradication program was the leading reason why the Punjab health department always engaged LHWs in it at the expense of other important programs, whose funding was low or nonexistent. It was owing to this that LHWs were always out administering Polio drops and skipped nutritional screening and education". (LHW, FGD) In Southern Punjab, several LHW posts were vacant according to district health information system reports. Out of a total of 900 posts, only 650 were filled, and LHWs covered only 44% of the district's population. LHWs were dissatisfied with their low salary packages and other allowances. Logistical and cultural hurdles, along with the extra workload, jointly restricted their will and motivation. On many occasions, they treated this duty as no more than a formality, simply because they could not refuse orders from the department. As a result, given the low salaries and poor economic incentives, they did not visit their assigned households regularly. Many LHWs were not well trained in the anthropometric measurement of mothers and children for screening purposes. Unfortunately, in the least developed areas, LHWs were either not appointed or remained absent. Many of these LHWs nevertheless claimed that their performance was perfect and tried to justify their role, always reporting that everything was going well. One official remarked: "LHWs are called almost every week, sometimes for meetings, sometimes for training, or sometimes for another task. She has to maintain and carry multiple registers. I mean, it's a serious matter that needs to be seen and fixed. The patients from remote rural and tribal areas are missed; SAM cases are from remote areas, where there is a water problem, and access is limited. So cases mostly come from rural areas". (Health Official, KII) --- Administrative and Planning Failures The training of field staff, the screening and referral of SAM cases, and the distribution of therapeutic food are compromised by the weak administration of the program. --- Improper Utilization of Nutrition Field Staff: Lack of Training In 2008, the Government of Punjab recruited Health and Nutrition Supervisors at the BHU level to screen the community and train it on common health and nutritional issues. However, many remote BHUs lacked them, as there was no infrastructure. Since their creation, they have barely taken part in any significant nutrition intervention in the district. Their role in CMAM was never acknowledged until recently, when the provincial multisectoral nutrition center (MSNC) anticipated their future participation in Punjab in a 2017 report. They were not fully trained on nutritional issues and, hence, lacked relevant knowledge about the causes and treatment of malnutrition. It was reported that an international organization, the Micronutrient Initiative (MI), had trained them on the importance of the micronutrient iodine for mothers and children. These supervisors were therefore mostly assigned monitoring duties for the Polio, EPI, and dengue prevention programs instead of nutrition. However, they were properly trained on malnutrition for the first time in 2017, nine years after their recruitment, which showed the lack of coordination and the absence of a precise job description.
This also showed a weak commitment to combatting malnutrition, lack of vision, and relevant policy failures. The staff appointed at remote health units rarely performed duties because of insecure environments, a lack of monitoring mechanisms, dilapidated hospital buildings, damaged roads, and a lack of transport facilities. These isolated areas are those where more attention is needed. Most recently, new district coordinators were recruited by the multisector nutrition center who also need training on nutrition issues. "There are gaps....as the district coordinator of the malnutrition addressing committee has only one or two meetings with the Deputy Commissioner of the district. Also, MSNC established by the Planning and Development Commission of Punjab province has recruited district coordinators, but they are new and have no significant work to do. Nutrition supervisors are also not so trained and involved, nor can they help measure and refer malnutrition cases, but their involvement is limited to the polio program. Although all these have been appointed, they have no work to do, except work on special weeks. Recently, we called nutrition supervisors on nutrition week. They were assigned to distribute multi-nutrient sachets in their schools as area in-charges, but they are not really in much coordination". (Health Official, KII) --- Weak Referral, Indifference, and Interpersonal Conflicts among Staff Intrahospital or staff interpersonal politics at the Basic Health Units (BHUs) level emerged as one of the most significant reasons behind the weak referral of SAM cases to the SC at District Headquarters Hospital (DHQ). It was remarked that: "The cases which reach DHQ without a referral are admitted right away, but SAM referral is constrained and slow, especially, people from remote rural areas are in great need because of the weak and poor referral system to the Stabilization Centre. Every month LAMA (who quit treatment) cases are increasing; 4-5 SAM cases are admitted daily, totaling approximately 120-150 in one month. Most of these cases are located at the basic health unit (BHU) level. For the treatment of SAM, it is very difficult to screen a child with a complication from the field by these LHWs through Mid Upper Arm Circumference (MUAC). LHW refers these SAM cases to Lady Health Visitor (LHV) who has to verify MUAC and complications, and forward complicated SAM cases to DHQ by an "1134 ambulance service". (Nutrition Official, KII) After the anthropometric screening, LHWs generally referred malnourished mothers and children to BHUs and Rural Health Centers. However, many mothers were kept waiting unnecessarily by Lady Health Supervisors (LHSs) appointed at BHUs. Many poor and illiterate mothers left health units because they felt they were being ignored, unattended, and devalued by these LHSs. "LHW and LHV are often at odds with each other. Sometimes LHS dislikes an LHW, who insists on checking children immediately. Every LHW expects that she has hardly convinced and referred parents of SAM case to BHU, so now LHS should give priority so that it could be further referred to Stabilization Centre at DHQ. LHS asks LHW to 'wait outside' and does not attend to the case even after hours. This is how SAM cases leave hope for treatment and run away, and this is why referral of severely malnourished children with complications is minimum. 
However, a child specialist and nutrition staff, specified for this work only, are readily available at SC; therefore, SAM cases are measured and admitted without trouble. However, people from only nearby areas can reach directly to SC, but cases from remote areas have to be ignored". (Nutrition official, KII) --- Lack of Monitoring and Medical Corruption The presence of formula milk companies inside hospitals and the sale of not-for-sale RUTF were two significant factors. Although banned theoretically, representatives of the multinational formula milk were reported to move freely in the SC, BHUs, and RHCs for advertising and selling formula milk to the poor parents of severely SAM children. "Soon after recovering from complicated SAM, mothers were motivated to try their products. The company trains its agent to remain alert and keep an eye on every person monitoring and conducting research. They are well trained in rapport building with medical staff and patients' attendants for convincing them to use their products after the advertisement. Nobody ever restricted such active advertisement and sale". (LHW, FGD) Therapeutic food was reported to be sold out at the hands of some LHWs. These packets of therapeutic food are not for sale. It was informed by community members that the Plumpy-Nuts were being sold out at some places at the price of PKR 20-30 per sachet by a few LHWs. A mother indicated: "I requested our LHW to give some food but she refused. I threatened one such LHW who used to sell it by saying, 'give some sachets for my son, or else I would complain against you that you sell off the therapeutic food illegally.' Never were any actions taken against such complaints by the concerned authorities". (Mother of SAM child, IDI) "The distribution of therapeutic food is not altogether transparent and fair. Health staff often prefer and prioritize their relatives and close ones first whenever the task of providing therapeutic food is given to them". (Mother, IDI) --- Lack of Social and Cultural Capital among Poor Mothers Relationships with those who control power, access to information, and interpersonal skills necessary to communicate are essential requirements for becoming beneficiaries of development programs. --- Rural-Urban Disparities: Accessing Therapeutic Program When asked whether the field staff visited your area or household and what were the impacts of therapeutic food, most respondents agreed that the milk provided at the stabilization center and RUTF had a good impact on the sick child. The majority of the enrolled mothers in CMAM showed that their children were recovering gradually. In their opinion, the specialized medical milk (75/100) and RUTF brought a positive impact on their severely malnourished children. When asked how mothers came to know about the treatment of severely malnourished children at stabilization centers or CMAM, most parents revealed that they were referred by the medical community or people from urban centers told them about this program and suggested visiting the nutrition Stabilization Centre at DHQ to obtain special milk (75/100) for malnourished babies. "Doctors, LHWs, and active community members helped to refer us to the CMAM program and SC, for therapeutic 75 milk for the severely malnourished baby". (Mother closer to the city area, IDI) "LHWs visited our area and told us to bring milk from CMAM staff; vaccinators also visit and inform us about the program". 
(Mother from Peri-urban area, IDI) "LHWs do not visit our area, but vaccinators do once a year so we sometimes bring our children to the hospital for immunization and sometimes not. People from the city informed us about this program; they suggested us to visit Stabilization Centre because milk [75/100] was being distributed there". (Mother from the remote village, IDI) Only a few parents reached the SC without any referral, which implies how different local forms of social capital or relationships helped mainly urban or peri-urban families to become beneficiaries and isolate and seclude the majority of the most deserving remote, rural, illiterate, and lower-income families with lower social capital. "We, the females, are carrying this unfortunate child without any help from other family members. I am a mother, how can I leave him alone in this condition, only my heart knows how much disturbed I am. No one can realize the state of my heart; I cannot see my child suffer. I am in profound psychological distress. When will my child feel normal and healthy, I don't know. I have tried my best to make him healthy and nourished. We have wandered everywhere, here and there, to find if someone could suggest a better way. Recently a person from our neighborhood informed us about this program, I requested my mother to test this place [Stabilization Center] too". (Mother from the remote village, IDI) --- Logistical Difficulties Treatment of complicated SAM children requires their mothers and other caregivers to stay at the SC for some days until the child is stabilized and can come to the simple RUTF stage. However, most mothers complained they had to leave medical advice due to logistical hurdles. --- Geographic Seclusion: Difficulties in Traveling The poorest of the poor mostly live in risky, far remote, and underdeveloped areas. Geography is one of the central causes of inequities in health and nutrition. Results showed that distance emerged as a substantial barrier to coverage and access to health and nutrition programs. Logistical problems emerged as the most significant reasons for low access. The bad transportation, long travel times, damaged roads, and long distances to the site were the major determinants of little coverage. Females are less empowered in these settings due to the lowest access to healthcare facilities and literacy and employment opportunities. One mother stated, "We are tired and we still have to travel". The mother informed that they reached the stabilization center after much difficulty and running errands: "The [Nutrition stabilization] center is very far from our village, and it took hours to get there. We had to catch several types of transport; the first motorcycle from our community to another town, then an auto-rickshaw to the main highway. After it, we had to catch a bus from the road to reach the district bus stand. From the bus stand to the hospital, we had to hire an auto again. After wandering here and there madly in the hospital building, we reached the stabilization center by asking for addresses with the help of so many people. We got tired when we arrived here, and we still have to travel, we'll have to go back home as it is not allowed to stay without permission". (Mother of complicated SAM Child, IDI) --- Problems Related to Staying at the Stabilization Center Many mothers insisted on the hospital staff that they wanted to treat the complicated SAM children at home. 
SC staff had objections to this idea because the condition of the severely malnourished children was unstable, and they were required to stay until the children became stable in the center. The Punjab government previously claimed to set up SCs at the sub-district level, but activities at the SCs were being limited at the district level, and at any time, the program may come to an end. UNICEF in Pakistan has recently intended to study the bottlenecks in the CMAM program. This indicates that these programs are still under the control of UN agencies and the government lacks ownership. "Convincing parents about the treatment at SC is a very complex task. Mental preparation of family and parents is essential for this because a mother or someone from the family has to stay for at least four days. They have to prepare their basket or bag". (LHW, FGD) The other strong reason for low therapeutic coverage was the loss of income if the mother and father were to stay at the SC receiving the treatment for only one severely malnourished child and ignoring the rest of their children. This made them indifferent to complete treatment. Therefore, most grandmothers had to stay at the SC. Mothers could not stay longer, because no one could take care of the rest of the children at home. During crop season, poor rural mothers could rarely afford to give proper time for treatment and health-seeking. Some domestic servants also complained about working hours. As they could not escape from their duty, they delayed check-ups and treatments of complicated SAM children. If mothers had to stay, they had to bring all of their kids along with them to the SC at the district headquarters hospital. As children were unaware of cross-infections at the sites, they were playing in the hospital's wards, touching the floor with their hands, and eating foods there without handwashing. --- Behavioral Problems with Nutrition Staff Another critical factor of low coverage of the therapeutic program in rural and Southern districts of Punjab province in Pakistan involves the elements of stigma, respect, and dignity. --- Stigmatization of Patients and Attendants Many poor parents felt stigmatized and complained of being unattended at the hands of the hospital and nutrition staff. Illiterate people with low socioeconomic status had low confidence to communicate with hospital staff and feared being insulted by the doctor and staff. The behavior of the staff was not supportive. Sometimes staff felt irritated by the poor's dirty clothes. CMAM staff was often reported to have been rude to mothers of severely malnourished children. Multiple times, mothers indicated taunts and offensive remarks and the mothers felt ashamed of this embarrassing situation. For example, on one occasion, a nutrition assistant at the SC vocalized to a mother, "you are always here to get this milk". Once, a female nutrition staff member threw the packets of formula milk 75 toward a mother in a very disgusting mood and said angrily, "hold this packet and get out". In another instance, when a poor mother brought her child to the stabilization center for the treatment of SAM, the on-duty staff responded "take your dirty luggage from here; it smells stinky". --- Not Being Attended Complaints about not being attended to by low-income parents were much more common. Mothers explained how the staff at the nutrition stabilization center was indifferent, careless, and rude. "We would wait all day and night, but no person attended a little. 
The sick child used to cry all night as they would give our child nothing to eat and drink. We were worried when the doctor and staff would pay attention to our child. Leaving such treatment [of indifference and disgust] would be better than just wasting time [in wait] here". (Mother at SC at DHQ, IDI) "My husband said to SC staff'my child is hungry, and you pay no attention. I do not want to leave my sick child as hungry all night.' Nurses complained about my husband to the head doctor, who called him and insulted him. My husband got disheartened and finally decided to quit the treatment at this center". (Mother at SC at DHQ, IDI) --- Discussion This study discovered mothers' interactions with the biomedical treatment and therapeutic system of the CMAM program and nutrition stabilization center. It specifically explored how poor, illiterate, and rural women were often incapable of navigating the therapeutic coverage and politics with institutions. These difficulties were perilous for many women, mainly from remote and secluded areas, who were illiterate and lacked the required minimum cultural assets and social skills to negotiate the complex and unfamiliar setting [29]. Women's communications with the health and nutrition staff illuminated how the administration strengthened health and nutrition inequities. Barriers related to geography, income, fears of maltreatment, and discrimination emerged as most striking and significant for the rural poor struggling to receive therapeutic care through the public healthcare system [20,30]. Many families could not access the CMAM program, owing to multiple socio-cultural and logistical reasons [27,28]. The staff of development programs often secluded poor mothers and children due to multiple power dynamics [22,23,31]. Families, who had some links within local power circles (social and cultural capital), received better chances of coverage. To combat the problem of malnutrition, the government needed to change priorities. At the primary level, deprioritization of the "nutrition program" in comparison with the "Polio eradication program" resulted because of heavy international funding for the latter. This suggests that the government must increase funds for nutrition [32]. Further, the burden and pressure on the LHWs must also be curtailed by focusing their attention on maternal-child health and nutrition programs. In remote areas, seats for the LHWs ought to be urgently allocated [33]. Nevertheless, all these steps require road construction and infrastructure provisions at basic health facilities. Human development infrastructure at the local level is also required in South Punjab, which is always facing regional or ethnic inequalities [34]. The poor rely on traditional treatment methods because of their low income. Stigmatization and the trust deficit of the poor in government departments are strong indicators of low biomedical service utilization along with expensive and uncontrolled private clinicians' prevalence that need urgent policy decisions. This study showed that the medical staff did not care much about the marginalized victims of social stigma and ignored the poor's feelings [35]. The future design and implementation of government programs must be made more socio-culturally sensitive. Plumpy nuts are effective only in emergency contexts but not in chronically poor settings, and such programs also create a dependency of low-income states on international companies, which prepares such foods. 
In addition, therapeutic food is not available in usual and regular circumstances, even though people need it. The permanent solution, therefore, lies not in treating the individual body but in searching for a cure for a social body through political-economic means of social justice and equity [36]. Evidence showed that nearly half of the population in several rural districts was not covered by LHW, especially in the most remote and the poorest areas [37,38]. UNICEF [39] has highlighted that the neonatal mortality rate was reduced in low-caste groups where LHWs made weekly visits in rural Indian Punjab. Recent studies [40,41] similarly demonstrated that sufficient training, financial compensation, and close supervision of community health workers are imperative for the successful delivery of SAM treatment along with the adequate quantity of ready-to-use therapeutic food. Some respondents revealed that therapeutic food was being sold off by LHWs, and representatives of formula milk producers were free to move into hospital settings. There is evidence that formula milk companies ignore the laws and continue marketing their products inappropriately [42]. The literature from Pakistan and India shows that corruption within medical settings restricts government services [26,43]. While drawing upon the anthropology of the state along with the perspective of structural violence, Gupta found that funds hardly reach their anticipated beneficiaries but mostly reach people with political acquaintances, cultural capital, and financial influence [44]. Inaccurate systems of information based on statistics, conflict, and wide-scale corruption in Indian bureaucracy systematically isolate and ignore the poor. Similarly, examining the "Government of Papers, in Pakistan", Hull [45] analyzed how the bureaucratic processes and management of records crafted partnerships among people as the core apparatus and governing emblem of the official measurement of bureaucracy. For him, papers should be seen "as mediators that shape the significance of the linguistic signs inscribed on them" [45] (p. 13), which shows that postcolonial bureaucratic records are materialized under the colonial policy of keeping government and society isolated. Many poor, illiterate, and rural mothers indicated that they faced rude behavior and stigma in medical settings. In a study in the Kenyan context, analogous shame, stigma, and discomfort at health clinics related to malnutrition and fear of mistreatment at the hands of the biomedical staff were noted as the most significant barriers to treatment for childhood acute malnutrition [46], which potentially constrained their access to the CMAM program. Chary et al. [20] argued that childhood diseases are treated incompletely because of the perception that the child is not being attended to. They linked the phenomenon of "not being attended" with healthcare inadequacies. Our findings showed that mothers faced logistical difficulties. Evidence in Guatemala similarly showed that poor women suffered from running errands [27]. Similar evidence showed that therapeutic programs in five African countries failed because of the low awareness about the program, long distances, the handling of rejection at sites [29][30][31], and the centralization of the program [47]. 
The study's findings corroborate that distant communities remained at a disadvantage in being covered by therapeutic programs, particularly for the treatment of complicated SAM, because caregivers had to stay for many days at the therapeutic center [29], which is most often adjacent to the children's hospital. Evidence [48] from the adjacent Sindh province of Pakistan also showed that remote areas were less exposed to the therapeutic program, and that the common barriers included low awareness of malnutrition and its services, children's dislike of RUTF, long distances, and high opportunity costs. This study also found that remaining in the program until full recovery was difficult.
This article examines mothers' interactions while accessing the nutrition-specific CMAM program. In doing so, it proposes that a "politics of neglect" is at play in these programs, neglecting the social body and the poorer sections of society in the program's target areas. These interventions do not consider processes of power and exploitation and ignore complex and unequal social relations. The narratives showed how the poor often faced structural inequities and social exclusion due to a lack of social or cultural capital. Evidence showed that the poorest of the poor and low-caste families with the lowest social capital in Punjab were excluded from the cash transfer program (a nutrition-sensitive program) at the will of local political leaders [49]. Some of the literature reported similar results: only people with access and links to local politicians could succeed in becoming beneficiaries of the income support program in Pakistan [50]. Families with lower socio-cultural capital suffered the most because of the lack of transparent and impartial social protection policies and social safety nets. The literature from other contexts on so-called bureaucratic hurdles has highlighted the misery of poor women facing structural inequalities and the indifference of the bureaucracy toward poor people who have no relationships with influential notables [51]. The lack of social and cultural capital deprives the poor of their due rights. People with such capital, on the other hand, were witnessed on several occasions becoming beneficiaries even when they did not deserve it. According to Bourdieu [52], cultural capital plays a vital role in extracting benefits from society. When they are guided to adopt specific procedures, illiterate mothers cannot remember the steps and the names of officers. Poverty eradication and development programs preferentially target the better-off, ignoring many of the poorest of the poor, who have never been taken seriously by the bureaucratic structure of these programs. When resources are limited, competition is high; therefore, the humanitarian apparatus has to be narrow in its scope, leaving many deserving and potential beneficiaries far behind [53]. In Pakistan, poverty is extensive. The poorest are deprived because they lack links and relationships with people in power.
Pakistan is not a place where resources are equitably distributed and where the population is also under control. This bureaucratic structure does not let the poor and weaker enter their offices unless an officer, lawyer, politician, or any other notable accompanies them. The poor often endure social and structural difficulties in the process of being beneficiaries, so knowledge about social exclusion is fundamental to advise on program objectives, eligibility criteria of clients, and the selection process [53]. In addition, CMAM is a short-term curative measure, especially in emergency contexts. Not aligned well with local socio-cultural realities, the short-term global technical solution in the form of RUTFs and CMAM was implemented "under neoliberal governments and facilitated an increasingly inequitable economy with minimal state involvement in an increasingly individualistic social environment" [54] (p. 16). However, the permanent, long-term, sustainable solution to maternal child undernutrition lies in females' socioeconomic emancipation, and their health or nutrition literacy [55][56][57][58][59]. In addition, the inclusion of training of medical staff on respectful care is imperative. --- Conclusions The CMAM program in Southern Pakistan encounters multiple social, economic, and structural obstacles. First, funding in nutrition as compared with other programs deprioritizes officials' interest in nutrition and involves LHWs in other multiple tasks that increase their work burden and divert their attention from maternal-child health and nutrition. In addition, the corruption in food distribution and the unethical sale of RUTF by LHWs are reported, which need strict monitoring and fair dispensation. The normalization of social exclusion has roots in politico-economic and structural inequalities. The study includes the following recommendations: prioritizing more funding for nutrition; proper training of field staff; improving screening skills and referral of SAM cases; providing traveling incentives to needy, illiterate, and rural mothers; devolving the child stabilization service at the micro (UC/BHU) level; distributing RUTF fairly by LHWs; treating parents politely. Finally, vacant LHWs' seats in remote rural areas demand urgent allocation. --- Data Availability Statement: Not applicable. --- Acknowledgments: The input, contribution, and support of all those who provided generous data for this study are acknowledged here. --- Informed Consent Statement: All respondents were informed about the nature and purpose of the study before taking their formal oral consent. In addition, we strictly ensured the privacy, anonymity, and confidentiality of all study participants. --- Conflicts of Interest: The authors declare no conflict of interest.
Severe Acute Malnutrition (SAM) is a serious public health problem in many low-and middle-income countries (LMICs). Therapeutic programs are often considered the most effective solution to this problem. However, multiple social and structural factors challenge the social inclusion, sustainability, and effectiveness of such programs. In this article, we aim to explore how poor and remote households face structural inequities and social exclusion in accessing nutrition-specific programs in Pakistan. The study specifically highlights significant reasons for the low coverage of the Community Management of Acute Malnutrition (CMAM) program in one of the most marginalized districts of south Punjab. Qualitative data are collected using in-depth interviews and FGDs with mothers and health and nutrition officials. The study reveals that mothers' access to the program is restricted by multiple structural, logistical, social, and behavioral causes. At the district level, certain populations are served, while illiterate, and poor mothers with lower cultural capital from rural and remote areas are neglected. The lack of funding for nutrition causes the deprioritization of nutrition by the health bureaucracy. The subsequent work burden on Lady Health Workers (LHWs) and the lack of proper training of field staff impact the screening of SAM cases. Moreover, medical corruption in the distribution of therapeutic food, long distances, traveling or staying difficulties, the lack of social capital, and the stigmatization of mothers are other prominent difficulties. The study concludes that nutrition governance in Pakistan must address these critical challenges so that optimal therapeutic coverage can be achieved.
The coronavirus disease poses an unusual risk to the physical and mental health of healthcare workers and thereby to the functioning of healthcare systems during the crisis. This study investigates the clinical knowledge of healthcare workers about COVID-19, their ways of acquiring information, their emotional distress and risk perception, their adherence to preventive guidelines, their changed work situation due to the pandemic, and their perception of how the healthcare system has coped with the pandemic. It is based on a quantitative cross-sectional survey of 185 Swiss healthcare workers directly attending to patients during the pandemic, with 22% (n = 40) of them being assigned to COVID-19-infected patients. The participants answered between 16th June and 15th July 2020, shortly after the first wave of COVID-19 had been overcome and the national government had relaxed its preventive regulations to a great extent. The questionnaire incorporated parts of the "Standard questionnaire on risk perception of an infectious disease outbreak" (version 2015), which were adapted to the case of COVID-19. Clinical knowledge was lowest regarding the effectiveness of standard hygiene (p < 0.05). Knowledge of infectiousness, incubation time, and life-threatening disease progression was higher, but still significantly lower than knowledge regarding asymptomatic cases and transmission without physical contact (p < 0.001). 70% (95%-confidence interval: 64-77%) of the healthcare workers reported considerable emotional distress on at least one of the measured dimensions. They worried significantly more strongly about patients, elderly people, and family members than about their own health (p < 0.001). Adherence to the (not legally binding) preventive guidelines issued by the government displayed patterns such that not all guidelines were followed equally. Most of the participants were faced with a lack of protective materials, personnel, structures, processes, and contingency plans. An increase in stress level was the most prevalent among the diverse effects the pandemic had on their work situation. --- INTRODUCTION Several types of human coronaviruses with low pathogenicity had been studied before the severe acute respiratory syndrome (SARS) emerged in 2002 in China (Drosten et al., 2003; Ksiazek et al., 2003; Peiris et al., 2003). SARS spread to at least 29 countries in Asia, Europe, and North and South America, with a total of 8,098 infections and 774 SARS-related deaths reported (Kahn and McIntosh, 2005). The virus that causes the presently spreading human coronavirus disease, named COVID-19, was first noticed in Wuhan, China, in December 2019, and it resembles the earlier SARS virus (Ali S. A. et al., 2020; Liu et al., 2020; Wu et al., 2020). Those infected typically experience symptoms similar to those of a common flu, with an estimated 80% showing only mild symptoms (Hafeez et al., 2020). As of 22nd December 2020, 76,023,488 cases and 1,694,128 deaths had been reported due to COVID-19 worldwide (World Health Organization, 2020a). For Switzerland, 402,264 cases and 5,981 COVID-19-related deaths had been reported by this date (World Health Organization, 2020b), compared to a resident population of 8.606 million (by the end of 2019; Federal Statistical Office, 2020). The first COVID-19 case in Switzerland was registered on 25th February 2020 (Scire et al., 2020). The first wave of the pandemic took place in late March and early April 2020.
By 23rd March, the effective reproductive number (Re)1 had decreased below one (95%confidence interval below one), as depicted in Figure 1, and the first wave was overcome by late May 2020, in the sense that daily new cases had decreased to single digits (Our world data, 2020). Shortly thereafter, the survey was conducted from 16th June until 15th July 2020. The subsequent second wave has recently grown significantly more severe than the first wave, with a maximum 7day average of 8,064 daily new cases reported on 2nd November 2020, which equals 94 daily cases per 100,000 inhabitants (Swiss Federal Institute ETH, 2020). The COVID-19 pandemic has induced a global crisis with unusual health-related and economic challenges. It has been claimed to have caused "a significant global shock" (Mishra, 2020) and has even been named "catastrophic" (Maliszewska et al., 2020). As a consequence, the psychological health of individuals and families has been greatly affected, particularly regarding issues such as stress, states of shock, fear, existential anxiety, and grief (Pawar, 2020). Switzerland is no exception. The first wave of the COVID-19 pandemic led to drastic measures by the Swiss federal government, including the mobilization of several thousand Swiss citizens through the militia system of the Swiss army (the greatest mobilization since World War II) (Federal Council, 2020a;Federal Office of Public Health, 2020). The most restrictive phase took place from 16th March until 26th April 2020, which has popularly been referred to in Swiss media as the "lockdown" (Abhari et al., 2020;Neue Zürcher Zeitung, 2020a). Registered unemployment increased from 121,018 to 153,413 people between January and April 2020 (+26.8%, State Secretariat for Economic Affairs, 2020a). After the precautionary measures had been gradually relaxed following 26th April, the Federal Council and the Federal Office of Public Health intensified the measures again in October 2020 in reaction to the second wave (Federal Office of Public Health, 2020). Several branches of the Swiss economy have been under considerable pressure (State Secretariat for Economic Affairs, 2020b), and prognoses for the near future remain unfavorable (State Secretariat for Economic Affairs, 2020c). By the end of November 2020, 153,270 people were registered as unemployed, amounting to an unemployment rate of 3.3% (State Secretariat for Economic Affairs, 2020a). Accordingly, the pressure on the economy is still high, as is the strain on the psychological health of the population, given this ongoing phase of restricted public and private life, economic uncertainty, health hazard, and loss. Healthcare workers are a primary group on which the COVID-19 pandemic has imposed extraordinary challenges. This has clearly been recognized in the international literature. As first responders in providing care, they have been exposed to feelings of stress and uncertainty, while working long hours and often not being fully protected against an infection (Shaukat et al., 2020). The risk of testing positive for COVID-19 is high among healthcare workers (Nguyen et al., 2020), which, combined with the responsibility they bear for their patients, has exposed them to ethical dilemma (Menon and Padhy, 2020). As private citizens, they have also had to cope with posing an increased infection risk to their social environment. Even being depicted as "heroes" by the media can in fact be counterproductive, as it increases their perceived pressure (Cox, 2020). 
This situation can significantly affect their mental health and even lead to work-related trauma (Probst et al., 2020;Vagni et al., 2020). Many healthcare workers have been documented to have developed mental issues for which they require psychological support (Lai et al., 2020). This is a clear indication that, besides infrastructural considerations, also the individual capacities of healthcare workers, including their psychological well-being, are a crucial ingredient in facing a pandemic of the magnitude of COVID-19. Shortly before the first wave of COVID-19 in Switzerland, northern Italy, a direct neighbor, experienced a severe overload of the healthcare system due to COVID-19, particularly of hospitals and intensive care units (ICU). This provided an alarming example to Swiss healthcare workers. The International Council of Nurses (2020) documented both the high rate of infection among healthcare workers in northern Italy, who then needed to be isolated outside of the workforce for 14 days, as well as the physical and mental exhaustion of them and their colleagues who were still/again in service. In mid-October 2020, as the second wave of COVID-19 infections had already emerged, the Swiss Society of Emergency and Rescue Medicine, Switzerland Emergency Care, and the Swiss Association of Paramedics together issued an open call to the Swiss government for support. They stated that the health of Swiss healthcare workers, which had already deteriorated due to the first wave, was at considerable risk of getting worse, if the government did not apply consistent measures across the entire country (SwissInfo.ch, 2020a). Beyond these challenges, the pandemic has exposed the vulnerability of people, among them also healthcare workers, towards receiving flawed information through popular media, which may affect their judgment. The conveyed information may be imprecise or even misleading, and it may originate within media outlets themselves or merely be transmitted by them. The notion of vast flows of information on a "hot topic" coming from all kinds of sources, of which it may not always be clear to the reader/listener which are proven facts and which are opinions, is known as infodemics (Lexico dictionary, 2020). Filtering information by assessing its source is therefore a necessity, particularly for healthcare workers. With the physical and mental health of healthcare workers being at stake, insight on their perspective and identification of their crucial challenges, as they perceive them, are greatly needed. It is a first step towards sensibly protecting them for their own sake, as well as for them to remain effective and efficient in their services, during a time when they are most needed by society. A rapid and effective response, as well as healthcare staff that is still able to take leadership, are pivotal in successfully handling the pandemic (see e.g., Nagesh and Chakraborty, 2020). Lessons from the first wave of the pandemic are therefore needed, and first-hand empirical data is key. This study presents a quantitative survey of Swiss healthcare workers (n = 185) conducted shortly after the first wave of the pandemic. Its aim is to provide evidence of their clinical knowledge about COVID-19, their emotional reaction, their adherence to preventive guidelines, and the impact on their work situation. For such insight to be accurately drawn, understanding the context is essential. 
Therefore, the circumstances under which the first wave impacted the healthcare workers need to be considered, which to a large degree depend on how the government and the healthcare system were prepared for and reacted to the pandemic. A few recent studies have provided quantitative evidence of the knowledge of healthcare workers on COVID-19. Wahed et al. (2020) have studied Egyptian healthcare workers, showing that knowledge was higher among the more highly educated individuals, as well as among those below the age of 30 years. Zhang et al. (2020) in their survey of Chinese healthcare workers concluded that knowledge was sufficient in 89% of them. Honarvar et al. (2020) have provided evidence of the knowledge of the general public on certain COVID-19-related issues for the case of Iran. Similarly, Abdelhafiz et al. (2020) have assessed the knowledge of the Egyptian general population. To our knowledge, no study has been published so far specifically focusing on the clinical knowledge of Swiss healthcare workers and their media use. Our study therefore fills in this gap in the literature. Several studies in the international literature have given insight on personal protective equipment (Park, 2020), specific work risks for healthcare workers related to COVID-19 (Ali S. et al., 2020), and psychological coping mechanisms (see e.g., Muller et al., 2020;Probst et al., 2020;Teo et al., 2020;Vagni et al., 2020). Further studies have shed light on risk perception and attitudes towards COVID-19 (see e.g., Führer et al., 2020;Hager et al., 2020;Honarvar et al., 2020;Zegarra-Valdvia et al., 2020). However, when considering risk perception and attitudes, many of the available studies refer to the general population instead of healthcare workers in particular. Exceptions are given as follows. Spiller et al. (2020), who focused specifically on a sample of Swiss healthcare workers, found no substantial changes in anxiety or depression over the course of the COVID-19 pandemic. Aebischer et al. (2020), who surveyed 227 resident medical doctors and 550 medical students through snowball sampling in Switzerland, found that those medical students who were involved in the COVID-19 response (30%) displayed higher levels of emotional distress than their non-involved peers, and lower levels of burnout compared to the residents. Dratva et al. (2020) analyzed Generalized Anxiety Disorder Scale-7 (GAD-7) in a sample of 2,429 Swiss university students, 595 of which (25%) were students of health professions. They found three classes of individuals regarding the perceived impact of the COVID-19 pandemic, with large differences in the odds of increased anxiety. They concluded that preventive/containment measures against COVID-19 had a selective effect on anxiety in students. However, these analyses were not differentiated across professions/fields, and therefore no results specific to healthcare workers or students of health professions were available. Puci et al. (2020) showed that the risk perception of getting infected with COVID-19 was high among Italian healthcare workers. They also reported sleep disturbances in 64% of the participants, and that 84% perceived a need for psychological support. Abolfotouh et al. (2020) in their survey of Saudi Arabian healthcare workers found that three in four respondents felt at risk of contracting COVID-19 at work, and that 28% did not feel safe at work given the available precautionary measures. 
Predictors of high concern were, among others, younger age, undergraduate education, and direct contact with patients. In a study of Ethiopian healthcare workers (Girma et al., 2020), risk perception due to the pandemic was measured by ten items on a five-point Likert scale. The mean score of perceived vulnerability was higher for COVID-19 than for the human immunodeficiency virus, the common cold, malaria, and tuberculosis. Wahed et al. (2020) studied a sample of Egyptian healthcare workers, finding that 83% were afraid of being infected with COVID-19. Therein, a lack of protective equipment, fear of transmitting the disease to their families, and social stigma were the most often named reasons. Two further studies are currently in their preprint phase: Firstly, Weilenmann et al. (2020) investigated mental health (depression, anxiety, and burnout) in physicians and nurses from Switzerland, considering work characteristics and demographics as explanatory factors. They concluded that support by the employer, as perceived by the physicians and nurses, was an important indicator of anxiety and burnout, while COVID-19 exposure was not strongly related with mental health. Secondly, Uccella et al. (2020) identified specific risk factors/groups among workers of public hospitals in Italy and Switzerland regarding psychological distress, such as being female and working in intensive care. Having both children and stress symptoms was associated with the perceived need to experience psychological support. Accordingly, while several studies are available regarding specific measures of psychological deterioration, such as anxiety or depression, and also regarding risk perception, quantitative evidence for the specific case of healthcare workers in Switzerland is still rare. Furthermore, the mentioned studies of risk perception referred to the situation at the time of the respective surveys during the pandemic, meaning that the available preventive measures and policies varied substantially. By contrast, the participants of our study were instructed to quantify the risk of COVID-19 independently of the specific precautionary measures that were in place at the time. That is, they answered for the scenario in which no other precautionary measures were taken during the first pandemic wave, other than the usual measures against common influenza. Albeit hypothetical, this allowed for a more general assessment of the threat imposed by COVID-19, making it more comparable to other health hazards. The precautionary health behavior practices of Ethiopian healthcare workers were assessed by Girma et al. (2020) with a ten-item questionnaire. The items covered dimensions such as the frequency of wearing gloves or wearing a mask. Zhang et al. (2020) surveyed the implementation of four mandatory practices in hospitals among Chinese healthcare workers, concluding that 90% followed them correctly. Our survey contributes to the literature by using a different set of guidelines, which were legally non-binding and issued by the national government towards the general population. Thereby, the study covers the adherence of healthcare workers also in their private life, and is specific to the case of Switzerland. Several studies have recently examined the responses to the COVID-19 pandemic in different countries. 
They adopted different perspectives, analyzing the effectiveness of governmental policies (Dergiades et al., 2020;Desson et al., 2020), epidemiological responses (Jefferies et al., 2020), testing, contact tracing and isolation (Salathe et al., 2020), lockdown policy (Faber et al., 2020), preparation of the healthcare sector (Barro et al., 2020), as well as key learned lessons (Han et al., 2020). However, empirical studies of how such measures are perceived by the healthcare staff, and of how the pandemic has affected their work situation from their own perspective, are still scarce. Spiller et al. (2020) compared two demographics-matched samples of healthcare workers, which were collected at two different points in time: at the height of the pandemic (T1) versus two weeks after the healthcare system had started its transition back to usual operations (T2). They found that working hours were higher at T1 compared to T2, and still higher at T2 compared to pre-pandemic levels. Uccella et al. (2020) found that healthcare staff working in intensive care experienced an increase in working hours. The study by Wolf et al. (2020) investigated the effect of policies such as the Swiss "lockdown" on dental practices and social issues such as unemployment and practice closures, assuming on a more economic perspective. Abolfotouh et al. (2020) found broad approval among healthcare workers of the following: the suggestion that the national government in Saudi Arabia should mandate the isolation of COVID-19 patients in specialized hospitals, travel restrictions within the country, and curfew. Our study contributes by providing evidence of how the work situation of healthcare workers had been impacted from their own perspective, and of how they perceived the measures that were implemented by the government. This study provides insight on several psycho-social factors that in combination are relevant to the role of healthcare workers in the current pandemic. They are not specific psychological diagnoses or concepts of psychological deterioration like depression, anxiety, or burnout, but concern a broader spectrum of issues relevant to the mental wellbeing and the capability to act of healthcare workers. This supports policymakers in pragmatically fostering their comprehensive view of the situation, and in designing policies to sustainably protect the wellbeing of healthcare workers. In addition, the healthcare workers named the specific lessons that needed to be learned from their perspective when facing further pandemic waves. --- MATERIALS AND METHODS --- Study Setting This cross-sectional survey was conducted from 16th June to 15th July 2020 with Swiss healthcare workers who regularly worked in direct contact with patients. The healthcare workers were also pursuing a professional development course at Careum Weiterbildung or had attended such a course within recent years. Careum Weiterbildung, situated in Aarau, is one out of several institutions in Switzerland offering extra-occupational courses of professional development (/vocational training) to healthcare workers. These courses vary in duration from 1 day to several days per month over several years and cover a broad range of practice-oriented topics and specializations within healthcare and social sciences. They are often multidisciplinary, and they are aimed at improving care by teaching methods of caregiving, knowledge of practical procedures, communication and organizational skills. 
Attending such professional development courses is highly common among healthcare workers of all specializations and hierarchical positions in the Swiss healthcare system. Participation was strictly voluntary and anonymous. According to Swiss regulations, no approval by an ethics committee was required for this study. The participants were surveyed under the following circumstances: After the final day of the above-mentioned "lockdown" during the first wave in Switzerland on 26th April 2020 (see section "Introduction"), the preventive measures had been gradually eased by the national government (Neue Zürcher Zeitung, 2020b; Schweizer Radio und Fernsehen, 2020). From 27th April, businesses offering personal services with physical contact, such as hairdressers, beauty shops, and others, had been allowed to reopen, as well as florists and hardware stores (Federal Council, 2020b). From 11th May, primary and lower secondary schools had resumed, and restaurants, markets (not only food markets), museums, and libraries had been allowed to re-open, along with sports events without physical contact (Federal Council, 2020c). From 28th May, religious events with larger groups of people could be held again (with a protection concept for the participants) (Federal Council, 2020d). From 6th June, private and public events with up to 300 people had been allowed again, and tourist facilities (such as mountain railways and camping sites) could re-open. On 15th June, the borders with many countries within the EU/EFTA had been completely re-opened (SwissInfo.ch, 2020b). With the survey starting on 16th June, the participants thus answered the questionnaire after the first wave of COVID-19 had been overcome, and shortly after the government had relaxed the preventive measures to a great extent.

--- Participants

All healthcare workers who were part of this study (n = 185) were directly attending to patients, with 22% (n = 40) of them either working with COVID-19 patients at the time of the survey or being scheduled to work with COVID-19 patients within the following 6 months. One in six individuals (17%, n = 31) indicated that, because of their health condition, they themselves belonged to a COVID-19 risk group. The majority worked in a leading position (56%, n = 104) and roughly one in six had a technical lead position (18%, n = 33). They came from all major areas of the healthcare system, with 22% (n = 40) working in acute care (including psychiatric care), 54% (n = 100) in nursing homes, 16% (n = 30) in home care, and 12% (n = 22) in other areas such as rehabilitation and patient counseling. The median age was 49 years, with a minimum of 23 and a maximum of 68. The vast majority were women (89%, n = 164). For further characteristics of the sample, see Table 1.

--- Data Collection

The data were collected by two-stage cluster sampling, inviting all current and recent attendees (of the past 8 years) of Careum Weiterbildung to participate voluntarily in the survey. A standardized online questionnaire was delivered to 1,747 attendees' addresses on 16th June via e-mail. 38.1% (n = 665) of the delivered messages were opened, and the link to the survey was followed in 36.4% (n = 242) of these cases, as tracked by the Mailworx software. A reminder was delivered to 1,684 attendees' addresses on 30th June; it was opened in 32.9% (n = 554) of the cases, and the link to the survey was followed in 29.1% (n = 161) of these.
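Because the two percentages reported for each mailing refer to different bases (the opening rate is relative to the delivered messages, the click-through rate to the opened messages), the following short Python sketch reproduces the reported figures; the dictionary layout and names are illustrative only.

# Tracking rates of the two mailings (figures from the text above).
first = {"delivered": 1747, "opened": 665, "clicked": 242}     # mailing of 16th June
reminder = {"delivered": 1684, "opened": 554, "clicked": 161}  # reminder of 30th June

for name, m in [("First mailing", first), ("Reminder", reminder)]:
    open_rate = m["opened"] / m["delivered"]   # relative to delivered messages
    click_rate = m["clicked"] / m["opened"]    # relative to opened messages
    print(f"{name}: opened {open_rate:.1%}, link followed {click_rate:.1%}")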
A total of 194 participants completed the questionnaire, 185 of whom directly attended to patients and therefore belonged to the population of main interest. The median completion time was 18.1 min (minimum 9.3; maximum 54.6). The questions were posed with given answer options, predominantly in multiple-answer form and some in multiple-choice form (as the only exception, the participants entered their age as an integer). Thereby, parts of the "Standard questionnaire on risk perception of an infectious disease outbreak" by the Municipal Public Health Service Rotterdam-Rijnmond and the National Institute for Public Health and the Environment (Voeten, 2015) were adapted to the case of the COVID-19 pandemic. The answer option "other" was frequently included; if selected, it led to a request for text input so that the participant could specify the answer. Questions were posed across the different parts of the questionnaire as follows.

(1) Knowledge about COVID-19: The participants were presented with eight claims about COVID-19 as stated in Table 2 (labeled as items K1-K8). They were asked to choose for each claim whether it was correct, incorrect, or unknown to them (options "right"/"wrong"/"don't know"). The correct answers shown in Table 2 ("true" or "false" in parentheses) were taken from sources including Day (2020) (K1), Mullard (2020) (K2), and Morawska and Cao (2020). The participants were further asked to indicate, among other things, those claims on which they needed more detailed information than they had at the time (for the precise wording of the question, see Table 2).

(2) Sources of information and means of communication: A first multiple-answer question on who should provide them with the necessary information on COVID-19 (seven answer options, S1-S7), as well as a second multiple-answer question on how they preferred to receive this information (ten answer options, M1-M10), measured their preferred media use (see Table 3 for the precise wording). Furthermore, the participants rated their use of each of five given types of media (U1-U5) on a six-point Likert scale ranging from "daily" to "never" (see Table 4 for the precise wording).

(3) Emotional distress and risk perception: The first question was "how worried do you feel because of the possibility of [the respective scenario]?" The three scenarios of "getting COVID-19 yourself," "family/friends getting COVID-19," and "numerous cases of death among elderly and sick people due to COVID-19" were each rated on a four-point Likert scale ranging from "very worried" to "not worried at all," as listed in graph A of Figure 2. For the questions on risk perception, a hypothetical scenario was introduced by the wording "please answer for the scenario in which no extraordinary measures were undertaken in Switzerland other than the usual measures against influenza (i.e., no prohibition of social gatherings/events, no lockdown, no extraordinary measures in hospitals)." For this scenario, the question "would COVID-19 be a threat to..." was asked in the five specific respects of "...your own life?", "...the life of your family members or friends?", "...health professionals attending to COVID-19 patients?", "...the Swiss population?", and "...the global population?". The answers were given on a four-point Likert scale ranging from "very serious threat" to "no threat at all," as listed in graph B of Figure 2. As a follow-up, the identical questions were asked a second time, with the answers on a discrete rating scale as described by Studer and Winkelmann (2017).
The discrete rating scale ranged from zero to ten, and only the extremes were verbally labeled ("0 = no threat at all;" "10 = very serious threat"). This allowed for the application of different methods of analysis, as described in the section "Data Analysis."

(4) Perception of and adherence to preventive guidelines: The participants rated the likelihood of a second wave of COVID-19 in Switzerland before the end of 2020 on a six-point Likert scale ranging from "certainly" to "certainly not." They also rated the likelihood of a different pathogen causing another pandemic of equivalent or greater magnitude within the upcoming 20 years on the same scale. Table 5 lists the precise wording of the question and the answer options (notes to Table 5: the six answer options were "certainly," "very likely," "rather likely," "rather unlikely," "very unlikely," and "certainly not;" "≥ rather likely" encompasses all individuals who answered "rather likely," "very likely," or "certainly;" "≤ rather unlikely" encompasses all individuals who answered "rather unlikely," "very unlikely," or "certainly not;" "CI" stands for Wilson's confidence interval). Note that for the intermediate levels of the Likert scale, the resulting frequencies are presented in cumulative form, as described in the section "Results." In the questionnaire, the Likert scale was included in typical fashion without cumulative meaning (i.e., without "≥" or "≤" signs). The participants repeated the assessment of the same two questions, this second time with the answer options on a discrete rating scale ranging from zero to ten with only the extremes having a verbal label ("0 = certainly not;" "10 = certainly"). They were then shown six preventive guidelines (A1 and A3-A7 in Table 6). These guidelines were in place in Switzerland during the "lockdown" phase (with A3 and A4 formulated slightly less strictly/clearly), and some of them were relaxed afterwards. However, they had the status of recommendations by the federal government, not of legally binding rules. The participants indicated how strictly they followed them on a six-point Likert scale ranging from "always" to "never." The precise wording is given in Table 6. As in Table 5, the resulting frequencies for the intermediate levels are presented in cumulative form, whereas the questionnaire itself used the ordinary Likert scale (without "≥" or "≤" signs). The participants were further asked to indicate how strictly they expected to follow the same guidelines in the future, as listed in the lower part of Table 6 (A11 and A13-A17). There, the six-point Likert scale ranged from "presumedly forever" to "0 to 1 month," and the alternative option of "don't know" was added. To evaluate these guidelines, the participants were asked "which of the following claims apply to the above-mentioned guidelines?", referring to guidelines A1 and A3 through A7. They were presented with the multiple answer options "most of them are exaggerated for persons not working with patients or elderly people," "most of them are exaggerated for persons working with patients or elderly people," "most of them are ineffective," and "none of the answers above apply."
Finally, the participants indicated whether they currently had any plans of traveling abroad for private reasons before the end of the year 2020 (multiple-choice options "yes"/"no"/"undetermined yet"), and whether they would have had such plans if the COVID-19 pandemic had not occurred (see the precise wording in Figure 3).

(5) Impact on the work situation: For each of four claims regarding preparation (P1-P4, as shown in Table 7), the participants indicated whether the claim was true or not. Item P5 offered the option that none of the claims P1 through P4 were true; if chosen, it excluded the selection of P1 through P4. The question "how has/had COVID-19 affected your work situation?" was then asked with eleven answer options (W1-W11, as listed in Table 7), of which the last option excluded all ten others.

(6) Reaction by the government: The sentence "the measures implemented by the government between 17th March and 26th April ("lockdown") were..." could be completed with either "...exaggerated," "...adequate," or "...not strict enough / too late / too short in duration." The follow-up question was "which of the following claims applies to the gradual steps of relaxation of these measures, which have been in place since 27th April and which are planned for the future?". The multiple-choice answer options were "the measures should have been relaxed earlier / more strongly," "the relaxation plan is adequate," and "the measures should have been relaxed later / less strongly."

(7) Key lessons: The question "which lessons need to be learned and what should be different in case another pandemic should happen in the future?" was asked with ten answer options (L1-L10, as listed in Table 7), of which the last one excluded all other options.

(8) Presumed cause of the pandemic: The participants were presented with a multiple-choice question phrased as shown in Figure 4. At the end of the questionnaire, the participants could enter any comments, regardless of their previous answers.

--- Data Analysis

Confidence intervals (CIs) of proportions, as shown in Table 2 through Table 7 and as referred to in the text of the "Results" section, were calculated by Wilson's method (for a comparison of methods, see Newcombe, 1998). Fisher's exact test was used for testing the equality of proportions (see section "Emotional Distress and Risk Perception"). Pair-wise rank correlations were calculated by Spearman's method (see Table 8) and classified according to Cohen (1992). For any tests of hypotheses, whether univariate or within a multiple regression model, a type-one error probability (p) < 0.05 was considered statistically significant. All alternative hypotheses were two-sided. By binary logistic regression, the effects of multiple predictors on a binary outcome were modeled. The results were computed as average marginal effects (AME), representing percentage-point differences in the probability of the outcome being positive. By fractional logistic rating scale regression, the effects of multiple predictors on an outcome measured on an eleven-point discrete numeric rating scale (0-10, with labeled extremes) were modeled. These results were also represented as AMEs, here representing differences on the 0-10 scale. For an explanation of this method, see, e.g., Studer and Winkelmann (2017). Each regression model was optimized by systematic factor elimination minimizing the Bayesian information criterion (BIC).
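To make these steps concrete, the following minimal sketch (in Python, using the statsmodels library) illustrates a Wilson confidence interval for a proportion, a binary logistic regression reported as average marginal effects, and a simple backward elimination of predictors driven by the BIC. It is an illustration only: the variable names (high_worry, age, leading_position, covid_patients) and the simulated data are hypothetical placeholders rather than the actual survey variables, and the 104 of 185 participants in a leading position are taken from the section "Participants" merely as a numerical example.

# Minimal sketch of the analysis steps described above (hypothetical data).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.proportion import proportion_confint

# 1) Wilson confidence interval for a proportion,
#    e.g., 104 of 185 participants working in a leading position.
low, high = proportion_confint(count=104, nobs=185, alpha=0.05, method="wilson")
print(f"Proportion = {104/185:.1%}, 95% Wilson CI = [{low:.1%}, {high:.1%}]")

# 2) Binary logistic regression reported as average marginal effects (AME).
#    Simulated stand-in data; the real outcome and predictors come from the survey.
rng = np.random.default_rng(0)
n = 185
df = pd.DataFrame({
    "age": rng.integers(23, 69, n),
    "leading_position": rng.integers(0, 2, n),
    "covid_patients": rng.integers(0, 2, n),
})
logits = -2.0 + 0.03 * df["age"] + 0.5 * df["covid_patients"]
df["high_worry"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-logits)))

fit = smf.logit("high_worry ~ age + leading_position + covid_patients", data=df).fit(disp=False)
print(fit.get_margeff(at="overall", method="dydx").summary())  # AMEs: differences in outcome probability

# 3) Backward elimination of predictors, keeping the model with the lowest BIC.
def eliminate_by_bic(outcome, predictors, data):
    current = list(predictors)
    best = smf.logit(f"{outcome} ~ " + " + ".join(current), data=data).fit(disp=False)
    improved = True
    while improved:
        improved = False
        for p in list(current):
            reduced = [q for q in current if q != p]
            rhs = " + ".join(reduced) if reduced else "1"  # intercept-only model as the last resort
            cand = smf.logit(f"{outcome} ~ {rhs}", data=data).fit(disp=False)
            if cand.bic < best.bic:
                best, current, improved = cand, reduced, True
    return best, current

best, kept = eliminate_by_bic("high_worry", ["age", "leading_position", "covid_patients"], df)
print("Predictors kept after BIC-based elimination:", kept)

The fractional logistic regression for the 0-10 rating outcomes could be sketched analogously, e.g., as a generalized linear model with a binomial family and logit link applied to the outcome rescaled to the unit interval; this is an assumption about one possible implementation, not a statement about the authors' actual code.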
(Footnote 4: The initial set of predictors for which factor elimination was performed comprised the following items, for which one-sided causality could be assumed: W2 through W5 (see Table 7).)

(Notes to Table 6: Items A2 and A12 of the questionnaire were not included in this survey. "Don't know" was not given as a response option for items A1-A7. For items A1-A7, the six answer options were "always," "almost always," "predominantly," "sometimes," "almost never," and "never;" "≥ predominantly" encompasses all individuals who answered "predominantly," "almost always," or "always." For items A11-A17, the answer options were "presumedly forever," "until vaccine available," "7 to 12 months," "4 to 6 months," "2 to 3 months," "0 to 1 month," and "don't know;" "≥ until vaccine available" encompasses all individuals who answered "until vaccine available" or "presumedly forever;" "≥ 2 to 3 months" encompasses all individuals who answered "2 to 3 months," "4 to 6 months," "7 to 12 months," "until vaccine available," or "presumedly forever;" "0 to 1 month" encompasses only individuals who answered "0 to 1 month." "CI" stands for Wilson's confidence interval.)

The following models were estimated for the different parts of the questionnaire. (1) Knowledge about COVID-19: a binary logistic model of item K4 (Table 2) being answered correctly (versus answered wrongly or with the answer option "don't know"). (3) Emotional distress and individual part of a COVID-19 risk group
work situation. Better medical equipment (including drugs), better protection of their own mental and physical health, more (assigned) personnel, more comprehensive information about the symptoms of the disease, and an earlier warning system were the primary lessons to be learned in view of upcoming waves of the pandemic.