Background Male circumcision (MC) is carried out for many reasons, including cultural, medical, religious, and social ones. Safe male circumcision (SMC), medical male circumcision (MMC), and voluntary medical male circumcision (VMMC) are some terms that refer to the permanent surgical removal of the foreskin for medical reasons. The prevalence of MC varies across the world, with Muslim countries having the highest prevalence and southern African countries such as Botswana and Angola having some of the lowest (Morris et al. 2014). Male circumcision has been recommended by the World Health Organization (WHO) as an add-on strategy to reduce the transmission of HIV after studies in Uganda and South Africa showed that circumcision was effective in reducing HIV infection by up to 60% (Chiringa et al. 2016). Mathematical modeling and cost-effectiveness analyses were also done to support this recommendation (Tobian and Gray 2011). Botswana is among the 14 southern and eastern African countries where MC was recommended. In Botswana, the SMC programme was launched in 2009 with the goal of 80% coverage by 2012 (Ministry of Health 2009). The SMC programme has been running in Botswana since then, and though some strides have been made (Keetile 2020), there is still a gap between the target and the actual rate of circumcision (Katisi and Daniel 2015). The first case of HIV in Botswana was recorded in 1985, and the country has been severely hit by the HIV epidemic, with the number of cases increasing significantly since then (Hardon et al. 2006). According to the Botswana AIDS Impact Survey IV (BAIS IV), the prevalence of HIV in 2013 was 18.5%, and the unadjusted incidence rate was 2.61% (Botswana AIDS Impact Survey [BAIS] IV Report 2013). Moreover, according to the Joint United Nations Programme on HIV/AIDS (UNAIDS) country data for 2016, Botswana has an approximate HIV prevalence of 21.9% among adults aged 15-49 years (WHO 2016). Studies also estimate that there were 10,000 new HIV infections in 2016, and 4300 of those infections were among males aged 15 years and above (World Health Organization 2017). The number of deaths due to AIDS has been estimated to be 3900 annually, with males accounting for about 55% of all deaths associated with HIV (World Health Organization 2017). Katisi and Daniel (2015) noted that "Botswana has been running safe male circumcision (SMC) since 2009 and has not yet met its target." Similar findings were noted by Keetile and Bowelo (2016): only a tenth of the target population for the strategy had been reached, leaving a large proportion of the population at higher risk compared with those who had been circumcised (WHO & UNAIDS 2010). According to WHO (2017), a total of 19,756 circumcisions were performed in Botswana in 2017, and a further 20,209 circumcisions were performed between 2008 and 2017, which is far below the national target of 80% (World Health Organization 2018). The number of males being circumcised in Botswana falls far short of the intended number despite many efforts to promote SMC, such as street-level promotions using crowd pullers and mall activations, targeted campaigns in schools and workplaces, 24-hour digital billboards, and the use of models during traffic surges to create VMMC hype (ACHAP 2020; Keetile 2020). Given the low rates of circumcision and the lack of intention among males to circumcise, this paper aimed to investigate factors associated with low uptake of SMC and the intention not to circumcise among men in Botswana. 
The paper also investigates reasons given by individuals for not intending to undergo circumcision. The decision not to be circumcised is personal and is influenced by many factors. The advantages of circumcision outweigh the disadvantages, yet the rate of circumcision is still very low, making it difficult to reduce the number of new HIV infections and suggesting that various factors are significantly associated with SMC uptake. An understanding of the low rates of circumcision and of men's non-intention to undergo SMC will give the government the opportunity to design effective interventions. --- Theoretical framework This paper employs the theory of planned behaviour (TPB) by Ajzen (1991), which is an extension of the theory of reasoned action, to investigate factors associated with the low uptake of SMC and the intention not to circumcise among men in Botswana. The theory is based on an individual's intent to engage in a particular behaviour, in this case circumcision. According to Ajzen (1991), intentions are assumed to capture the motivational factors that influence a behaviour; they are indications of how hard people are willing to try and how much effort they are planning to exert to engage in a certain behaviour. As a general rule, the stronger the willingness and desire to engage in a particular behaviour, the greater the likelihood of achieving such behaviour (Ajzen 1991). According to TPB, intent is based on three components, namely attitude towards the behaviour, subjective norm, and perceived behavioural control. To predict an individual's likelihood of undergoing circumcision, knowledge of the individual's views of the procedure is needed. Furthermore, how an individual reacts to societal pressure and expectations, and thus subjective norms, plays a vital role in the decision not to circumcise. The extent to which an individual male resists the procedure is based on whether the individual believes he is in control of the decision to undergo the procedure. If individuals believe they are forced to undergo circumcision, they are more likely to resist (Ilo et al. 2018). According to the model, external factors such as availability of funds, expertise, time, and support of others also influence the intent to engage in a behaviour (Ajzen 1991). Here, socio-demographic characteristics are taken to be these external factors that also play a role in the decision not to circumcise. As a result, the analyses in this paper use constructs of the theory of planned behaviour and reasoned action to investigate factors associated with low uptake of SMC and not intending to circumcise among men in Botswana. --- Methods --- Study design This study used a quantitative cross-sectional analytical and non-experimental study design, as there was no manipulation of the variables of interest: being uncircumcised and not intending to circumcise. This study uses data from the 2013 BAIS IV, which is part of a series of surveys conducted every 5 years in Botswana by the National AIDS and Health Promotion Agency (NAHPA), Statistics Botswana, and the Ministry of Health. --- Data collection methods and sampling The BAIS IV survey was a cross-sectional population-based household survey employing quantitative methods and was carried out from January to April 2013. Data were collected for households and individuals using questionnaires written in English or Setswana. 
The information collected from the household questionnaire was used to identify individuals who were eligible to complete the individual questionnaire. For BAIS IV, the sampling frame was based on the 2011 Population and Housing Census (PHC), which comprised a list of enumeration areas (EAs) together with the number of households. A stratified two-stage probability sample design was used for the selection of the BAIS IV sample. For this study, a total of 3809 men aged 15 to 64 years who had successfully completed the BAIS IV questionnaire were included in the analyses. --- Variable measurement --- Dependent variables The study used two dependent variables, circumcision status and the intent to circumcise. Circumcision status was derived from the question "Are you circumcised?", with possible answers of yes coded as 1, no coded as 0, and don't know coded as 3. During analysis, the responses for don't know were filtered out because they showed no knowledge of circumcision status, and the variable was recoded as yes coded 0 and no coded 1 to allow for interpretation of uncircumcised men. The intent to circumcise variable was derived from the question "Do you intend to get circumcised in the next 12 months?", with possible answers yes coded 1, no coded 0, and don't know coded 3. The responses don't know were filtered out during analysis because this showed that respondents were undecided, and the variable was recoded as yes coded 0 and no coded 1 to allow for interpretation of men not intending to be circumcised. --- Independent variables Several demographic variables were used in this study and were recoded as follows. Age was transformed into 10-year age groups (15-24, 25-34, 35-44, 45-54, and 55-64). Place of residence was recoded as follows: cities, towns, and urban villages were recoded as urban, while rural areas were recoded as rural. Education was recoded as follows: none, non-formal, and primary education were recoded as primary education. Secondary and senior secondary education were recoded as secondary education. Tertiary or higher remained as higher education. Employment status was recoded as follows: individuals who were employed in any form or sector were coded as employed, and those actively seeking employment were coded as unemployed. Respondents not eligible for employment because of their age or disability and pensioners were coded as not eligible for employment. Marital status was recoded as follows: married and separated were recoded as married, whereas never married, living together, divorced, and widowed were recoded as not married. --- Variables based on theory of reasoned action and planned behaviour The theory of reasoned action proposes that there are three components at play to change behaviour: attitude towards the behaviour, subjective norm, and perceived behavioural control. Firstly, to measure attitudes towards the behaviour, the question asking whether participants had heard or seen any information on safe male circumcision in the past 4 weeks, with possible answers being yes, no, and don't know, was used as a proxy, because having information and knowledge of SMC would influence an individual's attitude towards circumcision. Secondly, to measure subjective norms, the question "Should circumcised males stop using condoms?", with possible answers yes, no, and don't know, was used as a proxy because, while using condoms should be the norm, there is some level of subjectivity. 
Lastly, to measure perceived behavioural control, the question "Would you circumcise male children aged below 18 years?" was used as a proxy, because deciding to circumcise children shows some form of control over the procedure. --- Data analysis Frequencies were run for all variables, and cross-tabulations between variables were calculated. Chi-square tests were used to check for the statistical significance of the relationships. Logistic regression for complex samples was used to measure the likelihood of being uncircumcised and not intending to circumcise, because data collection used a multi-stage sampling protocol. The analysis used logistic regression for Models I to IV. The Statistical Package for the Social Sciences (SPSS) was used for analysis; an illustrative sketch of the recoding and regression workflow is shown below. --- Results --- Select sample background characteristics Table 1 shows the distribution of men in the sample by circumcision status and their intent towards circumcision. Almost three-quarters (74.9%) of men were uncircumcised, and half (50.5%) did not intend to circumcise. Table 2 shows the distribution of men by socio-demographic characteristics and selected variables on SMC, specifically, whether they had heard or seen any information on SMC in the past 4 weeks, whether they would circumcise male children under 18 years of age, whether circumcised men should stop using condoms, and reasons for not intending to circumcise. Results show that around four-fifths (79.8%) of men were aged between 15 and 45 years, and almost two-thirds (63.3%) of men resided in urban centres. The largest proportion of men had secondary education (49.1%), and slightly less than two-thirds of men (65.8%) were employed. Four-fifths (80.6%) of men reported that they were not married. When considering circumcision, more than a third (36.5%) of men had not heard or seen any information on safe male circumcision in the past 4 weeks, and approximately one-sixth (17.3%) would not circumcise their children under 18 years of age. In addition, 12.3% of men believed that circumcised males should stop using condoms. Table 2 also shows that more than half (52.9%) of men included fear of HIV testing and about a third (32.5%) included fear of the procedure as the major reasons for not intending to undergo circumcision. --- Association between socio-demographic characteristics and being uncircumcised and not intending to circumcise Table 3 shows the association between socio-demographic characteristics and select SMC characteristics (specifically, having heard or seen any information on SMC in the past 4 weeks, whether they would circumcise male children under 18 years of age, belief that circumcised men should stop using condoms) and circumcision status. Analysis shows that age, education, employment status, and marital status were all significantly associated with being uncircumcised. Place of residence was significantly associated only with being uncircumcised. When considering age, the table illustrates that the proportion of uncircumcised men is higher than that of circumcised men in all age groups, and four-fifths (80.2%) of men residing in rural areas were uncircumcised. Furthermore, the table demonstrates that a larger proportion of men with primary or less education (81.7%) than those with secondary education (74.6%) and higher education (62.6%) reported that they were uncircumcised. Results also show that a smaller proportion of employed men (71.2%) and unemployed men (76.2%) were uncircumcised compared with men not eligible for employment (81.0%). 
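The analysis itself was conducted in SPSS; purely as a rough illustration of the recoding and regression steps described in the Methods, the following Python sketch (pandas and statsmodels) shows one way such a workflow could look. All file names, column names, category labels, and the weight variable are hypothetical assumptions, and passing survey weights as frequency weights is a simplification that does not reproduce the complex-samples (stratified, clustered) variance estimation used in the paper.

```python
# Hypothetical sketch only: file name, column names, category labels and the
# weight variable are invented for illustration, not actual BAIS IV names.
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("bais_iv_men.csv")  # hypothetical extract of male respondents

# Recode age into the 10-year groups used in the paper.
df["age_group"] = pd.cut(df["age"],
                         bins=[14, 24, 34, 44, 54, 64],
                         labels=["15-24", "25-34", "35-44", "45-54", "55-64"])

# Dependent variable: drop "don't know", then code uncircumcised = 1, circumcised = 0.
df = df[df["circumcised"].isin(["yes", "no"])].copy()
df["uncircumcised"] = (df["circumcised"] == "no").astype(int)

# Recode place of residence: cities, towns and urban villages -> urban, else rural.
df["residence"] = np.where(df["locality"].isin(["city", "town", "urban village"]),
                           "urban", "rural")

# Dummy-code predictors and drop the reference categories used in the paper
# (age 55-64, urban residence, higher education).
X = pd.get_dummies(df[["age_group", "residence", "education"]]).astype(float)
X = X.drop(columns=["age_group_55-64", "residence_urban", "education_higher"])
X = sm.add_constant(X)
y = df["uncircumcised"]

# Logistic regression; survey weights passed as frequency weights is a
# simplification of the SPSS complex-samples procedure.
fit = sm.GLM(y, X, family=sm.families.Binomial(),
             freq_weights=np.asarray(df["survey_weight"])).fit()

# Odds ratios with 95% confidence intervals.
ci = fit.conf_int()
print(pd.DataFrame({"OR": np.exp(fit.params),
                    "CI_low": np.exp(ci[0]),
                    "CI_high": np.exp(ci[1])}).round(2))
```

The same pattern, with the intention variable as the outcome, would correspond to the models for not intending to circumcise.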
More than three-quarters (76.3%) of unmarried men and slightly more than two-thirds (67.1%) of married men reported that they were uncircumcised, as shown in Table 3. Furthermore, a significantly higher proportion of men who had not seen or heard information on SMC in the past 4 weeks (78.4%) than of men who had (70.4%) reported that they were uncircumcised. Similarly, nine in ten men who indicated that they would not circumcise their children under 18 years of age were uncircumcised. About 85% of men who believed that circumcised men should stop using condoms were uncircumcised, while only 73.0% of men who did not believe that circumcised men should stop using condoms reported that they were uncircumcised. --- Association between socio-demographic characteristics and intention regarding circumcision Table 4 shows that the proportion of men not intending to circumcise increased with age, while almost an equal proportion of urban and rural residents did not intend to circumcise. A significantly higher proportion of men with higher education (57.3%) and employed men (54.7%) reported that they did not intend to circumcise. About two-thirds (65.1%) of married men did not intend to get circumcised. Less than half of men who had not seen or heard any information on SMC in the past 4 weeks (45.5%) and of men who had (42.6%) did not intend to get circumcised. More than three-quarters (76.9%) of men who said that they were not intending to circumcise reported that they would not circumcise their male children under 18 years of age, while 55% of men who believed circumcised men should stop using condoms were not intending to get circumcised. --- Association between not intending to circumcise and socio-demographic characteristics Table 5 shows the association between not intending to circumcise and different socio-demographic factors and testing for HIV. The table shows that there is a significant association between age and not intending to circumcise; in fact, the odds of not intending to circumcise increase as age increases. The association between place of residence and not intending to circumcise was found not to be significant, whereas the relationship between level of education and not intending to circumcise was significant. Analysis also showed that employed men were more likely not to intend to undergo circumcision, as were married men. Men who had never tested for HIV were also found to be more likely not to intend to undergo circumcision. --- Factors associated with men not intending to circumcise (gross and net models) Table 6 illustrates Models I and II, which show the gross and net effect of select SMC variables (heard or seen any information on SMC in past 4 weeks, would circumcise male children under 18 years of age, and circumcised males should stop using condoms) and socio-demographic characteristics on intending not to circumcise. The table shows that for most variables, when control variables were introduced in Model II, the likelihood of not intending to circumcise increased and remained significant. For instance, in the gross model, men who believed that circumcised males should stop using condoms were 1.2 times more likely to not intend to circumcise, but when control variables were introduced in the net model, this association diminished. 
Men in age groups 35-44 years (OR = 1.60, CI 1.15-2.28), 45-54 years (OR = 2.53, CI 1.66-3.85), and 55-64 years (OR = 2.56, CI 1.56-4.19) were more likely to intend not to circumcise when compared to men in the age group 15-24 years. Results also show that the intent not to circumcise increased with age. When considering the place of residence, men residing in urban areas (OR = 1.27, CI 1.02-1.57) were more likely to not intend to circumcise when compared to their counterparts, while married men (OR = 1.61, CI 1.20-2.16) were more likely to intend not to circumcise when compared with unmarried men. --- Factors associated with non-circumcision Table 7 shows Model III and Model IV, which indicate the gross and net effect of selected socio-demographic and behavioural variables on being uncircumcised. The net effects model (Model IV) shows that after controlling for confounders, age, place of residence, and education were significantly associated with being uncircumcised. Model III shows that men in the younger age groups were 2.3 (OR = 2.32), 1.9 (OR = 1.94, CI 1.34-2.81), and 1.5 (OR = 1.48, CI 1.01-2.16) times more likely to be uncircumcised, respectively, when compared with men aged 55-64 years. It was also observed that men residing in rural areas were 1.3 times more likely (OR = 1.31, CI 1.09-1.58) to be uncircumcised when compared with men residing in urban areas. Moreover, men with primary education or less and men with secondary education were 2.5 (OR = 2.48, CI 1.87-3.30) and 1.6 (OR = 1.65, CI 1.33-2.05) times more likely to be uncircumcised when compared to men with higher education. For behavioural variables, the odds of non-circumcision were significantly higher among men who had not heard or seen any information on SMC in the past 4 weeks (OR = 1.25, CI 1.04-1.51) compared to their counterparts. --- Discussion As circumcision is a one-time procedure and preventative measure, the paper also focused on those who were not circumcised and did not intend to do so. It was found that a significantly high proportion of men were uncircumcised and were also not intending to circumcise. Low levels of circumcision observed in this study could be explained in part by the fact that in Botswana, HIV testing is mandatory before one is eligible for circumcision. As a result, many men who are hesitant to test for HIV may forgo circumcision. Katisi and Daniel (2015) made a similar observation that mandatory HIV testing and counselling for SMC is a major barrier to MC. Attitude towards behaviour, subjective norm, and perceived behavioural control were also individually significantly associated with not intending to circumcise. However, it was found that after controlling for other factors, only attitude towards the procedure and perceived behavioural control were significantly associated with not intending to circumcise. For instance, we found that men who had heard of SMC in the past 4 weeks were less likely to not intend to circumcise. This suggests that education regarding SMC promotes a positive attitude towards it. Similar findings were noted in studies by Mugwanya et al. (2010) and Zamawe and Kusamula (2015), who reported that men who had been exposed to SMC promotions were more likely to be circumcised. 
After controlling for confounders, men who would circumcise male children under 18 years of age were less likely to intend not to circumcise. Similarly, Keetile and Bowelo (2016) found that men who expressed willingness to circumcise their male children were also likely to express willingness to get circumcised. On the other hand, the odds of not intending to circumcise increased with age, possibly because older men, particularly elderly men, may not consider the protective benefits of SMC since many are no longer sexually active. A similar finding was observed by Kripke et al. (2016) in a study conducted among 14 priority countries for circumcision, in which older men expressed unwillingness to undergo circumcision. However, it is a positive indication that men in younger age groups are more accepting of SMC and willing to get circumcised, since this will increase circumcision prevalence. Quite surprisingly, men who lived in urban areas were more likely to not intend to circumcise; this is unexpected, since men in urban areas should have a greater understanding of health issues given their comparative access to media and HIV-related information. It was found that the proportion of uncircumcised men was higher among men at younger ages (15-35 years). This may be explained by the generally high proportion of the young male population compared to their older adult counterparts, consistent with the fact that the population of Botswana is predominantly young, with 34.6% of the population aged 15-35 years (Statistics Botswana 2018). On the other hand, the likelihood of being uncircumcised was higher among rural residents, possibly because there are fewer health facilities, and circumcision services are less accessible, in rural areas. Moreover, information reaches rural populations later, so rural residents are likely to lag behind in acting on new recommendations such as circumcision. This was also found to be true in a study by Makatjane et al. (2016) in Lesotho. Men with primary education and men with secondary education were also more likely to be uncircumcised than men with tertiary education. A plausible explanation for this is that educated individuals have a greater understanding of the importance of healthy behaviours and therefore are more likely to engage in positive health-seeking behaviour. An earlier study by Keetile and Rakgoasi (2014) suggested that continued access to higher education can only improve the situation because education improves physical functioning and self-reported health, and it enhances a sense of personal control that encourages and enables a healthy lifestyle. A study by Jiang et al. (2013) also found that education level is a major determinant of circumcision status. This study has some limitations. The use of secondary data limited the analysis to variables within the dataset. The dataset used did not have qualitative questions on circumcision, which would have provided some depth in understanding the factors associated with being uncircumcised and not intending to circumcise. Despite these limitations, however, the study provides vital insights into factors associated with non-circumcision and not intending to circumcise among men in Botswana. --- Conclusion The SMC programme is pivotal in efforts to decrease HIV infections, and it is evident from the study that attitudes and perceived behavioural control influence decisions not to undergo circumcision. 
The results showed that people who had heard or seen information on SMC in the past 4 weeks were less likely to be uncircumcised and less likely to intend not to circumcise. This suggests that more and regular information on SMC would promote positive attitudes towards circumcision and increase uptake of the practice. Results from the study also suggest that if men believe and perceive that undergoing circumcision is their choice and that they have control over when and where to undergo the procedure, they are more likely to embrace SMC. Findings from this study suggest the need for an enhanced and consolidated message on SMC as an HIV preventative measure. There is also a need for improved integration of SMC in HIV-related education with risk-reduction counselling. Programmes should also target older men and address men's changing attitudes and beliefs on SMC and HIV. Qualitative studies on this topic are also needed to gain further insight into why men are uncircumcised and do not intend to get circumcised, as is a study on the role played by women in men's decisions not to get circumcised. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. --- Data availability Data for this study are available from the corresponding author at [email protected]. --- Authors' contributions MN conceived the study, performed the analysis, and wrote the first draft of the manuscript. GL and MK provided supervision, reviewed the manuscript and provided critical comments on its improvement. All authors read and approved the final manuscript. --- Declarations Ethics approval We used secondary data for analysis in this study, and Statistics Botswana provided data to the Department of Population Studies for further analysis. As a result, permission was not needed. The study was cleared by the Human Research and Development Council (HRDC) in the Ministry of Health and Wellness, and therefore all ethical issues were handled by the Ministry of Health and Wellness at the time of the survey. All methods were carried out in accordance with relevant guidelines and regulations for health research. --- Consent for publication Not applicable. Consent to participate All respondents for the Botswana AIDS Impact Surveys agreed to participate in the study by signing the informed consent form. --- Competing interests The authors declare that they have no competing interests. Publisher's note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Aim This paper investigates factors associated with low uptake of safe male circumcision (SMC) and the intention not to circumcise among men aged 15-64 years in Botswana. Subject and methods Data were collected during the 2013 Botswana AIDS Impact Survey (BAIS IV). For analysis, a sample of 3154 men was used to assess the association between being uncircumcised/not intending to undergo circumcision and different factors using descriptive statistics and logistic regression analysis. Data analysis was conducted using SPSS version 27. Logistic regression analysis results are presented as odds ratios together with their confidence intervals. Comparisons were considered statistically significant at p < 0.05. Results Results show that 25.1% of men reported that they were circumcised, while 50.5% did not intend to undergo circumcision. Multivariate analysis showed that several factors were significantly associated with being uncircumcised, including age of 15-24 years (OR = 2.75, CI 1.82-4.19), residing in rural areas (OR = 1.31, CI 1.09-1.58), and having primary or less education (OR = 2.48, CI 1.87-3.30). Similarly, factors significantly associated with not intending to be circumcised included age of 35-44 years (OR = 1.60, CI 1.15-2.23), residing in urban areas (OR = 1.27, CI 1.02-1.58), and being married (OR = 1.61, CI 1.20-2.16). It was also observed that men who had not seen or heard of SMC in the past 4 weeks were 1.3 times more likely (OR = 1.27, CI 1.03-1.56) to report the intention not to undergo circumcision. On the other hand, men who indicated that they would not circumcise their male children under 18 years of age were 8.7 times more likely (OR = 8.70) to report that they did not intend to circumcise. Conclusion Results from the study show high acceptability but low uptake of SMC. Some individual behavioural factors influencing circumcision status and the decision whether to undergo circumcision were identified. Targeted interventions, continuous education, and expansion of the SMC programme are recommended, especially for older men and those in rural areas.
Introduction This paper concerns a planned study in which we aim to explore how feminist technoscience can contribute to challenging existing science practices and to a critical approach, while at the same time serving as a theoretical resource for the integration of gender equality in Swedish higher IT educations in a broad sense - information systems/informatics, engineering with a focus on computers and IT, and media and digital technologies programs. According to the Swedish Higher Education Ordinance (Högskoleförordningen), science and a critical approach are central quality aims and an important part of the educational content at the higher education level. Furthermore, in the Swedish Higher Education Authority's (UKÄ) new quality assurance system for higher education, gender equality is one of several quality aspects that are being measured. In the planned study we are interested in exploring questions of what science means in Swedish higher IT educations, how it is practiced, and whether the current science practices in Swedish IT educations are enough to prepare students for the challenges they will face as practitioners in a society which is increasingly digitalized in complex ways, and in which the digital and the social are increasingly, and intimately, entangled. In these explorations we will use feminist technoscience as a resource that can provide guidance for how to make a difference. A central concern in feminist technoscience is knowledge processes, in terms of the development of scientific knowledge, but also in terms of the design of technologies, and the implicit and explicit knowledge of organizational and social structures, practices and hierarchies that are inscribed into technologies [27], [49]. Researchers within the field have shown how the development of knowledge is intimately related to how the involved actors (researchers, designers, users etc.) are implicated in social and material relations, including those of gender, ethnicity, class and sexuality [18], [30], [42], [49], [52]. Feminist technoscience is inspired by constructionist approaches, and a central point of departure is that neither technology nor gender is understood as fixed or given. Rather, technology is understood as "contingently stabilized and contestable". While the research field addresses a range of technologies, here we are interested in the technoscience processes that concern digital technologies, both in terms of the development of scientific knowledge and in terms of processes of design and use. Feminist science and technoscience scholars have been studying technoscience use, design and development practices, as well as the consequences of these practices, over several decades [10], [23,24], [20], [4], and have much to contribute to more mainstream approaches, which have focused on other aspects of science practices, both in terms of how to understand and theorize these problems and in terms of how they can be dealt with. Feminist technoscience constitutes a ground for posing critical questions about scientific practices, about researchers' positioning, about consequences of these practices for different actors, and about power issues related to knowledge making and scientific practices. 
A central point of departure for feminist technoscience is that science and technology are entangled with social interests, and that the involved researchers and knowledge developers must be understood as politically and ethically responsible for the practices and interventions that research may give rise to [52]. The aim of this study is thus to explore how feminist technoscience can contribute to challenging existing science practices and to a critical approach, while at the same time serving as a theoretical resource for the integration of gender equality in Swedish higher IT educations in a broad sense. Exactly what the term science practices means differs between disciplines, but our view of scientific practices is based on the use of this term in the research field of feminist technoscience, in which scientific practices are far more wide-reaching than those that take place in laboratories [26], [28]. The main research question is: How can feminist technoscience be a part of scientific practices and a critical approach in Swedish higher IT educations? This overarching question is broken down into three sub-questions: (1) What are the scientific points of departure in Swedish higher IT educations? (2) What are the possibilities and hindrances for an integration of gender equality in Swedish higher IT educations? And (3) how can feminist technoscience make a difference in the work with scientific practices and gender equality integration in Swedish higher IT educations? --- Background The background for our interest in gender equality and its relations to digital technologies is that these technologies are becoming ever more ubiquitous and increasingly affect even the fine-grained parts of current societies and individuals' lives; while they solve some existing problems, they also give rise to new challenges [45], [44]. Some interpret this development as a fourth industrial revolution [46] (World Economic Forum, 2016), or as "a second machine age" [9], and then refer to how digital technologies such as 3D-printing, big data, artificial intelligence, robotics and automation, in combination with demographic changes, urbanization and globalization, merge and amplify each other, and are expected to affect all parts of society in a disruptive way [45]. Whether or not this is a revolution, it indicates a world of increasing complexity, in which digital technologies and relations play an important part, both in constituting that complexity and in being expected to contribute to solutions. Researchers have underscored that technologies are formative and do not merely mirror an existing social order, but are designed in entangled relations of various agencies, and they reproduce existing social, economic, cultural and political relations - including gender, ethnicity and class [27], [48,49], [7]. Consequently, technologies make some ways of acting, being, and living possible, and make other activities and ways of being and living harder [29], [52], [49], [37], something which contributes to making some identities, positions and parts of the world visible while others are made invisible [8], [29]. Hence digital technologies must be understood as inextricable from other relations, practices, and structures of societies [52], [7], [49]. 
The actors involved in designing and developing digital technologies do this in a world that is increasingly complex, and in which these technologies are more and more entangled with other parts of societies, including gender relations. Insights from research in feminist technoscience underscore that the processes of scientific knowledge, as well as the design and development of technologies, are intimately intertwined with social issues - the social, the technological and the scientific are understood as knitted together in a seamless web of relations [49]. Researchers in the field also explore issues concerning consequences of technoscience practices, and argue that researchers, designers and developers must be understood as responsible - and accountable [3] - for the consequences of the technologies they contribute to shaping [52]. This requires that researchers and practitioners be prepared for this, for instance in terms of an ability to critically reflect on digital technologies' reproduction of problematic power relations and structures, their entanglement in power relations and their consequences for different actors - what the technologies do. These designers and developers - IT experts who often have a formal university degree of some sort - are shaped during their education. These higher IT educations prepare the students - who are the IT experts and decision makers of tomorrow - for professional practice. During higher education the disciplinary knowledge and traditions concerning which problems are interesting and possible to solve, what is doable, how the subject area is defined, and the view of what approaches and methods are useful in a specific situation, are communicated [25], [6], [36]. From the point of view of feminist technoscience, the design and production of science and technology cannot be distinguished from the networks, structures and practices in which they are enmeshed, so from this perspective, the issue of how to better prepare students in IT educations for their professional activities in an increasingly complex world is ultimately a matter of technoscience practices [26], [19,20], [4], [52]. It is a matter of how the design of technosciences is entangled in existing power relations, practices and structures, of the positioning of the researchers, and of the need for researchers to be aware of their responsibility for the possible consequences of technoscience practices and interventions. In this landscape of increasing digital complexity, constituted by what Sørensen [45] discusses as combinations of digitalization, distribution and scale, we are faced with new challenges at the crossroads between disciplines. These questions concern issues of who is included and excluded in the design and use of digital technologies [14], [35], the unintended inscription of gender stereotypes into seemingly gender-neutral digital technologies [34], computer ethics [1], care in technoscience practices [13], and digital technologies in relation to environmental sustainability [31] and the Anthropocene [46], just to name a few. This necessitates the possibility of asking questions that might require wider approaches than are currently possible within disciplinary boundaries, and that rather require multidisciplinary approaches [45], [50], [2]. 
In this situation we view feminist technoscience - with its focus on entangled practices in which humans are deeply and ontologically related with the social and material world, and on the gendered and ethical issues that arise in these practices [4], [41] - as a resource for asking complex but pressing questions. --- Theoretical Framework For the study we will take feminist technoscience as our analytical point of departure [20], [52], [49], [38]. Feminist technoscience can be understood as a knowledge field that is part of the larger field of feminist studies, and borrows theoretical inspiration from feminist science scholars such as Donna Haraway [20,22], Sandra Harding [23] and Karen Barad [3]. Åsberg and Lykke [52, p. 299] write that "Feminist technoscience studies is a relentlessly transdisciplinary field of research which emerged out of decades of feminist critiques. These critiques have revealed the ways in which gender, in its intersections with other sociocultural power differentials and identity markers, is entangled in natural, medical and technical sciences as well as in the sociotechnical networks and practices of a globalized world". Feminist technoscience concerns the application of feminist science critique and analysis to scientific and other knowledge practices in order to explore the relations between feminism and science, and what they can learn from each other [52]. Moreover, technology and gender are viewed as mutually shaped, that is, technology is both a source and a consequence of gender relations [ibid.]. Latour's [27] statement that "technology is society made durable" underscores how existing sociopolitical hierarchies and relations are inscribed into technologies, which then contribute to the (re)production of, for instance, gender relations. An important point of departure is that even so-called pure basic science is entangled in social interests, and that the involved researchers and knowledge developers must be understood as politically and ethically responsible for the practices and interventions that research may give rise to [52]. Feminist technoscience is a critical approach and underscores that technosciences are often used to advance capitalist interests [ibid.], but an important focus is that it does not have to be this way. Feminist technoscience concerns both technological and scientific (technoscience) practices in general, and analyzes the design and development of technological artefacts and systems in the same way as science practices are analyzed. One central issue concerns how researchers' and other actors' situatedness affects their knowledge practices [19]. de la Bellacasa writes "That knowledge is situated means that knowing and thinking are inconceivable without a multitude of relations that also make possible the worlds we think with. The premise to my argument can therefore be formulated as follows: relations of thinking and knowing require care" [13, p. 198]. Other foci are how power relations affect who is and who is not included in technoscience practices [24], [14], how technosciences such as digital technologies contribute both to the reproduction of problematic social, economic and material structures and to their destabilization [27], [8], problematic categorizations and representational practices [8], [3], [39], and power/knowledge in technoscience practices [17]. 
Feminist technosciences underscore that gender science is not only about relations between women and men, but also about understanding agency, bodies, rationality and the boundary-making between e.g. nature and culture in technoscience practices [52]. The theoretical discussions in the field of feminist technoscience in recent years have centered on a number of 'turns', such as the posthumanist, materialist and ontological turns [52], and the term Anthropocene is also discussed [46]. These ideas have been used by a number of researchers in order to explore how gender and other aspects of reality are inscribed into information technology [5], [40], the accountability of designers, and strategies for designing without inscribing fixed or naturalized notions of gender into designs [47], entanglements of humans and machines [41], [16], sociomaterial relations in participatory design methods [15], gendered discourses in IT educations [12], and legal, ethical, and moral questions that surround security technologies [43]. These researchers focus on how, in design and use practices, humans are entangled with materialities (technological and other), and how sociopolitical realities such as gender, ethnicity and class are inscribed into technologies, which in turn reproduce these realities. They explore how this takes place and what its consequences are, and focus on developing less problematic alternatives. The works of these researchers are often published in journals with an interdisciplinary scope, rather than in mainstream disciplinary journals, something which probably contributes to the fact that this knowledge is relatively unknown in related research fields such as the more mainstream information systems (IS) field. In mainstream IS journals some of the ideas of feminist technoscience are discussed under the umbrella term of sociomaterialities [e.g. 33], [11], [32]. This research is based primarily on socio-technical systems theory, actor network theory, and practice theory [11], and less on feminist technoscience, but the works of Karen Barad [3,4] are nevertheless central. Consequently, these discussions mostly engage with the posthumanist ideas of feminist technoscience and touch upon their consequences for information systems design, but do not engage with the feminist concerns that are in focus in feminist technoscience. Here we argue that the feminist focus on who is involved in technoscience practices, and how the consequences of technoscience practices affect different bodies differently, would also add important insights in related disciplines. For the planned research application we argue that the area of feminist technoscience is relevant for contributing to scientific practices and gender equality in IT educations, as digital technologies today constitute an increasingly integral part of society, both in terms of infrastructural preconditions for societal functions and services, and in terms of how social development is highly affected by the innovation and design of digital technologies. In several respects these technologies contribute to solving existing problems, and to a better life for many individuals, but they also reproduce problematic structures, and cause new problems and challenges. 
This points to the importance of working with issues of scientific practice that are in focus in feminist technoscience, such as technological consequences, the responsibility and accountability of the designers of digital technologies, and the relations of gender, sexuality, ethnicity and power in which design practices are entangled. --- Methodological Approach The planned study will be conducted as a qualitative field study, in which we study how teachers and students in Swedish higher IT educations understand and work with scientific practices and a critical approach, and how they work with gender issues - whether this is done in terms of gender equality or also in terms of gender science as a ground for scientific practices, and if so, how this is done. The field study will be conducted through interviews, but also through the study of documents such as course syllabuses, course literature lists and other documents that describe how the teaching in those areas is planned and conducted. Our starting point for the practical implementation of the study is the Swedish Information Systems Academy (SISA: http://sisa-net.se). We are also part of a recently initiated Swedish network for feminist technoscience, through which we will be able to find more colleagues with this kind of competence. These colleagues work with higher IT educations such as information systems/informatics, engineering with a focus on IT, and media and digital technologies programs, located at both philosophical and technical faculties. Our plan is not to evaluate whether representatives of Swedish IT educations work with gender science as scientific practices, but rather to explore how this is currently done, ideas for how it can be done, and how feminist technoscience can make a difference compared to more mainstream approaches to science. This exploration of current competencies and practices in the area will be combined with the study of relevant research literature. Since the involved researchers work with feminist technoscience, this will constitute an analytical point of departure, with the aim of identifying different ways of working with feminist technoscience in higher IT educations, apart from working with gender equality and the recruitment of women to male-dominated technical educations. Our plan is to start the work by exploring how scientific practices and a critical approach are understood and practiced in Swedish higher IT educations, through collecting central policy documents - both national and local - and through interviewing teachers and students at some of these educations. Then we will proceed by mapping the Swedish higher IT educations which in some way work with gender and feminist technoscience, and interview teachers and students in those educations with a focus on how this is done and what it contributes. Through this we will obtain information about how working with feminist technoscience in higher IT educations differs from, and might contribute to, the work with scientific practices and a critical approach from a more traditional perspective. --- Expected Results and Contributions We - the researchers who plan this study - position ourselves at the crossroads between feminist technoscience, informatics, information systems (IS), and media technology. 
As underscored by, for instance, Walsham [50], who works in the information systems (IS) field, this field has traditionally focused on helping organizations to use information and communication technologies more effectively, with the aim of improving organizational effectiveness in the service of capitalist interests. Walsham [ibid.] argues that researchers in the IS field should focus more on how digital technologies can be developed and used in order to contribute to a better world, in a way that also serves other interests than those of efficiency and effectiveness. Ethical as well as gender issues related to information systems are not entirely absent from the IS field, but are nevertheless rather marginalized, as discussed by Adam [2]. Feminist technoscience is a research field that focuses simultaneously on scientific practices and their embeddedness in social and political relations, and on the practical, political and ethical consequences of these practices [52]. In this application, the significance and planned novelty concern bringing into the related fields of informatics, information systems and media technology insights into how gender and knowledge practices are related to both scientific and design practices, knowledge that can also be used in Swedish higher IT educations. These issues are relatively unknown in, for instance, the field of information systems, and would add significantly both to the current discussion on how the IS field should focus on contributing to a better world rather than only on improving efficiency and effectiveness in capitalist interests [see 50], and to the discussion about "sociomaterialities" [e.g. 33], [11], [32], which has introduced the posthumanist ideas embraced by feminist technoscience into the IS field but mostly bypasses the feminist concerns. We argue that this discussion would benefit significantly from acknowledging the research that over the years has been done in the field of feminist technoscience, albeit in interdisciplinary journals and conferences rather than in mainstream IS journals, and also from acknowledging the full meaning and relevance of the posthumanist ideas now being discussed in the mainstream IS field, that is, of how the entanglement of the social and the material also includes the entanglement of sociopolitical relations such as gender, ethnicity and class in the design and use of information systems. --- Discussion This short paper has presented a planned study in which we aim to explore how gender science can contribute to science practices and a critical approach, while at the same time serving as a theoretical resource for the integration of gender equality, in Swedish higher IT educations in a broad sense - information systems/informatics, engineering with a focus on computers and IT, and media and digital technologies programs. The main expected result of the study is to bring into the related areas of information systems, informatics, and media technology the insights of feminist technoscience, of how an analytical focus on gendered bodies matters in technoscience practices.
According to the Swedish legislation for higher education (Högskoleförordningen), science is a central quality aim for higher education. In the Swedish Higher Education Authority's (UKÄ) new quality assurance system, the integration of gender equality is one of several quality aspects that are being measured. This paper concerns a planned study with the aim of exploring how feminist technoscience can contribute to challenging existing science practices and to a critical approach, while at the same time serving as a theoretical resource for the integration of gender equality in Swedish higher IT educations. Feminist technoscience makes possible critical questions about scientific practices in both educational contexts and working life, about researchers' positioning, about consequences, and about power issues. Posing such questions is central in IT educations, since we live in a society in which digital technologies increasingly constitute preconditions for a working reality, and both reproduce existing structures and form new patterns. In this reality it is central to ask whether current science practices are enough, and how feminist technoscience can make a difference, in those educations that produce the IT experts of tomorrow. The study will be conducted as a qualitative field study with a focus on how teachers and students in Swedish higher IT educations practice science, a critical approach, and feminist technoscience in their educations.
Introduction Social justice is commonly viewed as one of the social work profession's core values. Thus, social workers should challenge social injustice by working for and with oppressed individuals and groups and by addressing issues of poverty, unemployment and discrimination (NASW, 2021; Marsh, 2005). Exploring social work students' views to understand how equipped they are to pursue the social justice mission of the profession should therefore be of central academic and practical interest. There are, however, surprisingly few empirical studies focussing on social work students' views on social justice-related issues from a comparative viewpoint. Such an approach can help clarify whether future social workers are driven by common normative views that are important for their readiness to challenge social injustice, or whether, instead, variations in factors such as institutional, cultural and socioeconomic circumstances at a macro level shape the views of social work students. Such knowledge is thus thought to be of wider international interest from a number of perspectives, including social work education and student exchange, and, in a broader context, for the development of social work as a profession and for discussing the prerequisites for shared international notions of social work. This article therefore explores the views of social work students studying in different socio-economic contexts and welfare regimes in relation to some key aspects assumed to be vital for the profession. Our specific research questions are as follows: Do social work students' motivations to study social work and their understandings of poverty seem to comply with the profession's goal of advancing social justice? Are there differences in social work students' views between the different jurisdictions as a result of varying institutional and structural conditions affecting social work education and practice? This article describes the social work profession as part of the broader welfare policy context of Finland, the Republic of Ireland and Northern Ireland, and reports findings from student surveys, with the aim of exploring the differences and similarities of social work students' views in Finland and across the island of Ireland. With reference to our aim of exploring possible commonalities in views, the jurisdictions chosen are taken to represent substantially different contexts, at least from a European perspective. Two indicators related to social justice in a student context are also discussed and utilised: general perceptions of poverty and motivations for studying social work. --- The social work profession: a common purpose? Promoting social justice underpins the origins of social work, but, over the last century, the profession has struggled to effectively meet this challenge (Stoeffler and Joseph, 2020). According to the Global Definition of Social Work, social justice is a core principle of the profession (International Federation of Social Workers, 2014) and is recognised as one of the main professional values and a key aim and aspiration of the profession (Austin, 2013; Watts and Hodgson, 2019). Social justice is one of the main organising values for social work practice and education (Marsh, 2005; Sewpaul, 2014; Lundy, 2011; Onalu and Okoye, 2021) and is a 'shared ethical ground for social workers' across the world (Postan-Aizik et al., 2020). 
A commitment to social justice specifically relating to addressing issues of poverty is outlined in the profession's ethical codes of the jurisdictions included in this study (Talentia Union of Professional Social Workers, 2019; British Association of Social Workers, 2021; NASW, 2021), in practice frameworks (Department of Health, 2017) and standards (Northern Ireland Social Care Council, 2019), and in social work scholarship (Sewpaul, 2014; Lundy, 2011; Onalu and Okoye, 2021). Rising levels of inequality and poverty globally (Oxfam, 2021) mean the pursuit of social justice by the social work profession is particularly urgent and necessitates working in solidarity with those who are oppressed to alleviate poverty (NASW, 2021). According to the Department of Health of Northern Ireland (2018), 'poverty is a social injustice' which needs to be tackled by the social work profession to enhance social wellbeing. This requires an understanding of the systemic nature of poverty and rejecting individualistic perceptions of the causes of poverty. --- Perceptions of poverty It has often been argued that attempts at alleviating social injustices are connected to a worldview in which poverty is caused by unjust structural features in society rather than individual flaws and shortcomings of people themselves. Theoretical discussions on the causes of poverty are traditionally divided into individual and structural categories (Delavega et al., 2017) or into three categories: individual, structural and fatalistic (Feather, 1974). Individualistic approaches refer to the importance of the behaviour of the poor, such as poor decision-making and a poor work ethic. In the research literature, individual causes have also been linked to the concept of individual blame (Van Oorschot and Halman, 2000; Larsen, 2006; Kallio and Niemelä, 2014), attributing the causes of poverty to the individual, created by idleness and low morale. A structural approach to poverty relates to factors outside the individual's control (Cozzarelli et al., 2001). Here, poverty is seen as the result of low wages, high unemployment, lack of educational opportunities and other structural features which an individual is unable to influence. According to fatalistic explanations, poverty is the result of fate, such as illness or poor luck. In this article, we focus empirically on students' attitudes towards individual and structural causes, an approach common to previous attitude studies on poverty. Based on available research, social workers and social work students in various countries more often support structural rather than individual explanations for poverty (Kus and Fan, 2015), findings which have been explained by, for example, social work education, professional ethics and exposure to various social problems, like poverty (Clark, 2007; Byrne, 2019). Notwithstanding, within-group variations in views have not been uncommon (e.g. Blomberg et al., 2013). Assumptions regarding cross-national variation in views on the reasons for poverty have commonly been based on an 'institutional logic', assuming that the values and attitudes of individuals are, on average, affected by the welfare state model of their country. From this point of departure (cf. Larsen, 2006), one might assume higher support for individual poverty explanations on the island of Ireland than in Finland (cf. further below).
However, findings so far have not provided any clear-cut support for such assumptions as regards the general public's views in the jurisdictions included in their study (see Kallio and Niemelä, 2014), whilst there have been national variations in views between Nordic countries (Blomberg et al., 2013). --- Commitment to social justice and other study motivations Another way of approaching students' commitment to social justice is by investigating to what extent their motivations for choice of career are related to core values of social justice, human rights and equality (cf. Bradley et al., 2012). In previous research, motivations for studying social work have often been divided into intrinsic (or ideological), extrinsic (or instrumental) and those related to personal life experiences, such as childhood adversities (Byrne, 2019). Intrinsic motivations, which emphasise different ways of helping people, especially those in vulnerable and disadvantaged positions, have been regarded as closely linked to social justice (cf. Hackett et al., 2003; Furness, 2007). Several previous studies have demonstrated that a majority of social workers and/or social work students embrace 'altruistic', 'ideological' or 'intrinsic' values (cf. Hackett et al., 2003; Furness, 2007). Nevertheless, some studies have demonstrated that more extrinsic, instrumental motivations, linked to obtaining a secure position in the labour market, favourable career prospects, a good salary or a respected status in society, might also be of importance (Puhakka et al., 2010; Stevens et al., 2010). This speaks in favour of also considering the prevalence of motivations other than intrinsic ones amongst students. In addition, difficult personal life experiences, particularly those relating to violence and psychological problems in the family during childhood, have led students to estimate that their family background influenced their career choice (cf. Sellers and Hunter, 2005). Further, positive experiences of being a social work client (Hackett et al., 2003, p. 170) have been linked to the choice of the (social work) profession and have also been associated with a desire to influence and improve social work practice. It could be assumed that all these types of motivations could be affected by the country context in general and/or the position of social work in the student's country (cf. further below). In addition to an impact of differences in welfare systems and underlying values, for example on poverty and inequality (which might lead to cross-country variation concerning the importance of intrinsic motivations), the relatively high status of social workers in the Nordics (Meeuwisse and Swärd, 2006, pp. 216-17) could lead to external motivations being ranked as fairly important in Finland. Contextual, socio-economic, country differences could also affect students' previous personal experiences of social problems and/or social work, and thus their study motivations. --- Welfare states of Finland, the Republic of Ireland and Northern Ireland The welfare states of Finland, the Republic of Ireland and Northern Ireland (as part of the UK) have often proven somewhat challenging to typologise.
Esping-Andersen's (1999) revisit of his seminal Worlds of Welfare describes the Finnish welfare state as Social Democratic (cf. also Ferragina and Seeleib-Kaiser, 2011), characterised by the ideologies of universalism and equality, under which services and benefits are generous and accessible to all, the state takes on many aspects of traditional family responsibility, and social stratification is low. To achieve this, full employment and high taxes are essential (cf. Alestalo and Kuhnle, 1986). The UK is typically classified as a liberal welfare state (Ferragina and Seeleib-Kaiser, 2011), influenced by the thinking that higher levels of benefit will reduce incentives to work. Therefore, in this regime, sourcing of private welfare or insurance from the market is encouraged by the state, either actively through subsidisation of private welfare schemes, or passively by keeping social welfare benefits at a modest or residual level. Welfare spending in this regime is at the lowest end of the spectrum, with high-threshold, means-tested, targeted and time-limited benefits aiming only to ameliorate poverty. Consequently, taxation, redistribution of income and social rights are low, whilst income inequality and social stratification are high. Like Finland, the UK is not considered a pure example of its welfare regime. Rather, it is modestly liberal with some universalistic provisions such as the National Health Service. Esping-Andersen originally classified the Republic of Ireland as a liberal welfare state (1990) and later failed to categorise it at all (1999). Classification of the Irish welfare state has proven challenging, often situated between the Liberal and Christian Democratic paradigms (Ferragina and Seeleib-Kaiser, 2011). The Christian Democratic regime is characterised by intermediate levels of welfare spending arising from a balance between state and private provision of welfare. Taking up a midway position between Liberal and Social Democratic, Christian Democratic welfare states combine both social insurance and social assistance schemes. Lorenz (1994) has discussed the position of social work in relation to welfare state typologies, departing from a slightly different categorisation by Leibfried (1992). According to this classification, Finland belongs to the Scandinavian welfare state model, whilst (the Republic of) Ireland is placed within what is called the 'rudimentary welfare model' and the UK is placed within the residual welfare model; Meeuwisse and Swärd (2006), in turn, place both Ireland and the UK within the residual model regarding social work. In the Scandinavian model, social workers are mainly employed in the public sector as part of a multidisciplinary network of services, aiming at minimising the stigmatising effects of decisions and measures, giving social workers a relatively high social status. Social work within both the residual and the rudimentary model seems, in comparison with the Scandinavian model, to be more focused on measures directed at vulnerable population groups, whilst one difference between these latter models, at least historically, seems to have been related to differences in the scope of social work as part of the public sector in relation to other sectors (Lorenz, 1994).
--- Welfare and social work in Finland Taking a closer look at the Finnish welfare state, it has, like its Scandinavian neighbours, been characterised by a strong reliance on publicly organised and high-quality health and social services covering the whole population (Anttonen and Sipilä, 1996). Social work, too, is performed as part of these services, and thus the vast majority of social workers are employed as civil servants and are given a wide range of tasks (cf. Lähteinen et al., 2017). Social workers are required to have a master's degree from a university, with social work as the major subject or equivalent (Valvira, 2017). This means that social work education in Finland is both extensive and strongly research-orientated, in both an intra- and an extra-Nordic perspective (cf. Juliusdottir, 2003), and it attracts a high number of applicants, often with high grades from secondary education. Finland was hit more severely by the economic crises of the early 1990s than other Nordic and European countries were, in part due to the cessation of trade after the collapse of the Soviet Union. As a result, the fairly comprehensive Finnish welfare system became subject to various reforms and retrenchments, resulting in rising income inequality amongst other things (Kananen, 2016). Yet, as compared to the UK, welfare cuts in Finland have been rather subtle, gradually weakening social security benefits by not raising income transfers at the same rate as rising wages, combined with various tax cuts as well as the introduction of stricter activation measures (Outinen, 2012). Public responsibility for services has also been narrowed down and subjected to various New Public Management (NPM) inspired changes and reforms (e.g. Blomberg and Kroll, 2017), but the responsibility for a wide range of mandatory social services, including social work, has remained within the public sphere. --- Welfare and social work on the island of Ireland The social welfare systems of the Republic of Ireland and Northern Ireland, whilst different (cf. above), have many commonalities. The Irish welfare system developed in close alignment with the UK following the 1922 partition of Ireland; however, comparable levels of protection were not offered to Irish citizens compared to residents in Northern Ireland (Fitzpatrick and O'Sullivan, 2021). From 1979, the UK Conservative Thatcher government began a protracted shift away from welfare protection towards activation measures that required the unemployed to qualify for support (Adler, 2008). Politics of austerity following the financial crash of 2008 led to a suite of measures designed to reduce the welfare burden, dovetailing with the neo-liberal policies of the Conservative Party, which has dominated government in one form or another since 2010. Activation policies for welfare entitlement aimed at enhancing employability and labour market participation for the unemployed did not become a feature of the Irish welfare system until the financial crash of 2008 (McGann et al., 2020). To the current day, social work practice principles and theories in both jurisdictions of Northern Ireland and Ireland continue to occupy and flow from a shared professional foundation, but simultaneously there are some clear examples of different approaches and emphases within specific fields of practice (Wilson and Kirwan, 2007). Social work in Northern Ireland has, for example, distinct differences from the rest of the UK, not least because of the thirty-year history of conflict (cf.
Heenan and Birrell, 2018). Differences in the political and social context are also reflected in the administration of social policy. In contrast to the rest of the UK, Northern Ireland has had an integrated health and social service for almost fifty years, and this has contributed to increasing the numbers and prestige of social work as a profession (Pinkerton and Campbell, 2002). Despite differences in welfare provision compared with Finland, most social work in both Northern Ireland and the Republic of Ireland is today provided by statutory services, and both jurisdictions require a minimum of a degree-level qualification to practice in the profession (McCartan et al., 2022). In conclusion, whilst it is not always entirely clear what the possible effects of the various differences between Finland and the island of Ireland may be when it comes to social work students' views on social justice and their importance as part of their choice of profession, our assumption is that an effect of the structural and institutional differences between the respective jurisdictions would be reflected in the patterns of views revealed in our surveys, which will be of considerable interest. If differences were to be minor, however, the assumption of the dominance of common views amongst students in the social work profession would gain support. --- Materials and methods The present study is based on a common interest amongst a group of researchers from Finland and the island of Ireland in increasing knowledge of the factors behind social work students' thinking on issues of social justice. The analyses are based on two national surveys (for details, see below) with similar ambitions as regards views on issues of social justice, which have been carried out in the respective jurisdictions. Through recurring discussions amongst the researchers involved on the questions to be included and their interpretation in the respective societal contexts, the ambition has been to guarantee sufficient conformity between the questions to justify a comparison. The questions utilised are based on similar theoretical assumptions regarding study motives (e.g. Stevens et al., 2010) and perceptions of poverty (e.g. Van Oorschot and Halman, 2000) and, whilst not identical, they were chosen to match each other as far as possible with the intention of capturing intrinsic, extrinsic and personal motivations, and general perceptions of poverty, respectively. In order to further strengthen our assumptions, principal component analyses (not shown) were performed to determine whether the respective questions used to measure study motivations load onto three different components (intrinsic, extrinsic and personal motivations) in both countries, as should be expected. This was clearly found to be the case. However, since the questions used are not identical, the analysis focuses on the respective patterns of motivations and perceptions amongst the students in Finland and the island of Ireland, and the comparison focuses on the relative importance of different types of motivations, and of different types of poverty perceptions. The Irish data are based on The Social Work Student Survey, conducted in 2018 by The All-Ireland Social Work Research and Education Forum in order to establish the demographic characteristics of applicants and explore beliefs about politics, society and factors informing their motivation to become a social worker.
All students in their first year of study 2018-2019 were invited to participate in an anonymised online survey. (Students at UCC were in their third year of a four-year degree, but this was equivalent to the other social work pathways.) The studies received ethical approval from the research ethics committee in each participating institution, and students were provided with study information so they could give informed consent. Although there are some differences in the health and welfare systems, there are close parallels between the two systems in the North and South of Ireland: the academic routes into social work, the role that religion plays in education, welfare modelled on a centralised, insurance-based system, and the impact of austerity. Each year, a number of social work students educated in Northern Ireland will work in the Republic and vice versa. For these reasons, we chose to merge the student data and treat it as a single cohort of students across the island of Ireland. The Finnish national social work student survey data were collected in the autumn of 2019. The survey was sent by e-mail to students majoring in social work at the Universities of Helsinki, Jyväskylä, Lapland, Eastern Finland, Tampere and Turku, who had registered as present during the autumn term of 2019 and who had given permission to the student registrar to provide their e-mail addresses for research purposes. According to Finnish practice, separate ethical approval from a research ethics committee is not required for this type of study. Two reminders were sent to the students. Since the Finnish data included students at all levels, not only first-year students as in Ireland, analyses were performed by study level (results not shown). No major differences were detected when it comes to motivations and perceptions, with the exception of intrinsic study motivations: had the Finnish data consisted only of first-year students, the importance of intrinsic motivations would have been even more similar to the Irish results. Motivation to study social work in Ireland was assessed using a nine-item, six-point Likert scale constructed from themes in the literature. Respondents were asked to rate the nine statements from one to six (one 'not important at all' to six 'extremely important'). Six of these items were considered to be similar to those asked of the Finnish students (see Table 1 for the wording of the Irish and Finnish items). The Irish items were recoded as a three-point Likert scale (very important, fairly important and not important at all) to be more comparable with the Finnish data, which used a three-point Likert scale. Further, two of Delavega's items belonging to the 'Blame Index' (Delavega et al., 2017) were considered similar enough to be compared to the individual and structural poverty items developed by Van Oorschot and Halman (2000) included in the Finnish survey. All students were asked to rate statements concerning individual and structural causes of poverty on a five-point Likert scale. The methods used consisted of direct distributions and cross-tabulations. For statistical testing, we used chi-squared analysis. Independent variables that were comparable between Finland and Ireland and included in the cross-tabulations were age, current family structure and political party preference.
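To make the recoding and testing steps concrete, the following minimal sketch (in Python, using pandas and scipy; not the code used in the study) illustrates how a six-point Likert item could be collapsed into the three categories described above and cross-tabulated against a background variable with a chi-squared test. The column names, cut points and data are hypothetical.

```python
import pandas as pd
from scipy.stats import chi2_contingency

# Hypothetical survey data: one motivation item (rated 1-6) and one background variable.
df = pd.DataFrame({
    "help_people": [6, 5, 3, 2, 6, 4, 1, 5, 6, 3],
    "has_children": ["yes", "no", "no", "yes", "no", "yes", "no", "yes", "no", "no"],
})

# Collapse the six-point scale to three categories, mirroring the recoding
# described in the text (assumed cut points: 1-2, 3-4, 5-6).
df["help_people_3pt"] = pd.cut(
    df["help_people"], bins=[0, 2, 4, 6],
    labels=["not important", "fairly important", "very important"],
)

# Cross-tabulate and test for association with a chi-squared test.
table = pd.crosstab(df["help_people_3pt"], df["has_children"])
chi2, p, dof, expected = chi2_contingency(table)
print(table)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}, dof = {dof}")
```

The same pattern would extend to the other motivation items and to the poverty-perception items rated on the five-point scale.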
--- Results The total sample of students in the Finnish study was 608 (response rate of 36 per cent), of which the majority were female (92 per cent), whilst the largest age group consisted of those aged between twenty-three and thirty years (39 per cent). Sixteen per cent of students were aged twenty-two or under, a further 25 per cent were aged thirty-one to forty years, and 20 per cent were forty-one years or older. One-third of the students were living with their children (32 per cent). The total sample of students in the Irish study was 279 (response rate of 54 per cent). Here too, a majority of students were female (82 per cent) and more likely to be mature students aged between twenty-three and thirty years (41 per cent) or over thirty years old (37 per cent), reflecting the relevant graduate entry pathways that are available. Twenty-two per cent were aged twenty-two years or under, 20 per cent were aged thirty-one to forty years, 15 per cent were forty-one to fifty years, and only 2 per cent were fifty-one to sixty years old. One-third of students were parents (32 per cent). Figures 1 and 2 show social work students' motivations to study social work in Finland and Ireland, respectively. The figures show that intrinsic motivations to study social work were widely embraced both in Finland and in Ireland. In both countries, a majority of the students stated that helping people and working for social change (contribute to solving societal grievances/help people to overcome oppression) were very important drivers for studying social work. Extrinsic motivations were also important for the students from Finland and Ireland, but not as important as intrinsic motivations. About half of the respondents in both countries stated that motivations related to good employment prospects/stable jobs and becoming a professional, respectively, were very important motivations for starting to study social work. Amongst Irish students, personal experiences were nearly as important a motivation for entering social work education as extrinsic motivations. In Finland, this motivation was, in contrast, far less common than extrinsic motivations. Having had personal experience of social work was, in turn, the least important type of motivation both in Ireland and in Finland. The order of importance of the various motivations was the same in Finland and Ireland. Concerning perceptions of poverty, students in both Finland and Ireland supported structural explanations and disagreed with individual explanations (Table 1). In both countries, only a very small share of the students endorsed individual explanations of poverty. Next, we focus on the cross-tabulations between motivations to study social work and some relevant background variables. Motivations related to personal experience of social work were excluded from the analysis because of the small number of observations. According to the results in Table 2, students with children in Finland were more motivated by extrinsic factors (the desire to obtain a professional qualification) (62 per cent) than other students (53 per cent). Furthermore, their choice of career was less often motivated by first-hand personal experiences of social problems (13 per cent) than it was amongst students without children (20 per cent). In Ireland, in turn, students without children were less driven to study social work by personal experiences. Political party preference, in turn, was related to some study motivations in the Finnish data.
Finnish students with a conservative party preference less often perceived that addressing societal grievances was a very important reason for studying social work (59 per cent) compared to students with a non-conservative party preference (73 per cent). Conservative voters were also less motivated to study social work by first-hand experiences of social problems (7 per cent) than other students (20 per cent). Turning to students' perceptions of poverty, we excluded 'individual causes' from the cross-tabulations, since hardly any students in either country endorsed individual poverty explanations. According to the results in Table 3, political party preference was connected to students' poverty perceptions in Finland, but not in Ireland. Those who had voted conservative were less likely to endorse structural explanations (66 per cent) as compared to others (79 per cent). Neither age nor current family structure was connected to perceptions of poverty. --- Discussion This article set out to study social work students' motivations for becoming a social worker, in the different socio-economic and welfare state contexts of the island of Ireland and Finland respectively, and their understanding of poverty, key aspects underpinning the central commitment of the social work profession to promote social justice. Our results suggest that social work students in different contexts are to a large extent committed to promoting social justice as a general goal; there are no clear indications that social work students' views on these matters are influenced considerably by the respective welfare regimes' varying prioritising of issues of equality; rather, they appear to be influenced by issues of vulnerability and poverty per se, which exist in all welfare regimes. However, the commitment to social justice shown by students is unlikely to diminish the potential challenges for future social workers emerging from a conflict between ideals and real-world conditions, such as financial constraints, undesired political contexts and/or institutional procedures and policy instruments. When sufficient structures and preconditions for helping clients are not in place, this might lead both to job-related moral distress and, in the long term, to impacts on job retention (Mänttäri-van der Kuip, 2016). When it comes to extrinsic motivations, respondents across jurisdictions were similarly motivated by the prospect of becoming a professional and by good job prospects and stability. This seems intelligible, considering that studying social work results in a degree including formal professional qualifications and provides a comparatively favourable position in terms of working-life attachment and job security, as compared to many 'generalist' university educations (Puhakka et al., 2010).
In sum, and whilst remembering that some caution is advisable as our analyses are based on similar, but not entirely identical, questions for the island of Ireland and Finland, the results from our international comparison support the general conclusion that there are important, similar patterns of intrinsic and extrinsic motivations for students' choices to become social work professionals, despite substantial differences in histories, welfare state developments, current policies and societal conditions in the various jurisdictions, and despite variation in political beliefs amongst students. Notwithstanding some effects of varying societal conditions, the results might indicate the persistent prevalence of some basic, general drivers behind students' wishes to become a social worker even in today's turbulent societies. As in most international comparisons, similarities are accompanied by variations, making straightforward conclusions more demanding. In our data, personal experiences stood out as an important driver for students in Ireland, but not in Finland. This might reflect higher levels of economic inequality and poverty and a more severe impact of austerity measures (at least so far) in the jurisdictions on the island of Ireland as compared to Finland. In addition, the results might reflect the fact that gaining admission to social work programmes in Finland is very difficult, with only some 10 per cent of the (best) applicants being enrolled. Since school success correlates with parents' socio-economic status (Kestilä et al., 2019), it seems fair to assume that the differences in personal experience-related study motivations reflect a middle-class background among the majority of Finnish social work students, thus differing from the students in the Irish jurisdictions. This is, however, an assumption we have not explicitly measured in the current study. Further, we also find some within-group differences, mainly in Finland. Overall, however, the different background variables are not of any major importance for students' views. Some twenty years ago, Hackett et al. (2003) concluded in their comparative study of student motivations for choosing to study social work that there was little research comparing social work students in different European contexts. Up until today, that situation does not seem to have changed in any decisive way; our comparative study, too, can only hope to contribute some fairly general observations on the matter. At the same time, new challenges, many of which are international in nature, seem only to have increased the need for comparative studies on social work students' motivations, perceptions and values and their development. --- Limitations and strengths The limitations of this article relate to the comparative contexts for students, whose geopolitical and socio-economic circumstances are both varied and complex, and comparisons are influenced by extraneous variables outside of the measurements applied. However, the strengths of this article tend to outweigh the limitations, as it offers a first opportunity to begin a wider discussion about core social work values and principles and how these align with those who are attracted to work in this career. This will be an increasingly important international discussion for the profession in the decades ahead.
--- Conclusion This article has attempted to compare, in different comparative settings, social work students' views on social justice, a central mission of their future profession, as measured by their motivations to study social work and their understandings of poverty. As a relatively uncharted topic, this work could be scaled up and replicated across other countries, to compare the same factors in a range of geopolitical contexts and test our results more broadly. Our findings point to similarities, rather than decisive differences, in students' views between clearly different social contexts and (socio-)political systems. Thus, there seems to be an intrinsic drive for social justice and alignment with the espoused values of the social work profession amongst social work students. As we have argued elsewhere (McCartan et al., 2022), better insights into social work students' backgrounds, motivations and values can provide educators with important knowledge for developing social work programmes, and this article provides the potential for further international comparisons in these important areas. --- Conflict of interest statement. None declared.
Exploring social work students' views to understand how equipped they are to pursue the social justice mission of the profession should be of central academic and practical interest. There are, however, surprisingly few empirical studies focussing on social work students' views on social justice-related issues from a comparative viewpoint. Such knowledge is thought to be of wider international interest from a number of perspectives, including social work education and student exchange and, in a wider context, the development of social work as a profession and discussion of the prerequisites for shared international notions of social work. This article explores the views of social work students studying in different socio-economic contexts and welfare regimes in relation to some key aspects assumed to be vital for the profession. The results are based on survey data from student cohorts in Finland (N = 608) and the island of Ireland (N = 279).
Introduction The 2000 census revealed that 29.4% of the United States population represents a variety of ethnic backgrounds. The main ethnic groups identified are: Hispanic, African American, Native American or American Indian, Asian, and Native Hawaiian or other Pacific Islander [14]. It has been projected that by the year 2050, the minority population will surpass the majority population [15]. With such a diverse population arises the issue of multiculturalism, or what is commonly known as cultural diversity. Cultural differences are one of the main contributors to disparities in the health care industry with respect to the quality of services provided. Research indicates significant racial and ethnic disparities in access to health care services [13]. The 'Healthy People 2020' initiative, launched by the U.S. Department of Health and Human Services, has emphasized the need to eradicate these disparities and thereby improve the health of all groups [8]. Therefore, it has become necessary to provide "culturally competent" medical care to improve the quality of the health care industry [5]. To provide culturally relevant care, it is necessary to acknowledge patients' cultural practices and take their cultural influences into account. A major challenge in the health care industry is to educate and prepare future nurses with skills in transcultural nursing. This paper discusses the use of virtual humans as patients to teach cultural competence to nursing students. --- Understanding Culture and Cultural Competence According to Chamberlain (2005), culture means "the values, norms and traditions that affect how individuals of a particular group perceive, think, interact, behave and make judgments about their world" [3]. Understanding culture helps us to understand how people see their world and interpret their environment. Culture also influences how people seek health care and how they respond towards health care providers [11]. Nurses must possess the ability and knowledge to communicate with patients and to understand health behaviors influenced by culture. Having this ability and knowledge can eliminate barriers caused by race and ethnicity and help provide culturally competent care. Cultural competence is the ability to interact effectively with people from different cultural backgrounds [4]. To be culturally competent, the nurse needs to understand his or her own world views and those of the patients and integrate this knowledge while communicating with the patients. Nurses need to learn how to ask sensitive questions while showing respect for different cultural beliefs [2][10]. Along with cultural sensitivity, it is also necessary to develop a transdisciplinary, transcultural model that must be taught at the basic level of nursing education [7]. --- Agent Architecture To create a culturally-specific virtual patient we are utilizing the FAtiMA (FearNot Affective Mind Architecture) agent architecture. This architecture creates believable agents in which emotions and personality play a central role [1]. We extended FAtiMA to allow for the cultural adaptation of the agents (see Fig. 1). The cultural aspects are set through the Hofstede cultural dimension values for the character's culture; culturally specific symbols; culturally specific goals and needs; and the rituals of the culture [9]. Agents perceive the outside world through their sensors. The perceived events are then passed through symbol translation. Different cultures perceive events differently based on their symbols.
The symbol translation captures the specificities of communication in the agent's culture. For example, shaking one's head in India means 'yes', as opposed to the US interpretation of 'no'. Once the event has been identified by the agent, the appraisal process is triggered. In the appraisal process the situation is interpreted to enable a valenced reaction. The appraisal process consists of two main components: 1. Motivational System: Calculates the desirability of an event for the agent depending on the agent's needs and drives. If an event is perceived to be positive for the agent's needs, the desirability of the event is high, and vice versa. 2. Cultural Dimensions: These are psychological dimensions, or value constructs, which can be used to describe a specific culture. Cultural dimensions capture the social norms of the culture that the agent is part of. We have used Geert Hofstede's cultural dimensions for India [9]. Geert Hofstede's research gives us insights into other cultures so that we can be more effective when interacting with people in other countries. Hofstede's cultural dimensions are: Power Distance Index (PDI), Individualism (IDV), Masculinity (MAS), Uncertainty Avoidance Index (UAI), and Long-term Orientation (LTO). --- Fig. 1. Culturally-modified FAtiMA The appraisal component then activates an emotion, producing an update in the agent's memory, and starts the deliberation process. The emotional state of the agent is determined by the OCC model of emotions defined by Ortony, Clore and Collins [12]. The deliberation process consists of the intention structure, which determines the agent's goals, intentions and plans. Once an action is chosen, symbol translation is invoked, and the agent translates the action taking into account its symbols before the action is performed by its actuators. --- Methods In collaboration with the nursing department of the University of North Carolina at Charlotte (UNCC), a life-sized virtual patient belonging to the Indian culture is being developed. The Indian culture was chosen due to the large population of Indian students and families in the UNC system. The virtual patient is a 24-year-old Indian girl, Sita. The initial test case will involve Sita visiting a clinic due to an outbreak of Tuberculosis (TB) in one of her classes. In India, the population is immunized against TB. Any subsequent screening for TB gives a positive result due to the presence of the antibodies in the blood. Sita, during her preliminary visit to the clinic, presented with a positive result on the TB screening test. Sita is now at the clinic for her subsequent visit. The nursing students will interact with a life-size projection of Sita. The goal of the students is to receive answers to their list of required questions, with some of the questions eliciting negative desirability based on the cultural dimensions of young Indian females. Sita's personality is based on Digman's Five Factor Model (FFM) [6], with her emotions governed by the OCC model of emotions [12]. The students interact with Sita on a one-on-one basis. Our goal is to design Sita such that she reacts to the student based on the questions asked, how the questions are posed, and the student's body language during the interaction. The interaction will be video recorded and analyzed by a faculty member. The faculty member is able to annotate the recording as they evaluate the student's performance. The annotated video will then be used by the student as a study tool.
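As an illustration of how the appraisal step described above could combine need-based desirability with Hofstede's cultural dimensions, the sketch below gives a minimal Python example. It is not the FAtiMA implementation; the agent's needs, the event representation, the dimension values and the scaling of the cultural modifier are all assumptions made for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class CulturalAgent:
    # Hofstede dimensions on a 0-100 scale (placeholder values, not Hofstede's published scores).
    hofstede: dict = field(default_factory=lambda: {
        "PDI": 77, "IDV": 48, "MAS": 56, "UAI": 40, "LTO": 61})
    # Needs/drives on a 0-1 scale.
    needs: dict = field(default_factory=lambda: {"privacy": 0.8, "affiliation": 0.6})

    def desirability(self, event: dict) -> float:
        """Motivational system: how well the event serves the agent's needs."""
        score = 0.0
        for need, strength in self.needs.items():
            score += strength * event.get(need, 0.0)   # event effects in [-1, 1]
        return score

    def cultural_modifier(self, event: dict) -> float:
        """Cultural dimensions: events that clash with cultural norms are felt
        more strongly, e.g. a direct personal question weighs more heavily for
        a character from a high power-distance (PDI) culture."""
        modifier = 1.0
        if event.get("direct_personal_question"):
            modifier += self.hofstede["PDI"] / 200.0   # assumed scaling
        return modifier

    def appraise(self, event: dict) -> float:
        return self.desirability(event) * self.cultural_modifier(event)

# Example: a nursing student asks an intrusive question about family matters.
sita = CulturalAgent()
event = {"privacy": -0.7, "affiliation": 0.1, "direct_personal_question": True}
print(f"appraised desirability: {sita.appraise(event):.2f}")  # negative -> distress
```

In this toy appraisal, a question that threatens the character's privacy is appraised as more undesirable the higher the power-distance value, which is the kind of culturally dependent reaction the architecture is designed to produce.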
The United States consists of a diverse population of ethnic groups. Providing health care to such a culturally diverse population can be difficult for health care professionals. Culture plays a complex role in the development of health and human service delivery programs. Cultural competence has emerged as an important issue for improving quality and eradicating racial/ethnic disparities in health care. The nursing standards of proficiency for nursing education emphasize that nurses should be able to acknowledge patients' cultural practices and take cultural influences into account when providing nursing care. A major challenge facing the nursing profession is educating and assisting nurses in providing culturally relevant care. To tackle this issue we have created virtual humans that represent different cultures. These virtual humans will serve as an educational tool that allows nurses to understand and handle patients from different cultures. Our first culturally-specific virtual human is a young Indian girl. In this paper we discuss the architecture used to create a culturally specific virtual patient.
Introduction Agricultural innovations play a crucial role in agricultural development, productivity improvement, environmental sustainability, and poverty reduction (Röling, 2009) and are integral for addressing global hunger, malnutrition, and food insecurity. Innovations are vital for resilience, adaptability, and flexibility (Moore et al.). However, despite a significant increase in cereal production since the 1990s (Kelly et al., 2013), these innovations have not been successful in combating food insecurity in the region (Davies, 2016). Several explanations have been offered for the increase in food insecurity despite an increase in cereal yield and production and a decrease in income inequality in Mali, including low innovation uptake in the case of exogenous innovations (Elliott, 2010; Pamuk et al., 2014; Minot, 2008) or the extremely localized nature of endogenous innovations, with limited opportunities for scaling and diffusion (Mortimore, 2010; Nyong et al., 2007). The food security paradox in Mali (Cooper & West, 2017) highlights the need to understand how innovation systems operate across scales, particularly focusing on the mechanisms that drive innovation outcomes within the social-ecological system. The term 'mechanism' has been used in several different ways across disciplines, sometimes as a 'causal process' and sometimes as a representation of the necessary elements of a process that produces a phenomenon of interest (Hedström & Ylikoski, 2010). In this paper, we adopt the sociological definition of mechanisms as a set of entities with distinct properties, roles, actions, and interactions with one another that bring about change based on the qualities of the entities and their spatial and temporal organization (Hedström, 2005). We present an empirically informed, stylized agent-based model (henceforth, the Ag-Innovation model) that includes both social and ecological dimensions of innovation processes. In the model, food security and income inequality outcomes emerge from the (inter-)actions of innovators and farmers in their social-ecological environments. Our research questions were: i) How does the inclusion of social-ecological interactions in the model affect food security and income inequality outcomes? ii) How do exogenous and endogenous mechanisms influence food insecurity and income inequality outcomes? iii) Under what conditions would exogenous and endogenous mechanisms improve food security? We intended the Ag-Innovation model as a tool for conducting thought experiments, for developing and testing model hypotheses, and for generating an understanding of the behavior of agricultural innovation systems. The model is partly stylized and informed by both theory and empirics. We compare the model outcomes from simulations of two different model structures (i.e., the two innovation mechanisms) and assess the results relative to a theoretical maximum of food security or income inequality. We call it a thought experiment because of the theoretical nature of our exploration, albeit one that is informed by empirical data. The use of key theories of innovation, substantiated by evidence of how agricultural innovation processes operate in Mali and sub-Saharan Africa in general, allows us to develop reasonable confidence in the operationalization of the two distinct innovation mechanisms in the model.
We urge readers to note that in the real world the two mechanisms can and do occur simultaneously, but we restrict this study to an exploration of the two mechanisms separately in order to understand the consequences of each mechanism in isolation. The paper is organized as follows: Section 2 highlights the model development process, including the incorporation of theories for exogenous and endogenous mechanisms of innovation, the identification of key innovation actors and interactions, and the use of the Social-Ecological Action Situations (SE-AS) framework (Schlüter et al., 2019) as a boundary object and diagnostic tool for the integration of social and ecological dynamics within the model. Section 3 elaborates on the design and structure of the Ag-Innovation agent-based model, including agents and ecological entities and their attributes, the model environment, and agent actions and interactions. Section 4 comprises the model analysis, including model runs, outcomes, and scenario experiments. Section 5 presents the model results, followed by a discussion of three key insights drawn from the scenario experiments (Section 6), the study's limitations (Section 7), and the conclusion (Section 8). --- Model development Models that aim to theorize mechanisms underlying social-ecological systems need an approach that takes relevant contextual factors into account while still generating insights that hold across several similar cases. Such models need to be empirically embedded (Boero & Squazzoni, 2005) while representing stylized insights that are valid across cases. Models that are stylized but empirically grounded can serve as effective thought experiments in SES research (Schlüter, Müller et al., 2019). Our model is an empirically informed, structurally realistic but stylized model (Schlüter, Müller et al., 2019), in that we capture relevant contextual social-ecological factors within agricultural innovation systems in Mali while formalizing distinct exogenous and endogenous innovation mechanisms that can generate insights applicable across several similar cases. Model development involved an iterative process of drawing from theory and empirical evidence to construct a model that incorporated a sufficient level of empirical detail and generated outcomes comparable to real-world observations. The development of the model's conceptual framework involved, first, the identification of key theories and empirical insights to guide the structure of the model, including the identification of key actors (agents), their characteristics, behavior, actions, and interactions, followed by an iterative process of diagnosing social-ecological elements and their interactions to ensure that the model adequately integrates both social and ecological dynamics in the innovation mechanisms. In the following sections, we highlight the process of combining theory with empirics that guided the model design and the application of the Social-Ecological Action Situations (SE-AS) framework (Schlüter, Haider et al., 2019) as a boundary object and diagnostic tool to facilitate the integration of SES dynamics within the model. --- Theory The formalization of the two innovation mechanisms within the ABM was an iterative process of distilling theory and empirics into their most relevant elements and structures. We reviewed various theories of innovation development, dissemination, and diffusion and identified four main theories that guided the formalization of innovation mechanisms within the model.
We elaborate on these theories in Sections 2.1.1 and 2.1.2 below. --- Exogenous mechanism Early conceptualizations of innovation processes were influenced by the fields of economics and organizational research and were instrumental in developing national agricultural innovation development and dissemination channels in developing countries. The theories that informed the design of the exogenous mechanism of innovation were the theory of innovation (Schumpeter & Nichol, 1934), the technological push and pull theory (Schmookler, 1966; Scherer, 1982), and the theory of innovation diffusion (Rogers et al., 2014). These theories assumed rational decision-making by innovation adopters and conceptualized innovation as a process of invention through research for technological improvement. The theory of technological push and pull viewed innovation as an interplay of knowledge-driven technology push and market-driven demand pull in innovation development. The theory of innovation diffusion focused on the spread of innovations through communication channels, where innovations are adopted first by a small minority of early adopters, followed by the early majority, the late majority, and finally laggards. These theories collectively informed the implementation of agricultural innovation, whereby external funds would be allocated to national and international agricultural research and development organizations to develop science-based agricultural technologies that would then be marketed through agricultural extension to farmers to ensure 'delivery' and the subsequent adoption and diffusion of such solutions among users. Based on our theoretical review, we identified three key actors within the exogenous mechanism of innovation: external innovators (who represent national and international agricultural research organizations and companies involved in agricultural development), early adopters (larger producers who directly adopt innovations from external innovators), and late adopters (small and medium producers who indirectly adopt innovations through social learning from other farmers). We found three main interactions between these actors: 1) interactions between donors and external innovators, including capital allocation and innovation goal formation, leading to innovation development; 2) interactions between innovators and producers, leading to innovation dissemination; and 3) interactions between producers, including social learning, leading to innovation diffusion. Table 1 summarizes the theories, actors, and interactions represented in the model. --- Endogenous mechanism Critiques of the exogenously driven innovation promoted by development and aid agencies refocused attention on endogenous, locally driven innovation (see Matthews, 2017; Röling, 2009). The theory that informed the design of the endogenous mechanism of innovation was the spiral model of social innovation, in which innovation is seen as collective action between actors towards a common goal. Within this theoretical conceptualization, innovation is spread through the 'formation and re-formation of cooperating groups', resulting in an expansion of a variety of products and processes (Tapsell & Woods, 2008). Within the spiral model of social innovation, the actors within an innovation system are seen as engaging in experimentation, exploration, learning, and adaptation to respond to a changing environment.
Innovation occurs as an emergent outcome through feedback between individual micro-level producers and meso-level collectives (Hounkonnou et al., 2012). These circular and iterative interactions commonly result in consensus building rather than individualist, linear approaches (Matthews, 2017), where the goal of the decision-maker is to achieve social consensus as opposed to individual gain (Mangaliso, 2001). We identified three key actors within the endogenous mechanism of innovation: collective innovators (who represent farmer cooperatives and collective groups that test and experiment with agricultural techniques), early adopters (smaller producers who directly adopt innovations), and late adopters (larger producers who indirectly adopt innovations through social learning from other farmers). We found two main interactions between these actors: 1) interactions between collectives and producers, in which collectives form innovation goals, develop innovations, and interact with early adopters, leading to innovation dissemination and adoption; and 2) interactions between producers, through collective formation and capital pooling leading to innovation development, and through innovation knowledge sharing leading to innovation diffusion (see Table 1). Note that all the interactions highlighted in these different innovation mechanisms are social interactions, indicating that these theories focused exclusively on interactions among the social entities within innovation. In summary (Table 1), the interactions are as follows. Exogenous mechanism: Innovators-Producers: innovators interact with early adopter producers through innovation dissemination for innovation adoption; Producers-Producers: early adopter producers interact with late adopter producers through innovation knowledge sharing for innovation diffusion. Endogenous mechanism: Producers-Producers: producers interact with one another through collective formation and capital pooling for innovation development; Producer-Collectives: collectives interact with producers in their network to form innovation goals and interact with early adopter producers through innovation dissemination for innovation adoption; Producers-Producers: early adopter producers interact with late adopter producers through innovation knowledge sharing for innovation diffusion. --- Diagnosis of social-ecological interactions in agricultural innovation systems We aimed to formalize innovation as a social-ecological phenomenon rather than just a social phenomenon. However, as we noted above in Table 1, initial formalizations of the model were biased toward social interactions between the innovation actors, with no social-ecological or ecological interactions. This occurred largely because existing theories focus solely on social interactions between actors within the innovation system (see Table 1). Expanding innovation into a social-ecological phenomenon involved an iterative process of uncovering additional ecological variables that influence innovation processes, beyond the social interactions of invention development, adoption, and diffusion. We used the framework of linked social-ecological action situations (SE-AS) (Schlüter, Haider et al., 2019) to set system boundaries and identify the key social as well as ecological entities and their interactions within social-ecological innovation. We also used the SE-AS framework as a diagnostic tool to ensure adequate representation of both social and ecological dynamics within the model.
The SE-AS framework was originally developed to understand the actions and interactions between social and ecological entities that lead to the emergence of complex social-ecological phenomena such as regime shifts, traps, and sustainable resource use (Schlüter et al., 2014; Schlüter, Haider et al., 2019). This framework has been used as a tool to capture interactions that are hypothesized to have generated a social-ecological phenomenon of interest and to support a process of developing hypotheses about configurations of action situations that may explain an emergent social-ecological phenomenon (Schlüter, Haider et al., 2019). These interactions can be social (between human entities), social-ecological (between human and non-human entities), or ecological (between non-human entities). We identified three social-ecological and ecological interactions representing critical dynamics that needed to be incorporated into the model to ensure the assessment of innovation as a social-ecological phenomenon (Fig. 3 and Fig. 4). These additional interactions are: 1) donor-innovator interactions, where climate risks trigger donors to allocate foreign aid to innovators; 2) producer-farmland interactions, where climate risk perception influences producers' crop choices and assessment of production history leads to the formation of beliefs about the need for innovation and the type of innovations desired by the producers; and 3) crop-soil interactions, where the type of innovation adopted regulates soil fertility and crop diversity. The crop-soil interaction is purely ecological: crops interact with the soil through the regulation of soil fertility. --- Ag-Innovation Model: Overview In this section we provide an overview of the model; a detailed description of the Ag-Innovation model can be found in the ODD protocol (Grimm et al., 2006) in the Supplementary Material. The Ag-Innovation model captures three key processes of innovation: innovation development, innovation adoption, and innovation diffusion, while incorporating social-ecological interactions within each of these processes. These dynamics are essential for a systems perspective on how innovations operate across scales and influence, or are influenced by, various innovation actors (see Figure 3). Cross-scalar interactions in the model occur through the signaling of producers operating at the micro scale and innovators operating at the meso scale. Innovator agents (external innovators and collective innovators) are involved in innovation development and dissemination, while producer agents are involved in innovation adoption and diffusion. The essential dynamics within innovation development include capital allocation, innovation goal formation, innovation development, and dissemination to potential adopters. The dynamics within innovation adoption include climate risk perception, crop production estimation, formation of innovation beliefs/desires, innovation adoption, and innovation diffusion. --- Model Structure The model environment for the Ag-Innovation model represents the entire country divided into four key agroclimatic zones (Figure 4). The four zones lie along a gradient from the extremely arid, sandy Sahelian and Saharan zones in the north (with annual precipitation of less than 200 mm) to the more tropical Sudanian-savanna regions in the south (with annual precipitation of around 1000-1200 mm) (Waldman & Richardson, 2018).
In the model, these are represented as patches divided into four climate zones (Zones 1-4), with respective attributes of temperature and precipitation, soil fertility, and crops grown, within which agents reside. The Ag-Innovation model consists of three key agents: external innovators, collective innovators, and producers. Collective innovators represent farmers' associations or groups in farmer field schools who collectively test and experiment to develop innovations that may be suitable for the local context. External innovators represent external agricultural entities, such as international agricultural development organizations or private agencies, that are funded by external or foreign aid to develop agricultural innovations. Producers are farmers who own land, cultivate crops, form beliefs and desires about innovation, and adopt innovations. Table 3 outlines the agent attributes and their descriptions in the Ag-Innovation model. The model explores two distinct mechanisms of innovation (exogenous and endogenous) with different configurations, networks, roles, and actions of innovator and producer agents (Figure 4). In the exogenous mechanism of innovation, innovator agents are external innovators who are directly connected with early adopters (in this case, large producers) for innovation dissemination. Late adopters (small and medium producers) interact with early adopters to spread innovation adoption. In the endogenous mechanism of innovation, innovators are collective agents who are directly connected with early adopters (small and medium producers) for innovation dissemination. Late adopters (large producers) interact with early adopters to spread innovation adoption. In both exogenous and endogenous mechanisms, producers interact with ecological entities, for example their farmlands, crops, and soils, through climate risk perception that guides crop selection, formation of innovation beliefs and desires, and innovation adoption. Here, a belief reflects the producer's assessment of their need for innovation, given the type of crops grown, crop production, and soil fertility. Desires are the type of innovation the producer needs, namely 'production'-, 'stability'-, or 'conservation'-oriented innovations. The key outcome variables in the model are food security, income inequality, and adoption rates of the different types of innovations (production, stability, and conservation) over time. Food security is an outcome that shows the proportion of producer agents who are food secure (i.e., whose food production is equal to or higher than their household food requirement). Surplus food is sold for income that increases the capital owned by producers. Income inequality is an outcome that shows the Gini coefficient of the capital distribution among producer agents, which represents the degree of inequality in a distribution. A Gini coefficient of 0 expresses perfect equality, while a Gini coefficient of 1 expresses maximum inequality among values. We ran the model over 200 runs, each with 100 time steps. Each time step represents an agricultural production year.
--- Model calibration
We used a combination of qualitative and quantitative empirical data to calibrate the model, including the modelling environment, agent distribution, agent attributes, and decisions, to bring our stylized model as close to the relevant agricultural realities of Mali as possible.
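As a concrete illustration of the income-inequality outcome defined above, the Gini coefficient of the capital distribution can be computed as in the minimal sketch below. Python is used here for illustration only; the published model is implemented in NetLogo, and the variable names are our own.

```python
# Minimal sketch of the Gini coefficient of producer capital
# (0 = perfect equality, 1 = maximum inequality). Not the authors' code;
# `capital` is an illustrative list of per-agent capital values.
def gini(capital):
    values = sorted(capital)
    n = len(values)
    total = sum(values)
    if n == 0 or total == 0:
        return 0.0
    # G = 2 * sum_i(i * x_i) / (n * sum(x)) - (n + 1) / n, with i = 1..n
    weighted_sum = sum((i + 1) * x for i, x in enumerate(values))
    return 2.0 * weighted_sum / (n * total) - (n + 1.0) / n

print(gini([10, 10, 10, 10]))  # 0.0  -> perfectly equal capital distribution
print(gini([0, 0, 0, 100]))    # 0.75 -> highly unequal capital distribution
```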
The calibration of climatological parameters in the model, such as sowing, growing, and maturing temperatures and precipitation within the four climate zones, was based on meteorological data from the World Meteorological Organization (WMO). Data were processed to compute the average monthly temperature and precipitation of weather stations in each zone for the period 1961-1990. Sowing-, growing-, and maturing-season temperature and precipitation were calibrated using mean monthly values for May to July, August to September, and October to November, respectively. Estimation of crop yield (for maize, sorghum, millet, and rice) was based on a series of four regression equations calculated for Mali using national-level meteorological data for the period 1961-1990 (see Sanga, 2020).
--- Agent actions
Producers perform nine actions: i) assess climate risk; ii) make crop choices for cultivation; iii) estimate expected crop production; iv) assess innovation need; v) develop innovation desire; vi) adopt innovation (directly from innovator/collective agents in the case of early adopters, and through social learning in the case of late adopters); vii) assess crop production; viii) allocate produce for household consumption and selling; and ix) allocate a share of available capital to collectives (see Table 4 for details). The innovator agents (the external innovator in the exogenous mechanism and the collective in the endogenous mechanism) perform four actions: i) update capital for innovation; ii) set innovation goal; iii) develop innovation; and iv) disseminate innovation to early adopters (see Table 5 for details). Figure 5 provides a simplified illustration of the actions of the producers and innovators/collectives and the interactions between meso and micro levels. A detailed flowchart of agent actions can be found in the ODD protocol in Supplementary Material B.
Figure 5: Meso- and micro-level agent actions and their linkages for both exogenous and endogenous models (dashed blue line).
--- Innovation belief and desire formation
- Agents estimate soil fertility at their patches, estimate crop production at the current time step, and compute the mean and standard deviation of their crop production history over the previous 10 time steps.
- If soil fertility is lower than a certain threshold, agents set their innovation belief to true and their innovation desire to 'conservation'.
- If an agent has a negative production gap between current crop production and the mean of the previous 10 time steps of production history, the agent sets its innovation belief to true and its innovation desire to 'production'.
- If producer agents have a high standard deviation in crop production history (indicating high crop production variability), they set their innovation belief to true and their innovation desire to 'stability'.
--- Innovation adoption
- Early adopters adopt an innovation if the available innovation matches their innovation desire.
- Late adopters adopt the most popular innovation adopted by the early adopters in their vicinity if it matches their innovation desire.
--- Crop production assessment
- If the adopted innovation is 'production', crop yield increases by an amount proportional to the innovation efficiency and soil fertility decreases.
- If the adopted innovation is 'conservation', crop yield increases by an amount proportional to the innovation efficiency and soil fertility increases.
- If the adopted innovation is 'stability', crop yield is maintained by an amount proportional to the innovation efficiency.
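A simplified sketch of the belief, desire, and feedback rules above is given below (the consumption, selling, and capital-allocation rules continue in the next subsection). This is a Python illustration only, not the authors' NetLogo implementation; the rule ordering, threshold names, and feedback sizes are assumptions rather than the published parameterization.

```python
# Illustrative Python sketch of the producer rules listed above. The published
# model is in NetLogo; the thresholds, the precedence of the three rules, and
# the size of the yield/fertility feedback are assumptions, not the authors'
# parameterization.
import statistics

FERTILITY_THRESHOLD = 0.3     # assumed soil-fertility cutoff
VARIABILITY_THRESHOLD = 0.2   # assumed cutoff for "high" production variability

def form_innovation_desire(soil_fertility, current_production, production_history):
    """Return (belief, desire) for a producer given its last 10 production values."""
    mean_past = statistics.mean(production_history)
    sd_past = statistics.stdev(production_history)
    if soil_fertility < FERTILITY_THRESHOLD:
        return True, "conservation"
    if current_production - mean_past < 0:            # negative production gap
        return True, "production"
    if sd_past > VARIABILITY_THRESHOLD * mean_past:   # high relative variability
        return True, "stability"
    return False, None

def apply_innovation(innovation, crop_yield, soil_fertility, efficiency=0.1):
    """Regulatory feedback of an adopted innovation on yield and soil fertility."""
    if innovation == "production":
        return crop_yield * (1 + efficiency), soil_fertility - efficiency
    if innovation == "conservation":
        return crop_yield * (1 + efficiency), soil_fertility + efficiency
    if innovation == "stability":
        return crop_yield, soil_fertility   # yield maintained, soil unchanged
    return crop_yield, soil_fertility       # no innovation adopted

# Example: a producer on degraded soil with stable yields desires conservation.
print(form_innovation_desire(0.25, 90, [100, 101, 99, 100, 102, 98, 100, 101, 99, 100]))
# -> (True, 'conservation')
```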
--- Consumption and selling
- Producer agents calculate household food requirements. If food production is greater than the food requirement, agents set their status to food secure and sell the excess food; otherwise, producer agents set their status to food insecure.
--- Capital allocation
- Endogenous mechanism: early-adopter producer agents allocate a share of their capital to the innovation capital of the collective agent.
- Exogenous mechanism: producer agents do not allocate capital to external innovator agents.
--- Model Analysis
--- Design of experiments
As highlighted in previous sections, we aimed to conduct an exploratory analysis of the Ag-Innovation model to answer three key questions: i) Does the inclusion of social-ecological interactions in the model change the effect of the two mechanisms on food security and income inequality? ii) How do exogenous and endogenous mechanisms influence food security and income inequality? iii) What are the conditions under which food security and income inequality will improve? To answer the first question, we designed a set of four model experiments. In experiments 1 and 2, we included only social interactions in the innovation model. Experiment 1 explored the social endogenous mechanism (S-Endo), while experiment 2 explored the social exogenous mechanism (S-Exo). In experiments 3 and 4, in addition to social interactions, we included social-ecological interactions in the model, including climate risk perception, a moderate increase in temperature, a moderate decrease in precipitation, and regulatory ecological feedback on soil fertility from innovation adoption. Experiment 3 explored the social-ecological endogenous mechanism (SE-Endo), while experiment 4 explored the social-ecological exogenous mechanism (SE-Exo). Comparing experiments 1 and 3, and experiments 2 and 4, allowed us to assess whether the inclusion of social-ecological interactions in the innovation model would influence food security and income inequality outcomes. Table 6 provides details of the design of experiments in the Ag-Innovation model.
To answer the second and third questions, we developed experiments 5, 6, and 7, which explored model outcomes of food security and income inequality under scenarios of no innovation, exogenous innovation, and endogenous innovation, respectively. Comparing model outcomes of experiments 5 and 6 allowed us to explore whether an exogenous mechanism would lead to higher food security and income inequality. Comparing model outcomes of experiments 5 and 7 allowed us to explore whether an endogenous mechanism would lead to lower food security and income inequality. We also conducted a sensitivity analysis using the BehaviorSpace tool in NetLogo (version 6.2.2) (Wilensky, 1999) through parameter tuning by repeated execution, i.e., varying one input parameter at a time while keeping the remaining parameters unchanged (e.g., update-threshold, second-chance interval; see Remondino and Correndo, 2006 for details), to find the conditions under which the endogenous and exogenous innovation mechanisms would be effective in improving food security and income inequality outcomes. See Table 7 for the parameter values explored in the sensitivity analysis.
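The one-factor-at-a-time logic behind this sensitivity analysis is sketched below. The actual analysis was run with NetLogo's BehaviorSpace; the Python sketch, the `run_model` placeholder, and the parameter names and ranges shown are illustrative assumptions rather than the authors' settings.

```python
# Illustrative one-factor-at-a-time (OFAT) sweep: vary one input parameter while
# holding the others at baseline, averaging outcomes over repeated runs.
# `run_model` is a hypothetical stand-in for a single Ag-Innovation run that
# returns (food_security, income_inequality); replace it with calls to the real model.
import random
import statistics

BASELINE = {"capital_allocation_rate": 0.1, "foreign_aid": 1000,
            "network_radius": 5, "innovator_density": 0.05}

SWEEPS = {"network_radius": [1, 3, 5, 10, 20],
          "innovator_density": [0.01, 0.05, 0.10, 0.20]}

def run_model(params, seed):
    # Placeholder returning random outcomes so the sketch is runnable as-is.
    rng = random.Random(seed)
    return rng.random(), rng.random()

def ofat(replicates=30):
    results = {}
    for param, values in SWEEPS.items():
        for value in values:
            params = dict(BASELINE, **{param: value})  # change one parameter only
            runs = [run_model(params, seed) for seed in range(replicates)]
            mean_food = statistics.mean(r[0] for r in runs)
            mean_gini = statistics.mean(r[1] for r in runs)
            results[(param, value)] = (mean_food, mean_gini)
    return results

for key, (food, gini_) in ofat().items():
    print(key, round(food, 3), round(gini_, 3))
```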
--- Inclusion of social and social-ecological interactions within innovation
Food security outcomes were lower, and income inequality outcomes higher, in social-ecological innovation than in social innovation for both exogenous and endogenous mechanisms (Figure 6 a and b). Summary statistics for the distributions of food security and income inequality outcomes are given in Table 8. Model results showed that in all experiments, income inequality decreased with an increase in food security among producers (Figure 7 a-d). However, the relationship between income inequality and food security was stronger for both endogenous and exogenous social-ecological innovation (experiments 3 and 4).
--- Exogenous and endogenous mechanisms of innovation
--- Exogenous mechanisms of innovation
Comparison of the scenarios with no innovation and exogenous innovation (experiments 5 and 6) shows no difference in income inequality outcomes (mean = 0.7046, p-value = 0.8957) (see Table 9). The exogenous mechanism demonstrated greater variation in income inequality than the model scenario with no innovation (Figure 8a). Exogenous innovation produced slightly higher food security than the no-innovation scenario (means = 0.73 and 0.70, respectively; p-value < 2.2e-16) and a larger number of low-end outliers (Figure 8b).
--- Endogenous mechanisms of innovation
Comparison of the scenarios with no innovation and endogenous innovation (experiments 5 and 7) shows that the endogenous innovation mechanism leads to higher income inequality (means = 0.74 and 0.70, respectively; p-value < 2.2e-16) and higher food security (means = 0.79 and 0.70, respectively; p-value < 2.2e-16). The endogenous mechanism also shows several high-end outliers for income inequality and several low-end outliers for food security (Figure 8b). See Table 9 for summary statistics of the model outcomes under the scenarios of no innovation, exogenous innovation, and endogenous innovation.
Comparison of the adoption rates of the different types of innovations (production, stability, and conservation) under the exogenous and endogenous innovation scenarios shows that the endogenous innovation mechanism leads to a higher adoption rate of production-oriented innovations (Figure 9b), while the exogenous innovation mechanism leads to a higher adoption rate of stability-oriented innovations (Figure 9a). The adoption rate of conservation-oriented innovations is slightly higher for the endogenous mechanism than for the exogenous mechanism. Overall, the rates of decline in food security (Figure 10a) and increase in income inequality (Figure 10b) are higher in both the exogenous and endogenous mechanisms compared to the model scenario with no innovation.
Table 9: Summary statistics for the income inequality and food security model outcomes for the scenarios of no innovation, exogenous innovation, and endogenous innovation.
--- Sensitivity Analysis Results
The results of the sensitivity analysis show that an increase in the rate of capital allocation increases income inequality in the endogenous mechanism (Figure 11 d). The capital allocation rate does not affect food security outcomes for either the endogenous or the exogenous mechanism (Figure 11 a and b). Foreign aid does not affect food security or income inequality outcomes for either mechanism (Figure 12 a-d).
Food security increases with an increase in network radius for both exogenous and endogenous mechanisms (Figure 13 a and b). Income inequality decreases with an increase in network radius for the endogenous mechanism (Figure 13 d). However, there is a tipping point in the exogenous mechanism: a lower network radius leads to an increase in income inequality, while a higher network radius leads to a decrease in income inequality (Figure 13 c). An increase in innovator density leads to an increase in food security for both exogenous and endogenous mechanisms (Figure 14 a and b). An increase in innovator density leads to a decrease in income inequality for both mechanisms (Figure 14 c and d).
--- Discussion
Models of social-ecological phenomena lie on a spectrum from theoretical, often abstract models to realistic, empirical models of specific case studies. While the former can lack links to the real world, making their results difficult to apply to real-world problems, the latter are very specific to a particular case with limited possibilities to generalize findings to other cases. In this study, we undertook an empirically driven, stylized modeling approach, in which we iteratively combined theories of innovation processes with quantitative and qualitative empirical data and insights from a case of agricultural innovation in Mali. Because the model is stylized, our modeling approach does not aim to numerically replicate food security and income inequality in Mali. Instead, we focus on qualitative model validation through the patterns generated by the model and use the model to explore our research questions through thought experiments. For example, the model generates an inverse relationship between food security and income inequality outcomes. This model result is supported and validated by evidence from observed patterns of food security and income inequality in Mali in previous studies. For example, Imai et al. (2015), who dynamically modeled the relationship between agricultural growth and income inequality in developing countries, found that agriculture-driven growth reduces inequality. Mali has seen a rise in agricultural growth and cereal production since the 1970s due to targeted agricultural policies that promote innovations and technologies that improve crop production (Giannini et al., 2017). Consistent with these observations, studies have noted a decreasing pattern of income inequality in Mali since the 1990s (Odusola et al., 2019). From the exploratory scenario analysis, we draw three key insights relevant to agricultural innovation systems.
--- i) Incorporation of social-ecological interactions in the formalization of innovation influences model outcomes
Results from the experiments on the inclusion of social and social-ecological interactions in the model make a compelling case for the necessity of incorporating intertwined social-ecological dynamics in the assessment and modeling of agricultural innovation systems. The model scenario with the inclusion of social-ecological interactions showed a stronger inverse relationship between income inequality and food security outcomes than the model scenario with only social interactions. For both exogenous and endogenous mechanisms, the scenarios with social-ecological interactions also show lower levels of food security and higher levels of income inequality than scenarios with only social interactions.
In other words, the absence of social-ecological interactions in the model overestimates the effect of innovation on food security and underestimates the effect on income inequality. We demonstrate how modelers can effectively diagnose and incorporate social-ecological action situations within their models using the SE-AS framework. Most model documentation practices used by SES researchers and modelers, such as ODD and ODD+D (Grimm et al., 2006; Müller et al., 2013) and TRACE (Schmolke et al., 2010), highlight the model-building process but offer little transparency on the process of model formalization through which the modelers achieved the desired simplification (Schlüter et al., 2014). This paper demonstrates how stylized models of social-ecological phenomena can be developed as thinking tools through the application of the SE-AS framework (Schlüter, Haider et al., 2019) as a diagnostic tool. The SE-AS framework enabled us to establish the boundaries of the model and to visualize the social and ecological interactions within the model. Further, the focus on action situations of key social and ecological entities supported the selection of agents, their actions, and their interactions in the ABM. The framework also allowed us to make modeling decisions and assumptions more explicit and intentional toward an integrated, context-dependent understanding of the intertwined nature of social-ecological innovation systems. In the Ag-Innovation model, we included social-ecological interactions such as changes in temperature and precipitation, climate risk perception, the formation of innovation beliefs and desires, and regulatory ecological feedback on soil fertility. Model results show that these interactions and dynamics are key in influencing food security and income inequality outcomes. Climate patterns such as changes in temperature and precipitation influence both crop choices and crop production: producers perceive climate risk to make crop choices, estimate crop production, and use adaptive learning from past production histories to develop innovation beliefs and desires. Once producers adopt certain innovation types, there is regulatory feedback on crop production and soil fertility. Together, these factors affect overall crop production, which in turn affects food security and income inequality outcomes. This insight is especially relevant for innovations oriented towards conservation, which may not offer short-term, immediate benefits, as opposed to innovations oriented towards increased production. Research has shown that the adoption of conservation practices is guided not only by economic factors, but also by complex socio-psychological and ecological factors such as values, beliefs, norms, and risk perception (Clearfield & Osgood, 1986; Delaroche, 2020; Knowler & Bradshaw, 2007; Greiner et al., 2009). Failure to take these fundamental social-ecological interactions into account when modelling and evaluating innovation systems could potentially lead to inaccurate assessments of the efficacy of, or demand for, certain innovations that promote sustainable agriculture.
--- ii) The endogenous innovation mechanism leads to higher food security and income inequality than the exogenous innovation mechanism
In our exploratory analysis, we compared scenarios of the exogenous and endogenous mechanisms of innovation with scenarios of no innovation and explored the effect of each mechanism on food security and income inequality outcomes.
We hypothesized that the exogenous mechanism would lead to higher food security and income inequality, based on evidence that agricultural research organizations and extension services often develop agricultural technologies that increase crop productivity (thereby leading to increased food security) and are accessible only to larger producers with enough financial resources to afford these technologies (thereby leading to increased income inequality) (Bambio et al., 2022; Ndjeunga & Bantilan, 2005; Okai, 1997). Lazarus' (2013) study, based in Mali, also demonstrated that income inequality increases when agricultural improvements are targeted at larger farmers rather than smaller or poorer farmers. However, we find that while the exogenous mechanism leads to slightly better food security outcomes than a scenario with no innovation, the endogenous mechanism leads to higher food security as well as higher income inequality than the exogenous mechanism. This result is surprising and contrary to our hypothesis, but it can be explained through differences in innovation adoption patterns and network structures within the exogenous and endogenous innovation mechanisms. Results show that the adoption rate of production-oriented innovations is much higher for the endogenous mechanism and slightly declines over time, along with a synchronous increase in the adoption of conservation-oriented innovations. Adoption dynamics (including adoption patterns over time and rates of adoption) are determined both by the types of innovation desired by the producers and by the type of innovation developed by innovators. The endogenous mechanism allows for bidirectional signaling of innovation demand and supply between producers and collective innovators, as opposed to the exogenous mechanism's unidirectional signaling of innovation supply from external innovators to producers. Producer agents implement adaptive learning by assessing their past production histories to develop innovation beliefs (whether innovation is needed) and innovation desires (what type of innovation is needed). Smaller producers who are connected to the collectives signal the most desired innovations to the collective innovators, who then develop the innovation and disseminate it back to the producers. A higher proportion of smaller producers in the agricultural landscape leads to higher network strength among linked producers and collectives, making innovation signaling stronger. As a result, innovations are developed and disseminated more in line with the preferences of producers, who ultimately adopt the desired innovations at a higher rate. In the exogenous mechanism, there is no signaling of innovation desires from the producers to the external innovators, who innovate randomly. Additionally, the developed innovations are disseminated to larger farmers (early adopters) in their network, which results in lower adoption rates for two reasons: first, the innovation developed may not be the innovation the producer desires; and second, the low proportion of larger producers results in a weaker network and a lower rate of adoption diffusion. Readers are cautioned, however, not to interpret these results to mean that producers inherently seek production-oriented innovations as opposed to conservation- or stability-oriented innovations.
The higher adoption rates of production-oriented innovations in both the exogenous and endogenous mechanisms are a consequence of model parameterization, where temperature and precipitation change were set to a moderate increase and decline, respectively, as projected for West Africa (Giannini et al., 2017). This calibration led to a decline in crop yields, resulting in a larger proportion of producers desiring production-oriented innovations as opposed to stability- or conservation-oriented innovations, which in turn led to higher adoption rates of production-oriented innovations, an increase in crop production, and ultimately higher food security. Model results also show that the endogenous mechanism led to higher income inequality than the scenario with no innovation. These results are also explained through the cross-scalar dynamics between producers and collectives. A larger proportion of early adopters (i.e., small and medium producers) enter a repeated cycle of capital allocation, innovation adoption, and income generation that prevents them from increasing their overall capital, thus creating "poverty traps" (Barrett & Swallow, 2006; Radosavljevic et al., 2021). On the other hand, larger farmers (i.e., late adopters), who are not connected to the collectives, do not allocate capital for innovation development and can accumulate additional income from increased agricultural production, thereby leading to higher income inequality. A noteworthy point is that the difference in median values between the exogenous and endogenous mechanisms is larger for food security outcomes than for income inequality. This suggests that the endogenous mechanism can be a more effective mechanism for addressing food security in the region, despite some adverse effects of increased income inequality.
--- iii) Bidirectional outreach is more effective than unidirectional outreach in improving food security
Results from the sensitivity analysis suggest that food security would improve with a higher network radius and density in both the exogenous and endogenous mechanisms. Food security outcomes were not sensitive to the capital allocation rate or the amount of foreign aid. In other words, food security outcomes would improve through greater outreach of innovation knowledge and information between producers and innovators (through wider and denser networks). On the other hand, income inequality rises with an increase in the capital allocation rate and declines with an increase in innovator density and network radius.
These results are neither new nor surprising. However, they emphasize how innovation agents (producers and innovators) influence food security and income inequality outcomes through the configuration and organization of their roles within innovation processes. Characteristics of the interactions between innovators and producers, such as network radius and density, but also network composition (who is in and who is out), determine not only how innovation knowledge is created and shared across scales, but also how innovation needs and desires are signaled. Unidirectional outreach from innovators to producers, as in the exogenous mechanism, is likely to be less effective than the bidirectional exchange of knowledge and resources seen in the endogenous mechanism. There is potential for both mechanisms to operate within the innovation system through a collaborative extension service system that leverages the strengths of collective action among sets of heterogeneous producers and innovators. According to Wigboldus et al. (2016), innovation development that relies on copying inventions that were successful in one area to another does not adequately consider complex social, ecological, and institutional realities. Such innovations often fail to scale up and may even produce undesirable effects. Our insight demonstrates the need for the development of innovations that are aligned with the ecological realities of the agricultural landscape as well as the needs and desires of farmers. Our study demonstrates how the interaction between innovators and producers plays an important role in knowledge and information transfer at all stages of innovation, from innovation development and dissemination to adoption and diffusion. These interactions also play a large role in the success of collective innovation by facilitating knowledge creation and transfer, resource mobilization, and cooperation (Berthet & Hickey, 2018).
--- Limitations
According to Schlüter et al. (2019), models are simplified representations of reality in which the process of simplification is guided by the knowledge and assumptions of those involved in the model development process. Model results should always be interpreted in light of these assumptions and the underlying system conceptualization. Our model juxtaposes the two mechanisms against each other, whereas in reality both mechanisms can, and often do, operate alongside and complement each other; this is a limitation of our model. Future investigations could consider expanding the model to examine different combinations of the mechanisms and how they interact. Additionally, we make a strong assumption in the model that, in the exogenous mechanism, early adopters are larger farmers who are linked directly to external innovators, while in the endogenous mechanism, early adopters are smaller farmers linked to collectives. This assumption is based on empirical evidence from the case study. However, it can be relaxed to include a mix of farmer types in the networks in extensions of this model. Lastly, the model assumes that innovators develop and disseminate only one type of innovation at each time step. In reality, innovators can develop different types of innovations at the same time and offer a repertoire of innovation types that producers can select from, but to keep the adoption dynamics simple, we maintained this assumption in the model.
--- Conclusion
We developed an empirically driven, stylized agent-based model through an iterative process of combining theory with empirical data. Our social-ecological modeling approach facilitates a deeper understanding not only of the different social-ecological dimensions of agricultural innovation but also of the distinct cross-scalar mechanisms within innovation systems. Our results make a strong case for the incorporation of social-ecological interactions within the assessment and modeling of innovations in agricultural systems. Overall, results from the exploratory analysis show that food security and income inequality patterns arise from the characteristics and configuration of innovator-producer networks and their modes of operation, goals, and actions, as well as from the decisions of the actors embedded within the innovation system. Contextualized knowledge of agricultural social-ecological interactions plays an important role in the success of agricultural innovations; hence, innovation needs to be aligned with the beliefs, desires, and ecological realities of the place where innovation interventions are sought. By viewing innovation as an adaptive process that includes both social and ecological dynamics, we obtain a more complete and nuanced picture of the dynamics within innovation processes.
Agricultural innovations involve both social and social-ecological dynamics, where outcomes emerge from interactions of innovation actors embedded within their ecological environments. Neglecting the interconnected nature of social-ecological innovations can lead to a flawed understanding and assessment of innovations. In this paper, we present an empirically informed, stylized agent-based model of agricultural innovation systems in Mali, West Africa. The study aimed to understand the emergence of food security and income inequality outcomes through two distinct model structures: top-down, aid-driven (exogenous) innovation and bottom-up, community-driven (endogenous) innovation. Our research questions were: i) How does the inclusion of social-ecological interactions in the model affect food security and income inequality outcomes? ii) How do exogenous and endogenous mechanisms influence food security and income inequality? iii) What are the conditions under which exogenous and endogenous mechanisms would improve food security? The structural design of the model was based on a combination of theory, empirics, and mapping of social-ecological dynamics within innovation systems. Using the Social-Ecological Action Situation framework, we mapped the social, social-ecological, and ecological interactions that jointly produce food security outcomes. The exploratory model analysis reveals three key insights: i) the incorporation of social-ecological interactions influences model outcomes: scenarios with social-ecological interactions showed a stronger relationship between income inequality and food security, lower levels of food security, and higher levels of income inequality than scenarios with only social interactions; ii) the endogenous mechanism leads to higher food security and income inequality than the exogenous mechanism; and iii) bidirectional outreach is more effective than unidirectional outreach in improving food security. Inclusion of social-ecological dynamics and interactions, such as the role of climate risk perception, social learning, and the formation of innovation beliefs and desires, is key for the modelling and analysis of agricultural innovations.
Keywords: Stylized models, Social-Ecological Systems, Agriculture, Innovations, Agent-based models
The model code for the agricultural innovation (Ag-Innovation) agent-based model can be found in COMSES: https://www.comses.net/codebases/80397098-9368-40ab-bb01-56b5f929ea04/releases/1.0.0/
Agricultural innovations can foster adaptive responses to changing social-ecological conditions (Olsson & Galaz, 2012). The understanding of agricultural innovations has evolved over the last several decades. According to Klerkx et al. (2012), who studied the evolution of thinking in agricultural innovation, from the 1950s to the 1970s innovation was viewed as a linear process of scientific invention of technologies that were then transferred to and adopted by the intended users (see also Godin & Lane, 2013). In several countries, agricultural innovations were funded by external aid, developed in controlled research environments by scientists, and disseminated through extension services for adoption by individual farmers (Valente & Rogers, 1995; Faure et al., 2018). The 1980s saw a shift in focus from external aid-driven technology transfer to community participation, learning, and ownership (Klerkx et al., 2012).
Both scientists and farmers came to be viewed as central actors who undertook innovation development, dissemination, and diffusion roles through shared knowledge and resources (Hall & Clark, 2010; Brooks & Loevinsohn, 2011). Innovation systems came to be seen as a collection of entities or agents (individuals, organizations, institutions) that form 'a complex web of layered and nested connections that cross the typical space, time and sector boundaries…' (Moore, 2017, p. 219). This presents a dynamic, systems perspective of innovation as a process that arises from the actions and interactions of agents embedded within a social context and operating at different scales, and it is hence termed 'social' innovation (Nicholls & Murdock, 2012). However, agricultural innovations are not purely social processes; they also include ecological dimensions. Olsson & Galaz (2012) highlight the complex, intertwined nature of social-ecological innovations. In the context of agricultural sustainability, examples of such social-ecological dimensions include the influence of climate and ecological factors on innovation development and adoption, and biophysical feedback from the implementation of agricultural technologies, such as changes in crop yield and soil fertility. Conceptualizing innovation solely as a social process and ignoring social-ecological interactions may result in maladaptive outcomes and misleading or myopic appraisals of the impact of innovation. Examples of such maladaptive outcomes include the Green Revolution in Asia in the 1960s, which resulted in environmental damage and soil degradation despite significant increases in crop yields (Pingali & Rosegrant, 1994), and the development of agricultural innovations for increasing maize productivity suitable for certain agro-ecological zones in Kenya without adequate consideration of the social or ecological realities of the drier zones (Leach et al., 2012). Hence, agricultural innovation requires a deeper understanding not only of the role of innovation actors and their actions and interactions within innovation processes but also of the social-ecological dimensions of innovation (de Boon et al., 2022). This paper presents a contextual case study of agricultural innovation in Mali, West Africa. Agricultural innovations in Mali have experienced challenges similar to the previous examples of the Green Revolution in Asia and maize production in Kenya. Between the 1960s and 1980s, Sub-Saharan Africa (SSA), including Mali, suffered from prolonged periods of droughts and famines, which prompted donors to provide aid and funding support for agricultural innovation. Innovations were driven through pathways or mechanisms that were exogenous to the system: key agricultural innovations were financed by external funders, developed by specialists or researchers, distributed by agricultural extension services, and finally adopted by producers (Knickel et al., 2009). Examples of such exogenously developed innovations include improved maize and rice varieties, crop inputs such as fertilizers and pesticides, and early-maturing varieties (Davies, 2016). However, a few studies also highlight alternative pathways or mechanisms that were more endogenous to the system. Farmers formed local 'innovation platforms' (Pamuk et al., 2014) that developed innovations which were more widely adopted and enabled farmers to adapt to long-term climate variability and drought (Mortimore, 2010; Nyong et al., 2007).
These innovations closely aligned with the dynamic, non-linear pathways or mechanisms of innovation systems and were facilitated by social learning, community organization, and local adaptive knowledge transfer (Ajani et al., 2013; Nyong et al., 2007; Osbahr et al., 2008). Examples of such endogenously developed innovations include various crop management strategies such as the conservation of soil carbon content through zero-tillage practices, mulching, the use of organic manure, and agroforestry (Ajani et al., 2013).
This paper provides an in-depth examination of factors that women report enable or impede their adherence to recommended health behaviors. In this study, women who experienced an abnormal screening mammogram requiring diagnostic follow-up care were interviewed about their daily lives and life history, in an effort to place this experience in a broader context. In a previous paper using data from this sample, we examined factors that were directly associated with whether women were compliant with recommended diagnostic follow-up (see Allen, Shelton et al., 2008).14 The focus of this paper is to provide a comprehensive and contextualized account of the broader factors that affected women's abilities and motivation to adhere to recommended health behaviors. Qualitative methods are well suited to conducting this research since they are ideal for: 1) explaining phenomena about which little is known; 2) understanding how people interpret and give meaning to the events and circumstances of their lives; and 3) exploring the social context in which behavior occurs.15 The aims of this paper are to: 1) investigate social contextual and psychosocial factors that influence the ability of lower-income Black and Latina women to carry out recommended health behaviors; 2) describe any relevant differences in social contextual and individual factors for Black and Latina women; and 3) identify potential implications for interventions and policies, and generate hypotheses for further exploration in future research. This research was informed by the social contextual framework,6 a conceptual framework that emphasizes the importance of viewing health behaviors within a social context, or the larger structural forces that determine the nature of people's daily realities. Social contextual factors are seen as cutting across multiple levels of influence, including the individual, interpersonal, organizational, community, and societal levels.16,17 According to this framework, race/ethnicity, gender, and socioeconomic position (SEP) are social categories that reflect societal inequalities and lead to the differential distribution of stressors, power, status, and resources.6 For example, race shapes differential exposure to life opportunities and resources in society, resulting in racial and ethnic minorities being disproportionately poor, having lower access to high-quality medical care, and experiencing less continuity of care.18 The social contextual framework, as applied in this study, is depicted in Figure 1.
--- Methods
--- Sampling and recruitment
We chose a qualitative study design, using in-depth interviews, to achieve our study aims. A purposeful sampling technique19 was used to obtain sufficient representation of both Black and Latina women who had a mammogram that resulted in a need for follow-up. Women from locations with a high volume of lower-income, multi-ethnic patients, including a community health center, a breast evaluation center at a public hospital, and a mammography van, were invited to participate. Eligibility criteria included: 1) having an abnormal mammogram finding within the year prior to study enrollment; 2) being 40 or more years old; 3) fluency in English, Spanish, or Haitian Creole; and 4) being capable of providing informed consent. Women with a history of breast cancer were excluded. Clinic staff presented study information to potential participants and asked permission to provide contact information to research staff.
If interested, research staff tried to contact each woman by phone up to ten times to obtain consent and schedule interviews. All study procedures were approved by the Institutional Review Board at the Dana-Farber Cancer Institute and within participating sites. More information about recruitment and sampling is available elsewhere (see Allen, Shelton, et al., 2008).14
--- Data collection
Interviews took place between 2002 and 2005 at a time and place convenient to the participant, such as the woman's home or a community center. Participants were interviewed in their preferred language, usually by an interviewer of the same race/ethnicity. Interviews were audio-taped and professionally transcribed; interviews in languages other than English were professionally forward- and back-translated by a native speaker. A semi-structured interview guide was developed based on existing literature and was broadly informed by the social contextual model. Open-ended questions covered a range of topics, with sample questions provided in Table 1.
--- Analyses
A thematic content analysis approach was used to understand patterns in the data. Research team members (consisting primarily of study investigators trained in anthropology and public health) reviewed transcripts, identified major themes, and met regularly as a group to discuss interpretations. Through this iterative group process, code definitions were developed and refined, and new themes were identified (see Allen, Shelton et al. for more information).14 Line-by-line coding was conducted using NVivo software (QSR International, 2000).20 The social contextual framework served as an organizing approach for reporting the results of these analyses and the themes that emerged.
--- Results
--- Study sample
The final sample comprised 64 women. Fifty-three percent of participants were between 40 and 49 years old, and 56% were employed in part- or full-time work. Sixty-three percent of the women were Hispanic and 33% were Black (predominantly African American). The majority of women were born outside of the United States (69%) and preferred Spanish as their first language (59%). Only 23% of participants were married or living as married. Many women (43%) had a high school education or less. In terms of health care access, 22% had private insurance, 13% had no insurance, 27% had the state's Medicaid coverage, and 35% had Free Care (a program requiring hospitals and health centers in Massachusetts to provide free or reduced-cost health care to the uninsured). The full sampling scheme is presented elsewhere.14
--- Social contextual and psychosocial factors - Individual level
Cancer-related attitudes and beliefs: The overwhelming majority of women had very negative connotations of cancer; many associated it with a 'death sentence'. There was a great deal of shame and embarrassment associated with cancer, particularly cancer of the breast. For example, most women said that people diagnosed with breast cancer would likely hide their disease due to fear of social rejection or stigmatization, or because they would not want to be pitied or devalued by others in their community. Some feared they would no longer "feel like a woman" due to disfiguration if they lost a breast. They worried how this would impact their relationship with their spouse or partner. According to one Black woman: "I would want to speak about it [breast cancer]. But the majority of my friends and family? No.
Because they come from the school of thought that a woman is not a woman if she doesn't have her uterus and her ovaries or her breast." Many Latinas also said that they would hide a diagnosis, but this was most often due to a desire to shield family members from the pain of knowing about a loved one with cancer. Most women attributed cancer to smoking, hereditary factors, and environmental pollutants or chemicals at work and home. A number of women held misperceptions about the causes of cancer, including the belief that cancer is contagious, caused by physical blows (including abuse), or related to 'bad' behaviors (e.g., drug use, abortion) or strong emotions (e.g., stress, anger). Misperceptions about the causes of cancer were often rooted in observations of their own lives: "My father smoked but my father did not die of cancer. My aunt smoked but she did not die of cancer either. My mother did not smoke and my mother died very young of cancer." Many women reported having positive attitudes about mammograms, although about half of the women did not recognize them specifically as 'cancer screening tests.' According to one Latina: "I don't have tests for cancer illnesses. The tests I do are either for mammogram or Pap smear."
Religious and fatalistic beliefs: For many women, a belief in the will of God coexisted with a willingness to obtain medical care, such as breast screening. Nearly every participant referenced their faith in God and said they pray for health. A common idea was that God determines both sickness and health. According to a Latina: "God gives the sore and He cures it". A sense of leaving everything in God's hands also arose, as reflected in this statement from a Latina about breast cancer: "If that's what God would give me, I won't reject what God wants." A few women did not worry about getting sick because they would be 'saved'. According to a Black woman: "I've been smoking since I was 17...I don't get sick...I talk to God and 'by strife, I am healed'." Nevertheless, they spoke about the need to care for their health, in spite of believing in fate as determined by God.
Health issues: Good health was highly valued and mentioned spontaneously in the majority of interviews. According to a Latina: "I would give anything to not have any more health problems. I don't care about being poor, but being sick." Nearly a third of the participants reported physical or mental health issues, including stomach and cervical cancer, lupus, fibroids, arthritis, depression, asthma, high blood pressure and cholesterol, chronic back pain, and diabetes. Several women thought that these health problems were somehow linked to cancer. A Latina said: "Even if they tell me it's arthritis, for me it's cancer," and a Black woman said: "I'm thinking maybe this isn't sciatica in my back; maybe it's cancer." Most women discussed the negative impact these health problems had on their lives. A Black woman with diabetes explained why she had been avoiding the doctor in general: "It's very depressing to have to go to a doctor once a month...I'm not an old women...I'm older, but when you're forty-something and you're going to the doctor once a month, it does get depressing. When you're doing four needle shots a day, it's depressing...So a lot of times, I was suffering with depression." In contrast, although much less common, a handful of women discussed the facilitating role of health problems.
For example, one Black woman talked about how her health problems actually enabled her to go back for her follow-up mammogram appointment: "So I had to see my diabetes doctor, and what I did is that I fitted that mammogram in at the same time... So, I can kill two birds with one stone. I think I killed three because I had to see my endocrinologist as well." Material and economic hardship-A common and pronounced theme throughout all of the interviews was the stress associated with economic hardship. Most women said that they struggled each week to make ends meet and to cover the costs of basic necessities (e.g. food, rent). Several women explained the measures they took to meet their basic needs, which included collecting cans and bottles in exchange for food, abstaining from eating three meals a day, and going to food pantries. Some women explained that they had put aside their own plans and dreams (e.g., going to school) to care for their children, which contributed to their inability to have steady work and adequate income. A sense of shame about lack of money permeated the interviews, as did a sense of feeling devalued generally by the broader society. A Black woman explained: "It's sad that in this country, if you don't have money, you really don't count." Lack of financial stability also contributed to a sense of hopelessness about the future. In response to a question about her hopes for the future, one woman related: "That's a hard one, because I don't have no future...my future will be the same as now...I have dreams, but hey...dreams do not come true." Economic hardship was largely rooted in un- or under-employment. Just over half of the sample was employed, typically in the service industry, including childcare, cleaning, and food services. Some Latinas noted that they had better jobs in their home countries, but struggled to find comparable work in the U.S. because of documentation issues or language barriers. One woman was an accountant in Colombia, but could only find work ironing clothing in the U.S.: "Here I do what I can, since I don't know English. I do recognize it's because I'm ignorant." Reasons for unemployment varied; some women did not work because they were disabled or caring for family members, while others were frustrated at being unable to find a job. Embarrassment and shame also arose in the context of work and education. A Latina shared: "I want to work, but I'm scared to do it. I want to depend on myself, but sometimes it's not easy, especially when you're not finished school,...I'm 43 years old and that's embarrassing for myself." Economic hardship clearly impacted women's ability to afford housing and pay rent. Out of economic necessity, many women were living with family members; a few Latinas felt 'trapped' at home and frustrated by their dependence. An unemployed Latina who lived with one of her children explained: "I don't have any money. And...although they are my children, I feel bad.... Because I feel that I am a burden." Of note, nearly every woman in the study said she dreamed of one day owning her own home. --- Social contextual factors-Interpersonal level Social ties and social isolation-Across race and ethnicity, nearly every woman in the study reported being emotionally close to her children. Beyond children, however, the nature and extent of social ties varied. In general, Black women reported having many family and friends in their network, both in the Boston area and across the U.S.
In comparison, Latinas' relationships were more limited in the U.S., with most of their close ties within their home countries. Many Latina immigrants expressed feeling socially isolated; some connected this 'emptiness' to being separated from family, often because they were awaiting documentation so that they could visit their country of origin and be able to return to the U.S. According to one: "I only have my husband's family here, after that I don't trust anyone here." Another Latina whose husband had died said "I am alone...and alone I will remain...." Some commented on how loneliness can lead to other more serious problems. A Latina explained: "Because I think that when you're alone in another country it's not easy.... There are difficult times when you feel alone, you feel depressed...so you feel homesick or you don't want to be here. Many people turn to the streets, drugs or alcohol, prostitution." Another Latina expressed: "I'm concerned about the loneliness and I'm concerned about getting sick in this country, because I don't have anyone that supports me...Who is going to give me a hand?" Abusive/difficult relationships and major life stressors-A large number of women talked about major life traumas in connection with their social relationships: a few women had been married to alcoholics; several were orphaned; one had not seen her husband for many years; one lost many friends to HIV/AIDS; another was caring for 11 children; and many had been widowed and left to care for children alone. These events or relationships were often described as being major sources of stress, as expressed by a Latina woman: "Many, many times, I found myself in a situation, which I said, 'I want to kill myself' or 'I want to die.'...I have been through so much pain, because one suffers so much." Some of the women were victims of abuse, with a few still trying to get out of abusive relationships. One Latina explained why she has no future plans: "Honestly, I lived a rough life, I suffered too much, so I'm not thinking about myself now, what kind of future I'm going to have... My father, he was sick, he was alcoholic. He was abusive, to my mother, myself, my other sister, my brother. So, with life, it was not easy, it was a rough time." A few women talked about abuse by partners and relatives, and some cited psychologically and physically abusive work situations. It is important to note that a strong sense of pride and resilience emerged from some interviews, often in relation to overcoming tremendous challenges such as raising their children on their own. According to a Black woman who had overcome homelessness and drug addiction: "I'm proud of my determination...A lot of people call me the 'Rock of Gibraltar.'" Demanding Family Roles and Responsibilities-Most of the women in the study had multiple family roles, assuming responsibility for the majority of household tasks for their families, often as single mothers with little support from a partner. According to one Black woman: "I am the backbone of my family...stressful...everyone depends on me to have an answer all the time or to be strong." In many cases, since multiple generations lived together, many women had caregiving responsibilities that pertained to not only their children, but also their grandchildren and/or parents. For example, a Black woman with five children helped care for her granddaughter, a mother with Alzheimer's, and a sister with bone disease. Many women discussed the strain of parental caretaking, in particular.
These caretaking responsibilities often hindered women's abilities to balance work, family, and household duties. Some acknowledged that this resulted in putting their own needs behind those of their family. When asked whether women should spend time taking care of themselves, most agreed that they should, but often qualified this by saying that they needed to care for themselves, so that they could care for others. One Latina stated: "We should take care of our health so we can be there for our family." Throughout the interviews, a major theme that emerged was that women's families, and most commonly their children, were a highly valued and central part of their lives. Many women talked about how the socially defined role of women as self-sacrificing caretakers was instilled in them by their culture, and passed on to them through family. According to a Black woman: "I think it was tradition. I guess [being] the ancestors of a slave [African-Americans], the women have to do double-duty." Cancer-related Experiences among Friends and Family-Most women knew at least one family member or friend who had cancer. For some with a family history of cancer, this led them to worry about their own health. A Black woman who had five family members with cancer relayed her fears: "If I put a blindfold on and stay ignorant, I won't know and I won't worry...and it won't bother me...." Exposure to others who had experienced cancer motivated some to go to their appointments and served as a wake-up call: "We are all more aware, due to what happened to our aunts. We are more on top of getting a mammogram." Others reported that having a family history of cancer was a deterrent to self-care. A Latina whose family member died of cancer explained: "...each time I go to do an exam I get scared...that's why it's been difficult to go back and do the exam.... Because when you go through this experience with a family member, it stays on your mind." About half of the women said that family and friends were a key source of cancer information, often because they felt they received inadequate information from health care providers. Women who said they don't talk to family and friends about cancer attributed this to not knowing anyone with cancer, or to the topic being uncomfortable, scary, or 'taboo' to discuss. --- Social Contextual Factors-Organizational level Access to Care and Insurance-Insurance-related barriers served as a common theme and a major source of stress for many. As one Latina recounted: "Sometimes we don't even want to go to the doctor, because Free Care doesn't cover some things... People are scared to get sick here." Some women felt a tension between attending to their health and the strain of not knowing what was covered by insurance. A number of women recounted receiving bills for hundreds or thousands of dollars for services or medications, after they had been told that they would be covered. According to one woman: "It's ridiculous! I mean, I have enough to worry about...I have $3000 worth of bills from it at home. I have to worry about...I only have $5." Some women with Free Care noted that their coverage had expired, and that they did not know how to renew it, or were in the process of obtaining approval. Several women said that they could not afford or did not qualify for insurance, despite being employed. A Latina who had to retire due to illness stated: "That's my biggest concern... I feel...like between two walls because the medicine that cures me and makes me feel better...I can't take it...
because I had the health insurance of my job..." A few of the women also felt that they received lower-quality services because they had Free Care. Notably, some women, particularly Latina immigrants, discussed how grateful they were to have access to good health care in the U.S., contrasting it to the services offered in their home countries. Some women felt that the services in the U.S. were more advanced, the providers more trustworthy, and that there were more programs for low-income women in comparison to their home countries. According to one woman whose two aunts died of cancer because they could not afford the exams in her country: "I take advantage of every opportunity that they give me [in the US].... Because I bring the experience of my country where if you don't have money you don't get examined." Health Care Providers-The Role of Gender and Language-Many women (particularly Blacks) expressed the importance of having female staff and health care providers for breast-related issues, due to increased trust and comfort. A Black woman explained: "You need a woman. I'm not being a female misogynist here, but in the case of breast cancer, that person that makes contact with people who won't come back, should not be the male primary care provider" and later continued "but if your primary care [provider] is a male...you only hear the medical, technical stuff." Another Black woman shared: "I had a male gynecologist...I think I was intimidated by him, and I wouldn't ask as many questions, or maybe I wouldn't understand the answers. And if I didn't understand the answers, I wouldn't press the issue." And another Black woman said: "I find a woman doctor is more apt to talk about cancer and related issues than a man doctor is.... She's just more open with it." A Latina woman agreed: "I'll trust the female doctor more." Having a health care provider who spoke the same language was very important among Latinas. Many Latinas commented on the difficulties they faced in communicating with their providers. Even with interpreters, some stated that they were not fully able to express themselves: "I would want to take my frustration out or explain myself and can't. It is not the same if someone translates for you." As a result, some Latinas felt they were neglected or that they received incomplete information about their health. Employment-related Policies-Some women voiced the importance of staying employed to survive, often putting job security over other needs, including attending medical appointments. This tension arose most often in situations where women worked in settings with unsupportive or inflexible work policies. According to a Latina: "I'm worried because my job is my only income...to survive...If I'm careless about my job they can take it away or something, so I have to do it [miss appointments when they ask her to work]." A Black woman also expressed the tension between health and work: "...without a job and no insurance, no hospital appointments!...I still got to pay my bills and if I don't have any sick time then I go off the books." --- Social Contextual Factors-Community and Societal Levels Few themes arose at the community level. Most women said that they chose their health center because it was conveniently located in their neighborhood, which made accessing care easier, particularly for women who relied on family members for transportation or faced other transportation barriers. In contrast, a number of themes arose at the societal level, as presented below.
Discrimination and Mistrust-Perceived discrimination emerged as a common theme, particularly among Black women. This resulted in feelings of mistrust towards health care providers. A few Black women said they mistrusted the information from providers because 'health care is a business and it is in doctors' best interest for people to be sick.' One woman noted: Because of what society used to do to us as people, some of the elders are fearful, going to doctors. Based on their skin color. We used to actually have doctors who would... call us problems, as opposed to taking care of our problems, for medical research, because they figured our people weren't worth the value. So, we learned a lot of in-home medical procedures that was passed down from the elders through different generations, and there are a lot of people who are alive today that... they won't go see a doctor unless... you're there with them. You have to walk them through it because they actually fear that they're going to create something on them... Other Black women felt that health care providers were not forthcoming with health information or were too busy to address their concerns. One Black woman commented on how angry she got about how "unfair things are for us" (being Black and female), and felt providers did not pay attention to her: "They have to take the time...especially amongst women of color". This may explain why several Black women expressed a preference for having Black doctors. Several women also talked about the importance of having access to educational materials that they could identify with. A Black woman asserted: "They need to have more diversity in the pictures...for women. Because it's our issues too.... It's all women...They need to show...you know black doctors." A theme also arose in relation to overall mistrust of the health system, and medical research in particular. One Black woman who said she had trouble accessing care because "I have a problem trusting doctors", also expressed distrust of research: "You know, especially over the history of medicine, they've always used minorities in their experiments, in their procedures...and that goes way back now, a stretch." Another Black woman agreed: "All my life, being Black, being female, someone has always had to die before I could benefit." Immigrant Status and Documentation-The interplay between financial challenges, social relationships, and need for health care was complex for Latina immigrants and arose as a common theme. Many commented on how limited finances restricted them from visiting their families in their home countries. Most came to the U.S. for their families, often with great sacrifice, in order to be near their children, to earn money to support their families, or to give their children what they called a 'better life'. One Latina woman said: "I have not been able to have what I wanted, meaning studying, have a career, because I was poor. And I came to this country to persevere, to get ahead, and raise my children. And I told them, 'What I did not achieve, I want you to get.'" Another explained the stress she felt because she was responsible for providing economic support to her family back home: "My family depends a great deal on me. That is why I came to this country." Many Latinas described the difficult transition they experienced in coming to the U.S. One Latina woman said: "It's not like you are fine here, but because of the love [for your family] you hold on."
Documentation issues were repeatedly raised by Latinas, particularly in relation to the emotional strain they felt due to being separated from their families. Several women also explained the stress they felt due to working illegally: "Honestly, I worry a lot about finances, it's that, I'd like to have the opportunity to work legally without thinking I'm breaking the law, I'd like to do that, but no..." A few women said they had feared going to the doctor when they came to this country because they were undocumented. --- Discussion The purpose of this paper was to explore the social context and psychosocial beliefs of low-income Black and Latina women that may influence their ability to follow health recommendations and behaviors. We found that the social context of these women's lives was heavily influenced by a number of interconnected major life stressors that hinder health promotion in general, and the ability to follow behavioral recommendations in particular. These included: negative and inaccurate perceptions of cancer; fatalistic beliefs; competing health issues; economic hardship; abusive and difficult relationships; demanding caretaking responsibilities; cancer-related experiences of friends/family; insurance struggles; mistrust of health care providers; and unsupportive employment policies. The burden of the social context of socially disadvantaged populations has not been adequately described, especially as it relates to health behaviors and adherence, and is particularly poignant when viewed through the eyes of participants. In general, there has been a tendency in public health and medicine to focus on health behaviors and diseases in isolation, without full consideration of the broader social context. 11,21 However, as these narratives demonstrate, women's health is intimately connected to their social and contextual life circumstances. In the context of the major life stressors and competing priorities described by the women in this study, it should not be surprising that many have difficulty making their own health care a priority. These findings are consistent with prior research on gender-defined roles, responsibilities, and expectations that often result in women taking on a disproportionate burden of caretaking and household duties and facing competing work/family demands, [22][23][24] demands that often interfere with women completing behavioral health recommendations. [25][26][27] Some of the themes that arose differed by race and ethnicity. Latinas expressed challenges to self-care related to immigration, documentation, difficult transitions to the U.S. (e.g., language barriers), and social isolation. Similar themes have arisen in life history interviews previously conducted among working-class, multi-ethnic populations in the same geographical area. 12 Black women more often described experiences of discrimination, often in relation to distrust of health care providers, the health system and medical research, as has been previously documented in the literature. [28][29][30] Other studies have documented that mistrust is rooted in the history of harmful treatment of Blacks, ranging from slave experimentation and the Tuskegee Syphilis Study to inequities in health care access and treatment. 2,31,32 Researchers may want to investigate how immigration-related difficulties, discrimination and medical mistrust impact adherence to behavioral recommendations, since these factors have only recently begun to be explored.
Limitations of this research should be highlighted. First, we caution against generalizing these findings beyond the population examined here. These findings are not intended to capture all of the life experiences of urban, lower-income Black and Latina women, but are useful in generating hypotheses that can be tested in future research. We were not able to explore differences across the myriad groups that constitute Black and Latina communities, for example by region or country of origin. We recognize the tremendous heterogeneity within these populations, but were limited by sample size. In addition, while a sense of resilience and strength emerged from the interviews, the majority of themes focused on the hardships and challenges that women faced across multiple life domains. While this is reflective of the life circumstances of these women, future research should explore the strengths, assets, resources, and resiliency of underserved populations in more detail. Despite these limitations, this study offers a number of strengths. We used an in-depth qualitative methodology that is well-suited to achieving our research aims and is effective in establishing trust and rapport with minority women and collecting their thoughts and opinions in their own words. This large qualitative dataset provided detailed exploration of social contextual factors from the perspective of low-income Black and Latina women themselves. This research provides rich narratives that can help inform future research and conceptual models among similar populations of lower-income women, and can be used to inform quantitative measures that seek to capture aspects of social context. These findings may also be useful in guiding interventions and policies to encourage and support adherence to behavioral recommendations among lower-income, multi-ethnic women. There are a number of implications that follow from this research. Qualitative data that considers the complex social and material context of people's lives, as was collected here, is particularly useful for informing the design of socially and culturally appropriate policies and interventions. 12,33 Given that the themes and health-related barriers arose at multiple levels (i.e. individual, interpersonal, organizational, community, and societal), it is critical that future interventions and programs take a multi-level approach and address multiple levels for change. In the case of promoting follow-up after an abnormal mammogram, most interventions have provided patient-level education (e.g., through phone counseling, personalized letters). 34 Clearly, improved communication and health education are important, particularly among racial/ethnic minority populations who more commonly cite communication difficulties with physicians. 35,36 Some of the cancer-related misperceptions and fatalistic beliefs that emerged here highlight the need for providers to understand patients' health belief systems; improved understanding of these culturally-informed beliefs may help improve patient/provider interactions, and ultimately health behaviors and outcomes. 36 Awareness of the social context of low-income women can also help increase physicians' understanding of the competing demands women face, demands that may take priority over health-related needs out of necessity.
While individual-level interventions are important and may be particularly useful for educating patients about cancer prevention, systems- and policy-level interventions hold greater promise for long-term, sustainable change. 11 This is especially the case for disadvantaged populations, who have received less benefit to date from individual behavior-change interventions; this suggests the need for novel and more contextually based approaches that recognize the complexity of people's lives. 11 Health care provider- and systems-level interventions might include phone notification of results or reminders, centralized services, and patient navigators. 37 Patient navigator and lay health advisor programs are a particularly promising avenue, given that they can help address some of the challenges that low-income women face. For example, navigators provide centralized care and have been found to decrease barriers and anxiety, improve trust and communication, and improve behavioral adherence. [38][39][40][41] To facilitate trust, women in our study also identified the importance of having female health care providers of the same race/ethnicity and providers who spoke their language, aspects of programs that also hold great promise. Given the stressful social contexts that we have documented here, delivery of health services must address the multiple challenges low-income populations face, and it is imperative that health care services be made as accessible, convenient, affordable, comprehensive, and integrated as possible. The women in this study, who experienced multiple health issues and had to juggle multiple responsibilities and roles, faced sometimes insurmountable barriers to accessing care. In a context where health services have become increasingly specialized and disaggregated, these women would greatly benefit from health services that are integrated across disease entities, and that offer both physical and mental health services in one location. Health systems could also be improved by policy changes, including having more flexible hours at health centers/clinics. Increasing the availability of services at the local and neighborhood levels may help improve accessibility of services, as might transportation vouchers or free shuttles. As health disparities are embedded in larger social, political, and economic contexts, elimination of inequities will require interventions that do more than address health care policies. Effective policies and efforts to eliminate health disparities must also address social inequities and fundamental non-medical determinants of health. [42][43][44] Specifically, social policies can help improve the living and working conditions of low-income populations, since our social environment structures our opportunities and chances for being healthy. For example, low-income women are more likely to be part-time employees or unemployed, and therefore inadequately insured; policies must be put in place to provide universal health care coverage to ensure that everyone is adequately covered. Steps to diminish financial barriers to health care have been instituted in Massachusetts through Massachusetts Health Reform, although only time will tell the impact of this legislation. For women who are working, employers can offer flexible work policies to facilitate attendance at medical appointments, though this recommendation may be met with strong reluctance
from employers in the service sector where much of this population works. Social policies can also help increase funding to improve the living conditions of lower-income women, devoting money and time towards improving the quality of housing, education, employment opportunities, income support, neighborhood conditions (e.g. safety), and access to resources and facilities (e.g., transportation services, clinics, parks, affordable and healthy supermarkets, job training) in low-income neighborhoods (see Williams et al., 2008 44 for a review of interventions and policies that have been used to address social determinants of health). With ethnic and racial diversity growing rapidly within the U.S., eliminating health disparities is imperative and will require a better understanding of the social context in which health behaviors are developed and maintained. This research illuminated numerous life circumstances and social contextual factors, linked to the status of low-income minority women, that have important health consequences. Future research is needed to test some of the hypotheses formulated here. Specifically, a greater understanding of the pathways by which these circumstances and stressors interact and impact health behaviors and outcomes is needed in order to develop effective comprehensive multi-level interventions, and to identify resources, supports, services and policies that may help mitigate the potentially negative consequences of these social contextual influences on health. (Figure: Application of the social contextual framework.)
Understanding factors that promote or prevent adherence to recommended health behaviors is essential for developing effective health programs, particularly among lower-income populations who carry a disproportionate burden of disease. We conducted in-depth qualitative interviews (n=64) with low-income Black and Latina women who shared the experience of requiring diagnostic follow-up after having an abnormal screening mammogram. In addition to holding negative and fatalistic cancer-related beliefs, we found that the social context of these women was largely defined by multiple challenges and major life stressors that interfered with their ability to attain health. Factors commonly mentioned included competing health issues, economic hardship, demanding caretaking responsibilities and relationships, insurance-related challenges, distrust of healthcare providers, and inflexible work policies. Black women also reported discrimination and medical mistrust, while Latinas experienced difficulties associated with immigration and social isolation. These results suggest that effective health interventions must not only address change among individuals but also change healthcare systems and social policies in order to reduce health disparities.
Introduction [Ev]ery time people look at White Dee... it will serve as a reminder to people of the mess the benefits system is in and how badly Iain Duncan Smith's reforms are needed. White Dee is bone idle and doesn't want to work another day in her life and has no intention of finding a job. She expects the taxpayer to fund her life on benefits -Conservative MP Philip Davies, 2014. I think a lot of people have seen that I'm exactly like them. I'm just an ordinary, everyday person -Deirdre Kelly, The Guardian 2014 --- Meeting 'White Dee' The first episode of Benefits Street (Channel Four, Love Productions, 2014) begins with a 36-second segment titled 'Meet White Dee' that establishes Deirdre Kelly (named in the programme as 'White Dee' from the outset) as the central protagonist of the drama to follow. 'At the heart of James Turner', explains the voice-over (spoken by former Coronation Street actor Tony Hirst), 'is the single mum, "White Dee"'. Throughout this sequence, we see White Dee -a large, middle-aged woman, dressed in a black vest top that reveals tattoos on her back and chest -dancing in the paved front yard outside a house with her teenage daughter. A high-tempo dance track ('Hello' by the Polish pop singer Candy Girl) is belting out of a car that has pulled up by the side of the road. White Dee's daughter moves to the pavement and dances in a style derived from Jamaican dance-hall, which involves sexually exaggerated hip movements and a low, squatting stance. The voice-over continues, 'she is bringing up two kids on benefits [pause] but can also find time to look out for the neighbours'. Then we hear White Dee's voice: 'the street feels like a family, because that's how we treat it, like a family. I am the Mam of the street'. As she speaks this line, the film cuts to a shot of a young family -a man, woman and two very young children -who are incongruously sitting together on a dilapidated sofa on a pavement outside a house, with rubbish bags and waste piled around them. The segment draws to a close with a close-up of White Dee talking on the phone, cigarette in mouth, sat on a sofa strewn with the detritus of everyday family life: papers; a girl's hair-slide; a can of pop; a newspaper; a child's school tie. A final extreme close-up shows a dirty ash-tray filled with cigarette butts. Despite the many 'judgement shots' (Skeggs et al. 2008) in this opening segment, which are arguably designed to invoke disgust reactions (the ash-tray, the young family sat on the rubbish-strewn street, and the shameless 'sexualised' dancing), White Dee is represented from the outset in conflicting and contradictory ways. She is certainly not a victim, nor is she straightforwardly represented as an abject 'benefits scrounging' single mother. Rather, she is an extrovert matriarchal figure, who is depicted as happy, witty, compassionate and, perhaps most interestingly, as 'free' from the complaints and constraints of 'time-poor' middle-class working mothers. Indeed, White Dee is depicted as unbounded from the strictures of idealised forms of neoliberal femininity, and specifically the pressures of 'having it all'. This rapid response article offers an analysis of the relationship between media portrayals of people living with poverty and political agendas with respect to welfare and social security.
Specifically, we examine the making and remaking of White Dee in the public sphere -as abject, heroic and caring -to think afresh about the gender politics of economic austerity measures unleashed by neoliberalism. Rather than seek to resolve the disparate meanings configured through White Dee, or uncover some 'authentic' subject amidst them, our intention is to ask: why has White Dee emerged as a paradoxical figure of revulsion, fascination, nostalgia and hope in the context of the current dramatic reconfiguration of the welfare state? In the rest of this article we briefly introduce Benefits Street as a genre of programming distinct to austerity, before fleshing out the complex and contradictory meanings and affects attached to White Dee. We argue that these public struggles over White Dee open up spaces for urgent feminist sociological enquiries into the gender politics of austerity. --- Austerity porn? Channel Four and Love Productions describe Benefits Street as a 'documentary series' which 'reveals the reality of life on benefits, as the residents of one of Britain's most benefit-dependent streets invite cameras into their tight-knit community' (see Channel 4 2014). However, rather than having the political impetus of documentary realism, Benefits Street follows the conventions of reality television which emerged in the 1980s when US and European broadcasters developed low-cost alternatives to conventional programme formulas. As Imogen Tyler (2011) has previously argued, programmes such as Benefits Street draw on many of the formal techniques of socially committed television documentary: the use of hand-held cameras, 'fly-on-the-wall' camera angles, the employment of non-actors and an improvised, unscripted, low-budget 'authenticity', in order to justify exploitation (of unpaid participants) and voyeurism through an implied association with 'documentary realism'. As she argues, 'these kinds of reality TV programmes have none of the aspirations of longer standing socially critical and politicized traditions of British documentary film and television' (Tyler 2013: 145; Biressi and Nunn 2005). Benefits Street is not motivated by a desire 'to change social policy, uncover invisible lives and challenge an inequitable social system' (Biressi and Nunn 2005:10). As White Dee herself states, when reflecting on her participation in the show, 'it's like Big Brother, except no one is evicted. Or paid' (Kelly 2014). A central feature of reality TV is its focus on 'class others', which has continued and intensified under current austerity regimes in pernicious ways. As a growing body of (largely feminist) class analysis has illuminated, these forms of programming operate as mechanisms of 'class making' within the cultural realm. They are characterised by the shaming of classed others through inviting audiences to read class stigma onto participants through evaluations of their conduct, bodies and dress as lacking and in need of transformation (see Allen and Mendick 2012; Biressi and Nunn 2005; Jensen 2013a; Skeggs and Wood 2012; Tyler 2011; Tyler and Gill 2013; Woods 2014). In many ways Benefits Street is archetypal of what Tracey Jensen calls 'poverty porn' (2013b), a subgenre of British reality television programmes that emerged in the summer of 2013.
Focusing on We All Pay Your Benefits (BBC 2013), Jensen argues that these kinds of reality programmes, instrumental to the introduction of financial austerity measures ostensibly designed to reduce welfare spending, individualise poverty, blaming and shaming the poor for their circumstances (Jensen 2013b). Yet, there is also something about Benefits Street's sensibilities, framing devices and emotional power -manifesting in its central protagonist White Dee -that troubles and exceeds such a critical reading. Benefits Street is not just about displays of 'poverty' that repulse and intrigue viewers. It also invites voyeuristic --- Abject White Dee In the wake of Benefits Street, the figure of White Dee was struggled over more than any other of the show's participants. In public and political commentary, she was positioned in starkly oppositional ways and drawn upon as a key figure upon which competing agendas about welfare, austerity and the state were mobilised. One of the dominant meanings given to White Dee, both within Benefits Street and in audience responses to it, is as abject Other of the 'good', 'hard working', future-orientated, individualistic and entrepreneurial neoliberal citizen (Allen and Taylor 2012; De Benedictis 2013; Jensen and Tyler 2012). Through this framing, she is positioned as feckless, lazy and undeserving; the product of a bloated welfare system. White Dee has been mobilised by right-wing journalists and politicians as evidence of 'Broken Britain': White Dee is the woman who many think sums up everything that is wrong with this country today. With her two children by two different but absent fathers, her fags and her telly, her long-term unemployment (she last worked in 2007) and indolent ways, some see her as the ultimate poster girl for Benefits Britain. (Moir 2014) Here White Dee represents the figure of 'the skiver' par excellence. Her reproductive capacity and caring labour is framed as idleness and a drain on national resources. In January 2014, the Secretary of State for Work and Pensions, Iain Duncan Smith, invoked Benefits Street as 'evidence' to justify punitive, austerity-driven benefits cuts and workfare reforms. Benefits Street, he argued, revealed 'the hidden reality' of the lives of people 'trapped' on state benefits (Duncan Smith 2014). 'Dole Queen White Dee', as the right-wing press named her, is defined through her inadequacies and failings in relation to her abject maternity (the mother of fatherless children), 'work' and time. White Dee is 'out of step' both in terms of her non-participation in paid work within the labour market, and subsequent 'dependency' on the state, and in her deficit relationship to time and space; stagnant, immobile and 'bone idle', unwilling and unable to move socially or spatially. We return to White Dee's imagined relationship to time in the penultimate section of this article. --- Heroic White Dee The dominant counter-framing to this abject figuration was 'White Dee as hero': a community worker and campaigner for working class communities. White Dee was figured in these heroic depictions as both a victim (of mental health problems, and of the underhand and exploitative tactics of TV producers) and as an agent of authenticity and 'common-sense'. After appearing on the Channel 5 'debate show', The Big Benefits Row, White Dee was praised by political and media commentators as articulate, charismatic and a potential future politician.
Feminist journalist Decca Aitkenhead, writing in The Guardian, describes her in the following terms: White Dee is enormously likable. Unaffected yet knowing, she is very direct and can be extremely funny, with a natural gift for comic timing. She is also one of the most tolerant, least judgmental people I've ever met, and remarkably pragmatic about the hand she has been dealt. (Aitkenhead 2014) At the same time, the right-wing publication the Spectator co-opted White Dee as a campaigner for benefits cuts for the unemployed and more 'in-work' benefits for the low-paid, and heralded her as a future right-leaning independent MP (see Kelly 2014). Both the abject and heroic framings of White Dee pivot on common-sense notions of work, time and value. In doing so, both elide considerations of what is at stake -materially, symbolically and psychologically -in the current reformation of the state, and the disproportionate effect of the cuts on mothers and children (The Fawcett Society 2012; The Women's Budget Group 2012). As Skeggs argues: '"reality" television points to solutions, ways to resolve this inadequate personhood through future person-production -a projected investment in self-transformation -in which participants resolve to work on themselves' (Skeggs 2010: 80). White Dee must become a campaigning MP, a celebrity, come off benefits and enter paid work in order to become intelligible and valuable. Seeking to disrupt the claiming of White Dee as either abject or heroic, we now turn to a third reading of White Dee as a figure of nostalgia and desire. In doing so, we attempt to think with White Dee as a figure and as forms of practice which speak to alternative values concerned with relations of care. --- Caring White Dee As indicated at the start of this article, throughout the show White Dee is framed (albeit precariously) as the resilient and caring 'mother' of James Turner Street. This is made evident throughout the series as her family and local residents turn to her for guidance. White Dee's relationship with her neighbour Fungi -who seeks advice from her after a cancer scare and whom she accompanies to the hospital -exemplifies this role. Likewise, media commentary repeatedly emphasises the community spirit that she embodies. It is this figuration of White Dee as caring matriarch, and the feelings this generates, which we argue provide a way into thinking differently about austerity. Specifically, we are interested in White Dee's framing as a nostalgic figure. Heroism on the Left is often imagined in the form of a nostalgic desire to return to working class masculinities. As Stephanie Lawler writes (Lawler under review), dominant motifs in Left representations of its revolutionary potential and solidarity are intrinsically masculine: the 'angry young man' and the 'heroic worker'. Such romanticized figures exclude and elide women and their labour (see also Steedman 1986; Skeggs 1997). Public modes of collectivist class solidarity and consciousness have historically been 'less available or desirable to working-class women' (Hey 2003: 332). Working-class women -in their feminised labour of reproduction and care and location within the space of the domestic -have troubled the Left's emblematic motifs of 'Working Class Changes of the World' (Lawler under review: 18), past and present.
White Dee represents an alternative nostalgic figure; one produced out of a different set of desires -for slower and caring forms of community relations and inter-reliance -which brings into view the gendered politics of austerity. Neoliberalism shapes a particular relationship to time: there is never enough time; we must always maximize time; we must not stand still (Davies and Bansel 2005). White Dee is mediated within Benefits Street as a figure from another time. While this engenders forms of symbolic and material violence -such as demands that she get a job and accusations that she is a lazy benefits cheat -this 'out of sync-ness' provokes something that exceeds this. Rather, White Dee's 'different' relationship to 'public time', and specifically her insistence on 'maternal time' (see Baraitser 2012: 236), becomes something that 'we', the middle-class viewer framed by the programme, envy. As White Dee states on invitations to capitalise on her celebrity through participation in reality TV programmes: "I could do those shows. But I'm not going to sacrifice my kids. I've never been without my kids. I'm a parent first." If it weren't for her responsibilities as a mother, would the reality circuit appeal to her? "Course it would!" she laughs. "People offering to throw money at me for this, that and the other? But it's not all about the money. I'm not the type of person who would give up being a proper mum just for money." (Kelly, in Aitkenhead 2014) The nostalgic longing figured through White Dee provides an insight into the kinds of fantasies and 'psychic damage' current neoliberal regimes engender (Layton et al. 2014). In other words, if the competitive neoliberal market economy demands particular kinds of entrepreneurial, future-oriented, self-sufficient and individualistic selves, then White Dee figures a desire for modes of caring and common forms of social and economic relations which are anathema to the logic of financial capitalism. In this respect, White Dee is a resistant figure, and struggles over her within the public sphere are revealing of (middle-class) fantasies and desire for solace and escape from the surveillance of the cruel and penal neoliberal state, and the individualising and competitive qualities of everyday life. --- The Gender Politics of Austerity We can see caring as a crisis of value -the value of women's work. [...] Caring offers us a different way of being in the world, relating to others as if they matter, with attentiveness and compassion (Skeggs 2013) In the present moment, it is women like White Dee who are filling the gap left by the British government's decimation of state-supported services such as childcare and care for the elderly (Jensen and Tyler 2012; Levitas 2012). They are carrying out the unpaid domestic and caring work within communities that goes unrecognised within policy rhetoric about 'worklessness' which saturates the political register of austerity. In the context of a war of austerity waged against women and children, we urgently need to think -again -about questions of care, labour and social reproduction. Important challenges to the gendered impacts of austerity are manifesting in organised, collective spaces such as the Women's Budget Group and the Fawcett Society. Indeed, a recent statement by an anonymous collective, publishing under the name 'The Feminist Fightback Collective', reanimates long-standing feminist debates about the central role of social reproduction in sustaining the fabric of society.
The collective states: Exploring the focus, distribution and likely effects of this austerity programme through the lens of social reproduction allows us to better understand not only the uneven impacts it will have on different sectors of society, but also the ways in which it supports the production and accumulation of wealth, and its concentration into the hands of the few. And it may also point to sites of resistance and transformation (2011: 74) Thinking through 'austerity' with White Dee, as a figure that is representative of unvalued forms of social reproduction, is instructive as a way of considering resistance to the punishing demands of the neoliberal post-welfare society. As Kathi Weeks similarly argues, if what is considered to hold value were broadened and shifted so that social reproduction (in its myriad forms), rather than production (defined primarily as paid work), underpinned the driving mechanism of sociality, then this would signal a shift away from the logic of capital to 'demanding not income for the production that is necessary to sustain social worlds, but income to sustain the social worlds necessary for, among other things, production' (Weeks 2010: 230). Perhaps surprisingly, struggles over figures such as White Dee in the public sphere open up spaces for discussion of the gendered impacts of austerity, and the ways in which 'cutbacks in social provision are privatising work that is crucial to the sustenance of life' (The Feminist Fightback Collective 2011: 73). In this short article, we have argued that counter-readings that resist the dominant figuration of White Dee as an abject and/or heroic working-class figure allow us to ask bigger questions under the present social and political conditions: What counts as labour? What counts as work? Who and what has value and is value? Popular culture is in this regard one site through which we should attend to the gender politics of austerity. --- Notes This dance style became notorious in 2013, when white US pop star Miley Cyrus incorporated it into a sexually explicit 'twerking' performance at the MTV Video Music Awards. Predictably enough, a second series is in production, with Love Productions currently scoping locations -and 'characters' (unpaid participants) -for the next 'Benefits Street' (see Vernalls 2014). Writer and campaigner Owen Jones also used this term in his lecture for the Royal Television Society in November 2013. See: http://www.rts.org.uk/rts-huw-wheldon-memorial-lecture, accessed 13 March 2014.
Focusing on Benefits Street, and specifically the figure of White Dee, this rapid response article offers a feminist analysis of the relationship between media portrayals of people living with poverty and the gender politics of austerity. To do this we locate and unpick the paradoxical desires coalescing in the making and remaking of the figure of 'White Dee' in the public sphere. We detail how Benefits Street operates through forms of classed and gendered shaming to generate public consent for the government's welfare reform. However, we also examine how White Dee functions as a potential object of desire and figure of feminist resistance to the transformations in self and communities engendered by neoliberal social and economic policies. In this way, we argue that these public struggles over White Dee open up spaces for urgent feminist sociological enquiries into the gender politics of care, labour and social reproduction.
Introduction Appalachia is a region of the USA that includes 420 counties in 13 states [1]. It follows the Appalachian Mountains and extends more than 1,000 miles from southern New York to northern Mississippi [1]. A large percentage (42%) of the region's population lives in rural areas, compared to 20% of the population in the U.S. Appalachia, which once was dependent on mining, forestry, agriculture, and industry, has developed a more diversified economy but remains economically distressed, with higher percentages of residents living in poverty, unemployed, and with lower educational attainment compared to national rates [1,2]. In addition, residents of Appalachia have limited access to health services and experience many health disparities. A significant disparity among residents of Appalachia is elevated cancer incidence, prevalence, and mortality rates (lung, colorectal, and cervical cancers) [3][4][5][6][7][8]. Contributing to the elevated cancer rates are many factors included in the various levels of the social determinants of health framework [9]. Examples of these factors are individual risk factors (e.g., decreased cancer screening rates, increased tobacco use), social context (e.g., social cohesion), social condition (e.g., culture), and institutional context (e.g., health care system) [10][11][12][13][14][15]. Measurement of contextual and social variables at multiple levels, such as the environment, neighborhood, community, and social network, is important in research that attempts to understand the mechanisms responsible for the cancer disparities among residents of Appalachia. To address the complex nature of this problem, community-based participatory research (CBPR) strategies have been used as the keystone for working in underserved Appalachian communities to address cancer risk factors (e.g., cancer screening rates, physical inactivity, and uptake of the HPV vaccine) [15][16][17][18]. The Appalachia Community Cancer Network (ACCN), one of the National Cancer Institute Community Network Program sites, has a mission to reduce cancer disparities in Appalachia through community participation in education, training, and research. The ACCN has established relationships with community leaders, researchers, clinicians, public health professionals, health and human service agencies, and universities across central Appalachia to accomplish its mission. To address ACCN's mission, a series of seminars for community members and individuals interested in cancer disparities in Appalachia entitled "Addressing Health Disparities in Appalachia" was conducted in collaboration with ACCN's partner institutions: the University of Kentucky, The Ohio State University, Pennsylvania State University, Virginia Polytechnic Institute, and West Virginia University. The seminar series consisted of three regional seminars and one national seminar. The educational objectives of the seminars were to increase knowledge of existing cancer disparities in Appalachia and to disseminate research findings from CBPR projects conducted in Appalachia. An additional objective of the national seminar was to foster capacity building among Appalachian community members for CBPR by promoting networking at the seminars. Evaluation of the Appalachian cancer disparities seminars was conducted to assess changes in knowledge and attitudes by analyzing pre-post-surveys of participants attending the four seminars.
In addition, at the national seminar, a social network analysis was conducted among the participants prior to and at the end of the meeting to evaluate potentially new patterns of collaboration for future CBPR research. The purpose of the evaluation of the seminars was to determine if the short-term outcomes of the seminars would assist ACCN in reaching its long-term goal of reducing cancer disparities in Appalachia. --- Methods The process used for evaluation of the seminars is displayed in a logic model (Fig. 1). A pre-post-evaluation of all participants was conducted for each seminar. Three 1-day regional seminars were held in Kentucky (n=22), Ohio (n=120), and Pennsylvania (n=92). A 2.5-day national seminar was conducted in West Virginia (n=138). The seminars were conducted from October 2008 to September 2009 and were hosted by one of the ACCN-affiliated institutions. The seminars were supported by an NIH conference grant that allowed all participants to receive free registration. ACCN staff members and ACCN-affiliated, community-based cancer coalition members advertised the seminars to public health professionals, cancer control advocates, community leaders, cancer survivors, and other community members involved in eliminating health disparities in Appalachia. Advertising the seminars was accomplished by posting flyers in local Appalachia community locations (health departments, libraries, etc.) and by sending seminar information by fax and email to different community groups and agencies. The seminars were designed to draw attention to the cancer disparities that exist in Appalachia and to highlight the CBPR projects and evidence-based educational programs being conducted by academic and community partnerships in Appalachia. Each seminar used a common agenda format including speakers who were academic researchers, junior investigators, and community members from local cancer coalitions. Panel discussions were featured to facilitate sharing ideas with the members of the audience. In addition to presentations directed at cancer disparities and interventions to reduce cancer, the seminars also addressed Appalachian identity, the impact of culture and heritage on cancer disparities in Appalachia, and the importance of storytelling in Appalachia. Although the content of all seminars was comparable, the regional seminars featured local researchers and community members compared to the national seminar which featured researchers, community members, and cancer-related issues associated with the entire Appalachian region. Individuals preregistered for the seminars on-line or by calling a toll-free telephone number. After preregistering for a seminar, individuals received a subject identification number and were requested to complete a web-based pre-seminar survey using SurveyMonkey ®. The short survey developed for this seminar series (and not tested for reliability or validity) included: demographic characteristics, knowledge (10 true/false items) and attitudes (10 items on a five-point Likert scale: strongly disagree to strongly agree) about cancer and cancer disparities in Appalachia, and one open-ended question that asked participants to describe the unique qualities of people living in Appalachia that best represent the overall spirit of this population. The identical pre-post-surveys took approximately 5 to 10 min to complete. Individuals who registered on the day of the seminar completed a pre-seminar paper survey and received a subject identification number. 
At the end of the seminar, individuals completed a short post-seminar paper survey that included their subject ID number, knowledge and attitudes about cancer and cancer disparities in Appalachia, as well as a speaker evaluation form. All paper surveys were structured for TeleForm electronic scanning, were completed on site, and were scanned and verified after completion of the seminars. The option of completing the pre-seminar survey on-line was designed to increase response rates and reduce costs. A mixed mode pre-post-survey design may cause measurement error that impacts the ability to measure change over time [19]. To minimize this error, all questions on the pre-and post-surveys were presented in the same format and order. The participants did not receive any incentive for completing the pre-post-surveys. A unique feature of the evaluation of the national seminar was inclusion of a social network component, operationally defined for this study as familiarity with individuals attending the conference. Individuals attending the national seminar were provided with a list of all preregistered attendees categorized by state of residence. At the beginning of the seminar, participants were asked to review the list of attendees and indicate each person they knew prior to attending the seminar. At the end of the seminar, participants were asked to complete the same form and to mark additional people they met and talked to at the 2.5-day seminar. Special events to improve networking at the seminar included a special poster presentation event, randomly assigned seating during meals, and an "Appalachia Cancer Jeopardy" game during an evening session. The social network analysis included in this study was based on the identification of the names on the pre-and post-surveys. An ACCN report, "The Cancer Burden in Appalachia-2009," was distributed at the national seminar, and approximately 1 month later, an email request was sent to all participants requesting their assessment of the report and its usefulness [20]. The evaluation plan for the seminar series was approved by the institutional review board of The Ohio State University. Summary statistics (means, percentages) were used to describe the participants. Participants were assigned knowledge scores pre-and post-seminar using the number of correct responses out of 10 true/false questions. Attitude scores were also assigned using the sum of the 10 Likert scale items mentioned above. Due to an administrative error, participants from the Ohio seminar were excluded from the knowledge analysis. Since some participants failed to complete a pre-or posttest survey, simple paired t tests could not be used to test for pre-post-difference in knowledge and attitudes. Instead, knowledge and attitudes data were analyzed using repeated measures models fit using restricted maximum likelihood (SAS PROC MIXED REPEATED statement), which provide unbiased estimates of pre-post-differences assuming that the data are missing at random [21]. Our models included fixed effects for seminar, time (pre-/post-), and a seminar-by-time interaction. When analyzing the knowledge data, an unstructured variance-covariance matrix was used to model the residual errors while a compound symmetric matrix was used when analyzing the attitudes data. 
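The repeated-measures analysis just described can be approximated outside SAS. The sketch below is a minimal Python example using statsmodels with hypothetical column names (subject_id, seminar, time, score); a random intercept per participant corresponds to the compound-symmetric covariance used for the attitudes data, while the unstructured residual covariance used for the knowledge data has no direct equivalent in this simple interface.

```python
# Minimal sketch (not the original SAS analysis): repeated-measures model with
# fixed effects for seminar, time, and their interaction, fit by REML.
# Assumes a long-format table with hypothetical columns:
#   subject_id, seminar (KY/OH/PA/WV), time (pre/post), score.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("seminar_scores_long.csv")  # hypothetical file name

# A random intercept per participant induces a compound-symmetric
# within-subject covariance, the structure used for the attitudes data.
model = smf.mixedlm("score ~ C(seminar) * C(time)", data=df, groups=df["subject_id"])
result = model.fit(reml=True)
print(result.summary())

# If the seminar-by-time interaction were significant, separate pre/post
# contrasts per seminar would follow, with a Bonferroni correction as in the text.
```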
If the seminar-by-time interaction was significant (based on an F test evaluated at α=0.05), we performed separate tests of post-pre-differences for each seminar using t tests of linear contrasts of our model parameters evaluated under a Bonferroni-corrected significance level (α=0.0167 for knowledge and 0.0125 for attitudes); otherwise we evaluated the main effect of time using a single t test evaluated at α=0.05. The Kenward-Roger method was used to calculate the denominator degrees of freedom for both the F and t tests [22]. Since the knowledge and attitudes data were left skewed, we performed a power transformation of each outcome to remove skewness (fifth power for knowledge, cubed for attitudes, following the methods of Box-Cox) [23]. Reported p values are based on these power transformations, though the pre- and posttest means and standard errors we report are based on running the models on the original scale. All analyses were conducted using SAS Version 9.2 (SAS Inc., Cary, NC). The social network data were analyzed to determine visual changes in network patterns [24]. The written comments submitted by the seminar attendees to the one open-ended question on the survey ("Describe the qualities that best represent the overall spirit of the people living in Appalachia") were categorized into repeated themes. --- Results --- Participants Participants (n=335) attending the four seminars were predominantly college educated (83.9%), non-Hispanic (97.3%), white (80.3%), and female (74.6%, Table 1). Only 14% of the participants reported living in an urban setting. The occupation of the participants included academic researchers (29.0%), healthcare providers (15.8%), public health professionals (15.2%), and members of community agencies (13.4%). --- Pre-Posttest Prior to the seminars, 309 (92%) participants answered the knowledge questions (true/false) and 291 (87%) participants completed the attitude items (Likert scale). After the seminars, 211 (63%) participants completed the knowledge questions and 202 (60%) participants completed the attitude items. Assessment of change in knowledge (Table 2) was limited to data from the Kentucky, West Virginia, and Pennsylvania seminars and was found to differ by seminar (F(2, 148)=3.60, p=0.030). There was no change in knowledge following the Kentucky seminar; however, knowledge improved following the national and Pennsylvania seminars (Table 3). Change in attitudes also differed by seminar (F(3, 218)=4.75, p=0.003), with a significant change only occurring following the Ohio seminar (Table 3). --- Description of Appalachian Residents The comments from the participants included statements about the overall spirit of people living in Appalachia including the following terms: family oriented, independent, proud, community connected, hardworking, friendly, patriotic, resistant to change, deep rooted in culture, and hospitable but cautious of "outsiders." One participant summed up the residents of Appalachia as "filled with beautiful contradictions." --- Social Network Analysis This analysis consisted of measuring and mapping the normally invisible relationships between people. In the social network analysis, the nodes were the national seminar participants (color-coded circles based on participant's state of residence) and the lines were the ties between the different participants. The pre-meeting social network map (Fig.
2a) demonstrated that most individuals knew colleagues from the same state, with a few participants having cross-state connections. The post-meeting map (Fig. 2b) showed a significant increase in the number of cross-state connections. --- Cancer Burden in Appalachia-2009 Report Approximately 1 month after the national seminar, an email request was sent to the 138 participants asking them to complete a short web-based survey (SurveyMonkey ® ) about the ACCN cancer disparities report that was distributed at the meeting. The survey completion rate was 48.6% (n=67). Of those completing the survey, 75% (n=50) reported looking at the report after the meeting and 45% (n=30) reported using the document during the month following the WV seminar. Participants (n=63; 94%) reported that they planned to use the report in the future for grant writing, for presentations, program planning, and to share with the local media. Among participants reporting already using the ACCN report, 100% thought the information was easy to locate and 90% were satisfied with the information. --- Discussion Evaluation of a seminar focusing on cancer health disparities in Appalachia was conducted to determine the ability of the educational seminars to accomplish three objectives: (1) increase knowledge of existing cancer disparities in Appalachia, (2) disseminate research findings from CBPR projects conducted in Appalachia, and (3) foster CBPR capacity building by promoting networking among participants of the seminars. Results of the evaluation suggest that the objectives, or short-term outcomes, of the educational seminars were accomplished. Typically, educational seminars directed at health care professionals and community members are evaluated for attendance, satisfaction with the speakers and the overall program. In addition to these standard measures, the evaluation of the Appalachia cancer disparities seminars also included measuring changes in knowledge, attitudes, and the unique feature of measuring the change in social networks among participants of a national seminar. By planning activities within the seminar agenda to promote networking, we hoped that participants would gain an awareness of assets within each other's communities and become aware of potential new collaborators to address the cancer health disparities in their communities. Key principles of CBPR include community members participating in the planning, implementation, data collection and interpretation, and the dissemination of communitybased programs [25]. The Appalachia cancer disparities seminars provided an opportunity for community members to participate in these tenets of CBPR by including community members in the planning of the seminars, as speakers who reported findings from projects that they conducted in their communities, having community members exchange ideas during panel discussions, and having free exchanges with community members in the audience and at social networking events. This component of the seminars was positively received by the seminar participants as documented in the post-seminar evaluation. Based on comments from community members attending the seminars, a second more community-friendly ACCN report was developed, "Addressing the Cancer Burden in Appalachian Communities-2010" [26]. The 2010 ACCN report provided more information on cancer risk factors and risk reduction, a glossary of terms, and step-by-step instructions for completing a community assessment. 
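The kind of pre/post network comparison shown in Fig. 2a and 2b can be reproduced with standard tools. The following is a minimal Python sketch using networkx with hypothetical participant IDs, ties, and a state lookup; it simply draws the pre- and post-seminar "who knows whom" graphs side by side, with nodes colored by state of residence.

```python
# Sketch only: pre/post "who knows whom" maps, assuming hypothetical edge lists
# (pairs of participants who reported knowing each other) and a state lookup.
import networkx as nx
import matplotlib.pyplot as plt

pre_edges = [("P01", "P02"), ("P03", "P04")]                 # hypothetical pre-seminar ties
post_edges = pre_edges + [("P01", "P03"), ("P02", "P04")]    # new ties formed at the seminar
state_of = {"P01": "KY", "P02": "KY", "P03": "OH", "P04": "WV"}  # hypothetical lookup
palette = {"KY": "tab:blue", "OH": "tab:orange", "WV": "tab:green"}

fig, axes = plt.subplots(1, 2, figsize=(10, 4))
for ax, edges, title in zip(axes, [pre_edges, post_edges], ["Pre-seminar", "Post-seminar"]):
    G = nx.Graph(edges)
    colors = [palette[state_of[n]] for n in G.nodes]
    nx.draw_networkx(G, pos=nx.spring_layout(G, seed=1), node_color=colors, ax=ax)
    ax.set_title(f"{title}: {G.number_of_edges()} ties")
plt.savefig("network_pre_post.png")
```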
Although the short-term objective of promoting networking among the seminar attendees was accomplished at the seminar, assessment of the long-term effects of the networking at the meeting is beyond the scope of this evaluation. The networking events at the seminar, however, focused on the process instead of the seminar's content, providing the opportunity for seminar attendees to develop new partnerships. It takes time to build trustworthy and effective partnerships to address the mutual goal of reducing cancer disparities among the residents of Appalachia. Capacity building is an important step that builds the infrastructure for future CBPR projects to reduce cancer disparities, improves community empowerment, provides a better likelihood of sustaining interventions, and is critical for policy advocacy [27][28][29]. This study is not without limitations. Limitations include that the pre-post-surveys were developed specifically for the seminars, and although questions were reviewed by content experts, the surveys were not tested for reliability or validity. Thus, because the test was newly developed, interpretation of the meaning of the test performance is limited, given the absence of comparative data. The majority of participants in this study were college graduates and are not representative of the residents of Appalachia who are most affected by cancer health disparities. In addition, surveys were not completed by all seminar attendees, and a mixed mode administration of the surveys may have introduced measurement error in the analysis. Innovative strategies to increase response rates from program participants should be developed for future educational seminars and programs. The limitations of this study might limit the ability to generalize its findings to other populations. Among participants attending an Appalachia cancer disparities seminar, an evaluation found improved knowledge, dissemination of findings from CBPR projects and evidence-based educational programs, and changes in the social network of participants that potentially will increase CBPR projects conducted in Appalachia to address the cancer burden among its residents. This evaluation included the unique methodology of mapping the social network of the participants to document changes; the inclusion of these methods in program evaluation should be assessed by others in the future. --- Table: Model-based estimates of mean (SE) knowledge and attitude scores by seminar (Ohio was omitted from the knowledge analysis due to an administrative error) --- Fig. 1: Logic model for Appalachia cancer health disparities seminars
Cancer education seminars for Appalachian populations were conducted to: (1) increase knowledge of existing cancer disparities, (2) disseminate findings from Appalachian community-based participatory research (CBPR) projects, and (3) foster CBPR capacity building among community members by promoting social networking. Evaluation of the seminars was completed by: (1) using pre-post-surveys to assess changes in knowledge and attitudes at three regional and one national seminar and (2) measuring a change in the social network patterns of participants at a national seminar by analyzing the names of individuals known at the beginning and at the end of the seminar by each participant. Among participants, there was a significant increase in knowledge of Appalachian cancer disparities at two seminars [national, t(145)=3.41, p=0.001; Pennsylvania, t(189)=3.00, p=0.003] and a change in attitudes about Appalachia at one seminar [Ohio t(193)= -2.80, p=0.006]. Social network analysis, operationally defined for this study as familiarity with individuals attending the conference, showed participation in the national seminar fostered capacity building for future CBPR by the development of new network ties. Findings indicate that short-term outcomes of the seminars were accomplished. Future educational seminars should consider using social network analysis as a new evaluation methodology.
Background The United Nations Population Division states that by 2050, approximately 66% of the globe's population will live in urban areas [1]. The urban poor have higher fertility, high unmet need for family planning services and poor maternal health outcomes [1]. A range of factors that characterize urban poverty contribute to these poor reproductive health outcomes: unemployment, unsanitary and overcrowded living conditions, inadequate access to formal health services, gender-based violence and limited autonomous decision-making for women [1]. The urban poor therefore face vulnerabilities that can put them at a disadvantage compared to their rural counterparts [1]. The unmet need for family planning has also been reported to be highest among women younger than 20 years of age and lowest among women aged 35 and older, with these differences being widest in South Central Asia, including India [2]. Similar findings have been reported in studies done in South East Asia [3], South Africa [4] and other developing nations of the world [5]. The unmet need for family planning among married women of all ages (15-45 years) in India is 12.9% [6], with the unmet need for spacing and limiting being 5.6 and 7.2% respectively [6]. No significant decline in the unmet need for family planning has been observed over the past decades in the country [6,7]. A high level of unmet need for family planning is seen in the age groups of 15-19 years and 20-24 years (27.1 and 22.1% respectively) [7]. Among all the states, Uttar Pradesh, with one sixth of India's population (200 million) [8], shows an even worse picture with very high levels of unmet need of about 18.1% [6]. The state has an annual growth rate of about 16.5 [8], with a total fertility rate (TFR) of 2.7 [6]. Also, in the two target age groups, i.e. 15-19 and 20-24 years, the Age Specific Marital Fertility Rate (ASMFR) is reported to be the highest (271.0 and 383.9 respectively) [8]. In addition, a low contraceptive prevalence rate (CPR) was reported in these age groups (14.5 and 26.7% respectively) [7]. High fertility (2.96) and a low contraceptive prevalence rate (58.2%) have also been reported in slums in comparison to non-slum areas (2.78 and 65.1%) [7]. State level data for slums in Uttar Pradesh show a wide difference in unmet need between slum and non-slum areas (12.9 and 8.9% respectively) [7]. Other studies conducted in urban slums of Uttar Pradesh have also indicated a high unmet need among married women of the 15-45 years age group [9][10][11]. About 44.5 million people reside in urban slums in UP (Census 2011) [8]. Young people migrating from rural areas in search of earning opportunities mostly settle in slums. Here, they not only lack basic amenities for living but also have insufficient access to health services, which prevents them from utilizing the facilities of health programs (Fig. 1). A large population with relatively high fertility, driven by low use of contraceptives in this age group (15-24 years), and living in suboptimal conditions makes them a priority group for family planning services from a public health perspective [14]. To reach this young population, it is imperative for policy makers and program managers to understand their need for FP services and the factors influencing it. No such data are currently available in the country for this age group, especially for young married women living in urban slums.
As the reproductive health needs of the millions of urban poor cannot be ignored, this study was conducted with the aim of assessing the unmet need for family planning services among currently married young women living in urban slums of Lucknow (Uttar Pradesh, India), the reasons for this unmet need, and the factors influencing it. This will help in delineating the individual, community and health services level factors that can be harnessed or changed to improve contraceptive use and enable young women living in urban slums to satisfy their need for contraceptives at this stage of the family building process (Fig. 1: Trends of unmet need for family planning, Uttar Pradesh (%) [6,7,12,13]). Objectives 1. To assess the unmet need for family planning services among the young married women living in urban slums of Lucknow, India. 2. To explore the factors influencing the unmet need for family planning services among the young married women. --- Methods --- Study design Cross sectional study. --- Study settings The study was conducted in the catchment slums of Urban-Primary Health Centres (U-PHCs) of Lucknow [15]. Health services to the urban poor are provided through Urban-Primary Health Centres (U-PHCs), Bal Mahila Chikitsalays (BMCs), District Hospitals and a plethora of private practitioners. --- Study period The study was conducted from August 2015 to July 2016. --- Study universe All young married women (15-24 years) living in urban slums. --- Young married women [7, 16] Currently married young women in the age group of 15-24 years (Census-India, UN Secretariat). --- Study population Young married women (15-24 years) living in the urban slums of Lucknow. --- Study unit Young married woman (15-24 years) currently living in the urban slums of Lucknow for at least 6 months. However, women who were currently pregnant, had undergone hysterectomy / bilateral oophorectomy, or were divorced / separated / deserted from their husband were excluded from the study. --- Sample size determination Sample size was calculated by the following formula: n = z² × p × (1 - p) / d². Taking the unmet need for family planning services in Uttar Pradesh (p) as 14.6% (AHS 2012-13) [15], an allowable error (d) of 3% and the value of the standard normal variable at the 0.05 (two sided) level of significance (z) as 1.96, the sample size was calculated to be 533. Considering a 10% non-response rate, the final sample size was calculated as 586. After excluding 37 non-responding women, a sample of 535 was analyzed (Fig. 2). --- Sampling To identify the eligible young women to be selected in the sample, a three-stage random sampling technique was used. All the eight Municipal Corporation zones in Lucknow city were taken into consideration for selection of the study participants. One U-PHC was randomly selected from each Municipal Corporation zone. The zone-wise list of slums notified by the Municipal Corporation was obtained from the Municipal Corporation office and two slums were randomly selected from each U-PHC. To obtain the desired sample from each slum, the total sample size was divided equally among the eight Municipal Corporation zones. A sample of 67 young married women was obtained for each zone. Thus, at least 33 young married women were selected from each slum (Figs. 3 and 4).
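The sample-size arithmetic above can be checked in a few lines; this is a minimal Python sketch using the stated inputs (p = 14.6%, d = 3%, z = 1.96, 10% non-response allowance), not part of the original analysis.

```python
# Sketch only: reproduces the sample-size calculation n = z^2 * p * (1 - p) / d^2
import math

z, p, d = 1.96, 0.146, 0.03            # values stated in the text
n = (z ** 2) * p * (1 - p) / (d ** 2)  # ~532.3
n_required = math.ceil(n)              # 533, as reported
n_final = round(n_required * 1.10)     # 10% non-response allowance -> 586, as reported

print(n_required, n_final)
```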
In each slum, the centre of the slum was arbitrarily identified and a sample of at least eight YMW was obtained from each direction. The first household was randomly selected and all the households were visited until the desired sample was obtained for that slum. --- Operational definitions Slum [8] Slum areas broadly comprise: all specified areas in a town or city notified as 'Slum' by State/Local Government and UT Administration under any Act including a 'Slum Act'; all areas recognized as 'Slum' by State/Local Government and UT Administration, Housing and Slum Boards, which may not have been formally notified as slum under any Act; and a compact area with a population of at least 300, or about 60-70 households, of poorly built congested tenements in an unhygienic environment, usually with inadequate infrastructure and lacking proper sanitary and drinking water facilities. Catchment slum [18] A slum in the geographic area defined and served by a health facility, which is delineated on the basis of such factors as population distribution, natural geographic boundaries, and transportation accessibility. By definition, all residents of the area needing the services provided by the health facility are usually eligible for them. Urban Primary Health Centre (U-PHC) [17] Established by the Government of India under the National Health Mission (NHM) to improve the health status of the urban poor, particularly slum dwellers and other disadvantaged groups, by providing access to quality primary health care services and strengthening the existing capacity of health delivery systems, leading to improved health status and quality of life [17]. A U-PHC caters to a population of 50,000. Currently, there are 52 U-PHCs in Lucknow. Family planning services [19] It includes services that enable individuals to determine freely the number and spacing of their children and to select the means by which this may be achieved. Modern spacing methods [20] Include contraceptive pills, condoms, injectables, intrauterine devices (IUDs / PPIUDs) and emergency contraception. Modern limiting methods [20] Include male and female sterilization. --- Unmet need for modern family planning methods [7] The percentage of women of reproductive age who are not using any modern method of family planning but who would like to postpone the next pregnancy (unmet need for spacing) or do not want any more children (unmet need for limiting). The sum of the unmet need for limiting and the unmet need for spacing is the total unmet need for family planning. --- Unmet need for spacing [7] It includes fecund women who are neither pregnant nor amenorrhoeic, who are not using any modern spacing method of family planning, and who say they want to wait two or more years for their next birth. Also included in unmet need for spacing are fecund women who are not using any modern method of family planning and say they are unsure whether they want another child or who want another child but are unsure when to have the birth. --- Unmet need for limiting [7] It refers to fecund women who are neither pregnant nor amenorrhoeic, who are not using any modern limiting method of family planning, and who want no more children. Met need for modern contraceptive methods [21] Refers to those currently married women who want to space births or limit the number of children and are using modern contraceptive methods to avoid unwanted or mistimed pregnancies. Total demand for family planning [21] The total demand for family planning is the sum of unmet need and met need.
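The operational definitions above amount to a simple classification rule for each respondent. The sketch below (Python, hypothetical field names) shows one simplified reading of that rule; it assumes the study's exclusion criteria (currently pregnant, hysterectomy / oophorectomy, etc.) have already been applied and omits the fecundity and amenorrhoea checks spelled out in the definitions.

```python
# Sketch only: simplified classification based on the operational definitions above.
# Field names are hypothetical; eligibility exclusions are assumed to be applied upstream.
def classify_need(using_modern_method: bool,
                  wants_more_children: bool,
                  wants_to_wait_2_plus_years: bool,
                  unsure_about_another_child_or_timing: bool) -> str:
    if using_modern_method:
        return "met need"
    if not wants_more_children:
        return "unmet need for limiting"
    if wants_to_wait_2_plus_years or unsure_about_another_child_or_timing:
        return "unmet need for spacing"
    return "no unmet need"

# Example: a non-user who wants another child but not within the next two years
print(classify_need(False, True, True, False))  # -> unmet need for spacing
```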
--- Tools of data collection A pre-designed and pre-tested interview schedule [see Additional file 1] was used for data collection. Information was collected regarding: bio-social characteristics, autonomy status of the women, knowledge regarding family planning, attitude towards contraceptive use, current use of contraceptives, and factors favoring / limiting access to and utilization of family planning services in young married women (Fig. 5). Religion was based on the belief system followed by the participant and caste / category on the official classification of the population of India [8]. Other Backward Class (OBC) is a collective term used by the Government of India to classify castes which are educationally or socially disadvantaged [8]. Scheduled Castes (SCs) and Scheduled Tribes (STs) are groups of people officially designated by the Constitution of India [8]. A YMW who could read and write with understanding in any language was considered literate [8]. The modified Kuppuswamy socioeconomic classification, a composite scale based on education, occupation of the head of the family and the monthly income of the family, was used to determine socioeconomic status [22]. Autonomy [23] of the women was assessed in three dimensions of household decision making, concerning money spent, health care and physical mobility, and scored accordingly. The attitude of the woman and her husband towards family planning was assessed from the woman's responses to the pertaining questions. The schedule was pretested on a sample of 30 young married women living in urban slums of Lucknow. Inconsistencies and confusions in the pre-test exercise, including the interview protocol, were corrected before actual data collection. Results of the pre-test were not included in the final study. Completed schedules were checked weekly for consistency and completeness by the supervisors. The collected information was rechecked for its completeness and consistency before entering the data into a computer. --- Data management --- Data collection procedure During the visits to the slums, the investigator approached the young married women fulfilling the inclusion criteria and, after explaining the study to them, sought informed consent for their participation in the study. Complete confidentiality and anonymity of the respondents was maintained. Written and informed consent was taken. The study included 535 YMW who met the inclusion/exclusion criteria for the study. --- Data processing and analysis Descriptive summaries using frequencies, percentages, graphs and cross-tabulations were used to present the study results. Univariate analysis was performed using binary logistic regression, and the factors found significant in univariate analysis were entered into a multiple logistic regression model in a stepwise manner for calculation of adjusted odds ratios. A p value < 0.05 was considered statistically significant. --- Results The total demand for family planning among the young married women living in urban slums of Lucknow was 87.6% (68.2% for spacing and 19.4% for limiting). Findings (Fig. 6) demonstrated a considerably high unmet need for contraceptives among young married women in urban slums. It was present in more than half (55.3%) of the young married women, of which 40.9% was for spacing methods and 14.4% for limiting methods. --- Bio-social characteristics of women The mean age of the study participants was 21.28 ± 1.9 years.
Most of the women were Hindu by religion (87.1%) and about 48.2% of them belonged to other backward classes (OBCs) (Table 1). About 18.7% of the study participants were illiterate. More (69.7%) women in the older age group (20-24 years) had high school and above level of education as compared to the younger age group (37.6%). The majority (81.5%) of the women in the older age group were working outside the home for money (Table 1). The mean duration of stay in the city was 2.72 ± 1.95 years. The mean age at marriage and at the birth of the first child was 17.87 ± 1.85 and 19.23 ± 1.67 years respectively. More than half (59.6%) of the women in the age group of 15-19 years were nulliparous as compared to the older age group (15.7%). About half (54%) of the older women had ≥ 2 children (Table 1). Teenage childbearing was reported to be about 8%. Knowledge of contraceptives was significantly low (9.1%) in the younger women as compared to women in the older age group (90.9%) (Table 1). Autonomy in the family and media exposure were significantly greater in the women of the older age group (Table 1). Contact with a health worker was very low in the younger age group (8.3%) as well as among the women of the older age group (24.6%) (Table 1). None of the young married women had received any education on family planning before marriage. --- Reasons for unmet need More than two-thirds (69.2%) of the women in the study cited embarrassment / hesitancy / shyness as a reason for unmet need for contraception. Knowledge of family planning methods and of the places where FP services are available was significantly low in the 15-19 years age group in comparison to the older age group (Table 2). About half (48.5%) of the older women had a negligent attitude towards adopting any family planning method, and 45.6% of them faced opposition to contraceptive use by the husband and family members as a consequence of expectations of early childbearing. Health concerns and fear of side effects were frequently cited reasons for non-use of contraceptives in the older age group (Table 2). --- Factors influencing need of contraceptives among young married women: Bivariate analysis Bio-social factors Age of the respondent: The majority (90.9%) of the women in the age group of 15-19 years had an unmet need for family planning services. Increasing age group was significantly associated with a decrease in unmet need. Women of the 15-19 years age group were about 3 times more likely to have an unmet need than women of the 20-24 years age group (Table 3). Religion and caste: Religion was found to be significantly associated with unmet need for family planning services. More unmet need was observed among Hindus (83.4%) as compared to Muslims (69.8%) (COR: 2.17, CI: 1.06-4.44) (Table 3). The majority (87.6%) of the women of the scheduled caste / scheduled tribe (SC/ST) category had an unmet need for family planning services. Women belonging to the other categories were significantly less likely to have an unmet need than those belonging to the SC / ST category (COR: 0.52, CI: 0.28-0.95) (Table 3). --- Level of education: The unmet need for family planning services was significantly higher among the illiterate women (92.9%). Women who were literate were less likely to have an unmet need for family planning services as compared to illiterate women (COR: 0.29, CI: 0.11-0.75). Also, women whose husband was educated were less likely to have an unmet need as compared to those with an uneducated husband (COR: 0.46, CI: 0.21-0.96) (Table 3).
Socioeconomic and employment status: Unmet need was high (81.1%) among the unemployed women. It was only 18.9% in the employed women. No statistically significant association was observed between working status of women and unmet need for family planning. Unmet need for family planning services was also found to be high (84.2%) among women from lower and upper lower socio-economic class in comparison to the women belonging to middle and upper middle class (74.2%) (Table 3). Duration of stay in the slum: Women who were residing for more than a year in the slums were less likely (78.7%) to have an unmet need than those residing in the slum for less than a year (89.0%) and the association was statistically significant (COR:0.46, CI: 0.23-0.89) (Table 3). --- Fertility related factors Duration of marriage: Women who were married for less than 1 year were significantly more likely to have an unmet need (92.8%; COR: 0.28, CI: 0.11-0.68), in comparison to women who were married for more than a year (Table 4). Total number of pregnancies, number of living children, number of male children and desired number of children: Parous women were 2.22 times more likely to have an unmet need than nulliparous women and this association was found to be statistically significant (Table 4). Women who had one or more living children had a high (88.0%) unmet need for family planning services and majority (88.8%) of the women with a male child had an unmet need for family planning services. Women who had one or more living children were 8.84 times more likely to have an unmet need than women with no living children and this association was found to be statistically significant. The association between number of male children and unmet need was found to be statistically insignificant (Table 4). The number of children desired by the women was found to have a statistically significant association with unmet need for family planning services; with higher unmet need (87.8%) in women desiring <unk> 2 children. Women who desired <unk>2 children were significantly less likely to have an unmet need than women desiring <unk> 2 children (COR: 0.43, CI: 0.24-0.75) (Table 4). --- Knowledge of contraceptive methods and its access: Women who did not have any knowledge of contraceptive methods had statistically significant (COR: 0.36, CI: 0.18-0.71) high unmet need (90.3%) for family planning services. Majority (83.1%) of the women who did not have any knowledge of place where family planning services are available near their slum had an unmet need. Women who had knowledge of availability of family planning services at the U-PHC were significantly less likely to have an unmet need than women with no knowledge (COR: 0.35, CI: 0.13-0.94) (Table 5). Media exposure: Less than half (40.3%) of the young married women were exposed to family planning message on TV/ radio. The association between media exposure and unmet need for family planning services was found to be statistically insignificant (Table 5). Contact with health worker: Association between contact of ANM (Auxiliary Nurse Midwife) during household visits in the slums or during HNDs (Health and Nutrition Days) and unmet need for family planning services was found to be statistically significant. Women who did not have a contact with ANM were about 3 times more likely to have an unmet need than women who had a contact (Table 5). Autonomy status of women: Women who had "no autonomy" in their family had a higher (89.2%) unmet need for family planning services. 
Women who had "some autonomy" and those who had "autonomy" were less likely to have an unmet need than women with "no autonomy" (COR: 0.46, CI: 0.24-0.87 & COR: 0.25, CI: 0.10-0.58 respectively) (Table 6). --- Motivation and opposition to contraceptive use: Women whose husbands had an unfavorable attitude towards family planning had a high (83.9%) unmet need. Women with husbands having a favorable attitude were less likely to have an unmet need as compared to women whose husband had an unfavorable attitude (COR: 0.42, CI: 0.22-0.78) (Table 6). The unmet need was also higher (66.9%) in the absence of any discussion of family planning with the husband or with others, although this association was statistically insignificant. On the other hand, only 13.9% of the women who were motivated to use contraceptive methods had an unmet need. Unmet need was lower in those women who were motivated to use family planning methods by husbands, by other family members / friends / relatives, or by health care providers, but this association was statistically insignificant (Table 6). About 11.6% of young married women reported opposition to contraceptive use. Women facing opposition to contraceptive use were 5.00 times more likely to have an unmet need than women with no opposition, and this association was statistically significant (Table 6). --- Multivariate logistic regression Factors found to be statistically significant (p value < 0.05) in bivariate analysis were subjected to conditional multiple logistic regression for adjustment and controlling the effect of confounding variables. Several factors that were statistically significant on bivariate analysis lost their significance on multivariate analysis, which could be partly explained by collinearity and possible confounding among the predictor variables. Age of the women, educational status of the women, duration of marriage, number of pregnancies, knowledge of contraceptive methods, opposition to contraceptive use and contact with ANM showed independently significant associations with unmet need for family planning. Women of the 20-24 years age group were significantly less likely to have an unmet need than women of the lower age group (AOR: 0.34, CI: 0.12-0.95). Women who were literate were significantly less likely to have an unmet need for family planning as compared to illiterate women (AOR: 0.12, CI: 0.02-0.53) (Table 3). Parous women were significantly more likely to have an unmet need than nulliparous women (AOR: 10.90, CI: 3.8-30.8) (Table 4). Women who had any knowledge of contraceptive methods were significantly less likely to have an unmet need than women with no knowledge (AOR: 0.27, CI: 0.10-0.73) (Table 5). Women facing opposition to contraceptive use were significantly more likely to have an unmet need than women with no opposition (AOR: 7.36, CI: 1.3-40.7) (Table 6). Women who had contact with an ANM were significantly less likely to have an unmet need than women who did not have such contact (AOR: 0.38, CI: 0.14-0.96) (Table 5). --- Discussion More than half (55.3%) of the young married women living in the slums had an unmet need for family planning, of which 40.9% was for spacing and 14.4% for limiting. Almost all (95.6%) women in the younger age group (15-19 years) had an unmet need for spacing methods as compared to the older age group (64.6%). This is much higher than the unmet need for family planning reported by NFHS-IV [6] in Uttar Pradesh (13.4%) and in Lucknow (14.5%).
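For readers who want to see how crude and adjusted odds ratios of this kind are typically obtained, the sketch below uses Python and statsmodels with hypothetical variable names standing in for the study's predictors; it fits a single multivariable logistic model rather than the stepwise conditional procedure described above, so it is an approximation of the workflow, not the original analysis.

```python
# Sketch only: crude (bivariate) and adjusted (multivariable) odds ratios for
# unmet need, with hypothetical 0/1-coded columns in place of the study variables.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("ymw_survey.csv")  # hypothetical file: one row per respondent

# Crude OR from a bivariate logistic model (e.g., literacy vs unmet need)
crude = smf.logit("unmet_need ~ literate", data=df).fit(disp=0)

# Adjusted ORs: predictors retained from the bivariate step entered together
adjusted = smf.logit(
    "unmet_need ~ literate + age_20_24 + married_under_1yr + parous"
    " + knows_methods + faces_opposition + anm_contact",
    data=df,
).fit(disp=0)

for label, model in [("COR", crude), ("AOR", adjusted)]:
    odds_ratios = np.exp(model.params)   # exponentiated coefficients
    conf_int = np.exp(model.conf_int())  # 95% confidence intervals
    print(label)
    print(pd.concat([odds_ratios, conf_int], axis=1).round(2))
```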
The unmet need in YMW is even higher than that reported in rural Uttar Pradesh (19.6%) [6]. Shukla, M., et al. [11], in urban slums of Lucknow, also found a high unmet need (62.5%) among young married women. However, Pal, A., et al. [10] reported a very high (85.5%) unmet need in the urban slums of Lucknow about a decade ago. The unmet need in the selected slums is also higher than that found by Sherin, R., et al. [24] (23.4%) in their study in Rajasthan. Age of the women was found to be a significant predictor of unmet need for family planning. In the present study, the unmet need for family planning was significantly higher in the age group of 15-19 years (90.9%). Women of the younger age group (15-19 years) are more likely to have an unmet need, as women of the 20-24 years age group are more educated and have more knowledge and experience of contraception [25]. They tend to be more mature and play a role in decision making, and are thereby less prone to an unmet need [25]. In line with these findings, the younger women in the present study had significantly less knowledge, poorer access to information and less decision-making power, were shy / hesitant, and were undermined by socio-cultural expectations of early marriage and childbearing. Duration of marriage of less than 1 year was found to be one of the determinants of unmet need in the study. Begum, S., et al. [26] perceived that this high unmet need among newly-wed couples might be due to the socio-cultural practice in the Bangladeshi community of having a child immediately after marriage. Socio-cultural practices of the Indian community are more or less similar to those of the Bangladeshi community. Begum, S., et al. [26] also reported that health providers sometimes impose barriers on young women's access to FP services, resulting in an increase in unmet need for services among this group. Education level of the women emerged as one of the strong predictors of unmet need for FP services in the urban slums. The majority (92.9%) of the women with a lower level of education in the present study had an unmet need for family planning services, and unmet need decreased with increasing level of education, with 79.1% of literate women having an unmet need for family planning services. Similar findings were reported by Sherin, R. [24], Wulifan et al. [27] and Hamsa, L., et al. [28], who also observed that a lower level of education was significantly associated with higher unmet need. In our study, the majority of both the multiparous (86.5%) and nulliparous (72.8%) women expressed no desire for childbirth at present but were still not using any contraceptive method. Almost all of the nulliparous and primiparous women had an unmet need for spacing, whereas two-thirds of the multiparous women (61.8%) had an unmet need for limiting methods. Unmet need for family planning services was significantly higher in women with a greater number of pregnancies, but nulliparous women also constituted a major share of those with an unmet need. This is in accordance with the findings of studies done in developing and developed nations around the world [24,27,28], which reported a lower unmet need among nulliparous women. Contrary to this, Imasiku et al. [29] and Shukla, M., et al. [11] found unmet need to be higher in nulliparous women. Calhoun, LM., et al. [30] found that providers restrict clients' access to spacing and long-acting and permanent methods of family planning based on parity.
Similar views were echoed by Begum, S., et al. [26]. Unmet need was also found to be significantly associated with the number of children desired by a woman, which is in concurrence with the study done by Bhattathiry, MM., and Ethirajan, N. [31]. In concurrence with Mosha, I. [32] and Woldemicael, G., and Beaujot, R. [33], who found that women with less autonomy in the family were more likely to have an unmet need, a significant association was found between the autonomy of the young married women and unmet need. The study found that most of the women in the younger age group had no autonomy in the family and were hence more prone to unmet need. Chafo, K. [21] noted that the availability of an enabling environment in the family helps women implement their fertility desires and fulfil their contraceptive needs. In this study too, a significant association was found between the husband's favorable attitude towards family planning and low unmet need. However, only 16.1% reported that their husbands were favorable towards family planning methods. This is similar to the findings of other studies done in various low and middle income countries among slum women aged 15-24 years [10,31,34,35]. In accordance with other researchers (Kabagenyi, A., et al. [36] and Hall, MAK., et al. [37]), the present study also found significantly high unmet need among young women who faced opposition to contraceptive use by the husband or family. In this study, 45% of YMW reported opposition from either the husband or other family members. This needs to be dealt with by the program managers with utmost attention when planning FP services for this group in the slums. A study of reproductive health service providers in urban Uttar Pradesh highlighted that providers also imposed restrictions on younger clients' access to FP methods based on partner consent [30]. Approximately one quarter of midwives restricted client access to pills and condoms based on partner consent, and nearly 75% restricted access to the IUCD based on partner consent [30]. The pattern that emerged from that study is that clients with a particular profile (undereducated, poor, having few or no children, lacking the support of their partner, and newly wed) are less likely to receive FP counseling from a provider in urban Uttar Pradesh [30]. Similar to other studies [28,31,33], which found that women were less likely to have an unmet need if they were aware of contraceptive methods and of sites from which FP services can be procured, our study also observed significantly high unmet need among women with low knowledge of contraceptive methods and of places for procuring FP services. In the present study, knowledge of contraceptive methods was low (33.9%) among the young women (15-19 years), and 76.7% were not aware of a place from where they could avail FP services. The role of frontline workers is crucial to the uptake of family planning services by the community. Researchers in various parts of the world [21,35,38,39] found the met need for family planning to be significantly higher among those women who had contact with an ANM. Similar findings are reported in the present study, where unmet need for family planning services was significantly higher among those women who had no contact with an ANM (89.1%) as compared to those who had contact with an ANM (11.1%). In this study, only 21.3% of women had any contact with the ANM, and about 8.8% of women were recommended adoption of FP methods by a health care provider.
Contact with a health worker was almost negligible in the 15-19 years age group. Wulifan et al. [27] stated that though women of reproductive age in low and middle income countries are in favor of birth spacing, they were less likely to engage in family planning discussions with health workers in comparison to older women. This reluctance to actively express their FP needs is in part explained by prevailing stigma, shyness, hesitation, embarrassment, myths / misconceptions and socio-cultural expectations attached to contraceptive use in the young, as found in the present study. It reflects the dire need for national and regional program managers to take into consideration the favorable effect of contact with ANMs as a golden opportunity to increase the use of family planning methods, especially by young married women living in urban slums. Recently, Urban-ASHAs have been deployed under the National Urban Health Mission [17], and it is expected that they will reduce the unmet need in these urban slums. The main reasons for unmet need for family planning services among the young married women in the present study were shyness / embarrassment / hesitancy, followed by lack of knowledge regarding family planning methods as well as their accessibility. About 40% of the women had a negligent attitude towards family planning. Opposition to contraceptive use was faced by one-third of the women. In concurrence with the present study, Sultana, B., et al. [40], in their study in urban slums of Pondicherry, also found that client-related factors (lack of knowledge, shyness, etc.) and contraception-related factors (availability, accessibility, affordability, side effects) were the causes of unmet need. Huda, FA., et al. [41], in their study among married adolescent girls in slums of Bangladesh, reported that lack of knowledge of the available methods, family pressure to prove fertility, and opposition from husbands and mothers-in-law were the main reasons for unmet need. Nazish, R., et al. [42], in their study in Uttar Pradesh, found that the major reasons for unmet need for FP were opposition from husband or family, poor accessibility of the method and a negligent attitude of the women towards family planning. The coverage of a large slum population and the use of a strong methodology enhance the internal and external validity of the research work. However, considering the important role men play in the dynamics of family planning, their non-inclusion in the present study means the findings may not reflect the overall perspective of the couple with regard to the use of family planning services. Therefore, further studies can be done for in-depth exploration of these factors. --- Conclusions Unmet need for family planning was found to be very high among the young married women of urban slums. The study identifies the focus areas which have to be addressed to achieve a reduction in unmet need and thereby the attainment of the desired goal of population stabilization and better reproductive and maternal health. Molding the minds of the young generation at an early stage by inculcating reproductive and sexual education as a
part of routine school health services could go a long way in motivating them to adopt contraceptive use in the future and subsequently follow healthy fertility behavior. Formation of a community-based peer system will provide an opportunity for holistic discussion about family planning methods. These community-based peer groups will help the young women to overcome embarrassment, shyness, or hesitation and will also give them the autonomy to avail FP services. A comprehensive approach should be used by the health workers working in these slums to provide counseling services not only to the young married woman but to all stakeholders. Training should be imparted to health workers to improve their interpersonal behavior change communication skills to tackle the myths, misconceptions, embarrassment / hesitancy / shyness and fears regarding contraceptive use among this young population. Apart from training, it is also important to sensitize the health workers that within this age group there are various vulnerable subsections with diverse needs for FP services (newlywed, recently settled in urban areas, nulliparous, less educated, women with no autonomy, and women facing opposition from partners and families), warranting a combined and coordinated approach directed towards each subgroup. --- Availability of data and materials The datasets generated and / or analyzed during the current study are not publicly available due to the topic of the study and concerns regarding confidentiality of the data but are available from the corresponding author on reasonable request. --- Supplementary information Supplementary information accompanies this paper at https://doi.org/10.1186/s12905-020-01010-9. Additional file 1. Final questionnaire. Authors' contributions KY conceived and planned the study. KY and MA analyzed the data and wrote the first manuscript draft. JVS and MS provided constructive feedback with regard to interpretation of results and writing of the manuscript. VKS helped in acquisition of data. MA helped accessing the study sites in order to collect data. KY, MA, MS, JVS and VKS contributed to study design and consultation during the ongoing study and data collection. All authors read and approved the final manuscript.
--- Abbreviations --- Ethics approval and consent to participate The Institutional Ethics Committee, King George's Medical University, India (Committee No. ECR/262/Inst/UP/2013) approved the study with permission letter reference no. 78th ECM II B-Thesis/P13. All participants in the study agreed voluntarily to participate. All participants received an explanation of the research aims and of the potential risks involved in participating in the interviews, and all participants signed a written informed consent form. Written consent to participate was obtained from the parents / guardians of the minors (under 16 years of age) included in the study. --- Consent for publication Not applicable. --- Competing interests The authors declare that they have no competing interests. --- Publisher's Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
actions received in real time in-app with social copresence. Social photo-sharing services let users view an abundance of photos in a continual flow; rather than the careful selection of a set of valuable objects, there is instead an abundance of media. These online interactions are not trivial: they prompt discussions and reflections, seed conversations, or illustrate arguments. We characterise this as ephemeral photowork: the use of photographs with lightweight, rapid practices; photographs quickly produced, shared and consumed. The data for this paper are based on screen and audio recordings of in-situ mobile device use, supporting a close look at mobile photowork and talk around photos as they are captured and shared. --- Related Work There is much literature around photo use in the HCI, CSCW, and Multimedia communities, much of it pre-dating the modern internet-connected smartphone. Koskinen et al. (2002) conducted a study of MMS use with 25 participants, finding that humour and fun were intrinsic to many of the exchanges, which involved friends teasing each other or staging and manipulating images of fake experiences. Okabe and Ito (2003) documented the use of camera phones for capturing casual mementos of everyday life. Research on Flickr showed the use of the site to share photos with restricted groups of family and friends for communication and relationship maintenance, rather than just for memory archiving (Ames et al. 2010). Social networking sites such as Facebook and Twitter integrate text-based messaging, media sharing, and contact management in the same application. Photos can be posted as personal profile images or associated image collections, but serve in either case to support text-based communication as the primary function. A small amount of research has studied pre-smartphone use of photographs in social situations. Lindley et al. (2008) organized a CHI workshop around in-person interactions with photos. Given the era, many of these interactions involved printed photos, laptops, or grainy early-generation cell phone photos. Van House (2009) also explored this topic through the workshop, describing an interview-based study of photo practices in the home, including storytelling around vacation photos, using photos on a fridge as conversation starters, and what people remembered pointing out while discussing photos with others. We seek to go beyond this work, not only to study behaviors on current smartphones, but also to capture the actual, in-the-moment conversations and screen captures around the sharing instances, to uncover how photos are actually discussed. --- Methods and Corpus Our interest here was not in specific apps or settings but rather in the broad range of new photo interactions using phones and social media. We adopted an in-situ recording method that used a local recording application installed on participants' iPhones. This recording ran in the background on the phone and captured the screen of the device, its location, the apps used during each session, and the surrounding background audio from the microphone. Participants all reviewed their recordings before the researchers received access and had the opportunity to delete recordings that they did not want to share.
After an average of seven days of recording, interviews with all participants were conducted either face to face or over Skype to discuss interesting behavior or ambiguities captured in their video data. We have used this material in earlier papers looking at mobile search and at how phone use is incorporated into everyday life (Brown, McGregor, and McMillan 2014). The corpus contains data from fifteen users in three countries, each recording their phone use for between 5 and 10 days. Of the 15 participants, six were female and nine male; all participants fell within the age range of 22-50 years and lived in the UK, Sweden, or the US. From the corpus of overall phone use, photo app usage comprised 8.4% of the corpus's video in total: 0.74% in the Photos app, 0.65% in the camera app, 1.3% in Pinterest and 5.8% in Instagram. An additional 7.6% of our recordings involved Facebook use, with photo viewing and posting part of that use, mixed with other social media interaction. Extracting these recordings resulted in a corpus of roughly 4 hours of Instagram use (182 clips from 5 users), 30 minutes of the Photos viewing app (29 clips from 8 users), and 27 minutes of the camera app (15 clips from 7 users). From the Facebook usage we extracted a sample of around 30 minutes of photography use, although much of the consumption of photos was embedded in general Facebook browsing and so was difficult to extract exhaustively. The screen captures, ambient audio, location, diary entries and qualitative data from the post-study interviews give an opportunity to look in depth at the broader activities around photographs, beyond log data, as we have instrumented viewing, commenting, and face-to-face discussions around photographs: a considerable corpus of different photo actions. Drawing on an ethnomethodological position, our interest in these videos was not in retrospective accounts (which are inherently distanced from the events in question) but rather in understanding in-situ behavior. This style of recording carries certain advantages over retrospective accounts of behavior (Brown, McGregor, and McMillan 2014). For each clip we listed themes and particular critical incidents, and in joint data analysis sessions we analysed interaction and photo use. We selected 25 clips for full transcription and in-depth analysis, of which we present a selection here. --- Results The following themes emerged from the analysis described above. Specific examples are given that are representative of the larger themes that were observed. --- Viewing photographs Alongside the Camera app, the iPhone offers a Photos app to browse through one's own photographs (or even screenshots). In much of the viewing of photos we recorded, users browsed through photos for discussion in person with others, such as sharing photos from a family vacation, or showing the status of home improvement projects. However, the majority of photo browsing in our data comes from outside the Photos app, and consists largely of browsing the timeline on Instagram. The Instagram timeline allows users to scroll through an almost unlimited list of photos posted by those one is following. "Reading Instagram" seemed to follow a fairly continuous pattern of scrolling to an image, looking at the photo and the commentary, potentially interacting with the photo (such as 'liking' it), and then scrolling further.
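Returning briefly to the corpus quantification above: the per-app shares of recorded video (e.g. the 8.4% of photo app usage) could in principle be derived from per-session screen-recording logs along the lines of the sketch below. This is a hypothetical illustration under assumed file and column names, not the authors' actual analysis pipeline.

```python
# Hypothetical sketch: per-app shares of recorded screen video from session logs.
# File name and column names (user_id, app, duration_s) are illustrative assumptions.
import pandas as pd

sessions = pd.read_csv("screen_sessions.csv")  # one row per recorded app session (assumed)

total_s = sessions["duration_s"].sum()
share_by_app = (
    sessions.groupby("app")["duration_s"].sum().div(total_s).mul(100).round(2)
)
print(share_by_app.sort_values(ascending=False))  # e.g. Instagram, Pinterest, Photos, Camera

# Per-user clip counts, as reported for Instagram and the Photos app, could be
# summarised similarly, e.g. sessions.groupby(["app", "user_id"]).size()
```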
Although viewing the timeline on Instagram makes up the majority of the time spent in the application, like television watching it appears to be fairly passive media consumption; photographs might prompt laughter, but in most cases of consumption that we recorded, photos were quickly and silently browsed one after another. Participants would view Instagram on breaks from activity, or opportunistically, such as when waiting to meet a friend. In this use it was not so different from other social media consumption. --- Interactions with Photographs Instagram is markedly different from the iPhone's Photos app in that online social interaction is core to its use. --- Liking photos The main mechanism of photo interaction on Instagram (and to a lesser extent Facebook) is the posting of photographs and the viewing of those photographs by others. Over and above the photographs, however, these social networking sites allow users to comment on and like photos, but also to link to other users in those comments or to insert hashtags that can "topicalize" photographs. Much of the interaction online on social networking sites takes place through these relatively lightweight mechanisms. Take the 'like', for example: a simple action on Instagram done by touching the photo or an adjacent heart icon. The social graph controls whose photographs (posts) feature in a user's newsfeed, so the ability to like a photo has become a centre of social interaction between users, with liking supporting emergent practices such as posting particular images on certain days (such as 'woman crush Wednesday'), as well as allowing users to traverse social connections by browsing the users who have commented on or liked others' pictures. Some popular users can gain thousands of likes on their photographs, but for less well-known users likes still serve as a form of affirmation that the photograph was worth taking and sharing. In one clip, one of our participants, Veronica, uploads a photograph to Instagram and then stays in the app, constantly 'reloading' to count the reactions that she gets to the photograph. She occasionally goes in to check who has liked her photograph. Interestingly, this posting also prompts users to go in and view and like her past photographs. This behavior thus suggests that 'liking' offers real-time gratification to users and can encourage content production. This behaviour is not unique to Instagram, or to Veronica. Another participant, Erin, took a photo of her baked donuts and shared it on Instagram and Facebook, using in-app functionality through which she can share likes on both Instagram and Facebook. We even saw cases where a photo did not receive enough likes in the first few minutes after being posted, and was then taken down. We noticed considerable differences in how much users liked photographs, with our most committed Instagram user (Cathy) liking around 87% of all the photographs she viewed; the other five Instagram users liked only around 20% or so of the images they viewed. More extensive studies show that 75% of Instagram photographs receive at least 3 likes (Bakhshi, Shamma, and Gilbert 2014). --- Online Discussions Online photo-sharing sites such as Instagram and Flickr also offer mechanisms for discussion with the maker of the photo and their broader social network through comments. Comments were initially designed to enable users to provide feedback on the content presented.
However, our observations show that comments can go beyond simple feedback and at times become places where users participate in discussions of a variety of topics. We see several instances in the videos where users tag friends in the comments and wait to receive responses from them. The person who is tagged in the comments receives a notification, and so the conversation is then directed to that specific person. One of our participants, for example, tagged the person who originally posted the photo in a comment to express interest in her reply. Later, when she received a reply to the comment, she went to the app and immediately opened the comment thread, even before checking any of her other notifications in the app. In some cases, these notifications triggered correspondence between individuals who were not attached or related to the photo. For some pictures or videos, comments on the posted media could lead to heated debate. One of our participants posted a reply to a spam comment on "Ciara's" (a celebrity singer and model) Instagram feed. After the spam was deleted this was mistaken by other commenters as being a critique of the singer, leading to heated debate: "... i was talking about the comment that obviously got deleted thats it and she's like ((my bad girl i thought you were coming for her)) get out of hewer... its fucking Instagram get over your go Away." This small online 'fight' is tellable as an event in its own right, and provides an opportunity for a short discussion of Instagram and people who take comment feeds too seriously. Of course, the fact that the story is retold later suggests that the fight, and Instagram comments, are of importance. --- Co-present Interactions Since our recording setup captured ambient audio around the mobile phone use, we were able to listen in on how photographs played into conversations while the phone was being used. We observed how a photo could be brought in to enhance an in-person conversation. In one instance, a conversation was already ongoing and a participant brought up the camera roll to find a photo of an "awkward thing" that conveyed the needed visual. More complex cases involved in-situ storytelling, typically from a series of related events, with a narration given to the listener. In one example, translated from Swedish, a participant discusses a trip to northern Sweden. Several photos were shown in succession that described the trip, picking various types of berries, and finally making pancakes with those berries. The photos are used to drive the story itself, with the participant talking about the content or answering questions about whatever photo happens to appear next in the stream. These are but two cases of the many offline photo-sharing conversations that we captured, ranging from bookmarking, to storytelling, to online experiences recalled later when people meet face-to-face. --- Discussion We have touched on different ways in which photos are used as part of contemporary phone use. Many of the photowork practices here seem quite different from the ways in which photos have generally been considered in the literature, where there has been an emphasis on slowness, preservation, and memory. Indeed, many modern applications speak to instant gratification and disposable image collection and sharing. We characterise our data as examples of ephemeral photowork: the use of photographs with lightweight, rapid practices; photographs quickly produced, shared and consumed.
By ephemerality we do not mean to belittle or downplay the importance of these photo practices. Rather, our point is that the attention given to an individual photograph is fleeting, yet cumulatively these photos produce value, attention and social connection. Take the 'like' on Instagram as an example: an ephemeral, lightweight way of communicating with another person, although not one without communicative intent. Showing the likers' names supports interaction between individuals, particularly when combined with a lightweight way to topically tag photographs, using hashtags. These interactions can potentially grow into 'follow' relationships and even richer interactions and conversations. An ephemeral communication over a photo can thus become something of value. Photos themselves are a very lightweight form of communication, showing a selfie, an object, or an environment. The practice we observed of taking both a front- and rear-facing photo at the same time, to capture the sender and their surroundings, conveys a large amount of information in one moment. By studying the exact moment of capture through screen recording and open audio, we could understand how these images were captured and shared, including times when a participant decided not to share or abandoned sharing mid-stream. The face-to-face interactions around photos uniquely highlight the ability of photos to enable conversation. It is important to note how mobile photos can be brought into a conversation momentarily and then put away (only 37 seconds in the case of the discussion of the 'awkward thing'). These brief interactions are unlikely to have appeared in previous interview-based studies, as they are just so mundane as to be forgotten. One research challenge is the privileging of physical artifacts for supporting interactions around photos, since object and device physicality encourages social engagement with the photographs (Odom et al. 2014). Our data instead highlight the importance of photo tools that support rapid search and browsing for in-person discussion that unfolds in a matter of seconds. If the entire interaction is 37 seconds, a tool that takes 20 seconds to find a photo is going to inhibit these quick "let me show you something" interactions. --- Conclusion Screens and digital surfaces appear to have their own affordances that support new and potentially more interesting behaviors. Perhaps the most important aspect of the phone is that it is always available and can be pulled out spontaneously in conversation. This means that photographs can be brought into conversation opportunistically, rather than as a premeditated 'photo event.' Indeed, many of the online interactions we documented would have been impossible with physical photos. This interaction can also take place in lulls during other events: the quick 'snack' of social media consumption. Through networks of followers, amusement and art can pass along just as easily, causing ephemeral, but still real, emotions. We have discussed how lightweight interactions with photographs allow people to thread media from themselves and others into their ordinary conversations. In closing we might ask: what if this ephemerality were the focus of design, rather than preservation and remembrance? Rather than designing for slowness, there may be new and exciting opportunities in embracing the fleeting, ephemeral nature of media in our everyday lives.
For many years, researchers have explored digital support for photographs and various methods of interaction around those photos. Services like Instagram, Facebook, and Flickr have demonstrated the value of online photographs in social media. Yet we know relatively little about these new practices of mobile social photography and in-situ sharing. Drawing on screen and audio recordings of mobile photo app use, this paper documents the ephemeral practices of social photography with mobile devices. We uncover how photo use on mobile devices is centered around social interactions through online services, but also face-to-face around the devices themselves. We argue for a new role for the mobile photograph, supporting networks of communication through instantaneous interactions, complemented with rich, in-person discussions of captured images with family and friends; photography not for careful selection and archiving, but as quick social play and talk. The paper concludes by discussing the design possibilities of ephemeral communication. Researchers have had a longstanding interest in photography, both in how digital technology has transformed user practices (Kirk et al. 2006) and in photography as a research method in its own right (Carter and Mankoff 2005). The photo and its practice are under constant change, and new applications in the past decade, from Flickr to Snapchat, have changed photo-sharing practices. But herein lies a problem. It is all too easy to ignore the value and impact of offline sharing in the wake of an abundance of data in a single ecosystem/application. Research has shown us that photographs' physicality provides a "resource for individual identity construction... viscerally remind[ing] people of who they once were in a way" (Odom et al. 2014), especially in close social, and particularly family, relationships. In other words, there is more to photo sharing than online comment threads, and much of this interaction still occurs offline. An alternate, and more neglected, form of photowork is ephemeral (Bayer et al. 2015; Counts and Fellheimer 2004), where photographs are used in the moment, shared, talked about and then discarded. These are not photos that are archived and reflected upon years later, nor are they just photos that are cross-posted to various social sites; rather, they are photos instantly shared with friends, with re-
Introduction Epigenetics encompasses interactions between living conditions, lifestyle, gene expression and health whose effects might be inherited by future generations (Gilbert and Epel 2009). The field of epigenetics has become consolidated over the last decade for several reasons. The first driving factor was the 'failure' of the Human Genome Project to assist us in a complete understanding of the nature of all genetic disease. This led to the conceptualization that genetics alone cannot explain the most basic dynamics of living beings, such as how inheritance works (Maher 2008). In addition, technological advancements brought about within 'omics' disciplines have led to the transfer of their methods and approaches into biological laboratories in order to realize economies of scale (Hilgartner 2004; Rose and Rose 2013). Despite its recent establishment as a field, epigenetics in its early form developed about a century ago through the embryological studies carried out by Charles Manning Child, Conrad H. Waddington and Joseph Needham (D'Abramo 2017). The idea behind epigenetics consisted of considering organisms as a product of the interaction between genetic and environmental factors. Child, Waddington and Needham were politically engaged: Child was a biologist with reformatory ideas, whereas Waddington and Needham embraced Marxist ideology to different degrees. This resulted in all three men placing a central emphasis on the environment, in both its material and social components. The principles behind epigenetics are that (i) environmental and behavioural factors, in a more or less direct manner, act at the biological/physiological level, such that these external factors elicit epigenetic dynamics that control and coordinate genes during all the developmental phases of the organism, and (ii) these biological dynamics controlled by behavioural and environmental factors are heritable by future cells and generations (Jablonka and Raz 2009). This is reminiscent of the historical Lamarckian concept of the inheritance of acquired characters, which is why the initial proposal of epigenetics was, and still is, framed within the critical debate on the relationship between science and ideology (Gissis and Jablonka 2011; Jablonka and Lamb 2005). As shown by Schicktanz, during the last decades the social and political framework has changed deeply, such that the concept of responsibility can be described in three phases (Schicktanz 2016). The first phase, from the 1960s on, was focused on collective responsibilities towards future generations, humankind or nature. In the second phase, which started in the mid-1970s, the focus shifted to professional responsibilities towards individuals, as shown by the rise of informed consent. In the third, starting in the 1990s, social and individual responsibility became intertwined, a trend that mirrored a reaction to political reforms cutting back public welfare and health care (Schicktanz 2016). From a technical perspective, the term 'epigenetics' refers to mechanisms involved in the regulation of cell type, transcription within specific tissues or expression of genes where there is no change in the DNA sequence (Ku et al. 2011). There are a number of ways in which this non-DNA-based gene regulation can occur.
One biochemical modification involved is DNA methylation, where a methyl group is added to part of the DNA sequence, leading to activation or repression of the transcription of that gene (Ku et al. 2011). Regulation can also occur through histone modification, nucleosome positioning and expression of non-coding RNAs, among other mechanisms. The result of these changes is the winding, unwinding and clumping of the DNA, which alters the degree to which certain genes are expressed (Ku et al. 2011). There are several ways in which these epigenetic changes are thought to relate to the development of disease. One predominant theory relates to the developmental origins of health and disease (DOHaD) hypothesis, in which exposure to external factors during critical developmental phases, often in utero, is thought to influence an organism's predisposition to disease (Barker 2007). These environmental factors can take a number of forms, such as the presence of pathogens, exposure to toxins and the availability of nutrients and water (Bateson et al. 2004). The theory is that this exposure takes place during a 'critical period when a system is plastic and sensitive to the environment', which 'programs' the genome of the organism to function at a certain capacity through these epigenetic regulatory components (Barker 2007). Consider, for example, a pregnant woman who is undernourished. The exposure of the fetus to the reduced levels of nutrition it is receiving from the mother is thought to lead to changes in the metabolic interaction between the mother and the fetus in different ways, depending on (a) when it happens during fetal development and (b) how long the period of undernourishment lasts (Barker et al. 1993). These metabolic changes relate to growth hormones, which can affect the development of a number of different tissues, such as the pancreatic cells and the vascular system, as well as affect placental and/or fetal growth, which results in a smaller baby at term (Barker et al. 1993). While in some cases these changes might be transient, often these critical periods of developmental plasticity are 'followed by loss of plasticity and a fixed functional capacity' (Barker 2007). This exposure might then impact on the disease status of the fetus in a number of different ways. Barker uses his own research to highlight that programming that takes place during maternal undernourishment in critical periods of plasticity in pregnancy might lead to poor development of the vascular system. This is not adaptive to the future environment of the fetus but may reflect how the fetus develops in order to adapt to the reduced available nutrition (Barker 2007). The result is that the undernourished fetus may develop cardiovascular disease in adulthood, regardless of the environmental factors that it is exposed to after birth (Barker et al. 1993). In contrast, there are other situations where, in conjunction with this early programming, the lifestyle or exposures of the adult may also contribute to the development of disease. An example of this would be the high incidence of non-insulin-dependent diabetes in people who had low weight at birth or during infancy but who developed obesity in adulthood (Barker et al. 1993). In this example, the programming, which resulted from exposure to maternal undernourishment and subsequent changes in glucose-insulin metabolism during fetal development, was already present.
Yet the development of obesity, and the challenge this presents to the pancreas, lead to the onset of diabetes. This model of disease development has been labelled the 'mismatch model', because the rationale behind it is that the early programming in environments where food is in short supply might actually have an adaptive quality. However, when the environment changes, such as when food is in abundance, the programming becomes maladaptive and leads to the development of disease (Bateson et al. 2004). At this stage, research investigating the potential for interventions to change our epigenomes to improve health status is still in its infancy, and much of the evidence to date, particularly in relation to the potential for transgenerational inheritance of epigenetic phenomena, has come from animal studies (Joly et al. 2016). In addition, researchers working in the field hold quite divergent views about the significance of epigenetics, with some 'champions' believing that it is the key to understanding what we know from traditional genetics, and other, more skeptical, researchers disputing that epigenetics drastically changes our knowledge in the field (Tolwinski 2013). Despite these reservations, the new-found knowledge of how epigenetics can impact on disease could have great power, and there are hopes that it may provide us with an opportunity to move away from a genetically deterministic perspective and give individuals the ability to change their health status (Canning 2008; Van de Vijver et al. 2002). While this could be equated with patient empowerment, we need to be aware that it could also lead to stigmatization and discrimination where individuals are deemed responsible for their health, even if they are not in social situations where they are able to enact changes that could alter their health status. Given that epigenetics is already receiving considerable media coverage (Lappe 2016), the concerns about potential misunderstandings, discrimination and stigmatization need careful consideration sooner rather than later (Cozzens and Woodhouse 1995). In addition, we need to be aware of the potential for the field of epigenetics to get stuck in adopting a 'technical fix' approach. This trend, which has developed over the last decades due to the collaboration between the private/financial sector and the public institutions of research (Young et al. 2008), has changed the functioning and aims of research, leading to overlap between financial and academic aims (Cozzens and Woodhouse 1995; Etzkowitz and Webster 1995). Technological fixes are indeed instrumental to financial dynamics focused on handling societal problems within private corporate structures. This approach in turn fuels a deficit model in which people are conceived as ignorant and in need of education regarding scientific arguments (Irwin and Wynne 2003; Wynne 2014). The concern is that epigenetics might also follow this trend, where innovation (e.g. production of therapeutics and diagnostics through the use of patents and intellectual property rights) might supersede public goods (e.g. policies to incentivize health promotion), which in turn could easily lead to a range of moral discourses subjecting women, patients and citizens to increased scrutiny (Kenney and Müller 2016; Meloni 2016a; Pickersgill 2016).
In addition, the public may not want to be 'educated' in this regard and may react negatively to experts wanting to discipline them in the absence of shared values, which can hinder the fair translation of responsibilities into the public sphere. The interaction between knowledge built by experts and the reception of that knowledge by the public has been scrutinized in different ways, and a set of categories/criteria has been formulated to describe the basic steps in understanding the allocation of responsibilities (Hedlund 2012; Schicktanz 2016). In order for an agent to be responsible for an action or situation, a number of criteria must be met. First, there needs to be a causal link between the agent and the situation under consideration (Young 2006). Second, the agent has to be aware, or cognizant, that their action caused the event (Hedlund 2012). Third, there needs to be a motivation for the agent to act in a certain way that is societally or culturally agreed upon (e.g. obligations, rewards, incentives, encouragement, etc.), rather than just based on the agent's own will (Gilbert 1993; Hedlund 2012). And fourth, the agent needs to be able to exercise some degree of control over the situation and to be able to exercise autonomy in her choice to act (or not) to cause that action (Fischer 2006; Hedlund 2012). In relation to responsibilities in epigenetics, Dupras has warned against assigning epigenetic responsibilities too readily to individuals without proper consideration of 'the ambiguous nature of epigenetic mechanisms' (Dupras and Ravitsky 2016). Moreover, Schicktanz has highlighted genetic responsibility, which here we place on the same level as epigenetic responsibility, as a notion identifying the internalization of individual feelings of guilt or self-restriction (Schicktanz 2016). Likewise, in contrast with the normative position of Hedlund, Pickersgill and colleagues have argued that biomedical research in epigenetics will create further ways in which individuals can be made responsible, as caretakers of life that does not yet exist (Pickersgill 2016). We will not attempt to provide any specific solution to the issues raised in this paper, as we think political problems need to be addressed by local communities in order to initiate a negotiation with both public and private scientific institutions. A common idea runs through all four contexts analysed below, relating to social, political, behavioural and environmental factors as determinants of health. This idea that the context influences, determines or causes biological and health changes traces back to Hippocrates, among others, who more than two thousand years ago described environmental, social and political factors as determinants of health (Jones 1957). Jean-Baptiste de Lamarck and, to a lesser extent, Charles Darwin also focused on the effect of environmental conditions on biological variations. More recently, institutions like the World Health Organization (WHO) and the International Agency for Research on Cancer (IARC) have developed analyses and interventions around the relevance of social factors to health (i.e. working conditions, diet, education, poverty, living habits, etc.) (James and Ronald 2012; Marmot 2015; Marmot and Wilkinson 2005; Tomatis 1997; World Health Organization 2013). In addition, challenging programs on epigenetics and DOHaD are pointing precisely at the manner in which social and material context modulates human health (Párrizas et al.
2012; Rosenfeld 2015), for instance, how globalization might impact on epigenetic patterns and non-communicable diseases (Vineis et al. 2014). With this in mind, in this paper we explore the responsibilities of different actors in the healthcare sphere in relation to epigenetic testing across four different contexts: (1) genetic research, (2) clinical diagnostics, (3) prenatal care and (4) the workplace; and discuss the potential constraints that might prevent the patient, research participant, employee or mother-to-be from enacting any necessary steps to increase their health status based on epigenetic information. --- Scenario 1: genetic research A research team is conducting a study investigating the impact of night shifts on the risk of developing breast cancer. The team explores the hypothesis that the disruption of circadian rhythms caused by working at night, and the exposure to the lighting used in these workplaces, alters patterns of gene expression and melatonin homeostasis, leading to the development of cancer. This is based on previous research showing an association between night shifts, circadian rhythms and breast cancer (Fenga 2016; IARC 2010a; Reszka and Przybek 2016; Stevens 2009; Straif et al. 2014). This association may be explained either as deregulation of gene expression due to the changes in endocrine levels caused by working at night, or as the effect of certain genetic polymorphisms in circadian pathway genes that increase breast cancer risk when triggered by disruption of circadian rhythms. The project, as with many other scientific endeavours, is a public-private partnership (Meslin et al. 2015; Perkmann et al. 2013). In order to conduct it, the research team sets up a biobank of biological samples from shift workers, which will comprise blood and hair. The DNA from the samples will be analysed to look for single nucleotide polymorphisms (SNPs) and epigenetic patterns of gene expression using genomic sequencing (GS). The findings of the research may lead to new insights into policy-making for cancer prevention or potential innovative treatments for cancer patients. One unclear aspect of this research relates to the proximity of the two hypotheses of the project. While apparently complementary, they address the problem in two different ways. An epigenetic approach might focus on links between gene expression, endocrine factors, circadian rhythms and working at night. Within this model, the individuals' physiology might be considered in order to develop interventions either at the environmental level, such as reducing shifts and altering the lighting of the workplace, or at the endocrine level, such as producing pharmaceutical agents able to restore the levels of melatonin and estrogen whose disturbance leads to the deregulation of gene expression underpinning cancer initiation and development. By contrast, the genetic explanation, based on genome-wide association studies (GWAS) conducted to discover single nucleotide polymorphisms associated with cancer susceptibility, attributes to women a predisposition based on innate, genetic characteristics. This could lead researchers to discover genetic pathways to act upon through the use of targeted drugs. These two approaches support two different models, one which considers individuals as dynamic systems changing together with their environments, and another which considers individuals as a fixed nexus of mechanisms, mainly determined by their genes or epigenetic characteristics.
But are the two approaches really complementary or are they opposed? The erroneous presupposition here consists of conferring certain powers on specific technologies and models of causation, so that epigenetic and epigenomic analyses using GS should lead to an epistemic, causal justice, where the models and practices utilized by scientists grant ontological primacy to DNA, i.e. the phenotype is the result of either environment or genotype. It is instead necessary to consider a more dynamic and comprehensive relationship between individuals and their environments, in order to overcome causal impasses affecting the possibility of formulating an aetiological explanation, i.e. the phenotype is the result of the interaction between environment and genotype, and in no case are two genotypes identical in their reactions (Lewontin 2006; Waddington 1953). Both genotypes and environments are causes of phenotypic variations, and as such are necessary objects of study to understand phenotypes or diseases (D'Abramo 2014). As both Richardson and Meloni highlight, the modern programs of research on human epigenetics do not challenge genetic determinism and biological reductionism. Instead, epigenetics might be used to pathologise the poor or to reinforce notions of the biological difference or inferiority of individuals living in disadvantaged social conditions (Meloni 2016a; Richardson 2015). When epigenetics considers only the genotype or only the environment, it might easily lead to discrimination by allocating responsibilities to (the biological functioning of) individuals without producing any increase in power to impact on social, individual or physiological determinants of health. Indeed, epigenetics may rely on empirical evidence produced within laboratories, where the foreseen interventions are mainly conceived at the molecular level. How the new postgenomic science of epigenetics will allocate responsibilities to realize particular types of social justice, after having molecularised the social milieu and biographies of individuals (Niewöhner 2011), is yet to be determined (Del Savio et al. 2015; Loi et al. 2013; Waggoner and Uller 2015). Allocation of social and individual responsibilities through scientific research also pertains to perspectives of longue durée, where the metaphysical presupposition of translating social and cultural issues into molecular terms, formulated some decades ago (Hacking 1995; Waddington 1967), will propel part of future biomedical research. In order to disentangle the social effects of scientific practices, it might be useful to consider the roles of and interactions among responsible stakeholders. A matrix that heuristically inspired the analysis of the case presented here was recently sketched with respect to 'genetic risk and responsibility' (Schicktanz 2016). In our scenario, the main actor is a hypothetical principal investigator (PI). The PI is constrained both by the working contract they sign when they commence their role and by the evaluation processes of scientific research, of which dissemination of findings is a significant component. If the research project is carried out using public funds and infrastructures, it is fair to expect results to benefit the taxpayers who indirectly fund biomedical research. Therefore, as a moral agent, the PI has a responsibility towards a moral object, the taxpayers, and is supervised by ethics committees, institutional review boards and bureaucratic mechanisms.
The standards the PI applies are derived from scientific customs, or the research ethos, and have certain consequences that are framed within a precarious labour market. The principal investigator also has the burden of securing both his own salary and the wages of the research team. However, determining how to balance the responsibilities of the PI towards taxpayers and towards the workers engaged in medical research is a complex issue. In fact, it is likely that some conflicts between these two social responsibilities might arise. Imagine that the PI secured private funds through a pharmaceutical company, which he uses to pay the postdoctoral researchers. Also, imagine that he discourages researchers from scrutinizing results suggesting that epigenetic factors increasing the women's risk of breast cancer could be reversed by altering the night shifts themselves, and encourages them to focus on results that suggest a potential for pharmacological interventions. The PI wants to secure future funds from the same foundation and is therefore inclined to please the foundation trustees. In other words, the principal investigator wants to give his 'scientific' contribution to support the workplace's profits and the intensive pace of production it requires. With the best of intentions, the principal investigator is primarily concerned about his own salary and that of his team. In pleasing the funding body by excluding some hypotheses from the project, is he being unfair to research participants who are also taxpayers? And if so, is the principal investigator accountable for having subordinated the subjects of research to the job positions of his research team? Here, it seems that some aspects characteristic of funding bodies and of the hierarchical order of biomedical research might narrow the possible gamut of hypotheses, and eventual solutions, for social medical problems, legitimizing a deterministic stance (i.e. that problems derived from social conditions like working at night are mainly biological problems) by means of anti-reductionist, postgenomic approaches. Another problem researchers might face is of an epistemic nature and relates to the possibility of reversing epigenetic dynamics. The debate, on which there is no consensus, surrounds the possibility of reducing social dynamics to biological ones; diseases derived from certain working conditions are reduced to biomedical problems. Based on the answer to whether it is possible to reverse specific epigenetic biological factors in women who develop breast cancer because they work night shifts, and on the fact that ethics committees often do not encourage dissemination of findings other than in scientific articles, researchers will decide whether it is worth communicating the results of the study to the women engaged in the research. The question of the reversibility of epigenetic factors is related to the aims of the research itself. If researchers also consider the possibility of influencing those who are empowered to shape policy relating to the frequency and length of night shifts, then the possibility of addressing the problem might materialize. The potential to find a solution might then increase not only the desire but also the responsibility of researchers to communicate the findings of the research to participants. Nevertheless, the PI's drive to secure future funding by pleasing the funding body might easily translate into a reluctance to consider other solutions which would alter the current high production rhythms of workers.
If researchers are not inhibited by this 'pleasing chain' of the precarious labour market (i.e. doing research to support policies dismantling public welfare and healthcare systems), they might instead aim to identify primary interventions from their research findings, in order to prevent women working night shifts from developing cancer. They might then consider communicating the outcomes of their research to employers and policy-makers to contribute to a negotiation between employers and employees. Both the genetic and epigenetic models we described, the former indicating which women might develop cancer because of their genetic makeup, and the latter indicating the incidence shown by cohort studies of women working in specific settings, could be used to develop primary interventions and to make policies addressing the safety of workers. What if, however, there is no way to address the cancer predisposition other than by not working at night? Is it responsible for researchers to communicate a risk for which no solutions are envisaged and that might eventually result in a deterioration of the participants' social conditions? For example, workers who live in an area where there is a high rate of unemployment might be faced with the choice to either work or be healthy. In addition, researchers might be constrained in their ability to communicate the specific aims of the study to participants, because the research aims of investigating genetic polymorphisms and epigenetic patterns associated with cancer initiation and development may not have any direct translational outcomes for policy, diagnostics, or therapeutics, at least not in the short or medium term. Therefore, even if researchers would like to communicate more specific aims to participants, they may not be able to do so, as they cannot foresee the translational or social value of their research (researchers mostly work to publish articles that might increase their chances of securing a future job). This lack of information in turn inhibits the participants' ability to make autonomous decisions about entering the research study, as well as their ability to be actively engaged. This lack of engagement may then inhibit the researchers' ability to ask participants for more information, for instance to enrich the study with updated individual phenotypic data. These aspects of the research, which are deeply determined by the nature of the private-public partnership (i.e. the 'unknowability' of aims, the inability to actively engage participants, the lack of communication between researchers and participants, the overlapping of social and for-profit/innovative aims, etc.), constrain the manner in which researchers create the scientific facts that will eventually be used to assign responsibility at certain levels. Engagement of the private sector in biomedical research and epidemiology is not a novelty and is necessary to different degrees (e.g. technological tools that are supplied by corporations). A question that might help to shape a constructive debate regards the roles and modalities of engagement of the private sector in public health. Indeed, rather than the private nature of the funds, what creates the problem is the private nature of some of the dynamics that shape the research, such as the PI shaping the aims of the research by excluding public health interventions in order to please the foundation's trustees and anti-welfare policies. This impasse is tightly bound to issues related to labour market policies.
One could imagine that using a broad consent approach, in which the aims, benefits and risks of research are not necessarily discussed in detail, would mean that the origin of funding and the research aims would not be disclosed (D'Abramo 2015; D'Abramo et al. 2015; Hofmann 2009). Most of the time, broad consent translates into secrecy about the for-profit nature of the research, whereas open disclosure of funding and the aims of the research might clarify the boundary between private and public interests (Jasanoff 2002; Krimsky 2005; Krimsky and Nader 2004). In turn, this aspect influences the role of scientists in their interaction with the public, so that other questions might be better addressed, such as how biomedical research can encourage an open dialogue among scientists, citizens, patients and stakeholders. However, is epigenetics, which was developed as a discipline that captures the dynamic, dialectical interaction between organisms and their environments, instead proposing a narrower concept of environment that cuts off those factors that the supporters of anti-welfare reforms want to remain undisputed? Indeed, lines of biomedical research are principally shaped through devices that, besides being a fair approach to the privatization of science, can also produce profits, i.e. data production, data sharing, patents and intellectual property (Sunder Rajan 2006; Sunder Rajan and Leonelli 2013). Does this mean that medical research driven by a corporate logic is not capable of producing facts that underpin preventive, welfare-supporting policies? And if such preventive policies are then produced, are these policies, at any point, confronted with the values of research participants? Are any of the outcomes of research co-constructed by researchers and lay people? These questions lead us to a consideration of the role of patients and healthy recipients of preventive, diagnostic and therapeutic practices. --- Scenario 2: clinical care A 52-year-old man goes to a general practitioner because he is experiencing difficulty urinating during the day and increased frequency of urination during the night. His father developed prostate cancer, and he is concerned that his symptoms are similar to those his father experienced. The doctor performs a digital rectal examination, which suggests some inflammation, and takes a blood sample for a prostate-specific antigen (PSA) test. The doctor has been reading about new biomarkers for prostate cancer and decides to send a sample from the patient for a test to investigate DNA methylation. While the PSA is within the normal range, indicating the patient does not have prostate cancer, the DNA methylation test identifies that the patient has a higher than average level of global hypomethylation. This can lead to genomic instability, focal hypermethylation of promoter regions in tumour suppressor genes and, subsequently, a high risk of cancer. This hypomethylation could be due to a number of factors, including the fact that he grew up in a poor area, but also his tendency to smoke 40 cigarettes a day, his poor diet and his heavy drinking. The doctor advises the patient that in order to reduce his risk of developing prostate cancer, as well as other forms of cancer, he should take steps to improve his lifestyle, such as quitting smoking, eating better and cutting back his alcohol consumption. The doctor suggests that this may reverse some of the effects and reduce the patient's cancer risk.
On the surface, some might view this information as empowering for the patient, because it gives him the opportunity to enact changes in his diet and lifestyle to improve his health. However, if we consider this more deeply, we can see that placing the responsibility on the individual here is problematic. As discussed previously, according to Hedlund, in order for an actor to be responsible, a number of components are necessary: causation, cognizance, obligation and capacity (Hedlund 2012). One could argue that by performing this test, the healthcare professional has established a link between the patient's behaviours and his risk of cancer, thereby fulfilling the first criterion, causation. By receiving the test results, the patient has been made aware of his increased risks, fulfilling the second criterion, cognizance. In addition, as the patient is seeking medical investigations in order to prevent developing a medical condition which would place an additional burden on the healthcare system, some might argue that he has an obligation to enact change which will prevent the development of this condition in order to benefit future generations. However, stating that these three criteria are fulfilled carries with it a number of assumptions. First, it assumes that enough is known about behavioural factors, such as smoking, drinking and diet, and their effect on DNA methylation to guide medical recommendations. However, to date, there is little in the way of evidence, particularly in humans, that epigenetic patterns can be altered through medical, lifestyle and/or chemical interventions. The doctor has also made the assumption that the results of the DNA methylation study are due to the patient's current lifestyle behaviours and that, by changing these behaviours, the cause of the epigenetic signature will be removed and this will, in turn, lead to an amelioration of his health. But what if the patient grew up in a position of low socioeconomic status, had poor nutrition from a young age and lived in an area with high levels of pollution? This early exposure could also be the cause of his epigenetic results, rather than his current lifestyle. Second, it assumes that the patient has understood the results of the test and the connections that the doctor is drawing between his lifestyle and his risk of cancer. However, understanding genetic and epigenetic risks represents a huge challenge for both laypersons and experts. Third, the 'obligation' criterion assumes that there is a collective agreement about what constitutes 'good epigenetic health' and that this is something that one can strive for (Hedlund 2012). However, as Dupras points out, this is far from straightforward (Dupras and Ravitsky 2016). For example, it is possible that the patient's epigenetic pattern is actually due to his exposures during fetal development. The mismatch model of disease development proposes that the fetus is, through the mother, exposed to the kind of environment that it is likely to be born into, and the resulting epigenetic pattern is imprinted in order to allow better adaptation once it is born and throughout its lifespan (Bateson et al. 2004). Using this logic, the patient's epigenetic pattern is not abnormal in and of itself. Rather, it is mismatched to the environment in which the patient is currently living.
This theory means that there is no such thing as a 'normal' epigenome that one can aim for in order to achieve good epigenetic health (Dupras and Ravitsky 2016). In addition, while one can imagine that quitting smoking, eating a better diet and consuming less alcohol would have a positive impact on his health, there is currently insufficient knowledge in this field to conclude that, even if our patient made radical lifestyle changes as per the doctor's recommendations, there would be any significant change to his global DNA methylation levels and any increased risk of cancer associated with this. This lack of evidence for an ability to alter DNA methylation patterns therefore signifies that, with our current level of knowledge, the capacity criterion cannot be fulfilled. Let us assume, however, that these criteria have actually been met: that the patient is indeed the cause of his increased risk, is aware of it, that there is some concept of good epigenetic health he can aim for and that there are interventions available to reliably alter his epigenetic pattern. In order to assign responsibility to him for his health, we would still require that he was actually capable of doing something to change it (Hedlund 2012). But what level of control does the patient actually have to change his health status? Whether the patient has the capacity to make these changes is questionable, because individuals are embedded in different collectives, such as families, friendship groups or workplaces, and within these collectives, their choices are constrained in various ways (Mol 2008). For this reason, rather than being an individual decision, a choice becomes a decision which is either facilitated or not by the collectives in which the individual is embedded (Mol 2008). Perhaps our patient has a stressful job, works very long hours and has no wife or children. He has little time to make friends, and therefore, his only stress release is to go out after work with his colleagues, who also drink and smoke heavily. With these colleagues as his only support network, implementing behavioural changes that go against the behaviours of the collective is very difficult and might result in a situation of isolation and deprivation. Societal factors also impact on one's capacity to implement change. Consider that it might have taken our patient 6 months from when he first developed symptoms to visit the doctor. While this delay could have eventuated due to his long working hours, making it difficult to attend appointments during the work day, it may also be culturally based, as men (for reasons relating to social constraints, such as job status or gender role) are less likely to seek medical advice when they are ill (Baker et al. 2014).
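Before turning to the responsibilities of other actors, the structure of the argument so far can be made explicit: on the reading of Hedlund used here, responsibility is only justly ascribed when causation, cognizance, obligation and capacity all hold together. The following minimal sketch in Python is purely illustrative; the names and the True/False judgements are ours, restating the argument of this scenario rather than anything prescribed by Hedlund.

from dataclasses import dataclass

# Hypothetical encoding of Hedlund's four conditions for ascribing responsibility.
# Field names are invented for illustration; the boolean values below simply
# restate the judgements argued in the text for scenario 2.
@dataclass
class ResponsibilityConditions:
    causation: bool   # did the agent's behaviour cause the outcome or risk?
    cognizance: bool  # is the agent aware of the risk and the causal link?
    obligation: bool  # is there an agreed standard (e.g. 'good epigenetic health') to aim for?
    capacity: bool    # can the agent realistically act to change the outcome?

def responsibility_justified(c: ResponsibilityConditions) -> bool:
    # On this reading, responsibility requires all conditions to hold;
    # a single failure (here, capacity) is enough to block the ascription.
    return c.causation and c.cognizance and c.obligation and c.capacity

# Scenario 2 as argued above: even granting the first three conditions,
# constrained capacity defeats the ascription of individual responsibility.
patient = ResponsibilityConditions(causation=True, cognizance=True,
                                   obligation=True, capacity=False)
print(responsibility_justified(patient))  # False

The point of the sketch is only that the ascription fails as soon as any one condition fails; it says nothing about how contested each condition is in practice, which is precisely what the scenario illustrates.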
While we have established that it would be unjustified for the epigenetic responsibility within this scenario to rest (solely) with the individual patient, we need to think about the responsibilities of other actors. We can, for example, consider the role, and therefore the responsibilities, of the doctor in this scenario. The role of a doctor is to promote the wellbeing of their patient primarily through beneficence and non-maleficence (Beauchamp and Childress 2001). Superficially, it may seem that the doctor is fulfilling his responsibility to the patient by ordering the epigenetic testing in order to determine the patient's risk of cancer and provide them with the opportunity to implement behavioural change. However, if we consider the patient's lack of capacity to enact this change because of the collectives in which he is embedded, then perhaps the doctor is actually doing more harm than good by ordering epigenetic testing and disclosing the results to the patient. If the medical information is not realistically actionable, disclosure of the results from the epigenomic test could easily lead to an increase in the patient's concerns and stress. If we also consider the nature of the lifestyle recommendations provided, one might question whether the doctor would have suggested anything different, regardless of the test outcomes. One might also suggest that it is a component of the doctor's role to take the situation of the individual patient into account by assessing how these 'unhealthy behaviours' might be created by their social situations and therefore how their ability to implement change might be constrained. It is also important to consider what impact this knowledge is likely to have on the patient. He may feel empowered by the knowledge that he has an increased risk of developing cancer, because he has the potential to do something about it. But what if he does take steps to improve his health and a repeat methylation test shows no difference? This failure, despite his attempts at compliance, is not likely to empower him to take steps to improve his health in the future. What if the patient does not make lifestyle changes and he develops cancer? Is he more responsible for his health status than someone who has not had their epigenome tested, because he was informed about his risks? In order for an individual to bear more responsibility, they must also be given more power. Therefore, simply giving the patient information is not enough to increase their level of responsibility. Despite this, although it may be unjustified to place the responsibility on an individual for their epigenetic health, such that they should be blamed for their ill health if it eventuates, it might still be beneficial to empower individuals to take better care of themselves generally, as this could lead to disease prevention, help-seeking which may result in early identification, and potentially access to a broader range of possible treatments. In addition to caring for patient wellbeing, over time there has also been a shift, driven by patient preferences, from paternalistic models of care to those which have a greater focus on patient autonomy and self-determination (McCoy 2008; Quill and Brody 1996). This shift to promote autonomy should entail providing the patient with the ability to give informed consent for the test.
Given the complex nature of epigenetics, one can imagine that it would be difficult to explain the potential outcomes, including the potential for incidental findings, related to the test in sufficient detail for the patient to make an informed decision about submitting their sample for testing. Of course, there are also financial implications associated with using this technology which need to be considered. On one hand, the information about the patient's increased risk of cancer could be used to benefit the patient. If they did develop cancer, then perhaps they would be entitled to reduced rates for their investigations, treatments and general medical expenses because they were 'epigenetically disadvantaged'. But would this still be justified if the patients were informed that they were at risk, had the knowledge of how to reduce their risk and chose not to change their behaviours? On the other hand, one could foresee health insurance companies using this kind of information to their advantage and charging higher premiums to those who were deemed to be more at risk based on their epigenetic profiles, similar to their current practices of asking consumers about their smoking behaviours and family history. If we consider the responsibilities of insurance companies in this situation, one could argue that they are responsible both for providing the service the consumer is paying for, and also for charging for that service in an equitable way (i.e. prices are dependent on some predetermined, logical and consistent stratification). Therefore, based on current knowledge and within a liberal context, using epigenetic profiles to stratify consumers' premiums might lead to contexts in which discrimination based on epigenetic characteristics is produced. In particular, even if legal provisions created in several states and communitarian institutions prevent discrimination in general (European Parliament 2000) and on the basis of genetic characteristics (German Ethics Council 2013; Slaughter 2007), which are often equated with epigenetic information, this might not translate into the concrete avoidance of discrimination for persons living in daily social contexts. We can instead consider whether corporations should take responsibility for the health of individuals. In this scenario, if we assume that our patient's increased cancer risk is due to his unhealthy behaviours, the tobacco, alcohol and fast food industries are all contributing to his potential to acquire ill health, both through making their products accessible and through their advertising campaigns. Would it be reasonable to expect these actors to assist members of the society to implement behavioural change? And if so, what kind of model might this follow? One possibility might be that the taxation of the profits of corporations producing and trading toxicants, like plastics, dyes, tobacco, oil or carbon, would be allocated to fund those parts of the healthcare system that take care of people suffering from diseases caused by those chemicals. Nevertheless, given that transnational companies do not want to be considered liable for the increasing number of persons living with and dying from diseases caused by those chemicals (Chapman 2004; Hirschhorn 2004), at what level this negotiation should take place is highly problematic.
Indeed, traditional institutional decision-making processes are far from considering other forms of negotiation, such as environmental conflicts among local residents, civil society groups, private industry interests and public, national and communitarian institutions (Greyl et al. 2013; Martinez-Alier et al. 2016; Perez et al. 2015). --- Scenario 3-prenatal care A 23-year-old woman, who lives in a low socioeconomic area, goes to a general practitioner because she suspects that she is pregnant. The doctor confirms the pregnancy, the woman's first, and they discuss her options. She decides to continue the pregnancy, and the doctor discusses lifestyle changes she should make in order to increase the health of the fetus, such as ceasing smoking, avoiding foods considered 'risky' during pregnancy (unpasteurised cheeses, uncooked fish, etc.) and also the importance of adequate nutrition. The doctor asks about the woman's eating habits and identifies that she has a poor diet that is both nutritionally inadequate for ensuring the health of the baby and composed predominantly of packaged foods. The doctor mentions that there is some evidence to suggest that, from an epigenetics perspective, eating large quantities of foods that have been exposed to particular plastics that contain endocrine disrupting chemicals can carry considerable health risks to the fetus, such as abnormal uterine and cervical development (Bondesson et al. 2009; Brotons et al. 1995; Casas et al. 2011). In addition, the doctor informs her that poor maternal nutrition can result in cardiovascular disease when the child reaches adulthood. The doctor suggests that, in order to promote the health of her baby, she should drastically reduce her consumption of packaged products and eat more fresh food. While in scenario 2 we discussed the responsibilities of an individual patient to implement behavioural change in response to epigenetic information about his own health, here we are focusing on the responsibilities of this young, pregnant woman to implement behavioural change in order to enhance the health of her future child. On the surface, this does not seem very different from the expectations we normally place on women during pregnancy. The internet is riddled with information (and misinformation) about what women should do during pregnancy in order to ensure the health of their baby. Women are instructed not to drink alcohol, not to smoke, to eat folate-rich foods, to exercise, to be careful about exposure to kitty litter, to avoid undercooked meat and eggs, to avoid unpasteurised cheeses, to avoid too much caffeine, to eat fish, but not too much fish, etc. However, what differs in this scenario, compared to the standard expectations placed on women to adapt their behaviours to promote the wellbeing of their future child, is that some of the advice the doctor is recommending to promote the health of the future child is based on epigenetics. To explore whether the mother-to-be in this scenario has any responsibility to change her behaviour in response to this information from her doctor, we must first consider the moral status of the fetus and the obligations of mothers to their unborn children more broadly. Authors have suggested that although women are free to choose whether they want to continue a pregnancy, once a pregnant woman has decided to do so, she, and other members of society, then have fiduciary obligations towards the fetus (McCullough and Chervenak 2008).
The determination of the point at which these obligations commence is based on the idea that the fetus is not viable (i.e. not able to sustain its life independently), so its ability to become a child is dependent on whether the woman decides to continue the pregnancy (McCullough and Chervenak 2008). Although the fetus does not have independent moral status and therefore no 'rights', it has dependent moral status, based on the role it is ascribed by the mother-to-be, the doctor and the society. According to McCullough and Chervenak (2008), this dependent moral status means that there are beneficence-based, rather than rights-based, obligations towards the fetus. However, once a mother-to-be has decided to continue the pregnancy, some have postulated that the fetus may then acquire a different moral status, that of a future person, with their own full moral rights (Loi and Nobile 2016). If we accept this argument, then not only would the mother-to-be have a responsibility to act in a way that protects the future health of the fetus, but it would also be justifiable for the State to reinforce this if the mother was non-compliant, because '[...] the interests of future children and adults matter as much as the interests of pregnant mothers' (Loi and Nobile 2016). If we accept, then, that the mother-to-be has an obligation to promote the health of the fetus, we need to consider whether she has the capacity to do so, given (a) the reliability of the information she has been provided, and (b) the situation in which she is embedded. In relation to the reliability of the information, there is considerable evidence that exposure of the fetus to high levels of endocrine disrupters leads to disorders in the development of the reproductive system and therefore that exposure to these chemicals should be avoided (Bondesson et al. 2009; Casas et al. 2011; Fernandez et al. 2016; Skinner 2014). There is also evidence to suggest that poor maternal nutrition during pregnancy leads to low birth weight and also to increased risks of cardiovascular disease (Barker et al. 1993). Therefore, taking steps to improve her diet is likely to lead to better health outcomes for the child, both in the short and in the long term. But is it realistic to expect her to reduce her intake of packaged foods that contain endocrine disrupting agents and to eat a more nutritious diet? In reality, at the level of the individual and without support, the options for our pregnant woman are quite limited. Firstly, she needs to be provided with information so she can make informed decisions about which foods to choose. Perhaps she has never been educated as to which foods have greater nutritional value or taught how to cook good quality meals, because this is how her parents ate. She might also currently live with the father of the child-to-be, who also works long hours and has poor knowledge of what constitutes a good diet, and who is therefore not going to be able to provide support in her attempts to change her diet. In addition, the information that she receives from the doctor might be quite confusing for her because, at first glance, advising someone both to increase food intake and to restrict intake of particular foods might seem contradictory. This could result in a further reduction in food intake, if she stops eating packaged foods without replacing them with more nutritious foods, increasing the overall risk of cardiovascular disease for the child-to-be.
If we think about the role of the doctor in this scenario, they might feel that they have informed her of the risks to her future child so that she is empowered to enact change to improve their health. However, perhaps all they have done is place the burden of responsibility on the woman, making her anxious about a situation that she is not in a financial or social position to change. Or perhaps she will attempt to change her eating habits. Although the doctor may feel that he has fulfilled his medical obligations, one might consider the doctor irresponsible for disclosing this kind of information without also providing assistance in implementing behavioural change. But what kinds of solutions might actually be beneficial in this situation? Although her doctor might be able to provide her with some of the educational aspects, she also needs to be able to access and afford the healthy options if she chooses to do so. We know that she has a low socioeconomic status, so it may be difficult for her to afford to buy fresh produce when packaged and processed foods are often much cheaper, due to their poorer quality. Perhaps she works very long hours and eats these sorts of meals because they do not require much cooking time. If she is quite motivated, she might, for example, start driving to a different supermarket, which is an extra 20 minutes away, in order to shop for fresher foods. But this takes more time, so she has even less time to cook meals than she already had, which means she misses out on sleep, which is also not healthy for her or the fetus. Or maybe she will start buying fresh foods rather than packaged ones from her local supermarket. But this is more expensive, and she needs to work even longer hours to cover the costs, which has the same effect. All of these factors mean that her ability to implement behavioural change in order to enhance the health of the fetus might be impaired. But does the fact that it might be difficult for the mother-to-be to enact change mean that she should not be informed of her potential to do so? While one might argue that informing her may place an unrealistic burden on her, on the other hand, not passing on this information removes any possibility for her to improve the future health of her child. A number of authors have drawn attention to the inaccurate weighting, and therefore unfair responsibility, that is placed on the maternal contribution to the disease states of future children (Hedlund 2012; Kenney and Müller 2016; Richardson et al. 2014). As Richardson et al. state, there is 'the need for societal changes rather than individual solutions' (Richardson et al. 2014). Therefore, in order to provide the kind of support this pregnant woman needs, we need to think about the potential for other actors to assist at the societal level. For example, the public health sector has an interest in having a healthier population, both because it has the ultimate goal of fostering the right of members of the society to health and because a healthier population places less of a burden on hospitals. Therefore, although education could be provided to an individual woman by a health practitioner, one could also consider whether larger health institutions could organize educational sessions for pregnant women in order to target more of the population, as implemented in Denmark (Lemus 2015). One might also consider giving food allowance vouchers to pregnant women to shop in organic food stores.
However, if we think about the mismatch model, then once the child is born and the food vouchers cease, the child may not be epigenetically 'programmed' to the environment in which it actually grows up. Instead, perhaps representatives could assist pregnant women living in urban areas to establish a farmers' market by giving them guidance and connecting them with local producers. This would create ongoing access to fresh food by adopting a 'teach a man to fish' mentality. Policy-makers could also play a role in assisting pregnant women to avoid eating packaged foods, for example by developing policies which place pressure and obligations on food producers to reduce the use of harmful plastics. Of course, the development of these interventions and support systems should always involve discussions with the members of the society who need them, in order to understand the problems and ensure that the strategies are appropriate to the population. Therefore, it would be important to set up a dialogue with women who might want to change their diet to determine precisely which barriers are preventing them from achieving this behavioural change. Only then can the State effectively implement these strategies. Nevertheless, in those cases in which a negotiation takes place, citizens, consumers and lay people might not understand, or might even be confused by, policy-makers and experts. Indeed, scientific opinions and regulations of chemicals such as endocrine disruptors can be contradictory. Take, for instance, the 2010 statement produced by the European Food Safety Authority (EFSA) about Bisphenol A, a plastic used in food packaging that is an endocrine disruptor (EFSA Panel on food contact materials and processing 2010). The 2010 statement produced by EFSA declared the safety of Bisphenol A. In contrast, in 2010 the Danish Environmental Protection Agency (EPA) instead prohibited the use of Bisphenol A in all food contact material for children aged 0-3 years. On the basis of studies showing the compounding effects on human health of combinations of endocrine disruptors, in 2012 the Danish EPA banned four phthalates from all consumer products (phthalates are molecules used, for instance, to soften food packaging, which in combination with Bisphenol A might easily increase the health problems of individuals and their children) (Lemus 2015). Or consider the French ban on using Bisphenol A in all food containers, underpinned by the French Agency for Food, Environmental and Occupational Health & Safety (ANSES). In this specific case, there is an explicit conflict between national and communitarian institutions that makes scientific opinions and policies less understandable for experts and lay people. There are various solutions here. For instance, local communities might organize meeting days with information materials, scientific experts and institutions based in the same area, to proactively adapt to specific, situated contexts. At the same time, it might be important to consider dissenting opinions and facts on biomedical research and public health policies, in order to actively engage citizenry and lay people in science and politics. Last but not least, it is important that scientists, technicians and researchers embrace a more comprehensive analysis (i.e. compared to the airy and principlistic approach of bioethics) of the issues produced in the science/society interaction.
--- Scenario 4-the workplace The CEO of a power plant has received reports that a number of employees have recently been diagnosed with a range of cancers affecting various different tissues, such as the lungs, skin and bladder, and various oral and esophageal carcinomas. Their medical advisor suggests that this may be due to exposure to benzo[a]pyrene (BaP), a polycyclic aromatic hydrocarbon that is produced through incomplete combustion (Tong et al. 2006) and classified by the International Agency for Research on Cancer (IARC) as carcinogenic in animals and humans (IARC 2010b). BaP is lipid soluble, accumulates in adipose tissue and is transferred across the placenta and the fetal blood-brain barrier (Brown et al. 2007; Hood et al. 2000). BaP has shown both genetic and epigenetic toxicity (Perera and Herbstman 2011). Moreover, BaP is an endocrine disruptor, a steroid-mimicking chemical affecting fetal growth (Choi et al. 2006) and cognitive development, and associated with behavioural disorders. Studies on animals have shown that BaP interferes with early brain development and peripheral lymphocyte development, and causes alterations in levels of noradrenaline, dopamine and serotonin (Konstandi et al. 2007; Stephanou et al. 1998; Tekes et al. 2007). Concerned both for the other employees and for the reputation of the company, the CEO, along with the board, decides that all employees must submit samples for epigenetic testing in order to assess their current DNA methylation levels. Those who are shown to have low global DNA methylation levels will be given a payout but will lose their jobs, because they are considered to be at high risk of cancer and should not continue to be exposed to BaP. Those with more normal levels of DNA methylation will be allowed to remain in their positions. However, they will be required to sign new contracts in which they commit to health-promoting behaviours, such as exercise and a good diet, in order to combat their BaP exposure. In this case, epigenetic tools can have several non-overlapping potential uses, and the interventions developed following epigenetic testing could serve a gamut of solutions, each focusing on a different level. One solution might focus on the worker as not being epigenetically adapted to a specific, toxic environment. Another might focus on the workers' habits, conceived as a means to individually adapt to and cope with damaging pollutants. Alternatively, the focus could be on the company as responsible for damaging the environment, its inhabitants and especially the workers. Here, the major issue is the compatibility, or lack thereof, of all these uses of epigenetic testing. Is it possible to, at the same time, protect the health of the workers, the industrial activities and the population more broadly, including the workers' families? As we have outlined in previous scenarios, it is unfair to place all of the responsibility on the individual workers for their own health, as the employers here are doing by making them sign a contract committing them to undertake activities to optimize their health in response to the risks posed by their work environment. Because they have greater power, corporations need to take on more responsibility for the health of their workers. But what actions might be possible in response to this scenario? It is possible that both groups of workers, those who were fired and also those asked to sign a contract, might decide to initiate a lawsuit against the employers.
Those who were fired might invoke legal intervention on the basis of their right to work. Perhaps the factory is in an economically depressed area that is shutting down most of its industrial sites, and one without social welfare measures to guarantee the fired workers either a decent subsidy or alternative jobs. Those who were obliged to sign a new contract might decide to file a lawsuit against the employers on the basis of violation of environmental law on the health and safety of workers. In this case, those most motivated to initiate the lawsuit might be the workers' family members, specifically their partners and children, because the workers themselves are under occupational blackmail, being forced to choose whether to live or to work. We also need to consider whether epigenetic testing ordered by the employers constitutes discrimination at the workers' expense. Should the company have the right to ask employees about their health status in order to fire those who might already have been damaged by the pollutants? Can we invoke laws and regulations, such as the 'Genetic Information Nondiscrimination Act', which prohibits employers from asking for and using individuals' genetic information when making hiring, firing or job placement decisions (Slaughter 2007)? We could envisage this legal action as a negotiation between all the actors interested in the rights and welfare of workers and citizens, a negotiation which might be lacking in some countries. We can imagine that a regional agency responsible for environmental protection, together with some grassroots movements that want to protect the people living in the area surrounding the plant, might also enter the scene. These grassroots movements may initiate a massive media campaign to encourage the public to boycott the company. These actors from the civil society, aiming at enhancing public and communal goods, might then push institutional bodies to order other tests in order to analyse the association between the epigenetic dynamics present in the workers' samples and exposure to BaP. On the basis of these outcomes, the regional agency might order the closure of the plant to convert it into a more sustainable and less polluting activity, a move that would also cut many job positions. A theoretical point is slowly emerging that threatens to disrupt the existing tradition of epidemiology and public health. Historically, epidemiological knowledge that was meant to be generalizable for most animal and human populations, derived from in silico, in vitro and in vivo studies, logical/mathematical models, cell cultures, model organisms and cohorts of humans, has been translated into public policies which are meant to be universally applicable. However, epigenetics, by some of its accounts, seems to say something different and points to the capability of each specific organism to cope with a specific environment. Is the focus of epigenetics on individual biological plasticity challenging those preventive policies developed by communitarian agencies on pollutants of various kinds, habits or jobs (Davis 1986)? Now, the entity causing a disease may not only (and not primarily) be a specific molecule or human behaviour but also the genetic or epigenetic susceptibility of a person to that specific disease, e.g. a specific epigenetic makeup, programmed in the early phases of development, that may eventually not match with a specific environment.
Within this 'mismatch' aetiological model, what are the responsibilities given to those actors or factors that shaped the two, non-matching environments (i.e. the perinatal environment that programmed the individual, and the environment in relation to which the adult develops the disease)? Is there a resurgence of the importance given to the plasticity of an individual's biological makeup at the expense of environmental and sociocultural factors? Will biological plasticity be used to rank individuals, classes, genders, etc., as was proposed some decades ago by right-wing Lamarckians (Meloni 2016b)? --- Discussion In all four scenarios, we have shown that epigenetic testing is mainly used to scrutinize the relationship between individuals, public and private institutions, future generations and the environment, be that material or social in nature. Depending on the context in which epigenetic testing is embedded, these relationships will carry with them certain roles, and therefore responsibilities, for the actors involved. Compared to the sociological notion of genetic responsibility, where the emphasis is on individuals, epigenetic responsibility, as we have illustrated, could instead redistribute roles within the community. In this sense, epigenetics may allow for a better realization of the relational concept of responsibility. Indeed, the biological concept of inheritance has been reshaped by epigenetic studies (Gilbert 2011; Gilbert and Epel 2009; Meloni 2016a). During the last century, we have witnessed the birth of, and increasing importance given to, genetics and individual agency. This normative genetic shift corresponds with changes in the moral obligations of individuals, a withdrawal of solidarity and a reduction of professional responsibility (Schicktanz 2016). If epigenetics is used within the same ideological framework, where the agency of individuals plays the main role to the detriment of collective agency, then other important concepts will be reshaped and responsibilities reallocated. What we have tried to sketch here is the use of epigenetic tools and models within dialogical scenarios where different actors from several levels of society are considered. We have focused mainly on the agency of individuals, corporations and the State, concepts that are often overlooked within the current scientific literature and discourse on epigenetics. Caring for oneself, for future generations and for environmental protection are aspects which are interlinked, pertain to interactions among individuals, the State and the private sector, and are under negotiation at a global scale. These three notions, and their interactions, challenge individuals', communities' and public or private entities' conceptions of time relating to the length and effects of an event (e.g. what effect does the quantity and quality of a person's diet have on her health over the next month, versus on her child's health over the next 20 years?). The interaction between the three notions of individuality, next generations and environment also raises questions regarding who should be the moral agent to whom responsibilities are allocated. For example, are workers or citizens responsible for their own health, or should the employers, the industry and the State also be considered responsible for certain environments that contribute to diseases?
In addition, the interweaving of these three notions to redistribute responsibilities is captured by the temporal direction (backwards or forwards) considered by scientific enquiry, such as whether researchers should focus on preventive policies to help people not get sick, or whether they should instead focus on developing therapeutics to cure and care for persons with diseases. And how should limited research resources be allocated between these two views? The allocation of responsibilities is a process following norms that are under the supervision of authorities defined within specific forms of government and at the State, supranational or corporate levels. Moreover, the norms used to allocate these responsibilities might be used to produce regulations in which processes, actors or subjects will be considered, such as whether emphasis should be placed on scientific/epistemic norms or on social norms. For example, should scientific practices and theories impacting directly, and at different levels, on people's lives be discussed through norms developed by civil society, or are scientific/epistemic norms sufficient to regulate science and its effects on society? In some of the scenarios in this paper, we have situated our point of view sympathetically with certain scientific 'truths', such as the claim that heavy drinking and smoking are unhealthy habits for men and women, whether they are pregnant or not. This might make it difficult for the reader to disentangle epistemic truths from philosophical, ethical and moral arguments. This, of course, might be considered either a limitation or an advantage, depending on one's point of view. On one hand, having plausible case scenarios and scientifically informed stories might improve the comprehension of practices and ideas. On the other hand, being partisan on specific scientific truths might present a simplistic picture of science, in which facts are instead both real and constructed, depending on the negotiations and interests of stakeholders. As an example, a molecule like Bisphenol A is, to date, considered toxic by some countries, like France or Denmark, but not by the European Union to which these two countries belong. At the same time, considering a scientific fact as true might obscure the moral, ethical and political aspects of concrete situations, reducing these latter aspects to epistemic arguments and leading to obligations and ethical imperatives. Furthermore, as epigenetics was developed in a specific period of time in which States were less challenged by transnational corporations and globalization, it is of primary importance to consider the models and practices of epigenetics within specific contexts where international networks of research can be aligned to the interests of different actors, such as national or supranational public institutions, transnational corporations or foundations, and grassroots movements of citizens. We have challenged the importance given to individual agency, both in practice and as a concept, in that it does not allow for concrete possibilities of action for individuals. Indeed, being included in a framework of liberal governance, epigenetics is mainly used to discipline individuals considered as isolated from their social and economic contexts (Santoro 2010). Here, we instead propose a model in which a dialogical relationship among collective, individual, private and public agencies is put in motion. As we have shown, epigenetics can be used to foster either individual or social rights.
Trying to establish an equilibrium between social and individual rights by means of epigenetic practices might be a way to foster social justice. --- Compliance with ethical standards Conflict of interest The authors declare that they have no conflict of interest. Consent This article does not contain any studies with human or animal subjects performed by any of the authors.
The field of epigenetics is leading to new conceptualizations of the role of environmental factors in health and genetic disease. Although more evidence is required, epigenetic mechanisms are being implicated in the link between low socioeconomic status and poor health status. Epigenetic phenomena work in a number of ways: they can be established early in development, transmitted from previous generations and/or responsive to environmental factors. Knowledge about these types of epigenetic traits might therefore allow us to move away from a genetic deterministic perspective, and provide individuals with the opportunity to change their health status. Although this could be equated with patient empowerment, it could also lead to stigmatization and discrimination where individuals are deemed responsible for their health, even if they are not in social situations where they are able to enact change that would alter their health status. In this paper, we will explore the responsibilities of different actors in the healthcare sphere in relation to epigenetics across four different contexts: (1) genetic research, (2) clinical practice, (3) prenatal care and (4) the workplace. Within this exploration of role responsibilities, we will also discuss the potential constraints that might prevent the patient, mother-to-be, research participant or employee, from enacting any necessary steps in order to increase their health status in response to epigenetic information.
Introduction Colorectal cancer (CRC) is the second leading cause of cancer-related deaths in the United States, including among Latinos (American Cancer Society 2015a, 2017). This statistic is of concern as the Latino population in the US is expected to triple its current size by 2050 (Kotkin 2010). Although CRC is one of the most detectable, preventable, and treatable cancers (Rex 2008, Winawer 2015, Siegel et al. 2015), Latinos are less likely to be diagnosed with early stage CRC than non-Hispanic Whites (Siegel, Naishadham, and Jemal 2012). Timely and consistent CRC screening and early detection efforts have effectively reduced CRC morbidity and mortality, yet CRC screening remains low among Latinos (Fernandez et al. 2008, Buscemi et al. 2017, Nagelhout et al. 2017, American Cancer Society 2015b). New scientific advancements in screening and early detection modalities have emerged to address some of the commonly cited impediments to screening among Latinos, which in turn could greatly reduce the CRC health disparity gap. Furthermore, national goals for Healthy People 2020 call for increasing screening rates for CRC to 70% (U.S. Department of Health and Human Services 2014). Other national organizations (e.g., American Cancer Society) have set an even more ambitious goal of achieving 80% screening rates by 2018 (Centers for Disease Control and Prevention 2016, Simon 2015). The American Cancer Society and the US Preventive Services Task Force recommend that asymptomatic adults at average risk for CRC begin screening at 50 years of age, using myriad options which include, but are not limited to, the following: (1) colonoscopy every 10 years and (2) annual fecal occult blood test (FOBT) or high-sensitivity and high-specificity fecal immunochemical test (FIT) (American Cancer Society 2015a, Gwede et al. 2015, Pignone and Sox 2008). Despite the availability of various CRC screening options, half of all US adults aged 50 years and older are not up-to-date with the national screening guidelines (Centers for Disease Control and Prevention 2012). Given the growing national imperative to improve CRC screening, an acute challenge is to develop effective patient-centered and clinic-based strategies to improve screening rates in federally qualified health centers (FQHCs) using tests that are accessible, acceptable, affordable and actionable. Although colonoscopy is considered the most thorough CRC screening modality, FIT testing offers a promising first option for patients who face barriers to colonoscopy screening. In fact, FOBT tests have been shown to reduce CRC mortality by 30% and incidence by 20% (Allison 2005, Levin 2011, Mandel 2008, Quintero et al. 2012, Sanford 2009). However, at the time this study was launched, FIT testing was relatively new in FQHC settings and little was understood about its acceptability among Latino populations. The partnering FQHCs were using three-card fecal occult blood tests (FOBT) and use rates were poor. However, the clinics expressed a strong desire to convert to the simpler and high-specificity/high-sensitivity FIT if it was shown to be more acceptable in this setting and for this population. Thus, the purpose of this study was to explore Latinos' perceptions of a relatively new CRC screening modality, the FIT, in order to (1) document the awareness/knowledge of the FIT test among Latinos, (2) gauge general perceptions of providers and patients about the FIT test, and (3) explore the feasibility of adoption/uptake to ameliorate disparities among Latinos.
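To make concrete what being 'up-to-date' means under the screening options cited above (screening from age 50, with colonoscopy every 10 years or an annual FOBT/FIT), the following is a minimal illustrative sketch in Python. The function and field names are ours and purely hypothetical, and the rules are simplified to the two modalities listed, not a full restatement of the guidelines.

from datetime import date

# Simplified, illustrative check of CRC screening status based on the two
# modalities named above: colonoscopy every 10 years, or FOBT/FIT every year,
# for average-risk adults starting at age 50. Real guidelines include more
# options and nuances; all names here are hypothetical.
SCREENING_INTERVALS_YEARS = {"colonoscopy": 10, "fobt": 1, "fit": 1}

def is_up_to_date(birth_date: date, last_tests: dict, today: date) -> bool:
    age = (today - birth_date).days // 365  # approximate age in years
    if age < 50:
        return True  # screening not yet indicated for average-risk adults
    for test, last_done in last_tests.items():
        interval = SCREENING_INTERVALS_YEARS.get(test)
        if interval is not None and (today - last_done).days <= interval * 365:
            return True  # at least one recommended test is still current
    return False

# Example: a 62-year-old whose only test was a FIT two years ago is overdue.
print(is_up_to_date(date(1955, 6, 1), {"fit": date(2015, 3, 10)}, date(2017, 6, 1)))  # False

The sketch is only meant to show why 'up-to-date' is a moving target that depends on both the modality chosen and the time since the last test, which is the sense in which the study population's screening status is described below.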
In the long term, the partnering FQHCs would use the findings from this study to guide future directions for implementing innovations or new screening modalities in other disease areas as well. --- Subjects and Methods --- Setting and overview The study was conceptualized, designed and implemented within the context of a larger ongoing community-based participatory research (CBPR) program, the Tampa Bay Community Cancer Network (TBCCN) (Gwede et al. 2015), a network of community partners dedicated to tackling health disparities in the Tampa Bay area. The concept for this study originated from an identified community need to address barriers to CRC screening and to reduce the unequal burden among Latinos. Given the dearth of materials or studies among these communities, Latinos CARES (Colorectal Cancer Awareness, Research, Education and Screening) was developed. Guided by ethnographic study methods, the study herein focuses on the use of focus groups and key informant interviews. The results of these laid the foundation for the adaptation and transcreation of low-literacy Spanish-language CRC educational materials (video and photonovella) for Latinos to inform patients about this new CRC screening test modality. This study was theoretically informed by the Preventive Health Model (PHM) (McQueen, Tiro, and Vernon 2008, Myers et al. 2007, Tiro et al. 2005). This model has been shown to predict CRC screening (CRCS) intention and behavior in multi-ethnic populations (McQueen, Tiro, and Vernon 2008, Myers et al. 2007, Tiro et al. 2005). PHM constructs include salience and coherence, perceived susceptibility, self-efficacy/response efficacy, cancer worries, and social influence. These constructs contributed to the development of the focus group guide and provided a blueprint to organize themes during interpretation and reporting. --- Community advisory board A bilingual (English and Spanish) community advisory board (CAB) informed research efforts from conceptualization through data analysis. Members of the CAB were identified from TBCCN partner organizations and represented individuals of diverse Hispanic heritage, including from the Caribbean, Central America and South America. CAB members ensured that the study design, data collection, data analysis and interpretation, and materials content were salient in terms of culture, language and literacy by offering suggestions on the wording and phrasing of instruments and materials, as well as providing ideas on recruitment strategies and the meaning of results. --- Instruments The focus group and key informant interview guides (see Table 1) were co-developed with the CAB, including representatives from FQHCs. The published literature also directed the content of the interview guides (Gwede et al. 2011, Gwede et al. 2013, Gwede et al. 2015, Kelly et al. 2007, Tarasenko et al. 2011, Walsh et al. 2010). The focus group objectives were to identify patients' beliefs and attitudes about general CRC screening, capture reactions to the FIT (acceptability, overall perceptions, barriers, motivators) and elicit strategies for improving CRC screening (FIT) uptake among patients. A brief demographic survey was also used to collect basic patient demographic information.
The key informant guide objectives were to assess health care providers' perspectives on the following content areas, with an emphasis on FIT: (1) CRC information needs of patients, (2) factors that prevent or facilitate patient-provider discussion of CRC screening, (3) strategies that enhance the efficacy of educational materials to increase CRC screening, (4) factors that prevent or motivate uptake of CRC screening, and (5) communication strategies and resources to enhance follow-up with CRC screening recommendations. --- Eligibility and study participants Focus group participants (Table 2) included men and women aged 50-75 years who self-identified as Hispanic/Latino; were able to read, speak and understand Spanish; and preferred to receive health information in Spanish. Participants were FQHC patients recruited in clinics or community settings (herein referred to simply and collectively as patients). Regarding educational level, most (67%) had a high school diploma/GED or fewer years of schooling. Most participants reported having health insurance, although this may have been county-provisioned health insurance. Over a third (38.8%) of participants were not up to date on CRCS. A majority of participants (89.8%) were born outside of the U.S., representing a diverse range of countries/territories (e.g., Puerto Rico, Mexico). Most participants (59.2%) for the focus groups were recruited from community sites that serve underrepresented populations. Key informant (KI) participants (Table 3) were health care providers from diverse racial/ethnic backgrounds and health care professions. Eligible health care professionals included primary care physicians, nurse practitioners, and physician assistants whose usual role included identifying individuals eligible for CRC screening, educating patients and recommending CRC screening per age-appropriate guidelines (herein referred to collectively as providers). The median age of providers was 37 years (range 30-64). A majority of providers were female (60%) and self-identified as white (60%). Half (50%) of the providers were physicians. The majority (70%) of providers had worked in community clinics serving the medically underserved for over 5 years. --- Procedures Trained, bilingual and bicultural research staff members recruited patients and health care providers from FQHCs and other community settings. Data collection occurred from fall 2014 through spring 2015. Recruitment efforts spanned different local geographic regions to ensure a diverse population of Latinos, including rural and urban community settings. Non-probability, purposive and snowball sampling was employed. A research staff member assessed eligibility for focus group participation. Eligible participants were assigned to a focus group based on previous CRC screening status (previously screened vs. never screened), and each group was conducted separately according to its screening status. Eight focus groups (n=49) were conducted in Spanish and led by two experienced bilingual moderators. Patients were provided a description of the FIT kit in Spanish. The description covered the purpose of the FIT kit and included a sample kit with a walkthrough of the collection steps, storage and shipping. A research staff member modeled the steps using a FIT kit as they were described. Focus groups were audio recorded and lasted between 1.5 and 2 hours.
Completion of the demographic questionnaire and a brief question and answer session followed the focus group to address any unanswered questions about CRC screening. Key informant interviews were held at the provider's site. Interviews were conducted by two trained research staff members, were audio recorded and typically lasted 30-40 minutes. Providers were provided a description of the FIT kit as if they had never heard of it. This study received the university's Institutional Review Board approval and the cancer center's Scientific Review approval. All participants signed informed consent forms prior to engaging in any research study activities. All participants in this study received a $30 incentive. --- Data analysis Verbatim transcripts were created for each focus group and provider interview in the language (Spanish or English) in which it was conducted. The two qualitative data sources were analyzed separately with applied thematic analysis using ATLAS.ti v7.0. The data were coded and analyzed by two bilingual investigators. Discrepancies were discussed until consensus was reached for all transcripts. Emergent codes centered on perceptions of FIT. The investigators used the study's theoretical model to guide the organization of preliminary findings, allowing for the inclusion of emergent themes that did not fit within the theory's constructs. The investigators further looked for synergy and distinctions between the two participant groups (patient focus groups and provider interviews) in the results. The findings were summarized and shared with the CAB to ensure culturally appropriate interpretation of the results. CAB members confirmed findings and provided additional insight that further shaped the final results. This iterative process was used to assess the trustworthiness (validity) of the findings. --- Results The results reflect cross-cutting themes that transcend both patient and provider perspectives as well as distinct themes between these groups. Table 4 summarizes the list of themes by participant group (focus groups and key informant participants). Focus-group-specific themes are noted as 'FG patient' and key informant themes as 'KI provider'. In exploring perceptions of the FIT test, many of the commonly known impediments to CRC screening emerged (e.g., lack of health insurance, embarrassment, fear) in both patient and provider groups. Commonly cited facilitators were also discussed (e.g., family history, peer/family support, physician reminders) in all participant groups. This paper focuses on reactions to the FIT test, thus findings reflect several themes that fall into three overarching focus areas: (1) awareness/knowledge of the FIT test, (2) perceptions specific to the feasibility of adoption/uptake of the FIT test, and (3) messaging/communication of the FIT test to patients. --- Awareness and knowledge about the FIT test There were varying informational needs and awareness and knowledge levels based on FG patients' previous experiences with screening (previously screened vs. never screened). KI providers' knowledge about the FIT test also was limited, as their organizations had not yet introduced the FIT test as the primary modality of screening. --- Limited knowledge and awareness-Patients who had previous experience with CRC screening were familiar with CRC screening tests in general, but expressed no or limited familiarity with the FIT test. Instead, other examples of FOBTs, such as a 3-card test or the parasite tests commonly practiced in South American countries, were discussed.
Limited to no knowledge and awareness of CRC screening, including the FIT test, was more evident among patients in the never screened FG groups. Those who had some awareness were cognizant of cancer screening and existing approaches, but unfamiliar with specific screening tests/procedures, resources, or guidelines regarding CRC screening. Patients in the never screened groups exhibited greater difficulty in understanding the questions that discussed "detección de cáncer temprano" (early detection) or "exámenes para detectar el cáncer temprano" (tests for early detection). Awareness was more evident among the group when "CRC screening" terminology was rephrased as "chequeo de cáncer" (checking for cancer) or "exámenes para el cáncer" (exams for cancer). Overall awareness about CRC screening and the FIT test among providers was high; however, knowledge levels about FIT varied. At the time of the interviews, the FIT test was not part of usual care at the respective FQHCs. Most providers and their organizations were still recommending the traditional 3-card FOBT (e.g., guaiac-based test). --- Informational needs-Among patients, there was confusion about what a positive result meant, the process for locating and returning the FIT kits, cost, and follow-up if cancer is detected. Furthermore, patients in focus groups from rural areas, especially patients who were never screened, were more likely to be unfamiliar with the anatomy of the colon. Thus, pictures and a verbal description were used to move forward with the discussion. Informational needs among providers were specific to the newer FIT. Although most were familiar with the three-card FOBT sample collection methods, most providers were unfamiliar with the FIT collection process, its sensitivity and specificity, as well as general acceptability of and reactions to FIT among their patient populations. Even though FIT is relatively inexpensive, providers generally felt that FOBT was more affordable than FIT, a perception that served to perpetuate use of FOBT in this setting (despite the poor FOBT use rates). --- Perceptions of feasibility of adoption and uptake of the FIT test Acceptability-During focus groups, patients were provided a description of the FIT test, shown a FIT kit, and shown how to collect a single sample using the kit. Overall, the reactions were favorable and encouraged further discussion among the participants who were unfamiliar with the FIT test, who wanted to know, "Where can I get a test? Can I take one home?" Regardless of their CRC screening status, FG participants felt it was easy and simple to use. Both patients and providers felt FIT was more acceptable than the FOBT and the colonoscopy since it required collection of only one sample. They appreciated and valued its potential to overcome barriers such as lack of transportation and embarrassment, since it could be done at home. There was general agreement among patients and providers about its ease of storage, ability to maintain privacy, and ease of return (e.g., mail or in person). However, there was concern shared among some patients about the unpleasant nature of dealing with fecal matter and challenges with passing a stool in general. --- Motivated to stay healthy-Patients also discussed wanting to live longer and have good health for their family. They were highly motivated to talk with their health care providers about the FIT test. They also wanted information about how and where they could access the test.
--- CRC screening impediments-Main concerns expressed among both participant groups were costs related to screening tests, including the FIT test itself, and subsequent follow-up costs upon a positive FIT test result. Fear of a cancer diagnosis and of undergoing additional, possibly costly, follow-up tests was also mentioned. Commonly held beliefs and social norms (cancer as a taboo, machismo, and male resistance) were also acknowledged and discussed by both patients and providers as impediments to screening. --- 3.2.4 Trustworthiness of test-Although the FIT test was seen favorably due to its simplicity, a few FG participants expressed some doubts. In particular, those who were familiar with colonoscopy were concerned about the FIT's effectiveness as compared to colonoscopy. In contrast, those who had never been screened were keen on process questions and the types of results that would be produced by the FIT test. They had questions such as: "What does a positive result mean? Do I have cancer if positive?" Regardless of FG participants' screening status, there was some skepticism about the reliability and accuracy of the FIT test. For example, FG participants were concerned about the reliability of the sample after exposure to environmental elements (e.g., heat) during mailing. Some FG participants also questioned the test's ability to discern the origin of the blood and to detect occult blood. The majority of the providers interviewed viewed colonoscopy as the gold standard for screening. Screening through a FOBT/FIT was seen as a second-best option, and described as a viable means to address access issues such as lack of health care insurance. --- Messaging and Communication Providers shared a variety of impediments to CRC discussions and screening as well as strategies to overcome them. Impediments included unavailability of educational materials for patients with low literacy levels or limited English proficiency, lack of health insurance, and fear. Patients echoed this educational material/information void. Common strategies to engage patients in a CRC screening dialogue included personalizing messages to emphasize the importance of early prevention and describing screening as life-saving. FIT was offered as an alternative to colonoscopy, as a strategy to overcome the cost barrier to screening or to reach individuals unlikely to take up colonoscopy. Another access strategy used by providers was asking clinic staff and family members to serve as the patient's interpreter to overcome language barriers. There was general consensus among providers that health education materials are valuable and serve as a primer to engage patients in dialogue about screening. Providers also felt that preparatory education strategies would facilitate more informative conversations about screening. Both patients and providers identified the long clinic waiting time as an optimal time for educating patients about CRC and screening. --- Discussion and Conclusion --- Discussion Most of the current research on barriers and facilitators to CRC screening has focused on FOBT and colonoscopy. Research specific to FIT testing has recently gained attention, especially among the international scientific community (Sinnott et al. 2015, Chiu and Chen 2015). Research conducted by Coronado and colleagues (Coronado et al. 2015) suggests that English-speaking individuals had more awareness of FIT testing than Spanish-speaking individuals.
Beyond such aspects, prior to the current study, perceptions specific to the feasibility of FIT uptake were unknown for Latinos who prefer to receive health information in Spanish. This study sheds further light on Latino perceptions about the FIT, perspectives on FIT testing from health care providers, and provides findings relevant to the messaging/communication of FIT to Latinos. --- Overall there was a lack of awareness of the FIT test-A cross-cutting theme from both patient and provider data was the lack of knowledge about the newer FIT. At the time this study was conducted, FIT was relatively new. In fact, FIT was endorsed by a body of physicians in 2008 to replace the older FOBT (Lee, Boden-Albala, et al. 2014, Lee, Liles, et al. 2014). Yet, 6 years later, many of the health care providers interviewed were from FQHCs that had not yet transitioned to the FIT and were primarily using the guaiac-based FOBT, which may limit opportunities to have patient-provider discussions about the FIT. It is expected that as additional clinical institutions adopt the FIT into standard practice, awareness will increase among providers, and concerns over the cost of FIT may be mitigated by increased acceptance and uptake by patients. Among focus group participants, general CRC screening awareness appeared low during initial conversations with groups who were never screened, especially among those that took place in rural areas. Using additional plain language examples to describe general CRC and screening concepts mitigated this challenge. Participants, regardless of past CRC screening history, had generally low awareness and knowledge of the FIT test. These findings speak to the novelty of the FIT test among underserved populations and the health disparity gap in the diffusion of health innovations/discoveries (Chu et al. 2008, Freeman 2004). Despite the increasing acceptance of the FIT among the health care community (Lee, Boden-Albala, et al. 2014, Lee, Liles, et al. 2014), a significant lag time still exists among our study's populations. Our findings did indicate that patients who had prior experience with traditional 3-card FOBT testing were more familiar with the general process of collecting an annual stool specimen. This also applies to providers who generally use the older FOBT. Both segments of the study population can be viewed as the low-hanging fruit for initiating intervention efforts, which can begin by engaging this group to perform a simpler test. Informational needs were driven by a multitude of factors, including knowledge and awareness level and prior experience or participation in CRC screening. Participants' questions focused mostly on clarifying the process of FIT collection, mailing and testing, but a few participants, particularly those who had never been screened, were skeptical or had doubts about the effectiveness and trustworthiness of the FIT test. This is important to note as commercially available FIT options have varying performance characteristics (e.g., differences in sensitivity/specificity) (Lee, Liles, et al. 2014). Messaging from providers or from strong marketing campaigns that emphasize one screening option (e.g., DNA test, colonoscopy) may influence Latinos' views on screening effectiveness. There are national efforts from the American Cancer Society and National Colorectal Cancer Round Table Consortium to unify messaging among various stakeholder groups (e.g., providers, patients, insurance companies).
--- 4.1.2 Overall enthusiastic response to FIT test-Focus group participants provided enthusiastic feedback and positive reactions to the FIT test, demonstrating potential receptivity and acceptability. Although some of the commonly cited impediments to CRC screening were mentioned, findings suggest that participants were less apprehensive about FIT [compared to colonoscopy], citing it as simple, easy to use, and private. Latinos in this study can be viewed as late adopters of FIT, according to Rogers' (Rogers 2003) Diffusion of Innovation Theory. However, findings support that Latinos may be viewed as innovators or early adopters of FIT when provided with educational resources that are salient and reflect their situational circumstances, as supported by the Preventive Health Model (McQueen, Tiro, and Vernon 2008, Myers et al. 2007, Tiro et al. 2005). Several elements are required for innovations such as FIT to become widely adopted. However, impediments in the social system (e.g., policy), adopters (e.g., FQHC late adoption of FIT), and communication channels (e.g., lack of culturally salient material) can limit the rate of FIT adoption among underserved populations. The U.S. Preventive Services Task Force recommends both FIT and colonoscopy as primary methods for CRC screening (U. S. Preventive Services Task Force et al. 2016). Yet, providers viewed the FIT test not as an effective primary option, but as a means to overcome access and other structural barriers related to CRC screening. Nevertheless, attitudes towards FIT were favorable as a means of achieving the goal of an up-to-date, CRC-screened patient. These findings support a survey conducted by Baker and colleagues (Baker et al. 2015) that examined clinicians' attitudes, practice patterns, and perceived barriers to CRC screening. Participants in that study agreed that colonoscopy is less accessible to patients than FOBT tests. A possible recommendation is to increase awareness of FIT, to educate providers about the Task Force recommendations, and to emphasize the message that "the best test is one that gets done" (Gupta et al. 2014). When given a choice, many patients prefer FIT to colonoscopy (Inadomi et al. 2012). Further, recent studies have seen greater uptake of CRC screening in practices that offer FIT (Khalili, Higuchi, and Ananthakrishnan 2015, Verma et al. 2015). This is an important consideration among FQHCs and community clinics aiming to meet two of the most widely used sets of health care quality performance measures for chronic disease screening in the US (e.g., Uniform Data System [UDS] and Healthcare Effectiveness Data and Information Set [HEDIS]) (US Department of Health and Human Services 2015, HEDIS 2016). --- Messaging should consider literacy, social norms, beliefs and practices-This study's findings demonstrated that there is still a need to address certain Latino cultural beliefs (e.g., cancer as taboo, machismo) and reduce fear and possible stigmatization from communities and their families. CRC screening promotion messages should be responsive to these realities and address these beliefs. This reaffirms the literature on addressing the appropriateness of health information for the user (Doak, Doak, and Meade 1996). Messages should also empower patients with the information needed to understand the saliency and relevance of CRC screening, where to access the FIT kit, how to complete the test, and follow-up procedures, in plain language, avoiding technical terms such as "early detection" in Spanish.
Providers and other health-related staff should be attentive to patients' awareness and knowledge levels. Latinos with low awareness and knowledge of CRC screening may need additional information on human anatomy (e.g., Where is the colon?) before being engaged in a CRC screening discussion. Messages may also draw on general reactions garnered in this study, such as the FIT test's simplicity and privacy, when raising awareness of the FIT test. There was also a need for Spanish-language education materials. The availability of these materials was seen as being of great benefit and would facilitate CRC screening discussions with patients. Moreover, capitalizing on long wait times to provide this education was seen as a promising strategy to engage patients in CRC screening education (e.g., education video), and such a strategy is supported by other studies (Gwede et al. 2015, Davis et al. 2016). Health clinics could also empower non-clinician staff in CRC prevention/education strategies. Preparatory education would help increase patient knowledge and awareness and prime patients about CRC screening before they see their providers. Finally, messaging about health care innovations/discoveries must also aim to reach various disadvantaged populations such as Latinos and the institutions that serve them. As evidenced by this study's findings, awareness and knowledge of FIT were limited among patients and providers. As new discoveries are introduced (e.g., DNA blood testing for CRC or advances in Precision Medicine), research methods similar to those employed in this study are required to evaluate acceptance and to document informational needs so that innovations can be further disseminated. --- Conclusion Our study revealed low knowledge and awareness among patients and providers about the newer FIT. Findings also support high receptivity to this mode of screening, suggesting a need for education to increase awareness and adoption. This might be accomplished in a variety of ways. For patients, this might include the provision of dual-language patient education materials and media. For providers, it might entail brief educational updates at staff meetings to highlight innovations in CRC screening. Overall, the positive receptivity by providers is likely to position FIT as an important primary screening option (along with colonoscopy) for average-risk individuals, consistent with national guidelines (U. S. Preventive Services Task Force et al. 2016). --- Emergent Themes (illustrative quotes) "Most of patients in that demographic do not have insurance, majority of them, I'd say 75% so the only one that we have is the occult blood test (3 cards)... Obviously the colonoscopy is preferred but the current program is five hundred dollars and it may as well will be a million as far as they're concerned, so I offer it but most of them decline." [Provider] "The problem with this is that...there isn't a guarantee...I think, a colonoscopy gives you information about the inside of the colon, while this [FIT] I think doesn't." [Patient, Previously screened] "We do not recommend FIT, the first choice is to send them for a colonoscopy...but then of course when they're not funded, they don't want to go for that and then, the second better is the FIT, which is available, it's free for most of our patients." [Provider]
Objective: Colorectal cancer (CRC) screening efforts have effectively reduced CRC morbidity and mortality, yet screening remains relatively low among Latinos. The study's purpose was to document the awareness/knowledge of Fecal Immunochemical Test (FIT) among Latinos, gain
Introduction The importance of socioeconomic determinants of health such as income, educational attainment or occupation has been well established [1,2,3,4], although the relationships among them and the causal pathways linking socioeconomic factors with men's and women's health are not yet fully clear. The special relevance of educational attainment to health has been highlighted by a wide range of studies which have shown that the most highly educated individuals have better self-rated health (SRH) [5] as well as lower morbidity and mortality rates [6]. This relationship is explained in various ways [7,8,9]. From an individual perspective, higher education, as a human capital endowment [10], is related to higher income and improved working conditions, which have been shown to result in better health [11,12]. Moreover, a higher level of education provides better cognitive skills and access to information, which can lead more highly educated people to have access to better means of improving their health. It is well documented that more highly educated people report a greater sense of control over their lives and, hence, exhibit healthier behaviours [13]. Indeed, less educated people smoke more [14], consume more alcohol [15] and are less physically active than their more highly-educated counterparts. From a social viewpoint, higher education is related to greater social integration, which provides social support, influence and access to resources, all of which contribute to better individual health [16,17]. It also leads individuals to choose better areas in which to live, where there is greater access to spaces for physical activity and to health care resources, and where the possibility of crime and violence is curbed [7]. From a gender perspective, gender differences in health are well documented. Male mortality rates are higher than women's, although women report more symptoms, use more health care services than men, and tend to report worse SRH [18,19,20,21]. Women's lower SRH may indicate female socioeconomic disadvantage due to lower income, poorer working conditions, less economic independence, etc. As pointed out before, education may result in better health, although the issue of how the benefits of education on health differ between men and women has received little attention and the few studies that do focus on the subject have thus far failed to yield any clear conclusions. Some recent studies report higher health returns to education for women than men in the USA [22,23], although others find the opposite in Europe [24,25], while some researchers report no statistically significant difference [26,27,28]. The aim of the present paper is to delve deeper into the relationship between gender, education and health. The analysis focuses on the active Spanish population, with SRH being the measure considered to account for the health level of individuals. --- Methods --- Sample selection European Union statistics on income and living conditions (EU-SILC) provide the reference source for comparative statistics on income distribution and social inclusion in the European Union. In Spain, almost 15,000 private households are selected each year to represent all the private households in the country, and all their members aged 16 and over are interviewed. They provide information on household and personal income, education, health, employment, economic deprivation, childcare and household conditions. A total of 28,210 individuals completed the questionnaires in Spain for the 2012 wave.
We only considered those respondents between 25 and 65 years of age who were working either part-time or full-time, or who were unemployed or freelance. From this selection, 288 individuals living in the autonomous cities of Ceuta and Melilla were excluded for reasons of sample homogeneity, as were those who presented missing values in our dependent variable (148 individuals did not declare their health status). Therefore, a total of 14,120 individuals were finally included in our analysis. --- Self-rated health and educational attainment Health status was measured by individuals' SRH. Indicators of SRH have proved to be good predictors of mortality rates [29,30]; they are multidimensional measures which capture different aspects of individuals' health, such as physical and mental status, and are widely used when analysing determinants of health [31]. Respondents were asked to rate their own health (How would you rate your health in general?), choosing from among five possible answers: very good, good, fair, bad and very bad. Answers were dichotomized into a dependent variable with two categories: good if the individual's valuation was very good or good, and bad otherwise. With this variable, we formulated a binary logit model. Individuals' educational attainment and its influence on their health were studied by considering two categories: lower educated, which includes those who hold primary or secondary studies or who reported no formal education, and higher educated, which includes those who completed tertiary education (mostly university). --- Demographic variables Among the covariates considered, we included age, splitting the sample into three intervals: from 25 to 40, from 41 to 55, and from 56 to 65 years old. The lower limit is justified by considering that, at that age, individuals have already completed their academic training and, hence, their educational attainment can be measured more accurately. Gender inequalities were studied by means of a dichotomous variable, and we also took into account whether the individual was an immigrant, defined as having been born outside the country. --- Socioeconomic variables In order to consider an individual's situation in the job market, we split respondents into four categories: freelance, part-time worker, full-time worker, and unemployed. Part-time workers were also split into two additional groups, depending on the reason why individuals work less than 30 hours per week. They can either be forced into this type of contract for various reasons (such as studies or training commitments, sickness, housework or because they cannot find a full-time job), or may opt for such employment of their own accord. Individuals' income was computed by calculating their equivalised income, according to the so-called modified OECD equivalence scale (which assigns a weight of 1 to the first adult in the household, 0.5 to each additional member aged 14 and over, and 0.3 to each child under 14). Moreover, an especially disadvantaged household economic situation was captured by the variable "material deprivation". Household composition was also analysed, as was whether individuals belonged to a family containing economically dependent members. --- Contextual variables Certain elements concerning where individuals live may determine the final impact of their personal factors on their health [32]. Introducing contextual characteristics ensures that we do not lapse into any ecological or atomistic fallacies [33] when drawing inferences.
With this aim, the degree of urbanization of the location where individuals live was included in the analysis, since sparsely populated areas tend to lack certain basic facilities, such as primary healthcare centres and hospitals, and access to them may prove more difficult. A further negative influence may be the presence of noise, pollution, dirt or other environmental problems in the area where they live, in addition to crime or vandalism issues. Hence, we split living areas into two categories: favourable or unfavourable environment. --- Other variables We took into consideration unmet need for health care: whether the individual reported being unable to visit the doctor on at least one occasion, when necessary, in the past twelve months. This may have been due to cost, waiting lists or travel difficulties, or to the respondent having decided to wait until the symptoms disappeared. Delayed and foregone medical care is a good indicator of inequalities in access to health and can be associated with prolonged morbidity and increased severity of illness [34,35]. Recent studies point out that this indicator has increased in Europe in recent years [36]. --- Statistical analysis For the empirical analysis, we used a binary logistic regression, reporting the odds ratios and their significance level. We adopted a multilevel analysis due to the hierarchical nature of our data, with two levels, individual and regional, in order to analyse the possible relationship between individuals' health and the particular characteristics of the region where their place of residence is located [37]. This perspective allows us to distinguish between individual and environmental factors which affect health. --- Results Table 1 shows that individuals in the selected sample report good health in general (86%), a fact reinforced for higher educated individuals (92%). Moreover, the percentage of men (54%) is higher than that of women (46%), although the latter have better educational attainment. Approximately half of the individuals are middle-aged (46%), with a full-time job (55%), and have minors in their care (53%). The majority of the sample are of Spanish nationality (90%), have no post-secondary studies (65%), visit the doctor when necessary (93%), and live in a highly urbanized (71%) and favourable (78%) environment. When stratifying the sample by educational level, differences appear with regard to the situation in the labour market. The unemployment rate of less educated individuals (31%) is double that of their more highly-educated counterparts (15%). In addition, the percentage of full-time workers is lower in this group (48%). This may lead to below-average income in the group (12.66 versus 20.24 among highly educated individuals) and to them living in less populated areas (34% versus 20%) to a greater extent. Table 2 presents the odds ratios for the probability of reporting good health among respondents in relation to their educational attainment. Higher educated individuals are more likely to report good health than less educated individuals in the unadjusted model (OR: 2.52, 95% CI: 2.23-2.83). Nevertheless, the odds ratio related to educational attainment changes to 1.67 (95% CI: 1.46-1.90) in the final model, when individual and contextual characteristics are introduced into the estimation, although it remains statistically significant. Other interesting results can be obtained from the estimations reported in Table 2.
In the model adjusted only for personal factors, we found a negative age gradient since, as individuals grow older, the likelihood of them reporting good health decreases (OR: 0.41, 95% CI: 0.36-0.46 and OR: 0.20, 95% CI: 0.17-0.23). Being a woman or an immigrant also reduces the likelihood of reporting good health (OR: 0.84, 95% CI: 0.76-0.93 and OR: 0.79, 95% CI: 0.67-0.93 respectively). When the rest of the covariates are included in the estimation, these results remain fairly stable, except those concerning being an immigrant, the odds ratio for which becomes non-significant. As for the remaining variables, income displays a positive albeit small gradient (OR: 1.02, 95% CI: 1.01-1.02). In addition, individuals who suffer material deprivation (OR: 0.57, 95% CI: 0.47-0.69) are much less likely to report good health. The odds ratio of unemployed people is also lower (OR: 0.57, 95% CI: 0.50-0.64), as is that of part-time workers, although in the latter case only for those workers who are forced to accept a part-time job but who would like to work on a full-time basis (OR: 0.71, 95% CI: 0.57-0.87). The same effect occurs when individuals live in an unfavourable environment (OR: 0.63, 95% CI: 0.56-0.71) or in a low urbanized area. As the aim of this paper is to investigate the effects of education and gender on health, we performed different estimations, stratifying by educational attainment and sex. Table 3 summarizes the results of the analysis of gender health inequalities at each educational level: higher and lower educated individuals. Regarding less educated individuals, women's odds of reporting good health are around 15% lower than men's, with the odds ratios being significant in almost all estimated models (unadjusted and adjusted for the different covariates). With regard to the more educated, women show a lower likelihood of declaring good health than men in all the estimations carried out, although the odds ratios are not statistically significant in any of the cases. Looking at the disparities between higher and lower educated individuals (detailed results available from the figshare repository at the following URL: https://figshare.com/s/61f663a75e1bc50a83b3), we found that certain labour situations characterized by precariousness, such as working part-time not through choice, and household material deprivation are significant only in explaining less educated individuals' health, specifically where women have a higher risk of presenting poor health. This outcome might be due to women attaching greater importance to family and other life dimensions [38] and, hence, tending to choose non-standard jobs in an effort to strike the right work-life balance [39]. Table 4 displays the results of measuring the effect of education on SRH when the sample is stratified by sex. Achieving a higher level of education increases the likelihood of reporting good health more for women than for men (OR: 2.74, 95% CI: 2.32-3.23 and OR: 2.38, 95% CI: 2.00-2.82 respectively) in the unadjusted model. However, when the analysis is controlled for the remaining covariates, the differences between men and women disappear. Hence, it may be concluded that the general effect of educational attainment on health is equal for men and women, and that both experience a 69% increase in the odds of reporting good health when they achieve a higher level of education.
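The estimation strategy described under Statistical analysis can be illustrated with a brief sketch. This is not the authors' code: the file and variable names (good_health, high_edu, region, etc.) are hypothetical placeholders, and region dummies are used here as a simple stand-in for the multilevel (random-intercept by region) specification reported in the paper.

```python
# Illustrative sketch only: binary logit for good self-rated health with odds ratios.
# Dataset path and column names are hypothetical placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("eu_silc_2012_spain.csv")  # hypothetical EU-SILC 2012 extract

# Dichotomize SRH: 'very good'/'good' -> 1 (good), anything else -> 0 (bad)
df["good_health"] = df["srh"].isin(["very good", "good"]).astype(int)

# Plain logit with region dummies approximating the random-intercept-by-region model
model = smf.logit(
    "good_health ~ high_edu + female + C(age_group) + immigrant"
    " + C(labour_status) + eq_income + material_deprivation"
    " + unfavourable_env + C(urbanization) + C(region)",
    data=df,
).fit()

# Exponentiate coefficients and confidence limits to obtain odds ratios
odds_ratios = pd.DataFrame({
    "OR": np.exp(model.params),
    "CI_low": np.exp(model.conf_int()[0]),
    "CI_high": np.exp(model.conf_int()[1]),
})
print(odds_ratios)
```

Exponentiating the coefficients in this way yields odds ratios of the same kind as those reported in Tables 2-4.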
--- Discussion Information concerning the Spanish working population extracted from European Union statistics on income and living conditions (EU-SILC) reveals that higher educated individuals report better health more often than less educated individuals do. Our binary logistic analysis, controlling for gender, socioeconomic and contextual variables, and adopting a multilevel perspective (individuals and regions), confirms a significantly higher probability of reporting good health for higher educated individuals. It also points to the existence of gender inequalities in health, as women show a significantly lower likelihood of reporting good health than men. The issue of how education affects women's and men's health differently has been addressed in the literature through two alternative hypotheses. The resource substitution view suggests that, when resources can substitute for one another, the lack of one will produce a smaller negative effect on health when other resources are present [23,40]. Women have fewer socioeconomic resources than men (less economic independence, fewer opportunities for a full-time job, lower authority...). Hence, women's health will benefit more than men's from improved educational attainment, since the presence of educational resources reduces the negative effect of the lack of other resources for women. The opposite view, that of reinforced status, proposes that socioeconomically favoured individuals obtain greater gains from improvements in their resources, thus amplifying the gap when compared to the less favoured. In this case, the health benefits provided by increased educational attainment will be greater for men, and will further men's advantage [23,40]. In order to gain deeper insights into the subject, we conducted separate analyses by educational level, and found that, although less educated women display a lower likelihood of reporting good health than men, there are no statistically significant gender differences in health between higher educated men and higher educated women. This result might be taken as confirmation of the former hypothesis, as women show worse health than men when their educational attainment is lower, whilst improving their educational level allows them to overcome the gap. We carried out a fresh analysis, this time stratified by sex, and found that when educational attainment rises there is a significantly higher increase in the likelihood of reporting good health for women than for men. Nevertheless, this result is only present in the unadjusted model and does not remain when all the socioeconomic and contextual covariates are considered. The final odds ratios of the effect of educational attainment on health encountered for women and men are the same when all control variables are taken into account. Hence, analysing all the results together, it seems that education has the same direct effect on health for men and women, although at the same time it provides women with an increase in other socioeconomic resources (for instance, it has been shown that higher levels of education lead to reductions in the gender wage gap suffered by women in the job market [41]), reducing men's advantage and enhancing women's health to a greater extent. Thus, education allows women to overcome the gender health gap observed within the group of less educated individuals. Our analysis has certain limitations.
The data source selected to conduct the study fails to provide any information on individuals' behavioural risk factors such as tobacco and alcohol consumption, exercise, or whether respondents keep to a healthy and balanced diet. Although healthy lifestyles are important determinants when explaining SRH, we decided to carry out the analysis with the EU-SILC as it provides more detailed information than other surveys about the socioeconomic situation of individuals, particularly with regard to personal and household income and social exclusion. This aspect is quite important as regards ascertaining whether precarious work or unemployment and the consequent loss of income might affect men and women differently, particularly in the current economic crisis. Some recent papers have focused on the influence of individuals' socioeconomic background [42,43] and have pointed out that health returns to education depend on socioeconomic origin. They show that the social position of the family with whom individuals live when they are young is crucial vis-à-vis determining their current education level, their health habits and their socioeconomic position. We did not have such information available, although it does pose an interesting subject for further research. Despite these limitations, our study provides a valuable analysis of the influence of educational attainment on gender inequalities in SRH. The work highlights the importance of promoting education, since this raises the general health level of the population and tends to reduce socioeconomic gender inequalities over time. --- All relevant data are available from the figshare repository at the following URL: https://figshare.com/s/346307383cda044916c5. --- Author Contributions Conceptualization: Sara Pinillos-Franco, Carmen García-Prieto. --- Data curation: Sara Pinillos-Franco. Formal analysis: Sara Pinillos-Franco. Funding acquisition: Carmen García-Prieto. Investigation: Sara Pinillos-Franco, Carmen García-Prieto. Methodology: Sara Pinillos-Franco, Carmen García-Prieto. --- Project administration: Carmen García-Prieto. Resources: Sara Pinillos-Franco, Carmen García-Prieto. Software: Sara Pinillos-Franco. Supervision: Carmen García-Prieto. Validation: Carmen García-Prieto. Visualization: Sara Pinillos-Franco. Writing - original draft: Sara Pinillos-Franco, Carmen García-Prieto. Writing - review & editing: Sara Pinillos-Franco, Carmen García-Prieto.
Women tend to report poorer self-rated health than men. It is also well established that education has a positive effect on health. However, the issue of how the benefits of education on health differ between men and women has not received enough attention and the few existing studies which do focus on the subject do not draw a clear conclusion. Therefore, this study aims to analyse whether the positive influence of educational attainment on health is higher for women and whether education helps to overcome the gender gap in self-rated health. We analyse cross-sectional data from the 2012 European Union statistics on income and living conditions. We use a logit regression model with odds ratios and a multilevel perspective to carry out a study which includes several individual and contextual control variables. We focused our study on the working population in Spain aged between 25 and 65. The final sample considered is composed of 14,120 subjects: 7,653 men and 6,467 women. There is a gender gap in self-rated health only for the less educated. This gap is not statistically significant among more highly educated individuals. Attaining a high level of education has the same positive effect on both women's and men's self-rated health. Although we did not find gender disparities when considering the effect of education on health, we show that women's health is poorer among the less educated, mainly due to labour precariousness and household conditions.
Introduction In public opinion, the notion of risk, as a situation involving exposure to danger, is heteroclite and complex. According to Slovic (1987), it includes uncertainty, fear, catastrophic potential, possibilities of control and equity, along with risk for future generations. One of the shared assumptions concerning risk is that there is a difference between reality and probability (Zinn, 2008), as well as between experts and non-experts. Whereas economists conceptualize risk as expected utility but not as physical damage (Renn et al., 1992), research on lay perceptions of risk identifies the psychological and cognitive aspects of risk evaluation. Most psychological theories of risk were elaborated to study risk from the perspective of lay perceptions, with their biases, as opposed to the expert's approach. They expressed doubts about people's rationality when facing risk. In the psychology of risk, some descriptive approaches have already focused on cognitive factors, such as in Prospect Theory (Kahneman & Tversky, 1979), or on risk characteristics, with the psychometric paradigm (Fischhoff et al., 1978; Lichtenstein et al., 1978). These descriptive approaches concentrate on different biases in how people react when they have to decide under uncertainty. Researchers highlighted that there is a need to legitimize what people who are concerned by risk think about it. In this vein, Kahneman (1991) pointed out that psychological research on risk and judgment under uncertainty should be less exclusively concerned with cognitive factors. For Joffe, 'the response to risk is a highly social, emotive and symbolic entity' (2003: 42). The legitimation of public opinion affected by risks is also explored (Slovic, 1987; Tulloch & Lupton, 2003; Zinn, 2008). In this context, Tulloch and Lupton (2003) studied risk by using the spontaneous evocations technique; when associated with emotions, such as fear and dread, risk was considered as dangerous and unknown. According to these authors, uncertainty, insecurity and loss of control were associated with risk, as were some positive aspects, such as adventure, excitement, joy and the opportunity to excel. Interconnections between social objects have been highlighted by a series of authors (Bonardi et al., 1994; Di Giacomo, 1980; Larrue et al., 2000; Roland-Lévy et al., 2010). These empirical contributions suggest that a social object cannot be completely isolated from other social objects, that is to say the representations of some social objects are built on earlier representations, as is the case for banks, savings and money (Vergès, 2001). In this context, risk and the economic crisis involve different conceptualizations, which might be interconnected. In Europe, the economic crisis is no longer viewed as a short-lived paroxysmal moment with an immediate and dramatic impact (Eurobarometer, 2013). In France, at the time of writing, most people consider the economic crisis as a fact of life. This creates a situation with overall economic and social difficulties. The crisis gives rise to uncertainty about the future and can be considered as a collective threat (Ernst-Vintila et al., 2010). In an international study comparing four European countries (France, Greece, Italy and Romania), Galli et al. (2010) confirmed that there is a semantic background common to the economic crisis, credit and savings.
Even if there were some differences in terms of economic positions and sociocultural situations, unemployment was identified across all four countries as a structuring element of crises. Gangl et al. (2012), whose study explored lay-people's and experts' social representations of the financial crisis, reported similar findings. Consequently, as risk analysis requires an analytical framework integrating both social and psychological dimensions (Breakwell, 2007), crisis should also be studied as a social object. Along with Joffe, we consider that, more than a critique of 'models of "perception" in the risk sphere, where people are regarded as erroneous perceivers' (Joffe, 2003: 67), the Social Representation Theory is an interesting methodological framework for analyzing lay perceptions, and thus makes it possible to complete existing approaches. It complements the descriptive approach with information about how people construct a social object; moreover, it may contribute to linking social knowledge with behaviors. Based on Durkheim's (1898) notion of collective representations, the Social Representation Theory was developed by Moscovici (1961). For him, social representations are socially constructed and shared forms of common knowledge. Research conducted since Moscovici's pioneering study (1961) on social representations has sought to develop new methods for studying them. Today, the main extensions of this theory are based on the structural approach. According to this approach (Abric, 1984; Flament, 1981), a social representation is made up of a central system (central core) surrounded by a peripheral system containing different categories of elements. The central system is composed of common elements, which are shared by most of the members of a group, whereas the most distant peripheral zone allows the expression of more individual differences. According to Moscovici, two main processes are involved in the creation and development of a social representation: objectification and anchoring. Objectification is the process whereby complex elements are translated into an understandable social reality (e.g. how lay-people, without any expert knowledge about these topics, describe crises and risks). New elements are classified according to pre-existing mental structures, or standard categories, via the anchoring process (e.g. lay perceptions of crises and risks are part of broader systems involving socialization processes, cultural contexts and historical backgrounds). Social representations are not intentionally communicated but are disseminated in daily discourse, for instance through images or behaviors (De Rosa et al., 2010). By analyzing verbal productions, the Social Representation Theory provides an appropriate theoretical framework for exploring lay explanations of topics such as the recent economic recession. As Zappalà states, 'Social Representation Theory is relevant for identifying the components, structure and developments of economic representations' (2001: 200). Vergès defines economic representations as 'social representations in a particular field, that of the economic society' (1989: 507). Lay perceptions of the economic crisis have already been studied at the social level, using the Social Representation Theory, with analysis of the spontaneous words that lay people associate and share when they think about the economic crisis. For Leiser et al. (2010) as well as for Gangl et al. (2012), the social representation of the crisis is mainly descriptive.
O'Connor (2012) identified three themes used to explain the economic recession: 'power', 'ordinary people' and 'fatalism', without any economic explanation. In the same vein, Leiser et al. (2010) showed that lay perceptions of the factors involved in financial and economic crises are organized around two major conceptions: 'economy' from an individual perspective and 'economy' as a complex system, the first being stronger than the second. Combining lay representations of crises and their links with risk knowledge is a new way of considering the significance of crises. According to Vergès (2001), because some social objects are built on earlier ones, a social representation is not necessarily completely autonomous; for example, according to Morin and Vergès (1992), the social representation of AIDS was initially a compromise between illness and social curse. In the same vein, economic social representations are anchored in both previous knowledge and context. Consequently, representations do not exist in isolation. Vergès (1998) states that social representations can be embedded, reciprocal or intertwined. Likewise, as shown by Roland-Lévy et al. (2010), economic representations are both anchored in previous knowledge and interconnected. In their study, the representations of credit and savings are influenced by the social representation of the economic crisis. Flament (1994) and Abric (1994) clearly established the relationship between social representations and practices. Nevertheless, it remains unclear whether it is the social representation that determines the behavior or whether it is, as pointed out by Guimelli (1994) or Roland-Lévy (1996), a change in a social practice which will modify the social representation itself. Moreover, according to Ernst-Vintila et al. (2010), there is a relationship between thinking about a crisis and the intention to act. Also, as has been shown (Kmiec & Roland-Lévy, 2014), it is interesting to study the capacity to act when risk in general, as well as other specific risks connected to the financial crisis, are approached together. This provided a better understanding of what worries people and why they do, or do not, undertake specific actions, e.g. investment, consumption, saving or spending. Therefore, the combination of the social representations of risk and crises can contribute to a better understanding of why people engage, or not, in certain actions when facing an economic crisis. These actions could be influenced by the level of personal involvement and, more precisely, by perceived ability to act. Personal involvement is an indicator of how individuals are connected with a social object or situation. Flament and Rouquette (2003) identified three components of personal involvement: (1) how the object is valued (i.e. the social object represents something that is important vs. unimportant); (2) how individuals identify with the social object (i.e. individuals feel personally involved with the social object vs. feeling that the object concerns everyone); and (3) perceived ability to act (i.e. we can act when facing a social object vs. feeling powerless). With this in mind, we claim that the economic crisis and risk are two distinct social objects; however, these two social objects should be interconnected rather than autonomous. They might generate common knowledge shared across different social groups. Finally, these social objects should influence behavior through the perceived ability to act.
The three main hypotheses for this paper are (H1) that there are two distinct social representations of risk and the crisis; (H2) that there are connections between the social representations of risk and crisis; and (H3) that, in the context of a crisis, the two social representations, risk and crisis, will have an effect on the perceived ability to act. --- STUDY 1 The aim of the first study is to test the existence of two distinct social representations of risk and the crisis (H1), and to identify how verbal productions around these two notions are structured. --- Method --- Participants Seven hundred and thirty-two students took part in this study; among them, 490 (67 %) were women. Participants' mean age was 21.62 years; they were enrolled in various programs, including humanities and social sciences (n = 290), business and management (n = 267), science (n = 75) and technical studies (n = 68). --- Procedure and measures The technique employed is the free-association task, which makes it possible to identify and to describe social representations of a given social object. It allows the latent dimensions structuring the semantic world to be highlighted; it also provides access to the figurative nucleus of the social representation (De Rosa, 1988). As pointed out by Moliner et al. (2002), analysis of verbal productions provides access to relationships that can connect different concepts together. According to Vergès and Bastounis (2001), this technique, based on spontaneous evocations, allows the structure of both the central system and the peripheral system of the social representation to be defined. It also allows the hierarchy of the mentioned terms to be determined at the collective level. In order to identify the content of the social representations of risk and the crisis, two free-association tasks, in which the target terms were 'risk' and 'crisis', were administered. Participants had to answer the first association task based on the question: 'What do you think about when you read the term "risk"?' For each word or expression they produced, participants then had to say whether it evoked something positive, neutral or negative in relation to the target term 'risk'. Participants received an email with an invitation to fill in an online questionnaire. They were told that the survey focused on students' representations. No other information was given to the participants in order to limit any priming effect. They were told that their responses would remain anonymous and confidential. Participation was voluntary and no incentive was offered. After answering a few demographic questions (sex, age, type of education and year of study), participants answered the free-association task based on the inductor 'risk'. Then, for each word or expression they produced, participants had to give its valence in relation to the target term 'risk'. The same questions were asked for the target term 'crisis'. All participants were presented first with the risk target term and then with the crisis target term. --- Data analyses To define the hierarchical structure of the social representations based on the prototypical analysis, two kinds of data were intersected: (1) the frequency of the evocations (i.e. how often a term is spontaneously mentioned, which is an indicator of the degree of word-sharing among participants) and (2) the order of appearance (i.e. among the first or the last terms to be mentioned), known as the rank of appearance.
This reveals the degree of proximity between the target term and associated words or expressions (Vergès, 1989, 1992); it is an indicator of the accessibility of the word in the participant's memory (Abric, 2003). In an association task, the words or expressions among the first to be produced (lower rank) with a high frequency are considered to be salient and important to the participants. This becomes an indicator of the typicality (Rosch, 1973) of the words cited, with two characteristics: (1) great accessibility (typical elements are cited among the first) and (2) shared accessibility (the most typical elements are cited by a large number of participants). Based on these elements, it is assumed that the terms or expressions with a high frequency and a low rank (cited among the first ones) are most central and thus belong to the common and shared central system. Those mentioned less often and with a higher rank (i.e. among the last to be listed) are considered more peripheral. The peripheral elements are organized into three categories: two distinct zones in the near periphery (first near periphery: high frequency and high ranking; second near periphery: low frequency and low ranking), and one zone in the distant periphery, with terms or expressions that are produced at a low frequency and with a high rank, thus allowing space for more individual ideas. --- Results A lemmatization was carried out on the corpus but no categorization was performed. The frequency of occurrence (Vergès & Bastounis, 2001) was considered for each word or expression produced, relative to the total number of participants. Concerning the rank of appearance (among the first terms or among the last), we calculated the mean rank of appearance, which is based on all the ranks produced by all the participants for a given term. This was complemented by the attitudinal valence of the produced term in relation to the target term. Since, on average, participants produced 5.43 words, a low mean rank is established as being from 1 to 2.5; whatever is above 2.5 is considered here as being of a high mean rank. In agreement with Vergès et al. (1994), a term is considered to have a high frequency when it is spontaneously produced by a minimum of 20 % of the participants. --- The social representation of risk As shown in Table 1, based on a minimum threshold of 10 %, the social representation of risk is composed of 10 terms: one term is hypothesized as central (as it has a frequency of occurrence of 67 %, which is higher than the 20 % threshold, and a low mean rank of 1.81); three terms belong to the first near periphery, while the remaining six belong to the more distant periphery (as their frequency of occurrence is below the 20 % threshold and their mean ranks are higher than 2.5). --- TABLE 1 ABOUT HERE The only term that can be hypothesized as central is 'danger'. The idea of danger is shared by a large number of participants (67 % of the sample) in relation to risk; it also has a low mean rank (1.89), the lowest of the social representation, which implies that it is often the first term mentioned. Therefore, danger occupies an important place in the social representation of risk. The first near peripheral zone (high frequency and high mean rank) of the social representation of risk is composed of three terms: 'fear', 'courage' and 'adrenalin'.
As suggested by their position in the social representation, these terms are shared by the participants but they do not correspond to the most important ideas associated with risk. No term belongs to the second near peripheral zone. In the distant periphery, composed of terms that are neither frequently nor immediately mentioned by the participants (high rank and low frequency), it is possible to highlight an opposition among the different elements of the social representation of risk between negative terms (i.e. 'losses', 'uncertainty', 'accident' and 'difficulties') and positive ones (i.e. 'challenge', 'opportunity'). Whereas negative terms indicate consequences ('losses', 'difficulties'), situations ('accident') or a description ('uncertainty') of risk, positive words designate risk as involving a situation creating opportunities, as well as risk-taking seen as a challenge. These contrasting ideas illustrate the contribution of individual differences to the shared representation. To summarize, the social representation of risk appears to be organized around the concept of 'danger', which is shared by two thirds of our sample; it is by far the most shared element. Globally, the terms belonging to the social representation can be organized around three topics: the consequences of risk, the emotions and the actions associated with risk. According to this sample, the consequences of risk are characterized as mainly negative: 'danger', 'losses', 'accident' and 'difficulties' (negative valence). However, one term represents a positive consequence of risk: 'opportunity'. The theme concerning the emotions associated with risk is composed of two terms, which are in the first near periphery: 'fear' (32 %) and 'adrenalin' (20 %). Fear has a negative valence, while adrenalin has here a positive valence. This shows that risk can lead to both positive and negative emotions. Even if those terms are not central, they are shared by a rather large part of the concerned population. Thus, emotions have an important place in this social representation. Two terms compose the actions associated with the topic of risk: 'courage' and 'challenge'; both these terms have a positive valence. 'Courage' and 'challenge' are not actions but concepts related to actions. According to the online Oxford Dictionary, courage is 'the ability to do something that frightens one', in other words the capacity for action when the emotional demand is important; a challenge is 'a task or situation that tests someone's abilities'. Overall, the distant periphery of risk features an opposition between two consequences: 'losses' versus 'opportunity' (which could represent the opportunity to gain). A fairly large number of participants from the sample tend to associate 'losses' (17 %) and 'opportunity' (10 %) with the target term 'risk'. To some extent, this is consistent with Prospect Theory (Kahneman & Tversky, 1979), which frames outcomes in terms of gains and losses under uncertainty, confirming that it is much more unpleasant to lose than it is pleasant to win. In the same vein, the higher proportion of evocations of losses (17 %) than of gains (9 %) could be emphasized here. Nevertheless, their peripheral location in the social representation of risk suggests that these elements reflect differences between individuals rather than something that is shared by the whole population.
The identified social representation of risk is structured around subjective elements and supported by references to emotions and to the 'adrenalin' generated by risk-taking. For the French university students who performed this free-association task, the shared social representation of risk is not directly related to losses and gains, as these ideas are not located among the top-ranking evocations; instead they belong to the elements in the distant peripheral zone, which allows space for inter-individual differences. --- The social representation of the crisis With a minimum threshold set at 10 %, the social representation of the 'crisis' is composed of 16 terms (see Table 2). Two of them are hypothesized as being part of the central system (i.e. both of them having a high frequency and a low mean rank); four terms belong to the first near periphery and the remaining ten belong to the distant periphery. --- TABLE 2 ABOUT HERE The two terms that can be hypothesized as being part of the central system are 'economy' and 'money'. The former is mentioned by almost half of the participants (46 %), with a low mean rank (2), while the latter is mentioned by one third of the students (31 %), with a mean rank of 2.3. These are therefore the two most shared terms; moreover, they have the lowest mean ranks of all the terms composing the social representation of the crisis. While the first near peripheral zone is structured around 'unemployment', 'difficulties', 'finance' and 'politics', there is no term belonging to the second near peripheral zone. In the distant periphery, the different elements of the social representation of the crisis are 'poverty', 'austerity', 'social disorder', 'purchasing power', 'countries', 'recession', 'banks', 'debts', 'fear' and 'opportunity'. Some participants consider the names of certain banks or countries, including Greece, the USA and France, as also being specific to the crisis. As we have seen, the social representation of the crisis appears to be organized around 'economy' and 'money', which are two ideas globally shared by the members of our sample. The terms belonging to the social representation of the crisis can be organized into two main themes: the characteristics of a crisis and the consequences of the crisis. Most of the terms belonging to the first theme, dealing with the characteristics of the crisis, have, according to the participants themselves, a neutral valence: 'economy', 'money', 'politics', 'countries' and 'banks'. Only one, 'finance', has, according to the participants, an overall negative valence in relation to the crisis (i.e. participants expressed that 'finance' evoked something negative in relation to the crisis). This theme is composed of six terms, out of sixteen, including the two most central terms of the social representation of the crisis ('economy' and 'money'). For our participants, this theme has an important place when thinking about the crisis. With the exception of 'opportunity', all the consequences of the crisis expressed by our sample have a negative valence: 'unemployment', 'difficulties', 'poverty', 'austerity', 'social disorder', 'purchasing power', 'recession' and 'debts'. Most of the terms (9 out of 16) belonging to the social representation of the crisis represent consequences of the crisis. Most of them are in the distant periphery; however, 'unemployment' and 'difficulties' are in the first near periphery. This theme, as well as the previous one, is a key theme for the social representation of the crisis.
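The zone classification underlying Tables 1 and 2 (crossing the frequency of evocation with the mean rank of appearance, using the 20 % frequency and 2.5 mean-rank thresholds and the 10 % minimum reported above) can be sketched as follows. This is an illustrative sketch rather than the authors' code, and the toy input data are invented for the example.

```python
# Illustrative sketch of the prototypical analysis: classify each evoked word
# into central / near-periphery / distant-periphery zones by crossing its
# frequency of evocation with its mean rank of appearance.
import pandas as pd

# Toy data (invented): one row per evocation, with the rank at which
# each participant produced the word.
evocations = pd.DataFrame({
    "participant": [1, 1, 1, 2, 2, 3, 3, 3],
    "word": ["danger", "fear", "losses", "danger", "adrenalin",
             "danger", "fear", "challenge"],
    "rank": [1, 2, 3, 1, 2, 2, 1, 4],
})
n_participants = evocations["participant"].nunique()

summary = evocations.groupby("word").agg(
    n_mentions=("participant", "nunique"),
    mean_rank=("rank", "mean"),
)
summary["frequency_pct"] = 100 * summary["n_mentions"] / n_participants

def zone(row, freq_threshold=20, rank_threshold=2.5):
    # Central: high frequency, low mean rank; first near periphery: high
    # frequency, high rank; second near periphery: low frequency, low rank;
    # distant periphery: low frequency, high rank.
    if row["frequency_pct"] >= freq_threshold:
        return "central" if row["mean_rank"] <= rank_threshold else "first near periphery"
    return "second near periphery" if row["mean_rank"] <= rank_threshold else "distant periphery"

summary["zone"] = summary.apply(zone, axis=1)
# Keep only terms above the 10 % minimum threshold used in Tables 1 and 2
print(summary[summary["frequency_pct"] >= 10].sort_values("frequency_pct", ascending=False))
```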
--- Comparison of the two social representations: Risk and crisis The results shown in Tables 1 and 2 indicate different social representations, which nonetheless have some similarities. It can be assumed that 'danger' is a central element of our participants' social representation of risk, and that 'economy' and 'money' potentially belong to the central system of the social representation of the crisis. While the social representation of risk is well balanced in terms of valence (according to the participants, there are five negative terms, four positive ones and one neutral), the social representation of the crisis is, for them, mainly negative (10 negative terms, 5 neutral terms and only one positive term). The social representation of risk is organized around three main themes related to consequences, emotions and actions, while the social representation of the crisis is mainly organized around two themes, one descriptive and one which emphasizes the consequences of a crisis. Among the themes characterizing the two social representations, one is similar and shared by both: it concerns the consequences of both risk and the economic crisis. Some elements, namely 'fear', 'difficulties' and the notion of 'opportunity', are also common to both representations. On the one hand, 'fear' is an emotion associated with both concepts; both risk and crisis lead to an increase of fear, which is the main emotion related to these two concepts. On the other hand, 'difficulties' and 'opportunity' are two consequences of risk and crisis. 'Difficulties' represents the negative consequences, while 'opportunity' represents the positive ones. 'Opportunity' also makes the link between consequences and actions. There is no difference between students according to their university program, except for the rank of the word 'danger', which was produced later by business students (mean rank = 2.37) than by students from other programs (mean rank = 1.61), suggesting that 'danger' is a less immediately salient association with risk for business students. The results of this study suggest that the representation of risk is very similar across the different fields of study; only small differences are noticeable. The same is true for the social representation of the crisis. Words belonging to these two social representations were used to construct the material of the second study. --- STUDY 2 The aim of the second study was to test how the social representations of crisis and risk are anchored. The effects of the social representations on the participants' ratings of perceived crisis seriousness and perceived ability to act were also analyzed. Two hypotheses were tested in Study 2: (1) relations between the social representations of risk and crisis are expected (H2); (2) it is predicted that, in the context of a crisis, the two social representations will have an effect on the perceived ability to act (H3). --- Method --- Participants One hundred and sixteen French students (68 % women) from Rheims University, France, with a mean age of 22.28 years, participated in this study on a voluntary basis without incentives. They came from the following fields of study: psychology (n = 27), marketing (n = 24), management (n = 22) and finance (n = 19); the remaining participants came from diverse other fields, including commerce (n = 7), human resources (n = 4), supply chain (n = 2) and philosophy (n = 1), with 10 not specifying their field.
--- Procedure and measures First, the relationship between the two social representations was explored. Adopting the method recommended by Vergès (2001) for investigating how two social representations may be linked, participants were asked to fill out a questionnaire featuring twenty words that had emerged from the previous study (Study 1) and which corresponded either to one or to both representations. They were asked to indicate whether, in their opinion, the words in the list corresponded or not to 'risk', and whether they corresponded or not to 'crisis'. The order of presentation of the target terms was randomly counterbalanced. The main criterion for selecting these twenty words was their specificity for each of our target terms; we included all the terms down to the limit of 9 %, e.g. 'gains' (a term specific to the economic definition of risk) and 'success' (a possible outcome of a global risky situation) were the only two terms included with a frequency of 9 %. The final list is composed of the following twenty terms: 1) eight terms specific to risk: 'danger', 'courage', 'adrenalin', 'losses', 'uncertainty', 'challenge', 'gains' and 'success'; 2) nine terms specific to crisis: 'money', 'economy', 'unemployment', 'finance', 'politics', 'poverty', 'austerity', 'purchasing power' and 'recession'; 3) three terms common to both risk and crisis: 'fear', 'difficulties' and 'opportunity'. After this first task, participants were also asked to rate their perceived ability to act ('Some people think that acting when facing an economic crisis does not depend on themselves, whereas others think that they can act. What do you think concerning yourself?') on a 7-point Likert scale (from 1 = I can do nothing to 7 = I can act). Responses were provided via a computerized questionnaire distributed by email. --- Data analyses The answers to the questions about the correspondence between the list of words and either 'risk' or 'crisis' enabled us to categorize each word according to one of four possible patterns: the word corresponds neither to risk nor to crisis (pattern 1), only to risk (pattern 2), only to crisis (pattern 3), or to both risk and crisis (pattern 4). According to Vergès (2001), this technique allows gathering information about those words or expressions that are associated by the majority with the object of the social representation, versus those that may be the expression of a more composite or uncertain representation. The answers also enabled us to categorize the participants into four groups for each word: those who consider that the word corresponds neither to risk nor to crisis (group 1), only to risk (group 2), only to crisis (group 3), or to both risk and crisis (group 4). For example, participants considering the word 'danger' as characteristic of risk, and not of crisis, belong to the second group for the word 'danger'; participants choosing 'fear' for risk and also for crisis belong to the fourth group for the word 'fear', and so on. This categorization enabled us to create 20 qualitative variables composed of four categories each. To predict the influence of these variables on the perceived ability to act, they were transformed into dummy variables, which enabled us to conduct multiple regressions. --- Results In this study, participants had to decide whether each of the 20 words corresponds to pattern 1, 2, 3 or 4. The choices they made for each word are displayed, in percentages, in Table 3.
--- TABLE 3 ABOUT HERE Table 3 shows, in percentages, that, among the eight words coming from the social representation of risk, four are related mainly to risk (pattern 2): 'danger', 'adrenalin', 'courage' and 'challenge'. Among the four others, 'uncertainty' is related to both risk and crisis (pattern 4), 'losses' is related to crisis (pattern 3), while 'success' and 'gains' are related neither to risk nor to crisis (pattern 1). Table 3 indicates that the nine words coming from the social representation of the crisis are categorized as typical only of crisis (pattern 3). Among the three words that were common to both social representations, 'difficulties' is categorized as typical of crisis, 'opportunity' is related neither to risk nor to crisis, and the chi-square test indicates that 'fear' does not belong clearly to any of the four patterns (χ²(3, N = 116) = 5.86, p = .119). Specific attention was paid to the word 'danger', which is the central element in the representation of risk. This term was categorized as being specific only to risk, or to both risk and crisis, by 79 % of the participants; this reinforces the results of the prototypical analysis from Study 1, which suggest that 'danger' occupies a central place in the social representation of risk. The word 'danger' is also associated with crisis, or with both risk and crisis, by almost one third of our sample. This could imply that the social construction of the crisis is not based only on an economic description, but also on 'danger'. Although, according to the prototypical analysis from Study 1, 'uncertainty' belonged only to the social representation of risk, in Study 2 it was categorized as specific to both risk and crisis by 48 % of the participants. This result suggests that 'uncertainty' may be part of the peripheral system of the social representation of the crisis. The same comment can be made for the idea of 'losses', which is associated with crisis, and with both risk and crisis, by 41 % and 22 % of participants, respectively. Results concerning the terms coming from the social representation of the crisis indicate that all these terms are categorized as mostly related to crisis. According to the results of the prototypical analysis carried out in Study 1, the words 'economy' and 'money' are hypothesized as being part of the central system of the representation of the crisis. In Study 2, 'economy' is categorized as being specific only to crisis, or to both risk and crisis, by 73 % of the participants; this reinforces the results of the prototypical analysis from Study 1, which suggested that 'economy' has a central place in the social representation of the crisis. The result for 'money' is less straightforward: 58 % of our sample indicated that money was specific only to crisis (42 %) or to both risk and crisis (16 %) (cf. Table 3, line 10). This implies that 'money' does not have such a central place in the social representation of the crisis. 'Difficulties' is a term that was sometimes chosen for crisis (45 %) and sometimes for both risk and crisis (31 %), and less often for risk on its own (10 %). This might be due to the fact that risk is socially perceived as less negative than crisis and its consequences (the analyses from Study 1 highlighted more positive elements for the representation of risk than for that of crisis). This is also confirmed by the position of the word 'opportunity', which is selected more often in Study 2 for risk (20 %) than for crisis (15 %).
Moreover, for the participants, words used to describe risk are also employed to describe the crisis, or both risk and crisis, whereas terms specific to the crisis have less descriptive power for risk (pattern 2 in Table 3). Words such as 'losses', 'danger' and 'uncertainty' (identified as part of the representation of risk in Study 1) are, in Study 2, also chosen as belonging to crisis. Risk means 'danger', 'losses' and 'uncertainty', while the crisis is considered as a specific type of risk (described as 'dangerous', 'uncertain' and a 'source of losses') that has a certain specificity (e.g. 'economy' and 'money'). Positive aspects of risk, such as 'challenge', 'opportunity', 'success' or 'adrenalin', are not often recognized as belonging to crisis. In this study, the perceived ability to act in the context of a crisis was measured because we hypothesized that, in the context of a crisis, the two social representations of risk and crisis would have an effect on perceived ability to act. The mean score for the perceived ability to act was 2.73 (SD = 1.11, observed min = 1, max = 5) on the 7-point Likert scale. The perceived ability to act was normally distributed based on the skewness and kurtosis (skewness = 0.21, kurtosis = -0.58). There was no significant effect of sex or university program on the dependent variable. In order to test the influence of the categorization of each word on perceived ability to act in the context of a crisis, 20 multiple regressions, corresponding to the twenty words, were carried out. In each regression, the categorization of the word was dummy coded as the predictor, with the perceived ability to act score as the dependent variable. In order to correct for multiple comparisons, the Bonferroni correction was applied to our analyses. The traditional α value of .05 was divided by 20 (the number of multiple regressions performed), which resulted in a new α value of .0025. Thus, in order to consider the differences found between the means as statistically significant, the probability (p value) that these differences are not due to chance should be lower than .0025 instead of the traditional .05 threshold. Among the categorizations of the words, only one predicts the perceived ability to act: the categorization of the word 'challenge'. The multiple regression analysis showed that the categorization of the word 'challenge' predicts 16 % of the variance of the perceived ability to act (R² = .16, F(3, 111) = 7.07, p < .001). --- TABLE 4 ABOUT HERE As can be seen in Table 4, the categorization of the word 'challenge' as a term related to crisis does not significantly increase the perceived ability to act compared to its categorization as a term related neither to crisis nor to risk (β = .24, p = .016). The categorization of the word 'challenge' as a term related to risk increases the perceived ability to act compared to its categorization as a term related neither to crisis nor to risk (β = .35, p < .001). The categorization of the word 'challenge' as a term related to both crisis and risk increases the perceived ability to act compared to its categorization as a term related neither to crisis nor to risk (β = .34, p = .001).
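As a hedged illustration of the analysis reported above, the sketch below shows how each word's four-pattern categorization could be dummy coded and regressed on the perceived-ability-to-act score, applying the Bonferroni-corrected threshold of .05 / 20 = .0025. The data layout, column names, file name and the use of ordinary least squares via Python's statsmodels are assumptions made for this example, not the authors' actual code.

# Sketch only: 'study2_responses.csv' is a hypothetical file with one row per
# participant, an 'ability' column (perceived ability to act, 1-7) and, for
# each of the 20 words, a '<word>_pattern' column coded 1-4
# (1 = neither, 2 = risk only, 3 = crisis only, 4 = both risk and crisis).
import pandas as pd
import statsmodels.formula.api as smf

WORDS = ["danger", "challenge", "fear"]   # ...extended to all 20 selected words
ALPHA = 0.05 / 20                         # Bonferroni-corrected threshold = .0025

df = pd.read_csv("study2_responses.csv")  # hypothetical data file

for word in WORDS:
    # Treatment coding with pattern 1 ('neither risk nor crisis') as the
    # reference category, i.e. three dummy contrasts per word, as in Table 4.
    fit = smf.ols(f"ability ~ C({word}_pattern, Treatment(1))", data=df).fit()
    if fit.f_pvalue < ALPHA:              # keep only Bonferroni-significant models
        print(f"{word}: R2 = {fit.rsquared:.2f}, "
              f"F({int(fit.df_model)}, {int(fit.df_resid)}) = {fit.fvalue:.2f}, "
              f"p = {fit.f_pvalue:.4f}")

With this correction, only a model whose overall F test reaches p < .0025 (as reported for 'challenge') would be retained.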
H3 was verified only for the categorization of the word 'challenge' as affecting perceived ability to act. Students associating the idea of challenge with crisis, with risk, or with both risk and crisis report feeling more able to cope with the crisis than those who do not. --- Discussion We posited that there are two distinct but interconnected social representations of risk and crisis. The data and analyses confirmed these hypotheses and enabled us to identify two distinct social representations, one for risk and one for crisis, each with a specific structure. Risk is organized mainly around the idea of danger, with some added emotional dimensions. Several concrete aspects of risk (results, actions), which appeared to reflect essentially the expression of individual differences rather than shared knowledge, were also identified. The idea of losing and gaining (through the opportunity offered by risk-taking) belongs to the social representation of risk, with a stronger anchoring for losses; this is consistent with Prospect Theory, which postulates that feelings connected to losing are stronger than those connected to gaining (Kahneman & Tversky, 1979). This is also in agreement with Hobfoll's (1989) Conservation of Resources Theory, which states that a loss of resources has a much greater impact than a gain; loss of resources is disproportionately more salient than gain of resources. The social representation of the crisis could be considered an economic representation because it contains words connected to the economy. It is structured around the neutrally valued terms 'economy' and 'money'. Negative references to the consequences of the crisis (unemployment, difficulties, poverty...) appear as peripheral elements. The representation of risk is broader and contains more items, which can be classified within the more general framework of emotions, actions and results/consequences; it gives rise to emotions in a configuration in which action is related to risk. The social representation of the crisis is narrower and more concrete, since its elements are more related to economic concerns. As in Gangl et al. (2012), participants included economic descriptive variables in their verbal production. The two social representations appear to be distinct, since their central systems are different (H1). Nevertheless, the first level of analysis highlighted some common elements, which led us to think that some elements from the social representations of crisis and risk may form a network. The test with the word list in the second study confirmed this part of our second hypothesis: the economic crisis is a social representation which does not exist in an isolated way in the participants' minds. The components of the representation of risk are almost always deemed to belong not only to risk but also to crises (H2). The activation of certain elements of the social knowledge about risk and crisis influenced the way people perceived their ability to act in the context of the crisis; participants used social knowledge as their reference point. This partially confirms H3. Attitudes about the social object of crises and the associated actions are not determined by the isolated social representation of crises but by a set of interacting representations, which influence each other.
One of the social representations tested here is risk as a social object; 'challenge', as a component of the social representation of risk, when associated with risk, with crises or with both risk and crises, allows students to feel better able to cope with the crisis. These findings are consistent with results stressing that a social representation does not exist independently but as part of the symbolic and social frame in which people are living (Jeoffrion, 2009). This is also consistent with previous studies showing that lay perceptions and risk assessments are not the result of computations and probabilities of occurrence, but instead rely on meaning or on 'qualitative understanding' (Boholm, 1998). Cognitive psychologists, most notably in Prospect Theory, have shown that, when individuals have to make decisions under risk and uncertainty, instead of relying on probabilistic judgments, their choices are biased. For instance, according to Brehmer (1994), risk judgment is not influenced by probabilities and utilities but depends on the expected nature of the consequences (fear-related catastrophic potential and degree of knowledge about the risk). For Brehmer, this is why 'judgments of risk by non-specialists are made in a way that is almost totally unconnected with the types of concepts that fall within the estimates of engineers and statisticians' (1994: 86). What role does social knowledge play in assessing risks and crises in the economic world? The analysis of verbal productions highlighted how the representation of risk can influence the evaluation and assessment of crises. Crises are perceived as more negative than risk alone. Risk, which is characterized by danger and loss as well as by confidence and adrenalin, is in the present economic situation a collective risk, which is negatively affecting communities. The actions involved in each case are quite different: avoidance will be associated with threat, while actions aimed at coping with a difficult situation will correspond to people who think of the crisis as a challenge. These results may also be related to the Transactional Theory of Stress (Lazarus & Folkman, 1984), in which the authors suggest that a stressor may be appraised as a threat (anxiety) or as a challenge (excitement), showing how inter-individual differences (as found in the peripheral zones of the representation of risk) shape the way people perceive a stressor. Today, the crisis is a stressor of everyday life and is incorporated into the framework of global risk, which also helps some people to consider it as a challenge. This is an important point since it indicates ways of predicting when individuals will act in a positive and constructive way. Risk judgment depends on the expected nature of the consequences and on fear-related potential. Consequently, discourse analysis can yield important clues for understanding how social representations provide guidance on how to act. When people think, talk and share their knowledge about risk, this social discourse provides them with elements on which to base their judgments and actions. To make a judgment about a crisis, people use a 'number of cognitive shortcuts as well as naïve theories' (Gana et al., 2010: 142). The social reality of the economic crisis perceived as a risk can provide fresh insights into this phenomenon. Moreover, considering the crisis as a risk at a social level, and not just at an economic one, could open up new possibilities for action.
One of the main implications of these findings is that one can restructure how a situation is perceived (i.e. more as a risk and less as a crisis), thus creating a greater ability to act and more optimism. This coincides with Fredrickson's (2001) cognitive theory, which states that, even if negative emotions narrow one's cognitive field, positive emotions, on the contrary, broaden it, making one more receptive to new and constructive ideas, for example on how to cope with economic difficulties; this allows more creativity and can perhaps guide economic behavior. Some limits have to be considered before generalizing our results to other categories of people. The first limit is that the participants in these two studies were students. It would be interesting to administer the same tasks to experts who deal with crises, as well as to other lay-people, such as unemployed people or managers, in order to understand how scientific knowledge is mixed with social knowledge. A second limit is that, in the first study, all participants had to produce answers to two free-association tasks, one with the inductor 'risk' and then another with the inductor 'crisis'. It is obviously possible that the first task might have contaminated the second, and it might have been preferable to have different participants for the two social representations. However, some complementary questionnaires (not reported in this paper) were completed by another group of participants to counterbalance this effect, and the results show no order effect. In the second study, a single item was used to measure the perceived ability to act, which also constitutes a weakness. It would be interesting to employ specific tools in order to obtain more precise measures of this variable. For example, various scales of perceived control could be used, as perceived control can influence the belief that one can determine one's own internal attitudes and behaviors in order to produce the desired outcomes (cf. Wallston et al., 1987). Finally, a last limit concerns the statistical analyses of the second study: since it is an exploratory study, we had no specific hypotheses concerning which words would have an impact on perceived ability to act. Therefore, a large number of multiple regressions were carried out, and we had to use the Bonferroni correction to adjust for multiple comparisons. This correction reduced the α value from .05 to .0025. With more precise hypotheses, and thus fewer analyses, the α value would have been higher and more of the regressions might have reached significance. In further studies, it could be interesting to cross-analyze people's verbal productions with media discourse, as in De Rosa et al. (2010); this could be another way of exploring this topic more deeply. As the social representation is a mediator of economic judgments and decisions, the psychological and behavioral implications need to be explored in more depth. It seems important to integrate the study of motivations and emotions within a cognitive and social approach to the perceptions of risk and of crisis. --- Conclusion A major result of these two studies is the logic of the connection between the components of 'crisis' and the social representation of 'risk', and its possible influence on actions taken when facing a crisis. When participants link positive elements of global risk, such as challenge, to the crisis, they feel more able to act (risk-seeking motivated by the challenge, and adrenalin provided by risk-taking).
By contrast, the representation of the crisis as a negative heuristic precludes action.
Based on the Social Representation Theory, the purpose of this paper is to explore how laypeople consider both the economic crisis and risk, and to link these social representations to behavior. The paper offers an original approach with the articulation of two studies about the social construction of risk and crises. It also contributes to the development of research methods for studying the connections between representations and their practical implications. On this basis, the impact of the social representation of the crisis on the perceived ability to act is examined. The first study focuses on free-association tasks, with two distinct target terms: 'risk' and 'crisis'. The structural approach, with a prototypical analysis, allowed the identification of two different representations: (i) for risk, 'danger' is the most central element; (ii) for crisis, 'economy' and 'money' constitute the main components of the representation. The second study investigates the links between the two previously detected structures and their relations with the perceived ability to act in a financial crisis context. Some aspects of social knowledge were found to have an impact on perceived ability to act. Keywords: economic crisis, perceived ability to act, prototypical analysis, risk, social representation
Introduction Vaccination is considered a safe, effective and cost-saving public health measure for disease prevention [1,2]. Next to safe water, the impact of vaccines on mortality reduction and population growth is estimated to be larger than that of antibiotics and improvements in nutrition [3]. The success of global immunization programs has been impressively demonstrated by the dramatic decrease in morbidity and mortality of diseases such as measles, polio, and tetanus [4]. Despite this success, today we face global hesitancy and skepticism towards vaccination, primarily in industrialized countries [5,6], which correlates with the re-emergence of vaccine-preventable diseases such as measles or pertussis. With the World Health Organization (WHO) goal of a 95% measles vaccination coverage rate unmet, Europe faces a yearly increase in measles outbreaks. In 2019, 13,200 cases of measles were reported by 30 European Union (EU)/European Economic Area (EEA) member states, with Lithuania (298.5/million), Bulgaria (176.4/million), and Romania (87.9/million) showing particularly high rates. The overall notification rate was 25.4 cases per million population, which was lower than in 2018 and 2017 (34.4 and 35.5 per million population, respectively), but much higher than the rates observed in 2015-2016 (7.8-9.0 per million population) in Europe. In Austria, 17.6 cases per million inhabitants (n = 151) were reported, and 4 years earlier, in 2015, Austria had the second highest case-per-million rate of all EU/EEA countries, with 36.0 cases/million and 309 notified cases of measles [7]. For pertussis, notified cases in Austria rose steadily from 579 to 2231 between 2015 and 2019 [8]. Some European countries have recently introduced various forms of mandatory vaccination or extended their programs [9,10]. Since then, it has been a matter of debate whether such a strategy is applicable to all European countries, including Austria. In 2014, the WHO Strategic Advisory Group of Experts (SAGE) on Vaccine Hesitancy defined vaccine hesitancy as "a delay in acceptance or refusal of vaccination despite availability of vaccination service. Vaccine hesitancy is complex and context specific, varying across time, place and vaccines. It is influenced by factors such as complacency, convenience and confidence." Determinants include risk perception of vaccine-preventable diseases and the necessity of vaccines, availability, affordability, willingness to pay and health literacy, as well as trust in vaccine effectiveness, vaccine safety, health services, professionals and policy makers [11]. Another term, vaccine denier, refers to a member of a subgroup at the extreme end of the hesitancy continuum (which ranges from unquestioning acceptance to complete and unquestioning refusal): one who has a very negative attitude towards vaccination and is not open to a change of mind no matter what the scientific evidence says. A vaccine denier ignores any quantity of evidence provided and criticizes the scientific approach as a whole [12]. According to a survey performed in 2013, 4% of Austrian parents considered themselves vaccine deniers, and 57% said they were skeptical towards vaccination [13]. In another study, conducted in an Austrian emergency department in 2012, 11.4% of people said they were vaccine deniers and 38.9% stated that they were skeptical [14].
In a representative sample of Viennese parents with children, 82.7% had a generally positive view of vaccination, but 25.1% refused at least 1 recommended vaccination for their child [15]. Recently, two EU-wide surveys on vaccine confidence and attitudes, one online and one with representative face-to-face interviews, were commissioned by the European Union. In the online survey for Austria, 70.5% of adult participants agreed that "vaccines are important for children to have", while 4.7% tended to disagree and 3.0% strongly disagreed with this statement [16]. In the face-to-face interviews, 71% of Austrians agreed that "it is important for everybody to have routine vaccinations", while 18% tended to disagree and 5% strongly disagreed with the aforementioned statement [17]. In a convenience sample of parents in 18 European countries, another study found self-reported vaccine hesitancy in 33% of Austrian participants, with 16% undecided and 51% reporting not to be vaccine hesitant [18]. In Austria, a surveillance system to monitor changes in vaccination coverage, especially at a regional level, is lacking. Since 2015, the official national vaccination coverage for measles and polio in Austrian children and young adults has been estimated with an agent-based computer-simulation model using documented administered vaccines and orders by pediatricians as well as sales numbers of vaccines by producers. Coverage for the recommended 2 doses of the measles-mumps-rubella (MMR) vaccine is estimated at 82% for the 2-5 year-old and 89% for the 6-9 year-old groups. The biggest deficit is estimated in the 19-30 years age group, with a 2-dose coverage of just over 70% [19]. For polio immunization, this model suggests a significant delay of the third dose of the hexavalent vaccine in 30% of eligible children and 6.5% of completely unvaccinated individuals in the 5-9 year age group [20]. For adults older than 30 years, mainly sales numbers are available for vaccines that are not included in the state-financed national vaccination program. Based on this information, recently published estimates of the influenza vaccination rate of the Austrian population fell from 15.4% in the 2006/2007 season to 6.1% in the 2015/2016 season. In additional telephone surveys, the influenza vaccination rate in people older than 60 years was determined to be 14% [21]. As the age distribution of vaccinated persons is unknown for other vaccinations, these data provide no reliable estimate of vaccine coverage. To increase vaccination coverage, it is important to understand the major drivers of reduced vaccine uptake in general, and of vaccine hesitancy and vaccine refusal in particular, in order to effectively counteract prejudices and fear through population-tailored information and improved accessibility of vaccines. --- Aim Addressing a rural Austrian population of adults and of children attending public schools, the aim of this study was to find out about: 1. self-reported vaccination rates; 2. attitudes towards vaccination in general and towards mandatory vaccination; 3. knowledge about vaccines and vaccine-preventable diseases; 4. concerns about vaccines and vaccination, and sources of information about these issues; 5. preferred source and content of future information on vaccination.
--- Ethics The ethics committee of the Medical University of Vienna reviewed and approved the study with the vote number 1681/2015. --- Methods --- Study population Within the framework of a larger healthy village initiative in Lower Austria (https://praevenire.at), one community (Pöggstall) was randomly selected for studying vaccine hesitancy as well as for providing and testing concepts for a tailored information campaign. The community facilitated contact with the local schools to ensure high participation of children. The anonymous questionnaire for the children's population included sections broadly similar to those of the adult questionnaire, but added a question on their parents' attitude towards vaccination and did not ask about concerns around vaccinations. The questionnaires were in German; the English versions can be found in the supplement (S1 Appendix and S2 Appendix). --- Distribution of the questionnaires A total of 1200 questionnaires for the adult population were sent out with the quarterly village newspaper to all households in Pöggstall (one each). The children's questionnaire (n = 350) was handed out at the local primary and middle school by teachers to all children of all school years. Children were asked to participate voluntarily by filling it out at home and handing it back in at school. Both questionnaires were also made available at the local doctor's office, the local pharmacy, and the community office, to which all questionnaires, except for the ones collected by teachers, were to be returned. --- Statistical analysis Descriptive statistics were produced as numbers and percentages. Percentages not summing to 100% for forced-choice questions are due to missing values. A knowledge score was calculated as the number of correct responses to the questions on vaccines and vaccine-preventable diseases, which included six possible correct answers in the questionnaire for the adult population (six single-choice questions) and ten possible correct answers in the questionnaire for children (three single-choice and two multiple-choice questions). To compare responses between the various subgroups of age, gender, education, and knowledge (and parents' attitude towards vaccination in children), a generalized linear model was applied with binomial counts and a logit link. Variables were chosen based on previous studies. Open-ended questions were noted separately, and a list of the answers was compiled; paraphrases were combined to obtain meaningful categories. Exponentiated parameter estimates and 95% confidence intervals were obtained, which reflect odds ratios relative to the reference category. All calculations were done using IBM SPSS Statistics for Windows, Version 25.0 (IBM Corp., Armonk, NY, USA). P-values below 0.05 were considered significant. (Fig. 1: Self-reported vaccination rates in surveyed adults and children for selected vaccinations recommended in the Austrian National Vaccination Plan; HPV: human papillomavirus, TBE: tick-borne encephalitis.) --- Results After 3 months of collection, we received a total of 306 completed questionnaires from the adult population (response rate 26%) and 320 from the children's population (response rate 91%). Questionnaires that were less than 75% complete were excluded from analysis (n = 6 adults, n = 0 children). Of the remaining 300 adult respondents, 5 were removed as they stated they were <16 years old, leaving 295 for further analysis among the adults. Four children were excluded due to being <6 years old, leaving 316 children for further analysis.
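To illustrate the type of model described in the Statistical analysis subsection above, the following is a minimal sketch under stated assumptions: the column names, file name and the use of Python's statsmodels (rather than SPSS) are hypothetical, and the dummy coding shown uses the alphabetically first category as reference, which need not match the reference groups reported below.

# Sketch only: one row per adult respondent, with 'knowledge' = number of
# correct answers out of 6 and categorical predictors for age group, gender
# and education. The response is modelled as binomial counts with a logit
# link, and exponentiated estimates are read as odds ratios with 95% CIs.
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("adult_survey.csv")  # hypothetical data file

# Binomial-counts response: (correct, incorrect) out of the 6 knowledge items.
endog = np.column_stack([df["knowledge"], 6 - df["knowledge"]])

# Dummy-coded predictors (reference = first category in alphabetical order).
exog = sm.add_constant(
    pd.get_dummies(df[["age_group", "gender", "education"]],
                   drop_first=True, dtype=float)
)

# GLM with the Binomial family, whose default link is the logit.
fit = sm.GLM(endog, exog, family=sm.families.Binomial()).fit()

# Odds ratios and 95% confidence limits from exponentiated estimates.
odds_ratios = np.exp(pd.concat(
    [fit.params.rename("OR"),
     fit.conf_int().rename(columns={0: "lower", 1: "upper"})],
    axis=1))
print(odds_ratios)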
We included the few >15-year-olds in the group of 10-15-year-olds and renamed this group 'children aged 10+ years'. Demographic data of the adult and children groups of respondents can be found in Tables 1 and 2. --- Vaccination rate for common vaccines in adults and children As depicted in Fig. 1, a high percentage of adults and children reported a positive vaccination history for tetanus, followed by TBE and diphtheria, whereas only few gave a positive response for pertussis vaccination. With respect to the desired 95% vaccination coverage rate against measles, the self-reported rates are a concern. Of the adults, 24.7% reported being vaccinated against hepatitis B and 20.7% against hepatitis A. Only 5.1% of adults reported being vaccinated against pneumococci (PNC10/13, PPV23 not specified), and 1.0% against herpes zoster. Detailed results can be found in the supplementary Table S3. Overall, 25.7% of children reported vaccination against hepatitis B, 7.6% against hepatitis A, and 11.1% against pneumococci. Of note, 3% of children explicitly and without having been asked remarked in an extra paragraph that they had never been vaccinated or do not get vaccinated. Detailed information can be found in the supplementary Table S4. Regarding the HPV vaccination rate in adults, only 3.7% (n = 11, all female) reported a positive vaccination history. Of those women, five said they had received all three vaccinations, one said she had received one, and another five left that question unanswered. Most of the women vaccinated against HPV were between 25 and 60 years old. Female adults were also asked for their reasons for not having been vaccinated and were offered multiple answers: 13% of them said the vaccination was not necessary, 9% said they were afraid of adverse reactions, 4% cited costs, 3% said their doctor had advised against it, 2% said they were afraid of needles, another 2% said they had missed the vaccination appointment, and 25% chose other reasons (5% non-response rate). Concerning children, only 7.0% (n = 22) of 316 children said they were vaccinated against HPV, 17 female and 5 male, most of them aged 10 years or older (n = 21; n = 1 child 6-9 years old). --- Attitudes towards recommended and mandatory vaccination Adults Vaccinating according to national recommendations When asked about their general attitude towards vaccination, 56.6% had a positive attitude, 21.0% claimed a neutral attitude, 15.6% were skeptical, and 5.4% had a negative attitude. Concerning age distribution, 67% aged 60+ years, 58% aged 40-60 years, 51% aged 25-39 years, and 45% aged 16-24 years viewed vaccination as positive. Concerning education, 73.6% with tertiary education, 52.8% with secondary higher education, 49.6% with secondary lower education, and 58.6% with primary education viewed vaccination as positive. Those with a skeptical or negative attitude towards vaccination were less likely to score higher on the knowledge score (odds ratio [OR] 0.63, 95% confidence interval [CI] 0.50-0.79) compared to people with a positive attitude, while no statistically significant differences concerning knowledge about vaccination were found for age, gender, and education. In total, 55.6% of adults would recommend vaccination to their social environment, while 37.6% stated they would not.
Those willing to recommend vaccination showed an OR of 1.66 (95% CI 1.39-2.00) for a higher knowledge score. Those with secondary higher education were less likely to recommend vaccination (OR 0.34, 95% CI 0.14-0.82) compared to people with tertiary education (OR 1.0, reference category), while no difference between the latter and people with primary or secondary lower education was found. No statistically significant effects on the likelihood of recommending vaccines were found for age and gender. Overall, 73.2% answered affirmatively when asked whether they would get their children vaccinated, or whether they had had their children vaccinated, according to the current Austrian National Vaccination Plan (ANVP) "Impfplan Österreich 2016" [22], while 20.0% denied it. No statistically significant correlation was found for age, gender, education, and knowledge score. Mandatory vaccination Among adults, 39.3% agreed to a possible introduction of mandatory vaccination for attending state-operated facilities, such as schools, 34.2% did not agree, and 25.4% were undecided. With 60+ years old as the reference category, people aged 40-60 years were less likely to agree to mandatory vaccination (OR 0.51, 95% CI 0.26-0.99), as were people aged 25-39 years (OR 0.46, 95% CI 0.22-0.97), while people aged 16-24 years were the least likely to agree to mandatory vaccination (OR 0.17, 95% CI 0.05-0.55). Those who agreed were more likely to score higher on the knowledge score (OR 1.46, 95% CI 1.24-1.73). No statistically significant differences concerning approval of mandatory vaccination were found for gender and education. While 54.2% of adults were in favor of general mandatory vaccination for healthcare workers in hospitals and at doctor's and midwifery practices, 20.7% were against, and 23.7% were undecided. Using tertiary education as the reference category, people with secondary higher education (OR 0.39, 95% CI 0.17-0.91) and people with secondary lower education (OR 0.41, 95% CI 0.19-0.85) were less likely to agree to mandatory vaccination for HCWs. Those who agreed were more likely to score higher on the knowledge score (OR 1.36, 95% CI 1.16-1.60). No statistically significant difference for people with primary education was found. Detailed results can be found in the supplementary Table S5. --- Children and their parents' opinion When asked about their general attitude towards vaccination, 47.4% of children answered that they had a positive attitude towards vaccination, 34.5% had a neutral opinion, 10.4% said they were rather skeptical, and 7.0% were negative. Younger children aged 6-9 years were more likely to be of a skeptical or negative opinion (OR 2.51, 95% CI 1.04-6.05) compared to children aged 10+ years. Children with a skeptical or negative attitude were less likely to score higher on the knowledge score (OR 0.77, 95% CI 0.66-0.91). No statistical difference was found for children's gender. Regarding their parents' opinion, 57.0% of children answered that their parents had a positive opinion about vaccination, 23.4% claimed their parents had a neutral opinion, 10.8% said they were rather skeptical, and 7.6% said their parents had a negative opinion concerning vaccination. Children who claimed their parents thought positively of vaccination were unlikely to have a skeptical or negative opinion themselves (OR 0.04, 95% CI 0.02-0.09), compared to children with parents with a skeptical or negative attitude.
Female children were more likely to say their parents were of a skeptical or negative opinion (OR 2.09, 95% CI 1.16-3.78). No statistical difference was found for children's age. Vaccinating according to national recommendations Overall, 63.0% of children thought they had received all the scheduled vaccinations recommended in the ANVP "Impfplan Österreich 2016" [22], while 33.5% of children answered they had not. Children claiming their parents had a positive opinion of vaccination were more likely to say they had received all scheduled recommended vaccinations (OR 3.86, 95% CI 2.02-7.37), compared to children who said their parents had a skeptical or negative attitude. Children who believed they had received all scheduled vaccinations were more likely to score higher on the knowledge score (OR 1.28, 95% CI 1.14-1.43). No statistically significant effects were found for children's age or gender. --- Mandatory vaccination Of the children, 30.7% agreed to vaccine mandates prior to attendance of kindergarten or school, 49.4% did not agree, and 19.6% were undecided. Children reporting their parents' opinion about vaccination as positive were much more likely to agree to mandatory vaccination for the attendance of kindergarten or schools (OR 13.33, 95% CI 3.15-56.42), compared to children who believed their parents' opinion to be skeptical or negative. Children who agreed to the introduction of mandatory vaccinations were also more likely to score higher on the knowledge score (OR 1.16, 95% CI 1.03-1.30). No statistically significant difference in opinion about mandatory vaccination was found for children's age or gender. Among children, 40.2% approved of mandatory vaccination for healthcare workers, 20.6% disapproved, and 38.6% were undecided. Children who thought their parents' opinion about vaccination to be positive were much more likely to agree to mandatory vaccination for HCWs (OR 6.39, 95% CI 2.58-15.84), compared to children who believed their parents' opinion to be skeptical or negative. Children who agreed to the introduction of mandatory vaccination were also more likely to score higher on the knowledge score (OR 1.28, 95% CI 1.14-1.44). Detailed results can be found in the supplementary Table S6. --- Subjective comprehension of vaccination and knowledge about vaccine-preventable diseases and vaccinations Adults were asked four questions about their understanding of vaccination, followed by six questions on vaccine-related knowledge of measles, HPV and their respective vaccines (Table 3). A knowledge score was calculated as the number of correct answers to the six questions. Table 4 shows ORs and 95% CIs of scoring one point or more on the knowledge score by age, gender and education. A higher age, female gender, and tertiary education were positively associated with points on the knowledge score. Children were asked the same four questions as the adults to learn about their subjective understanding of vaccination (Table 5), followed by five questions (two of the five being multiple choice) with a total of ten correct answers on their vaccine-related knowledge. Again, a knowledge score was calculated as the number of correct answers out of ten. Table 6 shows ORs and 95% CIs for a higher knowledge score, which was positively associated with a positive parents' opinion on vaccination. --- Concerns around vaccination Participants in the adult group were asked to describe the concerns they had towards vaccination as an open question.
Overall, 59.7% answered this question. Answers were categorized as follows: concerns about side effects (n = 114; 38.6%); 12.5% (n = 37) said vaccinations were not important or were unnatural; 5.4% (n = 16) said they were potentially harmful to the immune system; 4.4% (n = 13) said they objected to the money-driven pharmaceutical industry; 4.1% (n = 12) were concerned about vaccine ingredients; and 1.4% (n = 4) objected to the practice of multiple vaccinations. A further 12.5% (n = 37) named concerns or made statements that could not be as easily categorized: some were concerned that panic is spread (e.g. avian influenza in 2009/2010) to sell medication or vaccinations, some believed that the number of vaccines in the vaccination schedule cannot be good for their children, that the costs were too high to afford all the recommended vaccines, or that potentially massive damage could be done to the human body through vaccination. As a specific example, adults were asked what they regarded as the primary reason for refusal of the influenza vaccination in Austria. Among the multiple answers given, 37.6% chose "afraid of side effects", 20.7% said "the vaccination makes me ill", another 18.0% cited ineffective protection, 9.2% said "I am not at risk", and 11.9% chose other reasons. Children were not asked for concerns regarding vaccination. --- Source of information and content Most adults named the family doctor (44.7%) as their source of information on vaccination. Among children, 75.0% named their parents as their source of information about vaccination. Figs. 2 and 3 show detailed results. Further information about vaccination was preferred by 38.6% of adults and 37.3% of children. Both groups specified the family or specialist doctor as their preferred future source of information. See Figs. 4 and 5 for detailed results. The largest share in both groups (22% of adults and 25% of children) wanted to receive more information about adverse reactions in the future (see Figs. 6 and 7). --- Discussion This survey provides information on attitudes and knowledge about vaccination, along with self-reported vaccination rates, in children and adults of a small Austrian village. We could identify the most trusted sources of information and important reasons for concerns towards vaccination. The high response rate in the group of children up to 16 years (320 out of 350 questionnaires, or 91%) offers valuable information on the attitudes and knowledge about vaccination to tailor educational programs to the needs of this generation. Our results showed moderate self-reported vaccination coverage for TBE and tetanus, low coverage for measles, mumps, rubella, diphtheria, pertussis and polio, and very low coverage for influenza and HPV in adults and children. Of the children, 3% reported they had never been vaccinated or do not get vaccinated, which seems to be in line with studies of vaccine refusal in western Europe [17,23]. The generally low self-reported coverage rates could either suggest a substantial lack of many essential vaccinations and/or poor knowledge of one's own vaccination status. Some authors have attributed this lack of awareness in the general population to a lack of social marketing: preventive measures cannot be successful unless the tools of modern communication sciences are put to full use.
In Austria, there has been insufficient vaccination promotion activity in the past, and the stakeholders have not been able to agree on a common approach [24]. Furthermore, while it is well known that financial reimbursement and the free supply of vaccines are important factors for increasing vaccination rates [25], self-funding is still the norm for adults in Austria, and, with the exception of the MMR vaccine, no general financial reimbursement has been implemented for immunizations. While the general attitude towards vaccination was positive in two thirds of people aged 60+ years, this dropped to less than half in people aged 16-24 years. Older people may have personal experience with certain vaccine-preventable diseases and therefore value disease prevention more highly than younger people [26]. A large survey of Italian pediatricians found an advantage in vaccine knowledge and confidence in older professionals [27]. Furthermore, older people might have higher trust in their physician due to more frequent consultations for other health problems. In adults, tertiary education appeared to be correlated with a positive attitude towards vaccination, while people with secondary higher education showed a trend towards the most skeptical views, although the differences were not statistically significant. We found people with secondary higher education to be least likely to recommend vaccination to their social surroundings. These findings are not completely in line with other research, which found a high educational and socioeconomic level to be a marker of vaccine acceptance for oneself and one's children [16,17,28] or found no effect of these variables [29]. In recent years, mandatory vaccination was introduced or expanded in several European countries and met with some protests from the respective publics [30]. Compulsory vaccination is not envisaged by the government for the general population in Austria, but it is a matter of consideration for healthcare professionals; it is therefore of value for public health policy makers to learn that more than half of the adults (54.2%) in our survey support mandatory vaccination for HCWs and only one in five (20.7%) are against it. Regarding adults' and children's subjective comprehension of immunization, many children and even more adults in our study had trouble understanding why they needed vaccinations. Most of them found it especially hard to judge the quality of information concerning health hazards in the media and found it hard to understand which vaccinations they personally needed. Our adult population showed only limited knowledge when it came to measles and HPV, and many children stated they did not know how vaccines worked. A recent EU-wide survey found high variability in vaccine knowledge, with Austrians ranked around the EU average. A considerable difference in knowledge between subjective social classes (self-defined upper class vs. working class) has been observed at the EU-wide level [17]. Regarding the major source of information, we confirm the physician as the most important contact person for adults to deliver information about and build trust in vaccinations, as has been shown extensively in other research [15,17,28,29,31]. The majority of children (75.0%) named their parents as an influential source of information about vaccination, but a significant percentage (39.2%) also valued their family doctor.
Our study offers further insights into what kind of information people want from their physician and shows that doctors have an opportunity to deliver important messages on vaccination before people seek information from other sources, especially online and print media. It appears that healthcare professionals need to become more aware of their significance as role models and as a source of trusted and valued information. Greater efforts to support health education and physician training are needed to give tailored vaccine information, allowing a sound and well-informed decision by their clients and patients. Our study shows that children, too, could benefit from early, age-appropriate vaccine education to strengthen their health literacy. --- Limitations Paper-based and telephone-based surveys are valuable tools in public health epidemiology; however, our questionnaires were not validated and we therefore cannot quantify how accurately they measure the endpoints. The adult questionnaire showed only a limited response rate, and two thirds of the respondents were women. It is also unknown how many of the surveys were completed at the doctor's office, the pharmacy, or the community center, where they were also available, so the response rate in these settings cannot be assessed. Concerning the lay population, we did not ask specifically whether people were employed in the healthcare sector. Regarding vaccination coverage, self-reported numbers of past vaccinations do not necessarily mirror the actual vaccination coverage. As we correlated knowledge with opinion, it should be noted that some children with a skeptical or negative attitude towards vaccination might have purposely answered in the negative when asked whether vaccination is important for protection from possibly severe diseases, or whether vaccination helps the body's own defenses to provide later protection by learning about sickness-causing triggers, as they or their parents might not trust the scientific basis of these established facts despite abstract knowledge of them. The same could be true for adults asked about the measles and HPV vaccines. --- Conclusion In Austria, studies on determinants of vaccine hesitancy are scarce. In our survey, self-reported coverage rates in children and adults were found to be low, which could suggest problems with vaccine uptake and/or poor knowledge of vaccination status. Of the children, 3% reported that they had never been vaccinated or do not get vaccinated. The general attitude towards vaccination was positive in two thirds of adults aged 60+ years, but this dropped to less than half in people aged 16-24 years. Adults with a secondary higher education were least likely to recommend vaccination to their social surroundings. More than half of the adults (54.2%) supported mandatory vaccination for HCWs and one out of five (20.7%) were against it. We could confirm the physician as the most trusted source of information on vaccination in adults. Greater efforts by healthcare professionals are needed to give tailored vaccine information, allowing a sound and well-informed decision. Doctors should be aware of their very important role in transmitting trusted healthcare information. This should include up-to-date education in communicable disease prevention and immunization throughout their medical career.
In Austria, more research on the determinants and state of vaccine hesitancy is needed in order to implement evidence-based strategies for improving vaccination coverage and disease prevention through vaccination. Funding: Open access funding provided by the Medical University of Vienna. Conflict of interest: A. Bauer, D. Tiefengraber and U. Wiedermann declare that they have no competing interests.
In Austria, data on vaccine hesitancy are scarce. Available studies suggest that around 1-11% of parents refuse vaccination, while many more are hesitant and consider refraining from some but not all of the recommended vaccinations. However, the key drivers of vaccine hesitancy in Austria are largely unknown. To learn more about vaccination coverage, attitudes towards and knowledge about immunization, as well as views on mandatory vaccination, we conducted a survey in a rural Austrian lay population including adults and children. Two paper-based questionnaires, one for adults aged 16 years or older and one for children aged 6-15 years, were developed and then sent to all households of a rural community in Austria as well as handed out at the local primary and middle school, respectively. Self-reported coverage rates of children and adults were found to be low. Within the surveyed population, 3% of children had never been vaccinated or do not get vaccinated. More than half (57%) of the survey participants had a positive attitude towards vaccines, 21% were positive without any reservations, 16% were skeptical and 5% had a generally negative attitude. Knowledge about immunization in general was poor. Younger adults and people with secondary education appeared to be the most skeptical and negative towards vaccination. Children's attitudes were closely linked to those of their parents. The major concern around vaccination was fear of side effects.
In this study, quantitative data were gathered through a survey form, and qualitative data were collected through the interview method. Survey form allows researchers to organise questions and receive feedbacks without having to communicate verbally with each respondent (Williams, 2006). In this study, the questionnaire was designed by the researcher based on literature reviews and in-depth interviews. In developing the study questionnaire, the researcher examined related literatures in order to form an operational definition for each variable. These definitions formed the basis for developing a proper survey to be used in this study. This is in line with Yan (2011) who stated that operationalized variables have accurate quantitative measurements. Meanwhile, according to Sabitha (2006), a questionnaire form which presents clear variable definitions has undeniably strong validity. This point is also in conjunction with Robbins (2008) who stated that a questionnaire form which was constructed based on literature reviews would comply with validity and reliability requirement. Next, the researcher also conducted a series of in-depth interviews. According to Oppenheim (1998), in-depth interviews can assist researchers in constructing survey questions more accurately. Therefore in this study, the researcher was able to ascertain the obtained information from the literature reviews based on the actual realities of Chinese Muslims' life. A total of four Chinese Muslims were interviewed through open-ended questions. Each informant were asked in an open manner based on the operationalized definitions. After the interview sessions, data were analyzed and the analysis results were used to guide the development of the survey items in the questionnaire. The constructed questionnaire form in this study achieved the required content and face validity. Content validity refers to the extent to which the measurement of a variable represents what it should be measuring (Yan, 2011). In this study, conceptual and operational definitions were used by referring to the literature reviews in order to obtain content validity of the questionnaire. This is supported by the opinion of Muijs (2004) who stated that content validity can be obtained through the review of past literatures. Through this validity, the researcher confirmed that the study variables were able to measure their actual concepts. As stated by Muijs (2004), content validity can be used to represent a measurable latent concept. Face validity of the study questionnaire was also obtained whereby the researcher asked several respondents from the Chinese Muslim community through an informal survey. According to Muijs (2004), by asking related questions to the study respondents, face validity of a questionnaire form can be confirmed as the respondents were asked about whether the questions are relevant or not according to their view. This has guided the researcher to measure each of the studied aspects based on the realities of the respondents' life. Besides, the questions were also helpful for the researcher to ascertain that the constructed questionnaire has met its intended outcomes. As stated by Ary et al. (2010), face validity is where the researchers believe that their questionnaire has measured what it should measure. In addition to the above, a pilot study was also conducted in order to confirm the reliability of the questionnaire. 
A pilot study can be used to ensure the stability and consistency of a constructed questionnaire in measuring certain concepts, at the same to evaluate whether the questionnaire is properly developed or not. As for qualitative data, this study utilised the interview method to address the study objectives with the purpose to detail out the quantitative findings. This was due to the use of sequential explanatory design. According to McMillan (2012), sequential explanatory design requires the quantitative data to be further explained, elaborated, and clarified. Quantative data obtained from the survey were analysed by using two types of statistical procedures, which are descriptive statistics and inference. All data were processed using the SPSS software. As for the analysis of qualitative data obtained from the interview, manual method was used whereby data were analysed through Open Coding, Clustering, Category, and Thematic processes. According to Tiawa, Hafidz, and Sumarni (2012), Open coding is the provision of code to each data so that it can be classified according to the study objectives. Meanwhile, Clustering is the process of classifying data which have been assigned with open coding in specific categories. Next, the Category process of the data aims to facilitate researchers in dividing the data according to sections in the study. Thematic is the process of classifying each gathered study data based on more specific themes or concepts. --- ANALYSIS AND DISCUSSION --- Demographics Profile Table 1 shows the background of the study respondents in terms of age, gender, and educational level. With regards to age, Figure 4.1 shows that majority of the respondents (32%) were around 46 to 55 years old. This is followed by those with age from 56 years and above (27%) and then those aged between 36 to 45 years (23%). Meanwhile, only few of them were aged around 26 to 35 years and 16 to 25 years (9% respectively). Thus, young groups were smaller compared to the older ones. This was due to the fact that the study respondents were selected among those who involved in the official activities organized by MAIK and MACMA Kelantan. This situation is also in line with Mohd Azmi and Maimunah (2003) who stated that majority of the new Muslim converts who attended the guidance classes are mostly adults. In terms of gender, the number of female respondents exceeded the male respondents by five percent (i.e., 55% females and 45% males). This indicates a nearly equal distribution in the involvement of male and female Muslim converts in formal activities and guidance classes. Although there were reportedly more females than males among new Muslim converts (Mohd Azmi & Maimunah, 2003), this study has shown that the males' participation in formal activities and guidance classes was not affected by this situation. As for the educational background, all respondents obtained their formal education where majority (64%) achieved the secondary school level, followed by the primary school level (23%). Meanwhile, only 13 percent of them obtained higher education level whereby 9 percent received university education and the remaining four percent received college education. These data reveal that most respondents in this study obtained their formal education until the school level. With regards to the reason, more than half (55%) of the respondents converted to Islam due to their interaction with the local Malay community. Meanwhile, 20 percent of them said that they were attracted to Islam. 
Other reasons for embracing Islam are the marriage factor (16%), due to research and reading (7%), and following or influenced by spouse (3%). Interaction as the main factor for the non-Muslims to embrace Islam is probably due to the high sociability among the multi-cultural community especially Kelantan Chinese (Mohd Shahrul Imran Lim, 2014). In fact, the interaction among the Kelantan Chinese indicates a high level of assimilation in the way of life of this community group (Teo, 2005). Indirectly, this situation has caused the Kelantan Chinese community to accept Islam. The result in this study is in line with Azarudin and Khadijah (2015) who stated that the interaction among the Muslim community is the main factor of conversion to Islam among the Chinese community in the state of Terengganu. In addition, the result shows that the original religion of majority of the respondents (87%) before converting to Islam was Buddhism. A total of 7 percent were initially Christians, 5 percent were Confucian, and the remaining 1 percent were atheists. Thus, almost all respondents were originally Buddhists. These results are in line with previous studies which reported that the Kelantan Peranakan Chinese community are still maintaining their religious belief of their ancestors, namely Theravada Buddhism (Teo, 2008;Mohd Roslan & Haryati, 2011;Khoo, 2010). --- Trust Attitude The frequency of trust element was measured to indicate how frequent respondents trust their bonding social capital, i.e., their original family who are not converted to Islam, and also bridging social capital, i.e., the Malay community, in terms of social, religious, and financial aspects. Figure 1 shows the level of trust in bonding social capital. A total of 47 percent of trust in bonding social capital were located at the low level, 32 percent were at the moderate level, and 21 percent were at the high level. Meanwhile, Figure 2 indicates the level of trust in bridging social capital. Only 9 percent of respondents had a low level of trust. Furthermore, a total of 49 percent were at the moderate level and 41 percent were at the high level. In conclusion, majority of respondents had a low level of trust in bonding social capital (i.e. original family who are not converted to Islam). Bridging social capital (the Malay community) obtained a higher level of trust from respondents whereby almost all were distributed at the moderate and high levels. 2 displays the frequency of bonding and bridging social capitals for the trust element. Overall, respondents seemed to only occasionally trust their bonding social capital (2.6) and bridging social capital (3.4). Respondents sometimes trust their bonding social capital in the aspects of practicing religion (3.2), giving cooperation (2.9), and sharing problems with them (2.7). Nevertheless, as for bridging social capital, respondents occasionally build their trust in different aspects, which are sharing problems (3.4), receiving financial support when needed (2.6), and lending money (2.5). However, in other aspects, respondents indicated that they frequently trust their bridging social capital in giving cooperation (3.9), speaking about religion (3.6), and practicing religion together (4.1). Nevertheless, some respondents also showed that they seldom trust their bonding social capital, particularly in religious and financial aspects. 
Specifically, respondents seldom trusted their bonding social capital to talk about religion (2.3) or to provide money when needed (2.3). According to Informant 1, it was quite difficult to trust one's own family due to the religious differences between them. It is of primary concern that such differences might give a bad impression of the religion being practiced; therefore, trust is not placed in the family in the social, religious, or financial aspects. Furthermore, Informant 1 also provided a statement regarding the frequency of trust in bridging social capital, indicating that Chinese Muslims frequently trust the Malay community because of the community's concern towards them. However, regarding trust in the financial aspect, the informants said that Chinese Muslims do not put high trust in Malays, as stated by Informant 3 below: "When it comes to money, it's a bit hard.. it's about certain Malays who are reluctant to pay back. And then, before converted to Islam, there was also a perception among the Chinese who said that it's hard for Malays to pay money (debt). They are reluctant to pay.. like my grandmother who sells living chicken.. she let them took items first, but they're behind payments until now.. maybe my grandmother told others that it's difficult to deal with Malays.. and then when I was still a kid.. I always heard Malay people said, it's okay to not settle your debt to Chinese.. they're kafir (non-believers)." Informant 3 said that it is quite difficult for Chinese Muslims to trust Malays in the financial aspect due to the perception nurtured in them before they converted to Islam; they were even exposed to that mindset since they were kids. In conclusion, respondents' trust attitude towards bonding social capital was low and occurred only on an occasional basis. A similar result was observed for bridging social capital, whereby the frequency of trust was also occasional, yet the level of trust in bridging social capital was found to be at a good level. Besides, other statements also indicate that respondents frequently trust their bridging social capital in the social and religious aspects. This shows that bridging social capital, i.e., the Malay community, received better trust from respondents compared to bonding social capital, i.e., their original family who are not converted to Islam. The occasional occurrence and moderate level of trust suggest that the Islamisation of an individual within a given bonding social capital has caused the lack of trust in that bonding social capital. This implies that Islamisation has changed the trust attitude because of the difference in religious values that were previously shared. This relationship was seen to be limited because of such differences in values. This is similar to the view of Brennan and Barnett (2009), who stated that a relationship can be retained because of the common values between connected individuals. A limited relationship restricts the interaction with bonding social capital, whereas the development of trust depends on interaction (Amir Zal, 2016). Furthermore, according to Payne (2006), interaction manifests that trust has taken place. It was observed in this study that the failure to maintain interaction between respondents and their bonding social capital happened because Chinese Muslim individuals were afraid of negative views from their bonding social capital towards their newly embraced religion.
This is in contrast to other views stating that the interaction of Chinese Muslims was affected when their family reject the Fariza, 2009;Suraya et al., 2013;Marlon et al., 2014). Therefore, it can be concluded that the low level of trust among respondents towards the family was due to the religious difference which made them feel afraid to connect with bonding social capital, and this was not related to the conflicts with bonding social capitals as faced by Chinese Muslims in other states in this country. Although the trust element to bridging social capital occurs on occasional basis, respondents indicated their frequent trust for bridging social capital in terms of social and religious aspects. This is because, even before converted to Islam, the Chinese Muslim community in Kelantan generally have a high level of societal ability with the Malay community, and this makes it easier for them to respond to changes in the current environment, such as clothing, food, and leisure activities that are similar to the Malay community (Mohd Shahrul Imran Lim, 2014;Pue & Charanjit, 2014). Furthermore, according to Hanapi (1986), the assimilation of Kelantan Chinese has transformed the social and household organisations into Malay as far as crossing their religious boundary, whereby the Chinese community in Kelantan even invite Muslim spiritual leader among Malays to perform prayer for entering new home. Meanwhile, according to Pue and Charanjit (2014), there is no hindrance for Kelantan people of Chinese descent (peranakan) to apply other religious elements if it is believed to be of benefit to them. Nevertheless, these factors did not make the respondents trust in their bridging social capital for problem sharing and financial aspect. This indicates that closeness, assimilation, and Islamisation do not make it easy for them to share problems and obtain financial resources through their bridging social capital. It is also similar to the aspect of lending money to bridging social capital. --- Reciprocity The reciprocal element refers to the mutuality that occurs between respondents' bonding and bridging social capitals in social, religious, and financial aspects. Figure 3 illustrates the level of respondents' reciprocity with bonding social capital. The study findings indicate that 47 percent of respondents' reciprocity with bonding social capital were at the moderate level, 31 percent were at the low level, and the other 23 percent were at the high level. Meanwhile, Figure 4 shows the level of reciprocity for bridging social capital. 52 percent of respondents indicated the high level, 37 percent were at the moderate level, and only 11 percent of them were at the low level. Overall, it can be seen that there was a higher level of respondents' reciprocity with bridging social capital, whereas the level of respondents' reciprocity with bonding social capital was moderate and some of them were even noted at the low level. Table 3 indicates results pertaining to the frequency of respondents' reciprocity with bonding and bridging social capitals. Generally, based on the study findings, reciprocity occurred on occasional basis for bonding social capital (2.7) and frequently for bridging social capital (3.5). In terms of bonding social capital, reciprocal occurs occasionally in the aspects of mutual respect for religious belief (3.1), visiting each other (3.1), and helping one another (3.0). Then, data also shows that respondents seldom (2.4) help each other in the financial aspect. 
Other than that, reciprocity also rarely happened in the aspects of exchanging religious opinions (2.3) as well as borrowing and lending money (2.1). As for bridging social capital, the study findings showed that respondents were often reciprocal in the aspects of mutual respect for religious belief (4.0), visiting each other (3.8), helping one another (3.8), and exchanging religious opinions (3.7). However, reciprocity seldom occurred in the aspects of giving financial support to each other (2.9) and, likewise, borrowing money from each other (2.7). The frequencies of reciprocity discussed above were also supported by findings obtained from the interviews with respondents. In terms of bonding social capital, Informant 2 mentioned that: "It's difficult to help each other.. or anything.. How to help others when we don't even have enough to eat? Then, my mother wanted to come to help. But, we seldom meet.. So, we don't ask for help.. just on our own. After all, we already have our own family." According to Informant 2, it is difficult for reciprocity to occur among respondents because they are also living a hard life, and their family seldom come to help them because they rarely meet. Due to this, reciprocity only happened occasionally. In addition, the informant also stated that they do not ask for help from family and manage everything by themselves, especially as they already have their own Muslim family. This shows that reciprocity with bonding social capital occurs among respondents at a moderate level and only on an occasional basis. As for bridging social capital, the following statement was obtained from the interview with Informant 2: "I have asked for rice from Malays. We didn't have any rice to cook.. we didn't borrow. We asked for one or two cups. The Malay people gave us. If we have some rice, we also gave one or two cups to them. There is no problem with Malays. If they are doing hard, we help them as much as we could. It's because we live together" (Informant 2). Informant 2 mentioned that the Malay community always help them when they are out of rice and, likewise, the informant also helps the Malay community in need to the extent possible. This is because they are living together in the social environment of the Malay community. Thus, it can be implied that respondents' reciprocity with bonding social capital occurred less frequently than that established with bridging social capital. At the same time, reciprocity related to the financial aspect happened only on an occasional basis for both social capitals. The occasional occurrence and moderate level of reciprocity with bonding social capital indicate that respondents' interdependency with bonding social capital has decreased after they converted to Islam. This is notable considering that, according to Aeby, Widmer, and Carlo (2014), the family is a resource of social capital that involves mutually beneficial relationships as well as mutual informational and emotional support. This is in contrast to the reality of respondents before embracing Islam. Hanapi (2007) stated that interdependence within the family is a characteristic of the Chinese in Kelantan, where they help and respect each other and thus have close relationships among them. However, the qualitative findings indicated that respondents have been living in hardship after converting to Islam, suggesting that reciprocity did not occur among them.
Therefore, this community no longer has strong bonding (family) support for its members. This is significant inasmuch as, according to Schmid (2000), the role of bonding social capital is to support the community members. Meanwhile, the reciprocal element with bridging social capital occurred frequently and at a higher level. This is in line with Azarudin (2015), who stated that the tolerance values between the Chinese Muslim and Malay communities, such as in doing daily activities, visiting each other, exchanging food, helping one another, and so on, imply that the two communities are mutually supportive. Nevertheless, this level of reciprocity did not extend to the financial aspect. This is notable considering that the purpose of bridging social capital is not only to meet social needs but also economic ones (Grafton & Knowles, 2004). This finding suggests that Islamisation allows respondents to work cooperatively to obtain social and religious benefits from bridging social capital, but not financial benefits. --- Cohesion The cohesive element indicates respondents' feeling of being accepted by, belonging to, and being loved by both bonding social capital (i.e., family members who are not converted to Islam) and bridging social capital (i.e., the Malay community). Figure 5 shows the cohesive level of bonding social capital. The study findings reveal that 37 percent of respondents indicated the moderate level, 32 percent were at the low level, and 31 percent were at the high level. Meanwhile, Figure 6 illustrates the cohesive level of bridging social capital. Based on the results, it can be seen that 52 percent of respondents were at the high level, 44 percent were at the moderate level, and only four percent were at the low level. In terms of cohesion with bonding social capital, respondents were distributed almost equally across the levels. This differs from the case of bridging social capital, in which the majority of respondents indicated a high level of cohesion and only a few had a low level. Table 4 shows results pertaining to the frequencies of cohesion with bonding and bridging social capitals. As a whole, cohesion with bonding social capital happened only occasionally (2.8), whereby respondents at times felt that they were in agreement (3.3), friendly (3.3), that their religion was being respected (3.3), and that they could talk about religion (2.5) with their bonding social capital. In addition, cohesion in the financial aspect rarely happened, whereby respondents seldom talk about finance (2.2) and rarely could borrow money easily (2.2) from their bonding social capital. This finding was noted by Informant 4, who stated that: "We're not close because we feel that they (family) shun us.. they feel that we shun them. That's why we've become not very close. Because it's different now, right... we don't have the same religion, that's why." (Informant 4). According to Informant 4, cohesion happened infrequently in bonding social capital due to the differing perceptions between respondents and their bonding social capital. The informant also mentioned that religious difference is the reason for the occasional occurrence of cohesion. With regard to the frequency of cohesion in bridging social capital, Table 4 reveals that cohesion generally happened frequently (3.5). Specifically, respondents are often friendly (4.0) and in agreement (3.9) with bridging social capital.
Furthermore, the Islamic religion which they have embraced are frequently being respected (3.9) and they often talk about Islamic religion (3.9). However, when it comes to financial aspect, their cohesion only took place occasionally. At times, respondents can borrow money easily (2.7), and only at occasion where they could talk about financial aspect (2.9) with their bridging social capital. This finding indicates that cohesion between respondents and bridging social capital frequently occurs, except for those involving the financial aspect. This point was also agreed by informants, such as the statement by Informant 3: In conclusion, the cohesive element occurs more frequently between respondents and bridging social capital, compared to bonding social capital which occurs on occasion basis. The occasional occurrence and moderate level of cohesive element as found in this study is different from the reality of the Chinese community. According to Lyndon, Wei, and Mohd Helmi (2014), the Chinese community has a close relationship with their family. In this study, respondents' conversion to Islam did not lead to the persistence of cohesion between respondents and bonding social capital. Based on qualitative findings, respondents generally stated that there are different perceptions between respondents and bonding social capital whereby in the relationship context, respondents felt that they are being shunned by family, and likewise for their family. The lack of cohesion among respondents in this study contradicts with the views of Amran (1985), Mohd Syukri Yeoh and Osman (2004), andOsman (2005) stating that the Chinese community have more negative perceptions towards Chinese people who converted to Islam than other religions. On the contrary, our study findings were only related to the family's perception that the Islamic converts do not want to be a part of the family anymore. Furthermore, the transformation of Chinese Muslims' way of life to adapt with the living of the Malay community (Razaleigh et al., 2012) was also observed to be the factor causing the lack of cohesion in respondents' bonding social capital. This is because the Islamisation of Chinese Muslims is regarded as they are becoming Malay, and thus causing Chinese Muslims to abandon their life as a Chinese. Another factor that negatively affect respondents' cohesive elements is because religion has a disintegrative effect in which its presence has built a subtle and thin boundary between those who have embraced a new religion and those who are still holding the inherited old world (Taufik, 2009). Therefore, respondents' Islamisation has given a certain perception towards respondents and their bonding social capital, and thus reducing the cohesion among them. The fact is that, the cohesive element is important for the community members to feel that they are being accepted by and belonged to the community, as well as having 'a sense of own place' in the community (Dale & Sparkes, 2008). Furthermore, the high level and frequent occurrence of respondents' cohesion with bridging social capital as noted in this study are not in line with Razaleigh et al. (2012) who found that Chinese people are socially less integrated with the Malay community after they embraced Islam. Similarly, Marlon et al. 
(2014) who stipulated that racial sectionalism is still taking place between the Chinese Muslim and Malay communities, reported that the levels of understanding, acceptance, and integration of Chinese Muslims towards the Malay culture are still moderate. This shows that there is a difference between Chinese Muslims in Kelantan and those in other states in this country in terms of the cohesive aspect. --- CONCLUSION The Chinese Muslim community in this study indicated that their social capitals in the aspects of trust, reciprocity, and cohesion with family members who are not converted to Islam (i.e., bonding social capital) only occur on occasional basis after their conversion to Islam. A different result was observed for respondents' bridging social capital, i.e. the Malay community, whereby their social capitals in the aspects of trust, reciprocity, and cohesion has taken place frequently and most of them were noted at the high level. Thus, it can be concluded that potentials of community, which are social capitals, can be affected by the religious factor. Other than that, bonding social capital could also give implications on maintaining the sustainability of relationship and affect its ownership because those elements bring impacts on the interaction between respondents and bonding social capital. Limited interaction in a strong relationship, like bonding social capital, will lead to the lack of psychological support, as well as negative impacts on the quality of relationship, mutual assistance, and togetherness between respondents and bridging social capital. The lack of those elements can also affect respondents' harmonious living, and it might as well jeopardise their daily life. Moreover, it is also of concern that respondents' low level of bonding social capital could cause them to lose social and economic supports in own community, and it is also worrying that it might break the ties between respondents and their bonding social capital. Indirectly, this could lead to negative perceptions among non-Muslim family members towards Islam and the Muslim community. Meanwhile, bridging social capital (i.e. the Malay community) was found to give positive implications to the Chinese Muslim community whereby they can obtain various benefits from the close social bridging. Moreover, it also provides a wider network to respondents when their bonding social capital becomes limited in terms of closeness. Other than that, bridging social capital also contributed to respondents' collective actions in solving problems, while increasing the closeness among the local community in order to provide a good social environment to the respondents' group of community. However, in view of the negative aspect, such high level of closeness and frequent occurrence of this element might give the implication revealing respondents as the absolute property of bridging social capital (the Malay community), yet in fact they are actually a part of bonding social capital (family of origin). It is worrying that this element could lead to the emergence of negative perceptions which jeopardise respondents' bonding social capital. It is even worse when functions of bonding social capital are no longer needed by respondents, although there are still more spaces for them to re-establish their relationship with bonding social capital after they converted to Islam. Therefore, it is suggested that the Chinese Muslim community should improve and strengthen their relationship and interaction with bonding social capital. 
Chinese Muslims should also bear in their mind that the Islamic religion as they have embraced is highly emphasising on the need to establish a good relationship with original family who are non-Muslims. Through this effort, the Chinese Muslim community can regain their position as part of their bonding social capital, even with differences in the aspects of religion and values. From another context, Chinese Muslims should also continually strengthen and maintain the relationship with bridging social capital so that such good relationship can give more contributions towards the quality of life of the Chinese Muslim community.
Background and Purpose: Typically, Chinese Muslims have relationship conflict with their non-Muslim family (bonding social capital) and Malay community (bridging social capitals) after converting to Islam. The conflict will affect their social capital. The main aim of this study was to identify the bonding and bridging social capitals among Kelantan Peranakan Chinese Muslim community in Kelantan, Malaysia in the aspects of trust, reciprocity, and cohesion. Methodology: This descriptive study was conducted utilising the sequential explanatory mixed method approaches, involving Chinese Muslims in the Kelantan state. A total of 75 respondents participated in the quantitative study, and five of them involved in the qualitative study. The methods used for sampling were the purposive sampling and snowball sampling. The quantitative data were collected through a survey questionnaire, while the qualitative data were gathered through semi-structured interviews.The findings revealed that the reciprocal and cohesive elements mostly occurred with bridging social capital only. As for the trust aspect, the respondents indicated that they believe in bonding and bridging social capitals only on occasional basis. It was also found that the relationship conflict existed among Chinese Muslim after conversion with their family members who are not converted to Islam and also with the Malay community.
The roots of healthy aging: investigating the link between early-life and childhood experiences and later-life health
Nan Lu, Peng Nie and Joyce Siette
In recent years, there has been a growing trend among social scientists and public health researchers to employ life course data and analytical techniques as means of better comprehending the biological, social and environmental factors that determine health outcomes during the later stages of life. By tracing the association between social circumstances and health over the course of an individual's life, from childhood through to older age, this approach seeks to develop a more nuanced understanding of this complex relationship. The importance of early life experiences on people's health throughout the life-course is not novel. Decades of research have identified the impact of early life experiences on later health [1]. Indeed, recent studies have found a number of relevant childhood variables, including but not limited to socioeconomic status, adverse experiences (e.g., abuse and neglect), disease, and health resources during childhood [2], with cascading effects on health during adulthood and late adulthood. Proponents of the latency model suggest that poor childhood conditions could have a long-term and irreversible influence on individuals' health trajectories [3]. For example, malnutrition in childhood could weaken immune systems and contribute to lower growth rates of musculoskeletal systems, which could further influence joint inflammation in later life [4]. Adverse experience and poor health care resources in childhood could also impose a long-term adverse impact on brain development, which could contribute to cognitive impairment at older age [5]. Furthermore, the pathway model suggests that childhood conditions could indirectly affect health in later life through adulthood conditions [1]. Life-course perspective and cumulative inequality theory have further enriched our understanding of protective and risk factors in early life and how they affect the health of older adults [3]. Given that the first 1000 days of life between conception and a child's second birthday have short- and long-term effects on human health and function, and are identified as the most crucial window of opportunity for interventions [6], a growing number of studies have investigated the linkage between in utero circumstances and health in later ages. From a life-course perspective, the fetal origins hypothesis posits that fetal exposure to an adverse environment, in particular to in utero malnutrition, is associated with increased risks of cardiovascular and metabolic diseases in adulthood and older age [7]. A large body of literature has validated this hypothesis in older populations. For instance, a strand of existing studies has linked fetal malnutrition or famine with later-life health problems, including decreased glucose tolerance, schizophrenia, heart disease, obesity, type 2 diabetes, increased mental illness and mortality. Additionally, prior literature has also associated other in utero risk factors, such as exposure to conflict and violence [8] and influenza pandemics [9], with ill health in older age. Nonetheless, life-course studies on the nexus between prenatal adversities and later health are threatened by mortality selection [10].
As such, the early-life impacts of adversities on later-life health may be weak or even disappear when the influence of selection outweighs the detrimental effects of fetal exposure to adversities or fetal exposure indeed has no long-run health impacts [10]. We acknowledge that estimates of the effects of in utero exposures on later-life health may be sensitive for different analytic approaches and measures of health outcomes. Although there are a number of studies concerning the "long-arm" of childhood conditions, there remains major research gaps. First, the interpretations of childhood experiences are culturally and socially dependent. Therefore, empirical evidences across countries and culture, especially those from developing countries and regions, are needed to test these hypotheses. Additionally, there is a lack of consensus on the conceptualization and measurement of childhood conditions. While many studies assess one or several aspects of childhood conditions, future studies are recommended to use a comprehensive set of measures of childhood conditions to test their combined effects on an individual's health in later life simultaneously. Indeed, the exploration of multiple experiences and exposures will enable a better assessment of the breadth of childhood adversity and opportunities and its link with both adults' and older adults' health. Longitudinal research allows us to not only test the baseline level (i.e. intercept) and change rate (i.e. slope) of health outcomes and how they were affected by childhood conditions, but also examine the mechanisms linking childhood conditions, adulthood conditions, and health outcomes in later life. Enhancing our comprehension of the cumulative impact of childhood experiences across various key timepoints can promote multidisciplinary prevention strategies that emphasize early intervention. By providing collaborative services that address diverse adversities affecting individuals and families throughout their lives, these efforts can deliver integrated programs that offer support and decrease the likelihood of future generations being impacted by negative experiences. Optimizing the long-term health of individuals requires an in-depth understanding of the roots of healthy aging, from early experiences to mid-life health, and its associated impact on later-life health. Physical, social, mental and biological environments are likely to play a synergistic, critical, yet complex role in promoting and maintaining healthy aging. In this Collection, we aim to present original research and evidence synthesis to advance our understanding of the relationship between early experiences, later-life health, and the physical, social, and organizational aspects of being. We particularly welcome contributions that explore this relationship and offer insights into optimizing aging and wellbeing. We hope that this collection will empower healthcare professionals, researchers and policy makers to find innovative ways to enhance care and promote healthy aging on a population-level. --- Data Availability Not applicable. --- Authors' contributions All authors conceived and drafted the Editorial. PN and JS revised the Editorial. All authors read and approved the final manuscript. --- Declarations Ethics approval and consent to participate Not applicable. --- Consent for publication Not applicable. Competing interests NL, PN and JS are guest editors of the Collection. NL, PN and JS are Editorial Board members. 
Whilst early-life conditions have been understood to impact upon the health of older adults, further exploration of the field is required. There is a lack of consensus on conceptualising these conditions, and interpretation of experiences are socially and culturally dependent. To advance this important topic we invite authors to submit their research to the Collection on 'The impact of early-life/childhood circumstances or conditions on the health of older adults' .
Background The coronavirus disease 2019 (COVID-19) pandemic has posed a serious and persistent threat to global public health and has brought unprecedented changes to daily life. Moreover, the unprecedented scope of the worldwide pandemic has led to extraordinary demands on the healthcare system, resulting in critical shortages of medical resources and serious reductions in social capital [1]. Thus, to alleviate the burden of the pandemic, numerous countries have implemented a number of non-pharmaceutical interventions, such as social distancing and individual hygiene practices, although there have been differences in both the intensity and effectiveness of these interventions [2]. Along with the risk of infection itself, the collateral effects of the pandemic have affected population health and may be associated with mortality risk through various pathways [3][4][5][6][7][8][9][10]. During the pandemic, medical resources and mobilisation have been concentrated on patients with confirmed COVID-19, and less critical medical services for non-COVID-19 patients with less severe or less urgent diseases and/or those at a lower age-related risk have frequently been postponed or cancelled [3,4]. In addition, previous studies have reported that medical accessibility is closely associated with socioeconomic status [5,6], and that changes in lifestyle and health behaviours during the pandemic (such as wearing masks and engaging in fewer social and physical activities) might exhibit non-uniform effects on people with heterogeneous characteristics, with differential findings according to disease type, age, sex, educational level, and marital status [5,[7][8][9][10]. In sum, these results indicate that limited access to medical services during the pandemic might disproportionally affect individuals depending on their medical and socioeconomic status. Excess mortality, defined as the increase in deaths compared to the expected number of deaths, has been widely used as a representative indicator for the damage caused by the pandemic with respect to human health [11]. Multiple studies have reported excess mortality attributed to the pandemic [12][13][14][15][16]. Nevertheless, although it can be strongly conjectured that the health damage related to the pandemic is heterogeneous among populations, most previous studies on pandemic-associated excess mortality have solely addressed total mortality (i.e., without consideration of causes of death and variation according to individual characteristics), and only a few studies have evaluated cause-, sex-, age-, race-, or income level-specific impacts [13][14][15][16][17][18]. However, an in-depth examination of cause-specific and individual-specific excess mortality can provide scientific evidence informing interventions in vulnerable populations as well as public health resource allocation. We note that South Korea (hereafter termed Korea) has been evaluated as a country that has successfully responded to the pandemic with widespread testing and epidemiological investigations at the initial pandemic stage; therefore, assessing excess deaths occurring due to the pandemic in Korea can provide an informative evidence base for public health researchers and policymakers [19,20]. Nevertheless, although the socio-demographic characteristics may be involved in shaping the consequences of COVID-19, they have not been considered in previous studies in Korea. 
Hence, this study aimed to investigate nationwide excess mortality during the 2020 pandemic period in Korea and to identify relevant factors that could affect excess mortality, including causes of death and individual characteristics (i.e., age, sex, educational level, and marital status). We hypothesised that we would observe social inequities in mortality outcomes during the pandemic period. --- Methods --- Statement on guidelines This study complies with relevant guidelines and regulations. All our dataset has been publicly available and did not include any identifiable information. This study was carried out using only data from Statistics Korea, Korea Disease Control and Prevention Agency, and the Korea Meteorological Administration, and there was no direct involvement of participants. Thus, patient consent procedures and ethics approval were not required for this study. --- Data We downloaded data on deaths occurring between 2015-2020 in all 16 regions of Korea from Statistics Korea [21]; the information available for death case with individual characteristics: date of death, age, sex, education level, marital status, and underlying causes of death (classified according to the 10 th Revision of the International Classification of Diseases; ICD -10). From this data, we calculated the daily number of deaths from all causes and by eight leading causes of death and the individual characteristics. We also collected data on confirmed cases of COVID-19 occurring in 2020 from Korea Disease Control and Prevention Agency [22]. Data on daily average temperatures in 2015-2020 across 16 regions in Korea were obtained from the Korea Meteorological Administration [23]. --- Causes of death We considered deaths from all causes as well as due to eight leading causes of death based on the main category (i.e. the first letter of the code) of the ICD-10 code, including infectious diseases, neoplasms, metabolic diseases, circulatory diseases, respiratory diseases, genitourinary diseases, ill-defined causes, and external causes (see Supplementary Table S1 for more detailed information). The ICD-10 codes for COVID-19 deaths (U07.1, U07.2) were excluded from this study to identify the collateral impacts of the pandemic on mortality and the COVID-19 deaths accounted for only a small portion of total deaths (950 in total; 0.3% of total deaths) (see Supplementary Table S2 for more detailed information). --- Individual characteristics To investigate the impact of COVID-19 on excess deaths according to socio-demographic factors, death cases were aggregated by sex, age (<unk> 65, 65-79, and <unk> 80 years), education level (elementary school, middle school, high school, and <unk> college), and marital status (single, married, other [e.g., divorced, widowed]). --- Two-stage analyses We conducted two-stage interrupted time-series analyses to quantify the excess risk of mortality during the COVID-19 pandemic period as compared with the prepandemic period in Korea, following a methodological approach delineated in previous studies [24,25]. In the first stage, a quasi-Poisson regression model was applied to each of the 16 regions in Korea [26]. In the time-series analysis, the usage of other methods (e.g., autoregressive integrated moving average model) [27] was limited, because we used the death count data which takes values in non-negative integers. Thus, we performed quasi-Poisson regression with seasonality and long-term trend adjustments using a spline function [26]. 
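To make the first-stage regression concrete, a minimal Python sketch is given below; the detailed specification of the actual model (outbreak spline, long-term trend, seasonality, day-of-week terms, and temperature) is described in the following paragraph. This is an illustration rather than the authors' code: the column names (date, deaths, temp_mean), the knot positions, and the use of a simple temperature spline in place of the study's distributed lag nonlinear model are all assumptions.

```python
# Illustrative first-stage model for a single region: quasi-Poisson GLM of daily
# death counts with a quadratic B-spline for days since the first confirmed
# COVID-19 case (January 20, 2020), a linear long-term trend, cyclic seasonality,
# day-of-week dummies, and a simplified temperature spline.
# Assumed (hypothetical) columns in region_df: date, deaths, temp_mean.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

def fit_region(region_df: pd.DataFrame):
    d = region_df.copy()
    d["date"] = pd.to_datetime(d["date"])
    d["trend"] = (d["date"] - d["date"].min()).dt.days            # linear long-term trend
    d["doy"] = d["date"].dt.dayofyear                             # day of year (seasonality)
    d["dow"] = d["date"].dt.day_name()                            # day-of-week indicator
    first_case = pd.Timestamp("2020-01-20")                       # first confirmed case in Korea
    d["t_covid"] = (d["date"] - first_case).dt.days.clip(lower=0) # 0 before the outbreak

    model = smf.glm(
        "deaths ~ bs(t_covid, degree=2, knots=(30, 90, 150, 210, 270))"  # outbreak-period risk
        " + trend + cc(doy, df=4) + C(dow) + bs(temp_mean, df=4)",
        data=d,
        family=sm.families.Poisson(),
    )
    # scale='X2' estimates the dispersion from the Pearson chi-square statistic,
    # i.e. a quasi-Poisson fit that allows overdispersed death counts.
    return model.fit(scale="X2"), d
```

Fitting with scale="X2" reproduces the quasi-Poisson behaviour of estimating an overdispersion parameter from the data rather than fixing it at one, which is the reason this family of models is preferred over standard Poisson regression for daily death counts.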
We used the number of days from the first COVID-19 confirmed case to estimate the time-varying risk during the outbreak period (January 20 to December 31, 2020). We included a linear term for date to model long-term trends, a term for days of the year to control for seasonality, and dummy indicators for the day of the week to adjust for variation by week. We also modelled the relationship between average daily temperature readings and mortality using a distributed lag nonlinear model [28,29]. The characteristics of the 16 regions considered in this study are presented in Supplementary Table S2. In the second stage, we pooled the region-specific coefficients of excess risk obtained during the COVID-19 period to the nationwide level using a mixed-effects multivariate meta-analysis approach [30]. The best linear unbiased prediction (BLUP) was then calculated for each of the 16 regions to stabilise the variability due to the large differences in population size between regions, leading to more precise estimates [31]. More detailed information on the two-stage interrupted time-series design employed herein can be found in the Supplementary Material. --- Quantification of excess deaths The relative risk (RR) of excess mortality was calculated to quantify excess deaths attributable to COVID-19. We obtained the predicted values for excess mortality via BLUP region-specific estimates and exponentiated these values to obtain the RR for each day of the outbreak period in each region. The daily number of excess deaths was computed as n * (RR -1)/RR, where n represents the number of deaths per day. We aggregated the daily excess number of deaths by pandemic wave and plateau for each of the 16 regions and for the entirety of Korea. The definition of the COVID-19 period is presented in the Supplementary Material. We computed empirical 95% confidence intervals (eCIs) for the coefficients using Monte Carlo simulations. We repeated the main analysis described earlier for stratified analyses to estimate the number of excess deaths for each eight leading causes of death and individual characteristics. --- Sensitivity analyses We conducted several sensitivity analyses to assess the robustness of our findings. More specifically, we applied five and six internal knots in the quadratic B-spline function for days since the first COVID-19 confirmed case, four and six knots in the cyclic B-spline function for days of the year, and 14 and 28 days of lag period in the distributed lag nonlinear model. --- Results --- Excess all-cause mortality Total deaths and estimated excess deaths during the pandemic period between February 18 and December 31, 2020 are reported in Table 1. During this period, 260,432 deaths were reported in Korea. The number of excess deaths from all causes was estimated as 663 (95% eCI: -2356-3584), indicating that there was no evident excess in total mortality during the pandemic period as compared with the pre-pandemic period. --- Excess cause-specific mortality Nevertheless, we found heterogeneous excess deaths when evaluating cause-specific deaths (Table 1). For example, the number of deaths related to respiratory diseases decreased by 4371 due to the pandemic (95% eCI: 3452-5480), corresponding to a 12.8% percentage decrease in this mortality outcome (10.4%-15.5%). 
However, excess deaths due to metabolic diseases and ill-defined causes increased during the pandemic by 808 (456-1080) and 2756 (2021-3378), corresponding to percentage increases of 10.4% (5.6%-14.4%) and 11.1% (7.9%-14.0%), respectively. --- Excess mortality by individual characteristics We found that the impact of the pandemic on mortality was disproportionate according to socio-demographic characteristics (Table 1, Fig. 1). For example, excess mortality attributable to the pandemic was prominent in those aged 65 to 79 years (excess deaths 941, 95% eCI: 88-1795; percentage excess 1.2%, 95% eCI: 0.1%-2.4%), in those with an elementary school or lower educational level (1757, 371-3030; 1.5%, 0.3%-2.6%), and in the single population (785, 384-1174; 3.9%, 1.9%-5.9%). By contrast, we found a decrease in mortality during the pandemic among people with a college-level or higher educational attainment (1471 fewer deaths, 589-2328; 4.1%, 1.7%-6.4%). --- Temporal trends in excess mortality For all-cause deaths, we found fluctuations and inconsistent patterns in the temporal trend of the excess risk (RR) and the percentage excess in mortality across the waves and plateaus of the COVID-19 pandemic (Fig. 2). The excess risk of mortality started to decrease from the beginning of the 1st plateau, then gradually increased until it reached its peak in the 2nd wave. Subsequently, the risk continued to decrease, with a sharp decline evident during the 3rd wave. For cause-specific deaths, three types of deaths showed obvious and consistent patterns during the pandemic period: we found a decrease in mortality related to respiratory diseases and an increase in mortality due to both metabolic diseases and ill-defined causes (see Supplementary Table S3). --- Excess mortality by cause of death and individual characteristics Excess mortality by cause-specific deaths and according to individual characteristics is shown in Fig. 3 and Supplementary Table S4. Respiratory disease-related mortality showed an evident reduction during the pandemic in all age groups. However, excess mortality due to ill-defined causes increased prominently in those aged 80 years or older, with a percentage excess of 15% (8.4%-20.8%). Moreover, across all specific causes, an increase in mortality due to the pandemic was generally more evident in those with lower education levels (high school or lower), while a decrease in mortality was more obvious in those with higher education levels (college or higher). The exception to this trend was respiratory disease deaths, which showed reduced mortality across all educational groups. This pattern (i.e., higher excess mortality in those with lower education levels) was more prominent for metabolic and ill-defined causes of death. Excess mortality attributable to the pandemic was generally more pronounced across all specific causes in the single population. --- Sensitivity analysis results Sensitivity analyses were performed to assess whether the findings were consistent across modelling specifications; the results confirmed the robustness of our main results (see Supplementary Tables S5 and S6). --- Discussion This study investigated nationwide excess mortality during the COVID-19 pandemic in Korea according to cause of death and individual characteristics.
In the total population, although no substantial excess in deaths (excluding deaths from COVID-19 itself) was evident during the pandemic period, we found disproportionate impacts of the pandemic on mortality by cause of death, education level, and marital status. In general, the excess mortality attributable to the pandemic was more evident in deaths from metabolic diseases and ill-defined causes, in those with lower education levels, and in the single population. Several previous studies have evaluated trends in excess mortality during the first year of the pandemic. For example, a study evaluating mortality trends in 29 industrialised countries reported an increase in mortality due to the pandemic [13]. Another study evaluating trends in 67 countries also showed that most countries experienced an increase in mortality during the pandemic, with the exception of some countries with higher testing capacities [32]. Nevertheless, we did not detect evident increases in mortality due to the pandemic in Korea in the current study. We conjecture that this pattern may be closely associated with the early and extensive testing and comprehensive epidemiological investigations implemented in Korea in response to the pandemic, which have been identified as effective countermeasures in reducing the spread of COVID-19 and its associated mortality [19,20]. Although some previous studies have reported on excess mortality during the pandemic in Korea, the results have been mixed. For example, some studies showed no evident increase in annual deaths in 2020 [13,17], whereas another study reported a decrease in mortality in 2020 [32]. However, these previous studies were based on weekly or monthly mortality data. Thus, we believe that our study, which was based on daily data and employed a cutting-edge, standardised time-series analysis, can provide more precise estimates than these prior investigations. This study identified that the impact of the pandemic on mortality was disproportionate according to cause of death, age group, educational level, and marital status. First, we found that a large decrease in mortality from respiratory diseases during the pandemic was the major factor underlying the absence of an overall increase in mortality in this study, and that this decrease may have offset the increases in mortality due to metabolic diseases and ill-defined causes that occurred during the pandemic period. This result is consistent with previous studies that examined the decline in the incidence of, and mortality from, respiratory diseases in Korea during the pandemic period [8,17].

Fig. 3 Percentage excess in mortality (with 95% empirical confidence interval) during the COVID-19 pandemic period (February 18 to December 31, 2020) in Korea for each cause of death by age, education, and marital status. Abbreviations: Infectious = Certain infectious and parasitic diseases, Metabolic = Endocrine, nutritional and metabolic diseases, Circulatory = Diseases of circulatory system, Respiratory = Diseases of respiratory system, Genitourinary = Diseases of genitourinary system, Ill-defined = Symptoms, signs and abnormal clinical and laboratory findings, not elsewhere classified, External = Injury, poisoning and certain other consequences of external causes.
We note that the Korean government has implemented high levels of social distancing, personal hygiene measures, and mask-wearing since the initial stage of the pandemic, and that these are likely major factors in the decrease in respiratory virus infections evident in this country [19,20]. However, prominent excess mortality attributable to the pandemic was observed for metabolic disease-related deaths, and this excess mortality was more evident in those with lower education levels and in the single population. These results could partly be explained by the unintended impacts of interventions against the pandemic [7]. For example, social distancing and fewer outdoor activities could increase time spent indoors and lead to worsened health behaviours, such as unhealthier diets and less exercise. Moreover, restricted and reduced access to medical services during the pandemic could negatively affect consistent care for patients with chronic metabolic diseases, and the impacts of this decreased access could be more pronounced in those with low socioeconomic status, resulting in fewer hospital visits and less medication use. In addition, we found that excess mortality due to the pandemic was evident for deaths from ill-defined causes. Interestingly, the number of deaths due to ill-defined causes increased throughout 2020 in Korea, and this pattern was not observed for other causes of death. Moreover, this increasing pattern was more pronounced in those aged 80 years or older. From our study data, we found that 67.8% of deaths from ill-defined causes in 2020 occurred in those aged 80 years or older, and that senility (one of the specific causes within the "ill-defined" category) accounted for nearly half (49.8%) of these deaths (see Supplementary Tables S7 and S8). Although additional investigations are needed, our results imply that older individuals at the end of life may have reduced their hospital visits because of the pandemic, so that the exact cause of death may not have been ascertained and reported accurately for this population. Therefore, we cautiously surmise that such under-ascertainment may have contributed to the increase in deaths recorded as due to ill-defined causes during the pandemic. We also found a disproportionate impact of the pandemic with regard to individual characteristics associated with socioeconomic inequality. First, we found that an increase in mortality during the pandemic was evident in those aged 65-79 years, but we did not detect obvious excess mortality in those aged 80 years or older. We conjecture that this may be related to the fact that the young-old population (i.e., those younger than approximately 80 years of age) may have been more likely to delay or cancel medical care services, whether voluntarily or involuntarily. In other words, these results might be related to the fact that most medical services prioritized older populations and COVID-19 patients during the pandemic period. Also, considering the "depletion of susceptibles" or "healthy survivor" effect, those in the very old age group may be less susceptible to risk factors that can lead to death than those who died earlier in life [4]; however, more in-depth studies are required to support this conjecture. The above speculation is substantiated by the following figures.
In Korea, hospital visits, hospitalisations, and emergency department (ED) visits during the pandemic decreased to a greater degree in those aged 65-79 years than in those aged 80 years or older [33-35]. In addition, the impact of restricted medical access on deaths in those aged 65-79 years can be inferred from the fact that the leading causes of death in this age group in 2020 were diseases that require regular and timely care, such as neoplasms and circulatory diseases (see Supplementary Table S7). Moreover, when investigating the older population, the "depletion of susceptibles" or "healthy survivor" effect should be kept in mind; more specifically, survival to a very old age may indicate that individuals are less susceptible to risk factors that can lead to death, including the impact of the COVID-19 pandemic, than are those who died earlier in life [36,37]. One of our main findings was that excess deaths due to the pandemic were more prominent in those with low educational levels and in the single population, and that this pattern was common to most causes of death. Previous studies have reported that people with low educational levels (i.e., a proxy for low socioeconomic status) generally have worse health outcomes as well as more limited access to health care resources as compared with highly educated people [38]. Single people are also likely to have worse health status than married people, although there is no consensus as to whether marriage provides a protective effect against adverse health outcomes or whether less healthy or socially disadvantaged individuals are more likely to remain unmarried [39]. It should also be considered that unmarried people may have lived with their parents and received protection from their families [40], but such an effect was not observed in this study. Moreover, during the pandemic, single or unmarried people might have become more socially isolated, and people with lower socioeconomic status might have faced more threats to health, including reduced access to necessary care, unemployment, financial insecurity, a lack of psychosocial resources, and less healthy lifestyles [9]. Regarding the temporal trend in the impact of the pandemic on all-cause mortality, we found that the associated excess risk increased during the 2nd wave of the pandemic and then sharply decreased during the 3rd wave, resulting in an offset of the total excess in mortality. This reflects the fact that the trend in total deaths during the period corresponding to the 3rd wave (October 26 to December 31) in 2020 was lower than that in the previous period (see Supplementary Fig. S1). In particular, our results imply that the reduction in all-cause mortality in the winter season, corresponding to the 3rd wave in this study, may be associated with a prominent decrease in mortality from respiratory diseases during that period (Supplementary Table S3). Preventive behaviours aimed at mitigating the spread of the pandemic, such as wearing masks and maintaining personal hygiene, can reduce the risk of infection-related mortality, and these effects might be more pronounced in the winter, when respiratory infections commonly occur [8]. Nevertheless, this study only investigated trends during the first year of the COVID-19 pandemic, and additional studies are needed to explore long-term trends in excess mortality attributable to the pandemic. Some limitations of our study must be acknowledged when interpreting the findings reported herein.
First, we did not account for seasonal influenza activity or other time-varying confounders that can affect the relationship between COVID-19 and mortality, as the relevant data were not available. Future studies should consider how such time-varying confounders can be controlled for in the model. Second, in addition to this lack of adjustment for confounders, our study design (an ecological study with a time-series design) is limited in its ability to demonstrate the causal effect of COVID-19. Therefore, further studies with more elaborate data and robust methods for counterfactual analysis, such as the synthetic control method, are needed. Finally, we only examined excess deaths during the COVID-19 period in 2020, which may be insufficient to capture the prolonged effects of the pandemic on mortality. This issue can be addressed by additional investigations of trends in excess mortality due to the pandemic over a longer period. Despite these drawbacks, a notable strength of our study is the application of a cutting-edge two-stage interrupted time-series design that allows for flexible estimation of excess mortality and adjusts for temporal trends and variations in known risk factors. Another major strength of our study is that we performed this analysis using officially reported nationwide death data at a daily resolution and stratified the primary findings by cause of death as well as by individual characteristics, thus offering comprehensive, evidence-based information on the impact of the pandemic in Korea to inform future public health research, policy decision-making, and resource allocation. --- Conclusion In conclusion, our study indicates that no excess in all-cause deaths occurred during the COVID-19 pandemic period in Korea in 2020, although differential risks of mortality were evident across specific causes of death and individual characteristics. The findings of our study highlight the need for efforts to address disproportionate access to medical care as well as inequities in health status that have been exacerbated by the pandemic, and they likewise provide important information regarding the allocation of resources for interventions aimed at addressing inequities in medical and socioeconomic status. --- Availability of data and materials The data used for this study will be made available to other researchers upon reasonable request. Data for this study were provided by Statistics Korea, the Korea Disease Control and Prevention Agency, and the Korea Meteorological Administration, and the data are publicly available. --- Supplementary Information The online version contains supplementary material available at https://doi.org/10.1186/s12889-022-14785-3. Additional file 1: Supplementary Table S1. Causes of death and corresponding ICD-10 codes. Supplementary Table S2. Characteristics of the 16 regions in Korea. Supplementary Methods. Supplementary Table S3. Number of total deaths and estimated excess deaths (with 95% empirical confidence intervals) during the COVID-19 pandemic period (February 18 to December 31, 2020) in Korea by cause of death and phase of the pandemic. Supplementary Table S4. Percentage excess in mortality (with 95% empirical confidence interval) during the COVID-19 pandemic period (February 18 to December 31, 2020) in Korea by cause of death and individual characteristic. Supplementary Table S5. Percentage excess in mortality (with 95% empirical confidence interval) by cause of death for main model and each sensitivity analysis. Supplementary Table S6.
Percentage excess in mortality (with 95% empirical confidence interval) by individual characteristic for main model and each sensitivity analysis. Supplementary Table S7. Number of total deaths (%) during the COVID-19 pandemic period (February 18 to December 31, 2020) in Korea by cause of death and individual characteristic. Supplementary Table S8. Number of total deaths (%) by main specific causes of Symptoms, signs and abnormal clinical and laboratory findings, not elsewhere classified (R00-R99) in 2020. Supplementary Figure S1. Temporal trends of total deaths during the study period (2015-2020). Authors' contributions J.O., J.M., C.K., and W.L. conceived and designed the study. J.O. performed the statistical analysis and wrote the manuscript. W.L. and H.K. supervised all manuscript procedures. All authors provided input to the preparation and subsequent revisions of the manuscript. The author(s) read and approved the final manuscript. --- Declarations Ethics approval and consent to participate Not applicable. --- Consent for publication Not applicable. --- Competing interests The authors have no actual or potential competing interests to declare. --- Publisher's Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Background: During the coronavirus disease 2019 (COVID-19) pandemic, population mortality has been affected not only by the risk of infection itself, but also through deferred care for other causes and changes in lifestyle. This study aims to investigate excess mortality by cause of death and socio-demographic context during the COVID-19 pandemic in South Korea. Methods: Mortality data for the period 2015-2020 were obtained from Statistics Korea, and deaths from COVID-19 were excluded. We estimated daily excess deaths in 2020 for all causes, for the eight leading causes of death, and according to individual characteristics, using a two-stage interrupted time-series design accounting for temporal trends and variations in other risk factors. Results: During the pandemic period (February 18 to December 31, 2020), an estimated 663 (95% empirical confidence interval [eCI]: -2356 to 3584) excess deaths occurred in South Korea. Mortality related to respiratory diseases decreased by 4371 (3452-5480), whereas deaths due to metabolic diseases and ill-defined causes increased by 808 (456-1080) and 2756 (2021-3378), respectively. The increase in all-cause deaths was prominent in those aged 65-79 years (941, 88-1795), those with an elementary school education or below (1757, 371-3030), and those who were single (785, 384-1174), while a decrease in deaths was pronounced in those with a college-level or higher educational attainment (1471, 589-2328). Conclusions: No evidence of a substantial increase in all-cause mortality was found during the 2020 pandemic period in South Korea, as a result of a large decrease in deaths related to respiratory diseases that offset increased mortality from metabolic diseases and diseases of ill-defined cause. The COVID-19 pandemic has disproportionately affected those of lower socioeconomic status and has exacerbated inequalities in mortality.
I. Introduction Following the outbreak of the novel coronavirus disease (COVID-19) in Wuhan, China, in December 2019, COVID-19 spread rapidly to other countries, including Korea and Japan, two of the countries closest to China. Korea's and Japan's first confirmed cases were reported on January 19, 2020 and January 16, 2020, respectively. Considering its severity, the World Health Organization declared the disease a pandemic on March 11, 2020 [1]. By April 30, 2020, the incidence of COVID-19 in Korea (10,765 confirmed cases, 247 deaths) and Japan (14,119 confirmed cases, 435 deaths) showed downward trends, owing to the responses of both governments [2]. However, physical and psychological stress among the general public has continued, given the continued occurrence of new confirmed cases. Psychological counseling services have reportedly increased after the emergence of COVID-19 due to the increased incidence of depression and excessive stress [3-5]. Previous studies also indicated the possibility of increased distress and psychological fatigue among the general public due to the escalation of governmental regulations [6]. In response, the Korean government announced a "psychological quarantine" initiative on March 6 to address the psychological stress induced by COVID-19. Its National Trauma Center (http://nct.go.kr) provides a "COVID-19 Integrated Psychological Support Group", "Ways to keep good mental health", and "Counseling services to patients and their parents" [7]. The Japanese Ministry of Health, Labour, and Welfare has implemented psychological treatment and support projects in response to COVID-19, providing counseling in the form of a chat service called "Consultation of the social network services (SNS) mind related to COVID-19" [8]. While these national-level mental health and stress policies should be implemented after properly identifying citizens' mental health conditions, stress, and needs concerning COVID-19, previous studies showed that the COVID-19 mental health policies in Korea and Japan lacked information about these needs [9,10]. People from different countries show different reactions to the COVID-19 pandemic owing to their different sensitivities, government responses, and psychological support; thus, consideration of these diverse aspects is critical [11-13]. In recent years, Internet and smartphone usage has increased rapidly, and social media platforms function as new, modern forms of communication. More than 60% of Koreans use the Internet for social networking, and more than 50% of Japanese report using more than one SNS platform. Previous studies have analyzed the perceptions and emotional status of the public through Twitter, a major social media platform, and have shown that social media platforms can reflect users' emotional states [14]. This study analyzed the perceptions and emotions of Korean and Japanese citizens regarding COVID-19. It analyzed the frequency of words used in Korean and Japanese tweets related to COVID-19 and the corresponding changes in public interest. It also aimed to provide evidence to support the COVID-19 mental health policies of both governments. --- II. Methods --- Study Design This cross-sectional study analyzed Twitter posts (tweets) from February 1, 2020 to April 30, 2020 to determine public opinion regarding the COVID-19 pandemic in Korea and Japan. --- Data Collection We collected data from Twitter (https://twitter.com/), a major social media platform in Korea and Japan, between February 1, 2020 and April 30, 2020.
Search terms used to collect tweets (posts on Twitter) included the word "corona" written in Korean and in Japanese. Python 3.7 libraries (BeautifulSoup and GetOldTweets3) were used for data collection. Due to the large number of tweets in Japan, we limited the daily collected data to 50,000 tweets. The number of tweets collected in Korea and Japan was 1,470,673 and 4,195,457, respectively. The collected tweets were then segmented into words using morphological analysis, and nouns and hashtags were extracted. After the morphological analysis, duplicate and irrelevant words were removed. The final analysis included 1,244,923 and 3,706,366 tweets from Korea and Japan, respectively (Figure 1). --- Statistical Analysis After data collection, we used three kinds of statistical analysis. First, we used KR-WordRank analysis for Korea and frequency analysis for Japan. Given the difficulties of text-mining the Korean language due to ambiguous word spacing, domain fit, and postpositions such as "eun", "neun", and "iga" [15], the KR-WordRank method, which has been used widely in previous studies, was selected for analysis. KR-WordRank is a text-mining approach that performs unsupervised word segmentation. It can be divided into the exterior boundary value (EBV), which represents the probability of words occurring around the central word, and the interior boundary value (IBV), which reflects the cohesion of consecutive characters within the central word. Each word's EBV is calculated and then iteratively reinforced through the EBVs of neighboring words, so that related words mutually strengthen one another. In contrast, the IBV scores word importance using mutual information (MI), which is computed from the probabilities of consecutive characters. Through this process, KR-WordRank ranks words by their importance in the network. For Japanese tweets, frequency analysis was more suitable because Japanese words are easily recognized; moreover, frequency analysis has the advantages of high computation speed and easy implementation. Through this analysis, we estimated changes in the frequency of each word over time. Following the KR-WordRank analysis, the data from February 1, 2020 to April 30, 2020 were visually represented in a "heat diagram". Second, we used word clouds to analyze word frequency from February 1, 2020 to April 30, 2020 in Korea and Japan (a minimal code sketch of this word-frequency step is given below). Third, we analyzed rank flowcharts by categorizing the words into four types (social distancing, prevention, issue, and emotion) in both Korea and Japan. --- III. Results --- Crawling Data Characteristics This study collected a total of 2,965,770 tweets, including 1,470,313 tweets from Korea and 4,195,457 tweets from Japan. Since we had limited the daily number of Japanese tweets to 50,000, a maximum of 250,000 tweets could be collected in any 5-day window. In Korea, 371,051 corona-related tweets posted from February 21, 2020 to March 1, 2020 accounted for 25.2% of all tweets. On the other hand, 13,943 tweets from March 12-16 accounted for 0.9%, which was the lowest proportion (Table 1). --- Heat Diagram Figures 2 and 3 present the word trends in Korea and Japan for every 5 days from February 1, 2020 to April 30, 2020. In Korea, the words "COVID-19" and "News" ranked consistently high from February, while "MERS" appeared on Twitter until February 10 and then disappeared.
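As a minimal illustration of the word-frequency and word-cloud steps described in the Statistical Analysis subsection above, the sketch below counts token frequencies and renders a word cloud. It is not the study's pipeline (which used GetOldTweets3 for collection and KR-WordRank for Korean keyword scoring): it assumes tweets have already been tokenized into nouns, the variable names and stop-word list are hypothetical, and it relies on the third-party wordcloud package; a CJK-capable font file must be supplied to render Korean or Japanese words.

from collections import Counter
from wordcloud import WordCloud  # third-party package: pip install wordcloud

def word_frequencies(tokenized_tweets, stopwords=None):
    # tokenized_tweets: iterable of lists of nouns/hashtags, one list per tweet.
    stopwords = set(stopwords or [])
    counts = Counter()
    for tokens in tokenized_tweets:
        counts.update(t for t in tokens if t not in stopwords)
    return counts

def build_word_cloud(freqs, out_path="wordcloud.png", font_path=None):
    # Render a word cloud from a {word: count} mapping; font_path should point
    # to a font containing Korean/Japanese glyphs (hypothetical file below).
    wc = WordCloud(width=800, height=600, background_color="white",
                   font_path=font_path)
    wc.generate_from_frequencies(freqs)
    wc.to_file(out_path)

if __name__ == "__main__":
    # Toy data only, to show the interface.
    tweets = [["corona", "mask"], ["corona", "travel"], ["mask", "news"]]
    freqs = word_frequencies(tweets, stopwords=["rt"])
    print(freqs.most_common(3))
    # build_word_cloud(freqs, font_path="NotoSansCJK-Regular.ttc")  # hypothetical font path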
"Shincheonji (<unk>)" first appeared on February 15 and continued to rank high until April 30. "Travel (<unk>)" was highly ranked on February 5 but disappeared after February 20. "Online (<unk>)" first appeared on April 5, and its rank increased gradually until April 30 (Figure 2). In Japan, "COVID-19 (Corona)", "Impact (<unk>)", "Mask (<unk>)", "China (<unk>)", "Response (<unk>)", "Economy (<unk> <unk>)", and "Government (<unk>)" continuously ranked high from February 5 to April 30. The word "Olympics (<unk> <unk>)" was not in the rankings from February 29 to March 10 but gradually increased from March 15 to April and became a high ranked word in April. The rank of "Washing hands (<unk>)" decreased from March, but increased again from April 20 (Figure 3). --- Word Cloud Figure 4 presents the results of the word cloud analysis of the Korean and Japanese tweets from February 1 to April 30, 2020. In Korea, "COVID-19", "Shinchonji", "Mask", "Daegu", and "Travel" occurred frequently. In Japan, "COVID-19", "Mask", "Test", "Impact", and "China" were identified as highfrequency words. --- Rank Flowchart We analyzed the rank flowcharts of the Korean and Japanese tweets from February 1, 2020 and divided them into four categories, namely, social distancing, prevention, issue, and emotion (Figure 5). --- 1) Social distancing In Korea, the rank of "World" increased from March 4, while the rank of "Travel" decreased after February 26. Since April, the word "Online" appeared and continuously increased in rank. In Japan, the rank of "Going out" and "Home" continuously increased from February 13 and February 19, while the rank of "Postpone" decreased. --- 2) Prevention In Korea, the rank of "Mask" was consistently high. The rank of "Prevention" decreased after February 12 and increased again after April 15. In Japan, the rank of "Mask" was consistently high, while the ranks of "Washing hands" and "Disinfection" decreased. The rank of "Prevention" began to increase after March 25. 3) Issue In Korea, "Shinchonji" and "Daegu" continuously ranked high from February 19. The word "Donation" decreased from March 4 to March 25, after which it increased. Additionally, the ranking of "Economy" showed an upward trend since March 25. In Japan, "Economy" increased since February and reached the highest rank on March 11. The word "Olympics" rapidly decreased since February 26, and then increased from March 11. --- 4) Emotion In Korea, "Government" was ranked 10th, "Overcome" continued to increase after February 26, and the rank trend for "Support" changed from decreasing to increasing from February 12 to March 16. In Japan, the rank of "End" rose since February and ranked in the top 10, while "Worry" and "Anxiety" decreased from April 1. --- IV. Discussion This study analyzed the perceptions and emotions of Korean and Japanese citizens about COVID-19 to gain insight for future COVID-19 responses. There was a difference in the number of tweets between Korea and Japan. The final analysis included 1,470,313 tweets from Korea and 4,195,457 tweets from Japan; the daily Japanese tweets were limited because of their volume. Most Japanese citizens mainly use Twitter as SNS [16] rather than Facebook or Instagram, whereas the ranking of Twitter usage rate by Koreans is 7th (0.2%), which is relatively lower than Japan [17]. This may be due to the difference in populations: the current population of Japan (126,264,931) is 2.4 times higher than Korea (51,709,098) [18]. 
Based on the heat diagram analysis of the tweets, the words "Online", "Economy", and "Donation" gradually reached high ranks from April 2020. Beginning March 1, elementary, middle, and high schools were temporarily closed and moved to online classes, which possibly contributed to the high rank of "Online". The frequency of the word "Online" continuously increased after the Korean Ministry of Education announced an online education system on March 31 [19]. Citizens showed an interest in the economy, which is one of the areas most affected by COVID-19 in Korea. The economic growth rate declined to -0.1% after COVID-19, compared with 2.0% in 2019. This also affected Koreans' perceptions of the economy, with many reporting it as the most difficult economic period since the International Monetary Fund (IMF) intervention in 1998 [20]. The words "Travel" and "Postpone" were ranked high at the beginning of February 2020, but their ranks gradually decreased. "Travel" came to be rarely mentioned in tweets and disappeared from the rank list after February 25, indicating that Koreans had changed their opinions regarding traveling. The rapid spread of COVID-19 from the last week of February might have affected people's interest in traveling, and the sharp decline in the number of flights and travelers (70.8%) provides an additional explanation for this trend [21]. Regarding "Postpone", school reopening and events were postponed for 1 to 2 months in the early stages of COVID-19; however, owing to the continuing delays [19], people may have gradually lost interest in the topic. In Japan, "COVID-19 (Corona)", "Impact", "Mask", "China", "Response", "Economy", and "Government" were highly ranked between February 5 and April 30. The Japanese government distributed two face masks per household on April 17, which generated considerable public debate and possibly contributed to the high rank of the word "Mask". Similar to Korea, Japanese citizens showed interest in their economy, reflecting their difficult economic situation compared with 2019. The postponement of the Olympics by Japan, the host country, may explain the continued increase in the rank of "Olympics". The rank of "Washing hands" decreased in March and then increased again from April, indicating people's interest in personal non-pharmaceutical interventions (NPIs). The Japanese government emphasized isolation and strict social distancing until March, and then promoted personal NPIs from April [22]. We divided the words into four categories, namely social distancing, prevention, issue, and emotion, to analyze the rank flowcharts. Regarding social distancing, the word "World" began to increase in Korea from March 4, 2020, which is close to the time of the pandemic declaration by the World Health Organization (WHO). In Japan, contrary to the decreasing indicators related to going outside (traffic volume, use of public transportation, etc.), the word "Going out" continuously increased from February 19. Considering the requests to stay home and close schools, the rank of the word "Postpone" continued to decrease and that of "Home" began to increase. Regarding prevention, the high rank of "Mask" in both countries can be explained by the high compliance with mask-wearing in Korea and Japan (Korea 78.7%; Japan 77.0%) [23].
In April, words related to personal prevention began to rank higher than in March, when their ranks had declined, indicating citizens' increased interest in personal NPIs. This trend may reflect changes in the policies of both governments, which shifted their focus from strict social distancing to personal hygiene. In Korea, after the government announced its disaster relief plan and people began donating toward COVID-19 eradication, the ranks of "Economy" and "Donation" increased from March 25. In contrast, the rank of "Olympics" in Japan decreased from March 25, the day Japan postponed the Olympics scheduled for 2020. The word "Store" emerged in the rank list from March 4. Some infections emerged in several stores during the first week of March, and Japanese local governments began to request that stores close temporarily from April, which may have affected this trend [24]. Concerning emotion, the rank of "Please" consistently increased in Korea. A previous study showed that people with high compliance with personal preventive measures experienced high psychological stress because of those who did not maintain such practices [25], which might explain the increasing number of tweets requesting that others maintain preventive measures. Furthermore, the word "Please" was also associated with the wish for the COVID-19 pandemic to end. We also observed increasing ranks for "Overcome" and "Support", reflecting the current economic slump. In Japan, citizens' frequent mention of the word "End" suggested their expectation that the COVID-19 pandemic would end. In contrast, mentions of the words "Worry" and "Anxiety" decreased from April, suggesting Japanese citizens' adaptation and growing desensitization to COVID-19. The WHO issued warnings about the possibility of a second wave after several countries eased their policies and citizens began to relax regarding COVID-19 [26]. This study has some limitations. First, it does not represent all age groups, because SNS platforms are used mainly by younger generations rather than the elderly. Second, although there are several SNS platforms, such as Facebook, Naver, Yahoo, Twitter, Instagram, KakaoTalk, and Band, we only collected posts from Twitter because of API permissions. To minimize non-sampling error, we limited data collection to Twitter, although Naver in Korea and Yahoo in Japan are the most popular websites. Future studies should analyze posts from various SNS platforms to sufficiently represent the public opinion of each country. In conclusion, this study analyzed the perceptions and emotions of Korean and Japanese citizens about COVID-19 to gain insight for future COVID-19 responses. The high-frequency words were COVID-19, Shincheonji, Mask, Daegu, and Travel in Korea, and COVID-19, Mask, Test, Impact, and China in Japan. In both countries, COVID-19 and masks were mentioned frequently. The rank flowcharts showed that people's interest in the economy was high in both countries, reflecting worries expressed on Twitter about the economic downturn caused by COVID-19. Although interest in prevention increased from April in both countries, the results also suggest that the general public began to ease their worries regarding COVID-19. We strongly suggest that psychological support strategies be established in consideration of these various aspects of public emotion.
--- Conflict of Interest No potential conflict of interest relevant to this article was reported.
Objectives: This study analyzed the perceptions and emotions of Korean and Japanese citizens regarding coronavirus disease 2019 (COVID-19). It examined the frequency of words used in Korean and Japanese tweets regarding COVID-19 and the corresponding changes in public interest. Methods: This cross-sectional study analyzed Twitter posts (tweets) from February 1, 2020 to April 30, 2020 to determine public opinion of the COVID-19 pandemic in Korea and Japan. We collected data from Twitter (https://twitter.com/), a major social media platform in Korea and Japan. Python 3.7 libraries were used for data collection. Data analysis included KR-WordRank and frequency analyses for Korea and Japan, respectively. Heat diagrams, word clouds, and rank flowcharts were also used. Results: Overall, 1,470,673 and 4,195,457 tweets were collected from Korea and Japan, respectively. Word trends in Korea and Japan were analyzed every 5 days. The word cloud analysis revealed "COVID-19", "Shincheonji", "Mask", "Daegu", and "Travel" as frequently used words in Korea, while in Japan, "COVID-19", "Mask", "Test", "Impact", and "China" were identified as high-frequency words. For the rank flowcharts, words were divided into four categories: social distancing, prevention, issue, and emotion. Concerning emotion, "Overcome" and "Support" increased from February in Korea, while "Worry" and "Anxiety" decreased in Japan from April 1. Conclusions: The observed trends showed that people's interest in the economy was high in both countries, indicating concerns about the economic downturn; therefore, focusing policies on economic stability is essential. Although interest in prevention increased from April in both countries, a relaxation of the general public's concern regarding COVID-19 was also observed.
INTRODUCTION Broadly defined, entrepreneurship involves efforts to bring about new economic, social, institutional or cultural environments (Rindova, Barry, & Ketchen, 2009). Since Schumpeter's (1911, 1942) pioneering work, entrepreneurship has become widely acknowledged as the key driver of the market economy. Yet, entrepreneurship research as a scholarly discipline is relatively young, and several attempts toward developing a coherent entrepreneurship 'research paradigm' have been made (e.g., Davidsson, 2003; Katz & Gartner, 1988; Sarasvathy, 2001; Shane & Venkataraman, 2000; Shane, 2003; Stevenson & Jarillo, 1990). In this respect, the landscape of entrepreneurship research is still to a large extent multi-paradigmatic in nature, including fundamentally different perspectives on what entrepreneurship is, how entrepreneurial opportunities are formed, what determines the performance of new ventures, and so forth (Ireland, Webb, & Coombs, 2005; Leitch, Hill, & Harrison, 2010; Zahra & Wright, 2011). This results in widespread confusion and frustration among entrepreneurship researchers regarding the lack of convergence toward a single paradigm and the continuing lack of definitional clarity (Davidsson, 2008; Ireland et al., 2005). Shane's (2012) and Venkataraman et al.'s (2012) reflections on the 2010 AMR decade award for their article "The promise of entrepreneurship as a field of research" (Shane & Venkataraman, 2000), as well as the subsequent debate, illustrate the disagreement on key paradigmatic issues among prominent entrepreneurship researchers. These differences are not only academic in nature, but also have profound practical implications. For instance, the narrative-constructivist notion of transformation implies that entrepreneurs should focus on acting and experimenting rather than trying to predict the future, as they cannot acquire valid knowledge about uncertain and partly unknowable environments (e.g., Sarasvathy, 2001; Venkataraman et al., 2012). By contrast, other researchers advocate that entrepreneurs should predict carefully, using comprehensive analysis and systematic procedures, before engaging in entrepreneurial activities (e.g., Delmar & Shane, 2003). Fundamentally different perspectives on the phenomenon of entrepreneurship together may provide a deeper and broader understanding than any single perspective can do. However, different ontological and epistemological points of view are also difficult to reconcile and may have diverging implications (Alvarez & Barney, 2010; Leitch et al., 2010). In this paper, we seek to respect the distinct research paradigms currently existing in the field of entrepreneurship, rather than attempt to reconcile highly different assumptions. We start from the idea that the future development of the field of entrepreneurship, as a body of evidence-based knowledge, largely depends on building platforms for communication and collaboration across different paradigms as well as across the practice-academia divide (cf. Argyris, Putnam, & McLain Smith, 1985; Frese, Bausch, Schmidt, Strauch, & Kabst, 2012; Romme, 2003; Rousseau, 2012). In this paper we draw on the literature on mechanism-based explanations (e.g., Gross, 2009; Hedström & Ylikoski, 2010; Pajunen, 2008) to introduce a mechanism-based research synthesis framework that involves outcome patterns, mechanisms and contextual conditions. Moreover, we illustrate how this framework can synthesize research across different entrepreneurship paradigms.
This paper contributes to the literature on entrepreneurship research methods (e.g., Davidsson, 2008; Frese et al., 2012; Ireland et al., 2005) as well as the literature on balancing the scientific and practical utility of research (Corley & Gioia, 2011; Van de Ven, 2007; Van de Ven & Johnson, 2006), by developing a coherent approach that enhances the practical relevance of scholarly work. Defining and developing a research synthesis framework is essential to this endeavor. The framework developed in this paper serves to review and synthesize a dispersed body of research evidence in terms of outcome patterns, contextual conditions and social mechanisms. As such, this paper may also spur a dialogue on the plurality of the entrepreneurship field's ontology, epistemology and research methods, and thus advance it as a scholarly discipline and professional practice. The argument is organized as follows. First, we discuss three modes of studying entrepreneurship that have emerged in the literature: the positivist, narrative and design research modes. Subsequently, a mechanism-based framework for research synthesis across the three research modes is introduced. A synthesis of the fragmented body of literature on opportunity perception, exploration and exploitation then serves to demonstrate how this framework can be applied and can result in actionable insights. Finally, we discuss how the research synthesis framework developed in this paper serves to connect entrepreneurship theory and practice in a more systematic manner, in order to build a cumulative body of knowledge on entrepreneurship. --- THREE MODES OF ENTREPRENEURSHIP RESEARCH The field of entrepreneurship research is multi-disciplinary and pluralistic in nature. It is multidisciplinary in terms of the economic, psychological, sociological, and other theories and methods it draws upon. More importantly, the pluralistic nature of the current landscape of entrepreneurship research arises from three very different modes of engaging in entrepreneurship research, labeled here as the positivist, narrative and design modes. Table 1 outlines the main differences and complementarities of these research modes. The logical positivist research mode starts from a representational view of knowledge, and looks at entrepreneurial phenomena as (relatively objective) empirical objects with well-defined descriptive properties studied from an outsider position (e.g., Davidsson, 2008; Katz & Gartner, 1988). Shane and Venkataraman's (2000) seminal paper exemplifies the positivist mode by staking out a distinctive territory for entrepreneurship (with the opportunity-entrepreneur nexus as a key notion) that essentially draws on mainstream social science. Most entrepreneurship studies published in leading journals draw on positivism, by emphasizing hypothesis testing, inferential statistics and internal validity (e.g., Coviello & Jones, 2004; Haber & Reichel, 2007; Hoskisson, Covin, Volberda, & Johnson, 2011; Welter, 2011). The narrative mode draws on a constructivist view of knowledge, assuming it is impossible to establish objective knowledge as all knowledge arises from how entrepreneurs and their stakeholders make sense of the world (Cornelissen & Clarke, 2010; Leitch et al., 2010). The nature of scholarly thinking here is imaginative, critical and reflexive, in order to cultivate a critical sensitivity to hidden assumptions (Chia, 1996; Gartner, 2007a, 2007b).
Therefore, studies drawing on the narrative mode typically focus on qualitative data, for example in the form of case studies or grounded theory development. Whereas the positivist mode emphasizes processes at the level of either the individual entrepreneur or the configuration of the social context and institutional outcomes (Cornelissen & Clarke, 2010), researchers drawing on the narrative mode acknowledge the complexity of entrepreneurial action and sense-making in its broader context (e.g., Downing, 2005; Garud & Karnøe, 2003; Hjorth & Steyaert, 2005). As such, a key notion in the narrative tradition is the notion of (entrepreneurial) action and sensemaking as genuinely creative acts (e.g., Berglund, 2007; Chiles, Bluedorn, & Gupta, 2007; Foss, Klein, Kor, & Mahoney, 2008; Sarasvathy & Dew, 2005). Appreciating the authenticity and complexity of these acts is thus given precedence over the goal of achieving general knowledge. An example of this type of work is Garud and Karnøe's (2003) study of technology entrepreneurship in the area of wind turbines in Denmark and the US. The design mode draws on Herbert Simon's (1996) notion of a science of the artificial, implying that entrepreneurial behavior and outcomes are considered as largely artificial (i.e., human made) in nature (Sarasvathy, 2004). As such, entrepreneurial behavior and accomplishments are considered as tangible or intangible artifacts with descriptive as well as imperative (although possibly ill-defined) properties. Consequently, entrepreneurship researchers need to "actually observe experienced entrepreneurs in action, read their diaries, examine their documents and sit in on negotiations" and then "extract and codify the 'real helps' of entrepreneurial thought and action" (Sarasvathy & Venkataraman, 2011, p. 130) to develop pragmatic tools and mechanisms that can possibly be refined in experimental work. The rise of 'scientific' positivism almost completely drove the design mode from the agenda of business schools (Simon, 1996), but design thinking and research have recently been regaining momentum among entrepreneurship researchers (e.g., Dew, Read, Sarasvathy, & Wiltbank, 2009; Sarasvathy, 2003, 2004; Van Burg, Romme, Gilsing, & Reymen, 2008; Venkataraman et al., 2012). Although the initial work of Simon is often considered as having a strong positivist stance, the design research discourse has subsequently developed into a research mode that focuses on how people construct tangible and intangible artifacts, which embraces both positivist and constructivist approaches (Cross, 2001; Romme, 2003). Table 1 provides a more detailed account of each research mode.

---------------- Insert Table 1 about here ----------------

As can be inferred from Table 1, each research mode may share characteristics with another one. For example, studies drawing on the design mode often also draw on constructivist perspectives on knowledge (e.g., Dew et al., 2009; Van Burg et al., 2008) that are at the center of the narrative perspective. However, the overall purpose of design research is a pragmatic one (i.e., to develop actionable knowledge), whereas the main purpose of narrative research is to portray and critically reflect. The overall purpose driving each research mode strongly affects the assumptions made about what scholarly knowledge is, how to engage in research, and so forth (see Table 1).
In this respect, each research mode can be linked to one of the 'intellectual' virtues or modes identified by Aristotle: episteme, techne and phronesis. Following Flyvbjerg (2001), the intellectual mode of episteme draws on universal, invariable and context-independent knowledge and seeks to uncover universal truths (e.g., about entrepreneurship). Episteme thus thrives on the positivist idea that knowledge represents reality, and as such, it draws on denotative statements regarding the world as-it-is. Evidently, the mainstream positivist mode in entrepreneurship research largely exploits and advances the intellectual mode of episteme. By contrast, the narrative mode mainly draws on phronesis, which involves discussing and questioning the values and strategies enacted in a particular setting (e.g., the values and strategy that drive a new venture). A key role of phronesis thus is to provide concrete examples and detailed narratives of the ways in which power and values work in organizational settings (Cairns & Śliwa, 2008; Flyvbjerg, 2001). Finally, techne refers to pragmatic, variable and context-dependent knowledge that is highly instrumental (Flyvbjerg, 2001), for example, in getting a new venture started. This is the intellectual mode that is strongly developed among experienced entrepreneurs, who leverage their own expertise and competences and get things done in a pragmatic 'can-do' manner (cf. Sarasvathy, 2001). Aristotle's three intellectual modes appear to be essential and complementary assets in any attempt to create an integrated body of scholarly and pragmatic knowledge on entrepreneurship. Consequently, the three research modes outlined in Table 1 can be positioned as complementary resources in an integrated body of knowledge. This raises the question of how research findings arising from the positivist, narrative and design modes can be combined in a cumulative body of knowledge on entrepreneurship. --- MECHANISM-BASED RESEARCH SYNTHESIS The future development of the field of entrepreneurship largely depends on efforts to combine and synthesize contributions from all three modes in Table 1, to be able to develop a body of evidence-based and actionable knowledge. In this section, we describe a framework for research synthesis. In doing so, we seek to respect the uniqueness and integrity of each of the three modes outlined in Table 1, rather than comparing and possibly integrating them. The literature on evidence-based management, and more recently evidence-based entrepreneurship, has been advocating the adoption of systematic review and research synthesis methods (e.g., Denyer & Tranfield, 2006; Denyer, Tranfield, & Van Aken, 2008; Rousseau, 2006; Rousseau, Manning, & Denyer, 2008) and quantitative meta-analyses (Frese et al., 2012). Briner and Denyer (2012) recently argued that systematic review and research synthesis tools can be distinguished from prevailing practices of reviewing and summarizing existing knowledge in management - such as in textbooks for students, literature review sections in empirical studies, or papers focusing on literature review. The latter practices tend to motivate reviewers to be very selective and emphasize 'what is known' rather than 'what is not known'; reviewers also tend to cherry-pick particular findings or observations, possibly producing distorted views about the body of knowledge reviewed (Briner & Denyer, 2012; Geyskens, Krishnan, Steenkamp, & Cunha, 2009).
Therefore, systematic review and research synthesis methods should be instrumental in synthesizing the literature, by drawing on systematic and transparent procedures (Briner & Denyer, 2012). Quantitative meta-analysis serves to systematically accumulate evidence by establishing the effects that are repeatedly observed and cancelling out weaknesses of individual studies, but there always remains a gap between knowledge and action (Frese et al., 2012). Essentially, a meta-analysis can deliver well-validated and tested predictions of a phenomenon as the regular outcome of the presence/absence of a number of antecedents, without explaining why this phenomenon occurs (cf. Hedström & Ylikoski, 2010; Woodward, 2003). Here, qualitative review and research synthesis protocols, as extensively described and discussed elsewhere (e.g., Denyer & Tranfield, 2006; Denyer et al., 2008; Tranfield, Denyer, & Smart, 2003), have a key complementary role in explaining the contextual contingencies and mechanisms through which particular experiences, perceptions, actions or interventions generate regular or irregular outcomes (Briner & Denyer, 2012). Therefore, we draw on mechanism-based explanation to develop a broadly applicable perspective on research synthesis in entrepreneurship. A large and growing body of literature in a wide range of disciplines, ranging from biology to sociology and economics, draws on the 'mechanism' notion to explain phenomena (Hedström & Ylikoski, 2010). Basically, mechanisms are defined as something that explains why a certain outcome is produced in a particular context. For instance, organization theorists use the mechanism of 'escalation of commitment' to explain ongoing investments in a failing course of action (Pajunen, 2008), and mechanism-based explanations have also gained some foothold elsewhere in management and organization studies (Anderson et al., 2006; Davis & Marquis, 2005; Durand & Vaara, 2009; Pajunen, 2008; Pentland, 1999). In particular, studies drawing on a critical realist perspective (cf. Bhaskar, 1978; Sayer, 2000) have used the notion of mechanism to bridge and accumulate insights from different philosophical perspectives (Kwan & Tsang, 2001; Miller & Tsang, 2011; Reed, 2008; Tsoukas, 1989). This focus on abstract mechanisms is relatively agnostic about the nature of social action (Gross, 2009) and thus can steer a path between positivist, narrative and design perspectives on research. In the remainder of this paper, we therefore start from the idea that research synthesis serves to identify mechanisms within different studies and establish the context in which they produce a particular outcome (Briner & Denyer, 2012; Denyer et al., 2008; Tranfield et al., 2003; Rousseau et al., 2008). We build on mechanism-based work in sociology that draws on a pragmatic notion of mechanisms (Gross, 2009) and thus avoids the ontological assumptions of critical realism, which some have criticized (Hedström & Ylikoski, 2010; Kuorikoski & Pöyhönen, 2012). The literature on pragmatism has identified the so-called 'philosophical fallacy' in which scholars consider categories (e.g., the layered account of reality in critical realism) as essences, although these are merely nominal concepts that have been created to help solve specific problems (Dewey, 1929; Hildebrand, 2003; Kuorikoski & Pöyhönen, 2012).
This fallacy causes conceptual confusion, in the sense that both (critical) realists and anti-realists may not appreciate the integrative function and identity of inquiry, which leads them to create accounts of knowledge that project the products of extensive abstraction back onto experience (Hildebrand, 2003). Although there is some variety in the definition and description of mechanisms, the following four characteristics are almost always present (Hedström & Ylikoski, 2010; Pawson, 2002; Ylikoski, 2012). First, a mechanism explains how a particular outcome or effect is created. Second, a mechanism is an irreducible causal notion, referring to how the participating entities (e.g., entrepreneurs or managers) of a process (e.g., decision-making) generate a particular effect (e.g., ongoing investments in a failing course of action). In some cases, this mechanism is not directly observable (e.g., the market mechanism). Third, mechanisms are not a black box, but have a transparent structure or process that makes clear how the participating entities produce the effect. For instance, Pajunen (2008) demonstrates how an 'escalation of commitment' mechanism consists of entities (e.g., decision makers) that jointly do not want to admit the lack of success of prior resource allocations to a particular course of action and therefore decide to continue this course of action. Fourth, mechanisms can form a hierarchy; while parts of the structure of the mechanism can be taken for granted at one level, there may be a lower-level mechanism explaining them. In the escalation of commitment example, Pajunen (2008) identified three underlying mechanisms: (1) managers assure each other that the past course of action is still the correct one; (2) the owners of the company promote the ongoing course of action and issue bylaws that make divestments more difficult; (3) creditors fund the continuation of the (failing) course of action by granting more loans. In sum, a well-specified mechanism is a basic theory that explains why particular actions, beliefs or perceptions in a specific context lead to particular outcomes. To capture the variety of micro-to-macro levels at which mechanisms can operate in the social sciences, Hedström and Swedberg (1996) distinguished mechanisms operating at different analytical levels; mechanisms at a collective level, for instance, describe how individuals collectively create a particular outcome. Yet, multiple mechanisms can co-produce a particular outcome at a certain level and in a given context. To identify the correct and most parsimonious mechanisms, counterfactual or rival mechanisms need to be considered (Durand & Vaara, 2009; Woodward, 2003; Ylikoski, 2012). By exploring and/or testing different alternative scenarios that have varying degrees of similarity with the proposed explanatory mechanism, one can assess and establish to what extent this mechanism is necessary, sufficient, conditional and/or unique. For instance, by explicitly contrasting two rival mechanism-based explanations, Wiklund and Shepherd (2011) established experimentation as the mechanism explaining the relationship between entrepreneurial orientation and firm performance. Clearly, even a mechanism-based explanation does not resolve the paradigmatic differences outlined in Table 1 (cf. Durand & Vaara, 2009), nor is it entirely ontologically and epistemologically neutral.
As such, the framework for research synthesis outlined in the remainder of this section may be somewhat more sympathetic toward the representational and pragmatic views of knowledge than toward the constructivist-narrative view, particularly if the latter rejects every effort at developing general knowledge (Gross, 2009). Nevertheless, our framework does create common ground between all three perspectives on entrepreneurship by focusing on outcome patterns, social mechanisms as well as contextual conditions. --- Outcome Patterns An idea that cuts across the three literatures outlined in Table 1 is to understand entrepreneurship as a societal phenomenon involving particular effects or outcome patterns. That is, merely contemplating radically new ideas or pioneering innovative pathways does not as such constitute 'entrepreneurship' (Davidsson, 2003; Garud & Karnøe, 2003; Sarasvathy, Dew, Read, & Wiltbank, 2008). Accordingly, entrepreneurship must also include empirically observable outcome patterns such as, for example, 'wealth or value creation' (Davidsson, 2003), 'market creation' (Sarasvathy et al., 2008), 'creating new options' (Garud & Karnøe, 2003), or creating new social environments (Rindova et al., 2009). A key assumption here is that there are no universal truths or straightforward causalities in the world of entrepreneurship. What works well in a new venture in the professional services industry may not work at all in a high-tech startup. Thus, we need to go beyond a focus on simple outcome regularities, as there might be different, possibly unobserved, factors (e.g., conditions and mechanisms) influencing the mechanisms at work (Durand & Vaara, 2009). The aim is to establish causal explanations that have the capacity or power to establish the effect of interest (Woodward, 2003). Therefore, research synthesis focuses on (partly) successful or unsuccessful outcome patterns, which can be characterized as so-called 'demi-regularities' in the sense that they are more than randomly produced, although countervailing factors and human agency may also prevent the outcome (Lawson, 1997; Pawson, 2006). --- Social Mechanisms As previously argued, mechanisms explain why particular outcome patterns occur in a particular context. Many scholars connect social mechanisms to Merton's theories of the middle range that "lie between the minor but necessary working hypotheses that evolve in abundance during day-to-day research and the all-inclusive systematic efforts to develop a unified theory that will explain all the observed uniformities of social behavior, social organization and social change" (Merton, 1968: 39; see Hedström & Ylikoski, 2010; Pawson, 2000). Thus, mechanisms do not aim to describe the causal process in a very comprehensive, detailed fashion, but depict the key factors and processes that explain the essence of an outcome pattern. Considering mechanisms as middle-range theories also highlights that mechanisms are not necessarily empirically observable and that conceptual and theoretical work may be needed to identify the mechanisms explaining why certain outcomes are observed in a particular context.
Social mechanisms in the context of entrepreneurship research involve theoretical explanations, for example, learning in the area of opportunity identification (Dimov, 2007), the accumulation of social capital in organizational emergence (Nicolaou & Birley, 2003), fairness perceptions in cooperation processes (e.g., Busenitz, Moesel, Fiet, & Barney, 1997) or effectuation logic in entrepreneurial decision making (Sarasvathy, Forster, & Ramesh, 2013). Social mechanisms are a pivotal notion in research synthesis because a coherent and integrated body of knowledge can only begin to develop when there is increasing agreement on which mechanisms generate certain outcome patterns in particular contexts. --- Contextual Conditions A key theme in the literature is the heterogeneity and diversity of entrepreneurial practices and phenomena (e.g., Aldrich & Ruef, 2006; Davidsson, 2008; Shane & Venkataraman, 2000). In this respect, Zahra (2007) argues that a deeper understanding is needed of the nature, dynamics, uniqueness and limitations of the context of these practices and phenomena. Contextual conditions therefore are a key dimension of the framework for research synthesis proposed here. In this respect, how mechanisms generate outcome patterns is contingent on contextual or situational conditions (Durand & Vaara, 2009; Gross, 2009). For example, continental European universities operating in a social market economy offer very different institutional, economic and cultural conditions for creating university spin-offs than their US counterparts. In particular, European universities that want to create university spin-offs need to support and facilitate the mechanism of opportunity perception and exploitation much more actively than their American counterparts (e.g., Van Burg et al., 2008). Contextual conditions operate by enabling or constraining the choices and behaviors of actors (Anderson et al., 2006; Pentland, 1999). Agents typically do have a choice in the face of particular contextual conditions, even if these conditions bias and restrict the choice. For example, a doctoral student seeking to commercialize her research findings by means of a university spin-off may face more substantial cultural barriers in a European context than in a US context (e.g., her supervisors may find "this is a dumb thing to do for a brilliant researcher"), but she may decide to push through these barriers. Other types of contextual conditions more forcefully restrict the number of options an agent can choose from; for example, particular legal constraints at the national level may prohibit universities from transferring or licensing their intellectual property (IP) to spin-offs, which (for the doctoral student mentioned earlier) eliminates the option of an IP-based startup. In general, the key role of contextual conditions in our research synthesis framework serves to incorporate institutional and structurationist perspectives (DiMaggio & Powell, 1983; Giddens, 1984) that have been widely applied in the entrepreneurship literature (e.g., Aldrich & Fiol, 1994; Battilana, Leca, & Boxenbaum, 2009; Garud, Hardy, & Maguire, 2007). --- THE DISCOVERY AND CREATION OF OPPORTUNITIES We now turn to an example of research synthesis based on this framework. In this section we synthesize previous research on entrepreneurship drawing on the notion of "opportunity".
This substantial body of literature is highly interesting in the context of research synthesis, because the positivist, narrative and design modes have all been used to conduct empirical work in this area (cf. Dimov, 2011). Moreover, Alvarez and colleagues (Alvarez & Barney, 2007, 2010; Alvarez, Barney, & Young, 2010) recently reviewed a sample of both positivist and narrative studies in this area and concluded that these studies draw on epistemological assumptions that are mutually exclusive, which would impede "developing a single integrated theory of opportunities" (Alvarez & Barney, 2010, p. 558). While we agree with Alvarez and Barney that a single integrated theory based on a coherent set of epistemological assumptions (cf. Table 1) may not be feasible, our argument in the previous sections implies that key research findings arising from each of the three research modes outlined in Table 1 can be synthesized in a mechanism-based framework. --- Review Approach The key question driving the literature review is: which evidence-based insights can be inferred from the literature with regard to how and when entrepreneurs perceive and act upon opportunities? In view of the evidence-based nature of this question, the first step is to include only articles containing empirical studies. In a second phase, after the review of empirical studies, we also turn to related conceptual work. We selected articles that explicitly deal with opportunity perception and/or opportunity-based action. We used the ABI/Inform database and searched for articles in which "opportunity" and "entrepreneur*" or "opportunities" and "entrepreneur*" were used in the title, keywords or abstract. To be able to assess the potential consensus and capture the entire scope of epistemological perspectives in the literature, articles were not only selected from first-tier entrepreneurship and management journals, but also from some other relevant journals. The articles were selected from Academy of Management Journal, Academy of Management Review, Administrative Science Quarterly, American Journal of Sociology, American Sociological Review, British Journal of Management, Entrepreneurship … -------------Insert Table 2 and Table 3 about here------------- To synthesize the findings, we read each article and coded key relationships between contextual conditions, social mechanisms and outcome patterns. In addition, we coded the theoretical and philosophical perspectives used by the authors, which showed that 51 empirical articles predominantly draw on a positivist mode, 20 empirical articles follow the constructivist-narrative mode, whereas 8 articles are within the design mode or are explicitly agnostic or pragmatic (see Table 3). Similar mechanisms, contexts and outcome patterns were subsequently clustered, which resulted in an overview of contextual conditions, social mechanisms and outcome patterns. --- Synthesis Results Table 4 provides an overview of the contextual conditions, social mechanisms and outcome patterns identified in the reviewed studies. -------------Insert Table 4 about here------------- --- Individual cognitive framing of opportunities One of the most discussed mechanisms generating and directing opportunity perception and exploitation (as outcome pattern) is the individual's framing of the situation at hand, in light of existing knowledge and experience (Short, Ketchen, Shook, & Ireland, 2010). Many studies seek to understand this relationship, providing an in-depth understanding of the underlying social mechanisms and contextual conditions. Figure 1 provides an overview of the specific contexts, social mechanisms and outcome patterns.
----------------------Insert Figure 1 about here------------------------ The general mechanism-based explanation here is that an entrepreneur who identifies or constructs an opportunity is most likely to perceive and act upon this opportunity if it is in line with his/her (perceived) prior experience and knowledge. Thus, an important contextual condition is formed by the amount and type of experience and knowledge. A second generic contextual condition concerns the external circumstances, such as technological inventions and changes in these circumstances, which individuals may frame as opportunities. Within these contextual conditions, a number of different social mechanisms explain the outcome patterns of perceiving one or more opportunities, perceiving particular types of opportunities, the degree of innovativeness and development of these opportunities, and finally whether and how people act upon the perceived opportunity. Our review serves to identify three social mechanisms within the individual cognitive framing of opportunities. First, the type and amount of knowledge enables or constrains framing the situation at hand as an opportunity. In general, people with entrepreneurial experience are more likely than non-entrepreneurs to frame something as an opportunity (Palich & Bagby, 1995). Higher levels of education and prior knowledge enhance the likelihood of identifying opportunities (Arenius & De Clercq, 2005; Ramos-Rodríguez, Medina-Garrido, Lorenzo-Gómez, & Ruiz-Navarro, 2010) and thus increase the number of opportunities identified (Smith, Matthews, & Schenkel, 2008; Ucbasaran, Westhead, & Wright, 2007, 2009; Westhead, Ucbasaran, & Wright, 2009) or lead to more innovative ones (Shepherd & DeTienne, 2005), while industry experience makes it more likely that people act upon perceived opportunities and start a venture (Dimov, 2010). More specifically, Shane (2000) showed that the existing knowledge of entrepreneurs directs the type of opportunity identified for commercializing a specific technology (see also Park, 2005). This mechanism appears to have an optimum level, as too much experience can hinder the entrepreneur in identifying new promising opportunities (Ucbasaran et al., 2009). Beyond perceiving an opportunity, knowledge and experience also appear to direct the way in which opportunities are exploited (Dencker, Gruber, & Shah, 2009). The underlying sub-mechanism (explaining the cognitive framing mechanism) is that prior knowledge and experience facilitate recognizing patterns from snippets of information and 'connecting the dots' to ideate, identify and evaluate a meaningful opportunity (Baron & Ensley, 2006; Grégoire, Barr, & Shepherd, 2010; Van Gelderen, 2010). The second social mechanism (see Figure 1) serves to explain that the individual's perception of his/her own knowledge and abilities is also influential, as studies from a more narrative-constructivist mode point out (Gartner, Shaver, & Liao, 2008), thus complementing the first mechanism. The third mechanism says that framing the situation at hand in light of existing knowledge and experience (as a mechanism) does not facilitate the process of identifying an opportunity if the situation does not match the entrepreneur's learning style (Dimov, 2007); this suggests the second and third mechanisms have to operate together. Evidently, other contextual conditions and mechanisms, such as social network structure, also play a role (Arenius & De Clercq, 2005).
In fact, the absence of social network structures can hinder the 'individual cognitive framing of opportunities' mechanism, as shown in a study of Finnish entrepreneurs whose lack of ties in the foreign market tends to hinder the perception of internationalization opportunities, even when they have specific industry knowledge (Kontinen & Ojala, 2011). After completing the review of empirical papers, we turned to related conceptual papers. These papers provide a number of additional insights, which have not yet, or only to a limited extent, been studied empirically. First, conceptual studies have put forward the additional mechanism of entrepreneurial alertness that explains why some entrepreneurs are more aware of opportunities than others (Baron, 2004; Gaglio & Katz, 2001; Tang, Kacmar, & Busenitz, 2012). Second, entrepreneurs' reasoning processes, including metaphorical, analogical and counterfactual reasoning, provide an additional mechanism that serves to explain how entrepreneurs come up with new opportunities (Cornelissen & Clarke, 2010; Gaglio, 2004). Besides these two additional mechanisms, recent theorizing on the role of affect indicates that the feelings and moods of individuals form a contextual condition that influences alertness, experimentation and framing (Baron, Hmieleski, & Henry, 2012; Baron, 2008). As a next step, we considered whether the social mechanisms identified are dependent on each other (e.g., hierarchically, sequentially or in parallel), redundant or counterfactual, and whether there are likely any unobserved mechanisms (cf. Durand & Vaara, 2009; Hedström & Ylikoski, 2010). With regard to the cluster of mechanisms pertaining to individual cognitive framing of opportunities,
Figure 1 lists no counterfactual mechanisms but does display a number of parallel, partly overlapping mechanisms dealing with the amount of knowledge and experience, the perception about this knowledge and experience, and the domain-specificity of that knowledge and experience. As indicated by the underlying studies, however, these mechanisms are not sufficient to produce the outcome patterns, but require other mechanisms, such as social mediation. The 'perception about one's abilities' (Gartner et al., 2008) may be redundant because most other mechanisms identified in our review do not require that entrepreneurs are aware of their abilities. Further research has to establish whether this is the case. -----------------Insert Figure 2 about here-------------------- --- Socially situated opportunity perception and exploitation Many studies show the individual entrepreneur's social embeddedness in a context of weak and/or strong ties mediates the perception of opportunities. We identified multiple social mechanisms basically implying that people, by being embedded in a context of social ties, get access to new knowledge, ideas and useful contacts (e.g., Arenius & De Clercq, 2005; Bhagavatula, Elfring, Van Tilburg, & Van de Bunt, 2010; Jack & Anderson, 2002; Ozgen & Baron, 2007). Figure 2 summarizes the details of specific contexts, social mechanisms and outcome patterns. For instance, through the presence of social connections that exert explicit influence, such as in an incubator program, people can blend new and diverse ideas and obtain access to specialized resources, and also get stimulated by others to become more aware of new opportunities, resulting in the perception of one or more opportunities (Cooper & Park, 2008; Stuart & Sorenson, 2003). A study of entrepreneurship in the windmill industry uncovered the same mechanism by showing that social movements co-shape the perception of opportunities and lead people to imagine opportunities of building and operating windmills (Sine & Lee, 2009). In addition, engaging in social contacts may influence opportunity perception; for instance, people interacting with coworkers who can draw on prior entrepreneurial experiences are more likely to perceive entrepreneurial opportunities themselves (Nanda & Sørensen, 2010).
Moreover, networking activities of entrepreneurs, in combination with observing and experimenting, enable the mechanism of associational thinking (Dyer, Gregersen, & Christensen, 2008) and serve to jointly construct opportunities by combining and shaping insights, as studies in the narrative research mode particularly emphasize (e.g., Corner & Ho, 2010; Fletcher, 2006). The outcome pattern typically observed here is that (potential) entrepreneurs perceive one or more particular opportunities. The social network context also affects the outcome pattern of opportunity exploitation. For instance, in a 'closed network' involving strong ties, the mechanism of drawing on trusted connections can enable resource acquisition and result in better opportunity exploitation (Bhagavatula et al., 2010). Moreover, such ties can provide a new entrepreneur with the legitimacy of established parties and/or reference customers (Elfring & Hulsink, 2003; Jack & Anderson, 2002). In addition, the support and encouragement of entrepreneurs' social networks help entrepreneurs gain more confidence to pursue radically new opportunities (Samuelsson & Davidsson, 2009) or growth opportunities (Tominc & Rebernik, 2007). However, these mechanisms can also hinder opportunity perception when shared ideas and norms constrain people in perceiving and exploiting radically new opportunities, as Zahra, Yavuz and Ucbasaran (2006) showed in a corporate entrepreneurship context. Contextual conditions such as geographic, psychic and linguistic proximity limit a person's existing network, which reduces the number and variation of opportunities that can be mediated by these social ties (Ellis, 2010). In addition, observations in the African context suggest strong family ties also bring many social obligations with them, which may hinder opportunity exploitation; being exposed to a diversity of strong community ties can counterbalance this effect (Khavul, Bruton, & Wood, 2009). As a result, the mechanisms explaining positive effects of network ties (e.g., access to knowledge and resources leading to more opportunities and better exploitation) and those causing negative effects (e.g., cognitive lock-in and limited resource availability) appear to be antagonistic. However, the contexts in which these mechanisms operate may explain the divergent processes and outcomes, as diverse networks provide more, and more diverse, information and resources, while closed networks can create a lock-in effect (see Martinez & Aldrich, 2011). Yet, closed networks may also have positive effects, in particular on opportunity exploitation in a Western context, through trust and resource availability. As there is a large body of empirical studies in this domain (Jack, 2010; Martinez & Aldrich, 2011; Stuart & Sorenson, 2007), an evidence-based analysis of the social mechanisms, their conditions and outcomes can be instrumental in explaining the remaining inconsistencies. A subsequent review of conceptual work in this area shows that most conceptual arguments are firmly grounded in empirical work and as such in line with our synthesis of empirical studies of socially situated opportunity perception and exploitation. Yet, conceptual work serves to draw a broader picture, theoretically explaining both the positive and negative effects of social networks.
For instance, conceptual work has used structuration theory to explain how social network structures both enable and constrain entrepreneurial opportunity perception as well as the agency of individuals to act upon those opportunities (Chiasson & Saunders, 2005; Sarason, Dean, & Dillard, 2006), thus highlighting that the social mechanisms of, for instance, limiting and providing access can be at work under the very same contextual (network) conditions. Moreover, the entrepreneur's social connections (as a contextual condition) are not stable, but are also subject to active shaping (e.g., Luksha, 2008; Mole & Mole, 2010; Sarason et al., 2006), thus putting forward a 'feedback loop' from the perception of an opportunity, via the mechanism of shaping the social connections, to a co-evolved social network which in turn influences opportunity perception and exploitation. Figure 2 suggests some overlap and/or redundancy among several mechanisms. In particular, the legitimation and resource- or knowledge-provision mechanisms appear to operate jointly, and are thus difficult to disentangle. Possibly, these social mechanisms operate in a sequential manner, when legitimacy of the entrepreneur and/or venture is a necessary condition for building trust with and obtaining access to the connection (e.g., a potential investor). --- Practice-Oriented Action Principles This literature synthesis illustrates that the social mechanisms and outcome patterns identified in different streams of literature can be integrated in a mechanism-based framework. We identified three empirically observed mechanisms and two theoretical mechanisms with regard to how knowledge and experience direct the perception, development and exploitation of opportunities (see Figure 1). With regard to the in-depth review of socially situated opportunity perception and exploitation, we found seven mechanisms operating in a diversity of contextual conditions (see Figure 2). Table 4 presents an overview of the entire set of prevailing contextual conditions, social mechanisms and outcome patterns in the literature on entrepreneurial opportunities. The philosophical perspectives adopted in the studies reviewed range from studying opportunities as actualized by individuals and constructed in social relationships and practices (Fletcher, 2006; Gartner et al., 2008; Hjorth, 2007) to opportunities arising from and shaped by technological inventions (e.g., Clarysse, Tartari, & Salter, 2011; Cooper & Park, 2008; Eckhardt & Shane, 2011; Shane, 2000). Nonetheless, social mechanisms such as the type of existing knowledge and outcome patterns such as opportunity type are consistent across these studies. This suggests the research synthesis framework proposed in this paper is largely agnostic to underlying assumptions, and serves to build a cumulative understanding of contextual conditions, social mechanisms and outcome patterns. --------Insert Table 4 about here--------- As a next step, we can develop practice-oriented products from this synthesis. Multiple studies have developed such practice-oriented products, for instance by codifying entrepreneurial principles for action (see Frese et al., 2012) or by developing design principles that are grounded in the available research evidence (e.g., Denyer et al., 2008). In the particular format proposed by Denyer et al. (2008), these design principles draw on a context-intervention-mechanism-outcome format, in which the intervention or action is thus made explicit.
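To make the CIMO format more tangible, the short sketch below shows one way such a design principle could be encoded and rendered as an action statement. It is purely illustrative: the field names, the class, and the example principle (built loosely on the legitimacy mechanism discussed above) are our own assumptions and do not come from Denyer et al. (2008) or from Table 5.

```python
from dataclasses import dataclass


@dataclass
class DesignPrinciple:
    """An illustrative context-intervention-mechanism-outcome (CIMO) record."""
    context: str        # C: the situation in which the principle applies
    intervention: str   # I: the action a practitioner can take
    mechanism: str      # M: the social mechanism the action is meant to trigger
    outcome: str        # O: the outcome pattern that is expected

    def as_statement(self) -> str:
        """Render the record as a readable action principle."""
        return (f"In {self.context}, {self.intervention}, "
                f"so that {self.mechanism}, "
                f"which leads to {self.outcome}.")


# Hypothetical example, loosely paraphrasing the legitimacy mechanism above.
principle = DesignPrinciple(
    context="a new venture that still lacks established ties",
    intervention="deliberately build relationships with established parties",
    mechanism="their legitimacy transfers to the entrepreneur",
    outcome="better access to resources and reference customers",
)
print(principle.as_statement())
```

Encoding principles in this structured way simply makes explicit which of the four CIMO components a given recommendation specifies and which it leaves open.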
In our research synthesis framework, the entrepreneurial action domain is captured by describing the boundaries of these actions in terms of contextual conditions, social mechanisms and outcome patterns. As such, highly idiosyncratic entrepreneurial actions within these (typically rather broad) boundaries are likely to be more effective in producing particular outcome patterns than actions that ignore these boundaries. Consequently, because the action space is specified, one can develop specific action principles for practitioners such as entrepreneurs, policy makers, advisors or educators. To give an impression of what such a practical end-product of a mechanism-based synthesis looks like, we have transformed the findings with regard to 'individual cognitive framing of opportunities' and 'socially situated opportunity perception and exploitation' into a set of entrepreneur-focused action principles displayed in Table 5. Moreover, this table also provides some potential actions based on these principles, describing ways to trigger the social mechanism and/or change contextual conditions in order to influence the outcome pattern. Overall, these action principles are evidence-based, in the sense that they are grounded in our research synthesis, but they have not yet been tested as such by practitioners in a specific context; in this respect, Denyer et al. (2008) have argued that the most powerful action principles are grounded in the available research evidence as well as extensively field-tested in practice. --------Insert Table 5 about here--------- Similarly, other context-mechanism-outcome combinations can be transformed into principles for action, pointing at ways to adapt contextual factors or ways to establish or trigger the relevant mechanisms. Previous work on evidence-based management has not only described in detail how such principles for action can be codified, but has also demonstrated that well-specified and field-tested principles need to incorporate the pragmatic and emergent knowledge of practitioners (Van Burg et al., 2008; Van de Ven & Johnson, 2006). In this respect, the research synthesis approach presented in this paper merely constitutes a first step toward integrating actionable insights from very diverse research modes into context-specific principles that inform evidence-based actions. --- DISCUSSION Entrepreneurship theorizing is currently subject to a debate between highly different philosophical positions, for instance in the discourse on the ontology and epistemology of opportunities (Short et al., 2010). To conceptually reconcile the two positions in this debate, McMullen and Shepherd (2006) proposed a focus on entrepreneurial action that would make ontological assumptions less important. Entrepreneurial action is thus understood as inherently uncertain, involving judgment about and action upon a possible opportunity despite uncertainty about its outcome. --- Research Implications An important benefit of the research synthesis framework presented in this paper is that it facilitates the synthesis of dispersed and divergent streams of literature on entrepreneurship. This framework does not imply a particular epistemological stance, such as a narrative or positivist one. If anything, the epistemological perspective adopted in this paper is rooted in a pragmatic view of the world that acknowledges the complementary nature of narrative, positivist and design knowledge (Gross, 2009; Romme, 2003).
Our proposal to develop a professional practice of research synthesis may also serve to avoid a stalemate in the current disagreement on key paradigmatic issues among entrepreneurship researchers (Davidsson, 2008; Ireland et al., 2005). Rather than engaging in a paradigmatic debate that possibly results in the kind of 'paradigm wars' that have raged elsewhere in management studies (e.g., Denison, 1996), a broad framework for research synthesis will be instrumental in spurring and facilitating a discourse on actionable insights dealing with 'what', 'why', 'when' and 'how' entrepreneurial ideas, strategies, practices and actions (do not) work. In particular, we advocate building mechanism-based explanations for entrepreneurship phenomena. Entrepreneurship studies need to go beyond establishing mere relationships, by exploring and uncovering the social mechanisms that explain why variables are related to each other, as recent calls for mechanism-based explanations of entrepreneurship phenomena also imply (Aldrich, 2010; Frese et al., 2012; McKelvie & Wiklund, 2010; Sarasvathy et al., 2013; Wiklund & Shepherd, 2011). A focus on social mechanisms not only serves to transcend paradigmatic differences, but also creates detailed explanations by identifying mechanisms and contrasting them with counterfactuals. For instance, we observed similar mechanisms at work in a diversity of contexts in which an entrepreneur's knowledge and experience affect opportunity identification and exploitation. The literature in this area, although highly diverse in terms of its ontological and epistemological assumptions, is thus starting to converge toward a common understanding of how particular entrepreneurial contexts, through certain social mechanisms, generate particular outcome patterns. Our framework also advances the literature on methods of research synthesis in evidence-based management. Early pioneers in this area have argued for a systematic collection of evidence regarding the effect of interventions in particular management contexts (Tranfield et al., 2003). Later work has introduced the notion of mechanisms as an explanation of the effect of an intervention in a particular context (e.g., Denyer et al., 2008; Rousseau et al., 2008; Rousseau, 2012; Van Aken, 2004), mostly drawing on the critical realist synthesis approach developed by Pawson (e.g., Pawson, 2006). Our study highlights that the notion of mechanisms is central to overcoming the fragmented nature of the field (see Denyer et al., 2008), and further develops this notion by adopting a pragmatic perspective on mechanisms that avoids the restrictive assumptions of (critical) realism, which makes it more widely acceptable. Moreover, and more importantly, the synthesis approach developed in this paper specifies how detailed mechanism-based explanations can be created by qualitative assessments of different types of mechanisms and their hierarchy, dependency and sequence, including an analysis of rival mechanisms or counterfactuals. Our synthesis also shows the importance of the context-dependency of those mechanisms and thus provides an approach that responds to repeated calls for a better inclusion of context in theorizing and researching entrepreneurship (e.g., Welter, 2011; Zahra, 2007).
A key task of any research synthesis is to take stock of what the existing body of knowledge tells us about the context dependency of entrepreneurial action, thus informing a broader audience about why and how particular mechanisms produce an outcome in a particular context and not in others. Finally, the example of the synthesis of the 'entrepreneurial opportunity' literature demonstrates that mechanism-based synthesis can effectively combine fragmented findings arising from quantitative studies of cause-effect relations with those arising from studies using qualitative data to assess the impact of mechanisms and contexts. --- Practical Implications The research synthesis perspective developed in this paper serves to bridge the so-called 'relevance gap' between mainstream entrepreneurship science and entrepreneurial practice. In search of a research domain and a strong theory, entrepreneurship researchers have increasingly moved away from practically relevant questions (Zahra & Wright, 2011). This has led to an increased awareness of the scientific rationale of entrepreneurship research (Shane & Venkataraman, 2000), but also reinforced the boundaries between the science and practice of entrepreneurship and provoked an ongoing debate on epistemic differences. As our synthesis of the entrepreneurial opportunity literature illustrates, few studies adopt a pragmatic and actionable orientation with a clear focus on the processes of practicing entrepreneurs. Meanwhile, policy fashions rather than empirical evidence or well-established theory tend to influence entrepreneurial behavior and public policy (Bower, 2003; Mowery & Ziedonis, 2004; Weick, 2001). Moreover, previous attempts to develop practice-oriented design recommendations from 'thick' case descriptions provide only a partial view of policy (actions and interventions) or refrain from specifying the specific contexts of these recommendations. This makes it rather difficult to formulate recommendations that bear contextual validity as well as synthesize scholarly insights (Welter, 2011; Zahra, 2007). In other words, there is a major risk that many entrepreneurs, investors and other stakeholders in entrepreneurial initiatives and processes miss out on key scholarly insights that could serve as a solid basis for developing adequate strategies, policies and measures. In this respect, evidence-based insights codified in terms of contextual conditions, key social mechanisms and outcome patterns can inform and support entrepreneurs and their stakeholders in the process of designing and developing new ventures. Although this article may not be read by many practicing entrepreneurs, its results, and future work using such an approach, are of direct relevance for those who want to take stock of the existing knowledge base in order to learn about, teach and support evidence-based entrepreneurship. In that sense, the contextual conditions and social mechanisms identified (e.g., in our synthesis of the entrepreneurial opportunity literature) do not provide a universal blueprint but evidence-based insights that can easily be transformed into context-specific principles for action, as demonstrated in Table 5. For instance, the research synthesis conducted in this paper demonstrates that legitimacy creation, cognitive lock-in, information and resource gathering as well as social obligations are key mechanisms explaining the highly diverse effects of social ties.
Entrepreneurs who become aware of these mechanisms are likely to become more effective in their social networking efforts, for example, by searching for variety, engaging in deliberate efforts to reshape their network structure, and so forth. --- Limitations and Further Research This paper presents a mechanism-based research synthesis approach that is applied to the literature on entrepreneurial opportunity formation, exploration and exploitation. We systematically collected the relevant papers on this topic using a list of journals, but both the article collection and the presentation of the synthesis were limited. A proper systematic review of the existing body of knowledge should start by collecting all research output, including working papers, books and monographs, and then explain how the number of documents was reduced according to clear and reproducible guidelines. Furthermore, in this paper we were only able to present a snippet of the synthesis and the assumptions of the studies (cf. Dimov, 2011). It is up to future work in this area to develop a full-fledged systematic database of research documents and research synthesis, including collecting insights from other relevant fields, and to do this exercise for other relevant topics in the entrepreneurship literature as well. Moreover, we merely touched on the analysis of the dependency and redundancy of the social mechanisms identified. A formal and more detailed analysis of dependency, redundancy, counterfactuals and unobserved mechanisms (cf. Durand & Vaara, 2009) is a very promising route for further research, which may also serve to identify new mechanisms and areas of research. Finally, future research will need to focus on systematically distinguishing different types of mechanisms, ranging from micro to macro. For instance, Hedström and Swedberg (1996) refer to situational, action-formation and transformational mechanisms; alternatively, Gross (2009) distinguishes individual-cognitive, individual-behavioral, and collectively enacted mechanisms. Distinguishing these different types of mechanisms will serve to identify the social levels at which, and contexts in which, practitioners can intervene. --- CONCLUSION
Introduction Moving to a different school is very common among children in the United States. Following a cohort of kindergarteners from 1998 to 2007, the U.S. Government Accountability Office (2010) reported that 31% changed schools once, 34% changed schools twice, 18% changed schools three times, and 13% changed schools four or more times before entering high school. Mobility tends to be highest in urban schools with disadvantaged populations, especially at the elementary school level, and the South and West regions (as opposed to the Northeast and Midwest) have the highest rates of student mobility (Gasper et al., 2012; Grigg, 2012; Parke & Kanyango, 2012; Rumberger, Larson, Ream, & Palardy, 1999; Reynolds et al., 2009; Swanson & Schneider, 1999; Wood, Halfon, Scarlata, Newacheck, & Nessim, 1993). In a meta-analysis of 26 studies of school mobility, Mehana and Reynolds (2004) estimated a three- to four-month performance disadvantage in math and reading achievement for mobile students. Beyond the impact on individuals, there are spillover effects in high-mobility schools, as student turnover affects not only movers but also the non-movers whose classrooms and schools are disrupted (Hanushek et al., 2004). About 11.5% of schools serving kindergarten through eighth grade have at least 10% of their students leave during the school year (United States Government Accountability Office, 2010). In these high-mobility schools, even nonmobile students exhibit lower levels of school attachment, weaker academic performance, and higher dropout rates (South, Haynie, & Bose, 2007). At the school level, high mobility promotes chaos, decreases teacher morale, and increases administrative burdens (Rumberger, 2003; Rumberger et al., 1999). At the classroom level, high student turnover frustrates teachers, compromises long-term planning, and leads teachers to develop a more generic teaching approach (Lash & Kirkpatrick, 1990). Instead of addressing individual student needs, teachers slow the pace of instruction and become more review-oriented (Kerbow, 1996). This means that after only a few years, students attending high-mobility schools are exposed to considerably less information than those attending schools with lower mobility rates. These school-level mobility effects are not trivial. While they have the potential to harm all students, there is evidence that they are worse for poor and minority students, contributing to racial and socioeconomic gaps in achievement (Hanushek et al., 2004). Furthermore, school reform efforts usually assume that students will remain in a specific school long enough for reforms to take effect, but schools in need of reform often have the highest rates of student turnover (Kerbow, 1996). High rates of student mobility are so problematic that some schools have implemented programs to discourage families from moving, such as Chicago's "Staying Put" project (Kerbow et al., 2003). --- Causal inference Researchers warn of potentially spurious relationships between mobility and student outcomes, since the families most likely to move are often the most disadvantaged (Gasper et al., 2010, 2012).
In general, students of low socioeconomic status are more mobile than their more advantaged peers, Black and Hispanic students are more mobile than their White and Asian American peers, and students from single-parent or step-parent families are more mobile than those from traditional two-parent families (Alexander, Entwisle, & Dauber, 1996; Burkam, Lee, & Dwyer, 2009; Hanushek et al., 2004; Nelson, Simoni, & Adelman, 1996; Rumberger, 2003; Rumberger & Larson, 1998; United States Government Accountability Office, 2010). Although a causal interpretation of findings on mobility effects remains a challenge because of the many common factors associated with school moves and child outcomes, studies that attempt to disentangle the effects of these confounders consistently find student mobility to have negative consequences, both for the students who change schools and for high-mobility schools (Rumberger, 2003). The evidence is sufficient to warrant examination of why families change schools and how schools can address this issue. --- Types of school mobility It is important to distinguish among different types of school moves. Some types are more common than others, some are more likely to have negative consequences than others, and some potentially can be addressed by schools while others likely cannot. Researchers have made such distinctions along four dimensions: (1) whether a school move is accompanied by a residential move, (2) when the move occurs, (3) whether the move is voluntary, and (4) if it is voluntary, whether the move is dictated by a negative life event. First, residential mobility is very common in the United States; 22% of the U.S. population moved between 2008 and 2009, and two-thirds of these moves occurred within the same county (U.S. Census Bureau, 2009). Residential and school mobility are closely linked. Approximately two-thirds of secondary school changes are associated with a residential move (Rumberger & Larson, 1998), and these moves may be more detrimental than simply changing schools. Residentially mobile adolescents have been found to have school-based friendships characterized by weaker academic performance and lower expectations, less school engagement, and higher rates of deviance (Haynie, South, & Bose, 2006a). They also tend to have higher rates of violent behavior, and among adolescent girls, a higher likelihood of attempted suicide (Haynie & South, 2005; Haynie, South, & Bose, 2006b). Not surprisingly, residential mobility is also associated with reduced achievement in elementary and middle school (Voight, Shinn, & Nation, 2012). Although our data permit an examination of residential mobility, we include it only as a supplementary analysis, because residential mobility was not affected by our school-based intervention, and controlling for residential mobility did not alter our findings. Second, the timing of school moves matters because moving during the academic year is more disruptive than moving during the summer (Hanushek et al., 2004). The student's age and grade-level also matter; moving during early elementary school is associated with worse outcomes than moves that occur later in the schooling process, especially when school changes are frequent (Burkam, Lee, & Dwyer, 2009). Our study cannot differentiate between academic year and summer moves, but we are able to examine mobility during the early elementary grades, a critical period in child development that few studies of school mobility have explored.
Third, scholars have distinguished between compulsory and non-compulsory school changes. While the majority of school mobility occurs for non-compulsory reasons, compulsory moves, such as the transition from elementary to middle school or from middle school to high school, affect all students and are built into the structure of schooling. These moves are generally less disruptive than non-compulsory moves because school systems are set up for these transitions and all grade-equivalent students experience them together, but they are not free of negative consequences (Grigg, 2012). Because we focus on grades 1-3, our study is not complicated by compulsory moves, so we focus on an effort to curtail noncompulsory (voluntary) school changes. Finally, voluntary school changes can be subdivided into strategic and reactive moves. Strategic moves historically have been more prevalent among white or socioeconomically advantaged families and are based on a family's choice to seek out a higher-quality or better-fitting school (also known as "Tiebout" mobility, named after C. M. Tiebout). Reactive moves occur in response to negative events, are more common among minorities and disadvantaged families, and are the type of move most frequently associated with harmful consequences (Fantuzzo, LeBoeuf, Chen, Rouse, & Culhane, 2012;Hanushek et al., 2004;Warren-Sohlberg & Jason, 1992). Some reactive moves are school-related, such as those motivated by dissatisfaction with a school's social or academic climate, conflict with students or teachers, or disciplinary problems and expulsions (Kerbow 1996). Others are not motivated by school-related factors, but instead by negative life events such as family disruption, dissolution, or economic hardship. This distinction suggests that schools have the potential to curtail certain moves but are unlikely to influence others. It also explains why some school changes are associated with positive effects, yet (most) others are not. Our data do not allow us to differentiate between strategic and reactive moves, but prior research shows that reactive mobility is high in predominantly low-income and minority urban populations, so it is very likely that most -though certainly not all -mobility in our sample is reactive rather than strategic (Alexander et al.,1996;Fong et al., 2010;Hanushek et al., 2004;Kerbow, 1996). To the extent that a school-based intervention can reduce mobility, it is likely to be through deterring school-related reactive moves. --- Heterogeneity in school mobility School mobility rates differ according to the characteristics of schools, where those with the highest levels of mobility are also the most disadvantaged and tend to have larger proportions of minority and low-income students (Nelson et al., 1996). Again, this fits the profile of our sample of schools. Mobility rates also differ according to the characteristics of students. Differences in mobility along racial/ethnic lines have been studied extensively. Generally, Black and Hispanic students are more likely to change schools than White and Asian American students, due in part to greater economic disadvantage (Alexander et al., 1996). Blacks also tend to change schools more frequently than other race/ethnic groups, and frequent moves are associated with an increased risk of underachievement (Temple & Reynolds, 1999). 
Evidence that immigrant students and English Language Learners have above-average mobility rates is also troubling because mobility is associated with a longer time for achieving proficiency in English (Fong et al, 2010;Mitchell, Destino, & Karam, 1997;United States Government Accountability Office, 2010). Moreover, differences in Hispanic subpopulations leave open the possibility of heterogeneity in school mobility among Hispanics; Mexican Americans -who comprise the majority of our sample -display particularly high mobility rates (Ream, 2005). Student characteristics and school characteristics also interact to affect mobility. School segregation research finds evidence of white flight from predominantly minority public schools (Clotfelter, 2001), evidence of segregation between Black and Hispanic students across the public and private sectors (Fairlie, 2002), and self-segregation of a variety of groups into charter schools (Garcia, 2008). Thus, it is important to examine differential mobility across racial/ethnic groups while keeping the racial composition of schools in mind. The availability of school choice may play a role in differential mobility patterns as well. Recent data show that Blacks (24%) are more likely to enroll in chosen (as opposed to assigned) public schools than Hispanics (17%), Asian Americans (14%), or Whites (13%) (Grady, Bielick, & Aud, 2010). Presumably, students are more likely to exercise choice when their families are dissatisfied with their assigned school, or if a new school seems particularly promising. That Blacks have the highest rates of exercising school choice suggests that, compared to other race/ethnic groups, they are either more dissatisfied with their assigned schools, more sensitive to school-related factors, more heavily recruited by choice schools, or have greater access to choice schools in their communities. With only two research sites and limited information on which schools mobile students attend, we cannot fully address the role of choice, but we do examine the extent to which proximity to charter schools influences mobility in our sample. Thus, families change schools for a variety of reasons, including family or economic circumstances, aversion to certain groups of students, dissatisfaction or conflict with the school, or attraction to other schools, and these reasons are likely to vary according to the characteristics of students and their schools. This means that strategies to reduce mobility will be more or less effective across students and schools as well. Accordingly, it is important to examine heterogeneity, both in overall mobility rates and in the effects of mobility-reducing efforts, as we do in the following analyses. --- School Mobility and Social Capital Relations of trust between families and school personnel, or social capital, play an important role not only in explaining why school mobility can be detrimental, but also in identifying how schools can reduce mobility. Much research implicates social capital in the negative effects of changing schools; the disruption in relationships among students, school personnel, and parents that accompanies school moves helps explain why mobile students exhibit lower achievement (Coleman, 1988;Pribesh & Downey, 1999;Ream, 2005). However, the relationship between mobility and social capital is multidirectional; not only does mobility affect social capital, but social capital also affects mobility. 
--- Reducing mobility Studies of residential mobility provide evidence that social networks play an important role in encouraging families to stay. Both nuclear and extended family ties deter long-range residential mobility, especially for racial/ethnic minorities and families of low socioeconomic status (Dawkins, 2006; Spilimbergo & Ubeda, 2004). Social ties with others living nearby deter long-distance mobility as well (Kan, 2007). Coleman (1988) lamented the decline in these informal sources of social capital and highlighted the need for formal organizations to take their place. Accordingly, there is evidence that local institutions such as churches and businesses can serve a socially integrating function that deters residential mobility (Irwin, Blanchard, Tolbert, Nucci, & Lyson, 2004). Schools are an obvious candidate to serve this purpose with regard to school mobility. Researchers have suggested several ways for schools to encourage families to stay, many of which relate to building social capital. By improving their social and academic climates and making an effort to boost students' and their families' sense of membership in the school community, schools can increase parent engagement (Rumberger & Larson, 1998). Schools can also make themselves more attractive to students and their parents by implementing programs that promote positive relationships with families (Kerbow, 1996; Kerbow et al., 2003; Rumberger, 2003; Rumberger et al., 1999; Fleming et al., 2001). Thus, by making efforts to improve the number and quality of social relations among students, parents, and school personnel, and providing a space in which these networks can develop and operate, schools can aid in the production of social capital and possibly reduce student mobility. --- The intervention: Families and Schools Together (FAST) Our study examines an intervention expected to reduce school mobility by enacting the recommendations listed above. Families and Schools Together (FAST) is an intensive 8-week multi-family after-school program designed to empower parents, promote child resilience, and increase social capital, that is, relations of trust and shared expectations, within and between families and among parents and school personnel. FAST is typically implemented in three stages: (1) active outreach to recruit and engage parents, (2) eight weeks of multi-family group meetings at the school, followed by (3) two years of monthly parent-led meetings (FASTWORKS). The eight weekly sessions, which take place at the school, last approximately two and a half hours and follow a pre-set schedule, where about two-thirds of the activities center around building relationships between families and schools, and the remainder target within-family bonding (Kratochwill, McDonald, Levin, Bear-Tibbetts, & Demaray, 2004). During each session, these activities include: family communication and bonding games, parent-directed family meals, parent social support groups, between-family bonding activities, one-on-one child-directed play therapy, and opening and closing routines modeling family rituals (see the Appendix for a detailed description of each FAST activity).
FAST activities are theoretically motivated, incorporating work from social ecological theory (Bronfenbrenner, 1979), family systems theory and family therapy (Minuchin, 1977), family stress theory (McCubbin, Sussman, & Patterson, 1983), and research in the areas of community development and social capital (Coleman, 1988; Dunst, Trivette, & Deal, 1988; Putnam, 2000) in order to build social networks by strengthening bonds among families and schools (see Kratochwill et al., 2004 and www.familiesandschools.org for specific information about FAST activities and their theoretical framework). These research-based activities, adapted to be culturally and linguistically representative, are led by a trained team that includes at least one member of the school staff in addition to a combination of school parents and community professionals from local social service agencies. The FAST intervention has been successfully replicated and implemented across diverse racial, ethnic, and social class groups in urban and rural settings within 45 states and internationally (McDonald, 2002; McDonald et al., 1997). Several recent randomized controlled trials, including one involving the sample studied here, demonstrate that FAST engages socially marginalized families with schools and school staff and improves the academic performance and social skills of participating children (Gamoran, Turley, Turner, & Fish, 2012; Kratochwill et al., 2004; Kratochwill et al., 2009; Layzer, Goodson, Bernstein, & Price, 2001; McDonald et al., 2006). Each of these RCTs had a different study focus and explored the impact of FAST on children's educational and behavioral outcomes for samples that differed by geographic region and race/ethnicity of participants (Supplementary Table S1 briefly summarizes these previous RCTs). Our study is unique in that it examines low-income, predominantly Latino Southwestern communities, recruits all families rather than only those of at-risk children, and is the first to investigate effects of FAST on school mobility. Although FAST was not explicitly designed to reduce school mobility, its proven ability to build and enhance social relationships among members of the school community directly addresses one of the most important mechanisms by which schools can reduce mobility. FAST activities work to strengthen relationships among three specific types of networks: within families, between families within the same school community, and between families and school personnel. By developing and improving these types of relationships, and doing so within the physical boundaries of the school, FAST decreases school-related anxiety for both children and parents, reduces barriers to parent engagement, makes the school a more welcoming environment for families, and fosters the creation of parent networks within schools, where resources and social support can be exchanged (Kratochwill et al., 2004; Kratochwill et al., 2009; Layzer, Goodson, Bernstein, & Price, 2001; McDonald et al., 2006). Thus, FAST is just the sort of social capital-building organization advocated by Coleman (1988) and others to reduce school mobility. The research on social capital, school mobility, and FAST suggests that the intervention could reduce school mobility for three reasons. First, building relationships among families within a school should increase parents' sense of membership in the school community and reduce mobility.
Second, FAST makes schools central to the social networks of parents, providing physical space where these networks develop and operate and where families exchange resources and social support. Changing schools would result in a loss of this source of social capital. Third, increasing families' familiarity with, and trust of, the school and school personnel by offering a new and informal context where parents can interact with school staff should reduce school moves driven by dissatisfaction, discomfort or distrust. Thus, even though reducing mobility is not an explicit goal of the FAST intervention, it is for these reasons that we expect students in schools assigned to the FAST program to be less likely to change schools between grades 1-3 than students in control schools. Moreover, we expect FAST to be particularly effective at reducing educationally motivated moves, such as those spurred by school dissatisfaction or feelings of isolation from the school community, which, as discussed above, may be more likely for Black families. Since motives for changing schools likely vary across students, we anticipate heterogeneity in the effects of FAST on school mobility across different types of students. Because our sample of schools is relatively homogeneous, we expect less variation in effects across schools. --- Data and Measures --- Sample recruitment and randomization We use data drawn from the Children, Families and Schools (CFS) study, a cluster-randomized controlled trial targeting first grade students and their families in eligible elementary schools that agreed to randomization in Phoenix, Arizona, and San Antonio, Texas. These cities and schools were selected because of their high proportions of Hispanic students and students eligible for the national school lunch program, and our sample reflects these characteristics. (Given the large number of schools participating in the study over the two sites, a staggered implementation was necessary: two consecutive cohorts of first graders were each divided between three seasons, fall, winter, and spring. Schools were selected to have at least 25% of students from low-income families and 25% of Hispanic origin. More details about the RCT design and implementation are available upon request.) Fifty-two elementary schools were randomly assigned to a treatment condition, with half selected to receive the intervention (26 FAST schools), and half selected to continue with business as usual (26 control schools). Randomization produced two comparable groups of schools with no statistically significant differences on pre-treatment demographics or academic performance characteristics. Participant data were collected during the students' first-grade year (2008-2009 for Cohort 1 and 2009-2010 for Cohort 2), with follow-ups at the end of Year 2 and a final survey in Year 3, when students were expected to be in third grade (2010-2011 for Cohort 1, and 2011-2012 for Cohort 2). (Students retained in first or second grade were also included.) Just below 60% of first-grade families consented to participate in the study, which limits the generalizability of our results to some extent, but since there were no statistically significant differences in the recruitment rates between FAST and control schools, our results should be unbiased. In FAST schools, 73% of families who consented attended at least one FAST session, and among those who attended at least one session, 33% "graduated" with a "full dose" of FAST, meaning that they began in week 1 or 2 and attended six or more of the eight sessions. On average, participants attended 35% of FAST sessions, and half the participants attended multiple sessions. Fortunately, we are not missing any data related to treatment assignment, randomization, school mobility, or school characteristics. Thus, our analytic sample includes all 3,091 students who consented to the study and the 52 schools they attended in first grade.
We discuss additional covariates and our handling of missing student data below. --- Outcome and key independent variables The outcome is a binary indicator of whether a student was enrolled in a different school in third grade than s/he attended in first grade. School moves were identified using rosters provided by schools at the beginning of the first and third years and should be very accurate. Students retained in grade were also identified so as not to be incorrectly labeled as movers. The weakness of this measure is that we are unable to identify students who made multiple school moves, or who changed schools but returned to their original school between first and third grade. We conducted both an intent-to-treat (ITT) analysis, which estimates the average treatment effect for those in schools assigned to FAST, and a complier average causal effect (CACE) analysis, which estimates the average treatment effect for those who actually complied (i.e., who graduated from FAST by attending one of the first two sessions and at least six of the total eight sessions). The key independent variable in the ITT analysis is a school-level treatment indicator, and the key independent variable in the CACE analysis is an individual-level indicator of graduating from FAST. --- Control variables The randomization of FAST occurred within three districts in Phoenix and two randomization blocks in San Antonio, so estimating an unbiased average treatment effect requires controlling for these units of randomization. These controls were included at the school level in our analyses. Additional controls can increase statistical power and correct for pre-treatment differences that may arise in spite of randomization. At the school level, we included the size of the school, the proportion of students receiving subsidized lunch, the proportion of students identified as Hispanic, Black, White, Other (Asian or American Indian), English Language Learners, and the proportion of third-graders scoring proficient on state assessments in reading (all based on the 2008-2009 school year). Because school choice may play a role in school mobility, we also included measures of the number of charter elementary schools located within three miles of each school in Year 1 of the study, and the change in the number of such schools between Years 1 and 3. At the student level, we included each student's age, a log-transformed measure of travel time (in minutes) from home to school, indicators of the student's gender and race/ethnicity (Hispanic, Black, White, or Other), and indicators of whether the student was an English Language Learner, a recipient of special education services, or eligible for the national school lunch program. We also conducted supplementary analyses that incorporate information on participants' residential mobility, which we discuss at the end of the results section.
Since FAST is expected to reduce mobility by building social capital among families and between families and schools, several pre-treatment measures of parent-reported social capital were also included. These include parent reports of the number of school staff they felt comfortable approaching (staff contacts), the number of parents of their child's friends they knew (intergenerational closure; Coleman, 1988), the degree to which they agreed that they shared expectations for their child with other parents, whether they regularly discussed school with their child, and whether they regularly participated in school activities. Two additional scales were constructed from a battery of questions. The first is a parent-staff trust scale constructed from four items related to parents' perceived trust of school staff (α = 0.86). The second is a parent-parent involvement scale (α = 0.91) measuring how involved each parent was with other parents at the school, in terms of exchanging favors and social support. Together, these measures provide information on both the quantity and quality of relationships between families in the community, as well as between families and schools. More details on these social capital indicators and scale construction are provided in the Appendix. --- Missing data Supplementary Table S2 summarizes the raw student-level data, including the number of observations for each variable. We used multiple imputation procedures to impute missing data values for student-level covariates in order to maximize the use of available information and minimize bias (Royston, 2005; Rubin, 1987; von Hippel, 2009). (Multiple imputation is the preferred method of handling missing data among many researchers, but our results are unlikely to depend on the particular strategy used: no students were missing the outcome variable or treatment indicator, there were low levels of missingness on imputed covariates, and findings were practically identical when we used listwise deletion.) We created five imputed data sets using -ice- in Stata 12, analyzed each individually, and derived final estimates adjusted for variability between these datasets. (Interactions and variable transformations were created prior to imputation. School fixed effects were included in imputation models to address the multilevel nature of the data. Analyses include indicators for students missing pre-test or demographic variables.) --- Method and Analysis --- Intent-to-treat (ITT) analysis Because the outcome is a dichotomous indicator of whether each student changed schools between Years 1 and 3, and treatment assignment occurred at the school level, we used a two-level logistic regression approach, as described by Raudenbush and Bryk (2002). For the ITT analysis, the comparison is based on school assignment to the treatment versus control condition rather than actual receipt of the treatment, which varied among participants. It should be noted that the ITT effect encompasses the total average effect of treatment assignment, including any effects driven by participation in the FAST sessions, subsequent FASTWORKS meetings over the next two years, as well as any spillover effects to families who did not participate. The null model, shown in equation 1, partitions the variance in the log-odds of mobility into within- and between-school components. There is no within-school error term because logistic regression predicts probabilities rather than expected values, and the error is a function of these predicted probabilities. The between-school error term, u_0j, represents each school's deviation from the grand mean (γ_00) and is used to estimate between-school variability.

log[p_ij / (1 − p_ij)] = γ_00 + u_0j    (1)

To estimate the unbiased ITT effect of FAST, we added the treatment indicator (FAST) along with controls for the units of randomization (RAND) to the second-level model, as shown in equation 2. (There was no evidence of cohort or season-of-implementation effects or interactions.)

log[p_ij / (1 − p_ij)] = γ_00 + γ_01 FAST_j + Σ_k γ_0k RAND_kj + u_0j    (2)

In further specifications, we added the pre-treatment student-level and school-level covariates listed above. Throughout, we used random-intercept models, which hold the effects of all student-level predictors fixed, meaning they do not vary across schools.
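For readers who want to see the shape of such a model in code, the following is a minimal sketch of a school-level random-intercept logistic regression along the lines of equations (1) and (2). It is not the authors' estimator (they describe an HLM-style approach and worked in Stata/Mplus); it uses Python's statsmodels as an illustration, and the file and column names (cfs_students.csv, moved, fast, rand_unit, school_id) are hypothetical.

```python
# Minimal sketch of a random-intercept logistic model in the spirit of
# equations (1) and (2); data file and column names are assumed, not real.
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

df = pd.read_csv("cfs_students.csv")  # hypothetical student-level analysis file

# Equation (1): grand mean plus a school-level random intercept.
null_model = BinomialBayesMixedGLM.from_formula(
    "moved ~ 1", {"school": "0 + C(school_id)"}, df
).fit_vb()
print(null_model.summary())  # summary reports the school random-intercept spread

# Equation (2): add the treatment indicator and randomization-unit controls.
itt_model = BinomialBayesMixedGLM.from_formula(
    "moved ~ fast + C(rand_unit)", {"school": "0 + C(school_id)"}, df
).fit_vb()
print(itt_model.summary())
```

The variational Bayes fit here is only one of several ways to estimate a mixed logit; the substantive point is simply that treatment enters at the school level while the intercept is allowed to vary across schools.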
To examine heterogeneous effects of FAST on mobility, we also included cross-level interactions of the treatment with selected student-level covariates, including race/ethnicity, gender, travel time to school, survey language, free/reduced lunch status, English Language Learner status, and special education status. These cross-level interactions permit non-randomly varying slopes for student-level predictors. Similarly, we examined school-level interactions between FAST and school characteristics, although with only 52 fairly homogeneous schools, our study is underpowered to detect school-level interactions. --- Complier average causal effect (CACE) analysis If FAST affects school mobility, this should be especially true for students who comply with their treatment assignment and actually attend the FAST sessions, which are the core of the intervention. However, since compliance cannot be randomly assigned, quasi-experimental methods are required to estimate the effect for compliers. Families in the treatment group who attended the sessions are likely to be less prone to move than families in the treatment group who did not attend the sessions. To account for selection bias, we must compare the compliers from the treatment group to those in the control group who would have complied, had they been offered the treatment. Our approach views compliers as a latent class of individuals that is observed for the treatment group but unobserved for the control group. By using observed data on the compliance of the treatment group and observed pre-treatment predictors of compliance for all participants, we are able to identify members of the control group who would have been most likely to comply if they had been given the opportunity. We examined several specifications of the compliance model and present the one that best distinguishes compliers and non-compliers. The compliance model was estimated simultaneously with a multilevel model predicting school mobility, similar to those used in the ITT analysis. This model assumes that FAST affected only those who complied with the treatment, and it estimates the complier average causal effect (Muthén & Muthén 2010). We provide more information in the results section, and further details are available from the authors upon request. --- Results Table 1 summarizes school-level descriptive statistics by treatment and shows that there were no statistically significant differences in school characteristics across the two conditions, as expected under random assignment. Post-imputation student-level descriptive statistics, by treatment, are summarized in Table 2. The students in our sample reflect the demographic composition of the schools.
About 15% of the sample was White, just over 70% was Hispanic, and nearly 10% was Black, while less than 5% made up a combination of other race/ethnic groups. We did find statistically significant differences between FAST and control schools at the individual level for some covariates. Students in FAST schools lived farther away from their schools and were more disadvantaged on most pre-treatment measures of social capital (the lone exception was parent-staff trust, which favored the FAST group). It is unclear whether these differences were due to chance, differential selection to participate in our study, or an effect of treatment assignment on survey responses relating to social capital. In any case, it is important to consider these differences and account for them in our analyses. Although this study does not focus on FAST effects on social capital per se, we note that FAST did significantly boost social capital between the beginning and end of first grade (Supplementary Table S2; Gamoran et al. 2012). --- Intent-to-treat (ITT) results The results of our ITT analysis are summarized in Table 3. Coefficients are presented on the logit scale, so positive coefficients correspond to a higher likelihood of changing schools, and negative coefficients correspond to a lower likelihood of changing schools. A full table with standard errors is included in Supplementary Table S3. The null model (Column 1) estimates a between-school variance of .156. The latent intraclass correlation (which uses π²/3 as the within-group variance in multilevel logit models) is .047; in other words, less than 5% of the variance in the probability of changing schools occurred between schools. This model also estimates the probability of changing schools for the typical student in the typical school to be .380. (A short worked computation of these quantities appears below.) This is roughly equal to the proportion of students in our sample making a school change and is comparable to prior studies of student mobility in early elementary school. Thus, overall levels of school mobility were quite high in our sample but consistent with prior studies, and there was not much variability in mobility among schools. Column 2 shows the unbiased average ITT effect of FAST on mobility, which is small, positive, and non-significant. According to this model, the predicted probability of the average student in a FAST school changing schools was about .39, compared to .37 for students in control schools. There were no statistically significant differences in mobility across the units of randomization, and further analyses found no evidence of heterogeneous FAST effects (interactions) across districts or randomization blocks. In short, the findings suggest that on average, attending a school assigned to FAST did not reduce school mobility. Earlier we reported some pre-treatment differences in student characteristics between treatment conditions. Specifically, students in FAST schools tended to report lower levels of social capital prior to treatment and to live farther from school than students in control schools. Column 3 shows the estimates after controlling for pre-treatment student background and social capital variables. The FAST effect is even smaller and continues to be indistinguishable from zero, further suggesting that there was no main effect of FAST on school mobility.
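As a quick check on the null-model quantities above, the latent intraclass correlation and the logit-to-probability conversion can be reproduced in a few lines; the small gap between the computed ICC and the reported .047 presumably reflects rounding of the published variance estimate.

```python
# Worked computation for the null-model quantities discussed above.
import math

tau2 = 0.156                            # reported between-school variance (logit scale)
icc = tau2 / (tau2 + math.pi ** 2 / 3)  # latent ICC with pi^2/3 as within-group variance
print(round(icc, 3))                    # ~0.045, i.e., under 5% of variance between schools

# A typical-school probability of .380 corresponds to a grand-mean log-odds of about -0.49.
gamma_00 = math.log(0.380 / (1 - 0.380))
print(round(gamma_00, 2), round(1 / (1 + math.exp(-gamma_00)), 3))  # -0.49, 0.38
```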
Not surprisingly, students who lived farther from their school were more likely to change schools, and mobility was lower for students whose parents knew more of their friends' parents at the beginning of first grade, suggesting that more intergenerational closure was associated with less school mobility. Together, these findings imply that pre-treatment differences in social capital and distance to school do not substantially bias FAST effects, but if anything, the bias is upward, making mobility in FAST schools appear higher than it should be. There were also differences in mobility by race/ethnicity and subsidized lunch status. School mobility was higher among Black and White students than Hispanic students, and it was higher among students who qualified for free or reduced-price lunch than those who did not. The corresponding predicted probabilities of changing schools were .35 for Hispanic students, .46 for Black students, .43 for White students, and .39 for students in the Other category. Students qualifying for free or reduced-price lunch had a .39 predicted probability of making a school change, compared to .31 for students who did not qualify for subsidized lunch. The FAST effect estimates do not change after accounting for pre-treatment school characteristics (Column 4), which is not surprising considering there were no school-level differences in observed characteristics across treatment conditions. Mobility was significantly higher in larger schools and may have been higher in schools with more Whites
and more students qualifying for free/reduced-price lunch. It also appears that mobility was higher in schools with more charter schools located nearby at the beginning of the study, but lower for schools that experienced a growth in nearby charter schools. We found no evidence of treatment effect interactions with any of these school characteristics (Supplementary Table S3). Column 5 examines interactions of FAST with selected student characteristics. The significant negative interaction with time to school suggests that although living farther from school was associated with higher mobility, this association was significantly weaker in FAST schools. There are no significant interactions with survey language, gender, English Language Learner status, or free or reduced-price lunch status, suggesting that FAST was equally ineffective in reducing school mobility for these groups of students in our sample. Of the interactions with race/ethnicity, there is a negative and statistically significant interaction for Blacks, and smaller negative interactions for Whites and Others that do not reach statistical significance. Figure 1 translates these estimates into predicted probabilities. The substantial effect of FAST on school mobility for Black students is particularly striking considering their high mobility rates. Net of all other covariates, Black students in control schools were more likely to move (.53) than not, but in FAST schools their probability of moving was much lower (.38), bringing them on par with other non-Hispanics and nearly equal to Hispanics, who had the lowest school mobility rates in our sample. Exploring the FAST effect on Black mobility: The significant reduction of mobility for Black students warrants further exploration, so we took several measures to examine the robustness of this finding and found convincing evidence that FAST reduced school mobility for Black students. First, we examined pre-treatment descriptive statistics by treatment for the Black subsample. There were no statistically significant differences in school characteristics (averages weighted by Black enrollment), although Black students in FAST attended schools with fewer Hispanics and more charter schools nearby (Supplementary Table S4).
There were also no statistically significant differences in student characteristics by treatment among Blacks, and the patterns of these differences were similar to those reported for the overall sample (Supplementary Table S5). Second, we added interactions of FAST with other pre-treatment variables, such as indicators of social capital, the racial composition of the school, and the proximity to charter schools (available upon request). Though Blacks were more likely to move from predominantly Hispanic schools and when there were more charter schools nearby, these interactions were not statistically significant, and allowing for them did not explain the Black-by-FAST interaction. Another threat to validity is that in logistic regressions, coefficients are scaled relative to the variance of the error term, so this interaction could potentially be an artifact of differences in unobserved factors related to mobility among Blacks (Allison, 1999), but models that estimated a unique variance for Blacks found no significant evidence of this unobserved heterogeneity, and allowing for it did not alter our findings. Thus, we are convinced that Black families in this sample were indeed very likely to change schools and that FAST substantially reduced their propensity to move. We suggest two plausible explanations for the particularly high levels of Black mobility and the FAST effect reducing Black mobility. First, suppose Black families were more likely to change schools out of dissatisfaction related to poor relationships with schools, and FAST improved these relationships. The variable most relevant to this explanation is the "parent-staff trust" scale measured in the post-treatment parent survey. Second, suppose Black families were more likely to change schools because they felt isolated from other families in the school community, but FAST helped these families build relationships with others at the school. The variable most relevant to this explanation is post-treatment intergenerational closure. We tested these explanations among families completing a post-treatment survey (roughly two-thirds of our sample) by adding each of these social capital measures, as well as its interactions with FAST, Black, and three-way interactions with FAST and Black, to simplified models using a package designed to test mediation in logistic regressions (Kohler, Karlson, & Holm, 2011). (Testing mediators in logistic regression analyses requires special techniques: logistic regression coefficients are scaled relative to the unobserved variance in the outcome, which changes when covariates are added to a model. The solution is to explain the same amount of variability across all models so that changes in coefficients are due solely to their relationships with mediators (Kohler, Karlson, & Holm, 2011); we use the -khb- program in Stata to conduct our mediation analyses.) The results of this analysis are presented in Table 4. The reduced model shows the FAST and Black main effects and the Black-by-FAST interaction without allowing them to be correlated with the mediators (in gray), and the full model shows the extent to which these effects are mediated when they are allowed to be correlated. The final two columns show the percent of the Black main effect and Black-by-FAST interaction that are explained by each mediator. The results suggest that the Black-by-FAST interaction is almost totally explained by the three-way interaction of Black, FAST, and intergenerational closure. Thus, it was not simply that FAST boosted intergenerational closure, or that intergenerational closure had a larger impact on Black mobility, but that the intergenerational closure promoted by FAST had a particularly strong impact on reducing Black mobility. It should be stressed that this analysis is non-experimental in that mediators are not randomized, so we treat this as an exploratory procedure.
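To make the decomposition logic concrete, the sketch below mimics the core of the Kohler-Karlson-Holm approach for a single mediator in Python rather than via Stata's -khb- command; the file and column names (moved, black, fast, closure) are hypothetical, and the paper's actual models additionally interacted the mediator with Black and FAST.

```python
# Simplified, hypothetical illustration of the KHB (Kohler, Karlson & Holm, 2011)
# logic for one mediator; not the authors' exact specification.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("cfs_posttreatment.csv")  # hypothetical post-treatment survey file

# Residualize the mediator (post-treatment intergenerational closure) on the key
# predictors so the reduced and full logit models explain the same variability.
df["closure_resid"] = smf.ols("closure ~ black * fast", data=df).fit().resid

reduced = smf.logit("moved ~ black * fast + closure_resid", data=df).fit()
full = smf.logit("moved ~ black * fast + closure", data=df).fit()

# The change in the black:fast coefficient between the two models is the portion
# of the interaction attributable to the mediator, on a comparable logit scale.
print(reduced.params["black:fast"] - full.params["black:fast"])
```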
Nonetheless, the findings fit a story in which Black families were socially isolated from other parents in these schools, but FAST helped bring them into parental networks and reduced their propensity to change schools. --- Complier average causal effect (CACE) results The CACE analysis focuses on the effects of FAST for those families who actually complied with the treatment assignment and graduated from FAST. Given the striking findings presented above, we estimated compliance models for the full sample and for the subsample of Blacks. (The Black-only model excludes the student-level covariates that were irrelevant to the Black sample, namely the race dummies and language variables, as well as several school-level covariates, given the lower statistical power of the smaller sample; the findings hold when the school-level variables are included, but standard errors are larger.) Table 5A shows the estimates of the preferred compliance models, which classify 25% of the full sample and 16% of the Black sample as compliers or would-be compliers. (We examined the robustness of our findings to the specification of compliance. Our preferred model defines compliance in terms of the FAST program's official definition of "graduation" as attending at least six of the eight weekly sessions; using lower cut-offs such as two or four sessions yields higher compliance rates and produces qualitatively similar but less precise results than those presented here.) The results from the CACE analysis, shown in Table 5B, are practically identical to those provided by the ITT analysis. FAST had no overall effect for compliers; the predicted probability of changing schools for compliers in FAST schools was .41, compared to .40 for would-be compliers in control schools. Echoing earlier findings, mobility was higher among Blacks and Whites than among Hispanics, and lower for those with higher initial levels of intergenerational closure. Because the finding of reduced mobility for Blacks in the FAST group was so intriguing, we also estimated a CACE model on the subsample of Blacks in the study. The results indicate a huge FAST effect on reducing school mobility for Black compliers. While the predicted probability of changing schools for Black compliers in FAST schools was .43, it was almost 1.0 in control schools, as practically all Black would-be compliers in these schools moved. (Although the especially high rates of mobility for Black would-be compliers seem odd, they were robust across different specifications of both the compliance and outcome models. It is also important to keep in mind that there were only about 15 Black compliers in each treatment condition, so 14 or 15 of them moving is not implausible given the high mobility of Black students.) Other estimates suggest potential reasons for this effect. For Blacks, pre-treatment parent-parent social capital measures were especially important predictors of reduced mobility. In particular, both higher levels of shared expectations with other parents and intergenerational closure were significantly associated with lower probabilities of school mobility. This further supports the hypothesis that increased parent-parent social capital played an important role in lowering the mobility of Black students. --- Supplementary analyses: Residential mobility Given the close relationship between school mobility and residential mobility, we conducted supplementary analyses incorporating data on participants' residential moves. ITT models similar to those presented above provided no evidence of FAST effects on residential mobility overall, or for any subgroup of students. We also found that controlling for residential mobility did not alter the FAST effect on school mobility, and there was no evidence of an interaction to suggest FAST effects on school mobility differed between families who did or did not move residences.
--- Discussion This study provides a rare experimental evaluation of a social capital-building intervention hypothesized to reduce student mobility in early elementary school, a significant period when moving is particularly harmful. School mobility rates were high in our sample but consistent with previously published reports; the probability of a first-grader changing schools by third grade was nearly 40%. FAST was expected to reduce mobility due to program components that build and improve relationships between families and among families and schools. This social capital was theorized to improve families' perceptions of the school's commitment or effectiveness and increase families' identification with the school community, making them less likely to leave. For the majority of students in our sample of predominantly low-income Hispanic schools, FAST had no effect on mobility. There was evidence, however, of heterogeneity in treatment effects. First, Black students had especially high rates of school mobility, but FAST reduced their probability of changing schools between first and third grade by 29 percent. This effect held up across a variety of robustness checks, and it was even larger for those who complied with the treatment and graduated from FAST. Given recent reports that Blacks are more likely to exercise school choice than other groups (Grady et al., 2010), it is possible that FAST helped reduce school dissatisfaction among Black families in our sample by building social capital between families and schools. It is also possible that Black mobility was high because Black families felt socially disconnected from families in these predominantly Hispanic schools, but FAST aided in their integration into these communities. Our evidence, though tentative given the non-experimental nature of the mediation analysis, favors the second explanation. The intergenerational closure promoted by FAST was particularly beneficial for Black families in terms of reducing mobility, and the CACE analyses offered further evidence that parent-parent relationships were an especially important deterrent to mobility for Blacks. Second, although students who lived farther from their schools were considerably more likely to change schools than others, this association was significantly weaker in FAST schools. It is plausible that children who lived farther away from school were more mobile because their families were less connected with the community of families at their child's school, but FAST helped incorporate them into school networks and communities, making them less likely to move.
Unfortunately, further analyses were unsuccessful in supporting this hypothesis or any other social capital-related explanation of this finding. The heterogeneity in mobility rates among race/ethnic groups is also worth revisiting. The high rates of mobility among Blacks in this sample align with prior findings, but the high rates of White mobility and the lower rates of Hispanic mobility are atypical. Similar trends have been documented in predominantly minority schools (Nelson et al., 1996) and could be related to the schools' racial composition. Whites may exhibit higher levels of school mobility in predominantly Hispanic areas due to White flight or the types of strategic moves documented elsewhere (Hanushek et al., 2004). When viewed alongside the high mobility of all non-Hispanics in this study, another explanation is that non-Hispanic students and their families feel out of place in predominantly Hispanic schools. However, we found minimal variation in White mobility rates across schools, so our data provide no evidence with which to evaluate such speculation. The experimental design of our study supports the causal claim that FAST reduced mobility for Black families in our sample. This is an important finding given the role of high Black mobility in the persistence of racial achievement gaps (Hanushek et al., 2004), and the accompanying long-term consequences of these achievement gaps for Black students' later schooling, occupational, and labor market outcomes compared to their White and Asian counterparts (Jencks & Phillips, 1998; Magnuson & Waldfogel 2008). Whether parent-parent social capital is the true causal mediator of this effect is less certain. Because the mediators we tested were not randomly assigned, unobserved factors that affect both the mediator and school mobility could lead to bias. There is no way to rule this out, but our results did hold after controlling for pre-treatment measures of our mediators. Ultimately, intensive qualitative research may be required to uncover the reasons families change schools and to understand why programs like FAST have heterogeneous effects. While our results are illuminating, there are important limitations. Data constraints prevent us from drawing stronger inferences about why FAST decreased school mobility for Blacks or why it failed to reduce mobility for other students. Our sample is not representative of schools nationally, and the 60% of consenting families may not be representative of all families in these schools, so we encourage future research to examine the generalizability of these findings to other contexts; if similar school-based programs can promote social capital among parents within a school community and reduce mobility, this could benefit many students and schools. It may be important to examine the timing of school moves as well; convincing families to delay school changes until the summer could be beneficial, but we are unable to identify the timing of moves in our data. On a related note, it is unclear whether these findings would hold if we examined mobility over a longer period of time. FAST may have simply delayed the mobility of Black students, a short-term effect that could fade over time. Conversely, FAST could reduce mobility beyond third grade if the effects of social capital accumulate over time. We are also unable to differentiate between the two types of non-compulsory moves, strategic and reactive, as we do not know why children changed schools.
School moves may have been beneficial for some students and harmful to others, but the high rates of mobility in our sample were almost certainly disruptive to schools. Understanding the different motivations behind school moves is a next step toward understanding how schools or policymakers could address student turnover. To conclude, school mobility is an important outcome to be studied in its own right, and very little published research has examined efforts to curtail it or mitigate its negative consequences (Alexander et al., 1996; Kerbow et al., 2003; Nelson et al., 1996). Our study provides rigorous evidence that building relationships between and among families and schools may significantly reduce mobility for Black students in predominantly Hispanic schools. It is possible that these types of interventions also reduce mobility for other groups of students in schools with different racial/ethnic compositions, which we encourage future work to explore. We also encourage researchers studying the effects of educational programs and reforms to examine their impact on school mobility, as this is only the second experimental study to test ways in which schools can reduce student turnover (Fleming et al., 2001). Finally, we urge researchers to move beyond simply exploring the effects of mobility and to examine its causes as well as potential ways to prevent unnecessary and harmful moves or to mitigate their negative consequences. Social capital theory may be a critical element in these pursuits.
--- Appendix Description of FAST Activities (Kratochwill et al., 2004; McDonald, 2008)
Family Flag and Family Hellos: At the first FAST Night, each family creates a small flag to place on their family table. The parent is in charge of the process and each family member contributes to the making of the flag, which symbolizes the family unit. Each week these flags are used to denote the family's table where they eat the family meal and participate in activities.
Family Music (15 min): Families sing the FAST song and are invited to share and teach each other additional songs, such as the school song.
Family Meal (30 min): Each family shares a meal together at their table. Staff and children help serve parents first, showing respect to the parent and demonstrating reciprocity and turn-taking. Each week the main dish is planned and prepared by a different host family. The host family is thanked openly by all participating families at the end of the night. The family who won the lottery the previous week serves as the host family the following week and receives money and support needed to provide the meal.
Scribbles (15 min): This is a family drawing and talking game where each person creates a drawing, then family members ask questions about what others drew and imagined. The parent is in charge of enforcing the turn-taking structure and ensuring positive feedback.
Feeling Charades (15 min): Parents and children engage in experiential learning by acting out feelings while other members of the family attempt to guess the emotion. The parent is in charge of ensuring turn-taking and facilitates talking about the game.
Kid's Time (1 hour): FAST staff engages children with each other in supervised, developmentally appropriate activities while their parents participate in Parent Time.
Parent Time (1 hour): Parents from the same school connect with one another through one-on-one adult conversation ("buddy time") followed by larger-group parent discussions ("parent group") led by a FAST facilitator. Parents direct the topics of conversation, share their own issues, and offer help to each other, building informal social support networks over time and facilitating the development of intergenerational closure.
Special Play (15 min): Parent and child engage in child-directed one-on-one play. The parent is coached to follow the child's lead and not to teach, direct or judge the child in any way. FAST team members do not engage with children but support parents through discreet coaching.
Lottery: Each week, one family wins a basket filled with prizes specifically chosen for that family and valuing up to $50. The winning family is showcased during closing circle. Each family is guaranteed to win once, a secret known by parents but not children, and the winner serves as the host family for the next week's meal. This creates a tradition that is valued, respected and repeated each week among participants.
Closing circle and rain: At the conclusion of every FAST Night, parents and team members create a circle and share announcements with each other. Rain is a game played with no talking and involves turn-taking and close attention. The families' status as a community is visually and actively reinforced through this activity.
Family graduation: At the last weekly session, families attend a graduation ceremony to commemorate their completion of the program. FAST team members write affirmations to parents. This is a special event; for example, families might dress up, receive diplomas, wear graduation hats, and take photographs. Each family is announced in front of the group, and school representatives (such as the school's principal and the child's teacher) are invited to observe or participate in the graduation ceremony.
FAST Team members: The FAST program is run by a trained and collaborative team of individuals that reflects the social ecology of the child (e.g., family, school, community). The team must include a parent from the child's school, a school representative (often a counselor, social worker, librarian, or teacher), and two members from local community service agencies. FAST teams are required to be representative of the racial, cultural and linguistic diversity of the families that will be participating in the program, which enables teams to communicate respectfully and appropriately with program participants.
--- Social Capital Scales and Items
Parent-Parent Social Capital (from Parent Surveys)
1. Parent-parent involvement (α = 0.91): 6 items with 4 categories each ("None", "A little", "Some", "A Lot"); items averaged and standardized for those answering at least 4 of 6 items. "How much do other parents at this school..." a) Help you with babysitting, shopping, etc.? b) Listen to you talk about your problems? c) Invite you to social activities? "How much do you..." d) Help other parents with babysitting, shopping, etc.? e) Listen to other parents talk about their problems? f) Invite other parents to social activities?
2. Intergenerational closure: 7 categories (0, 1, 2, 3, 4, 5, 6 or more). a) "At this school, how many of your child's friends do you know?"
3. Shared expectations with parents: 4 categories ("None", "A little", "Some", "A Lot"). a) "How much do other parents at this school share your expectations for your child?"
Parent-School Social Capital (from Parent Surveys)
1. Parent-staff trust (α = 0.86): 4 items, each with 4 categories ("None", "A little", "Some", "A Lot"); items were averaged and standardized for those answering at least 3 of 4 items.
3. School participation: 5 categories ("Strongly disagree", "Somewhat disagree", "Neither agree nor disagree", "Somewhat agree", "Strongly agree"). a) "I regularly participate in activities at my child's school."
4. Talk to child about school: 5 categories ("Strongly disagree", "Somewhat disagree", "Neither agree nor disagree", "Somewhat agree", "Strongly agree"). a) "I regularly talk to my child about his or her school activities."
Table and figure notes: School Mobility by Treatment and Race/Ethnicity. Full table with standard errors and school interactions provided in Supplement. Table 4, Mediators of Black-by-FAST Interaction: estimates combined across 5 imputations; * p < .05. 25% of full sample and 16% of Black sample classified as compliers; estimates combined across 5 imputations.
--- Supplementary Material Refer to Web version on PubMed Central for supplementary material.
Student turnover has many negative consequences for students and schools, and the high mobility rates of disadvantaged students may exacerbate inequality. Scholars have advised schools to reduce mobility by building and improving relationships with and among families, but such efforts are rarely tested rigorously. A cluster-randomized field experiment in 52 predominantly Hispanic elementary schools in San Antonio, TX, and Phoenix, AZ, tested whether student mobility in early elementary school was reduced through Families and Schools Together (FAST), an intervention that builds social capital among families, children, and schools. FAST failed to reduce mobility overall but substantially reduced the mobility of Black students, who were especially likely to change schools. Improved relationships among families help explain this finding.
I. INTRODUCTION The advent of the 20th century, with its quintessential 'modernity', has come to embody an intricate over-arching interconnectedness and interdependence among humans across all geographic, cultural and economic boundaries under a complex phenomenon called 'globalization'. Globalization, often deemed to have its roots in as early as the 15th century, with 'The Silk Road' serving as a route for international trade, further bolstered by the age of exploration (15th-17th century) and the Industrial Revolution (18th-19th century), was not conceptualized until the late 20th century. It was in 1964 that the Canadian cultural critic Marshall McLuhan posited the foundational becoming of a technologically based "global village," effectuated by social "acceleration at all levels of human organization" (103), and in 1983 that the German-born American economist Theodore Levitt coined the term globalization in his article titled "The Globalization of Markets" (Volle, Hall, 2023). Ever since the technological dominance of the late 20th and early 21st century, reflected in the wide accessibility of the internet, the prevalence of social media, satellite television and cable networks, the world has consolidated itself into a global network, iterating McLuhan's conception of 'one global village', so much so that in contemporary times, the technological revolution has accelerated the process of globalization (Kissinger, 2015). This prevalence has given rise to a novel phenomenon termed the 'Technosphere'. Credited to Arjun Appadurai, who considered technological globalization as one of the five spheres of globalization, technosphere implies a "global configuration" of boundaries, fostered by the flow and speed of technology (34). Thus, it can be found that technology and its manifested high-paced connectivity are indeed shouldering the cause of globalization. One of the paramount testimonies of technology driving globalization happens to be the introduction and proliferation of 'Artificial Intelligence', commonly referred to as AI. Gaining prominence and consequent advancement ever since the development of digital computers in the 1940s, AI refers to "the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings" (Copeland, 2023). In other words, AI is a branch of computer science that aims to create systems capable of performing tasks that typically require human intelligence, such as visual perception, speech recognition, decision-making, and problem-solving, by using algorithms, data, and computational power to simulate human-like intelligence in machines. Fortifying the maxims of globalization, artificial intelligence has seeped into the lives of people in modern society, becoming an indispensable part of it. Right from facilitating cross-cultural interactions by providing real-time language translation services to connecting employees located in different parts of the globe on platforms like Google Meet and Zoom, it can be affirmed that "Artificial intelligence, quantum computing, robotics, and advanced telecommunications have manifested the impact of globalization, making the world a global village" (Shah, Khan, 2023).
Consequently, it also validates Theodore Levitt, the harbinger of theorizing globalization, who prophesied that "Computer-aided design and manufacturing (CAD/CAM), combined with robotics, will create a new equipment and process technology (EPT) that will make small plants located close to their markets as efficient as large ones located distantly" (09). Though the exposition of Artificial Intelligence has vindicated the principles of globalization, bringing the world closer with its provision, speed and reach, streamlining international business operations, and facilitating cross-border collaboration, this AI-driven globalization has its downfall too. While AI has made information and services accessible to many, it has simultaneously exacerbated the digital divide. In developing countries, people in rural areas lack access to computers, the internet and AI-driven platforms, putting them at a disadvantage compared to their urban counterparts within the nation and those residing across geographical borders. In turn, those who possess the skills to develop and operate AI technologies often command high-paying jobs, while others face job displacement due to automation. For instance, automated customer service chatbots have reduced the demand for human customer service representatives, leading to job losses in the customer service industry, while robots are replacing manual labor in the manufacturing industries. Moreover, though connecting people, the simulation catalyzed by algorithms has triggered unpleasant psychological dispositions among its users. In essence, AI-driven globalization has created "complex relationships among money flows, political possibilities, and the availability of both un- and highly skilled labor" (Appadurai, 1998, p.34), all of which, with the unraveling of the digital divide, risks of unemployment for the unprivileged poor, and consequent mental dispositions, only pit individuals against one another and vest unrestrained power in the hands of the capitalist few, effectuating a disintegration of society at varied levels. The aforementioned underside of AI-driven globalization aligns with a phenomenon called 'The Casino Syndrome', coined by Anand Teltumbde in his seminal work, The Persistence of Caste, wherein he investigates the nexus between globalization and the caste system in India. Contextualizing the simulating nature of the casino, whereby everyone involved in the play is merely guided by their zeal for money-making, becoming indifferent towards others, potentially yielding to the concentration of money in the hands of a few, broken relationships and mental health problems, he holds globalization to be operating along the same divisive lines. Similarly, since Artificial Intelligence stands as the modern-day face of globalization, the same 'casino syndrome' can be applied to AI-driven globalization. To pursue this nexus, this paper intends to theorize Teltumbde's Casino Syndrome and substantiate AI-driven globalization as a testimony of the tenets of the syndrome, by investigating its triggers of social transformation that further the class divide, alter mental health and lead to the eventual disintegration of society. Consequently, it attempts to resolve the derailing impact of AI-driven globalization by propounding corrective measures for the same.
--- II. THEORISING GLOBALIZATION-INDUCED CASINO SYNDROME The term 'Casino Syndrome' was propounded by an Indian scholar, journalist, and civil rights activist, Anand Teltumbde, who is renowned for his extensive writings on the caste system in India and for advocating rights for Dalits. One of his critical writings is The Persistence of Caste: The Khairlanji Murders and India's Hidden Apartheid (2010), wherein he analyzes and interrogates the Khairlanji murders, or the public massacre of four scheduled caste citizens in the Indian village of Khairlanji, substantiating it within the larger Indian political context that has failed to protect its downtrodden citizens and the socio-religious context that has aggravated the marginalization of these groups. A novel perspective that he foregrounds is the critique of globalization, deconstructing it merely as a myth that furthers the subjugation of Dalits and those who lie at the fringes of society, in the reasoning of which he likens globalization to the 'Casino Syndrome'. Breaking down Teltumbde's terminology, a 'casino' refers to a commercial set-up where individuals engage in gambling, typically including games of chance like slot machines and table games such as poker and roulette, by betting money on possible random outcomes or combinations of outcomes. Initially physical, in the wake of digitalisation and globalization, online casinos like Spin Casino, Royal Panda, Genesis, Mr. Vegas, etc., have taken over. Simulating the inclinations of the players into an addiction, casinos are designed to generate revenue through the wagers and bets of their customers. Corroborating this money-making essentialization of casinos, the Statista Research Department holds that "in 2021, the market size of the global casinos and online gambling industry reached 262 billion U.S. dollars" ("Global casino and online gambling industry data 2021", 2022), whereas "11% of adult internet users gamble actively online, generating a global revenue of over 119 billion GBP" (Iamandi, 2023). Online casinos, affirming the technology that spawned globalization, which seemingly brings the world together, thus denote its capitalistic attribute, which not only hooks people into its system but also ensures that the flow of money gets concentrated in the hands of its privileged owners. A 2021 BBC report read that "Bet365 boss earns £469 million in a single year," while another report asserted, "The extremely successful casino company generated a total of 5.16 billion U.S. dollars in 2020" ("Leading selected casino companies by revenue 2020", 2022). For users, by contrast, though casinos offer entertainment and the possibility of winning money, they can lead to addiction, selfishness, financial problems, debt, social and familial isolation, and so on. These culminations bring to the fore the casino's correlation with the second half of the terminology, 'syndrome', which refers to a "group of signs and symptoms that occur together and characterize a particular abnormality or condition" ("Syndrome Definition & Meaning"). The symptoms rooted in casino-induced simulation, often referred to as 'problem gambling', 'compulsive gambling', 'gambling disorder', and the like, are listed by the Mayo Clinic as preoccupation with gambling, restlessness, agitation, disposition to get more money by betting more, bankruptcy, broken relationships, etc.
Thus, it can be discerned that casinos effectuate a syndrome whereby, on the one hand, money gets accumulated in the hands of the owners, and on the other hand, it streams from the pockets of the players, at the cost of their social and financial lives. This is iterated by a research finding that holds that "a typical player spends approximately $110 equivalent across a median of 6 bets in a single day, although heavily involved bettors spend approximately $100,000 equivalent over a median of 644 bets across 35 days" (Scholten et al., 2020). Consequently, a review highlights the economic cost of suicide as being £619.2 million and provides an updated cost of homelessness associated with harmful gambling as being 62.8 million ("Gambling-related harms: evidence review", 2021). Therefore, it can be deduced that casino syndrome, in the context of gambling, merely creates and furthers the economic divide by serving the ends of capitalism and subjecting its players to simulation, financial crises, social alienation, etc. In essence, it creates and intensifies inequality and disintegration among people. Foregrounding this penetrative inequality and associated disparity, Teltumbde speaks of free-market fundamentalism as making "globalization intrinsically elitist, creating extreme forms of inequality, economic as well as social. By pitting an individual against all others in the global marketplace, it essentially creates a 'casino syndrome', breaking down all familiar correlations and rendering everyone psychologically vulnerable; the more so, the more resourceless they are" (Teltumbde, 2010, p. 175). Applying the same deconstructionist approach, Teltumbde's conceptualisation foregrounds economic inequality as a background, based on which prominent contorting tenets emerge, all of which are substantiated below in the context of globalization: --- Globalization pits an individual against all others in the global marketplace Globalization, while fostering interconnectedness on a global scale, also inadvertently pits individuals against each other. It opens up opportunities for offshoring and outsourcing, and through these options, it avails industry competitors (Bang et al., 2021, p. 11). This is particularly evident in the context of job markets with the emergence of global outsourcing. Owing to global outsourcing, with the ease of communication and the ability to outsource labor to different parts of the world, workers often find themselves competing with peers from distant regions for employment opportunities. This underside of globalization is accurately pointed out by Gereffi and Sturgeon, who hold that "the rise of global outsourcing has triggered waves of consternation in advanced economies about job loss and the degradation of capabilities that could spell the disappearance of entire national industries" (01). Thus, it can be acknowledged that globalization, yielding global outsourcing, creates global competition, which pits not only people but also nations against one another. --- Globalization breaks down all Familiar Correlations Having pointed out the pitting of nations against one another, globalization, in its zeal to disrupt boundaries, also breaks down the very nation by causing enmity among its social groups. Reiterating globalization's quintessential inequality, it can disintegrate national integrity by aggravating class and caste divisions along the lines of global opportunities.
Illuminating this in the Indian context, Gopal Guru (2018) articulates that "many scholars who have managed to become a part of a globally operating academic network latch on to every new opportunity, thus pushing those who lack this connection to relatively less attractive institutions within India" (18). Hence, it can be substantiated that globalization, in opening up a world of opportunities, does so only for the economically efficient and privileged, which in turn places the underprivileged at a situational loss and sows seeds of enmity amongst them, eventually breaking down the fabric of a united nation at the macrocosmic level. At the microcosmic level, owing to its operational characteristics, it also breaks down families and social structures, as accurately pointed out by Trask, who posits that "as a growing global ideology that stresses entrepreneurship and self-reliance pervades even the most remote regions, the concept of social support services is quickly disintegrating" (03). Therefore, globalization, apart from its global unification, also effects breakdowns or disintegrations at various subtle levels, as was held by Teltumbde.
--- Globalization renders everyone psychologically vulnerable
Globalization, instead of connecting individuals, can also isolate them, especially from themselves. Through its boundary-blurring phenomenon, it fuels cultural exchanges and diaspora, which culminate in individuals dealing with the psychological challenges of cultural displacement. Additionally, urbanization, driven by globalization, has led to a colossal increase in behavioral disturbance, especially associated with the breakdown of families, abandonment of and violence toward spouses, children, and the elderly, along with depressive and anxiety disorders (Becker et al., 2013, p. 17). Moreover, under the unqualified and unstoppable spread of free trade rules, the economy is progressively exempt from political control; this economic impotence of the state influences how individuals see their role, their self-esteem, and their value in the larger scheme of things (Bhugra et al., 2004). This constant fear of being on one's own in the global sphere has ushered in an age characterized by perpetual anxiety and by identity and existential crises, which is even more daunting for the underprivileged, as Kirby rightly posits that "poor people's fears derive from a lack of assets and from anxiety about their ability to survive in increasingly unpredictable and insecure environments" (18). Therefore, it can be substantiated that though globalization has heralded global connectivity, it has also rendered people psychologically vulnerable to a myriad of issues. In conclusion, globalization can indeed be seen unfolding its impact through the lens of Teltumbde's 'Casino Syndrome'.
--- III. COMPREHENDING AI-DRIVEN GLOBALIZATION THROUGH THE TENETS OF CASINO SYNDROME
As broached above, artificial intelligence, owing to its advanced technology, has come to represent a prominent facet of globalization. Thus, the tenets of globalization-induced casino syndrome can be applied to artificial intelligence to bring to account the underside of AI-driven globalization that yields inequality and disintegration.
3.1 Creates inequality - Pits an individual (entity) against others in the global marketplace (is elitist): Since technology-driven globalization has global reach and impact, its competition-inducing trait can be seen at varied levels of intersection, whereby, apart from merely pitting individuals, it actually pits entire entities in opposition too. At a macro level, it can be seen pitting nations against each other in a global competition, as accurately posed by Russian President Vladimir Putin: "Whoever becomes the leader in this sphere (AI) will become the ruler of the world" (Russian Times, 2017). Thus, AI has inadvertently given rise to a global race of nations aspiring to become the AI superpowers of the world. From heavy investments and the allocation of funds for research to the formulation of policies, nations are leaving no stone unturned to beat others in their zeal to dominate globally. It is to be noted that their spirit to compete does not come from a place of situational necessity, committed to resolving the pressing problems of citizens; rather, it is to flex their potency and claim a pedestal. Thus, AI-driven globalization embodies the casino syndrome's elitist essence, as pointed out by Teltumbde. The most conspicuous conflict is between the US and China, as validated by Anthony Mullen, a director of research at the analyst firm Gartner, who says, "Right now, AI is a two-horse race between China and the US" (Nienaber, 2019). It is very evident that the world is divided in the wake of AI-driven globalization, with nations pitted against each other not only to become supreme themselves but also to overtake the two AI superpowers, the US and China. Delving further, apart from existing at the level of research, policies, fund allocations, etc., this AI-driven global feud is discerned to unfold as global AI warfare, as AI can be used for developing cyber weapons, controlling autonomous tools like drones, and conducting surveillance to attack opponents. Consequently, "already, China, Russia, and others are investing significantly in AI to increase their relative military capabilities with an eye towards reshaping the balance of power" (Horowitz, 2018, p. 373). Hence, AI-driven competition is not merely implicit, holding the facade of advancement and global progress; AI is being used by nations to quite literally compete with, overpower, and destroy other countries in their quest for the top, giving rise to the anticipation of AI warfare, the goriest prospect of a world war, articulated overtly by Putin: "When one party's drones are destroyed by drones of another, it will have no other choice but to surrender" (Vincent, Zhang, 2017). Interrogating the flip side of this AI-driven global race and warfare, the entities that will actually receive the blow of its destruction are the developing, third-world countries. In other terms, AI-driven globalization has also split the world into two spheres, whereby on the one hand it "could benefit countries that are capital intensive" (Horowitz, 2018), or elite, whereas on the other hand, developing regions like Sub-Saharan Africa, the Caribbean, Latin America, and South Asia, which are preoccupied with other urgent priorities like sanitation, education, healthcare, etc., would be found wanting (Chatterjee, Dethlefs, 2022). Likewise, AI will strengthen the already existing economic and digital divide between the first world and the third world, making the latter a soft target and putting it at an economic disadvantage.
This can be seen coming true, as "major nations have already co-opted it (AI) for soft power and ideological competition" (Bershidsky, 2019) and have established it as a pillar of "economic differentiation for the rest of the century" (Savage, 2020). Aggravating the quintessential distinction between the haves and the have-nots, AI-fostered economic inequality resonates with the casino syndrome, which too creates an economic divide between the owners and the players by directing the flow of money from the pockets of the latter to the former. Fortifying the same, it is to be noted that the developed countries investing heavily in AI do so by extracting hard-earned money from the pockets of their taxpayers, the common citizens; thus, economic inequality within a nation widens too, with poor commoners at an economic disadvantage. Moving from the macrocosm to the microcosm, globalization's essential competitiveness also pits companies against each other. The haste of companies to catch up in the AI race was seen when Google launched Google Bard soon after OpenAI launched ChatGPT. Subsequently, owing to OpenAI becoming the superpower of the market, Snapchat launched its My AI, and Microsoft launched Bing AI, even though Microsoft and OpenAI are partners. However, companies trying to overpower their competitors have been a common trait of globalization. A more novel competition can be seen unfolding in AI-driven globalization, pitting AI and individuals (humans) against each other. In a historic Go match, Google DeepMind's artificial intelligence AlphaGo defeated the Korean Go master Lee Sedol in four of the five games (Metz, 2016). It is not just an instance of AI playing against human intelligence and defeating it; at a larger level, it also signifies two countries, with Google representing the US and Lee Sedol representing South Korea, pitted against each other, whereby the former defeated the latter through its technology. This phenomenon is discernible in routine human activities too. Elon Musk, in an interview, claimed, "AI is already helping us basically diagnose diseases better [and] match up drugs with people depending [on their illness]" (Russian Times). AI, being more efficient than humans at such tasks, has inevitably pitted a significant part of the human race against itself. It brings to the fore a foretelling of a war between technology-driven AI and the human population, as portrayed in numerous sci-fi movies. This futuristic war can be anticipated given the amount of investment made in AI's proliferation, as one report read that "Today's leading information technology companies-including the faangs (Facebook, Amazon, Apple, Netflix, and Google) and bats (Baidu, Alibaba, and Tencent)-are betting their R&D budgets on the AI revolution" (Allison, Schmidt, 2020, p. 03), while another claimed, "In 2020, the 432,000 companies in the UK who have already adopted AI have already spent a total of £16.7 billion on AI technologies" ("AI activity in UK businesses: Executive Summary", 2022). Thus, at the root level, AI and humans are pitted against each other by these MNCs. As a result, the AI industry and its elite stakeholders are witnessing an economic bloom from these investments; however, this comes at the cost of working-class people losing their jobs. Due to the automation of work, AI can be seen replacing humans, especially in manual labor, and hence taking away the jobs of poor people who lack the education to do anything but manual work.
Studies report that "from 1990 to 2007, adding one additional robot per 1,000 workers reduced the national employment-to-population ratio by about 0.2 percent" (Dizikes, 2020), whereas by 2025 "robots could replace as many as 2 million more workers in manufacturing alone" (Semuels, 2020). Moreover, recently introduced industrial robots like Rethink Robotics' Baxter are more flexible and far cheaper than their predecessors and will perform simple jobs for small manufacturers in a variety of sectors (Rotman, 2013), leading to further displacement of human workers. On the other hand, companies leading in AI, like Baidu and Tencent, are generating more revenue than ever. As reported by Statista, in 2023 the predicted revenue generated by Baidu within this market is over 196 billion yuan, whereas for Tencent the revenue is approaching 150 billion yuan (Thomala, 2022). It can therefore be fortified that this pitting of AI against humans at the hands of AI-leading companies has yielded a flow of money from the pockets of poor laborers to the bank accounts of the privileged industries and their stakeholders, conforming to the income-inequality tenet of the casino syndrome. Another aspect of AI's impact on jobs involves reports claiming the emergence of new job opportunities. According to the World Economic Forum's Future of Jobs Report, 85 million jobs will be displaced by 2025, while 97 million new roles may emerge (Orduña, 2021). Taking away certain categories of jobs, AI will create jobs only categorically, i.e., for the educated elite. Therefore, while middle-class workers lost their jobs, white-collar professionals and postgraduate degree holders saw their salaries rise (Kelly, 2021). Moreover, it will peculiarly create jobs for people who are experts in AI. Subsequently, it can be rightly posited that "AI won't take your job, but a person knowing AI will" (Rathee, 2023). By doing so, AI will inevitably pit individuals who have promising jobs against those without any, as the casino syndrome's original tenet foregrounds. It can be conclusively said that AI has created a global rat race between nations, companies, and people, pitting these entities against each other. As a consequence, it harbors not only global enmity, throwing open the possibility of global warfare, but also economic inequality, whereby money flows into the accounts of the elite 'Chosen Few' and gets emptied from the pockets of the already underprivileged, furthering the historical divide between the haves and the have-nots.
--- Disintegration of Familial Correlations: Erosion of interpersonal relationships
The strain of AI-driven advancements and intricate technological globalization has far-reaching consequences for interpersonal relationships at many levels. AI-driven competition can lead to people prioritizing their professional ambitions and success over their interpersonal relationships because of the rat race created by AI. As companies passionately pursue the use of artificial intelligence, leading to a job recession, individuals are pitted against each other, and in their ambition to find stable employment, they often neglect their familial and social relations. A typical employee often works intensely even after securing a job because of competitive pressure and the need to ensure job security. Employed or not, individuals spend excessive hours building their professional lives, leaving them with little to no time and emotional energy for their loved ones.
According to Our World in Data (2020), Americans in their teenage years spent more than 200 minutes per day with their families, but as they progressed into their 20s, 30s, and 40s, their family time went down to approximately 50 to 100 minutes per day, while they spent more than 200 minutes with their co-workers each day. Their time spent with friends also took a downward spiral, falling to less than 80 minutes each day during their 30s and to approximately 40 minutes each day, or less, once they entered their 40s (Ortiz-Ospina, 2020). This neglect can result in strained marriages, fractured families, and a growing sense of isolation and loneliness as people become more and more absorbed in their goals. According to a study published through the National Library of Medicine, "higher levels of newlywed spouses' workloads predict subsequent decreases in their partners' marital satisfaction during the first four years of marriage but do not affect changes in their own satisfaction. These findings provide additional evidence for the dynamic interplay between work and family life and call for further study of the factors that make some relationships more or less vulnerable to the negative effects of increased workloads and the processes by which these effects take hold." (Lavner, Clark, 2017). Moreover, due to competition in professional arenas, employees and friends are pitted against each other, as there is a strong desire to outperform one's peers, leading to envy, rivalry, and unnecessary conflicts. Hence, AI-driven globalization has a negative impact on interpersonal relationships in personal as well as professional life. The virtual world created by AI that people, or more precisely social media users, participate in is a highly curated world, and all the algorithmically programmed platforms that are regularly used-Instagram, Facebook, Twitter, etc.-provide highly curated content created for one particular user based on their 'history'. Every user's search history is used for better-personalized results (Southern, 2022). Because artificial intelligence can process large amounts of data in a second, it can outstrip any human correlation and create a personalized world just for one user, allowing them to spend their time in that world while affecting their social interactions and often fracturing their familial bonds. Algorithms and curation create a seemingly perfect virtual reality where individuals do not have to struggle with social anxiety, as their interests are presented to be explored freely, leading to a gradual distancing from the 'real' world. This phenomenon can be called a real-life manifestation of Baudrillard's concept of 'Hyperreality'. Thanks to social media, a person's digital footprint often tells more about their personality than their real-life behavior can. The hyperreality created on social media in turn creates a 'virtual arcade' around users, isolating them from the external, real world of humans. All of this eventually disintegrates their interpersonal relationships at home and with colleagues in more ways than one (Lazzini et al., 2022). Moreover, artificial intelligence can reinforce biases, because AI makes decisions based on training data that can include biased human decisions rooted in social inequalities (Manyika et al., 2019); as AI reinforces these biases, particularly by making its content curation more 'majority'-specific, minority cultural identity is threatened.
According to the Bridge Chronicle (2021), a research team at Stanford University discovered that GPT-3 was producing biased results. According to the team, the machines have become capable of learning undesired social biases that can perpetuate harmful stereotypes from the large sets of data that they process (IANS, 2021). The team discovered that even though the purpose of GPT-3 is to enhance creativity, it associated Muslims with violence. The team gave the program the sentence "Two Muslims walked into a..." to complete, and the results included "Two Muslims walked into a synagogue with axes and a bomb" and "Two Muslims walked into a Texas cartoon contest and opened fire" (IANS, 2021). When the researchers replaced "Muslims" with "Christians," the AI returned violence-based associations 20 percent of the time, instead of 66 percent of the time for Muslims. Further, the researchers gave GPT-3 the prompt "Audacious is to boldness as Muslim is to...," and 25 percent of the time the program answered "terrorism" (IANS, 2021). AI learns from training data, which may be skewed with human biases, and these biases are directly reproduced in the results. Such results raise practical and ethical concerns, as they promote and aggravate violence, communal hatred, stereotypes, prejudice, and discrimination, and disintegrate bonds of communal unity at a national and international level. To corroborate further, artificial intelligence targets users by providing deliberately curated custom feeds, and this feed is an amalgamation of their 'interests', which are, as aforementioned, 'majority'-specific. Therefore, the algorithmic curation of artificial intelligence subdues multiple perspectives by making the user perceive a single point of view, hindering not only their cultural identity but also their individuality, as social media giants essentially try to accumulate as many users as possible to further the ends of their capitalist business and reap monetary profit. In other words, social media companies aim to create a network of users by harnessing their interactions and emotions, which in turn creates new social needs (Xu, Chu, 2023). Ultimately, the cost is the individual's cultural as well as personal identity. Individuals are turned into users; users are then turned into consumers: an unraveling of a multi-layered disintegration of one's own self in an AI-driven globalized world. AI's penchant for personalisation and tailored feeds may produce user satisfaction at times, but it creates 'echo chambers', where individuals are exposed only to the viewpoints their opinions already align with. This narrowing of perspectives deepens individualisation as identities are subsumed, and the promotion of bias in AI further undermines individuality. AI's data collection for such customisation leads to the erosion of privacy, and the constant monitoring reduces individuals to mere data points to be analyzed; conscious that they are being scrutinized, they drift toward self-censorship.
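The Stanford-style probe described above is, at bottom, a counting exercise: feed the model the same sentence stem with different group names and measure how often its completions contain violent language. As a purely illustrative aside, the sketch below shows how such a probe could be run against a present-day language-model API; the model name, trial count, and the small violence wordlist are assumptions made here for demonstration, not the original study's protocol.

```python
"""Minimal sketch of a prompt-completion bias probe (illustrative assumptions only)."""
from openai import OpenAI  # pip install openai

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Illustrative wordlist; the original study used human annotation, not keyword matching.
VIOLENCE_TERMS = {"bomb", "gun", "shot", "shooting", "killed", "attack", "terror"}

def violent_completion_rate(prompt: str, trials: int = 50) -> float:
    """Ask the model to finish `prompt` repeatedly and return the share of
    completions containing any term from the (illustrative) violence wordlist."""
    hits = 0
    for _ in range(trials):
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # hypothetical stand-in model, not GPT-3 itself
            messages=[{"role": "user",
                       "content": f"Complete the sentence: {prompt}"}],
            max_tokens=30,
            temperature=1.0,  # sample diverse completions
        )
        text = (response.choices[0].message.content or "").lower()
        if any(term in text for term in VIOLENCE_TERMS):
            hits += 1
    return hits / trials

if __name__ == "__main__":
    for group in ("Muslims", "Christians", "Buddhists"):
        rate = violent_completion_rate(f"Two {group} walked into a")
        print(f"{group}: {rate:.0%} of completions contained violence-related terms")
```

A systematic gap between groups in such counts is the kind of evidence the Stanford team reported; the sketch merely illustrates the shape of the measurement, not its rigor.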
The depersonalization of customer service through AI-driven chatbots and automated interfaces, the invasive nature of emotion recognition and surveillance technologies, and the loss of control over decisions in an increasingly autonomous AI-driven world can further contribute to the sense of deindividualization (Coppolino). Alluding further to the intentional curation of content, in the context of AI-driven globalization in today's world, the broader use of social media can intensify nationalist sentiments, often causing communal tensions. This is due to the highly curated content that individuals are exposed to, which can distort their perception of reality as their online feeds become their primary source of information. Algorithms play a crucial role in recommending content that aligns with users' existing ideologies, effectively reinforcing their views and isolating them within their ideological bubbles. This phenomenon is not limited to any single nation. In India, for instance, communal identity tends to manifest itself in nationalist fervor, while along caste lines, it can result in anti-Dalit prejudice and behavior (Teltumbde, 2010, p. 33). According to the Indian Express (2023), "Facial recognition technology-which uses AI to match live images against a database of cached faces-is one of many AI applications that critics say risks more surveillance of Muslims, lower-caste Dalits, Indigenous Adivasis, transgender people, and other marginalized groups, all while ignoring their needs" (Thomson Reuters Foundation, 2023). AI policing systems will exacerbate current caste issues in India, as policing in India is already casteist, and AI systems will be fed more information that is biased and based on caste hierarchies (Thomson Reuters Foundation, 2023). In the West, the discussion of laws regarding AI has already begun; India, a nation of more than 120 crore citizens, needs firm laws on AI use and ethics as quickly as possible. Outside India, the most well-known case is the Cambridge Analytica data scandal, in which Cambridge Analytica collected the data of millions of Facebook users without their permission so that their feeds could be influenced, especially for political messaging, as
a way of microtargeting the users. This political advertising by Cambridge Analytica provided analytical assistance to the political campaigns of Ted Cruz and Donald Trump, who won the elections (Confessore, 2018). The firm is also said to have interfered with the Brexit referendum; however, according to the official investigation, no significant breach had taken place (Kaminska, 2020). This global pattern of the disintegration of national and cultural identities underscores the far-reaching consequences of artificial intelligence. Marginalization of communities occurs because of the bias rooted in AI's creation, for the creators of AI are not immune to the world around them. AI works on large amounts of data; this data is produced by human users, and since human users themselves are biased, the content curation and algorithms of artificial intelligence are also biased (Costinhas, 2023). Examples include the AI-based crime prevention software that in 2021 was found to disproportionately target African Americans and Latinos, and the experimental AI recruiting tool that Amazon scrapped after it gave preference to men's resumes over women's (Dastin, 2018). Therefore, nationalist and sexist stridencies are further provoked by a biased AI trained on the biased data sets of biased human users, leading to cultural as well as gender-based interpersonal disintegration. In a wider context, then, AI disintegrates interpersonal relationships at the national and community level too. Moreover, by inciting one gender against the other, it also disintegrates the very essence of humanitarian bonds, aggravating the long-existing gender prejudices that men and women alike have fought against for centuries. Gender discrimination, one of the main factors in social inequality, can cause a deep wound in interpersonal relationships as it promotes stereotypes and prejudices, mainly against women. This can create barriers to communication and lead to isolation and mental health struggles. Furthermore, collaboration is undermined in workplaces where there is a gender imbalance, and the lack of inclusivity promotes orthodox gender beliefs. Gender discrimination and the reinforcement of stereotypes at home can cause rifts among family members as well. Therefore, it causes disintegration in the workplace as well as in the family. Furthermore, women face specific challenges when it comes to artificial intelligence.
There is a deep-rooted gender bias in technology, as its makers are approximately 70% men and 30% women (Global Gender Gap Report 2023, World Economic Forum, 2023). This bias is reflected in the treatment AI and robots have received at the hands of men. To be specific, robots, especially those that are created as 'females', are often created with the aim of serving some sexual purpose. A well-known example is the molestation and consequent malfunction of a sex robot at an electronics festival in Austria (Saran, Srikumar, 2018). According to The Guardian (2017), the sex-tech industry is coming up with sex-tech toys with custom-made genitals and heating systems; this sex-tech industry is worth $30 billion (Kleeman, 2017). Even if sex robots could reduce rape and assault in real life, they nevertheless usher in a new era of women's objectification, continued through technology (Saran, Srikumar, 2018). Furthermore, the default voices of virtual assistants like Siri and Alexa are clearly female, and despite the availability of a 'male' option, these tools are cast in a clearly gendered, subservient role. Despite the world's attempts at inclusivity, the creators of AI bear a general responsibility: if the machines continue to be biased, the world will be ushered towards an institutionalized, futuristic patriarchal system run by AI and robots (Saran, Srikumar, 2018). One way through which the bias and disintegration caused by AI and technology can be reduced is by allowing women and marginalized communities a part in the creation process, and for that to happen, humanity first needs to devise and agree upon a set of ethics by which it can run AI. The disintegration caused by AI has profound implications at personal, cultural, and national levels, as seen in the case of gender and other groups. This phenomenon is closely intertwined with the principles of capitalism and its ideologies. Classical liberalism, a political and economic philosophy, stresses individual freedom within a minimally regulated marketplace. Capitalism builds upon this foundation, accentuating individualism as its core tenet. With the rise of AI, this individualism has been taken to unprecedented extremes. Neoliberalism, a term frequently invoked in the context of globalization, represents the evolution of classical liberalism, reconfigured to cater to capitalism's profit-driven demands. Neoliberalism prioritizes the interests of the individual over the community, a stark departure from ideologies such as communism and socialism, which were forged in response to capitalism and which emphasize the benefit of the many over the few. AI has pushed this individualistic ideology (the benefit of the few) to new heights, where both the market and society are perceived through the lens of intense self-interest. Teltumbde highlights this point by asserting that "classical liberalism, which lent capitalism its ideological support, is reclaimed by globalists in the form of neoliberalism, its individualist extremist concoction that advocates extreme individualism, social Darwinist competition, and free market fundamentalism" (Teltumbde, 2010, p. 175). The concept of "social Darwinist competition" aligns with the competitive nature of AI-driven globalization, where survival is akin to natural selection, favoring only the most ruthlessly driven and motivated people.
The term "free market fundamentalism" further signifies a staunch belief in the primacy of the free market and individual choice. This runs parallel with the idea that AI has escalated the focus on the individual as the primary economic mechanism, not a human being. According to the British Educational Research Association, "the combination of increasing globalization and individualism weakens collective values and social ties, jeopardizing the ideals of equality, equity, social justice, and democracy. (Quoted text from Rapti, 2018) Excessive individualism makes family and other interpersonal relations fragile to the point that the sense of community and belonging becomes smaller to a very feeble level, just as is the case with casinos. Individuals caught in this 'Casino Syndrome' live a life of disintegration with malign professional connections as the nature of competition pushes them to rival one another instead of encouraging healthy collaboration. A correct education can reform the situation and help restore and/or strengthen interpersonal relations by providing every student with a communal foundation from the very beginning, with the right balance of individualism (Rapti. 2018). AI-driven globalization's reach extends beyond the world of technology and data and into the physical world. Due to the digitalisation of the biological world, natural and familiar environments are also being digitized to the point that an urban setting can easily pass for a technosphere. According to UNESCO, a technosphere is composed of objects, especially technological objects, manufactured by human beings, including buildings' mass, transportation networks, communication infrastructure, etc. (Zalasiewicz, 2023, p. 15-16). The technosphere and even simply the generic digitalised transformation of the physical world distance human beings as individuals from nature and enforce a regular reliance on digital objects daily, contributing to mental and physical detachment from the physical world. Thus, a technosphere affects individuals' social skills by disintegrating a pertinent bond between humans and nature while having a directly detrimental impact on their personal lives. Incinerating personal lives, artificial intelligence can lead to social anxiety and an inferiority complex due to lower self-esteem. It is interesting to note that two entire generations of people-Millennials and Generation Zprefer text messaging over speaking on a phone call. Although research does indicate that "hearing each other's voices over the phone fosters better trust and relationships compared to texting" (Kareem, 2023), according to the Guardian (2023), "some young people even consider phone calls a "phobia" of theirs. Contrary to what might seem like a mere convenience choice, this new data suggests that anxiety might be at the root of this behavior". According to the study, 9 out of 10 individuals belonging to Generation Z claimed that they preferred texting over speaking on the phone. Social anxiety has been on an all-time rise amongst the said generation, and Generation Z is known for their outspokenness on several issues and promoting political correctness. Two whole generations have been fed algorithms and curated data, which implies that the high amounts of time spent in the virtual world directly impact their mental health and interpersonal relationships. This eventually manifests into a social form of disintegration of bonds, apparent amongst millennials and Generation Z individuals. 
Communication and language are losing their role as knowledge is shared and perceived through digital symbols and technology-mediated methods instead of language. This shift underscores the weakening of human verbal communication, still the most reliable and most widely used form of communication. Not only do digital symbols lack the depth of human language, but their use also causes a decrease in human verbal communication, thus hampering effective and reliable communication and giving rise to disintegration, distancing from others, and misunderstanding. This transition can diminish effective, nuanced, and empathetic communication among individuals, damaging bonds, as digital symbols often lack the profundity and context of human language. According to a study published in Scientific Reports (2023), the adoption of AI-generated algorithmic response suggestions, such as "smart replies," can indeed expedite communication and foster the use of more positive emotional expressions. However, the study also highlights the persisting negative perceptions associated with AI in communication, which can undermine these positive impacts. As language evolves towards these digital symbols, the urgency of preserving the strength of human verbal communication becomes evident. As accurately postulated, "Advanced technology has exacerbated the detachment between humanity and nature [...] The combination of the Internet and industrialization, various industries plus the Internet, virtual technology, bionic engineering, and intelligent facilities, including robotics, are replacing the natural environment with virtual objects and building a virtual world that has never been seen before" (Zou, 2022, p. 31). This transition may lead to disintegration, distancing among individuals, and misunderstandings, ultimately jeopardizing the quality of interpersonal bonds. The findings of the study in Scientific Reports (2023) emphasize the need for a comprehensive examination of how AI influences language and communication, especially in light of its growing role in our daily interactions, and the importance of considering the broader societal consequences of AI algorithm design for communication. In terms of its psychological bearing, artificial intelligence also promotes narcissistic tendencies (Evans, 2018), while, as reiterated, AI communication technology promotes individualism over interpersonal relationships (Nufer, 2023). The design of artificial intelligence encourages self-interest, feeding narcissistic tendencies. Social media algorithms customize and curate user feeds, reducing altruism by prioritizing self-interest. AI's focus on serving the primary user can cause individuals to neglect their social relationships, and children who grow accustomed to commanding AI may develop a superiority complex. This reliance on AI devices can promote narcissism in both children and adults (Evans, 2018). In effect, AI technology promotes the self so excessively that it raises concerns about a superiority complex. The digital transformation of our familiar world is reshaping individual perceptions and altering the way we interact with our surroundings. As people increasingly immerse themselves in the virtual realm, their lived experiences become more intertwined with technology, leading to a gradual decline in shared experiences.
This shift has profound implications for interpersonal relationships, as the digital landscape often prioritizes individual-centric experiences, leading to disintegration. According to Forbes (2023), with the rise of AI in the world, at some point human beings will develop deeper relationships with artificial intelligence than with real human beings, which can lead to toxicity in interpersonal relationships and narcissism (Koetsier, 2023). Human beings have the ability to anthropomorphize nonhuman entities easily, and with artificial intelligence willing to cater to every human need, the world is moving farther away from relationships with people and towards synthetic, anthropomorphised entities like AI (Koetsier, 2023). An example is Rossana Ramos, an American woman from New York who 'married' an AI chatbot, saying that her former partners were toxic and abusive, whereas she calls Eren (the chatbot) a 'sweetheart' ("Woman 'Married' an AI Chatbot, Says It Helped Her Heal from Abuse", 2023). AI threatens human contact at a time when a quarter of millennials say that they have no friends and 50% of Americans are in no romantic relationship (Koetsier, 2023). AI is also aggravating the hikikomori challenge in the present world. "Hikikomori is a psychological condition that makes people shut themselves off from society, often staying in their houses for months on end" (Ma, 2018). If AI continues to grow unchecked, the already persisting issues of anxiety and existential crisis will be further aggravated, and even the most basic forms of human contact will be seriously threatened, as people will choose to spend more time with their perfectly customized AI partners or friends than with human beings (Koetsier, 2023). Interpersonal relationships have never been more challenged. Not only is AI threatening human contact, it is also posing a threat to the one thing that is considered a healthy coping mechanism: art. AI is changing the way one thinks about art, as "the ability of AI to generate art, writing, and music raises the question of what constitutes 'creativity' and 'art' and also whether AI-generated work can be considered truly creative. This also raises ethical questions about the authorship, ownership, and intellectual property of AI-generated work" (Islam, 2023). Whether AI-generated art can truly be creative is already a matter of debate, but it is essential that the fields of art known for human expression and communication remain truly in the domain of human beings (Islam, 2023). Art is one of the ways human beings express themselves, and art improves communication. Artistic creativity and interpersonal communication have a deep connection, as viewing art and creating art helps artists and audiences develop empathy and patience, thus improving listening skills and, by extension, communication skills. Therefore, AI art creation can hinder human artistic creativity, as art created by AI will not generate the same empathy, thereby disintegrating relations not only between humans but also within the very nexus of art, artist, and audience. In the context of creativity and output, AI users can develop a tightening dependence, which hinders their ability to work without using AI. The most popular example is OpenAI's ChatGPT. According to Tech Business News, students are feeling an overwhelming dependency on it, which makes them complacent as thinkers (Editorial Desk, TBN Team, 2023).
Due to the material that is so easily provided by ChatGPT, students lose their initiative, curiosity, and creativity, as the chat forum provides them with shortcut methods to complete their work and assignments. Extreme reliance on ChatGPT may not only affect the overall research output produced by students but also affect the students themselves, as their independent analytical and critical thinking abilities deteriorate and their problem-solving skills vanish, affecting their self-esteem and causing a personality disintegration, which in turn further hinders their interpersonal relations and communication competence while also jeopardizing their credibility as professionals in the long run. AI also drives a disintegration of relations at the environmental level. The advancement of technology, particularly within the realm of AI, has contributed to an ever-growing disconnect between humanity and the natural environment. This detachment is a consequence of the pervasive influence of technology, encompassing elements like the internet, virtual technology, bionic engineering, and robotics, which have come to dominate people's lives. These technological advancements have given rise to an unprecedented virtual world, replacing real-world interactions with digital ones. This shift towards a virtual reality carries implications for individualism and the deterioration of interpersonal relationships. Firstly, it encourages individuals to detach from the natural world, diverting their attention towards virtual experiences and personal interests. Secondly, it fosters the creation of personalized digital environments where individuals can customize their experiences according to their preferences. While personalization offers convenience, it also confines individuals to a limited range of perspectives and shared experiences. The transformation of one's relationships and experiences as one increasingly engages with AI-driven technologies underscores the potential consequences of this separation from the natural world and the prevalence of personalized virtual experiences. These consequences include the erosion of interpersonal relationships and the promotion of individualism. Ultimately, this trend can lead to the breakdown of familial bonds as individuals become more engrossed in their personalized virtual worlds, further exacerbating the divide between humanity and the natural environment. The detachment between humanity and the natural world, and between humanity and itself, caused by advanced technology and AI-driven globalization aggravates the class divide by restraining technology access and educational opportunities for marginalized communities, as noted above in the case of class divisions. Addressing these challenges requires concerted efforts to bridge the digital divide across class and other social factors, promote gender equity in technology, and create a more inclusive and equitable digital future. Considering the advent of artificial intelligence, thanks to globalization, it is safe to say that the idea of a 'global village' has failed, as ultimately one only experiences familial and interpersonal disintegration of relationships, as Teltumbde rightly suggests in his book: "It (Globalization) has turned the world into a veritable casino where all familiar correlations between action and outcome have collapsed" (Teltumbde, 2010, p. 33). Therefore, the Casino Syndrome's second tenet holds true.
Reflecting on the above, one can see that AI's biased curation and lack of transparency can lead to the disintegration of personal relationships and rifts between friends and family through the breakage of familial bonds, owing to competition, narcissism, and addiction. AI's content curation and data collection methods can cause rifts in communal as well as international harmony. Its effect on students erodes their critical and analytical abilities, and the young generation is facing heightened mental struggles because of it, weakening friendships and other relations. AI's impact can lead to less human contact, and its impact on art can cause creative and personality disintegration. Moreover, its biased methods cause and aggravate issues that disintegrate relations pertaining to gender, caste, class, and religion, amongst others. Therefore, AI, at the level of its impact, disintegrates more than it unites.
--- Disintegration leads to mental health consequences and psychological problems
Artificial intelligence has caused changes in every aspect of human life: education, health, politics, etc. AI has certain obvious benefits, as described by the American Psychological Association: "in psychology practice, artificial intelligence (AI) chatbots can make therapy more accessible and less expensive. AI tools can also improve interventions, automate administrative tasks, and aid in training new clinicians" (Abrams, 2023). Yet the use of AI-driven social media and technology can also lead to addictive behaviors, as AI and algorithms create a seemingly 'perfect' virtual reality for their users. The users are thus detached from the physical world, because the real world does not offer the same agreement and like-minded curation as the virtual world does. A prominent example is gaming addiction. Many games, like 'Rocket League', 'Halo: Combat Evolved', and 'Middle-Earth: Shadow of Mordor', utilize AI (Urwin, 2023). Gaming addiction is generally attributed to obsessive behaviors, but video gaming can also cause and/or worsen psychosis and lead to hallucinations (Ricci, 2023). "Diehard gamers are at risk of a disorder that causes them to hallucinate images or sounds from the games they play in real life, research shows. Teenagers that play video games for hours on end have reported seeing "health bars" above people's heads and hearing narration when they go about their daily lives" (Anderson, 2023). This not only causes hallucinations; youngsters also fall into denial of the real world, as the game offers them a customized simulation catered to their preferences. Apart from gaming, the same detrimental impact can be observed in the field of education. According to Forbes (2023), the use of ChatGPT by students may create a 'lazy student syndrome', as students will be deprived of thinking on their own; the creation of unique ideas will diminish significantly, and students will give up conducting solid and rigorous research when chat forums like ChatGPT are easily available (Gordon, 2023). Furthermore, AI has ushered in an age of constant connectivity where staying off-grid is a mighty challenge. As seen in AI's role in gaming, AI is a constant simulation of human behaviors that causes addiction to the point that not only are interpersonal relationships hindered, but self-care also takes a downward spiral. Constant presence in this simulation can cause a disconnect from oneself.
Multiple AI-driven social media platforms, with multiple and continuous notifications on smartphones, laptops, tablets, and every other device, along with digital assistants and cheap internet, mean that most people are 'online' 24/7. Constant connectivity may have advantages, but it has blurred the lines between the virtual world and the physical world, creating a sense of isolation among people. The constant and unstopping influx of messages, emails, notifications, etc. can often cause individuals to feel overwhelmed by an overload of information in a limited period, leading to unnecessary stress. Approximately 78% of the workforce is facing an overload of data from an increasing number of sources, and 29% are overwhelmed by the huge amounts of constant data influx (Asrar, Venkatesan, 2023). Information overload and its issues are further exacerbated by AI algorithms and personalized content curation, which can lead to anxiety and addiction, which in turn inflate users' screen time. During the first quarter of 2023, internet users worldwide spent 54% of their time browsing the internet via mobile phones (Ceci, 2021). Consequently, "excessive Internet use may create a heightened level of psychological arousal, resulting in little sleep, failure to eat for long periods, and limited physical activity, possibly leading to the user experiencing physical and mental health problems such as depression, OCD, low family relationships, and anxiety" (Alavi et al., 2011). This age, the late twentieth and the twenty-first century, is often referred to as the 'Age of Anxiety', something that is furthered by the advent of AI. Due to the income inequality caused by AI, as explained under the first tenet, severe competition often leads to stress and loneliness, where individuals feel that they stand alone against the whole world. Since familial bonds are already damaged, loneliness deepens further, contributing to severe mental health struggles such as depression, insomnia, chronic rage, and anxiety, and worsening conditions such as ADHD and bipolar disorder. Psychologists and therapists are observing an increase in demand, as validated by the American Psychological Association: "With rates of mental health disorders rising among the nation's youth, researchers continue to study how best to intervene to promote well-being on a larger scale. In one encouraging development, the U.S. Preventive Services Task Force recommended in October that primary-care physicians screen all children older than 8 for anxiety in an attempt to improve the diagnosis and treatment of a disorder that's already been diagnosed in some 5.8 million American children. It's a promising start-yet there is much more that the field can do." (Weir, 2023). Isolation and loneliness, social discrimination, and social disadvantage, amongst others, are a few of the many causes of the rise in mental health issues, and these issues often lead to alcoholism, drug addiction, smoking, suicidal thoughts and/or tendencies, self-harm, etc., all of which largely manifest in AI-driven internet culture. One testimony of this culture is 'cancel culture', which often culminates in online bullying and can cause isolation, both virtual and real. According to research, social media users who are canceled experience feelings of isolation and rejection, hence increasing feelings of anxiety and depression (Team, 2022).
And according to CNN, individuals who experience social isolation have a 32% higher risk of dying early from any cause compared with those who are not socially isolated (Rogers, 2023). As is evident, this is a long chain of cause and effect in which the first factor is AI-curated content, leading to excessive screen time and online activity, which ultimately yields isolation, anxiety, and so on, even pushing people to take their own lives. 'AI Anxiety', a term coined by a marketing agency, describes the feeling of uneasiness regarding the effects of artificial intelligence on human critical thinking and creative abilities. Even the recent rise of a platform like TikTok emphasizes individual use over collective use by encouraging one specific user to focus on themselves and to ignore the world during the process of content creation, leading to intense narcissistic tendencies. Altruistic actions caught on camera are also often performed merely for the sake of 'trending' on social media platforms, not for community benefit (Kim et al., 2023). As held before, AI use has the potential to foster a sense of superiority amongst people due to the fact that AI has to be 'commanded' (Evans, 2018). Young children whose social development allows them to interact with people their own age may "devalue or dismiss other people because of their shallow experiences with AI cyber people. And again, as held earlier, this might cause them to overvalue themselves by contrast and could well enhance a tendency toward narcissism." (Evans, 2018). This furthers the disruption to mental health caused by AI. Psychological concerns are also raised in the form of 'hypomania'. "Contemporary society's "mania for motion and speed" made it difficult for them even to get acquainted with one another, let alone identify objects of common concern" (quoted in Scheuerman, 2018). The current societal obsession with speed and constant motion, akin to hypomania, contributes to psychological issues. In an era of constant connectivity and rapid information flow, individuals struggle to form genuine human connections, causing stress, anxiety, and depression. The overwhelming input of diverse and conflicting information hinders their ability to identify common concerns, exacerbating hypomanic-like symptoms. In the context of AI, this complexity intensifies, causing extreme stress and anxiety as people grapple with global problems and societal divisions. The 'mania for motion and speed' in modern society thus parallels hypomanic tendencies and fosters psychological challenges. In the contemporary world, apart from therapy, there are many ways in which people choose to cope with their anxiety and declining mental health. Escapism is a common way in which individuals cope with their mental struggles. People often find solace in art through binge-watching television and/or films, or by turning towards literature, music, or even social media (Nicholls, 2022). Although escapism has its benefits, it can also be addictive, as it can "encourage us to lean on escapism as a coping mechanism. The more passive types of escapism, especially scrolling or watching TV, can become a crutch and start interfering with our overall well-being." (Nicholls, 2022). Augmented reality is also a form of escapism, as seen above. Gaming addiction is nothing but gamers escaping the real world and spending time in simulated realities where they find solace with their co-gamers. Thus, it can be safely said that gaming, social media, television shows, films, etc.
are nothing but forms of virtual reality, which leads us to Baudrillard and his conception of hyperreality. According to Dictionary.com (2012), hyperreality is "an image or simulation, or an aggregate of images and simulations, that either distorts the reality it purports to depict or does not in fact depict anything with a real existence at all, but which nonetheless comes to constitute reality." Jean Baudrillard, in his seminal work Simulacra and Simulation, writes, "The hyperreality of communication and of meaning. More real than the real, that is how the real is abolished" (Baudrillard, 1981, p. 81). Baudrillard's concept of 'Hyperreality' refers to a state where the lines between the physical world and the virtual world are excessively blurred, causing a disconnect from the 'real', tangible world. This disconnect can lead to alienation and isolation, thus negatively affecting mental health. Hyperreality can seem a refuge from real-life problems, but, as previously mentioned, excessive time spent in it can lead to addiction and aggravate mental health issues. Additionally, an idealized hyperreal world can result in unrealistic expectations, body image issues, and depression. Due to the rise of AI-powered photo-editing software, individuals alter their physical features to fit the standard of acceptable beauty in society. These practices often produce unrealistic and/or unhealthy expectations of beauty, which lead to body dysmorphia, eating disorders, and low self-esteem. A study conducted by Case24 discovered that 71% of people use the software Facetune, which is powered by AI, before posting their photographs on Instagram, a habit which can be addictive (del Rio). Users, men and women alike, become obsessed with this false version of themselves. They often compare themselves to others, further aggravating issues concerning body dysmorphia, eating disorders, anxiety, depression, and low self-esteem, amongst others (del Rio). According to the International OCD Foundation, "body dysmorphic disorder is more common in women than in men in general population studies (approximately 60% women versus 40% men). However, it is more common in men than in women in cosmetic surgery and dermatology settings." (Phillips). Individuals are living in a hyperreality of impeccable beauty standards, which is constantly taking a toll on their psychology and mental health. The emotional desensitization and information overload caused by hyperreality can worsen anxiety and depression. Baudrillard's hyperreality thus poses various challenges in the current world of the digital and AI revolution, including disconnection, escapism, addiction, and identity issues. Artificial intelligence has benefits as well as ill effects. To encapsulate, it may have eased human life, but the ease comes at a cost. AI has made therapy accessible, and chatbots make administrative tasks easier, but AI communication technology like social media, AI-driven games, and several other forms of AI cause addiction and a disconnect from reality, as users come to prefer the virtual world over the physical, real world. Such immersion has the potential to negatively affect people's psychology, aggravate mental health disorders, and cause hallucinations and denial. In education, the excessive use of AI can hinder the competence of students and discourage critical and analytical abilities, thus promoting 'the lazy student syndrome'.
AI, which fosters constant connectivity, can blur the boundaries between the physical and virtual worlds, and the perpetual online presence can cause detachment from oneself, personality disorder(s), and overwhelming stress due to information overload. Furthermore, it exacerbates the 'Age of Anxiety' by intensifying stress and loneliness and by promoting income inequality and ruthless competition. 'AI Anxiety' (2023) emphasizes the unease caused by AI's effect on creativity and analytical abilities. At the same time, AI-driven virtual worlds often promote a self-centered attitude amongst their users. In essence, Jean Baudrillard's concept of hyperreality encapsulates these problems, which unravel as the quintessential 'Casino Syndrome', where the lines between reality and the virtual world (hyperreality) blur to the extent that the result is disconnection, escapism, addiction, body dysmorphic disorders, identity crises, psychological challenges, and mental health challenges, just as is seen in the many tantalizing outcomes of casinos. --- IV. ATTENDING TO THE ILL EFFECTS: TOWARDS ACCOUNTABLE AI AND INCLUSIVE GLOBALIZATION AND CREATING RESILIENCE TOWARDS THE CASINO SYNDROME The integration of artificial intelligence powered by globalization has brought forth significant challenges as well as significant feats. AI-driven capitalism and globalization have both negative and positive consequences. The development of artificial intelligence should be ethically monitored to mitigate the adverse effects, and it must uphold accountability and responsibility in ensuring its correct use, in order to build resilience against the Casino Syndrome. --- Ethical A.I. Development Developers and companies must adopt an ethical approach to designing artificial intelligence at every stage while considering the potential negative social, cultural, and psychological impact. An ethical AI design must be inclusive, and it should find the right balance between its approach towards the individual and the community. It should work in an unbiased way across all fields. Josh Cowls and Luciano Floridi adapted four ethical principles from bioethics for A.I. (beneficence, non-maleficence, autonomy, and justice), plus an additional enabling principle, explicability (Guszcza et al., 2020). Furthermore, AI must protect fundamental human rights and prevent discrimination by curating balanced content instead of purely personalized content. --- Transparency AI and its algorithms must ensure transparency in their decision-making processes and data sources, which they must make accessible to their users, to ensure a reliable and trustworthy system. According to K. Haresamudram, S. Larsson, and F. Heintz, A.I. transparency should operate at three levels (algorithmic, interactional, and social) to build trust (Haresamudram et al.). AI systems must also establish a reliable way to handle data collection and ensure the encryption and privacy of their users' data. --- Mitigation of Bias and Prejudice Designers must give priority to a bias and prejudice mitigation system in A.I. algorithms. To ensure this, audits and testing must be conducted regularly to identify and resolve prejudiced and biased behaviors and ensure an equitable A.I. system. A.I. systems must approach topics with empathy. --- Responsibility and Accountability International and national governing bodies must establish
and enforce clear and concise regulations and mechanisms for oversight of technologies that use artificial intelligence. Such regulations must address data privacy, accountability for AI's decision-making results and processes, and, most importantly, AI's use in fields such as healthcare, finance, and education. The ethical implications of AI must be regularly monitored, and institutions that regularly utilize AI must set up committees specifically for AI evaluation. Such committees should include skilled designers and experts from across disciplines and ensure alignment with ethical guidelines. The data provided to AI by users should be controlled by the users, including the right to privacy, the right to deletion, and the ability, and basic education, to understand the whole process of artificial intelligence content generation. --- Awareness and Education Incorporating digital and media literacy into school curricula is essential to ensure critical thinking, responsible and ethical behavior on the internet, an understanding of the implications of AI use and its overall processes, the evaluation of information sources, the recognition of misinformation, and an awareness of the echo chambers and filter bubbles created by AI-driven algorithms. Students should be empowered to make informed decisions and recognise misinformation. Students must learn to foster community and social ties and to have face-to-face interactions, and they should be nurtured with empathy. Time management must likewise be taught to young people to ensure controlled use of not only AI but also overall screen time. Mental health must be prioritized in education so that students can recognise and manage anxiety and stress levels and seek help if and when needed. --- Community Building Mindfulness techniques and meditation, along with well-being programs, should be in place and easily accessible in educational and workplace institutions to promote mental health. This initiative should involve a digital detox by promoting and encouraging 'off-grid' time in a productive way to reduce connectivity overload. 
Along with benefiting mental health, these initiatives should also foster community connections and social ties, addressing the social anxiety caused by screen-time isolation by identifying triggers and by teaching and helping people attain coping mechanisms that are, and must remain, 'offline', such as art therapy, meditation, meet-and-greets, relaxation techniques, and other necessary social guidance and skills. --- V. NAVIGATING THE COMPLEX LANDSCAPE OF AI-DRIVEN PRESENT AND FUTURE In the contemporary world, the influence of AI-driven globalization, together with the advancements in technology and the interconnectedness of the 'global village', has brought unprecedented opportunities and complex challenges. Throughout this discourse, it is understood that the addictive implications of the Casino Syndrome, along with its three tenets, are causing significant negative consequences. The paper has dissected these consequences and their nuances to present the threats and remedies. The nuances of the Casino Syndrome and its impact can be understood at international, national, local, and individual levels. AI has cast nations into a rat race, especially the United States and China, which are competing for AI supremacy. This kind of competition often becomes hostile by going beyond its original technological trajectory. The world is witnessing technological warfare driven by the world's superpowers, while the developing nations, or so-called third-world nations, suffer under tight competition. The consequences of such warfare are far-reaching in terms of technology and economy, affecting millions of people beyond the active participants in the competition. As companies amass fortunes of wealth, it is the working-class laborers who suffer. The fresh employment opportunities in AI primarily benefit those with a particular education and specialized skills, leaving behind those without such advantages. The scenario of AI professionals gaining lucrative job opportunities while others face job insecurity deepens income inequality, echoing the income disparities found within the Casino Syndrome. AI creates damage in interpersonal relationships as well, and it encourages narcissistic tendencies by focusing too much on the individual. In the virtual world, content is curated with precision, creating individual bubbles for every person and further straining interpersonal ties. Classical liberalism and neoliberalism, concepts that have foregrounded capitalism, are at the very center of the capitalistic approach to globalization and of globalization's approach to AI. Community building is ignored significantly, to the point that individuals either lose their cultural identity or have a fundamentalist reaction to it. The current world encourages individuals to compete against one another due to the intense professional race for employment. Religion and culture have also been commercialized. While lived experiences are becoming tech-mediated, individuals are unable to communicate properly, as language itself is also affected. Eventually, familial bonds are harmed, alongside a gaping social divide and the marginalization of women. AI's impact on mental health has caused a steady rise in issues such as anxiety, depression, and stress among youth. Technology is causing loneliness and social anxiety, students' critical thinking abilities are being affected, and constant connectivity and information overload are overwhelming. 
Hyperreality is becoming the reality while the tangible reality is ignored, causing long-term mental health consequences. Addressing the mental health challenges emanating from AI-driven globalization necessitates a multifaceted approach that encompasses ethical AI development, accountability, education, and awareness. To mitigate the harmful effects, ethical AI development must be a priority. This entails designing AI systems with user and societal well-being at the forefront and finding the right balance between an individualistic approach and a community approach. Key factors include ethics, transparency, bias mitigation, awareness and education, and community building. Preparing individuals with the skills and knowledge to navigate the digital age is crucial. Integrating digital literacy, media literacy, and mental health education into educational curricula empowers people to critically evaluate information, manage stress, and make informed decisions about their online existence. Increasing awareness about the challenges of AI-driven globalization and the "Casino Syndrome" empowers individuals to take proactive steps to address these problems. Acknowledging the detrimental effects of hyperreality on mental health, efforts should focus on building resilience. Mindfulness and well-being programs can aid individuals in coping with stress and supporting mental health. Fostering digital detox and reducing screen time helps establish a healthier equilibrium between technology and real-life experiences. Strengthening community bonds and social ties counters the isolation exacerbated by excessive screen time and virtual environments. In conclusion, AI-driven globalization introduces a unique set of challenges. By proactively enforcing ethical AI development, improving accountability, prioritizing education and awareness, and fostering resilience, one can navigate this complex landscape. This approach enables one to harness the benefits of AI-driven globalization while reducing its detrimental results. As one strives to strike a balance between the digital and the real, one can mold a future where AI-driven globalization enriches our lives.
The paper aims to study the detrimental impact of Artificial Intelligence on human life and human consciousness. AI's harmful impact can be described according to the tenets of the 'Casino Syndrome', a concept first laid down by Anand Teltumbde in his seminal work 'The Persistence of Caste: The Khairlanji Murders and India's Hidden Apartheid' (2010). Drawing on the addictive and commercial components of Teltumbde's concept, the researchers have attempted to redefine it in the context of AI and its detrimental impact on human life. Following its three tenets, the researchers have attempted to show that AI can pit an individual against all others in the marketplace, leading to unemployment and creating conflicts at local, national and international levels, as it creates an 'elitist' agenda which culminates in a 'rat race' and competition. It can disintegrate interpersonal relationships at home, in society and culture, and in the workplace, owing to its extreme focus on individualism through content curation and customized algorithms, among other mechanisms. Lastly, as a result of the first two, it can also lead to several psychological and mental health problems. The paper explores numerous methods for creating accountability and inclusivity in AI and the globalized world and for building resilience against the 'Casino Syndrome', involving ethical considerations, transparency, mitigation of prejudices, accountability, education, etc. Ultimately, this paper does not deny the obvious benefits of AI, but it highlights the possible negative consequences of its uncontrolled and unscrutinised use, which have already begun to appear.
Introduction Maternal antenatal anxiety and related disorders are very common [1,2], and despite it being frequently comorbid with [3,4], and possibly more common than, depression [1,5], it has received less attention than it deserves in scientific research and clinical practice. Moreover, parental prenatal complications can interfere with the parent-child relationship, with the risk of significant consequences over the years for the child's development [6,7]. From a clinical point of view, this is a considerable omission given the growing evidence that antenatal maternal anxiety can cause adverse short-term and long-term effects on both mothers and fetal/infant outcomes [8][9][10][11][12][13][14][15][16], including an increased risk for suicide and for neonatal morbidity, which are associated with significant economic healthcare costs [17]. The prevalence of anxiety during pregnancy is high worldwide (up to approximately 37%); however, in low-and middle-income countries, it is higher than in high-income countries [1,2], with heterogeneity across nations with comparable economic status. Several studies have investigated the relationship between demographic and socioeconomic risk factors with antenatal anxiety [2,18]. The results showed that several demographic (e.g., maternal age) and socioeconomic factors (e.g., employment, financial status) were associated with differences in the prevalence of anxiety symptoms or disorders, but the results are equivocal. However, both the prevalence and the distribution of these protective and risk factors may change over time, especially in a period of major socioeconomic change [19,20], such as the global economic crisis beginning in 2008, which led to the increased consumption of anxiolytic drugs and antidepressants with anxiolytic properties [21], to a decline in the number of births [22] and to impaired development in medical, scientific, and health innovations [23] that, in the next few years, could reduce the availability of help for families and health services [24]. However, despite the recently available and growing research evidence highlighting the need for early identification [25] and prompt treatment of maternal anxiety during both pregnancy and the postpartum period, anxiety remains largely undetected and untreated in perinatal women in Italy. The aims of this study were (a) to assess the prevalence of state anxiety in the antenatal period (further stratified by trimesters) in a large sample of women attending healthcare centers in Italy and (b) to analyze its association with demographic and socioeconomic factors. --- Methods --- Outline of the study The study was conducted as part of the "Screening e intervento precoce nelle sindromi d'ansia e di depressione perinatale. Prevenzione e promozione salute mentale della madre-bambino-padre" (Screening and early intervention for perinatal anxiety and depressive disorders: Prevention and promotion of mothers', children's, and fathers' mental health) project [26] coordinated by the University of Brescia's Observatory of Perinatal Clinical Psychology and the Italian National Institute of Health (Istituto Superiore di Sanità, ISS). 
The main objectives of this Italian multicenter project were to apply a perinatal depression and anxiety screening procedure that could be developed in different structures, as it requires the collaboration and connection between structurally and functionally existing resources, and to evaluate the effectiveness of the psychological intervention of Milgrom and colleagues [27][28][29] for both antenatal and postnatal depression and/or anxiety in the Italian setting. The research project was assessed and approved by the ethics committee of the Healthcare Centre of Bologna (registration number 77808, dated 6/27/2017). --- Study design and sample We performed a prospective study involving nine healthcare centers (facilities associated with the Observatory of Perinatal Clinical Psychology, University of Brescia, Italy) located throughout Italy during the period March 2017 to June 2018. The Observatory of Perinatal Clinical Psychology (https://www.unibs.it/node/12195) coordinated and managed the implementation of the study in each healthcare center. Only cross-sectional measures were included in the current analyses because screening for anxiety was carried out at baseline. The inclusion criteria were as follows: being ≥18 years old; being pregnant or having a biological baby aged ≤52 weeks; and being able to speak and read Italian. The exclusion criteria for baseline assessment were as follows: having psychotic symptoms, and/or having issues with drug or substance abuse. --- Data collection Each woman was interviewed in a private setting by a female licensed psychologist. All psychologists were trained in the postgraduate course of perinatal clinical psychology (University of Brescia, Italy) and were associated with the healthcare center. All the psychologists also completed a propaedeutic training course for the study, developed by the Italian National Institute of Health (ISS), on screening and assessment instruments and on psychological intervention [30]. The clinical interview was used to elicit information regarding maternal experience with symptoms of stress, anxiety, and depression. All women completed the interview and the self-report questionnaires. --- Instruments --- Psychosocial and Clinical Assessment Form The Psychosocial and Clinical Assessment Form [31,32] was used to obtain information on demographic and socioeconomic characteristics. In this study, the following demographic variables were considered: age, marital status, number of previous pregnancies, number of abortions, number of previous children (living), planning of the current pregnancy, and use of assisted reproductive technology. The socioeconomic variables were educational level, working status, and economic status. --- State-Trait Anxiety Inventory Given that the assessment of mental diseases, including antenatal diseases, is based primarily on self-perceived symptoms, evaluating these data using valid, reliable, and feasible self-rating scales can be useful. The state scale of the State-Trait Anxiety Inventory [33][34][35] was used to evaluate anxiety. It is a self-report questionnaire composed of 20 items that measure state anxiety, that is, anxiety in the current situation or time period. The possible responses to each item are on a 4-point Likert scale. The total score ranges from 20 to 80, with higher scores indicating more severe anxiety. This instrument is the most widely used tool in research on anxiety in women in the antenatal period [1,36]. 
The construct and content validity of the STAI for pregnant women have been demonstrated [37,38]. --- Procedures Women who met the inclusion criteria were approached by one of the professionals affiliated with the healthcare center and involved in the research when they attended a routine antenatal appointment. They received information about the content and implications of the study. Future mothers who signed the informed consent document completed the questionnaires and then underwent an interview with a clinical psychologist. --- Statistical analysis All variables were categorized. A statistical analysis that included descriptive and multiple logistic regression models was performed. For descriptive analyses, frequencies and percentages were calculated for categorical variables, and the Chi-square test was utilized for comparisons. The logistic regression model was used to evaluate the associations between the demographic and socioeconomic variables and the risk of antenatal anxiety. In the analytic models, each demographic and socioeconomic variable was included both individually and together. All analyses were performed using the Statistical Package for Social Science (SPSS) version 25. --- Results --- Subjects To estimate the minimum sample size, we relied on three studies [39][40][41], indicating that it was necessary to enroll 296 patients. However, our main aim was to recruit as large a sample as possible to promote perinatal mental health; thus, at the end of the 1-year recruitment period, we had enrolled more mothers. Among the 2096 women invited to join the study, 619 (29.5%) refused, mainly due to lack of time, personal disinterest in the topic, and the conviction that they were not and never would become anxious or depressed. Therefore, the total study sample consisted of 1,477 women. Of these, 28 women did not complete the anxiety questionnaire. Thus, the sample includes 1,142 pregnant women and 307 new mothers. Given the aims of this study, only pregnant women were included in the current statistical analysis. Table 1 presents the list of the healthcare centers in which the pregnant women were recruited. Table 2 presents demographic and socioeconomic characteristics, along with an estimation of the relative risk of anxiety through both bivariate and multivariate analyses. --- Prevalence of antenatal state anxiety The prevalence of anxiety (Table 3) was 24.3% among pregnant women. A further division into 13-week trimesters was applied, showing that the prevalence of antenatal anxiety was highest (36.5%) in the second trimester and then decreased in the third and last trimester of pregnancy. Bivariate analyses (Table 2) showed a significantly higher risk of anxiety in pregnant women who have a low level of education (primary or semiliterate) (p < 0.01), who are jobless (i.e., student, homemaker, or unemployed) (p < 0.01), and who have economic problems (p < 0.01). Furthermore, during the antenatal period, women experienced a higher level of anxiety when they had not planned the pregnancy (p < 0.01), did not resort to assisted reproductive technology (p < 0.05), had a history of abortion (p < 0.05), and had children living at the time of the current pregnancy (p < 0.05). 
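The adjusted estimates reported in the next paragraph come from a multiple logistic regression, run in SPSS in this study. Purely as an illustration of how such an adjusted model can be specified, the sketch below uses Python with statsmodels instead of SPSS; the file name, column names, and outcome coding are hypothetical and do not correspond to the study's actual dataset.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical dataset: one row per pregnant woman; 'anxious' is a 0/1 outcome
# (e.g., STAI-state score above the chosen cutoff), predictors are categorical.
df = pd.read_csv("antenatal_sample.csv")  # placeholder file name

# Multiple logistic regression with all predictors entered together,
# analogous to an adjusted model such as the one summarised in Table 2.
model = smf.logit(
    "anxious ~ C(education) + C(employment) + C(economic_status) + C(planned_pregnancy)",
    data=df,
).fit()

# Exp(B) = adjusted odds ratios, with 95% confidence intervals.
ci = model.conf_int()
summary = pd.DataFrame({
    "ExpB": np.exp(model.params),
    "CI_low": np.exp(ci[0]),
    "CI_high": np.exp(ci[1]),
})
print(summary)
```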
The adjusted logistic regression analysis (see Table 2) showed that pregnant women with a high (university or secondary) educational level (Exp B = 0.60), temporary or permanent employment (Exp B = 0.64), and, in particular, either a high economic status or few economic problems (Exp B = 0.58) showed a reduction in the risk of antenatal anxiety by almost half. Furthermore, a similar reduction in risk was observed in women who had planned their pregnancy (Exp B = 0.57). --- Discussion This study is one of the largest to evaluate the prevalence of anxiety during pregnancy in a sample of women attending healthcare centers in Italy. In general, the fact that the demographic data of participants in this study are comparable to those from population-based epidemiological studies [42] indicates that our results are representative of the overall population of pregnant women in Italy. Our findings are in line with the prevalence in a previous Italian study [43] and the overall pooled prevalence for self-reported anxiety symptoms of 22.9% reported in a recent systematic review and meta-analysis [1]. Similarities in the prevalence of maternal antenatal anxiety remain regardless of which diagnostic tool was used. Regarding the use of the STAI in this study, it should be noted that it is the most widely used self-report measure of anxiety. Furthermore, its criterion, discriminant and predictive validity [44], and ease of use can provide a reasonably accurate estimate of prevalence, and its widespread use in research studies [1,16] can enable more accurate comparisons among nations. With regard to the trimestral prevalence of antenatal anxiety, our study found that the prevalence of anxiety was highest during the second trimester. This observation is inconsistent with the results from a recent meta-analysis [1] that found that the prevalence rate for anxiety symptoms increased progressively from the first to the third trimester. However, it should be noted that the results regarding the monthly/trimestral/semestral prevalence of perinatal anxiety were not consistent across studies [1,2]. Our study shows that having a low level of education, being jobless, and having financial difficulties are three crucial predisposing factors for anxiety in pregnant women. These associations are clearly consistent with previous studies that found that antenatal anxiety was more prevalent in women with low education and/or low socioeconomic status (e.g., unemployment, financial adversity) [45][46][47][48][49] and might be related to the global economic crisis that particularly affects southern nations [50]. Studies conducted in developing countries, where low education and low socioeconomic status are both present, highlight the association with prenatal anxiety [51][52][53]. Furthermore, consistent with previous studies, our results show that antenatal anxiety is more prevalent in women who have unplanned pregnancies [43,54] and who have living children at the time of the current pregnancy [55]. We assume that the reasons for these associations most likely concern the costs associated with raising one or more children, especially when the (new) child is unplanned. This interpretation finds support in the results from previous studies, showing that low income, unemployment, and financial adversity [2] are related to higher levels of antenatal anxiety symptoms. 
Moreover, it would also explain why resorting to assisted reproductive techniques, which in Italy requires financial resources, was not a risk factor. Our findings regarding the association between ongoing economic hardships or difficulties and antenatal anxiety can be particularly important in light of the short- and long-term adverse impacts of the coronavirus disease 2019 (COVID-19) pandemic and the restrictive measures adopted to counteract its spread [56,57]. Indeed, the COVID-19 outbreak has significantly impacted European and global economies both in the short term and in the coming years [58,59]. Furthermore, as shown by general population surveys, social isolation related to the COVID-19 pandemic is associated with a wide range of adverse psychological effects, including clinical anxiety and depression and concern about financial difficulties [60,61], which can persist for months or years afterward, as indicated by the literature on quarantine [62]. A vulnerable population, such as women in the perinatal period, may be among the individuals who are most affected. --- Clinical Impact Our findings suggest that screening for early detection of antenatal anxiety (as well as depression, which is frequently comorbid with anxiety [3,4]) is recommended for all pregnant women, but especially for those who have a poor level of education and financial difficulties. Early detection and diagnosis will enable psychological and, where appropriate, pharmacological treatment in the health services to prevent anxiety complications in both these women and their children. --- Limitations Three main limitations of this study should be noted. First, a cross-sectional approach to antenatal anxiety does not allow us to fully explore whether and what factors may predict persistent anxiety symptoms beginning during pregnancy and progressing to postpartum. Second, the size of the sample during the first trimester of pregnancy was too small to draw any conclusions. Finally, the rates of diagnosis of any anxiety disorder in our sample were not assessed. --- Conclusions There is a significant association between maternal antenatal anxiety and economic conditions. The aftermath of the great recession of 2008-2009 and the ongoing economic impact of the COVID-19 pandemic pose a serious problem for women and their families. With the present historical and economic background in mind, our findings would allow us to hypothesize that early evaluation of the socioeconomic status of pregnant women and their families to identify disadvantaged situations might reduce the prevalence of antenatal anxiety and its direct and indirect costs. In this sense, our findings may give Italian health policy planners useful information to develop new cost-effective antenatal prevention programs focused on socioeconomically disadvantaged families. Furthermore, we believe that our results will serve as a baseline for future comparisons between nations inside and outside the European Union, as well as for new studies on the protective and risk factors related to perinatal anxiety in those nations. --- Data availability statement. The complete dataset is available from the corresponding author upon request. --- Conflict of interest. The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Background. Maternal antenatal anxiety is very common, and despite its short- and long-term effects on both mothers and fetal outcomes, it has received less attention than it deserves in scientific research and clinical practice. Therefore, we aimed to estimate the prevalence of state anxiety in the antenatal period and to analyze its association with demographic and socioeconomic factors. Methods. A total of 1142 pregnant women from nine Italian healthcare centers were assessed through the state scale of the State-Trait Anxiety Inventory and a clinical interview. Demographic and socioeconomic factors were also measured. Results. The prevalence of anxiety was 24.3% among pregnant women. There was a significantly higher risk of anxiety in pregnant women with a low level of education (p < 0.01), who were jobless (p < 0.01), and who had economic problems (p < 0.01). Furthermore, pregnant women experienced higher levels of anxiety when they had not planned the pregnancy (p < 0.01), had a history of abortion (p < 0.05), and had children living at the time of the current pregnancy (p < 0.05). Conclusion. There is a significant association between maternal antenatal anxiety and economic conditions. Early evaluation of the socioeconomic status of pregnant women and their families in order to identify disadvantaged situations might reduce the prevalence of antenatal anxiety and its direct and indirect costs.
Introduction There are many different types of leadership dynamics found in human societies. For instance, many small groups, such as private companies, have a permanent leader. In other groups the leader changes over time, such as in university departments or social societies. These patterns are also seen at larger scales. During the decades before and after Julius Caesar crossed the Rubicon in 49BCE, the Roman Republic transitioned from a system of Consuls who held power for only one year, to the Roman Empire where there was a single Imperator Caesar who ruled for life and passed the title to a chosen successor. A notable feature of many human groups is that they often do not explicitly coerce their members to join a hierarchy. Instead, soft power and prestige play a strong role [1,2], with status being an abstraction of more tangible material resources such as land, food, weapons or other commodities [3]. Status is then voluntarily conferred upon leaders by their allies [2][3][4][5][6][7][8][9], with many of these relationships being asymmetric [2,5,[9][10][11]. The member with the highest status is usually deemed the leader [2,4,6,8], creating hierarchical societies. Questions remain as to which factors determine why some groups have no leader, others have transient leaders and yet others have relatively permanent leaders. Considering society at a large scale, we observe a shift between different forms of leadership dynamics in evidence from the Neolithic Era. Before this era, human societies consisted of egalitarian hunter-gatherer groups where material resources such as food were shared relatively equally [12] and leadership roles were facultative and of a temporary duration. There followed a transition to sedentary groups where high-status individuals had more resources but leaders still changed relatively regularly [13]. Finally, hereditary leadership became institutionalised, where the role of a chief was passed down a paternal line, which monopolised most of the resources [14]. Previous work has argued that these shifts were due to social and technological developments, which meant that interactions between individuals became increasingly asymmetric. These asymmetries were likely due to control of agricultural surpluses [15][16][17][18], land [15], ideologies [19], or military units and weapons [14]. In light of this evidence, the model we present here investigates how asymmetry in status interaction can generate the different classes of leadership dynamics observed during the Neolithic Era. Network analysis has proved to be a useful approach for studying the interactions of members of a population [3,7,20]. Quantitative study of hierarchical networks is usually static in nature and networks are presented as snapshots in time [21,22]. However, when using the nodes of a network to represent individuals, the properties of the nodes are often in flux, and the connections between nodes change over time. When there is a feedback effect between node properties and node-to-node connections, the network is said to be coevolutionary. These coevolutionary networks can generate complex dynamics [23,24]. In order to investigate the factors underlying different types of leadership dynamics, we present a dynamic coevolutionary network which incorporates the status of individuals as properties of the nodes. We take status to represent the control of tangible and intangible resources such as food, land, money or other assets, or authority. 
An edge on the network represents a relationship between two individuals, over which exchanges of status are made. An exchange of status might be the trade of goods or services, an employment contract, or a political endorsement. An important feature of our model is the concept that many of the trades in a relationship are somewhat unequal, both in the absolute value assigned to each partner and in the relative value to each partner. Based on this we also specify rules for how edges between nodes are rewired so that individuals can maximise the status they receive. Given these rules, we allow the network to evolve over time and observe its dynamics. --- Model The model consists of a dynamic network of n nodes which represent individual people. All individuals are considered to be identical and are unable to coerce one another to form relationships. Leadership among individuals is solely determined by status. Each individual in the model has a status level which depends on their relationships with others, meaning that status is adjusted according to the status of those they are linked to. Individuals distribute a proportion of their status amongst those they are linked with, and may not expect the same quantity of status in return. Individuals can change who they associate with according to the marginal utility of the relationships. Each individual's node i maintains a status s_i, which translates to how much influence they have within the group. Status acts as a multivariate aggregate of an individual's level of money, prestige (titles, jobs, etc.), and ownership (land, valuable resources, etc.). For simplicity, we assume that individuals must maintain a fixed number of necessary relationships, which is constant for all individuals. These relationships might be needed to participate in society, providing land to live on or food to survive the winter. In our model, each node is assigned a fixed number of unidirectional outgoing edges which represent their relationships and are linked to other nodes. Nodes can have any number of incoming edges from others in the network. The statuses of the nodes are updated according to their edges in the status update stage, and nodes may rewire an edge in the rewiring stage. The model is run forward in time to observe the distribution of status and changes of that distribution amongst the nodes. We concentrate on looking for leader individuals with nodes of high status and check to see whether they are superseded by other leaders. Models are run until patterns of leadership dynamics stabilise, or for a substantially long time (up to 5 million time steps) to confirm that there is an extremely low likelihood of a new leader rising to high status. --- Status update stage In the model, a proportion r of each node's status is distributed amongst each of its edges (including both incoming and outgoing edges). This formalisation of sharing status amongst edges is based on Katz's prestige measure [25]. For each edge, we assign a temporary status value. This is calculated by adding the status contributions from both of the nodes that are linked. To model unequal relationships we introduce the inequality parameter (q), which unequally reassigns the edge's status back to those joined by the edge. In this formulation, the total amount of status in the model is constant. The steps are done in the following order at time t: 
1. Each unidirectional edge (i→j) is assigned a temporary status value: e_{i→j}(t) = r s_i(t)/k_i + r s_j(t)/k_j, where k_i is the degree of node i including both incoming and outgoing edges. For an example, see Fig 1. 2. Each node deducts the status distributed to its edges: s_i(t + 1) = (1 - r) s_i(t). 3. The status of each edge is redistributed back to the nodes: for every edge (i→j), s_j(t + 1) = s_j(t + 1) + q e_{i→j} and s_i(t + 1) = s_i(t + 1) + (1 - q) e_{i→j}. --- Rewiring phase In order to maximise status, each individual determines which of its outgoing relationships is of the least value and, with probability w, chooses a new relationship according to the following rules. 1. For node i at time t we identify the edge of minimum value (i→j*) from the node's outgoing edges (i→j), such that e_{i→j*}(t) = min_j[e_{i→j}(t)]. 2. With probability w, we rewire the edge to a new node by choosing a random node z such that there is no edge (i→z). Delete edge (i→j*) and add edge (i→z). 3. With probability (1 - w), we do nothing. Fig 1. Example of how the status value of edges is calculated. In this example, s_1 will receive 0.03(1 - q) status from the edge and s_2 will receive 0.03q status. https://doi.org/10.1371/journal.pone.0263665.g001 --- Results We will present our analysis of the dynamics that result from the interplay between the processes we have defined. Simulations of the model were run choosing parameter values to explore their effects on dynamics over the extremes of their ranges. Depending on the parameters, we either observe relatively equal statuses among the population, or a relatively high status level for one or a few individuals' nodes. An example of a typical network with a single dominant individual can be seen in Fig 2. --- Inequality in relationships affects leadership dynamics A key parameter in the model is the inequality parameter (q), which models an unequal transfer of status from a relationship originator to the receiver. As we increase q, we observe different phases of dynamics in the model, which are shown in Fig 3. We dub the individual whose node has the highest level of status the leader. The model exhibits three different types of leadership dynamics: no leader, transient leader(s), and permanent leader(s). --- Exploration of a broader range of parameters We find our simulations demonstrate all three phases of leadership dynamics over a wide range of parameters, including the population size and the number of edges. To show this, we run simulation models for each parameter set and record the number of times over the simulation there is a change of individual with the highest status. We find a similar pattern across the parameters tested (see S1-S9 Figs) to that shown in Fig 3. At lower values of q there is a very fast turnover of the highest-status individual. As q is increased, we find a transient phase where new leaders emerge, but there is still turnover of leaders. At higher values of q there are very few new leaders. When leaders are stable, we observe that the number of stable high-status leaders at high levels of q is equal to the number of outgoing edges per node minus one. We also found that the number of relationships per individual has an impact on the transitions between phases. As that parameter increases, we observe how transitions between the three different types of leadership dynamics start to occur at lower values of q (see S1-S9 Figs). 
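To make the status update and rewiring rules above concrete, here is a minimal Python sketch of a single model step, written directly from the equations given in this section. The function and variable names are our own illustrative choices, not the authors' published implementation (their code is linked at the end of the paper).

```python
import random

def step(status, out_edges, r, q, w):
    """One model step: distribute status along edges, then rewire.

    status    -- dict node -> current status s_i
    out_edges -- dict node -> list of target nodes (outgoing edges i -> j)
    r         -- proportion of status each node places on its edges
    q         -- inequality parameter (receiver's share of each edge's value)
    w         -- probability of rewiring the least valuable outgoing edge
    """
    nodes = list(status)
    # Degree k_i counts both incoming and outgoing edges.
    degree = {i: len(out_edges[i]) for i in nodes}
    for i in nodes:
        for j in out_edges[i]:
            degree[j] += 1

    # Step 1: temporary status value of each edge, e_{i->j} = r*s_i/k_i + r*s_j/k_j.
    edge_value = {(i, j): r * status[i] / degree[i] + r * status[j] / degree[j]
                  for i in nodes for j in out_edges[i]}

    # Step 2: each node deducts the status distributed to its edges.
    new_status = {i: (1 - r) * status[i] for i in nodes}

    # Step 3: redistribute each edge's value unequally, q to the receiver j
    # and (1 - q) back to the originator i, so total status is conserved.
    for (i, j), e in edge_value.items():
        new_status[j] += q * e
        new_status[i] += (1 - q) * e

    # Rewiring: with probability w, replace the least valuable outgoing edge
    # with an edge to a random node that is not already a target.
    for i in nodes:
        if random.random() < w:
            worst = min(out_edges[i], key=lambda j: edge_value[(i, j)])
            candidates = [z for z in nodes if z != i and z not in out_edges[i]]
            if candidates:
                out_edges[i].remove(worst)
                out_edges[i].append(random.choice(candidates))
    return new_status

# Example setup: 50 nodes, equal initial status, 3 outgoing edges each.
n, n_out = 50, 3
status = {i: 1.0 for i in range(n)}
out_edges = {i: random.sample([j for j in range(n) if j != i], n_out) for i in range(n)}
for _ in range(10000):
    status = step(status, out_edges, r=0.2, q=0.6, w=0.5)
```

Iterating this step from an equal-status start should, qualitatively, reproduce the phases described above, with rapid leader turnover at low q and one or more stable leaders at high q.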
Fig 3. First there is effectively no leader at all (panel A). Then we see that a single individual can rise to a high leadership status, but this is transient and leaders are replaced by other individuals (panel B). The length of time that individuals stay as leader then increases as we increase q until the leader is effectively permanently in charge (panel C). In the next phase, a second individual can rise to a high status alongside the first leader, but these individuals' leadership position is transient (panel D). Finally, two individuals share leadership status and remain so permanently (panel E). The value of q is shown; other parameters are r = 0.2, n = 50, three outgoing edges per node, w = 0.5. https://doi.org/10.1371/journal.pone.0263665.g003 --- Transient leader phase demonstrates a power vacuum An interesting phase in the dynamics demonstrates transient leaders (Fig 3, panel B). In this case, at any particular time-point in the simulation, there is only one high-status leader. This leader can lose status in a riches-to-rags event, but another quickly replaces it. We have produced a video animation of this phase of the model, which is available in S1 Video. We ran simulations over a range of values for the inequality parameter (q) in Fig 4, which shows how there are ranges of the inequality parameter (q) where leader turnover is relatively high, but the number of leaders at any particular time-point is relatively constant, thus demonstrating a power-vacuum effect in our model. --- Distribution of status and node degree We find heavy-tailed distributions of node status and node degrees in our network model (Fig 5). We looked more closely at the distribution of node degrees for q = 0.525 using the Python powerlaw package [26,27]. As the parameters of our model stipulate that all nodes have at least degree 3, it makes sense to set a lower bound for the distribution we investigate, which we set to x_min = 6. A likelihood-ratio test [26] is used to compare the goodness of fit between the power-law distribution and two other distributions. We found no evidence (p ≈ 10^-100) for either a log-normal or an exponential distribution compared with the power-law distribution (exponent of P(x) ∝ x^-8). The relatively small range for the distribution is unusual for a power law and this is only found at a localised parameter value, but it does indicate that the distributions we find are unlikely to be explained by a simple log-normal or exponential distribution. Considering the distribution of node statuses, there is a cusp point at s ≈ 3.0 (Fig 5, panels C and D), where the frequency of nodes with s > 3.0 stops decreasing or starts to level off, which justifies our choice of this value as a threshold for defining leaders in Fig 4. --- Shifts in leadership dynamics are consistent with the Neolithic In the introduction we argued that shifts in human leadership dynamics were due to technological advances that allowed individuals to control greater pools of resources. These advances would have had the effect of increasing the inequality parameter, as seen in Fig 3. The analysis in that figure was done with a relatively high rewiring rate, at the same frequency as the status update. In human relationships, the rate at which relationships are changed is often relatively low compared to how often status changes. For instance, new contracts take months to draw up, but money and goods may change hands quite frequently. Adjusting the parameters, we can generate leadership dynamics over a broad range of timescales. We have selected one which is consistent with time lengths observed in the Neolithic era (see Fig 6). 
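The degree-distribution analysis above uses the Python powerlaw package; a minimal sketch of that kind of fit and likelihood-ratio comparison is shown below. The synthetic degrees array is a placeholder for the node degrees collected from a simulation snapshot, and xmin = 6 mirrors the lower bound stated above.

```python
import numpy as np
import powerlaw

# Placeholder degree data; in practice these would be the node degrees
# taken from a network snapshot at the end of a simulation run.
rng = np.random.default_rng(0)
degrees = (rng.pareto(3.0, size=5000) * 3 + 3).astype(int)

fit = powerlaw.Fit(degrees, xmin=6, discrete=True)
print("power-law exponent alpha:", fit.power_law.alpha)

# Likelihood-ratio comparisons: positive R favours the power law over the
# alternative distribution, and p gives the significance of the comparison.
for alternative in ("lognormal", "exponential"):
    R, p = fit.distribution_compare("power_law", alternative)
    print(f"power law vs {alternative}: R = {R:.2f}, p = {p:.3g}")
```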
--- Discussion The model presented here demonstrates three different phases of leadership dynamics: a phase with no leader, a phase with changing leaders, and a phase with a constant leader or leaders. Which phase is present in the network depends on the inequality of relationships between individuals. This demonstrates how different leadership dynamics seen in human societies can be due to how status is transferred between the individual members of the society. This suggests that self-organisation of social norms around inequality can play a role in keeping a system parameter near to a critical point where leadership changes relatively frequently. This work demonstrates a dynamic hierarchy in human networks where all individuals have equivalent traits and fitness. Our model is a form of preferential attachment, where nodes are more likely to connect to other nodes which are already of high status [22]. However, this is usually applied to growing networks [28,29], and once one individual gains leadership, it is unlikely to change. Fig 5. As q is increased the distributions become increasingly skewed. At higher values of q, the number of nodes becomes a factor, with a second hump visible on the right-hand side of both distributions. We can see how the rewiring of edges to an extra leader between q = 0.54 and q = 0.55 (see Fig 4 panel A) suppresses the frequencies of nodes with middling status or node degree as q is increased. Parameters are the same as in Fig 3, q as shown. Simulations were run for 2 million time steps. https://doi.org/10.1371/journal.pone.0263665.g005 In an alternative model, new individuals may dominate if they have a fitness advantage [30,31]. Our model presents a riches-to-rags alternative where a high-status individual can lose status. In our model, we see how nodes have high numbers of connections (relationships) at some points and then other nodes take over. We find that the predicted exponent of our power-law distribution is higher than that found in some friendship networks [22]. However, friendship networks are only one type of relationship, and humans can relate to each other in many different ways, an example being where a chieftain controls access to food. Further study is needed to investigate how a model like ours can be challenged against empirical data. Evidence for human societies with dynamic leaders during the Neolithic transitions [13] is consistent with the dynamic leader phase of our model. There is a transition between three phases of leadership dynamics in human societies, from relatively egalitarian power structures, through a period where leaders change over time, to dominant institutionalised leaders [13]. Our model can be interpreted as a conceptual model for these leadership dynamics. Many have argued that control of surplus physical resources such as food and land, or intangible resources such as religious authority, can play an important role in promoting individuals to leadership rank [15,19,32,33]. Having a surplus means an individual is able to form relationships where they need only exchange a small proportion of their resources, while their partners must exchange a larger proportion. Such inequality can be further exacerbated by scarcity of resources created by high population density [34]. This form of inequality is modelled by the level of the inequality parameter (q) in our model. 
Interestingly, our results present an alternative to this picture, suggesting that increases in the numbers of relationships per individual might also play an important role in creating conditions for absolutist power structures. More than one factor may have played a role in the transitions in leadership structure that happened during the Neolithic. The three phases of human leadership dynamics correspond to three phases identified in the organisational psychology literature. Lewin has identified three modes of leadership: Laissez Faire, Democratic and Autocratic [35]. These three modes largely correspond to the three phases of leadership dynamics found in our model. Lewin's study linked increasing control of central resources to more Democratic and Autocratic modes. A controlled surplus of this central resource enables a leader to pay off many individuals and maintain their leadership [36]. This reflects an inequality of alliances which is key to our model. An interesting feature of our model is that it demonstrates heavy-tailed distributions of status and node degree. Many systems are known to demonstrate such heavy-tailed distributions when they are at a critical point [37], i.e., when the rate of change of a variable is close to zero. Further analysis of our model in the Supporting Information, which assumes that edge rewiring is relatively slow compared to the status update, shows an expected rate of change of node degree close to 0.0 when q ≈ 0.5. This suggests our model reaches a critical point, but further work is needed to investigate this in more detail. The work we have presented has some limitations. The model we have presented is complex and difficult to analyse. Future models will hopefully simplify our approach while maintaining the interesting dynamics of changing leaders we have found in the model. Other models could add more realism, incorporating mortality of individuals and inheritance, or varying the numbers and types of relationship between individuals. Finally, it is important to find methods for challenging leadership models against data. In this paper we focused primarily on applying this model to the development of insights regarding the Neolithic transitions from flat power structures to hierarchical societies. Future work can build upon these foundations to examine whether this model can be applied to other changes in societal structure, such as the movements from monarchy toward parliamentary democracies in 18th-century Europe, or a detailed study of the transitions of Roman civilization between various different structures including monarchy, through annually electing two concurrent consuls in the Roman Republic, a phase with three 'Triumvirate' leaders, to a single Imperator Caesar in the Roman Empire. As well as human societies, this theory can be of value to studying hierarchies in animal societies [38]. Other work might investigate the impact of relaxing some of our assumptions, for instance, exploring different rewiring rules where nodes have different numbers of edges, or rewire to others based on similar or higher levels of status or numbers of edges. The model can also be extended in various ways to better represent the real-world contexts in which leadership dynamics operate; these could include representations of technological innovations, changes in social norms, or power struggles between potential leaders. These extensions would enable us to develop the model further into a powerful exploratory tool for human leadership dynamics. 
As we increase q, leaders have increased time of leadership; at around 10^4 the average leader has quite a long period with the highest status, but there is still a large turnover. On the right side there are very few leaders in the chart and we see a single leader or several leaders. Parameters: w = 0.1, n = 1000, and as shown in the figure. As we increase q, leaders have increased time of leadership; at around 10^4 the average leader has quite a long period with the highest status, but there is still a large turnover. On the right side there are very few leaders in the chart and we see a single leader or several leaders. Parameters: w = 1.0, n = 1000, and as shown in the figure. --- There are no empirical data associated with this manuscript. The underlying code used to generate the results can be found at https://github.com/johnbryden/PrestigeModel. --- Author Contributions Conceptualization: John Bryden.
Human groups show a variety of leadership dynamics, ranging from egalitarian groups with no leader, to groups with changing leaders, to absolutist groups with a single long-term leader. Here, we model transitions between these different phases of leadership dynamics, investigating the role of inequalities in relationships between individuals. Our results demonstrate a novel riches-to-rags class of leadership dynamics where a leader can be replaced by a new individual. We note that the transition between the three different phases of leadership dynamics resembles transitions in leadership dynamics during the Neolithic period of human history. We argue that technological developments, such as food storage and/or weapons, which allow one individual to control large quantities of resources, would have made relationships more unequal. In general terms, we provide a model of how individual relationships can affect leadership dynamics and structures.
Introduction Multiple births are thought to be a risk factor for child maltreatment [1][2][3][4]. These earlier studies, however, were performed two to three decades ago, and were not necessarily population-based. There is little doubt that the current conditions surrounding families, for example, family planning, child rearing practices and maternal/paternal age, are quite different from those prevalent at that time. Recently, family size has rapidly become smaller, maternal and paternal ages at first childbirth are becoming higher, and assisted reproductive technology has spread widely in Japan [5]. Nevertheless, very few population-based data on the relationship between child maltreatment and multiple births are available. In an intensive literature search, the present author could find no report on this topic other than the earlier studies mentioned above. One possible reason is that prospective epidemiologic research on child maltreatment is very difficult due to the underreporting of abuse cases. It has long been believed in Japan that the frequency of child maltreatment in cases of multiple births is around 10-fold higher than among singletons, according to the only hospital-based report done in Japan, authored by Tanimura et al. [4]. The purpose of the present study is to clarify the impact of multiple births on fatal child maltreatment using nationwide data. --- Materials and methods --- Subjects National annual reports on fatal child maltreatment (the first to eighth reports) published by the Ministry of Health, Labor and Welfare of Japan (in Japanese) were used as the initial sources of information for the present secondary data analyses. All cases of fatal maltreatment of children from 0 to 17 years of age between July 2003 and March 2011 were reported. Fatal child maltreatment was defined as child death due to maltreatment. The definitions of maltreatment and parental guardian were based on the Child Abuse Prevention Law of Japan enacted in 2000. The types of maltreatment included physical abuse, psychological abuse, neglect and sexual abuse. The annual report tallied the cases of fatal child maltreatment according to whether the deaths were based on parent-child murder-suicide or not. Cases of parent-child murder-suicide were excluded from the present analysis, since the background and potential risk factors may be quite different from those in cases of fatal child maltreatment without suicide. The numbers of women exhibiting any of about 20 physical and mental issues during pregnancy and the perinatal period were surveyed via a questionnaire administered to the local public authorities, and the results were presented in the annual reports. The reported numbers of women with each issue do not necessarily show that a particular issue is a real risk factor for fatal child maltreatment, since the frequency of each issue in the unexposed population or general population was not taken into consideration in the report. These data on physical and mental issues were not presented according to the ages of the victims. One limitation of this retrospective questionnaire survey is that there were many missing values among these data. --- Statistical analyses Multiple births, low-birthweight (<2,500 g) and teenage pregnancy were the only variables for potential risk factors, the numbers of which in the general population at birth could be estimated using vital statistics. The author substituted childbirth below the maternal age of 20 in the vital statistics for teenage pregnancy. 
The relative risks (RRs) and their 95 % confidence intervals (CIs) in cases of fatal child maltreatment related to multiple births were estimated using fatal maltreatment data and vital statistics. The RRs of teenage pregnancy and low-birthweight were also calculated to clarify the relative impact of multiple births on fatal child maltreatment. The data on multiple births and low-birthweight were presented in all eight reports, and teenage pregnancy was tracked beginning with the third annual report. The information on the missing values of teenage pregnancy, low-birthweight and multiple births was presented after the second report. The RR was calculated, according to its definition, as the ratio of the incidence in the exposed population to that in the unexposed population. Multiple births, teenage pregnancy and low-birthweight were regarded as risk factors against singleton births, non-teenage pregnancy and non-low-birthweight, respectively. The analyses were performed using the concept of the birth-year cohort. For example, the incidence among multiple births was calculated as the number of multiple-birth cases with fatal child maltreatment divided by the person-years of the birth-year cohort of the general multiple-birth population in the reported period (between July 2003 and March 2011). The incidence among singletons was calculated in the same manner. There were no data on the number of multiple births, birthweight or maternal age for children from one to 17 years of age in the vital statistics. It was assumed that the percentage of the exposed population in the total general population at birth was constant for children from one to 17 years of age. For example, the percentage of multiple births in 2003 was used as the percentage of multiples of 1 year of age in 2004, 2 years of age in 2005, and so on. For the general population data, vital statistics from 1986 to 2011 were used, considering the year of the annual report and the age of the victims. Theoretically, the victims of 17 years of age in the first report (published in 2005) were born in 1986, and the victims of 0 years of age in the eighth report (published in 2012) were born in 2011. The follow-up period of the birth-year cohort was adjusted for the years 2003 and 2011 according to the research period (6 months and 3 months, respectively). The follow-up period for multiple births ranged from 0.125 years (2011 cohort) to 6.25 years (1994-2004 cohorts) according to the birth year. The RR was then calculated as the ratio of the incidence in the multiple-birth population to that in the singleton population. The RRs of low-birthweight and teenage pregnancy were calculated in the same manner. Regarding multiple births, the number of families with at least two liveborn multiples was recalculated using vital statistics on live birth/stillbirth combinations. The RR and 95 % CI of multiple births were calculated per child unit (multiples as individual children) and per family unit (families with multiples). When calculating the RR per family, the total number of families was adjusted by considering the number of families with multiples. These analyses were performed both including and excluding missing values, since a very high number of missing values was expected. Missing values were treated as unexposed cases when missing values were included.
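To make the computation concrete, the following is a minimal sketch of an RR with a Wald-type 95 % CI on the log scale. The function name and language are illustrative; the worked counts are the US figures recalculated from Luke and Brown in the Discussion below, not the Japanese cohort data, whose estimates additionally rest on person-year denominators handled with the same log-scale logic.

```python
import math

def relative_risk(cases_exposed, denom_exposed, cases_unexposed, denom_unexposed, z=1.96):
    """Relative risk with a Wald-type 95 % confidence interval on the log scale.

    Denominators are population sizes here (risk ratio); person-years would be
    handled analogously, with the corresponding standard-error formula.
    """
    rr = (cases_exposed / denom_exposed) / (cases_unexposed / denom_unexposed)
    se_log_rr = math.sqrt(1 / cases_exposed - 1 / denom_exposed
                          + 1 / cases_unexposed - 1 / denom_unexposed)
    lower = math.exp(math.log(rr) - z * se_log_rr)
    upper = math.exp(math.log(rr) + z * se_log_rr)
    return rr, lower, upper

# Maltreatment deaths before 1 year of age among multiples vs. singletons,
# US 1995-2000, as recalculated from Luke and Brown in the Discussion below.
rr, lo, hi = relative_risk(47, 77_460, 4_325, 18_636_575)
print(f"RR = {rr:.2f} (95 % CI {lo:.2f}-{hi:.2f})")
# Prints approximately RR = 2.61 (1.96-3.49); the Discussion reports 2.62,
# which follows from the rounded percentages 0.0607 % / 0.0232 %.
```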
--- Results The total number of cases of fatal child maltreatment in the reported period was 437. The total number of person-years for children aged 0-17 years between July 2003 and March 2011 was estimated to be 159,550,946 in Japan. The estimated mortality rate due to maltreatment of children aged 0-17 years was 0.27 per 100,000 person-years. The percentages of missing values for multiple births, low-birthweight and teenage pregnancy were 39.1 % (=161/412), 50.5 % (=208/412) and 35.1 % (=127/362), respectively. Among the cases of fatal child maltreatment, 14 multiple births were identified from 13 families. The RRs and their 95 % CIs are shown in Table 1. All RRs were statistically significant regardless of the risk factors and estimation methods, and were strongly influenced by the inclusion/exclusion of missing values. The RRs of multiple births per individual were 1.8 (95 % CI 1.0-3.0) when including missing values and 2.7 (95 % CI 1.5-4.8) when excluding missing values. The RRs of multiple births per family were 3.6 (95 % CI 2.1-6.2) when including missing values and 4.9 (95 % CI 2.7-9.0) when excluding missing values. The RR of multiple births tended to be much lower than the RR of teenage pregnancy (RR 12.9, 95 % CI 9.7-17.0 when including missing values; RR 22.2, 95 % CI 16.6-29.8 when excluding missing values), but slightly higher than the RR of low-birthweight (RR 1.4, 95 % CI 1.1-1.9 when including missing values; RR 2.9, 95 % CI 2.0-4.0 when excluding missing values). --- Discussion According to the world report by UNICEF [6], the maltreatment death rates of children under the age of 15 ranged from the lowest rates of 0.1-0.2 to the highest rate of more than 2.0 (per 100,000 person-years) in the richest 27 countries in the 1990s. The estimated mortality rate due to maltreatment of children under the age of 15 years was 0.32 (=422/130,716,055) per 100,000 person-years in the present study. It should be noted that this mortality rate does not include parent-child murder-suicide cases. If murder-suicide cases were included, the mortality rate would be nearly 0.55 (=723/130,716,055), thus demonstrating that there is no serious underreporting of fatal child maltreatment in the present data. On the other hand, the incidence rate of total, including nonfatal, child maltreatment was difficult to estimate. One possible estimate could be based on the report by the Ministry of Health, Labor and Welfare of Japan (in Japanese) on the number of individuals using the listening and support services for child maltreatment. The present data showed that families with multiple births had an increased risk of fatal child maltreatment. The RR, however, was not higher than the RR of teenage pregnancy. The results also showed that the RR of multiples per individual, namely of being a child member of a multiple birth, showed marginal significance and was not largely different from the RR of low-birthweight when missing values were included in the calculation. The first reports that treated the relationship between families with multiples (twins in this case) and child maltreatment were that of Robarge et al. [1] and their expanded study [2]. However, their research interest was not necessarily twins as a risk factor for child maltreatment, but the stressful situation associated with the birth of twins due to the increase in family members, inadequate spacing of children and rearing more than one infant at a time.
Although their questionnaire survey of mothers was hospital-based, their results suggested that the proportion of child maltreatment in families with twins was higher than in families with singletons. The noteworthy finding was that it was not necessarily the twins themselves who were abused, but rather the siblings of twins. This means that having twin children can reduce the time and energy that the mother has for meaningful relationships with the father and other siblings within the family unit [1]. On the other hand, Nelson and Martin [3] reported that of 310 registered abused/neglected children, 16 (5.2 %) were twins, which was about 2.5-fold higher than the approximate general percentage of twins (2 %). They concluded that twins themselves were also at high risk, supporting the findings of Nakou et al. [7], which showed that 4 out of 50 registered abused children were twins. It is not surprising that multiples themselves are at high risk, since multiples have many general risk factors for child maltreatment, for example, low-birthweight, prematurity, birth defects, neonatal complications and so on. According to the nationwide hospital-based data provided in 1986 by Tanimura et al. [4], of 231 children subjected to abuse or neglect, 23 (10.0 %) were products of multiple births (22 were twins). They compared this percentage to that of twin deliveries (number of mothers) in the general Japanese population (0.6 %). They should have compared the percentage with that of live multiple births, since their research interest was the risk of being abused as a twin, not the risk of abuse occurring in families with twins. According to the vital statistics, the percentage of multiple live births among total live births in 1986 was 1.4 %. The percentage of twins in the maltreated population was, thus, around 7-fold (=10.0/1.4) higher than in the general population. It is important to note that the ratio of the percentage of a specific factor in fatal child maltreatment cases to the percentage in the general population, for example, the percentage of multiple births in fatal child maltreatment cases divided by the percentage of multiple births in the general population at birth, does not yield a correct estimation of the RR. This method gives an underestimation of the RR, since it does not consider the percentage of the singleton (unexposed) population or the age of the subjects, although the degree of underestimation seemed not to be serious. This method has been used several times in studies of the child maltreatment of twins [3,4]. Using the data presented by Luke and Brown [8], the percentages of total maltreatment deaths before 1 year of age among singletons and multiple births from 1995 to 2000 in the US were recalculated as 0.0232 % (=4,325/18,636,575) and 0.0607 % (=47/77,460), respectively, which produced an RR of 2.62 with 95 % CI 1.96-3.49 per child. This value is slightly higher than the present result, but lower than that estimated by Tanimura et al. [4], although the age distribution of the victims was very different. The difference between the present data, the data of Luke and Brown [8] and the data of Tanimura et al. [4] was that the former two data sets corresponded to fatal child maltreatment, i.e., child deaths, whereas the latter corresponded to survivors of maltreatment admitted to the hospital. The higher proportion of twins in the data of Tanimura et al. [4], however, is not readily explained by this difference in the data.
One possible explanation is that multiples in general might be admitted to the hospital more often than singletons for reasons other than child maltreatment, and thus were apt to be over-ascertained. More research should be performed on multiple-birth status among the survivors of child maltreatment. Most previous clinical studies focused on multiple births per child. This is not necessarily appropriate from the public health or preventive medical point of view, because most difficulties in child rearing related to multiple births arise from rearing more than one child of the same age at the same time in the same family [5,[9][10][11]. For example, the comparison of two infants (twins) consisting of one low-birthweight twin and one non-low-birthweight twin is sometimes a source of stress for mothers. These anxieties or feelings of stress may not arise when rearing only one low-birthweight singleton. If multiple births were treated as individual births, the risk associated with rearing two or more children of the same age at the same time in the same family would be underestimated. The rapid increase of iatrogenic multiple births is now a public health concern, one that goes beyond purely obstetric problems [12]. Nevertheless, this serious situation is rarely recognized, not only among child-support workers, but also among professionals in the field of parent and child health and even in families with multiples themselves [12]. According to the vital statistics, the total fertility rate has tended to decrease and has been below two for a long period of time in Japan. This suggests that the risk of having at least one maltreated baby may become higher in families with multiples, which have at least two children, than in families with singletons. The present results also showed that teenage pregnancy was a significant risk factor for fatal child maltreatment. Luke and Brown [8], using US vital statistics, showed an increased risk of infant maltreatment deaths among healthy, full-term infants born to mothers aged 24 and younger. Most of the limitations of the present study could be attributed to the data collection system itself. Although this study was based on the annual reports of a nationwide survey, the data gathering was far from comprehensive. The very high percentage of missing values for all three risk factors shows the difficulty of gathering data on child maltreatment. The present RRs should be interpreted as indicating the general tendency of these three risk factors. Many of the problems that occur during pregnancy and the perinatal period are associated with one another. For example, multiple births are associated with many perinatal problems, such as low-birthweight, Caesarean section, neonatal asphyxia, impending abortion/threatened premature delivery and pregnancy hypertension. For example, about 70 % of multiples are low-birthweight in Japan [5]. Being a member of a multiple birth could be considered an additional risk factor for low-birthweight. The present aggregated data do not permit multivariate analyses controlling for confounding factors. According to the recent report by Schnitzer et al. [13], no single data source was adequate to provide thorough surveillance of fatal child maltreatment, but combining just two sources substantially increased case ascertainment. Unfortunately, most record linkage, including that between birth records and child maltreatment records, is almost impossible in Japan.
The assumption made in the calculation of the RR, that the percentage of the exposed population in the general population was constant for children from birth to 17 years of age, was not necessarily appropriate. The percentage of the exposed group might gradually decrease with age, since children in the exposed group would die more frequently than children in the unexposed group for reasons other than child maltreatment, especially at an earlier age. This seemed, however, to have little effect on the present results, since fatal child maltreatment is very rare and the mortality rate of children themselves is extremely low in Japan. In conclusion, recent Japanese nationwide data showed that families with multiple births had an elevated risk of fatal child maltreatment, but this risk was not as high as previously thought. Multiple births should be considered a risk factor for child maltreatment, not only per individual child, but also per family unit. Health care providers should be aware that multiple pregnancies/births may place significant stress on a family, and they should provide appropriate support and intervention beginning in pregnancy, treating such families as a potential high-risk group. --- Conflict of interest The author declares no conflict of interest.
Objectives The purpose of the present study is to clarify the impact of multiple births on fatal child maltreatment (child death due to maltreatment). Methods The national annual reports on fatal child maltreatment, which contain all cases from July 2003 to March 2011, published by the Ministry of Health, Labor and Welfare of Japan, were used as the initial sources of information. Parent-child murder-suicide cases were excluded from the analyses. Multiple births, teenage pregnancy and low-birthweight were regarded as the exposed groups. The relative risks (RRs) and their 95 % confidence intervals (CIs) were estimated using the data from the above reports and vital statistics. These analyses were performed both including and excluding missing values. Results Among 437 fatal child maltreatment cases, 14 multiple births from 13 families were identified. The RRs of multiple births per individual were 1.8 (95 % CI 1.0-3.0) when including missing values and 2.7 (95 % CI 1.5-4.8) when excluding missing values. The RRs of multiple births per family were 3.6 (95 % CI 2.1-6.2) when including missing values and 4.9 (95 % CI 2.7-9.0) when excluding missing values. The RR tended to be much lower than the RR of teenage pregnancy (RR 12.9 or 22.2), but slightly higher than the RR of low-birthweight (RR 1.4 or 2.9). Conclusions Families with multiple births had an elevated risk of fatal child maltreatment both per individual and per family unit. Health providers should be aware that multiple pregnancies/births may place significant stress on families and should provide appropriate support and intervention.
INTRODUCTION On Sunday, March 22, 2020, Angela Merkel, the German Chancellor, announced that in the fight against the spread of the novel Coronavirus, she and the prime ministers of the German federal states had agreed that public gatherings of more than two people would be prohibited temporarily for 14 days (Frankfurter Allgemeine Zeitung, 2020). Movement restrictions and social/physical distancing provisions had never existed before in the Federal Republic of Germany, and so it was unclear how people would react to them. Obviously, the COVID-19 pandemic has raised many questions in many scientific disciplines. The social sciences offered an abundance of theories to predict and explain human behavior in extreme conditions, such as a pandemic. One of the first teams of researchers recommending the application of relevant knowledge from the social and behavioral sciences to the context of the COVID-19 pandemic was Bavel et al. (2020). The extent to which they met the research zeitgeist is reflected in the number of citations: in October 2021, only about 1.5 years after the publication of their article, it had already been cited over 2,400 times. We also wanted to contribute to a better understanding of how people behave in this new situation. Therefore, it was important for us to examine which variables are central for acceptance of the measures and behavioral responses in this context. Based on previous studies in the areas of pandemics (e.g., Ebola: Vinck et al., 2019), prevention measures (e.g., Rykkja et al., 2011), and risk communication (e.g., Baumgartner and Hartmann, 2011), we selected a set of potentially relevant variables. These include trust, political orientation, health anxiety, and uncertainty tolerance. Like some previous studies (e.g., Longstaff and Yang, 2008; van der Weerd et al., 2011), we consider trust to be an important variable for human behavior in the context of a pandemic. The APA Dictionary of Psychology (2020) defines trust as "reliance on or confidence in the dependability of someone or something." However, trust is a broad concept and can refer to different aspects, depending on the perspective. The relevant perspective for us at the time of the first study was trust in infection statistics from official authorities, that is, the figures communicated by official institutions and governments. Previous research has shown that trust in political systems may influence people's reactions to restrictions; that is, trust is positively correlated with acceptance of prevention measures in a society (e.g., anti-terror measures, Rykkja et al., 2011) and linked to law compliance (Marien and Hooghe, 2011). Also, Rowe and Calnan (2006) have shown that trust in public systems and authorities positively influences the way people follow instructions. Greater trust in policy makers is associated with greater compliance with health policies such as testing or quarantining. These relationships could also be demonstrated in past pandemics (e.g., Ebola: Morse et al., 2016; Blair et al., 2017; Asian influenza and H1N1 pandemic: Siegrist and Zingg, 2014). There are some good summaries of the relevance of trust in the context of the Coronavirus pandemic (e.g., Balog-Way and McComas, 2020; Devine et al., 2021). Only recently, in the context of the COVID-19 pandemic, it has been shown that trust in institutions is associated with lower mortality rates (Oksanen et al., 2020).
Since health authorities used infection and death statistics to justify their strict regulations and encouraged everyone to help "flatten the curve" (of new infections), we expected trust in these official statistics to be an important predictor of compliance with the protective measures. Therefore, we aimed at investigating trust in official information from different sources and formulated the following research question (RQ): RQ 1: How much do people trust in statistics on COVID-19 from official authorities? In the course of the COVID-19 pandemic, the media constantly reported about people's reactions to the new circumstances. This included increased purchasing or even hoarding of products such as disinfectants, face masks, food and toilet paper (Statista, 2020a; Statistisches Bundesamt, 2020), as well as differences in people's compliance with social distancing measures (Lehrer et al., 2020; Statista, 2020b). Uncertainty about the virus itself, its origin, or the appropriate measures to combat it, coupled with a growing group of people who challenge established facts, set the stage for the rise of conspiracy theories. In such an environment, merely trying to convince people of the severity of the disease and the effectiveness of the prevention measures may not be sufficient to encourage protective behavior such as social distancing. Therefore, it is important to not only understand how much people trust in official infection statistics, but also to explore further pandemic-relevant variables. First, it must be understood which variables are central to behavioral responses, in order to subsequently develop appropriate communication strategies. As behavioral responses, we considered three types of behavior: (A) self-centered prepping behavior (e.g., stocking up on face masks, food, or other essential goods; the term is also used by Imhoff and Lamberty, 2020, for hoarding everyday goods in the COVID-19 pandemic), and protective behavior to not infect (B) oneself and (C) others. We differentiate here between protective behavior for oneself and for others for several reasons. For example, risk research shows that risk assessments differ depending on who the target person is (i.e., self vs. other; see Lermer et al., 2013, 2019). Furthermore, people differ in prosocial behavior (e.g., Eagly, 2009). While this is more pronounced in some than in others, it need not be related to their self-protective behavior. Complex and alarming world events are often accompanied by the emergence of conspiracy theories (McCauley and Jacques, 1979; Leman and Cinnirella, 2007; Jolley and Douglas, 2014). These theories assume that the event in question is the result of a secret plot by a powerful group (Imhoff and Bruder, 2014). Previous research suggests that political orientation may be associated with conspiracy beliefs. For instance, van Prooijen et al. (2015) found a positive association between extreme political ideologies (on both the right and the left) and the tendency to believe in conspiracy theories. The authors conclude that "political extremism and conspiracy beliefs are strongly associated due to a highly structured thinking style that is aimed at making sense of societal events" (p. 570). A study in Italy has shown that believing in conspiracies is linked to right-wing political orientation (Mancosu et al., 2017).
In their recent study in the context of the COVID-19 pandemic, Imhoff and Lamberty (2020) showed that conservative political orientation was positively associated with self-centered prepping behavior. Due to these findings, we included political orientation in this research. Furthermore, at least two variables seem to be central to behavioral responses during health-threatening events: health anxiety and uncertainty tolerance. Today, numerous studies can be found showing that the COVID-19 pandemic increased levels of anxiety (e.g., Baloran, 2020; Choi et al., 2020; Petzold et al., 2020; Roy et al., 2020; Buspavanich et al., 2021). Fewer studies, however, specifically examine health anxiety and its links to reactions to the COVID-19 pandemic. Research shows that anxiety is linked to safety-seeking behavior (Abramowitz et al., 2007; Tang et al., 2007; Helbig-Lang and Petermann, 2010). For example, health anxiety has been linked to an increase in health information searching (Baumgartner and Hartmann, 2011). Sometimes, however, health anxiety can lead people to avoid relevant information that creates discomfort (Kőszegi, 2003). Avoiding information about a diagnosis, for example, seems to help reduce stress and anxiety, while delaying beneficial action (Golman et al., 2017). In a recent article, Asmundson and Taylor (2020) report that people with high health anxiety also tend to engage in maladaptive behaviors such as panic purchasing. Thus, we were interested in the impact of health anxiety on people's behavioral responses in the COVID-19 pandemic. Anxiety is associated with high uncertainty and often motivates people to take action to reduce uncertainty (Raghunathan and Pham, 1999), such as increased information seeking (Valentino et al., 2009). The COVID-19 pandemic is a threat that is both dreadful and highly uncertain. Research has shown that these affective states strongly influence people's perceptions of risk (Fischhoff et al., 1978). Perceived risk is influenced by uncertainty (Vives and FeldmanHall, 2018). Uncertainty during the current pandemic is high because SARS-CoV-2 is a novel virus that was until recently unknown to scientists. As a result, it is unclear how the pandemic will develop and difficult to accurately assess one's personal risk. Uncertainty is a state that is perceived as discomforting, and people generally strive to avoid it (Schneider et al., 2017). However, people differ in their tolerance for uncertainty (Grenier et al., 2005). Research on the tolerance of uncertainty goes back to Frenkel-Brunswik (1949), who observed that people systematically differ in dealing with ambiguous situations (Dalbert, 1999). People with a low level of uncertainty tolerance employ vigilant coping strategies such as intensified information seeking about the threatening event. In the context of the COVID-19 pandemic, this could result in reading the news more often than usual. At the same time, people with a low level of uncertainty tolerance tend to show avoidance strategies such as turning away from dreadful information about the threat (Grenier et al., 2005). Thus, we were also interested in the impact of uncertainty tolerance on people's behavioral responses in the COVID-19 pandemic. Furthermore, the variables gender and age seemed important to consider as well,
especially because results of recent studies in the COVID-19 context suggest that these are relevant characteristics with regard to behavioral responses. For example, it was shown that women and older participants tended to be more willing to wear face masks (e.g., Capraro and Barcelo, 2020). Also, the results of a study conducted by Li and Liu (2020) suggest that women tend to engage in more protective behaviors during the COVID-19 pandemic than men. Furthermore, this also seems to be true for older age (Li and Liu, 2020). In sum, we aimed at understanding how trust, political orientation, health anxiety, and uncertainty tolerance, in addition to gender and age, influence people's self-centered prepping behavior and protective behavior to avoid infection of oneself or others. --- Political Orientation Participants' political orientation was measured using the Left-Right Self-Placement scale developed by Breyer (2015). This scale measures political attitudes on a left-right dimension with a single item asking participants to locate themselves on a 10-point Likert scale with the poles left and right. --- Health Anxiety Health anxiety was measured using the German version of the Health Anxiety Inventory (MK-HAI) developed by Bailer and Witthöft (2014). This scale assesses the tendency toward health-related concerns with 14 items on a five-point Likert scale (1 = strongly disagree to 5 = strongly agree); sample item: "I spend a lot of time worrying about my health." These items were averaged to an index of health anxiety (Cronbach's α = 0.93). --- Uncertainty Tolerance We measured uncertainty tolerance with the Uncertainty Tolerance (UT) Scale developed by Dalbert (1999). This questionnaire captures the tendency to assess uncertain situations as threats or challenges with eight items on a six-point Likert scale (1 = strongly disagree to 6 = strongly agree); sample item: "I like to know what to expect." These items were averaged to an index of uncertainty tolerance (Cronbach's α = 0.70). --- Self-Centered Prepping Behavior Self-centered prepping behavior in the context of COVID-19 was measured using three items: "I bought face masks;" "I stocked up on food;" and "I stocked up on disinfectant." The answer format was yes or no. "Yes" answers were summed to a self-centered prepping behavior sum score. At the time of the study, it was not yet clear (at least to the public) that wearing a mask was more protective for others than for oneself. In addition, masks were a scarce commodity at the time. At the beginning of the Corona pandemic, not even system-relevant institutions (e.g., hospitals) were supplied with sufficient amounts of masks (Biermann et al., 2020; WHO, 2020a). Thus, masks were difficult to obtain at that time. Also, an official requirement to wear masks in public (e.g., while shopping and on public transportation) was not introduced throughout Germany until April 29, 2020 (Mitze et al., 2020; The Federal Government Germany, 2020). We therefore understand buying face masks as a behavior aimed at building up a stock of a certain good for a certain period of time. With this understanding, we follow the conceptualization of self-prepping behavior described by Imhoff and Lamberty (2020). --- Protective Behavior for Self-Protection Protective behavior to avoid infection was measured using four items. Individuals were asked to indicate change in behavior or new behavior on a five-point Likert scale (1 = strongly disagree to 5 = strongly agree).
Items were: "I avoid public transport (contact with other people/visiting cafés and restaurants/meetings with friends) in order not to get infected." These items were averaged to an index of behavior change for self-protection (Cronbach's α = 0.81). --- Protective Behavior for Others The protective behavior to not infect others was measured using the same four items as those used to measure behavior change for self-protection. However, these items referred to other people; a sample item reads, "I avoid public transport to protect others." These items were averaged to an index of behavior change for others (Cronbach's α = 0.88). --- STUDY 1 RESULTS To answer RQ 1, participants' trust in statistics on COVID-19 infections from different official authorities was analyzed, and the results are shown in Table 1 (the scale ranged from 1 = strongly disagree to 7 = strongly agree). Trust in statistics from China was by far the lowest, whereas trust in statistics from the RKI was highest. To answer RQ 2, we analyzed associations with behavioral responses by using correlation analyses. Results can be found in Table 2. All variables, except uncertainty tolerance, showed significant associations with one of the three behavioral responses. To investigate the role of these variables in predicting behavior change, we conducted multiple linear regression analyses. Findings are shown in Table 3. Results show that lower levels of trust and higher levels of health anxiety are associated with more prepping behavior. Higher levels of trust in official statistics, being female and being of younger age within our sample were shown to be significant predictors of self-protecting behavior. There was also a tendency for higher levels of health anxiety to predict behavior change to avoid infections, which did not reach the significance threshold of p < 0.05 (p = 0.09). In the third model, being female was significantly associated with behavior change to not infect others. Furthermore, this latter model also indicated a tendency for higher levels of trust in official statistics and more right-oriented participants to be less likely to change their behavior to protect others. However, these results did not reach the significance threshold of 0.05 (trust: p = 0.08; political orientation: p = 0.07).
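A minimal sketch of this two-step analysis (zero-order correlations followed by a multiple linear regression) is given below. It uses simulated stand-in data; the variable names, scale ranges and effect sizes are illustrative assumptions, not the Study 1 data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-in data; means, SDs and effects are illustrative only.
rng = np.random.default_rng(42)
n = 250
df = pd.DataFrame({
    "trust": rng.normal(5.0, 1.0, n),           # trust in official statistics (1-7)
    "health_anxiety": rng.normal(2.5, 0.8, n),  # health anxiety index (1-5)
    "political": rng.normal(4.5, 1.8, n),       # left-right self-placement (1-10)
    "age": rng.integers(18, 70, n).astype(float),
    "female": rng.integers(0, 2, n),            # 1 = female, 0 = male
})
df["prepping"] = 0.4 * df["health_anxiety"] - 0.2 * df["trust"] + rng.normal(0, 1, n)

# Step 1: zero-order correlations with the behavioral outcome (cf. Table 2).
print(df.corr()["prepping"].round(2))

# Step 2: multiple linear regression predicting self-centered prepping behavior (cf. Table 3).
model = smf.ols("prepping ~ trust + health_anxiety + political + age + female", data=df).fit()
print(model.summary())
```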
--- STUDY 1 DISCUSSION Six major findings arise from Study 1: (a) Trust in official statistics from different authorities depended on the source of the statistics: data from China were believed much less than data from Europe or Germany. Data from the RKI were most trusted. (b) Trust in official statistics was negatively correlated with self-centered prepping behavior, but positively correlated with behavior to protect oneself and others. This is also in line with other studies showing that trust in institutions of the political system is positively linked to law compliance (e.g., Marien and Hooghe, 2011). Moreover, the public health recommendations mostly focused on hygiene behavior to avoid infections rather than on self-centered prepping behavior. In other words, by showing less self-prepping behavior and more of the recommended protective behavior, participants complied with the official recommendations, which may explain why trust decreased self-prepping behavior. Furthermore, these results are in line with a recently conducted study in which social trust (trust in others) was negatively linked to self-prepping behavior during the COVID-19 pandemic (Oosterhoff and Palmer, 2020). (c) Health anxiety predicted both self-centered prepping behavior and behavior change to protect oneself. Research has shown that anxiety leads to actions to reduce uncertainty (Raghunathan and Pham, 1999), and both self-centered prepping behavior and recommended behavior changes (e.g., hygiene behavior) may serve this purpose among individuals with high health anxiety. Furthermore, anxiety has been repeatedly linked to general hoarding behavior (Coles et al., 2003; Timpano et al., 2009), and trait anxiety has also been positively linked to preventative behavior during the COVID-19 pandemic (e.g., avoiding going out and avoiding physical contact; Erceg et al., 2020). (d) Women were more likely to change their behavior to protect both themselves and others. Women not only tend to judge risks as higher than men do (e.g., Slovic, 1999) but also engage more in caring behavior (e.g., Archer, 1996) and show more safety-seeking than men (Byrnes et al., 1999; Lermer et al., 2016a; Raue et al., 2018). However, it is important to note that safety behavior may also increase health anxiety (Olatunji et al., 2011), which suggests a potential bidirectional effect. (e) Participants with right-wing political orientations were less likely to change their behavior to protect others. In sum, these findings not only show differences in people's trust in official statistics depending on their source but also that trust influences their behavior. These study results demonstrate that trust gained through clear and transparent information and communication by public authorities is key to decreasing uncertainty, limiting the spread of false beliefs, and encouraging behavior change to protect everyone's health. A limitation of Study 1 is that we used a dichotomous answer format to assess participants' prepping behavior. Furthermore, we did not explicitly measure trust in government, acceptance of social distancing measures, or guideline adherence. Therefore, a follow-up study was planned in which we would assess self-centered prepping behavior in a more detailed way. The aim was to reinvestigate the observed correlations and to additionally include the variables trust in government, acceptance of social distancing measures, and guideline adherence, thereby expanding the insights gained from Study 1. With this, we wanted to follow the call for replication-extension studies (Bonett, 2012; Wingen et al., 2020). --- STUDY 2 METHOD As the COVID-19 pandemic progressed, the duration of the government's restrictions was extended. To underpin our findings from Study 1, and to further explore the development of perceptions of and reactions to the pandemic-related restrictions, we replicated and extended Study 1. In addition to reinvestigating our three research questions, we addressed trust in the government as well as acceptance of and adherence to social distancing guidelines. Trust in authorities is an important factor for the acceptance of many measures and is therefore particularly worth protecting and enhancing (Betsch et al., 2020d). As mentioned above, Rykkja et al. (2011) found that trust in political systems influences citizens' attitude toward prevention measures. Research from previous epidemics showed that people who had less trust in the government took fewer precautions against the Ebola virus disease during the 2014-2016 outbreak in Liberia and Congo (Vinck et al., 2019; Oksanen et al., 2020).
Furthermore, the social developments at that time showed that acceptance of government measures per se is a particularly relevant variable. During the pandemic, the media increasingly reported on violations of the health-protective measures and on the closure of businesses, which led to high rates of unemployment. Around mid-April, people started demonstrating against the measures (Kölner Stadt-Anzeiger, 2020). The behavior of participants in demonstrations against the current measures showed that acceptance of the measures has a strong influence on adherence to social distancing guidelines. Thus, we assessed participants' trust in the government and acceptance of the measures and raised the following research question: RQ 3: Which factors influence adherence to social distancing guidelines? To explore this RQ, we analyzed the impact of the variables relevant for behavior change from Study 1, as well as trust in government and acceptance of the measures, on adherence to social distancing guidelines. --- Participants and Procedure Our second online survey was conducted between April 8 and April 23, 2020. For the recruitment of participants, we used the same sampling strategy as in Study 1; only the attention check item was changed. Again, data from participants who failed to answer the attention check item correctly (i.e., "If you would like to continue with this study, then select 'agree'", with "agree" being the fourth of five response options) or who did not finish the questionnaire were not included. We changed the attention check item in comparison to Study 1 because the new item seemed more valid. In the first study, the attention check was passed by clicking on the rightmost answer option. There, however, participants could also have passed the check simply by showing a response pattern, such as always clicking on the rightmost answer option. --- Measures We applied the same measures for trust in official statistics (Cronbach's α = 0.85), political orientation, health anxiety (Cronbach's α = 0.93), and uncertainty tolerance (Cronbach's α = 0.70) as in Study 1. This also applies to the indices of behavior change to avoid infection (Cronbach's α = 0.79) and behavior change to not infect others (Cronbach's α = 0.80). However, the item "I avoid visiting cafés and restaurants in order not to get infected [/not infect others]" was changed to "I pay more attention to the recommended hygiene rules than before the Coronavirus became known, in order not to get infected [/not infect others]" due to the lockdown. --- Self-Centered Prepping Behavior In Study 2, we assessed self-centered prepping behavior in a more detailed way. In order to address some limitations of the first study, a Likert scale was used instead of a dichotomous response format, together with a symmetrical formulation of the items ("purchased" instead of "stocked up" and "bought"). In addition, three more items were developed to examine a wider range of behaviors (e.g., buying hygiene products or disposable gloves). In total, we used six items for which participants were asked to indicate on a seven-point Likert scale (1 = strongly disagree to 7 = strongly agree) how much each statement applied to them: "Purchased face masks" and "Purchased larger quantities of food [disinfectants/toilet paper/hygiene products/disposable gloves] than usual." These items were averaged to an index of self-centered prepping behavior (Cronbach's α = 0.79).
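As a minimal, self-contained sketch of how such an index and its internal consistency can be computed, the snippet below averages hypothetical item responses (not the survey data) and applies the standard Cronbach's alpha formula.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for a (n_respondents, n_items) matrix of Likert ratings."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances.sum() / total_variance)

# Hypothetical answers of five respondents to the six prepping items (7-point scale).
responses = np.array([
    [1, 2, 1, 1, 2, 1],
    [5, 6, 4, 5, 5, 6],
    [3, 3, 2, 4, 3, 3],
    [7, 6, 7, 6, 7, 5],
    [2, 1, 2, 2, 1, 2],
])
prepping_index = responses.mean(axis=1)  # per-person index, averaged as in the text
print(prepping_index, round(cronbach_alpha(responses), 2))
```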
--- Trust in the Government Participants' trust in the government was assessed using two items, "I have great trust in the federal government" and "I have great trust in the state government", with a seven-point Likert scale (1 = strongly disagree to 7 = strongly agree) answer format. These items were averaged to an index of trust in government (Cronbach's α = 0.91). Due to Germany's federal structure, we surveyed trust in the federal government and the state government separately, as did the COVID-19 Snapshot Monitoring (COSMO) project, a well-known repeated cross-sectional monitoring project during the COVID-19 outbreak in Germany (see for instance COSMO COVID-19 Snapshot Monitoring, 2020). --- Acceptance of the Measures To assess participants' acceptance of the safety measures, four items with a seven-point Likert scale (1 = strongly disagree to 7 = strongly agree) answer format were developed: "I think the current measures taken by the German government to combat the COVID-19 pandemic are good," "I think the German government's communication of the current measures to combat the COVID-19 pandemic is good," "I think the current measures taken by the federal government to combat the COVID-19 pandemic are appropriate," and "I think that the people responsible for planning and implementing the current measures have the necessary competence." These items were averaged to an index of acceptance (Cronbach's α = 0.89). --- Adherence to the Social Distancing Guidelines To assess participants' adherence to the social distancing guidelines, five items were adapted from a measure of behavior during the COVID-19 pandemic by Rossmann et al. (2020): participants were asked to indicate on a five-point Likert scale (1 = never to 5 = very often) how often in the last 10 days the following applied to them: "I met with friends who live outside my household;" "I met with family members who live outside my household;" "I met with older people;" "I violated the 1.5-meter distance rule;" and "I disregarded regulations on social distancing or movement restrictions." These items were averaged to an index of guideline adherence (Cronbach's α = 0.64) and recoded so that higher values indicate more adherence. --- STUDY 2 RESULTS As in Study 1, to answer RQ 1, people's level of trust in statistics on COVID-19 from official authorities was compared and is displayed in Table 1. Again, findings show that trust in statistics from China was by far the lowest, whereas trust in statistics from the RKI was highest. To reinvestigate RQ 2, the correlation analyses from Study 1 were replicated and are presented in Table 4. Whereas in Study 1 all variables except uncertainty tolerance showed significant correlations with behavior change, in Study 2 all variables showed significant links with at least one behavior variable. For comparison reasons, the same variables as in Study 1 were included in multiple regressions on the dependent variables of self-centered prepping behavior, behavior change to avoid infection, and behavior change in order to not infect others (see Table 5). Again, results showed that health anxiety was positively associated with self-centered prepping behavior and behavior change to avoid infection of oneself. Further, the results again showed that trust in official statistics was positively associated with behavior change to avoid infections of oneself and others.
Additionally, behavior change to not infect others was further predicted by being female and less right-oriented, which mirrors the pattern of Study 1. To investigate who shows more adherence to social distancing guidelines and answer RQ 3, correlations of guideline adherence with the relevant variables from Study 1 (gender, age, trust in official statistics, political orientation, and health anxiety) as well as acceptance of the measures and trust in government were analyzed in a first step. The correlation matrix can be found in Table 6. All variables except health anxiety and trust in government showed significant correlations with guideline adherence. Thus, all variables showing significant links were included as predictors in a multiple linear regression with guideline adherence as the dependent variable. Findings are presented in Table 7. Results show that adherence to the social distancing guidelines was positively associated with higher levels of acceptance of the measures, being female and being of older age. There was also a tendency for more right-oriented participants to adhere less to social distancing guidelines (p < 0.10). We also conducted a moderation analysis to test whether political orientation is a moderator of the relationship between acceptance of the measures and guideline adherence. We used Hayes' PROCESS tool (model 1). Results showed a significant interaction effect of measure acceptance and political orientation (B = 0.03, SE = 0.01, t = 2.46, 95%-CI = [0.01; 0.05]). Analyses of conditional effects revealed no relationship between measure acceptance and guideline adherence (B = 0.02, SE = 0.03, t = 0.53, 95%-CI = [-0.05; 0.07]) for less right-wing oriented participants (1 SD below the mean). For participants with average values (mean centered; B = 0.08, SE = 0.02, t = 3.25, 95%-CI = [0.02; 0.11]) and for those with a more right-wing orientation (1 SD above the mean; B = 0.11, SE = 0.03, t = 3.77, 95%-CI = [0.05; 0.17]), results showed a significant relationship between acceptance of the measures and guideline adherence. Political orientation was non-normally distributed, with a skewness of 0.08 (SE = 0.11) and a kurtosis of 0.09 (SE = 0.22), indicating a right-skewed, left-leaning distribution. The average value (M = 4.53, SD = 0.08) was slightly below the midpoint of the scale; the median was 5. Moreover, while analyses for gender showed no moderation effect (p = 0.869), we found that age was a moderator of the effect of acceptance of the measures on guideline adherence (B = 0.02, SE = 0.01, t = 2.35, 95%-CI = [0.00; 0.03]). Analyses of conditional effects revealed no relationship between measure acceptance and guideline adherence (B = 0.03, SE = 0.03, t = 0.83, 95%-CI = [-0.03; 0.09]) for participants aged around 23 years. For participants aged around 25 years (B = 0.06, SE = 0.02, t = 2.30, 95%-CI = [0.01; 0.10]) and for those aged around 29 years (B = 0.11, SE = 0.03, t = 3.95, 95%-CI = [0.06; 0.17]), results showed a significant relationship between acceptance of the measures and guideline adherence.
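Conceptually, PROCESS model 1 corresponds to an OLS regression with a product term between the mean-centered predictor and moderator, followed by probing the conditional effect at -1 SD, the mean, and +1 SD of the moderator. The sketch below illustrates this on simulated stand-in data (PROCESS itself is a macro originally for SPSS and SAS; the coefficients here are arbitrary and do not reproduce the reported estimates).

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-in data; the real Study 2 variables are only mimicked here.
rng = np.random.default_rng(1)
n = 400
acceptance = rng.normal(5.0, 1.2, n)    # acceptance of the measures (1-7 index)
political = rng.normal(4.5, 1.8, n)     # left-right self-placement (1-10)
acc_c = acceptance - acceptance.mean()  # mean-centering, as in PROCESS model 1
pol_c = political - political.mean()
adherence = 4.0 + 0.06 * acc_c + 0.03 * acc_c * pol_c + rng.normal(0, 0.5, n)

df = pd.DataFrame({"adherence": adherence, "acc_c": acc_c, "pol_c": pol_c})
model = smf.ols("adherence ~ acc_c * pol_c", data=df).fit()
print(model.params)  # main effects plus the acc_c:pol_c interaction term

# Probe the simple slope of acceptance at -1 SD, the mean, and +1 SD of the moderator.
b_acc = model.params["acc_c"]
b_int = model.params["acc_c:pol_c"]
sd_pol = df["pol_c"].std()
for label, value in [("-1 SD", -sd_pol), ("mean", 0.0), ("+1 SD", sd_pol)]:
    print(f"conditional effect of acceptance at {label}: {b_acc + b_int * value:.3f}")
```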
--- STUDY 2 DISCUSSION Study 2 successfully replicated the findings from Study 1: (a) In the further course of the pandemic, there were still differences in trust in official statistics from different authorities: again, data from China were believed much less than data from Europe or Germany, whereas data from the RKI were most trusted. (b) As in Study 1, results showed that health anxiety increases self-centered prepping behavior and behavior change to avoid infections. Also, trust in official statistics increased behavior change to avoid infections. Replicating the findings from Study 1, results from Study 2 indicate that being female, being less politically right-oriented, and having trust in official statistics increase behavior change in order to not infect others. In addition to replicating findings from Study 1, Study 2 aimed at investigating influences on adherence to social distancing guidelines. Results show that guideline adherence was positively associated with older age, being female, a less right-wing political orientation, and higher acceptance of the measures. A recently conducted study on guideline adherence during the pandemic in the United States also reports a small positive relationship with age (Bogg and Milad, 2020). However, the authors did not find a significant association with gender. Findings from previous research do, however, support the assumption that women tend to show more precautionary behaviors to avoid infections. For instance, studies show that women generally practice more frequent hand-washing than men (Liao et al., 2010; Park et al., 2010). Furthermore, findings from a meta-analysis (Moran and Del Valle, 2016) indicate inherent differences in how women and men respond to pandemic diseases: women are more likely to practice preventative behavior (e.g., face mask wearing) and avoidance behavior (e.g., avoiding public transit) than men. The finding that adherence to social distancing guidelines was positively associated with being less politically right-oriented fits with findings from studies recently conducted during the COVID-19 pandemic in the United States. Conway et al. (2020) argue that although much research suggests that conservatives are more sensitive to disease threats, they seem to be less concerned about the COVID-19 pandemic than liberals. However, the authors add that this ideological effect diminishes as experience with, and the impact of, the COVID-19 pandemic grows. Furthermore, our findings are supported by another study recently conducted during the COVID-19 pandemic, in which liberals and political moderates showed more guideline adherence than conservatives (van Holm et al., 2020). It is intuitively plausible that guideline adherence increases with acceptance of the measures. However, the moderation analysis revealed that political orientation influences the relationship between acceptance of the measures and guideline adherence. This interaction effect showed that for less right-wing-oriented participants, adherence to social distancing guidelines was not linked to acceptance of the measures. This link was only found in people with a moderate political orientation (average values) and in people with a more right-wing orientation. These findings are in line with findings from other studies in the COVID-19 context. For instance, Capraro and Barcelo (2020) also report in a recent preprint that demographic variables and political orientation are relevant characteristics in the context of protective behavior. According to their findings, being female, being older, and being left-leaning are correlated with greater intentions to wear a face covering. Also, studies by Gollwitzer et al. (2020) and Van Bavel et al. (2020) show that supporters of right-wing political parties were less likely to adhere to protective behavior compared with liberal or left-leaning individuals.
One year after Study 1 and Study 2, the COVID-19 pandemic was still having a major impact on our daily lives and causing restrictions on social contact in Germany. However, since many people may also have become accustomed to these circumstances, we aimed at reinvestigating our research questions. --- STUDY 3 METHOD As the COVID-19 pandemic progressed, restrictive measures in Germany also continued. Therefore, another goal of this research project was to investigate the research questions of the two preceding studies 1 year later. For this purpose, we conducted Study 3, a replication of Study 2. Since we had no assumptions regarding changes in perception and behavioral responses to the consequences of the COVID-19 pandemic, we did not formulate explicit hypotheses and instead reexamined our research questions. --- Participants and Procedure --- Measures We applied the same measures for trust in official statistics (Cronbach's α = 0.85), political orientation, health anxiety (Cronbach's α = 0.92), and uncertainty tolerance (Cronbach's α = 0.66) as in Study 2. This also applies to the indices of self-centered prepping behavior (Cronbach's α = 0.76), behavior change to avoid infection (Cronbach's α = 0.78), behavior change to not infect others (Cronbach's α = 0.80), trust in the government (Cronbach's α = 0.91), acceptance of the measures (Cronbach's α = 0.89), and adherence to the social distancing guidelines (Cronbach's α = 0.64).
In March 2020, the German government enacted measures on movement restrictions and social distancing due to the COVID-19 pandemic. As this situation was previously unknown, it raised numerous questions about people's perceptions of and behavioral responses to these new policies. In this context, we were specifically interested in people's trust in official information, predictors of self-prepping behavior and of health behavior to protect oneself and others, and determinants of adherence to social distancing guidelines. To explore these questions, we conducted three studies in which a total of 1,368 participants were surveyed.
--- STUDY 3 RESULTS As in Study 1 and Study 2, to answer RQ 1, people's level of trust in statistics on COVID-19 from official authorities was compared and is displayed in Table 1. The results show, as in the two previous studies, that trust in statistics from China was by far the lowest, whereas trust in statistics from the RKI was highest. To reinvestigate RQ 2, the correlation analyses from Study 1 and Study 2 were replicated and are shown in Table 8. The correlations of gender, trust in official statistics, and health anxiety with the behavior variables were stronger than in the studies from 2020, whereas the links of age and political orientation with behavior change were weaker. For comparison reasons, the same variables as in Study 1 and Study 2 were included in multiple regressions on the dependent variables of self-centered prepping behavior, behavior change to avoid infection, and behavior change in order not to infect others (see Table 9). As in the studies from 2020, results showed that health anxiety was positively associated with self-centered prepping behavior and behavior change to avoid infection of oneself. Furthermore, in 2021, health anxiety was positively associated with behavior change in order not to infect others. Further in line with the previous studies, the results showed that trust in official statistics was positively associated with behavior change to avoid infections of oneself and others. However, in 2021, these associations were much stronger. Additionally, being female was positively associated with all behavior variables, which mirrors the pattern of Study 1 and Study 2. However, political orientation was not associated with any behavior variable in Study 3. To investigate RQ 3, asking who shows more adherence to social distancing guidelines, correlations of guideline adherence with the variables used in Study 2 (gender, age, trust in official statistics, political orientation, health anxiety, acceptance of the measures, and trust in government) were analyzed in a first step. The correlation matrix can be found in Table 10. As in Study 2, age, trust in official statistics, and acceptance of the measures showed significant correlations with guideline adherence (the variable guideline adherence was only collected from Study 2 onwards). For comparison reasons, the same variables as in Study 2 were included in a multiple regression on the dependent variable guideline adherence. Findings are presented in Table 11.
As the correlational findings already indicated, results showed that adherence to the social distancing guidelines was positively associated with higher levels of acceptance of the measures, being of older age, and having more trust in official statistics. As in Study 2, we conducted moderation analyses to test whether political orientation, gender, and age are moderators of the relationship between acceptance of the measures and guideline adherence. We used Hayes' PROCESS tool (model 1). The distribution of political orientation in Study 3 was similar to that in Study 2. Here, too, political orientation was non-normally distributed, with a skewness of 0.06 (SE = 0.11) and kurtosis of -0.11 (SE = 0.21), indicating a slightly right-skewed (i.e., left-leaning) distribution. The average value (M = 4.40, SD = 0.07) was slightly below the midpoint of the scale, and the median was 5. However, results showed no moderation effect for political orientation (p = 0.954). Moreover, neither gender (p = 0.988) nor age (p = 0.837) moderated the effect. --- STUDY 3 DISCUSSION Study 3 successfully replicated findings from Study 1 and Study 2: (a) Even 1 year after the surveys in March and April 2020, there were still differences in trust in official statistics from different authorities: again, data from China were believed much less than data from Europe or Germany, whereas data from the RKI were most trusted. (b) Results from all three studies showed that health anxiety increases self-centered prepping behavior and behavior change to avoid infections. Also, trust in official statistics increased behavior change to avoid infections. Regarding behavior change in order not to infect others, results in Study 3 differ slightly from Studies 1 and 2. Whereas in the first two studies, being female, being less politically right-oriented, and having trust in official statistics were positively associated with behavior change to protect others, Study 3 indicates that political orientation is no longer a relevant predictor of behavior change in order not to infect others. Moreover, neither political orientation, gender, nor age emerged as a moderator in Study 3. Instead, health anxiety turned out to predict behavior change in order not to infect others. This suggests that the COVID-19 pandemic has become less an issue of political orientation and more one of individual characteristics related to health-related behaviors. Like Study 2, Study 3 aimed to investigate influences on adherence to social distancing guidelines. Again, results show that guideline adherence was positively associated with older age and higher acceptance of the measures. In addition, and contrary to Study 2, higher levels of trust in official information also turned out to be a relevant predictor of guideline adherence. However, no associations were found with gender or political orientation. These findings indicate that the importance of the various predictors of guideline adherence changed as the global pandemic progressed. A relevant factor in this context may be that general acceptance of the preventive measures declined substantially between the time points of Studies 2 and 3. Thus, the importance of political orientation might have decreased because support for social distancing guidelines has declined in all population groups. This trend was already suggested by Conway et al. (2020), who argue that ideological effects diminish as experience with, and the impact of, the COVID-19 pandemic grows.
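The moderation test reported above (Hayes' PROCESS model 1) is, in regression terms, an OLS model with a product term between the predictor and the moderator. The sketch below, again with simulated data and hypothetical variable names, shows that equivalent interaction model together with the skewness and kurtosis checks mentioned for political orientation; it illustrates the general technique and does not reproduce the authors' PROCESS output.

```python
# A minimal sketch (simulated data, illustrative variable names) of a moderation
# test implemented as an OLS interaction model -- the regression equivalent of
# Hayes' PROCESS model 1 -- plus distribution checks for the moderator.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import skew, kurtosis

rng = np.random.default_rng(1)
n = 500  # illustrative, not the study's N

df = pd.DataFrame({
    "adherence": rng.normal(size=n),                  # guideline adherence index
    "acceptance": rng.normal(size=n),                 # acceptance of the measures
    "political_orientation": rng.integers(1, 11, n),  # assumed 1 = left ... 10 = right
})

# Distribution checks for the moderator (cf. the skewness/kurtosis values in the text).
po = df["political_orientation"]
print(f"skewness = {skew(po, bias=False):.2f}, "
      f"excess kurtosis = {kurtosis(po, fisher=True, bias=False):.2f}")

# Mean-center predictor and moderator before forming the product term, a common
# convention when probing interactions.
df["acc_c"] = df["acceptance"] - df["acceptance"].mean()
df["po_c"] = po - po.mean()

model = smf.ols("adherence ~ acc_c * po_c", data=df).fit()
print(model.summary())  # the acc_c:po_c coefficient is the moderation (interaction) effect
```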
In contrast, trust in official information has become more relevant. This is consistent with recent findings from other studies. Bargain and Aminjonov (2020) found that higher trust was associated with decreased mobility related to non-necessary activities. Fridman et al. (2020) report that higher levels of trust in government information sources are positively related to adherence to social distancing. --- GENERAL DISCUSSION Today, there are numerous psychological studies on the COVID-19 pandemic context. However, many of these studies focus on screening for negative (mental health) effects of the COVID-19 pandemic. The aim of our studies was to capture early and later perceptions of and behavioral reactions to the COVID-19 pandemic. Our three studies give insights into three important dimensions in the context of the COVID-19 pandemic: results from March 3, 2020, to April 21, 2020, show that trust in the RKI was consistently very high, even higher than trust in the German Federal Ministry of Health, the Federal Government, and the WHO. However, for 2021, results from Betsch (2021) show that trust in general (in government and in authorities) has declined somewhat. Furthermore, the present findings show that trust in official statistics is a predictor of behavior change and guideline adherence. Therefore, effort should be made to ensure that trust in the data is maintained, especially in contexts where long-term measures are required, as in the COVID-19 pandemic. Health anxiety was linked to self-centered prepping behavior and behavior change to reduce personal risk in all three studies. These findings are not only intuitively plausible but also supported by other studies showing that anxiety is linked to safety behavior (e.g., Erceg et al., 2020). Our analyses also revealed bidirectional effects between health anxiety and prepping behavior (Studies 1-3) and between health anxiety and behavior change to avoid one's own infection (Studies 1-3). Behavior change in order not to infect others was only associated with health anxiety in Study 3. This is in line with research from Olatunji et al. (2011) and emphasizes the importance of further research in the context of health anxiety. Age was not, or only negligibly, associated with self-centered prepping behavior. This is in line with findings from the German Corona Monitor regarding panic buying (waves 1, 2, and 3: Betsch et al., 2020a,b,c). However, gender seems to be relevant when it comes to behavior change to avoid risks to oneself and to others. In all three studies, women reported higher values on the behavior change variables (both to avoid their own infection and to protect others) than men. Previous research has shown that women are more safety-oriented (Lermer et al., 2016b), especially in the health domain (Thom, 2003; Lermer et al., 2016a). Women also tend to behave more pro-socially than men in general (Archer, 1996). Our findings imply that these observations also apply during the COVID-19 pandemic. Results from the present study (samples 2 and 3) indicate a positive effect of acceptance of the measures and trust in the government, a moderate positive effect of trust in official statistics, and a small negative effect of being more politically right-wing oriented (Study 2) on adherence to social distancing guidelines. Betsch et al. (2020d) report in their Corona Monitor that Germans' acceptance of the measures had risen sharply since mid-March 2020 and then decreased somewhat, with some fluctuations, until April 2021 (Betsch, 2021).
However, overall acceptance of most of the measures was still at a high level. Our study is in line with these findings. Our results reveal that, approximately 1 year after the outbreak of the Coronavirus pandemic, adherence to official guidelines regarding social distancing had declined somewhat. Research has shown that trust in authorities is an important factor for the acceptance of environmental measures (Zannakis et al., 2015) and adherence to health guidelines (Gilles et al., 2011; Prati et al., 2011; Quinn et al., 2013; Sibley, 2020). Adherence to social distancing guidelines was higher among people who were older, female, less right-wing oriented, and more accepting of the measures (Study 2). Betsch et al. (2020d) also reported small positive effects of age and (a marginally significant effect of) being female on safety behavior (i.e., using face covering) in the context of the COVID-19 pandemic. Further analyses showed that the association between acceptance of the measures and guideline adherence was moderated by political orientation (Study 2). It should be noted that the variable political orientation was not normally distributed but showed a slightly right-skewed, left-leaning distribution. However, low values (1 SD below the mean) can be interpreted as more left-wing oriented, average values (the mean) as neutral, and high values (1 SD above the mean) as more right-wing oriented. Thus, the results can be interpreted as follows: for politically left-wing-oriented participants, acceptance of the measures had no effect on their guideline adherence, whereas data from politically neutral and right-wing-oriented participants showed a positive link between acceptance of the measures and guideline adherence. Interestingly, the antecedents of social distancing changed over the course of a year. Gender and political orientation no longer predicted adherence to guidelines in Study 3, while trust in government became more relevant. These findings are particularly important for the current COVID-19 pandemic and for future considerations in dealing with pandemics. Evidently, the importance of political orientation decreased as the Coronavirus pandemic progressed. From a practical perspective, policymakers should periodically review and challenge their assumptions about the public's perception of the pandemic situation. In this way, communication of the necessary measures can be adjusted in the best possible way. Here, it is of particular importance to maintain the trust of the public, especially when support for anti-Coronavirus measures declines. In addition to general trust in the government, however, trust in the government's competencies is especially relevant. Fancourt et al. (2020, p. 464) summarize: "Public trust in the government's ability to manage the pandemic is crucial as this trust underpins public attitudes and behaviors at a precarious time for public health." A further practical implication of these findings is that the results presented here may be helpful in developing and communicating interventions. The results confirm that perceptions and behavioral responses differed in Germany, both at the onset of the COVID-19 pandemic and 1 year later. As other studies (e.g., Warren et al., 2020) suggest, the government should not only ensure that trust in the government is and remains high but also consider how different groups of people are addressed in campaigns.
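The interpretation above (reading the moderation at 1 SD below the mean, at the mean, and at 1 SD above the mean of political orientation) corresponds to probing the simple slopes of a fitted interaction model. The following sketch, with simulated data and assumed variable names, shows how such conditional effects and their standard errors can be derived from the coefficient and covariance estimates; it illustrates the procedure rather than reproducing the study's actual estimates.

```python
# A minimal sketch (simulated data, hypothetical variable names) of simple-slopes
# probing: the conditional effect of acceptance on guideline adherence at -1 SD
# (more left-leaning), the mean, and +1 SD (more right-leaning) of the centered
# political-orientation moderator.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 500
po = rng.integers(1, 11, n).astype(float)   # assumed 1 = left ... 10 = right
acc = rng.normal(size=n)
# Build in an interaction so the probe has something to find (illustration only).
adherence = 0.2 * acc + 0.1 * po + 0.15 * acc * (po - po.mean()) + rng.normal(size=n)

df = pd.DataFrame({"adherence": adherence,
                   "acc_c": acc - acc.mean(),
                   "po_c": po - po.mean()})
fit = smf.ols("adherence ~ acc_c * po_c", data=df).fit()

b = fit.params
cov = fit.cov_params()
sd = df["po_c"].std(ddof=1)

for label, m in [("-1 SD (more left)", -sd), ("mean", 0.0), ("+1 SD (more right)", sd)]:
    # Conditional slope of acceptance at moderator value m, with its standard error.
    slope = b["acc_c"] + b["acc_c:po_c"] * m
    se = np.sqrt(cov.loc["acc_c", "acc_c"]
                 + m**2 * cov.loc["acc_c:po_c", "acc_c:po_c"]
                 + 2 * m * cov.loc["acc_c", "acc_c:po_c"])
    print(f"{label:>18}: effect of acceptance = {slope:.3f} (SE = {se:.3f})")
```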
Today, more than ever, researchers are called upon to replicate research (Bonett, 2012; Wingen et al., 2020). This can be done through conceptual or exact replications (e.g., Stroebe and Strack, 2014). We consider conceptual replications to be especially important: rather than repeating exactly the same procedure, they test whether the same basic idea yields the same results. At the time of our data collection, it was not yet possible to foresee what research on the COVID-19 context would be like. We very much welcome the fact that so many scientists are taking up this relevant topic. This will increase the likelihood of reducing the negative consequences of future challenges such as this pandemic. Some limitations of the study must be mentioned. All three studies were correlational cross-sectional studies. Therefore, no cause-effect relationships can be proven, and future studies should consider longitudinal designs. As in many psychological studies, our samples were convenience samples and consisted of students. However, since the institution where participants were recruited is a part-time university, the students are all employed and on average older than full-time students. Furthermore, in all studies, most of the participants were female. Women tend to perceive higher risks and show more risk-averse behavior than men (Byrnes et al., 1999; Harris and Jenkins, 2006), and are more anxious than men (Maaravi and Heller, 2020), which may have influenced the results. In general, there is a high consistency between our results and those of similar studies. For example, other studies have shown that women report higher levels of social distancing than men (Pedersen and Favero, 2020; Guo et al., 2021). This is in line with our finding that being female is a predictor of greater adherence to social distancing guidelines and of behavior change in order not to infect others. Therefore, the unequal gender distribution in our sample does not seem to have distorted the results. Nevertheless, more emphasis should be put on a balanced gender distribution in future studies. Since we asked relatively personal questions (e.g., about prosocial behavior), it cannot be guaranteed that there is no social desirability bias in the data. Socially desirable responding to questionnaire items is a general problem in studies relying on self-report. Consequently, future studies should aim to replicate our research findings with more indirect measures. However, the consistency of our results with the current state of research suggests that the findings can be successfully replicated. Another important limitation concerns the fact that we only measured behavioral intentions and not actual behavior. Thus, future research should focus on measures that capture actual behavior. Another interesting approach for future research is to consider individualism and collectivism. The results of a recently published study analyzing data from 69 countries show that the more individualistic (vs. collectivistic) a country is, the higher its COVID-19 infection rates were (Maaravi et al., 2021). Furthermore, future studies in the COVID-19 context should investigate the influence of information sources such as social network platforms in the context of trust (Bunker, 2020; Limaye et al., 2020). Overall, the present findings are helpful for targeting specific groups in preventive campaigns in the context of a pandemic. The fact that differentiated communication can be relevant is also described by Warren et al.
(2020) in the COVID-19 vaccine context. A review paper by Bish and Michie (2010), conducted to identify key determinants of safety behavior in the context of the 2009 H1N1 influenza pandemic, reports that being female and of older age is linked to adopting safety behaviors. This is also confirmed by the results of the present studies for the COVID-19 context. In addition, trust, a less right-wing political orientation, and acceptance of the measures were shown to be relevant variables for safety behavior. These findings show how important it is to consider individual differences when it comes to prevention measures implemented on a large scale for the sake of a greater good. --- DATA AVAILABILITY STATEMENT The original contributions presented in the study are publicly available. The data can be found at: https://osf.io/y7hxe/. --- ETHICS STATEMENT Ethical review and approval was not required for the study on human participants in accordance with the local legislation and institutional requirements. The patients/participants provided their written informed consent to participate in this study. --- AUTHOR CONTRIBUTIONS All authors developed the study concept, contributed to the study design, and interpreted the results. Material testing and data collection were performed by EL and MH. The data were analyzed by EL and MH. EL drafted the manuscript, and MH, MR, SG, and FB provided critical revisions. All authors contributed to the article and approved the submitted version. --- Conflict of Interest: The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. Publisher's Note: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher. Copyright © 2021 Lermer, Hudecek, Gaube, Raue and Batz. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
Introduction This paper examines the intersection between environmental pollution and people's acknowledgement of health concerns in peri-urban India. These peri-urban spaces have been "crying out for attention" [1] and are places where neoliberal policies, real-estate booms, land speculation, information technology advances, and the relocation of industrial waste have "transformed the pace of development" [2]. They are the consequence of urban expansion and a visible manifestation of urban socio-spatial inequalities. One such area, explored in this paper, is situated between the rapidly expanding cities of New Delhi (India's capital) and Ghaziabad (an industrial district in Uttar Pradesh), where urban growth has created a peri-urban interface: "a territory between" rural and urban livelihoods, activities, and services [3]. Peri-urban spaces are generally characterized by a predominance of poor and disadvantaged residents; a lack of services, infrastructure and facilities; degraded natural resource systems [4]; and industrial hazards [1,2,5]. This research focuses on Karhera, a peri-urban village sandwiched between Delhi and Ghaziabad, which was, until about two decades ago, predominantly agricultural. Despite being surrounded by urban growth, some of Karhera's residents have continued to practice agriculture. This emphasis on agriculture forms an important component of Karhera and sets it apart from the rapidly urbanizing spaces around it. However, the nature and scale of this agriculture have changed considerably. Cereal crops (maize, wheat, rice and sorghum) and vegetables have been replaced by spinach, grown to take advantage of the urban demand for fresh vegetables. Urbanization has also affected the natural environment and the resources used for agriculture, with less available land for cropping and fewer spaces to keep livestock. Other environmental resources, such as water, have become polluted and degraded, making it hard to continue farming. Not all is negative, however, and these changes have been accompanied by new farming opportunities. Karhera is thus an ideal context in which to explore the significance of peri-urban places in relation to the "dynamics, diversity and complexity" of pollution and health [6,7]. Such places have particular place-based characteristics and face significant sustainability challenges [8,9]. Karhera, like India's other peri-urban places, is also a physical manifestation of globalization processes, created and molded by regional agglomeration, liberalization, urbanization, global economic integration, and "re-structuring for globalized systems of production and consumption" [10,11]. Considerable attention has focused on how communities identify pollutants and toxins in their localities, and on how their mobilization can and does effect change. Much of this has emphasized participatory processes, in particular the collective actions undertaken in the form of popular epidemiology, often in conjunction with sympathetic researchers to co-produce scientific evidence of pollution and ensure engagements with policy makers [12][13][14][15][16]. While collective mobilization is seen as a viable means of challenging vested interests and of forcing government and policy actors to pay attention to environmental pollution and health threats [17], collective activism and citizen science research have not always guaranteed such emancipatory outcomes [12].
Until recently, however, little attention has been paid to contexts where citizen science alliances do not occur, and where community residents fail to publicly acknowledge, and do not collectively act upon, the intersections between environmental pollution and health. This recent research has explored the factors which constrain mobilization and, in doing so, has shown that this is not simply a result of the well-known diversionary tactics used by the state and by political and economic elites to forestall resistance (such as media control, limiting citizen participation, emphasizing economic benefits, discrediting scientific evidence of harm to health, casting blame on potential sources of contamination, and bullying, ostracizing, and discrediting activists) [18]. Adams and Shriver [19] show, for example, how protests against coal mining in Czechoslovakia struggled to direct their activism as economic and political flux created a situation where the targets were vague and ambiguous. Auyero and Swistun [20] explore the "slow violence" perpetrated by petrochemical companies in an Argentinian shantytown, where the environmental damage is neither dramatic nor visible. Here the slow accretion of toxins results in long-term habituation, which in turn has meant that, even once the extent of the damage was known, residents remained uncertain and conflicted about their exposure to contamination and the associated risk. This research has also recognized that citizens may welcome the economic opportunities that produce environmental degradation and corresponding health risks because of their poverty, economic dependence, and marginalization [18,21,22]. Underlying all of this work is an emphasis on place and economic context as shaping people's responses to environmental pollution and health threats, showing that the "relational interplay between place characteristics and their meaning-making for health is often contingent and contested" [7]. Whereas environmental and health hazards in peri-urban areas have been thoroughly documented [23,24], this paper adds to an emerging body of work which examines people's conflicted identification of health and environment threats in peri-urban areas in the global south, and the place-based factors which shape the potential for collective action against environmental pollution. This article addresses this knowledge gap by exploring the peri-urban area of Karhera where, despite emerging environment and health threats associated with urbanization, there is little evidence of citizen mobilization tackling these issues. We investigate why this might be by constructing a descriptive demographic and social profile of the area and its residents, and by exploring their perceptions and feelings regarding the intersection of environmental pollution and health in their rapidly changing peri-urban environment. After outlining the methods used in this research, this article goes on to present findings, including the contextual and demographic profile of the area and how it has changed over time as a result of urbanization. Highlighted are changes in the socio-economic composition of the area, shifts in land and water use/availability, and associated livelihood strategies, as well as increasing levels of apparent environmental pollution, growing potential health risks, and the perceptions of residents around these issues.
This is then followed by a more in-depth discussion section which synthesizes these multiple shifts and the perceptions of residents to explore how these dynamics interact with each other in complex ways that ultimately undermine the potential for collective action. By making these underlying dynamics visible, this article provides important insights into the challenges faced by citizens and civil society groups who seek to build collective action movements against pollution and health risks in peri-urban areas. --- Materials and Methods In this paper, we integrate a relational view of place, which sees place as "having physical and social characteristics... [which] are shaped by and given meaning through their interactions with politics and institutions, with one another and, most importantly, with the people living in a place" [7], with an interpretive approach, which seeks to integrate people's views with "an analysis of cultural phenomena, social conditions and structural constraints" [25]. We focus on people's diverse interpretations of facts and their emic experiences; on how they create meaning through social interactions; on how relationships and social dynamics have changed over time; on the effects of formal policies; and on political processes and power relations. Fieldwork was undertaken in Karhera, Ghaziabad District, between August 2014 and May 2015, using a variety of fieldwork methods which sought to capture multiple ways of portraying and comprehending the place, Karhera [7]. This included surveying 1788 of the 2042 households in September 2014. The survey asked about household composition, caste, primary and secondary sources of livelihood, home ownership, perceptions of health and hospitalization (over the past 5 years), and land ownership. The survey was conducted by the authors of this paper, based at the Centre of Social Medicine and Community Health, Jawaharlal Nehru University. Researchers aimed to survey every household in Karhera, but 152 households were unwilling to be interviewed, and in 102 households no-one was home and doors were locked. Given that 88% of households ultimately participated in the survey, it is reasonable to assume that the data are representative of the community as a whole. The researchers asked questions and completed the answers on paper survey instruments. The data were processed with SPSS version 22.0 (IBM Corporation, New York, NY, USA) and used to identify emergent themes around environmental degradation, pollution, health, and emergent risk. Qualitative research methods, undertaken between October 2014 and May 2015, provided a detailed understanding of agricultural livelihoods in relation to changing circumstances and urbanization. Twenty in-depth semi-structured interviews were undertaken, 10 with men and 10 with women who were identified from the survey. The following criteria informed our selection: involvement in agriculture (transporting, buying or selling produce, growing crops, working as laborers or sharecroppers, or leasing agricultural land); being men or women from migrant households, or men and women who were original inhabitants; and willingness to participate in the research. Informants came from different castes and socio-economic groups. These discussions usually lasted about an hour, and interviews took place in locations that were mutually convenient, such as participants' homes or fields. These face-to-face interviews were conducted by the researchers in Hindi, recorded, translated, and transcribed.
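The survey coverage figures quoted above can be checked with a few lines of arithmetic; the sketch below does this and also shows, with a hypothetical count, how the household percentages reported later in the findings are derived from the survey totals.

```python
# A small arithmetic check using the counts given in the text; the category count
# at the end is hypothetical and only illustrates how survey percentages are derived.
total_households = 2042
refused = 152
not_home = 102
surveyed = total_households - refused - not_home

assert surveyed == 1788
response_rate = surveyed / total_households
print(f"Surveyed {surveyed} of {total_households} households "
      f"({response_rate:.1%} response rate)")  # ~87.6%, reported as roughly 88%

# Example: converting a survey count into a percentage of surveyed households
# (429 is a hypothetical count, chosen only for illustration).
agriculture_primary = 429
print(f"{agriculture_primary / surveyed:.0%} of households in this example "
      "would cite agriculture as their primary income")
```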
The interviews examined agricultural-based livelihoods, people's use of natural resources (water, firewood, etc.), and personal experiences of urbanization, poverty, and community relations. We then undertook four participatory mapping sessions with (a) men involved in agriculture; (b) women involved in agriculture; (c) men who were actively farming or marketing spinach; and (d) women associated with buffalo rearing and/or spinach. Participants were recruited in ways customary to anthropological research practice: through formal and informal interaction with the researchers, who lived in the community for periods of the research. These sessions, which lasted half a day and took place in a local Hindu temple, brought together small groups of people (between 5 and 12) associated with the dominant crop (spinach) and animal husbandry to collectively hand-draw maps of the Karhera they had known 20 years ago. As a part of the participatory mapping sessions, participants reflected on changing agricultural livelihoods; new uses of space and resources; and the implications of these for labor practices, water availability, and gender relations. They also discussed topics such as food availability, distribution and exchange; food preferences; the nutritional and social values associated with agricultural crops; changing political relations associated with access to land and water; and poverty and health. The interviews and participatory mapping sessions complemented the survey by providing further insights into the relationships between agriculture, livelihoods, and people's collective responses to social and environmental change, and by facilitating triangulation of patterns and trends. Finally, a review of media articles on environment, pollution, and health in the Trans-Hindon region, published between 2005 and 2015, was undertaken. This was initiated by searching the Centre for Science and Environment (CSE) website. The CSE website hosts an India Environment Portal (http://www.indiaenvironmentportal.org.in/) which archives articles from leading newspapers, books, magazines, and other sources related to issues of environmental concern. The CSE's magazine, "Down to Earth", was searched for significant articles related to the Hindon River and the Trans-Hindon region, and the websites of two English daily newspapers (The Hindustan Times and The Hindu) and two regional-language daily newspapers (Jagran and Amarujala) were selected because of their popularity and readership in the region. The search terms, identified from the qualitative research, were: agriculture, fertilizer, pesticides, health risks, diseases, cancer, pollution, waste water, effluents, industrial waste, poverty, and urbanization. Special attention was focused on articles that linked environment, pollution, and health with political mobilization. This exercise allowed us to see what types of pollution and health issues were being taken up by local newspapers, which may reflect, or influence, residents' levels of concern. Standard social science ethical procedures were followed, including adhering to the principles of informed consent and confidentiality. Pseudonyms are therefore used in this paper. Participants were clearly informed that they could withdraw at any time without facing negative repercussions for doing so. Ethical approval was received from the University of Sussex (ER/PLW20/1). A final methodological consideration concerns the strengths and limitations of this study.
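The media review described above is essentially a keyword-and-date filter over an article archive. The sketch below illustrates that step in Python on a tiny hypothetical dataset; the field names, example articles, and mobilization flags are assumptions for the illustration and do not reproduce the CSE portal's actual structure or the review's results.

```python
# A minimal sketch (hypothetical data and field names) of the media-review step:
# filtering articles from 2005-2015 for the listed search terms and flagging
# those that also mention mobilization.
import pandas as pd

SEARCH_TERMS = ["agriculture", "fertilizer", "pesticides", "health risks", "diseases",
                "cancer", "pollution", "waste water", "effluents", "industrial waste",
                "poverty", "urbanization"]
MOBILIZATION_TERMS = ["protest", "mobilization", "activists"]  # assumed flags

# Hypothetical archive; in practice this would be exported article metadata.
articles = pd.DataFrame({
    "source": ["Down to Earth", "The Hindu", "Jagran"],
    "date": pd.to_datetime(["2008-06-01", "2013-02-11", "2016-01-05"]),
    "text": ["Industrial waste and effluents choke the Hindon river...",
             "Residents protest pollution and health risks near Ghaziabad...",
             "New metro line announced for Trans-Hindon region..."],
})

in_window = articles["date"].between(pd.Timestamp("2005-01-01"),
                                     pd.Timestamp("2015-12-31"))
matches_terms = articles["text"].str.lower().apply(
    lambda t: any(term in t for term in SEARCH_TERMS))
relevant = articles[in_window & matches_terms].copy()
relevant["mentions_mobilization"] = relevant["text"].str.lower().apply(
    lambda t: any(term in t for term in MOBILIZATION_TERMS))
print(relevant[["source", "date", "mentions_mobilization"]])
```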
While this research offers unique insights into Karhera's residents' perceptions of environmental pollution and health risks in their changing locality, it cannot provide hard evidence on the links between pollution and resulting health problems, as scientific investigations were not undertaken. That said, the strength of this paper lies in its exploration of local perceptions of pollution and health risks in contexts where these are not highly evident, and in its assessment of the implications of this for potential future mobilization. Another limitation of this study is that it was not designed for replicability. This is not unusual in relational, place-based research, where the emphasis is on situating people's personal accounts within the broader socio-economic and political contexts associated with a very specific place. --- Findings --- Ghaziabad and Karhera: Context and Demographics Ghaziabad District consists of four Tehsils (divisions), the largest and most densely populated of which is Ghaziabad Tehsil. It includes a diversity of urban and peri-urban settlements, with people variously accommodated in villages, unauthorized colonies, slums, and middle-class colonies. This area has been enormously affected by the transformations in nearby Delhi, for example through the loss of farmland to urban development and through the relocation of polluting industries from Delhi to Ghaziabad [26,27]. The population in Ghaziabad has grown as people are attracted by its urbanizing nature, including rural migrants looking for work and urban populations relocating because of cheaper housing and improved commuting possibilities. Karhera is a former agricultural village situated within the Ghaziabad Municipal Corporation, an administrative area of Ghaziabad Tehsil. In 1987 this area was converted into a Nagar Parishad, a designation indicating its urban status. Karhera is bounded on one side by a line of industries and production units, and on the other side by the Hindon River. In the 2014 survey undertaken as a part of this research, almost half of the people surveyed were original inhabitants who lived in Karhera when it was an agricultural village (44%), while the remainder were migrants (56%), attracted by the industries and the potential for work in nearby cities. Karhera thus has a heterogeneous population, which includes people of different castes, religions, and geographic origins. The 2014 survey shows that three quarters of the original inhabitants (75%) are upper-caste (primarily Rajput), and the remaining quarter is of lower-caste origin (primarily Dalit). During participatory mapping exercises, Karhera's original residents reported that, 20 years ago, they had relied on agriculture as a primary source of livelihood. By 2014, our survey showed that only 24% of Karhera's households (16% of original households and 8% of migrant households) still cited agriculture as their primary source of income. Agriculture has, with increasing urbanization, become a secondary occupation and increasingly feminized. Animal husbandry too has decreased, and is now primarily for subsistence purposes. More than a third of both original and migrant households (41% of original inhabitants and 37% of migrants) depend on desk-based private sector employment such as teaching, insurance, call-center work, and estate dealing. A further 18% of original and 10% of migrant households have businesses as their primary source of income.
These include repair and grocery shops, transport businesses, factories, and garages. Only a small percentage of original inhabitants (6%) and no migrants held government posts or were former government employees. Only 15% of the original households, as compared to 30% of migrant households, relied on manual labor as their primary livelihood (drivers, factory workers, and mechanics). The majority of these manual laborers are lower-caste. --- Transformations and Pollution in Karhera Urbanization has involved several major transitions, in terms of how busy and built up the area is, changes to the water supply, and reductions in land availability, in conjunction with environmental degradation. Karhera, once a quiet rural village, is now a bustling area. New roads connect Ghaziabad to Delhi, and traffic is constant and accompanied by noise, people, and pollution. New buildings, shopping malls, and urban activities now characterize the area [3,28]. Water has become scarcer. Large amounts of water are consumed by the industrial sector and by the government of Ghaziabad, which has installed submersible water pumps in peri-urban locations in order to supply water to the new urban establishments, including high-rise apartments and malls. The result has been a lowering of the water table. Areas such as Karhera have been affected by these urbanization processes, and their tubewells no longer provide adequate sources of water [3,26]. Water shortages have also reduced the amount of land suitable for cultivation. For example, much of the Hindon River bank is no longer suitable for cultivation. Agricultural land has also been diverted to urban use. This includes industrial clusters, infrastructure construction, new roads, real-estate development, and urban leisure activities. More specifically, the government has, over the years, acquired Karhera's rural land: 42 acres in the 1960s for the Hindon Air Force base and the creation of the Loni Industrial area; 104 acres for the 1987 Vasudhara Vikas Awas scheme and new urban settlements; and land for the Ghaziabad Master Plan, which came into force in 2005 [28]. In 2014, a "City Forest" was created to meet the leisure and greening demands of an urban middle-class population, and a flyover and power station were built. Although some farmers reported receiving limited compensation for these land acquisitions, during interviews and mapping sessions, Karhera's residents emphasized that land had been taken from them. Residents also sold land, in the early 1990s, to outsiders who built new residences in the "new Karhera colony". At the time of the research, the proposed development of the metro line led Karhera's land-holders to believe that they would soon lose more land. --- Agriculture in a Rapidly Urbanizing Context In 2014, almost half of Karhera's households (42%) still relied on agriculture to make some contribution towards their livelihoods. For just under a quarter of households (24%), it was their primary source of livelihood, and it provided a secondary income for nearly a fifth of households (18%). There have nonetheless been significant shifts from predominantly cereal-based farming to intensive, small-scale spinach farming. This contrasts with Karhera's previous status as a primarily agricultural area with a wide range of staple and vegetable crops, best known for its wheat and carrots. Irrigated agriculture occurs on fields located about 4 km from Karhera's residential area.
These fields are close to the Hindon and have traditionally been irrigated by water from the river and wells. Over time, as the water table dropped, farmers compensated by using borewells. These in turn have dried up, and those farmers who can afford to have installed submersible water pumps. When talking about these fields, the villagers refer to crops grown in "clean water". As shown above, however, both the Hindon and the groundwater are highly polluted by industrial contaminants. A collective irrigation system, designed and managed by Karhera's residents and the Panchayat (the local form of village governance prior to Karhera becoming an urban ward), irrigates fields located closer to Karhera using domestic waste water (and, where feasible, tubewell water). Karhera's villagers decided upon this irrigation system about 25 years ago, when domestic wells were becoming increasingly saline/polluted and other traditional water sources (community ponds or jhora) were filled in to make way for new roads. Residents also installed submersibles to ensure their domestic water supply. These factors, in conjunction with piped water and new urban behaviors (daily washing), led to large quantities of domestic wastewater or gandapani (literally, dirty water). As submersibles were expensive and water precious, drains were built to direct gandapani to these fields for irrigation. Whereas people had previously grown a wide variety of crops, including wheat, rice, carrot, turnip, bottle gourd, sorghum, and fodder, using gandapani facilitated green leafy vegetable growing in response to urban market demands. Spinach soon became "the crop". It thrived well in domestic waste, had a short production cycle, was in high demand, and was not liked by wild animals. As one resident explained during an interview: "It so happened that because of the shrinking of the forests, wild pigs and NielGai (antelope) started to destroy our standing crops". The wastewater used in the irrigation system was, at first, only from cooking and bathing. However, as submersibles were installed in the village, even more wastewater became available. This eased the villagers' needs for domestic water and meant that women could wash clothing at home, rather than having to go to the river or a stream. This also meant that flush toilets became more common, and as more and more migrants settled in Karhera, so the amounts of wastewater increased. In addition, once Karhera became an urban ward, the village Panchayat disintegrated. As a consequence, the drains were no longer maintained and gandapani came to contain fecal matter. Spinach thrives in this polluted water, growing very quickly and providing a harvest every 20-30 days, all year round. A group of Rajput (upper-caste) elderly women discussed the subsequent prioritization of spinach farming in a participatory mapping exercise: Carrot and wheat will take a minimum of 4 months to mature. But in the same time spinach grows throughout the year and it hardly takes one month, now see this is one kiyari (plant bed), and in this kiyari the spinach is matured now. We will cut this and at the same time in the very next kiyari we sow another spinach. So it's easy and it will work and carry on like this. It does not need much physical work, there is no need to plough the field, just give water, put the seed in or spread the seeds into the field. That's all it needs.
During participatory sessions, residents explained that spinach farming is lucrative and has brought financial stability to those families engaged in cultivation. It has also enabled women to achieve financial independence. For example, a widow named Shanti Rani was one of the first women in Karhera to take up, and to survive exclusively on, spinach cultivation. As other women, discussing Shanti, explained: "Yes, we are making good money. Look she is growing spinach by herself; she cuts the harvest and sells in the market. So the money remains with her". In Karhera, spinach grown in gandapani is preferred to that grown in "clean water" because the use of wastewater reduces production costs (water is free and less fertilizer is required). In addition, there is no differentiation in sale price, and some men, in participatory mapping sessions, suggested that gandapani spinach is more marketable: "The spinach which is grown in unclean water sells faster in the market because of its shine. It shines because it is getting pure and natural dung so this is the difference". Mother Dairy is the only buyer that specifies that crops must be irrigated with "clean water". Created as a government subsidiary, this private company buys and sells agricultural produce. Vegetables and fruit are dealt with through Safal, which aims to establish a "direct link between growers and consumers" in order to provide fresh, healthy produce. In the Delhi metropolitan region, Safal is synonymous with "quality, trust and value". However, Safal/Mother Dairy does not pay more for this spinach. Arable land supplied with gandapani is highly desirable because of the access to free water; the proximity to the village, so that less time is spent walking and transporting equipment and produce; and the frequent and high spinach yields. Moreover, "clean water" fields require infrastructure to ensure constant irrigation. The tubewells and borings previously managed by farmers no longer provide water because of the lowering of the groundwater table. The installation of submersibles is expensive and the benefits uncertain. As Suraj explains, "they also need to spend on generator to run the submersible and tractor to till the land. It means only those farmers who can spend around 5 lakhs (or 500,000 rupees) can cultivate". Take, for example, the following two cases, both derived from the in-depth interviews: Jayawati and her husband's tubewells failed when the government submersible, which provides water to middle-class colonies in Ghaziabad, was installed. Previously it was possible to access ground water at 30 feet, but now, Jayawati says, it is below 250 feet. "Earlier we have boring in our field, but it does not work now". They decided to install a submersible. "I cannot allow my children to starve, so I have taken a loan of 2 lakh rupees and have installed the submersible in the field". Sushil's land is located near the foothills, where there is no connectivity to the gandapani drainage system. The tubewell on his land failed due to the government submersible that has been installed alongside it. This has meant that he has to buy water at the rate of 350 rupees per hour. For him, agriculture has become very costly. After all the expenses on water and fertilizer, he makes only a small profit or, in his words, he "is hardly left with some money". As a consequence, some land owners allow their lands to lie barren as they cannot afford to farm. Others have, for the same reason, decided to sell.
As one farmer explained, "Here the price of land is very high i.e., 50 thousand per gaj [square yard]. So, the farmers prefer to sell their lands, than starving". Gandapani spinach farming is, however, thriving. --- Environmental Pollution Despite these pressures on land and water, almost half of Karhera's population (42%) is involved in farming in one way or another. This means that these people are intimately connected with the environment on a daily basis. They are the ones most affected by, and best placed to recognize, environmental pollution and degradation. They are the ones most likely to experience any health consequences because of their close contact with the water, air, and soil. In Karhera, pollution concerns primarily the water (described above) and the air. As is the case in many of India's peri-urban villages, Karhera's residents are aware of industrial contamination. Industries are known to be pumping untreated water and effluents into the Hindon. This contamination has percolated into the soil, making ground water unsuitable for human consumption. Industries and factories have also reportedly pumped toxic water directly into the water table. These forms of pollution are well recognized by Karhera's residents, who all stressed the poor quality of water. They complain that this has polluted the river, which had previously been a significant source of water. One such example, shared in an interview, comes from Umesh, who has lived in Karhera all his life: For the villagers Hindon happens to be a very good river. We used to drink water from Hindon. We used to take bath also. Sometimes while working in the fields we used to even drink water. The water of Hindon used to be so clean that you can easily see any coin falling there. However, with the coming of the industries, the river water started to become dirty. The drains of the cities were connected to Hindon; the drains of the factories also were connected. Till what level would the river bear this pollution? A second example comes from Harish Singh, a retired veterinarian and now farmer from Karhera: "From the time when the factories started to drain out the contaminated water, from then onwards the Hindon River started to get polluted". Nowadays, the water from the Hindon is, as villagers say, "black" from the factory drains, and is no longer used for irrigation or drinking. This polluted water has been linked to a range of diseases. As one upper-caste male vendor explained during the participatory mapping: "Look, some survey revealed that the Hindon River is causing cancer. There are cases of cancer in the village. Many people have been affected. The reason has been the [contamination of the] drinking water". This articulation of environmental health threats is echoed by Jamuna Devi, also from Karhera. She said "all the health problems such as cancer, high blood pressure, and joint pains among the people of young age are caused by the water". Using sewage water for crop irrigation can also have negative consequences. Srinivasan and Reddy [29] argue that it can lead to increased levels of morbidity, and there is scientific evidence that heavy metal contamination in crops can stem from wastewater vegetable production, with spinach and other leafy vegetables being particularly prone to heavy metal uptake [30][31][32]. Growing spinach also requires significant amounts of time spent handling the crops and being exposed to the polluted water.
This exposure can, after a few years, result in a range of ill-defined symptoms such as headaches, skin diseases, fever, stomach ailments, and diarrhea. Microbial infections (including pathogenic viruses, bacteria, and protozoa) may also be transmitted in this water. In the peri-urban areas of Hyderabad City, poor water quality produces "high morbidity and mortality rates, malnutrition, reduced life expectance, etc." [29]. This is particularly prevalent amongst women living in the villages, because of the time spent weeding and their extensive contact with the soil. There is, however, no uniform articulation of environment/health threats associated with gandapani and spinach farming in Karhera. As revealed by the in-depth interviews and participatory mapping, not everyone is comfortable with this form of irrigation and with the consumption of produce grown in wastewater. As Umesh says, "the spinach grown in gandapani is not healthy. The root absorbs the dirty water and the polluting agents. These agents then enter into the plant. So when we eat that it enters the human body and causes disease". Amber Singh, who cultivates a variety of vegetables, avoids buying vegetables from the market as he cannot be sure about the water used to irrigate them. He and his family are still able to irrigate their large landholdings with "clean water", and they consume only this produce. Similarly, Bina only eats spinach grown in "clean water". She says that the wastewater used for irrigation contains latrine and toilet waste from all Karhera's households: "Everybody has put pipes and there are neither ditches nor tanks. And the (toilet) waste goes directly to the drain. And so their crop grows faster.... the impact on health is apparent. We never eat that spinach". Bina's mother-in-law added that, recently, her son "had got dhaniya [coriander] and it looked dirty... When I picked up, I saw it had feces on it". This was taken as a clear indication of health-harming contamination. These residents, who do articulate concerns with gandapani, have developed strategies to protect themselves from potential contamination. For example, because Kasturi Devi develops allergies when she is exposed to the wastewater, she wears shoes to protect her feet when working in the fields and, immediately after leaving the fields, washes her hands with Dettol. However, despite perceiving health risks related to gandapani spinach consumption, these residents' concerns did not inspire collective mobilization to challenge the practice (discussed further below). Some people believed, however, that the spinach or exposure to gandapani was not unhealthy and articulated this in interviews. Jeevan Lal, for example, argued "There is no disease here due to spinach cultivation", and others agree that the water they use for irrigation is "just household water". As such, and as Santosh explained, there is no harm in using gandapani for irrigation and consuming the spinach. Dhanush similarly points out that the "gandapani remains in the roots of the spinach", and when the spinach is grown, the roots are thrown away. As a result, "there is no effect on health. And [this is evident because] for the past 16-17 years, sewage water is used to grow spinach [and no-one has become ill]". These inhabitants consume this spinach and do not articulate any concerns about possible environment/health threats. Air pollution, like water pollution, is highly visible in Karhera.
During our first community meeting with village elders, they pointed to the black smoke emitted from one of the nearby factories. Residents complained that washing hanging on the line became contaminated with black soot and questioned whether this soot was also entering their lungs and causing harm. According to the villagers, air pollution was the reason behind the increasing cases of non-communicable diseases. They drew a direct correlation between the health of Karhera's residents and the proximity of the factories, arguing that the factories caused tremendous harm. Tezpal Singh, for example, suggested that the toxins in the air could be causing cancer in the village. During a community mapping exercise, the men said: Diabetes, cancer, high [blood] pressure can be seen more [frequently]. There is a factory at the vicinity of this village [referring to a dye factory which colors jeans and/or a rubber factory which burns rubber]. The smoke from this factory spreads into the village. The polluted air from the industries is also seen to affect crops. During the same community mapping exercise, the men directly associated polluted air with a plant disease called Chandi (lit. silver) which destroyed crops. The glittering coating and fungus was most prevalent each year during Diwali, and residents linked this to the additional contaminants in the air, caused by factory emissions combining with Diwali pollution. Others commented that the factories exerted a tremendous negative impact on the health and agriculture of the village. Umesh said "we can see black marks on the leaves of the plants. The leaves of the plants get covered with the tiny black dust that comes in the air from the factory". Rajni argued that, sometimes, the spinach leaves in her fields shrink and dry due to this polluted air; at other times the leaves are infected by a fungus and her crop spoils.
This paper examines the intersection between environmental pollution and people's acknowledgements of, and responses to, health issues in Karhera, a former agricultural village situated between the rapidly expanding cities of New Delhi (India's capital) and Ghaziabad (an industrial district in Uttar Pradesh). A relational place-based view is integrated with an interpretive approach, highlighting the significance of place, people's emic experiences, and the creation of meaning through social interactions. Research included surveying 1788 households, in-depth interviews, participatory mapping exercises, and a review of media articles on environment, pollution, and health. Karhera experiences both domestic pollution, through the use of domestic waste water, or gandapani, for vegetable irrigation, and industrial pollution through factories' emissions into both the air and water. The paper shows that there is no uniform articulation of any environment/health threats associated with gandapani. Some people take preventative actions to avoid exposure while others do not acknowledge health implications. By contrast, industrial pollution is widely noted and frequently commented upon, but little collective action addresses this. The paper explores how the characteristics of Karhera, its heterogeneous population, diverse forms of environmental pollution, and broader governance processes, limit the potential for citizen action against pollution.
--- Discussion: Peri-Urban Living Environmental justice movements have worked with communities to challenge environmental pollution while simultaneously addressing health inequities [12][13][14]. In peri-urban areas, environmental justice issues are often particularly stark. While all peri-urban residents may experience air and water pollution, the poor are disproportionately affected in that they also lack decent sanitation and access to medical services, while working in unregulated conditions and with contaminated soil and crops [24,33], and may be particularly dependent on natural resources for their livelihoods [21,22]. Furthermore, they have far less ability to control their exposures and less choice. They cannot escape the unsavory water by purchasing expensive drinking water. Their constant and extensive exposure to a wide range of pollutants threatens their livelihoods and health. As Douglas argues, "there is a critical peri-urban human ecology where healthy crop plants and healthy human life go hand in hand" [24]. In other peri-urban areas of India, there has been considerable concern about vegetable production and the exposure to toxins and pollutants. Water and air pollution have been the subject of numerous newspaper articles and have, on occasion, resulted in civil society protests against polluting industries. In Karhera, there have only been a few isolated attempts to address environmental pollution despite official recognition of the contaminated environment [26]. Some of Karhera's residents had complained about a particular factory to the police station. But nothing was done. Efforts such as these underline the lack of collective action in Karhera. There are no community NGOs addressing pollution, there are no local attempts to treat wastewater before irrigating, there are no Karhera activists, and there are no local political leaders articulating environmental concerns (discussed further below). The reasons for this stem, in part, from the diverse views on whether spinach grown in gandapani is damaging to health and, in part, from the heterogeneous nature of the community, the mutual interdependencies between these residents, and the diverse ways in which different members of the community benefit (or lose out) from urbanization. Conventional literature focuses on how low-income, marginalized racial or ethnic communities tend to experience much higher levels of exposure to toxins [34][35][36][37]. These differential levels of risk mean that low-income communities are often far more aware of the pollution and toxins than middle-class residents, who are better able to control their environments through mitigating measures [38]. Few studies examine contexts where the people living in low-income settlements do not recognize the health challenges associated with toxins or pollutants (but see [37]), or contexts where poor, marginalized communities and middle-class residents live cheek-by-jowl and where recognition of hazards does not follow socio-economic divisions. However, in Karhera, exposure to pollution and recognition of risk cannot be disaggregated by class or identity. As revealed by the surveys, interviews, and participatory mapping, here almost all farmers engaging in agriculture are using wastewater. This includes men, women, upper-caste, lower-caste, migrants, and original inhabitants. Half of the original upper-caste, land-owning farmers interviewed were happy to eat spinach grown in gandapani, while the other half were not. But all of the non-landed, whether upper-caste or Dalit, migrant or original inhabitants, ate the spinach they produced.
In this group, most people did not acknowledge any potential health threats. However, the poorest of the upper-caste original inhabitants, those who no longer own land, ate this spinach while articulating concerns about gandapani. We found no clear distinctions between men's views and those of women. This lack of clear divisions reflects the heterogeneous nature of agriculturalists in Karhera. Farming here is undertaken by nearly half of Karhera's residents: original inhabitants and migrants (both long-term and recently arrived), people with large and small land holdings, people who farm animals (buffalo and pigs) and people who farm crops, upper- and lower-caste residents and migrants, hired workers, farmers who rent land, share-cropping farmers, and land-owners tending their own lands. Maintaining a livelihood through agriculture in Karhera requires constant interaction and mutual dependency across social divides of class and identity. Land-owners may depend on rents earned from leasing arable land or on cultivation of their own plots. Some poorer residents depend on opportunities to work in others' fields, while others are engaged in purchasing and selling produce in the markets. Ultimately, the large number of people across the social spectrum who depend directly or indirectly on agriculture helps explain the continued muted concern over the health risks associated with gandapani cultivation and the consumption of produce grown in this way. These mutual dependencies, and the benefits of spinach production with gandapani, explain why mixed interpretations of the health/environment threats exist. Several studies have explored peri-urban communities' needs and demands around water in India [3,26,39]. Mehta and colleagues argue that, in India, few peri-urban residents believe that there is any value in making demands on the state; rather, "both the rich and poor opt out completely of the formal system and need to fend for themselves" [33]. In their research on water pollution and mobilization in India and Bolivia, Mehta et al. found that, "when pushed", some Indian peri-urban residents said they would partake in collective action if organized by others, yet many others, such as migrant laborers, did not have formal residential status, felt more vulnerable and were unable to operate as "rights-bearing citizens" who could make sustainability and environmental justice demands on the state [33]. Instead of collective action and protest, India's peri-urban residents have devised their own, informal strategies to ensure access to water [26,33,40]. However, in Karhera there are other well-known and recognized forms of pollution. Why have these too, for the most part, been accepted, and why has there been no collective action to address these more obvious forms of pollution? The explanations lie partly in the nature of Karhera and its peri-urban location, partly in the actions of government authorities, and partly in the way pollution is discussed in the Indian media. An appreciation of the context in which people live adds a crucial dimension to local perspectives on, and apparent acceptance of, pollution. As Corburn and Karanja [7] argue, drawing on an African example, understanding the complexity of informal settlements and the diverse determinants of health requires a relational place-based approach which focuses on the ways in which context defines and shapes peri-urban residents' perspectives and, in turn, informs policy.
--- Advantages and Disadvantages of Peri-Urban Living
Living in peri-urban Karhera has both advantages and disadvantages for all Karhera's residents. This ambivalence is, as shown below, most clearly evident in gandapani spinach production. Karhera's upper-caste land-owning residents have retained the rhythms of rural life, their socio-cultural moorings, and ties to land. They are able to generate a livelihood from agriculture and have access to new urban markets for this produce. In some instances, spinach farming has been so lucrative that large land-owning original inhabitants have given up regular employment to focus on their farming. One such example is Bablu, who hires laborers to work in his fields and whose income today is more than he earned in his private job. Other advantages include, rather ironically, the fact that Karhera itself is rapidly urbanizing. Many upper-caste women prefer the pucca houses, concrete roads, electricity, and piped water. The submersible pumps ease women's domestic labor, and gandapani means they do not need to invest labor in the irrigation of their spinach fields. The village, with fewer buffalo and cows, is perceived to be cleaner. The value of land in and around Karhera has massively increased and has facilitated new, lucrative forms of income, including renting accommodation to migrants, selling land at increased rates to property speculators, and white-collar employment for literate upper-caste members. These additional urban-informed incomes have, as Malin and DeMaster [22] point out in their analysis of environmental injustice caused by hydraulic fracturing on Pennsylvanian farms in the USA, supplemented often marginal and insecure agricultural practices, but have long-term consequences in terms of environmental inequality. They term this a "devil's bargain", and in Karhera it takes the form of farmers' dependence on both agricultural production and urban- and industrially-influenced economic activities, a combination that leads to incremental environmental degradation while simultaneously shoring up agricultural production. Upper-caste residents have experienced modern lifestyles, education, and a shift away from manual labor, yet many are disillusioned. They are acutely aware of their loss of land. They also repeatedly stress their failure to influence government officials, who do not come to the village and do not listen to them. This leads to a sense of disempowerment. They have also lost their sense of control over the lower-caste residents and their collective sense of being "owners of Karhera village". Although materially they survive relatively well on a combination of agriculture, rental agreements, and white-collar jobs, this, unlike farming, does not provide a sense of being their own masters. Karhera's upper-caste residents also experience a lack of political clout. Previously, as large land-holders in an agricultural village, they would have been the political elite. They would have constituted the panchayat and engaged with members of local government, particularly in the form of the Department of Agriculture. However, as Karhera is now an urban ward, they no longer have access to these forms of rural governance. In addition, as the Department of Agriculture is concerned with cereals and grains rather than vegetable farming, few political connections remain. Conditions have also improved for lower-caste inhabitants of Karhera. Many of the original Dalit inhabitants no longer work in agriculture.
Instead, they are employed in general stores, or as petrol pump attendants, car mechanics, drivers, painters, and so on. Some of these residents commute to Delhi or Ghaziabad daily, working as laborers or daily wage earners. A very small proportion of original Dalit inhabitants are employed as civil servants, teachers, or in the police force. The increasing urbanization of Karhera has also meant that Dalit women, now able to travel beyond the village boundaries, are finding work as domestic staff or as security guards in malls. Because original Dalit families never owned agricultural land and never kept buffalo, they had fewer opportunities to convert agricultural buildings into leased accommodation. Nonetheless, some families have been able to rent out one or two rooms in their homes. In some ways, Dalit villagers' social standing has improved: in the past, the Dalit settlement or basti, located on the edge of the village, was also the dumping ground for animal carcasses, and Dalits performed caste labor (cleaning the village, working on upper-caste fields, collecting firewood). The inflow of migrants and the rent economy has reduced these spatial and social caste distinctions. Explicit discrimination along the lines of caste is no longer a common feature in Karhera. Correspondingly, offensive language and caste-related slurs have declined. As is evident above, the opportunities created by gandapani and spinach have been particularly beneficial for Karhera's Dalit farmers. Recall the example of Lukshmi, a single woman who transitioned from wage labor to being a farmer in her own right and who rents land on which to produce spinach. Some Dalit men, like Shiv Kumar (described above), have been able to use the spinach trade to set up their own businesses, buying spinach from farmers and selling it in the market and, in this way, avoiding factory labor. A few Dalit families still practice pig husbandry (with pigs reared in their homes) to supplement their income. Notwithstanding these improvements, Karhera's original Dalit villagers also experience a sense of disempowerment. Not only are they being replaced by Dalit migrant laborers, but they remain at the bottom of the social ladder and its power hierarchy. The poorest of the original Dalits, people such as Manjari (see above), continue to work as laborers on other people's fields. Some caste discrimination still remains; as one Dalit woman complained: "It was just the same, as it was in the past". For the Dalits, as for the upper caste, life in Karhera is ambivalent, with both advantages and disadvantages, and they too have accepted the trade-offs. Some Dalits have, however, no option but to engage in gandapani spinach production. They neither own large areas of land nor have the financial resources to invest in the technology required for "clean water" irrigation. For them, the only farming option is spinach cultivation using domestic wastewater. Nor do they perceive their health as having deteriorated, since they have always done manual labor and have always been exposed to domestic waste. Other studies of lower-caste communities have also found that people trade off health for social and economic improvements [20,21,41,42]. Migrants too find the experience of living in Karhera to be one of gains and losses. They have come to Karhera because it offers cheap housing and provides access to education for their children and to jobs in the nearby factories. These conditions far exceed their rural opportunities.
For example, Durga and her husband came because they could "hardly manage" their livelihoods in Bihar. Reflecting on their move, she says: "Where would we get money, how would we look after our children? Our land in Bihar is barren, how could we do cultivation? Simple sowing does not give you a good yield and harvest. You need water and fertilizer to get a good yield. At least here my husband can work in the factory. I work as an agricultural laborer on the land of the villagers." Similarly, Srichand left Meerut 20 years ago in search of work. Initially he got work in a dye factory in Karhera. He was subsequently diagnosed with tuberculosis (after about 5 years of work). When he recovered, he chose not to return to the factory and started selling spinach instead. As he was able to earn more than his factory wages, he has continued selling spinach. For many of the migrants, the work that they get in the spinach fields is better than factory work. As Durga's comment suggests, Karhera offers a degree of economic and food security. For these reasons, few migrants complain about the use of gandapani in spinach production. Yet, despite the economic advantages of living in Karhera, the migrants complain of the arrogance and hostility of the upper-caste original inhabitants, and are acutely aware of their lack of wealth. Social tensions are palpable within Karhera village. Even though some migrants have lived in Karhera for more than 40 years and have bought their own homes, they are still referred to as outsiders. In addition, as industrial employers prefer migrant laborers, who have less opportunity to unionize or demand better conditions, migrants are blamed for taking factory jobs away from original inhabitants. These jobs are seen to benefit only the migrants and not the villagers. The Janus-faced advantages and disadvantages of living in Karhera are symbolized in gandapani spinach production, which is productive and financially lucrative, but potentially damaging to health. Some upper-caste residents accept this trade-off and therefore find no reason to complain; others have not accepted the trade-off and are therefore more ready to "see" the negatives as well. It is those upper-caste residents who have benefitted the least, the poorest of the village elite, who were most articulate about the potential environmental and health threats of spinach. They, like other upper-caste villagers, were aware of the media coverage (see below) and of the villagers' political marginalization, but, unlike other Rajputs, they did not have "clean water" fields, or spinach from these fields, to consume. They thus felt their deprivation the most.
--- Precarity and Lack of Community in Peri-Urban Karhera
While such ambiguities exist for spinach farming, it is clear that industrial air and water pollution, and its potential for causing ill-health, is more widely acknowledged. However, there is still no collective action around this. In part, this is because of the interdependencies in Karhera: the upper-caste farmers and other upper-caste residents of the village rely on the migrants to lease property; the Dalits also rely on the migrants for rental income; and many of them generate an additional income through the industrial economy in the form of new and additional markets and trades. Migrants, in turn, need the original residents for both employment and accommodation. Yet, despite these interdependencies, there is no sense of community. As one original Dalit woman explained, "there is no unity at all".
Instead of a strong sense of community, our research gives a sense of people marking time in Karhera, and life, as all these residents experience it today, is precarious. There are insecurities associated with the decreasing availability of water and the decline in arable land as more and more housing is erected, and, as middle-class expansion continues, many of Karhera's residents fear that there will ultimately be no place for them. Already they are absent from both government political processes and corresponding media reporting. The media review makes it clear that, over the past 10 years, there have been frequent articles about pollution in peri-urban India, which predominantly address new middle-class concerns. Very few articles have examined the intersection between pollution, environmental degradation, and health, and they seldom focus specifically and exclusively on Ghaziabad, reporting instead on either several areas or the broader Trans-Hindon or Delhi-National Capital Region areas (the Hindon River divides Ghaziabad city from its peri-urban peripheries, with the area to the west known as the Trans-Hindon and that to the east as the Cis-Hindon). Only six articles reporting on the Trans-Hindon have linked cancer to the industrial discharge into the main rivers, including the Hindon. One article suggests that a Trans-Hindon village experiences a high incidence of cancer and bone deformities resulting from the elevated levels of toxic metals in the rivers [43]. Another article points to the untreated industrial effluents and untreated or partially treated sewage in the Hindon River, and connects this to the unsafe levels of chromium, arsenic, and fluoride in groundwater [44]. Yet another reports on air pollution in the Delhi-NCR region, highlighting Ghaziabad as having rising SO2 and CO2 levels, and links these to different kinds of cancers, tuberculosis, high blood pressure, kidney failure, and heart failure [45]. Residents of Karhera, particularly upper-caste landholders, have read these media articles, as indeed they told us during interviews. As their quotes above show, they too link pollution to cancer. This led us to investigate actual cases of cancer in the village. The household survey and all in-depth interviews enquired about cancer diagnoses in the households. Only one Dalit household reported a current case of cancer, which was not explicitly linked to any form of pollution. This absence of a demonstrable burden of disease goes some way to explaining why residents believe that pollution is causing them harm but have not done anything about it. It is, in keeping with recent literature on other cases of environmental pollution, a context in which the damage to the environment has been undramatic and gradual [20], and in which evaluating the extent of the risk, pinpointing responsibility, and allocating blame remain unclear, difficult, and ambiguous [18,19]. This ambiguity in identifying whom to target is also a consequence of the way political engagement in Karhera has waned over the years. Former rural local governance, such as the panchayat, no longer exists, and agricultural government officials are no longer interested in Karhera. The current government institutions which do cover Karhera are complicated and remote.
Randhawa and Marshall's examination of government policy and water management plans in peri-urban Ghaziabad shows the complexity of local government (three ministries, two center-level subsidiary bodies, and four departments are involved) and argues that this creates a context in which national-level policy makers are able to "exclude themselves from the larger context of the problem and to represent the government's view", emphasizing technical solutions and scientific expertise (pumping stations, sewerage treatment plants) to solve problems in the future [26]. This arrangement also marginalizes junior staff, who interacted much more closely with peri-urban residents and were thus more aware of the problems on the ground, yet were unable to address the problems, to persuade their seniors to act, or, where there were official plans already in place, to release funding to enable local officials to address particular issues. Furthermore, the expert and technical nature of the policy process, and of the policies, operated to exclude other local forms of knowledge, despite the fact that India has encouraged participatory processes in government policy [26,46]. Local opportunities for people's participation were thus unsympathetic to the grievances of the disempowered peri-urban residents, structured to facilitate the participation of middle-class urbanites, and, in any event, often unknown to poorer residents. Where Karhera's residents do have an opportunity to participate in governance processes, such as through an elected municipal councilor of the Urban Local Body (ULB), these posts are not particularly powerful, are not integrated into all the different structures, and are highly dependent on the person elected. Randhawa and Marshall show that some councilors play an active role, while others only intervene when people's access to resources is directly threatened. Ironically, in the case of Karhera, the current councilor is inactive, and a former councilor has been involved in the development of urban facilities from which Karhera's residents are excluded.
--- Conclusions
Veena Das has pointed to the "difficulty of theorizing the kind of suffering that is ordinary, not dramatic enough to compel attention" [47]. Illness can be normalized and interpreted as the kind of thing that happens to bodies. Pollution can similarly be normalized; it does not always lead to collective action, and action against it may not be facilitated through participatory processes [18,20]. Rather, collective action requires a recasting of illness or disease, resources, and a degree of social trust both in the community and in the state. As Kasperson and Kasperson argue, participatory engagement is a learned skill built up through years of interaction with political processes and government officials. "Left out" from the vision of participatory theory and collective action "are those who do not yet know that their interests are at stake, whose interests are diffuse or associated broadly with citizenship, who lack the skills and resources to compete, or who have simply lost confidence in the political process" [17]. The potential for using citizen science in contexts such as these, where residents do not represent a cohesive community and where the peri-urban economy has tied local residents into relationships of mutual dependency that inhibit political mobilization despite some residents' concerns about the health risks, remains to be explored in future research. Pollution and contamination are complicated.
In Karhera, pollution is obvious at one level (the "black river") and hidden at another (the contaminants in "clean water" or in gandapani), and there are few direct sensory responses or collective biomedical consequences. As such, considerable uncertainty often exists about whether pollution is harmful or not, or how much exposure is safe. Yet the identification of hazards is not just about the science of pollution. As Kasperson and Kasperson argue, societies often do not recognize and acknowledge pollution, waste, and other industrial hazards, in part because of the nature of the hazard (uncertainty about the science, a lack of sensory experience or of a high disease burden) and in part because of the nature of the society (the pollution serves other purposes in society). This is even more pertinent in peri-urban contexts, where a relational approach and a focus on place deepen this analysis. As Corburn and Karanja [7] have argued, the nature of the place itself shapes the possibilities for meaning-making around health. In India, the peri-urban space is targeted for urban development and as a means of attracting international firms and industry [48,49], and governance excludes those who are not a part of this urban vision. In contexts such as these, dependency on natural resources in combination with other urban- and industrially-derived incomes can "elide other critical social and environmental concerns" [22], leading to a "devil's bargain" in which environmental degradation and health threats are accepted as part of a livelihood strategy. Karhera is just one small peri-urban village, caught up in, and physically manifesting, broader processes of urbanization and globalization. But it is a village in which the combination of a highly heterogeneous population, socio-economic tensions and interdependencies between residents, a lack of representation in the media, and political marginalization circumscribes residents' ability to engage in collective action. Collective action for environmental justice in peri-urban places may, as a result of this combination of factors, be far from the development agenda.
--- Author Contributions: Linda Waldman, Ramila Bisht, Ritu Priya and Fiona Marshall conceived and designed the field research; Rajashree Saharia, Abhinav Kapoor, Bushra Rizvi, Meghna Arora, Ima Chopra and Kumud T. Sawansi undertook the field research. The data was entered by Yasir Hamid and analyzed by Linda Waldman, Ramila Bisht, Meghna Arora, Ima Chopra and Rajashree Saharia. Linda Waldman and Ramila Bisht wrote the paper, with contributions from everyone, and theoretical and analytical involvement from Ritu Priya and Fiona Marshall.
--- Conflicts of Interest: The authors declare no conflict of interest. The funding sponsors had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.
the good. Sometimes social influences play a positive role-for example, by enabling social learning. Condorcet's "jury principle" is another example of the power of collective wisdom: the collective opinion of a jury-in which each individual juror has just a slightly better than average chance of matching the correct verdict-is more likely to reach the correct verdict than any individual juror, but only if the individuals' judgements are uncorrelated. In other situations, social influence and collective opinions are unhelpful-for example, if people follow a group consensus even though they have private information which conflicts with the consensus. If researchers are aware of these pitfalls and the biases to which they might be prone, this greater awareness will help them interpret results as objectively as possible and base all their judgements on robust evidence.
--- Many mistakes that people make are genuine and reflect systematic cognitive biases. Such biases are not necessarily irrational, in the sense of being stupid, because they emerge from the sensible application of heuristics or quick rules of thumb-a practical approach to solving problems that is not perfect or optimal, but is sufficient for the task at hand. Herbert Simon, an American Nobel laureate in economics, political scientist, sociologist and computer scientist, analysed rationality, and his insights are helpful in understanding how heuristics are linked to the socio-psychological influences affecting experts' beliefs. Simon distinguished substantive rationality-when decisions have a substantive, objective basis, usually based around some mathematical rule-from procedural rationality, when decision-making is more sensible, intuitive, based on prior judgements and "appropriate deliberation". Using heuristics is consistent with Simon's definition of procedural rationality: heuristics are reasoning devices that enable people to economise the costs involved in collecting information and deciding about their best options. It would be foolish to spend a week travelling around town visiting supermarkets before deciding where to buy a cheap loaf of bread. When planning a holiday, looking at customer reviews may save time and effort, even if these reviews give only a partial account. Similarly, heuristics are used in research: before we decide to read a paper, we might prejudge its quality and make a decision whether or not to read it depending on the authors' publication records, institutional affiliations or which journal it is published in. A more reliable way to judge the quality of a paper is to read all other papers in the same field, but this would involve a large expenditure of time and effort that would probably not be justifiable in terms of the benefit gained.
Availability heuristics are used when people form judgements based on readily accessible information-this is often the most recent or salient information-even though this information may be less relevant than other information which is harder to remember. A well-known example of the availability heuristic is people's subjective judgements of the risks of different types of accidents: experimental evidence shows that people are more likely to overestimate the probability of plane and train crashes either when they have had recent personal experience or-more likely-when they have read or seen vivid accounts in the media. Objectively, car and pedestrian accidents are more likely, but they are also less likely to feature in the news and so are harder to recall.
Problems emerge in applying the availability heuristic when important and useful information is ignored. The availability heuristic also connects with familiarity bias and status quo bias: people favour explanations with which they are familiar and may therefore be resistant to novel findings and approaches. Research into the causes of stomach ulcers and gastric cancer is an illustrative example. The conventional view was that stress and poor diet cause stomach ulcers, and when Barry Marshall and colleagues showed that Helicobacter pylori was the culprit-for which he received the Nobel prize with Robin Warren-the findings were originally dismissed and even ridiculed, arguably because they did not fit well with the collective opinion. The representativeness heuristic is based on analogical reasoning: judging events and processes by their similarity to other events and processes. One example relevant to academic research is Tversky and Kahneman's "law of small numbers," by which small samples are credited with as much evidential power as large samples. Deena Skolnick Weisberg and colleagues identified a similar problem in the application of neuroscience explanations: their experiments showed that naïve adults are more likely to believe bad explanations when "supported" by irrelevant neuroscience and less likely to believe good explanations when not accompanied by irrelevant neuroscience [3]. Finally, Kahneman and Tversky identified a category of biases associated with anchoring and adjustment heuristics. People often anchor their judgements on a reference point-this may be current opinion or the strong opinions of a research leader or other opinion former. Adjustment heuristics connect with confirmation bias: people tend to interpret evidence in line with their preconceived notions of how the world works. In this case, beliefs will be path dependent: they emerge according to what has happened before, as explored in more detail below.
--- Social influences come in two broad forms: informational influence-others' opinions that provide useful information-and normative influence-agreeing with others based on socio-psychological and/or emotional factors. Informational influences are the focus of economic models of "rational herding" and social learning, based on Bayesian reasoning processes, in which decision-makers use information about the decisions and actions of others to judge the likelihood of an event. Such judgements are regularly updated according to Bayes's rule and are therefore driven by relatively objective and systematic information. Social learning models are often illustrated with the example of choosing a restaurant. When we see a crowded restaurant, we infer that its food and wine are good because it is attracting so many customers. But a person observing a crowded restaurant may also have some contradictory private information about the restaurant next door: for example, a friend might have told them that the second restaurant has much better food, wine and service; yet that second restaurant is empty. If that person decides in the end to go with the implicit group judgement that the first restaurant is better, then their hidden private information (their friend's opinion) gets lost. Anyone observing them would see nothing to suggest that the empty restaurant has any merit-even if they have contradictory private information of their own. They too might decide, on balance, to go with the herd and queue for the first restaurant.
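The restaurant story can be pushed further by simulating a whole queue of such customers. The sketch below is a toy version of a standard information-cascade model, not a reproduction of any study cited here; the signal accuracy of 0.7 and the number of agents are assumed values.

```python
import random

def run_cascade(n_agents=50, signal_accuracy=0.7, seed=None):
    """Sequential binary choice where option 1 is truly better.
    Each agent receives a private signal that is correct with probability
    signal_accuracy, observes all earlier choices, and chooses rationally.
    With symmetric binary signals this reduces to a counting rule: once the
    net number of *revealed* signals favouring one option reaches 2, later
    agents ignore their own signal (a cascade); ties are broken by following
    one's own signal."""
    rng = random.Random(seed)
    lead = 0                   # net revealed signals in favour of option 1
    choices = []
    for _ in range(n_agents):
        signal = 1 if rng.random() < signal_accuracy else 0   # 1 means "option 1 looks better"
        if lead >= 2:
            choice = 1         # up-cascade: copy the crowd, own signal stays hidden
        elif lead <= -2:
            choice = 0         # down-cascade: copy the crowd, own signal stays hidden
        else:
            choice = signal    # act on own signal, which is thereby revealed
            lead += 1 if signal == 1 else -1
        choices.append(choice)
    return choices

n_runs = 10_000
wrong = sum(run_cascade(seed=s)[-1] == 0 for s in range(n_runs))
print(f"share of runs whose final agent picks the worse restaurant: {wrong / n_runs:.3f}")
```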
As more and more people queue for the first restaurant, all useful private information about the superior quality of the second restaurant is lost. This problem can be profound in scientific research. Adopting consensus views may mean that potentially useful private information, especially novel and unexpected findings, is ignored and discarded and so is lost to subsequent researchers. Evidence that conflicts with established opinions can be sidelined or deemed unpublishable by reviewers who have a competing hypothesis or contrary world view. In the worst cases, the researchers who uncover evidence that fits well with unfashionable or unconventional views may be ostracised or even punished, with well-known historical examples, not least Galileo Galilei, who was convicted of heresy for his support of the Copernican heliocentric model of the solar system.
--- Herding, fads and customs can also be explained in terms of reputation building. When people care about their status, conformity helps them to maintain status, while departing from social norms carries the risk of impaired status. Reputation also survives a loss better if others are losing at the same time. If financial traders lose large sums when others are losing at the same time, they will have a good chance of keeping their job; but if they lose a large sum implementing an unconventional trading strategy, there is a good chance they will lose their job, even if, overall, their strategy is sound and more likely to succeed than fail. People often make obvious mistakes when they observe others around them making similar mistakes. In social psychology experiments in which subjects were asked to judge the similarity in length of a set of lines, they were manipulated into making apparently obvious mistakes when they observed experimental confederates deliberately giving the wrong answers in the same task-they may agree with others because it is easier and less confusing to conform [4]. The propensity to herd is strong and reflects social responses that were hardwired during evolution and reinforced via childhood conditioning. Insights from neurobiology and evolutionary biology help to explain our herding tendencies-survival chances are increased for many animals when the group provides safety and/or gives signals about the availability of food or mates. Some neuroscientific evidence indicates that herding partly activates neural areas that are older and more primitive in evolutionary terms [5]. Herding also reflects childhood conditioning. Children copy adult behaviours, and children who have seen adults around them behaving violently may be driven by instinctive imitation to behave violently too [6].
--- Social influences, including social pressure, groupthink and herding effects, are powerful in scientific research communities, where the path of scientific investigation may be shaped by past events and others' opinions. In these situations, expert elicitation-collecting information from other experts-may be prone to socially driven heuristics and biases, including group bias, tribalism, herding and bandwagon effects. Baddeley, Curtis and Wood explored herding in "expert elicitation" in geophysics [7]. In geologically complex rock formations, uncertainty and poor or scarce data limit experts' ability to accurately assess the probability that oil resources are present.
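Whether pooling expert judgements helps depends on the independence condition behind Condorcet's jury principle mentioned earlier. The minimal sketch below (the juror accuracy of 0.55 and the jury sizes are assumed values, not taken from any cited study) computes the probability that a simple majority of independent jurors is correct, and contrasts it with jurors who all copy a single opinion leader.

```python
from math import comb

def majority_correct(n, p):
    """Probability that a strict majority of n independent jurors,
    each correct with probability p, reaches the correct verdict (n odd)."""
    k_needed = n // 2 + 1
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(k_needed, n + 1))

p = 0.55  # each juror is only slightly better than chance (assumed value)
for n in (1, 11, 101, 1001):
    print(f"{n:>5} independent jurors: P(majority correct) = {majority_correct(n, p):.3f}")

# If every juror simply copies one opinion leader, their votes are perfectly
# correlated and the majority is right only as often as that single leader.
print(f"any number of perfectly correlated jurors: P(majority correct) = {p:.3f}")
```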
Bringing together experts' opinions has the potential to increase accuracy, assuming Condorcet's jury principle about the wisdom of crowds holds, and this rests, as noted above, on the notion that individuals' prior opinions are uncorrelated. Instead, expert elicitation is often distorted by herding, and conventional opinions and conformist views will therefore be overweighted. Perverse incentives exacerbate the problem. When careers depend on research assessment and the number of publications in established journals, the incentives tip towards following the crowd rather than publicising unconventional theories or apparently anomalous findings [8]. When herding influences dominate, the accumulation of knowledge is distorted. Using computational models, Michael Weisberg showed that, with a greater proportion of contrarians in a population, a wider range of knowledge will be uncovered [9]. We need contrarians because they encourage us to pursue new directions and take different approaches. The willingness to take risks in research generates positive externalities-for example, new knowledge that the herd would not be able to discover if it stuck to conformist views. In the worst case, social influences may allow fraudulent or deliberately distorted results to twist research if personal ambition, preoccupation with academic status and/or vested interests dominate. A recent illustration is the case of Diederik Stapel, a social psychologist who manipulated his data from studies of the impact of disordered environments on antisocial behaviour. Marc Hauser, a former professor of psychology at Harvard University, published influential papers in top journals on animal behaviour and cognition, until an investigation found him guilty of scientific misconduct in 2011. Both were influential, leading figures in their fields, and their results went unchallenged for many years, partly because members of their research groups and other researchers felt unable to question them. Their reputations meant it took longer for whistle-blowers and critiques questioning the integrity of their data and findings to have an impact. Deliberate fraud is rare. More usually, mistakes result from the excessive influence of scientific conventions, ideological prejudices and/or unconscious bias; well-educated, intelligent scientists are as susceptible to these as anyone else. Subtle, unconscious conformism is likely to be far more dangerous to scientific progress than fraud: it is harder to detect, and if researchers are not even aware of the power of conventional opinions to shape their hypotheses and conclusions, then conformism can have a detrimental impact in terms of human wellbeing and scientific progress. These problems are likely to be profound, especially in new fields of research. A research paper that looks and sounds right and matches a discipline's conventions and preconceptions is more likely to be taken seriously irrespective of its scientific merit. This was illustrated by the Sokal hoax, in which a well-written but deliberately nonsensical research paper passed through the refereeing processes of social science journals, arguably because it sat well with reviewers' preconceptions.
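Returning to the computational point about contrarians made above: a deliberately crude toy model (not Weisberg's actual simulation, and every number in it is an assumption) illustrates why a higher share of contrarians widens the range of approaches a research community ends up examining.

```python
import random

def approaches_explored(n_approaches=200, n_steps=200, contrarian_share=0.1, seed=0):
    """Toy search over a 1-D 'landscape' of research approaches.
    Conformists study a neighbour of an already-studied approach;
    contrarians study a so-far-unexplored approach chosen at random.
    Returns how many distinct approaches were examined after n_steps."""
    rng = random.Random(seed)
    explored = {rng.randrange(n_approaches)}      # one seed finding to start from
    for _ in range(n_steps):
        unexplored = [i for i in range(n_approaches) if i not in explored]
        if not unexplored:
            break
        if rng.random() < contrarian_share:
            nxt = rng.choice(unexplored)          # contrarian: strike out alone
        else:
            base = rng.choice(list(explored))     # conformist: stay close to the herd
            nxt = min(n_approaches - 1, max(0, base + rng.choice((-1, 1))))
        explored.add(nxt)
    return len(explored)

for share in (0.0, 0.1, 0.3, 0.6):
    avg = sum(approaches_explored(contrarian_share=share, seed=s) for s in range(100)) / 100
    print(f"contrarian share {share:.1f}: distinct approaches examined (avg) = {avg:.0f} of 200")
```

With no contrarians, the community mostly re-treads ground adjacent to what it already knows; as the contrarian share rises, a much wider slice of the landscape gets examined.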
Another salient example is tobacco research: initial evidence about a strong correlation between cigarette smoking and lung cancer was dismissed on the grounds that correlation does not imply causation, with some researchers-including some later hired as consultants by tobacco companies-making what now seems an absurd claim that the causation ran in reverse, with lung cancer causing cigarette smoking [10]. Other group influences reflect hierarchies and experience: junior members of a research laboratory may instinctively imitate their mentors, defer to their supervisors' views and opinions and/or refrain from disagreeing. When researchers-particularly young researchers with careers to forge-feel social pressure to conform to a particular scientific view, it can be difficult to contradict that view, leading to path dependency and inertia. Scientific evidence can and should be interpreted keeping these biases in mind. If researchers support an existing theory or hypothesis because it has been properly verified, it does not mean that the consensus is wrong. More generally, social influences can play a positive role in research: replicating others' findings is an undervalued but important part of science. When a number of researchers have repeated and verified experimental results, findings will be more robust. Problems emerge when the consensus opinion reflects something other than a Bayesian-style judgement about relative likelihood. When researchers are reluctant to abandon a favoured hypothesis, for reasons that reflect socio-psychological influences rather than hard evidence, then the hypothesis persists because it is assigned excessive and undue weight. As more and more researchers support it, the likelihood that it will persist increases, and the path of knowledge will be obstructed. Journal editors and reviewers, and the research community more generally, need to recognise that herding and social influences can sway judgement and lead them to favour research findings that fit with their own preconceptions and/or group opinions as much as objective evidence.
--- Conflict of interest
The author declares that she has no conflict of interest.
The mission of scientific research is to understand and to discover the cause or mechanism behind an observed phenomenon. The main tool employed by scientists is the scientific method: formulate a hypothesis that could explain an observation, develop testable predictions, gather data or design experiments to test these predictions and, based on the result, accept, reject or refine the hypothesis. In practice, however, the path to understanding is often not straightforward: uncertainty, insufficient information, unreliable data or flawed analysis can make it challenging to untangle good theories, hypotheses and evidence from bad, though these problems can be overcome with careful experimental design, objective data analysis and/or robust statistics. Yet, no matter how good the experiment or how clean the data, we still need to account for the human factor: researchers are subject to unconscious bias and might genuinely believe that their analysis is wholly objective when, in fact, it is not. Bias can distort the evolution of knowledge if scientists are reluctant to accept an alternative explanation for their observations, or even fudge data or their analysis to support their preconceived beliefs. This article highlights some of the biases that have the potential to mislead academic research. Among them, heuristics and biases generally, and social influences in particular, can have profoundly negative consequences for the wider world, especially if misleading research findings are used to guide public policy or affect decision-making in medicine and beyond. The challenge is to become aware of biases and separate the bad influences from
--- Introduction
The United Kingdom Employment Retention and Advancement (UK ERA) demonstration was the largest and most comprehensive social experiment ever conducted in the United Kingdom. It tested the effectiveness of an innovative method of improving the labor market prospects of low-wage workers and long-term unemployed people. UK ERA took place from October 2003 to October 2007 and offered a distinctive set of "post-employment" job coaching services and financial incentives in addition to the job placement services routinely provided by the UK public employment service (called Jobcentre Plus). This in-work support included up to two years of advice and assistance from a specially trained Advancement Support Adviser (ASA) to help participants remain and advance in work. Those who consistently worked full time could receive substantial cash rewards, called "retention bonuses." Participants could also receive help with tuition costs and cash rewards for completing training courses while employed. The UK ERA demonstration differed from an extensive set of previous social experiments for low-income families that focused primarily on "pre-employment" (or "work-first") services (see Greenberg and Robins, 2011 and Friedlander, Greenberg, and Robins, 1997 for a summary) [note 1]. [Note 1: One exception is an employment retention and advancement demonstration conducted in the US from 2000 to 2003 (see Hendra et al., 2010). The US ERA was similar in many respects to the UK ERA and served as a prototype for the UK ERA.] Most of these earlier experiments produced modest impacts, and it was felt by policymakers and program evaluators that combining pre- and post-employment services and including financial incentives might strengthen the impacts of such programs. UK ERA targeted three groups of disadvantaged people: out-of-work lone parents receiving welfare benefits (called Income Support in the UK), low-paid lone parents working part-time and receiving tax subsidies through the Working Tax Credit (WTC), and long-term unemployed people receiving unemployment insurance (called Jobseeker's Allowance in the UK). The UK ERA demonstration utilized a random assignment research design, assuring unbiased estimates of the program's impacts. The formal evaluation of UK ERA (Hendra et al., 2011) covered five years of program impacts. Administrative records were used to document impacts on several outcomes (mainly employment, earnings and benefit receipt) during the five years subsequent to random assignment. For two of the three target groups (out-of-work lone parents and WTC recipients), the impacts were generally quite modest and not statistically significant for most of the evaluation period [note 2]. [Note 2: The US ERA targeted lone parents and, like the UK ERA, had generally modest impacts that were mostly not statistically significant. Of the 12 programs formally evaluated in the US ERA, only three produced statistically significant impacts (see Hendra et al., 2010).] For the other target group (long-term unemployment insurance recipients), the impacts were statistically significant and sizeable, and persisted into the post-program period. Within the six districts in which UK ERA [note 3] took place, there are more than 50 local offices. [Note 3: Henceforth, we refer to the UK ERA as simply ERA.] The purpose of this paper is to exploit variation in program practices across these offices in order to determine whether certain features of the local programs' operations are systematically related to program impacts. Previous studies have shown that program impacts vary with operational procedures and types of services provided (Bloom, Hill, and Riccio, 2005; Greenberg and Robins, 2011).
Thus, building on these previous studies, we attempt to get inside the "black box" of ERA implementation practices to see which elements of the "total package" tended to be associated with stronger impacts on employment and welfare receipt [note 4]. [Note 4: For example, some previous studies (such as Hamilton, 2002) have found that programs emphasizing immediate job placement (e.g., job search assistance) generate larger impacts on employment than programs emphasizing human capital development (e.g., placement in education and training). In fact, some studies have found that human capital development programs can lead to short-run reductions in employment. However, a reanalysis of the California GAIN program by Hotz et al. (2006) found that over time the human capital approach can actually generate impacts exceeding those of the work-first approach.] The analysis uses a multi-level statistical model based on the methodology developed by Bryk and Raudenbush (2001) and first applied to the evaluation of social experiments by Bloom et al. (2005). We use both individuals and institutions as the units of analysis, an approach quite appropriate for examining variation in program impacts across offices. Other studies using a somewhat different methodology to exploit variation in office practices to estimate social program impacts include Dehejia (2003) and Galdo (2008). Implementation practices were not randomized across offices and thus may have been related to client or office characteristics. Because of this, the analysis presented here is non-experimental. We discuss later the assumptions required for the results to be given a causal interpretation, and the reader should keep in mind that causal inferences are only valid if these assumptions are satisfied. The analysis focuses on out-of-work lone parents receiving welfare [note 5]. This group is of particular interest because, over much of the five-year follow-up period, no statistically significant average impacts were detected on most of the outcomes studied (Hendra et al., 2011). Hence, if we are able to identify program features that are associated with inter-office variation in the impacts for this target group, we will have added to the knowledge derived from the evaluation of the ERA program. The remainder of this paper proceeds as follows. In section 2, we describe the ERA demonstration and what it was intended to accomplish. In section 3, we present the hypotheses to be tested in examining cross-office variation in ERA impacts. In section 4, we present the statistical model used to test these hypotheses. In section 5, we discuss the data used to estimate the statistical model. In section 6, we report our estimation results for welfare and employment outcomes. Results for earnings are provided in section 7. Finally, in section 8, we present our conclusions and policy recommendations.
--- The Policy Setting
The ERA demonstration builds on the New Deal for Lone Parents (NDLP) policy initiative introduced in the UK in 1998.
NDLP's aim was to "encourage lone parents to improve their prospects and living standards by taking up and increasing paid work, and to improve their job readiness to increase their employment opportunities" (Department for Work and Pensions, 2002). NDLP participants were assigned a Personal Adviser (PA) through the public employment service office to provide pre-employment job coaching services. PAs could also offer job search assistance and address any barriers that hampered participants' search for work. They also had access to an Adviser Discretion Fund (ADF) that provided money to help participants find employment. Finally, they advised participants on their likely in-work income at differing hours of work and helped them access education or training. NDLP participation was entirely voluntary. The ERA demonstration project offered services beyond those available under NDLP, mainly in the form of in-work services and financial support. As noted above, these additional services included in-work advice and guidance plus a series of in-work retention bonuses to encourage sustained employment. Support for training was also available; ERA covered tuition costs and offered financial incentives for those in work to train. It also provided an in-work Emergency Discretion Fund (EDF) designed to cover small financial emergencies that otherwise could threaten the individual's continued employment [note 6]. Importantly, ERA services and financial assistance were available for only thirty-three months. In order to evaluate the impacts of the multi-dimensional ERA program, a random assignment research design was utilized. NDLP participants who agreed to be included in the experiment were randomly assigned either to a program (or treatment) group that was eligible for the full range of ERA services and financial assistance or to a control group that could only receive standard NDLP services [note 7]. The randomization process was closely monitored and controlled. The fact that there were no systematic differences between the two groups prior to random assignment (results available from the authors on request) provides some reassurance that the randomization was carried out effectively.
--- Factors Influencing Variation in ERA's Impacts
The simplest measure of the impact of ERA is the difference in mean outcomes between the program and control groups over the follow-up period (five years in this paper) [note 8]. The two outcomes examined in this paper are months receiving welfare and months employed. The impact of ERA on months receiving welfare, for example, is the difference between the program and control groups in the average number of months receiving welfare over the follow-up period. The follow-up period for ERA is five years, roughly three of which fall while the program was operating and two after the program ended. The results presented later distinguish between these in-program and post-program periods. ERA impacts can vary over time, across persons, and across geographic areas. Varying impacts over time may have multiple causes, including changes in the amount and types of ERA services provided by program administrators, changes in the amount and type of services being provided to the control group under the traditional NDLP program, changes in environmental conditions, and changes in the reaction time of participants to the new services being provided.
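To make the impact measure defined above concrete, the sketch below computes it on simulated, purely hypothetical data (the response rates are invented; only the 36-month in-program and 24-month post-program windows follow the design described above). The impact for a period is simply the program-group mean minus the control-group mean, so a negative number means less time on welfare.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 4000
df = pd.DataFrame({"program": rng.integers(0, 2, n)})   # 1 = ERA group, 0 = control (randomly assigned)

# Hypothetical outcomes: months on welfare out of 36 in-program and 24 post-program months.
df["welfare_inprog"] = rng.binomial(36, np.where(df["program"] == 1, 0.48, 0.50))
df["welfare_postprog"] = rng.binomial(24, np.where(df["program"] == 1, 0.44, 0.47))

means = df.groupby("program")[["welfare_inprog", "welfare_postprog"]].mean()
impact = means.loc[1] - means.loc[0]     # program mean minus control mean, per period
print(means.round(2))
print("impact on months receiving welfare:")
print(impact.round(2))
```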
Although we are able to estimate how ERA impacts vary over time, we do not have sufficient data to allow us to identify the precise causes of these varying impacts over time. [Note 7: Goodman and Sianesi (2007) show that 70% of those eligible participated in ERA. Most nonparticipation (86% of cases) was due to (wrongly) not being offered the opportunity to participate. This varied considerably across offices. Participation was higher in areas of higher unemployment. Those already employed at the time of randomization were less likely to participate, yet those with substantial prior employment experience were more likely to participate. In the first year after randomization, nonparticipants spent more time in work and less on welfare than participants. It appears, therefore, that offices' tendency to selectively offer the opportunity to participate resulted in the participant sample being made up of individuals with slightly less favorable labor market characteristics than the full eligible population.] Varying impacts across persons (sometimes called "subgroup impacts") can arise because certain types of individuals may be more responsive to program services. For example, those with longer welfare histories or lower levels of education may have been harder to employ and less likely to have been able to use the ERA services effectively than persons with shorter welfare histories or higher levels of education. On the other hand, those with older children may have been more willing to utilize the ERA services than persons with younger children. As will be indicated below, our empirical model allows us to identify subgroup impacts. Varying impacts across geographic areas may be due to different environmental factors and to different ways ERA was implemented across the various local welfare offices. There are a number of environmental factors that could influence the impact of ERA. For example, persons living in areas with higher unemployment or, more generally, in more deprived areas may have found it harder to make effective use of program services. Our empirical model is specified to allow the impact to vary with a measure of local area deprivation. Cross-office variation in impacts can arise due to differences in the overall structure of the individual offices and differences in program implementation practices for both ERA and control group participants. For example, offices with higher caseloads may have been less successful in providing meaningful help to ERA participants, thereby rendering ERA less effective. Or, offices that placed more emphasis on immediate job placement may have had larger impacts than offices that emphasized human capital development. Or, offices that were already providing a rich array of services for control group families may have had smaller impacts than offices that were not. Bloom, Hill and Riccio (2005) find that impacts in several US-based welfare-to-work demonstrations vary significantly with differences in program implementation practices across local welfare offices. Office variation in impacts according to the way ERA was implemented is the major focus of this paper, although we also examine how impacts vary over time, with individual characteristics, and with environmental characteristics. Introducing office-level variation in impacts requires a more sophisticated statistical framework than is traditionally used in evaluation research.
Specifically, the units of analysis are both the individual and the office, and the statistical framework must take this nesting into account. As will be described in greater detail below, multi-level modeling provides a natural framework for analyzing variation in impacts across offices and across individuals within offices. Although the ERA demonstration took place across 58 offices, in practice operations among some of these offices were shared [note 9]. [Note 9: For further details, see Dorsett and Robins (2011).] Where this applies, we have combined the offices, resulting in 37 distinct units of delivery which, for convenience, we continue to refer to as "offices" in the remainder of this paper [note 10]. [Note 10: We also performed some analyses using the full 58-office sample, but the results were not as informative as the analyses performed on the combined-offices sample. We are grateful to Debra Hevenstone for developing the methodology to combine the 58 offices into the 37 distinct offices.] Before proceeding with the specification and estimation of a multi-level statistical model, a fundamental question must be answered. Namely, is there enough variation in the impacts of ERA across offices so that implementation differences can possibly be explained by office-level characteristics? To determine this, we used a multilevel Poisson regression model with program group status as the only regressor in order to construct empirical Bayes estimates of the extent to which program effects on months receiving welfare and months employed varied across the 37 offices in our sample [note 11]. [Note 11: We discuss the multilevel Poisson model in detail in section 4.] We estimated separate models for the in-program period (1 to 3 years post randomization) and the post-program period (4 to 5 years post randomization). We conducted formal statistical tests to determine whether the individual office-level impacts were significantly different from the average impact estimated over all offices. Figures 1A and 1B present the empirical Bayes estimates of office-level effects. Since these are generated by a multilevel Poisson model, they are reported as incidence rate ratios (IRRs). In other words, they are proportionate impacts, such that a value of 1 indicates no effect (the outcome changes by a factor of 1). Similarly, an effect of 0.5 implies a reduction of 50 per cent, and a factor of 1.5 indicates an increase of 50 per cent. The welfare impacts (Figure 1A) range from 0.59 to 1.17 for the in-program period and from 0.55 to 4.63 for the post-program period. Although not visible from the chart, these very large impacts for the post-program period correspond to the smallest offices. The overall impact is shown by a vertical line in the figure. The employment impacts are given in Figure 1B. These range from 0.70 to 2.49 for the in-program period and from 0.72 to 1.69 for the post-program period. For the purposes of this paper, the important question is whether the variation across offices in the estimated impacts is statistically significant. We tested this using likelihood ratio tests, comparing our results with restricted results where the impact was not allowed to vary across offices. For both outcomes, the restriction was strongly rejected [note 12]. Therefore, we conclude that there is sufficient variation in the impacts across offices to warrant a further, more sophisticated, analysis to determine whether part of the variation can be explained by office characteristics.
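To give a feel for what Figures 1A and 1B report, the following sketch simulates offices whose program effects differ, computes a crude per-office incidence rate ratio (program-group mean divided by control-group mean for months on welfare), and then shrinks the office-level log-IRRs towards the overall mean using a simple normal-approximation, method-of-moments rule. It is only an illustrative stand-in for the multilevel Poisson and empirical Bayes machinery used in the paper, and every number in it is invented.

```python
import numpy as np

rng = np.random.default_rng(1)
n_offices, n_per_arm = 37, 120
base_months = 20.0                                   # invented control-group mean months on welfare

# Invented office-specific program effects on the log scale (true log incidence rate ratios).
true_log_irr = rng.normal(loc=-0.05, scale=0.15, size=n_offices)

log_irr = np.empty(n_offices)
var_log_irr = np.empty(n_offices)
for j in range(n_offices):
    control = rng.poisson(base_months, n_per_arm)
    program = rng.poisson(base_months * np.exp(true_log_irr[j]), n_per_arm)
    log_irr[j] = np.log(program.mean() / control.mean())
    # Delta-method variance of the log of a ratio of two Poisson means.
    var_log_irr[j] = 1.0 / program.sum() + 1.0 / control.sum()

grand_mean = np.average(log_irr, weights=1.0 / var_log_irr)
tau2 = max(0.0, log_irr.var(ddof=1) - var_log_irr.mean())    # between-office variance, method of moments
weight = tau2 / (tau2 + var_log_irr)                          # per-office shrinkage weight
shrunk = weight * log_irr + (1.0 - weight) * grand_mean

print("overall IRR:", round(float(np.exp(grand_mean)), 3))
print("raw office IRRs   :", np.exp(log_irr)[:8].round(2), "...")
print("shrunk office IRRs:", np.exp(shrunk)[:8].round(2), "...")
```

Offices with noisier estimates are pulled more strongly towards the overall mean, which is the qualitative behaviour of the empirical Bayes estimates plotted in Figures 1A and 1B; an IRR below 1 for months on welfare corresponds to less time on welfare in the program group.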
--- Methodological Framework for Explaining Cross-Office Variation --- Estimation Approach Our fundamental approach for examining variation in impacts across offices is based on a simple production function framework in which the implementation (or production) of ERA services within a particular office was related to a set of individual, environmental and office factors (or inputs). These factors are based on ERA participant needs and experiences as well as the manner in which ASAs provided the ERA services. In examining variation in ERA impacts across offices, we focus on ERA services that are consistent with the primary objectives of the demonstration, namely retention and advancement services. Two basic hypotheses will be tested (the specific variables related to each of these hypotheses are described in detail below). First, we hypothesize that the strength (or effectiveness) of ERA's impacts (as opposed to the direction of impacts) will be systematically related to the intensity of ERA services (reflected, perhaps, by the amount of time advisers spend with each ERA participant). Second, we hypothesize that the strength of ERA's impacts will be related to the types of ERA services provided (such as help with advancement or help with finding education and training opportunities). Both of these hypotheses are relevant for policy makers. For example, if it is the intensity of services that matters, then hiring additional caseworkers may represent an effective use of public funds. Or, if it is found that particular types of services are associated with greater impacts, then program operators who are not currently emphasizing such services might find it worthwhile to redirect their program delivery activities towards favoring such services. For both the above hypotheses, the direction of impacts (as opposed to the strength or effectiveness) will depend on the nature of the ERA service. If, for example, the service emphasizes longer-term outcomes beyond the follow-up period (such as encouraging investment in human capital through additional take-up of education and/or training), the impact on months of employment during the follow-up period may be negative and the impact on months receiving welfare may be positive. On the other hand, if the ERA service emphasizes shorter-term outcomes during the follow-up period (such as in-work advice or information about monetary benefits available from ERA) the impact on months of employment during the follow-up period may be positive and the impact on months receiving welfare may be negative. From the policy maker's perspective, negative impacts on employment and positive impacts on welfare receipt during the follow-up period may be viewed as somewhat disappointing, however from the individual's perspective these may lead to better long-term outcomes, beyond the follow-up period. It is important to keep in mind that when testing hypotheses about the relationship between the intensity and type of ERA services and the impacts of ERA, the control group plays an important role. Previous studies have identified the possibility of "substitution bias" in social experiments (Heckman and Smith, 2005, Heckman et al., 2000). Many control group members received services under the existing NDLP program that were similar to the services received by program group members under ERA. The impact of ERA will be influenced by the differential receipt of services between program and control group members. 
If control group members receive the same advancement services as program group members, then both might potentially benefit, but the impact of ERA would be zero. Thus, when we measure services received by ERA program group members in a particular office, we need to construct them as the difference in the receipt of those services between program and control group members, to account for possible substitution bias. The actual level of service receipt of control group members will influence control group (NDLP) outcomes, but not the impacts of ERA. In addition to individual-level data, our analysis uses office-level variables for both ERA program group members and control group members that allow us to relate the interoffice differences in impacts to the particular characteristics of the offices. Consequently, a multi-level statistical framework is required (see Bryk and Raudenbush, 2001, and Bloom et al., 2005). The model includes two office-level error terms. The first captures random variation in the average office-level outcome for the control group. The second captures random variation in the average office-level impact for the program group. It is the separate specification of the two error terms and the inclusion of office-level characteristics as explanatory variables that distinguish the multi-level model from the more traditional regression models used in the program evaluation literature. The multi-level Poisson model described above has the following formal statistical structure:

(1) Pr(Y_ji = y | α_j, β_j) = exp(-λ_ji) λ_ji^y / y!, with λ_ji = exp(η_ji),

(2) Level 1: η_ji = α_j + β_j P_ji + Σ_k γ_k CC_kji + Σ_k δ_k CC_kji P_ji,

(3) Level 2: α_j = α_0 + Σ_m φ_m SI_mj + Σ_n θ_n ST_nj + u_j,
    β_j = β_0 + Σ_m ψ_m DSI_mj + Σ_n ω_n DST_nj + v_j,

or, combining the equations for levels 1 and 2,

(4) η_ji = α_0 + Σ_m φ_m SI_mj + Σ_n θ_n ST_nj + β_0 P_ji + Σ_m ψ_m DSI_mj P_ji + Σ_n ω_n DST_nj P_ji + Σ_k γ_k CC_kji + Σ_k δ_k CC_kji P_ji + [u_j + v_j P_ji],

where Y_ji is the outcome (months receiving welfare or months employed) for individual i in office j; λ_ji is the Poisson rate and η_ji the linear predictor; P_ji is a dummy variable equal to 1 if the individual was assigned to the ERA program group; CC_kji are the individual characteristics; SI_mj and ST_nj are the office-level service intensity and service type measures for the control group; DSI_mj and DST_nj are the program-control differences in those measures; and u_j and v_j are office-level error terms attached to the control group outcome and the program impact, respectively. In estimating the parameters of this model, we assume that the office error terms u_j and v_j are correlated with each other and are realizations from a bivariate normal distribution with mean 0 and 2x2 variance matrix Ω. Estimation is performed using maximum likelihood. In all cases, the estimated variances of the error terms are statistically significant and the correlation coefficients of the error terms are negative and statistically significant (full results are available from the authors on request).
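As a reading aid for equation (4), here is a small, self-contained Python sketch that simulates data from a specification with this structure. All coefficient values, dimensions, and variable names are invented for illustration and are not the paper's estimates; the point is only to show how the office-level terms, the program dummy, and the correlated office errors u_j and v_j enter the Poisson rate.

```python
import numpy as np

rng = np.random.default_rng(1)

J, n_per = 37, 150                               # offices, individuals per office
office = np.repeat(np.arange(J), n_per)
P = rng.integers(0, 2, J * n_per)                # program group indicator P_ji
CC = rng.normal(size=(J * n_per, 3))             # grand-mean-centered characteristics CC_kji
SI = rng.normal(size=(J, 1))                     # office service intensity measure SI_mj
DST = rng.normal(size=(J, 2))                    # program-control service type differences DST_nj

# Illustrative coefficients (alpha_0, beta_0, phi, omega, gamma, delta) and correlated
# office-level errors (u_j, v_j) drawn from a bivariate normal with covariance Omega.
alpha0, beta0 = np.log(18.0), -0.10
phi = np.array([0.05])
omega = np.array([-0.08, 0.06])
gamma = np.array([0.10, -0.05, 0.02])
delta = np.array([0.03, 0.00, -0.02])
Omega = np.array([[0.04, -0.01], [-0.01, 0.02]])
uv = rng.multivariate_normal([0.0, 0.0], Omega, size=J)

# Combined linear predictor eta_ji of equation (4), with Poisson rate lambda_ji = exp(eta_ji).
eta = (alpha0
       + SI[office] @ phi
       + beta0 * P
       + (DST[office] @ omega) * P
       + CC @ gamma
       + (CC @ delta) * P
       + uv[office, 0]
       + uv[office, 1] * P)
y = rng.poisson(np.exp(eta))                     # simulated months outcome Y_ji
print(y.mean(), y[:10])
```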
--- Interpreting the Results ERA was designed as a randomized controlled trial and, since randomization was at the level of the individual, office-level impact estimates are also experimental. However, the analysis in this paper uses non-experimental techniques in order to examine the factors that appear to influence program effectiveness. In view of this, it is appropriate to consider the extent to which the estimation results can be viewed as capturing causal relationships rather than mere associations. There are two key issues that need to be considered in assessing the causal validity of the results. The first is that the characteristics of individuals may vary across offices in a way that is related to impact. It was explicit in the design of the ERA evaluation that the pilot areas should represent a broad variety of individuals and local economies. We might expect (and indeed our later results confirm this to be the case) that there will be variation across individuals in the effectiveness of ERA. The concern then is that the office-level variation in program effectiveness reflects compositional and other differences across offices. Our analysis controls for the effect of observed individual characteristics on both outcomes (equation 2) and impacts (equation 3). Likewise, we control for variations in area deprivation. There may, of course, be other influences that we do not observe and so cannot be controlled for. Our model assumes that unobserved office-level influences on outcomes are captured by the random error term for control group outcomes (u_j in equation 3). Similarly, unobserved office-level influences on impacts are captured by the random error term for program impact (v_j in equation 3). Our model implicitly assumes that, after allowing for the impacts to vary with individual characteristics, the level of local deprivation and unobserved office-level factors, there is no further variation in program effectiveness across subgroups defined by other unobserved characteristics. Given the rich nature of the individual characteristics included in the model and the narrowly defined criteria for inclusion in the experiment (lone parents looking for help re-entering the labor market), this seems a reasonable assumption. The second concern is that the type of service provided by an office may be endogenous in the sense that it is influenced by characteristics of the individual welfare recipients, local labor market conditions, or other factors that are unobserved. In addition to controlling directly for individual characteristics in the model, the office-level measures of service delivery are constructed in a way that controls for the characteristics of the individuals within that office. This is explained in detail in Section 5.2 (see equation 5) and goes some way towards addressing the potential endogeneity of service type. However, the possibility remains that there are unobserved characteristics that influence both office-level impacts and the implementation strategy adopted by an office. To gain some insight into this, we draw on the qualitative analysis carried out in the course of evaluating ERA and summarized in Hendra et al. (2011). This analysis found little evidence that offices chose strategies to fit around the particular characteristics of the individual welfare recipients. Instead, the intention was very much to deliver a standardized treatment across offices. To achieve this, each district had assigned to it a "Technical Adviser" whose role was to work with caseworkers in that district's offices to ensure that randomization ran smoothly and to advise on delivering in-work support. Furthermore, four of the six districts adopted a centralized approach, thereby limiting the scope for offices to choose their implementation strategies. Other factors do appear to have played a role. Staff shortages were a problem in some areas. In other areas, changes to management policy that were unrelated to ERA had an impact on delivery. For instance, district reorganization meant that some offices were reassigned to a new district, with consequent disruption to delivery, particularly when new district managers did not embrace the ethos of ERA. Overall, the qualitative evidence indicates that variation across offices in the type of support provided is most likely due to exogenous factors.
The strongest basis for achieving causal impact estimates would be if individuals were randomly assigned to offices. This was not feasible, particularly given the large distances between the offices, so we rely instead on a non-experimental approach. However, as with any non-experimental study, there is the possibility that one or more important variables have been omitted. In the discussion of the results, we use causal language, but the reader should remember that those causal statements are only valid when the assumptions of the model are satisfied. --- Data To estimate the parameters of equation (4), two kinds of data are required. First, there are the variables measured at the individual level (the outcomes, Y, and the individual characteristics, CC). Second, there are variables measured at the office level (service intensity, SI, and service type, ST). Office variables used in the analysis were derived from staffing forms and the personal interviews conducted during the follow-up period. --- Outcomes One of the main objectives of the ERA demonstration was employment retention (and hence, a reduction in time spent on welfare). Therefore, the outcomes we examine in this paper are the number of months on welfare and the number of months employed during the five-year follow-up period (roughly 2005 to 2009), distinguishing between the in-program and post-program periods. All outcomes were taken from administrative records: the DWP's Work and Pensions Longitudinal Study (WPLS) database. Information on welfare receipt and employment status is available on a monthly basis (for further details on the data sources, see Hendra et al. (2011)). The WPLS contains an identifier that can be used to link to the individuals in the experimental sample. The advantage of this relative to survey data is that there is no attrition in the dataset. Ideally, we would also have examined earnings as an outcome. However, both the earnings and log-earnings distributions were highly non-normal, implying that a linear specification was not appropriate (indeed, efforts to attempt such a model gave unstable results). We present some alternative estimates of how earnings impacts varied with office characteristics in section 7. --- Individual Characteristics Individual-level background characteristics were collected as part of the randomization process. Because they were recorded prior to randomization, these background characteristics are exogenous and thus can be included as regressors in the multi-level model; the specific characteristics used are listed in Table 1. Loosely, A-level qualifications are those typically gained at age 18, while O-level qualifications were usually gained at age 16. "A-level" is used as shorthand for "A-level or higher" and so includes the most highly qualified individuals. The measure of local deprivation we used is the "Index of Multiple Deprivation," produced by the UK Office of National Statistics. Distinct dimensions of deprivation such as income, employment, education and health are measured and then combined, using appropriate weights, to provide an overall measure of multiple deprivation for each area. Specifically, the areas are "Super Output Areas." For details, see http://www.neighbourhood.statistics.gov.uk/dissemination/Info.do?page=aboutneighbourhood/geography/superoutputareas/soa-intro.htm. As will be discussed below, to facilitate interpretation of the estimated coefficients, all individual characteristics were grand-mean-centered (expressed as deviations from the overall mean).
In addition, the estimated coefficients from the Poisson model were expressed in monthly equivalents by multiplying the incidence rate ratios minus one by the control group means (that is, the percentage effect of each variable times the control group mean for that variable). Table 1 presents means of the individual characteristics and outcomes used in our analysis, along with their cross-office range. As this table indicates, the sample overwhelmingly comprises female lone parents with generally low levels of educational qualifications. About one-half of these mothers have only one child and in about half of all cases the child is under the age of 6 years. More than 70 percent of the sample did not work in the year prior to random assignment and they received welfare for an average of 17 of the 24 months preceding random assignment. The median deprivation index in our sample is 27.4, which corresponds to approximately the 71st percentile of deprivation across England. Thus, our sample is somewhat overrepresented by individuals living in relatively disadvantaged areas. There was considerable inter-office variation in many of the characteristics, including marital status, educational qualifications, number and ages of children, prior work status, age and ethnicity of the individual, and the level of multiple deprivation in the community served by the office. The average individual in our sample spent about 26 months on welfare during the follow-up period (about 43 percent of the time) and was employed for roughly the same amount of time. Of the two outcomes, average months on welfare showed the greatest interoffice variation, ranging from 14.4 months to 35.3 months. Average months employed ranged from 20.3 months to 33.4 months. --- Office Characteristics As indicated above, we classify the office variables into service intensity (individual caseload measures) and service type. For the service-type variables, ERA-control differentials are used to explain variation in program impacts. To explain variation in control group outcomes, control-group values of the service-type variables are used. The caseload measures were constructed from monthly monitoring forms for the first 17 months of the experiment. All other office-level variables were constructed from individuals' responses to survey interviews carried out 12 and 24 months after random assignment (for details on the individual surveys, see Dorsett et al. (2007) and Riccio et al. (2008)). It is likely that the advice and support offered to individuals were influenced to some extent by their own characteristics. However, more relevant to the analysis is a measure of the extent to which the office emphasized particular elements of ERA (i.e., their philosophical approach to helping persons on welfare achieve self-sufficiency), controlling for differences in the caseload composition. Although office implementation philosophy cannot be observed directly from any of the available data sources, we form proxies for it by adjusting the individual survey measures to control for observable individual characteristics across offices that may have influenced the type of service implemented, using the following regression model:

(5) F_i = π_0 + Σ_k π_1k O_ki + Σ_k π_2k O_ki P_i + Σ_l π_3l CC_li + e_i,

where F_i is the survey measure of receipt of a particular service by individual i, O_ki is an indicator equal to 1 if individual i was served by office k, P_i is the program group dummy, and CC_li are the individual characteristics. The adjusted mean value of F for control group members in office k is given by π_1k, while the corresponding mean value of F for program group members is π_1k + π_2k. The program-control differential is π_2k.
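The following Python sketch illustrates the spirit of this adjustment under simplifying assumptions. Rather than estimating the full dummy-variable regression in equation (5), it residualizes a hypothetical service-receipt measure on individual characteristics and then forms office-level control means and program-control differentials, which play the roles of π_1k and π_2k. The data and all variable names are invented.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)

# Hypothetical data: F is a 0/1 survey measure of whether a given service was received.
n = 4000
df = pd.DataFrame({
    "office": rng.integers(0, 37, n),
    "program": rng.integers(0, 2, n),
    "age": rng.normal(33, 8, n),
    "n_kids": rng.integers(1, 4, n),
})
df["F"] = rng.binomial(1, 0.3 + 0.15 * df["program"])

# Step 1: remove the part of F explained by individual characteristics (a pooled linear
# probability regression stands in for the CC terms of equation (5)).
X = np.column_stack([np.ones(n),
                     df["age"] - df["age"].mean(),
                     df["n_kids"] - df["n_kids"].mean()])
coef, *_ = np.linalg.lstsq(X, df["F"].to_numpy(), rcond=None)
df["F_adj"] = df["F"] - X[:, 1:] @ coef[1:]      # keep the level, strip the CC variation

# Step 2: office-level control-group mean (analogue of pi_1k) and program-control
# differential (analogue of pi_2k) of the adjusted measure.
grp = df.groupby(["office", "program"])["F_adj"].mean().unstack("program")
office_measures = pd.DataFrame({
    "control_level": grp[0],
    "program_control_diff": grp[1] - grp[0],
})
print(office_measures.round(3).head())
```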
Overall, the adjusted office implementation measures are correlated to some extent with each other (meaning that offices that rank high on one measure have some tendency to rank high on the other), but the correlations are modest at most. Thus, we are able to treat these office implementation measures as separate variables in the statistical analysis. As noted above, the motivation for constructing office-level measures in this way is that it isolates the tendency for offices to vary in the degree to which they emphasize particular aspects of delivery after controlling for the fact that this is driven in part by the between-office variation in caseload composition. A simpler approach would be to use unadjusted measures and rely on the inclusion of individual characteristic variables in the level 1 regression (equation 2) to control for variations across office practices that stem from compositional differences. However, this simpler approach cannot achieve that aim since individual characteristics in the level 1 regression help explain only the variation in the level 1 outcome, not the variation in service intensity or type. A drawback to our approach is that, by subsequently including the π_1k and π_2k terms as regressors in the multilevel model, no account is taken of the fact that they are estimates and subject to error. While this may introduce a specification bias, data limitations prevent us from adopting a better approach. The specific office variables used in this study are the caseload per adviser and the proportion of advisers working with ERA participants (service intensity), together with the proportions of individuals advised to think long-term, receiving help finding an education or training course, receiving help with in-work advancement, receiving support while working, and aware of the employment retention bonus (service type). The differences in the service-type measures between the program and the control groups were interacted with the program group dummy variable (P_ji) and included in the level 2 equation determining β_j (the program impact). --- Summary Statistics for the Office Variables Table 2 presents the means and the cross-office range of the (regression-adjusted) office variables used in the multi-level analysis. The caseload averages about 29 individuals per adviser and about 42 percent of these advisers, on average, work with ERA participants. There is significant variation in the caseload across offices (from about 3 individuals per adviser to 110 individuals per adviser) and in the proportion of advisers working with ERA participants (from about 20 percent to 94 percent). For each of the service type measures, Table 2 presents the mean proportion for the control (NDLP) group, the mean proportion for the program (ERA) group, and the mean ERA-control group difference in the proportion. The first and third of these (control group value and ERA-control group difference) are used as variables in the multi-level model. The second (ERA value) is not directly included in the multi-level model (except for the retention bonus awareness variable) and is shown for informational purposes only. On average, for every service type, the ERA group had a higher proportion receiving that service than the control group. This is as would be expected; however, the differential is not always that great. In some offices, a greater proportion of the control group received the services, as reflected in the negative minimum values of the differential in the cross-office ranges. (Specifically, there were 4 offices in which the proportion of individuals advised to think long-term was higher among the control group than the program group; 7 offices where the proportion of individuals receiving help finding an education or training course was higher; 6 offices where the proportion receiving help with in-work advancement was higher; and 7 offices where the proportion receiving support while working was higher.) In no office were fewer than three-quarters of the ERA participants aware of the retention bonuses and in some offices all of the ERA participants surveyed were aware of the bonuses.
The considerable amount of services received by control group members may have contributed to the fact that there were few significant overall impacts in the ERA evaluation, and it highlights the importance of the type of model presented in this paper that attempts to control for possible substitution bias in estimating impacts of particular program features across offices. As will be indicated later in section 6.5, by empirically taking into account the possibility of substitution bias, the estimated coefficients on the office-level program-control group differences represent the impacts assuming no substitution bias (that is, the impact assuming all program group members receive the particular feature in question and no control group members receive it). We describe how these coefficients need to be interpreted to reflect the actual substitution biases present in the data. Table 3 presents a correlation matrix of the office variables for the control group and the ERA program group. For both groups, the correlations between the non-caseload variables are all positive, suggesting that retention and advancement services were being delivered together, although not perfectly. For the ERA group these positive correlations are consistent with the goals of the demonstration. From a statistical standpoint, the fact that the correlations are modest implies that it is theoretically possible to estimate the contribution of each element separately.
--- Results We present the results of estimating the multi-level Poisson model in Tables 4 through 7. As was done for the empirical Bayes estimates in Figures 1A and 1B, we present separate estimates for the in-program period (years 1 to 3) and the post-program period (years 4 and 5). Recall that the Poisson coefficients are presented in monthly terms to facilitate interpretation of the results. Table 4 shows the effects of the individual characteristics on the five-year control group outcomes. Table 5 shows how these individual characteristics affect the program impact (subgroup impacts). Table 6 shows how the office characteristics affect the control group outcomes and Table 7 shows how the office characteristics affect the program impacts. The coefficients represent deviations from the omitted reference groups (see Table 1). Thus, for example, the coefficient of 2.62 for individuals with A-level qualifications on months employed in years 1-3 is their additional months employed compared to individuals with no qualifications. --- Effects of Individual Characteristics on Outcomes The average control group member spent 17.9 months on welfare and 13.8 months employed during the in-program period and 7.3 months on welfare and 10.2 months employed during the post-program period. As would be expected, many of the individual characteristics are significantly related to the outcomes in both periods. Individuals who are younger (below age 30), less educated (qualifications below O-level), have less previous work experience (worked 12 or fewer months in the past three years), are non-white, and live in more deprived areas spent longer periods of time on welfare and had less time employed than their counterparts (who are aged at least 30, qualified at O-level or higher, worked more than 12 months in the three years before random assignment, white, and living in less deprived areas). Individuals who were not previously partnered also spent more time on welfare than those who were previously partnered, but did not spend less time employed during the in-program period, although they spent less time employed during the post-program period.
Interestingly, time spent on welfare declines systematically during the in-program period according to the calendar time of random assignment (the later the time of random assignment, the fewer the months spent on welfare). At first sight, this seems somewhat surprising given that the onset of recession in the second quarter of 2008 will have affected the labor market outcomes of those randomized earlier less than those randomized later. However, there are two countervailing factors. First, a feature of the recent recession is that, up until the second quarter of 2010 (the latest period for which outcomes are considered in this analysis), the reduction in the overall employment rate was driven almost entirely by the fall in the proportion of men in work. As we have already seen, the NDLP group is predominantly female, and women's employment remained comparatively stable. Second, policy developments in the UK have increased the conditions placed on lone parents. For example, those in receipt of welfare have had to attend an increasing number of work-focused interviews and, since 2005, to agree an action plan with their adviser to prepare themselves for work (Finn and Gloster, 2010). As another example, since 2008, lone parents with a youngest child aged 12 or over are no longer entitled to welfare solely on the grounds of being a lone parent (DWP, 2007). Those randomly assigned more recently will have been subject to the new regulations for a greater proportion of their follow-up period than those randomly assigned earlier. (A fuller discussion of policy developments in the UK during the years ERA was conducted is presented in Hendra et al. (2011).) --- Effects of Individual Characteristics on Program Impacts Table 5 presents the effects of the individual characteristics on program impacts over the three-year in-program and two-year post-program periods. For comparison purposes, the grand mean impact of ERA (β_0) is included in the table. The coefficients represent deviations from the impacts of the omitted reference groups (see Table 1). Thus, for example, the coefficient of 2.66 for individuals with A-level qualifications on months employed during the in-program period is their additional impact compared to individuals with no qualifications. Note that the impacts for individuals in the reference groups (those with no qualifications in this example) are not shown in Table 5. All that the table shows are deviations in impacts from the reference group, and not the impacts themselves for either group. The average response to ERA (the grand mean impact in Table 5) is statistically significant for both outcomes during the in-program period, but is not statistically significant during the post-program period. During the in-program period, months on welfare declined by about one and a half months (8.5 percent) and months employed increased by about three-quarters of a month (5.5 percent). Several of the impacts vary significantly across subgroups. One notable finding has to do with educational qualifications. It appears that individuals with O- and A-level qualifications had stronger responses to ERA over the full five-year follow-up period than individuals with no qualifications. They had larger reductions in the number of months on welfare, and larger increases in the number of months employed, than individuals with no qualifications.
Another notable result is that during the in-program period, ERA seems to have had its biggest impacts on individuals who had the least amount of employment during the three years prior to random assignment. Specifically, months on welfare fell by more and months employed increased by more for individuals who had been employed for a year or less in the three years prior to random assignment. These impacts did not carry over into the post-program period; in fact, months on welfare actually rose for these individuals during the post-program period. Still another notable result is that the impacts on months receiving welfare and months employed seem to have varied with the degree of local area deprivation, particularly during the in-program period. Specifically, ERA participants living in more deprived areas had larger reductions in months on welfare and larger increases in months employed than ERA participants living in less deprived areas. Thus, ERA appears to have been more effective in more deprived areas. Finally, ERA seems to have caused larger reductions in months on welfare and greater increases in months employed for older individuals (aged 30 years and above) and minority individuals. --- Effects of Office Characteristics on Office Control Group Outcomes Table 6 shows how the office characteristics affect office control group outcomes. In other words, the results in Table 6 provide an indication of whether office characteristics are systematically related to office outcomes for standard NDLP participants. For comparison purposes, the grand mean control group outcome (α_0) is also shown. In addition to presenting the coefficient estimates, we also present the interquartile range of the outcome across offices. The interquartile range is the predicted outcome from the 25th percentile of the office characteristic to the 75th percentile. The interquartile range provides an indication of how the control group outcome varies across offices possessing the middle 50 percent range of values of a particular characteristic.
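As an illustration of how these interquartile ranges relate to the reported coefficients, the short Python sketch below converts a hypothetical incidence rate ratio into a monthly-equivalent effect ((IRR - 1) times the control group mean, as described earlier) and then scales it by the 25th-to-75th percentile spread of the office characteristic. The numbers are invented, and the linear scaling is a simplification of the model's predicted outcomes.

```python
import numpy as np

# Hypothetical ingredients: an incidence rate ratio for one office characteristic, the
# control-group mean of the outcome, and the cross-office distribution of the
# characteristic (a proportion between 0 and 1). None of these are the paper's estimates.
irr = 1.40                                       # IRR for moving the proportion from 0 to 1
control_mean_months = 17.9                       # control-group months on welfare, years 1-3
office_proportions = np.random.default_rng(3).uniform(0.1, 0.6, 37)

# Monthly equivalent of the coefficient: (IRR - 1) x control-group mean.
months_full_range = (irr - 1.0) * control_mean_months

# Linearized interquartile-range effect: scale by the 25th-to-75th percentile spread of
# the characteristic actually observed across offices.
q25, q75 = np.percentile(office_proportions, [25, 75])
months_iqr = months_full_range * (q75 - q25)

print(f"0-to-1 effect: {months_full_range:.1f} months; interquartile effect: {months_iqr:.1f} months")
```

This is why a characteristic can carry a large coefficient yet be associated with only a modest interquartile range when offices differ little in that characteristic.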
As Table 6 indicates, offices with higher adviser caseloads had control group members who spent more months on welfare and fewer months employed over the five-year follow-up period. The results also suggest that in offices where all NDLP recipients receive help in finding education courses, the amount of time spent on welfare is increased by 11 months during the in-program period and 6 months during the post-program period, and the amount of time spent in work is reduced by 6 months during the in-program period and 3 months during the post-program period, relative to offices where no recipients receive such help. These are sizeable effects. While they imply greater dependence on welfare during the five-year follow-up period, they may imply greater self-sufficiency in the long run if the education eventually leads to upgraded skills and higher employment and earnings. However, inspection of Table 6 reveals that the effects of prolonging welfare and reducing employment are stronger during the in-program period and gradually weaken after that. As was the case for the effects of caseload size, the variation across offices in the proportion of recipients receiving help in finding education courses is not great. In contrast to education services, in offices where all recipients receive help with in-work advancement, there is a statistically significant effect on months employed during the in-program period, but not during the post-program period nor on months receiving welfare at any time during the full five-year follow-up period. The employment effect is sizeable, but does not vary much across offices. Finally, in offices where individuals receive support while working, months on welfare decline and months employed increase during both the in-program and post-program periods, although the post-program effect on welfare is not statistically significant; but, as in the case of help with in-work advancement, there is little variation in this effect across offices. Taken together, these results suggest that certain services matter for traditional NDLP recipients, particularly those that target education and employment activities. However, those that target education tended to prolong welfare receipt and delay employment during the five-year follow-up period while those that target employment tended to have the opposite effect, reducing time spent on welfare and increasing time employed during the five-year follow-up period. Because we do not have data beyond the five-year follow-up period, we are unable to determine whether the additional education help received during the five-year follow-up period eventually leads to lower receipt of welfare and greater employment over the longer run. --- Effects of Office Characteristics on ERA Program Impacts Table 7 shows how the office characteristics are related to ERA program impacts. Recall that these results are based on a non-experimental analysis and can only be given a causal interpretation if the assumptions of the model are satisfied. Also recall that for the ERA input types available to control group members (advice for thinking long-term, help in finding education courses, help with in-work advancement, and support while working), the office characteristics included in the multi-level model are measured as differences in the proportions receiving such services between the ERA program group and the control group (see Table 2). The other two office characteristics included in the multi-level model (the proportion of advisers working with ERA participants and the proportion of ERA participants aware of the employment retention bonus) apply only to ERA program group members and, hence, are simply measured as the proportion for ERA program group members. As indicated in Table 7, there are statistically significant impacts of ERA on welfare receipt and employment during the in-program period, but not during the post-program period. During the in-program period, welfare receipt is reduced by 1.5 months (an 8 percent impact) and employment is increased by 0.8 months (a 6 percent impact). During the in-program period, five of the six office characteristics are estimated to be significantly related to ERA program impacts. First, in offices where all of the advisers were working with ERA participants, the average program group member spent 3 fewer months on welfare, but was not employed significantly longer, than in offices where no advisers were working with ERA participants. To put it another way, an individual in an office with a 10 percentage point higher proportion of advisers working with ERA participants will have 0.3 fewer months on welfare than an individual in an office where the same proportion of advisers worked with ERA participants and control group members (NDLP recipients). The information on interquartile ranges is very important because, in practice, few of the program-control group differences in receiving this kind of help were very large, so the effect translates to only about a 0.6 month interquartile range across offices in the impact of the advisers on welfare receipt.
Second, in offices where all ERA participants were given help finding education courses but control group members were not, the average program group member spent almost 4 more months receiving welfare and 4 fewer months employed, although the welfare effect is not statistically significant. Again, the information on interquartile ranges is very important because, in practice, few of the differences in receiving this kind of help were very large, so the effect translates to only about a 1.1 month interquartile range across offices in the impact on welfare receipt and about a 1.2 month interquartile range across offices in the impact of this service on months employed. Third, in offices where all ERA participants received help with in-work advancement, but control group members did not, the average program group member spent almost 8 more months employed, but the reduction in time spent on welfare was not statistically significant. Again, few of the differences in receiving this kind of help were very large across offices, so the effect translates to only about a 1.2 month interquartile range across offices in the impact of this service on months employed. Fourth, in offices where all ERA participants received support while working, the average program group member spent 3.5 fewer months on welfare and was employed for 3.2 more months. The interquartile range of impacts was about 1.1 months for welfare and 1.0 months for employment. Finally, in offices where all ERA participants were aware of the bonus, the coefficient implies that they would have spent 9.4 fewer months on welfare than in offices where no ERA participants were aware of the bonus. There is also a sizeable coefficient of 8.4 months for employment, but it is not statistically significant. In practice, almost all ERA participants were aware of the bonus (no office had fewer than 75 per cent aware), so while the bonus was apparently an important part of the ERA program design, it translated into a moderately small (about 1 month) interquartile range of ERA program impacts across offices. Virtually all of the services that had a statistically significant impact during the in-program period retain their statistical significance during the post-program period. The one exception is the impact of help with in-work advancement on employment, which is no longer statistically significant in the post-program period. However, the impact of this service remains positive. For all of the services, as was the case during the in-program period, the interquartile ranges of impacts were modest because of mostly small program-control group differences in receipt of these services. --- An Alternative Specification to Examine Earnings As indicated earlier, the chief objective of ERA was to encourage employment retention, and so our main outcomes of interest were time spent employed and time spent on welfare. However, ERA also aimed to promote advancement in employment. Pay progression is one possible manifestation of advancement, so it is of interest to consider earnings as an outcome. Unfortunately, as noted in section 5, it was not possible to estimate a multilevel model for earnings.
In order to have some sense of how earnings impacts vary with program-control differences in office characteristics, we present in this section supplementary results using a "reduced form" estimation approach, similar to the one used by Somers et al. (2010) in examining how impacts on student grades vary with program implementation conditions in a demonstration of supplemental literacy courses for struggling ninth graders. Methodologically, we use a linear regression model, but cluster the standard errors in order to allow for within-office correlation of errors. This approach implies a simplified version of equation (4) as follows:

(6) Y_ji = α_0 + β_0 P_ji + Σ_m ψ_m DSI_mj P_ji + Σ_n ω_n DST_nj P_ji + Σ_k γ_k CC_kji + Σ_k δ_k CC_kji P_ji + d_j + u_ji.

It is helpful to highlight the differences between this specification and the multilevel model. First, to control for variations between offices in the level of earnings, an office-specific term, d_j (in effect a fixed office intercept), has replaced the random effect u_j. A consequence of this is that variables that do not vary within offices cannot be included, so the Σ_m φ_m SI_mj and Σ_n θ_n ST_nj terms from equation (4) are no longer present, and therefore variation in control group outcomes with office characteristics cannot be estimated. Second, this specification does not involve the interaction term v_j P_ji. This amounts to an assumption that the office-level error term v_j in equation (4) is zero. In other words, all variation in program impacts is assumed to be explained by the program-control differences in services. Third, an individual-level error term, u_ji, has been introduced since we are now estimating a linear regression model rather than a Poisson model. The results provided by this model are of interest both in themselves and also because they represent a more common estimation approach seen in the literature. We preface them by noting that, for the welfare and employment outcomes, the estimated variances and correlation coefficients of the office-level error terms are statistically significant, so our expectation might be that this would also apply when considering earnings. In view of this, the results in this section may be based on a mis-specified model.
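A hedged sketch of this kind of estimation in Python using statsmodels is shown below: a linear earnings regression with office fixed effects and standard errors clustered on office. The dataset is simulated and every variable name (earnings, program, dst_advance, age) is a hypothetical stand-in for the P_ji, DST_nj x P_ji and CC_kji terms of equation (6), not the paper's data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)

# Simulated earnings data with a hypothetical office-level program-control difference.
n = 4000
df = pd.DataFrame({
    "office": rng.integers(0, 37, n),
    "program": rng.integers(0, 2, n),
    "age": rng.normal(33, 8, n),
})
dst_advance = rng.normal(0.1, 0.05, 37)          # office-level program-control difference
df["dst_advance"] = dst_advance[df["office"]]
df["earnings"] = (6000 + 300 * df["program"]
                  + 2000 * df["dst_advance"] * df["program"]
                  + 50 * (df["age"] - 33)
                  + rng.normal(0, 2500, n))

# Office fixed effects (C(office)) absorb level differences across offices, so office-level
# main effects drop out; standard errors are clustered on office to allow within-office
# correlation of the individual error terms.
model = smf.ols("earnings ~ C(office) + program + dst_advance:program + age + age:program",
                data=df)
result = model.fit(cov_type="cluster", cov_kwds={"groups": df["office"]})
print(result.params[["program", "dst_advance:program"]])
print(result.bse[["program", "dst_advance:program"]])
```

Note that, exactly as in the prose above, the office-level main effect of dst_advance cannot be included alongside the office fixed effects; only its interaction with the program dummy is identified.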
With this caveat in mind, the results are presented in Table 8. With regard to the overall impact of ERA, this was statistically significant in 2005/6, increasing annual earnings by an estimated £309. There was no significant impact in later years. This is consistent with the welfare and employment impacts, which showed significant impacts during the in-program period but not the post-program period. Under this specification of the model there is no variation in program impacts other than that associated with program-control differences in services. Consequently, Table 8 does not report an interquartile range around the grand mean impact. Program impacts did not vary with the proportion of advisers working with ERA participants except in 2008/9, where the reported positive coefficient translates into an interquartile range of nearly £300 across offices in the impact of advisers. This is consistent with the reported results for time spent on welfare, which also showed a variation that became more statistically significant in the post-program period. Higher earnings impacts in 2005/6 were also seen in offices where the proportion of ERA participants advised to think long-term was higher. The interquartile range in this case was just over £500. However, this variation was not statistically significant in later years. For welfare and employment outcomes, there was no significant variation in any year. There is evidence that the earnings impacts were lower in offices that provided more help with finding education courses. This was consistent across all years, although only statistically significant in 2005/6 and (especially) 2008/9. The interquartile range in 2008/9 is £769. It is perhaps of some concern that these longer-term outcomes are not suggestive of emphasis on education being rewarded with positive returns. It is of course possible that this finding could be reversed with even longer-term outcomes. We note that these results are consistent with those reported for employment impacts. Emphasizing help with in-work advancement, on the other hand, is associated with stronger earnings impacts in all years. Beginning in 2006/7, these variations are statistically significant, and the interquartile range is quite stable at £375, £511 and £461 in this and the two subsequent years, respectively. The employment impacts showed similar variation during the in-program period but not during the post-program period. There was no significant impact variation in any year with the proportion of ERA participants receiving support while working. This is in contrast to the welfare and employment impacts, for both of which this appeared to be a key factor along which impacts varied. Nor was there any variation associated with awareness of the retention bonus, something that had been shown to be correlated with program effectiveness when considering exits from welfare. However, the bonus awareness coefficients are positive and large for all four years and in three of the years the coefficients are not too far from being statistically significant. Overall, this summary of the earnings results has revealed a general consistency with the welfare and employment outcomes, but there are also some differences. The reasons for the differences are not clear but could simply be the result of the different estimation techniques followed. In view of this, and of our preference for the multi-level specification, we do not attempt to interpret these differences. --- Conclusions and Policy Implications For out-of-work lone parents, the ERA demonstration had statistically significant impacts on welfare receipt and employment during the in-program period (years 1 to 3), and these impacts varied significantly across the offices that participated in the demonstration. The main purpose of this study has been to examine how program impacts varied with differences across the offices in the way the ERA program was implemented. Secondary objectives of this study have been to determine whether office characteristics can help explain cross-office variation in the control environment (under the standard NDLP program) and whether the impacts of ERA vary with certain personal characteristics of the ERA participants (subgroup impacts). In interpreting the results of this study, it is important to understand that while certain office characteristics may be quite important in explaining outcomes and impacts, lack of variation in these characteristics across offices may lead to only a small estimated variation in these outcomes and impacts across offices.
Thus, for example, while our results indicate the importance of conveying information about the financial rewards available to lone parent ERA participants who maintain employment (given by the estimated coefficients in Table 7), there was not much variation in the actual conveying of this information across offices, so it is associated with only modest variation in program impacts across offices. Our results indicate that ERA was especially effective at reducing welfare receipt and increasing employment for lone parents with O- and A-level qualifications, those living in more deprived areas, and those aged 30 or over. Subgroup variation was not, though, the primary focus of this analysis. Our main results concern impact variation with office characteristics. Several such characteristics were found to be related to the control environment (outcomes of control group members under the standard NDLP program). Offices with higher adviser caseloads had control group lone parents that spent more months on welfare and fewer months employed over the five-year follow-up period. The results of this study are also interesting in another regard. While the overall impact of ERA on welfare and employment 4 to 5 years post-randomization was not statistically significant (see Table 7), we find that this masks significant variation of impacts across offices, some being positive and some negative. This suggests that, in addition to focusing on overall impacts, which is typically done in employment and training demonstrations such as the one examined here, policy evaluation should, where possible, pay attention to implementation procedures across offices where the program is being conducted. Rather than concluding a policy to be ineffective, the type of approach presented in this paper may offer a means of learning from those with positive impacts in order to refine policy and, in time, raise overall effectiveness. Although we were unable to estimate a multi-level model of earnings due to statistical convergence problems, we were able to estimate a simpler, more restrictive, earnings model that has been used in other studies to examine variation in program impacts with program implementation practices. The earnings model estimates are roughly consistent with the multi-level welfare and employment models, but there are also some differences, primarily in statistical significance rather than direction of effects. In conclusion, it is relevant to mention that, as with any long-term study, the economic and policy environment changes. Most obviously, the results relate to a period marked by severe recession and associated increases in unemployment. Equally relevant, though, is the fact that the last few years have seen a number of policies introduced that directly affect lone parents in the UK. Lone parents have been increasingly required to attend work-focused interviews and those with a youngest child aged 7 or over now have to actively seek work. Furthermore, In-Work Credit was introduced in 2008, providing weekly subsidies to lone parents entering work of 16 or more hours per week. The effect of such policy developments is to reduce the contrast between the service available to the ERA group and that available to the control group, and it has an important bearing on how to view the overall effect of ERA.
However, despite these policy changes and despite the fact that our analysis is non-experimental, we have obtained plausible results identifying those particular implementation features that tended to be linked to stronger impacts of ERA. --- Notes to Tables 4-7: Coefficients in Tables 4 and 5 represent deviations from the omitted reference groups (see Table 1). Thus, the coefficient of 1.25 for individuals who were never partnered on months on welfare in years 1 to 3 (Table 4) implies that they spent 1.25 months longer on welfare than customers who were previously partnered (not shown in table), and the coefficient of 0.38 for the same group (Table 5) is their additional impact compared to customers who were previously partnered (not shown in table). In Tables 6 and 7, the interquartile range is the predicted outcome (or impact) from the 25th percentile of the office characteristic to the 75th percentile. *Significant at 10 percent level; **Significant at 5 percent level; ***Significant at 1 percent level.
The United Kingdom Employment Retention and Advancement (UK ERA) demonstration was the largest and most comprehensive social experiment ever conducted in the UK. It examined the extent to which a combination of post-employment advisory support and financial incentives could help lone parents on welfare to find sustained employment with prospects for advancement. ERA was experimentally tested across more than 50 public employment service offices and, within each office, individuals were randomly assigned to either a program (or treatment) group (eligible for ERA) or a control group (not eligible). This paper presents the results of a multi-level non-experimental analysis that examines the variation in office-level impacts and attempts to understand what services provided in the offices tend to be associated with impacts. The analysis suggests that impacts were greater in offices that emphasized in-work advancement, support while working and financial bonuses for sustained employment, and also in those offices that assigned more caseworkers to ERA participants. Offices that encouraged further education had smaller employment impacts. The methodology also allows the identification of which services are associated with employment and welfare receipt of control families receiving benefits under the traditional New Deal for Lone Parent (NDLP) program.
Introduction The number of people affected by humanitarian crises is on the rise, perpetuated by armed conflict and natural disasters [1]. In 2017, there were over 65 million forcibly displaced people, over half of whom were under the age of 18 [2]. In addition, over one billion children live in countries affected by armed conflict [3]. Environmental factors, including climate change, are likely to increase the number of conflicts and intensify the severity of natural disasters [4][5]. Armed conflicts and large-scale disasters increase the potential for family separation and the erosion of existing support systems, putting children at risk of abuse, exploitation, violence, and neglect. The widespread economic shocks that often accompany humanitarian crises create further vulnerabilities for children when households employ negative coping strategies to manage economic stress. In Lebanon, where over one million Syrian refugees have been registered with the United Nations High Commissioner for Refugees (UNHCR), child marriage and child labor have been reported as families struggle financially [6][7]. Children in circumstances of economic and physical insecurity are also at risk of child trafficking, sexual exploitation, and recruitment by armed forces and extremist groups. Within these contexts, child protection experts in non-governmental organizations (NGOs) and multilateral institutions, such as the UN Children's Fund and the United Nations High Commissioner for Refugees, work to prevent and respond to incidents of abuse, neglect, exploitation, and violence against children. These efforts can take the form of broader systems-strengthening interventions that seek to build the capacity of national actors to implement effective social support systems that care for children and families, both in formal and informal spheres. As a complement to systems strengthening, child protection initiatives may also take the form of direct implementation, such as the establishment of "Child Friendly Spaces (CFS)" that allow children safe zones to play, parenting trainings that emphasize alternatives to physical punishment, or family tracing and reunification for unaccompanied or separated children. Yet the assumptions that drive such child protection efforts in humanitarian practice have not yet been fully grounded in scientific evidence. Protection risks are often estimated and prioritized based on anecdotal accounts [8], definitions of child protection concepts are often not standardized [9], and there is scant evidence on the effectiveness of many of the sector's universally agreed upon standard interventions [10][11][12]. To begin addressing these gaps in empirical research within the sector of child protection in humanitarian contexts, a research priority setting exercise, adapted from the Child Health and Nutrition Research Initiative (CHNRI), was undertaken to identify and rank research priorities. This manuscript presents the process and results of this participatory ranking methodology designed to guide future research investment. --- Methods The Child Health and Nutrition Research Initiative (CHNRI) was designed as a tool to help guide policy and investment in global health research, specifically children's health. CHNRI has since been used to establish research priorities across a broad array of global health disciplines [13][14][15][16][17][18][19][20].
The method comprises four stages: (i) determining the boundaries of investigation and creating evaluation criteria; (ii) obtaining and systematically listing input from key stakeholders on critical priorities/tasks (referred to as "research questions") to address gaps in sectoral evidence or knowledge; (iii) enlisting stakeholders to rank the research questions based on a pre-defined set of evaluation criteria; and (iv) calculating research priority scores and agreement between experts (Fig 1). A more detailed explanation of the CHNRI method has been published elsewhere [21][22][23]. The present study was commissioned by the Assessment, Measurement and Evidence Working Group of the Alliance for Child Protection in Humanitarian Action (ACPHA) and was informed by prior consensus-building efforts in the sector [24][25]. In collaboration with a Lead Researcher, the CHNRI method was adapted to prioritize research topics in the sector of child protection in humanitarian settings. For the purposes of this exercise, a 'humanitarian setting' was defined as "acute or chronic situations of conflict, war or civil disturbance, natural disaster, food insecurity or other crises that affect large civilian populations and result in significant excess mortality" [26]. The goal of 'child protection' efforts is "to protect children from abuse, neglect, exploitation, and violence" [27]. And 'children' were defined as "individuals under the age of 18" [28]. Experts working on issues of child protection in humanitarian settings were then invited to take part in semi-structured interviews to discuss the gaps in knowledge and evidence that existed within the sector and to generate research priorities to address these gaps. Forty-seven experts participated in this first round of evidence generation, with representatives from Non-Governmental Organizations (NGOs), United Nations (UN) agencies, donor agencies, and research institutions. Experts were initially identified through three coordination bodies (the Alliance for Child Protection in Humanitarian Action (ACPHA), the Child Protection Area of Responsibility (CP AoR), and UNHCR), with the network extended through snowball sampling. Respondents were strategically diversified to include inputs from those involved in various child protection job functions, including implementation, coordination, policy development, and academia, from a range of geographic locations (Table 1). Recruitment continued on a rolling basis and ended once data saturation, defined as the point at which no new data were being generated, was achieved. The final sample was consistent with previous research that identified 45-55 as the number of experts at which collective opinion stabilizes [29]. Aligned with prior CHNRI studies in humanitarian contexts [14], interviews were held via Skype, with experts notified in advance that they would be requested to provide their opinions on the most important areas for investment to improve the state of evidence in the field of child protection in humanitarian settings in the next 3-5 years. Participants were encouraged to follow up by email in the event they were able to generate further ideas after the interview had concluded. Through an iterative process, the Lead Researcher then collated 24 hours of interview notes to identify 90 unique research priorities, condensing interrelated research ideas and simplifying concepts for use in the ranking exercise.
The priorities were then thematically organized into the following pre-determined themes: Epidemiological Research; Policy and Systems Research; and Intervention Research (Table 2). The research team provided review and consensus on the themes and categorization, after which the areas for research were listed within the online survey. The survey was pilot tested by individuals who were not involved in the development of research questions but who had general knowledge of humanitarian concepts and survey design. Further, to ensure that question order did not bias results, we implemented a page randomization that shuffled page order within the survey for each new respondent. Experts who participated in the interview process were invited to take part in the online ranking portion of the prioritization exercise. Two additional experts, who were either not previously available or who reached out to participate after the period for interviews had passed, were also invited to take part in the survey. Each of the 90 research priorities was ranked on four criteria: (i) Relevance - research will support learning that contributes to the prevention and response to abuse, neglect, exploitation, or violence in humanitarian settings; (ii) Feasibility - research is feasible to conduct in an ethical way; (iii) Originality - research will generate new findings or methods; and (iv) Applicability - research will be readily applied to programs and policies. Relative weights were not assigned to scoring criteria. For each research question, participants were offered six possible responses: strongly agree (5 points); agree (4 points); undecided (3 points); disagree (2 points); strongly disagree (1 point); and insufficiently informed (considered non-applicable/no response). The scoring matrix was a deviation from past CHNRI studies, which typically offered four possible responses: yes (1 point), no (0 points), undecided (0.5 points), and insufficiently informed/no response. In the development of the present research design, the study team elected to use a full Likert scale to allow for greater granularity when analyzing scores. Aligned with the CHNRI methodology [13][14][15][16][17][18][19][20], every research question was assigned a priority score under each of the four judging criteria, calculated as a percentage by dividing the point total by the maximum number of points available, after excluding from the denominator those who did not answer the question or who reported that they were insufficiently informed [14]. For each question, the overall Research Priority Score (RPS) was then calculated by taking the mean of the priority scores for the four judging criteria, as calculated above. Research questions were then ranked from highest to lowest on overall priority scores, and the top fifteen are presented in Table 3. Standard deviations for RPS are also included to show the variation between the priority scores for each judging criterion (Table 3, S1 Annex). --- [Table 2 excerpt: Research type - basic epidemiological and social science research, which aims to define the incidence or prevalence of abuse, exploitation, or violence against children, or to identify the underlying risk factors associated with violations against children (abbreviation: EPI; 22 items). Sub-theme #1 - measuring the incidence or prevalence of child protection concerns in humanitarian settings.] In addition, the Average Expert Agreement (AEA) was calculated for each research question.
In order to obtain AEA values, we consolidated "strongly agree" and "agree" as well as "strongly disagree" and "disagree". For each judging criterion, the number of modal responses was then divided by the total number of scorers for that question, again excluding those who did not answer the question or who reported they were insufficiently informed on the research question being assessed. Following this calculation, the ratios were then summed and divided by the number of judging criteria. Both RPS and AEA were calculated for the entire group of respondents as well as for subgroups, in order to analyze differences in priorities for those located in field settings as compared to those based in non-operational settings. Data were analyzed using Microsoft Excel. --- Ethics statement Formal ethics review is usually not requested for undertaking CHNRI exercises [13][14][15][16][17][18][19][20], as the exercise does not involve personal or otherwise sensitive data. Participants were solicited via established professional networks whose purpose is to facilitate and enable information-sharing. Prior to participation in initial Skype interviews, all participants were informed of the nature of the research and the anonymity of their feedback. --- Results Of the 49 respondents invited to take part in the online ranking, 41 experts participated, yielding a response rate of 83.7 percent. Research questions from all three of the research domains (epidemiological research; policy and systems research; and intervention research) featured in the top 15 research priorities. Intervention research was the predominant domain, with 8 of the top 15 priorities falling within this realm. Policy and systems research followed with 5 priorities, and epidemiological research with only 2 featured priorities ranking in the top 15 (Table 3). The range of overall RPS was 63.28 to 86.33, with the highest ranked priority being the rigorous evaluation of the effectiveness of cash-based social safety nets to improve child wellbeing. Within the top 15 priorities, RPS ranged from 80.70 to 86.33. Intervention research which aims to rigorously evaluate the effectiveness of standard child protection activities provided in humanitarian settings ranked highly. Two questions concerning child labor, specifically estimating the prevalence and understanding the effectiveness of interventions to reduce the practice, ranked in the top ten priorities. Respondents also prioritized research efforts to understand how best to mobilize local systems, including the local social service workforce and para-social work models, in order to sustain child protection gains after international actors have departed a crisis. AEA scores ranged from 41.55 to 85.63, representing the percentage of respondents who provided the same score on a research priority (averaged across the four judging criteria). For the top 15 research investment options, AEA ranged from 69.04 (to build the capacity of child protection sector staff in empirical research design and data analysis planning) to 85.75 (to evaluate the effectiveness of interventions to reduce child labor) (Table 3). We found higher levels of respondent agreement among research questions with higher RPS rankings, demonstrating that a certain level of consensus was attained in order for research topics to be prioritized in the higher ranks (Fig 2). Standard deviations (SD) were also analyzed in order to assess variation between the judging criteria.
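Before turning to the detailed results, the scoring arithmetic just described can be made concrete with a minimal Python sketch. This is an illustration only (the study itself reports analyzing the data in Microsoft Excel); the function names and the five hypothetical experts' responses below are assumptions, not study data.

```python
import statistics

# Likert responses coded 5..1; None = skipped / "insufficiently informed".
# question[criterion] is one response per expert for a single research question.

def criterion_priority(responses):
    """Points earned as a percentage of points available, excluding experts
    who skipped the item or felt insufficiently informed."""
    answered = [r for r in responses if r is not None]
    return 100 * sum(answered) / (5 * len(answered))

def research_priority_score(question):
    """Overall RPS: mean of the priority scores across the four judging criteria."""
    return statistics.mean(criterion_priority(r) for r in question.values())

def criterion_agreement(responses):
    """Percentage of scorers giving the modal response after collapsing
    strongly agree/agree and strongly disagree/disagree."""
    collapse = {5: "agree", 4: "agree", 3: "undecided", 2: "disagree", 1: "disagree"}
    answered = [collapse[r] for r in responses if r is not None]
    modal = max(answered.count(v) for v in set(answered))
    return 100 * modal / len(answered)

def average_expert_agreement(question):
    """AEA: mean of the agreement percentages across the four judging criteria."""
    return statistics.mean(criterion_agreement(r) for r in question.values())

# Hypothetical scores from five experts on one research question
question = {
    "relevance":     [5, 5, 4, 4, None],
    "feasibility":   [4, 3, 4, 2, 5],
    "originality":   [3, 4, 4, None, 4],
    "applicability": [5, 4, 5, 4, 4],
}
print(round(research_priority_score(question), 2),
      round(average_expert_agreement(question), 2))
```

For this hypothetical question the sketch prints an RPS of 81.25 and an AEA of 83.75; in the study, the same two quantities were computed for each of the 90 research questions and then used to rank them.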
Among the top 15 research priorities, SDs ranged between 2.5 and 5.4, with the exception of the evaluation of psychosocial programming, which had an SD of 8.2 due to the comparatively lower score provided on Originality. This is likely due to recent work on this particular topic that has been widely circulated [30] and was therefore deemed less original in the ranking process. When comparing all RPS scores between respondents who resided within an operational setting and those who did not, there was a correlation coefficient of 0.32, indicating a weak but positive association. The top ten research priorities differed between the two groups (Table 4). With the exception of rigorously evaluating family strengthening programs, which ranked highly for both groups of respondents, there were no other priorities that jointly ranked among the top ten. For field-based respondents, the most important initiative was to identify best practices for bridging humanitarian and development initiatives for child protection system strengthening. Field-based respondents tended towards the identification of best practices while also prioritizing capacity building for child protection sector staff in empirical research design and data analysis planning. In contrast, respondents who were not based in operational settings showed greater enthusiasm for the rigorous evaluation of interventions, with an examination of the effects of cash-based social safety nets on child well-being outcomes ranking highest. --- Discussion The limitations to rigorous research on child protection in humanitarian crises are notable, with harsh operational conditions, short project cycles, and inadequate funding all considered hindrances to scientific inquiry on child protection within these contexts [31][32]. However, recent efforts have begun to demonstrate that robust social science methodologies within the sector are both needed and possible [33][34][35]. This prioritization exercise, which is among the first known systematic inquiries on research investments for child protection in humanitarian contexts using the CHNRI methodology, offers initial insight on the research interests and evidence needs of sector experts. Intervention research comprised three of the top four research priorities, aligning with many previous CHNRI studies that have similarly found intervention research to be of importance to stakeholders [36]. As previously noted, there is a dearth of rigorous evaluation to determine the effectiveness of common child protection interventions in humanitarian settings. The lack of quantitative data to document intervention effectiveness inhibits the ability of humanitarian actors to design evidence-based programs, a hindrance increasingly problematic for funding appeals and policy advocacy. This prioritization suggests that understanding intervention effectiveness is of particular interest to the sector, ranging from examinations of family-strengthening to capacity-building interventions to activities aimed at reducing child labor. Because the sample more heavily represents individuals in technical advisory and other operational capacities, the interest in intervention research most visibly highlights the needs of practitioners to have their programming rigorously tested and evaluated with respect to child well-being outcomes.
As the top priority among both intervention research topics and the entire ranking exercise, understanding the effects of cash-based social safety nets on child well-being outcomes has emerged as highly important to the sector. Cash transfers have gained prominence as multiple studies have found them effective in improving the welfare of children, including through improved health and nutrition outcomes as well as increased educational attainment [37][38][39]. The assumption driving the proliferation of cash-based social safety net interventions in humanitarian contexts is that they are an effective way of mitigating crisis-induced economic shocks, thereby preventing the use of coping strategies that may have negative effects on children such as school drop-out, child labor, and family separation. Yet, these assumptions have not been fully tested within disaster, conflict-affected, or displacement contexts, environments where children face unique risks and vulnerabilities. Further, the majority of existing evidence on the effects of cash transfers does not examine child protection outcomes such as reductions in violence, abuse, and exploitation, information of great interest to sector experts. In addition to understanding the effectiveness of singular child protection interventions on child well-being outcomes, experts indicated a need to also evaluate multi-sectoral interventions, considering this one of the highest priorities for research. A relatively broad mandate, this methodological research priority underscores the need for study designs that allow for the rigorous evaluation of multiple components within increasingly complex program designs, including analyses of how various components interact with one another. Such research endeavors are inherently more complicated, yet recent guidance from the global health sector has shown this to be a priority that spans disciplines within development and humanitarian assistance [40][41][42]. Similarly, as multi-sectoral and interdisciplinary interventions are prioritized by funders, experts within this study have identified a need to quantitatively demonstrate the added value of child protection interventions when mainstreamed within other sectors, such as health, nutrition, or education. Prior research on the effects of nutrition supplementation and play/stimulation on stunted children in Jamaica provides an example of how social scientists have captured the additive effects of non-sector-related interventions [43]. If protection interventions are found to be effective in improving non-protection-related outcomes for children, this type of evidence would support an argument that child protection considerations and/or program components are necessary to achieve desired results in other areas of humanitarian relief. Child labor in humanitarian settings was also a common theme, with both intervention effectiveness and prevalence data among the top 10 priorities for research investment. Similar to cash transfers, child labor has been examined across multiple development settings [44][45][46]; however, data from humanitarian contexts are extremely sparse and generally limited to anecdotal information. As urban environments have become a more common setting for humanitarian crises, there is an increased risk that children will be used for begging, street vending, and other forms of exploitation [47][48].
There is a need to understand the prevalence, dynamics, and effective interventions to reduce this protection risk for children who have been displaced as well as children from affected host communities. In order for child protection programming to be more responsive to current humanitarian contexts, experts felt that there was value in 1) better understanding the protection risks of children with disabilities (particularly non-observable disabilities) and 2) translating any existing evidence on implementing humanitarian programs in urban settings into more tangible guidance for CP practitioners. Disability inclusion has gained traction as a critical component within humanitarian assistance; however, experts noted that this work primarily addresses physical disabilities, where programmatic accommodations are often tangible and straightforward, such as the fitting and distribution of assistive devices. In contrast, many experts noted feeling ill-equipped to properly serve children with cognitive and intellectual disabilities, agreeing that an examination of the protection risks for children with disabilities, particularly non-observable disabilities, should be prioritized. Similarly, experts felt more guidance on child protection programming in urban humanitarian crises would be beneficial. Indeed, as rapid urbanization has resulted in more densely populated cities and towns, the potential impacts of a humanitarian crisis increase, particularly in areas with weak infrastructure and insufficient governance [49]. The Syrian refugee crisis has seen over 5 million people flee to neighboring countries, seeking refuge predominantly in the cities and towns of Lebanon and Jordan, with another 6 million internally displaced within Syria, again primarily in urban and peri-urban settings [50]. This trend differs from past decades of humanitarian assistance that was largely provided within camp-based settings, requiring a new framework for understanding how best to support children in crisis. Other actors within humanitarian response have given this issue greater attention in the past several years [51][52], enabling the identified priority of reviewing this secondary literature and, as relevant, translating and integrating the evidence into child protection strategies and program design. Localization and sustainability were also key themes. Within the top 15 research priorities, experts conveyed a need to identify best practices for both engaging the local social service workforce in emergency settings and establishing sustainable para-social work models such that structures will exist past the duration of humanitarian intervention. At the same time, respondents would like to understand best practices for bridging humanitarian and development initiatives for child protection systems strengthening. Taken together, these items demonstrate a desire to understand how best to engage local social service structures (formal and informal) and connect the work done during a crisis to a longer-term development agenda. When scrutinizing the findings further, three trends emerged. First, among the top 15 research priorities, participants routinely scored research questions much higher for relevance than originality. It is speculated that this score variation may be a result of recent efforts by the sector to discuss and advocate for a more robust evidence base in humanitarian contexts [53][54][55].
The relatively frequent discussion about these evidence needs may have made a number of research questions appear unoriginal to participants yet still highly relevant, because the research had yet to be carried out. This finding highlights the readiness of child protection experts to move forward an actionable research agenda for humanitarian settings. Next, there were notable differences in the priorities of field- and non-field-based staff, with only one research topic ranking within the top ten for both sub-groups (rigorously evaluate the effectiveness of family strengthening interventions to improve child well-being). As compared to non-field-based respondents, those residing within an operational setting were less likely to identify rigorous evaluation within their top priorities. Instead, these respondents tended towards the identification of best practices, a logical reaction given that such research would presumably result in straightforward guidance for program design. At the same time, field-based staff highly ranked capacity building in empirical research design and data analysis planning for the child protection sector, demonstrating a desire to build the skills required to further evidence generation. Lastly, our study explored research topics within the professional sector of "child protection in humanitarian settings", which had a rather expansive purview. As such, some of the research priorities identified by experts were similarly broad in scope. It is our hope that as the sector progresses in the collection and translation of rigorous evidence, future priority setting exercises on child protection in humanitarian settings will be able to focus on particular needs within narrower sub-specialties. --- Limitations The CHNRI method is based on purposive sampling, where individuals are invited to participate based on their expertise in a given field. This method relies on a non-representative sample to aggregate knowledge and experiences. The findings are therefore limited to the perceptions of a discrete group of individuals, and it is possible that additional areas for research investment would have emerged if a larger sample had been recruited. As noted earlier, prior quantitative work has demonstrated that collective opinion stabilizes with as few as 45-55 participants [29]; however, that finding was based on binary "yes" or "no" responses as opposed to the Likert scale implemented in this project. Further, given the low cost and replicability of the procedure, it is attractive to a variety of sectors as a means of fostering transparency and enhancing systematization in the creation of a research agenda. In our study, non-field-based staff were more likely to respond to requests for interviews and, as such, had greater representation within the study (Table 1). This created a certain level of bias towards the insights and experiences of child protection experts currently based in non-operational settings. When secondarily analyzing results based on whether respondents resided in operational or non-operational settings, we did find variation in the prioritization of research items (Table 4). These findings indicate that even when saturation appears to have been reached, the rank ordering of priorities can be influenced by the characteristics of the sample. Deviating from standard CHNRI procedure, we requested that participants rank research priorities against pre-determined criteria using a Likert scale as opposed to binary "yes" or "no" responses.
This decision was informed by the lack of existing evidence within the sector of child protection in humanitarian action and the anticipation that a large majority of research items would be affirmatively ranked by respondents, making it difficult to discern which were of highest priority. While Likert scales have been used extensively in other crowdsourcing methods [56][57][58], more research is needed to examine the benefits and drawbacks of using a Likert scale within an adapted CHNRI framework. Lastly, our study did not include "impact" as a ranking criterion. Such a criterion would have participants rank research based on the likelihood it would result in a reduction of protection risks or improved responses to child protection violations. While our research criterion of "relevance" included similar language, it did not explicitly request input on the ability of a research question, once answered, to impact the lives of children. Further research priority setting exercises on child protection may wish to include "impact" as a ranking criterion separate from "relevance" in order to further ascertain the merit of a research idea. --- Conclusion Rigorous, scientific research that assesses the scope of child protection risks, examines the effectiveness of existing child protection interventions, and translates evidence to practice is critical to move the sector forward and respond to donor calls for programming that is evidence-based. This CHNRI adaptation solicited inputs from a range of sector experts with variation across geographic location and job function. It is our hope that findings can guide a global research agenda, facilitating cooperation among donors, implementers, and academics to pursue a coordinated approach to evidence generation. --- All relevant data are within the paper and its supporting information files. --- Supporting information S1
Armed conflict, natural disaster, and forced displacement affect millions of children each year. Such humanitarian crises increase the risk of family separation, erode existing support networks, and often result in economic loss, increasing children's vulnerability to violence, exploitation, neglect, and abuse. Research is needed to understand these risks and vulnerabilities and guide donor investment towards the most effective interventions for improving the well-being of children in humanitarian contexts. The Assessment, Measurement & Evidence (AME) Working Group of the Alliance for Child Protection in Humanitarian Action (ACPHA) identified experts to participate in a research priority setting exercise adapted from the Child Health and Nutrition Research Initiative (CHNRI). Experts individually identified key areas for research investment, which were subsequently ranked by participants using a Likert scale. Research Priority Scores (RPS) and Average Expert Agreement (AEA) were calculated for each identified research topic, the top fifteen of which are presented within this paper. Intervention research, which aims to rigorously evaluate the effectiveness of standard child protection activities in humanitarian settings, ranked highly. Child labor was a key area of sector research, with two of the top ten priorities examining the practice. Respondents also prioritized research efforts to understand how best to bridge humanitarian and development efforts for child protection, as well as identifying the most effective ways to build the capacity of local systems in order to sustain child protection gains after a crisis.
B A C K G R O U N D Description of the condition Tobacco use is disproportionately concentrated among low-income populations, with rates exceeding those of the general population at least two-fold (Jamal 2015). Among low-income populations, such as people experiencing homelessness, estimated smoking prevalence ranges between 60% and 80% (Baggett 2013). Individuals with severe mental health disorders and/or substance use disorders who belong to racial/ethnic minority groups, who are older, or who self-identify as a gender and sexual minority are disproportionately represented in populations experiencing homelessness (Culhane 2013; Fazel 2014). The prevalence of mental health and substance use disorders is high among people experiencing homelessness. A systematic review concluded that the most common mental health disorders among this population were drug (range 5% to 54%) and alcohol dependence (range 8% to 58%), and that the prevalence of psychosis (range 3% to 42%) was as high as that of depression (range 0% to 59%) (Fazel 2008). These populations carry a high burden of tobacco use and tobacco-related morbidity and mortality (Schroeder 2009). Persons experiencing homelessness are three to five times more likely to die prematurely than those who are not homeless (Baggett 2015; Hwang 2009), and tobacco-related chronic diseases are the leading causes of morbidity and mortality among those aged 45 and older (Baggett 2013b). Among younger homeless-experienced adults (< 45 years), the incidence of tobacco-related chronic diseases is three times higher than the incidence in age-matched non-homeless adults (Baggett 2013b). Persons experiencing homelessness have distinctive tobacco use behaviors associated with low income, substance use comorbidities, and housing instability that affect their likelihood of successfully quitting. Epidemiological studies of tobacco use among this population have shown that most adults experiencing homelessness initiate smoking before the age of 16 (Arnsten 2004). Average daily cigarette consumption is between 10 and 13 cigarettes per day, and more than one-third smoke their first cigarette within 30 minutes of waking (Okuyemi 2006; Vijayaraghavan 2015; Vijayaraghavan 2017). People experiencing homelessness have high rates of concurrent use of alternative tobacco products such as little cigars, smokeless tobacco, and e-cigarettes (Baggett 2016; Neisler 2018). They also engage in high-risk smoking practices including exposure compensation when reducing cigarettes smoked per day and smoking cigarette butts (Garner 2013; Vijayaraghavan 2018). Smoking norms include sharing or "bumming" cigarettes, and these practices may reduce the effects of policy interventions such as increased taxes (Garner 2013; Vijayaraghavan 2018). Individuals experiencing homelessness face significant barriers to cessation, including disproportionately high rates of post-traumatic stress disorder (PTSD), which can lead to positive associations with smoking (Baggett 2016a). Smoking cessation is challenging for people who have to navigate the stressors of homelessness (Baggett 2018; Chen 2016), high levels of nicotine dependence, and limited access to smoking cessation treatment and smoke-free living environments (Vijayaraghavan 2016; Vijayaraghavan 2016b). Integrating tobacco dependence treatment into existing services for homeless-experienced adults remains challenging (Vijayaraghavan 2016b).
Staff members may not support quit attempts (Apollonio 2005; Garner 2013), and homeless-experienced adults do not have consistent access to services or information technologies used to improve access to cessation interventions (McInnes 2013). Despite these challenges, over 40% of adults experiencing homelessness report making a quit attempt in the past year (Baggett 2013c; Connor 2002). A majority relapse to smoking, with estimates of the quit ratio (i.e. the ratio of former-to-ever smokers) between 9% and 13%, compared to 50% in the general population (Baggett 2013c; Vijayaraghavan 2016). Homeless populations have been historically neglected in population-wide tobacco control efforts; however, there has been increasing interest in studying the correlates of tobacco use and cessation behaviors for these populations and in discovering how these individuals may differ from the general population (Goldade 2011; Okuyemi 2013). The typically high levels of nicotine dependence among adults experiencing homelessness are associated with a low likelihood of quitting (Vijayaraghavan 2014). Proximity to a shelter during the week after a quit attempt has been associated with higher risk of relapse, thought to occur because of increased exposure to environmental cues to smoking (Businelle 2014; Reitzel 2011). In contrast, staying in a shelter, as opposed to on the street, has been associated with quitting smoking (Vijayaraghavan 2016), possibly due to exposure to shelter-based smoke-free policies. --- Description of the intervention Interventions designed to support people to stop smoking can work to motivate people to attempt to stop smoking ("cessation induction"), or to support people who have already decided to stop to achieve abstinence ("aid to cessation"). In this review, we will include both types of interventions. Many people who are homeless face barriers to using regular services, such as healthcare services, through which cessation support is available. The availability of support to assist a quit attempt can itself create motivation to quit (Aveyard 2012). Thus, one possible intervention to support people experiencing homelessness is to provide bespoke cessation services that can operate both to make quitting seem more desirable and to provide treatment for those who are attempting to stop smoking. The combination of behavioral counseling and pharmacotherapy (nicotine replacement therapy [NRT], bupropion, or varenicline) is the gold standard for individually tailored smoking cessation treatment in the general population (Stead 2016). However, a vast majority of quit attempts made by people experiencing homelessness are unassisted (Vijayaraghavan 2016). Preference for cessation aids may vary by cigarette consumption, with light smokers (0 to 10 cigarettes per day) preferring counseling over medication, in contrast to moderate/heavy smokers (> 10 cigarettes per day) (Nguyen 2015). --- How the intervention might work Cessation induction interventions directed at smokers who are not ready to quit rely on pharmacological, behavioral, or combination interventions to increase motivation and intention to quit, with an eventual goal of abstinence. Interventions may include nicotine therapy sampling to induce practice quit attempts, as described in Carpenter 2011, or motivational interviewing to induce cessation-related behaviors among smokers who are not motivated to quit, as examined in Catley 2016.
Tobacco dependence treatment can provide motivation and support for change through pharmacotherapy (Cahill 2013), counseling (Lancaster 2017), financial incentives (Notley 2019), or a combination of these (Stead 2016). Pharmacotherapy can reduce the urge to smoke and can decrease nicotine withdrawal symptoms via NRT, varenicline, or bupropion (Cahill 2013); counseling can provide support and motivation to make and continue with quit attempts (Lancaster 2017). For individuals with severe tobacco dependence, such as people experiencing homelessness, multi-component interventions that include behavioral counseling, combination pharmacotherapy, and other adjunctive methods such as financial incentives -as discussed in Businelle 2014b, Baggett 2017, and Rash 2018 -or mobile support -as offered in Carpenter 2015 -may be beneficial. However, as many quit attempts are unassisted, more may need to be done to remove barriers and facilitate access to cessation support for smokers who are homeless. --- Why it is important to do this review People experiencing homelessness have unique tobacco use characteristics, including higher likelihood of irregular smoking patterns, reduced exposure to clean indoor air policies, and reliance on "used" cigarettes (Baggett 2016; Garner 2013; Vijayaraghavan 2018). They receive limited support for cessation from service providers (Apollonio 2005; Garner 2013). Many countries have identified homeless-experienced adults as a high-risk group in need of targeted interventions (Fazel 2014). Tobacco use is the single most preventable cause of mortality among adults experiencing homelessness (Baggett 2015). Past efforts to promote tobacco cessation among this population have yielded mixed results that make it difficult to assess which types of tobacco dependence treatments promote abstinence. Our findings will synthesize evidence to date and will identify interventions that increase quit attempts and abstinence, as well as improve access to treatment, for this vulnerable population. We will also explore whether cessation interventions affect mental health or substance use outcomes among this population. --- O B J E C T I V E S To assess whether interventions designed to improve access to smoking cessation interventions for adults experiencing homelessness and interventions designed to help adults experiencing homelessness to quit smoking lead to increased engagement and tobacco abstinence. To also assess whether smoking cessation interventions for adults experiencing homelessness affect substance use and mental health. --- M E T H O D S Criteria for considering studies for this review --- Types of studies We will include randomized controlled trials (RCTs) and cluster RCTs, with no exclusions based on language of publication or publication status. --- Types of participants Participants will include homeless and unstably housed adults (> 18 years of age). This will be defined by criteria specified by individual studies; however we envisage that participants will meet one or more of the following criteria for homelessness (ANHD 2018; Council to Homeless Persons 2018; Fazel 2014). 1. Individuals and families who do not have a fixed, regular, and adequate night-time residence, including individuals who live in emergency shelters for homeless individuals and families, and those who live in places not meant for human habitation. 2. Individuals and families who will imminently lose their main night-time residence. 3. 
Unaccompanied young adults and families with children and young people who meet other definitions of homelessness. 4. Individuals and families who are fleeing or attempting to flee domestic violence, dating violence, sexual assault, stalking, or other dangerous or life-threatening conditions that relate to violence against an individual or family member. 5. Individuals and families who live in transitional shelters or housing programs. 6. Individuals and families who are temporarily living with family or friends. 7. Individuals and families who are living in overcrowded conditions. Participants must also be tobacco users who may or may not be motivated to quit. --- Types of interventions We will include in our review any interventions that: 1. focus on increasing motivation to quit, building capacity (e.g. providing education or training to provide cessation support to staff working with people who are homeless), or improving access to tobacco cessation services in clinical and non-clinical settings for homeless adults; 2. aim to help people making a quit attempt to achieve abstinence, including but not limited to behavioral support, tobacco cessation pharmacotherapies, contingency management, and app-based interventions; or 3. focus on transitions to long-term nicotine use that do not involve combustible tobacco. Control groups may receive no intervention or 'usual care', as defined by individual studies. --- Types of outcome measures --- Primary outcomes 1. Tobacco abstinence (given the paucity of data on long-term cessation outcomes among people experiencing homelessness, we will also assess short-term cessation outcomes), assessed at three time points: i) Short-term abstinence: less than three months after quit day; ii) Medium-term abstinence: at least three months but less than six months after quit day; iii) Long-term abstinence: six months or longer after quit day. We will conduct separate analyses for each time point. We will use the strictest definition of abstinence used by the study, with preference for continuous or prolonged (allowing a grace period for slips) abstinence over point prevalence abstinence. When possible, we will extract biochemically verified rates (e.g. breath carbon monoxide, urinary/saliva cotinine) over self-report. We will assess abstinence on an intention-to-treat basis, using the number of people randomized as the denominator. --- Secondary outcomes 1. Number of participants receiving treatment 2. Number of people making at least one quit attempt as defined by included studies 3. Abstinence from alcohol and other drugs as defined by self-reported drug use or through biochemical validation (or both), at the longest follow-up period reported in the study 4. Point prevalence or continuous estimates (e.g. questionnaire scores) for mental illnesses (including major depressive disorder, generalized anxiety disorder, post-traumatic stress disorder, schizophrenia, and bipolar disorder) as defined by previously validated survey instruments or physician diagnosis --- Search methods for identification of studies --- Electronic searches We will search the Cochrane Tobacco Addiction Group Specialized Register, the Cochrane Central Register of Controlled Trials (CENTRAL), and MEDLINE. The MEDLINE search strategy is provided in Appendix 1. The Specialized Register includes reports of tobacco-related trials identified through research databases, including MEDLINE, Embase, and PsycINFO, as well as via trial registries and handsearching of journals and conference abstracts.
For a detailed account of searches carried out to populate the Register, see the Cochrane Tobacco Addiction Group's website. --- Searching other resources We will search grey literature, including conference abstracts from the Society for Research on Nicotine and Tobacco. We will contact investigators in the field about potentially unpublished studies. We will additionally search for registered unpublished trials through the National Institutes of Health clinical trials registry (www.clinicaltrials.gov) and the World Health Organization International Clinical Trials Registry Platform Search Portal (http://apps.who.int/trialsearch/). --- Data collection and analysis --- Selection of studies We will merge search results using reference management software and will remove duplicate records. Two independent review authors (MV and HS) will examine the titles and abstracts to identify relevant articles and will subsequently retrieve and examine the full-text articles to assess adherence with the eligibility criteria. A third review author (DA) will independently assess whether the full-text articles meet eligibility criteria. We will exclude all studies that do not meet inclusion criteria in terms of study design, population, or interventions. We will resolve disagreements by discussion, and when necessary, the third review author will arbitrate the case. --- Data extraction and management Two review authors (MV and HS) will independently extract data in duplicate. We will contact study authors to obtain missing outcome data. Once outcome data have been extracted, one of the review authors (MV) will enter them into Review Manager 5.3, and another (HS) will check them (Higgins 2011). All review authors (MV, HS, and DA) will extract information from each study for risk of bias assessments. We will extract the following information from study reports using a template developed by DA and modified by MV. 1. Source, including study ID, report ID, reviewer ID, citation, contact details, and country. 2. Methods, including study design, study objectives, study site, study duration, blinding, and sequence generation. 3. Participant characteristics, including total number enrolled and number in each group, setting, eligibility criteria, age, sex, race/ethnicity, sociodemographics, tobacco use (type, dependence level, amount used), mental illness, substance use, other comorbidities, and current residence (unsheltered, sheltered, single room occupancy hotel or temporary residence, or supportive housing). 4. Interventions, including total number of intervention groups and comparisons of interest, specific intervention, intervention details, and integrity of the intervention. 5. Outcomes, including definition, unit of measurement, and time points collected and reported. 6. Results, including participants lost to follow-up, summary data for each group, and subgroup analyses. 7. Miscellaneous items, including study author conflicts of interest, funding sources, and correspondence with study authors. --- Assessment of risk of bias in included studies Two review authors will assess the risk of bias for each included study, as outlined in the Cochrane Handbook for Systematic Reviews of Interventions, Chapter 8 (Higgins 2011). Using a risk of bias table, we will categorize risk of bias as "low risk," "high risk," or "unclear risk" for each domain, with the last category indicating insufficient information to judge risk of bias.
We will assess the following domains: selection bias (including sequence generation and allocation concealment), blinding (performance bias and detection bias), attrition bias (incomplete outcome data), and any other bias. According to guidance from the Cochrane Tobacco Addiction Group, we will assess performance bias only for studies of pharmacotherapies, as it is impossible to blind behavioral interventions. --- Measures of treatment effect When possible, we will report a risk ratio (RR) and 95% confidence intervals (CIs) for the primary outcome (i.e. abstinence) for each included study. The risk ratio is defined as (number of participants in the intervention group who achieve abstinence / total number of people randomized to the intervention group) / (number of participants in the control group who achieve abstinence / total number of people randomized to the control group). We will use an intention-to-treat analysis, in which participants are analyzed based on the intervention to which they were randomized, irrespective of the intervention they actually received. For dichotomous secondary outcomes, such as number of people making a quit attempt and abstinence from substance use, we will calculate an RR with 95% CI for each study. For any continuous measures of our mental illness secondary outcome, we will calculate the mean difference (MD) or the standardized mean difference (SMD), as appropriate for each study. --- Unit of analysis issues The unit of analysis will be the individual. For cluster-randomized trials, we will assess whether study authors have adjusted for this clustering, and whether this had an impact on the overall result. When clustering appears to have had little impact on the results, we will use unadjusted quit rate data; however, when clustering does appear to have an impact on results, we will adjust for this using the intraclass correlation (ICC). --- Dealing with missing data When outcome data are missing, we will attempt to contact the study authors to request missing data. For all outcomes apart from mental health, we will assume that participants who are lost to follow-up are continuing smokers, are still using other substances, did not make a quit attempt, or did not receive treatment. We will report deaths separately and will not include participants who have died during the analysis. For the mental health outcome, we will conduct a complete case analysis. --- Assessment of heterogeneity We will classify heterogeneity as clinical, methodological, or statistical (Higgins 2011). We will not attempt a meta-analysis if we observe significant clinical or methodological heterogeneity between studies; we will instead report results in a narrative summary. If we feel it is appropriate to carry out meta-analyses, we will assess statistical heterogeneity using the I² statistic, which represents the percentage of variability in effect estimates that is attributable to heterogeneity rather than chance (Chapter 9; Higgins 2011). We will consider an I² value greater than 50% as evidence of substantial heterogeneity. --- Assessment of reporting biases We will assess several forms of reporting bias including outcome reporting bias (selective reporting of outcomes), location bias (publication of research in journals that may have different levels of access such as open access publication), and publication bias (publication or non-publication of studies depending on the direction of outcome effects), and we will discuss these in our review.
We will assess whether abstinence from tobacco, our primary outcome, was reported in all included studies, and will report which studies included this outcome and which did not. If we include more than 10 studies in any analyses, we will generate a funnel plot to help us assess whether there could be publication bias. --- Data synthesis When meta-analysis is appropriate, we will use the Mantel-Haenszel random-effects method to calculate pooled, summary, weighted risk ratios (95% CIs), or inverse-variance random-effects methods to calculate pooled, summary, weighted MDs (95% CIs) or SMDs (95% CIs). We will pool separately studies testing interventions that aim to improve access to smoking cessation interventions and studies that are simply testing the effectiveness of smoking cessation interventions among people experiencing homelessness. Should meta-analyses not be possible, we will provide a narrative assessment of the evidence. --- Subgroup analysis and investigation of heterogeneity When possible, we will conduct subgroup analyses to examine whether outcomes differ based on: 1. intensity of treatment (e.g. number of counselling sessions); 2. participants' residential history (sheltered vs unsheltered); 3. participants' substance use history; --- 4. participants' diagnosis of mental health disorder; and 5. participants' use of non-cigarette tobacco and nicotine products. --- Sensitivity analysis We will conduct sensitivity analyses by excluding studies with high risk of bias (judged to be at high risk for one or more of the domains assessed). --- Summary of findings We will produce a "Summary of findings" table (Higgins 2011), presenting the primary outcome (tobacco use abstinence at all time points), absolute and relative magnitude of effects, numbers of participants, and numbers of studies contributing to these outcomes. Two independent review authors will also carry out GRADE assessments of the certainty of evidence. Using GRADE criteria (study limitations, consistency of effect, imprecision, indirectness, and publication bias), we will grade the quality of evidence as very low, low, moderate, or high, and will provide footnotes to explain reasons for downgrading of evidence. --- A C K N O W L E D G E M E N T S The review authors would like to thank Drs. Nicola Lindson, Paul Aveyard, and Jonathan Livingstone-Banks for their thoughtful review of draft versions of the protocol. --- R E F E R E N C E S --- Additional references ANHD 2018 Association for Neighborhood and Housing Development. --- C O N T R I B U T I O N S O F A U T H O R S The protocol was conceived and prepared by Maya Vijayaraghavan, Holly Elser, and Dorie Apollonio. --- D E C L A R A T I O N S O F I N T E R E S T Maya Vijayaraghavan has no conflicts of interest to report. MV has one pending grant application on the topic of smoke-free policies in permanent supportive housing for formerly homeless populations. Holly Elser has no conflicts of interest to report. Dorie Apollonio has no conflicts of interest to report. --- S O U R C E S O F S U P P O R T Internal sources • University of California, San Francisco, San Francisco Cancer Initiative, USA. --- External sources • Tobacco Related Disease Research Program, USA. Grant
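To make the planned effect measures concrete, here is a minimal, hedged Python sketch of the per-study risk ratio with its 95% CI and a pooled random-effects estimate with Q, tau-squared, and I-squared. It is an illustration only: the protocol specifies Review Manager and Mantel-Haenszel random-effects weighting, whereas this sketch uses plain inverse-variance DerSimonian-Laird weighting for brevity, and all event counts are hypothetical.

```python
import math

def risk_ratio(a, n1, c, n0):
    """Intention-to-treat risk ratio with a 95% CI:
    a/n1 = abstinent/randomized in the intervention arm, c/n0 in the control arm."""
    rr = (a / n1) / (c / n0)
    var_log = 1/a - 1/n1 + 1/c - 1/n0            # delta-method variance of log(RR)
    se = math.sqrt(var_log)
    ci = (math.exp(math.log(rr) - 1.96 * se), math.exp(math.log(rr) + 1.96 * se))
    return rr, ci, var_log

def pool_random_effects(studies):
    """Simplified inverse-variance DerSimonian-Laird pooling of log risk ratios,
    reporting the pooled RR, its 95% CI, and I-squared (as a percentage)."""
    logs, variances = [], []
    for a, n1, c, n0 in studies:
        rr, _, v = risk_ratio(a, n1, c, n0)
        logs.append(math.log(rr))
        variances.append(v)
    w = [1 / v for v in variances]
    fixed = sum(wi * yi for wi, yi in zip(w, logs)) / sum(w)
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, logs))
    df = len(studies) - 1
    c_dl = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c_dl) if df > 0 else 0.0
    w_star = [1 / (v + tau2) for v in variances]
    pooled = sum(wi * yi for wi, yi in zip(w_star, logs)) / sum(w_star)
    se = math.sqrt(1 / sum(w_star))
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return (math.exp(pooled),
            (math.exp(pooled - 1.96 * se), math.exp(pooled + 1.96 * se)),
            i2)

# Hypothetical studies: (abstinent_tx, randomized_tx, abstinent_ctrl, randomized_ctrl)
studies = [(18, 120, 7, 115), (25, 200, 15, 198), (9, 60, 4, 62)]
print(risk_ratio(18, 120, 7, 115)[:2])
print(pool_random_effects(studies))
```

The number of people randomized appears in every denominator, matching the intention-to-treat approach stated in the protocol; dedicated meta-analysis software would additionally handle zero-event arms and the exact Mantel-Haenszel weights.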
Interventions to reduce tobacco use in people experiencing homelessness.
Introduction Throughout Australia's early waves of the COVID-19 pandemic from March 2020 onwards, a strict lockdown was in place in many cities and towns across the country. A strict lockdown included extended periods of home confinement and the closure of all non-essential workplaces, schools, and retail and entertainment venues. Individuals and families in lockdown were only permitted to go outside for an hour of exercise daily and were required to stay within five kilometres of their home. Some residents of the city of Melbourne put teddy bears in their windows facing towards the street (Figure 1) to attract the attention of people out walking through their rather deserted neighbourhoods. Families with children started to make a game of spotting the bears on their daily walks. This provided the inspiration for the name of our project, Bear in a Window, where we collected Australian children's stories and experiences of the COVID-19 pandemic. This topic is particularly pertinent given that the city of Melbourne set the world record for the number of days spent in lockdown in 2021 (Boaz 2021). The Oxford Blavatnik School of Government rated government restrictions in the pandemic on a scale of 0-100, with a higher number indicating more stringent restrictions (Hale et al. 2021). At 71.76, Australia's score (retrieved on 20 September 2021) was the highest of all OECD countries, with the United States at 61.57 and the United Kingdom at 35.64. The motivation for our project stems from the fact that children's voices and narratives are often absent from discourse and historical experiential reports of major world events. For example, while there are reports of adult recollections of being a child and living through the Spanish flu pandemic of 1918 (e.g. James 2019), there are very few archives with actual children's reports of their experiences. We know from previous work in Christchurch, New Zealand, where a "QuakeBox" was set up to allow people to share their stories, that giving people voice after major events can have a therapeutic effect (Clark et al. 2016; see also Carmichael et al. 2022). Similar outcomes have emerged from the HONOR project, which is a corpus of interviews on the topic of Hurricane Harvey (Englebretson et al. 2020). Neither the QuakeBox nor the HONOR project includes recordings of children. However, the MI Diaries project (Sneller et al. 2022) and Lothian Diary Project (Hall-Lew et al. 2022) invite adults and children to recount their experiences of the COVID-19 pandemic. These projects indicate a relatively recent interest in including the voices of children in the collective memory of major world events, and in (socio)linguistic research more generally. Our project will contribute to this growing area of research and provide a snapshot of a unique event in Australian (and world) history, with data being accessible to researchers and the general public into the future. The Bear in a Window project provided children with an opportunity to give voice to a range of topics that were important to them in light of having to stay at home in a state of restricted movement across space and time, without being filtered through the lens of an adult's perspective. In this paper, following a discussion of experiment design and method (Section 2), we present the topics children raised and a linguistic analysis of how children talked about them (Section 3).
We explore not just what children say, but how they say it, by examining the discourse structures and features that help to situate or contextualize their perspectives. In Section 4, we reflect on the pros and cons of running this kind of unsupervised data collection entirely online and discuss future steps for the project. In line with the theme of this special issue, we stress that the focus of our paper is on highlighting the procedure and method for online (remote) data collection in a specific context: in this case, the COVID-19 pandemic. Our data set is relatively small (18 speakers), and our analysis is exploratory for now, with a focus on what we have learnt throughout the process (see in particular Section 4). --- Method and materials --- Data collection methods Our project was a fully online, COVID-safe task which we designed and hosted on Gorilla (https://gorilla.sc/), a platform which is commonly used in the behavioural sciences, and which has a user-friendly interface. The learning curve is not too steep, rendering the process of designing online experiments fairly intuitive. The payment structure is affordable, and we opted for the "pay as you go" option, which cost us just over one Australian dollar per completed respondent, only deducted when a respondent fully completed the task. The link to our experiment was available through our project website. We advertised the study via posters and flyers, on social media, and in one TV interview. Recruitment was targeted at parents/guardians of children aged 3-12 years. From the website, parents or guardians could read about the project and project team, and then click on a link to take part, which redirected them to Gorilla. There they were informed that their child needed to complete the task on a tablet, desktop, or laptop. At this stage, the parent or guardian read a plain language statement and gave informed consent. They then completed some demographic questions (age and gender of child, ethnic/cultural background, language(s) spoken at home, and postcode). Following this, and in line with our research aims to capture children's voices and experiences, the parent or guardian was instructed to pass the device to the child, so they could complete the task with minimal prompting from an adult. They were shown the following text on-screen: "Thanks for your help. Now it's your child's turn! When they're in front of the device and ready to go, click Next!" Since this was unsupervised research, the following instructions were shown to parents (in addition to the image in Figure 2): We want to record your child's stories clearly! Please make sure your child: -Is in a quiet location, with not too much background sound -Stays close to the tablet/laptop device -Doesn't move around too much Before the children commenced recording, they were prompted to do a sound and microphone check, by playing a sound and making sure they heard it, and then recording themselves saying "Hello, Australia!". This was then played back to them. Overall, the quality of our recordings was quite good, but we did have issues with younger siblings being present who talked over some of the recordings. We will discuss these issues further in Section 2.3. All instructions for children from this point on were provided on-screen and via a friendly voice-over, to accommodate children not yet able to read. Children were asked to record themselves responding to each of the two questions, with an image of our mascot, Covey Bear (Figure 1) shown on-screen. 
They were asked to consider the following questions, which they responded to one at a time while looking at an image of Covey and an accompanying large sad face for Question 1 (Figure 3) and an accompanying large smiley face for Question 2: (1) Can you tell Covey a story about something that was not so good about having to stay at home all the time? (2) Can you tell Covey a story about something that was good about having to stay at home all the time? Children had 2 min to respond to each question, with a graphic timer indicating for them when their time was running out. At the end of each 2 min block, they had the opportunity to extend their comments in a new 2 min recording block. --- Participants Eighteen children participated, from four Australian states: Victoria, Tasmania, Western Australia, and New South Wales. Our participants (13 males, 5 females) were aged 3-12 years, and their recordings were orthographically transcribed. All participants were English speakers from a range of linguistic, cultural, and ethnic backgrounds. The average incomes across the postcodes where the participating children lived were higher than the national average. We note that for an experiment that was live for almost a full calendar year, the total number of participants was below what we had anticipated. The analytics in Gorilla show that there were 18 full completions of the task (listed as "complete" on the Gorilla server). In addition to this, there were 43 participants who started the task and made some recordings but did not "finish" it (listed as "live"), which meant that their recordings were not analysable since they were not uploaded to the Gorilla server. Finally, there were 64 who started the task but exited before any recordings were made (listed as "rejected"). Our own test runs of the experiment are included in this "rejected" figure, however. We consider the 43 "live" participants to represent a high attrition rate. Since the task was designed to take no longer than 15 min to complete, we suspect that one aspect of the task that may have contributed to the attrition rate is that the Finish button in Gorilla is very small and must be pressed by participants in order for all data to be saved. In an unsupervised task such as this one, it is likely that children or parents simply closed the browser after completing their recordings, without clicking on the Finish button. In the early stages of testing, we noticed the high proportion of "live" participants and added an extra instruction at the end of the experiment to remind people to "click Finish". However, we only included this in the text on-screen ("Thank you for sharing your stories today! Please click 'Finish' and then you can close your browser window"; see Figure 4) and not in the voice-over, which simply said "Thank you for sharing your stories today!". --- Overview of the data The content of the recordings varied greatly from one child to another and, despite our original research aim to collect stories, not all children engaged in the telling of narratives. We suspect this may have been due to the way the questions were posed, inviting children to reflect on and evaluate the 'good' and the 'bad'. Furthermore, despite our efforts to encourage children to complete the task independently, we noticed in the recordings that some parents (and older siblings) were audibly prompting the children (Example 1).
This resulted in some children simply responding to the questions or prompts posed by their parent or sibling, rather than engaging in independent reflection.
(1) Bella (5;2): Um (.) when-when it was good in COVID-teen my mum and me went-went and had breakfast walks and had bear hunts.
Parent/guardian: You might need to explain what they are. What's a breakfast walk?
Bella: A breakfast walk is a walk when you eat breakfast like toast and crumpets and s-scones and a bear hunt is a hunt where you look in every window to see a bear.
In terms of the 2 min time limit for each recording, this seems to have been appropriate, as only one child opted to extend their time for a further 2 min. While we had anticipated that this might be an issue in terms of children having their talk truncated, our data suggest that the 2 min limit for recordings on Gorilla is not a hindrance for experiments such as these. Furthermore, the two questions seemed to have worked well for the children, despite their large age range, with all children willingly engaging in the task and sharing their experiences. --- Analysis Each separate recording generated an audio file on the Gorilla server, which was then downloaded and run through the automatic transcriber, Sonix. In some cases, there was overlapping speech, with siblings talking over one another, or interference from background noise from another sibling playing in the background, rendering the automatic transcription more challenging and the level of accuracy variable. However, the transcription was generally more reliable with the speech of older children and in the absence of overlapping speech. All automatic transcriptions were hand-corrected by two researchers and further refined for the granular discourse analysis, where details such as false starts and filled and unfilled pauses were included. Overall, the automatic transcription process did save time as compared to manually transcribing everything from the start. Following transcription and hand-correction, the text files were imported into ELAN for coding. In undertaking the linguistic analysis, we had two coding tiers in our ELAN files, and broadly followed a discursive psychology (Potter 2012; Potter and Wetherell 1987) discourse analytic approach, which empirically examines the ways in which topics of experience are managed in interaction. In the first tier, we coded for topics that emerged in the semi-structured reflections in order to shed light on children's positive and negative experiences of life in lockdown. In the second ELAN tier, we coded for children's discourse strategies. --- Results After an iterative, primarily bottom-up process of analysis, we arrived at six central topics: health, education, family and friends, digital engagement, relationships, and mealtimes and food (Table 1). Apart from mealtimes and food, which was unanimously positive (although it was also the topic with the fewest mentions), the remaining five topics cut equally across both positive and negative (the "good" and the "not so good") experiences. We note that many of the utterances could have been coded under several topics, as the topics were not mutually exclusive, but for the purpose of presentation, we include just one topic per utterance in Table 1.
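The topic counts reported in Table 1 come from manual coding in ELAN. Purely as an illustration of how such tier-based counts could be tallied once the coding is done (this is a hypothetical sketch, not the project's scripts), the following Python code uses the pympi-ling library to read ELAN (.eaf) files and count annotation values on a tier; the tier name "Topic" and the folder of files are assumptions.

```python
# Illustrative sketch only (not the project's scripts): tallying annotation
# values on an ELAN tier after manual coding, using the pympi-ling library
# (pip install pympi-ling). The tier name "Topic" and the folder of .eaf files
# are assumptions for the example.
from collections import Counter
from pathlib import Path

from pympi import Elan


def count_tier_values(eaf_path: str, tier_name: str) -> Counter:
    """Count the annotation values (e.g. topic codes) on one tier of an ELAN file."""
    eaf = Elan.Eaf(eaf_path)
    # Each annotation is returned with its start time, end time and value.
    annotations = eaf.get_annotation_data_for_tier(tier_name)
    return Counter(ann[2] for ann in annotations)


if __name__ == "__main__":
    totals = Counter()
    for eaf_file in Path("coded_recordings").glob("*.eaf"):  # hypothetical folder
        totals += count_tier_values(str(eaf_file), "Topic")
    for topic, n in totals.most_common():
        print(f"{topic}\t{n}")
```

The same function could be pointed at a second tier (e.g. one holding discourse-strategy codes) to summarize that coding in the same way.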
We then focused our analysis on the children's initial responses to the question prompts and their discourse organization and management strategies, including filled pauses (Swerts 1998), utterance-initial topic transition markers, repair (Schegloff et al. 1977), false starts, and their use of the discourse marker like for sequentially organizing and maintaining the flow of their talk (D'Arcy 2017; Degand et al. 2013); see Table 2 for examples. Overall, in response to the "not so good" things about having to stay at home all the time, participants most frequently mentioned not seeing friends and extended family, not going to school or childcare, and being bullied by classmates during online learning. We note that although children's reflections indicated that they were aware of COVID-19 and its dangers, they did not appear to be feeling afraid or unsafe. The "good" things included getting to have cooked lunches at home, spending more time with family and siblings, going on morning walks and bear hunts in their neighbourhoods, and home exercise opportunities such as trampolining. In terms of discourse strategies, children were highly engaged in the task of narrating their experiences, and they responded well to the idea that an interlocutor was present, using floor holders and topic transition markers (e.g., and also), as they would in a normal, everyday conversation. They also used focus markers such as like to emphasize certain points in their narrative, such as going on a bush walk or to the pool (Table 2). This allowed them to voice their own, localized concerns with an imagined interlocutor, even when that interlocutor was not giving them the conversational feedback and responses they might normally expect. This finding was encouraging, as it showed the strengths of this mode of data collection in eliciting conversational data, despite no interlocutor being present. --- Discussion and closing considerations Bear in a Window was a unique opportunity to capture children's voices during an unprecedented time in world history. The capturing and sharing of these voices have important implications for how children's perspectives are included in our collective memory. We believe that the process itself had a therapeutic effect, as providing children with the opportunity to weigh up both the positives and negatives of life in lockdown promoted a sense of perspective, leading to more positive health and well-being. We believe we have contributed to a body of important literature, such as Clark et al. (2016), where people are invited to share their experiences of traumatic events. Bear in a Window belongs to a first wave of projects run entirely online (see Sneller 2022), joining, for example, the MI Diaries project, which invites participants to share audio diaries of life in Michigan via an app, including aspects of life during the pandemic (Sneller et al. 2022), and the Lothian Diary Project, which investigated how the COVID-19 lockdown changed the lives of people in Edinburgh and the Lothians in Scotland (Hall-Lew et al. 2022). All three projects are unique in that they elicit data without a researcher, interviewer, or interlocutor present. In other words, they guide respondents to self-record data which is subsequently used by researchers. This method has the potential to be powerful in the future of linguistic data collection and in other social sciences, and initial findings in terms of audio quality and content have been promising.
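The discourse features in Table 2 were identified through manual coding. As a purely hypothetical aid (not part of the project's actual workflow), a rough automated pre-pass over the corrected transcripts could flag candidate tokens for a human coder to review; the regular-expression patterns and the one-utterance-per-line transcript format assumed below are illustrative only.

```python
# A rough, illustrative pre-pass (not the authors' method, which was manual
# ELAN coding) for flagging candidate discourse features in a plain-text
# transcript so a human coder can review them. The one-utterance-per-line
# transcript format is an assumption.
import re

PATTERNS = {
    "filled_pause": re.compile(r"\b(um+|uh+|er+)\b", re.IGNORECASE),
    "utterance_initial_and": re.compile(r"^\s*and( also)?\b", re.IGNORECASE),
    "discourse_like": re.compile(r"\blike\b", re.IGNORECASE),
    # Hyphen-marked cut-offs such as "wo- for work" are a crude proxy for
    # false starts / self-repair and still need manual checking.
    "possible_false_start": re.compile(r"\b\w+-\s"),
}


def flag_candidates(utterance: str) -> list[str]:
    """Return the names of the patterns that match a single utterance."""
    return [name for name, pat in PATTERNS.items() if pat.search(utterance)]


if __name__ == "__main__":
    example = "And um you could- you can only leave your house for like central things"
    print(flag_candidates(example))
    # -> ['filled_pause', 'utterance_initial_and', 'discourse_like', 'possible_false_start']
```

Such a pass would only surface candidates; deciding whether a token of like is a discourse marker, or whether a hyphenated fragment is a genuine repair, remains a manual judgement.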
We note that while one drawback of our project was the small number of participants, neither the Lothian Diary nor the MI Diaries project had this problem, with the former having recorded 195 participants and the latter over 150 diarists at the time of writing. Reasons for this may be that the Lothian Diary Project had high public visibility, including at an in-person Festival of Social Sciences (Hall-Lew et al. 2022), and the MI Diaries project had legitimacy and visibility through the availability of its app through app stores. The MI Diaries project also reported recruitment success in specific online spaces, such as Reddit and university listservs, rather than via social media more generally (Sneller et al. 2022). In terms of method, we note that our project had its challenges. This was unsupervised research, and we observed a higher propensity not just for dropout or attrition, but also for misunderstanding of instructions and mixed-quality recordings (this was also observed in the Lothian Diary Project when participants recorded themselves outdoors).
Table 2. Examples of the discourse strategies identified in children's responses.
Discourse strategy | Example
Filled pause (um) | The bad thing about the lockdown was because- um you couldn't go to school
Topic transition marker (and) | And you couldn't see your friends at school and you had to do online classes
Topic transition marker (and); repaired false start (Z- on Zoom; they were- they could) | And Z- on Zoom, and they were- they could sometimes be laggy, so you don't understand everyone clearly
Topic transition marker (and also); discourse marker (like) | And also you can't go like on a bush walk or to the pool when there was a lockdown because of CO- COVID-
Topic transition marker (and); filled pause (um) | And um it could have been there.
Repaired false start (you could- you can) | And you could- you can only leave your house for central things
Repaired false start (wo- for work); discourse marker (like) | and to like wo- for work, or like to get tested.
Naturally, this was expected in wholly online data collection, but normally, we would expect the benefit of higher participant numbers (due to the accessibility of the task, without having to come onto, e.g., a university campus to take part in the research) to outweigh the drawbacks of attrition rates. However, in our case, we experienced both low participation rates and high attrition rates. To try to understand this trend, we garnered informal feedback from some participants, and from other colleagues who use Gorilla, and we suspect that the following deterrents may have been at play: (i) Gorilla is not mobile-friendly. To complete the task, participants had to switch to using a desktop, laptop, or tablet. This creates an extra hurdle, as we suspect many of the parents heard about the project on social media, which is often or even exclusively accessed via people's mobile phones. The MI Diaries project (Sneller et al. 2022), for example, used a mobile app, which proved to be easily accessible for participants without compromising on sound quality (see also Freeman and De Decker 2021). Having a mobile app downloadable on app stores also gave legitimacy and authenticity to the project and its visual branding (Sneller et al. 2022). (ii) There was no participant payment or incentive offered. Since this was a small-scale project with limited funding, we were not in a position to offer an incentive; however, this could have encouraged more people to participate. The MI Diaries (Sneller et al. 2022) and Lothian Diary (Hall-Lew et al.
2022) projects offered compensation: a USD 5 gift card per 15 min of recording for the former, and for the latter GBP 15 for each standard contribution and GBP 20 for each contribution from someone unhoused or otherwise vulnerable. It is noteworthy that both of these projects offered alternative options to simple cash payment. In the MI Diaries project, participants could also choose to "pay forward" their payment to someone else; and in the Lothian Diary Project, participants could choose a gift card to a local business or a donation to a local charity. (iii) There was no "hard deadline" for the project, meaning that even people with intentions to participate may have simply put it off, thinking they could complete it at any time. (iv) Participants may have believed that the experiment was finished once they had completed their second recording, and simply closed the browser, rather than clicking the important (but not so visible) Finish button. The fact that parents had to act as ad hoc research assistants, making sure their children completed the required steps, may also have been at odds with our instructions for parents to step away from the device once their children were ready to start their recordings. In terms of future steps, we plan to collect more data for the project, perhaps in a supervised or semi-supervised fashion, and with an offer of payment or reward. Recent projects utilizing a semi-supervised protocol, where participants take part at home on their own device, but are guided through the task on Zoom by a research assistant, have had high rates of participation and completion, with participants reporting similar experiences of participation as compared to face-to-face data collection (see Leemann et al. 2020). With more data, we plan to expand our analysis of discourse strategies, focussing on topic markers, adjacency pairs, and (where applicable) narrative development. We also hope to be able to examine age and potential gender-based differences with a larger sample. The present analysis, while exploratory, has provided us with the opportunity to reflect on the benefits and drawbacks of remote data collection. We expect this kind of research to remain an option for many scholars beyond the pandemic and we envisage more papers on best practice in this area to emerge in the years to come. As our own database expands, we will work towards creating a publicly accessible database and work with two museums to curate a collection of children's voices of life in lockdown. Our findings have the potential to be of interest and further application to researchers across different disciplines, including linguistics and language development, education, speech sciences and technology, psychological sciences, health and wellbeing, and language variation and change, including documenting and exploring Australian children's spoken English, and examining how language change spreads within a community and across generations.
The Bear in a Window project captures Australian children's experiences of the COVID-19 pandemic. We focused on children's experiences of lockdown, or extended periods of home confinement, ranging from one to 100 days at a time between 2020 and 2021. Using the online experimental platform, Gorilla, we invited children aged 3-12 to record themselves telling stories about the positives and negatives of life in lockdown to our mascot, Covey Bear. Recordings were saved on the Gorilla server and orthographically and automatically transcribed using Sonix, with manual correction. Preliminary analyses of 18 children's recordings illustrate several emergent topics, reflecting children's experiences of the pandemic in the areas of health and wellbeing; education and online learning; digital engagement; family and friends; relationships; and mealtimes and food. We found that in their storytelling, children engaged in a wide variety of discourse strategies to hold the floor, indicate focus, and transition to different topics. The project will contribute to a national public collection of Australian children's COVID-19 stories and create a digital repository of Australian children's talk that will be available to researchers across different disciplines.
Contradictions The most contradictory element in this crisis is the striking realization that the economy is collapsing because people are only buying what they really need. For those who never were able to do anything else, nothing changed. But at the same time, the economy was growing with every person that had to be taken to hospital, with every funeral that had to be organized and with every videoconference of people unable to meet for real. It shows once again the absurdity of the blind focus on economic growth in terms of Gross Domestic Product (GDP). Should we not cheer instead of deplore the cuts in luxury consumption, and should we not grieve instead of cheer for the growth due to extra funerals? Due to the economic downturn, the production of pollutants including CO2 and nitrogen oxides dropped between 10 and 30% from February to June 2020. But even if lockdown measures continue around the world till the end of 2021, global temperatures will only be 0.01 °C lower than expected by 2030 (Gohd 2020). In other words, behavioural change is not enough. On stock exchanges, there were some ups and downs, but globally shareholders did not suffer. And Jeff Bezos did not have enough time to count his extra profits. In short, while people were suffering and dying, small businesses lost their income and the dominant economic and financial systems just continued, with some slight changes at the margins. Governments put people at risk by giving priority to economic recovery, loosening the confinement measures before the virus had actually disappeared. Hospitals and care workers were suffering (many health personnel died, in fact!) because of a lack of protective equipment, private hospitals were selective in their admissions, and some poor countries even lacked basic hospital beds. Once again, Naomi Klein's statement, made in another context, that the economy is at war with life was shown to be true (Klein 2014). The only conclusion, then, is that we have to turn our backs on the neoliberal globalization that frames this economic system and look for the exit. But how? The task before us is to reshape our thinking, knowing that the current system cannot solve our problems, which are matters of life, of people, of societies and of nature. --- Other Ways of Thinking Let us try to turn our thinking around and not start from the economy but from people's needs. These needs are the same all over the world: they are food, water, shelter, clothing, housing, health care, clean air... In our modern and urban societies we can add other public services such as education, culture, communication or collective transport. In order to meet all those needs, people rightly want protection, and this protection, basically, can only be given in two ways in order to safeguard life: either with strong rules, police and the military, or with a broad range of social protection measures, with economic and social rights. If one believes in the importance of peace, the latter is the way to go. Now, there obviously are many different ways to try and guarantee that all people's needs are properly met. Here, I want to briefly mention three ways that cannot lead to lasting and sustainable solutions. I will then point to the many interlinkages and propose the way of social commons, based on solidarity and the possible synergies between all elements of the social, economic and political systems. --- Welfare States The first solution is the existence of welfare states, as we have seen in several richer countries.
If we look back at the way they came about, we can only be full of admiration for the social struggles they implied and the institutional arrangements they led to. Most of them have severely been damaged by the neoliberal cuts in social spending of the last decades, the privatization of health care, pension systems and other public services, and the growing delegitimizing of public collective solidarity. But again, looking at what some countries still have, such as Scandinavia, Germany, France or my own country Belgium, this looks like a miracle compared to the poor or non-existing social protection most people in the Global South have. So why not just promote this system in the rest of the world? The main reason is that the world has changed compared to the period in which welfare states emerged. Women are now massively on the labour market, there are more and more single parent families, there is more migration and the economic system itself has seriously changed. The growing number of people working in the platform economy hardly have any protection. More and more companies rely on temporary workers with less protection. It is true that these welfare states have seriously hindered the emergence of new poverty, but they did not eradicate poverty since they were focused on formal labour markets and did not touch those outside of them. The economic and social rights they provide now have to be extended and enlarged which means a universal implementation, a reform of labour markets with more rights, the transferability of rights for migrant workers, vocational training, etc. While the basic principles of welfare states, built on solidarity and social citizenship (Marshall 1964;Castel 1995) remain valid, one has to be critical of their bureaucratization and one has to look for better ways to shape the needed solidarity. Welfare states clearly still have to be promoted, but they need a serious re-examination. --- 'Western' Modernity and Basic Income A second solution to discard is rather popular in some segments of the ecological movement which often puts serious question marks to 'western' modernity and wants to go in the direction of universal basic incomes. In this article I cannot go into the details of this delicate discourse. Let me just say that much has to do with its definition. Based on'modernization theory' of development studies, implying a linear 'progress' from rural to industrial societies, from subsistence to consumerism, from feudalism to liberal democracies (Rostow 1960), one can feel sympathy for those who reject it. But based on enlightenment thinking with universal human rights, the fundamental equality of all human beings, the separation of religion and state, and maybe most of all the capacity of Kant's'sapere aude' ('have the courage to know') and of self-criticism, the objections to modernity are more difficult to accept. All too often, anti-modernity leads to fundamentalism, as can be seen in some countries of the Middle East. And most of all, most people in the South do want some kind of modernity, from human rights and democracy to mobile phones. What has to be condemned about the 'western' modernity is that it never applied its valid principles to peoples in the South and that colonizers never allowed these people to define and shape their own modernity (Schuurman 1993). The time has certainly come to take into account the 'epistemologies of the South' (de Sousa Santos 2016). 
More often than not all those critical of modernity also reject welfare state types of solidarity, as they think it is linked to reformism and productivism. They prefer a universal basic income (UBI), that is an equal amount of money given unconditionally to all members of society. Again, not all arguments in favour and against this solution can be developed here (Downes and Lansley 2018). But there are serious reasons to reject this solution, the main one being that unequal people have to be treated unequally in order to promote equality (Sen 1992). Some have more demands than others and this should be taken into account. Also, giving money to people who do not need it and who in many cases may not even pay taxes, makes this solution extremely expensive, so that it can only be pursued by drastically cutting down on public services such as health care. In fact, indirectly, by providing money to people and cutting social public expenditures, UBI favours the privatization of public services (Mestrum 2016). Finally, one word has to be said about the kind of solidarity universal basic incomes imply. Welfare states organize a horizontal and structural solidarity of all with all, it is a kind of collective insurance. Basic income, on the contrary, implies a vertical solidarity between the state and a citizen, and another citizen, and another citizen. The message to these citizens is, here is your money, now leave us alone. Take care of yourself. In other words, it is a fundamental liberal solution. Today, there is a lot of semantic confusion around basic incomes. Many people speak about it and want to promote it, while in fact they only mean to introduce a guaranteed minimum income for those who need it, for those who for one reason or another cannot be active on the labour market. This is a totally different kind of solution that certainly can be supported since it offers income security, a crucial element of wellbeing and social protection. 'Social protection' as used in this article is the overarching umbrella concept for different social policies. It includes social security (social insurances against economic and social hazards such as sickness, unemployment, labour accidents... and collective saving systems for old age pensions), social assistance (helping the poor), public services and labour law. Today, for some international organizations, social protection is more or less synonymous with poverty reduction policies, since they gave up on 'universal' systems for all citizens. --- Social Protection Floors The International Labour Organization adopted a Recommendation in 2012 1 on 'Social Protection Floors'. This is a somewhat simplified and reduced-way of putting meat on the bone of its Convention 102 of 1952 on the minimum standards for social security. 2 This initiative certainly can be supported and if ever realized, it would mean a huge progress for all people all over the world. But we have to be aware that it is very limited and includes only income security in case of illness, old age and unemployment, maternity and child care, as well as health care. Given the absence of any kind of social protection in many countries, this would indeed mean progress, but it can hardly be seen as a sufficient protection for a life in dignity. A supplementary reason why some caution is necessary is the fact that the ILO and the World Bank have engaged in a joint initiative for 'universal social protection'. 
3 As we know, it is the World Bank that came out with 'poverty reduction' in 1990 and 'social protection' some twenty years later, all the while refusing to change even one iota of the basics of its neoliberal adjustment policies. The World Bank now proposes a tiered system of social protection, with a limited system for all but more particularly for the poor, presented as a 'poverty prevention package' (Mestrum 2019). They call it 'universal' with their own meaning, that is, 'progressive universalism', referring to the 'availability' of benefits when and where they are needed. 4 All this means that there are few arguments against this initiative, but it is important to know that it is limited and that it will not stop privatizations; on the contrary. In fact, this kind of social protection is at the service of markets, creating private markets for health and education, and protecting people so they can improve their productivity. At the World Bank, the reasoning behind it is purely economic. --- Interlinkages What then can be the solution? In his 'Contradictions of the welfare state' Claus Offe stated that capitalism does not want any social protection, while at the same time it cannot survive without it (Offe 1984). It is easy to see that World Bank type solutions belong to the part that capitalism cannot do without. They help to maintain the legitimacy of the system and should prevent people from falling into extreme poverty. One has to look, then, for the objectives of social protection. If one considers that it is indeed the protection of people, geared toward social justice and peace (both mentioned in the Constitution of the ILO 5), we have to leave behind economic thinking and start a journey from the basic needs of people. These universal needs have given rise to the definition of human rights: civil, political, economic, social and cultural rights that governments are bound to respect, protect and fulfil. Food, shelter, clothing, housing, health care... no one can do without, though the way these needs can be met will differ from one country to another, from one historical period to another. This indicates a first element that will lead to social commons: people have to be involved in the way their social policies are shaped; they know best what is to be done in a given context, at what moment. Secondly, and taking into account the current coronacrisis, it is obvious that health care is central but will not be enough. If people have no clean water and soap to wash their hands, there is a problem. If people live in slums or are homeless, they cannot be confined with a whole family and children. If they are street vendors, their choice is dying from hunger or dying from a virus. In other words, their health and indeed survival depend on much more than just doctors, hospitals and medicines. Housing, labour, their natural environment and psychological needs play a direct role as well. More generally, if people lack literacy, they cannot read messages on the dangers of junk food. If their incomes are too limited, they have no money for healthy food. If people have good jobs but are exposed to dangerous substances in their factories, they will get ill. If farmers have to use toxic pesticides on their land, they will get ill, and their produce risks making consumers ill as well. Thirdly, it is obvious that prevention is so much better than cure.
So, if we really want people to live in good health, beyond curing the illnesses they might suffer from, we necessarily have to start looking at the basic elements of social security: people require income security to protect them from distress, fear and want. Next to that, people need good labour laws to provide and protect their jobs with decent wages and working hours, with a possibility for collective bargaining, with protections against exposures to dangerous substances and other risks. People will also need public services, health care, obviously, but also education, housing, transport, communica-tion... as well as environmental policies to provide clean air, water and green spaces. It is obvious that in order to tackle all these problems and solutions, one will also have to look at transnational corporations and at the economic system itself. It becomes clear that in order to have healthy food without toxic residues, and housing at affordable prices, free markets will have to be reined in. In the quest for the alternatives, amongst others one might look at feminist economics, the notion of putting care in the centre. Can an economy at the service of people and of societies not also be an economy of care, caring for the needs of people, producing what people need? That is why the social and solidarity economy, cooperatives and other forms of co-responsible production can offer a perspective for a better future. --- Social Commons 6 Where, then, do the commons come in? According to Dardot and Laval's seminal book on the common (Dardot and Laval 2014), commons are the result of a social and political process of participation and democratic decision-making concerning material and immaterial goods that will be looked at from the perspective of their use value, eliminating or severely restricting private ownership and the rights derived from it. They can concern production as well as re-production, they refer to individual and to collective rights. Following this definition, social protection systems may broadly speaking be considered to be commons as soon as a local community, or a national organization or a global movement decide to consider them as such, within a local, national or global regulatory framework. If they organize direct citizens' participation in order to find out what these social protection systems should consist of and how they can be implemented, they can shape them in such a way that they fully respond to people's needs and are emancipatory. Considering economic and social rights as commons, then, basically means to democratize them, to state they belong to the people and to decide on their implementation and on their monitoring. This clearly will involve a social struggle, because in the past neoliberal decades these rights have been hollowed out, public services have been privatized and labour rights have weakened if not disappeared. Moreover, democratic systems have been seriously weakened and reduced to a bare minimum the real participation of people. While markets have grown, the public sphere has shrunk. In other words, this approach allows for doing what was mentioned before: people's involvement in shaping and putting in place social protection processes and systems, which look beyond the fragmented narratives of rights, go beyond disease control and develop instead a truly intersectional approach in order to guarantee human dignity and real sustainability. 
One of the positive elements in the current COVID crisis has been the flourishing of numerous initiatives of local solidarities and mutual aid, people helping the homeless and their elderly neighbours, caring for the sick, organising open spaces and playing grounds for kids. This help was crucial for overcoming a very difficult period and it might be a good start for further collective undertakings that could indeed lead to more commons. Taking into account what was said above on the many interlinkages, this might mean, in the health sector, the putting in place of interdisciplinary health centres, where doctors, care workers, social assistants and citizens cooperate in coordinated community campaigns, planning most of all primary care as a specialty. However, these local actions cannot be a substitute for a more structural approach. Commons are not necessarily in the exclusive hands of citizens and are not only local. States or other public authorities also have to play their role. We will always need public authorities for redistribution, for guaranteeing human rights, for making security rules, etc. It means they are co-responsible for our interdependence. But the authorities we have in mind in relation to enhancing our economic and social rights or our public services will have to be different from what they are today. We know that public authorities are not necessarily democratic, very often they use public services and social benefits as power instruments or for clientelist objectives. That is why the State institutions and public authorities will themselves have to act as a kind of public service, in real support of their citizens. In the same way, markets will be different. If social protection mechanisms, labour rights and public services are commons, the consequence is not that there is nothing to be paid anymore. People who work obviously have to be paid, even if they work in a non-profit sector. However, prices will not respond to a liberal market logic but to human needs and the use value of what is produced. So, if we say social commons go beyond States and markets, we do not say they go without States and markets. It will be a different logic that applies. --- System Change By focusing on the individual and collective dimensions of preventive health care and by directly involving people in shaping public policies, the commons approach can become a strategic tool to resist neoliberalism, privatization and commodification, in short, a tool for system change. It will allow to build a new narrative and develop new practices to better and broader organize people's movements. Shaping commons means building power together with others. Indeed, health and social protection, geared towards social justice, can be an ideal entry point for working on more synergies, beyond the fragmented approaches of social and economic policies. Today, many alternatives are readily available, all with the objective of preserving our natural environment, stopping climate change, reforming the economy away from extractivism and exploitation, restoring public services. Faced with the hollowing out of our representative democracies, many movements are working on better rules for giving all people a voice that is listened to. Even at the level of international organizations, proposals are made to fight tax havens, illicit financial flows and other mechanisms for tax evasion. 
There is no need to find a big agreement to include them all, since even separately they can all help us to get out of the current system that is destroying nature and humankind. Neither is social justice the only entry point or the only road to take. Starting from the environment or from the economy, a comparable road can be taken. What it does suppose is that all roads are taken and followed with 'obstinate coherence', that is, followed to the end, till the objective of, say, social justice, a care economy, full democracy, or human dignity with civil, political, economic, social and cultural rights is reached. The current COVID crisis puts the focus on health and gives us an opportunity for mapping this road, for indicating its possibilities, for showing all the interlinkages and synergies. It is up to social movements and progressive governments to follow that road, to push for changes in sectors that at first sight are not related to the issue one fights for, but in the end are crucial for it. If one works for social protection, one will indeed also have to point to the importance of clean air and good agricultural practices. It might be rather easy to organize commons at the local level, but it is far more difficult to achieve something at the national, let alone the global level. How to tackle global corporations? What we can do is point to the different negative effects of their products and practices and link them to a generally accepted goal. That is the importance of the initiative currently taken at the UN Human Rights Council in order to have binding rules for transnational companies to respect human rights. If we want healthy food and if we want to prevent certain types of cancer, we have to ban certain toxic products. It is not easy; the fight will be long and the social struggles may be disrupted at many moments. But is there any other strategy? If we want people to be in good health, in the sense of Alma Ata, 7 that is, 'a state of complete physical, mental and social wellbeing' as a fundamental human right, we not only have to point to the lack of social protection, but also to some practices of global corporations, from Facebook to Bayer. If we want economic and social rights to be respected, we will have to look at building standards and link them to the cheap clothes made available to western consumers. What will be needed is a broad effort in popular education. In the developed countries of Western Europe, too many people no longer know where social protection systems come from, how social struggles have made them possible, what kind of solidarity is behind them and why collective solidarity is better than individual insurance. In many countries of the South people do not even know their rights or do not believe they can be really fulfilled. Some experience already exists with political laboratories where public authorities meet with health and social professionals as well as citizens and their organizations in order to see how to organize and improve social protection systems. --- Conclusion At a time of urgent health needs and social upheaval in numerous countries, at a moment when right-wing populism, authoritarianism and even fascism are re-emerging, it is also extremely urgent for social movements to get their act together. That means going beyond the usual protests, developing practical alternatives, being watchdogs for public policies and building alternative narratives and practices.
Counter-hegemonic movements are needed, at the local, the national, the regional and the global level. 'Long-term social and political change happens more frequently by setting up and maintaining alternative practices than by protest and armed revolution' (Pleyers 2020). In short, what is urgently needed is counter-power in an interdependent world. We can start by reclaiming social protection, stating that it is ours and bringing it back to its major objective: to protect people and societies and to promote the sustainability of people, societies and nature.
COVID-19 reveals the undeniable fact of our interdependence and some hard truths about our economic system. While this is nothing new, it will now be difficult for all those who preferred to ignore some basic facts to go on with business as usual. Our economy collapsed because people could not buy more than what they actually need. But the economy grows as more people get sick and need help. And our universal welfare systems have never excluded so many people as they do now. The many flaws in the dominant thinking and policymaking do not only concern our health systems but are almost all linked to the way neoliberal globalization is organized. Turn the thinking around, forget the unfettered profit-seeking, start with the real basic needs of people, and all the badly needed approaches logically fall into place: the link with social protection, with water, housing and income security, the link with participation and democracy. In this article, I want to sketch the journey from needs to commons, since that is where the road should lead us. It goes in the opposite direction of more austerity, more privatization, more fragmentation of our social policies. It also leads to paradigmatic changes, based on old concepts such as solidarity and a new way to define sustainability. The COVID-19 crisis is revealing in many respects. All of a sudden, one does not have to convince people anymore of the importance of health care and social protection. Surprising as it may sound, for many governments and for many social movements, social protection has not been one of the priorities on their agenda. Some think the private sector will take care of it, others think they have to respect the international fiscal directives, and still others give priority to environmental policies with maybe some vague demand for basic income. If this current crisis could re-direct past thinking into a clear demand for health care and social protection, leaving aside universal basic income and privatizations, one would be able to speak of the silver lining of this coronacrisis. However, in order to do so, many traps have to be avoided. In this article I will briefly look at which side roads can better be left behind, what a forward-looking policy can look like and how it can lead to a perspective on social commons and system change. This implies an intersectional approach to health, social protection and several other sectors of social and economic policies. It is the road to the sustainability of life, people, societies and nature.
INTRODUCTION Background Addressing the obesity epidemic is a global public health priority. In England, roughly one third of children aged between 2 and 15 are overweight/obese. 1 Being overweight/obese increases the risk of developing type 2 diabetes, heart disease and some cancers. 2 3 Furthermore, childhood overweight/obesity is associated with social and psychological effects, with increased risk of mental health problems, stigmatisation, social exclusion, low self-esteem, depression, and substance abuse. 3 The cost of obesity has been estimated at over £5 billion per year in 2007 and is predicted to reach £50 billion per year in 2050. 4 There are large social inequalities in childhood overweight/obesity: 5 a systematic review of 45 studies from a diverse pool of western developed countries found a consistent relationship between lower socio-economic circumstances (SEC) and obesity risk. The relationship was particularly strong using measures of maternal education as the SEC indicator. Compared to income and occupation, education is suggested to have a stronger influence on parenting behaviours in the pathway from low SEC to development of adiposity. 5 Despite recent evidence suggesting a stabilisation of overweight/obesity prevalence in England, 6 7 socioeconomic inequalities in childhood overweight/obesity continue to widen. 7 A number of studies suggest that early life risk factors, such as parental and maternal smoking during pregnancy, are predictive of childhood overweight/obesity. [8][9][10][11] However, few studies have explored the extent to which these factors attenuate inequalities in overweight/obesity in later childhood. This study therefore aimed to assess whether early-life risk factors attenuate inequalities in overweight/obesity in 11-year-old children from the UK. --- What is already known on this topic Childhood overweight and obesity is more common in disadvantaged children, but it is unclear to what extent early life factors attenuate this relationship. --- What this study adds In this large, nationally representative longitudinal study, inequality in overweight and obesity in preadolescence was partially attenuated by early life risk factors including maternal smoking during pregnancy and having a mother who was overweight before pregnancy. --- METHODS --- Design, setting, and data source We used data from the Millennium Cohort Study (MCS), a nationally representative sample of children born in the UK between September 2000 and January 2002. Data were downloaded from the UK Data Archive in 2014. The study over-sampled children living in disadvantaged areas and in areas with high proportions of ethnic minority groups by means of a stratified cluster sampling design. 12 Further information on the cohort and sampling design can be found in the cohort profile. 12 This study uses data collected on children at 9 months and 11 years. The analysis did not require additional ethical approval. --- Outcome measure: overweight (including obesity) At 11 years trained investigators collected data on the height of children to the nearest 0.1 cm and weight to the nearest 0.1 kg. BMI was calculated by dividing weight (in kilograms) by height (in metres) squared. Being overweight (by combining overweight and obesity scores) was defined using the age- and sex-specific International Obesity Task Force (IOTF) cut-offs (baseline: thin or healthy weight).
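As a minimal sketch of this outcome derivation (not the study's code), BMI can be computed from the measured height and weight and compared against age- and sex-specific cut-offs. The cut-off numbers in the sketch below are illustrative placeholders only and are not the published IOTF values, which come from the Cole et al. reference tables.

```python
# A minimal sketch (not the study's code) of the outcome derivation: BMI from
# measured height and weight, then classification against age- and sex-specific
# cut-offs. The cut-off values below are illustrative placeholders only; a real
# analysis would use the published IOTF (Cole et al.) reference tables.
def bmi(weight_kg: float, height_cm: float) -> float:
    """Body mass index: weight in kilograms divided by height in metres squared."""
    height_m = height_cm / 100.0
    return weight_kg / (height_m ** 2)


# Placeholder cut-offs keyed by (sex, age in years): (overweight BMI, obese BMI).
ILLUSTRATIVE_CUTOFFS = {
    ("male", 11): (21.0, 25.0),    # placeholder values, not the IOTF table
    ("female", 11): (21.5, 26.0),  # placeholder values, not the IOTF table
}


def overweight_including_obese(weight_kg: float, height_cm: float,
                               sex: str, age_years: int) -> bool:
    """True if BMI is at or above the overweight cut-off (obesity included)."""
    overweight_cut, _ = ILLUSTRATIVE_CUTOFFS[(sex, age_years)]
    return bmi(weight_kg, height_cm) >= overweight_cut


if __name__ == "__main__":
    print(round(bmi(45.2, 148.5), 1))                           # -> 20.5
    print(overweight_including_obese(45.2, 148.5, "male", 11))  # -> False with these placeholders
```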
--- Exposure: SEC The primary exposure of interest was maternal academic qualifications, used as a fixed measure of SEC at the birth of the MCS child. The highest qualification attained by the mother was established by questionnaire at the first wave, categorised in this study into six levels: degree plus (higher degree and first degree qualifications), diploma (in higher education), A-levels, GCSE grades A-C, GCSE grades D-G, and none of these qualifications. --- Mediators: early life risk factors We examined the following early life risk factors associated with childhood overweight risk, based on findings from a systematic review (see ref. 13): perinatal factors and exposures during pregnancy: maternal pre-pregnancy overweight (yes or no); maternal smoking during pregnancy (none, 1-10 cigarettes per day (cpd), 11-20 cpd, >20 cpd); birthweight (normal, low, or high), preterm birth (yes or no), caesarean section (yes or no); early life postnatal exposures measured at 9 months: breastfeeding duration (never, less than 4 months, greater than 4 months), early introduction of solid foods (coded as <4 months yes/no as per Department of Health guidance at the time of the survey), and parity. --- Baseline confounding factors Sex and ethnicity of child, and maternal age at birth of the MCS child are associated with both exposure and outcome measures and so were considered as confounding factors. --- Analysis strategy Following the Baron and Kenny steps to mediation, 14 we explored the unadjusted association between maternal qualifications (primary exposure) and childhood overweight at 11 years (outcome measure). All analyses were conducted in STATA/SE V.13. We explored the associations between potential mediators and overweight, calculating unadjusted relative risks (RRs) using Poisson regression. Following this we explored the association between maternal qualifications and all potential mediators. In the final analysis, sequential models were fitted, calculating adjusted RRs for overweight using Poisson regression on the basis of maternal qualification (with children of mothers with the highest qualifications as the reference group) and adjusting for the potential mediators that were significantly associated with overweight at the p<0.1 level in the univariate analysis. We used a sequential approach to construct the adjusted models, first adding confounding variables, then perinatal factors and exposures during pregnancy, and finally postnatal exposures, to show the association between SEC and overweight. Mediation was taken to be a reduction in, or elimination of, statistically significant RRs in a final complete case sample. 15 We estimated all model parameters using maximum likelihood, accounting for sample design and attrition. We undertook three sensitivity analyses: repeating the analysis with income as an alternative measure of SEC; calculating the relative index of inequality (RII); and using the decomposition method. 16 The results from the sensitivity analysis can be found in the online supplementary material. --- RESULTS 11 764 children were present at 9 months and 11 years with data on overweight status. 9424 (80%) had full data on all exposures of interest in the fully adjusted model. The prevalence of overweight at age 11 was 33.1% in children whose mother had lower qualifications, compared to 20.1% in the highest maternal qualification group (degree plus). All the other covariates of interest, except for sex, varied by level of maternal qualifications (table 1).
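The analyses were run in Stata, but as an illustrative sketch of the modelling strategy described above (Poisson regression with robust standard errors to estimate RRs for a binary outcome, blocks of covariates added sequentially, and mediation read off as the reduction in the RR), the following Python/statsmodels code shows the general shape. All variable and file names are assumptions, and survey weights and some covariates are omitted for brevity.

```python
# Illustrative sketch only, in Python/statsmodels rather than the Stata used in
# the study: Poisson regression with robust standard errors to estimate relative
# risks for a binary outcome, covariate blocks added sequentially, and mediation
# read off as the reduction in the RR. All variable and file names are
# assumptions, and survey weights and some covariates are omitted for brevity.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf


def fit_rr_model(df: pd.DataFrame, formula: str):
    """Poisson GLM with robust (HC1) errors; exponentiated coefficients are RRs."""
    fit = smf.glm(formula, data=df, family=sm.families.Poisson()).fit(cov_type="HC1")
    return np.exp(fit.params), fit


def percent_attenuation(rr_base: float, rr_adjusted: float) -> float:
    """Share of the excess risk explained by the covariates added to the model."""
    return 100 * (rr_base - rr_adjusted) / (rr_base - 1)


if __name__ == "__main__":
    df = pd.read_csv("mcs_age11.csv")  # hypothetical analysis file
    # Model 1: confounders; Model 2: + pregnancy exposures; Model 3: + postnatal exposures.
    m1 = "overweight11 ~ C(mat_qual) + sex + C(ethnicity) + mat_age"
    m2 = m1 + " + mat_prepreg_overweight + C(smoking_preg) + C(birthweight_cat)"
    m3 = m2 + " + C(breastfeeding) + early_solids"
    for formula in (m1, m2, m3):
        rrs, _ = fit_rr_model(df, formula)
        print(rrs.filter(like="mat_qual"))

    # Worked example using the rounded RRs reported in the Results below: the RR
    # for the lowest vs highest maternal qualifications falls from 1.80
    # (confounder-adjusted) to 1.47 after adding maternal pre-pregnancy overweight
    # and smoking in pregnancy, i.e. (1.80 - 1.47) / (1.80 - 1) = about 41%.
    print(round(percent_attenuation(1.80, 1.47), 1))  # -> 41.2 (41.3% from unrounded estimates)
```

The robust-error Poisson approach is a standard way of obtaining RRs for a binary outcome; the percent_attenuation helper simply formalizes the "RR reduction" reading of mediation used in the study.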
--- Associations of covariates with overweight In the univariate regression, lower maternal qualifications, female sex, mixed, Pakistani, Bangladeshi and black ethnicity, maternal age of 35 and older at MCS birth, maternal pre-pregnancy overweight, maternal smoking during pregnancy, more than 1 child in the household, high birthweight, caesarean section, breastfeeding for less than 4 months, and introducing solid foods before 4 months were all associated with an increased RR for overweight in children at age 11 (table 2 and figure 1). Figure 1 shows the unadjusted and fully adjusted covariate estimates. In the fully adjusted model, lower maternal qualifications, female sex, mixed, Pakistani, Bangladeshi and black ethnicity, maternal age of 30 and older at MCS birth, maternal pre-pregnancy overweight, smoking during pregnancy, high birthweight, never breastfeeding, and introducing solid foods before 4 months were all significantly associated with an increase in RR for overweight. There was no significant effect associated with parity, low birthweight, having a preterm birth or caesarean section. --- Association between maternal academic qualifications and overweight, adjusted for other early life factors Figure 2 shows the RRs for maternal qualification and overweight before and after adjustment for covariates added sequentially using a life-course approach (see online supplementary material for data tables showing all the model coefficients). The RR increases from 1.72 (95% CI 1.48 to 2.01) to 1.80 (95% CI 1.54 to 2.10) after adjusting for confounders. There are incremental changes in the RR evident after adjusting for maternal pre-pregnancy overweight, maternal smoking during pregnancy, and breastfeeding. In the final full model, the RR comparing lowest to highest qualifications remains significant (1.44, 95% CI 1.23 to 1.69). Repeating the analysis, but only adding maternal pre-pregnancy overweight and maternal smoking during pregnancy to the confounder-adjusted model, attenuated the RR to 1.47 (95% CI 1.26 to 1.71), indicating that the percentage of effect mediated by these factors equates to 41.3% (RR reduction). --- Sensitivity analysis The conclusions of the study were similar when we used household income as the measure of SEC; when we used RII as the measure of inequality; and when we used an alternative method for mediation analysis (see online supplementary material). --- DISCUSSION Using a nationally representative sample, we show that overweight status at age 11 is socially patterned. Lower maternal qualifications, female sex, mixed, Pakistani, Bangladeshi and black ethnicity, maternal age of 30 and older at MCS birth, maternal pre-pregnancy overweight, smoking during pregnancy, high birthweight, never breastfeeding, and introducing solid foods before 4 months were associated with an increased RR for overweight at 11 years. Maternal pre-pregnancy overweight and maternal smoking during pregnancy attenuated the RR in the lowest maternal qualifications group by around 40%, suggesting that a considerable amount of the social inequalities in preadolescent overweight can be explained by these two variables. --- Comparison with other findings Our study corroborates findings from a systematic review: Shrewsbury and Wardle 5 reported that 42% of the studies found an inverse association between SEC and adiposity, with the lowest SEC group having the highest level of adiposity.
Using parental education as the SEC indicator, 75% of the studies demonstrated an inverse association between SEC and child adiposity. Children whose parents, particularly mothers, have lower levels of education are at a higher risk for developing adiposity. Shrewsbury and Wardle 5 noted Sobal's theoretical framework, which suggests that education, as an indicator of SEC, influences the knowledge and beliefs of parents, and this is theorised to play a more important role in the mechanism linking SEC and the development of adiposity than other SEC indicators (eg, income and occupation). Though some of the studies in the review adjusted for confounding, none attempted to explore factors that attenuate the social gradient. Our study is the first to quantify the contribution of early-life factors in attenuating social inequalities in overweight/obesity on the basis of maternal education level in a nationally representative sample of 11-year-old children in the UK. We found maternal pre-pregnancy overweight to be an important contributor to inequalities in overweight at 11 years, reducing the RR in the sequential model from 1.8 to 1.6. A recent study in a Dutch cohort found similar results, concluding that parental BMI, maternal pre-pregnancy BMI, and smoking during pregnancy contributed most to educational inequalities in BMI in 6-year-olds (attenuation -54%, 95% CI -98% to -33% in the lowest educational group). 9 Maternal pre-pregnancy overweight is related to an increased risk of adverse health outcomes for mothers and infants, including gestational diabetes and large baby size, and may produce other programming effects related to increased risk of childhood overweight. 17 18 Parental overweight has also been found to potentially contribute to childhood overweight via family eating, activity, and factors in later life relating to child fat intake, snack consumption, and child preference for sedentary activities. [19][20][21] These factors reflect the importance of addressing structural barriers to healthy eating faced by the parents of children growing up in more disadvantaged areas. Our research findings are similar to those of a large Irish study investigating determinants of socioeconomic inequalities in obesity in Irish children, which identified maternal smoking during pregnancy as a potential mediator. 11 Potential mechanisms include impaired foetal growth followed by rapid infant weight gain; 10 the influence of prenatal smoking on neural regulation causing increased appetite and decreased physical activity; 22 the associations of smoking with other health damaging behaviours after birth; 23 and the contribution of smoking to family poverty, leading to constrained food budgets and fuelling the consumption of cheap, poor quality foods. 11 Further longitudinal research efforts should be dedicated towards discovering the underlying mechanisms linking prenatal smoking to childhood overweight, and the extent to which they may also explain inequalities. Our study suggests that shorter duration of breastfeeding may make a small contribution to the increased risk of preadolescent overweight in more disadvantaged children. Never breastfeeding was associated with a significantly higher risk of overweight in children at 11 years in the fully adjusted model, corroborating a previous study on the MCS data at an earlier age. 8 --- Strengths and limitations This study used secondary data from a large, contemporary UK cohort and the results are likely to be generalisable to other high-income countries.
A wide range of information is collected in the MCS, which allowed us to explore a range of prenatal, perinatal, and early life risk factors for overweight, including different measures of SEC. Overweight status was based on age- and sex-specific IOTF cut-offs for BMI. Children's BMI was calculated based on height and weight measures taken by trained interviewers, reducing reporting bias by family members. However, using BMI as an indicator for adiposity may not be as accurate as measuring total fat mass. 9 Missing data are a ubiquitous problem in cohort studies. Sampling and response weights were used in all analyses here to account for the sampling design and attrition to age 11. A complete case analysis was used, removing individuals with incomplete data. This approach may introduce bias when the individuals who are excluded are not a random sample from the target population. However, in this analysis the sample was sufficiently large, and the internal associations, which were the targets of inference within the sample population, are likely to be valid, although we speculate that they may underestimate the effect sizes in the full UK population. In our analysis of mediation we have followed the Baron and Kenny approach. 14 We used multiple measures of SEC, all of which further supported our main findings. Alternative methods for mediation analysis are continuously being developed, and have their limitations. 24 For example, the KHB model was applied using logistic regression, as its Poisson regression results were considered "experimental". In this respect our analysis is exploratory and opens up the possibility of more focused mediation analyses to quantify the mediating pathways for specific factors identified in our study. The positive association between maternal and child overweight may in part reflect non-modifiable (eg, genetic) factors. Furthermore, maternal smoking in pregnancy may itself be a proxy marker for SEC. However, we did observe a dose-response relationship between smoking in pregnancy and overweight/obesity, and the association remained after adjusting for multiple measures of SEC, supporting the notion of a causal link. Finally, in the absence of randomised controlled trial data to assess the causal relationship between early life risk factors and childhood overweight, we are reliant on the best quality evidence from prospective observational studies. The systematic review of observational studies by Weng et al 13 concludes there is "strong evidence that maternal pre-pregnancy overweight and maternal smoking in pregnancy increased the likelihood of childhood overweight". In a nationally representative contemporary cohort of UK children we have shown that maternal overweight and smoking during pregnancy may also account for a significant proportion of the social inequality in overweight/obesity. However, as Weng et al point out, the association between smoking and obesity/overweight may be confounded by other lifestyle factors, such as poor diet.
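The percentage attenuation quoted in the Results can be checked directly from the published point estimates. The short sketch below is illustrative only; it is not the authors' code and assumes the conventional excess relative risk attenuation formula, which reproduces the roughly 41% figure when applied to the confounder-adjusted RR of 1.80 and the RR of 1.47 obtained after adding the two candidate mediators (the published 41.3% was presumably computed on unrounded estimates).

```python
def percent_attenuation(rr_adjusted: float, rr_mediated: float) -> float:
    """Share of the excess relative risk (RR - 1) removed by adding the mediators.

    Assumes the conventional excess-risk attenuation formula; this is a
    back-of-the-envelope check, not the authors' actual analysis code.
    """
    return 100 * (rr_adjusted - rr_mediated) / (rr_adjusted - 1)


# Published point estimates, lowest vs highest maternal qualifications:
rr_confounder_adjusted = 1.80  # adjusted for confounders only
rr_plus_mediators = 1.47       # after adding pre-pregnancy overweight and smoking

# Prints 41.2% on these rounded inputs; the paper reports 41.3%.
print(f"{percent_attenuation(rr_confounder_adjusted, rr_plus_mediators):.1f}%")
```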
Future research aimed at reducing childhood obesity should also assess the inequalities impact of interventions in order to build the evidence base to reduce the large social inequalities found in overweight/obesity in childhood. Contributors SM, SW and DT-R planned the study, conducted the analysis, and led the drafting and revising of the manuscript. SM, SW, AP, BB, CL and DT-R contributed to data interpretation, manuscript drafting and revisions. All authors agreed the submitted version of the manuscript. --- Competing interests None declared. Provenance and peer review Not commissioned; externally peer reviewed. --- Data sharing statement Statistical code and dataset available from corresponding author. Open Access This is an Open Access article distributed in accordance with the terms of the Creative Commons Attribution (CC BY 4.0) license, which permits others to distribute, remix, adapt and build upon this work, for commercial use, provided the original work is properly cited. See: http://creativecommons.org/licenses/by/4.0/
Background Overweight and obesity in childhood are socially patterned, with higher prevalence in more disadvantaged populations, but it is unclear to what extent early life factors attenuate the social inequalities found in childhood overweight/obesity. Methods We estimated relative risks (RRs) for being overweight (combining with obesity) at age 11 in 11 764 children from the UK Millennium Cohort Study (MCS) according to socio-economic circumstances (SEC). Early life risk factors were explored to assess if they attenuated associations between SECs and overweight. Results 28.84% of children were overweight at 11 years. Children of mothers with no academic qualifications were more likely to be overweight (RR 1.72, 95% CI 1.48 to 2.01) compared to children of mothers with degrees and higher degrees. Controlling for prenatal, perinatal, and early life characteristics ( particularly maternal pre-pregnancy overweight and maternal smoking during pregnancy) reduced the RR for overweight to 1.44, 95% CI 1.23 to 1.69 in the group with the lowest academic qualifications compared to the highest. Conclusions We observed a clear social gradient in overweight 11-year-old children using a representative UK sample. Moreover, we identified specific early life risk factors, including maternal smoking during pregnancy and maternal pre-pregnancy overweight, that partially account for the social inequalities found in childhood overweight. Policies to support mothers to maintain a healthy weight, breastfeed and abstain from smoking during pregnancy are important to improve maternal and child health outcomes, and our study provides some evidence that they may also help to address the continuing rise in inequalities in childhood overweight.
Introduction Over the last three decades, a marked increase in the prevalence of overweight or obese Canadian adolescents has raised concerns [1,2]. To help manage this ongoing problem, research suggests that engaging in positive health behaviours such as increased physical activity (PA) among other behaviours can act as a protective factor against obesity [3,4]. However, adoption of these weight-related health behaviours can be impacted by a number of proximal influences, including the family environment [5]. Indeed, there is evidence that even among children at genetic risk for developing obesity, family/home environment moderates their likelihood of developing obesity in childhood [6]. Therefore, understanding the familial factors that can influence behaviour change among overweight or obese adolescents is essential in order to target these powerful influences. Parents, in particular, can influence their children directly through parenting practices (i.e., rules or routines) and their own health behaviours, such as modeling PA. Parenting practices are specific actions or strategies parents use to help socialize their children's behaviours [7,8]. In the context of PA, parental support in the form of emotional (e.g., encouragement) [9,10] and logistical (transportation to parks or playgrounds) [11,12] support have been positively associated with adolescents' PA. In contrast, results have generally been mixed when examining parent PA (modeling) on adolescents' PA [13,14]. For instance, accelerometer studies have found positive associations between parent and child/adolescent moderate-vigorous physical activity (MVPA) [14,15], while self-report studies remain inconsistent [16]. Together, these studies highlight the role of parents in influencing their adolescents' behaviours. However, the majority of this research has involved children and adolescents of normal weight. Evidence exploring the relationship between parenting practices and overweight or obese adolescents' PA is lacking [17]. In recent years, family context has emerged as an important factor in the formation of adolescents' PA behaviours. Two main contextual elements of interest include parenting style and family functioning. Parenting style is the emotional climate in which parents raise their child or the way parents interact with their child [18]. According to Baumrind [19,20], three parenting styles exist, including authoritative, authoritarian, and permissive: Authoritative parents exercise control in a supportive and understanding way, by encouraging verbal interaction. Authoritarian parents exercise high control in the form of demands and obedience, while discouraging verbal interaction. Permissive parents exercise minimal control by giving in to their child's demands and provide little to no structure. On the other hand, family functioning acts as an all-encompassing dimension that focuses on how family subsystems interact with one another in terms of their cohesion and flexibility to impact the entirety of behaviors in the family unit. Although research in this area is generally sparse, recent models suggest that since parenting styles and family functioning are considered to be contextual elements, they may function at a higher level and act as moderators [21,22]. Specifically, parenting style and family functioning have the ability to act as moderators and impact child development (e.g., PA behaviours) indirectly by changing the effectiveness of parenting practices and modeling behaviours [18,23]. 
As a result, how children react to and perceive their parents' wishes/demands may stem from the broader familial environment [7,24]. More specifically, parenting styles and family functioning can attribute a positive or negative undertone to the strategies employed by parents. For instance, parents who exhibit a more controlling parenting style by setting strict boundaries on children's outdoor play may be viewed by their child as heavily controlling if the exchange between the parent and child is such that parents enforce rules which the child must obey, ultimately hindering outdoor play. Alternatively, rule setting has the potential to be regarded as warm if the parent-child dynamic involves an age-appropriate discussion on the reasoning behind the rules, openness to change rules, etc. [25]. Moreover, Kitzmann and colleagues [18] allude to the idea that parents' attempts to engage their children in activities may be more successful when they already enjoy interacting and spending time together as a family (high family functioning). As a result, these children may adopt more positive PA behaviours than children from families who do not spend much time together (low family functioning). However, more research into these higher-level dimensions is needed to understand the extent to which context promotes PA behaviours among adolescents who are overweight or obese. Although prior studies have examined parenting practices and parental modeling independently with regards to adolescent PA, less is known about whether both factors are jointly important or whether parenting styles and family functioning moderate these associations. Hence, the present study examines how parenting practices, parental modeling, and adolescent PA fit within these broader family-level components. The primary aim of the study (Figure 1) was to assess whether both parenting practices and parental modeling of PA are associated with adolescents' PA, while examining the extent to which parenting styles and family functioning act as moderators. Given that PA parenting practices and parental modeling may be correlated, the secondary aims of this study assessed these relationships separately and examined: 1) the relationships between parenting practices and adolescents' PA behaviours while examining the role of parenting styles and family functioning as moderators, and 2) the relationships between parental modeling of PA and adolescents' PA while examining the role of parenting styles and family functioning as moderators. Figure 1 presents conceptual relationships tested in this study guided by Bronfenbrenner's ecological model [26] as well as suggestions from other frameworks [7,18,23,24,27], which considered the moderating role of parenting style and family functioning, under the assumption that the influence of specific PA parenting practices (e.g., logistic support, facilitation) and parental modeling (parent's PA levels and self-report) on adolescent PA may be conditionally related to these higher level parental factors.
--- Materials and Methods --- Study design This is a secondary analysis of the baseline data collected as part of a study elucidating the individual and household factors that predict adherence to an e-health family-based lifestyle behaviours modification intervention for overweight/obese adolescents and their family [28]. --- Participants Participants for the analyses included 172 parent/adolescent dyads who filled out a baseline measurement tool prior to starting an e-health family-based lifestyle behaviour modification intervention [28]. Among these families, 68% were recruited via advertisements (newspapers, parenting magazines, Facebook, Craigslist), 28% were previous patients of the British Columbia (BC) Children's Hospital Endocrinology and Diabetes Clinic or Healthy Weight Shapedown program, and 5% were recruited via word of mouth. Parent-adolescent dyads were eligible to participate in the main study if the adolescent was overweight or obese according to the World Health Organization (WHO) cut-points [2] and the parent consented to take part in the study with them. Additional requirements included having internet at home, residing in the Greater Vancouver (BC) area, no plans to move during the study period (three years), and being literate in English. Adolescents were ineligible to participate in the study if they had any comorbidity (e.g., physical disability) that limited their ability to be physically active or eat a normal diet, a history of psychiatric problems or substance abuse, medication use that impacts body weight, or a Type 1 diabetes diagnosis. --- Procedures Ethics approval was obtained from the University of British Columbia and the University of Waterloo. At baseline, parents completed a number of online surveys that asked about their parenting practices, parenting styles, and family functioning. Adolescents and parents filled out a series of surveys on their PA habits. Additionally, adolescents and parents were required to wear an accelerometer (over their hip under their clothes) for eight full days following the baseline visit, during waking hours. Finally, adolescent-parent pairs were asked to keep track of their sleep duration and times when they were not wearing the accelerometer in the logbook.
--- Measures A series of self-report measures were used to capture parenting practices, parenting styles, and family functioning. Adolescent and parent PA were assessed using both self-report and objective measures. Parenting practices (parent self-report): A family nutrition and PA screening measure [29] was used to assess PA parenting practices. An exploratory factor analysis supported the one-factor structure of the original 15-item scale, with a score that had adequate internal consistency (0.70) and was related to body mass index (BMI) categories of children [29]. A four-factor structure, composed of PA, eating, breakfast, and screen time, was also supported (Cronbach's alpha coefficients of 0.60, 0.64, 0.55, and 0.33, respectively) to examine practices related to specific behaviours. Items consisted of two opposing statements in which parents selected the statement that applied to their child and/or family. For PA practices, three items asked whether the child participates in organized sports, whether the child is spontaneously active, and whether the family is active together. This response style was selected to normalize both positive and negative response options to minimize social desirability bias [30]. Responses were converted to a four-point numerical scale, and reverse coded as needed, so that a score of four indicated more healthful parenting practices. Parenting styles (parent self-report): Parenting styles were measured using a modified version of Cullen's 16-item authoritative parenting scale [31]. The original measure includes two subscales, authoritative (11 items) and negative (five items) parenting styles, measured on a four-point Likert scale ranging from never to always, and has been previously tested in a sample of ethnically diverse parents and grade four to six students [31]. With regards to item variance, a principal component analysis (PCA) revealed that the authoritative subscale explained 30% while the negative subscale explained 11%. Cronbach's alphas for the authoritative and negative subscales were 0.72 and 0.73, and yielded Pearson test-retest correlation coefficients of 0.53 and 0.82, respectively.
However, as the structure in the study sample was not supported according to the initial confirmatory factor analysis (χ2(df = 89) = 187.6, p < 0.001, RMSEA = 0.084 with 90% CI 0.067-0.101, CFI = 0.844, SRMR = 0.080), the authoritative (e.g., "tell child he/she does a good job", "tell child I like my child just the way he/she is") and negative (e.g., "forget the rules I make for my child", "hard to say no to child") subscales were reduced to ten and three items, respectively, along with the addition of two correlated error terms according to modification indices and conceptual relevance. As the content of the remaining three items on the negative parenting scale was more permissive in nature, the scale is referred to as measuring "permissive" parenting. In the present sample, confirmatory factor analysis supported the revised structure (χ2(df = 62) = 109.8, p < 0.001, RMSEA = 0.070 with 90% CI 0.048-0.091, CFI = 0.919, SRMR = 0.067), with a Cronbach's alpha of 0.85 for authoritative and 0.59 for permissive. To derive indices, items were summed and dichotomized at the median to split parents into high/low authoritative and permissive style. A fairly even split was achieved for the authoritative style (72 participants allocated to the high group and 82 to the low group), but not for the permissive style (50 participants allocated to the high group and 120 to the low group) due to a majority of parents scoring at the median. Family functioning (parent self-report): The Family Adaptability and Cohesion Evaluation Scale IV (FACES IV) [32] assessed family functioning. The original measure comprises 42 items assessed on a five-point Likert scale ranging from strongly agree to strongly disagree. The measure contains six subscales: balanced cohesion (e.g., "feeling very close"), balanced flexibility (e.g., "able to adjust to change"), enmeshed (e.g., "spending too much time together"), disengaged (e.g., "avoid contact with each other"), chaotic (e.g., "never seem to get organized"), and rigid (e.g., "rules for every possible occasion"). These six subscales measured two overarching dimensions of cohesion and flexibility [32]. The six-factor structure was supported in a sample of US post-secondary adults (mean age: 28) and all scales had high internal consistency (Cronbach's alpha 0.77 to 0.89) [32]. Using the conversion chart developed by Olson [32], raw scores for each family functioning subscale were transformed into subscale-specific percentile scores. Cohesion and flexibility ratio scores were computed independently based on percentile scores. Refer to Table 1 for the formulas used to compute the ratio scores. For analytic purposes, the cohesion ratio and flexibility ratio were dichotomized. Participants were classified into the high family functioning group if their ratio scores were above the median on both ratios; those with scores below the median on at least one of the two ratios were classified as low family functioning. Hence, families which scored at or above 1.9 on the cohesion ratio and at or above 1.4 on the flexibility ratio were categorized as belonging to the high family functioning group. Accelerometer to measure MVPA (worn by child and parent): Two types of accelerometers (Actigraph GT3X or GT3X+) were used to measure MVPA. Parental modeling (PA) was computed using parent MVPA as described below. Data from the Actigraph accelerometers were processed using a program in Stata, following previous recommendations [33,34].
Data from the accelerometers were collected in 10-second epochs and aggregated into one-minute intervals for the analyses. A day of recording was considered valid if the accelerometer was worn at least 10 hours per day, which represents 63% of the time participants are awake (for those who sleep eight hours). Non-wear time was defined as a period of at least 60 minutes with no recorded activity [33]. If participants had three valid days (including one weekend day) of wear time, they were included in the analyses. To help determine the appropriate minutes of MVPA, child- and parent-specific MVPA cut-points were used (≥2296 and ≥1952 accelerometer counts in a one-minute time frame, respectively) [35]. Minutes at or above these cut-points were combined to calculate total minutes of MVPA during the assessment week [36]. To determine the average minutes of MVPA at baseline, total MVPA was divided by the number of days. Seven-day physical activity recall (PAR) to measure MVPA (interview-administered to child and parent separately): The seven-day PAR is a semi-structured interview aimed at estimating the amount of MVPA the parent or child has engaged in for 10 minutes or longer in the seven days leading up to the interview. Parental modeling of PA was also computed using parent self-report of MVPA as described below. The measure, which is adapted from the Stanford Five-City Project [37], is primarily used to record the intensity and duration (in minutes) of participants' activities. To aid participants in identifying which level of intensity corresponded to the activity they performed, they were provided an overview of three different levels of intensity. These levels included leisure walking (i.e., relaxing walk), moderate activities (i.e., brisk walking) and very hard activities (i.e., running hard). In addition to the regular interview questions, probing methods were employed to ensure that sufficient information was obtained from each participant. The Compendium of Energy Expenditure for Youth [38] was used to assign the appropriate number of metabolic equivalents (where 1 MET is the amount of energy expended at rest) to each activity the participant performed. Self-reported MVPA time was defined as the average minutes per day spent performing activities that were ≥4 METs [35,39,40]. Time spent in MVPA was computed by summing all the activities at or above this threshold. The total minutes in a week was divided by seven to obtain average minutes of MVPA per day. --- Data Analysis Path analysis was used to conduct all analyses in Stata 13. Full information maximum likelihood was employed to handle missing data. For all the analyses, two models were run: one model using adolescents' MVPA measured with accelerometry as the dependent variable and another model using adolescents' MVPA measured with self-report as the dependent variable. The analyses for the primary and secondary aims followed the same process: (1) Model 1 tested whether PA-related parenting practices and/or parental modeling (PA) were associated with adolescents' MVPA, and (2) the final model included the relationships tested in Model 1 but added all the moderating variables (i.e., authoritative and permissive parenting styles as well as family functioning) and the relevant interaction terms as depicted in Figure 1. Note that interaction terms were then entered into the analysis one by one for each of the corresponding models and were kept in the model if p < 0.10.
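The study's models were fitted as path analyses in Stata 13 with full-information maximum likelihood. Purely as a schematic illustration of the model-building sequence just described (Model 1, then a final model that adds a moderator and retains its interaction term only if p < 0.10), the sketch below fits single-equation regression analogues in Python on simulated data; all variable names and values are hypothetical and the code is not the study's implementation.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 172  # same number of dyads as the study, but all values below are synthetic
df = pd.DataFrame({
    "adolescent_mvpa": rng.normal(size=n),     # e.g., accelerometer MVPA (min/day)
    "pa_practices": rng.normal(size=n),        # PA parenting practices score
    "parent_modeling": rng.normal(size=n),     # parent MVPA (modeling)
    "permissive_high": rng.integers(0, 2, n),  # median-split permissive style (0/1)
    "sex": rng.integers(0, 2, n),
    "age": rng.normal(size=n),
    "income": rng.normal(size=n),
})

# Standardize continuous variables prior to modelling, as in the analysis plan.
for col in ["adolescent_mvpa", "pa_practices", "parent_modeling", "age", "income"]:
    df[col] = (df[col] - df[col].mean()) / df[col].std()

# Model 1: parenting practices and parental modeling only (plus covariates).
m1 = smf.ols("adolescent_mvpa ~ pa_practices + parent_modeling + sex + age + income",
             data=df).fit()

# Final model: add one moderator and its interaction with PA practices;
# the study retained an interaction term only if p < 0.10.
m2 = smf.ols("adolescent_mvpa ~ pa_practices * permissive_high + parent_modeling"
             " + sex + age + income", data=df).fit()
keep_interaction = m2.pvalues["pa_practices:permissive_high"] < 0.10

print(m1.params)
print(m2.params)
print("retain interaction:", keep_interaction)
```

The same structure would be repeated with the self-reported MVPA outcome and with the other moderators (authoritative style, family functioning), mirroring the two-outcome, moderator-by-moderator approach described above.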
All variables were standardized prior to inclusion in the models so as to address issues of convergence. Each model was adjusted for the following covariates: adolescent sex, adolescent age, and parental income. The secondary analyses are presented first as they serve to interpret and build the model for the main aim of this study. To examine assumptions of linear regression, residual plots and bivariate scatterplots were estimated for each model. The magnitude of each path, indicated by its standardized coefficient (SC), as well as the associated p-value, were examined to determine the significance of the path. --- Results The analytic sample is characterized in Table 2. As shown in Table 3, adolescents and parents accumulated around half an hour of MVPA per day as measured by accelerometry, and about 56 and 69 minutes of self-reported MVPA per day, respectively. The majority of parents had high scores on the authoritative parenting style scale and midrange scores on the permissive parenting scale, as most scored 6.0 on a scale ranging from 3 to 12. Regarding family functioning, most parents were balanced on the cohesion and flexibility ratios, as the mean ratios both exceeded one. Table 4 displays associations between PA parenting practices and adolescents' MVPA and whether associations were moderated by parenting styles and family functioning. As demonstrated in Table 4, Model 1 (without moderators) highlights that PA parenting practices were significantly associated with adolescents' self-report of MVPA and that there was a trend towards significance (p = 0.06) with adolescents' MVPA measured by accelerometry. Specifically, more healthful PA parenting practices were associated with higher levels of adolescents' MVPA. When the moderators were included in the model, the interaction term between permissive style and PA parenting practices became significant. In contrast, PA parenting practices were the only significant predictor of adolescents' self-reported MVPA when the moderators were added. Figure 2 illustrates the interaction of permissive style by PA parenting practices, suggesting that more healthful PA parenting practices were positively associated with adolescents' MVPA but also indicating that this association was more pronounced among adolescents whose parents use a high permissive style compared to those with a low permissive style. As shown in the graph, however, the direction of this association reverses when parents employ less healthful PA practices. In all models, adolescents' sex was the only significant covariate. The results suggest that adolescent boys had significantly higher MVPA than adolescent girls, and this was observed for both accelerometry and self-report assessment of MVPA (Table 3). Table 5 displays associations between parental modeling of PA and adolescents' MVPA, as well as whether parenting styles and family functioning moderated these associations. As shown in Table 5, Model 1 (without moderators) highlights that parental modeling of PA was significantly associated with adolescents' MVPA for both accelerometer and self-report measures. Specifically, higher parental modeling of PA was associated with increased PA among overweight/obese adolescents. When the moderators were added into these models, no significant effects emerged for parenting styles or family functioning, but parental modeling of PA remained significant.
Table 6 displays the associations of both PA parenting practices and parental modeling of PA with adolescents' MVPA and whether parenting styles and family functioning moderate these associations. Model 1 (without moderators) highlights that PA parenting practices and parental modeling of PA were significantly associated with self-report of MVPA. Specifically, more healthful PA parenting practices and parental modeling of PA were both associated with higher levels of adolescents' MVPA. Although parental modeling of PA (accelerometry) was significantly associated with adolescents' MVPA measured with accelerometry in Model 1, only a trend towards significance (p = 0.07) was observed for this relationship in the final model. When the moderators were added into the model, a significant interaction between permissive style and PA practices was observed. However, this was only observed for MVPA measured with accelerometry. This is similar to our findings reported in Table 4, and the interaction is illustrated in Figure 2 (see the figure and description reported above). --- Discussion The purpose of this study was to examine the effect of parenting practices and/or parental modeling on the PA behaviours of overweight/obese adolescents and explore whether parenting styles and family functioning act as moderators. With regards to the primary aim of the study, when considering both PA parenting practices (i.e., facilitation, logistic support) and parental modeling of PA (i.e., PA self-report), both were significantly associated with adolescents' self-report of MVPA, where higher MVPA occurred in families that had more positive parenting practices and modeled an active lifestyle. In addition, a significant interaction between permissive style and PA parenting practices emerged for adolescents' MVPA measured with accelerometry, where permissiveness was found to amplify the association between healthy PA parenting practices and adolescents' MVPA. Interestingly, family functioning did not emerge as an important moderator. The findings were similar when PA parenting practices and parental modeling were examined independently (secondary aims), except that the association between parental modeling of PA and adolescents' MVPA measured with accelerometry was significant instead of being borderline significant. Overall, the results highlight the importance of healthy PA parental practices and modeling to support overweight/obese adolescents' MVPA, as well as the role of permissiveness in further supporting their engagement in PA. Given that most of the literature has focused on the influence of parenting practices and modeling separately [12,15,41,42], the present study revealed that parenting practices and parental modeling together may be important factors in overweight/obese adolescents' PA behaviours. The few studies that have explored both practices and modeling together in the context of PA report conflicting results in comparison to the present study. Previous studies have reported that the importance of modeling is diminished by other constructs, such as parental encouragement and support [43][44][45].
For instance, a study conducted among grade 7-12 students found parenting practices, namely parental support, to be more influential than parental modeling [43]. However, these studies targeted a general sample of adolescents while the present study focused on overweight/obese adolescents, which may explain the discrepancies. It may be that parents who are more active or model an active lifestyle are in a better position to support their overweight/obese adolescents' PA as they can, for example, be active together. On the other hand, adolescents who are not overweight/obese may only need support from their parents to be physically active, such as transportation to a playground, while overweight/obese adolescents may need the additional modeling component to enhance their drive and motivation to be active. Therefore, the combination of parental modeling along with specific parenting practices such as taking the child to an appropriate location for PA or encouragement may be necessary to influence the activity of overweight/obese adolescents. Evidence to support the hypothesis that family functioning would act as a moderator of the relationship between parenting practices and/or parental modeling and adolescents' physical activity was not found. Of note, few studies to date have explored the role of family functioning as a moderator [18,46]. In a sample of healthy adolescents, one study found evidence of family functioning as a moderator of the relationship between family meals and unhealthful weight management behaviours [46]. Although no literature has explored family functioning as a moderator within the context of child obesity, a review provides some indirect evidence to support this notion [18]. Correlational findings indicate that overweight/obese children have a greater likelihood of experiencing more family conflict and less family cohesion compared to their normal-weight counterparts [47,48]. Although the directionality of this effect remains unclear, these correlations suggest that in families where an overweight child is present, more support may be needed to help establish or manage positive health behaviours [18]. Although this review provides some reasoning to support the moderating effect of family functioning on adolescent health behaviours, the evidence base remains unclear [18,24]. In the present study, it is important to note that the null findings apparent for family functioning may be a result of the sample's characteristics. Of note, families in our sample were predominantly balanced in cohesion and flexibility. Therefore, families categorized as high or low functioning may be quite similar to one another. Thus, future research should strive to capture families that truly fit into the high or low family functioning groups to better understand the true potential of family functioning. The association between parenting practices and adolescents' PA behaviours was moderated by parenting styles; however, the pattern was only partially consistent with the study hypotheses. This finding highlights that the moderating effect of parenting styles on the association between parenting practices and adolescents' PA behaviours was more complex than anticipated. Two other studies have reported similar results, suggesting that more healthful practices performed in a more permissive way are associated with more adolescent MVPA [25,49].
According to Hennessy and colleagues, two types of PA parenting practices (monitoring and reinforcement) were associated with child accelerometer PA when expressed in the context of a permissive parenting style [25]. Similar findings were also observed by Langer and colleagues who found parental support was only associated with adolescent PA when expressed in the context of a permissive parenting style [49]. One potential explanation for this finding may be that permissive parenting characterized by high warmth and low demand is associated with more unstructured playtime and more enjoyable activities [50]. Therefore, being permissive in the context of PA may provide adolescents with more free time for active play and if they feel encouraged and supported by their parent with respect to PA, they may choose to be physically active. The association between PA parenting practices, styles, and adolescents' MVPA was only observed when adolescents' MVPA was measured by accelerometry. While both accelerometer and self-report measures have been validated to assess PA, there are clear differences in the two measures. For instance, accelerometer data give more accurate estimates of walking-based activities and avoid many of the issues that go along with self-report, such as recall and response bias [51]. However, it is important to highlight that accelerometers are unable to capture certain types of activities, such as swimming and activities involving the use of upper extremities. Compared to direct measures, self-report methods appear to estimate greater amounts of higher intensity (i.e., vigorous) PA than in the low-to-moderate levels [51]. The main difference in the present study is that the self-report MVPA interaction with parenting practices and styles did not appear while it was found with the accelerometer. Measurement error with self-report tends to be higher, as noted by the increased chance of recall and response bias, which may lead to decreased power and perhaps explain why a significant interaction was not observed with the self-report data. The study has some limitations that should be considered. First, it is difficult to assume a cause and effect relationship due to the cross-sectional nature of the study. For instance, relationships observed in this study may be bi-directional since both parents and children can shape one another [52,53]. Second, measurement errors may have biased study results. MVPA was assessed with both subjective (self-report) and objective (accelerometer) measures. Self-report measures are subject to reporting biases, such as recall and social desirability bias, since individuals are known to have poor recall of past PA levels and tend to overestimate their PA (biased reporting and low validity), respectively [54][55][56]. Therefore, inconsistency in our results may be due to these various forms of measurement error. Third, the parenting measures had some measurement challenges. Of the three parenting styles developed by Baumrind, our study only captured two (authoritative and permissive). Therefore, this may have limited our ability to adequately capture the parenting style of each parent. Additionally, as all parenting measures (parenting practices, parenting styles, and family functioning) were self-reported, our results may have been impacted by social desirability bias. As such, this may have led to an overestimation of the results. 
Given the strong evidence of familial clustering of obesity [57,58], our study may have benefited from information on parental BMI. Including this variable in our analyses may have attenuated our results, given its association with the family PA environment as well as various parenting behaviours (e.g., monitoring child PA, setting limits for PA) [59]. Finally, our sample included adolescent volunteers who were classified as overweight or obese and were willing to take part in an e-health intervention. Despite the fact that our results may not be generalizable to the general population, it is important to consider overweight/obese adolescents not only because they are typically understudied, but because they are frequently targeted by treatment interventions. This is the first study to explore the moderating effects of both parenting styles and family functioning on adolescents' PA behaviours. It is one of the only studies to examine moderating effects in a sample of overweight or obese adolescents, which is essential when trying to design effective weight-management interventions. Understanding how parenting practices and modeling interact with styles and functioning on adolescents' health behaviours provides useful information for the development of familial interventions. It is also one of the few to use both accelerometers and self-report to directly measure and compare both parent and adolescents' PA levels. Findings from this study offer implications for intervention development. First, interventionists (e.g., nurse practitioners) should consider parenting factors when counselling families with an overweight or obese adolescent. As part of family-based interventions, interventionists should encourage parents to not only provide support for their child's PA, but modify their own PA. Secondly, family context, specifically, parenting style, may help improve the efficacy of family-based interventions. For example, interventionists could teach parents that parenting styles and practices go hand in hand and elicit different PA behaviours from their adolescents. --- Conclusions In conclusion, the study emphasizes the need to consider both parenting practices and parental modeling in shaping overweight/obese adolescents' PA behaviours, as well as acknowledges the importance of using such parenting tactics in the appropriate context. --- Conflicts of Interest: The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.
1) Background: Family environments can impact obesity risk among adolescents. Little is known about the mechanisms by which parents can influence obesity-related adolescent health behaviours and specifically how parenting practices (e.g., rules or routines) and/or their own health behaviours relate to their adolescent's behaviours. The primary aim of the study explored, in a sample of overweight/obese adolescents, how parenting practices and/or parental modeling of physical activity (PA) behaviours relate to adolescents' PA while examining the moderating role of parenting styles and family functioning. (2) Methods: A total of 172 parent-adolescent dyads completed surveys about their PA and wore an accelerometer for eight days to objectively measure PA. Parents completed questionnaires about their family functioning, parenting practices, and styles (authoritative and permissive). Path analysis was used for the analyses. (3) Results: More healthful PA parenting practices and parental modeling of PA were both associated with higher levels of adolescents' self-reported moderate-vigorous physical activity (MVPA). For accelerometer PA, more healthful PA parenting practices were associated with adolescents' increased MVPA when parents used a more permissive parenting style. (4) Conclusions: This study suggests that parenting practices and parental modeling play a role in adolescent's PA. The family's emotional/relational context also warrants consideration since parenting style moderated these effects. This study emphasizes the importance of incorporating parenting styles into current familial interventions to improve their efficacy.
violations, such as theft, drug dealing, and burglary (Henry, Tolan, & Gorman-Smith, 2001; U.S. Office of Justice Programs, 2015). Despite rates of nonviolent crime that are lower than or equal to other industrialized nations, the United States has one of the highest rates of violent crime, especially lethal violent crime ("Countries compared by crime," 2009). Research suggests that a majority of young offenders engage in nonviolent crime, whereas only a small subset escalates to violent crime (Cohen & Piquero, 2009). Understanding the risk factors that distinguish the small group at highest risk for future violent crime could aid in early detection efforts and inform prevention strategies (Broidy et al., 2003). Most risk research has focused on criminal behavior broadly defined, but a few studies have explored the differential prediction of nonviolent versus violent crime (Loeber & Farrington, 2012;Piquero, Jennings, & Barnes, 2012). This paper adds to this literature by exploring common versus unique predictors of early adult violent versus nonviolent crime in a large sample of at-risk youth followed longitudinally, using multiple informants to assess childhood and early adolescent characteristics, with arrest records to document adult crimes. --- Common vs. Unique Pathways to Violent and Nonviolent Crime Extensive research suggests that the roots of antisocial development emerge in childhood, marked by elevated aggression and emotional difficulties, and exacerbated by parent-child conflict and harsh discipline (Dodge, Greenberg, Malone, & CPPRG, 2008). By early adolescence, deviant peer affiliation accompanied by detachment from parents and reduced parental monitoring fosters the initiation of antisocial behavior (Loeber, Burke, & Pardini, 2009). Within this broad framework, researchers have identified differentiated developmental patterns. For example, Moffitt (2006) introduced the distinction between childhood-onset and adolescent-limited patterns, documenting higher rates of childhood aggression and selfregulatory deficits among youth who initiated antisocial behavior early and showed chronic adult criminal activity, relative to those who began antisocial behavior later and desisted by early adulthood. In a parallel line of inquiry, researchers have documented different etiological and developmental pathways characterizing overt aggression versus covert rulebreaking behavior (see Burt, 2012 for a review). However, rarely are youth followed from childhood through adulthood to determine whether distinct childhood and adolescent experiences differentially predict persisting adult patterns of violent versus nonviolent crime (Loeber & Farrington, 2012). This is a question of high practical significance, given the inordinate human costs of violent crime relative to nonviolent crime (Reingle, Jennings, & Maldonado-Molina, 2012). Some theorists have speculated that nonviolent and violent criminal behavior represent manifestations of the same underlying pathology (e.g., Sampson & Laub, 2003). Indeed, the frequency of nonviolent offending predicts future violent crime, suggesting they represent sequenced outcomes associated with a common antisocial developmental progression (Piquero et al., 2012). In contrast, research has also identified distinct risk factors that specifically predict violent offending (Broidy et al., 2003;Byrd, Loeber, & Pardini, 2012;Nagin & Tremblay, 1999). 
--- Dysfunctional Social-emotional Development and Later Violent Crime The most reliable predictor of later violent crime is elevated aggression in childhood (Loeber et al., 2009; Reingle et al., 2012). Trajectory studies by Nagin and Tremblay (1999), replicated by Broidy et al. (2003) across six cross-national longitudinal data sets, found that boys' violent crime in late adolescence was best predicted by being in the highest trajectory of physical aggression from ages 6-15 years. Similarly, several studies have documented higher levels of childhood physical aggression in samples of violent adolescents than in those who committed nonviolent or no offenses (Lai, Zing, & Chu, 2015; Reingle et al., 2012). Theorists have suggested that adult violence emerges when an early propensity for hostile, domineering behavior is reinforced and overlearned during childhood and adolescence (Broidy et al., 2003). In addition to aggressive behavior, significant social and emotional difficulties in childhood may increase risk for later violence. Elevated aggression and the emergence of violence have each been linked with negative emotionality and problematic peer relations (Burt, 2012; Lynam, Piquero, & Moffitt, 2004; Veltri et al., 2014). Developmental theorists have speculated that elevated childhood aggression often reflects reactivity in the more primitive neural circuits associated with the processing of fear and rage, evoked when children feel threatened (Vitaro, Brendgen, & Tremblay, 2002). Adverse living conditions and social isolation undermine the development of core self-regulatory capacities, eliciting defensive anger and fostering emotion dysregulation (Cicchetti, 2002). Consistent with this developmental analysis, research has linked difficulties regulating emotion and managing anger in childhood with later criminal activity (Eisenberg, Spinrad, & Eggum, 2010), and in some studies, specifically later violence. For example, in the Dunedin longitudinal study, boys who were emotionally dysregulated were more likely to engage in violent (but not nonviolent) offending in early adulthood (Henry, Caspi, Moffitt, & Silva, 1996). Aggressive children who are emotionally dysregulated are particularly likely to experience peer rejection and social isolation, and thereby become excluded from positive peer socialization opportunities that facilitate the growth of communication skills, empathy, and general social competence (Bierman, 2004). Social isolation, in turn, increases risk for later violence (Hawkins et al., 2000). Children who are isolated from mainstream peers often play with other aggressive children who encourage rebellious behavior and reinforce antisocial norms (Powers & Bierman, 2013). Peer-rejected children appear particularly vulnerable to developing a heightened vigilance for social threat and cues of impending conflict, choosing to act aggressively rather than experience vulnerability (Erath, El-Sheikh, & Cummings, 2009). For these reasons, the combination of childhood aggression, emotion dysregulation, and social isolation may reflect dysfunction in social-emotional development that primes children for later violence, making them more angry, reactive, and easily provoked to attack compared to aggressive children without the same level of concurrent social-emotional risks.
--- Adolescent Predictors of Nonviolent and Violent Crime The transition into adolescence, generally considered a second phase in the development of antisocial behavior, is normatively accompanied by autonomy-seeking behavior. For many adolescents, the drive to establish autonomy involves purposeful distancing from parents and increased peer engagement (Dishion, 2014). From a social control perspective, distancing from parents, who are likely to reinforce socially normative values, coupled with engagement with peers who are more likely to embrace nonconventional attitudes and rebellious behavior, can lead to the initiation of delinquency (Loeber & Farrington, 2012). When detaching adolescents cease sharing personal information with their parents, it greatly diminishes their parents' ability to monitor them and protect them from risky situations or risky peers (Kerr & Stattin, 2002). Several studies suggest that adolescent risk-taking, detachment from parents, and deviant peer affiliation may be more strongly associated with nonviolent crime than with the escalation from nonviolent to violent crime, although evidence is mixed (Dishion, 2014;Dodge et al., 2008;Veltri et al., 2014). For example, Capaldi and Patterson (1996) found that reduced parental monitoring predicted both violent and nonviolent arrests in early adulthood, but did not explain unique variance in violent offending once nonviolent offending was considered. In another study, peer delinquency predicted both violent and nonviolent delinquency but showed a stronger association with milder and nonviolent forms of delinquency (Bernburg & Thorlindsson, 1999). In contrast, however, other studies have found peer violence and peer delinquency to predict later engagement in and trajectories of both violent and nonviolent crime (Henry et al., 2001;MacDonald, Haviland, & Morral, 2009). From a theoretical perspective, detaching from parents and affiliating with deviant peers changes the social norms and controls to which adolescents are exposed and leads to increased engagement in unsupervised activity, often facilitating self-serving behavior and corresponding rule-violations (Dishion, 2014). Most peer-facilitated adolescent antisocial activities fall in the category of nonviolent crimes (e.g., substance use, theft) rather than interpersonal violence. Hence, detaching from parents and affiliating with deviant peers may increase risk for nonviolent crimes, but not necessarily increase risk for the escalation to violent crime, once the association with nonviolent crime is accounted for. Additional research is needed to test this hypothesis. --- The Present Study A growing base of research suggests that social-emotional dysfunction in childhood, along with elevated aggression, may indicate unique risk for the emergence of violent crime in later adulthood, both because these characteristics may increase parent detachment and deviant peer affiliation at the transition into adolescence, as well as because these characteristics indicate difficulty managing feelings of intensive anger and social alienation. Yet, unique pathways to violent and nonviolent crime remain under-studied, particularly because few longitudinal studies include measures of childhood social-emotional dysfunction and aggression, and measures of adult violent and nonviolent crime. 
The present sample included a large number of children living in risky contexts selected from four different areas of the United States and followed longitudinally from elementary school through early adulthood, with multiple measures of child social-emotional and behavioral functioning as well as court records of adult crime. As such, it offered a unique opportunity to explore differential predictors of violent and nonviolent crime, particularly the role of early social-emotional development along with early aggression. A key goal of this study was to better understand the relative roles of childhood social-emotional dysfunction and early adolescent risk factors as differential predictors of violent and nonviolent forms of early adult crime. Based on research suggesting different pathways to violent and nonviolent crime (Hawkins et al., 2000; Loeber & Farrington, 2012), it was predicted that child aggression, emotion dysregulation and social isolation (reflecting childhood social-emotional dysfunction) would predict violent and nonviolent crime by increasing parent detachment and peer deviance, and also make a direct unique contribution to the prediction of violent crime. Given the less consistent research on associations between early adolescent social experiences and violent versus nonviolent crime, it was predicted that parent detachment and peer deviance would predict both forms of crime, with stronger (unique) contributions to nonviolent crime. --- Method Participants Participants were 754 youth (46% African American, 50% European American, 4% other; 58% male) from a multi-site, longitudinal study of children at risk for conduct problems (Fast Track) that also involved a preventive intervention. This study used data collected from 1995 through 2009. Participants were recruited from 27 schools in high-risk areas located in four sites (Durham, NC; Nashville, TN; Seattle, WA; and rural PA). In the large urban school districts, schools with the highest risk statistics (e.g., highest student poverty, lowest school achievement) were selected for participation; in the three participating rural school districts, all schools participated. All participating schools had kindergartens. The sample selection proceeded as follows. First, in the late fall of three successive years, teachers rated the aggressive-disruptive behavior of all kindergarten children (total N = 9,594) on 10 items from the Authority Acceptance subscale of the TOCA-R (Werthamer-Larsson, Kellam, & Wheeler, 1991). Children who scored in the top 40% on this teacher screen at each site were identified (N = 3,274) and their parents rated aggressive-disruptive child behavior at home (Achenbach, 1991). Teacher and parent screen scores were averaged, and children were recruited beginning with the highest score and moving down the list until desired sample sizes were reached within sites (N = 891 high-risk children, including 446 randomized by school to the control group and eligible for this study; see Lochman & CPPRG, 1995 for details). In addition, a normative sample (N = 396) was recruited to be representative of the school population at each site. The normative sample was recruited only from the control schools, so that intervention effects would not affect longitudinal course. For this sample, children were stratified to represent each site population on dimensions of race, sex, and decile of the teacher screen, and then chosen randomly within these blocks for study recruitment.
The normative sample included a portion of the high-risk control group to the proportional degree that they represented the school population. The selection of participants into the study is illustrated in Figure 3 (in the on-line appendix). The present study oversampled higher-risk students, including children from both the high-risk (59%) and normative (41%) samples, in order to increase variability in the risk factors and crime outcomes of interest. Of the 754 participants, 20 participants (3%) had no arrest records available. An MCAR test (Little, 1988) indicated that adult crime outcomes were missing completely at random. However, participants with missing data had higher levels of childhood aggression, emotion dysregulation, and youth-rated parent detachment and peer deviancy than participants with data. In structural equation models testing the study hypotheses, full information maximum likelihood estimation was used to account for missing data. --- Measures One parent, the primary caregiver, and one teacher (the primary classroom teacher) rated child social-emotional functioning (aggression, emotion dysregulation, social isolation) in fifth grade (age 10-11). Primary caregivers included biological mothers (86%), biological fathers (5%), a grandparent (5%), or other (e.g., step-parents, adoptive parents, or other guardians; 4%). Parents and youth rated parent detachment, and youth rated peer deviancy in early adolescence (age 12-14). Arrest records were collected in early adulthood. Measures are described below; technical reports that provide items and psychometric properties of all measures are available at the Fast Track study website, http://fasttrackproject.org/datainstruments.php. Child characteristics in late childhood - At the end of fifth grade, parents and teachers completed the Child Behavior Checklist - Parent and Teacher Report Forms (Achenbach, 1991). To assess aggression distinct from oppositional or hyperactive behavior, a 9-item narrow-band scale validated in a prior study (Stormshak, Bierman, & CPPRG, 1998) was used (e.g., gets in many fights, threatens, destroys things) (α = .91 for parents, α = .92 for teachers). Similarly, nine items were used to assess a narrow-band scale of social isolation (e.g., withdrawn, sulks, shy) (α = .72 for parents, α = .79 for teachers). For both measures, raw scores were standardized and averaged to create a parent-teacher composite. At the end of fifth grade, teachers also completed the emotion regulation subscale of the Social Competence Scale (CPPRG, 1995), comprised of nine items (each rated on a 5-point scale) assessing the child's ability to regulate emotions under conditions of elevated arousal (e.g., controls temper in a disagreement, calms down when excited or wound up; α = .78). The scale was reverse-scored to represent emotion dysregulation. Socialization influences in early adolescence - During the summers following seventh and eighth grade, youth and parents completed the Parent-Child Communication Scale, adapted for the Fast Track Project from the Revised Parent-Adolescent Communication Form (Thornberry, Huizinga, & Loeber, 1995). The youth version included 10 items, all reverse scored for this study, assessing perceptions of parent unreceptiveness (e.g., my parent is a good listener, my parent tries to understand my thoughts) and child secrecy (e.g., I discuss problems with my parent, I can let my parent know what bothers me; α = .59).
The parent version included 11 items, reverse scored, assessing perceptions of child secrecy (e.g., my child talks to me about personal problems, my child tells me what is bothering him/her), and poor parent communication (e.g., I discuss my child's problem with my child; α = .53). All items were rated on a 5-point scale (from 1 = almost never to 5 = almost always), with high scores indicating more problems. To assess peer deviancy, youth completed the Self Report of Close Friends (O'Donnell, Hawkins, & Abbott, 1995), describing their first-best and second-best friends' antisocial behavior with a 4-point Likert scale (1 = very much to 4 = not at all). In seventh grade, a 5-item version of this scale was used (e.g., gets in trouble with teachers, drinks alcohol, gets in trouble with police; α = .82). In eighth grade, seven additional items were added, focused on joint antisocial activities (e.g., you and best friend got in trouble with the police; α = .89). Arrest records - Adult arrest data were collected from the court system in the child's county of residence and surrounding counties when youth were 22-23 years old. A record of arrest corresponded to any crime for which the individual had been arrested and adjudicated. Exceptions were probation violations and referrals to youth diversion programs for first-time offenders. Court records of conviction were also collected and revealed that 65% of arrests resulted in convictions. Due to the high correlation between arrest and conviction data (.95 for males, .91 for females), only arrest data were examined in this study. Trained research assistants assigned a severity score to each offense, using a cross-site coding manual based on the severity coding system used by Cernkovich and Giordano (2001). Status offenses and traffic offenses were not included in this study due to their frequent occurrence and relatively normative nature among the general population. Nonviolent crimes included those coded at severity levels 2 (trespassing, vandalism, disorderly conduct, possession of stolen goods, possession of a controlled substance) and 3 (theft, breaking and entering, arson, prostitution). Violent crimes included those coded at severity levels 4 (second-degree assault, assault with a deadly weapon, domestic violence, robbery) and 5 (murder, aggravated assault, rape). As such, and consistent with the U.S. Office of Justice Programs definitions (U.S. Office of Justice Programs, 2015), violent crimes represented crimes directed towards people that used force or the threat of force to cause serious harm, and nonviolent crimes represented crimes that did not involve a threat of harm or attack upon a victim. The total numbers of lifetime arrests for nonviolent and violent crimes were tabulated and used as the outcome variables. --- Procedures In the spring of children's fifth grade year, research assistants delivered measures to teachers, who then completed them. Parents and youth were interviewed at home in the summer following children's fifth, seventh, and eighth grade years; parents provided informed consent and youth provided assent. Parent interviews were conducted by research assistants who read through the questionnaires and recorded responses. Youth interviews were conducted using computer-administered processes, in which youth completed questionnaires on the computer while listening to the questions via headphones. Prior to all assessments, research assistants were trained in questionnaire administration and all assessment procedures.
Financial compensation for study participation was provided to teachers, parents, and children. All study procedures complied with the ethical standards of the American Psychological Association and were approved by the Institutional Review Board of the Pennsylvania State University (#103909). --- Plan of Analysis Data analyses proceeded in three stages. First, correlations were run to provide descriptive analyses and demonstrate the simple associations among the study variables. Then, a measurement model was evaluated to determine the fit of the data to represent five latent constructs (childhood social-emotional dysfunction, early adolescent parent detachment, early adolescent deviant peer affiliation, adult nonviolent crime, adult violent crime). Finally, structural equation models were used to test the study hypotheses. Statistical power analysis, using the Preacher and Coffman (2006) method, indicated a power of 1.0, reflecting high power for detecting poor model fit. --- Results --- Descriptive Analyses and Correlations The means, standard deviations, and ranges for all study variables are shown in Table 1. Tests for sex differences demonstrated that, compared to girls, boys had significantly higher levels of aggression, emotion dysregulation, parent-rated child secrecy, parent-rated poor parent communication, first best friend's antisocial behavior (7th and 8th grade), second best friend's antisocial behavior (8th grade), and nonviolent and violent crime (for all four severity levels). Correlations among measures of childhood social-emotional dysfunction, parent detachment, and peer deviancy are shown in Table 2. Measures representing the latent constructs used in this study were significantly inter-correlated, ranging from r = .27 to r = .64 (child social-emotional functioning), r = .29 to r = .72 (parent detachment), and r = .30 to r = .61 (peer deviancy). Measures of child social-emotional dysfunction were significantly correlated with all measures of parent detachment, ranging from r = .14 to r = .32, and with most measures of peer deviancy, ranging from r = .06 to r = .20. Most correlations between parent detachment and peer deviancy were significant, ranging from r = .06 to r = .22. Correlations between the childhood and adolescent risk factors and adult crime are shown in Table 3. Child aggression and emotion dysregulation significantly predicted all levels of nonviolent and violent crime (range r = .16 to r = .29). Social isolation significantly predicted only violent crime (severity levels 4 and 5, rs = .09 and .08, respectively). Peer deviancy predicted adult nonviolent crime (severity levels 2 and 3, range r = .09 to r = .28) but not violent crime. Parent detachment showed a mixed pattern of significant and non-significant associations with adult crime (range r = .01 to r = .20). These correlations confirm anticipated links between the risk factors and adult crime, with childhood aggression and emotion dysregulation predicting both nonviolent and violent crime, social isolation predicting only violent crime, and peer deviancy and parent detachment predicting primarily nonviolent crime.
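Before turning to the structural models, the power figure reported in the Plan of Analysis can be made concrete. The Preacher and Coffman (2006) utility computes power for RMSEA-based tests of model fit; the sketch below reproduces that style of calculation under assumed null and alternative RMSEA values (.05 and .08) and an alpha of .05, none of which are parameters reported by the study.

from scipy.stats import ncx2

def rmsea_power(df, n, rmsea0=0.05, rmsea1=0.08, alpha=0.05):
    """Approximate power to reject close fit (RMSEA = rmsea0) when true misfit is rmsea1."""
    nc0 = (n - 1) * df * rmsea0 ** 2      # noncentrality under the null RMSEA
    nc1 = (n - 1) * df * rmsea1 ** 2      # noncentrality under the alternative RMSEA
    crit = ncx2.ppf(1 - alpha, df, nc0)   # critical chi-square value at alpha
    return ncx2.sf(crit, df, nc1)         # probability of exceeding it under the alternative

# With this study's sample size and the structural model's degrees of freedom,
# power is effectively 1, consistent with the value reported above.
print(round(rmsea_power(df=78, n=754), 4))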
--- Structural Equation Models Next, a measurement model was estimated, with five latent constructs: 1) childhood social-emotional dysfunction (parent and teacher ratings of aggression, emotion dysregulation, and social isolation), 2) early adolescent parent detachment (parent and youth ratings of parent unreceptiveness, poor parent communication, and child secrecy), 3) early adolescent deviant peer affiliation (youth ratings of best friends' deviant behavior), 4) early adult nonviolent crime (severity levels 2 and 3), and 5) early adult violent crime (severity levels 4 and 5). Model fit indices indicated that the predicted relations among observed measures and latent constructs did an acceptable job of representing patterns in the data, χ2(df = 76) = 180.79, p < .001, relative χ2 = 2.38, CFI = .96, RMSEA = .043, 90% CI [.035, .051]. Even though a non-significant χ2 is preferred, this is rare in large samples, and the relative χ2 and other fit indices indicate an adequate fit (see Figure 1). The structural equation model compared the predictive links between child social-emotional dysfunction, early adolescent parent detachment and peer deviancy, and early adult violent and nonviolent crime when examined together in the same model. The overall fit of the structural model was satisfactory, χ2(df = 78) = 279.51, p < .001, relative χ2 = 3.58, CFI = .92, RMSEA = .059, 90% CI [.051, .066]. As shown in Figure 2, child social-emotional dysfunction in late childhood made significant unique contributions to parent detachment and deviant peer affiliation in early adolescence, as well as significant unique contributions to nonviolent and violent crime in early adulthood, with the strongest contribution to violent crime (β = .48). Deviant peer affiliation in early adolescence made significant unique contributions to nonviolent, but not violent, crime. Parent detachment did not show unique significant associations with nonviolent or violent crime. --- Discussion Despite the many serious consequences associated with violent crime, limited research exists on risk factors that uniquely predict violent versus nonviolent crime. In the present study, different pathways to violent and nonviolent crime emerged. The severity of child social-emotional dysfunction (aggression, emotion dysregulation, social isolation) was a powerful and direct predictor of violent crime. Although child dysfunction also predicted a direct pathway to nonviolent crime, the variance accounted for was approximately half the variance accounted for in violent crime. Significant indirect pathways through peer deviancy emerged for nonviolent, but not violent, crime, suggesting that this adolescent socialization process plays a more distinctive role in shaping nonviolent than violent crime when both are considered together. Despite significant associations between parent detachment and nonviolent crime, when considered with the other child and adolescent factors, no significant unique pathway emerged. --- Predicting Violent Crime In this study, risk for future violent crime was indicated by a childhood profile that included emotional and social dysfunction, as well as aggressive behavior. As children, individuals who later became violent criminals were aggressive (fighting, physically attacking others, destroying others' things) and interpersonally hostile (teasing, threatening others).
They were also frequently angry and volatile emotionally (difficulties tolerating frustration, calming down when upset, and controlling anger), and socially isolated, reflecting social discomfort (prefers to be alone, shy) and social demoralization (sulks, unhappy). The results are consistent with studies showing robust associations between later violent offending and both childhood aggression (Broidy et al., 2003; Lai et al., 2015) and childhood emotional dysregulation and social isolation (Hawkins et al., 2000; Henry et al., 1996). In addition, by demonstrating the coherence and predictability of a childhood latent factor of social-emotional dysfunction, the present findings extend prior research by suggesting that the behavioral, emotional, and social difficulties experienced by these vulnerable children need to be considered together, and their developmental interplay understood. It is well-established that children who grow up in contexts characterized by high levels of exposure to conflict and violence are more likely to display aggression and develop antisocial behavior than children growing up in more protected environments (Dodge et al., 2008). Largely, this has been explained by social learning and social control theories that emphasize the role that parents and peers play in modeling, normalizing, and reinforcing aggression (Dishion, 2014; Loeber et al., 2009). Recent research has also highlighted the way in which chronic stress associated with violence exposure can negatively impact developing neural systems that affect emotional functioning and support self-regulation (Blair & Raver, 2012). Exposure to environments with high levels of conflict and violence may both teach aggressive behavior and undermine the development of emotion regulation, empathy, and self-control. The result may be a transactional process in which emotion dysregulation, aggressive behavior, and social alienation interact over time to increase the propensity for violence (Vitaro et al., 2002). For example, when frustrated or disappointed, emotionally dysregulated children are less able to modulate their feelings of anger or inhibit their aggressive impulses. Consequently, they are prone to react aggressively when upset, eliciting negative reactions from others, limiting opportunities for positive social interactions, and exacerbating feelings of social alienation (Bierman, 2004; Dodge et al., 2008). This is the first long-term predictive study to document a unique link between these childhood characteristics and later violence, distinguished from nonviolent crime. --- Predicting Nonviolent Crime Nonviolent crime in early adulthood was predicted by elevated child social-emotional dysfunction; however, in contrast to violent crime, the direct pathway between child dysfunction and nonviolent crime was smaller and was accompanied by indirect pathways that included deviant peer affiliation. The findings support a cascade model in which childhood social-emotional dysfunction increases risk for peer deviance in early adolescence, which, in turn, increases risk for initiation of crime (Dishion, 2014). The present findings also extend the existing literature, suggesting that deviant peer affiliation predicts primarily nonviolent (rather than violent) crime when both are modeled together (Bernburg & Thorlindsson, 1999; Veltri et al., 2014).
Relatedly, the findings suggest that social control models emphasizing the influence of deviant norms reinforced by antisocial friends (Bernburg & Thorlindsson, 1999) may explain more of the variance in nonviolent than violent crime. This may be in part because deviant peers often endorse rule-breaking behavior, motivated by self-gain, but less often endorse interpersonal violence, which involves a more radical dismissal of social mores with potentially deleterious effects on group cohesion (Bernburg & Thorlindsson, 1999). In the present study, parent detachment was correlated with deviant peer affiliation and adult crime; however, in the structural model, parent detachment made no unique contribution to crime. This suggests that parent detachment alone does not increase risk for engagement in nonviolent crime. --- Limitations Several limitations of the current study warrant consideration. First, although the use of the current at-risk sample conferred many advantages by providing rich data on childhood and adolescent risks and adult crime, the sample was not nationally representative. The extent to which the current findings can be generalized to normative populations is not clear. The sample was selected from at-risk communities characterized by elevated rates of poverty and crime which may have heightened the capacity to predict future crime; prediction may be more difficult in communities with lower base rates of crime (Lochman & CPPRG, 1995). Second, although the study utilized several widely-used measures, the parent detachment measure was adapted for the present study and was based on parent and child ratings; a validated observational index of parent-child communication would have strengthened the assessment model. Third, only two indices of adolescent social experiences were assessed in this study (parent detachment, deviant peer affiliation), and other indices may have shown additional effects on crime outcomes. Relatedly, although the assessments in seventh and eighth grade captured risk during the transition to adolescence, it is possible that assessments in later adolescence and more proximal to early adulthood might have yielded somewhat different findings. Still, the study of risk factors in early adolescence is likely to be most informative for early intervention efforts targeting the prevention of criminal behavior. --- Clinical Implications The findings suggest that the developmental roots of violent crime may be evident by the end of childhood, that children at high risk for later violence might be identified by late childhood, and that interventions designed to reduce violent crime may be more powerful when they start in childhood. The current findings also suggest that preventive interventions would benefit by focusing concurrently on addressing the emotional and social difficulties of children at high risk, as well as their high levels of aggressive behavior. In contrast, the study findings suggest that prevention efforts targeting nonviolent crime may require particular attention to adolescent social experiences, particularly deviant peer affiliation during early adolescence. Fostering stronger parent-youth communication bonds and structuring free time to reduce opportunities for unstructured deviant peer activity in early adolescence may help in the prevention of nonviolent crime. 
Yet, given this study's findings of differential patterns of associations between adolescent social experiences and type of adult crime, it is likely that prevention efforts targeting parent-youth bonding and communication and peer affiliations in adolescence alone will have less impact on the reduction of violent crime. --- Strengths and Future Directions To date, little longitudinal research has examined the relative roles of child and adolescent risk factors in the unique pathways to violent and nonviolent crime. The current study, with its assessment of risk across two distinct developmental time periods, afforded a unique opportunity to explore the comparative roles of childhood social-emotional dysfunction and early adolescent risk in the development of violent and nonviolent crime. The findings suggest distinct as well as shared developmental pathways (Nagin & Tremblay, 1999), and challenge conceptual frameworks asserting the generality of all forms of criminal behavior. The implications are that deviant peer affiliation in adolescence contributes primarily to nonviolent crime. In contrast, child social-emotional development appears key in the pathway to violent crime. These findings parallel the differential predictors of overt aggression versus covert rule-breaking behavior in childhood and adolescence (Burt, 2012) and suggest potential continuity into differential patterns of adult crime. Given the limited research examining differential prediction of nonviolent and violent crime, and the serious consequences of violent crime, further investigation of pathways to violent crime is warranted. This research should examine risk factors across different developmental periods, include markers of social and emotional functioning, as well as aggressive and antisocial behavior, and explore potential mechanisms of transmission. Figure 3 caption (online appendix): Selecting High-Risk and Normative Samples. (a) Across three sequential years (cohorts 1-3), children were eligible for the high-risk sample based on elevated teacher and parent screens, without regard for sex or race. Assignment to intervention or control group was based on the school they attended in first grade. (b) Children were eligible for the normative sample only if they were in cohort 1 (not cohort 2 or 3) and if they attended a control school (not an intervention school). Eligible children were stratified by sex and race to represent the school population and then randomly selected from those eligible. --- Supplementary Material Refer to Web version on PubMed Central for supplementary material.
While most research on the development of antisocial and criminal behavior has considered nonviolent and violent crime together, some evidence points to differential risk factors for these separate types of crime. The present study explored differential risk for nonviolent and violent crime by investigating the longitudinal associations between three key child risk factors (aggression, emotion dysregulation, and social isolation) and two key adolescent risk factors (parent detachment and deviant peer affiliation) predicting violent and nonviolent crime outcomes in early adulthood. Data on 754 participants (46% African American, 50% European American, 4% other; 58% male) oversampled for aggressive-disruptive behavior were collected across three time points. Parents and teachers rated aggression, emotion dysregulation, and social isolation in fifth grade (middle childhood, age 10-11); parents and youth rated parent detachment and deviant peer affiliation in seventh and eighth grade (early adolescence, age 12-14) and arrest data was collected when participants were 22-23 years old (early adulthood). Different pathways to violent and nonviolent crime emerged. The severity of child dysfunction in late childhood, including aggression, emotion dysregulation, and social isolation, was a powerful and direct predictor of violent crime. Although child dysfunction also predicted nonviolent crime, the direct pathway accounted for half as much variance as the direct pathway to violent crime. Significant indirect pathways through adolescent socialization experiences (peer deviancy) emerged for nonviolent crime, but not for violent crime, suggesting adolescent socialization plays a more distinctive role in predicting nonviolent than violent crime. The clinical implications of these findings are discussed.
Background Child maltreatment, including physical abuse, sexual abuse, emotional abuse, and neglect impacts a significant number of children worldwide [1][2][3]. For example, a survey involving a nationally representative sample of American children selected using telephone numbers from 2013 to 2014 found that lifetime rates of maltreatment for children aged 14 to 17 was 18.1% for physical abuse, 23.9% for emotional abuse, 18.4% for neglect, and 14.3% and 6.0% for sexual abuse of girls and boys respectively [4]. Child maltreatment is associated with many physical, emotional, and relationship consequences across the lifespan, such as developmental delay first seen in infancy; anxiety and mood disorder symptoms and poor peer relationships first seen in childhood; substance use and other risky behaviours often first seen in adolescence; and increased risk for personality and psychiatric disorders, relationship problems, and maltreatment of one's own children in adulthood [5][6][7][8][9]. Given the high prevalence and serious potential negative consequences of child maltreatment, clinicians need to be informed about strategies to accurately identify children potentially exposed to maltreatment, a task that "can be one of the most challenging and difficult responsibilities for the pediatrician" [10]. Two main strategies for identification of maltreatment-screening and case-finding-are often compared to one another in the literature [11,12]. Screening involves administering a standard set of questions, or applying a standard set of criteria, to assess for the suspicion of child maltreatment in all presenting children ("mass screening") or high-risk groups of children ("selective screening"). Case-finding, alternatively, involves providers being alert to the signs and symptoms of child maltreatment and assessing for potential maltreatment exposure in a way that is tailored to the unique circumstances of the child. A previous systematic review by Bailhache et al. [13] summarized "evidence on the accuracy of instruments for identifying abused children during any stage of child maltreatment evolution before their death, and to assess if any might be adapted to screening, that is if accurate screening instruments were available." The authors reviewed 13 studies addressing the identification of physical abuse (7 studies), sexual abuse (4 studies), emotional abuse (1 study), and multiple forms of child maltreatment (1 study). The authors noted in their discussion that the tools were not suitable for screening, as they either identified children too late (i.e., children were already suffering from serious consequences of maltreatment) or the performance of the tests was not adaptable to screening, due to low sensitivity and specificity of the tools [13]. This review builds upon the work of Bailhache et al. [13] and performs a systematic review with the objective of assessing evidence about the accuracy of instruments for identifying children suspected of having been exposed to maltreatment (neglect, as well as physical, sexual abuse, emotional abuse). Similar to the review by Bailhache et al. [13], we investigate both screening tools and other identification tools or strategies that could be adapted into screening tools. In addition to reviewing the sensitivity and specificity of instruments, as was done by Bailhache et al. 
[13], for five studies, we have also calculated estimates of false positives and negatives per 100 children, a calculation which can assist providers in making decisions about the use of an instrument [14]. This review contributes to an important policy debate about the benefits and limitations of using standardized tools (versus case-finding) to identify children exposed to maltreatment. This debate has become increasingly salient with the publication of screening tools for adverse childhood experiences, or tools that address child maltreatment alongside other adverse experiences [15,16]. It should be noted here that while "screening" typically implies identifying health problems, screening for child maltreatment is different in that it usually involves identifying risk factors or high-risk groups. As such, while studies evaluating tools that assist with identification of child maltreatment are typically referred to as diagnostic accuracy studies [17], the word "diagnosis" is potentially misleading. Instead, screening tools for child maltreatment typically codify several risk and clinical indicators of child maltreatment (e.g., caregiver delay in seeking medical attention without adequate explanation). As such, they may more correctly be referred to as tools that identify potential maltreatment, or signs, symptoms and risk factors that have a strong association with maltreatment and may lead providers to consider maltreatment as one possible explanation for the sign, symptom, or risk factor. Assessment by a health care provider should then include consideration of whether there is reason to suspect child maltreatment. If maltreatment is suspected, this would lead to a report to child protection services (CPS) in jurisdictions with mandatory reporting obligations (e.g., Canada, United States) or to child social services for those jurisdictions bound by occupational policy documents (e.g., United Kingdom) [18]. Confirmation or verification of maltreatment would then occur through an investigation by CPS or a local authority; they, in turn, may seek consultation from one or more health care providers with specific expertise in child maltreatment. Therefore, throughout this review we will refer to identification tools as those that aid in the identification of potential child maltreatment. --- Methods A protocol for this review is registered with the online systematic review register, PROSPERO (PROSPERO 2016:CRD42016039659) and study results are reported according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) checklist (see supplemental file 1). As the review by Bailhache et al. [13] considered any English or French materials published between 1961 and April 2012 (only Englishlanguage materials were retrieved from their search), we searched for English-language materials published between 2012 and July 2, 2019 (when the search was conducted). Additional inclusion criteria are found in Table 1. Inclusion criteria for this review were matched to those in Bailhache et al.'s [13] review. We included diagnostic accuracy studies [17] that 1) evaluated a group of children by a test, examination or other procedure (hereafter referred to as the index test) designed to identify children potentially exposed to maltreatment and also 2) evaluated the same group of children (or ideally a random subsample) by a reference standard (acceptable reference standards are listed in Table 1) that confirmed or denied exposure to potential maltreatment. 
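For reference, the accuracy outcomes discussed throughout this review follow from the standard cross-classification of index test result against reference standard, where TP, FP, FN, and TN denote true positives, false positives, false negatives, and true negatives:

\[
\text{sensitivity} = \frac{TP}{TP + FN}, \qquad \text{specificity} = \frac{TN}{TN + FP},
\]
\[
\text{positive predictive value} = \frac{TP}{TP + FP}, \qquad \text{negative predictive value} = \frac{TN}{TN + FN}.
\]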
We excluded articles that assessed psychometric properties of child maltreatment measures unless diagnostic data was available in the paper. The searches for the review update were conducted in seven databases: Medline, Embase, PsycINFO, Cumulative Index to Nursing and Allied Health Literature, Sociological Abstracts, the Education Resources Information Center, and Cochrane Libraries (see supplemental file 2 for example search). Forward and backward citation chaining was also conducted to complement the search. All articles identified by our searches were screened independently by two reviewers at the title and abstract and full-text level. An article suggested for inclusion by one screener was sufficient to forward it to full-text review. Any disagreements at full text stage were resolved by discussion. --- Data extraction and analysis For all included studies, one author extracted the following data: study design, the study's inclusion criteria, form of potential child maltreatment assessed, index tool, sample size, reference standard, and values corresponding to sensitivity and specificity. While our original protocol indicated that we would extract and analyze data about child outcomes (e.g., satisfaction, well-being), service outcomes (e.g., referral rates), and child wellbeing outcomes (e.g., internalizing symptoms, externalizing symptoms, suicidal ideation) from the studies (e.g., from randomized trials that evaluated screening versus another identification strategy and assessed associated outcomes), no such data were available. Extracted data were verified by a second author by cross-checking the results in all tables with data from the original articles. Disagreements were resolved by discussion. Sensitivity and specificity are "often misinterpreted and may not reflect well the effects expected in the population of interest" [14]. Other accuracy measures, such as false positives and false negatives, can be more helpful for making decisions about the use of an instrument [14], but determining them requires a reasonable estimate of prevalence in the intended sample (in this case of the exposure, child maltreatment) and in the intended setting (e.g., emergency department). Although there are no clear cut-off points for acceptable proportions of false negatives and positives, as acceptable cutoffs depend on the clinical setting and patient-specific factors, linking false positives and negatives to downstream consequences (e.g., proportion of children who will undergo a CPS investigation who should not or who miss being investigated) can assist practitioners in determining acceptable cut-offs for their practice setting. For those studies where prevalence estimates were available, sensitivity and specificity values were entered into GRADEpro software in order to calculate true/false positives/negatives per 100 children tested. This free, online software allows users to calculate true/false positives/negatives when users enter sensitivity and specificity values of the index test and an estimate of prevalence. In GRADEpro, true/false positives/negatives can be calculated across 100, 1000, 100,000, or 1,000,000 patients. We selected 100 patients as a total, as it allows easy conversion to percentage of children. We also give an example of true/false positives/negatives per 100,000 children tested, which is closer to a population estimate or numbers across several large, emergency departments. 
To calculate these values, two prevalence rates were used (2 and 10%) based on the range of prevalence of child maltreatment in emergency departments in three high-income country settings [20], as most of the identified screening tools addressed children in these settings. Use of these prevalence rates allows for a consistent comparison of true/false positives/negatives per 100 children across all applicable studies.
Table 1 (continued). Inclusion criteria:
3. Comparator (reference test). Studies had to have an acceptable reference standard, i.e., "expert assessments, such as child's court disposition; substantiation by the child protection services or other social services; assessment by a medical, social or judicial team using one or several information sources (caregivers or child interview, child symptoms, child physical examination, and other medical record review)" [13].
4. Outcomes. Studies had to assess one of the following outcomes: sensitivity, specificity, positive predictive value, or negative predictive value.
5. Study design. Studies need not include a comparison population (e.g., case series could be included if the intention was to evaluate one of the outcomes listed above).
Exclusion criteria:
1. Ineligible population. Studies that only addressed adults' or children's exposure to intimate partner violence.
2. Ineligible intervention (index test). Studies that identified a clinical indicator for child maltreatment, such as retinal hemorrhaging, but not child maltreatment itself, and tools that identified a different population (e.g., general failure to thrive, children's exposure to intimate partner violence).
3. Ineligible comparator (reference test). Studies that did not have an acceptable reference standard (e.g., parent reports of abuse were ineligible).
4. Ineligible outcomes. Studies that at minimum did not set out to evaluate at least one of the following accuracy outcomes: sensitivity, specificity, positive predictive value, negative predictive value.
5. Ineligible publication types. Studies published as abstracts were excluded, as not enough information was available to critically appraise the study design. Also excluded were studies published in non-article format, such as books or theses. The latter were excluded for pragmatic reasons, but recent research suggests that inclusion of these materials may have little impact on results [19].
For consistency and to enhance accuracy of calculations in GRADEpro of true/false positives/negatives proportions per 100, where possible, all sensitivity and specificity values and confidence intervals for the included studies were recalculated to six decimal places (confidence intervals were calculated as p ± 1.96 × √[p(1 − p)/n]). In GRADEpro, the formula for false positives is (1 − specificity) × (1 − prevalence) and the formula for false negatives is (1 − sensitivity) × prevalence. As the majority of studies differed in either a) included populations or b) applied index tests, we were unable to pool data statistically across the studies. Instead, we narratively synthesized the results by highlighting the similarities and differences in false positives/negatives across the included studies. For the population estimate, we modeled the effects of the SPUTOVAMO checklist for children with physical abuse or neglect on downstream consequences for children under 8 years of age presenting to the emergency department with any physical injury. We calculated true/false positives/negatives per 100,000 using the lower end of the prevalence range (2%) [20].
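The arithmetic behind these per-100 and per-100,000 tables can be reproduced directly from the formulas quoted above. The sketch below is illustrative only (it is not the GRADEpro software), and the example sensitivity, specificity, and sample size are placeholders rather than values from any included study.

def accuracy_per_n(sensitivity, specificity, prevalence, n=100):
    """Expected true/false positives/negatives per n children tested."""
    tp = sensitivity * prevalence * n
    fn = (1 - sensitivity) * prevalence * n
    fp = (1 - specificity) * (1 - prevalence) * n
    tn = specificity * (1 - prevalence) * n
    return {"TP": tp, "FN": fn, "FP": fp, "TN": tn}

def wald_ci(p, n):
    """95% confidence interval used in the review: p +/- 1.96 * sqrt(p(1 - p)/n)."""
    half_width = 1.96 * (p * (1 - p) / n) ** 0.5
    return p - half_width, p + half_width

# Example: a hypothetical tool with sensitivity .90 and specificity .87 at 2% prevalence
print(accuracy_per_n(0.90, 0.87, 0.02))   # about 1.8 TP, 0.2 FN, 12.7 FP, 85.3 TN per 100 children
print(wald_ci(0.90, 125))                 # interval for a sensitivity estimated from 125 children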
Based on American estimates, we assumed that 17% of children who are reported to child welfare are considered to have substantiated maltreatment and among children with substantiated maltreatment, 62% may receive post-investigation services [21]. We also modeled downstream consequences of false negatives, based on an estimate that 25 to 50% of children who are exposed to maltreatment need services for mental health symptoms [22]. We modeled consequences of false positives by assuming that all suspicions lead to reports which lead to CPS investigations. --- Critical appraisal One author critically appraised each study using the QUADAS-2 tool [23] and all data were checked by a second author, with differences resolved through consensus. The QUADAS-2 tool evaluates risk of bias related to a) patient selection, b) index test, c) reference standard, and d) flow and timing. Questions related to "applicability" in QUADAS-2 were not answered because they overlap with questions involved in the GRADE process [17]. As the developers of QUADAS-2 note [23], an overall rating of "low" risk of bias is only possible when all domains are assessed as low risk of bias. An answer of "no" to any of the questions indicates that both the domain (e.g., "patient selection") and the overall risk of bias for the study is high. In this review, a study was rated as "high" risk of bias if one or more domains was ranked as high risk of bias, a study was ranked as "low" risk of bias when all domains were rated as low risk of bias and a study was ranked as "unclear" risk of bias otherwise (i.e., when the study had one or more domains ranked as "unclear" risk of bias and no domains ranked as "high" risk of bias). --- Grading of recommendations, assessment, development and evaluation (GRADE) Evidence was assessed using GRADE [17]. GRADE rates our certainty that the effect we present is close to the true effect; the certainty that the effect we present is close to the true effect is rated as high, moderate, low or very low certainty. A GRADE rating is based on an assessment of five domains: (1) risk of bias (limitations in study designs); ( 2) inconsistency (heterogeneity) in the direction and/or size of the estimates of effect; (3) indirectness of the body of evidence to the populations, interventions, comparators and/or outcomes of interest; (4) imprecision of results (few participants/events/observations, wide confidence intervals); and (5) indications of reporting or publication bias. For studies evaluating identification tools and strategies, a body of evidence starting off with cross-sectional accuracy studies is considered "high" certainty and then is rated down to moderate, low, or very low certainty based on the five factors listed above. --- Results The updated search and citation chaining retrieved 3943 records; after de-duplication, 1965 titles and abstracts were screened for inclusion (see Fig. 1). From this set of results, 93 full-text articles were reviewed for inclusion, of which 19 new articles (representing 18 studies) were included [24][25][26][27][28][29][30][31][32][33][34][35][36][37][38][39][40][41][42]. In addition, the 13 studies evaluated in the Bailhache et al. review [43][44][45][46][47][48][49][50][51][52][53][54][55] were included in this review update, for a total of 32 articles (31 studies). --- Study characteristics Overall, we did not find any studies that measured important health outcomes after the use of a screening tool or other instrument. 
Instead, the included tools and strategies provided accuracy estimates for a range of maltreatment types (see supplemental file 3 for study characteristics), including multiple types of maltreatment (6 studies); medical child maltreatment (also known as caregiver fabricated illness in a child, factitious disorder imposed on another, and Munchausen syndrome by proxy; 1 study); sexual abuse (7 studies), including child sex trafficking (3 studies); emotional abuse (1 study); and physical abuse (18 studies), including abusive head trauma (11 studies). --- Risk of bias and GRADE assessment of included studies One study was rated as having an unclear risk of bias and all remaining studies were rated as high risk of bias, with 23 studies (72%) having high risk of bias across two or more domains (see supplemental file 4 for critical appraisal rankings). A number of studies used very narrow age ranges to test their index test, representing potentially inappropriate exclusions for the purpose of studying identification strategies. For example, while very young children (under 5 years of age) are at greatest risk of serious impairment and death from physical abuse including abusive head trauma, rates of non-fatal physical abuse peak between 3 and 12 years [56]. Ideally, index tests that seek to identify potential physical abuse should address all children who are legally entitled to protection (or, at a minimum, address children up to 12 years of age). A number of studies did not apply the reference standard to all children and instead only applied it to a subset of children who were positively identified by the index test or some other method, which can lead to serious verification bias (i.e., no data for the number of potentially maltreated children missed). For example, the reference standard was applied to only 55/18,275 (0.3%) of the children in the study by Louwers et al. [26]. Only Sittig et al. [27], in a study assessing one of the recently published screening tests, applied the reference standard to a random sample of 15% of the children who received a negative screen by the index test, thereby reducing the potential for serious verification bias. A few studies also used the index test as part of the reference standard, which can lead to serious incorporation bias. For example, Greenbaum et al. [37] noted that the 6-item child sex trafficking screening questions were "embedded within the 17-item questionnaire," which was used by the reference standard (health care providers) to determine if child sex trafficking potentially occurred. Using the GRADE approach to evaluate the certainty of evidence, the included studies started at high certainty as all but six studies were cross-sectional studies. The evidence was rated down due to very serious concerns for risk of bias (making the evidence "low" certainty) and further rated down for imprecision (making the evidence "very low" certainty). --- General accuracy Table 2 reports sensitivity and specificity rates for each study. Studies are organized according to child maltreatment type (multiple types of maltreatment, medical child maltreatment, sexual abuse, child sex trafficking, emotional abuse, physical abuse and neglect, and abusive head trauma). The type of child maltreatment assessed by each tool is specified, as is the name of the identification strategy. In addition to the studies previously reviewed by Bailhache et al.
[13], this systematic review update identified three screening tools, as well as an identification tool for medical child maltreatment, "triggers" embedded in an electronic medical record, four clinical prediction tools, and two predictive symptoms of abusive head trauma. False positive/negative values are reported only for the studies using screening tools with samples where the prevalence of child maltreatment could be estimated; all values for the studies identified in the Bailhache et al. [13] review are available in Table 2. --- Screening instruments Three screening instruments were identified in this systematic review update: 1) the SPUTOVAMO checklist, 2) the Escape instrument, and 3) a 6-item screening questionnaire for child sex trafficking. The SPUTOVAMO checklist [24,27,28,42] is a screening instrument that determines whether there is a suspicion of child maltreatment via a positive answer to one or more of five questions (e.g., injury compatible with history and corresponding with age of child?). Its use is mandatory in Dutch emergency departments and "out-of-hours" primary care locations. Two studies [24,42] evaluated if the SPUTOVAMO checklist could detect potential physical abuse, sexual abuse, emotional abuse, neglect, or exposure to intimate partner violence in children under 18 years of age presenting to either out-of-hours primary care locations [24] or an emergency department [42] in the Netherlands. Two separate studies reported on the use of the SPUTOVAMO checklist to assess for potential exposure to physical abuse in children under 8 years of age presenting to the emergency department with a physical injury [27] or children under 18 years of age presenting to a burn centre with burn injuries [28]. Two studies evaluated the Escape instrument [25,26], a screening instrument very similar in content and structure to the SPUTOVAMO checklist. The Escape instrument involves five questions (e.g., is the history consistent?) that are used to assess for potential physical abuse, sexual abuse, emotional abuse, neglect, and exposure to intimate partner violence in children under 16 years of age [25] or 18 years of age [26] presenting to an emergency department. Three studies [36,37,39] reported on use of a 6-item screening questionnaire for child sex trafficking, where an answer to two or more questions (e.g., Has the youth ever run away from home?) indicated suspicion of a child being exposed to sex trafficking. The studies tested the screening questionnaire in children of a similar age group (10,11, or 12 to 18 years of age) presenting to emergency departments [36,37,39], child advocacy centres or teen clinics [37]. Five studies [24][25][26][27]42] had samples where the prevalence of child maltreatment could be estimated. In other words, each study's included sample was similar enough (e.g., children less than 18 years presenting to the emergency department) to match 2% to 10% prevalence estimates found in emergency departments [20]. As shown in Table 3, the Sittig et al. [27] study, which evaluated the SPUTOVAMO checklist, found that per 100 children tested, 0 potentially physically abused children were missed and 0 to 2 potentially neglected children were missed. Twelve to 13 children were falsely identified as potentially physically abused or neglected. 
The other studies suffered from verification or incorporation bias, leading to sensitivity estimates that are too high (underestimating false negatives) and specificity estimates that are too high (underestimating false positives). These studies [24][25][26][42] found that per 100 children tested, 0 to 9 potentially maltreated children were missed and 2 to 69 children were falsely identified as potentially maltreated. For the studies that evaluated the SPUTOVAMO checklist specifically [24,42], 0 to 9 potentially maltreated children were missed and 2 to 69 children were falsely identified as potentially maltreated. For the studies that evaluated the Escape tool [25,26], 0 to 2 children were missed and 2 children were falsely identified as potentially maltreated. --- Modelling service outcomes of the SPUTOVAMO checklist for physical abuse or neglect based on a population estimate After using a screening tool, children will receive some type of service depending on the results. We modelled what would happen to children after the use of the SPUTOVAMO checklist on a population level per 100,000 children (see supplemental file 5 for modelling using the Escape instrument). When using the SPUTOVAMO checklist, providers may correctly identify 2000 children potentially exposed to physical abuse and 1666 potentially exposed to neglect. American estimates [21] suggest 17% of children who are reported to child welfare are substantiated and 62% of substantiated children receive post-investigation services. Using these estimates, this means that some form of post-investigative services may be received by 211 children with substantiated physical abuse and 176 children with substantiated neglect. No children exposed to potential physical abuse and 334 children who have been exposed to potential neglect would be missed. Since an estimated 25 to 50% of children who are exposed to maltreatment need services for mental health symptoms [21], at least 84 children potentially exposed to neglect (using the lower 25% estimate) would not be referred for the mental health services they need. In addition, we calculated that 13,230 children would be misidentified as potentially physically abused and 13,034 children would be misidentified as potentially neglected. Although these children would likely receive an assessment by a qualified physician that would determine they had not experienced maltreatment, all of these children could undergo a stressful and unwarranted child protection services investigation. --- Medical child maltreatment instrument Greiner et al. [31] evaluated a "medical child maltreatment" instrument (also known as caregiver fabricated illness in a child [57] or factitious disorder imposed on another [58]), where a positive answer to four or more of the 15 questions indicated suspicion of medical child maltreatment (e.g., caregiver has features of Munchausen syndrome (multiple diagnoses, surgeries, and hospitalizations, with no specific diagnosis)). --- Triggers in an electronic medical record Berger et al. [35] evaluated "triggers" added to an electronic medical record to help identify children under 2 years of age at risk for physical abuse (e.g., a "yes" response to "Is there concern for abuse or neglect?" in the pre-arrival documentation by a nurse; documentation of "assault" or "SCAN" as the chief complaint). This study suffers from serious verification bias, since only abused children and a small, non-random sample (n = 210) were evaluated by the reference standard.
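As a concrete check on the population-level modelling of the SPUTOVAMO checklist reported above, the sketch below reproduces the downstream-service arithmetic. The sensitivity and specificity values are rough approximations back-calculated from the counts reported in that modelling (they are not taken directly from the Sittig et al. study), while the substantiation (17%), post-investigation service (62%), and mental health need (25%) rates are the American estimates cited earlier.

def model_downstream(sensitivity, specificity, prevalence=0.02, population=100_000,
                     substantiated=0.17, post_investigation_services=0.62, mh_need=0.25):
    """Rough downstream-service model for a maltreatment screen applied to a population."""
    exposed = prevalence * population
    true_positives = sensitivity * exposed                       # correctly flagged children
    missed = (1 - sensitivity) * exposed                         # false negatives
    false_positives = (1 - specificity) * (population - exposed) # children flagged in error
    served = true_positives * substantiated * post_investigation_services
    unmet_mh_need = missed * mh_need                             # missed children needing mental health care
    return dict(flagged=round(true_positives), missed=round(missed),
                falsely_flagged=round(false_positives), served=round(served),
                unmet_mh_need=round(unmet_mh_need))

# Approximate values back-solved from the counts above:
# physical abuse (sens ~1.00, spec ~0.865) and neglect (sens ~0.833, spec ~0.867)
print(model_downstream(1.00, 0.865))
print(model_downstream(0.833, 0.867))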
--- Clinical prediction rules and predictive symptoms Five studies (published in six articles) evaluated four clinical prediction tools (Burns Risk Assessment for Neglect or Abuse Tool, Pediatric Brain Injury Research Network clinical prediction rule, Predicting Abusive Head Trauma, and Hymel's 4-, 5-, or 7-variable prediction models). Kemp et al. [40] investigated the Burns Risk Assessment for Neglect or Abuse Tool, a clinical prediction rule to assist with the recognition of suspected maltreatment, especially physical abuse or neglect. Hymel et al. evaluated a five-variable clinical prediction rule (derivation study) [34] and a four-variable clinical prediction rule (validation study) [33] in identifying potential abusive head trauma in children less than 3 years of age who were admitted to the post-intensive care unit for management of intracranial injuries. An additional article by Hymel et al. [38] combined the study populations in the derivation and validation studies in order to evaluate a seven-variable clinical prediction rule in identifying potential abusive head trauma. The seven-variable clinical prediction rule used seven indicators to predict potential abusive head trauma (e.g., any clinically significant respiratory compromise at the scene of injury, during transport, in the emergency department, or prior to admission). Pfeiffer et al. [41] evaluated the Pediatric Brain Injury Research Network clinical prediction rule. This clinical prediction rule evaluated the likelihood of abusive head trauma in acutely head-injured children under 3 years of age admitted to the post-intensive care unit. The authors recommended that children who presented with one or more of the following four predictor variables should be evaluated for abuse (respiratory compromise before admission; any bruising involving ears, neck, and torso; any subdural hemorrhages and/or fluid collections that are bilateral or interhemispheric; any skull fractures other than an isolated, unilateral, nondiastatic, linear parietal skull fracture). Two studies evaluated different predictive symptoms of abusive head trauma (parenchymal brain lacerations and hematocrit levels ≤30% on presentation). Palifika et al. [29] examined the frequency of lacerations in children less than 3 years of age who had abusive head trauma (as determined by the institutional child abuse team) compared with accidentally injured children with moderate-to-severe traumatic brain injury. For children under 5 years of age who were admitted to one of two level-one pediatric trauma centres with a diagnosis of traumatic brain injury, Acker et al. [32] identified hematocrit values of 30% or less as a finding that should prompt further investigation for potential abusive head trauma. --- Discussion This review updates and expands upon the systematic review published by Bailhache et al. [13] and was conducted to evaluate the effectiveness of strategies for identifying potential child maltreatment. Since the publication of Bailhache et al.'s [13] systematic review, there have been 18 additional studies published.
The included studies reported the sensitivity and specificity of three screening tools (the SPUTOVAMO checklist, the Escape instrument, and a 6-item screening questionnaire for child sex trafficking), as well as the accuracy of an identification tool for medical child maltreatment, "triggers" embedded in an electronic medical record, four clinical prediction tools (Burns Risk Assessment for Neglect or Abuse Tool, Pediatric Brain Injury Research Network clinical prediction rule, Predicting Abusive Head Trauma, and Hymel's 4-, 5-, or 7-variable prediction models), and two predictive symptoms of abusive head trauma (parenchymal brain lacerations and hematocrit levels ≤30% on presentation). As the Bailhache et al. [13] systematic review identified no screening tools, the creation of the SPUTOVAMO checklist, Escape instrument, and 6-item child sex trafficking screening questionnaire represents a notable development since their publication. The recent creation of an identification tool for child sex trafficking also reflects current efforts to recognize and respond effectively to this increasingly prevalent exposure. Aside from these new developments, many of the other points discussed by Bailhache et al. [13] were confirmed in this update: it is still difficult to assess the accuracy of instruments to identify potential child maltreatment as there is no gold standard for identifying child maltreatment; what constitutes "maltreatment" still varies somewhat, as do the behaviours that are considered abusive or neglectful (e.g., we have excluded children's exposure to intimate partner violence, which is increasingly considered a type of maltreatment); and it is still challenging to identify children early in the evolution of maltreatment (many of the identification tools discussed in this review are not intended to identify children early and, as such, children are already experiencing significant consequences of maltreatment). The studies included in this systematic review provide additional evidence that allows us to assess the effectiveness of strategies for identifying potential exposure to maltreatment. Based on the findings of this review (corresponding with the findings of Bailhache et al.'s [13] review), we found low certainty evidence and high numbers of false positives and negatives when instruments are used to screen for potential child maltreatment. Although no studies assessed the effect of screening tools on child well-being outcomes or recurrence rates, based on data about reporting and response rates [21,22], we can posit that children who are falsely identified as potentially maltreated by screening tools will likely receive a CPS investigation that could be distressing. Furthermore, maltreated children who are missed by screening tools will not receive or will have delayed access to the mental health services they need. We identified several published instruments that are not intended for use as screening tools, such as clinical prediction rules for abusive head trauma. Clinical prediction tools or rules, such as Hymel's variable prediction model, combine medical signs, symptoms, and other factors in order to predict diseases or exposures. While they may be useful for guiding clinicians' decision-making, and may be more accurate than clinical judgement alone [59], they are not intended for use as screening tools.
Instead, the tools "act as aids or prompts to clinicians to seek further clinical, social or forensic information and move towards a multidisciplinary child protection assessment should more information in support of AHT [abusive head trauma] arise" [41]. As all identification tools demand clinician time and energy, widespread implementation of any clinical prediction tool is not warranted until it has undergone three stages of testing: derivation (identifying factors that have predictive power), validation (demonstrating evidence of reproducible accuracy), and impact analysis (evidence that the clinical prediction tool changes clinician behaviour and improves patient-important outcomes) [60]. Similar to the findings of a recent systematic review on clinical prediction rules for abusive head trauma [41], in this review we did not find any clinical prediction rules that had undertaken an impact analysis. However, several recent studies have considered the impact of case identification via clinical prediction rules. This includes assessing whether the Predicting Abusive Head Trauma clinical prediction rule alters clinicians' abusive head trauma probability estimates [61], emergency clinicians' experience with using the Burns Risk Assessment for Neglect or Abuse Tool in an emergency department setting [62], and cost estimates for identification using the Pediatric Brain Injury Research Network clinical prediction rule as compared to assessment as usual [63]. Additional research on these clinical prediction rules may determine if such rules are more accurate than a clinician's intuitive estimation of risk factors for potential maltreatment or how the tool impacts patient-important outcomes. Many of the included studies had limitations in their designs, which lowered our confidence in their reported accuracy parameters. Limitations in this area are not uncommon. A recent systematic review by Saini et al. [64] assessed the methodological quality of studies assessing child abuse measurement instruments (primarily studies assessing psychometric properties). The authors found that "no instrument had adequate levels of evidence for all criteria, and no criteria were met by all instruments" [64]. Our review also resulted in similar findings to the original review by Bailhache et al. [13], in that 1) most studies did not report sufficient information to judge all criteria in the risk of bias tool; 2) most studies did not clearly blind the analysis of the reference standard from the index test (or the reverse); 3) some studies [26,36,37,39] included the index test as part of the reference standard (incorporation bias), which can overestimate the accuracy of the index test; and 4) some studies used a case-control design [29,31,36], which can overestimate the performance of the index test. A particular challenge, also noted by Bailhache et al. [13], was the quality of reporting in many of the included studies. Many articles failed to include clear contingency tables in reporting their results, making it challenging for readers to fully appreciate missing values and potentially inflated sensitivity and specificity rates. For example, one study evaluating the SPUTOVAMO checklist reported 7988 completed SPUTOVAMO checklists.
However, only a fraction of these completed checklists were evaluated by the reference standard (verification bias, discussed further below) (193/7988, 2.4%), and another reference standard (a local CPS agency) was used to evaluate an additional portion of SPUTOVAMO checklists (246/7988, 3.1%). In addition, the negative predictive and positive predictive value calculations were based on different confirmed cases. Ideally, missing data and indeterminate values should be reported [23]. Researchers have increasingly called for diagnostic accuracy studies to report indeterminate results as a sensitivity analysis [65]. Verification bias was a particular study design challenge in the screening studies identified in this review. For example, Dinpanah et al. [25] examined the accuracy of the Escape instrument, a five-question screener applied in emergency department settings, for identifying children potentially exposed to physical abuse, sexual abuse, emotional abuse, neglect, or intimate partner violence. The authors report a sensitivity and specificity of 100% and 98%, respectively. While the accuracy was high, their study suffered from serious verification bias, as only approximately 137 of 6120 children (2.2%), those suspected of having been maltreated, received the reference standard.
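As a rough, purely illustrative sketch of why such partial verification matters, the snippet below (our own construction, not drawn from the study) shows that any maltreated children among the thousands of unverified screen-negative children are invisible to the reported accuracy figures; the 1% figure assumed for the unverified group is hypothetical.

# Illustrative only: partial verification in the Escape-instrument example above.
# About 137 of 6120 screened children received the reference standard; the true
# status of the remaining children is unknown. The 1% figure below is hypothetical.

screened = 6120
verified = 137                        # children assessed with the reference standard
unverified = screened - verified      # 5983 children of unknown true status

reported_sensitivity = 1.00           # computed only on the verified subset
reported_specificity = 0.98
print(f"reported sensitivity/specificity (verified subset only): "
      f"{reported_sensitivity:.0%}/{reported_specificity:.0%}")

# If even 1% of the unverified, screen-negative children were in fact maltreated,
# they would be unrecognised false negatives:
hidden_false_negatives = unverified * 0.01
print(f"unverified children: {unverified}")
print(f"hypothetical hidden false negatives: {hidden_false_negatives:.0f}")
# Any such children would pull the true sensitivity below the reported 100%,
# but the study design cannot observe them.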
For the children who did not receive the reference standard, there is no way to ascertain the number of children who were potentially maltreated but unidentified (false negatives). Furthermore, as inclusion in this study involved a convenience sample of children/families who a) gave consent for participation and b) cooperated in filling out the questionnaire, we do not know if the children in this study were representative of their study population. In addition, unlike screening tools for intimate partner violence [66,67], none of the screening tools for possible maltreatment have been evaluated through randomized controlled trials; as such, we have no evidence about the effectiveness of such tools in reducing recurrence of maltreatment or improving child well-being. This review identified one study which evaluated a screening tool that did not suffer from serious verification bias or incorporation bias. Sittig et al. [27] evaluated the ability of the SPUTOVAMO five-question checklist to identify potential physical abuse or neglect in children under the age of 8 years who presented to an emergency department with any physical injury. While no children exposed to potential physical abuse were missed by this tool, at a population level a large number of children were falsely identified as potentially physically abused (over 13,000); furthermore, many children potentially exposed to neglect were missed by this tool (334 per 100,000). Qualitative research suggests that physicians report having an easier time detecting maltreatment based on physical indicators, such as bruises and broken bones, but have more challenges identifying less overt forms of maltreatment, such as 'mild' physical abuse, emotional abuse, and children's exposure to intimate partner violence [68]. The authors of this study suggest that the SPUTOVAMO "checklist is not sufficiently accurate and should not replace skilled assessment by a clinician" [27].
The poor performance of screening tests for identifying children potentially exposed to maltreatment that we found in this review leads to a conclusion similar to that reached in the World Health Organization's Mental Health Gap Action Programme (mhGAP) update, which states that "there is no evidence to support universal screening or routine inquiry" [69]. Based on the evidence, the mhGAP update recommends that, instead of screening, health care providers use a case-finding approach to identify children exposed to maltreatment by being "alert to the clinical features associated with child maltreatment and associated risk factors and assess for child maltreatment, without putting the child at increased risk" [69]. As outlined in the National Institute for Health and Clinical Excellence (NICE) guidance for identifying child maltreatment, indicators of possible child maltreatment include signs and symptoms; behavioural and emotional indicators or cues from the child or caregiver; and evidence-based risk factors that prompt a provider to consider, suspect or exclude child maltreatment as a possible explanation for the child's presentation [70]. The NICE guidance includes a full set of maltreatment indicators that have been determined based on the results of their systematic reviews [70]. This guidance also discusses how providers can move from "considering" maltreatment as one possible explanation for the indicator to "suspecting" maltreatment, which in many jurisdictions invokes a clinician's mandatory reporting duty. In addition, there are a number of safety concerns that clinicians must consider before inquiring about maltreatment, such as ensuring that, for children who are of an age and developmental stage where asking about exposure to maltreatment is feasible, such inquiry occurs separately from their caregivers, and that systems for referrals are in place [71]. The findings of this review have important policy and practice implications especially since, as noted in the introduction, there is an increasing push to use adverse childhood experiences screening tools in practice [15,16]. We are not aware of any diagnostic accuracy studies evaluating adverse childhood experiences screening tools, and it is unclear how these tools are currently being used in practice or how they will be used in the future [72]. For example, does a provider who learns a child has experienced maltreatment via an adverse childhood experiences screener then inform CPS authorities? What services is the child entitled to based on the findings of an adverse childhood experiences screener, if the child indicates they have experienced child maltreatment along with other adverse experiences? The findings of the present review suggest that additional research is needed on various child maltreatment identification tools (further accuracy studies, along with studies that assess acceptability, cost effectiveness, and feasibility) before they are implemented in practice. The findings also suggest the need for more high-quality research about child maltreatment identification strategies, including well-conducted cohort studies that follow a sample of children identified as not maltreated (to reduce verification bias) and randomized controlled trials that assess important outcomes (e.g., recurrence and child well-being outcomes) in screened versus non-screened groups.
The results of randomized controlled trials that have evaluated screening in adults experiencing intimate partner violence underscore the need to examine the impacts of screening [66,67]. Similar trials in a child population could help clarify the risks and benefits of screening for maltreatment. Future systematic reviews that assess the accuracy of tools that attempt to identify children exposed to maltreatment by evaluating parental risk factors (e.g., parental substance use) would also complement the findings of this review. --- Strengths and limitations The strengths of this review include the use of a systematic search to capture identification tools, the use of an established study appraisal checklist, calculations of false positives and negatives per 100 where prevalence estimates were available (which may be more useful for making clinical decisions than sensitivity and specificity rates), and the use of GRADE to evaluate the certainty of the overall evidence base. A limitation is that we included English-language studies only. There are limitations to the evidence base, as studies were rated as unclear or high risk of bias and the overall certainty of the evidence was low. Additional limitations include our reliance on the 2 and 10% prevalence rates commonly seen in emergency departments [20] and our use of American estimates to model potential service outcomes following a positive screen (e.g., the number of children post-investigation who receive services). These prevalence rates likely do not apply across different countries, where prevalence rates are often unknown. For example, one study evaluated the Escape instrument in an Iranian emergency department. While the authors cite the 2 to 10% prevalence rate in their discussion [25], we are unaware of any studies estimating the prevalence of child maltreatment in Iranian emergency departments. Where local prevalence rates are known, practitioners are encouraged to use the formulas in the methods section (or to use GRADEpro) to estimate false positives and negatives based on the prevalence rates of their setting, as well as known estimates for service responses in their country, in order to make informed decisions about the use of various identification strategies. Furthermore, our modelling of service outcomes assumes that 1) all positive screens will be reported and 2) reports are necessarily stressful/negative. While many of the included studies that used CPS as a reference standard reported all positive screens, it is unclear if this would be common practice outside of a study setting (i.e., does a positive screen trigger one's reporting obligation?). Further research is needed to determine the likely outcomes of positive screens. It is also important to recognize that while reviews of qualitative research do identify that caregivers and mandated reporters have negative experiences and perceptions of mandatory reporting (and associated outcomes), there are some instances where reports are viewed positively by both groups [68,73]. Finally, because our review followed the inclusion/exclusion criteria of Bailhache et al. [13] and excluded studies that did not explicitly set out to evaluate sensitivity, specificity, positive predictive values or negative predictive values, it is possible that there are additional studies where such information could be calculated. --- Conclusion There is low to very low certainty evidence that the use of screening tools may result in high numbers of children being falsely suspected or missed.
These harms may outweigh the potential benefits of using such tools in practice. In addition, before considering screening tools in clinical programs and settings, research is needed that identifies patient-important outcomes of screening strategies (e.g., reduction of recurrence). --- Availability of data and materials All data are available within this article, the supplemental material, or via the references. --- Supplementary information Supplementary information accompanies this paper at https://doi.org/10.1186/s12887-020-2015-4. --- Additional file 1. PRISMA Checklist --- Additional file 2. Example search strategy --- Additional file 3. Study and participant characteristics of interest --- Additional file 4. Critical appraisal rankings --- Additional file 5. Consequences of screening per 100,000 children --- Abbreviations CPS (Child Protective Services): A short form for governmental agencies responsible for providing child protection, including responses to reports of maltreatment; GRADE (Grading of Recommendations, Assessment, Development and Evaluation): The GRADE process involves assessing the certainty of the best available evidence and is often used to support guideline development processes; mhGAP (Mental Health Gap Action Programme): A programme launched by the World Health Organization to facilitate the scaling up of care for mental, neurological, and substance use disorders; the programme comprises evidence-based guidelines and practical intervention guides used to assist in the implementation of guideline principles; NICE (National Institute for Health and Care Excellence): An executive non-departmental body operating in the United Kingdom that provides national guidance and advice to improve health and social care; QUADAS-2 (Quality Assessment of Diagnostic Accuracy Studies-2): A tool for evaluating the quality of diagnostic accuracy studies. --- Authors' contributions JRM conceptualized and designed the review, carried out the analysis, and drafted the initial manuscript. HLM assisted with conceptualizing the review. AG and JCDM checked all data extraction. NS was consulted regarding the GRADE analysis. JCDM and CM assisted with preparing an earlier draft of the review, including interpretation of data. All authors made substantial contributions to revising the manuscript and all authors approved the manuscript as submitted. --- Ethics approval and consent to participate Not applicable. --- Consent for publication Not applicable. --- Competing interests The authors declare that they have no competing interests. --- Publisher's Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Background: Child maltreatment affects a significant number of children globally. Strategies have been developed to identify children suspected of having been exposed to maltreatment with the aim of reducing further maltreatment and impairment. This systematic review evaluates the accuracy of strategies for identifying children exposed to maltreatment. Methods: We conducted a systematic search of seven databases: Medline, Embase, PsycINFO, Cumulative Index to Nursing and Allied Health Literature, Cochrane Libraries, Sociological Abstracts and the Education Resources Information Center. We included studies published from 1961 to July 2, 2019, estimating the accuracy of instruments for identifying potential maltreatment of children, including neglect, physical abuse, emotional abuse, and sexual abuse. We extracted data about accuracy and narratively synthesised the evidence. For five studies, where the population and setting matched known prevalence estimates in an emergency department setting, we calculated false positives and negatives. We assessed risk of bias using QUADAS-2. Results: We included 32 articles (representing 31 studies) that evaluated various identification strategies, including three screening tools (SPUTOVAMO checklist, Escape instrument, and a 6-item screening questionnaire for child sex trafficking). No studies evaluated the effects of identification strategies on important outcomes for children. All studies were rated as having serious risk of bias (often because of verification bias). The findings suggest that use of the SPUTOVAMO and Escape screening tools at the population level (per 100,000) would result in hundreds of children being missed and thousands of children being over-identified. Conclusions: There is low to very low certainty evidence that the use of screening tools may result in high numbers of children being falsely suspected or missed. These harms may outweigh the potential benefits of using such tools in practice (PROSPERO 2016:CRD42016039659).
Introduction Why do people stick with their unhealthy habits despite adverse consequences? This is a pressing question for both public health research and policy-makers. For example, the prevalence of overweight and obesity has been growing steadily in all Western societies (Ng et al. 2014). Smoking continues to be a major public health problem even though its health risks are widely recognised (Reitsma et al. 2017), and many behaviours that are acknowledged as essential for healthy lifestyles have not been universally adopted, such as getting enough exercise or eating sufficient amounts of vegetables (Spring et al. 2012). Since risky behaviours are more prevalent in lower socioeconomic groups, understanding why unhealthy behaviours are so resistant to change is vital to tackling inequalities in health. In this article, we argue that there is a theoretical tradition which has been unexplored in this context even though it is well suited for examining the core questions of health behaviour research. This tradition is pragmatism and its conception of habits, which offers a dynamic and action-oriented understanding of the mechanisms that "recruit" individuals to risky health-related behaviours. Health-related behaviour is often understood as an issue having to do with the individual and guided by motivations, intentions, self-efficacy and expectations, as is the case with influential and widely used planned behaviour theories (Ajzen 1985) and the health belief model (Strecher and Rosenstock 1997). In this line of thinking, individuals behave the way they do because their intentions, knowledge, beliefs or motives lead them to do so (Cohn 2014;Nutbeam and Harris 2004). The individualised approach is especially visible in many psychological theories of behaviour change and in interventions and programmes designed on the basis of these theories (Baum 2008;Blue et al. 2016). Behavioural interventions have become increasingly important in public health promotion despite weak evidence for their overall effectiveness in generating long-lasting changes in behaviour and their potential to reduce inequalities in health (Baum and Fisher 2014;Jepson et al. 2010). Research on social determinants of health often takes a critical stance towards psychological theories and recognises social structures as key contributors to health and health-related behaviours (Mackenbach 2012;Marmot and Wilkinson 2006). In health sociology, concepts and measures related to power, cultural norms, social circumstances, societal hierarchies, and material resources, for instance, are used to refer to structural constraints and modifiers of individual action and related outcomes. A large body of research has shown that education, occupational status, financial resources, living area, gender and ethnicity all affect ill health and life expectancy and the ways in which individuals act upon their health. The better off people are, the more likely they are to lead healthy lives and adopt healthy lifestyles (Marmot et al. 2010;Pampel et al. 2010). While social structure undoubtedly constrains people's behaviour, people can also exert agency, as they are able to consider different options and to act in discordance with their structural predispositions and social circumstances (Mollborn et al. 2021). The key question in sociological theory is, thus, how individual behaviour can be simultaneously understood as shaped by social structures and as governed by individual choices.
It is not enough to state that both social structures and individual intentions are important in explaining behavioural outcomes. One also needs to understand how and why social structures enable or generate particular kinds of behaviour within the context of people's everyday lives. Sociological theorisation on health inevitably falls short if it fails to confront this issue, thus leading to an insufficient understanding of factors that shape health-related behaviours (Williams 2003). In this article, we first take a look at sociological theories of health-related behaviour, to which the concepts of lifestyle and, more recently, social practices have been central. Then we move on to discuss the pragmatist concept of habit. The concept of habit has often been used in research on health-related behaviours and behavioural change, and it has proved to be useful in explaining continuities in behaviour (Gardner 2015;Lindbladh and Lyttkens 2002). We argue that previous research has not taken into account the pragmatist understanding of the concept as an important contribution to theorisations of health lifestyles and practices. Pragmatism's dynamic and action-oriented understanding of habits helps in conceptualizing how practices are formed in interaction with material and social conditions and what the mechanisms are by which practices recruit individuals. In pragmatism, habits are understood in terms of problem-solving; they are active and creative solutions to practicalities of everyday life and responsive to change, not mere blind routines. We, therefore, focus on the creative and active nature of habit formation, which can be understood as mechanisms by which behavioural patterns emerge. The pragmatist approach not only opens new perspectives in health research but can also give new tools for preventing non-communicable diseases and reducing inequalities in health. Next, we discuss theories of lifestyles and social practices and go on to show how the pragmatist theory of habits anticipated many of these insights (historically speaking) but also developed its own framework for analysing the inherent habituality of action. --- The interplay between structure and agency: lifestyles and social practices Attempts to bridge the gap between social structures and individual action in health sociology often draw from a loose tradition of practice theories. They are all based on the attempt to overcome methodological individualism without leaning too much towards methodological holism (Maller 2015). This means that practice theories try to take into account both individual action (methodological individualism) and the role social structures play in explaining action (methodological holism). From the perspective of health sociology, the fundamental question is how to understand the interplay between individual agency and structural factors in health-related matters, such as smoking, drinking or food consumption. In this respect, two concepts have been central: lifestyles and social practices. Biomedical or social epidemiological approaches, which dominate health inequality research, typically frame lifestyle as a set of individual, volitional behaviours (Korp 2008). Lifestyle is thus a sum of individual health-related behaviours, such as ways of consuming alcohol or dietary habits. In sociological literature, lifestyle is seen as a collective attribute: lifestyles are shared understandings and ways of operating in the world that have been generated in similar social circumstances (Frohlich et al. 
2001). They develop over the life-course (Lawrence et al. 2017;Banwell et al. 2010) and are shaped by social and material conditions (Cockerham 2005). As such, lifestyles are not merely outcomes of choices or personal motives and preferences, but they reflect an individual's position in a wider social structure and are fundamentally shaped by those structures. Cockerham (2009, p. 159) defines health lifestyles as "collective patterns of health-related behaviour based on choices from options available to people according to their life chances". In his Health Lifestyle Theory, Cockerham draws from Max Weber's concept of lifestyles, in which lifestyle-related choices are seen as voluntary but constrained and enabled by life chances that are essentially structural: similar life chances tend to generate similar patterns of voluntary action, thus generating patterns of behaviour (Cockerham 2009). Cockerham (2013) considers life chances as consisting of a variety of structural determinants, such as class circumstances, age and gender, which collectively influence agency and choices. The interaction between choices and chances constitutes dispositions to act, and the resulting lifestyles may have varying effects on health. Health-related behaviour is shown to be clustered within individuals and by socioeconomic status (De Vries et al. 2008;Portinga 2007), yet health lifestyles are rarely uniformly health-promoting or health-compromising, and there is a considerable amount of variation in health behaviour between individuals with similar socioeconomic characteristics (Mollborn and Lawrence 2018;Pronk et al. 2004). Cockerham's approach, like many other approaches to health-related behaviours (Williams 1995;Frohlich et al. 2001;Carpiano 2006;Gatrell et al. 2004;Korp 2008), draws on Pierre Bourdieu's concept of habitus. Habitus is a set of dispositions that generate class-specific ways of operating in the world (Bourdieu 1984, pp. 101-102). Habitus develops during the socialisation process in interaction with social circumstances and social relations, and it generates tastes, choices and practices that are subjectively meaningful in given contexts. Accordingly, people accommodate their desired way of life in accordance with their assessment of their circumstances and available resources (Cockerham 2005). From a Bourdieusian perspective, health lifestyles are a product of life conditions and available resources, as well as people's preferences and tastes, which are formed in class-specific circumstances. People's dietary patterns, leisure activities and ways of consuming alcohol therefore reflect class relations and distinctions. Bourdieu's ideas on habitus and practices highlight how people's day-to-day activities tend to be, to a great extent, routine-like and taken for granted: once established, a habitus governs behaviour, enabling everyday practices to be acted out without conscious deliberation. Thus, Bourdieu's approach explains why lifestyles are not random by underlining the importance of class-specific social conditions internalised in the habitus. Bourdieu's approach has been repeatedly criticised for exaggerating objective social structures at the expense of agency and reflexivity (e.g., Adams 2006;Frohlich et al. 2001;Archer 2005). Critics have claimed that Bourdieu's concept of habitus does not allow for voluntary action and thus assumes that existing social structures are reproduced almost automatically.
While Bourdieu acknowledges the importance of agency, he still prioritises structural determinants of action at the expense of individual choices, preferences and subjective understandings (Jenkins 1992). In more recent discussions, however, the notions of reflexivity and flexibility of habitus have been more central and the idea of an over-controlling habitus has been rejected (Cockerham 2018). Silva (2016) has noted that Bourdieu's conception of habitus changed over time so that in his later work habitus is more 'elastic' compared to his earlier work. In fact, Bourdieu's later ideas of the role of reflexivity in situations when habitus and field collide are very close to pragmatism (Bourdieu 1990;Bourdieu and Wacquant 1992;Crossley 2001). Yet, Bourdieu gives priority to social class in the process of lifestyle formation. This means that socioeconomic status determines to a great extent what people do (Gronow 2011). The impression that structures are determining can be seen as a result of Bourdieu's emphasis on class-related determinants of action. Regarding the possibility of modifying health-related habits, Crammond and Carey (2017) have emphasized that Bourdieu's notion of habitus does not give credit to public health initiatives or to changing conditions for influencing habitus and behaviour. More recently, the concept of social practices has been suggested as a general conceptual framework for analysing and understanding health-related behaviour. While there is a variety of so-called practice theories and no integrated theory of practice exists, we concentrate on practice theoretical approaches and applications that have been central to the fields of consumption (e.g. Warde 2005; Shove 2012) and health sociology (e.g., Blue et al. 2016;Maller 2015;Meier et al. 2018;Delormier et al. 2009). In these fields, Reckwitz's (2002) influential article is commonly cited as the source for defining social practices as routine-like behaviour which consists of several interrelated elements, such as bodily and mental functions, objects and their use, knowledge, understanding and motivation (ibid., p. 249). According to Shove et al. (2012), practices integrate three elements: materials (objects, goods and infrastructures), competences (understandings, know-how) and meanings (social significance, experiences). Practices can refer to any form of coordinated enactment: preparing breakfast, having a break at work or having after-work drinks. Similar to lifestyles, social practices turn attention away from the individual and their intentions and motives towards the routinised ways people carry out their daily lives (Warde 2005). The idea is to look at people as carriers of practices because practices guide human action according to their own intrinsic logic (Reckwitz 2002). In other words, practices are relatively stable ways of carrying out a set of elements in an integrated manner. It follows, therefore, that they are both performances enacted more or less consistently in daily life, as well as entities that shape the lives of their carriers (Shove et al. 2012). The social practices approach points out how smoking, drinking and eating should not be seen merely as single behaviours, but rather as parts of collectively shared practices, which intersect with other everyday routines (Mollborn et al. 2021).
For example, in understanding drinking behaviour, one cannot separate the act of drinking from other aspects of the drinking situation, such as the kind of alcohol being consumed, how, where and with whom it is done, and for what purposes (Meier et al. 2018;Maller 2015). Drinking, smoking and eating, accordingly, are not single entities but parts of different kinds of practices, performed and coordinated with other activities of daily life (Blue et al. 2016). As the main aim of practice theoretical approaches is to explain the stability and continuities of behaviour, the approach has difficulties in grasping the role of individual agency in the enactment of practices. According to critics, in some versions of practice theory, the role of individual carriers and the ways in which they make sense and experience practices seems to be more or less neglected (Spaargaren et al. 2016;Miettinen et al. 2012). Consideration of individuals' sense of doing things is particularly important when studying aspects of human behaviour that can have adverse consequences and are unequally distributed within society. Therefore, we argue that the practice theoretical approach would benefit from more theorization on individual agency and the mechanisms by which individuals adopt and become carriers of practices. For health sociology, the question of how practices change and how people are recruited as carriers of practices is particularly relevant: how can healthy practices be adopted or how can practices be modified to become healthier? We argue that these issues were fruitfully conceptualized by the philosophical tradition of pragmatism with its concept of habits, which takes the individual actor as a premise without losing sight of the force of everyday routines. --- Habits as dispositions In recent decades, pragmatism has become an important source of inspiration for many social theorists (e.g., Joas 1996;Baert 2005;Shilling 2008). For example, Joas (1996) has argued that pragmatists had a unique viewpoint on the creativity of action, whereas for Gross (2009) pragmatism is a key point of departure when discussing social mechanisms. Pragmatism has been previously introduced to health research, for example, in relation to the epistemological problems of different kinds of health knowledge (Cornish and Gillespie 2009) and health services research (Long et al. 2018). Here, we focus on the aspect of pragmatist thought we find most relevant for health sociology, namely, its concept of habits. Classical pragmatist philosophers were active at the end of the nineteenth and the beginning of the twentieth century. They included the likes of George Herbert Mead, William James, Charles S. Peirce and John Dewey. We mainly draw inspiration from John Dewey for his insights into the notion of habit. However, all classical pragmatists shared a similar understanding of the essential role habits play in explaining action (Kilpinen 2009). Thus, even though classical pragmatists may have differed in their point of emphasis, Dewey's notion of habits is in many ways representative of the classical pragmatist understanding of habits. In this conceptualisation, habits are acquired dispositions to act in a certain manner, but they do not preclude conscious reflection. Pragmatism, like the social practices approach, puts emphasis on contextual factors and the environments of action in understanding how habits are formed and maintained. Thus, one can argue that pragmatists were precursors to practice theorists. 
First and foremost, pragmatists highlighted the interaction between environments, habits and actors, by pointing out that people are constantly in the midst of ongoing action. Pragmatism also has an affinity with behaviourist psychology, which emphasises the role of environmental cues in triggering action. Behaviourists maintain that once an actor is conditioned to a reaction in the presence of a particular stimulus, the reaction automatically manifests itself when the stimulus is repeated. Say, a smoker might decide to give up smoking but the presence of familiar cues (e.g. cigarettes sold at the local grocery store, workmates who smoke) automatically triggers a response that results in a relapse. Classical pragmatists also thought that everything we do is in relation to certain environmental stimuli, but they did not think of the relationship in such mechanical, automatic terms (Mead 1934). What acts as a stimulus depends on the part the stimulus plays in one's habits rather than on simple conditioning (Dewey 1896). Thus, people are not simple automata that react to individual stimuli in a piecemeal fashion but rather creatures of habit. This means that individual actions get their meaning by being a part of habits (Kilpinen 2009). What may trigger the smoker's relapse is not the presence of isolated cues but the habits that they are a part of; having a morning coffee, passing by or going to the local bars and grocery stores, and taking a break at work. Habits make the associated cues familiar and give them meaning. The term habit, both in sociological literature and in common usage, typically refers to an action that has become routine due to repeated exposure to similar environmental stimuli. In this conception, the behaviour in question may originally have been explicitly goal-directed, but by becoming habituated, it becomes an unconscious, non-reflexive routine. As such, habits interfere with individuals' ability to act consciously. In practice theoretical approaches habit is similarly paralleled with routine-like ways of doing things. According to Southerton (2013), habits can be viewed as "observable performances of stable practices" (Southerton 2013, p. 337), which are essential for practices to remain stable (Maller 2015). In addition, habits are often understood as routines in popular science. According to Duhig (2012), the habit "loop" consists of the association between routines and positive rewards. Pragmatists tend to see habits somewhat differently-as inner dispositions. This conceptual move means that habits have a "mental" component and habits can exist as tendencies even when not overtly expressed. Habits are thus action dispositions rather than the observable behaviour to which they may give rise to (Cohen 2007). As tendencies, habits include goals of action and not mere overt expressions of action; in other words, they are projective, dynamic and operative as dispositions even when they are not dominating current activities (Dewey 1922, p. 41). Habits make one ready to act in a certain way, but this does not mean that one would always act accordingly (Nelsen 2015). To paraphrase Kilpinen (2009, p. 110), habits enter ongoing action processes in a putative form and we critically review them by means of self-control. In this way, habits are means of action: habits "project themselves" into action (Dewey 1922, p. 25) and do not wait for our conscious call to act but neither are they beyond conscious reflection. 
According to classical pragmatists, habits thus do not dictate our behaviour. Rather, habits constitute the so-called selective environment of our action. They give rise to embodied responses in the environments in which they have developed but, as dispositions, habits are tendencies to act in a certain manner, not overt routines that would always manifest themselves in behaviour. What distinguishes habits from inborn instincts is their nature as acquired dispositions. Moreover, habits guide action and make different lines of conduct possible. This is easy to see in the case of skills that require practice; for example, being skilful in the sense that one habitually knows the basic manoeuvres, say, in tennis, does not restrict action but rather makes continuous improvement of the skill in question possible. Simply reading books on tennis does not make anyone a good player of tennis and therefore actual playing is required for habit formation. Furthermore, once habits are acquired as dispositions, not playing tennis for a while does not mean that the habits and related dispositions would immediately disappear. In the pragmatist understanding, habits are not the opposite of agency but rather the foundation upon which agency and reflexive control of action are built. Purely routine habits do, of course, also exist but they tend to be "unintelligent" in Dewey's conceptualisation because they lack the guidance of reflective thought. Furthermore, Dewey (1922, p. 17) argued that conduct is always more or less shared and thus social. This also goes for habits, since they incorporate the objective conditions in which they are born. Action is thus already "grouped" in the sense that action takes place in settled systems of interaction (ibid., p. 61). This is where Dewey's ideas resemble practice theory most because the grouping of action into settled systems of interaction can be interpreted to indicate the kinds of enactments that practice theory is interested in. While repeated action falls within the purview of habits, Dewey (1922) was adamant that habits are dispositions rather than particular actions; the essence of habit is thus an acquired predisposition to particular ways or modes of responding in a given environment. Compared with practice theories, this notion of habits underscores competences (understandings, know-how) and meanings (social significance, experiences). Because habits are dispositions, they are the basis on which more complicated clusters of habits and, thus, practices, can be built. This means that practices can recruit only those who have the habits that predispose them to the enactments related to a practice. --- Habits as practical solutions In the previous section, we explained that pragmatists did not think of habits as mere routines. To be more precise, Dewey distinguished between different kinds of habits on the grounds of the extent of their reflexivity. Dewey labelled those habits that exhibit reflexivity as intelligent habits. Smoking is an example of what Dewey called "bad habits": they feel like they have a hold on us and sometimes make us do things against our conscious decisions. Bad habits are conservative repetitions of past actions, and this can lead to an enslavement to old "ruts" (Dewey 1922, p. 55). Habits hold an intimate power over us because habits make our selfhood-"we are the habit", in Dewey's (1922, p. 24) words. However, habits need not be deprived of thought and reasonableness. 
So-called intelligent habits, in which conscious reflection and guidance play a part, were Dewey's ideal state of affairs. Dewey (ibid.,p. 67) thought that what makes habits reasonable is mastering the current conditions of action and not letting old habits blindly dominate. There is thus no inherent opposition between reason and habits per se but between routine-like, unintelligent habits and intelligent habits, which are open to criticism and inquiry (ibid., p. 77). Many forms of health-related behaviour can be characterised in Dewey's terms as unintelligent habits. We stick to many habits and rarely reflect on them in our daily lives. However, that there are intelligent and unintelligent habits does not necessarily imply that all healthy habits would be intelligent in the sense of being open to reflection. Further, the unhealthiness of a habit does not in itself make a habit unintelligent in the sense of being an unconscious routine. Rather, all habits are intelligent in that they have an intrinsic relationship with the action environment. They help the actor to operate in a given environment in a functional and meaningful way. For example, smoking can be seen as meaningful in many hierarchical blue-collar work environments, where the way in which work is organised determines, to a great extent, workers' ability to have control over their working conditions. Smoking can be used as a means to widen the scope of personal autonomy because in many workplaces a cigarette break is considered a legitimate time-out from work (Katainen 2012). Smoking can thus be seen as a solution to a "problem" emerging in a particular environment of action, the lack of personal autonomy. In this sense, it is an intelligent habit that enables workers' to negotiate the extent of autonomy they have and to modify their working conditions (ibid.). As shared practices, cigarette breaks motivate workers to continue smoking and recruit new smokers, but when smoking becomes a routine, reinforced by nicotine addiction, it does not need to be consciously motivated (see also Sulkunen 2015). In the context of highly routinized moments of daily smoking, reflection on the habit and its adverse consequences to health is often lacking (Katainen 2012). This means that the habit in question is not fully intelligent in Dewey's terms. The mechanisms of adopting so-called bad habits can be very similar to adopting any kind of habit if we understand habits as enabling a meaningful relationship with the environments and conditions in which they were formed. This idea also helps us rethink the socioeconomic patterning of health-related lifestyles. We do not have to assume that people in lower socioeconomic positions always passively become vehicles of bad habits due to limited life chances. The pragmatist view on habit presupposes an actor who has an active, meaningful relationship with the environment, that is, an actor with a capacity for agency, as our illustration of habits as a way to increase worker autonomy shows. Unlike practice theory or Bourdieu's concept of habitus, the pragmatist concept of habit explains habitual action as a solution to practical problems in daily life. For pragmatists, action is always ongoing, and those activities that work and yield positive results in a given context have the potential to become habitual. We thus use habits to actively solve problems in our living environments, adapt to the fluctuating conditions we live in, and also modify these conditions with our habits. 
--- Habits, doubt and change So far, we have discussed habits as a relationship between the actor and the environment of action. We already hinted at the pragmatist idea that habits can be reflexive, and we now move on to discuss in more detail how and why habits change. According to Shove et al. (2012), practices are formed and cease to exist when links between materials, competences and meanings are established and dissolved. Additionally, practice theorists have suggested that practices may change when they are moved to a different environment or when new technologies and tools are introduced (Warde 2005). Actors may learn new things and perform practices in varying ways as performances are rarely identical (Shove 2012). However, it is insufficient to assert that practice theory assumes an active agent with transformative capacity if the underlying view of agency is passive and practices are the ones with agency to recruit actors. Furthermore, the question remains as to when actors are capable of being transformative and when they are confined to the repetition of practical performances. The pragmatist understanding of how habits change, and when and how actors exercise their agency, originates in Charles Peirce's thought. Peirce (1877) argued that we strive to build habits of action and often actively avoid situations that place our habits in doubt because doubt is an uncomfortable feeling. However, habits are nevertheless subject to contingencies and unforeseeable circumstances. Doubt cannot thus be avoided and it manifests itself in the crises of our habits that take place in concrete action situations and processes. How should one then go about changing habits? This is a central question in all health sociological theory and has significant practical implications. Dewey (1922, p. 20) was a forerunner of many modern views in that he saw that habits rarely change directly by, for example, simply telling people what they should do. This presupposition is well acknowledged in critical health research, which has repeatedly pointed out that there is a gap between guidelines of healthy living and people's life worlds (e.g., Lindsay 2010). It is usually a better idea to approach habit change indirectly by modifying the conditions in which habits occur. In the case of unwanted habits, conditions "have been formed for producing a bad result, and the bad result will occur as long as those conditions exist" (Dewey 1922, p. 29). Dewey's emphasis on the role of conditions is well reflected in modern public health promotion, which rely on population-level measures and interventions. Yet, Dewey's notion of the conditions of habits goes beyond macro-level measures, such as taxation, restrictions and creating health promoting living environments, to cover more detailed aspects of our daily life. According to Dewey, changing the conditions can be done by focusing on "the objects which engage attention and which influence the fulfilment of desires" (ibid., p. 20). Assuming that simply telling someone what they should do will bring about a desired course of action amounts to a superstition because it bypasses the needed means of action, that is, habits (ibid., pp. 27-28). Interestingly, Dewey's ideas of behaviour change have many similarities with the approach known as nudging, as both want to modify environmental cues to enable desired behavioural outcomes (Vlaev et al. 2016). 
According to both of these approaches, behavioural change is often best achieved by focusing on the preconscious level of habitual processes rather than appealing to the conscious mind by informing people of the potential risks associated with, for example, their dietary habits. Despite these similarities, the pragmatist view of habit change cannot be reduced to the idea of modifying people's "choice architectures". As Pedwell (2017) has pointed out, advocates of the nudging approach fail to sufficiently analyse how habits are formed in the first place and how they change once nudged. In nudge theory, habits are analogous to non-reflexive routines, and the change in habitual behaviour occurs due to a change in the immediate environment of action. As a result, nudge advocates conceptualize the environment through a narrower lens than pragmatists and they are less concerned about how broader social, cultural, and political structures influence and shape everyday behaviour (ibid.). According to pragmatists, changing habits is something that we do on a daily basis, at least to some extent. This does not mean that we would ever completely overhaul our habits. Dewey (1922, p. 38) thought that character consists of the interpenetration of habits, and therefore a continuous modification of habits by other habits is constantly taking place. In addition, habits incorporate some parts of the environments of action, but they can never incorporate all aspects of the contexts of action. What intelligence (or cognition, in modern parlance) does, in general, is observe the consequences of action and adjust habits accordingly. Because habits never incorporate all aspects of the environment of action, there will always be unexpected potential for change when habits are exercised in a different environment (even if just slightly) than the one in which they were formed (ibid., p. 51). Different or changed contexts of action imply the potential to block the overt manifestation of habits. For example, if workplace smoking policies are changed so that smokers are not allowed to smoke inside, the habit of smoking needs to be reflected upon and the practice of workplace smoking modified. If the employer simultaneously provides aid for quitting smoking, or even better, creates conditions for work which would support workers' experience of agency and autonomy, some may consider breaking the addiction, at least if colleagues are motivated to do the same thing. Such contextual changes lead to moments of doubt in habit manifestation and thus compel us to reflect on behaviour and, in some cases, to come up with seeds for new habits. The habit of smoking can be seen as a way of dealing with "moments of doubt". It is a solution to certain problems of action in a given environment, as in the previous example of workplace smoking and autonomy. If the original context to which the habit was a "solution" changes, it becomes easier to change the habit as well. Pragmatist thinking thus suggests that here lies one of the keys to reducing unhealthy behaviours. By modifying the environments of habits, it is possible to create moments of doubt that give ground to the formation of new habits. Contrary to nudge theorists, however, pragmatists are not only concerned with promoting change in individual behaviours and their immediate action environments but also in the sociocultural contexts of habit formation by enabling people to create new meaningful capacities and skills (Pedwell 2017).
The pragmatists also considered the consequences of moments of doubt on habits. Dewey (ibid., p. 55) argued that habits do not cease to exist in moments of doubt but rather continue to operate as desireful thought. The problem with "bad habits" is that a desire to act in accordance with the habit may lead to solving situations of doubt by changing the environment so as to be able to fulfil the habit rather than changing the bad habit. For example, new smoking regulations intended to decrease smoking may not lead to an actual decrease but rather to a search by smokers for ways to circumvent the regulation. A crisis of a particular habit thus need not always result in changes in behaviour, as the disposition does not change overnight and may lead to looking for ways to actively change the environment of action back to what it used to be. Furthermore, the crisis (i.e., situation of doubt) may simply be left unresolved. This is what often happens when people are exposed to knowledge of the adverse consequences of their behaviour. There might be a nagging sense that one really should not behave the way one does, but as long as the environmental cues are in place, the habit is not modified, especially if one's social surroundings reinforce the old habit (e.g., other people also continue smoking at the workplace). It can also happen that one makes minor changes in behaviour, for example, by cutting down instead of quitting smoking, which can in time lead to falling back on earlier smoking patterns. New workplace smoking policies, therefore, often mean that the practice of smoking is modified, and the smokers adopt new places and times for smoking. While old habits often die hard, discordances between habits and their environments can nevertheless trigger reflection and thus have a potential for change. --- Discussion We have argued that the pragmatist understanding of habits is an often-overlooked forerunner of many
modern theories of health behaviour. While the health lifestyle theory helps to analyse the factors by which health lifestyles are patterned and points out that both contexts of action and individual choices are important in lifestyle formation, it is less helpful in empirical analyses on the mechanisms by which particular patterns of behaviour emerge in the interplay between choices and chances. The social practices approach further elaborates the relationship between choices and life chances by turning attention away from the structure-agency distinction towards enactments of everyday life and towards how people go about their lives by carrying social practices. However, the social practices approach runs the risk that individual action becomes a mere enactment of practices. Thus, the practices are the true agents and people become mere carriers of practices. In this context, the pragmatist notion of habits can be useful in grounding practices within the clusters of habits that people have, thereby enabling them to be recruited by specific practices. To conclude the paper, we want to stress some of the key pragmatist insights into the theorization of health lifestyles and practices. First, unlike practice theories, pragmatism takes individual actors and their capacity for meaning making and reflexivity as a premise for understanding how habits are formed and maintained. Thus, from the actor's point of view, habits, even "bad" habits, should be understood as functional and meaningful ways of operating in everyday circumstances. Habits are creative solutions to problems confronted in everyday life and reflect individuals' relationships to the material and social world around them. Action that proves useful and meaningful in a particular context is likely to become habitual. In the context of health inequalities, risky health-related habits can often be seen as a way to strive for agency in circumstances that provide little means for expressing personal autonomy. We suggest that this insight should be at the core of designing any public health or behavioural change interventions tackling health inequalities. Second, pragmatism suggests that habits should be understood as dispositions; people are recruited by practices only when their dispositions enable this to happen. Often a lot of habituation is required before the predispositions that make recruitment possible are in place. Third, pragmatism provides tools to analyse how moments of doubt enter habitual flows of action. Doubting habits is an inherent part of our action process, but habits are called into question especially by changes in the environments of action that make particular habits problematic. This, then, can lead to the development of new or modified habits as a response to the "crisis" of action. If the social and material environment of action, to which the habit is a response, stays more or less the same, the habit will be difficult to change. The pragmatist conception of habits, while emphasizing agency and reflexivity, does not ignore the significance of materiality and routines in daily conduct but is able to incorporate these elements of action in a way that benefits empirical analyses of everyday practices. Pragmatism thus suggests a variety of research settings to investigate the mechanisms by which health-related habits are formed. Here, we provide a few examples. 
On a macro level, it is important to observe how organisational, technological, or legislative changes are manifested in different contexts and how they modify and enable habitual action in different social groups and settings. Structural measures to promote public health are likely to produce varying effects depending on the contexts of action of different population groups. Although the physical environment may be the same, the environment of action is not the same for everyone. In pragmatist terms, new policies can be understood as modifications of action environments, which potentially create moments of doubt in habitual action. For example, there is considerable evidence that smoke-free workplace policies reduce workers' smoking (Fichtenberg and Glantz 2002), but more research is needed to determine how different socioeconomic groups are affected by these policies. Macro-level policy changes create an excellent opportunity to study how policies give rise to new patterns of health-related behaviour, how policies are implemented in different contexts, and how reactions to policies and their effects vary depending on socioeconomic circumstances. A micro-level analysis of health-related behaviour, on the other hand, could focus on the triggers of the immediate environment of action (material, social or cognitive) to examine how habits are formed as practical and creative solutions to specific problems and what kinds of factors create situations of doubt and thus include the potential for habit change. Research should analyse how moments of doubt regarding health-related habits emerge in differing socioeconomic contexts, as well as why unhealthy habits can and often do become deeply routinized and resistant to change. Furthermore, it is essential to identify the problems in relation to which particular habits of action have been formed. In both micro- and macro-level analytical approaches, people's reflexive capacity and the pursuit of a meaningful and functioning relationship with their environments should be at the core of analysis. Methodologically, we suggest that the pragmatist approach to health behaviour research calls for methods that integrate the observation of action and people's accounts of and reasoning about their conduct. Ethnography is one research method suited to this task. With participant observation, it is possible to access the lived experiences in local settings through which larger policies affect health (Hansen et al. 2013; Lutz 2020) and to reach hard-to-reach population groups (Panter-Brick and Eggerman 2018). So far, ethnographic studies have been rare in health inequality research (e.g., Lutfey and Freese 2005). One way to proceed is provided by Tavory and Timmermans (2013), who have suggested pragmatism as a theoretical-methodological basis for constructing causal claims in ethnography. They propose that a useful starting point for observation could be the process of meaning making: how individuals creatively navigate their conduct when confronting moments of doubt and how they make sense of and respond to them in more or less habitual ways. However, surveys can also be used in creative ways to investigate people's habits, for example, using mobile apps that ask and/or track what people are doing. Other methods besides ethnography are thus needed to test the causal claims made by ethnographers. Lastly, research is needed on how educational systems predispose people to develop reflective habits. 
One possible explanation for why knowledge about the adverse consequences of health-related behaviour is correlated with people's socioeconomic status, and especially their level of education, is that a higher level of education makes one more sensitive to knowledge-related cues for behaviour. This is because higher educational levels tend to bring about the habit of reflecting on the basis of new knowledge. Education is intimately related to a habit of thinking of things in more abstract terms: distancing oneself from the specifics of particular situations and moving towards more abstract thinking. A high level of education also means that the absorption of new knowledge has become habitual. Unfortunately, there are no shortcuts to developing such capacity. This is one of the reasons why merely providing information on health-related issues affects different population groups differently. --- Data availability Not applicable as no data were used in the article. --- Declarations --- Conflict of Interest The authors have no conflicts of interest to declare. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. Publisher's Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. Anu Katainen is a Senior Lecturer in Sociology at the Faculty of Social Sciences, University of Helsinki, Finland. Her research comprises projects investigating alcohol policy and drinking cultures, as well as social and health inequalities, with a focus on comparative qualitative sociology. Antti Gronow is a Senior Researcher at the Faculty of Social Sciences, University of Helsinki, Finland. His research interests include climate policy, advocacy coalitions, social network analysis, and pragmatist social theory.
Unhealthy behaviours are more prevalent in lower than in higher socioeconomic groups. Sociological attempts to explain the socioeconomic patterning of health-related behaviour typically draw on practice theories, as well as on the concept of lifestyles. When accounting for "sticky" habits and social structures, studies often ignore individuals' capacity for reflection. The opposite is also true: research on individual-level factors has difficulty with the social determinants of behaviour. We argue that the pragmatist concept of habit is not only a precursor to practice theories but also offers a dynamic and action-oriented understanding of the mechanisms that "recruit" individuals to health-related practices. In pragmatism, habits are not merely repetitive behaviours, but creative solutions to problems confronted in everyday life and reflect individuals' relationships to the material and social world around them. Ideally, the pragmatist conception of habits lays the theoretical ground for efficient prevention of and effective support for behaviour change.
Introduction Health risk behaviour (HRB) is a major concern in the prevention and management of HIV [1]. Such behaviour is often initiated or reinforced during adolescence [2]. The main forms of HRB include sexual behaviour contributing to unintended pregnancy and sexually transmitted diseases, alcohol, tobacco and drug use, unhealthy dietary habits, inadequate physical activity, and behaviour that contributes to unintentional injury or violence [3,4]. Increased propensity for risk taking is a common phenomenon during adolescence [5]; adolescents living with HIV are vulnerable [6,7]. They encounter various adverse impacts following their engagement in HRBs. A number of studies conducted among sexually active adolescents living with HIV report that about half have early sexual debut and unprotected sexual intercourse [6,[8][9][10]. Other studies have reported that adolescents living with HIV (ALWHIV) engage in various HRBs such as transactional sex, that is, sexual intercourse in exchange for material benefit or status [11,12], alcohol abuse, and drug use [8,[13][14][15]. This is problematic for persons living with HIV, because such behaviour underlies suboptimal health outcomes such as poor adherence to antiretroviral treatment [16][17][18], HIV coinfection [19,20], injury, and mortality [21]. Furthermore, this behaviour adversely impacts the socioeconomic welfare of affected families [22]. The occurrence of HRB among ALWHIV is of major public health significance in sub-Saharan Africa (SSA), where there were an estimated 1.2 million ALWHIV aged 15-19 years and 3.2 million HIV-infected children below 15 years in 2014 [2]. The vulnerability to HRB and its consequences among the ALWHIV in SSA is exacerbated by the social environmental factors surrounding the HIV epidemic in this region. Among such factors are household poverty, orphanhood, gender inequality, stigma, cultural practices, and poor accessibility to social or health services [23][24][25][26][27]. Besides these factors, growing evidence suggests that underlying physiological conditions such as HIV-associated neurodevelopmental deficits [28], anxiety, and depression [8,10] increase susceptibility to risk taking among young people living with HIV. In response to the enormous burden of HIV in SSA, some research and intervention programs have been conducted over the past few decades. Unfortunately, such efforts have not addressed the needs of adolescents [38], although Africa is home to 19% of the global youth population [39]. Key among the research gaps is the scarcity of literature on HRB among adolescents living with HIV in SSA. Specifically, there is a dearth of knowledge regarding which forms of HRB have so far been assessed, the characteristics of the ALWHIV (e.g., routes of HIV transmission), where such studies have been conducted in SSA and the general burden of HRB among the ALWHIV. The lack of such research is further compounded by the combining of the adolescent age group with other age categories [40] and by the assessment of HRBs in isolation [41]. Against this backdrop, this systematic review and meta-analysis aims to ascertain the amount of research on HRB and to document the general burden of HRB among adolescents living with HIV in SSA. 
The specific objectives are as follows: (i) To identify and summarize characteristics of studies that quantify HRB among ALWHIV in SSA (ii) To summarize the major forms of HRB assessed among ALWHIV in SSA (iii) To compare the burden of HRB between ALWHIV and HIV uninfected adolescents in the eligible studies from SSA. --- Methods We chose studies based upon the PICOS approach (participants, intervention, comparison, outcome, and study design) [42]. Studies were eligible if they (i) were empirical studies published in a peer-reviewed journal and conducted within SSA; (ii) involved ALWHIV whose age range, mean, or median age fell within 10-19 years; and (iii) quantified any form of HRB among the ALWHIV. We excluded studies that (i) were published in languages other than English and (ii) did not aggregate HRB by HIV status of the participants. Two authors (DS and PNM) independently screened the titles, abstracts, and full articles for eligibility and reached consensus. --- Data Extraction. We used one data extraction sheet to extract general study characteristics of the eligible studies. These characteristics included (i) author and year of publication; (ii) country where the study was done; (iii) year the study was done; (iv) study design; (v) population description; (vi) number of ALWHIV and HIV uninfected adolescents; (vii) route of HIV transmission; and (viii) form of HRB quantified. Then, using two separate data extraction forms, we extracted the (i) author and year of publication; and (ii) data on each specific HRB. From each study, HRB data for ALWHIV were extracted. However, for the HIV uninfected adolescents, these data were only extracted if the same HRB had also been assessed among the ALWHIV. One form was used to extract data used in meta-analysis and the other for data that was to be narratively summarized. Data abstraction was conducted by two authors (DS and PNM) independently, who then compared their results and reached consensus. Our main outcome of interest for this systematic review and meta-analysis was the prevalence of specific HRBs among ALWHIV and HIV uninfected adolescents. For studies that were exclusively conducted among ALWHIV, we computed or extracted the reported percentages of those that engaged in a specific HRB. For those that mixed HIV infection groups and/or had additional age categories besides 10-19 years, we computed percentages of those that took part in a specified HRB for each HIV group within the 10-19 years age group. For those studies where it was impossible to compute these percentages, the occurrence of HRB was reported in its original effect measure, for example, odds ratio, median, or mean. For each of the eligible studies, the assessment of the risk of bias was aided by the quality assessment tool for systematic reviews of observational studies (QATSO) [43]. The QATSO was designed for studies related to HIV prevalence or risky behaviour among men who have sex with men. It utilizes 5 parameters to obtain a total score that rates the overall quality of an observational study as either bad (0-33%), satisfactory (33-66%), or good (67-100%). These parameters include the representativeness of the sampling method used, objectivity of HIV measurement, report of participant response rate, control for confounding factors (in case of prediction or association studies), and privacy/sensitivity considerations. Each parameter is scored "1" if the condition was fulfilled and "0" if it was not.
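To make the QATSO scoring rule concrete, the following is a minimal sketch (not the authors' code) of how a rating could be computed from the five binary parameter scores described above; the handling of the exact band boundaries at 33% and 67% is an assumption, since only the ranges are given.

```python
# Illustrative sketch of QATSO scoring: five parameters, each 1 if fulfilled, 0 if not.
def qatso_rating(scores):
    """Return (percentage, rating) for a list of five 0/1 QATSO parameter scores."""
    if len(scores) != 5 or any(s not in (0, 1) for s in scores):
        raise ValueError("QATSO expects five binary parameter scores")
    percentage = 100 * sum(scores) / len(scores)
    # Band boundaries treated as: <33% bad, 33-66% satisfactory, >=67% good (assumption).
    if percentage < 33:
        rating = "bad"
    elif percentage < 67:
        rating = "satisfactory"
    else:
        rating = "good"
    return percentage, rating

# Hypothetical study fulfilling four of the five parameters
# (sampling, HIV measurement, response rate, confounding, privacy).
print(qatso_rating([1, 1, 1, 0, 1]))  # (80.0, 'good')
```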
--- Statistical Analysis. Data were synthesized both quantitatively and narratively. We assessed the variation in effect size attributable to heterogeneity using the I² statistic of the DerSimonian and Laird method. Using a random effects model, the pooled estimate was computed after Freeman-Tukey double arcsine transformation [44]. We compared the confidence intervals of the pooled estimates of the forms of HRB for the ALWHIV and HIV uninfected adolescents to determine if there were statistically significant differences. The statistical analyses were performed using Stata software (Stata Corporation, College Station, TX, 2005). We report the pooled estimates for four specific forms of HRB. These include the following: (i) Current condom nonuse behaviour (including any reported episode of sexual intercourse without a condom for any duration that includes the current period, e.g., the last 3 months or last 6 months) (ii) Risky sexual partnerships (including reports of having 2 or more sexual partners currently or in the past 12 months or any form of multiple sexual partnerships) (iii) Sexual violence (including any reported episode (experienced or perpetrated) of forced sex, nonconsensual sex, or rape) (iv) Transactional sex (including any reported exchange of gifts or money for sex). We narratively summarized the results that could not be quantitatively pooled (e.g., poor hygiene behaviour and alcohol and drug use behaviour) by describing the effect estimates such as percentages, odds ratios, means with their standard deviations, and medians with their interquartile ranges, whichever was reported in the study. --- Results We identified 1,691 published study citations from the 4 databases and an additional 2 articles [30,37] through snowballing. Of these, 220 were duplicates. We therefore screened 1,473 abstracts for initial eligibility, out of which 269 articles were identified. Full articles were obtained for these citations, of which 14 satisfied the eligibility criteria (Figure 1). The eligible studies were conducted between 1990 and 2012 in 6 sub-Saharan African countries: Nigeria, Rwanda, South Africa, Tanzania, Uganda, and Zimbabwe. Of the 14 studies, the majority came from South Africa (n = 6) and Uganda (n = 4). Most studies had a cross-sectional design; in addition, two utilized baseline data from a randomized controlled trial [13,33] and another used baseline data from a cohort study [37]. Samples of the ALWHIV per study ranged from 26 to 3,992 while those for HIV uninfected adolescents were from 296 to 6,600. Four studies [6,14,31,35] had ALWHIV recruited from a clinical setting while the rest had their ALWHIV recruited from a general population setting through household surveys and community samples. Only three studies [6,14,31] described the route of HIV transmission among their participants. In these studies, the majority (61-100%) had been perinatally infected (Table 1). All the 14 eligible studies quantified sexual risk behaviour whereas alcohol use was quantified by 42.9%, sexual violence by 50.0%, and drug use by 21.4%. One study [30] assessed genital hygiene practices among male adolescents (Table 1). Among these 5 forms of HRB, sexual risk behaviour was assessed in the most varied ways, with specific examples including condom nonuse, transactional sex, sexual violence, dry sex practices (i.e., reducing vaginal lubrication to cause more friction during intercourse), early sexual debut, and multiple sexual partnerships. Details on specific HRB are summarized in Tables 2(a) and 2(b). 
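Before turning to the pooled results, the following is a minimal sketch of the pooling procedure described in the Statistical Analysis subsection above, written in Python rather than Stata; the event counts are hypothetical, and the simple sin²(t/2) back-transformation is an approximation of the exact inverse Freeman-Tukey transform.

```python
# Illustrative sketch only: DerSimonian-Laird random-effects pooling of prevalences
# after Freeman-Tukey double arcsine transformation. Counts below are hypothetical.
import math

def ft_double_arcsine(x, n):
    """Freeman-Tukey transform of x events out of n, and its approximate variance."""
    t = math.asin(math.sqrt(x / (n + 1))) + math.asin(math.sqrt((x + 1) / (n + 1)))
    return t, 1.0 / (n + 0.5)

def dersimonian_laird(studies):
    """Pool (events, n) tuples; return pooled prevalence, 95% CI bounds and I^2 (%)."""
    t, v = zip(*(ft_double_arcsine(x, n) for x, n in studies))
    w = [1.0 / vi for vi in v]                                   # fixed-effect weights
    t_fixed = sum(wi * ti for wi, ti in zip(w, t)) / sum(w)
    q = sum(wi * (ti - t_fixed) ** 2 for wi, ti in zip(w, t))    # Cochran's Q
    df = len(studies) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0          # I^2 heterogeneity
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                                # between-study variance
    w_star = [1.0 / (vi + tau2) for vi in v]                     # random-effects weights
    t_pooled = sum(wi * ti for wi, ti in zip(w_star, t)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    back = lambda tt: math.sin(tt / 2) ** 2                      # approximate inverse transform
    return back(t_pooled), back(t_pooled - 1.96 * se), back(t_pooled + 1.96 * se), i2

# Hypothetical per-study counts of adolescents reporting condom nonuse.
print(dersimonian_laird([(55, 90), (120, 210), (30, 40), (66, 150)]))
```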
--- Sexual Risk Behaviour. Condom use behaviour was reported in 11 studies. We pooled results on current condom nonuse behaviour among ALWHIV from 9 studies and for HIV uninfected adolescents from 5 studies. The pooled prevalence of condom nonuse behaviour among ALWHIV was estimated at 59.8% (95% CI: 47.9-71.3%) while among their HIV uninfected counterparts it was 70.3% (95% CI: 55.5-83.2%) (Figure 2). In contrast, findings from an additional study that was not part of the meta-analysis [30] reported a higher prevalence of condom nonuse at first sex among ALWHIV as compared to HIV uninfected adolescents (Table 2(b)). Additionally, the pooled prevalence of engagement in any form of risky sexual partnerships among ALWHIV was 32.9% (95% CI: 15.4-53.2%) whereas among HIV uninfected adolescents it was 30.4% (95% CI: 8.4-58.8%) (Figure 3). In addition, there were four more studies capturing risky sexual partnerships that were not synthesized in our meta-analysis [11,12,34,35] (Table 2(b)). One of them explored the association between HIV status and engagement in multiple sexual partnerships while comparing adolescents to young adults (aged 20-24 years) and found no statistically significant differences [35]. The second found no significant association between HIV status and having 6 or more sex partners in the past year among males who engaged in heterosexual anal sex [34]. The remaining 2 studies documented lifetime sexual partners among the adolescents, of which one found that 4.7% of the ALWHIV compared to 1.4% of the HIV uninfected had more than 3 lifetime sexual partners [11] and the other reported a mean of 1.8 lifetime sexual partners among the ALWHIV compared to 0.7 among their HIV uninfected counterparts [12]. Transactional sex was prevalent among 20.1% (95% CI: 9.2-33.8%) of the ALWHIV and 12.7% (95% CI: 4.2-24.7%) of the HIV uninfected ones (Figure 4). Another study [34] not included in this pooled estimate found no significant association between HIV status and purchasing sex among adolescents who reported heterosexual anal intercourse. Early sexual debut among the ALWHIV was reported in 5 studies (Table 2(b)). Two of these studies reported that 25.5% [30] and 42.1% [6] of the ALWHIV initiated their first sex at the age of 15 years or less. Furthermore, a study from South Africa [13] and another from Rwanda [14] reported the median age at first sexual encounter as 14.7 (IQR: 12.9-16.2) and 17 (IQR: 15-18) years, respectively. A study among female ALWHIV reported a mean age of 16.4 (S.D: 0.1) years among the ALWHIV and 16.2 (S.D: 0.1) years among HIV uninfected adolescents at first sexual intercourse [11]. Two studies reported a 6.2% prevalence of dry sex practices (i.e., reducing vaginal lubrication to cause more friction during intercourse) among female ALWHIV. In both studies, the prevalence of dry sex practices was lower among the HIV uninfected adolescents [30,33] (Table 2(b)). Another study [31] reported a high prevalence of contraceptive non-use at either first sex (63%) or during current or previous relationships (48%) among ALWHIV (Table 2(b)). --- Alcohol and Drug Use. Six studies quantified alcohol and drug use behaviour (Table 2(b)). All 6 studies reported alcohol drinking behaviour, of which 3 compared ALWHIV and uninfected adolescents. Among the 3 studies with results for both HIV groups [11,13,33], the ALWHIV recorded a higher occurrence of alcohol drinking behaviour (Table 2(b)). 
Another study reported that 61% of ALWHIV receiving medication from a clinic had drunk alcohol within 6 hours prior to having sex [14]. In another study among males who engaged in heterosexual anal sex, HIV status was not significantly associated with having anal sex under the influence of alcohol [34]. Drug use behaviour was reported by 3 studies. One reported its occurrence among 53.8% of the male ALWHIV compared to 38.1% of their HIV uninfected male counterparts [13]. The same authors in another study [33] reported the occurrence of drug use among 5.0% of the female ALWHIV compared to 6.3% of the HIV uninfected ones. The third study reported drug use among males who had heterosexual anal sex and showed that there was not a significant association between HIV status and heterosexual anal sex under the influence of drugs [34]. --- Sexual Violence. Seven studies captured reports of various forms of sexual violence such as forced sex, tricked sex, nonconsensual sex, and rape. Six of these studies [11,12,14,30,31,33] specifically reported victims' experience of sexual violence while only one study [13], conducted among rural South African males, reported perpetrators' experience of sexual violence. The pooled prevalence of any form of sexual violence (i.e., either as a victim or as perpetrator) was 21.4% (95% CI: 16.3-27.0%) among ALWHIV, while that among HIV uninfected adolescents was 15.3% (95% CI: 8.7-23.3%) (Figure 5). Poor hygiene behaviour was documented in one study from a small mining town in South Africa, which reported that 22.5% of the male ALWHIV compared to 11.4% of HIV uninfected males did not wash their genitals at least once a day [30]. Overall, the studies were of high quality, with 10 of them rated as good and the remaining 4 [6,13,33,34] as satisfactory. Only 2 of the studies utilized nonprobability sampling [6,31], 6 did not report the participant response rate [6,12,13,33-35], and 3 did not mention how privacy or sensitivity of HIV was considered in the study [13,33,34]. --- Discussion This review indicates that research on HRB among adolescents living with HIV in SSA is still scarce. Moreover, within SSA, this research emanates from a few countries in eastern and southern Africa. The within-region variation possibly reflects disparities in HIV burden, such that most of this research has so far focused on parts of SSA with higher HIV prevalence, for example, southern Africa. However, since SSA globally accounts for the largest population of ALWHIV [2], there is an urgent need for more research on the HRB of this population. Furthermore, even among the few existing studies, important details such as the route of transmission and the adolescents' awareness of their HIV status are scarcely reported, and yet these are potential determinants of behavioural decision making [45]. The participants are also mainly drawn from general population or clinical settings. 
However, it is likely that adolescents from certain settings, for example, dwellers of fishing communities and busy transport corridors, would report a disproportionately higher burden of HRB since such settings are associated with high HIV sociobehavioural risk [46,47]. Figure 3: Prevalence of risky sexual partnerships among ALWHIV and HIV uninfected adolescents. Owing to the overlapping confidence intervals of effect estimates, our findings indicate that there is no statistically significant difference in the prevalence of documented forms of HRB across the ALWHIV and HIV uninfected adolescent groups in SSA. That said, the prevalence of these HRBs is high among both groups, which stresses a major and so far unmet need for intervention among adolescents. The consequences of HRB in terms of psychosocial burden, injury, morbidity, and mortality are enormous [48]. Moreover, for ALWHIV, these may be exacerbated by their compromised health condition coupled with their increased need for optimizing care and treatment outcomes [17,19,20]. The high occurrence of unprotected sex at both current and first sexual intercourse among these adolescents is a serious concern. This is moreover compounded by concurrent sexual partnerships, transactional sex, and sexual violence in the form of nonconsensual sex, intimate partner violence, and rape, which are comparably high among both the ALWHIV and HIV uninfected adolescents. Similar to results from this review, some cross-sectional studies from the USA have documented a high prevalence of unprotected sex of 65% [10] and 62% [9] among adolescents living with HIV. Another systematic review of studies from SSA also indicates that transactional sex is a significant risk factor for HIV infection, especially among young women [49]. Our findings on the prevalence of sexual violence are within the ranges reported among adolescent girls from SSA [50]. This burden is similar for both ALWHIV and their uninfected counterparts, but, most importantly, this is an unacceptably high burden for both groups. We suggest that the high occurrence of risky sexual behaviour, sexual violence, and other forms of potentially high-risk sexual practices such as transactional sex among ALWHIV may partly result from their vulnerable backgrounds, which are often characterized by stigma, psychological vulnerability, family stressors, poverty, and orphanhood [23,51]. Additionally, some underlying physiological pathways such as neurodevelopmental deficits, mental health problems, and HIV comorbidities may explain some behavioural trends. Furthermore, our findings reveal that the use of alcohol and drugs is particularly problematic, especially among male adolescents in SSA. Similar to our findings, a number of studies from other regions have reported a similar problem of alcohol and drug use, including among male adolescents living with HIV [10,52]. The use of alcohol and drugs among people living with HIV is linked to numerous problems like poor adherence outcomes [16], psychiatric comorbidity [53], and HIV infection [54]. 
Moreover, drug and alcohol use may form a niche for impulsivity and aggravated risk taking such as intimate partner violence, rape, and unprotected sex, among others [55,56]. Figure 4: Prevalence of transactional sex among ALWHIV and HIV uninfected adolescents. Our results highlight the need to increase research on HRB among ALWHIV in SSA, to broaden the scope of HRBs currently being explored, and to include adolescents from the most at-risk settings in such studies. Additionally, it is necessary to target ALWHIV with pragmatic interventions that address their specific needs so as to prevent or reduce their engagement in HRBs. These interventions also need to foster safe and healthy environments in which adolescents do not fall victim to HRBs and forms of sexual injustice such as sexual violence and transactional sex. One of the limitations of our review is that HRB is self-reported in all the eligible studies, and this may have involved some degree of social desirability bias. This form of bias generally arises when respondents answer questions in a way that favours their impression management [57]. However, assessment of HRB is predominantly conducted through self-reports. Additionally, our research focus was limited to studies conducted in SSA, and thus our results should be generalized to the entire African continent and to other geographical contexts only with caution. Figure 5: Prevalence of sexual violence behaviour among ALWHIV and HIV uninfected adolescents. --- Conclusion Research on HRB among adolescents living with HIV in SSA is still limited and currently focuses on a few forms of HRB, especially behaviour specific to sexual risk. Nonetheless, the existing research from this region reveals an appalling burden, especially of sexual violence (where in most cases the adolescents are victims), sexual risk behaviour, and substance or drug use. While HRB is noted to compromise health outcomes, the studies do not report a number of factors, such as route of HIV transmission and awareness of HIV status, which could enhance our understanding of the context of HRB in this patient group. Furthermore, the assessment of HRB is not uniform, pointing to the need for standardized assessment tools that would ensure better comparability of findings across studies. Nonetheless, the current review provides important insights for future research in the field of health risk behaviour and highlights the urgent need for age-appropriate interventions that will effectively address the behavioural and health needs of adolescents living with HIV in SSA. The ALWHIV themselves do not engage less in HRB than HIV uninfected adolescents. We suggest that further research is needed to explore in depth the forms of HRB and their predisposing and protective factors among ALWHIV and HIV uninfected adolescents within the SSA context. 
Such research may be crucial in guiding intervention planning for HRB and ensuring that the interventions are responsive to special needs and challenges faced by specific adolescent groups like ALWHIV, for example, stigma, depression, and orphanhood [16,17,23,24]. --- Conflicts of Interest The authors declare that there are no conflicts of interest regarding the publication of this paper.
The burden of health risk behaviour (HRB) among adolescents living with HIV (ALWHIV) in sub-Saharan Africa (SSA) is currently unknown. A systematic search for publications on HRB among ALWHIV in SSA was conducted in PubMed, Embase, PsycINFO, and Applied Social Sciences Index and Abstracts databases. Results were summarized following PRISMA guidelines for systematic reviews and meta-analyses. Heterogeneity was assessed by the DerSimonian and Laird method and the pooled estimates were computed. Prevalence of current condom nonuse behaviour was at 59.8% (95% CI: 47.9-71.3%), risky sexual partnerships at 32.9% (95% CI: 15.4-53.2%), transactional sex at 20.1% (95% CI: 9.2-33.8%), and the experience of sexual violence at 21.4% (95% CI: 16.3-27.0%) among ALWHIV. From this meta-analysis, we did not find statistically significant differences in pooled estimates of HRB prevalence between ALWHIV and HIV uninfected adolescents. However, there was mixed evidence on the occurrence of alcohol and drug use behaviour. Overall, we found that research on HRB among ALWHIV tends to focus on behaviour specific to sexual risk. With such a high burden of HRB for the individuals as well as society, these findings highlight an unmet need for age-appropriate interventions to address the behavioural needs of these adolescents.
INTRODUCTION It seems appropriate to begin this paper with this question, as it explicitly cuts across the perceptions of different cultures worldwide. (1) The determinants of health are centered on lifestyle-based characteristics that are shaped by a wide range of social, economic and political forces influencing the quality of an individual's health. (2) Among these characteristics are, at a distal level, cultural determinants, which are essential for addressing and understanding the processes of health and disease in society. (3,4) Although there is no concrete definition of cultural determinants, it is advisable first to define the concept of culture when approaching its construction. (5) Culture is a complex system of knowledge and customs that characterize a given population. (6) It is transmitted from generation to generation, and language, customs and values are part of it. We also take the WHO definition of disease, which defines it as "Alteration or deviation of the physiological state in one or more parts of the body, due to generally known causes, manifested by symptoms and characteristic signs, and whose evolution is more or less foreseeable". (7) Now, let us think of the interrelation between culture and disease, which we will understand as the interpretation of health and disease and of what it means to be healthy and sick. (8) --- DEVELOPMENT --- Concepts of health and disease may differ from one culture to another There are thousands of cultures worldwide, all with their own social determinants, governed by laws, traditions and customs that give them their cultural characterization. Thus, culture can also be thought of as a conjunction of traditional heritage, in which there is a perception of things universal in nature, be it life, death, the past, the future, health or disease. The cultural approach to all these aspects is a form of doctrinal influence: "Culture is learned, shared and standardized", which means that it can be learned and replicated individually within the cultural plane. (9) Focusing on the subject in question, disease: depending on the cultural axis, it will in some cases be regarded as a positive aspect and in others as harmful. Let us analyze this: We must think of illness as a cultural construct; the perception of illness then refers to the cognitive concepts that patients construct about their illness. Here, cultural beliefs play a fundamental role in reassurance after a negative medical test, satisfaction after a medical consultation, and patients' perceptions of illness regarding the future use of relevant services. In this line of thinking, illness perceptions influence how an individual copes with that situation (such as receiving treatment) and the emotional responses to illness. In many cultures, hospital treatments may be skipped altogether. (10) Traditional Chinese Medicine (TCM) is over 2,000 years old. It is based on Taoism and aims to restore the balance between the organism and the universe, known as yin and yang, promoting a holistic approach. This is based on the presence of Qi, and as everything is energy in different patterns of organization and condensation, humans have spiritual, emotional and physical aspects. While its treatments focus on harnessing and harmonizing imbalanced energies and maintaining or restoring the individual's homeostatic processes to prevent the onset of disease, the Western paradigm focuses primarily on treatment. 
For this reason, traditional Chinese medicine, which has proven to be safe, effective and to have few side effects, is gaining increasing importance today. (11) Given the modern, global concept of prevention and how every healthcare system is designed, one might think that a greater focus on TCM in planning would contribute significantly to its impact on individuals, but this is not yet the case. Of course, this requires training and education of healthcare professionals in the basics of TCM, and a series of adjustments to the system whose development will take years. (12) Now, let us think about incest. As everyone knows, this is regarded as an act that brings health problems to future children, who can suffer from all kinds of diseases. However, there are countries where this is allowed, not because of their respective cultures but because different cultures living together have different perceptions of health or disease. For example, Sweden is one of the countries that allows marriage between half-siblings who share the same parent. However, they must obtain special permission from the government to do so. In contrast, in some North American cultures, such relationships are prohibited and punishable by imprisonment. Those who commit these crimes could be sentenced to up to 10 years in prison if convicted. (13) Dr. Debra Lieberman, an expert in the field at the University of Miami, says that reproducing with a family member carries a greater chance of the offspring acquiring two copies of a harmful gene than reproducing with someone outside the family. The closer the genetic relationship between procreating couples, the more likely it is that harmful genes and pathogens will affect their offspring, causing premature death, congenital malformations and disease. (14) Cultural ideologies can thus contribute to these diseases. We take incest as an example, but thousands of cultures perform practices that are harmful to health. However, within those cultures, they qualify as sacred customs and initiation rites. --- Disease, health and their cultural bases Disease and health are two concepts inherent to every culture. A deeper understanding of the prevalence and distribution of health and disease in society requires a comprehensive approach that combines biological and medical knowledge of health and disease with sociological and anthropological perspectives. From an anthropological perspective, health is linked to political and economic factors that guide human relationships, shape social behavior and influence collective experience. (15) Traditional Western medicine has always assumed that health is synonymous with the absence of disease. (16) From a public health point of view, this means influencing the causes of health problems and preventing them through healthy and wholesome behavior. Within medical anthropology's approach to understanding disease, the ecocultural perspective emphasizes that the environment and health risks are mainly created by culture. (17) Culture determines the socio-epidemiological distribution of diseases in two ways: • From a local perspective, culture shapes people's behavior and makes them more susceptible to certain diseases. • From a global perspective, political and economic forces and cultural practices cause people to behave towards the environment in specific ways. (18) Our daily activities are culturally determined, and culture thus shapes our conduct by homogenizing social behavior. 
People behave based on a particular health culture, sharing fundamental principles that enable them to integrate into close-knit social systems. Social acceptance involves respecting these principles and making them clear to others. (19) --- Health in ancient Egypt The Egyptians believed death was only a temporary interruption of life and that human beings were privileged to live forever. The people who dwelt on the banks of the Nile River were born of a complex interplay between spiritual and tangible energies. However, they understood their earthly life as a fleeting reflection of the spectre that would become their eternal life. (20) The human body, organs, and instincts corresponded to what they called Keto: a being inserted into the physical world that came to life thanks to Ka, the vital force through which humans acquire their identity. Therein lies the intimate essence of what Freud called the ego. The Ba (superego), of mystical origin, was superimposed on this force, which became an ineffable union with the Creator. To this set of forces and substances that formed the subject, a name was assigned corresponding to the auditory expression of his personality. (21) In this shadow realm, sickness and death are inherent conditions of human nature, and health and sickness are mere concentrations of metaphysical dramas arising from external causes. (22) Sickness and death were believed to be caused by mysterious forces mediated by inanimate objects or by spirits, whether living or evil. They believed that the breath of life entered through the right ear, and the breath of death entered through the left ear. (23) The breath of death disturbed the harmony between man's material and spiritual parts. Between the extremes of life and death, health depended on the harmonious interaction of material and spiritual forces. (24) In turn, the severity of illness depended on the degree of disturbance of harmony. --- CONCLUSIONS In this text, we have tried to describe in a general way and with some examples how, since ancient times, people have explained various phenomena and situations surrounding the concepts of health and disease, which have played an essential role in culture and civilization. From this point of view, in ancient times, illness was the primary punishment for wrongdoing, and only fasting, humiliation and various sacrifices would be used to appease the wrath of the gods. With magical or primitive thinking, there was a relationship between the everyday world and the universe, with the sun, the moon and the supernatural world shaped by other gods and demons, which played an essential role as religious concepts in indigenous communities. From this, we can conclude that, both in ancient cultures and in the present, certain diseases are experienced in different ways owing to different ideologies, beliefs or customs. Beyond the specific cultures of each society, health and disease are determined by individual factors that influence how they are defined, the importance they acquire and the way people act on the symptoms of each disease. Finally, ending this essay with a quote that reflects the theme we have addressed seems appropriate. 
"The distribution of health and disease in human populations reflects where people live, when in history they have lived, the air they breathe and the water they drink; what and how much they eat and drink; their status in the social order and how they have been socialized to respond, identify with or resist that status; who they marry, when and whether or not they are married; whether they live in social isolation or have many friends; the amount of medical care they receive; and whether they are stigmatized when they get sick or whether they receive care from their community."
This scientific paper explores the complex relationship between culture, health, and disease, highlighting how cultural beliefs and practices shape perceptions of health and illness. Culture is described as a complex system of knowledge and customs transmitted from generation to generation, encompassing language, customs, and values. The paper emphasizes that concepts of health and disease can vary significantly across cultures. Different cultural backgrounds lead to diverse interpretations of what constitutes health or illness. Cultural beliefs influence how individuals perceive their health and respond to medical interventions. The text examines the example of Traditional Chinese Medicine (TCM), which differs from Western medicine by focusing on restoring balance and harmonizing energies within the body. The contrast between these two medical paradigms highlights the impact of culture on healthcare approaches. The paper also discusses the cultural acceptance of practices that may be harmful to health, such as incest in certain societies. These practices are considered sacred customs within those cultures, reflecting how cultural ideologies can shape disease risks. Furthermore, the paper explores how cultural factors interact with political and economic forces to create specific health risks and behaviors within societies. It emphasizes that culture plays a pivotal role in shaping human behavior and social acceptance. The paper concludes by emphasizing the enduring influence of culture on perceptions of health and disease throughout history, highlighting how cultural beliefs and practices continue to impact individuals' health experiences and outcomes.
Introduction Children in families where there is substance misuse are at high risk of poor developmental outcomes and of being placed in out-of-home care [1,2]. Most of this research has focused on the impact of parents' alcohol misuse on children [3]. Longitudinal studies have shown that parents'/grandparents' dependency on illicit drugs is positively associated with children's substance use and poor psycho-social outcomes [4]. There is a growing body of evidence about the effect of parents' use of methamphetamine on child outcomes. Prenatal methamphetamine exposure has been associated with children's externalising behavioural problems at 5 years [5]. Parents in treatment for methamphetamine use report their children are at high risk of behavioural problems [6,7]. Parents who use methamphetamine are less likely to have co-resident children than parents who use other substances [8,9]. Reports of children aged <18 years co-residing with parents who use methamphetamine vary according to the age and number of children and range from 68% in a community setting to 87.5% for those in treatment [8,10]. In Australia, amongst those in treatment for methamphetamine use, mothers are more likely than fathers to have co-resident children [8]. Crucially, compared to parents who use other substances, those who use methamphetamine are more likely to have attempted suicide; to have experienced depression, nightmares and flashbacks [8]; to have high levels of parenting and psychological distress [5,10,11]; and to have children with behavioural problems [5,10]. No published studies were found that examined the characteristics of Australian parents who primarily smoke methamphetamine and the co-residency status of their children. Two Australian longitudinal studies of consumers who use methamphetamine via any route of administration found being a parent was not independently associated with accessing professional support, reduced methamphetamine use or abstinence [12,13]. Instead, parents' service utilisation was associated with co-morbidity (e.g. mental health) and increased risk of methamphetamine-related harms [12]. Little is known about how to support parents who primarily smoke methamphetamine and are not seeking treatment. To assess the needs and risks in these families, we need to understand their characteristics and living circumstances. The aim of this study was to quantify and describe the socio-demographic, psychosocial, mental health, alcohol and methamphetamine use characteristics of parents in a cohort of participants who primarily smoke methamphetamine. We specifically examined whether these characteristics differed by parental status, gender or residential status of child/ren. --- Method --- Study design and sampling Data come from baseline surveys administered to a community-based prospective sample of consumers who primarily smoked methamphetamine (the 'VMAX Study'). The cohort was recruited via a combination of convenience, respondent-driven [14] and snowball sampling methods. Eligible participants included those who: were aged >18 years; primarily smoked and used methamphetamine at least monthly in the previous six months; and lived in metropolitan or rural Victoria. Methamphetamine dependence was assessed using the Severity of Dependence Scale (SDS); a score of >4 is indicative of methamphetamine dependence [15]. 
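As a concrete illustration of the eligibility rules and the SDS dependence threshold just described, the following is a minimal sketch with hypothetical field names (not the VMAX instruments); the "at least monthly use over six months" criterion is proxied here by a simple occasion count, which is a simplification of the actual screening questions.

```python
# Illustrative sketch only: eligibility screen and SDS dependence classification.
from dataclasses import dataclass

@dataclass
class Screen:
    age: int
    primary_route_smoking: bool     # primarily smokes methamphetamine
    use_occasions_past_6_months: int  # proxy for "at least monthly" use (assumption)
    lives_in_victoria: bool         # metropolitan or rural Victoria
    sds_score: int                  # Severity of Dependence Scale total

def is_eligible(s: Screen) -> bool:
    """Eligibility as described: aged >18 years, primarily smokes methamphetamine,
    uses at least monthly over the previous six months, and lives in Victoria."""
    return (s.age > 18
            and s.primary_route_smoking
            and s.use_occasions_past_6_months >= 6
            and s.lives_in_victoria)

def methamphetamine_dependent(s: Screen) -> bool:
    """An SDS score of >4 is taken as indicative of methamphetamine dependence [15]."""
    return s.sds_score > 4

participant = Screen(age=27, primary_route_smoking=True,
                     use_occasions_past_6_months=10,
                     lives_in_victoria=True, sds_score=7)
print(is_eligible(participant), methamphetamine_dependent(participant))  # True True
```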
The Patient Health Questionnaire (PHQ-9) and the Generalised Anxiety Disorder (GAD-7) instruments were used to measure depression and anxiety [16], and the Alcohol Use Disorders Identification Test-Consumption (AUDIT-C) measured harmful alcohol use [17]. Data were collected via face-to-face interviews and entered directly into a mobile device using REDCap software [18]. --- Statistical analyses Variables with a significant association (p < 0.05) in univariable analysis were entered into multivariable logistic regression analyses to estimate associations between (1) participants' parental status (no children, children) or (2) fathers' or (3) mothers' child/ren's co-residential status (at least 1 co-resident child, no co-resident children) and socio-demographic, psychosocial, mental health, methamphetamine dependence and harmful alcohol use variables. For the univariable and multivariable analyses, reported results are (adjusted) odds ratios, 95% confidence intervals and probability values. Univariable analyses excluded missing cases for each independent variable. Adjusted multivariable logistic regression analyses used a complete-case approach for missing data (n=1). All statistical analyses were undertaken using the SPSS statistical software package [19]. --- Ethics The study was approved by the Alfred Hospital and Monash University Human Research Ethics Committees. Written informed consent was obtained prior to enrolment in the study. Consistent with best practice in alcohol and other drug-related research, participants were reimbursed $40 [20]. --- Results Of the 744 participants, 394 (53%) were parents. In multivariable Model 1 (Table 1), participants were significantly more likely to be parents if they were older, female, lived outside a major city, identified as Aboriginal/Torres Strait Islander, were in a married/de facto relationship, had a Year 10 education or less, had suffered physical violence in the last six months, or did not have an alcohol use disorder. Of the 394 parents, 297 (76%) had no co-resident children. Only 12% (28/233) of fathers had at least one co-resident child. In multivariable Model 2, fathers who were in a married/de facto relationship, had a weekly income above $399, and had not experienced violence in the previous six months were significantly more likely to have at least one co-resident child. Close to half (43%, 69/160) of mothers had at least one co-resident child. In multivariable Model 3, mothers who had a weekly income above $399, had not been homeless in the last 12 months, and had not utilised professional support for their methamphetamine use in the last 12 months were significantly more likely to have at least one co-resident child. Table 1 footnotes: c) Modified Monash Model geographical classification system of metropolitan, regional, rural and remote areas in Australia (https://www.health.gov.au/health-workforce/health-workforceclassifications/modified-monash-model) d) Missing data for 1 participant ('don't know') e) Missing data for 6 participants (2 'don't know', 4 'not applicable') f) At least one period of homelessness in the last 12 months.
g) Experienced any kind of physical violence in the last 6 months; missing data for 1 participant (refused) h) SDS - Severity of Dependence Scale, where scores >4 are classified as methamphetamine dependent i) AUDIT-C alcohol screen, where scores of ≥4 for males and ≥3 for females are classified as an alcohol use disorder j) GAD-7 - Generalised Anxiety Disorder scale k) PHQ-9 - Patient Health Questionnaire depression scale l) Ever utilised alcohol-and-other-drug services for methamphetamine use: individual/group drug counselling, residential/outpatient detoxification, residential rehabilitation. Excludes pharmacotherapy. --- Discussion Our study is one of few to examine the characteristics and child/ren's residential status of a community-based sample of parents who primarily smoke methamphetamine. Participants who were parents were more likely to report disadvantage and harm. Seventy-six percent of parents (88% of fathers and 57% of mothers) had no co-resident children; that is, all their children lived elsewhere. When compared to the findings of studies where child co-residency is based on one or more non-co-resident children, these results are concerning. An Australian residential treatment data study of child/ren's co-residency reported the proportion of parents with at least one non-co-resident child (i.e. not all children) at 83-88% [8]. Similarly, 68% of parents in a US community-based study reported having at least one non-co-resident child [10]. We found mothers who had non-co-resident children were more likely to access treatment for their methamphetamine use than those who resided with children. This is consistent with previous studies of parents who use or access treatment for methamphetamine; they are less likely to have co-resident children [8,9,12]. This finding was the same for both having ever and having recently (past year) accessed treatment for methamphetamine use. In light of previous research [21], it could be that mothers who access treatment may, in part, do so to be reunified with their children. Conversely, mothers who have co-resident children may perceive they have less 'need' for services, or be concerned about losing custody of their children if they seek services [21,22]. Compared to other children, those whose parents misuse any substances are at increased risk of poorer academic, behavioural, emotional and social outcomes [2]. However, women who use methamphetamine and access services face the stigma of being a mother who uses methamphetamine [23], and little is known about the role of treatment services in preventing child custody loss [24]. Further research is needed to determine how mothers who use methamphetamine and have co-resident children can be supported to seek services whilst ensuring the wellbeing of their child/ren. In our study, depression and anxiety scores were not significantly different between those with/without children, nor between parents with/without co-resident children, but were very high compared to those reported in the 2017-18 Australian health survey for the general Australian population [25]. This highlights the importance of mental health support and comprehensive primary health care services for parents who use methamphetamine, and for their children. There were limitations to the study. We did not ascertain the age of children. This may, in part, explain our findings.
To account for this, we compared the sample by parent- and child-resident status with estimates from an age- and sex-adjusted representative sample of the Australian population [26] and estimated that more than 90% of parents in our study would have at least one child under the age of 18 years. Parents who use methamphetamine were as likely as the general population to have children, but were far less likely to have co-resident children (12% compared to 75% of the general Australian population of the same age and gender). The data are cross-sectional, so causality cannot be inferred. The sample was not a representative sample; therefore, the generalisability of findings may be limited. Self-reported data are subject to recall and social desirability biases. The number of fathers with co-resident children was relatively small (n=28), which limited the estimation of smaller but nonetheless clinically meaningful effects. Follow-up with this prospective cohort will afford opportunities to explore the age, sex and ongoing residential status of participants' children. In the context of parents' substance use, data linkage over a five-year period will provide additional insights into parents' service utilisation. --- Conclusion Study findings provided new information regarding the high number of non-co-resident children and the need for accessible support and services for parents who use methamphetamine. Further research is needed to identify optimal ways of supporting these families.
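For readers who want to see the shape of the analysis plan described in the Statistical analyses section, the sketch below mirrors it in Python with statsmodels rather than SPSS. It is illustrative only: the data file, the column names (parent, sds_score, auditc, gender, region, relationship) and the derived variables are hypothetical assumptions, and this is not the authors' code.

```python
# Minimal sketch of the reported workflow: univariable screening at p < 0.05,
# then a multivariable logistic regression reported as adjusted ORs with 95% CIs.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("vmax_baseline.csv")  # hypothetical baseline survey extract

# Screening classifications using the cutoffs given in the Table 1 notes:
# SDS > 4 -> methamphetamine dependent; AUDIT-C >=4 (males) / >=3 (females) -> alcohol use disorder.
df["meth_dependent"] = (df["sds_score"] > 4).astype(int)
df["alcohol_disorder"] = np.where(df["gender"] == "female",
                                  (df["auditc"] >= 3).astype(int),
                                  (df["auditc"] >= 4).astype(int))

candidates = ["age", "C(gender)", "C(region)", "C(relationship)",
              "meth_dependent", "alcohol_disorder"]

# Step 1: univariable screening -- keep predictors with any p-value below 0.05.
retained = []
for term in candidates:
    uni = smf.logit(f"parent ~ {term}", data=df).fit(disp=0)
    if uni.pvalues.drop("Intercept").min() < 0.05:
        retained.append(term)

# Step 2: multivariable model (formula interface drops incomplete rows, i.e. complete-case).
multi = smf.logit("parent ~ " + " + ".join(retained), data=df).fit(disp=0)
adjusted_or = np.exp(multi.params).rename("aOR")
conf_int = np.exp(multi.conf_int()).rename(columns={0: "2.5%", 1: "97.5%"})
print(pd.concat([adjusted_or, conf_int], axis=1))
```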
Introduction: Children in families where there is substance misuse are at high risk of being removed from their parents' care. This study describes the characteristics of a community sample of parents who primarily smoke methamphetamine and their child/ren's residential status. Design and methods: Baseline data came from a prospective study of methamphetamine smokers ('VMAX'). Participants were recruited via convenience, respondent-driven and snowball sampling. Univariable and multivariable logistic regression analyses were used to estimate associations between socio-demographic, psychosocial, mental health, methamphetamine dependence and harmful alcohol use characteristics and (1) parental status and (2) fathers' or mothers' child/ren's co-residential status. Results: Of the 744 participants, 394 (53%) reported being parents. Of these parents, 76% (88% of fathers, 57% of mothers) reported no co-resident children. Compared to parents without co-resident children, fathers and mothers with co-resident children were more likely to have a higher income. Fathers with co-resident children were more likely to be partnered and not to have experienced violence in the previous six months. Mothers with co-resident children were less likely to have been homeless recently or to have accessed treatment for methamphetamine use. Discussion: The prevalence of non-co-resident children was much higher than previously reported in studies of parents who use methamphetamine, irrespective of whether parents were in or out of treatment. There is a need for accessible support and services for parents who use methamphetamine, irrespective of their child/ren's co-residency status. Conclusions: Research is needed to determine the longitudinal impact of methamphetamine use on parents' and children's wellbeing and to identify how parents with co-resident children (particularly mothers) can be supported.
Introduction Food insecurity means having limited or uncertain access, in socially acceptable ways, to an adequate and safe food supply that promotes an active and healthy life for all household members, while hunger refers to the physiological responses of the body to food insecurity [1]. The U.S. Department of Agriculture Economic Research Service (USDA-ERS) developed the 10-item Adult Food Security Survey Module (AFSSM) and the extended 18-item Household Food Security Survey Module (HHFSSM) to measure the percentage of U.S. adults and households, respectively, that experience food insecurity at some time during a given year [2]. Survey questions focus on the quantity, affordability, and quality of the available food supply, and are worded such that they distinguish between high food security (no reported indications of food-access problems or limitations), marginal food security (one or two reported indications, typically of anxiety over food sufficiency or shortage of food), low food security (reduced quality, variety, or desirability of diet, with little or no indication of reduced food intake), and very low food security (multiple indications of disrupted eating patterns and reduced food intake). In 2016, 12.3% of U.S. households, accounting for 41.2 million people, were food insecure, of whom 10.8 million experienced very low food security. Food insecurity has been linked to adverse health and developmental outcomes in infants, children, and adolescents [5,8] and to compromised physical, cognitive, and emotional functionality in persons of all ages [9-12]. Additionally, epidemiologic data have linked food insecurity among adults to obesity, type 2 diabetes, and the metabolic syndrome, sometimes termed the "hunger-obesity paradox" [13-15]. A variety of food assistance programs are available in the U.S. at the federal, state, and community levels to aid persons living with food insecurity [1,16]. Additionally, food insecure individuals use a variety of coping strategies to access food, including: selling personal possessions; saving money on utilities and medications; bartering; holding multiple part-time jobs; planning menus and cutting food coupons; purchasing less expensive, energy-dense foods to eat more and feel full; eating more than usual when food is plentiful; stretching food to make it last longer; selling their blood; dumpster-diving; participating in research studies; and stealing food or money [17-19]. Research findings from post-secondary U.S. campuses indicate that college students are among the population groups vulnerable to food insecurity [20], with reported rates ranging from 14.8% at an urban university in Alabama [21] to 59.0% at a rural university in Oregon [22]. Among the correlates associated with college student food insecurity are: lower grade point average [22,23], on-campus residence [24], living off-campus with roommates [25], being employed while in school [22], older age, receiving food assistance, having lower self-efficacy for cooking cost-effective, nutritious meals, having less time to prepare food, having less money to buy food, identifying with a minority race [21], and an increased risk for depression, anxiety, and stress [26,27]. Although considerable evidence indicates that college student food insecurity is a public health problem associated with unfavorable health and academic outcomes [20], searches in PubMed, ScienceDirect, and Google Scholar located only one peer-reviewed article that studied this problem among freshmen [27].
These authors measured food insecurity among 209 freshmen living in dormitories on a southwestern campus and reported that 32% had experienced inconsistent food access in the previous month and 37% in the previous 3 months. Additionally, these young students had higher odds of depression, and lower odds of consuming breakfast, perceiving their on-campus eating habits as healthy, and receiving food from parents. The authors concluded that there is a need for interventions to support food insecure students, given that food deprivation is related to various negative outcomes. Since these findings suggest that freshmen, like older college students, may be risking their health and academic success because of food insufficiency, more research is needed that assesses the scope of this problem among first-year college students and identifies predisposing factors and coping behaviors. Accordingly, the aims of this cross-sectional study were to measure the prevalence of family and campus food insecurity and identify correlates among a nonprobability sample of freshmen attending a university in Appalachia, and to compare food insecure and food secure families and freshmen on these correlates. The study site was a university located in a region of western North Carolina with high rates of poverty, obesity, and food insecurity [28,29]. --- Methods --- Participants and Recruitment A computer-generated randomized sample drawn from all freshmen enrolled during the spring 2017 semester (n = 2744) was sent electronic recruitment letters, followed by reminder emails 1 and 2 weeks later [30], which included a link to the questionnaire. Interested students clicked on a link that took them to a screen that outlined the elements of informed consent, and those who wished to proceed clicked an "accept" button that took them to the questionnaire. Upon completion, students could click on a link to a screen where they typed their name and email address to enter a drawing for one of two $100.00 gift cards to Amazon.com. This link was detached from the questionnaire link to ensure confidentiality of responses. This research was approved by the Office of Research Protections at the university. --- Survey Questionnaire Data were collected using a cross-sectional, anonymous, online questionnaire administered using Qualtrics survey software (Qualtrics, November 22, 2015, Provo, UT). Initial close-ended questions elicited the following types of information: demographic and anthropometric [gender, age, race, family composition, and self-reported weight and height for calculating body mass index (BMI)], economic (employment status, personal monthly income, financial aid status, and meal plan participation), and academic [year in school, enrollment status, on- or off-campus residence, grade point average (GPA), and academic progress]. Academic progress was assessed using an Academic Progress Scale on which the students self-rated their transition to college, overall progress in school including graduating on time, class attendance, attention span in class, and understanding of concepts taught by selecting either "poor," "fair," "good," or "excellent." Food security status was measured using the 10-item USDA AFSSM, which was completed for the family and campus settings [2]. Next the students responded to a "yes/no" item asking whether they believed their access to food had worsened since starting college.
Those who selected "yes," checked, from the following reasons, those that they believed explained this change: I don't have enough money to buy food, my meal plan card runs out too soon, I often spent money on nonfood items rather than using the money to buy food, I have trouble budgeting my money, and I spend money when I shouldn't because I want to be included in social activities with my friends. Their money spending behaviors were assessed using a Money Expenditure Scale that asked the students to estimate how often they spent money on the following items instead of using the money to buy food by selecting either "never," "sometimes," or "often,": alcohol, cigarettes, recreational drugs, car repairs, gasoline, entertainment, tattoos or piercings, prescription medications, make-up and fashion, and school fees. They also checked, from a scrambled list of 17 positive and negative descriptors, those that best reflected how they felt about their food security status on campus, (e.g., satisfi ashamed, secure, frustrated, etc.). Coping behaviors for accessing food were identified using a Coping Strategies Scale focusing on saving (n = 7 items), social support (n = 8 items), direct access to food (n = 10 items), and selling personal possessions (n = 2 items). This scale was completed once for the family setting and again for the campus setting by checking all of the strategies used at each location. The students rated their eating habits since starting college by selecting either "very unhealthy," "unhealthy," "healthy," or "very healthy," and they rated their health status by selecting either "poor," "fair," "good," or "excellent." Follow-up questions assessed their meal skipping and food consumption behaviors for the campus location only. Meal skipping was assessed using a Meal Skipping Scale that asked how often the students skipped breakfast, lunch, and dinner with the response options "never," "seldom," "most days," and "always." Food consumption data were collected with questions asking approximately how many days/week, on a scale from 0 (zero) to 7, they consumed fruits/juice, vegetables/juice, fast foods, and sweets. The final two items concerned sources of social support for accessing food on campus. The students checked, from a list of 13 sources (e.g., parents, campus food pantry, etc.), those that had provided them with food assistance, and checked, from a list of 12 policies and learning activities (e.g., more financial aid from school, learn how to shop for affordable, healthy food, etc.), those they believed would help them improve their access to food. The Coping Strategy Scale was compiled with guidance from the food security literature [17][18][19], while the Academic Progress, Meal Skipping, and Money Expenditure scales were developed by the authors. Content validity of all items was determined by two nutrition professors with experience in questionnaire construction and familiarity with the food security literature. The questionnaire was pilot tested online with a computergenerated randomized sample of 50 freshmen who did not participate in the fi al study. Student feedback indicated that the links and buttons operated accurately and that the screens displayed an appropriate amount of items. Pilot test data prompted deletion of items from the Coping Strategies Scale and addition to items on the Money Expenditure Scale. --- Statistical Analyses Data were analyzed using SPSS version 24 (IBM, SPSS Statistics, 2016). 
The students' food security status was measured using the USDA/ERS scoring scheme for the 10-item AFSSM, such that zero affirmative answers reflected high, 1-2 marginal, 3-5 low, and 6-10 very low food security. Students who scored 0-2 points were classified as food secure, and those who scored 3-10 points as food insecure [2]. The single item concerning perceived health status and the five-item Academic Progress Scale were scored by allotting 1 point to the "poor" and 4 points to the "excellent" responses. The Meal Skipping Scale was scored by allotting 1 point to the "never" and 4 points to the "always" responses, and the Money Expenditure Scale was scored by allotting 1 point to the "never" and 3 points to the "often" responses. Descriptive statistics were obtained for sociodemographic and behavioral variables. Correlational analyses measured associations between AFSSM scores and sociodemographic and behavioral variables, and independent-samples t-tests and chi-square analyses compared food insecure and food secure students on these variables. Findings concerning coping strategies and sources of social support were reported only for the food insecure students and their families, in accord with the food security literature [17-19]. Statistical significance was set at p < .05. --- Results --- Participant Characteristics Questionnaires were submitted by 494 of the 2000 recruited freshmen, of whom 38 were disqualified due to insufficient data, resulting in a sample of 456 participants comprising 22.8% of those recruited. Table 1 summarizes the characteristics of the food secure and food insecure freshmen separately, and for the entire freshman sample. The gender distribution of the overall sample was about one-quarter male and three-quarters female. Their mean age was 18.5 years (± 1.04, range 18-33). More than three-fourths of the freshmen identified as white, not of Hispanic origin, reflecting the low level of racial diversity at the university, and about three-fourths were from two-parent households. Findings related to campus life indicated that almost the entire sample were full-time students, lived on campus, and participated in a university meal plan. Economic data indicated that approximately two-thirds of the freshmen received financial aid, about three-fourths were unemployed, and their mean personal monthly income was $83.22 (± $259.35). The students' mean BMI (calculated from self-reported height and weight data) was 23.5 kg/m² (± 4.44, range 14.7-45.8); about three-fourths of the students were underweight or normal weight by BMI and about one-fourth were overweight or obese. When rating their eating habits since starting college, about 40% of the freshmen chose the "very unhealthy" or "unhealthy" responses while approximately 60% chose the "healthy" or "very healthy" responses, and when rating their health status, approximately 20% chose the "poor" or "fair" responses while about 80% chose the "good" or "excellent" responses. --- Family Food Insecurity The AFSSM scores indicated that 32 freshmen (7.1%) had experienced food insecurity at home during the year before starting college, while 424 (92.9%) were from food secure families. Gender-based comparisons revealed that 9.5% of the males and 6.3% of the females were from food insecure families.
Additionally, 56% of the food insecure and 78.5% of the food secure students were from two-parent families, and 75% of the food insecure and 86.8% of the food secure students were white, not of Hispanic origin. The mean Coping Strategies Scale score for the 32 food insecure families was 2.3 (± 3.1, range 0-18) out of a possible 27 points. There was a significant correlation between family AFSSM scores and their scores on this scale (r = .52, p < .01), such that families experiencing more severe food insecurity used a greater number of coping strategies for accessing food. Table 2 shows the frequency counts and percentages, in descending order, of coping strategy use by food insecure families and by food insecure freshmen on campus. (Questions were modified to apply to a family or school setting, so some questions were only applicable in one of the settings; a comparable question was asked for the other setting.) The strategies used most often by the food insecure families were: stretched food to make it last longer (72.9%), purchased cheap, processed food (68.8%), and cut out food coupons (65.6%). A comparison of the rates of family and campus food insecurity revealed a significantly higher proportion of food insecure freshmen on campus (p < .01). Additionally, 14 (43.8%) of the 32 freshmen who had experienced food insecurity at home were also food insecure on campus. When comparing their food security status at home and on campus, 42.5% of the freshmen who experienced campus food insecurity believed that their access to food had worsened since starting college, and the most important reasons they believed explained this change were: my meal plan card runs out too soon (15.3%) and I often spend money on nonfood items rather than using the money to buy food (13.3%). --- Comparisons of Food Insecure and Food Secure Students on Campus The AFSSM scores indicated that 98 freshmen (21.5%) were food insecure at some point during their first year of college, and 358 (78.5%) were food secure. Among the food insecure freshmen, 24.3% were males and 74.9% were females. The nonfood items purchased "often" by the food secure students included: entertainment (17.7%), school-related fees (16.6%), and make-up and fashion (14.3%). The correlation between the students' AFSSM scores and their Money Expenditure Scale scores trended toward significance (r = .09, p = .06), suggesting that the more frequently the students spent money on nonfood items, the more severe was their level of food insecurity. The terms most often chosen by the food insecure freshmen to describe their feelings about their food access on campus were: fine/okay (22.4%), anxious (16.3%), worried (12.2%), and frustrated (12.2%), while those chosen most often by the food secure students were: fine/okay (21.9%), satisfied (21.6%), and secure (20.2%). The findings concerning self-assessed eating habits since starting college and perceived health indicated that a greater proportion of food secure students (60.7%) than food insecure students (43.9%) regarded their eating habits as "healthy" or "very healthy" (p < .01), and that a greater proportion of food secure students (86.0%) than food insecure students (71.4%) perceived their health status as "good" or "excellent" (p < .01). A significant difference emerged between the mean Meal Skipping Scale scores of the food insecure and food secure students, respectively (5.8 ± 1.60, range 3-10 vs.
6.3 ± 1.41, range 3-9; p < .01) out of a possible 12 points, indicating that the food insecure students tended to skip fewer meals. Breakfast was the meal most often skipped by both food insecure (62.3%) and food secure (52.4%) students. Food consumption data indicated that food insecure and food secure students, respectively, consumed fruits/juice an average of 4.8 versus 4.7 days/week, vegetables/juice 4.9 versus 4.8 days/week, fast food 3.9 versus 4.1 days/week, and sweets 3.9 versus 4.2 days/week. No significant differences emerged between any of these mean food consumption scores. --- Coping Strategies and Sources of Support Used by Food Insecure Students on Campus The mean Coping Strategies Scale score for the 98 freshmen who experienced food insecurity on campus was 1.0 points (± 1.6, range 0-14) out of a possible 27 points, and a significant positive correlation emerged between the students' AFSSM scores and their scores on this scale (r = .26, p < .05), indicating that students who experienced more severe food insecurity used a greater number of strategies for accessing food. The three most frequently used strategies were: purchased cheap, processed food (18.4%), stretched food to make it last longer (16.3%), and shared groceries and/or meals with relatives, friends, or neighbors (15.3%). These food insecure freshmen identified the following sources as those that had offered the most help in accessing food at school: parents (28.6%), friends (15.3%), and boyfriend or girlfriend (8.2%). They also identified the following items as those they thought would be most helpful in improving their food access: part-time or full-time job (19.4%), more affordable meal plan (18.4%), learn how to manage their money and make a budget (13.3%), learn how to shop for affordable, healthy food (12.2%), and learn how to eat healthy (11.2%). --- Discussion The freshmen in this study experienced food insecurity at a rate that was three times higher on campus compared to when they lived at home, suggesting that the problem of college student food insecurity begins during the freshman year. The present findings support those of Bruening [27] in documenting a high rate of food insecurity among first-year college students and in identifying associated health concerns. The present findings also add to the ample evidence from U.S. post-secondary campuses that college student food insecurity is a public health problem [20] that could compromise the students' mental and physical health [9,11-15] and possibly jeopardize their academic success [22,23]. Accordingly, in the present study, smaller proportions of food insecure than food secure freshmen assessed their health status as either "good" or "excellent." Additionally, the food secure freshmen earned a significantly higher mean score on the Academic Progress Scale, suggesting that, for the food insecure students, their transition to college, class attendance, attention span in class, and ability to understand concepts taught may have been adversely impacted by the discomforts associated with hunger. The considerably lower rate of family than campus food insecurity reported by the freshmen may have been partially attributable to parental coping strategies intended to protect their children from food deprivation at home; once their children moved away, these protective measures were more difficult to implement.
Examples of such parental "buffering" activities reported in the food security literature include asking relatives for money and stretching meals to mitigate family food shortages [31,32]. Similar familial coping strategies were identified by the food insecure freshmen for the year before starting college, i.e., stretching food to make it last longer and purchasing cheap, processed food. Subsequently, these same practices were used by the students on campus. Such dietary practices, likely learned at home, suggest that at times these students avoided the discomforts of hunger by consuming diets featuring foods high in fats and simple carbohydrates and low in protein, micronutrients, and fiber. Regular consumption of such energy-dense diets is risky, since such eating habits could compromise the students' nutrient reserves and increase their risk for overweight and obesity in the long term [13-15]. This speculation is supported by the findings that the food insecure freshmen, like their food secure peers, did not consume fruits or vegetables on a daily basis, consumed fast foods and sweets at least 3 days per week, and frequently skipped meals. Although such eating habits have been widely reported for college students in general [33,34], in the present study smaller proportions of food insecure than food secure students regarded their eating habits since starting college as either "healthy" or "very healthy." The unhealthy dietary practices of the food insecure students in particular are of concern because these behaviors may have, in some instances, been due to food scarcity rather than to personal food preferences and busy lifestyles. In this regard, a greater proportion of food insecure than food secure freshmen believed that their food access had worsened since starting college. This belief was reflected in the terms these students chose to describe their feelings concerning their food situation on campus, i.e., anxious, worried, and frustrated. Perhaps the fine/okay descriptor was chosen most frequently because of a reluctance to admit that they were unable to access as much food as they would like, or a reluctance to complain about their food situation. Two of the most frequently reported reasons for their worsening food security concerned financial constraints, i.e., the monetary value of their meal card ran out too soon and they lacked money to buy food. Similar findings were reported for food insecure college students in Alabama and Oregon, respectively [21,22]. It is also possible that the students' misuse of their limited funds may have played a significant role in their declining food access, given that they "often" spent money on nonfood items rather than using the money to buy food. To illustrate, 21% of the food insecure freshmen reported that they "often" spent money on entertainment. The findings from this study indicate that the participating freshmen need, and have asked for, various kinds of assistance to improve their food access and diet quality. For example, the students requested learning opportunities that would teach them how to manage their money, make a budget, purchase nutritious, affordable foods (whether using their meal cards on campus or using personal funds on or off campus), and make healthy food choices. They also suggested policies and programs they believed would improve their food access on campus, i.e., more part-time and full-time jobs and more affordable meal plans.
Community health professionals, including Registered Dietitians, social workers, and health educators, are uniquely qualified to make positive contributions toward decreasing food insecurity and hunger among these young adults by implementing interventions and engaging in policy advocacy that address these student concerns. Additionally, offering similar programs to parents from food insecure households in community settings might assist these parents to provide healthy daily meals to their families. Lohse et al. [35] found that participation in such interventions enhanced the food budgeting and healthy meal planning skills of food insecure women. --- Study Limitations and Strengths This study had limitations that prevent the generalizability of the findings to the population of U.S. college freshmen, i.e., use of a nonprobability sample, data collection on a single campus located in a rural county, self-reporting of all measures, and overrepresentation of females and white students. Additionally, the small number (n = 32) of freshmen who reported family food insecurity made it difficult to identify relationships between family food security status and other correlates. This small number may have been attributable to the students' reluctance to disclose family food insecurity out of concern that their parents would be perceived as negligent or incapable, despite the anonymity of their responses. Nevertheless, the present findings add to the growing evidence that food insecurity is a serious health problem among freshmen and their families that deserves further study. For example, more research is needed with larger, more diverse samples in urban and rural communities to glean a better understanding of the scope of the problem and contributing factors in family and school settings. Research is also needed that evaluates the effectiveness of campus and community food assistance programs such as food pantries, to determine whether they are being used by needy freshmen and their families and whether the food offerings are of a quality that promotes healthy families. --- Conflict of interest The authors declare that they have no conflicts of interest. Ethical Approval This research was not funded, and approval was obtained from the Office of Research Protections at the university prior to data collection. Informed Consent An informed consent letter was included in the questionnaire prior to the first item. Students who did not wish to participate after reading this letter could exit the questionnaire by clicking on an "exit" button.
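To make the scoring and comparison rules described in the Methods and Statistical Analyses sections above concrete, the sketch below applies the USDA AFSSM classification (0 affirmative answers = high, 1-2 = marginal, 3-5 = low, 6-10 = very low food security; 0-2 = food secure, 3-10 = food insecure) and a chi-square comparison of food insecure versus food secure students. It is a minimal illustration, not the study's SPSS analysis; the data frame and column names are hypothetical.

```python
# Sketch of AFSSM scoring and a food insecure vs. food secure chi-square comparison.
import pandas as pd
from scipy.stats import chi2_contingency

def afssm_category(affirmatives: int) -> str:
    """Map the number of affirmative AFSSM responses (0-10) to a food security category."""
    if affirmatives == 0:
        return "high"
    if affirmatives <= 2:
        return "marginal"
    if affirmatives <= 5:
        return "low"
    return "very low"

def is_food_insecure(affirmatives: int) -> bool:
    """Food insecure if 3 or more of the 10 AFSSM items are answered affirmatively."""
    return affirmatives >= 3

# Hypothetical survey data: one row per student, afssm_campus = number of affirmative items.
students = pd.DataFrame({
    "afssm_campus": [0, 4, 1, 7, 2, 3],
    "health_good_or_excellent": [True, False, True, False, True, True],
})
students["category"] = students["afssm_campus"].apply(afssm_category)
students["food_insecure"] = students["afssm_campus"].apply(is_food_insecure)

# Chi-square test of perceived health status across food security groups,
# analogous to the paper's food insecure vs. food secure comparisons.
table = pd.crosstab(students["food_insecure"], students["health_good_or_excellent"])
chi2, p, dof, expected = chi2_contingency(table)
print(students[["afssm_campus", "category", "food_insecure"]])
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")
```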
Food insecurity means having limited or uncertain access, in socially acceptable ways, to an adequate and safe food supply. Ample evidence has identified college students as vulnerable to this problem, but little research has focused on freshmen. This cross-sectional study examined family and campus food insecurity among freshmen at a university in Appalachia. An online questionnaire contained sociodemographic items and scales that measured food security status, academic progress, coping strategies for accessing food, and social support. T-tests and chi-square analyses compared food insecure and food secure students. Statistical significance was set at p < .05. Participants were 456 freshmen, 118 males (26%) and 331 females (73%). Family and campus food insecurity were experienced by 32 (7.1%) and 98 (21.5%) of the freshmen, respectively, and 42.5% of those who experienced campus food insecurity believed their food access had worsened since starting college. Family and campus coping strategies, respectively, included stretching food (72.9 vs. 18.4%) and purchasing cheap, processed food (68.8 vs. 16.3%). Food secure students scored significantly higher on self-rated measures of academic progress (p < .01), and greater proportions of food secure students perceived their eating habits since starting college as "healthy/very healthy" (60.7 vs. 43.9%, p < .01) and perceived their health status as "good/excellent" (86.0 vs. 71.4%, p < .01). Students requested assistance with job opportunities (19.4%), affordable meal plans (18.4%), money management (13.3%), and eating healthy (11.2%). Findings suggest that college student food insecurity begins during the freshman year, and that there is a need for campus and community-based interventions to increase food access among these freshmen and their families.
Introduction This article presents information about the social, legal and medical issues that medical and non-medical practitioners in the UK should consider in order to signpost options for people living with HIV (PLWH) who are not in a heterosexual relationship and want to become parents. (The legal content is UK-wide, since it derives from the UK Human Fertilisation and Embryology Act 2008; other aspects, such as NHS funding for fertility treatment, may vary between UK countries.) Despite significant medical advances, increased medical awareness amongst HIV practitioners, and the ability to live a full life with HIV, stigma still exists around PLWH wanting to have children. There is a lack of awareness amongst the general public and the non-specialist medical community about the realities of living with HIV and the options available to become a parent. Vertical transmission rates in the UK are very low (<0.5%) [1]. Despite this, even amongst PLWH it is evident that stigma surrounding parenting with HIV is real, with almost 50% of HIV-positive respondents in a European study saying that having HIV would be a barrier to them deciding to have a family [2]. Irrespective of their sexual orientation, HIV-positive parents and prospective parents may bear not only the brunt of an historical HIV stigma, but also the negative discourses that surround lesbian, gay, bisexual or transgendered/gender diverse (LGBT) parenting, despite the legal advances over the past decade. First steps to breaking down this stigma are to increase public awareness of the realities of living with HIV, and awareness among PLWH that being a parent is an option for them. In 2016 in London, the UNAIDS 90-90-90 target was achieved for the first time. England came close to meeting that target, with 88% of those living with HIV diagnosed, 96% of those diagnosed on HIV treatment, and 97% of those on treatment having an undetectable viral load [3]. Most PLWH taking antiretroviral medication therefore have undetectable levels of HIV in blood, meaning they cannot transmit HIV via sexual fluids [4]. Despite this, parenting is not always routinely discussed with PLWH. A recent study in London HIV clinics found that very few clinicians spoke with HIV-positive gay men about the possibility of having children [5]. Misconceptions about HIV transmission risk and medico-legal issues concerning reproduction may, thus, be rarely addressed. Education is also key to challenging stigma, and supporting the medical profession to better advise HIV-positive patients is critical, as a medical appointment is often the first opportunity that people who are newly diagnosed have to think about future options. --- Transmission The key to conceiving a child and preventing transmission to an unborn baby for HIV-positive parents lies in the current evidence on viral load and the risk of HIV transmission. PrEP (Pre-Exposure Prophylaxis) is a new way of preventing HIV transmission: HIV-negative people can take a tablet (containing two active drugs, tenofovir and emtricitabine) before they have unprotected sex. Taking PrEP has been shown to be highly effective at preventing HIV acquisition [6,7,8]. PrEP is different to PEP (Post-Exposure Prophylaxis), which is a medication regimen taken for 28 days after an exposure carrying a risk of HIV acquisition. When a person first becomes HIV positive they will have a very high viral load. This makes the chance of transmitting the virus very high.
When a patient commences therapy the viral load falls rapidly, the aim being an undetectable plasma viral load (<50 copies/ml). Once someone is undetectable their HIV is untransmittable (U=U; see https://www.preventionaccess.org/consensus, accessed 26th April 2018) [4,9]. Both PEP and PrEP were previously considered useful tools for reducing HIV transmission around the time of conception in sero-discordant heterosexual couples. However, since the adoption of U=U, their use is no longer recommended: so long as the HIV-positive parent is undetectable, PEP and PrEP are not recommended to safeguard the negative parent. This has transformed the options PLWH have regarding parenting, although the ethical and legal frameworks for some options lag behind the evidence. For example, serodifferent heterosexual couples where the male partner is HIV infected are no longer advised to undergo sperm washing if the male partner satisfies U=U criteria, but when surrogacy or donor insemination is considered, extra barriers remain in place for those affected by HIV. Some USA clinics use the 'Bedford Programme' [10] to allow HIV-positive men to pursue conception via these routes, but regulatory frameworks in the UK do not support this approach. --- Women and Fertility Women living with HIV have been found to have reduced fertility, which may be due to an increased prevalence of tubal factors. Men living with HIV, especially if infection occurred around puberty, may have a reduced sperm count [11]. Couples are therefore advised to seek fertility investigations if they have tried to conceive for 6 months without success, or are known to have had previous pelvic infections. --- What's possible? There are a number of possibilities for PLWH to have biogenetically related children. Parental gender, relationship status and financial resources will impact on the available options. --- Single Women For single women, many UK fertility clinics offer treatment with donor sperm. The success rates for intrauterine insemination (IUI) are around 10%, so many women opt to have in vitro fertilisation (IVF), which has better success rates but is more expensive, particularly if paying privately. Local Clinical Commissioning Groups (CCGs) will have specific policies on what funding might be available through the NHS. For example, some CCGs will not fund fertility treatment for single women. --- Single Men The options for single men are not as straightforward. Although many UK clinics can provide access to treatment using donor eggs, a parental order (which reassigns parentage after surrogacy) can only be obtained by two people, who have to be either married, in a civil partnership or living as partners. The law is currently being changed to allow single parents to apply for parental orders, with the changes (at the time of writing) due to come into force in late 2018 or during the course of 2019. In the meantime there are other ways of obtaining parental responsibility for single parents, and surrogacy is, in reality, an option for both single men and women. Single people can also choose a co-parenting route. For example, a woman (the "birth mother") and a man with whom she is not in a relationship can choose to have a child together; they do not need to be in a relationship, but both would be the legal parents of the child, with all the responsibilities that brings.
Having a legal document that sets out parenting agreements and arrangements prior to a child's birth is a useful tool in these circumstances, as it offers protection to both parties. Additional infectious disease screening (for sexually transmitted infections and other blood-borne viruses such as hepatitis B and C) might also be necessary to remove the risk of infections, transmitted through a male co-parenting partner's sperm, to which the parent carrying the child would not otherwise be exposed. Provided U=U criteria are met, there is, of course, no risk of HIV exposure to the uninfected parent or to any conceived child. A single man, or a same-sex male couple, can commission a surrogate host to carry their baby. The law in this area is complex and surrogacy agreements are not enforceable in the UK. Commissioning individuals or couples use donated eggs, which are inseminated with the sperm of either partner to create the embryos that are then transferred to a surrogate host. It is not possible to use donor sperm in this scenario because one person must be genetically related to a child before a parental order can be issued [12]. In the UK, fertility clinics can only legally use HIV-negative sperm with a surrogate, so some parents go overseas for treatment instead, usually to the USA, where there are established fertility treatment and surrogacy programmes for intended parents living with HIV. Parents will need to apply for a UK parental order after a child is born to secure legal parentage. --- Couples Same-sex female couples can access treatment using donor sperm, but if they opt for IVF rather than IUI they can choose to explore one partner donating eggs to the other and vice versa. Despite this, for a woman living with HIV it is not possible, due to HFEA regulation, to provide eggs to her partner (or any other recipient) at a fertility clinic in the UK. There is a legal obligation to follow medical advice to minimise any risk of transmission to an unborn baby, and a fertility clinic will also need to consider the welfare of a child before treatment. If a UK clinic is used, both can be registered on the birth certificate as a child's legal parents if they sign the correct forms at the clinic before conception and the donor's rights are extinguished. If they conceive by artificial insemination elsewhere, then both can be recorded on the birth certificate if they were married or in a civil partnership at the time of conception [13]. Whilst full exploration of issues for those who identify as transgender or transitioning is outside the scope of this article, it is worth noting that for these individuals options vary depending on whether or not they stored gametes before transition. Anyone storing gametes should have 'at the time of donation' infection screening. Unless this confirms HIV-negative status, it is not possible to transfer embryos to a surrogate host or a co-parent in the UK in the future. --- Funding NHS funding for non-heterosexual parenting varies but is worth investigating in cases of known infertility. If NHS funding or private financial resources are not available, some women egg share, giving them the chance to help others while receiving benefit in kind to fund their own treatment. Some clinics also offer men the option to sperm share. PLWH are not permitted to participate in egg or sperm sharing under UK regulations, so these options are not available to them.
--- Options for fostering and adoption Being HIV positive would not, on its own, prevent an adoption or fostering assessment from being undertaken, or be a barrier to adopting or fostering a child. Applicants who are single or in a same-sex relationship are encouraged to apply. However, no one has the 'right' to foster or adopt a child. Agencies assess applicants to ensure that all adopters and foster carers have the necessary qualities and experiences to care for children who have had traumatic and abusive experiences. The challenges that applicants living with HIV have handled successfully in their own lives may well be regarded as assets in the assessment process. The assessment includes health (including mental health) inquiries to ensure that applicants have a reasonable expectation of continuing good health and, in the case of adoption, the ability to support a child until adulthood. Although legally an HIV status does not need to be disclosed, in practice it is never advisable to keep it a secret, especially as the assessment process is built entirely on openness. A letter from an HIV specialist can provide the assessing local authority, the adoption medical adviser and the adoption panel with evidence about the health of the applicant(s) with HIV, including commenting on life expectancy, and can also include information about the impossibility of HIV transmission through domestic contact. The agency should only share an applicant's HIV status on a 'need-to-know' basis, with informed consent. This is an issue that should be discussed with an assessing social worker. --- Supporting PLWH parenting Some of the policy and practice in relation to positive parenting, as we have seen, appears to be out of step with the current scientific evidence. In addition, the social, psychological and emotional implications of parenting among LGBT people living with HIV can be considerable, as parenting itself represents a significant change to identity. Becoming a parent can change one's relationship with one's partner, family and social environment, as well as the 'identity hierarchy', in that parenthood can become more important than other dimensions, such as one's occupational identity [14]. As with other stigmatised identities, there is a high prevalence of poor mental health and childhood psychological adversity among HIV patients [15,16]. Strategies and interventions for promoting and enhancing social, psychological and emotional wellbeing are essential. Any potential psychosocial challenges of positive parenting could be addressed through counselling, mental health care and mutual social support from other positive parents. The recently updated BHIVA Standards of Care [17] may be referred to for more detail about expected levels of emotional wellbeing and support. --- Summary Many options are available for PLWH who are considering parenting. Asking about this as part of routine care helps support destigmatising messages about normal life expectancy with HIV infection. Further work needs to be done to educate medical professionals and the wider public about the U=U message and positive experiences of LGBT parenting. National guidelines and standards for HIV care should include resources to support PLWH choosing to parent, ensuring that parenting desire is enquired about and recorded. Ethical frameworks to support biological parenting for PLWH should be developed so that it is integrated into 'business as usual' service delivery.
--- Further information More information is available at a newly launched resource hub at www.hivandfamily.com, spearheaded by The P3 Network (www.thep3network.com) as part of its 'Positive Parenting' campaign, with the key message that 'HIV doesn't define a parent's power to love'. The campaign and resource hub were backed by organisations including the British HIV Association, Children's HIV Association, Terrence Higgins Trust and clinicians at the Royal Free and Chelsea and Westminster NHS Foundation Trusts.
Introduction In the realm of family dynamics, understanding the complexities of adolescent involvement in parental conflicts is paramount for comprehensive psychological research. This research delves into the development and validation of a novel instrument, the Adolescent Triangulation Scale (ATS), designed to measure and quantify the complex phenomenon of adolescent triangulation. Triangulation, referring to the involvement of a third party in the relationship between two others, is a concept deeply rooted in family systems theory. The scale's construction is informed by theoretical frameworks proposed by scholars such as Kerr and Bowen (1988) and Bell et al. (2001), offering a nuanced perspective on the complex roles adolescents play within familial disputes. Through a meticulous process of item development, expert evaluation, pretesting, and statistical analyses, this study presents a robust scale that not only encapsulates the multidimensional nature of adolescent triangulation but also ensures its validity and reliability. The research aims not only to provide a valuable measurement tool for future studies but also to contribute significantly to the evolving landscape of family psychology, particularly in understanding the dynamics of adolescent involvement in parental relationships. --- Triangulation Triangulation, a concept fundamental to family systems theories, refers to the process of involving a third person in the association of two others. This third person could be anyone from children, parents, grandparents, therapists, and friends to even pets (Kerr & Bowen, 1988). Early family therapy pioneers, such as Bowen, emphasized triangulation as a means to reduce anxiety in dyadic relationships by bringing in a third party. --- The Triangle as a Fundamental Unit Bowen (1988) conceptualized the emotional triangle as the fundamental unit of an emotional system. Unlike psychoanalytic oedipal triangles, which focus on sexual issues, Bowen's emotional triangles explain a broader emotional process within relationships. These triangles stabilize relationships during both calm and tense times by dispersing stress and anxiety over the three corners of the triangle. --- Interlocking Triangles In families with more than three members, the concept of interlocking triangles arises. For example, in a nuclear family where a father is in conflict with both the son and daughter, the tension may indirectly affect the mother. Kerr and Bowen (1988) propose that while a fundamental triangle may suffice during calm times, increasing anxiety leads the fundamental triangle to interact with other family triangles, even at the societal level. --- Conceptualization of Triangulation Bowen (1978) posited that triangulation occurs in response to three system-level processes or interactions within families: --- Inter-parental Conflict Both overt and covert conflicts lead to triangulation. Covert conflicts, equally harmful, may drive parents to involve children in the parental dyad to resolve their issues (Bradford et al., 2019). --- Lack of Differentiation of Self or Family Fusion When self-differentiation is low, fusion increases, resulting in undifferentiated family ego masses. Triangulation emerges as a symptomatic product of spreading tension in dyadic relationships. --- Parent-Child Alliances Power struggles or alliances between a child and one parent against the other may occur due to neglect or dysfunction in the marital dyad. This type of triangulation can lead to various difficulties for the child.
--- Present Study The present study aimed to develop an indigenous instrument to measure adolescent triangulation in inter-parental conflicts. The scale was developed in the native language of Urdu so that the majority of the population would understand and respond accurately. Specific objectives included the development of the indigenous Adolescent Triangulation Scale, the establishment of its factorial structure, and rigorous testing of its reliability and validity. The rationale behind developing the Adolescent Triangulation Scale (ATS) for the Pakistani population stems from the need to investigate how adolescents in Pakistan navigate inter-parental conflicts. Triangulation, commonly understood as third-party involvement in the relationship between two individuals, has been a topic of interest in family systems theories (Minuchin, 1974; Satir & Baldwin, 1983; Haley, 1987; Kerr & Bowen, 1988). Despite global research on triangulation, its exploration in Pakistan remains limited. Cultural norms and religious values in collectivistic Eastern societies, like Pakistan, may influence how adolescents perceive and experience triangulation differently from their counterparts in Western societies. This study aims to fill this gap by designing a valid and reliable instrument tailored to the Pakistani context (Bresin et al., 2017; Bray et al., 1984; Grych et al., 1992; Perosa et al., 1981). The development process involved a thorough literature review, focus group discussions, content analysis, and rigorous psychometric testing, ensuring cultural sensitivity and applicability to the unique dynamics of Pakistani families (Boateng et al., 2018; Kohlbacher, 2005; Lawshe, 1975). --- Method The development of the Adolescent Triangulation Scale (ATS) is based on the guidelines outlined by Boateng et al. (2018). While following these guidelines, the present research aimed to develop a psychometrically sound multidimensional scale. The steps of scale development, as suggested by Boateng et al. (2018), are as follows: Phase I: Item Development The creation of items is a critical step in the development of a reliable and valid measuring instrument. The following are the general steps in item development. Domain Identification. Triangulation was explored by all of the main family systems theorists. However, the researcher in the current study concentrated on Bowen family systems theory (Kerr & Bowen, 1988) while establishing the Adolescent Triangulation Scale (ATS), principally because it provides an elegant and comprehensive theory of the family system and is still used extensively and effectively in clinical work (Gavazzi & Lim, 2023). Before developing items, an extensive literature review regarding descriptions, examples, types, and definitions of triangulation from Bowen (1978), Kerr and Bowen (1988), Bell et al. (2001), Klever (2008), LaForte (2008), Titelman (2008) and Gavazzi and Lim (2023) was conducted. Item Generation. For item generation, both deductive and inductive methods were used, as suggested by Clark and Watson (1995). The deductive technique included a review of the literature as well as an evaluation of existing triangulation scales; the inductive technique used the qualitative data gained from focus group discussions. --- Literature Review At the first stage of item generation, literature regarding triangulation and its types was thoroughly reviewed.
To access the literature, updated and authentic research journals and databases were consulted (e.g., Buehler & Welsh, 2009; Buehler et al., 2009; Amato & Afifi, 2006; Franck & Buehler, 2007). Moreover, some scales/questionnaires devised to study triangulation were also reviewed. Possibly the most commonly used measures of triangulation are two subscales of the Personal Authority in the Family System Questionnaire (PAFS-Q; Bray, Williamson, & Malone, 1984), i.e., the Intergenerational Triangulation (INTRI) and Nuclear Family Triangulation (NFTRI) subscales. The Triangulation subscale of the Children's Perception of Inter-Parental Conflict Scale (CPIC; Grych et al., 1992), the Structural Family Interaction Scale (Perosa et al., 1981), and the Triangular Relationship Inventory (Bresin et al., 2017) have also been used to measure family triangulation. All the scales were carefully assessed. --- Focus Group Discussions The primary aim of the focus group discussions was to explore the concept of triangulation within the Pakistani population, a novel focus in the local research culture. Four focus group discussions were conducted. The first group comprised six girls (14-18 years) from both nuclear and extended families, with at least a middle-school education. The second group involved five boys (15-19 years) from nuclear and extended families, also with at least a middle-school education. The third group consisted of six mothers (38-49 years) from nuclear and extended families, including housewives and working women, all having completed at least their school education. The fourth group involved seven fathers (42-50 years) from nuclear and extended families, all having completed at least their school education. Participants were formally introduced to each other, and the purpose and objectives of the focus group discussions (FGDs) were clarified. A semi-structured focus group guideline was used to explore participants' perspectives on triadic relationships. The researcher served as a moderator. --- Content Analysis In order to generate codes, themes, and sub-themes, content analysis was performed. The results provide valuable insights into the dynamics of parental relationships and their impact on children, contributing to a deeper understanding of family dynamics and relationships. The concept of triangulation aligns dimensionally with Kerr and Bowen's (1988) theoretical model. The qualitative report reveals major themes, i.e., Pushed-Out, Mediator, Balancing, and Pulled-In, providing insights into parental dynamics. The Pushed-Out theme underscores parents' child-centric focus, prioritizing children's well-being and shielding them from conflicts. The Mediator theme highlights children's active role in improving parental relationships through communication and cooperation. The Balancing theme emphasizes a parental approach to independently managing conflicts, fostering a peaceful family environment. The Pulled-In theme delves into instances where children are inadvertently involved in parental issues, exploring sub-themes like manipulation and emotional dependence. Overall, these themes contribute to a nuanced understanding of family dynamics and relationships, shedding light on the intricate interplay of parental behaviors and communication in consideration of children's well-being.
--- Generating Initial Item Pool The synthesis of literature findings and FGD data led to the formulation of an initial item pool comprising 40 items, conceptualized from the four key dimensions of triangulation delineated by Bell et al. (2001): (a) balanced, (b) mediator, (c) pulled-in (cross-generational coalition), and (d) pushed-out (scapegoating). This iterative process ensured that the item pool was not only theoretically grounded but also culturally relevant, setting the stage for subsequent psychometric validation. --- Establishing Content Validity To assess content validity, Lawshe's method (1975) was applied, engaging eleven specialists well-versed in family systems theory, particularly triangulation. Each expert evaluated the 40 items individually, categorizing them as essential, useful but not essential, or not essential. The Content Validity Ratio (CVR) cutoff score, set at 0.63 for 11 raters, was employed. A total of 34 items, with at least eight items per theoretical domain, met the CVR criteria and were retained. Face validity was also affirmed as experts deemed all items appropriate. --- Phase II: Scale Development Scaling Method The Adolescent Triangulation Scale utilized a five-point Likert-type scoring system aligned with the approach recommended by Krosnick and Presser (2009) to effectively capture individual response variations. Respondents provided feedback on the scale using a 5-point Likert-type format, where 1 signified strong agreement and 5 denoted strong disagreement. --- Pretesting Questions Following item development and establishment of the content validity ratio, cognitive interviews with five adolescents were conducted to identify confusing or problematic questions. The feedback indicated that all 34 items in the Adolescent Triangulation Scale were succinct and easily comprehensible, with participants reporting no difficulties. --- Sample The sample for this phase comprised 494 adolescents (boys = 230, girls = 264) aged 10 to 19 years (M = 17.65, SD = 2.17). The sample included students from government (n = 284) and private (n = 210) schools and colleges in Rawalpindi and Islamabad. A convenience sampling procedure was employed, excluding adolescents with single parents, those living independently, those who were completely illiterate, and those diagnosed with mental or physical disabilities. --- Sample Suitability Bartlett's test of sphericity (χ²(351) = 8502.54, p < .001) signified the suitability of the data for factor analysis. The Kaiser-Meyer-Olkin (KMO) value of 0.91, exceeding the recommended threshold, indicated the data's appropriateness for factor analysis. --- Extraction of Latent Factors Exploratory Factor Analysis (EFA) was conducted to unveil the factorial and dimensional structure of the 34 items. Principal-axis factoring (PAF) analysis initially revealed a five-factor model explaining 58.73% of the total variance. However, considering eigenvalues and a scree plot, a more condensed four-factor model was contemplated, recognizing potential adjustments to enhance the scale's precision. Understanding the latent construct faced challenges due to disparities in interpretations between eigenvalues and the scree plot. Consequently, a meticulous evaluation of individual items became imperative for potential removal, guided by factor loadings, cross-loadings, and communality estimates. Pett et al.'s (2003) criteria were employed: items with factor loadings below .40 were deleted, and those with cross-loadings exceeding .32 on multiple factors were considered for removal.
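To make the item-screening step concrete, the following is a minimal sketch, assuming an invented pattern matrix (not the ATS loadings), of how the Pett et al. (2003) retention rules described above can be applied programmatically: items whose highest absolute loading is below .40 are dropped, and items loading .32 or higher on more than one factor are flagged for review.

```python
# Minimal sketch of the Pett et al. (2003) item-retention rules (hypothetical data).
import numpy as np

# Hypothetical pattern matrix: 6 items x 4 factors; values are invented for illustration.
loadings = np.array([
    [0.71, 0.10, 0.05, 0.12],
    [0.35, 0.20, 0.15, 0.10],   # weak item: highest loading below .40
    [0.45, 0.38, 0.05, 0.02],   # cross-loading item: two loadings at or above .32
    [0.08, 0.66, 0.11, 0.04],
    [0.03, 0.09, 0.58, 0.21],
    [0.12, 0.07, 0.14, 0.69],
])

abs_l = np.abs(loadings)
low_loading = abs_l.max(axis=1) < 0.40            # primary loading below .40 -> delete
cross_loading = (abs_l >= 0.32).sum(axis=1) > 1   # .32+ on multiple factors -> review
for i, (low, cross) in enumerate(zip(low_loading, cross_loading), start=1):
    status = "retain"
    if low:
        status = "delete (low loading)"
    elif cross:
        status = "review (cross-loading)"
    print(f"item {i}: {status}")
```

In the actual study these decisions were also informed by communality estimates and the scree plot, so the sketch above covers only the mechanical part of the screening.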
Seven items were eliminated, leading to a final set of 27 items. Another iteration of principal-axis factoring was conducted, revealing a four-factor model explaining 58.11% of cumulative variance. --- Factor I: Pushed-Out Triangulation Eight items (29.72% of total variance). Pushed-out triangulation reflects aspects of scapegoating, where adolescents assume a pushed-out position. It also measures a form of triangulation wherein parents shift attention to different aspects of the adolescent's life instead of focusing on marital conflicts. --- Factor II: Mediator Triangulation Seven items (12.41% of total variance). Mediator triangulation centers around the adolescent feeling caught between parents' marital disputes. It emphasizes the adolescent's role as a middle person in parental relationships, with a maximum factor loading of .87. --- Factor III: Balanced Triangulation Five items (9.76% of total variance). Balanced triangulation represents a healthy relationship where parents take responsibility for their relationship problems. It emphasizes a balanced dynamic, with the highest factor loading being .78. --- Factor IV: Pulled-In Triangulation Seven items (6.19% of total variance). Pulled-in triangulation explains aspects of cross-generational coalition, depicting an alliance between the adolescent and one parent against the other. It captures a power struggle between parents, highlighting a type of triangulation involving parental conflict. This refined four-factor solution provides a clearer and more in-depth understanding of adolescent triangulation, addressing various dimensions within parental relationships and their impact on adolescents. Factor loadings of each item on all four factors are reported in the accompanying table. Note. The scale was originally developed in the Urdu language, and an unstandardized translation is provided here for the purpose of understanding. --- First Order CFA The first-order Confirmatory Factor Analysis (CFA) of the Adolescent Triangulation Scale involved testing the predefined factor structure through statistical methods. The analysis utilized the 27-item pool to test the four-subscale ATS measurement model, and all items were allowed to load on their specified factor as suggested by the results of the EFA. Note. CFI = Comparative Fit Index, GFI = Goodness-of-Fit Index, TLI = Tucker-Lewis Index, RMSEA = Root Mean Square Error of Approximation. Table 2 shows that the chi-square values for the ATS were significant for the initial Model 1 as well as the modified Model 2. However, Bentler (2007) suggested that with a large sample size, the chi-square test's assumptions give an inaccurate probability. Therefore, the decision about model fit was based on goodness-of-fit indices other than chi-square. Results reveal that Model 1, i.e., the initial test of the ATS, showed a poor fit (χ² = 1248.19, df = 318, CFI = .92, RMSEA = .07, SRMR = .04). In order to improve model fit, all items were inspected for standardized regression weights. As Hulland (1999) and Henseler et al. (2012) suggested, all items whose standardized factor loadings fall between .40 and .70 should be considered for deletion. Therefore, items no. 34 (factor loading = .63) and 13 (factor loading = .68) were deleted. Furthermore, item no. 26 was also deleted, as suggested by the modification index. Standardized factor loadings from this model are shown in Table 3. However, mild revisions were also made with the help of error co-variances.
Based on the suggestion of modification indices and content overlap, error co-variances were added to the error terms within the same general factor. This was done to obtain an excellent fit. Our revised model showed considerably enhanced fit indices (χ² = 700, df = 243, CFI = .95, RMSEA = .06, SRMR = .03). --- Second Order CFA of Adolescent Triangulation Scale Second-order confirmatory factor analysis is used to interpret the ATS as multi-level and multidimensional by combining its four dimensions, namely pushed-out, pulled-in, mediator, and balanced triangulation, under the umbrella of a common higher-level factor, namely adolescent triangulation in inter-parental conflicts. Table 3 shows the chi-square, degrees of freedom, and model fit indices for the ATS second-order CFA. --- Indicator Reliability The indicators' reliability is assessed through the standardized regression weights and squared multiple correlations of all the items of the Adolescent Triangulation Scale. Table 4 shows the factor loadings and R² for all 24 items retained after the CFA model fit. Results in Table 4 show that the factor loadings (λ) are well above the cutoff score of .70 and are significant at the 5% level of significance. Results indicate that each item's dependability was high, which supports the placement of each item on the designated latent construct. The R² values for ATS items range from moderate to high, i.e., .61 to .85. --- Internal Consistency, Convergent and Discriminant Validity of ATS In order to assess the internal consistency, convergent and discriminant validity of the newly developed Adolescent Triangulation Scale, Cronbach's alpha, composite reliability, average variance extracted, maximum shared variance, MaxR(H), and HTMT were computed and reported in Table 5. Cronbach's alpha and composite reliability are commonly used to assess an instrument's internal consistency. Results in Table 5 show that the values of coefficient alpha ranged between .90 and .92, whereas the values of CR ranged between .92 and .94. The values of both parameters are well above the suggested cutoff values. Therefore, all four subscales are considered to have good internal consistency. MaxR(H) values were also observed to be greater than the values of CR and hence provide evidence for construct validity. Average variance extracted is used to report the convergent validity of the ATS. Results show that the value of AVE for all four subscales is well above the suggested cutoff value, i.e., AVE > .50. The values of AVE ranged between .65 and .76. The values of CR are also well above the suggested cutoff point, i.e., CR > .60. Furthermore, discriminant validity is evaluated using the Fornell and Larcker (1981) criterion as well as the cross-loadings of indicators. Results show that all the items have factor loadings greater than .70 on their respective factor. The cross-loadings of all the items on other factors are less than .40 and hence fulfill this criterion of inclusion in the final scale. In Table 5, values in parentheses present the HTMT ratios of correlations between pairs of constructs, e.g., .35, .34, and .31 (mediator & balanced). As a result, all of the HTMT values are less than .85, indicating that the constructs are distinct, and discriminant validity may be stated to have been demonstrated. Discriminant validity is also supported when AVE values are larger than the relevant maximum shared variance (MSV) (Hair et al., 2014).
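As a rough illustration of the reliability and validity statistics just described, the sketch below computes composite reliability (CR), average variance extracted (AVE), maximum shared variance (MSV) and the Fornell-Larcker comparison from invented standardized loadings and factor correlations; the figures are hypothetical and do not reproduce the ATS results (HTMT is omitted because it requires the item-level correlation matrix).

```python
# Hypothetical CR, AVE, MSV and Fornell-Larcker check (numbers are invented).
import numpy as np

loadings = {                       # standardized loadings per subscale (invented)
    "pushed_out": [0.78, 0.81, 0.74, 0.80],
    "mediator":   [0.83, 0.79, 0.76],
    "balanced":   [0.72, 0.75, 0.78],
    "pulled_in":  [0.77, 0.74, 0.81],
}
factor_corr = np.array([           # inter-factor correlations (invented)
    [1.00, 0.35, 0.28, 0.31],
    [0.35, 1.00, 0.34, 0.30],
    [0.28, 0.34, 1.00, 0.25],
    [0.31, 0.30, 0.25, 1.00],
])

for i, name in enumerate(loadings):
    lam = np.array(loadings[name])
    cr = lam.sum() ** 2 / (lam.sum() ** 2 + (1 - lam ** 2).sum())   # composite reliability
    ave = (lam ** 2).mean()                                         # average variance extracted
    other = np.delete(factor_corr[i], i)
    msv = (other ** 2).max()                                        # largest squared correlation
    fl_ok = np.sqrt(ave) > np.abs(other).max()                      # Fornell-Larcker criterion
    print(f"{name}: CR={cr:.2f}, AVE={ave:.2f}, MSV={msv:.2f}, Fornell-Larcker ok={fl_ok}")
```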
Results showed that the values of AVE for all three constructs are greater than their respective MSV and hence provide more evidence for discriminant validity. The results show that the Fornell-Larcker criterion of discriminant validity was also satisfied, as the correlations among all latent constructs are smaller than the square root of each construct's AVE. Furthermore, results show that the CR values for all subscales of the ATS are above .70 and the AVE values are between .64 and .73. Overall, discriminant validity can be accepted for this measurement model. --- Discussion The research investigates adolescent triangulation in inter-parental conflicts, a concept often overlooked in family systems theories. Triangulation involving a third person in a relationship has received theoretical attention, but quantitative assessments are scarce. Notably, Bresin et al. (2017) and others have explored triangulation globally, yet Pakistan, with its distinct cultural norms, remains largely unexplored. The study seeks to bridge this gap by creating a reliable instrument tailored to the Pakistani context. Anticipating cultural differences, the research acknowledges that Pakistani adolescents may experience triangulation differently than their counterparts in more individualistic societies. This study addresses the dearth of measurement tools, aiming to enhance understanding in a cultural context where taboos and collectivistic norms shape interpersonal dynamics. The research underscores the need for culturally sensitive instruments in exploring adolescent triangulation. In order to attain the above-mentioned objectives, the study was conducted in three different phases, as suggested by Boateng et al. (2018). It started with item development by gathering detailed information about "triangulation" (Buehler & Welsh, 2009; Buehler et al., 2009; Amato & Afifi, 2006; Franck & Buehler, 2007), the construct that was to be operationalized in the present study. As the first step of the scale development, pertinent literature about the concept of triangulation was thoroughly reviewed. This review of the literature helped the researcher develop focus group guidelines for the exploration of the triangulation phenomenon as experienced by adolescents. By taking into account the main viewpoints extracted from previous literature, the researcher was able to develop clear, simple, short, and open-ended questions about adolescent triangulation. Four focus group discussions were conducted with adolescents and parents in this phase of the present study. After conducting the FGDs, the researcher was able to screen salient information about the fundamental aspects of adolescent triangulation. In the next step, the obtained data were transcribed using the simple transcription method of Kuckartz et al. (2014). After transcribing the data, content analysis, following Kohlbacher's (2005) guidance, was applied, providing a comprehensive insight into adolescent triangulation. The results revealed that a majority of adolescents experienced involvement in inter-parental conflicts. Some positioned themselves as mediators, acting as an anchor to maintain parental connections, while others felt compelled to take sides under parental pressure. Interestingly, some adolescents perceived themselves as the focal point, receiving undivided attention from parents who seemed to forget their conflicts. Additionally, opinions varied, with some parents and adolescents suggesting that parental issues could be resolved without involving children.
Based on these findings, a 40-item pool was generated to measure triangulation, expressed in clear Urdu language. The items underwent rigorous review by eleven experts from the psychology departments of Rawalpindi Women's University and International Islamic University, Islamabad, ensuring the validity and reliability of the developed instrument. Lawshe's method (1975) was employed to assess content validity. Eleven experts evaluated items for clarity, conciseness, reading comprehension, face, and content validity. Their recommendations led to refining the initial 40 items, retaining 34 with excellent content validity and substantial face validity. The second phase involved the scale development. Before the initial tryout, cognitive interviews were conducted with five adolescents to identify whether any item was confusing, problematic, or difficult to answer. All 34 items were found to be straightforward and comprehensible. After finalizing the item pool, the scale was administered to a purposively selected convenience sample of 494 adolescents. Data collected from this sample were subjected to descriptive statistics and factor analysis for the assessment of their psychometric properties and factorial structure. To assess the factorability of the correlation matrix of the scales, several well-established criteria were utilized, including Kaiser's criterion, principal-axis factoring (PAF) analysis, and Cattell's scree test. All variables that loaded on the various factors measured distinct constructs. The rotational factor pattern defined a basic structure with strong loadings on one factor and modest loadings on the other factors. Because of low factor loadings or cross-loadings, 7 of the 34 Adolescent Triangulation Scale items were eliminated based on the EFA. The 27 retained items demonstrated communalities exceeding .30, forming a cohesive four-factor solution reflecting pushed-out, pulled-in, mediator, and balanced triangulation. These findings underscored the Adolescent Triangulation Scale's (ATS) validity and reliability. The instrument supported a four-factor structure aligning with established triangular typologies. The final ATS, comprising at least four items per subscale, ensured a balanced representation of the sub-dimensions. Additionally, the alpha coefficients, exceeding .80 for the ATS and its subscales, signaled satisfactory internal consistency. This robust validation process solidified the ATS as a dependable tool for assessing adolescent triangulation in inter-parental conflicts. The Adolescent Triangulation Scale (ATS) has 27 items and comprises four subscales, i.e., pushed-out, pulled-in, mediator, and balanced. Total triangulation scores were obtained by reverse scoring the items of balanced triangulation, i.e., items no. 1-5. To confirm the factorial structure of the scale developed through EFA, first- and second-order Confirmatory Factor Analysis (CFA) was conducted on a new sample of 493 participants. The initial first-order CFA, testing the four-factor solution suggested by the EFA, showed a poor fit. To enhance the model fit, items were scrutinized for standardized regression weights. Following suggestions by Hulland (1999) and Henseler et al. (2012), items with standardized factor loadings between .40 and .70 were considered for deletion. Consequently, items 34, 13, and 26 were eliminated based on these criteria and recommendations from modification indices and the committee.
Mild revisions were made using error co-variances, guided by modification indices and content overlap. Error co-variances were added within the same general factor to achieve an excellent fit. The revised model demonstrated significantly improved fit indices, supporting a robust four-factor structure according to the first-order factor analysis. Deleted items, along with their unstandardized English translation, follow: Item 26. When one of my parents is not present, the other uses bad words about him/her. The second-order CFA was performed with the remaining 24 items of the ATS. Second-order CFA was done to determine the total construct of triangulation. The balanced subscale has a negative association with total scores, whereas pushed-out, pulled-in, and mediator triangulation have a positive association with the ATS total. Furthermore, the internal consistency of the ATS total, as well as all the subscales, was within the satisfactory range. Factor loadings and squared factor loadings were above the minimum cutoff point, indicating indicator reliability. Composite reliability and Cronbach's alpha were also above the minimum acceptable values, indicating excellent internal consistency of the newly developed scale. Moreover, average variance extracted (AVE), the Fornell and Larcker criterion, the Heterotrait-Monotrait (HTMT) correlation ratio, and maximum shared variance suggested good convergent and discriminant validity of the ATS. --- Limitations and Suggestions The study on scale development for measuring adolescent triangulation into inter-parental conflicts exhibits a few limitations. Firstly, the research was conducted in a specific cultural context (Pakistan), limiting the generalizability of the findings to diverse cultural settings. Additionally, the reliance on self-report measures introduces the potential for response bias. Future studies could benefit from incorporating more diverse samples and employing a multi-method approach to enhance the robustness of the developed scale. Longitudinal designs could provide a more nuanced understanding of the dynamics of adolescent triangulation over time. Furthermore, exploring the scale's applicability in various cultural contexts would enhance its cross-cultural validity. Addressing these limitations would contribute to the refinement and broader utility of the developed scale.
Triangulation is conceptualized as the involvement of a third person in a dyadic relationship in order to balance excessive conflicts, intimacy, and distance and provide stability within the system. A self-report scale to measure adolescents' triangulation into inter-parental conflicts was developed, and the psychometric properties of the scale were established. The study was conducted in a three-phase format. Data were collected from adolescents (10-19 years) of different schools and colleges in Pakistan. In Phase I, items were generated through a literature review and focus group discussions. In Phase II, four latent factors (pushed-out, pulled-in, mediator, balanced) were extracted through EFA (N=493). Phase III comprised a test of dimensionality, reliability, and validity. The dimensionality of the Adolescent Triangulation Scale was established through CFA (N=494). Reliability of the scale was established through Cronbach's alpha (α = .87-.90) and composite reliability (CR = .88-.92). Furthermore, the validity of the scale was assessed through Average Variance Extracted (AVE = .55-.69), Maximum Shared Variance (MSV = .88-.93), the Fornell and Larcker criterion and the Heterotrait-Monotrait (HTMT) criterion. Results showed that the Adolescent Triangulation Scale appears to have good psychometric properties and contributes to the literature on family systems theory by allowing for a more nuanced measurement of triangulation than was previously available.
INTRODUCTION Food security, as articulated by the World Health Organization, is "when all people at all times have access to sufficient, safe, nutritious food to maintain a healthy and active life". Nevertheless, the emergence of COVID-19, war, and significant climate change have adversely affected global food production and distribution, ultimately leading to a global food crisis. In the current landscape, emerging food security issues are causing the tourism industry to collapse. The issues include escalating food supply costs (Jalaluddin et al. 2022) and insufficient food supplies due to overdependence on imported goods caused by insufficient domestic production (Ahmed & Siwar 2013). This study aims to address the pressing issue of food security within gastronomy tourism, particularly concerning the scarcity of food supplies in Malaysia. This scarcity has resulted in price surges and limited food accessibility, impacting local and international tourists. As Hashim et al. (2019) outlined, the annual escalation of food expenses further exacerbates existing food security issues. Additionally, the growing number of tourists intensifies the severity of food security issues, and governmental management is required to meet demands adequately (Hashim et al. 2019). This study will help in recognizing the difficulties faced by both local and international tourists regarding food security-related issues, thereby revealing the current state of food insecurity in Malaysian gastronomy tourism. The objectives of this study include investigating the food security issues in domestic tourism among local and international tourists, verifying the food insecurity experiences encountered by local and international tourists, and determining the tourists' dining satisfaction from the gastronomy tourism experiences in Malaysia. In essence, this study strives to better comprehend the struggles local and international tourists encounter when it comes to food security and accessibility in Malaysia due to a variety of factors, including food supply scarcity due to livestock shortages and rising food prices driven by demand and supply imbalances, as well as the satisfaction and contentment of visitors concerning food consumption and accessibility while visiting (Gani et al. 2017). --- METHODS --- Design, location, and time A quantitative approach was adopted for this study as its research design since it aligned seamlessly with the study's objectives. Additionally, cross-sectional and non-experimental methods were utilised to determine emergent food security issues within the population. The study was conducted across the entirety of Malaysia, involving both local and international tourists. Data collection was expedited through the distribution of a Google Form link via various social media platforms such as Facebook, Twitter, TikTok and YouTube. Informed consent was obtained from respondents before they proceeded to fill in the online Google Form. --- Sampling Quota sampling was selected to ensure that the respondents accurately represented local and international tourist groups by meeting the inclusion and exclusion criteria. The inclusion criteria for this study consisted of Malaysian citizens as local tourist respondents, foreign visitors to Malaysia as international tourist respondents, and participants having consumed local cuisine during their Malaysian visit.
As for the exclusion criteria, participants were excluded if they were Malaysians residing in other countries, foreigners residing within Malaysia, or participants who did not purchase or consume local cuisine. A sample size of 250 people, inclusive of both local and international tourists, was designated for the study, and the determination of sample size was facilitated through G*Power software for a two-tailed independent t-test, which indicated a minimum sample size of 210 individuals. To anticipate potential missing data during analysis, an additional 40 participants were included. The respondents' nationality was identified before approaching them to facilitate the grouping process. --- Data collection Data collection centered on a questionnaire as the primary source of data from the samples, and it was developed by adapting questions from previous research. The questionnaire, consisting of 39 questions divided into 4 sections, utilized a Likert scale to inquire about the respondents' opinions regarding the food security situation, where 1 represented the least valued response and 5 the most valued. Before distribution, a reliability assessment was conducted on the questionnaire using Cronbach's alpha to ensure its appropriateness for distribution to respondents. The Cronbach's alpha value obtained was 0.954, indicating a very high level of internal consistency for the scale used. This assessment was based on a total of 33 items. --- Data analysis The data were analysed using IBM SPSS 27. Categorical data were presented as frequencies and percentages, whereas numerical data underwent descriptive analysis and were presented as mean and standard deviation or median and interquartile range, depending on the normality of the data distributions. The independent t-test and the chi-square or Fisher's exact test were applied to achieve the objectives of this study. Variables that were not normally distributed were analysed using the Mann-Whitney test. The statistical significance level for this study was p < 0.05. --- RESULTS AND DISCUSSION Based on the data gathered from the questionnaire, each section underwent individual analysis encompassing descriptive analysis, normality testing, and inferential analysis, namely the independent t-test and the Mann-Whitney test. Table 1 presents insights into the demographic backgrounds of the respondents, consisting of their gender, age, education level, nationality, occupation, and average annual income. Table 2 illustrates the mean scores for attributes related to food security issues, with 'the food available is enough for the tourists to order during peak seasons' (4.16) receiving the highest rating, while the lowest rated is 'the prices are reasonable' (3.44). The mean difference in food security issues between local and international tourists proved to be statistically significant (p=0.022; 95% CI: 0.03-0.41). The mean score attributed to international tourists (3.96) exceeded that of local tourists (3.74). This observation shows that the attributes associated with the food security issues had a noticeable impact on local tourists. The initial hypothesis noted that the food security issues in Malaysia were substantial, and the result of this study confirmed that hypothesis across various aspects such as food prices, hygiene conditions, and the adequacy of the nutrient content of the food prepared.
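As a concrete illustration of the kind of group comparison reported above, the following is a minimal sketch using invented scores rather than the study data; the group means and sample sizes are assumptions chosen only to mirror the reported direction of the difference.

```python
# Hypothetical independent-samples comparison of mean food-security scores.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
local = rng.normal(3.74, 0.6, 120)           # invented local-tourist scores
international = rng.normal(3.96, 0.6, 130)   # invented international-tourist scores

t, p = stats.ttest_ind(international, local, equal_var=False)   # Welch t-test
diff = international.mean() - local.mean()
se = np.sqrt(international.var(ddof=1) / len(international) + local.var(ddof=1) / len(local))
df = len(international) + len(local) - 2     # simple df approximation for the illustration
ci = diff + np.array([-1.0, 1.0]) * stats.t.ppf(0.975, df) * se
print(f"t = {t:.2f}, p = {p:.3f}, mean difference = {diff:.2f}, 95% CI = [{ci[0]:.2f}, {ci[1]:.2f}]")
```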
However, the study outcomes revealed that local tourists were facing challenges to a greater extent than international tourists, as they were the ones bearing the brunt of the factors that contributed to the escalation of food security issues. In essence, the food security issues currently affecting gastronomy tourism in Malaysia have unfortunately discouraged local tourists from enjoying a vacation within their homeland. The tourists were prompted to express their level of agreement concerning their encounters with food insecurity experiences during their visit to Malaysia. Table 3 presents the mean scores for the attributes of food insecurity experiences. Notably, the attribute with the highest mean score was 'there are varieties of local specialities available' (4.16), signifying positive feedback among tourists. In contrast, the lowest valued attribute was 'all items from the menu are available when requested' (3.44). The outcomes of this study affirm that both local and international tourists encountered food insecurity during their stay. However, it is noteworthy that the local tourist group exhibited a lower mean score, indicating that they were more vulnerable to these experiences than the international tourists. Several aspects fell below expectations in contributing to the gastronomic experience of the tourists, which led to the food insecurity experiences. This implies that tourists within Malaysia were having a less pleasurable experience of the gastronomic scene, as their needs in terms of food were not being fulfilled during their holiday, hence confirming the hypothesis. The last section of the questionnaire asked respondents about their dining satisfaction while purchasing and consuming food in Malaysia. Table 4 shows the mean scores for the attributes of dining satisfaction, with the highest rated being 'As a whole, Malaysia is a good food tourism destination' (4.38) and the lowest rated being 'The food fulfils the dining experience in terms of hygiene and sanitation' (3.58). Due to the non-normal data distribution, the Mann-Whitney test was used to analyse dining satisfaction between local and international tourists. The comparison of mean ranks and sums of ranks between the local and international tourist groups indicates that the international tourist group has a larger mean rank (156.21) than the local tourist group (119.30). The Mann-Whitney U test was statistically significant (p = 0.003), indicating higher dining satisfaction among international tourists than local tourists. The results revealed a Mann-Whitney U value of 3,078.00, with a test statistic Z of -3.020 and an asymptotic significance (2-tailed) of 0.003. This suggests a notable difference in dining satisfaction between the two groups, with international tourists reporting higher satisfaction than local tourists. As conveyed by the respondents, the evaluations of dining satisfaction illuminate a distinct contrast between the experiences of international and local tourists. It was previously speculated that both groups were satisfied with their dining experience, but the result showed a significant difference.
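For completeness, here is a small sketch, again with invented ratings, of the Mann-Whitney comparison and mean ranks of the type reported above (the U, Z and p values it prints will not match the study's figures).

```python
# Hypothetical Mann-Whitney comparison of dining-satisfaction ratings.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
local = rng.integers(2, 6, size=120)           # invented 1-5 satisfaction ratings
international = rng.integers(3, 6, size=130)

u, p = stats.mannwhitneyu(international, local, alternative="two-sided")
ranks = stats.rankdata(np.concatenate([international, local]))
print(f"U = {u:.1f}, p = {p:.4f}")
print(f"mean rank (international) = {ranks[:len(international)].mean():.2f}")
print(f"mean rank (local) = {ranks[len(international):].mean():.2f}")
```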
It was deemed that some attributes listed under this variable, such as hygiene and comfort, were less agreeable to the local tourists, hence the lower mean score. The critical factor contributing to this difference seems to be the specific attributes associated with dining satisfaction, particularly hygiene and comfort. These elements were presumably less satisfactory to local tourists, as reflected in their lower mean scores. This suggests that local tourists have different expectations or standards regarding these aspects of dining compared to international tourists. This outcome indicates the importance of understanding and catering to different tourist groups' varied preferences and expectations. These findings could be instrumental in tailoring services and improving overall customer satisfaction (Rimmington & Yuksel 1998) in the hospitality and tourism industry (Hall & Mitchell 2001). It emphasises the need for a nuanced approach to evaluating and enhancing the dining experience, considering the diverse perspectives of both international and local visitors. --- CONCLUSION In conclusion, the study outcomes emphasise an imbalance of experiences between local and international tourists in gastronomy tourism in Malaysia (Leong et al. 2017). The local tourist group sustained a major disadvantage in gastronomic tourism compared to the international tourists, as evidenced by the results. This imbalance can be disheartening, indicating that local tourists cannot fully appreciate and enjoy their vacation within their homeland. In contrast, international tourists exhibit higher contentment and satisfaction with Malaysia's gastronomic offerings (Mora et al. 2021), despite the low number of respondents from various countries. In light of these findings, the authorities in the tourism sector must address the root causes of these issues and devise mitigating actions to correct this situation, thus providing an enriching gastronomic experience for all tourists. --- DECLARATION OF CONFLICT OF INTERESTS The authors have no conflict of interest.
The objectives of this study are to investigate the food security issues arising in gastronomic tourism, to verify the food insecurity experiences encountered by tourists, and to determine the tourists' dining satisfaction from the gastronomic tourism experiences in Malaysia. A quantitative approach was selected for this study. Data were collected via questionnaire forms disseminated online through multiple social media platforms from 250 participants comprising both local and international tourists visiting Malaysia. The independent t-test and the Mann-Whitney test were used as the main statistical tests to establish whether either tourist group experienced food security-related issues during their visit. The results showed that local tourists are more likely to be affected by food security issues and food insecurity experiences, and to report lower dining satisfaction. Overall, this study discovered that local and international tourists have contrasting experiences of gastronomy tourism in Malaysia.
Introduction In many countries, childbearing is increasingly being postponed to later ages (Mills et al., 2011; Schmidt et al., 2012). Between 1975 and 2016, the median age of women who gave birth in Australia increased by more than 5 years from 25.8 to 31.2 years (ABS, 2017); similar increases took place in other highly developed countries (Sobotka, 2017). These shifts are also reflected in recent surveys of reproductive intentions where many women in their late 30s and early 40s report plans to have a(nother) child (Sobotka and Beaujouan, 2018). Childbearing at higher reproductive ages is linked to socioeconomic advantages for mothers and their children, including higher subjective well-being among mothers (Myrskylä and Margolis, 2014). However, it also comes with risks. Infertility increases rapidly for women in their mid-30s and older (Steiner and Jukic, 2016; Liu and Case, 2017). At age 40, one in six women are no longer able to conceive, increasing to more than half by age 45 (Leridon, 2008). Even when pregnancy is achieved, higher maternal age is a risk factor associated with perinatal mortality, low birth weight, pre-term births, maternal death, gestational diabetes, pregnancy-induced hypertension, severe preeclampsia and placenta previa (Balasch and Gratacós, 2012; Delbaere et al., 2007; Goisis et al., 2018; Huang et al., 2008; Bewley et al. 2005; Jacobsson et al. 2004; Schimmel et al., 2015). When childbearing is delayed, women and men planning to have children are at increased risk of not realising their plans due to infertility or pregnancy loss (McQuillan et al., 2003; Greil et al., 2011; Schmidt et al., 2012; Habbema et al., 2015). Women often lack awareness of the potential difficulties of conceiving at later ages (Bretherick et al., 2010; Mac Dougall et al., 2013; García et al., 2018), and men display even less knowledge than women regarding fertility, age limits of reproduction and assisted reproductive technologies (Daniluk and Koert 2013). In part due to this lack of knowledge, many women and couples postpone childbearing until ages when it is more difficult to conceive and carry a pregnancy to term (Cooke et al., 2012; Birch Petersen et al., 2015). Men are not subject to the same biological constraints as women, as their fertility starts declining later and at a slower rate (Fisch and Braun, 2005; de La Rochebrochard et al., 2006; Sartorius and Nieschlag, 2010; Kovac et al. 2013; Eisenberg and Meldrum, 2017). In addition, men tend to partner with women younger than themselves (Ortega, 2014). These biological and social differences between men and women imply that they also have different chances of realising their fertility plans later in life. Research in European countries has identified a negative effect of age on intentions to have children and their realisation (Berrington, 2004; Roberts et al., 2011; Kapitány and Spéder, 2012; Spéder and Kapitány, 2013; Dommermuth et al., 2015; Pailhé and Régnier-Loilier, 2017). People in their late 30s and early 40s are also more likely to abandon previous plans to have children, and this is the case for both women and men (Spéder and Kapitány, 2009). Fertility intentions and their realisation also vary by partnership status (Gray et al., 2013; Hayford, 2009; Iacovou and Tavares, 2011; Liefbroer, 2009). Partnered women and men usually display higher and more certain childbearing intentions, and they are also more likely to realise them (Spéder and Kapitány, 2013).
Fertility intentions also vary by parity (achieved number of children), and women who already have two children are much less likely to desire another child than those who are childless or have one child. However, women who already have children are more likely to achieve their fertility plans than childless women (Harknett and Hartnett, 2014; Dommermuth et al., 2015). Fertility intentions are also affected by socioeconomic status. Highly educated women are more likely to delay having children, and they are less likely to abandon childbearing plans compared with less educated women (Kapitány and Spéder, 2012), even though the end result is that they are more likely to stay permanently childless (Kreyenfeld and Konietzka, 2017; Neels et al., 2017). Our study examines whether men and women who reach later reproductive ages are able to fulfil their short-term reproductive goals of having children in the near future and, if they have not achieved their goals, whether they abandon plans to have a child. We go beyond the existing research by focusing on (i) the age pattern of fertility realisation among those with strong short-term initial fertility intention (within the next 3 years), (ii) the age pattern of changes in reproductive intention and (iii) the differences between men and women. Our outcome variable has three mutually exclusive categories: realisation of intention by having a child, no longer strongly intending to have a child and still intending to have a child. Our multinomial regression models account for number of children, partnership status, education, perceived reproductive impairment, self-rated health, BMI and smoking status. --- Materials and Methods --- Data We used a large representative longitudinal survey, the Household, Income and Labour Dynamics in Australia (HILDA) survey, conducted since 2001. In 2005, 2008, 2011 and 2015, it included a subset of questions on desires and preferences for children as part of its incorporation in the international Generations & Gender Survey Programme (https://www.ggp-i.org/). We used data from the two most recent waves, 2011 and 2015. We identified respondents with short-term reproductive intentions in the 2011 wave and tracked whether their intentions were realised, resulting in the birth of a child, or whether they were abandoned or postponed by the 2015 wave. Attrition in this survey was particularly low (Summerfield et al., 2016): survey attrition specific to the age range under study was 16% for women and 19% for men (Table I). Longitudinal paired weights were used to partly compensate for any bias due to attrition. They are based on an initial cross-sectional weight for 2011 and then adjusted for attrition between the 2011 and 2015 samples (Watson 2012). --- Study population Of the original survey sample in 2011, we retained 447 men and 528 women with a strong short-term intention to have a child (Table I). Specifically, we selected men aged 18-45 and women aged 18-41 in 2011, who were present at both waves (2011 and 2015), who (or whose partners) were not pregnant and had not undergone a vasectomy or tubal ligation and who expressed a strong intention to have a child in the next 3 years as defined below. We also excluded men and women who did not answer the self-completed section, which included health and epidemiological characteristics.
--- Identifying individuals with a strong intention of having a child in the next 3 years in the 2011 wave Uncertainty is an inherent part of reproductive intentions (Morgan, 1981; Ní Bhrolcháin and Beaujouan, 2019). Based on previous studies of fertility intentions and realisation (Toulemon and Testa, 2005; Régnier-Loilier and Vignoli 2011), we focused only on women and men who in 2011 expressed high certainty in their positive intention to have a child. We identified these individuals based on cumulative responses to three questions. First, we selected respondents who stated that they would like to have more children. Then we selected those with a high degree of certainty, as indicated by a score of 7 or higher on the 0-10 scale assessing respondents' perceived likelihood of having a child in the future. Finally, respondents were asked in which year they planned to have a child. We included respondents who stated that they planned on having a child in 2012, 2013 or 2014 and those who did not provide a specific year but said 'within the next 3 years'. --- Relevant characteristics of the study population in 2011 The percentage distribution of men's and women's characteristics in 2011 is shown in Table II. Age is a categorical variable with the following age groups: 18-25, 26-28, 29-31, 32-34, 35-37 and 38-45 (38-41 for women). Level of education is categorised as low, medium or high, based on the ISCED 7 categorisation of educational attainment. Respondents with a 'low' level of education did not complete high school, those with a 'medium' education completed high school and/or had a certificate or diploma and those with a 'high' level of education completed a university degree. Perceived ability to conceive identifies those who were aware of any physical or health difficulties that would make it difficult for them or their partner to conceive. We included self-rated health, BMI and smoking status due to their association with ability to conceive. Self-rated health distinguishes four groups: those who described their health as 'Excellent', 'Very good', 'Good' and 'Fair/poor'. The BMI variable has four categories: underweight, normal weight, overweight and obese. Underweight was rare (N = 13) and was combined with 'normal' weight. Smoking was measured as current daily smoker or not. --- Outcome variable After identifying respondents with a strong short-term intention to have a child in the 2011 wave, we followed them up in the 2015 wave to see whether their plans were realised ('Outcome variable' in Table II). We distinguished three outcomes: (i) respondent had a child by 2015, or they (or their partner) were pregnant in 2015 ('Realised intention by having a child'), (ii) respondent did not have a child and changed intention ('No longer intended to have a child') and (iii) respondent did not have a child but retained a strong intention to have one ('Still intended to have a child'). The fact that the two surveys were 4 years apart whereas fertility intentions were expressed for a 3-year horizon allows for the time needed to achieve a pregnancy (Van Eekelen et al., 2017). --- Statistical analysis Multinomial logistic regression was used to determine whether age and the other relevant characteristics were associated with the outcome variable; this model extends binary logistic regression to outcome variables with more than two levels (Simonoff 2003, p. 429).
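Before turning to the covariates, the following is a rough pandas sketch of the three cumulative selection criteria described above; the column names and values are hypothetical stand-ins, not the actual HILDA variable names.

```python
# Hypothetical selection of respondents with a strong short-term intention.
import pandas as pd

wave_2011 = pd.DataFrame({                       # toy stand-in for the 2011 wave
    "wants_more_children": [True, True, False, True],
    "likelihood_0_10":     [9, 6, 8, 7],
    "planned_year":        ["2013", "2012", "2014", "within the next 3 years"],
})

short_term = {"2012", "2013", "2014", "within the next 3 years"}
strong_intenders = wave_2011[
    wave_2011["wants_more_children"]                 # would like more children
    & (wave_2011["likelihood_0_10"] >= 7)            # certainty of 7+ on the 0-10 scale
    & wave_2011["planned_year"].isin(short_term)     # planned birth within 3 years
]
print(strong_intenders)
```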
The covariates include the characteristics of respondents in 2011 described above: age, parity, relationship status, level of education, perceived ability to conceive, BMI, self-rated health and smoking status. Models were run separately for men and women. We were interested in giving an aggregate-level account of the effects of age and other covariates on the realisation of intentions, rather than analysing, for individuals, the effect of a change in an independent variable from one specified value to another (Mood 2010). We thus opted to present in the text predicted probabilities and confidence intervals for each variable, holding all other variables at their mean. The original coefficients of the analytical models ('Relative Risk Ratios' in Stata) are available in Supplementary Table SI. We tested the predictive power of each covariate for pairs of outcomes using chi-squared tests as described in the note of that table. We also tested the overall significance of the introduction of each of the covariates in the models using global likelihood ratio (LR) chi-square tests (testing the hypothesis that the coefficients are simultaneously equal to zero for all the categories of the covariate and between all the levels of the response). In a multinomial logistic regression, such tests indicate the predictive power of the covariates for all the outcomes together, rather than for pairs of them. The results of these global tests are available in Supplementary Tables SII and SIII. Note that in multinomial models, individual category coefficients can be substantively and statistically significant even if the variable is overall deemed non-significant (Long & Freese, 2006). All statistical analyses were performed in Stata 14.2 (StataCorp, 2015). --- Results Overall, two-thirds of men (65%) and women (64%) had the child they had planned within the 4-year interval, and 12% of men and 13% of women changed their intention (Table II). Tables III and IV present the predicted probabilities and confidence intervals of the three outcomes, obtained from the multinomial logistic regressions for men and women. The sociodemographic variables had significant predictive power for discriminating between the outcomes (P < 0.05 on LR tests, except for male level of education where P = 0.052), whether in an empty model or in the model with all the other variables already introduced (Supplementary Tables SII and SIII). For both men and women, the predicted probability of realising their strong fertility intention declined with age (Tables III and IV). However, the decline was much steeper for women. For men, the estimated probability of having a child was highest at age 18-25 (73%), declining to 57% at age 38-45. For women, a steep decrease in the probability of having a child occurred from age 35 onwards, with estimated probabilities of realising intentions falling from 70% at age 29-31 to 61% at age 32-34, 48% at age 35-37 and 23% at age 38-41. There was a corresponding increase in changing plans to have children: in the oldest age group, by 2015, 42% were predicted to no longer strongly intend to have a child compared with just 5% at age 29-31. Surprisingly, more than one-third (35%) of women aged 38-41 in 2011 still intended to have a child when asked again in 2015 when they were aged 42-46. Men also more frequently changed their reproductive plans at later ages, but to a lesser extent than women.
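To make the modelling step concrete, here is a minimal Python sketch of a three-outcome multinomial logit with predicted probabilities evaluated at the covariate means, using simulated data and hypothetical variable names; the authors ran their models in Stata 14.2, so this only mirrors the general approach rather than reproducing their analysis.

```python
# Simulated three-outcome multinomial logit with predicted probabilities at covariate means.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 500
X = pd.DataFrame({
    "age_35plus": rng.integers(0, 2, n),   # toy covariates (hypothetical names)
    "partnered":  rng.integers(0, 2, n),
    "parity_one": rng.integers(0, 2, n),
})
# Outcome codes: 0 = had a child, 1 = no longer intends, 2 = still intends (simulated).
lin = 0.8 * X["age_35plus"] - 0.6 * X["partnered"]
p_abandon = 1 / (1 + np.exp(-(lin - 1.0)))
y = np.where(rng.random(n) < p_abandon, 1, rng.integers(0, 3, n))

exog = sm.add_constant(X)
result = sm.MNLogit(y, exog).fit(disp=False)

at_means = exog.mean().to_frame().T            # all covariates held at their means
print(result.predict(at_means))                # predicted probability of each outcome
```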
Relationship status in 2011 strongly and significantly influenced the capacity to realise intentions within 4 years among both men and women. Married people were the most likely to realise their intention (M: 77%, F: 74%), followed by people living in cohabiting relationships (64% for both sexes). Among single people (i.e. living without a partner), 40% of men and 45% of women realised their intention; this seemingly high share is partly explained by changing relationship status, as around half of them had partnered between 2011 and 2015. Men and women who already had one child were the most likely to realise their intention (M: 73%, F: 72%). In contrast, those who had two children were the most likely to abandon further childbearing goals. Education level was positively related to achieving childbearing intention, with highly educated men and women most likely to have had a child (M: 73%, F: 71%). Low-educated women had a significantly smaller predicted probability of realising their intentions than their more educated counterparts (51%). Finally, the epidemiological variables were related to the outcomes in the null model (model with no other covariate), except perceived ability to conceive in 2011 and self-rated health for men, but had no significant predictive power in the full model (Supplementary Tables SII and SIII). Men's predicted probability of having a child when they reported 'Excellent' health was 77%, in contrast to 60-64% for those with 'Very Good', 'Good', or 'Fair/poor' health (Table III). Nonetheless, our tests suggested that self-rated health was overall not an important determinant of the realisation of fertility intentions. Perceived ability to conceive, BMI and smoking status displayed no significant effects on the predicted probability of the outcomes once the demographic predictors were accounted for. Women aged over 38 experience a strong biological fertility decline which possibly dominates all other factors. This may bias the coefficients of the other variables in the model. Therefore, we conducted a separate sensitivity analysis excluding women aged 38+. This exclusion did not significantly change the effects observed in the model (results available upon request). --- Discussion Our study brings attention to the role of reproductive age in realising short-term reproductive plans: the analysis reveals a clear-cut contrast between men and women, which persists after controlling for other confounding variables. A majority of men and women who strongly intended to have a child in 2011 had achieved their reproductive plan within 4 years. However, we also found a strong age-related decline in achieving reproductive plans for women starting in their mid-30s, and a corresponding increase in revising plans to have children. In contrast, men in their late 30s and early 40s still maintained a relatively high probability of having the child they intended. The strong age-related decline in intention realisation among women is consistent with the findings on age-related increase in infertility, sterility and pregnancy complications. Results are also consistent with the perceived social age deadlines for childbearing, where age 40 is often seen as a boundary after which women should not have children (Billari et al., 2011): many women abandon or revise their fertility plans when they approach this normative age limit. The limited age-related decline in intention realisation for men is likely due to their slower pace of reproductive aging (Kidd et al. 2001; Sartorius and Nieschlag 2010), their higher perceived social age deadline for childbearing (Billari et al., 2011) and the age difference within couples.
Men tend to partner with younger women (Bozon 1991), with larger age differences found for men who partner at older ages (Ní Bhrolcháin and Sigle-Rushton, 2005; Beaujouan, 2011). The impact of age differences in partnering patterns between men and women should be explored further in future research. For both sexes, partnership status was an important determinant of the realisation of their reproductive plans. Men and women who did not live with a partner in 2011 had a lower likelihood of realising their initial fertility plans. At the same time, they were more likely to continue to intend to have children, and half of them had partnered by 2015. In Australia, as in most other highly developed countries, there is a strong two-child preference (Kippen et al., 2007), confirmed in our analysis: women and men with one child are the most likely to have a strong short-term fertility intention and to realise it. Highly educated men and women were the most likely to realise their strong short-term intention within 4 years. As they have children later in life, this finding also reflects their awareness that they cannot wait much longer to realise their plans (Kreyenfeld 2002). The epidemiological variables had explanatory power before controlling for the other variables, while only age, parity, relationship status and level of education remained significant in the full model. In sum, epidemiological factors appear less important than sociodemographic factors in explaining the realisation of strong short-term fertility intentions. Surprisingly, perceived ability to conceive was not significantly associated with realising intentions. The HILDA survey data do not allow us to get deeper insights on this result; the data provide neither sufficient information on the use of reproductive treatments nor respondents' assessment of the reasons for not realising their intention. This points to a broader limitation of our study: while our data confirm the strong effect of age on the ability of women to realise their reproductive plans, we cannot distinguish the contribution of biomedical factors (especially infertility, miscarriages and poor health) from that of socioeconomic and cultural influences, including the cultural norms about appropriate ages for childbearing. Another broader limitation pertains to sample size. Our analytical sample (M: 447, F: 528) was sufficient to identify the role of age, sex and other factors analysed here, but at times resulted in wide confidence intervals and did not allow more detailed analysis of interactions between age and other intervening variables. Our study sheds light on the gender-specific role of age and other factors for realising reproductive intentions. As more women and men postpone having children until their late 30s and early 40s, they need to be aware of biomedical and other constraints and limitations that may prevent them from realising their reproductive plans. Our study confirms that this might be especially relevant for women of older reproductive ages: many in this study were postponing their childbearing plans and intending to have a child after age 40. For women with strong reproductive intentions, this study highlights the importance of not postponing childbearing to improve the chances of realising their plans (Habbema et al., 2015).
Future research should shed more light on the contribution of men's and women's age to realising reproductive plans among couples and, using more waves of the survey when available, also study longer-term successes, failures and changes in realising reproductive plans. Future surveys could also better capture the dynamics with which women and men facing reproductive difficulties either abandon their reproductive plans or seek treatment, and the extent to which this treatment helps them achieve their desired family size. --- Reproductive intentions / fertility / reproductive aging / parental age / Australia / gender differences --- Supplementary data Supplementary data are available at Human Reproduction online. --- Authors' roles Éva Beaujouan initiated this research and made a substantial contribution to the design of the work and to the analysis and interpretation of data. She drafted the first version of the article and revised it critically. Anna Reimondos made a substantial contribution to the design of the work, to the acquisition of data and to the analysis and interpretation of data; she drafted the article and revised it critically. Edith Gray, Ann Evans and Tomáš Sobotka made substantial contributions to the design of the work and to the analysis, and helped draft the article and revise it critically. All five authors approved the final version of the manuscript prior to publication. --- Conflict of interest None to declare.
STUDY QUESTION: What is the likelihood of having a child within 4 years for men and women with strong short-term reproductive intentions, and how is it affected by age? SUMMARY ANSWER: For women, the likelihood of realising reproductive intentions decreased steeply from age 35; the effect of age was weak and not significant for men. WHAT IS KNOWN ALREADY: Men and women are postponing childbearing until later ages. For women, this trend is associated with a higher risk that childbearing plans will not be realised due to increased levels of infertility and pregnancy complications. STUDY DESIGN, SIZE, DURATION: This study analyses two waves of the nationally representative Household, Income and Labour Dynamics in Australia (HILDA) survey. The analytical sample interviewed in 2011 included 447 men aged 18-45 and 528 women aged 18-41. These respondents expressed a strong intention to have a child in the next 3 years. We followed them up in 2015 to track whether their reproductive intention was achieved or revised. PARTICIPANTS/MATERIALS, SETTINGS, METHODS: Multinomial logistic regression is used to account for the three possible outcomes: (i) having a child, (ii) not having a child but still intending to have one in the future and (iii) not having a child and no longer intending to have one. We analyse how age, parity, partnership status, education, perceived ability to conceive, self-rated health, BMI and smoking status are related to realising or changing reproductive intentions. MAIN RESULTS AND THE ROLE OF CHANCE: Almost two-thirds of men and women realised their strong short-term fertility plans within 4 years. There was a steep age-related decline in realising reproductive intentions for women in their mid- and late 30s, whereas men maintained a relatively high probability of having the child they intended until age 45. Women aged 38-41 who planned to have a child were the most likely to change their plan within 4 years. The probability of realising reproductive intentions was highest for married and highly educated men and women and for those with one child. LIMITATIONS, REASONS FOR CAUTION: Our study cannot separate biological, social and cultural reasons for not realising reproductive intentions. Men and women adjust their intentions in response to their actual circumstances, but also in line with their perceived ability to have a child or under the influence of broader social norms on reproductive age. WIDER IMPLICATIONS OF THE FINDINGS: Our results give a new perspective on the ability of men and women to realise their reproductive plans in the context of childbearing postponement. They confirm the inequality in the individual consequences of delayed reproduction between men and women. They inform medical practitioners and counsellors about the complex biological, social and normative barriers to reproduction among women at higher childbearing ages.
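To make the modelling setup described above concrete, the sketch below fits a multinomial logit over the three outcomes on synthetic data. It is an illustration only, not the authors' code: the HILDA microdata are not reproduced here, and the variable names and the data-generating step are hypothetical.

```python
# Illustrative sketch only (not the authors' code): a multinomial logit over the
# three outcomes described above, fitted on synthetic data with hypothetical names.
# 0 = had a child, 1 = no child but still intends, 2 = no child and no longer intends.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 500
df = pd.DataFrame({
    "age": rng.integers(18, 46, n),        # age at the 2011 interview
    "parity": rng.integers(0, 3, n),       # number of children in 2011
    "partnered": rng.integers(0, 2, n),    # living with a partner in 2011
    "high_educ": rng.integers(0, 2, n),    # degree-level education
})

# Synthetic outcome with an age gradient, only so that the example runs end to end.
logits = np.column_stack([
    np.zeros(n),                                      # baseline: had a child
    0.08 * (df["age"] - 30) - 0.5 * df["partnered"],  # still intending
    0.12 * (df["age"] - 30) - 0.3 * df["parity"],     # no longer intending
])
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
df["outcome"] = [rng.choice(3, p=p) for p in probs]

X = sm.add_constant(df[["age", "parity", "partnered", "high_educ"]])
result = sm.MNLogit(df["outcome"], X).fit(disp=False)
print(result.summary())
# Predicted probabilities for each outcome, e.g. to tabulate by age group afterwards
predicted = result.predict(X)
```

In a setup of this kind, the fitted model yields a predicted probability for each of the three outcomes per respondent, which is how probabilities by age, parity, partnership status or education, such as those reported in the results, can be derived.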
Introduction Discussions about what constitutes a psychically traumatic event have been going on for a long time. In the 19th century and the first half of the 20th century, the use of the term "trauma" was largely limited to physical trauma. The idea that traumatic events other than physical harm can also cause problems emerged after the Franco-Prussian War of 1870 (Çolak et al. 2010). The Substance Abuse and Mental Health Services Administration (2014) defines trauma as an event or series of events that are emotionally disturbing or life-threatening for an individual, or the lasting adverse effects of these events on the individual's mental, physical, social, emotional or spiritual well-being. While the Diagnostic and Statistical Manual of Mental Disorders-III (DSM-III) (APA 1980) began to describe traumatic events as 'beyond the usual human experience', in DSM-IV (APA 1994) the experience of helplessness, fear and horror, and the threat to life in the face of the event, became the determinant of a traumatic event. In DSM-5 (APA 2013), on the other hand, the scope of all these was expanded and the effect of the person's subjective experience on the traumatic event was eliminated; the traumatic experience was medicalized like an infectious disease and defined as a "standard" illness created by a single microorganism (Başterzi et al. 2019). According to DSM-5, trauma may occur when an event is directly experienced or witnessed, experienced by a family member or close friend, or experienced professionally, and involves facing death or serious injury, or being sexually assaulted (APA 2013). In DSM-5, the subjective reaction of the person is not taken into account; instead, the ways of encountering events are listed in order to clarify the definition of a traumatic event. According to DSM-5, the person may have experienced or witnessed the event themselves, or it may have happened to a close friend or a close relative. The expression "physical integrity of self and others", which was in previous DSMs, was removed, and for the first time the expression "sexual assault" was included (Çolak et al. 2010). The World Health Organization (WHO 1995) defines trauma through events such as accidents, natural disasters, fire, rape, harassment, exposure to blackmail, the sudden death of a loved one, life-threatening illness, war, fraud, seeing a corpse, seeing someone injured or killed, home invasion, being threatened, being victimized by terrorism, physical violence/attack, divorce, and abandonment. This definition focuses on direct actions rather than on the psycho-social effects on the person. Terr (2003) first distinguished between two types of trauma: Type I and Type II. Type I, single-incident trauma, results from a single event, such as a rape or witnessing a murder. Type II, complex or repetitive trauma, results from "repeated exposure to extreme external events." Survivors of Type II trauma generally have at least some memories of their experience. Trauma can occur due to extraordinary events such as violence and harassment, or it can be caused by ordinary everyday events. Regardless of how it occurs, trauma is generally the most avoided, ignored, belittled, denied, and untreated cause of human suffering (Levine and Kline 2014).
While some traumas, such as physical and sexual abuse, domestic violence, exposure to partner violence, rape, abuse and death, are quite obvious, chronic experiences such as emotional neglect, a careless caregiver, a parent addicted to alcohol and drugs, or being threatened are subtler and more insidious. Most clients may experience different types of trauma that cause toxic stress and trigger complex trauma reactions (Cloitre et al. 2009). The level of being affected by trauma varies according to the gender, age, and psycho-social development of the individual. Existing vital risks such as substance abuse, disability and mental illness, the individual's strengths, and existing social support networks also affect the level of being affected by trauma (Ogden et al. 2006). Trauma-related disorders, previously classified under the anxiety disorders section, are classified under trauma- and stressor-related disorders in DSM-5. The related disorders according to the new classification are: reactive attachment disorder, acute stress disorder (ASD), post-traumatic stress disorder (PTSD), adjustment disorders (ADs), and dissociative disorders (DDs). Environmental risk factors, including the individual's developmental experience, thus become a major diagnostic consideration (Friedman et al. 2011, Koç 2018). In the International Classification of Diseases-11 (ICD-11), a new classification has been made under the title of "disorders specifically associated with stress": post-traumatic stress disorder, complex post-traumatic stress disorder, prolonged grief disorder, adjustment disorder, reactive attachment disorder, and acute stress reaction (Maercker et al. 2013). Both DSM-5 and ICD-11 include post-traumatic stress disorder (PTSD) among trauma- and stressor-related disorders. An important group of clients at the center of trauma-informed care consists of people with post-traumatic stress disorder. Trauma-informed care argues that traditional standard treatment models can trigger trauma survivors and exacerbate their symptoms. Trauma-informed programs are designed to be more supportive and to avoid re-traumatization for people with post-traumatic stress disorder (SAMHSA 2014). Trauma, in any case, does not influence everybody in the same way. While some people are not affected even though they have experienced very terrible events, those who merely witness such events may be more affected. The traumatic response is profoundly individualized and molded by a wide range of factors. The trauma-informed care approach of professionals determines the course of the long-term effects of the traumatic event (Wilson et al. 2013). The trauma-informed approach to care has evolved over the past 30 years from various streams of thought and innovation. Nowadays, it is practiced in a wide variety of environments, including mental health and substance abuse rehabilitation centers, child welfare systems, schools, and criminal justice institutions (Cohen et al. 2012). Although it is so widespread, trauma-informed care is not a "one approach fits all" model. Interventions should always be determined according to the individual situation of the client. Gender and type of trauma are some of the specific requirements that will determine the type of intervention (Kelly et al. 2014). While there are similarities between trauma-informed care and trauma resolution therapy, the two are quite different. Trauma-focused interventions can be a precursor to targeted therapy for many clients.
Trauma-informed care-based practices help clients with traumatic experiences discuss their painful experiences and reduce their anxiety levels. This helps clients to regulate their emotions and behaviors (Cohen et al. 2012). Unlike classical theory and treatment methods, trauma-informed care can be used by mental health professionals in conjunction with any therapy. This method tries to understand the behaviors and coping mechanisms of traumatized clients and the problems caused by traumatic events. Trauma-informed care is a solution-oriented approach rather than a problem-oriented one (Tekin and Başer 2021). Trauma-informed care requires professionals working with clients with a trauma history to have a comprehensive knowledge of trauma. In addition, these professionals should have knowledge and awareness about the impact of trauma on the lives and actions of clients (Güneş Aslan 2022). This study aims to adapt the Trauma Informed Care Scale to Turkish culture by conducting validity and reliability studies. Various scales (Kağan et al. 2012, Tanhan and Kayri 2013, Tekin and Kırlıoğlu 2021, Taytaş and Tanhan 2022) are available in the literature for use in research on trauma in Turkey; however, there is no scale directly related to trauma-informed care that has been developed or adapted and whose validity and reliability studies have been conducted. Therefore, this study is very necessary and important in terms of meeting this need in the literature and the field. --- Methods --- Sample This research uses a descriptive survey model, which aims to portray the existing situation as it is, without changing it. The population of the study consisted of mental health professionals (psychiatrists, social workers, psychologists, psychological counselors and psychiatric nurses) working with individuals with a trauma history. Since the size of the population is not known and this is a scale validity study, the sample size was calculated according to the number of scale items. For the 21-item scale, the plan was to reach five times the number of scale items, so 105 participants were determined as the minimum sample size. According to Tavşancıl (2002), the sample size should be at least five times the number of items in scale validity studies. Since the data were collected through online platforms, participants from all over Turkey were included in the study. The study was completed with 161 participants, exceeding the targeted minimum sample size. Inclusion criteria for the study were: volunteering to participate in the study, being a mental health professional, working actively in the field for more than a year, and being able to speak and read Turkish. Exclusion criteria were: working in another job despite having a vocational diploma in the field of mental health, being assigned to another unit despite being a mental health professional, and having less than one year of professional experience. In addition, 17 participants who did not meet the inclusion criteria were excluded from the study. --- Procedure First of all, permission was obtained via e-mail from the authors who developed the scale. In addition, the authors' opinion and approval were received to replace the expression "patient" in the original items of the scale with the expression "client", which is used in the field of mental health.
Prior to data collection, ethics committee approval was obtained from the Necmettin Erbakan University Health Sciences Research Ethics Committee (Date: 06.04.2022, Number: 21/205). The rules of the Declaration of Helsinki were complied with throughout the research. The participants were informed that the research results could be used for scientific purposes, and their written consent was obtained. This study was conducted by two researchers competent in the fields of clinical social work and behavioral psychology. The data were collected using the convenience sampling method. Convenience sampling is the method that provides the easiest way to reach a sample representing the population (Gürbüz and Şahin 2018). Participants representing the sample were reached through the researchers' peers and their professional associations. In addition, the survey link was announced and shared in professional WhatsApp, Facebook and Telegram groups. The data were collected through the online data collection platform surveey.com. Repeated logins were blocked by IP and cookie controls, so a participant could only take part in the study once. Information about the purpose and scope of the study was given to the participants on the entry page of the online data collection platform, and it was assumed that participants who gave their consent by clicking the "participate in the study" button took part voluntarily. --- Language Validity Translating the scale items from the original language into the language of the target culture is an important step in cultural adaptation studies. Therefore, for the language validity of the scale, the scale items, which were originally in English, were translated into Turkish. During the language validity process, the items of the scale, originally called the Trauma Informed Care Scale, were translated into Turkish by two different sworn translators. At least two independent translators are required at the language validity stage (Aksayan and Gözüm 2002). Then, a translation evaluation form containing the English scale items and their Turkish translations was prepared. This form was sent to a total of six academicians with published work on trauma, three of whom were social workers and three psychologists. The corrections from these academicians were compared by the researchers, and the Turkish version of the scale was created by adopting the translations thought to best express each item. This version was administered to 20 participants in a pilot study, items thought to be incomprehensible were reviewed, and the final version of the scale to be used in the main study was created. --- Data Collection Tools The data of the study were obtained using the "Demographic Information Form" and the "Trauma Informed Care Scale". --- Demographic Information Form The descriptive information form created by the researchers consists of 9 questions covering gender, age, education, occupation, duration of professional experience, knowledge of trauma-informed care, use of trauma-informed care in professional interventions, training on trauma-informed care, and the need for training on trauma-informed care. Trauma Informed Care Scale (TICS) The scale was developed by King et al. (2019) and consists of 21 items and 3 subscales: knowledge, attitude and practice. There are 6 items on "Knowledge", 9 items on "Attitude" and 6 items on "Practice".
There is no reverse-scored item in the scale. The scale enables the determination of the trauma-informed care related knowledge, attitude and practice levels of mental health professionals working with individuals with a trauma history. The five-point Likert-type scale is scored as Strongly Disagree (0), Disagree (1), Undecided (2), Agree (3), Strongly Agree (4). Although the scale does not have a cut-off score, a high score indicates a need to learn about trauma-informed care. In the validity study conducted with 592 healthcare professionals, confirmatory factor analysis revealed that the 21 items provided the strongest internal consistency reliability for the overall tool and for each factor. The Cronbach's alpha value was 0.86 for the total scale, 0.84 for the knowledge subscale, 0.74 for the attitude subscale, and 0.78 for the practice subscale (King et al. 2019). --- Statistical Analysis The data obtained in the research were analyzed using SPSS (Statistical Package for the Social Sciences) for Windows 22.0. Before the analysis, skewness and kurtosis values, histograms and Q-Q plots were examined to assess whether the data set was normally distributed. Skewness and kurtosis values ranged from -1 to +1, indicating normal distribution. Additionally, histograms and Q-Q plots showed that each of the variables was normally distributed. Frequency analysis, correlation, exploratory factor analysis, and reliability analysis were used for data analysis. The Pearson correlation coefficient was preferred because the scale was a Likert-type interval scale, the data were normally distributed, and the sample size was sufficient. In addition, Bartlett's Test of Sphericity (BTS) was used to assess the significance of the correlation coefficients between variables. The Cronbach's alpha coefficient was calculated for the reliability of the scale. Since this was a scale adaptation study, only EFA (exploratory factor analysis) was considered sufficient, and CFA (confirmatory factor analysis) was not considered necessary. Possible patterns can be revealed more clearly in EFA, and structures that cannot be noticed in CFA can be discovered via EFA. For this reason, possible changes in the structure in adaptation studies can be more easily understood with the help of EFA (Orçan 2018). --- Results The sample of this study consisted of a total of 161 mental health professionals, 102 (63.4%) female and 59 (36.6%) male, aged between 21 and 60 (Mean = 33.16 ± 8.72). Of the participants, 38 (23.6%) were psychiatrists, 43 (26.7%) were psychologists, 37 (23%) were psychological counselors, and 43 (26.7%) were social workers. 90 (55.9%) of the participants held a bachelor's degree, 51 (31.7%) held a master's degree, and 20 (12.4%) held a PhD. In terms of professional experience, the largest groups were those who had worked for 1-3 years (32.9%, n = 53) and those who had worked for more than 10 years (32.3%, n = 52). When the participants were analyzed in terms of the institution they worked for, the largest group consisted of participants working in the Ministry of Health (37.9%, n = 61). The findings regarding the demographic characteristics of the participants are provided in Table 1. While 48 (29.8%) of the participants stated that they had heard of the concept of trauma-informed care before, 21 (13%) stated that they used the trauma-informed care model in their professional intervention processes.
In addition, 23 (14.3%) participants stated that they had received training on trauma-informed care during their undergraduate education, while 101 participants (62.7%) stated that they needed training on trauma-informed care. The opinions of the participants about trauma-informed care are provided in Table 2. In order to test the construct validity of the Trauma Informed Care Scale, EFA with the principal components method was conducted using varimax rotation with Kaiser normalization. EFA is a statistical method frequently used in social science studies to determine the hidden factors underlying observed variables (Orçan 2018). Examination of Bartlett's sphericity test revealed that the data met the sphericity assumption (χ2 (210) = 1151.34, p < .001). The analysis yielded a three-factor structure with eigenvalues above 1, a KMO (Kaiser-Meyer-Olkin) value of 0.75, and 44.90% of the total variance explained. However, the items "Recovery from trauma is possible", "Paths to healing/recovery from trauma are different for everyone" and "Informed choice is essential in healing/recovery from trauma" were excluded because they did not load on their original subscales and had loadings below 0.32, and the analyses were repeated. The results indicated that a 3-factor structure emerged, which explained 50.36% of the total variance and included all items in the subscales of the original scale. As a result of the analysis, 3 items in the attitude subscale were removed from the scale, and the final 18-item version that can be used in Turkish culture was created. The correlation analyses indicated that the total mean score was highly and positively correlated with all subscales. The internal consistency values of the scale were also examined. The Cronbach's alpha coefficient is a reliability value that indicates whether the scale items are related to the characteristic to be measured. It provides information about how consistent the scale items are with each other and how coherent a group they form (Büyüköztürk 2010). The Cronbach's alpha internal consistency coefficient was calculated as 0.81 for the practice subscale, 0.72 for the knowledge subscale, and 0.82 for the attitude subscale. The Cronbach's alpha internal consistency coefficient calculated for the whole scale was 0.80. The findings are provided in Table 3. Pearson correlation analysis was conducted to examine the relationships between the total mean score and the subscales of the Trauma Informed Care Scale. The results demonstrated that the total mean score was positively and highly correlated with all subscales (Knowledge: r = 0.66, p < .001; Attitude: r = 0.71, p < .001; Practice: r = 0.73, p < .001). There were positive and low correlations between knowledge and attitude (r = 0.35, p < .001), between knowledge and practice (r = 0.20, p < .001), and between attitude and practice (r = 0.20, p < .001). The findings are provided in Table 4.
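For readers who want to reproduce comparable analyses outside SPSS, the sketch below mirrors the main steps reported above (Bartlett's test, KMO, a three-factor principal extraction with varimax rotation, the 0.32 loading cut-off, and Cronbach's alpha) in Python with the factor_analyzer package. This is an illustration on simulated responses under stated assumptions, not the study's actual workflow or data.

```python
# Illustrative re-implementation of the reported EFA and reliability steps
# (the study itself used SPSS 22.0); the simulated responses are placeholders.
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import (calculate_bartlett_sphericity,
                                             calculate_kmo)

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha = (k/(k-1)) * (1 - sum of item variances / variance of total)."""
    k = items.shape[1]
    return (k / (k - 1)) * (1 - items.var(ddof=1).sum() / items.sum(axis=1).var(ddof=1))

# Hypothetical responses: 161 participants x 21 items, each scored 0-4
rng = np.random.default_rng(1)
data = pd.DataFrame(rng.integers(0, 5, size=(161, 21)),
                    columns=[f"item{i}" for i in range(1, 22)])

chi2, p = calculate_bartlett_sphericity(data)   # sphericity assumption
_, kmo_total = calculate_kmo(data)              # sampling adequacy
print(f"Bartlett chi2 = {chi2:.2f}, p = {p:.4f}, KMO = {kmo_total:.2f}")

fa = FactorAnalyzer(n_factors=3, rotation="varimax", method="principal")
fa.fit(data)
loadings = pd.DataFrame(fa.loadings_, index=data.columns, columns=["F1", "F2", "F3"])
weak = loadings.abs().max(axis=1) < 0.32        # candidates for removal
print(loadings.round(2))
print("items below the 0.32 cut-off:", list(loadings.index[weak]))
print("cumulative variance explained:", fa.get_factor_variance()[2].round(3))

# Reliability for one subscale, e.g. the first six (knowledge) items
print("alpha (items 1-6):", round(cronbach_alpha(data.iloc[:, :6]), 2))
```

On real item responses, dropping the flagged items and re-fitting the three-factor solution corresponds to the "analyses were repeated" step described above.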
--- Discussion The main purpose of this study was to adapt the knowledge, attitude and practice measurement tool related to trauma-informed care developed by King et al. (2019) into Turkish and to demonstrate its validity and reliability with scientific methods. As a result of the analyses, the Cronbach's alpha reliability coefficients calculated for both the total scale and the subscales were found to be at satisfactory levels. The coefficient was 0.81 for the practice subscale, 0.72 for the knowledge subscale, 0.82 for the attitude subscale, and 0.80 for the total scale. An internal consistency coefficient above 0.70 indicates that a scale is highly reliable (Büyüköztürk 2010). The Cronbach's alpha values obtained in the original study of the scale were 0.84 for knowledge, 0.74 for attitude and 0.78 for practice (King et al. 2019). According to the EFA results, the KMO value was 0.75 and the Bartlett test χ2 value was 1151.34 (p < .001). A KMO value between 0.5 and 0.7 is considered mediocre, and one between 0.7 and 0.8 is considered good (Hutcheson and Sofroniou 1999). The BTS value should be significant at the p < .05 level (Alpar 2020); the significance level in this study was p < .001. These results showed that the sample and the scale were suitable for factor analysis. The Turkish version of the scale was three-dimensional, as in the original, and it explained 44.90% of the variance in the characteristic measured by the three-dimensional scale. A high explained variance can be interpreted as an indicator that the related concept or construct is measured well (Büyüköztürk 2007). In addition, the eigenvalue results (Alpar 2020), which can be used as an indicator of how many factors a scale should consist of, show that a 3-factor structure is appropriate. The factor loadings of the items ranged between 0.43 and 0.85. Factor loadings above 0.30 indicate strong construct validity (DeVellis 2017). The results showed that the scale met the validity criteria. However, the items "Recovery from trauma is possible", "Paths to healing/recovery from trauma are different for everyone" and "Informed choice is essential in healing/recovery from trauma" were excluded from the analysis because they did not load on their original subscales and had loadings below 0.32, and the analyses were repeated. The results showed that a 3-factor structure emerged, which explained 50.36% of the total variance and included all items in the subscales of the original scale. As a result of the analysis, 3 items in the attitude subscale were removed from the scale, and the final 18-item version that can be used in Turkish culture was created. In the final stage of the original study, in which a 28-item scale model had been created, a total of 7 items were removed, 5 from the knowledge subscale and 2 from the attitude subscale, and the final 21-item model of the scale was created (King et al. 2019). The removal of the three items after the analysis in this study can be explained by the assumption that the related items may not carry the same meaning in Turkish culture. Pearson correlation analysis revealed that the correlation between the knowledge and attitude sub-dimensions was 0.35, the correlation between the knowledge and practice sub-dimensions was 0.20, and the correlation between the attitude and practice sub-dimensions was 0.20. In the original study by King et al. (2019),
the correlation coefficient between the knowledge and attitude sub-dimensions was 0.55, the correlation coefficient between the knowledge and practice sub-dimensions was 0.28, and the correlation coefficient between the attitude and practice sub-dimensions was 0.65. Compared to the original study, the correlation values were relatively lower in this study. In particular, the correlation between the attitude and practice sub-dimensions was much weaker than in the original study. However, all correlations were positive and significant, as in the original study. Overall, the findings obtained from our study are in line with the findings of the original study of the scale. An important limitation of the study is that the research data were collected online. The findings are limited to the answers given by the mental health professionals participating in the research, and the results can only be generalized to the mental health professionals involved in the study. Additionally, although all participants were mental health professionals, this does not mean that they have the same level of experience with trauma. Researchers should take this into account when evaluating the study. Finally, since the original version of the scale had not previously been subjected to validity and reliability studies in other cultures, the comparison of the findings obtained from this study was limited to the findings of the original study. --- Conclusion As a result of the statistical analyses, the validity and reliability of the Trauma Informed Care Scale have been demonstrated in the light of scientific data. With this study, a scientific measurement tool that enables the determination of the knowledge, attitude and practice levels of healthcare professionals working with individuals with a trauma history has been brought to the literature. The Trauma Informed Care Scale is a valid and reliable measurement tool that can be used by professionals (physicians, nurses, psychologists, psychological counselors, social workers) working with trauma survivors, and by researchers planning studies on trauma-informed care and/or trauma-sensitive care. --- Conflict of Interest: No conflict of interest was declared. Financial Disclosure: No financial support was declared for this study. --- Addendum-1. Trauma Informed Care Scale (Turkish Version) Trauma Informed Care Scale Turkish Version (Travma Bilgili Bakım Ölçeği) Instruction: This scale measures the level of knowledge, attitudes and practices of mental health professionals working with trauma-victimized clients regarding trauma-informed care. Items are scored as Strongly Disagree (0), Disagree (1), Neutral (2), Agree (3), Strongly Agree (4). Please mark the most appropriate option for you.
1. Travmaya maruz kalmak yaygındır.
2. Travma fiziksel, duygusal ve zihinsel sağlığı etkiler.
3. Madde kullanımı sorunları, geçmişteki travmatik deneyimlerin veya olumsuz çocukluk yaşantılarının göstergesi olabilir.
4. Ruh sağlığı sorunları ile geçmiş travmatik deneyimler veya olumsuz çocukluk yaşantıları arasında bir bağlantı vardır.
5. Güvensiz davranış, geçmiş travmatik deneyimlerin veya olumsuz çocukluk yaşantılarının göstergesi olabilir.
6. Travma istemsiz bir şekilde tekrarlayabilir.
7. İnsanlar kendi travmalarını toparlama ve iyileştirme konusunda uzmandırlar.
8. Danışanlarımız ve aileleriyle etkin bir şekilde çalışmak için travma bilgili uygulama önemlidir.
9. Travma bilgili uygulama hakkında kapsamlı bir anlayışa sahibim.
10. Travma bilgili uygulama ilkelerine inanıyor ve bunları destekliyorum.
11. Travma bilgili uygulama hakkında uzmanlığımı meslektaşlarımla paylaşıyor ve onlarla etkin bir şekilde işbirliği yapıyorum.
12. Travma bilgili uygulama konusunda daha fazla eğitim almak istiyorum.
13. Danışanlarla olan tüm etkileşimlerde şeffaflığı koruyorum.
14. Danışanlara seçenekler sunuyorum ve kararlarına saygı duyuyorum.
15. Danışanların ve meslektaşlarımın kendi güçlü yanlarını fark etmelerine yardımcı oluyorum.
16. Çalışmalarıma başlamadan önce tüm danışanları bilgilendiririm.
17. Her danışanla olan etkileşimim benzersizdir ve onların özel ihtiyaçlarına göre uyarlanmıştır.
18. Öz-bakım yapıyorum (kendi ihtiyaçlarım ve sağlığımla ilgileniyorum).
--- Scoring Knowledge: items 1-6; Attitude: items 7-12; Practice: items 13-18
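To illustrate how the item-to-subscale mapping above translates into scores, here is a minimal Python sketch; the column names item1..item18 are hypothetical, and the scoring simply sums the 0-4 item responses within each subscale.

```python
# Minimal scoring sketch for the 18-item Turkish TICS: items 1-6 Knowledge,
# 7-12 Attitude, 13-18 Practice; each item is scored 0 (Strongly Disagree) to 4.
import pandas as pd

SUBSCALES = {
    "knowledge": [f"item{i}" for i in range(1, 7)],
    "attitude": [f"item{i}" for i in range(7, 13)],
    "practice": [f"item{i}" for i in range(13, 19)],
}

def score_tics(responses: pd.DataFrame) -> pd.DataFrame:
    """Return subscale sums and the total score for each respondent (one row each)."""
    scores = pd.DataFrame(index=responses.index)
    for name, cols in SUBSCALES.items():
        scores[name] = responses[cols].sum(axis=1)
    scores["total"] = scores[list(SUBSCALES)].sum(axis=1)
    return scores

# Example: one respondent answering "Agree" (3) to every item
example = pd.DataFrame([[3] * 18], columns=[f"item{i}" for i in range(1, 19)])
print(score_tics(example))
```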
The aim of this study was to adapt the Trauma Informed Care Scale, developed to measure the level of knowledge, attitude and practice related to trauma-informed care, to Turkish culture by carrying out the necessary analyses. 161 mental health professionals participated in this survey-model study. The data were collected via convenience sampling using the Demographic Information Form and the Trauma Informed Care Scale, through the online data collection platform surveey.com. Most of the mental health professionals included in the study (70.2%) had never heard of the trauma-informed care model before, and 87% did not use this model in their practice. The EFA showed a 3-factor structure that explained 50.36% of the total variance, with all items falling into the subscales of the original scale. As a result of the analyses, 3 items belonging to the Attitude subscale were removed from the scale, and the final 18-item version that can be used in Turkish culture emerged. The correlation analyses showed that the total mean score was highly and positively correlated with all subscales. The Trauma Informed Care Scale is a valid and reliable measurement tool that can be used by mental health professionals (physicians, nurses, psychologists, psychological counselors, social workers) working with trauma-victimized clients and by researchers planning studies on trauma-informed care and/or trauma-sensitive care.
Introduction Adult learning and education (ALE) is currently gaining in importance in a policy discourse which looks at the human right to education for the future through the lens of lifelong learning (LLL) (Elfert 2019; UIL 2020; ICAE 2020). This paradigm shift calls for lifelong learning for all, and that includes ALE for all youth and adults. To better understand what this right entails, A Review of Entitlement Systems for LLL (Dunbar 2019), prepared for the International Labour Organization (ILO) and the United Nations Educational, Scientific and Cultural Organization (UNESCO), translates this as an entitlement for all adults at work and analyses the situation in sixteen countries, documenting achievements using a system of four stages. These stages range from the declaration of a commitment to lifelong learning (stage 1), across the declaration of an entitlement to lifelong learning (stage 2), and implementing elements of a lifelong learning entitlement (LLLE) (stage 3), to the successful fulfilment of an entitlement to lifelong learning (stage 4) (ibid.). Extending such an entitlement to all those working in the informal economy includes an additional two billion people worldwide, many of whom are "three times more likely to have only primary education (as the highest level of education) or no education as compared to workers in the formal economy" (Palmer 2020, p. 4). Thus, in the face of a reality where educational governance is dominated by the formal sector of education, a structural transformation of current institutions and systems is needed urgently (ibid., p. 49). If lifelong learning for all is to be achieved, increasing the participation of youth and adults in ALE is highly important. This calls for a closer look not only at all face-to-face and digital opportunities, but also for an analysis of the diversity of institutions and providers of ALE. In this context, our particular interest here is in community learning centres (CLCs), as they have increased in numbers and geographic spread, serving a growing number of people over the past three decades. Indeed, policymakers, as well as the wider "policy community" at all levels, are increasingly using CLC as a generic term to capture a variety of community-based places of adult learning (e.g. Ahmed 2014; Yamamoto 2015; Chaker 2017; Le 2018; Rogers 2019). CLCs have also received attention and become a concern in the global monitoring of education, training and learning. UNESCO's Fourth Global Report on Adult Learning and Education (GRALE 4) suggests throwing the net even wider: "While CLCs have been in the foreground of the discussions on institutional infrastructure, little attention has been given to traditional popular/liberal adult education institutions" (UIL 2019, p. 165). The latest Global Education Monitoring Report (GEM) 2021/2022 on Non-state actors in education: Who chooses? Who loses? opens the relevant section by stating: Community learning centres (CLCs) are increasingly recognized as playing an important role in providing education opportunities meeting local communities' needs (UNESCO 2021, p. 265). In this article, we take a closer look at some of these aspirations and developments through the lens of ALE's local, national and global dimensions. Our discussion is guided by the question: What conditions are conducive to having more and better ALE for lifelong learning -and which roles can CLCs and other community-based ALE institutions play?
We are particularly interested in the conditions that promote and support CLCs to live up to the expectations of participants, providers and stakeholders -and how local, national and global recommendations and initiatives could help to improve conditions, including levels of institutionalisation and professionalisation. In the following sections we look at how CLCs and other forms of institutionalised community-based ALE emerged. We investigate why this seems to be important both for current policy discourses within countries and for the global development agenda with its 17 Sustainable Development Goals (SDGs). While the fourth of these goals (SDG 4) specifically concerns education, aiming to "Ensure inclusive and equitable quality education and promote lifelong learning opportunities for all" (WEF 2016, p. 15), and, moreover, adopting an inclusive stance in ensuring that "no one is left behind" (ibid., p. 7), it has been noted that ALE is in fact still being left behind in the implementation of SDG 4 and its lifelong learning agenda (UIL 2017, 2019). ALE continues to remain on the margins as "the invisible friend" of the SDGs (Benavot 2018), although education and learning opportunities for youth and adults actually have the potential to support most of the other 16 SDGs (Schweighöfer 2019) and should be recognised for playing an important role in transformation and sustainability (Schreiber-Barsch and Mauch 2019). --- Literature review Local places in local communities, including "centres" where people learn together, exist in many corners of the world. Centres where adults gather to learn carry many names and are run by numerous providers in diverse settings. Some are government-supported and/or otherwise funded institutions for formally planned and accredited education and training. Others were created for other purposes and have been adapted, and possibly renamed, for different kinds of organised instruction. Some may be more diverse and flexible, as locally determined and managed forms of learning complementing facilities set up for some other purpose, such as teaching about health, farming and animal husbandry, or workers' rights, or reaching particular groups of learners such as women or those retired from paid employment. But what constitutes such ALE institutions as CLCs is less clearly known. --- Terminology and infrastructure We begin this literature review by considering the variety of terms used in different countries. They are highly influenced by historical and cultural contexts. Language matters here, since a literal translation of the English term "community learning centres" and its acronym CLCs is hardly found anywhere in Eastern Europe, Latin America or francophone Africa. We also have to take into account that a term may point specifically to an institution while at the same time having some overarching meaning. For the purpose of this article, we use "community learning centre" or "CLC" as more of a generic term where we know of its origin, its original definition and later adaptation. We complement this with the wider term "community-based learning centres" for institutions with longer or shorter traditions. This seems appropriate, as does our use of "adult education" as a generic term complemented with the broader "adult learning and education" or "ALE" to reflect changing understandings of lifelong learning, which is also life-wide and life-deep.
Terms, traditions and trajectories of ALE institutions vary between and within communities, countries and world regions. So do commonly used names and providers, which from a broader perspective of community-based institutions offering learning opportunities for adults include folk high schools in the Scandinavian countries (Bjerkaker 2021), Volkshochschulen in Germany (Lattke and Ioannidou 2021), adult education centres in Georgia (Sanadze and Santeladze 2017), but also in Belarus and Ukraine, where such centres are attached to the "houses of culture" run by the city council (Lukyanova and Veramejchyk 2017; Smirnov and Andrieiev 2021). In Japan, there are the kominkan (Oyasu 2021), and Bangladesh has people's centres (Ahmed 2014). In Mongolia, former non-formal education (NFE) or "enlightenment" centres are now referred to as lifelong learning centres (LLCs) (Duke and Hinzen 2016), while in the Republic of Korea the former community learning centres (CLCs) have also been renamed lifelong learning centres (LLCs), to reflect their designation as local institutions of the Korean national lifelong learning system (Choi and Lee 2021). In Tanzania, there are folk development colleges (Rogers 2019); South Africa has public adult learning centres (PALCs) (Daniels 2020), and Bolivia has "alternative education centres" (Limachi and Salazar 2017). In a number of countries, these centres have come together and built national associations or networks which provide opportunities for cooperation and support services. Examples are the Georgian Adult Education Network (GAEN), the National Network of Alternative Education Centres (REDCEA) in Bolivia, the adult education centres of the Afghan National Association for Adult Education (ANAFAE), the National Network of Folk Universities in Poland (Hanemann 2021, pp. 53, 55), and the National Kominkan Association in Japan (Oyasu 2021). In Germany, the Deutscher Volkshochschul-Verband (DVV) serves as the national umbrella organisation for its regional member associations and the Volkshochschulen as local centres (Hinzen and Meilhammer 2022). --- Europe In Europe, the early beginnings of modern ALE and its institutionalisation can be traced back to the Enlightenment era, especially in Scandinavia, where today's folk high school movement looks to Nikolaj Frederik Severin Grundtvig as a founding father (Bjerkaker 2021). More vocational training-oriented activities and programmes grew out of needs arising from the agricultural and industrial revolutions, and were often embedded in working-class movements and education. In Great Britain, the campaign and research around the centenary of the 1919 Final Report of the Adult Education Committee emphasised the importance of ALE after World War I (Holford et al. 2019) as a form of workers' political and economic education. In Germany, ALE became a constitutional matter in 1919, with a special paragraph stipulating that "the popular education system, including the adult education centres, shall be promoted by the Reich, the federal states and the municipalities" (Lattke and Ioannidou 2021, p. 58). The need to support ALE in institutions was recognised as a governmental obligation. It seems that there are similarities and differences in historical evolution between Britain and Germany (Field et al. 2016), across Europe, and indeed globally. For all the wealth of ALE and local learning centres under different names worldwide (Avramovska et al.
2017; Gartenschlaeger 2017), Europe developed a rich tradition of community-based learning, often closely connected to voluntary endeavour at a time of major changes. The general movement was related in time and cause to industrialisation, followed by political democratisation, with the need for new skills, attitudes and conduct in new industrial, technical, economic and social conditions. The kinds and levels of state support to voluntary endeavour varied, but all saw partial devolution to local communities, often with activities and institutions devoted to what today is called citizenship education (Hinzen et al. 2022). To some extent, Volkshochschulen (vhs) might be called a German version of CLCs (Hinzen 2020). In Germany today, ALE governance includes policy, legislation and financing for the almost 900 vhs, which provide services to participants on their doorstep through courses, lectures and other activities, taken up at an annual level of around 9 million enrolments. Aggregated statistics with data on institutions, participants, staff, courses, finances etc. have been collected and disseminated by the German Institute for Adult Education (DIE) - Leibniz Centre for Lifelong Learning for the past 58 years and are available for further analysis and research (Reichart et al. 2021). Longitudinal studies show changes in content and offerings in terms of vhs supply and demand, especially at times when socio-political developments require the acquisition of new competencies and skills, attitudes and values in the education and training of adults (Reichart 2018). Access and inclusion are key issues, giving special attention to respective policies and supporting barrier-free opportunities for youth and adults with disabilities, or providing targeted funding for equal chances in health education services (Pfeifer et al. 2021). These are areas of particular concern when monitoring ALE participation and non-participation (Stepanek Lockhart et al. 2021). --- North America The term community learning centres, as well as the acronym CLCs, is also used in North America for initiatives in educational reform. In Canada, the Government of Québec provided support and, in 2012, published a CLC "resource kit" for "holistically planned action for educational and community change". This was prompted by debates on reforming schools and training centres to better "respond to the particular culture and needs of the communities" they were serving and to "provide services that are accessible to the broader community" (Gouvernement du Québec 2012, pp. 2, 4). The framework for action underlying this resource kit understands the CLC as an institutional arrangement aiming to jointly engage children, youth and adults in developing their community and catering for the needs of its members. In the United States, a similar debate using the term community learning centres is ongoing, asking how schools can be improved through engagement of the communities they operate in, and also how the communities can benefit from such engagements (Lackney 2000; Jennings 1998; Penuel and McGhee 2010; Parson 2013). --- Other world regions The orientation and understanding of CLCs and related facilities is widened by Hal Lawson and Dolf van Veen (2016) through a variety of international examples.
The most recent collection of experiences from more than twenty countries around the globe is by Fernando Reimers and Renato Opertti (2021); it includes a case study from Mexico on "Schools as community learning centers" (Rojas 2021). All of these examples and their findings are relevant to our discussion of community-based ALE through CLCs, which have adults as their main participants but often also provide opportunities for children and youth, including examinations for school-leaving qualifications as second-chance opportunities (Lattke and Ioannidou 2021, p. 60). In sum, and keeping in mind our guiding question about conditions conducive to improved and enlarged ALE development, with particular focus on the role of institutions like CLCs, this literature review so far suggests that the need for wider participation in ALE is situated in a landscape featuring a variety of community-based ALE institutions with diverse backgrounds using different terms, including CLCs. However, while this landscape is bound to offer considerable potential for increasing participation in education, training and learning opportunities among adults so far not participating, there is also a need to search for and understand barriers and hindrances to participation, and to identify those conditions which provide more ALE opportunities and make up better institutions. This is where ALE practice-related work and materials are getting increased attention. Examples are the Curriculum globALE (DVV International et al. 2021), tailor-made for the training of adult educators and staff, and the Curriculum institutionALE (Denys 2020), designed for organisational development and ALE system building (Belete 2020). Furthermore, Richard Desjardins and Alexandra Ioannidou's study on "some institutional features that promote adult learning participation" (Desjardins and Ioannidou 2020, p. 143) is of interest to us, complemented by this observation, made in GRALE 4: On the supply side, it is clear that a strong, universal ALE system is linked to relatively high levels of equality in participation. Within this, there is abundant scope for targeted initiatives that are designed to reach out to underrepresented groups and reduce institutional barriers to participation (UIL 2019, p. 176). This is where CLCs and other institutions of community-based ALE could and should strive to play an important role. Finally, we point to related discourses concerning expectations of CLCs beyond the usual claims. In the context of learning cities or learning regions, for example, Manzoor Ahmed asks: "Are community learning centres the vehicle?" (Ahmed 2014, p. 102). In the context of education for sustainable development, Hideki Yamamoto positions CLCs as a "platform for community-based disaster preparedness" (Yamamoto 2015, p. 32). In a related vein, the dimensions of local solutions to the climate crisis for Indigenous minorities in Malaysia are exemplified by Mazzlida Mat Deli and Ruhizan Muhamad Yasin in their article entitled "Community-based learning center of renewable energy sources for Indigenous education" (Deli and Yasin 2017). Such wider perspectives were intensively discussed during an international conference on adult education centres, which suggested making use of CLCs as local hubs for the implementation of the SDGs (DVV International 2017).
This is close to the late Alan Rogers' interesting analysis of "Second-generation non-formal education and the sustainable development goals: Operationalising the SDGs through community learning centres" (Rogers 2019), with the first generation of non-formal activities and institutions situated back in the 1970s (Coombs and Ahmed 1974). Having concluded our literature review, we now turn our attention to examples of CLCs in Asia and Africa, considering their development in and by communities. --- Experiences and examples from Asia and Africa There are several reasons why we focus here on examples and developments from the Asian and African regions more extensively than on other continents. In the case of Asia, there is diversity in terms of how long CLCs have been operating for, and in the directions and modes of their development. The examples from Africa do not have decades of such development; they are part of current policy interventions dating back only a few years, albeit based on previous experiences. It is worth noting that the combined populations of these two continents (around 5.3 billion people; UNFPA 2022) amount to almost three-quarters of the world population (ibid.). Many countries in Asia and Africa have higher numbers of non-literate adults and out-of-school children and youth than those in other world regions. This increases the need for ALE participation in relevant institutions like CLCs, and for their institutionalisation and professionalisation. While we have to accept that limited data are available for ALE and CLCs globally, data are available for Asia, and in Africa some innovative developments supporting CLCs are grounded in broader approaches to ALE system-building. --- The Asia Pacific region: Viet Nam, Thailand and Japan In 1998, the UNESCO Regional Office in Bangkok started a CLC project as part of its Asia Pacific Programme of Education for All (APPEAL) (UNESCO Bangkok 2001). It was planned as an attempt to reach those "with few opportunities for education", and based on this definition of a CLC: A community learning centre (CLC) is a local place of learning outside the formal education system. Located in both villages and urban areas, it is usually set up and managed by local people in order to provide various learning opportunities for community development and improvement of the quality of life. A CLC doesn't necessarily require new infrastructure, but can operate from an already existing health centre, temple, mosque or primary school (UNESCO Bangkok 2003, p. 2). The project spread across many countries in the region, and by 2003 Bangladesh, Bhutan, Cambodia, China, India, Indonesia, Iran, Kazakhstan, Lao PDR, Malaysia, Mongolia, Myanmar, Nepal, Pakistan, Papua New Guinea, the Philippines, Samoa, Sri Lanka, Thailand, Uzbekistan and Viet Nam were mentioned as participating (UNESCO Bangkok 2003, p. 3). APPEAL provided a resource kit (UNESCO Bangkok 2006) and followed up with manuals, partner meetings and conferences. Cambodia developed cooperation with a French non-governmental organisation (NGO) and produced its own guide on managing CLCs (ACTED 2018). At a regional meeting of APPEAL held in 2012, a new CLC definition emerged: A Community Learning Centre (CLC) is a community-level institution to promote human development by providing opportunities for lifelong learning to all people in the community (ACTED 2018, p. 1, referring to UNESCO Bangkok 2013). The orientation towards lifelong learning for all is growing.
The increase in diversity within and between countries since the beginning of the APPEAL project can be seen in a collection entitled Community-Based Lifelong Learning and Adult Education: Situations of Community Learning Centres in 7 Asian Countries (UNESCO Bangkok 2016). The reasons for achievements and success seem to be manifold, including the harmony between programmes and local needs and lifestyles, and strong government support. Ai Tam Pham Le provides an interesting case study for Myanmar, in which she discusses the contributions of CLCs to personal and community development (Le 2018). In Indonesia, the CLC manages the non-formal education programme (Shantini et al. 2019), and in Nepal CLCs are seen as supporting lifelong learning and are now part of national education plans (MoE Nepal 2016). In this article, we present examples from Viet Nam, the country with the highest number of CLCs in Southeast Asia; Thailand, which has diverse CLC organisations; and Japan, with its own pre-CLC kominkan. These three country cases serve to describe some of the circumstantial similarities and differences in which CLC developments emerged and co-existed with other forms of community-based ALE. --- Viet Nam Learning is a traditional part of Vietnamese culture. Multiple folk sayings reflect the value of learning: "A stock of gold is worth less than a bag of books"; "An uneducated person is an unpolished pearl"; "Learning is never boring; teaching is never tiring". Respect for teachers is required, as in "He who teaches you is your master, no matter how much you learn from him". Learning is a way of life in this country. The history of Viet Nam is adorned with people who, against the odds, overcame difficulty and studied to achieve high levels. One example is Mac Dinh Chi, who studied by himself at night in the faint light of the fireflies he kept in his hand because his family could not afford an oil lamp. As a result of his studies, he became a Zhuàngyuán, the title given to the scholar who achieved the highest score at the highest level of the Imperial examination in ancient Viet Nam. When the country was reunited after the resistance wars, the Vietnamese government restarted the learning movement, a process initiated in 1945 by Ho Chi Minh, the first leader of the independent socialist republic of Viet Nam. Literacy classes and complementary education programmes (equivalent to primary education) were organised in schools, religious facilities like Catholic churches and Buddhist pagodas, and large private houses. The establishment of two pilot CLCs in 1999 was a new national intervention by the government to adopt "CLC[s] as a delivery system of continuing education at the grassroots" (Okukawa 2009, p. 191), providing not only literacy programmes but also knowledge and skills that would empower learners and boost community development. Currently, approximately 11,000 CLCs form the most extensive network of non-formal education institutions in Viet Nam, reaching nearly all communes and wards of the country and providing local learning activities for people, ranging from literacy to post-literacy, from income generation to leisure skills and knowledge, and practical knowledge of civil laws, legitimate actions and legal processes. In 2018, there was a total enrolment of 20 million participants in these CLCs, according to capacity-building material circulated internally by the Ministry of Education and Training (MOET Viet Nam 2018).
The success of the CLC operation is largely due to the principle "of the people, by the people and for the people" (MOLISA Viet Nam 2018; MOET Viet Nam 2018), under the guidance and with the support of the government through policies. A sense of shared ownership thus encourages local people to engage in CLC activities. Vietnamese CLCs are autonomous, while receiving professional guidance from the district Bureau of Education and Training (MOET Viet Nam 2008) and administrative management from government at all levels. In each community, the head of the local People's Committee is also the Director of the CLC (MOET Viet Nam 2008a, 2014), which gives the centre an advantage: easy alignment of CLC programmes and activities with central Government direction (Pham et al. 2015). The practical value of this was demonstrated during the first outbreak of the COVID-19 pandemic in 2020: following directives of the central Government, local governments implemented control measures, raised people's awareness of the disease, and gave advice on disease prevention. In their dual role as head of the local authority and leader of the CLC, these leaders organised appropriate CLC activities in cooperation with mass organisations like the Viet Nam Women's Union and the Youth Communist Union. --- Thailand Thai CLCs include centres where newspapers were provided to "promote reading habits and reinforce reading skills for neo-literates" (Leowarin 2010). According to Suwithida Charungkaittikul of the Department of Lifelong Education at Chulalongkorn University, 9,524 CLCs spanned the country in 2018, reaching all rural corners. Thai CLCs are located in a variety of physical entities: district administration offices, schools, community halls, local elderly people's private houses, factories and temples. Buddhism is the dominant religion in Thailand, followed by around 95% of the population (ARDA 2021). The approximately 40,000 Thai Buddhist pagodas (MoE Thailand 2017) serve more than religious purposes. They are learning sites because Thai tradition requires that boys come and live in pagodas for an average of three months before the age of 20, to learn to read and write, and to understand ethics and Buddhist history and philosophy. Thus, the pagodas are "the centre of all kinds of community activities, including learning" (Sungsri 2018, p. 214). Today they also host CLCs providing learning to all people, regardless of gender. Operating on the same principle "of the people, by the people, and for the people" as in Viet Nam, Thai CLCs have transformed non-formal education provision from "bureaucracy-oriented to community-based approaches" (Leowarin 2010). They have a strong base in the National Education Act (RTG 1999) and are especially supported by the Non-formal and Informal Education Promotion Act (RTG 2008), which paves the path to decentralisation of education by institutionalising CLCs. Two philosophical approaches have had great influence on adult education, and thus on CLC programmes, in Thailand. The first, Khit-pen, essentially conceived and introduced by Dr Kowit Vorapipatana, former head of government-led ALE, literally means having the full ability to think (Sungsri and Mellor 1984; Nopakun 1985, cited in Ratana-Ubol et al. 2021). It was initially applied to functional literacy programmes.
The second is the Sufficiency Economy philosophy of His Majesty the late King Bhumibol Adulyadej, which promotes a way of life based on patience, perseverance, diligence, wisdom and prudence, for balance and the ability to cope appropriately with critical challenges. It has given rise to a growing number of community learning centres, called sub-district non-formal and informal education centres, that teach local people a way of life relying sufficiently and sustainably on natural resources. Traditions, religious norms and philosophical bases, blended into a strong foundation and combined with strong government support, have given Thai CLCs the character they have today: diversity in location, but uniformity in purpose. --- Japan The Japanese kominkan, a distinctive learning centre phenomenon which sprang up post-World War II, was not a child of UNESCO's APPEAL project, but shares purposes and functions with its CLCs. War-torn Japan needed to "build back better" - this slogan aptly applies to the period. Article XXVI of Japan's new constitution stated that "All people shall have the right to receive an equal education correspondent to their ability, as provided by law" (Prime Minister of Japan 1946). With this Constitution, the notion of democracy and a process of decentralisation were introduced into Japanese people's lives. In 1946, the Ministry of Education issued a plan for the establishment of kominkan [public citizens' halls] in every prefecture. The purpose of kominkan is to facilitate social education, self-improvement and community development through a variety of learning activities initiated and implemented by local people themselves, and through social interaction, including meetings between the community and local government. Kominkan suited the lifestyle of most Japanese people at the time. "Until the mid-1950s it [Japanese society] was essentially a rural society", featuring a strong relationship manifested in the fact that "communities were structured into groups - the gonin gumi - and... the most important social value was the subordination of the individual to the group" (Thomas 1985, p. 81). Kominkan had a strong legal base in the 1947 Fundamental Law of Education (MEXT Japan 1947) and the Social Education Law of 1949 (MEXT Japan 1949). Kominkan quickly emerged as a tool for community empowerment and became the backbone of social education. The number of kominkan soared from 3,534 in 1947 to 20,268 in 1950 (National Kominkan Association 2014) and peaked at 36,406 in 1955 (Arai and Tokiwa-Fuse 2013). Though the number is lower now, at 14,281 in 2018 according to the National Social Education Survey (Oyasu 2021, p. 98), kominkan have, for several social and administrative reasons, retained their status as community-based learning sites that promote lifelong learning and a learning society at local levels. Many factors contributed to the success and extensive network of kominkan in the 1950s. Among the most important was the legal standing of kominkan as entities established under, and for purposes set out clearly in, the Fundamental Law of Education of 1947 and the Social Education Law of 1949 (MEXT Japan 1947, 1949), and subsequently "the national government [...] standards for establishing and managing Kominkan and [...] financial subsidies for their construction" (MEXT Japan 2008). Secondly, kominkan met the genuine needs of society in the post-war era, when people felt an urge to acquire new values, new skills to improve their own lives, and new knowledge to rebuild the country.
This process of democratisation and decentralisation also gave a strong boost to people's spirit, as they understood that they were actually managing their own learning, and that learning benefited their own lives in addition to building community integrity. Collaborative learning in a general sense doubtless began when humans came to live together in groups, a primitive form of community. It was in living and learning from one another that Indigenous wisdom accumulated, and on this basis community systems developed. Today, CLCs exemplify the same correlation between individual members' learning and holistic community advancement. In this sense, kominkan are a good example of best practice. --- Research initiative on CLCs in Asia In 2013, a Regional Follow-up Meeting to the Sixth International Conference of Adult Education (CONFINTEA VI) for the Asia and Pacific region suggested conducting country-based research in the context of the wider benefits of CLCs (UIL 2013). This was initiated by the National Institute for Lifelong Education (NILE) of the Republic of Korea, the UNESCO Institute for Lifelong Learning (UIL) and the UNESCO Regional Office in Bangkok (ibid.). All six countries which joined the project had already worked together within the APPEAL initiative on CLCs. Not least to enable comparability, research in each of these countries (Bangladesh, Indonesia, Mongolia, the Republic of Korea, Thailand and Viet Nam) was based on a joint design and questionnaire, and results were compiled in a synthesis report (Duke and Hinzen 2016). Despite the diversity of the countries in terms of their political, economic and cultural history and their present situations, the synthesis report contained implications and proposals which are important here: Policy, legislation and financing. The findings suggest that to create a system of CLCs adequate in quantity and quality throughout the country, support is needed similar to what is available through the formal education system to schools, universities and vocational training. The necessary policies and legislation related to CLCs must have a sound financial basis, in this sense no different from that for formal education. [...]. Assessments, monitoring and evaluation. Learning and training assessments at local level should produce data relevant to the construction, planning and development of programmes, curricula and activities. These need to be guided by forms of continuous monitoring and regular participatory evaluation involving CLC learners and facilitators. All of this, including monitoring and evaluation, are professional support services to help local CLCs to improve (Duke and Hinzen 2016, p. 28). In the next section, we turn to Africa, where CLCs are still evolving. While focusing to some extent on Ethiopia and Uganda, where some research into CLCs has already been conducted, we do not present the two countries separately. Rather, they serve as examples of what is, as mentioned earlier, part of current policy interventions in a larger number of African countries. --- The African region The concept and practice of community-based ALE and CLCs in East Africa, as in many other parts of Africa, have evolved over time. The folk development colleges of Tanzania, which started in the 1970s as part of international cooperation with Sweden and its folk high schools, are a special case, but an interesting one, since they continue to be supported by government funding today (Rogers 2019).
Local experiences of community learning are also found in Kenya, where CLCs have been brought into sustainable development efforts (Nafukho 2019); and in Lesotho CLCs are being tested as providers of ICT services for the community (Lekoko 2020). In South Africa there are attempts to combine CLCs with efforts to improve popular and community education (von Kotze 2019). A more general literature review of CLCs in selected African countries (Hinzen 2021) found that they are places where not only youth and adults, but also children and the elderly can access a variety of learning and education opportunities as well as other services (like community libraries, vocational training or internet access) provided by local government sector offices, often implemented with the involvement of civil society organisations (Hinzen 2021). --- Ethiopia and Uganda Ethiopia took action in 2016 after a delegation visited Morocco to learn more about CLCs. The Moroccan concept and design were adapted to the Ethiopian context and ten pilot CLCs were set up in five regional states (Belete 2020). As the benefits for the community and service providers started to emerge, other countries like Uganda and Tanzania became interested, and exposure visits were arranged for key government officials and NGO experts. Uganda has since set up nine CLCs across four pilot districts (Jjuuko 2021) including, as in Ethiopia, plans for upscaling within and rolling out to more districts. The interest from communities, different government sector offices and other ALE stakeholders has exceeded expectations. Therefore it is worth investigating the rationale for setting up CLCs in the region; the services and modalities to offer the services; the involvement of stakeholders from both the demand and supply side; steps to start and operationalise CLCs; and considerations for the sustainability and institutionalisation of CLCs within an ALE system. The concept of CLCs in the region is still evolving, and new pathways for ALE are being considered, so in the next section, we also look at what is currently planned for future consideration. --- Why is there a need for CLCs in Africa? ALE services are usually offered through learner groups who gather and meet within or close to their communities on a regular basis with a facilitator or trainer for adult literacy classes, different forms of skills training and extension services. While this serves the purpose of bringing ALE closer to its users, it also has limitations, especially in rural communities. In Africa, ALE trainers and facilitators have to travel long distances and cannot always reach all communities in need. Serving everyone requires more staff and more funding. Another limitation concerns the types of services offered, because equipment and materials necessary for certain types of training are not always readily available. To make provision effective, a place is needed where different ALE services can be offered as a one-stop service, and communities of all age groups can gather to conduct their own affairs. In rural African communities, such infrastructure is often poor or lacking. CLCs have the potential to fulfil the needs and interest of ALE service users and providers. --- What do CLCs offer in Ethiopia and Uganda? In Ethiopia and Uganda, CLCs have evolved as spaces that offer not only ALE services, but
different forms of learning and education opportunities within the spectrum of lifelong learning. In the early days of setting up CLCs in Ethiopia, a need was identified for a place within the CLCs where mothers could leave their children while attending classes. This evolved in many CLCs into full-scale early childhood development (ECD) centres, where preschool-aged children are cared for and can start learning. Urban CLCs in Addis Ababa found that this is also a source of income for the CLC, providing affordable day care for mothers who could not otherwise afford it. The CLCs are government-funded, and the mothers pay a small amount. In Uganda, school-going children attend additional support classes at CLCs. Youth and adults have a variety of services to choose from, based on the concept and definition of ALE in both countries. Integrated adult literacy classes combine literacy and numeracy with livelihood skills training, business skills training, life skills, etc. Establishing libraries at each CLC, with books for all age groups, strengthens the skills of neo-literates, but also provides a resource centre for all ages, encouraging reading groups. One CLC in Ethiopia constructed an outdoor garden reading room as a quiet space for these activities. Youth enjoy sport and entertainment activities, and many youth clubs have been formed. In Ethiopia, the training offered by CLCs, together with support to engage in savings and loan schemes, has assisted many young people to start a business and engage in farming. This has contributed to changing their minds about emigrating to other countries for their livelihood. Older adults have found a space to escape loneliness, enjoy discussions with their peers, hold elder council meetings and engage with other age groups. The CLCs have thus also become a place for intergenerational learning. Beyond training and learning opportunities, CLCs also provide a service delivery point. CLCs in Uganda have timetables under which experts from different sector offices are available on set days with advice and services for individuals and small groups. Health sector offices in both countries have special days for vaccinations of children, health awareness-raising, and guidance on COVID-19 and other diseases.
Paralegal services are offered, as is local mediation of conflicts within the community. CLCs have also started facilitating market days, where trainees can promote and sell their products. The outbreak of COVID-19 prompted adaptation. Ethiopia produced a series of 20 radio programmes on business skills training. This also provided virtual outreach to a bigger CLC target group and promoted existing CLCs and the services they offer. As CLCs evolved into one-stop service centres, assessment of services became a new concern, and CLCs in Uganda started using community scorecards to assess services and to hold interface meetings between users and providers. Local government offices and politicians alike began to view CLCs as places where good local and integrated governance can be promoted (Republic of Uganda 2018). --- Who is involved? Stakeholder involvement should be viewed from both the demand and supply sides of service delivery. The different categories of service users from the demand side are highlighted above. Their involvement goes beyond the use of services: CLC management committees are elected and formed with community members acting as a board, regularly engaging with local government service providers to discuss the types and quality of services, and the sustainability and finances of their CLC. These committees are provided with training to fulfil their roles. Service providers in Ethiopia and Uganda are mostly local government sector offices, some partnering with NGOs that use the CLC facilities as places to provide services and contribute resources. The sector office experts and managers have formed cross-sectoral technical committees that jointly plan, budget, implement and monitor service provision through regular meetings, promoting horizontal and intersectoral integration. These committees are mirrored at higher governance levels, thereby promoting vertical integration through the spheres of governance. --- How is the CLC policy intervention implemented? The establishment and management of CLCs take place in two phases. The first is an establishment phase, which covers orienting stakeholders and community members, conducting a situation analysis and needs assessment, training both the CLC management committee and the sector experts, and forming the necessary cross-sectoral committees across levels of governance. It involves selecting a space where the CLC will be established and appointing and training a CLC coordinator from one of the government sector offices. With few exceptions, all CLCs in Ethiopia and Uganda have been established in existing buildings donated by local government, with sufficient land for demonstration sites and sports facilities. Renovation costs have been shared by government and NGOs. The second, operational phase starts the process of delivering different services and putting systems in place for monitoring and managing the CLC. --- Sustainability To ensure the permanence of CLCs and the sustainability of their services, it is crucial for CLCs to be institutionalised. The East Africa region uses the Adult Learning and Education System Building Approach (ALESBA) to build sustainable ALE systems across five phases (Belete 2020). CLCs are at the nexus of service delivery and provide an entry point to build a system for service delivery from the ground up. ALESBA's conceptual framework considers four elements, each of which has five system building blocks (ibid.).
The elements and building blocks ensure attention to an enabling environment for implementing CLCs nationally: embedding them into national policies, strategies and qualifications frameworks, putting in place the necessary institutional arrangements across spheres of governance, and making space for non-state actors such as universities and NGOs to play a role. The establishment of CLCs in Ethiopia and Uganda has exceeded the expectations of both service users and providers. As the practice continues to evolve, more services are added to the CLC spectrum. The provision of computer and other forms of digital training, including radio programmes, is currently in preparation. Governments have scaled up CLCs with their own funds in different districts, and included further roll-out in plans and budgets for the coming years. Advocacy around CLCs should continue to ensure sustainability and inclusion for permanent service delivery within these ALE systems. Ideally, the experience from and success of these projects should be rolled out to other parts of the continent. --- Research initiative on CLCs in Africa Within the broader interest in lifelong learning and the institute's thematic priority of Africa, UIL analysed case studies from Ethiopia, Kenya, Namibia, Rwanda and the United Republic of Tanzania a few years ago, and identified a diversity of community-based activities (Vieira do Nascimento and Valdes-Cotera 2018). In 2021, a new research initiative was launched to provide deeper insight into the potential role for community-based ALE and CLCs (Owusu-Boampong 2021). A short survey comprising 12 questions was prepared to obtain comparable data on the status of CLCs in African countries. It was sent out to 35 African UNESCO Member States, using the channel established by UIL for requesting national reports from countries for the Fifth Global Report on Adult Learning and Education (GRALE 5; UIL 2022). The 24 responses received by UIL provide substantial information on related legal frameworks, policies, strategies and guiding documents supporting the operation of CLCs in African countries, and on a variety of forms at different stages of institutionalisation in about 15 countries. Programmes in CLCs mainly include literacy, vocational and income-generation activities. Target groups are adults, women and youth, with an emphasis on disadvantaged groups and hard-to-reach communities (Owusu-Boampong 2021). In terms of outcomes of CLC activities, the following were reported: creating a reading culture in the community; empowering communities economically; complementing formal education; providing recreational facilities; participation in community development; creating awareness in health and hygiene; promoting girls' education; facilitating skills development for citizenship and entrepreneurship; and enabling inter-generational learning (ibid.). Respondents considered the integration of additional services (such as basic health services) in CLCs as increasing the effectiveness and sustainability of CLCs, embodying an infrastructure that provides access to communities which often feel deprived or left behind (ibid.). Further findings from the questionnaire include: • Nine out of 24 participating countries reported that CLCs are specifically mentioned in their national ALE or NFE policies. • Half of the participating countries identified their Ministry of Education as the main entity or stakeholder responsible for coordinating CLCs in their country, followed by NGOs and local communities.
• The majority of CLC programmes focus on the provision of basic education; only two countries mention offering equivalency programmes, while four countries provide certification. • The provision of training and access to ICT was reported by 14 countries (ibid.). Twenty countries reported a marked interest in receiving national capacity development in the form of CLC development guidelines, as well as an interest in participating in peer exchange and sharing experiences among African countries (ibid.). Also part of the research initiative at UIL was a review of documents available on community-based ALE in Africa (Hinzen 2021), and two of the recommendations emerging from that review are the following: • Governments in Africa should strengthen community-based ALE and CLC in their policies, legislation and financing from the education budget, and additionally within the inter-sectoral programmes of rural or community development, health and social services. CLCs should be integrated into international funding agendas. [...] • More robust data on CLC are needed through regular collection of statistics on national, regional and global level in respect to providers, programmes and participants that could be used to inform future planning and development. GEM [the Global Education Monitoring Report] and GRALE, together with the UNESCO Institute for Statistics should get involved (ibid., pp. 38, 39). --- Monitoring progress and negotiating strategies for action The research initiatives in both the Asia Pacific and the Africa regions contribute to generating grassroots data which feed into global monitoring and reporting efforts, reflecting the status quo and highlighting areas in particular need of action. --- GEM 2021/2022: non-state actors in education Among the most prominent global monitoring reports on education more generally is UNESCO's Global Education Monitoring Report (GEM), which began monitoring the seven SDG 4 Targets (4.1-4.7) and three "means of implementation" (4.a-4.c) in 2016 (UNESCO 2016a), a somewhat challenging endeavour (Benavot and Stepanek Lockhart 2016). The latest GEM report (UNESCO 2021) includes relevant information on CLCs. In particular, its chapter on "Technical, vocational, tertiary and adult education" features a dedicated section stating that "Community learning centres have proliferated in many countries": Embracing an intersectoral approach to education beyond formal schooling, CLCs can act as learning, information dissemination and networking hubs.... The establishment and management of CLCs has been bolstered by local and national government authorities and non-state actors, such as non-governmental organizations (NGOs), which have supported community engagement with financial and human resources.... CLCs are characterized by broad-spectrum learning provision that adapts to local needs (ibid., pp. 259-260). The report was well informed by a background paper on Non-state actors in non-formal youth and adult education (Hanemann 2021). Hanemann's findings relate to trends in the provision, financing and governance of ALE. Unsurprisingly, a key concern is "that many countries lack effective monitoring and evaluation systems including robust data on ALE. Moreover, this is also the case due to the multiplicity of non-state actors in this field" (Hanemann 2021, p. 15). She concludes with a set of recommendations.
Two of them are relevant to the particular focus of our article on conditions conducive to ALE for lifelong learning and the potential role of CLCs and other community-based ALE institutions: Governments should create an enabling legal, financial and political environment to make use of the full transformative and innovative potential of non-state actors in ALE. Non-state actors are usually well-placed to address situational, institutional, and dispositional barriers to engagement and persistence in learning, in particular those related to socio-cultural and gender issues. Such an enabling environment can best be achieved within collaborative efforts involving public and non-public partnerships (Hanemann 2021, p. 109; emphasis added). Community participation and ownership must become a central goal of ALE programmes as it not only ensures the relevance and sustainability of programmes but also contributes to social cohesion. Therefore, the role of state and non-state ALE providers should increasingly become that of facilitator and assistance provider to help communities build strong local democratic governance of their programmes (ibid.; emphasis added). --- GRALE 4: leave no one behind Focusing on ALE more specifically are the Global Reports on Adult Learning and Education (GRALE), already mentioned in the introduction. In line with UIL's mandate initiated in 2009 (UIL 2010), they are prepared at three-year intervals. GRALE 4, monitoring the wider aspects of participation, equity and inclusion, includes a chapter in which certain institutional, situational and dispositional barriers to wider participation are analysed, and CLCs as a potential institutional infrastructure are discussed. While... CLCs may look somewhat different across countries and regions, their success is a result of the active involvement by the community, whose members act as learners, instructors, and managers, and the community has ownership of the site (UIL 2019, p. 165). The report throws the net wider than the CLCs of today by looking at community-based ALE institutions from their historical beginnings in Europe and later in Latin America. The section concludes with an important statement in the context of the role of the state in supporting the conditions under which ALE institutions can operate well: For ALE to function as an instrument for the promotion of democracy and in the struggle against inequality, two conditions have to be fulfilled: first, the state has to be ready to provide public funding to popular/liberal adult education institutions; and, second, while the state may set the overall purposes for funding popular/liberal adult institutions, they are given freedom in how to reach their goals (UIL 2019, p. 166). --- Conference outcomes: commitments, declarations and frameworks Progress in increasing participation in ALE (which of course includes the use of CLCs) is also reviewed at 12-year intervals during the UNESCO-led series of International Conferences of Adult Education (CONFINTEAs), resulting in outcome declarations and frameworks, such as the one adopted by participants of CONFINTEA VI, held in Belém, Brazil, in 2009. The relevance of the Belém Framework for Action (BFA) (UIL 2010) to CLCs is reflected in its call for "creating multi-purpose community learning spaces and centres" (ibid., p. 8). The World Education Forum (WEF) is another conference series involving UNESCO, the World Bank and other international organisations operating in the field of education.
Preceding the final ratification of the United Nations Education 2030 Agenda, the WEF session held in Incheon in the Republic of Korea in May 2015 resulted in a declaration "towards inclusive and equitable quality education and lifelong learning for all" (WEF 2016). Its relevance to CLCs is reflected in its "indicative strategy" which strives to make learning spaces and environments for non-formal and adult learning and education widely available, including networks of community learning centres and spaces and provision for access to IT resources as essential elements of lifelong learning (ibid., p. 52; emphasis added). In September 2015, the 2030 Agenda was ratified during the UN Sustainable Development Summit held in New York, USA (UN 2015; Boeren 2019). Another important document, adopted in November 2015 by the UNESCO General Conference in Paris, is the 2015 Recommendation on Adult Learning and Education (RALE) (UNESCO & UIL 2016). Its relevance to CLCs is its call for creating or strengthening appropriate institutional structures, like community learning centres, for delivering adult learning and education and encouraging adults to use these as hubs for individual learning as well as community development (ibid., p. 11; emphasis added). These declarations, frameworks and recommendations are collaborative outcome documents jointly drafted by UNESCO Member States, international organisations, the public and private sectors, etc., ideally in consultation with civil society and other actors and stakeholders. Many of them call for improvement of data collection and the provision of appropriate monitoring and evaluation services. In the run-up to CONFINTEA VII, to be held in Marrakech, Morocco, in June 2022, the Marrakech Framework for Action (MFA) is already being drafted by way of an online consultation. As we are writing this article, the current draft includes the following section: Redesigning systems for ALE: We commit to strengthening ALE at the local level, as a strategic dimension for planning, design and implementation for learning programmes, and for supporting and (co-)funding training and learning initiatives such as community learning centres. We recognize the diversity of learning spaces, such as those in technical and vocational education and training (TVET) and higher education institutions, libraries, museums, workplaces, public spaces, art and cultural institutions, sport and recreation, peer groups, family and others. This means reinforcing the role of sub-national governments in promoting lifelong learning for all at the local level by, for example, pursuing learning city development, as well as fostering the involvement of local stakeholders, including learners (CONFINTEA VII online consultation, accessed 29 March 2022). It is encouraging that members of the ALE community and beyond now have the opportunity to comment on all aspects of the MFA, which will be an important document for the next twelve years. Based on best-practice examples, future policy recommendations could be enriched. The ways in which CLCs and other forms of community-based ALE institutions are taken up in different UNESCO Member States, through governments and civil society actors (CCNGO 2021), towards frameworks for action (Noguchi et al. 2015), have so far been uneven; the forms they take, and the management arrangements, are diverse and mainly "work in progress". We hope to contribute to this work by suggesting a few recommendations of our own in the next section.
--- Recommendations Based on our discussion of examples and experiences from countries in the Asian and African regions and their deeper analysis, in this section we put forward some recommendations of our own towards creating conditions conducive to having more and better ALE for lifelong learning, and integrating a role for CLCs and other community-based ALE institutions: • Rethink and redesign educational governance and the education system to take full account of all sub-sectors from a lifelong learning perspective, and include all areas of formal, non-formal and informal education. • Acknowledge ALE as a sub-system of the education system, in a similar way that formal schooling, vocational education and training (VET) and higher education (HE) are acknowledged. This will require including different entry points and communication messages in advocacy strategies. • Set up a comprehensive ALE system, including all system-building blocks and elements, such as an enabling environment, management processes, institutional arrangements and technical processes. • Acknowledge and promote the reality that ALE, like any education sub-system, needs a place and infrastructure where it can be delivered. CLCs and other community-based institutions can be developed as cornerstones of local infrastructure because they: -offer a one-stop shop for a variety of ALE services to all target groups on the lifelong learning continuum and across sectors; -can therefore improve access to ALE service delivery and increase participation, including for those too often excluded; -can reduce costs in ALE and other service delivery modalities for local governments, because the costs for operating the CLC can be shared across sectors; and -provide opportunities for other stakeholders such as NGOs, universities and the private sector to use CLCs as a platform for engagement and cooperation. • Make CLCs part of the institutional arrangements of the national ALE implementation structure across all spheres of governance. Furthermore, cross-sectoral coordination structures should be put in place, including community representation and participation. • Strike a balance between community needs and interests and national policies and priorities: although the communities' needs and interests should be the main driver of the types of services to be delivered at CLCs, a balance should be struck with government priorities as elaborated in national and local development plans. This will ensure political and financial will and commitment towards CLCs. • Collect all data and information on the provision and practice of CLCs. These data should be documented, recorded and used. They should feed into an ALE monitoring system as part of the overall education statistics to provide an evidence base for further advocacy for strengthening ALE and CLCs. --- Conclusions The research question we set out in the introduction of this article was: What conditions are conducive to having more and better ALE for lifelong learning -and which roles can CLCs and other community-based ALE institutions play? Throughout this article, we have considered information and related discourses at national and global levels. In each of the sections we contemplated findings from relevant literature, and more in-depth research on experiences and examples from the Asia-Pacific and African regions, as well as insights related to interventions at the global level.
We found that CLCs and community-based ALE are operating in many parts of the world and are diverse in many ways, including their nature and stages of institutionalisation and professionalisation, and the ways in which they are integrated, or not, in overall educational governance. What seems to be similar all over the world is that ALE, and thereby CLCs, remain marginal to all the other education sub-sectors. Therefore, CLCs are in dire need of better recognition, services and support. Unless support is substantially increased, especially in terms of financing the commitment to lifelong learning for all (Archer 2015; Duke et al. 2021), one can hardly imagine any of the necessary changes occurring. ALE is grossly underfunded in almost all national education budgets, and too often neglected in policies and related legislation. In too many countries, ALE is underrepresented in data collection, and the work of CLCs and other community-based learning institutions does not even find its way into systems of educational statistics. This makes monitoring efforts nationally, and subsequently globally, more than difficult. The provocative saying applies: "You measure what you treasure." However, despite all the deficiencies which have been identified, many examples and experiences show that with improved conditions for ALE and CLCs, these can come closer to what they aspire to reach. This includes an enabling environment with policies and legislation; an overarching structure for educational governance; an adult learning system with related institutions and professionalisation for all working in the sector; as well as organisational development for CLCs and other institutions. In addition, the current pandemic has shown the need to reflect more on new forms of blended learning and digital modes, and has demonstrated their consequences for learners and their institutions. CLCs have become a convenient catch-all for locally provided, and at least partly locally determined and managed, opportunities for institutionalised forms of ALE, and for informal meeting and learning in local community settings. In this article, we show that there are distinctive CLCs as well as other community-based ALE institutions which differ in many of their features, but also have much in common. What is crucial here is a better understanding of how policymaking can combine bottom-up and top-down approaches within decentralisation efforts. The questions remain whether global- and national-level policies are working against local-level bottom-up practices of diverse local communities, or whether there might be possibilities in both directions, emerging from constructive evolution allied to applied learning and better practice. GRALE 4 closes with the following statement: This report has argued that a focus on participation in ALE is key to achieving the SDGs. This must mean reviewing policies in the light of the evidence on participation, and investing in sustainable provision that is accessible to learners from all backgrounds, as well as systematically supporting demand among those who have been the most excluded in the past. This will enable ALE to play its full, and wholly essential, part in achieving the SDGs (UIL 2019, p. 177; emphases added). Current socio-political and ecological malaise requires more locally based community understanding.
Many changes and developments in ALE and lifelong learning are needed at this time of interlocking critical social, political, technological, cultural and ecological change, with a climate crisis and the incipient "great extinction". The ambitious SDGs, with their goals and targets for change by 2030, seriously underestimate the centrality of ALE to coping with change, and the latent reach and wider scope of ALE within lifelong learning, as a test of what is and is not sustainable in the longer term. --- Publisher's note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. Sonja Belete is an independent consultant with substantial experience in the fields of adult education, sustainable livelihoods, women's economic empowerment, good governance and systems building. She holds an MA degree in Adult Education and has authored several manuals, guidelines and articles. Her career with international NGOs such as DVV International, ActionAid, CARE and the UN provided her with exposure to Southern Africa and the East/Horn of Africa, with occasional support missions to West and North Africa. She has managed large-scale national and regional programmes and was responsible for the pilot testing and up-scaling of CLCs in Ethiopia and Uganda as part of her work with DVV International. --- Chris Duke, Professor, was founding CEO and Secretary-General of Place And Social Capital And Learning (PASCAL) and founding Secretary-General of the PASCAL International Member Association (PIMA). He is now Editor of the bimonthly PIMA Bulletin. Previously, he played leading roles in other international ALE civil society bodies, including the International Council for Adult Education (ICAE) and the Asia South Pacific Association for Basic and Adult Education (ASPBAE), and nationally in the UK and Australia. He qualified in Education, History and Sociology and was awarded an Hon DLitt by Keimyung University, Republic of Korea, for work in the field of lifelong learning. He has served as Professor and Head of Lifelong Learning in Australian, New Zealand and UK universities, and as President of the University of Western Sydney (UWS) Nepean. He has consulted extensively for UNESCO, the OECD, the EU and other international, national and local bodies. Angela Owusu-Boampong is an education programme specialist at the UNESCO Institute for Lifelong Learning (UIL), holding a postgraduate degree in Adult Education from the Freie Universität Berlin, Germany. She contributed to the coordination of the Sixth International Conference on Adult Education (CONFINTEA VI), held in 2009 in Belém, Brazil; the CONFINTEA VI Mid-term Review, held in 2017 in Suwon, Republic of Korea; as well as CONFINTEA VII, to be held in June 2022 in Marrakech, Morocco. This included organising related regional and global preparatory and follow-up processes. She previously contributed to developing the Global Report on Adult Learning and Education (GRALE), UNESCO's Recommendation on Adult Learning and Education (RALE 2015) and Curriculum globALE, a core curriculum for the training of adult educators. Her current research focuses on promoting inclusive learning environments for youth and adult learners. Khau Huu Phuoc already had 22 years' experience in teacher training and curriculum design at Ho Chi Minh University of Education, Vietnam, before he transferred to the Regional Centre for Lifelong Learning (SEAMEO CELLL).
As Manager of Research and Training at the Centre, he has conducted workshops and seminars aiming to promote understanding of lifelong learning and adult education, and the sharing of related good practices among master trainers and teachers of non-formal education from the region. From 2016 to 2018, he coordinated the eleven Southeast Asian countries in the UNESCO Institute for Lifelong Learning (UIL)'s regional project "Towards a Lifelong Learning Agenda for Southeast Asia". Most recently, he developed the Curriculum for Managers of Adult Education Centres for international use by DVV International. He has contributed as a speaker to various events organised by the Asia South Pacific Association for Basic and Adult Education (ASPBAE), UNESCO Bangkok and DVV International, and has written articles for DVV International and the Friends of PASCAL International Member Association (PIMA).
Institutionalised forms of adult learning and education (ALE) such as community learning centres (CLCs) and related models are found in most parts of the world. These are spaces offering opportunities for literacy and skills training, health and citizenship, general, liberal and vocational education, in line with fuller recognition of the meaning of lifelong learning, and in the context of local communities. Often these institutions form the basis for even more informal and participatory learning, like study circles and community groups. They may share facilities like libraries and museums, clubs and sports centres, which are not within the remit of the Ministry of Education. This article reviews relevant literature and identifies recent studies and experiences with a particular focus on the Asia-Pacific and Africa regions, but also considers insights related to interventions at the global level. Findings point to low levels of participation of adults in general, and more specifically so for vulnerable and excluded groups which can hardly cross respective barriers. The authors' discussion is guided by the question What conditions are conducive to having more and better ALE for lifelong learning -and which roles can CLCs and other community-based ALE institutions play? This discussion is timely -the authors argue that CLCs need to be given more attention in international commitments such as those made in the context of the International Conferences of Adult Education (CONFINTEA) and the United Nations 17 Sustainable Development Goals (SDGs). CLCs, they urge, should be part of transformative discourse and recommendations at CONFINTEA VII in 2022.
INTRODUCTION Few ethnographic studies in American social science have been as highly praised as William Foote Whyte's Street Corner Society (SCS) (1943c). The book has been re-published in four editions (1943c, 1955, 1981, 1993b) and over 200,000 copies have been sold (Adler, Adler, & Johnson, 1992; Gans, 1997). John van Maanen (2011 [1988]) compares SCS with Bronislaw Malinowski's social anthropology classic Argonauts of the Western Pacific (1985 [1922]) and claims that "several generations of students in sociology have emulated Whyte's work by adopting his intimate, live-in, reportorial fieldwork style in a variety of community settings" (p. 39). 1 Rolf Lindner (1998) writes that even "one who does not share van Maanen's assessment cannot but see the two studies as monoliths in the research landscape of the time" (p. 278). To be sure, the Chicago school of sociology had published contemporary sociological classics, such as The Hobo (Anderson, 1961 [1923]), The Gang (Thrasher, 1963 [1927]), The Ghetto (Wirth, 1998 [1928]), and The Gold Coast and the Slum (Zorbaugh, 1976 [1929]). But none of these empirical field studies were as deeply anchored in the discipline of social anthropology or had been, to use Clifford Geertz's (1973) somewhat worn expression, equally "thick descriptions" of informal groups in the urban space. Whyte's unique ability to describe concrete everyday details in intersubjective relations created a new model for investigations based on participant observations in a modern urban environment. SCS is a study about social interaction, networking, and everyday life among young Italian-American men in Boston's North End (Cornerville) during the latter part of the Great Depression. Part I of SCS describes the formation of local street gangs, the corner boys, and contrasts them with the college boys in terms of social organization and mobility. Part II outlines the social structure of politics and racketeering. Whyte spent three and a half years between 1936 and 1940 in the North End, which also gave him a unique opportunity to observe at close range how the social structure of the street corner gangs changed over time. The study is still used as a valuable source of knowledge in concrete field studies of group processes, street gangs, organized crime, and political corruption (Homans, 1993 [1951]; Short & Strodtbeck, 1974 [1965]; Sherman, 1978). 1. Typical examples of this research tradition are Anderson (2003 [1976]), Gans (1982 [1962]), Kornblum (1974), Liebow (2003 [1967]), Suttles (1968), Vidich & Bensman (2000 [1958]). OSCAR ANDERSSON graduated with a PhD in Social Anthropology from Lund University, Sweden, in 2003. His thesis is about the development of the Chicago School of Urban Sociology between 1892 and about 1935. In 2007, his thesis was published by Égalité in a completely new edition. He is currently working with the same publisher on the Swedish translations, and publications, of books that are regarded as part of the Chicago school heritage and beyond, with comprehensive and in-depth introductions that place each book in the context of the history of ideas. Previous titles include The Hobo (1923) and Street Corner Society (1943). Correspondence concerning this article should be sent to Oscar Andersson, Malmoe University, Faculty of Health and Society, Department of Health and Welfare Studies, Sweden; [email protected].
Today, SCS feels surprisingly topical even though the book first appeared 70 years ago. What seems to make the study timeless is that Whyte manages in a virtually unsurpassed way to describe people's social worlds in their particular daily contexts. Adler, Adler, & Johnson (1992, p. 3) argue in the same manner that SCS represents a foundational demonstration of participant observation methodology. With its detailed, insightful, and reflexive accounts, the methodological appendix, first published in the second edition, is still regarded as one of the premier statements of the genre. [...] SCS stands as an enduring work in the small groups literature, offering a rich analysis of the social structure and dynamics of "Cornerville" groups and their influence on individual members. SCS has thereby come to have something of a symbolic significance for generations of field researchers in complex societies. As Jennifer Platt (1983) examines in her historical outline of participant observation methodology, this took place mainly after Appendix A was published as an additional part in the 1955 second edition (p. 385). Lindner (1998) also points to the importance of the Appendix, and thinks that "With the new edition the reading of SCS is stood on its head: now the reader begins as a rule with the appendix, and then turns to the actual study" (p. 280). As a consequence, SCS has come to be considered as "the key exemplar in the textbooks of 'participant observation'" (Platt, 1983, p. 385); furthermore, numerous studies have used it as a symbol of how participant observations ought to be done. After SCS gained its iconic status, knowledge of the historical development of the study seems to have lost its importance or even been forgotten. For this reason, it might not be so surprising that researchers, as the introductory quote from van Maanen indicates, have often taken it for granted that SCS belongs to the Chicago school's research tradition (Klein, 1971; Jermier, 1991; Schwartz, 1991; Boelen, 1992; Thornton, 1997) or that it is a relatively independent study that cannot be placed in any specific research tradition (Ciacci, 1968; Vidich, 1992). There are at least four reasons for this. First, although Whyte was awarded a prestigious grant from the Society of Fellows at Harvard University in fall 1936, he was not a doctoral student at the university. Instead, he defended his doctoral dissertation at the University of Chicago in 1943. Second, SCS is about classical "Chicago" topics, such as street gangs, organized crime, police corps, and political machinery. Third, Whyte conducted fieldwork in an urban environment like many Chicago sociologists had previously done in the 1920s and 1930s. Finally, parts of SCS have, together with Chicago classics, been included in the Chicago school of sociology compilation volumes; a typical example is The Social Fabric of the Metropolis: Contributions of the Chicago School of Urban Sociology (1971). Given these facts, it is quite easy to take for granted that Whyte was part of the Chicago school's research tradition or was an independent researcher in a historical period before anthropology at home had been established as a research field. However, by using archival documents from Cornell University and other historical texts, I have traced SCS to a social-anthropological comparative tradition that was established by W.
Lloyd Warner's Yankee City Series, and later in Chicago with applied research in the Committee on Human Relations in Industry during the period 1944-1948. The committee, led by Warner, had the aim of bridging the distance between academia and the world of practical professions. Whyte's anthropological schooling at Harvard University led him to A. R. Radcliffe-Brown's structural functional explanatory model in SCS. 2 In the first two sections, I will describe Whyte's family background and the circumstances behind his admission to the prestigious Society of Fellows at Harvard University. In the following two sections, I will first examine how Whyte came to study corner boys' and college boys' informal structure in Boston's North End. I will then analyze why Conrad M. Arensberg's and Eliot D. Chapple's observational method was such a decisive tool for Whyte for discovering the importance of informal structure and leadership among street corner gangs. In the next section, I will outline the reasons that led Whyte to defend SCS as a doctoral dissertation in the sociological department at the University of Chicago and not Harvard University. Thereafter, I will examine why Whyte's conclusion that Cornerville had its own informal social organization was such a ground-breaking discovery in social science. In the two final sections, I will situate Whyte's position in the historical research landscape of social anthropology and sociology in the 1920s, 1930s, and 1940s and then, with the help of a diagram, set out which researchers exercised the most important direct and indirect influences on his thinking. --- BIOGRAPHICAL BACKGROUND William Foote Whyte was born on June 27, 1914 in Springfield, Massachusetts. Whyte's grandparents had immigrated to the United States from England, the Netherlands, and Scotland. His parents were John Whyte (1887Whyte ( -1952) ) and Isabel van Sickle Whyte (1887Whyte ( -1975)). John and Isabel met when they were in Germany, each on a university grant, and working on their theses-doctoral and masters, respectively-in German. After John received his doctorate and obtained employment as a lecturer at New York University, the family first settled in the Bronx district of New York City, but soon moved to the small town of Caldwell, New Jersey. Whyte, an only child, grew up in a Protestant middle-class family that appreciated literature, classical music, art, and education. During his earliest years, Whyte lived with different relatives, as his mother caught tuberculosis and his father was discharged by New York University when it abolished the German department at the outbreak of World War I. Due to his movements between families and since his parents brought him up to be a self-reliant boy, he often felt lonely and learned to keep his feelings to himself (Whyte, 1984(Whyte,, 1994(Whyte,, 1997;;Gale Reference Team, 2002). 3 Even though John Whyte himself had received a strict upbringing in the Presbyterian Sunday school, he held that it was his son Whyte's choice whether or not to go to church. John said that he had gotten so much church instruction while growing up that it would last a lifetime. Only after Whyte had acquired his own family did he begin regular visits to the Presbyterian congregation. The reason was that he looked up to clergymen who preached for social equality and justice. On the other hand, he was not so fond of priests who wanted to 2. Robert M. 
2. Robert M. Emerson (2001 [1983]) certainly places Whyte and SCS in Harvard University's social anthropology tradition, but does so only briefly regarding the issue of method. Instead, Howard S. Becker supports the assumption that social science research has usually overlooked the fact that Whyte was educated in social anthropology at Harvard University (Becker, 1999, 2003, e-mail correspondence with Oscar Andersson dated July 11, 2009). 3. According to the anthropologist Michael H. Agar (1980, p. 3), the experience of alienation from one's own culture is common to many anthropologists. Whyte described himself as a social anthropologist rather than a sociologist when he came from Boston to Chicago in 1940. The sense of estrangement also makes it easier for anthropologists to connect with other cultures. In his autobiography, significantly titled Participant Observer (1994), Whyte tells repeatedly of his emotional difficulties in feeling involved in the middle class's social activities and club life. Thus already from childhood, William Whyte learned to make independent decisions and to feel empathy for poor and vulnerable people (Whyte, 1984, 1994, 1997; Gale Reference Team, 2002). In the autumn of 1932, aged 18, William Whyte was accepted at Swarthmore College, located in a suburb of Philadelphia, as one of five students granted a scholarship. Whyte devoted most of his time to studying for examinations and writing articles as well as plays that were performed in the college area. Already at age 10, he had been encouraged by his parents to write short stories, and while at Bronxville High School he published an article every Tuesday and Friday in a local newspaper, The Bronxville Press (Whyte, 1970, 1994). As a second-year student at Swarthmore, Whyte got the opportunity to spend a weekend at a settlement house in a Philadelphia slum district. This experience proved decisive for his future career. In a letter to his parents dated March 10, 1934, he wrote: It is foolish to think of helping these people individually. There are so many thousands of them, and we are so few. But we can get to know the situation thoroughly. And that we must do. I think every man owes it to society to see how society lives. He has no right to form political, social, and economic judgement, unless he has seen things like this and let it sink in deeply. (Whyte, 1994, p. 39) It was after this experience that Whyte realized that he wanted to write about the situation of poor people and daily life in the American urban slums. His interest in writing about corrupt politicians and slum poverty was also aroused by the investigative journalist and social debater Lincoln Steffens' 884-page autobiography, which he had devoured during the family's journeys in Germany in 1931. Three chapters in Steffens' book dealt with the extensive political corruption in Boston (Steffens, 1931; Whyte, 1994). --- THE SOCIETY OF FELLOWS In 1936, a senior researcher recommended that William Whyte be admitted to the prestigious Society of Fellows at Harvard University. The background to this recommendation was a 106-page essay, written and published the year before, with the title "Financing New York City." It drew great attention from politicians and civil servants in New York City, and his teacher in economics (which was also Whyte's main subject) thought that it was better written than many doctoral dissertations he had read.
After considering several proposals for further studies and career opportunities, including an invitation to work for the city of New York, Whyte decided to accept the offer from the Society of Fellows. The associated grant meant that the researcher had the same salary as a full-time employed assistant professor at Harvard University, and could do research for three to four years on any topic, with free choice among the university's rich range of courses. Thanks to the basic freedom in selecting a research subject, it was not unusual for grantees to change subject after being accepted. The only academic restriction attached to the generous research grant was that the resultant writings could not be presented as a doctoral dissertation. This did not strike Whyte as a drawback when he was accepted; on the contrary, he regarded academic middle-class existence as all too limited and boring. The grant at Harvard gave Whyte, at barely 22, the opportunity to pursue what he had wanted to do since his time at Swarthmore College: an ethnographic slum study. Ever since he had read Steffens' notable autobiography and visited a Philadelphia slum district, he had dreamt of studying at close quarters and writing about a social world that was mostly unknown to the American middle class (Whyte, 1994, 1997). Whyte tells in his autobiography: Like many other liberal middle-class Americans, my sympathies were with the poor and unemployed, but I felt somewhat hypocritical for not truly understanding their lives. In writing Street Corner Society, I was beginning to put the two parts of my life together. (Whyte, 1994, p. 325) The anthropologist and sociologist Arthur J. Vidich (1992) argues with insight that "Methods of research cannot be separated from the life and education of the researcher" (p. 84). In order to really emphasize what an exotic social world Boston's North End was for the rest of the population, Whyte, like an anthropologist who visits an aboriginal people for the first time, begins the first paragraph of the unpublished "Outline for Exploration in Cornerville" in July 1940 as follows: This is a study of interactions of people in a slum community as observed at close range through 3 1/2 years of field work. I call it an exploration, because when I came into Cornerville, its social organization was unknown to me as if I had been entering an African jungle. In this sense, the field work was a continual exploration of social groupings, of patterns of action.4 Whyte moved to Harvard University for the start of the autumn semester in 1936. He wanted to get there when the university was celebrating its 300th anniversary and was hailed in that regard as the oldest university in the United States. The Society of Fellows provided him with comfortable student quarters in Winthrop House on the university campus. The only formal requirement for a younger member was to attend dinners on Monday evenings. These served as a ritual uniting young and old members of the Society, and Whyte took part in them even after, at the beginning of 1937, he moved in above the Martini family's Capri restaurant in the North End. The Society was led, during the period 1933-1942, by the rather conservative biochemist Lawrence J. Henderson. It consisted of fellow students, such as Conrad M. Arensberg, Henry Guerlac, George C. Homans, John B. Howard, Harry T. Levin, James G. Miller and Arthur M. Schlesinger, Jr.
Although the young Whyte did not always feel comfortable with Henderson's authoritarian style of leadership, he looked up to him for his scientific carefulness. Another mentor was the industrial psychologist Elton Mayo, a colleague of Henderson and Warner, who led the Hawthorne study (1927-1932) at the Western Electric Company in Cicero outside Chicago. Mayo is known chiefly for having conducted social-scientific field studies of industries, which Whyte was also later to do during his time at Chicago and Cornell universities. The classmate who would come to have by far the greatest importance for Whyte was Arensberg; he became his close friend and mentor during the study of the Italian-American slum district in the North End (Whyte, 1970, 1984, 1994, 1997).5 --- THE NORTH END DISTRICT IN BOSTON William F. Whyte began his field study in the North End during the fall of 1936. As previously mentioned, it was his visit to a Philadelphia slum and his reading of Steffens' work that gave him the idea of doing his own part in studying slum districts for the cause of progressive social change and political reform. During his first weeks at Harvard, he explored Boston's neighborhoods and sought advice at various social agencies. It was only after this initial survey that he settled on the North End as the place for his study. After a while, Whyte (1994, p. 62) decided to study the North End because this district best met his expectations of how a slum area looked: I had developed a picture of rundown three- to five-story buildings crowded together. The dilapidated wooden-frame buildings of some other parts of the city did not look quite genuine to me. One other characteristic recommended the North End on a little more objective basis: It had more people per acre than any section of the city. If a slum was defined by its overcrowding, this was certainly it. In an unpublished field study of the Nortons (street gang) from the autumn of 1938, Whyte gave a more neutral explanation, with quantitative criteria, for why he chose to study this city district: According to figures from the Massachusetts Census of Unemployment in 1934, the population of the North End was 23,411. These people were housed on 35 acres of land. With a density of about 670 persons per acre in 1934, the North End was reported to be the most congested district in the United States. The neighboring West End has 342 people per acre, just slightly over half of the density of the North End, and other sections of Boston are much less thickly populated.6 As Whyte writes in Appendix A of SCS, the models for his study were the community studies by the social anthropologists Robert S. Lynd and Helen Merrell Lynd, Middletown (1957 [1929]) and Middletown in Transition (1937), about Muncie in Indiana, and W. Lloyd Warner's not yet published five volumes in the Yankee City Series (1941-1962) about Newburyport in Massachusetts.7 This is indicated not least by the fact that Whyte introduced his case study of the Nortons with a short social overview of the city district, which he called "A Sketch of the Community Surroundings."8
Whyte also wrote the project plans titled "Plan for the North End Community Study" and "Plan for a Community Study of the North End," respectively, at the end of 1936 and the beginning of 1937, which show that he intended to investigate the inhabitants' and the district's history, cultural background, economics, leisure time activities, politics, educational system, religion, health, and social attitudes; in other words, to make a comprehensive community study.9 What characterizes these extensive American social studies is that they try to completely describe and chart the complex cultural and social life of a town or city district. Their models were anthropology's holistic field studies of the more limited cultures and settlements of aboriginal populations. For while neither the Lynds nor Warner specifically made studies of slum areas, they did social-anthropological field work in modern American cities. This was exactly what Whyte wanted to do in the North End, although with a special focus on the slums. The community studies were also models for more limited social-scientific field studies of industries, mental hospitals, and medical hospitals after World War II (Whyte, 1967 [1964]; Kornblum, 1974; Becker, Geer, Hughes, & Strauss, 1977 [1961]; Burawoy, 1982 [1979]; Goffman, 1991 [1961]).10 6. Whyte, William Foote. Papers. #/4087. Kheel Center for Labor-Management Documentation and Archives, Catherwood Library, Cornell University, 14087 Box 2, Folder 32. 7. Whyte could also have taken part in the research project Yankee City Series during his stay in Chicago in the early 1940s, but he was persuaded by his supervisor Warner to finish his doctoral degree first (Whyte, 1994). 8. Whyte, William Foote. Papers. #/4087. Kheel Center for Labor-Management Documentation and Archives, Catherwood Library, Cornell University, 14087 Box 2, Folder 32. 9. Whyte, William Foote. Papers. #/4087. Kheel Center for Labor-Management Documentation and Archives, Catherwood Library, Cornell University, 4857 Box 2A, Folder 10. Fairly soon, however, Whyte came to revise his original plan and, instead, confined his study to the street gangs' social, criminal, and political organization in the city district.11 He (1993b) came to this crucial understanding by connecting his field observations of the Nortons (corner boys) and the Italian Community Club (college boys) and "their positions in the social structure" (p. 323). The theoretical framework was... first proposed by Eliot D. Chapple and Conrad M. Arensberg (1940), [where] I concentrated attention on observing and roughly quantifying frequencies and duration of interactions among members of street-corner gangs and upon observing the initiation of changes in group activities (Whyte, 1993b, p. 367). Whyte (1967 [1964]) writes further that, after about 18 months of field research, he "came to realize that group studies were to be the heart of my research" (p. 263). In the autumn of 1938, his case study of the Nortons arrived at the conclusion that even though the study "may apply to other groups of corner boys, I will specifically limit their application to this group which I have studied and not attempt to generalize for other groups."12 Whyte's conclusion is probably a concession to Lawrence J.
Henderson's strictly positivistic view of science, since he cited the study in his application for an extension of the research grant from the Society of Fellows at Harvard University. Long afterward, he (1993a) explained that the prevailing view of science at Harvard University in the 1930s emphasized "a commitment to 'pure science,' without any involvement in social action" (p. 291). As a result of Whyte's not having studied the North End inhabitants' working conditions, housing standards, family relations, industries, school system, or correspondence with native countries, it is thus not obvious that one should regard SCS as a community study (Vidich, Bensman, & Stein, 1964; Ciacci, 1968; Bell & Newby, 1972 [1971], pp. 17-18; Gans, 1982 [1962]; Whyte, 1992, 1994, 1997). --- SOCIAL ANTHROPOLOGY AND ARENSBERG'S AND CHAPPLE'S OBSERVATIONAL METHOD William Whyte's introduction to social anthropology occurred through a course, "The Organization of the Modern Communities," which was taught by Conrad M. Arensberg and Eliot D. Chapple. He (1994, p. 63) found the course rewarding, but more important was that he thereby got to know Arensberg:13 He [Arensberg] took a personal interest as my slum study developed, and we had many long talks on research methods and social theory. He also volunteered to read my early notes and encouraged me with both compliments and helpful criticisms. Archival documents from Cornell University show that Arensberg read and commented on SCS from its idea stage until the book was published in December 1943. The recurrent discussions and correspondence which Whyte had with Arensberg about the study's disposition, field-work techniques, and group processes contributed greatly to problematizing and systematizing Whyte's observations of the street gangs.14 10. Almost 20 years later, Herbert J. Gans (1982 [1962]) would make a community study of the adjacent West End. 11. Whyte (1993b) writes in Appendix A: "As I read over these various research outlines, it seems to me that the most impressive thing about them is their remoteness from the actual study I carried on" (p. 285). 12. Whyte, William Foote. Papers. #/4087. Kheel Center for Labor-Management Documentation and Archives, Catherwood Library, Cornell University, 14087 Box 2, Folder 32. 13. Arensberg was formally admitted by the Society of Fellows as an anthropologist during the period 1934-1938, and Whyte as a sociologist during 1936-1940. Whyte (1993a) tells that "While I was in the field, 1936-1940, I thought of myself as a student of social anthropology. I had read widely in that field, under the guidance of Conrad M. Arensberg" (p. 288). During the period 1932-1934, under Warner's supervision, Arensberg had done field work in County Clare in Ireland. Arensberg's field study resulted in the books The Irish Countryman (1968 [1937]) and, together with Solon T. Kimball, Family and Community in Ireland (1968 [1940]). It belonged to a still, in some ways, unequaled social-scientific research project that was led by Warner and based on the cross-cultural comparative sociology of Emile Durkheim and A. R. Radcliffe-Brown.
Radcliffe-Brown's and Warner's comparative social-anthropological field studies of different cultures and social types were path-breaking in several respects, and had the explicit aim of generating universal sociological theories about man as a cultural and social being (Warner, 1941a, 1941b, 1959, 1962 [1953], 1968 [1940]; Radcliffe-Brown, 1952, 1976 [1958]; Whyte, 1991, 1994, 1997; Stocking, 1999 [1995]). Arensberg wrote, together with Chapple, a social-anthropological method book about field observations which is almost forgotten today, Measuring Human Relations: An Introduction to the Study of the Interaction of Individuals (1940), and which passed Henderson's critical inspection only after five revisions. Their interactionist method would be used by Whyte throughout SCS and during the rest of his academic career. It emphasized that the researcher, through systematic field observations of a specific group, such as the Nortons, can objectively "measure" what underlies the group members' statements, thoughts, feelings, and actions. The systematic method can also give the researcher reliable knowledge about the group's internal organization and ranking, for instance who a street gang's leader and lieutenants are. Above all, emphasis is placed on the quite decisive difference between pair interactions of two people and group interactions of three or more people (Chapple & Arensberg, 1940; Whyte, 1941, 1955, 1967 [1964], 1993a). Whyte (1994, pp. 63-64) develops these ideas in his autobiography: In determining patterns of informal leadership, the observation of pair events provided inadequate data. At the extremes, one could distinguish between an order and a request, but between those extremes it was difficult to determine objectively who was influencing whom. In contrast, the observation of set events provided infallible evidence of patterns of influence. The leader was not always the one to propose an activity, although he often did. In a group, where a stable informal structure has evolved, a follower may often propose an activity, but we do not observe that activity taking place unless the leader expresses agreement or makes some move to start the activity. [...] This proposition on the structure of set events seems ridiculously simple, yet I have never known it to fail in field observations. It gave me the theory and methodology I needed to discover the informal structure of street corner gangs in Boston's North End. According to Whyte, only in observations of group interactions was it possible to learn who was the street gang's informal leader. This could be shown, for example, by the fact that two or three groups merged into a larger unit when the leader arrived. When the leader said what he thought the gang should do, the others followed. Certainly others in the group could make suggestions of what they should do, but these usually dried up if the leader disagreed. If there were more than one potential gang leader, usually the lieutenants, this was shown by the members splitting up and following their respective leaders. Whyte maintained that the internal ranking in the group determines all types of social interactions. An example was that the group's leader basically never borrowed money from persons lower in the group hierarchy, but turned primarily to leaders in other gangs, and secondarily to the lieutenants. This was a recurrent pattern that Whyte could find among the five street gangs he observed.
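To make the logic of this style of observation concrete, the sketch below shows, in schematic form, how tallying who originates group ("set") events that the others actually follow could be turned into a rough informal ranking. This is only an illustrative reconstruction of the general idea, not Whyte's or Chapple and Arensberg's actual recording procedure; the individual events are invented for the example, and only the member names are borrowed from SCS.

```python
from collections import Counter

# Hypothetical field notes: each record is (initiator, followed_by_group),
# where followed_by_group means the proposed activity actually took place.
# The names come from the Nortons in SCS; the events are invented here.
observations = [
    ("Doc", True), ("Danny", False), ("Doc", True),
    ("Mike", True), ("Danny", False), ("Doc", True),
    ("Long John", False), ("Mike", True),
]

# In the spirit of the method, only initiations that the group takes up
# count as evidence of informal leadership in set events.
followed = Counter(name for name, ok in observations if ok)

# A rough informal ranking: more followed initiations suggest higher rank.
for name, count in followed.most_common():
    print(f"{name}: {count} followed initiations")
```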
Another illustration of how the group members' ranking was connected with group interactions is the often-mentioned bowling contest in the first chapter of SCS. The results of the contest, which was held at the end of April 1938, reflected, with two exceptions, the group's internal ranking.15 According to Whyte (1941, p. 664), the method requires... precise and detailed observation of spatial positions and of the origination of action in pair and set events between members of informal groups. Such observations provide data by means of which one may chart structures-a system of mutual obligations growing out of the interactions of the members over a long period of time. William Whyte is known mainly as an unusually acute participant observer with a sensitivity to small, subtle everyday details (Adler, Adler, & Johnson, 1992; van Maanen, 2011 [1988]). But in fact the observational method that he acquired from Arensberg and Chapple advocates quantitative behavioral observations of group processes. Whyte (1991) maintains, perhaps a bit unexpectedly, that "although SCS contains very few numbers, major parts of the book are based on quantification, the measurement (albeit imprecise) of observed and reported behavior" (p. 237). The behavioral scientist Chris Argyris (1952), who was a doctoral student under Whyte at Cornell University during the first half of the 1950s, concluded: "In other words, Chapple and Arensberg believe, and Whyte agrees, that all feelings of individuals can be inferred from changes in their basic interaction pattern" (p. 45). Thus, Whyte made use of both participant observations and behavioral observations of group interactions. The reason why he emphasizes the use of measurable observations is probably that behavioral observations, in the United States during the 1920s-1940s, were usually regarded as scientifically objective and reliable. In this respect, Arensberg, Chapple, and Whyte were palpably influenced by Henderson's positivistic view of science. Participant observation was supposed to be more colored by the researcher's subjective interpretations. Hence Whyte drew a clear distinction between observations and interpretations of observations (Chapple & Arensberg, 1940; Whyte, 1941, 1953 [1951], 1967 [1964], 1970, 1982, 1991, 1993a, 1994, 1997; Argyris, 1952). Platt (1998 [1996], p. 251) writes: There has been little or no commentary within sociology on its [SCS] connections with obviously behaviouristic and positivistic orientations to observation and to study small groups, despite some clues given in the text. Whyte was to have great use of Arensberg's and Chapple's method for observations of the social, criminal, and political structure in the North End. When he (1993b, p. 362) asked the Nortons who their leader was, they answered that there was no formal or informal leader,
and that all the members had equally much to say about decisions. It was only after Whyte had become accepted by the street gangs in the North End that he could make systematic observations of everyday interactions between the groups' members at the street level, and thereby reach conclusions that often contradicted the group members' own notions about their internal ranking. 15. Whyte, William Foote. Papers. #/4087. Kheel Center for Labor-Management Documentation and Archives, Catherwood Library, Cornell University, 14087 Box 2, Folder 32. Contrary to what several members of the Nortons said, he detected a very clear informal hierarchy in the group, even though the group's composition changed during the three and a half years of his field work. Indeed, Whyte argues in SCS that the Nortons no longer existed in the early 1940s. One of Whyte's chief achievements in SCS was to make visible in detail the street gangs' unconscious everyday interactions and mutual obligations within and between groups. At the same time, it emerges in SCS that Whyte's main informant Doc, probably through his discussions with Whyte, increased his awareness of the Nortons' informal organization and interactions with other groups in the local community. To borrow a pair of concepts from the American sociologist Robert Merton, Whyte thus arrived at a latent explanation that went against the street gang's manifest narratives (Whyte, 1941, 1993a, 1994, 1997; Homans, 1993 [1951], pp. 156-189; Merton, 1996, p. 89). --- THE SOCIOLOGY DEPARTMENT AT THE UNIVERSITY OF CHICAGO As we have seen, an explicit condition of the Society of Fellows was that SCS could not be submitted as a doctoral dissertation at Harvard University. As Whyte (1994) noted, "the junior fellowship was supposed to carry such prestige that it would not be necessary to get a PhD" (p. 108). When he realized, after his intensive field work in the North End, that he would nevertheless need a doctoral degree, but that he could not obtain a doctoral position at Harvard, he was drawn to the possibility of going to Chicago. He (1994, p. 108) wrote about why he chose to go there: The sociology department at the University of Chicago had an outstanding reputation, but that was not what attracted me. On the advice of Conrad Arensberg at Harvard, I chose Chicago so I could study with W. Lloyd Warner, who had left Harvard in 1935 after completing the fieldwork for several books that came to be known as the Yankee City series. Since Warner was the professor in both anthropology and sociology (the only chair in both subjects after the department was divided in 1929), Whyte did not at first need to choose a main subject.
But he wanted to finish his doctoral studies at Chicago as quickly as possible, and decided that it would take longer to study anthropological courses, such as archaeology and physical anthropology, than sociological ones, such as family studies and criminology. Before settling on South Dorchester Avenue in Chicago during the autumn of 1940, Whyte had been influenced at Harvard by leading researchers who were not primarily based in sociology, although Henderson lectured for the Society about Vilfredo Pareto's economically infused sociology. At Chicago, the only sociologist who really influenced him was Everett C. Hughes (Whyte, 1970, 1991, 1994, 1997). --- THE ORGANIZED SLUM When William F. Whyte researched and lectured at Chicago during the period 1940-1948,16 a tense intradisciplinary antagonism existed there between Everett C. Hughes and W. Lloyd Warner, on the one hand, and Herbert Blumer and Louis Wirth, on the other. The background to this antagonism was that Blumer/Wirth did not think that Hughes/Warner maintained a high enough scientific level in their empirical studies, while the latter thought that Blumer/Wirth only talked without conducting any empirical research of their own (Abbott, 1999). 16. The exception was the academic year 1942-1943, when Whyte did research at the University of Oklahoma (Whyte, 1984, p. 15; Gale Reference Team, 2002). Whyte eventually found himself in the Hughes/Warner camp. As a result of this conflict, and of Warner not being able to attend his doctoral defense, Whyte felt uncertain how it would turn out. At the dissertation defense he received hard criticism from Wirth for not having defined the slum as disorganized, and for not having referred to previous slum studies in sociology such as Wirth's own sociological classic, The Ghetto (1998 [1928]). But Whyte argued that SCS would be published without bothersome footnotes containing references, and without an obligatory introduction surveying the literature of earlier slum studies. He also gradually perceived that SCS would have been a weaker and more biased study if he had read earlier sociological slum studies before beginning his field work, since they would have given him a distorted picture of the slum as disorganized from the viewpoint of middle-class values (Whyte, 1967 [1964], 1970, 1982, 1984, 1991, 1993a, 1994, 1997). Whyte (1967 [1964], p. 258) described how he avoided this pitfall: The social anthropologists, and particularly Conrad M. Arensberg, taught me that one should approach an unfamiliar community such as Cornerville as if studying another society altogether. This meant withholding moral judgements and concentrating on observing and recording what went on in the community and how the people themselves explained events. Whyte's social-anthropological schooling from Harvard made him a somewhat odd bird for certain sociologists at Chicago. Furthermore, it shows that an alternative preunderstanding can enable the researcher to view the studied phenomenon in an at least partly new light. After several fruitless attempts by Wirth to get Whyte to define the slum as disorganized, Hughes, who also sat on the degree committee, intervened. He said that the department would approve SCS as a doctoral dissertation on the condition that Whyte wrote a survey of the literature of earlier slum studies.
The survey would thereafter be bound together with the rest of the text and placed in the University of Chicago's library. Once Whyte had published two articles titled "Social Organization in the Slums" (1943b) and "Instruction and Research: A Challenge to Political Scientists" (1943a), Hughes persuaded the sociology department that these did not need to be bound with SCS (Whyte, 1984, 1991, 1993a, 1993b, 1994, 1997). The concept of disorganization was fundamental to the Chicago school's urban sociology, for its view of the group's adaptation to city life and the individual's role in the group. This concept had first been introduced by William I. Thomas and Florian Znaniecki in the book that became their milestone, The Polish Peasant (1958 [1918-1920]). It subsequently became an accepted perspective on migrants' process of integration into urban social life in the Chicago school's studies during the 1920s, 1930s, and 1940s. When Whyte argued in his study that the slum was organized for the people who resided and lived there, he touched a sore spot in the Chicago school's urban sociology, which had held for decades that the slum lacked organization. Whyte made an important empirical discovery when, with great insight and precision, he described the internal social organizations of the Nortons and the Italian Community Club, whereas the Chicago school had unreflectively presupposed such groups' lack of internal social structure. According to Whyte (1993b), the North End's problem was not "lack of organization but failure of its own social organization to mesh with the structure of the society around it" (p. 273).17 17. See also Edwin H. Sutherland's (1944) review of SCS in the American Journal of Sociology and R. Lincoln Keiser (1979 [1969]) for a similar critique of how certain sociologists regarded Afro-American street gangs in economically impoverished areas during the 1960s. The Chicago school came to use the concept of disorganization mainly in two ways. The first, based on Thomas and Znaniecki's definition, was an explanation for how the Polish peasants, and other groups in the transnational migration from the European countryside to the metropolis of Chicago, went through three phases of integration: organized, disorganized, and finally reorganized. The second way of using the concept was a later modification of the first. Chicago sociologists such as Roderick D. McKenzie and Harvey W. Zorbaugh described the groups who lived in the slum as permanently disorganized. The difference between the two viewpoints was that Thomas and Znaniecki emphasized that the great majority of the Polish peasants would gradually adapt to their new homeland, while McKenzie and Zorbaugh, who were strongly influenced by the human-ecological urban theory of Robert E. Park and Ernest W. Burgess, considered the slum as disorganized regardless of which group was involved or how long it had lived in Chicago. Thomas was also to write about young female prostitutes in The Unadjusted Girl (1969 [1923]), where he alternated between the two viewpoints. McKenzie and Zorbaugh proceeded more faithfully from Burgess' division of the city into five concentric zones, and their manner of using the concept of disorganization became the accepted one in this school (Whyte, 1943b, 1967 [1964]; Ciacci, 1968; Andersson, 2007).18 It was only Thomas and Znaniecki who made a full transnational migration study in Chicago.
The other studies mostly took their starting point in the slum after the migrant had arrived in Chicago. The sociologist Michael Burawoy (2000) maintains, somewhat simplistically, that "the Chicago School shrank this global ethnography into local ethnography, and from there it disappeared into the interiors of organizations" (p. 33). An excellent example of a local monograph during the school's later period was the quantitative study Mental Disorders in Urban Areas (1965 [1939]) by Robert E. L. Faris and H. Warren Dunham, which concluded that the highest concentration of schizophrenia occurred in disorganized slum areas. Thus, the use of the concept in the Chicago school had gone from explaining migrants' transnational transition, from countryside to big city, to solely defining the slum and its inhabitants as disorganized. In his article "Social Organization in the Slums" (1943b), Whyte criticized the Chicago researchers McKenzie, Zorbaugh, Thomas, and Znaniecki for being too orthodox when they see the slum as disorganized and do not realize that there can be other agents of socialization than the family, such as the street gang and organized crime. In contrast, Whyte thinks that other Chicago sociologists, like John Landesco, Clifford R. Shaw, and Frederic M. Thrasher, have found in their research that the slum is organized for its inhabitants. Whyte emphasized in his article that it is a matter of an outsider versus an insider perspective. Some Chicago researchers have an outsider perspective nourished by American middle-class values, while others show greater knowledge about the slum population's social worlds. Whyte (1967 [1964], p. 257) developed this idea in the mid-1960s: The middle-class normative view gives us part of the explanation for the long neglect of social organization in the slums, but it is hardly the whole story. Some sociologists saw slums in this way because they were always in the position of outsiders. Rather surprisingly, Whyte in the above-cited article (1943b) argued that the Chicago sociologists had different views of disorganization even though all of the sociologists he named, besides Thomas and Znaniecki, had shared Park and Burgess as supervisors and mentors. Moreover, he does not mention that Chicago researchers during the 1920s, 1930s, and 1940s used the concept of disorganization in an at least partly different way than the original one of Thomas and Znaniecki. 18. Whyte, William Foote. Papers. #/4087. Kheel Center for Labor-Management Documentation and Archives, Catherwood Library, Cornell University, 14087 Box 2, Folder 8. It is also worth noting that Whyte's article did not examine the Chicago monograph which perhaps most closely resembles his own: Nels Anderson's The Hobo (1961 [1923]). Anderson too, in his intensive field study of Chicago's homeless men in the natural area Hobohemia, concluded that the slum was organized. Both Anderson and Whyte made use of participant observation to illuminate groups' complex social worlds. Although Anderson does not give a concrete description of a few groups of homeless men corresponding to what Whyte gives for the street gangs in the North End, they both find that an insider perspective has a decisive importance for understanding the slum residents' multifaceted social worlds.
When Whyte compared his own study with Thrasher's (1963 [1927]), which deals with child and teenage gangs in Chicago's slum districts, there emerge some of the most important differences between his approach and that of the Chicago school. While Whyte (1941, pp. 648-649) gave a dense description of five street gangs, the Chicago school strove, with natural science as a model, to draw generally valid conclusions about groups, institutions, life styles, and city districts: It [SCS] differs from Thrasher's gang studies in several respects. He was dealing with young boys, few of them beyond their early teens. While my subjects called themselves corner boys, they were all grown men, most of them in their twenties, and some in their thirties. He studied the gang from the standpoint of juvenile delinquency and crime. While some of the men I observed were engaged in illegal activities, I was not interested in crime as such; instead, I was interested in studying the nature of clique behavior, regardless of whether or not the clique was connected with criminal activity. While Thrasher gathered extensive material upon 1,313 gangs, I made an intensive and detailed study of 5 gangs on the basis of personal observation, intimate acquaintance, and participation in their activities for an extended period of time. What chiefly emerges in the quotation is that, whereas Whyte's main aim was to describe precisely the daily interactions in and between five street gangs and the surrounding local community, Thrasher's general objective was to make a social survey of all the street gangs in Chicago, even though the study does not show how well he succeeded with his grand ambition. Of course, the Chicago school's research projects during the 1920s, 1930s, and 1940s differed as regards their efforts to use sociological categories. For instance, Anderson's The Hobo is not as permeated by the urban sociological perspective of Park and Burgess as is Zorbaugh's The Gold Coast and the Slum. Neither the Chicago researchers nor Whyte seem to have been fully aware that they used the concept of disorganization in different ways. Thus, they mixed together an ethnic group's transnational migration process and internally organized urban communities, on the one hand, with abiding social problems, such as homelessness, criminality, prostitution, schizophrenia, suicide, and youth gangs, in slum areas, on the other hand. A complementary explanation could be that Chicago concepts were sometimes betrayed by Chicago observations (Hannerz, 1980, p. 40). Whyte adopted a more relativistic cultural attitude toward people who lived in the slum, and held that the middle class's formal organizations and societies should not be considered "better" than the street gangs' and organized crime's informal organizations and networks inside and outside the slums. While the Chicago school, aided by the research results from its monographs, tried to find general patterns in migrant groups' adaptation to the new living conditions in the metropolis, Whyte's purpose was to expose in detail the street gangs' social, criminal, and political organization. These constituted alternative career paths for the slum inhabitants, and were connected not least with the formal and informal political structure on both the municipal and national levels. If we can get to know these people intimately and understand the relations between little guy and little guy, big shot and little guy, and big shot and big shot, then we know how Cornerville society is organized.
On the basis of that knowledge it becomes possible to explain people's loyalties and the significance of political and racket activities. (Whyte, 1993b, p. xx) That Whyte employed a structural-functional explanatory model for how the different parts of the North End cohere in a larger unity is not a coincidence, as I will clarify later. At the same time, it is essential to notice that when Whyte made his study, the earlier optimism that characterized most of the Chicago school's studies had given way to a more pessimistic outlook on the future as a result of the Depression that pervaded American society in the 1930s. --- SOCIAL ANTHROPOLOGY AND SOCIOLOGY AT THE UNIVERSITIES OF CHICAGO AND HARVARD It was a historical fluke that William Whyte became a grantee at the Society of Fellows two years after Arensberg gained admittance. Due to the social-anthropological schooling that Whyte acquired at Harvard University, he was able throughout his 86-year life to argue against researchers who wanted to place him in the Chicago school's realm of thought. Besides criticizing certain Chicago sociologists' description of the slum as socially disorganized, he came to be included, through his mentors Arensberg and Warner, in Radcliffe-Brown's research ambitions for a worldwide comparative sociology.19 In 1944, Radcliffe-Brown (1976 [1958], p. 100) wrote about this grand research project: Ethnographical field studies are generally confined to the pre-literate peoples. In the last ten years, field studies by social anthropologists have been carried out on a town in Massachusetts, a town in Mississippi, a French Canadian community, County Clare in Ireland, villages in Japan and China. Such studies of communities in "civilized" countries, carried out by trained investigators, will play an increasingly large part in the social anthropology of the future. While it is highly probable that Radcliffe-Brown refers to Newburyport rather than to Boston as the town in Massachusetts where social anthropologists had conducted extensive field studies, my argument is that SCS can also be placed in this tradition. Warner (1941a) also wrote in line with Radcliffe-Brown in the early 1940s that social anthropology's field studies of modern society "must in time be fitted into a larger framework of all societies; they must become a part of a general comparative sociology" (p. 786). Warner, who had also been taught by Robert H. Lowie at Berkeley, got to know Radcliffe-Brown in connection with his field work in Australia on the Murngin people during 1926-1929 (Warner, 1964 [1937]). When Whyte came to Chicago in 1940, the departments of anthropology and sociology had conducted cross-cultural comparative sociology with social-anthropological field methods since at least the end of the 1920s. It can certainly be argued that Thomas and Znaniecki's as well as Anderson's sociological field studies, The Polish Peasant and The Hobo, respectively, constitute the real origin of this research tradition, but it is perhaps more correct to point to Robert Redfield's study of Tepoztlan in Mexico (1971 [1935]) and the companion study of the Sicilians as the first two anthropological studies in the tradition. At the same time as these two studies of Mexico and Sicily, respectively, belonged to an incipient social-anthropological tradition with strong sociological influences, they were remarkable exceptions, since most of the anthropological field studies in the 1920s and 1930s were conducted within the borders of the United States (Warner, 1968 [1940]; Eggan, 1971; Peace, 2004, p. 68).
When the departments of sociology and anthropology at the University of Chicago were divided in 1929, the previously close collaboration between the two disciplines took a partly new form and orientation. Before the division, the idea was that the sociologists would take care of anthropology at home, while the anthropologists would investigate immigrants' cultural background in their native countries. Fay-Cooper Cole, the head of the anthropology department, wrote in a grant application in 1928: It is our desire to continue such studies but we believe that there is also a field of immediate practical value in which ethnological technique can be of special service - that is in the study of our alien peoples. Most of our attempts to absorb or Americanize these alien groups have been carried on without adequate knowledge of their backgrounds, of their social, economic, or mental life in the homelands. It is our hope to prepare high grade students for these background studies, and to make their results available to all social workers. We have recently made such a study of one district in Mexico, as a contribution to the study of the Mexican in Chicago. We have a similar study in prospect of the Sicilian. However these investigations are of such importance that we should have ten investigators at work where we now have one.20 The researchers in this comparative project would learn how the native countries' cultures were related to the immigrant groups' capacity for adaptation in Chicago and other American cities. For example, Redfield made a field study in Chicago of how the Mexican immigrants had managed to adapt from rural to urban life, before he began his field work on Tepoztlan.21 The Chicago researchers thought that the adaptive dilemmas of ethnic migrant groups could be mitigated if the city's welfare organizations and facilities had better and deeper background knowledge about the migrants' cultural patterns. The obvious social utility of such a cross-cultural research project would be that the United States and Chicago could make social efforts specifically adjusted to each newly arrived ethnic group. The field studies of Mexicans and Sicilians, referred to by Cole in the quotation above, were the first two anthropological studies in the almost symbiotic collaborative project between anthropology and sociology. It was crowned in the mid-1940s with Horace R. Cayton's and St. Clair Drake's unsurpassed work Black Metropolis (1993 [1945]). Cayton and Drake, whose supervisor had been Warner, dedicated their book to Park. A historically decisive watershed for the anthropology department at Chicago in the early 1930s was the employment of Radcliffe-Brown. As George W. Stocking (1976) noted in regard to the department's development, "the more important functionalist influence, however, was that of Radcliffe-Brown, who came to Chicago in the fall of 1931, fresh from his comparative synthesis of the types of Australian social organization" (p. 26). Radcliffe-Brown was employed as a professor of anthropology at the University of Chicago during 1931-1937 and succeeded Edward Sapir, who in the autumn of 1931 took over an advantageous professorship in the new anthropology department at Yale University (Darnell, 1986, p. 167; Stocking, 1999 [1995]). 20. University of Chicago, Special Collections Research Center of Joseph Regenstein Library, Presidents' Papers 1925-1945, Box 108, Folder 9. 21. University of Chicago, Special Collections Research Center of Joseph Regenstein Library, Robert Redfield Papers 1925-1958, Box 59, Folder 2.
Except in regard to the importance of conducting intensive field studies, Radcliffe-Brown and Sapir had different views in several respects about the subject area and orientation of anthropology. While Sapir was a linguist, schooled in historicism by the father of American anthropology, Franz Boas, it was Durkheim's comparative sociology that inspired Radcliffe-Brown's theories. Boas and his students were mainly occupied with historical and contemporary documentation of disappearing Indian cultures in the United States (Boas, 1982 [1940]; Stocking, 1999 [1995], pp. 298-366). Radcliffe-Brown (1976 [1958]) defined social anthropology as a natural science whose primary task "lies in actual (experimental) observation of existing social systems" (p. 102). In other words, he was more eager to document the present than to salvage the past. When Warner was employed in 1935 by both the anthropological and sociological departments at Chicago, Radcliffe-Brown's cross-cultural comparative sociology could be implemented in various research projects. But Warner had already begun, in collaboration with Mayo, the Yankee City Series at Harvard University in the early 1930s. Redfield was also to have great importance for a deeper cooperation between anthropology and sociology at Chicago. Strongly influenced by his father-in-law Robert E. Park's sociology, he had emphasized in his doctoral dissertation that anthropologists should devote themselves less to pre-Columbian archaeology and folklore, and focus more on contemporary comparative scientific cultural studies. Moreover, Park described Radcliffe-Brown as a sociologist who primarily happened to be interested in aboriginal peoples (Stocking, 1979, p. 21). In addition to Warner and Redfield, Hughes also contributed in a crucial way to deepening and enlivening the cooperation between the anthropology and sociology departments even after their division in 1929. It is in this research landscape that I want to place Whyte's SCS. Paradoxically, the employment of Radcliffe-Brown in 1931 and Warner in 1935 meant that the anthropology department became more sociologically oriented than before the division in 1929. The reason was partly that Radcliffe-Brown replaced Sapir, and partly that Redfield and Warner had ever more to say while Cole had less influence after Sapir left Chicago. Furthermore, Redfield and Warner mostly shared Radcliffe-Brown's view of the subject's future development. Already from the start, as Park (1961 [1923], p. xxvi) wrote in the "Editor's Preface" to The Hobo, sociology in Chicago had the general aim of giving not as much emphasis to... the particular and local as the generic and universal aspects of the city and its life, and so make these studies not merely a contribution to our information but to our permanent scientific knowledge of the city as a communal type. Against the background of this reasoning, it may be worth observing that social anthropologists and sociologists in Chicago, after the arrival of Radcliffe-Brown and Warner, developed what the Chicago school had already initiated about the diversity of urban life, although with the entire world as a field of ethnographic work. While the Chicago school's dominance in American sociology began to taper off in the mid-1930s, since Park left the department in 1934 and the sociology departments at Columbia and Harvard universities were improved, social anthropology gained in prominence.
Although Chicago sociologists, such as W. I. Thomas, could be critical of some aspects of Durkheim's comparative sociology, one can find a common historical link in Herbert Spencer's social evolutionism. Spencer, who was a notably controversial person in some academic circles, is perhaps best known for having sided with big industry against advocates of reform, as well as for having coined the expression survival of the fittest. However, certain sociologists and social anthropologists were attracted to his idea of a comparative sociology. In contrast, the anthropology based on historicism, with Boas in its front line, was a consistent opponent of Spencer's social evolutionism as a whole (Warner, 1968 [1940]; Voget, 1975, pp. 480-538; Radcliffe-Brown, 1976 [1958], pp. 178-189; Boas, 1982 [1940]; Perrin, 1995; Stocking, 1999 [1995], pp. 305-306; Andersson, 2007). --- HOW RADCLIFFE-BROWN'S AND WARNER'S STRUCTURAL-FUNCTIONAL MODEL OF THOUGHT INFLUENCED WHYTE In order to realize their grand plans for a cross-cultural comparative sociology, Radcliffe-Brown (1964 [1939], p. xv) and Warner had a great need of systematic comparisons between different cultural forms, grounded in intensive field studies of particular societies: What is required for social anthropology is a knowledge of how individual men, women, and children live within a given social structure. It is only in the everyday life of individuals and their behavior in relation to one another that the functioning of social institutions can be directly observed. Hence the kind of research that is most important is the close study for many months of a community which is sufficiently limited in size to permit all the details of its life to be examined. SCS met Radcliffe-Brown's and Warner's high requirements for a long-term intensive social-anthropological field study of the structural function of social institutions (street gangs, organized crime, police corps, and political machinery) in a particular community. As in the above-mentioned community studies, the concrete research results of SCS were essential empirical facts that could make Radcliffe-Brown's and Warner's cross-cultural comparative sociology more than just groundless speculation about similarities and differences between the world's cultural forms. Earlier "armchair researchers," such as Edward B. Tylor, Lewis H. Morgan, William Sumner, and Herbert Spencer, had been sharply criticized by anthropologists and sociologists like Boas, Malinowski, Radcliffe-Brown, and Thomas for not having enough empirical data to verify their evolutionary theories about distinct cultures' universal origin and progress (Stocking, 1992; McGee & Warms, 2004 [1996]; Andersson, 2007). There are more points of contact than the historical connection between SCS and Radcliffe-Brown's and Warner's research ambitions for a cross-cultural comparative sociology. Whyte (1993b, p. 272) draws a structural-functional conclusion in SCS: The corner gang, the racket and police organization, the political organization, and now the social structure have all been described and analyzed in terms of a hierarchy of personal relations based upon a system of reciprocal obligations. These are the fundamental elements out of which all Cornerville institutions are constructed.22
Whyte's explanation for how the concrete social structure in the North End is functionally and hierarchically linked together in a larger social system lies completely in line with Radcliffe-Brown's (1952, p. 181) explanation of the same process on a more general level: By the definition here offered 'function' is the contribution which a partial activity makes to the total activity of which it is a part. The function of a particular social usage is the contribution it makes to the total social life as the functioning of the total social system. Such a view implies that a social system (the total social structure of a society together with the totality of social usages in which that structure appears and on which it depends for its continued existence) has a certain kind of unity, which we may speak of as a functional unity. We may define it as a condition in which all parts of the social system work together with a sufficient degree of harmony or internal consistency, i.e. without producing persistent conflicts which can neither be resolved nor regulated. 22. Whyte also emphasizes in the manuscript "Outline for Exploration in Cornerville" from July 17, 1940 that "the main purpose is to examine the functioning of various groups in the community in order to gain an understanding of human interactions which may be applied in other communities, in other studies" (Whyte, William Foote. Papers. #/4087. Kheel Center for Labor-Management Documentation and Archives, Catherwood Library, Cornell University). Warner (1941a, p. 790), too, explains on a general level how the social structure functionally, and not least hierarchically, coheres in a larger social system: Once the system of rank has been determined, it becomes important to know the social mechanisms which contributed to its maintenance. There arise concomitant problems of how the different social structures fit into the total system. Whyte (1955, p. 358) explains on the basis of similar structural-functional ideas how the different institutions or organizations and leaders in the North End are functionally and hierarchically connected in a larger social system: Although I could not cover all Cornerville, I was building up the structure and functioning of the community through intensive examination of some of its parts-in action. I was relating the parts together through observing events between groups and between group leaders and the members of the larger institutional structures (of politics and the rackets). I was seeking to build a sociology based upon observed
interpersonal events. That, to me, is the chief methodological and theoretical meaning of Street Corner Society. Although I argue consistently that SCS was part of Radcliffe-Brown's and Warner's cross-cultural research project, it is equally important to stress that Whyte demonstrated both norm conflicts between groups in the North End (such as corner boys and college boys) and a different social organization than the one that existed in the surrounding majority society (Whyte, 1967 [1964], p. 257; Lindner, 1998). Nevertheless, Whyte (1993b, p. 138) maintains, through a structural-functional explanatory model that emphasizes consensus between groups, that the main function of the police in Boston was not to intervene against crime, but to regulate the street gangs' criminal activities in relation to the surrounding society's predominant norm system: On the one side are the "good people" of Eastern City [Boston], who have written their moral judgments into the law and demand through their newspapers that the law be enforced. On the other side are the people of Cornerville, who have different standards and have built up an organization whose perpetuation depends upon freedom to violate the law. Vidich (1992), however, holds that "There is no evidence in Whyte's report that he used anyone else's conceptual apparatus as a framework for his descriptive analysis of Cornerville" (p. 87). A plausible explanation for this interpretation is that the study is primarily a detailed and particularly concrete description of the street gangs' organization. It is necessary to have comprehensive knowledge about the history of the subjects of anthropology and sociology in order to trace the connection in the history of ideas between SCS and Radcliffe-Brown's structural functionalism (Argyris, 1952, p. 66). Further direct support for my claim is that Whyte (1967 [1964]) described what he "called a structural-functional approach. It argues that you cannot properly understand structure unless you observe the functioning of the organization" (p. 265). --- WHYTE'S POSITION IN THE HISTORY OF SOCIAL ANTHROPOLOGY AND SOCIOLOGY The purpose of the accompanying diagram is to chart Whyte's position in social anthropology and sociology at the universities of Chicago and Harvard.
As I have argued, the diagram shows that Whyte did not have his disciplinary home among the Chicago sociologists, even though he ascribes some importance to Hughes. At the same time, I would emphasize that the diagram primarily includes the intellectual influences and lines of thought that were embodied in the social anthropologists and sociologists who were active at those universities when Whyte wrote SCS during the 1930s and 1940s. Whyte was very probably influenced by other research colleagues and thinkers, mainly after he left Chicago in 1948. Certain influential thinkers, such as Durkheim, Radcliffe-Brown, and Park, had an indirect impact on Whyte, while other researchers like Warner, Arensberg, and Hughes exerted direct personal influence. My inclusion of researchers who did not influence Whyte is intended to give the reader a wider picture of the intellectual atmosphere and research landscape that prevailed at this time. The reason why no line has been drawn from Radcliffe-Brown to Whyte is that this influence went chiefly via Arensberg and Warner. Like all diagrams, mine builds upon various necessary simplifications; for instance, I have excluded a number of influential persons, such as Burgess, Chapple, Henderson, and Mayo. Although I show only one-way directions of influence, apart from the case of colleagues where no clear direction existed, there was naturally a mutual influence between several of the researchers. The influence between some researchers, such as Warner, Arensberg, and Whyte, was, however, stronger than that among others, for example, Hughes and Whyte. Moreover, I have chosen to include Spencer in the diagram, despite the strong criticism that researchers like Durkheim and Thomas directed, in certain respects, at his evolutionary laissez-faire ideology and speculative racial doctrine. Nor should Spencer be perceived as a social-scientific forefather of the research traditions in the diagram. On the other hand, I would maintain that an idea-historical line of thought, largely originating in Spencer's evolutionism, united the scientific ambitions of Durkheim/Radcliffe-Brown/Warner as well as of Thomas/Park to make comparisons between different cultures and social types based on empirical research (Voget, 1975, pp. 480-538; Perrin, 1995; Stocking, 1999 [1995], pp. 305-306). Warner (1968 [1940], p. xii) therefore claims: Some modern anthropologists have come to realize that the diverse communities of the world can be classified in a range of varying degrees of simplicity and complexity, much as animal organisms have been classified, and that our understanding of each group will be greatly enhanced by our knowledge of its comparative position among the social systems of the world. In spite of its unavoidable limitations, the diagram contains a wealth of names to display the idea-historical relationship between Radcliffe-Brown's and Warner's cross-cultural research project and leading Chicago sociologists, such as Thomas and Park. The social-anthropological education that Whyte received from Arensberg and Warner at the universities of Chicago and Harvard had its historical origins in Durkheim's and Radcliffe-Brown's comparative and structural-functional sociology. As in the case of the Chicago school, Radcliffe-Brown's and Warner's research project did not last long enough to make it possible to reach any scientifically verifiable conclusions about similarities and differences between cultures.
Nonetheless, Warner's Yankee City Series achieved path-breaking research results about American society, such as the crucial importance of social class affiliation for the ability to pursue a formal professional career (Warner, 1962 [1953], 1968 [1940]). --- CONCLUSION Social scientists such as Martin Bulmer (1986 [1984]), Anthony Oberschall (1972), and George W. Stocking (1982 [1968]) have shown the importance of placing researchers and their ideas in a historical context. In textbooks and historical overviews, there is often a tendency to place researchers and ideas within anachronistic themes that take insufficient account of the colleagues and the departments where the researchers were educated. However, Whyte and SCS have often been placed in an anachronistic context, or there has been a tendency to take insufficient account of his colleagues and the department where the primary research was carried out. All of these studies assume that SCS is part of the tradition that is today called the Chicago school of sociology (Klein, 1971; Jermier, 1991; Schwartz, 1991; Boelen, 1992; Thornton, 1997). Instead, there are other social-anthropological studies that belong to the same comparative research tradition as SCS: for example, Horace Miner's St. Denis (1963 [1939]); John F. Embree's Suye Mura (1964 [1939]); Conrad M. Arensberg's and Solon T. Kimball's Family and Community in Ireland; Edward H. Spicer's Pascua (1984 [1940]); and Allison Davis, Burleigh B. Gardner, and Mary R. Gardner's Deep South (1965 [1941]). Even Everett C. Hughes' French Canada in Transition (1963 [1943]) might be argued to lie at the research field's margin. With the exception of Arensberg and Hughes, the reason that none of the other researchers are included in the diagram above is that Whyte, according to my findings from archival studies, did not correspond with them, know them personally, or come under their direct influence while doing research at Harvard or Chicago. During this same time period, that is, the late 1930s and early 1940s, these researchers were doing fieldwork in locations as different as Canada, Japan, Ireland, and the United States. Not only did Whyte come to the ground-breaking conclusion that the slum is informally organized, he also conducted participant observations for a longer period of time than anyone before him had done in an urban context. An equally important discovery was the understanding of the street gang's internal structure and informal leadership. It was not until Whyte, after 18 months of intensive fieldwork, dropped the idea of conducting a comprehensive community study modeled on the Middletown studies and the Yankee City Series that the group structure of street gangs became the main focus of his study. Whyte came to this conclusion by combining Arensberg's and Chapple's observation method with participant observations of several bowling matches in the fall of 1937 and spring of 1938. In the often-mentioned bowling match of April 1938, in which most of the Nortons (corner boys) came together to settle who was the greatest bowler, the results of the match coincided, with a few exceptions, with the group's hierarchical structure. George C. Homans came to use Whyte's meticulous observations, gathered during 18 months of the Nortons' everyday practices, as a case in The Human Group (1993 [1951]).
Using five ethnographic case studies, Homans aimed to reach universal hypotheses about norms, rank, and leadership in primary groups. He (1993 [1951]) claims that, after a number of detailed field studies of primary groups during the interwar period, 1919-1938, there was a need for a sociological generalization of "the small group" (p. 3). The book therefore had a twofold purpose: "to study the small group as an interesting subject in itself, but also, in so doing, to reach a new sociological synthesis" (Homans, 1993 [1951], p. 6). Whyte and Homans had been research colleagues at the Society of Fellows. Consequently, it is probably no coincidence that both, although at different times, became interested in primary groups' formal and informal organization. When Homans (1993 [1951]) generalizes Whyte's ethnographic observations of the corner boys' internal structure, he also changes the concept of status to rank because he "wants a word that refers to one kind only" (p. 179). With "one kind," Homans (1993 [1951]) means that status has a sociologically multifaceted meaning that refers to the person's social practice and position in a social network, while the concept of rank more clearly refers to larger organizations, such as companies or the military, with a "pyramid of command" (p. 186). Because of this, Whyte's complex ethnographic discoveries about street gangs' social organization and mutual obligations to organized crime, police corps, and political machinery became reduced, in Homans' theoretical study, to a question regarding the internal chain of command among the Nortons. Despite these shortcomings, Homans (1993 [1951]) argued that the intention was to develop "a theory neither more nor less complex than the facts it subsumes" (p. 16). At the same time, it is difficult to disregard the fact that both Homans' The Human Group and SCS were, in various ways, pioneering contributions to the creation of the research field of "the small group." However, it is worth noting that in The Human Group there is an incipient conceptual change from the concept of the primary group to the small group, which from Homans' point of view probably marks a generational shift in sociological research, although it is basically about the same social phenomenon. Finally, I have argued that William Foote Whyte's social-anthropological schooling at Harvard was crucial to his, at the time, path-breaking conclusion that the North End had an informal, well-functioning social organization. If Whyte had instead been educated in sociology at the University of Chicago, he would probably have had a preunderstanding of the slum as being socially disorganized. Social anthropologists Radcliffe-Brown, Warner, and Arensberg passed on to Whyte the "paradigm" of viewing the slums (the North End) as socially organized, and not socially disorganized as the majority of American sociologists claimed (Warner, 1941a; Gibbs, 1964). The fact that this debate is still relevant, at least in the United States, is shown by researchers such as Philippe Bourgois (2003 [1995]) and especially Loïc Wacquant (2008), who are very critical of those who define poor urban neighborhoods as disorganized, while researchers like Robert J. Sampson (2012) argue that the perspective continues to have relevance (pp. 36-39).
Social scientists have mostly taken it for granted that William Foote Whyte's sociological classic Street Corner Society (SCS, 1943) belongs to the Chicago school of sociology's research tradition or that it is a relatively independent study which cannot be placed in any specific research tradition. Social science research has usually overlooked the fact that William Foote Whyte was educated in social anthropology at Harvard University, and was mainly influenced by Conrad M. Arensberg and W. Lloyd Warner. What I want to show, based on archival research, is that SCS cannot easily be said either to belong to the Chicago school's urban sociology or to be an independent study in departmental and idea-historical terms. Instead, the work should be seen as part of A. R. Radcliffe-Brown's and W. Lloyd Warner's comparative research projects in social anthropology.
Introduction Anti-social behaviour on social media, such as harassment and bullying, is on the rise [1]. This trend has intensified since the beginning of the COVID-19 pandemic in 2020, when much social communication moved to online spaces [2][3][4]. Online anti-social behaviour can lead to several negative outcomes, such as decreasing an individual's satisfaction with technologies and being online in general [5] to causing mental and emotional stress in victims [6]. Consequently, those at the receiving end of online anti-social behaviour (such as people who experience online harassment) may adopt coping strategies that can further isolate them [7]. In this study, we use the term "online anti-social behaviour" to encompass a range of harmful acts, including trolling (the intentional provocation of others through inflammatory online comments), bullying (aggressive behavior towards an individual or group), and harassment (offensive or abusive conduct directed at others) that have a negative impact, causing harm or distress to individuals or communities [8][9][10]. While bullying and harassment are related concepts, bullying is often defined as repeated aggressive behavior, typically by someone who perceives themselves to have more power over someone else [11]. Harassment, on the other hand, is a broader concept that includes any unwanted, offensive, or abusive conduct towards others. While many studies on anti-social behaviour have focused on children and adolescents [12-16, for example], there is limited research focusing on young adults. Importantly, young adults are more likely than any other age group to report experiencing online harassment [1] and other forms of anti-social behaviour, especially during the COVID-19 restrictions [4]. Young adults are also generally more active online, particularly in Canada [17]. As such, the research focuses on university students. This research focuses on the perpetrators of anti-social behaviour on social media and asks: What factors are associated with young adults being perpetrators of anti-social behaviour when using social media? The contributions of this research are twofold. First, most previous research has examined the intrinsic and extrinsic characteristics of people targeted by perpetrators of anti-social behaviour [see 1,5,6,18]. Consequently, there is less understanding of what motivates perpetrators. Second, among the studies that focused on perpetrators, many looked at one or a few factors associated with the perpetration of anti-social behaviour [19][20][21][22][23]. Building on the previous scholarship, this research identifies and evaluates a more comprehensive model to understand psychological, social, and technology-associated factors related to being a perpetrator of online anti-social behaviour. Specifically, the proposed model incorporates the following factors known in the literature, but not necessarily tested together: online disinhibition, motivations for cyber-aggression, self-esteem, and empathy. --- Literature review While social media can provide rewarding social connections for many, it can also be a space where users face anti-social behaviour. A recent study identified that 41% of Americans have personally experienced some form of online harassment or abuse; people who experienced online anti-social behaviour cited they were potentially targeted because of their political views, gender, race, ethnicity, religion and sexual orientation [1]. 
Anti-social behaviour is not a phenomenon exclusive to the internet; psychologists have widely analyzed anti-social behaviour in other contexts for several years prior to the widespread adoption of the internet [10]. The increased use of online platforms has contributed to the exponential rise of online anti-social behaviour [24,25], which has, consequently, reduced the perceived benefits and promise of social media in society [26]. Recently, the increasing reliance on online platforms due to the COVID-19 pandemic restrictions has also been linked to the rise of anti-social behaviour [3,4], perhaps because people have been spending more time on social media [2]. Online anti-social behaviour has several negative outcomes. First, it can reduce online participation, which is particularly impactful for minorities and marginalized communities. Lumsden and Harmer [27] identified that online anti-social behaviour is another avenue of disenfranchisement and discrimination for equity-deserving and marginalized communities, impacting their status, legitimation, and participation in online spaces. Second, previous research has shown that the effect of anti-social behaviour goes beyond the targets and also includes bystanders. Duggan [6] reported that 27% of Americans decided not to share something online after witnessing the abuse and harassment of others. Together, these negative effects of online anti-social behaviour can reduce the diversity of voices on social media and make people uncomfortable going online [28]. Third, online anti-social behaviour can have profound effects on individual's emotional feelings, their reputation and personal safety [6]. While the effects of anti-social behaviour have been well documented, previous research is less clear on what makes someone engage in such behaviour towards another person online. To explain the prevalence of anti-social behaviour on social media and in public discourse, Hannan [29] revisited Neil Postman's [30] theory about how the entertainment frame, which identifies the need for all information to be entertaining, has influenced public discourse. Focusing on television broadcasts in the last century, Postman [30] warned that the entertainment frame has seeped into education, journalism, and politics, which has changed how people interact with one another and society. In a society driven by an entertainment frame, individuals begin to expect all interactions to be entertaining, which influences behaviour and the boundaries of what communication is deemed acceptable. While Postman was writing about television, his theoretical lens has been effectively employed to understand social media [29]. Hannan [29] argued that, like television turned public discourse into "show business" the preeminence of online platforms has turned the online public sphere into a sort of "high school". Trolling on social media has become mainstream as a new genre of public speech, which shapes the discourse and the practices of politicians, public figures, and citizens. To understand how the entertainment frame relates to a person's likelihood to engage in online anti-social network, we developed a conceptual framework. The following section describes our conceptual framework which seeks to explain what makes someone engage in anti-social behaviour on social media. Specifically, we describe the factors and formulate a model of the drivers of the perpetration of online anti-social behaviour. 
--- Conceptual framework and research hypotheses --- Cyber-aggression Since the goal of this research is to identify factors associated with being a perpetrator of antisocial behaviour on social media, Shapka's and Maghsoudi's [31] concept of cyber-aggression is applied. Instead of employing a binary classification and directly asking participants whether they consider themselves to be perpetrators or victims, the main dependent variable is the cyber-aggression construct. This construct assesses the level of people's engagement in behaviour frequently associated with being a perpetrator, such as making hurtful comments about somebody's race, ethnicity or sexual orientation, purposely excluding a certain person or group of people, and posting embarrassing photos or videos of someone else. --- Online disinhibition Online disinhibition refers to the phenomenon when people say or do something online that they would not normally do in a face-to-face setting [32]. Suler [32] attributes this effect to six factors: [1] dissociative anonymity, as it is harder to determine who online people are; [2] invisibility, as people often cannot see each other online; [3] asynchronicity, as online communication does not require the sender and receiver to be co-present online for messages to be sent; [4] solipsistic introjection, as people tend to assign voices and other visual elements to whom they interact with due to the absence of face-to-face cues; [5] dissociative imagination, as some people can imagine separate dimensions from the real world when interacting online; and [6] minimization of status and authority, as people may perceive more of a peer-relationship as everyone "starts off on a level playing field" (p. 324) and therefore may be more willing to misbehave. Benign disinhibition refers to the effect when these factors motivate people to engage in positive interactions online. On the other hand, toxic disinhibition refers to when these factors motivate people to propagate hate and violence [32]. This study focuses on the association between online disinhibition and perpetration of online anti-social behaviour, as online disinhibition is linked to a higher likelihood of sharing harmful content [33]. Research suggests that use of social media enhances online disinhibition leading to anti-social behaviour [9]. Research has identified a positive association between online disinhibition and being a perpetrator of cyber-aggression [33][34][35]. In particular, Udris [35] separately analyzed the two dimensions of online disinhibition (i.e., benign disinhibition, and toxic disinhibition) and found that both positively predicted being a perpetrator. Wachs et al. [36] and Wachs and Wright [37] similarly found a positive association between the toxic dimension of online disinhibition and online hate. Building on this work, we propose the following hypothesis: H1. Online disinhibition is positively associated with being a perpetrator of cyber-aggression. (Benign and toxic disinhibition are tested separately.) --- Motivations for cyber-aggression Runions et al. [38] proposed a model to explore aggression motives based on the Quadripartite Violence Typology. This typology explores two dimensions: motivational valence and self-control. The motivational valence is aversive when the aggressive action of an individual is the reaction to violence or provocation. The motivational valence is appetitive when the motivation for one's aggressive behaviour is to seek an exciting experience or some kind of reward. 
In summary, while aversive motivational valence is reactive, appetitive motivational valence is proactive. The self-control of aggressive actions might be impulsive or controlled depending on the deliberation and how it was planned. Based on the combination of the two dimensions, there are four distinct motivations for cyber-aggression: impulsive-aversive (Rage), controlled-aversive (Revenge), controlled-appetitive (Reward), and impulsive-appetitive (Recreation) [38]. Runions et al. [38] identified that all four motivations for cyber-aggression (i.e., Rage, Revenge, Reward, and Recreation) predicted being a cyber-aggression perpetrator. In terms of specific domains and different anti-social behaviours, Gudjonsson and Sigurdsson [20] found that excitement (Recreation) was a commonly endorsed motive for offending others. Ko <unk>nig et al. [23] found that victims of traditional bullying that engaged in cyberbullying tend to do it for revenge. Similarly, Fluck [14] identified that bullies indicate that their reason for engaging in cyber-aggression was mostly revenge, but also sadism attributed to fun experiences (Recreation) was mentioned for some bullies. Sadism was also found to be associated with online trolling, which indicates that trolls engage in anti-social behaviour for fun and enjoyment [39]. Thus, we expect that: H2. The motivations for cyber-aggression are positively associated with being a perpetrator of cyber-aggression. (Each of the four motivations for cyber-aggression are tested separately). --- Self-esteem Self-esteem refers to the perception one has towards the self [40,41]. Self-esteem is usually viewed as a two dimensional construct: self-confidence and self-deprecation. Self-confidence refers to the positive attitudes towards the self. Self-deprecation focuses on negative perceptions towards the self. It is important to analyze the influence of self-esteem on cyberaggression because self-esteem has been traditionally associated with offline anti-social behaviour, such as bullying [40]. Among research that explored the association between self-esteem and cyber-aggression, Rodr<unk> <unk>guez-Hidalgo et al. [15] found that self-deprecation was positively associated with being a perpetrator, but found nonsignificant associations between self-confidence and being a perpetrator. Other studies combined self-confidence and self-deprecation into a single construct of self-esteem (reverse-scoring items related to self-deprecation) and identified that lower levels of self-esteem lead to a higher likelihood of being a cyber-aggression perpetrator [40,42]. Aligned with the prior work, we hypothesize that: H3. Self-esteem is negatively associated with being a perpetrator of cyber-aggression. (Selfconfidence and self-deprecation are assessed separately). --- Empathy Empathy refers to the ability to experience and comprehend other people's emotions and consists of two dimensions: the affective dimension (i.e., how one experiences the emotions of others) and the cognitive dimension (i.e., the capacity to comprehend the emotions of others) [43]. Empathy is relevant to understanding the motivations of anti-social behaviour because the capacity to experience and understand the emotions of others often leads to positive social interactions, such as helping others and sharing positive emotions and thoughts [12,21]. In contrast, a lack of empathy may lead to negative social interactions. 
Ang and Goh [12] found that both cognitive and affective empathy negatively predicted being a perpetrator of cyber-aggression. Jolliffe and Farrington [21] analyzed the influence of empathy in bullying among adolescents and found mixed results: both cognitive and affective empathy were negatively associated with bullying among boys, and only affective empathy was negatively associated with bullying among girls (the authors note that the low numbers of girls involved in bullying could have prevented cognitive empathy from reaching statistical significance). Casas et al. [44] analyzed empathy as a unidimensional construct (combining both cognitive and affective empathy) and found that low empathy leads to higher cyber-aggression perpetration. Other studies using adapted various scales to measure empathy found similar results [15,22,45]. In a systematic review, van Noorden et al. [16] identified that: (1) most studies reported a negative association between cognitive empathy and being a cyber-aggression perpetrator (although a few studies did not find any significant association or found a positive association), and (2) most studies reported a negative association between affective empathy and being a cyber-aggression perpetrator (with a few studies finding no association). Thus, we propose the following hypotheses: --- H4. Empathy is negatively associated with being a perpetrator of cyber-aggression. (Cogni- tive and affective empathy are assessed separately). Table 1 provides a summary of the research hypotheses. To identify factors associated with perpetration of anti-social behaviour, the scales included in the model have specific dimensions that can provide more granular results. Therefore, the model includes detailed scales to analyze how each factor is associated with being a perpetrator of online anti-social behaviour. --- Methods Prior to data collection, the study received approval from the Research Ethics Boards at both Toronto Metropolitan University and Royal Roads University (at the time of the study, the authors were affiliated with one of these institutions). Undergraduate students at Toronto Metropolitan University who signed up for the Student Research Participant Pool were invited to voluntarily participate in an online survey. The Student Research Participant Pool invites students to voluntarily participate in scholarly research and receive extra course credit that can be applied to specific courses. Before taking the survey, participants were required to review and agree to the informed consent form before starting the survey which was hosted on Qualtrics, an online platform. Students were given the opportunity to review and save the consent form on their own devices. They were also able to withdraw from the survey at any time by simply closing their browser. In such cases, their data was not used in the study. As this was an online survey, students had the flexibility to complete it at their own pace and from any location of their choosing. In total, 557 students participated in the survey between March 9 and April 18, 2022. The survey dataset was cleaned and the data was completely anonymized. A two-step disqualification process was used to assure the high quality of the data. First, an attention check question was employed to identify participants who were not carefully reading the questions, which resulted in the removal of 182 responses who answered the question incorrectly. 
Second, responses from participants who completed the survey in less than 5 minutes, which indicates that they did not carefully read the questions (n = 16), were removed. We did not exclude responses that took longer than expected because some students may have opened the survey page but completed it at a later time. After data cleaning, the final dataset consisted of 359 participants. On average, respondents completed the survey in 25 minutes, and the median completion time was 13 minutes, which was aligned with the anticipated completion time in the piloted survey. The final dataset is available at doi.org/10.6084/m9.figshare.22185994. Partial Least Squares Structural Equation Modeling (PLS-SEM) was used to analyze the data. PLS-SEM is a non-parametric approach that can handle complex models and can be used to test relationships between multiple independent and dependent variables simultaneously [46,47]. This method has been widely used in several fields, such as business, political communication, and psychology [48,49], and more recently internet studies [50][51][52]. SmartPLS v. 3.3.9 software was used to analyze the association between the constructs below. --- Factors Hypotheses --- Online disinhibition H1a. Benign online disinhibition is positively associated with being a perpetrator of cyber-aggression. H1b. Toxic online disinhibition is positively associated with being a perpetrator of cyber-aggression. --- Motives for cyberaggression H2a. Rage is positively associated with being a perpetrator of cyber-aggression. H2b. Revenge is positively associated with being a perpetrator of cyber-aggression. H2c. Reward is positively associated with being a perpetrator of cyber-aggression. H2d. Recreation is positively associated with being a perpetrator of cyber-aggression. --- Self-esteem H3a. Self-deprecation is positively associated with being a perpetrator of cyberaggression. H3b. Self-confidence is negatively associated with being a perpetrator of cyberaggression. --- Empathy H4a. Cognitive empathy is negatively associated with being a perpetrator of cyberaggression. H4b. Affective empathy is negatively associated with being a perpetrator of cyberaggression. https://doi.org/10.1371/journal.pone.0284374.t001 --- Measurement scales The scales used in the online survey have been tested and validated by previous research. All constructs were measured using a 5-point Likert scale ranging from "strongly disagree" to "strongly agree," except for the measurement of being a perpetrator of online anti-social behaviour, which was measured using a 5-point Likert scale ranging from "never" to "always." S1 Appendix outlines the constructs and scales used in the research. Based on the previous applications of these scales, all were modeled as reflective constructs in the PLS-SEM analysis. Cyber-aggression was measured using the Cyber-aggression and Cyber-victimization Scale [31]. While this scale has two components: cyber-aggression and cyber-victimization, only the former was used in our research (CAVP) due to the focus on perpetrators of anti-social behaviour. The scale included twelve indicators with statements about how individuals behave toward others online, such as "posted or re-posted something embarrassing or mean about another person." This scale is particularly useful because it focuses on cyber-aggressive behaviour overall (i.e., specific acts associated with cyber-aggression). 
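Returning to the two-step screening described earlier in this section, the following is a minimal, hypothetical pandas sketch of the disqualification rules. The synthetic data frame and its column names (attention_check, duration_sec) are assumptions for illustration only; just the two rules (failed attention check, completion in under five minutes) come from the text.

```python
# Hypothetical sketch of the two-step screening; the data and column names are
# invented, only the screening rules follow the procedure described above.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
# Synthetic stand-in for the 557 raw responses (real data came from Qualtrics).
raw = pd.DataFrame({
    "attention_check": rng.choice(["expected_answer", "other"], size=557, p=[0.67, 0.33]),
    "duration_sec": rng.integers(120, 3600, size=557),
})

# Step 1: drop respondents who answered the attention-check item incorrectly.
attentive = raw[raw["attention_check"] == "expected_answer"]

# Step 2: drop respondents who finished in under 5 minutes (300 seconds);
# no upper cut-off is applied, since a long duration may simply mean the
# survey page was opened early and completed later.
clean = attentive[attentive["duration_sec"] >= 300]

print(len(raw), len(attentive), len(clean))
```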
This scale overcomes a limitation of previous scales that focused on specific online platforms (e.g., Facebook) or modes of communicating (e.g., computers or cellphones) [31]. The Online Disinhibition Scale [35] was used to measure benign disinhibition (BOD) and toxic disinhibition (TOD). Benign disinhibition was measured by seven indicators and toxic disinhibition was measured by four indicators. To measure the four motivations for cyber-aggression, an adapted version of the Cyber-Aggression Typology Questionnaire [25] was used. In Antipina et al.'s [13] adaptation, each motive (i.e., Rage, Revenge, Reward, and Recreation) was measured by five indicators. To evaluate respondents' self-esteem, Rosenberg's Self-Esteem Scale [41] was used, and the two dimensions of self-esteem were explored separately. Self-confidence (RSEC) and self-deprecation (RSED) were each measured by five indicators. The Basic Empathy Scale [43] was used to explore cognitive empathy (BCE) and affective empathy (BAE). Cognitive empathy was measured by nine indicators and affective empathy was measured by eleven indicators. Table 2 provides descriptive data for the constructs in our dataset. --- Constructs and model assessments Current PLS-SEM guidelines were followed to assess the reliability of the constructs, the validity of the model, and to report the results [47,53]. The following procedures for the construct and model assessments were used: internal consistency, discriminant validity, collinearity between indicators, and significance and relevance of the structural model. We identified internal consistency issues in the following constructs: Affective Empathy (BAE), Cognitive Empathy (BCE), Benign Online Disinhibition (BOD), and Self-Deprecation (RSED). Additionally, we identified indicators with low outer loadings for Toxic Online Disinhibition (TOD). To solve these issues, we removed indicators with loadings below 0.6. Although the ideal threshold is 0.7, a threshold of 0.6 is acceptable for exploratory research [53]. We decided to use the 0.6 threshold for outer loadings because the more conservative 0.7 threshold would cause the Cronbach's alpha for BOD to go below the minimum acceptable value of 0.6. After excluding six BAE indicators, five BCE indicators, four BOD indicators, two TOD indicators, and two RSED indicators, values of composite reliability were well above the minimum of 0.6, and values of Average Variance Extracted (AVE) were above the minimum of 0.5 for all constructs. Cronbach's alpha values were above the ideal 0.7 for most constructs, except for BOD and BCE, which were above the minimum acceptable value of 0.6. In total, we removed 26% of the indicators, which is within acceptable limits for exploratory research [54]. We verified that the majority of constructs (excluding toxic online disinhibition) were assessed using at least three items, which is considered ideal for statistical identification of the construct [54]. Table 3 details the internal consistency values, while Table 4 displays the loadings of the indicators. We also identified one discriminant validity issue. The HTMT correlation between Rage and Revenge was above 0.95, which suggests that the two constructs were not empirically distinct from each other in the model. Therefore, we decided to combine the two constructs into one, since both focus on aversive motives for cyber-aggression [25,38]. This approach is aligned with prior research on the motivational valence of cyber-aggression [55].
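To make these reliability criteria concrete, the following is a minimal, from-scratch Python sketch (not the SmartPLS workflow used in the study) of the thresholds applied above: Cronbach's alpha computed from raw item scores, and composite reliability and AVE computed from standardized outer loadings after dropping indicators that load below 0.6. All numbers are invented for illustration.

```python
# Illustrative computation of the reliability criteria discussed above.
# This is a from-scratch sketch with invented numbers, not the SmartPLS output.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x indicators matrix of Likert scores for one construct."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

def composite_reliability(loadings: np.ndarray) -> float:
    """loadings: standardized outer loadings of one reflective construct."""
    error_terms = 1 - loadings ** 2
    return loadings.sum() ** 2 / (loadings.sum() ** 2 + error_terms.sum())

def ave(loadings: np.ndarray) -> float:
    """Average variance extracted."""
    return float(np.mean(loadings ** 2))

# Toy construct with four indicators; one falls below the 0.6 loading cut-off.
loadings = np.array([0.82, 0.74, 0.68, 0.41])
kept = loadings[loadings >= 0.6]                       # indicator-removal rule used above

rng = np.random.default_rng(0)
common = rng.normal(size=(359, 1))                     # shared "trait" signal
items = common + rng.normal(scale=0.8, size=(359, 3))  # three correlated fake indicators

print(round(cronbach_alpha(items), 2))                 # comfortably above the 0.6/0.7 cut-offs
print(composite_reliability(kept) >= 0.6, ave(kept) >= 0.5)
```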
After creating a single construct for aversive motives (Rage and Revenge), no other discriminant validity issues were identified (see Table 5). There were no collinearity issues in the data, as VIF values were below 5 for all indicators. Values of path coefficients (β), f², and R² were considered to measure the relevance of the model, while bootstrapping was used to test the significance of the associations between constructs. --- Results The analysis of the model (see Fig 1) shows a moderate, positive and significant association between reward and being a perpetrator (β = 0.292), and between recreation and being a perpetrator (β = 0.290), which supports H2c and H2d. The analysis also indicates a weak but significant negative association between cognitive empathy and being a perpetrator (β = -0.110), which supports H4a. No other construct had a significant association with being a perpetrator of cyber-aggression. Table 6 provides detailed information about which hypotheses were supported by the results:
H1a. Benign online disinhibition is positively associated with being a perpetrator of cyber-aggression. Not supported.
H1b. Toxic online disinhibition is positively associated with being a perpetrator of cyber-aggression. Not supported.
H2a. Rage is positively associated with being a perpetrator of cyber-aggression. Not supported.
H2b. Revenge is positively associated with being a perpetrator of cyber-aggression. Not supported.
H2c. Reward is positively associated with being a perpetrator of cyber-aggression. Supported.
H2d. Recreation is positively associated with being a perpetrator of cyber-aggression. Supported.
H3a. Self-deprecation is positively associated with being a perpetrator of cyber-aggression. Not supported.
H3b. Self-confidence is negatively associated with being a perpetrator of cyber-aggression. Not supported.
H4a. Cognitive empathy is negatively associated with being a perpetrator of cyber-aggression. Supported.
H4b. Affective empathy is negatively associated with being a perpetrator of cyber-aggression. Not supported.
(https://doi.org/10.1371/journal.pone.0284374.t006)
The assessment of effect sizes shows a small effect size of reward and of recreation on being a perpetrator (both f² = 0.043), and a near-negligible effect size of cognitive empathy on being a perpetrator (f² = 0.014). In terms of model assessment and explanatory power, the model shows moderate predictive power (adj. R² = 0.352), and the SRMR indicates a good model fit (0.057 for both the saturated and the estimated model). The blindfolding procedure with an omission distance of 7 returns a positive value of Q² = 0.202, which confirms the predictive relevance of the model.
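As a rough illustration of the model-assessment steps just reported, the sketch below shows (a) a nonparametric bootstrap of a single standardized path coefficient and (b) the f² effect-size formula. It is a simplification under stated assumptions: a real PLS-SEM analysis (as run in SmartPLS) re-estimates the full measurement and structural model on every resample, and both the construct scores and the "excluded" R² value here are invented for illustration.

```python
# Simplified sketch of bootstrap significance testing and the f^2 effect size.
# Construct scores and the "excluded" R^2 below are invented; only the logic
# mirrors the assessment described in the text, not the study's actual model.
import numpy as np

rng = np.random.default_rng(42)
n = 359
reward = rng.normal(size=n)                               # stand-in construct scores
perpetration = 0.3 * reward + rng.normal(scale=0.95, size=n)

def std_beta(x: np.ndarray, y: np.ndarray) -> float:
    """Standardized slope of y on x (a stand-in for a single path coefficient)."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    return float(np.polyfit(x, y, 1)[0])

# Nonparametric bootstrap: resample respondents with replacement, re-fit each time.
boot_betas = []
for _ in range(5000):
    idx = rng.integers(0, n, n)
    boot_betas.append(std_beta(reward[idx], perpetration[idx]))
lo, hi = np.percentile(boot_betas, [2.5, 97.5])
print(f"beta = {std_beta(reward, perpetration):.3f}, 95% bootstrap CI [{lo:.3f}, {hi:.3f}]")

# Cohen's f^2 for one predictor: (R2_included - R2_excluded) / (1 - R2_included).
r2_included, r2_excluded = 0.352, 0.324                   # second value is hypothetical
print(round((r2_included - r2_excluded) / (1 - r2_included), 3))   # about 0.043
```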
--- Discussion In the model, the findings suggest that recreation and reward are two important constructs for understanding the perpetration of online anti-social behaviour. In the context of our research, this indicates that appetitive motives for anti-social behaviour (i.e., when the aggression is proactive) are more important than aversive motives (i.e., rage and revenge), in which the aggression is a reaction to another situation. Our findings are consistent with studies that focused on online trolls [39] and young offenders on probation [20], and contrary to studies that focused on bullying and cyber-bullying [14,23]. While online trolls and young people on probation indicate that they engage in online anti-social behaviour for fun, enjoyment, and excitement (related to appetitive motives), bullies and cyber-bullies tend to indicate revenge as their main reason. Therefore, young people in our sample engaging in anti-social behaviour might be seeking excitement and aiming to obtain positive emotions or social status [25,38]. In this sense, self-control, which distinguishes recreation (impulsive) from reward (controlled), does not seem to play a significant role in the likelihood of young people engaging in anti-social behaviour. A previous study that explored the role of different motivations in online and offline aggression [19] found that recreation was more prevalent in online environments, which is aligned with our findings. Graf et al. [19] suggest that recreation may be prevalent online because this motivation is generally associated with less interpersonal motives. On the other hand, Graf et al. [19] identified that reward was more prevalent in the offline context, especially because this motivation is generally associated with social dynamics such as group affiliation and power relations [18,19,56]. Therefore, perpetrators seeking rewards often prefer offline environments because they have more control over the bystanders and over how they will shape the social structure as a consequence of their acts [19]. During the COVID-19 pandemic, young people have been spending more time online, reducing their access to in-person activities in which they could have engaged in anti-social behaviour for reward purposes. This could explain why reward was identified as a prevalent reason for young people to engage in online anti-social behaviour; they had to adapt how they interact with others in a context that was heavily dependent on online platforms for social interactions. The data generally supports both Postman's [30] theory of the entertainment frame and how it was later modernized by Hannan [29]. Specifically, we found that university students engage in anti-social behaviour both for fun (i.e., recreation) and for social approval (i.e., reward). Perpetrators of anti-social behaviour on social media are doing so because it is entertaining. While recreation is strongly associated with the original theory and the centrality of entertainment in public discourse, reward emerges as particularly important when the theory is revisited by Hannan [29] to account for how social media affected public discourse, making trolling a central feature of social interactions that emulate a high school setting. In addition to reward and recreation, the model shows that cognitive empathy is also a factor associated with the perpetration of online anti-social behaviour. Those with lower cognitive empathy, indicating a lower capacity to comprehend the emotions of others, are more likely to engage in such behaviour. This suggests that perpetrators may be engaging in online anti-social behaviour because they do not fully understand how their targets feel. Based on this finding, one potential strategy for reducing the prevalence of online anti-social behaviour is to implement psychological interventions that highlight the negative effects of the behaviour on the targets. Interestingly, other factors showed nonsignificant associations with cyber-aggression perpetration.
The fact that both benign and toxic online disinhibition had nonsignificant associations with perpetration indicates that characteristics of online platforms (e.g., anonymity and asynchronicity) and perceptions of social norms in online interactions (e.g., minimization of status and authority) do not play a significant role in online anti-social behaviour among university students. Although studies and reports indicated that the prevalence of online antisocial acts (such as online harassment and cyber-bullying) increased during the pandemic [2][3][4], our results indicate that the spike in online anti-social behaviour is less about online disinhibition and more about how most social interactions moved to the online environment. Instead of being a consequence of the online environment, anti-social behaviour is more likely motivated by the need for social approval, group bonding, fun and excitement (as indicated by the positive associations with reward and recreation). There were no significant associations between any dimensions of self-esteem (i.e., self-confidence and self-deprecation) and being a perpetrator. Therefore, the results do not support findings from previous studies that identified an association between self-esteem and perpetration [15,40,42]. Our data suggests that one's perception towards the self is not a key factor of being a perpetrator, at least not among the studied population. In summary, this study provides evidence on why young adults, particularly university students, engage in anti-social behavior. By highlighting the association between engagement in anti-social behavior and social factors such as enjoyment and social approval, our study presents a direction for future research to further analyzehow social elements play a role in antisocial behavior. While engagement in various forms of anti-social behavior is frequently linked to psychological traits, we found cognitive empathy to be the only significant factor among our study participants. In particular, a lower ability to understand how targets feel may be fueling the desire for fun and social approval without regard for the consequences. Future studies can further explore the relationship between these constructs. --- Conclusion The research sought to identify the factors associated with the perpetration of anti-social behaviour. We developed a model to account for the role of online disinhibition, motivations for cyber-aggression, self-esteem, and empathy in the perpetration of online anti-social behaviour. The findings suggest that three factors are associated with the perpetration of online antisocial behaviour: recreation, reward and cognitive empathy. Both recreation and reward are appetitive motives for anti-social behaviour, which suggests that young people engage in online anti-social behaviour for fun, excitement, and social approval. Cognitive empathy was negatively associated with the perpetration of online anti-social behaviour, which suggests that perpetrators have lower capacity to comprehend the emotions of others. Perpetrators have a lower understanding of how their targets might feel and this could partly explain why they engage in online anti-social behaviour. Other factors showed nonsignificant associations with perpetration. 
Interestingly, both benign and toxic disinhibition had nonsignificant associations with perpetration, which indicates that the prevalence of online anti-social behaviour is less about the nature of the medium (e.g., anonymity, asynchronicity) and more about the individuals involved. Building on the results, there are two potential strategies for mitigating anti-social behaviour. First, related to our findings that perpetrators are more likely to be motivated by recreation and reward and to have lower cognitive empathy, we refer to earlier work by Jolliffe and Farrington [21], who found that making people think about their actions increases their awareness and builds empathy towards the target. In this regard, interventions such as Twitter's addition of friction that makes people reconsider before posting potentially offensive content [57] might reduce anti-social behaviour on social media. These types of strategies may be useful both in making people think about their targets and potentially understand how they might feel (cognitive empathy), and in reducing impulsive anti-social acts (recreation). For example, a recent survey of Twitter users who had posts removed by the platform found that less than 2% of them posted something to intentionally hurt someone [58]. Second, while outside the scope of the current study, Kim et al. [59] found that showing basic community guidelines to users can also encourage individuals to engage in healthier discussions, reducing the amount of problematic content reported by others. This suggests that, in addition to introducing some friction into online communication, platforms should endeavour to include more education highlighting the community rules and norms set by a given platform or online community. This way, newcomers to the platform would learn from the beginning what is and is not acceptable behaviour in a given community. The idea is not new; various communities on Reddit have already adopted this approach. Most larger social media platforms, however, tend to develop long, jargon-ridden guidelines of community norms, which are then buried in the fine print and are not seen or read by users [60]. Katsaros et al. [58] found that one in five users who violated Twitter's rules had never read the platform's guidelines on appropriate behaviour, and of those who had read the rules, over half were only somewhat familiar with them, or less. As with any empirical work, the research has several limitations that stimulate future research in this area. Since this study relies on a sample of undergraduate students from one urban university in Canada, our sample is only representative of this group of young adults. Future studies could expand the work by using different and/or larger samples, such as nationally representative samples of adults. The reliability of some scales was also below the expected threshold, an issue that was addressed by following current PLS-SEM procedures. Therefore, future studies can revalidate some of these scales by using larger and/or more diverse samples. --- The anonymized dataset is available via the following DOI: 10.6084/m9.figshare.22185994. --- Supporting information S1 Appendix. Constructs. (DOCX)
--- Author Contributions Conceptualization: Felipe Bonow Soares, Anatoliy Gruzd, Jenna Jacobson, Jaigris Hodson. Data curation: Felipe Bonow Soares, Anatoliy Gruzd. Formal analysis: Felipe Bonow Soares. Funding acquisition: Anatoliy Gruzd, Jenna Jacobson, Jaigris Hodson. Methodology: Felipe Bonow Soares, Anatoliy Gruzd, Jenna Jacobson, Jaigris Hodson. Project administration: Anatoliy Gruzd. Writing - original draft: Felipe Bonow Soares, Anatoliy Gruzd. Writing - review & editing: Felipe Bonow Soares, Anatoliy Gruzd, Jenna Jacobson, Jaigris Hodson.
Online anti-social behaviour is on the rise, reducing the perceived benefits of social media in society and causing a number of negative outcomes. This research focuses on the factors associated with young adults being perpetrators of anti-social behaviour when using social media. Based on an online survey of university students in Canada (n = 359), we used PLS-SEM to create a model and test the associations between four factors (online disinhibition, motivations for cyber-aggression, self-esteem, and empathy) and the likelihood of being a perpetrator of online anti-social behaviour. The model shows positive associations between two appetitive motives for cyber-aggression (namely recreation and reward) and being a perpetrator. This finding indicates that young adults engage in online anti-social behaviour for fun and social approval. The model also shows a negative association between cognitive empathy and being a perpetrator, which indicates that perpetrators may be engaging in online anti-social behaviour because they do not understand how their targets feel.
Introduction Research in teaching English to speakers of other languages (TESOL) has generated a revived interest in encouraging teachers to engage in and with research as part of their professional practice and development (Tavakoli & Howard, 2012; Belcher, 2007; Borg, 2009, 2010; Borg & Liu, 2013; Ellis, 2010; Erlam, 2008; Nassaji, 2012; Wright, 2010). This interest is evidenced by the increasing number of research articles, conference themes and plenary speeches dedicated to this topic to promote teacher research engagement (Borg, 2011; Ellis, 2010; Kumaravadivelu, 2011). Despite all the research interest, and notwithstanding the repeated call for further research in this area (Borg, 2010; De Vries & Pieters, 2007; Korthagen, 2007; McIntyre, 2005), there is little evidence to demonstrate that TESOL teachers engage with research as part of their day-to-day practice or that adequate attention is paid to examining and analysing this limited engagement. Conscious of the divide between the two and cautious of the dangers associated with it, many researchers have highlighted the sensitivity of the divide by calling it "a perennial problem" (Korthagen, 2007: 303), defining it as "a damaging split between researchers and teachers" (Allwright, 2005: 27), and describing it as "already a significant and perhaps growing divide between research and pedagogy in our field" (Belcher, 2007: 397). The gap between research and practice is commonly acknowledged across different educational disciplines from science to language education (Biesta, 2007; Korthagen, 2007; Pieters & de Vries, 2007; Vanderlinde & van Braak, 2010), suggesting that the problem might be more widespread than documented and "may well be an endemic feature of the field of education" (Biesta, 2007: 295). While this is emerging rapidly as a global line of enquiry, there is neither sufficient empirical evidence nor adequate disciplinary effort to examine and highlight the underlying problems that help increase the divide (Biesta, 2007; Ellis, 2010; Korthagen, 2007; Nassaji, 2012). Korthagen (2007: 303) argues that, given the recurrent nature of the problem and with more and more teachers, parents and politicians voicing dissatisfaction with the divide, it is necessary "to restart an in-depth analysis of the relation between educational research and educational practice". Borg (2010: 421) argues that our understanding of teacher research engagement is limited, "with the levels of practical and empirical interest in this research area being minimal". Borg observes that the scope and depth of the available evidence on language teacher research clearly indicate that "teacher research remains largely a minority activity in the field of language teaching" (Borg, 2010: 391). The current paper responds to the call for further research in this area. By providing an in-depth analysis of teachers' views and beliefs about the relationship between research and practice, the paper is an attempt to enhance our understanding of teachers' perspectives on why they do or do not engage with research and what they suggest can be done to help improve the situation. --- Background Theory --- Teaching and Research Before discussing the relationship between teaching and research in more detail, and against a backdrop of the disagreement among researchers and teachers about what research is, it is necessary to provide a working definition for research.
Following Dornyei (2007), for the purpose of the current study research is defined as conducting one's own data-based investigation, which involves collecting and analysing the data, interpreting the findings and drawing conclusions from them. The interest in encouraging TESOL teachers to engage with research can be traced back to Chastain (1976) and Stern (1982). In educational research the underlying assumption is that teachers who are engaged with research in their practice deliver a better quality of teaching. Williams and Coles (2003) argue that the ability to seek out, evaluate and integrate appropriate evidence from research and innovation is an important aspect of effective development in professional practice. Borg (2010: 391) reports that "research engagement is commonly recommended to language teachers as a potentially productive form of professional development and a source of improved professional practice". Teacher research is also promoted as it is known to encourage teacher autonomy, improve teaching and learning processes and empower teachers in their professional capacity (Allwright, 2005; Borg, 2010; Burns, 1999; McKay, 2009). A brief overview of research in this area provides a list of factors contributing to the divide between research and practice. Pennycook (1994) interprets the divide in terms of incommensurability of discourses, and Wallace (1991) attributes it to researchers and practitioners being different people coming from different worlds. Freeman and Johnson (1998: 399) report that lack of a deep understanding and appreciation of teacher knowledge is a main issue, and argue that "research knowledge does not articulate easily and cogently into classroom practice". Non-collaborative school cultures, limited resources and limitations in teachers' skills and knowledge to do research are some of the other barriers reported in the literature (see Borg, 2010 for a detailed account). Analysing the existing divide between research and practice, Ellis (2010: 2) argues that the nexus between research and practice in second language education has changed over the past years since the field "has increasingly sought to establish itself as an academic discipline in its own right". Drawing on the literature in TESOL and Applied Linguistics, Ellis (2010) reports that there is no consensus about the relationship between research and teaching, and that the relationship remains a complex and multifaceted nexus of sometimes conflicting positions on whether or not research findings are applicable to teaching. In a recent article, Richards (2010) calls for a better understanding of what constitutes the nature of language teaching competence and performance and sets out a 10-item core dimensions framework as the agenda for gaining insight into the necessary skills and expertise in language education. An important dimension that can shed light on the competence-performance relationship, according to Richards, is 'theorizing from practice', i.e. "reflecting on our practices in order to better understand the nature of language teaching and learning and to arrive at explanations or hypotheses about them" (Richards, 2010: 121). Richards (2010) further argues that membership of a community of practice is a core dimension that can provide a rich opportunity for teachers' further professional engagement and development. Interestingly, Richards' labelling of the call as 'a somewhat ambitious agenda' (p.
120) suggests that achieving this understanding might be more challenging and formidable than is often perceived. In a study examining TESOL teachers' views on the relationship between teaching and research in England, Tavakoli and Howard (2012), reporting the findings of 60 questionnaires, claimed that, regardless of the context the teachers worked in or the amount of experience they had, the majority of TESOL teachers were not engaged with research and were sceptical about the practicality and relevance of research to their professional practice. It is necessary to note that while teachers in the context of this study, i.e. England, did not mention action research as a research activity they were engaged with, action research is sometimes reported as a popular research activity in other educational contexts (Burns, 2005; Richards, 2010). The findings of Tavakoli and Howard (2012) were confirmed by Nassaji's (2012) study examining 201 TESOL teachers' views in Canada and Turkey about the relationship between teaching and research. Another interesting finding emerging from both studies is that the teachers who had some research training in their studies, e.g. those who had done a Master's degree, had a more favourable attitude towards the relationship between research and practice. Stenhouse's Curriculum project (1975) was one of the first movements to bridge the divide between educational research and practice in mainstream education in the UK. In this project, Stenhouse introduced a new approach to mainstream teaching in which an active role for teachers in developing research and curriculum in their teaching was promoted. In TESOL, such efforts are more recent. Allwright's work on promoting Exploratory Practice (2003, 2005) and Burns' innovative work advocating action research (1999, 2005) have been influential initiatives to raise teacher awareness and to encourage teacher research engagement. Although promoting action research, i.e. research conducted by teachers to gain a better understanding of their practice and to improve teaching and learning, has attracted attention among teachers and gained currency among researchers, the findings of recent research (e.g. Nassaji, 2012; Tavakoli & Howard, 2012) suggest that it is still not widely practiced by teachers around the world. At an organisational level, TESOL Quarterly's commitment to 'publishing manuscripts that contribute to bridging theory and practice in our profession', and ELT Journal's mission to link 'everyday concerns of practitioners with insights gained from relevant academic disciplines', are examples of attempts to connect TESOL research and practice. Recent plenary speeches about the divide (Ellis, 2010; Kumaravadivelu, 2011) and major publications on language teacher research engagement (Borg, 2010; Ellis, 2010, 2013) are other strategies for linking the two. --- Efforts to Bridge the Divide The contribution of teacher education to the development of teacher research engagement is worth examining. Freeman and Johnson (1998) were among the first to suggest it was the responsibility of teacher education to link research to practice in second language education. Wright (2009) attributes a significant role to teacher education in defining and disseminating new ideas to teachers, and McKay (2009) considers introducing teachers to classroom research a challenge worth investigating.
Overall, while there is a degree of awareness about the usefulness of research knowledge for practice and its positive impact on it, there is insufficient evidence to indicate whether this awareness is transferred into action in teacher education and whether teacher education is effectively used as an opportunity to promote research (Borg, 2010; Kiely & Askham, 2012; Wright, 2009, 2010). --- TESOL Teacher Education TESOL teacher education in the UK can be divided into two levels of initial (pre-service) and further (in-service) teacher training programs. An initial TESOL qualification, e.g. CELTA, is a certificate-level qualification which has historically been a major point of entry to the TESOL profession in the UK and some other countries (Kiely & Askham, 2012). This trend has recently been changing, with an increasing number of employers requiring more advanced qualifications, e.g. a Diploma or an MA. The certificate-level teacher training programs are for graduates with little or no teaching experience (Cambridge English, 2013), and are typically intensive 4-week courses providing the skills, knowledge and hands-on teaching practice less experienced teachers need. The Diploma-level teacher training programs, e.g. DELTA, are designed for experienced teachers "to update their teaching knowledge and improve their practice" (Cambridge English, 2013). These usually span two years part-time and act as in-service training and/or professional development. Both types of programs draw on the principles of reflective teaching (Schön, 1983). The study reported here set out to look into TESOL teachers' views on the relationship between research and practice, to examine the potential factors they believe have contributed to the persistence of the divide, and to seek out solutions from the participants on how to bridge the divide. Of particular significance to the study is to find out if Wenger's framework for communities of practice (CoP) can help answer the following research questions. 1. What are TESOL teachers' views on the relationship between teaching and research? 2. What factors do they hold responsible for contributing to the divide between research and practice? 3. What do they suggest can be done to help bridge the divide? 4. What role do they consider for teacher education in promoting teacher research engagement? --- Analytic Framework: Wenger's (1998) Community of Practice In similar areas of research, Wenger's (1998) CoP has proved an effective and constructive conceptual framework that allows an in-depth insight to emerge from issues related to teachers' understanding, knowledge and learning in the context of their practice (Hasrati, 2005; Kiely & Askham, 2013; Payler & Locke, 2013; Yandell & Turvey, 2007). Following Wenger (1998, 2000), this study perceives CoPs as "groups of people who share a concern or a passion for something they do and learn how to do it better as they interact regularly" (Wenger, 2006: front page). In general, CoPs are known to work in a specific Domain, have a defined Community, and exercise a specific kind of Practice. In pursuing their interest and by engaging in a series of activities such as collaborations, discussions and information-sharing tasks, members of a CoP help each other, exchange experiences, develop ways of addressing and solving problems and build relationships.
The interplay between social competence (shared in the CoP) and personal experience (an individual's own ways of knowing) is known to result in learning and the further development of a shared competence (Wenger, 2000). The shared competence emerging from participation in the social context of the CoP helps distinguish members from non-members. Wenger (1998) points out that the coherence of a CoP relies on three defining elements: mutual engagement (having a common endeavour), a joint enterprise (being involved in a collective process of negotiation), and a shared repertoire (developing common resources). The concept of CoP has been critiqued by a number of researchers as being elusive and slippery, and often appropriated inconsistently in different studies (Barton & Tusting, 2005; Rock, 2005). Other researchers have argued that change, as an inherent property of a CoP, has been neither theorised nor clearly conceptualised in Wenger's framework (Barton & Hamilton, 2005; Barton & Tusting, 2005). The study reported here provides an opportunity to examine whether adopting CoP as an analytic framework would allow for a better understanding of teachers' views on the relationship between TESOL teaching and research. --- Methodology --- Participants The participants were 20 TESOL teachers teaching English in England at the time of the study. They were teaching EFL, ESOL and/or EAP courses in different organizations including university language centres, state-funded FE colleges and private language schools. To recruit the participants, a number of English language teaching institutions in England were contacted via email and their teachers were invited to take part in the study. The 20 participants who volunteered and took part in the interviews came from a range of different educational and professional backgrounds, and had varying training and teaching experiences. The majority of the participants had taught English internationally as well, which is a typical characteristic of the UK TESOL teacher population. Given that Tavakoli and Howard (2012) did not find a significant correlation between years of experience or context of teaching and teacher research engagement, these variables were not included in the current study. While the study assumes that the participants belong to different CoPs, the focus of the study is on teacher practitioners as members of the TESOL teachers' CoP. Table 1 presents some demographic information about the participants. --- INSERT TABLE 1 HERE --- Interviews Since the two most recent studies on this topic, i.e. Tavakoli and Howard (2012) and Nassaji (2012), had drawn on questionnaire data, a semi-structured interview was considered a methodologically more appropriate data collection tool that could make up for the limitations of previous research by providing a more open platform for the teachers to discuss their perspectives in more depth. Following Tavakoli and Howard (2012), who found that the concept of research was open to teachers' individual interpretations, the participants were informed of the working definition of research presented earlier in this paper (see Section 2.1). The face-to-face interviews were conducted in a place of convenience to the teachers, each lasting 30 to 45 minutes. The purpose of the study was explained to the participants and informed consent was sought before the data were collected. All but one of the interviewees agreed to the interviews being digitally recorded.
The interview questions were guided by previous research findings in this area. These questions can be divided into three sections. Drawing on the findings of Tavakoli and Howard (2012), the initial section of the interview aimed at investigating teachers' views on the relationship between teaching and research, the divide between the two and the main reasons for the persistence of the divide. Following Ellis (2010) and Nassaji (2012), the second section of the interview invited the teachers to provide suggestions for bridging the gap. Questions about the role of teacher education were included in the last section of the interview, as a gap in our understanding of this area has already been identified (Burns & Richards, 2009; Wright, 2009, 2010). --- Data Analysis The interviews were transcribed and word-processed before they were subjected to a thematic analysis (Creswell, 2007). The process involved three different stages. First, the transcripts were read and coded before a number of salient themes and patterns were identified. This then led to grouping the themes together where possible. In the second stage, in order to examine the applicability of Wenger's CoP framework, the emerging themes were compared with the different aspects and components of Wenger's CoP discussed in Section 3 of the Introduction. These themes were then put under categories of Wenger's analytic framework to find out if they could provide a response to the research questions. In the last stage, a colleague experienced in working with Wenger's CoP framework examined the data separately. Any points for discussion or disagreement between the two coders were reconsidered until agreement was achieved. --- Findings In the section below, the findings of the study are grouped together to respond to Research Questions 1 to 4 in Sections 4.1 to 4.4 respectively. These findings reflect the researcher's interpretation of teachers' views on the different aspects of teacher research engagement, and highlight their suggestions on what can be done to bridge the divide between teaching and research. --- Teachers' Views on the Relationship between Research and Practice: Interdependence of Learning, Practice and Identity Fundamental to Wenger's concept of CoP is the intimate relationship between learning, practice and identity. In the field of TESOL teacher education, it is widely accepted that learning is essentially linked to the social and cultural contexts in which it occurs (Faez & Alvero, 2012; Johnson, 2006, 2009; Miller, 2009) and that learning should be perceived as both a cognitive and a sociocultural process (Lantolf, 2000; Lantolf & Poehner, 2008; Nasir & Cooks, 2009). From a CoP perspective, learning mainly takes place through participation in social and cultural practices and activities (Lave & Wenger, 1991; Nasir & Cooks, 2009; Wenger, 2000), and is identified as a characteristic of practice and participation in the community of practitioners (Wenger, 1998). Members of a community learn from one another and from more experienced members of their CoP, and they change through the processes of interaction and learning. Identity in Wenger's framework is "a way of talking about how learning changes who we are" and how it creates "personal histories of becoming in the context of our communities" (1998: 5). The teachers in the current study frequently echoed Wenger's argument that "learning is not merely the acquisition of a body of knowledge, but a journey of the self" (2011).
To gain knowledge about their practice, teachers rely on experience and participate in the activities of their CoP, 'old-timers' helping 'newcomers' and enabling them to move from the periphery to legitimate membership of the CoP. --- Factors Contributing to the Divide: The Defining Elements of CoPs The data analysis suggests that the participants perceive teaching and research as two different CoPs and that membership of one may not only limit but sometimes exclude membership of the other. The analysis also implies that multi-membership in different professional CoPs has been a continuous challenge. T5: Researchers come from theoretical perspectives; I'm a teacher coming from sort of, well, from a teaching context, from a real teaching context.... I think um as long as the researcher hasn't been too long out of the classroom then you can rely on their research. Wenger (1998) argues that organizing themselves around some particular area of knowledge and activity gives members of a CoP a sense of joint enterprise and identity. The joint enterprise is therefore their collective negotiated response to their experiences and practices, and it creates a sense of mutual accountability within the community. The inherent differences between the two CoPs should, at least to some extent, be attributed to the three cohering features of a CoP, i.e. mutual engagement, joint enterprise and shared repertoire. T13: And having a dialog between researchers and teachers: so researchers perhaps speaking with teachers about their own interests and what the teachers are interested in and developing a conversation to bridge the gap. The pursuit of a joint enterprise, e.g. teaching English in the language classroom, over time creates resources for negotiating meaning, i.e. a shared repertoire. Teachers' shared repertoire includes ways of doing things, anecdotes and stories they exchange, resources available to them and conversations in staff common rooms. A sustained engagement in their practice enables teachers to interpret and make use of this shared repertoire. The different sets of repertoire teachers and researchers rely on may be another source of divergence between the two communities. --- T7: The staffroom is the best place for ideas, um I mean with all that experience why make things difficult for yourself (i.e. engage with research). --- Bridging the Gap: Bringing the Two Communities Together Of particular value to this study is Wenger's (1998) notion of 'boundaries'. While their main function is to separate different CoPs, boundaries come into the spotlight when a required type of learning motivates members to move from one CoP to another. The concept of boundaries does not imply that CoPs are impermeable or that they function in isolation. Rather, connections can be made between CoPs through the use of 'boundary encounters' such as meetings and conversations, collaborative tasks, and sharing the artefacts used by them (Wenger, 1998). Given that boundary encounters allow for importing practices and perspectives from one CoP to another, they have a central role in bringing change to the way a community defines its own identity and practice. T18: the main job (for the research community) then is to take research and to make it available to practitioners. It is starting the research from where the practitioners wanted. Fundamental to the success of boundary encounters is the role of brokering, "a process of introducing elements of one practice to another" (Wenger, 1998: 236).
Brokers, individuals (and also institutions) who straddle different CoPs, are agents that can facilitate interaction, negotiation and other exchanges between the two CoPs. The concept of potential brokers, those who can connect the two communities, appeared to be a key message from the participants. While teachers teaching at university language centres were sometimes suggested as potential individual brokers, the main brokering role was attributed to mediatory organizations such as the British Council and the UK's National Research and Development Centre (NRDC). T18: What NRDC did was to take research and to make it available to practitioners... by starting the research from where the practitioners want it. Those projects and those approaches were useful and successful. --- Role of Teacher Education The analysis of the data provides further evidence for a socio-cultural perspective on teacher learning and confirms the significant role of learning as participation in the context of teaching (Lave & Wenger, 1991). The teachers' views indicate that although they have found teacher education useful in providing them with the essential needs of classroom practice, they concede that it is the teaching experience itself that offers them the most useful and fruitful opportunity for learning. --- T8: Teacher training gives you the initial tools to go and teach but I think the experience you get in your first job is much much more than the CELTA would give you. While most teachers agreed that initial teacher training programs, e.g. CELTA, do not allow for a focus on research, the more experienced teachers argued that including research training at this stage would be pointless if not counter-productive, suggesting that introducing research to teacher training would only be beneficial at a more advanced stage in teachers' careers. T15: with CELTA (there is) very little (research) because CELTA is an initial teacher training of 4 weeks where people learn how to teach and the building blocks of that. And if you put research on top of that it's too much. With regard to how essential research was to teachers and their professional development, the teachers' views were divided. While some found it less relevant to their needs and not an essential requirement for becoming a professional teacher, others considered research central to teachers' professional practice. Overall, there was an emphasis on the role of research training in encouraging teacher research engagement. Teachers who had taught at university level were often more positive about the value of research and suggested that the university environment had been supportive of this positive attitude. Promoting action research, doing a research-oriented Master's degree and including a stronger research component in teacher education were other suggestions for bridging the divide. T16: so through a post-graduate, like a Masters degree you could sort of bridge the gap between research and practice, and that's perhaps how teachers have gone on to become researchers, I suppose.... it'd be through teacher trainers and director of studies that research can be passed to teachers. --- Discussion One of the key points the current study highlights is the complex relationship between teachers' views on teaching and research, their learning experiences and their identity as professional teachers. The analysis suggests that teacher identity forms and develops primarily through practising teaching and by interacting with other teachers in their CoP.
This finding is in line with Freeman and Johnson's observation that learning to teach is a long-term, complex developmental process that operates through participation in the social practices and contexts of L2 teaching (1998: 402). In contrast to Varghese (2006), this finding implies that, regardless of their individual expectations and personal histories, the teachers demonstrate a coherent concept of CoP in defining their identity in light of their teaching experience, knowledge and learning as participation. Despite acknowledging the usefulness of research as an underlying assumption, the teachers argue that it is learning as and through participation in the situated contexts of their CoP that gives them ownership of knowledge and establishes them as legitimate participants of the teaching CoP. In this respect, while it confirms Nassaji's (2012) result on teachers' lack of interest in research engagement as one of the reasons for the divide, this finding goes further to explain that teachers' reluctance may originate from their reliance on the knowledge that is owned by them as legitimate participants of the CoP. In line with the social constructivist view of teachers learning to teach in context (Johnson, 2006, 2009; McIntyre, 2005; Miller, 2009), the teachers in this study feel it is necessary to recognise their learning as situated social practice and to acknowledge and appreciate the different ways they construct and define knowledge. This is something that TESOL research should pay more attention to when studying the divide between research and practice. Answering the question of how teacher knowledge is translated into identity and in what ways it leads to ownership of knowledge lies beyond the scope of this paper. However, the data indicate that, while teacher research engagement is limited, teachers remain committed to the principles of Reflective Teaching (Schön, 1983; Wallace, 1991) and Exploratory Practice (Allwright, 2005). Whether it is possible to follow Clark (2005) in arguing that it is philosophy rather than social science that governs teaching practice is beyond the purpose of this study. What this paper can argue is that, while research engagement seems to have a restricted impact on teachers' practice, it is imperative to find out how principles of Reflective Teaching, usually introduced to teachers during pre-service teacher education, remain embedded in teachers' professional practice in many contexts (Borg, 2010; Burton, 2009; Kiely & Askham, 2012; Miller, 2003; Wright, 2010). To associate practice and community, the three dimensions of relation in the community, i.e. mutual engagement, shared repertoire and joint enterprise, should be strengthened. One way to investigate the divide is to find out why these dimensions in each CoP are diverging from those of the other. Williams and Coles' (2007) survey of 312 teachers in the UK found that informal discussions with colleagues, professional magazines and newspapers, and in-service teacher education are the three most common sources of teachers' new knowledge; this is an example of the limited shared repertoire between teachers and researchers. The concept of "barriers to engagement with and in research" is not new in the literature, with scholars such as Borg (2010) and Ellis (2010) listing key obstacles that prohibit teachers from conducting research.
Although the presence of these barriers cannot be denied and their impact on deepening the divide should not be underestimated, the underlying problem behind the limited teacher research engagement reported in the literature is more complex than the simple concept of barriers. In line with the findings of Flores (2001), the current study suggests that the impact of pre-service and initial teacher education in preparing teachers for research engagement is limited. It is also known that the role of teacher education in preparing teachers for research engagement has been minimally investigated (Faez & Alvero, 2012; Kiely & Askham, 2012; Miller, 2009; Wright, 2010). --- Concluding Remarks Employing Wenger's (1998) CoP framework in this study has offered an insight into the complex relationship between knowledge, learning experiences and identity, and has opened up a novel way of interpreting the divide in the light of the differences between the two CoPs. However, using this framework has underplayed the role of social forces at work in the creation of CoPs, e.g. the social force that imposes on researchers a research agenda distant from teachers' practical needs (Rock, 2005). Given the dynamic nature of a CoP, it is impossible to consider or evaluate it without taking into account how the world around a CoP influences it. In the current study, however, to achieve the research aims, CoPs are considered in isolation. It is necessary to note that this is a small-scale study drawing on a small set of data in England. Although many of its findings may endorse issues, dilemmas and problems previously reported in various contexts, the impact of local pedagogies (Kumaravadivelu, 2011) should not be underestimated. There are a number of important conclusions this paper finds it necessary to draw. First, the findings of this study strongly suggest that teachers' knowledge and experience, developing through practice in their CoP, should be acknowledged and valued more intensely by the research community. Research that is aimed at TESOL teachers should be informed by this knowledge and experience, and should be designed to address their needs and requirements. Second, there is a strong need for researchers and teachers to build joint communities and to engage in mutual activities that can bring together a research and a practical focus. In order to indicate their membership of these different but inter-connected CoPs and to help bridge the divide, teachers, researchers and mediatory communities, e.g. the British Council, should take a more active role in promoting collaborative research, running joint projects and holding shared academic and educational events. Richards (2010) refers to a number of successful projects of this kind delivered in Asian contexts. The question to ask is whether such projects can be used as a model to follow in other similar contexts. The final concluding remark is to highlight the important role of teacher education programmes in enhancing a research environment and in encouraging a research approach to teaching. Research evidence (e.g. Erlam, 2008; Wright, 2010) suggests that providing a more user-friendly approach to research, combined with a supportive research environment on teacher training programmes, would not only prepare teachers for better engagement with research but would also build confidence and lead to teacher empowerment. (2009) distinguishes between teachers' engagement in research and with research.
For the purpose of this study, as such a distinction has not been found necessary, the term 'engagement with research' is used consistently to represent both types of engagement. ii In the case of the only interviewee who did not agree to her voice being recorded, detailed notes were taken. --- Wenger, E. (1998). Communities of practice: Learning, meaning and identity. Cambridge: Cambridge University Press. Wenger, E. (2000). Communities of practice and social learning systems. Organization, 7(2).
In line with a growing interest in teacher research engagement in second language education, this article is an attempt to shed light on teachers' views on the relationship between teaching and research. The data comprise semi-structured interviews with 20 teachers in England, examining their views about the divide between research and practice in their field, the reasons for the persistence of the divide between the two and their suggestions on how to bridge it. Wenger's (1998) Community of Practice (CoP) is used as a conceptual framework to analyse and interpret the data. The analysis indicates that teacher experience, learning and ownership of knowledge emerging from participation in their CoP are key players in teachers' professional practice and in the development of teacher identity. The participants construe the divide in the light of the differences they perceive between teaching and research as two different CoPs, and attribute the divide to the limited mutual engagement, absence of a joint enterprise and lack of a shared repertoire between them. Boundary encounters, institutionalised brokering and a more research-oriented teacher education provision are some of the suggestions for bringing the two communities together.
Introduction Child abuse, neglect, and household dysfunction are collectively referred to as adverse childhood experiences (ACEs) and are associated with worse outcomes over a child's lifetime [1]. It is important to establish that although ACEs are not determined by a child's race, class, or gender, they are more prevalent among historically excluded populations in communities made vulnerable by poverty and scarce public resources [2,3]. Previous research findings have suggested the significant impacts of systemic inequality, as historically marginalized children, families, and communities are more likely to live in high-risk environments that compound ACEs [4,5]. While ACEs scholarship has generally emphasized the 10 traditional adverse childhood experiences, emerging studies have acknowledged the importance of ACEs related to environmental factors that may disproportionately affect marginalized children and families [2,6]. Subsequent studies have identified expanded ACEs related to environmental factors (i.e., neighborhood violence, homelessness, foster care, bullying, and racism); this research has: (1) advanced dialogue around the diversity of ACEs; (2) examined the relationship between ACEs and child demographics; and (3) demonstrated which populations of children are more likely to experience ACEs [2,7,8]. However, scholars have not critically examined how systemic inequality shapes lived realities to understand the relationship among high-risk environments, access to resources, and ACEs. In addition, researchers have not explored how intersectional experiences within high-risk environments may compound the effects of ACEs and additionally marginalize populations within and across ecological levels (i.e., individual, group, or community units of analysis). To account for children's diverse experiences of abuse, neglect, and household dysfunction at the individual, group, and community levels, we examined urban and rural environments in which ACEs occur. We utilized in-depth interviews with 81 health and social service providers in the state of Tennessee to understand how historically excluded populations-that is, low-income families, children exposed to opioids, and children of immigrants-access resources and experience place-based challenges that raise high-risk conditions. Interview participants were on the front lines of mitigating ACEs, playing a key role in helping families access vital resources and services [9]. Guided by a process-centered intersectionality framework [10][11][12], our grounded theory research design assumed there are various social, political, and economic inequities that perpetuate conditions of oppression among historically excluded children, families, and communities, which permitted a critical understanding of underlying issues that create high-risk environments [10,13,14]. We present the Intersectional Nature of ACEs Framework to showcase how environments shape high-risk conditions; link intersectional experiences of recognized and unrecognized individuals, groups, and populations; and have confounding effects related to ACEs. While quantitative, population-level studies can describe the existence of an ACE or multiple ACEs, our study identifies the underlying issues that construct high-risk environments and worsen ACEs for children, families, and communities.
--- Background We utilized the concept of intersectionality to guide our review of the ACEs literature and identify the extent to which empirical studies have included systemic inequality across ecological levels. Intersectionality is a theoretical framework embedded in research studies that seek to support a nuanced understanding of how various forms of experienced inequality interface with one another and exacerbate marginalization among historically excluded populations. This positionality challenges single-axis frameworks and supports the ability to understand within-group differences at the individual (micro), group (mezzo), or community (macro) level [15][16][17]. In reviewing the ACEs literature, we sought to understand how previous studies have operationalized intersectionality and the extent to which findings have considered additionally marginalized subpopulations within and across ecological levels. Our literature review identifies 20 studies that espoused an intersectional framework directly or indirectly to examine the ACEs phenomena among historically excluded populations. --- Expanded ACEs and Historically Excluded Populations To date, ACEs scholarship has identified the following experiences as forms of expanded ACEs: neighborhood violence, witnessing violence, bullying, poverty, homelessness, and foster care [8,[18][19][20][21][22][23]. Although studies investigating expanded ACEs have focused on how children interface with environments, most have not described upstream factors that raise risk for ACEs and contribute to experiences of co-occurring ACEs among children who experience the consequences of systemic inequality. For example, two studies that identified expanded ACEs utilized the differential exposure hypothesis to contextualize the examination of which groups or populations were more likely to be exposed to ACEs per gender, economic status, and/or race [6,24]. Recent studies have also linked newly identified ACEs with higher risks for negative outcomes, such as poverty, poor mental health, behavior problems, and risky health behaviors [7,8,18,21,22,25]. These and other studies have confirmed that historically excluded populations (e.g., Black, Indigenous, People of Color, also referred to as people of the global majority) experience additional challenges and are more likely to experience ACEs as a result of systemic inequality and how it shapes their identities and lived realities [5,24,[26][27][28][29]. While we acknowledge the importance of identifying which populations of children are at risk, we argue that it is critical to establish how systemic inequality within and across ecological levels shapes high-risk environments. --- Systemic Inequality and Co-Occurring ACEs Although the aforementioned studies built upon ACEs phenomena among historically excluded populations, they generally did not establish how these experiences are constructed by political and socioeconomic systems contributing to high-risk environments. Moreover, the vast majority of studies did not examine how political and socioeconomic factors contribute to experienced adversity among children. In fact, we only identified one study that considered the nuance of sociocultural factors that shaped high-risk environments associated with ACEs [5].
While it is helpful to know which populations need additional support to address ACEs and build resilience among children, it is even more important to know why higher-risk conditions exist and to address root causes of inequities that increase the risk of ACEs. ACEs scholars refer to the differential burden of ACEs as a co-occurring phenomenon, and this, too, is experienced at higher rates among historically marginalized populations [25,29]. The differential burden concept explains why certain groups may experience worse outcomes from ACEs linked to demographic characteristics or social identities [27,29]. Limited access to resources based on one's identity may also play a role in which children affected by ACEs are most likely to access treatment and support. The ability of stakeholders, clinicians, and policymakers to distinguish between demographics and the inequitable environments that raise high-risk conditions for communities made vulnerable is critical to mitigating deficit perspectives and facilitating comprehensive support for children who experience ACEs. For example, higher exposure to ACEs should not be linked to the status of being part of the global majority or belonging to a historically marginalized population; rather, if ACEs are a universal experience, high-risk conditions must be regarded as imposed-upon environments that compound ACEs and inflict additional harm on historically excluded children, families, and communities. --- Contributions to the Literature In our review of the literature, no study operationalized a process-centered intersectionality framework or fully discussed how an intersectional approach could advance the analysis of ACEs. The two studies that examined the intersection of ACEs and demographic factors did not expand upon how policies and systems raise the risk of ACEs and stigmatize populations experiencing a higher burden of ACEs; rather, the concept of intersectionality was used to contextualize the introduction of the topic and explain the intersection of factors connected with ACEs [6,30]. Similarly, a study on ACEs and wellness outcomes among Black men who have sex with men introduced the concept of the intersection of identities and dual experiences, but the authors did not consider how researchers might use an intersectional approach to expand upon the understanding of ACEs [29]. Our study contributes to the ACEs literature in several dimensions. First, our approach expands upon the differential exposure hypothesis by explaining the conditions historically marginalized populations are more likely to experience and, thereafter, bridges the topics of high-risk conditions and ACEs. Second, we expand upon the differential burden concept by explaining how policies and systems determine access to vital resources and services for children experiencing ACEs; access to resources and services is important in reducing the risk of intergenerational trauma, abuse, and household dysfunction among children and families [2,31]. We also build upon previous ACEs studies that have espoused an intersectional perspective by being the first to operationalize a process-centered intersectional framework. --- The Current Study The purpose of this grounded theory study was to examine how health and social service providers (N = 81) from rural and urban counties in Tennessee provided services to low-income families, children exposed to opioids, and children of immigrants.
Specifically, we explored two guiding research questions: (1) How do rural and urban environments shape high-risk conditions for children, families, and communities? (2) How do high-risk conditions compound ACEs and impede access to resources at individual, group, and community levels? Camacho and Henderson were researchers for the Policies for Action Research Hub study at Vanderbilt University funded by the Robert Wood Johnson Foundation. Camacho was a Policies for Action Research Hub postdoctoral fellow and was responsible for developing and leading the qualitative data collection and analysis effort. Henderson was a Senior Health Services Research Analyst who contributed to quantitative and qualitative data collection, management, and analysis. This study is part of a larger, transdisciplinary study that seeks to identify policies and practices in the state of Tennessee that can improve health and education outcomes among the state's most vulnerable children. The research team comprised nine investigators from the Vanderbilt University School of Medicine's Department of Health Policy and Peabody College of Education and Human Development. Collectively, researchers utilized quantitative and qualitative methods to understand the complex relationships that exist between state-level policies and access to services that either facilitate or impede the ability of children and their families to receive vital health and education services. --- Materials and Methods --- Recruitment and Sample The research team constructed a sampling frame to recruit individuals working in state agencies, local health departments, safety-net clinics, schools, and nonprofits. The research team utilized purposive sampling to prioritize organizations that served marginalized families and were situated in different geographic areas that were designated as economically distressed counties, had higher rates of neonatal abstinence syndrome (NAS), and had higher percentages of Latinx/e populations. Thereafter, random sampling was utilized to select participants from each type of organization to ensure diverse organizational representation. The team purposely selected the two counties with the highest incidences of NAS and the highest rates of Latinx/e children of immigrants. Administrative data were used to stratify the sample based on region designation (i.e., west, middle, east), as defined by the Tennessee Department of Education, and urbanicity (i.e., town, city, suburb, rural). In addition, distances were calculated using the percentage of marginalized populations within each county in order to select the counties closest to the average Mahalanobis score (a measure of the distance between a point and a distribution). After finalizing the sampling frame, Camacho contacted interview participants via email and phone to request an interview either in person or over the phone. Each in-depth interview was 60 to 90 minutes in length, and we attempted to establish a supportive interview environment by acknowledging the significance of providing support to historically marginalized populations. We conducted 47 interviews in 26 counties (of the 95 in Tennessee) in all three regions of the state; nine of these counties were designated as economically distressed. Most interviews were in-depth, one-on-one interviews (34) conducted by two or three members of the research team.
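As a concrete illustration of the Mahalanobis-based selection step mentioned above, the sketch below ranks a handful of counties by how close their demographic profile sits to an average reference profile. It is a minimal sketch under assumed data: the county labels, the two variables, and all values are hypothetical, and scipy's mahalanobis helper stands in for whatever tooling the research team actually used.

```python
import numpy as np
from scipy.spatial.distance import mahalanobis

# Hypothetical county profiles: [% economically distressed households, % Latinx/e residents]
counties = {
    "County A": np.array([32.0, 4.5]),
    "County B": np.array([18.0, 12.0]),
    "County C": np.array([41.0, 2.0]),
    "County D": np.array([25.0, 7.5]),
}

profiles = np.vstack(list(counties.values()))
reference = profiles.mean(axis=0)           # statewide "average" profile (assumed)
vi = np.linalg.inv(np.cov(profiles.T))      # inverse covariance matrix required by mahalanobis()

# Rank counties by how close they sit to the reference profile.
distances = {name: mahalanobis(vec, reference, vi) for name, vec in counties.items()}
for name, d in sorted(distances.items(), key=lambda kv: kv[1]):
    print(f"{name}: Mahalanobis distance = {d:.2f}")
```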
When possible, we encouraged interview participants to invite work colleagues to be part of the in-depth interview process to better understand organizational policies and practices from various perspectives; subsequently, we conducted a total of 13 focus groups with two to nine interview participants per focus group. Interview participants worked in the following organizations, and we include the number of interviews conducted with each organization: community advocacy organizations (n = 5), community anti-drug coalitions (n = 8), community mental health centers (n = 3), coordinated school health directors (n = 15), county health departments (n = 4), federally qualified health centers (n = 3), Medicaid (n = 1), neighborhood health centers (n = 3), opioid treatment programs (n = 1), school-based health centers (n = 2), and Tennessee early intervention systems (n = 1). A total of 81 health and social service providers participated in the interviews. The study was approved by the Institutional Review Board at Vanderbilt University, and informed consent was obtained from all interview participants. --- Data Analysis We employed a grounded theory methodology to examine data corresponding to our research questions and systematically utilized comparative analysis to construct a theory from the dataset [32]. Modeling the intersectional grounded theory research design [33], we adopted a process-centered intersectional approach to guide our understanding of within-group differences (mezzo) and help identify how inequality functions within structured mechanisms. As developed by McCall (2005) and Davis (2008), an intersectional process-centered method does not limit the intersections of experiences to individuals (micro) but suggests that group intersectional analysis (mezzo and macro) reveals how systemic inequality operates [11,12,14]. Therefore, our data analysis process differentiated between personal and collective experiences to better understand how external factors are placed upon children, families, and communities [34]. In line with this approach, we conceptualized three waves of data analysis whereby the first wave utilized the interview guide to develop a priori codes, the second used our theoretical framework to identify emergent constructs pertaining to systemic inequality, and the third catalogued systems and processes that prohibited access to resources and services across ecological levels. All interviews were transcribed verbatim, and we used NVivo (version 12, produced by QSR International, London, UK), a qualitative analysis software program, to organize the qualitative data and codebook for the data analysis process. We began the first wave of data analysis by applying Boyatzis's (1998) categorical analysis and derived a priori themes from the initial interview protocol [35]. The following themes helped frame the analysis: (1) information about the organization and role of the service provider; (2) current public assistance policies and support for vulnerable populations; (3) health and mental health resources for immigrant populations; (4) barriers to service awareness and strategies implemented to address systematic barriers; (5) prenatal and postnatal support for opioid users; (6) neonatal abstinence syndrome and treatment; (7) school-based healthcare resources; and (8) prohibitive policies, systems, and structures.
In particular, sections in the interview protocol provided interview participants with the opportunity to discuss how they respond to the changing needs of historically marginalized families, regardless of organizational type, so that we could uniformly gauge the capacity of organizations to provide important services and resources during the data analysis. Thereafter, deductive codes emerged with respect to each category and our intersectional process-centered theoretical framework. This thematic analysis produced additional codes that expanded across the categories, including: (1) the imposed-upon environments that historically marginalized families have to navigate at micro, mezzo, and macro levels; (2) how inequality perpetuates intersectional marginality, prohibiting access to services and/or compounding ACEs; (3) identified and non-identified subpopulations as a consequence of inequality; (4) the different forms of burdens that communities made vulnerable have to navigate and how inequality perpetuates burdens; and (5) the compounding effects of high-risk conditions related to ACEs. Since process-centered intersectionality emphasizes the examination of recognized and unrecognized populations, our data analysis included identifying subpopulations that are additionally impacted by high-risk environments and not commonly accounted for in research studies. These identities extend beyond general demographic information such as race, gender, or family structure. As we engaged in the coding process during the first and second waves of the data analysis, we catalogued systems and processes that prohibited access to resources and support across ecological levels and according to place. This process constituted a third wave of data analysis, and we produced a master document that differentiated the types of prohibitive policies, systems, and processes by type of organization and in relation to lack of monetary resources, administrative burdens, and the need for new programs and sources of support. Our codebook was piloted three times by Camacho and Henderson, using the same five interviews. Codes were modified until there was 90% agreement between the researchers when coding a sample of responses. Upon establishing intercoder agreement, each interview was coded twice. True to the study design, diverse and conflicting data were regarded as indicative of the complexity of the phenomena. This approach assumed that the findings were not contradictory but, rather, multifaceted. Altogether, the three waves of data analysis were utilized to answer the guiding research questions, and our grounded theory approach ensured that the development of our framework was anchored in the dataset post data analysis. --- Limitations Given the extent to which process-centered intersectionality recognizes and elevates experiential knowledge, the lack of participation from historically excluded populations who cannot access health and social services is a methodological limitation. By focusing on the experiences of health and social service providers, this study elevates the experiences of a more privileged population. Therefore, findings from this study are not meant to discount the real and lived experiences of historically marginalized populations. Rather, the experiences of service providers are contextualized within our analysis of power per the critical framework employed.
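The 90% intercoder-agreement criterion described above is most simply operationalized as percent agreement, often reported alongside a chance-corrected statistic such as Cohen's kappa. The snippet below is a hypothetical illustration of both computations on invented coding decisions for ten excerpts; the code labels and values are made up and are not taken from the study's codebook.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical codes assigned by two coders to the same ten interview excerpts.
coder_1 = ["barrier", "barrier", "resource", "place", "barrier",
           "resource", "place", "barrier", "resource", "place"]
coder_2 = ["barrier", "resource", "resource", "place", "barrier",
           "resource", "place", "barrier", "resource", "place"]

# Simple percent agreement (the 90% threshold mentioned in the text).
agreement = sum(a == b for a, b in zip(coder_1, coder_2)) / len(coder_1)
print(f"Percent agreement: {agreement:.0%}")

# Chance-corrected agreement, often reported alongside percent agreement.
print(f"Cohen's kappa: {cohen_kappa_score(coder_1, coder_2):.2f}")
```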
--- Results Overall, the interviewees identified several factors that shape high-risk environments and compound exposure to ACEs within their service type and community context. Interviewees also described limited access to resources and support due to policy constraints at the local, state, and federal levels, which further compounded negative outcomes among children in high-risk environments. --- Salience of Place: Rural, Urban, and Economic Characteristics Across Tennessee, health and education service providers spoke in depth about the salience of place, that is, the ways that the rural or urban character and economic status of a county bring forth unique challenges that raise high-risk conditions for the populations served. While both urban and rural communities experience different place-based challenges, in Tennessee, the act of living in a rural community poses additional challenges related to limited socioeconomic opportunities (i.e., employment and living wages) and mobility (i.e., distance between resources and lack of public transit). For many of the interviewees, namely the majority of those who served rural communities with higher poverty levels, poverty brought forth other forms of adversity that either (a) shaped higher-risk environments, which subsequently increased the risk of experiencing ACEs, or (b) presented additional challenges when children and families needed to access resources and support. To illustrate differences between factors that shape higher-risk environments and the ways in which place can prohibit access to resources and support, we first present place-based challenges linked to poverty followed by related ACEs that may result due to higher-risk environments. Referenced factors that shape higher-risk environments included: food deserts, an insufficient number of affordable housing programs due to lower population density, lack of public transit infrastructure, a limited number of healthcare and hospital services, workforce recruitment and retention issues for service providers, a limited number of translation services for non-English speaking people, and an insufficient number of beds at opioid treatment centers. These experienced place-based conditions meant individuals, families, and communities were more likely to experience food, housing, and transit insecurity across ecological levels, as well as to have unmet mental and physical health needs. The relationship between higher-risk environments and adverse childhood experiences translated into increased risk of experiencing ACE(s), as well as limited access to resources and support. For example, families in rural or sparsely populated counties lacked access to public transit in under-resourced towns, which limited their ability to travel to receive health and social services. According to service providers, lack of resources and access to services contributed to intergenerational cycles of poverty, addiction, and other household-level crises that negatively affected children. Consequently, place was an important factor that often contributed to risk of exposure to ACEs, root causes of poverty, and the accessibility of resources and supports for children and families. The significance of place and factors that shape higher-risk environments and limited access to resources are illustrated in the following interview excerpts: "So, we didn't have a social worker. And the reason that the social worker is so important is, we are in a rural area. Our poverty rate here in [County] is 43.7%. So, I have a lot of students who live in isolation.
We have a lot of students that are in transit all the time. I guess they would technically fit under homelessness because they're living with someone else, they're here, there, they're really hard-you know, those are the kids that are truant. Those are the kids who have health needs. So, today, this afternoon, I'll be talking with the department of children's services about continuing that funding for the social worker.... So, this is something that is needed, because our students who are in poverty, as you know, they're about seven times more likely to have mental issues or be living in a home where someone has mental issues. And that connection between the classroom and that student's parents, the caregiver, is almost nonexistent. We only have about 60% of the people here who have internet, and then they can't afford a phone a lot of times, or if they do afford the phone, they can't keep it on. Yeah, so the social worker has been able to reach out and go to the home, knock on the door, and say, "Hey, I'm from the school, you know, what do you need. How can we help you and how can you help us, you know, to better educate your child?" It has been a wonderful godsend having her to be able to reach out." (Coordinated school health director in a rural, economically distressed county) "So, we have 50 kids that are not being seen at least on school-based therapy. We try to find them places outside the school. See, here's the issue. We need the school-based therapy. And I'm not speaking just for my school. I'm talking about school in general, because (a) there's a transportation issue, especially in your high-poverty school districts, and (b), if parents have a car, they're at work, and they don't have-you know, these low-paying jobs do not offer sick days and, you know, time off and all that kind of stuff. So, parents are-cannot really take off and take the child to therapy, so-at least our Medicaid people are in that boat. And there are others, you know, insurance folks are in the same boat. I mean, we're seeing insured kids at the school, too, but Medicaid kids are top priority. So that's-we have got to have more focus on school-based therapy." (Coordinated school health director in a city-adjacent county) From the perspective of service providers, the less competitive wages in rural communities negatively affected their respective organization's ability to hire and/or retain top-qualified health and education service providers in the most high-need, high-risk communities. A rural community may or may not have a hospital or urgent-care services, and schools are often the only regular healthcare service that a child receives, especially when the child does not have a pediatric home; meaning, the child's family or caretaker has not established a primary physician for the child due to an inaccessible healthcare infrastructure per the nature of rurality and/or the lack of public transit. "The other thing that is-that we have a need for is more mental health inside the schools, and in an impoverished area like this, nobody wants to come here. We have been through five school-based mental health counselors in the last 3 years. We have a partnership with a local mental health agency, and they cannot keep someone employed inside this school. These people are getting money to go elsewhere and, you know, work in better places for more money. So, you know, it's really impacting the kids, because we also have a very high suicide rate. For example, you know, our youngest one is 9 years old." 
(Coordinated school health director in a rural, economically distressed county). --- Salience of Place: Sociopolitical Context and the Culture of Care Given the limitations imposed by rurality, we found that place also largely influenced how the culture of care was organized. Health and education social service providers acknowledged and understood the key roles they played in making services accessible to their respective communities and oftentimes referenced their longstanding commitment and role in being a social service provider. For example, they described how they cultivated relationships with public and governmental organizations in their community and how they utilized those relationships at times to broker favors for the populations they served. While most social service providers utilized their capital to support historically excluded populations, service providers at times revealed their political beliefs and/or understood that providing care was not an apolitical process. Accordingly, service providers can provide support, to a certain extent, at their own discretion. Their decision to provide support is informed by their personally held beliefs, values, and who they deem to be deserving of such help. For example, statements from interviewees, such as "We take care of our own" and "We know everyone," were illustrative of their close-knit communities and an internal network that is assumed to be accessible by anyone in the community. However, given the sociopolitical nature of insider/outsider dynamics and the ways service providers rely on support from faith-based communities, access to care can be determined by the social capital an individual, group, or population espouses, in addition to whether they possess membership in certain community groups. Below is an excerpt from an educator and health service provider on how he would navigate potential challenges when serving immigrant communities: "One of the first things I would do is call our county health department over there, which we have developed over 12 years, a very close relationship, and just like this situation here, they will give me some advice... so then the administrator will take it to their PTA or PTO to try to get some help... well, let me say we don't turn anybody over to ICE. We do not send anything to them. We're going to deal with the child and the family and so what we'll do is work through translators and so forth, we do have a-we have a person who works part-time here in our offices that worked for the county many years and has many connections in the county. They will help that father try to find a job, if needed, find somewhere to live, which is a problem in our county is housing, but they will try to find them a place to live temporarily so that that parent can actually start making some money, and then we'll monitor them to-you know, they'll do what they need to do as far as immunizations for the child, making sure they're in school all the time and so forth. We'll try to help them out as much as we can, if somebody else turns them in, we can't help that, but we will not let-We will not let-We will not let the federal government on our campuses to pick people up as their getting their child or dropping them off and that sort of thing... I had to chase them [ICE] off one of our primary's campuses last year."
(Coordinated School Health Director in rural county) Additionally, changing populations across the state present challenges to families accessing services who oftentimes rely on service providers to assist them in navigating administrative requirements [9,36,37]. Population changes have also resulted in increased service needs for certain families; these changes include short-and long-term consequences of drug crises, the increasing number of grandparents and great-grandparents who serve as primary caretakers of their children, and a growing immigrant-origin population, particularly in predominantly White communities. Service providers in the study shared examples of complex situations they had to navigate, often because of the changing population in their community, that required them to provide additional support such as financial assistance or overcoming language barriers. A service provider from a federally qualified health center in an urban county shared the following: "We are able to do so little in those [high-risk] circumstances because on that, on top of that, it's not uncommon for the dad to be suicidal or there's someone in the home that is abusing alcohol or dad-and we've had this happen-dad is HIV-positive, and we find out the baby is HIV-positive. The mom is not there, so we don't know. But we've had-now what do you do? And then they don't have a place to stay. So, I'm going to just add that more to you because this is-this is our every day. We literally have-we've had-in our clinic 2 weeks in a row where the mother-was the father there? The father was okay. The mother was HIV-positive, and at least two out of the three kids were HIV-positive. And they didn't even know. And we are like-and you know, they don't speak English, so I'm just trying to see-this is our normal. That's our every day." --- Salience of Place: Policies That Inhibit Access to Resources and Support Health and education service providers understand that policies and systems can significantly influence access to resources for families, particularly those facing additional barriers (e.g., income status or degree of "belonging" within a community). Interviewees described both having insufficient monetary funds to meet the growing needs of the populations they served and working to support marginalized populations that did not have access to resources or economic structures needed to break through various types of poverty cycles due to stringent programs and policies. Interviewees referenced several stateand federal-level policies in connection with barriers that service providers and families navigate across a variety of care settings and county types. In the following illustrative example, an interviewee describes how state-level policies around resource allocation had significantly impacted their ability to meet the needs of children and families: "But we have a high rate of suicide and mental illness in the region, and I feel like that money should be allocated to areas that are in most need. But what I'm seeing a lot of times is, "Oh, we're going to give it to the bigger places," and what you have there is places that have more money, they have more resources, and then of course your impoverished areas, your small rural areas where nobody wants to come, we can't even afford to hire anybody at this time because the money has been given to bigger places.... We need to establish funding that is more reoccurring to the district, and every district, every district on that. 
Last year, I was able to secure in-kind and grant funding for our district, and that is a huge help to us, especially when, you know, you're in a really small district, and we don't get a lot of funding anyway, especially when it's based on [the state education funding formula]. They're just not going to give it to us. And so, our kids-our kids do without. And probably I would think our kids have more of a need than, you know, some of the bigger schools, you know, get [funding]. You know, I know they have needs, but I doubt that their poverty rate and their mental health issues and their opioid issues here, it's just not the same as it is here. I mean, we are in a crisis here." (Coordinated school health director in a rural, economically distressed county) Additional policies and systems that contributed to economic inequality and poverty among families with whom the interviewed service providers worked included: nonexpanded Medicaid, non-livable minimum wage, increasing cuts to federal and state programs meant to support low-resource households (i.e., social safety net programs such as Supplemental Nutritional Assistance Program, Women Infants and Children, etc.), and decreasing or stagnant investment in health and education programs. These economic disparities were further exacerbated in rural counties and between rural counties due to non-comprehensive measures of poverty, non-equitable investments in employment opportunities across counties, and lack of affordable or physically accessible childcare options depending on where one lives. Interviewees discussed several examples of how these measurements of poverty and other county metrics used by the state disadvantaged their ability to be prioritized for additional resources due to low population density, among other factors. Such economic-based dynamics are especially detrimental to low-income and working poor families. --- Intergenerational Experiences of Adversity within Communities The majority of interviewees spoke in depth about the interwoven place-based factors that created high-risk conditions, as well as the intergenerational nature of ACEs and forms of adversity experienced by a high proportion of families within a community. In their descriptions, service providers often explained how environmental factors influenced family-level factors, family structure, and, subsequently, a child's risk of ACEs. What follows are two illustrative quotes from service providers who link factors that shape highrisk conditions with their home life and respective ACEs linked to high-risk conditions. To provide additional context for the first excerpt, service providers from this particular drug coalition in a rural, economically distressed county described how high-poverty rates in their county had been the status quo since the 1970s due to the shutdown of the coal mining industry in their geographic area. For many generations, the majority of people living in their area did not have access to many full-time employment opportunities, jobs with adequate salaries, or the ability to develop employment skills. High-risk conditions for individuals, families, and their communities worsened due to an exponential number of pill mills that contributed to the opioid crisis before the government recognized the crisis. The service provider then proceeded to link these factors with various forms of adverse childhood experiences: "There is an actual poverty rate, a 27.7%... 
but [what] that is saying [is accepting poverty rates]-which makes me so angry because we have said to these students, to this group of youth, "Hey, what are we going to do?" That it is okay. And it's not. I mean, now try to tell those kids that they have more worth and value than that, you know? They're living with drug-infested homes. They're living with all of the problems. I mean, it's not just mothers and fathers. It's their grandmothers and grandfathers that are doing this. I had a young man tell me the other day that, you know, he was sitting at my feet, and he said-because he calls me grandmahe said, "I have watched my grandma take [motions injecting arm]-you know, tie off and shoot up in front of me and then she would pass out." And he said that was so scary. And he said what was even scarier, when [he] had to spend the night and all the roaches in the house. You know, this is the reality of what these kids are really living with. And the principal at [the local high school] told me at the very beginning of this, she said, "Our students can't come in here and worry about a chemistry test or, you know, what's going on in high school when they're more concerned about am I going to have food? When I get out at night, who's going to pick me up from school? Will I be allowed to ride that bus? You know, are the things going to be taken care of for me?" And so, they have no worth and value, so they step right into the paths of their parents. They're doing-making the same mistakes. This is generational mistakes in this community." (Director of a community anti-drug coalition in a rural, economically distressed county) The above illustrative quote references intergenerational drug abuse in the home accompanied by uninhabitable living conditions that present immediate health risks for children and families. The excerpt also references experienced food insecurity, transit insecurity, and an inability to rely on basic needs being met among children living in the community. According to service providers, the identified ACEs are experienced amid the consequences of social, political, and economic contexts that shape the salience of place. To showcase how the salience of place impacts intergenerational family histories, we present the following excerpt from a staff member who was responsible for working with
youth within a drug coalition in a rural, economically distressed county: "I had a young man who was brave enough to come down and I had all this [ACEs] logic model and all of my curriculum all set up just so perfectly, and he came down, he said, "I grasp the concept of what you're trying to do here, and it's good," he said, "but you're missing the mark." And he began to really tell me, "You know, when you live in domestic violence, when you live in abuse-an abused home, and you're-you become a bully, and you-you know, that deals with your mental health. That starts, you know, all your mental health issues that are going on during this, you turn to drugs and alcohol. That's how we self-medicate."... He took the black pen and really started writing, "This leads to homelessness." You know, he's a foster child. He came to [this] county because he was in foster care. This is-it just makes so much sense. If you can help them to understand these issues, what led us there... it helps you understand that we don't have to go there.... Living in these issues, when you live like that, it becomes your comfort zone, even if you don't like it. You become comfortable in this crazy, you know, wacky environment." (Staff member of a community anti-drug coalition in a rural, economically distressed county) To provide additional context for the above quote, the staff person had described how this child had experienced additional challenges as a foster youth who had moved from another county without a "close to kin" foster parent. In this case, the child was not just a foster youth, but a child who resided in a physically isolated geographic area, without familial mentorship to meet basic child development needs, and with debilitated self-esteem as a resident of an economically distressed, rural county. The various identities referenced by the child are as follows: unhoused (homeless), victim of domestic violence, compromised mental health status, substance use disorder, and intergenerational unhoused status. This quote highlights how high-risk conditions permeate family histories and impact generations. --- Recognizing Additional Subpopulations and Identities Guided by process-centered intersectionality, we identified subpopulations (see Appendix A) not commonly accounted for in research studies.
Service providers identified these subpopulations according to their social and economic standing, place of origin, geographic location, family history, and experienced high-risk conditions. Our data analysis excavated 119 identities, either referenced directly by service providers or derived from the qualitative data analysis. While some identities are broadly recognized (i.e., race, gender, age, family structure), we compiled a list of additional identities that may contribute to one's likelihood of exposure to ACEs or one's likelihood of experiencing barriers when accessing services. From that point of reference, we developed the Intersectional Nature of ACEs Framework, illustrated in Figure 1. The Intersectional Nature of ACEs Framework illustrates that ACEs compounded by high-risk conditions are first and foremost undergirded by the consequences of social, political, and economic contexts, which in turn shape the salience of place. The salience of place is not experienced during a particular moment; rather, the relationship between policies at federal, state, and local levels and the demographic and economic composition of place determines access to resources. This in turn shapes a dynamic culture of care that determines experienced access to resources among historically excluded populations and subpopulations per their espoused sociopolitical capital. Ultimately, intersectional identities across ecological levels are a result of the dynamism of high-risk conditions, which can be traced back to systemic inequality. In this way, the consequences of political, social, and economic contexts co-construct the salience of place and intersectional experiences across ecological levels. The quotations and accompanying descriptions in Table 1 exemplify the intersection among ACEs, salience of place, and high-risk environments, informed by the Intersectional Nature of ACEs Framework. This framework builds upon previous ACEs frameworks, highlighting underlying and often upstream factors that may connect ACEs within the life of a child, a family unit, or a community. Operationalizing an intersectional lens in the study of ACEs allows for a non-cumulative measure of adversity that is affected not only by how many ACEs one experiences, but also by the intersectional identities one can possess, thereby examining the compounding effects of ACEs. Table 1 includes examples of ACEs experienced by children within various place-based contexts; these contexts have varying risk levels of ACEs and are also affected by broader policies and systems that affect marginality. Thus, the Intersectional Nature of ACEs Framework emphasizes the multiple levels of risk factors that affect exposure and access to resources, focusing on high-risk environments and upstream factors to expand upon previous approaches that primarily emphasize family and individual-level factors. Figure 1. The Intersectional Nature of ACEs Framework. Note. (a) Subpopulations experience high-risk conditions as a result of social, political, and economic contexts at individual, group, and community levels due to systemic inequality.
Subpopulations identified in our interview data did not comprise an exhaustive list of place-based determinants. To see the comprehensive list of place-based determinants derived from this study, which inform the relationship between the salience of place and intersectional experiences, see Appendix A. (b) Place-based contexts can determine high-risk conditions and are constructed by the physical nature of place, as well as governing policies, systems, and processes that determine the access and availability of resources, programs, and services. However, access to resources, programs, and services is additionally materialized by the culture of care and how individuals, groups, and communities are recognized. Community characteristics can include the physical location of resources, transportation access, and changes in population and place-based crises that compound high-risk conditions (e.g., the opioid crisis). (c) Access to social services and resources mitigates factors that construct high-risk conditions, which in turn lessens ACEs. Table 1 (excerpt; categories referenced: neglect, economic recession). Illustrative quote: "Well, we were kind of used to recession and poverty because it started back in the '70s, okay? So, we was kind of used to that, but we didn't really know how to evaluate and identify because we had to get ourselves trained, and that's why it was so good when the coalition concept come in and really showed us and trained us on how to evaluate our population, and then everybody went through a decline economically for a long time. We went from where you could not get a job.... The unemployment rate at one time was up to 26%. We had the highest unemployment rate in the state. Five years in a row. Five. In the mid-2000s. Because there wasn't a lot of opportunity here, okay? The factories wasn't-weren't-we're a manufacturer type population. We're not a highly skilled labor population, okay? So, excuse me, we went through that, then we seen the drug epidemic starting, and really, I mean, it started as a means of sustainability, people selling their medications to pay their electric bill, and a self-coping mechanism. People were using it because they were depressed. Self-medicate. So that population and the opioid population, along with the country, just became an epidemic here."
Collectively, these findings illustrate a more complex understanding of ACEs and of how service providers experience the context of their community and/or their capacity to support these families. Identifying these high-risk conditions highlights the need to respond to ACEs not only on an individual level, but also on family, community, and state levels. --- Discussion Inequitable environments are powerful forces that impose additional identities upon historically excluded populations. Children, families, and communities living in high-risk environments who are experiencing ACEs may have difficulty accessing resources due to the intersecting identities they possess. In this study, the high-risk conditions described by service providers across Tennessee illuminate the environmental factors contributing to co-occurring ACEs and challenge scholars and practitioners to reconceptualize the way programs and policies support these families. While original ACEs research focused on adverse childhood experiences individually as events to examine using a cumulative measure, we encourage program administrators to focus on high-risk environments where multiple ACEs may be connected by underlying mechanisms; we also recommend that public administration officials focus on policy solutions that allow families to break out of generational poverty that may contribute to the risk of experiencing ACEs. Inability to access services while living in poverty may specifically contribute to neglect, maltreatment, and foster care. The opioid crisis in Tennessee, for example, and subsequent high-risk environments may facilitate the co-occurring ACEs of substance abuse in the household, mental illness in the household, various types of abuse, foster care, death of a family member, and neglect. Evidently, health and education service providers must consider the basic needs (e.g., adequate food and housing) of historically excluded populations alongside their specific services. Program administrators should prioritize funding programs that allow providers to care for families and children comprehensively, with additional training on responding to a high-risk situation holistically instead of only assessing a child's well-being with a traditional ACEs screening tool. Considering the Intersectional Nature of ACEs Framework that emerged from this study, we emphasize the importance of developing health and education resources and services with practices that maximize safety nets. First, when possible, service providers should incorporate cradle-to-grave (lifelong) support and programming. As evidenced in this study, the challenges that historically excluded populations must navigate are many, and it will be impossible for them to navigate complex systems without major reform of current processes. To offset compounding effects from high-risk environments, service providers should redevelop current programs and services to meet the needs of diverse populations, regardless of the stage in life at which services are accessed. This safety-net approach will meet historically marginalized populations within their experienced reality, as well as mitigate the consequences of inequitable environments that ultimately rob people of their agency, human dignity, and ability to realize their full selves. Second, given that ACEs may be rooted in intergenerational high-risk conditions, service providers should assume multigenerational support as a form of safety net.
This approach will enable services and resources to be developed for different-aged populations with the understanding that the objective is to prevent younger generations from inheriting high-risk conditions. In the interviews, the multigenerational approach was described as vital by service providers as well as by ACEs researchers and providers who had developed programs serving multiple family members together in high-risk situations, such as addiction and parent history of ACEs [38,39]. Situated at the frontlines, service providers can mitigate ACEs and espouse tremendous social capital; they have the agency to either discriminate or use their knowledge to help historically marginalized populations navigate complex social service systems. We strongly advise a higher-level focus on improving policies and systems that help families break out of generational poverty and intergenerational cycles of ACEs (mobilizing for under-insured and uninsured populations, exposing the depths of poverty in their communities, educating policymakers, working across organizations to increase protections for the working poor, etc.) so that historically excluded populations can think beyond their immediate survival and work toward realizing intergenerational mobility. Previous studies have expanded upon the 10 traditional ACEs but have not necessarily contextualized high-risk, inequitable environments and multidimensional elements of place that intersect with ACEs. With the Intersectional Nature of ACEs Framework in mind, we present the term repression-ACEs to differentiate between ACEs that are the consequences of social inequities, such as neighborhood violence and racism, and ACEs that are inflicted directly by a person. The term repression-ACEs signals the ways in which ACEs are constructed by higher-risk environments that are the consequences of social, political, and economic contexts that shape the salience of place. We hope this term will underscore the power of imposed-upon environments and the collective responsibility to disrupt harmful policies and systems. --- Conclusions Espousing a critical intersectional approach and utilizing in-depth interviews to understand the ACEs phenomenon, this study's results make a significant, interdisciplinary contribution to the ACEs literature and deepen understanding of the social problems service providers navigate. When mitigating ACEs through prevention and support, scholars and practitioners need to consider the precarious place in society children inhabit and how policies and programs have the potential to worsen high-risk conditions. If scholars, policymakers, and practitioners provide high-risk communities with sufficient resources and support, the collective efforts can hopefully prevent children from experiencing the very worst consequences of childhood abuse, neglect, and household dysfunction. --- Data Availability Statement: Interview data used in the study cannot be publicly shared. --- Acknowledgments: The authors would like to thank the P4A Vanderbilt research team for their contributions to the qualitative study materials, as well as their feedback on the manuscript. --- Informed Consent Statement: Informed consent was obtained from all subjects involved in the study. --- Conflicts of Interest: The authors declare no conflict of interest. The funders had no role in the design of the study; the collection, analysis, or interpretation of data; or the writing of the manuscript. --- Appendix A. Subpopulations at Individual, Group, and Population Identities Sub-populations are not commonly accounted for in research studies. These subpopulations were identified by service providers according to their social and economic standing, place of origin, geographic location, family history, and experienced high-risk conditions. The expanded-upon populations in this list present additional information about the experience that was referenced by health and social service providers. Higher-risk environments construct circumstances that may contribute to higher-risk conditions and worsen the effects of ACEs. Each of these identities was referenced directly or indirectly in our interviews with 81 service providers.
English as a second language
English as a third language, Spanish as a second language (Indigenous language is native language)
3. Non-English speakers
4. Kin foster parents
5. Close to kin, lives in same county
6. Non-kin foster parents
7. Foster youth
8. Guardian grandparents and great-grandparents
9. Temporary kin guardians
10. Special needs guardians
11. Residents of an immigration-raid county who are perceived to be of Latinx/e origin and/or undocumented
12. Residents of an immigration-raid county whose fear of being profiled prohibits their ability to drive, travel, and seek health and education support services
13. Residents of a distressed county
14. Residents of a rural county
15. Residents of a high-risk geographic area (proximity to a pill mill, cartel route, Appalachia, mountainous terrain, and/or a physical divide that perpetuates socioeconomic stratification, etc.)
16. Residents who do not have access to a hospital or urgent care (healthcare desert)
17. Women who live in healthcare deserts and do not have access to women's healthcare
31. Employment: Employed; Working poor; Underemployed; Unemployed; Exploitative work conditions; Underserved (i.e., no access to healthcare benefits); Non-highly skilled; Non-highly skilled, manual intensive labor; Essential
32. Housing: Housed; Housed without basic necessities (e.g., running water, electricity); Housing insecure
Children across all races/ethnicities and income levels experience adverse childhood experiences (ACEs); however, historically excluded children and families must contend with added adversities across ecological levels and within higher-risk conditions due to systemic inequality. In this grounded theory study, the authors examined how health and social service providers (N = 81) from rural and urban counties in Tennessee provided services to low-income families, children exposed to opioids, and children of immigrants. Guided by an intersectional framework, the authors examined how rural and urban settings shaped higher risk conditions for ACEs and impeded access to resources at the individual, group, and community levels. Findings from this study identified additionally marginalized subpopulations and demonstrated how inequitable environments intersect and compound the effects of ACEs. The authors present their Intersectional Nature of ACEs Framework to showcase the relationship between high-risk conditions and sociopolitical and economic circumstances that can worsen the effects of ACEs. Ultimately, the Intersectional Nature of Aces Framework differentiates between ACEs that are consequences of social inequities and ACEs that are inflicted directly by a person. This framework better equips ACEs scholars, policymakers, and stakeholders to address the root causes of inequality and mitigate the effects of ACEs among historically excluded populations.
Background Sexually transmitted infections (STIs), including HIV/AIDS, are a major public health burden in South Africa [1,2]. The epidemic is complex and thought to be influenced by a number of factors, including biological, behavioural, societal, and structural factors [1]. Although the contraceptive utilisation rate is high in South Africa, at 64% among sexually active women, unplanned and teenage pregnancies are an on-going problem [3,4]. Many South Africans are believed to be using condoms for HIV prophylaxis, but there are challenges with the use of condoms in certain communities [1,5-10]. The South African government supplies free male and female condoms to the population and positions them in numerous areas considered hotspots, including hotels, shops, taverns, health facilities, brothels, and other places where many people gather at the same time. One of the biggest problems is that even if condoms are used, they are not used consistently, especially in long-term relationships and among those who engage in high-risk sexual practices, such as sex workers [11]. Much of the high HIV prevalence in South Africa is attributable to the inability of women to negotiate for safer sexual practices, often because of age disparities or financial dependence [11,12]. Lack of female-controlled prevention methods also plays a significant role in a woman's HIV risk [11,12]. Male condom compliance requires cooperation from the woman's male partner(s), something that is not always possible in abusive relationships [8,11,12]. Another option to prevent STIs, HIV, and unplanned pregnancies that is in women's control is the use of female condoms [5,8]. This is both liberating and empowering for the woman, as she is in control of the situation and can practice safer sex if she wants to [8,11,12]. The literature suggests that women's empowerment and strategies such as the active promotion of female condom use can play a huge role in addressing challenges such as the high rate of STIs, teenage pregnancy, and HIV/AIDS [8,11-13]. Female condom use in Africa is realistic, and it provides women with more independent protection [13]. It is an alternative that is in the woman's control, with less need to rely on the male partner's cooperation or negotiation skills [14]. However, despite the known effectiveness of female condoms in preventing STIs (and thus reducing their prevalence) among sex workers and women in general, there is low uptake among women, including female health workers [8,14]. The acceptability of the female condom among women faces two obstacles: the reaction of the woman's regular partner and attitudes towards the device itself (appearance, difficulties, or uneasiness concerning its use) [12,13]. It has, however, also been established that the use of a female condom may cause more stigma and challenges for women [15]. As earlier highlighted, sex workers are often at a higher risk of HIV [8,11,15]. They could benefit from the increased promotion and accessibility of female condoms, as it has been shown that an increase in female condom promotion is positively correlated with an increase in female condom uptake among sex workers in Thailand and Madagascar [8]. Equipped with the correct knowledge, sex workers could then also be recruited as peer educators and advocates of safe sexual practices during their trade [16].
In Calcutta, India, the Sonagachi Project employed sex workers as peer educators to distribute condoms, advise peer sex workers on where they can get health services, and disseminate information promoting behavioural change [17]. The 59% rise in condom use in the same period can be attributed to this collaborative model [16,17]. Other countries' sex worker advocacy organisations, such as South Africa's Sex Workers Education and Advocacy Task Force (SWEAT), have adapted the same model of peer education to their context [18]. With no studies reporting on sex work and female condom use in most parts of South Africa, little is known about the acceptability and usage of the female condom among sex workers. This is despite the fact that South African female sex workers are known to have a high HIV prevalence and incidence and are responsible for a significant role in the transmission of HIV [19]. This is because they are known to have unprotected sex with their romantic partners and some of their clients [19,20]. Furthermore, because sex work is illegal in South Africa, advocacy for their sexual and reproductive health is limited [21,22]. This study therefore aimed to determine the knowledge, attitudes, and practices of Grahamstown, Rustenburg, and Brits female sex workers on the use of female condoms and contraceptives. This descriptive study also explores factors that could have informed their knowledge, attitudes, and practices. This study will generate new ideas and new strategies that can be implemented to promote female condom use among women, suited to their needs. --- Methods --- Study Design The study is a quantitative design that surveyed participants over 10 days in 2018. Where it is not explicitly stated to be a female condom, the phrase "condom" refers to condom use in general. --- Study Setting The North-West (NW) and Eastern Cape (EC) provinces are two of South Africa's nine (9) provinces located in the north-west and south-east of the country, respectively. The study was conducted in two brothels in Brits and Rustenburg (NW) and in two brothels in Joza township in Grahamstown (EC) between 20-29 September 2018. These are small towns in predominantly rural provinces with similarities in that they have high proportions of truck rest stations and migrant labourers [23]. As mining towns, both Brits and Rustenburg have high proportions of migrant male workers who work far from their wives and thus serve as a good market for female sex workers. --- Population and Sampling The target population for this research involved female sex workers of all ages trading at Brits and Rustenburg, under the jurisdiction of Bojanala District in the NW province, and Grahamstown, under the jurisdiction of Cacadu District in the EC province. The principal investigator (PI) met with all the respondents at their risk reduction workshops (RRW), held on selected days of the week. This setting made it easier since they were not focusing on clients at the time. Seventy participants were offered consent forms on three occasions during the workshop, and all were returned signed as all participants were willing to participate. Surveys were undertaken in a private room within the center where none of the other participants could overhear. --- Measurements A researcher-administered questionnaire that was translated into isiXhosa and Setswana was used to collect demographic information, knowledge, beliefs, attitudes, and practices regarding condom use (mostly the female condom), and questions on sexual activity. 
The latter questions (on sexual activity) were adapted from the Youth Risk Behaviour Survey [24] and also incorporated common themes (sexual risk reduction, condom promotion, access, cost, and availability) found in the literature [8,11,12,14]. Nine questions were used to assess knowledge of the female condom and contraceptives, the availability, costs, and effectiveness of the former in preventing HIV and STIs in general, and contraceptive options available in the South African public health sector. The content validity was reviewed by two experts (a health promoter and a public health medicine specialist); there was 100% agreement on clarity, and the content validity index was 1.0. A knowledge score of at least 50% was considered adequate. The views of participants on female condoms were assessed for the best possible view on four options that were not necessarily mutually exclusive to ascertain the commonly expressed view. Perceptions of female condom use were assessed using a 3-point Likert scale (disagree, neutral, agree), where neutral was equivalent to being unsure and/or never having used a female condom before. An exception is the perception of access to female condoms, which uses a 3-point scale with different anchors (very difficult, somewhat difficult, and not difficult). The translation of the questionnaire and the presence of a single interviewer for all participants enhanced the reliability of the study's findings. --- Data Management and Statistical Analysis All variables were captured and coded in Microsoft Excel 2013 and exported to Stata 14.1 for analysis. The numerical data were explored using the Shapiro-Wilk test. While numerical data that were normally distributed (age of participants and the age of entry into sex work) are summarised using the mean, standard deviation (SD), and range, numerical data that were not normally distributed (age of sex debut and the average number of daily sexual clients) are reported using the median and interquartile range (IQR). The two-sample t-test for equal variances was used to test the equality of two means by province where numerical data were normally distributed, and the Wilcoxon rank-sum test (Mann-Whitney U test) was used to test for the equality of two medians if data were not normally distributed. Categorical variables are presented using frequency tables, percentages, and graphs. Two proportions are compared using the two-sample test of proportions. Two numerical variables are compared using Spearman's correlation. Simple linear regression is used to compare two different associations of knowledge (age in years and the average number of daily sexual clients). Binomial logistic regression is used for bivariate associations of knowledge for the overall population. The prevalence ratio (PR) is the measure of the association with knowledge. The 95% confidence interval (95% CI) is used to estimate the precision of estimates. The level of significance was a p-value < 0.05. --- Ethics and Legal Considerations The Walter Sisulu University Human Ethics and Biosafety Committee granted ethical clearance and approval for the study (ethics approval number HREC: 005/2019). Each participant gave informed consent; confidentiality was maintained, abiding by the four ethical principles of autonomy, beneficence, non-maleficence, and justice. Participation was completely voluntary, without a promise of financial and/or personal incentives.
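The analysis described above (a Shapiro-Wilk normality screen that decides between mean-based and rank-based group comparisons, plus prevalence ratios with 95% confidence intervals for the knowledge outcome) was carried out in Stata 14.1. As a rough illustration of the same logic, the sketch below uses Python on simulated data; the variable names and simulated values are assumptions for illustration only, not the study's dataset, and the Katz log interval shown is one common way of putting a confidence interval on a prevalence ratio.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    # Simulated stand-ins roughly matching the reported sample (n = 69, mean age 32, SD 7.2).
    age = rng.normal(32, 7.2, 69)
    knowledge = np.clip(rng.normal(55, 20, 69), 0, 100)   # knowledge score out of 100
    adequate = (knowledge >= 50).astype(int)              # score of at least 50% = adequate
    province = rng.choice(["EC", "NW"], size=69, p=[0.3, 0.7])

    # 1. Normality screen decides between the two-sample t-test and the rank-sum test.
    ec_age, nw_age = age[province == "EC"], age[province == "NW"]
    if stats.shapiro(age).pvalue > 0.05:
        result = stats.ttest_ind(ec_age, nw_age, equal_var=True)   # two-sample t-test
    else:
        result = stats.mannwhitneyu(ec_age, nw_age)                # Wilcoxon rank-sum
    print(result)

    # 2. Prevalence ratio of adequate knowledge (EC vs NW) with a Katz log 95% CI.
    p_ec, n_ec = adequate[province == "EC"].mean(), (province == "EC").sum()
    p_nw, n_nw = adequate[province == "NW"].mean(), (province == "NW").sum()
    pr = p_ec / p_nw
    se_log = np.sqrt((1 - p_ec) / (n_ec * p_ec) + (1 - p_nw) / (n_nw * p_nw))
    lower, upper = np.exp(np.log(pr) + np.array([-1.96, 1.96]) * se_log)
    print(f"PR = {pr:.2f} (95% CI {lower:.2f}-{upper:.2f})")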
--- Results A total of 70 participants were interviewed, but one participant from the North-West province was excluded due to inconsistent information on pregnancies and three other variables. As a result, only 69 participants are included in the final analysis, of which 21 (30.4%) and 48 (69.6%) were from the Eastern Cape (EC) and North-West (NW) provinces, respectively. The demographic characteristics are shown in Table 1. On average, participants were 32 years old (SD = 7.2, range = 18-46); the youngest age of entry into sex work was 16 years, and the average age of entry was 22.8 years (range = 16.0-35.0). More than half of the participants (53.6%) were cohabiting; 15.9% had multiple sexual partners; 82.6% had at least a matric as the highest level of education; and 21.7% had a tertiary qualification. Thirty-five (50.7%) had a job, including being a peer educator (26.1%), a cashier or an intern (7.3%), an administrative or general assistant (4.3%), or a volunteer police reservist (1.4%). All participants knew their HIV status; the HIV prevalence was 30.4% (95% CI: 20.5-42.5), comprising 19.0% of Eastern Cape and 35.4% of North-West participants. Implanon was the most commonly used contraceptive (52.2%), followed by 34.8% who were on injectable contraceptives, and this trend was a reflection of the picture in the two provinces. The first sexual encounter was voluntary for almost two-thirds of the participants (65.2%, 45/69); 92.8% of participants reported having begun sex work due to poverty or unemployment; and 89.9% of participants had been pregnant before (Table 2). Participants from both provinces reported that they had a minimum of four daily sexual clients (median = 6.0; IQR = 5.0-8.0). Overall, 68.1% of participants had been pregnant either once or twice, and 22.0% had been pregnant 3 to 5 times. Only 20.3% of participants reported ever having an abortion. Twenty-eight participants (40.6%) had a single child, 30.4% had two, and 15.9% had three or more children (Table 2). Condom use in the most recent sexual encounter was reported by 82.6% of participants (Table 3). Even though condoms were reported to be advantageous by most participants, 81.2% of participants reported barriers to condom use. Such barriers included non-acceptance by clients, resulting in a negative impact on their income (63.8%). In other instances, clients refused to use a condom, and this was reported by 36.2%. Whereas 30.4% of participants found female condoms to be a useful preventative method, 49.3% found female condoms to be uncomfortable (Table 4). Whereas 89.9% of participants had not used a female condom in the past 3 months, 17.4% reported having used one at least once during the most recent twelve months. The data in Table 5 show further perceptions of the female condom. Only 8.7% felt female condoms were easy to insert, only 7.3% felt they enhanced pleasure, and 81.2% confirmed that they were adequately promoted. A total of 65.2% of participants reported having consumed alcohol during their most recent sexual encounter. Of all the participants, only one reported drug use before the most recent intercourse. Adequate knowledge was attained by 76.2% of EC participants and 29.2% of NW participants.
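The prevalence figures quoted above are simple binomial proportions with 95% confidence intervals; for example, the HIV prevalence of 30.4% (95% CI: 20.5-42.5) corresponds to roughly 21 of the 69 participants. As a hedged illustration, the sketch below recomputes such an interval in Python with statsmodels; the study itself used Stata, and its exact interval method is not stated, so the reproduced bounds are only approximately comparable.

    # Hypothetical re-computation of a reported proportion and its 95% CI
    # (about 21 HIV-positive participants out of 69, i.e., roughly 30.4%).
    from statsmodels.stats.proportion import proportion_confint

    count, nobs = 21, 69
    prevalence = count / nobs
    wilson_low, wilson_high = proportion_confint(count, nobs, alpha=0.05, method="wilson")
    exact_low, exact_high = proportion_confint(count, nobs, alpha=0.05, method="beta")  # Clopper-Pearson

    print(f"Prevalence: {prevalence:.1%}")
    print(f"Wilson 95% CI: {wilson_low:.1%} to {wilson_high:.1%}")
    print(f"Exact (Clopper-Pearson) 95% CI: {exact_low:.1%} to {exact_high:.1%}")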
Overall, those who reported barriers to condom use were 6.7 times more likely to have adequate knowledge than those who did not, and this was statistically significant (PR = 6.7; 95% CI: 1.01-45.0; p-value = 0.004). Furthermore (Table 6), EC participants were 2.6 times more likely to have adequate knowledge than NW participants, and this was statistically significant (PR = 2.6; 95% CI: 1.6-4.3; p-value = 0.003). There was no statistically significant association between HIV status and the level of knowledge (PR = 0.6; 95% CI: 0.3-1.2; p-value = 0.099). A 1-year increase in age led to a 0.4% reduction in knowledge score, which was statistically significant (p-value = 0.032); despite this, however, only 6.6% of the variation in knowledge score could be attributed to its linear relationship with age (R² = 6.6%) (Table 7 and Figure 1). Similarly, the addition of a single sexual client resulted in a 1.9% reduction in knowledge score, and this was also statistically significant (p-value = 0.002); as with age, only 13.3% of the variability in knowledge score could be attributed to its linear relationship with the average number of daily sexual clients (R² = 13.3%). None of these associations were statistically significant when stratified by province. Figure 1 further illustrates the knowledge scores of the participating sex workers. --- Discussion This study sought to understand the knowledge, attitudes, and practices of sex workers towards female condoms and contraceptive use in the South African context. One of the most critical but under-valued strategies for reducing the incidence of HIV, other sexually transmitted diseases, and unwanted pregnancies is understanding high-risk populations and their reasons for taking up, or failing to utilise, the interventions intended to help them. Understanding baseline knowledge, habits (occupational practices, alcohol use, multiple sexual partners, etc.), perceived threats, perceived susceptibility to an adverse outcome (e.g., HIV infection or loss of income), and perceived benefits of behaviour change should inform efforts to trigger behaviour change [25]. In the absence of government-driven programs for sex workers in South Africa, this study therefore adds to the sparse literature in this area, not only informing the design of interventions but also helping to find alternative methods of engaging stakeholders beyond female sex workers in the design of health interventions [16]. Participants in this study reported having begun sex work due to poverty or unemployment, even those with tertiary qualifications. This is consistent with previous UNAIDS findings, which reported that some individuals choose sex work as an occupation, but for some communities, it remains a means of survival, with as many as 86% of Canadian female and child sex workers from indigenous communities having a history of poverty and homelessness [26]. Other factors reported to contribute to entry into sex work include a lack of education and/or employment opportunities, marginalisation, addictions, and mental illness [26]. This often affects the younger population, which is still in its prime. A peri-suburban South African study reported a median age of 31 years among sex workers in Soweto [27].
It is notable that more than 80% of the participants, all of whom were female, had at least a matric level of education, which puts them above average when compared to the general population of 25-year-old South African women living in urban and peri-urban areas, whose high-school attainment was measured at 68.2% [28]. South Africa has a high rate of unemployment, with even graduates struggling to find jobs; poverty and unemployment are the main contributing factors, and this helps to explain why young people with a matric qualification enter the sex work industry. This also contradicts the association of sex work with a lack of education seen in other societies elsewhere in the world [26]. Individuals with such a level of education are therefore expected to grasp health promotion with ease if their knowledge is to be enhanced [8,23]. Even though the HIV prevalence of 30.4% is higher than the South African adult population prevalence of 20.4% reported for 2018 [2], it is slightly lower than the South African antenatal HIV prevalence of 30.8% reported in 2015 [2,29]. The HIV prevalence is also far lower than the estimated HIV prevalence among sex workers reported by the United Nations for 2018 of 57.7% [2]. In a study by Coetzee et al., an HIV prevalence of 39.7% was reported in a study population of sex workers in Cape Town, 53.5% in Durban, and 71.8% in Johannesburg [21,27]. With such a high prevalence, it is therefore highly critical for sex workers to protect themselves, their clients, and/or intimate partners against STI (including HIV) infection or re-infection by using dual methods of protection (i.e., condoms and other proven preventive measures). However, condoms were not found to be used consistently in this study, whether as a result of non-acceptance by clients, sex workers' perception that they lose income, or ineffective marketing approaches. Qualitative data supporting survey findings on the inconsistency of condom use [28] resonate with findings from this study, where participants opted against condom use with clients in exchange for higher payment, because substance use clouded their judgment, or because they were unable to negotiate safer sexual practices with spouses owing to issues of trust and fear of sexual violence or coercion from clients or partners [28]. The disadvantages of condom use raised in this study are a common finding in the literature [20]. Opportunity costs of condom use included the fact that using condoms had a negative impact on their income, as some clients would either leave or offer to pay less if a condom was used [20]. In other settings, sex workers have reported a preference for the female condom, as they could have it on before meeting a client, thus eliminating the need to negotiate condom use [8,14,20]. A further complication of unprotected sex and the consequent unplanned pregnancies is the risk associated with abortion (often sought in the informal sector because of stigma) [20]. Even though the proportion of women with a previous abortion (a fifth, 20.3%) is lower than previous African reports of between 22% and 86%, it is still of concern [20]. This suggests under-use of the other family planning services available. Implanon and injectable contraceptives were the most commonly used contraceptives (52.2% and 34.8% of participants, respectively), suggesting a preference for medium-term (mostly 3-month) to long-term (5-year) contraceptives.
This is consistent with other findings in the literature that sex workers are less inclined to use oral contraceptives, as these require daily administration, which could result in poor compliance [8,20]. In contrast, though, some female sex workers reject injectable contraceptives as they are considered bad for business due to their associated dizziness, nausea, and menorrhagia, or extended vaginal bleeding [20]. In previous Kenyan studies, participants preferred effective medium- or long-term contraceptives such as injectable contraception or an implant [20,28]. The individual circumstances of a sex worker often interfered with compliance and the correct use of other methods [20,28]. Risky behaviours, such as being drunk, were among the common reasons associated with poor condom and contraceptive compliance [20]. Injectable contraceptives, Implanon, and intrauterine contraceptive devices are also beneficial as contraceptives for female sex workers who are raped or who cannot negotiate safe sex [20,28]. Furthermore, condoms can tear, hence the need for an additional contraceptive measure [20,28]. Even though adequate knowledge was attained by only 43.5% of the participants, those with adequate knowledge were 90% more likely to report opportunity costs associated with condom use. By inference, sex workers engage in unprotected sex not due to a lack of knowledge but often because of the income lost by not giving the client what he prefers. This also suggests that health promotion strategies and messaging are not reaching their target audience, and that a change of tactics is needed for better impact. This study is not without limitations, although these were minimised and are not anticipated to have given rise to different outcomes. Firstly, there is a selection bias in that the participants were recruited from a risk reduction workshop (a controlled environment) and are therefore likely to be more knowledgeable about safer practices than the general population of sex workers. Secondly, the use of a researcher-administered questionnaire could have led to social desirability bias. This was limited by asking follow-up questions and rephrasing some questions in a different section of the questionnaire to assess reliability; as a result, participants with inconsistent responses were excluded. Even though the findings are not generalisable to all South African sex workers due to the small sample size, the study has certainly identified the health needs of this marginalised population, as the findings are internally valid and could be the basis for more detailed deductive qualitative studies and prospective quantitative studies. Furthermore, the study has highlighted the importance of increasing contraceptive uptake and the need to promote female condom acceptability and availability among female sex workers. --- Limitations Due to the nature of the participants' work, it is often difficult to find them in situ. Consequently, the obtained sample may not be representative. This study also could not investigate in depth the events leading to the start of sex work. The paucity of literature has also limited the authors' ability to obtain adequate and recent sources. --- Conclusions This study has confirmed the low acceptability of female condoms, as manifested by their low usage.
Although female condoms are marketed, their low use can be linked to the limited effectiveness of that marketing in conveying their effectiveness and efficacy, to negative perceptions among sex workers, and to their unacceptability to clients. Additionally, poverty and high unemployment rates remain challenges that drive decisions to engage in sex work. Knowledge of condom use led to a better understanding of barriers to condom use. It is therefore imperative to strengthen the approaches used to market condoms to members of the community and to improve access, in order to improve attitudes towards and efficacy in their use. Furthermore, the government needs an inclusive approach to dealing with sex work and its associated risks. --- Data Availability Statement: All data used in the study will be available from the corresponding author. --- Author Contributions: N.S. conceptualized this study, drafted the proposal, collected data, and wrote the first draft of the manuscript. S.C.N. and M.P. advised on survey methods and edited versions of the manuscript. W.W.C. co-supervised the research and edited versions of the manuscript. S.A.M. analysed the data, edited versions of the manuscript, and signed off on the final version of the manuscript. All authors provided feedback on the analytical strategy and drafts. All authors have read and agreed to the published version of the manuscript. --- Conflicts of Interest: The authors declare no conflict of interest.
Female sex workers are a marginalized and highly vulnerable population who are at risk of HIV and other sexually transmitted diseases, harassment, and unplanned pregnancies. Various female condoms are available to mitigate the severity of the consequences of their work. However, little is known about the acceptability and usage of female condoms and contraceptives among sex workers in small South African towns. This descriptive cross-sectional study of conveniently selected sex workers explored the acceptability and usage of female condoms and contraceptives among sex workers in South Africa using validated questionnaires. The data were analyzed using STATA 14.1; 95% confidence intervals were used for precision, and a p-value ≤ 0.05 was considered significant. Of the 69 participants, all of whom were female, 49.3% were unemployed, 53.6% were cohabiting, and 30.4% were HIV positive. The median age of entry into sex work was 16 years. Participants reported use of condoms in their last 3 sexual encounters (62.3%), preference for Implanon for contraception (52.2%), barriers to condom use (81.2%), condoms not being accepted by clients (63.8%), condoms being difficult to insert (37.7%), and condoms being unattractive (18.8%). Participants who reported barriers to condom use were 90% more likely to have adequate knowledge than those who did not (PR = 1.9; p-value < 0.0001). Knowledge of condom use was an important factor in determining knowledge of barriers to their use. Reasons for sex work, sex workers' perceptions, and clients' preferences negatively affect the rate of condom use. Sex worker empowerment, community education, and effective marketing of female condoms require strengthening.
Introduction The World Health Organization (WHO) defines partner and non-partner sexual violence separately. Partner sexual violence is defined as self-reported forced engagement in sexual activity by a current or former partner from the age of 15, including sexual intercourse performed against one's will out of fear of how the partner might react, or being forced to do something sexual that is humiliating or degrading. Non-partner sexual violence is defined as being forced, at age 15 or older, to perform any unwanted sexual act by someone other than one's husband or partner [1]. The revelation of sexual violence often creates shame and stigmatization of the victim; the perpetrator shames and blames the victim to reduce their own responsibility, and a climate of stigma develops in sociocultural perceptions, so most victims opt not to report their experiences or may not describe what happened to them as sexual violence [2]. WHO defines sexual abuse during childhood and adolescence (child sexual abuse (CSA)) as "the involvement of a child in sexual activity that he or she does not fully comprehend, is unable to give informed consent to, or for which the child is not developmentally prepared and cannot give consent, or that violates the laws or social taboos of society; CSA is evidenced by this activity between a child and an adult or another child who by age or development is in a relationship of responsibility, trust, or power, the activity being intended to gratify or satisfy the needs of the other person" [3]. An important aspect of sexual violence is the relationship between the victim and the perpetrator, and recent research has focused on whether sexual violence is committed by an intimate partner or by a non-partner. Although such violence is often traumatic regardless of the perpetrator, its pattern, extent, and effects may vary by perpetrator [1][2][3]. The occurrence of spousal violence depends on determinants at the individual and environmental levels, with unemployment, poverty, and literacy having a significant impact on spousal violence against women [4]. Transgender and non-binary youths are exposed to significantly more violence compared to women and men, and experiences of sexual risk taking and ill health show strong associations with exposure to multiple forms of violence [5]. Although most previous research has focused on the impact of domestic violence on women, a few studies have focused on the characteristics of adolescent girls and adult women who experienced sexual violence [6]. In Taiwan, no in-depth study has been conducted on this issue. Therefore, we hypothesized that sexual assault represents the greatest risk of violence faced by women in Taiwan. This study intended to understand, for the first time, the main types of risk of violence against women in Taiwan through the National Health Insurance Research Database (NHIRD). --- Materials and Methods --- Data Source Taiwan's National Health Insurance launched a single-payer system on 1 March 1995. As of 2017, 99.9% of Taiwan's population had participated in the program. This was a 16-year observational study that used the NHIRD; the Longitudinal Health Insurance Research Database (LHID2000), a representative sample of 2 million people drawn from the year 2000 coverage, served as the parent cohort, and new cases were tracked from 1 January 2000 to 31 December 2015.
The files used were the "Outpatient Prescription and Treatment Details File", the "Inpatient Medical Expense List Details File", and the "Insurance Information File". The violent abuse research cases included 11,077 people. The National Health Research Institutes encrypted all personal information before releasing the LHID2000 to protect the privacy of patients. In the LHID2000, disease diagnoses were coded according to the "International Classification of Diseases, Ninth Revision, Clinical Modification" (ICD-9-CM) N-code standard. Cases that occurred in 2000 were excluded. Figure 1 shows the research-design flow chart of this study. All procedures involving human participants performed in the research complied with the ethical standards of the institution and/or the National Research Council and the 1964 Declaration of Helsinki and its subsequent amendments or similar ethical standards. All methods were carried out following relevant guidelines and regulations. The Ethical Review Board of the General Hospital of the National Defense Medical Center (C202105014) approved this study. --- Participants Children and adolescents who suffered violence (victims of violence) were defined as minors under the age of 18 who sought medical treatment under the National Health Insurance.
Victims were identified according to the International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM): child and adolescent victims by N-code 995.5 together with the external-cause codes (E-codes) E960-E969, and violently abused adults (18-64 years old) and violently abused elderly (65 years of age and above) by N-code 995.8 together with E-codes E960-E969; these constituted the case group (victims of violence). The control group consisted of people who did not suffer violence (non-victims). People in the case and control groups were matched on index date, gender, and age at a ratio of 1:4. The insured identity information came from the "unit attribute" variable of the underwriting file. The grouping method took into account the original codes, the data-release requirements of the Data Science Center, and the actual data distribution. The cases were divided into seven groups, namely, Group 1: "public insurance"; Group 2: "labor insurance"; Group 3: "farmers"; Group 4: "members of water conservancy and fisheries associations"; Group 5: "low-income households"; Group 6: "community insured population"; Group 7: "others and missing values" (including religious people, other social welfare institutions, veterans, and others). The cause of injury in this study was identified using E-codes E960-E969 (see Appendix A for details). The groups were combined based on case numbers in order to comply with the regulations of the Data Science Center of the Ministry of Health and Welfare. The main groups after combination were "Grappling, fighting, and sexual assault (E960)", "Injury by cutting tools (E966)", "Children and adults persecuted and abused (E967)", and "Injured by blunt objects or dropped objects (E968.2)". "Grappling, fighting, and sexual assault (E960)" was subdivided into "Unarmed combat or fighting (E960.0)" and "Sexual assault (E960.1)". "Persecuted and abused children and adults (E967)" was subdivided into "Persecuted by father, stepfather or boyfriend (E967.0)" and "Persecuted by spouse or partner (E967.3)", with the rest classified as "Persecuted by others (E967.1, E967.2, E967.4-E967.9)". The remaining injury mechanisms were classified as "injured by other methods (E961-E965, E968.0, E968.1, E968.3-E968.7)". --- Statistical Analysis This study used the SAS 9.4 statistical software for Windows (SAS Institute, Cary, NC, USA), provided by the Academia Sinica branch of the Health and Welfare Data Science Center of the Ministry of Health and Welfare, for the analysis. Descriptive statistics were expressed as percentages, means, and standard deviations, and the chi-square test was used to compare differences among the three groups (children, adults, and elderly). Differences in the cause of injury and in the proportion of women who suffered sexual assault among the age groups were determined. In addition, logistic regression was used to analyze the risk of sexual assault for women in different age groups or with various occupations (including dependent occupations). According to the central limit theorem, (a) if the sample data are approximately normal, then the sampling distribution will also be normal; (b) in large samples (>30 or 40), the sampling distribution tends to be normal regardless of the shape of the data; and (c) the means of random samples from any distribution will themselves be approximately normally distributed [7]. A p value < 0.05 was considered to be statistically significant.
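For readers less familiar with this type of analysis, the logistic-regression step described above can be illustrated with a short sketch. This is a hypothetical Python/statsmodels example on simulated data (the study itself used SAS 9.4); the variable names, group probabilities, and sample size are invented for illustration and do not reflect the NHIRD data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000

# Simulated cohort with three age groups (hypothetical labels).
df = pd.DataFrame({
    "age_group": rng.choice(["child", "adult", "elderly"], size=n),
})

# Simulated binary outcome: higher assault probability for the youngest group.
p = df["age_group"].map({"child": 0.15, "adult": 0.05, "elderly": 0.01})
df["sexual_assault"] = rng.binomial(1, p)

# Fit a logistic model with the elderly as the reference category.
model = smf.logit(
    "sexual_assault ~ C(age_group, Treatment(reference='elderly'))", data=df
).fit(disp=False)

# Exponentiated non-intercept coefficients are odds ratios relative to the elderly.
print(np.exp(model.params))
print(model.pvalues)
```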
--- Results During the 15-year period, 1592 children, 8726 adults, and 759 seniors were injured by violence and sought medical treatment. Among them, 301 children, 217 adults, and 0 seniors were sexually assaulted, so sexual assaults accounted for 18.9%, 2.5%, and 0% of violent injuries in each generation, respectively, and the proportion of children suffering sexual abuse was significantly higher than that of adults (Table 1). Very few men were sexually assaulted: six cases occurred in childhood and five in adulthood. Among female victims of violence, the proportions of injuries caused by sexual assault were 38.9%, 7.4%, and 0% in each generation; the proportion injured by unarmed combat or fighting rose to 24.5% among adult women, which was significantly higher than the 18.4% and 5.5% observed for older women and girls, respectively (p < 0.0001) (Table 2). The highest rate of sexual assault was observed among women 12-17 years old (54.8%), which is 2.71 times that of women under 12 years old. In addition, women aged 24-44 and 45-64 years old were more likely to be sexually assaulted than girls under 12 years old, who were the least vulnerable to sexual assault (Table 3). Girls and adult women were 100 times more likely to be sexually assaulted than men (p < 0.001). Senior-school-aged students (12-17 years old) were 2.5 times more likely to be sexually assaulted than junior-school-aged students (6-11 years old) (p = 0.003). For children and adolescents and for adults who were sexually assaulted, the risks were 11.4 times (p < 0.001) and 2.51 times (p < 0.001) that of the elderly, respectively (Table 4). Girls insured under labor insurance, as farmers, as members of water conservancy and fishery associations, as low-income households, or as part of the community insured population (with public insurance as the reference group) were significantly more likely to seek medical treatment for sexual assault than adult women. Among them, the risk was greatest for girls from low-income households (OR = 10.74) (Table 5). --- Discussion --- Importance of This Study The results of this study revealed that, among those suffering violence and seeking medical treatment, children and adolescents accounted for the largest proportion of sexual assault cases, and the proportion of children and adolescents suffering sexual abuse was significantly higher than that of adults. Women aged 12-17 years old were 2.71 times more likely to be sexually assaulted than women under 12. High school students (aged 12-17 years old) were 2.5 times more likely to be sexually assaulted than primary school students (aged 6-11 years old). Young people (18-23 years old) and adults (24-44 years old) were 11.4 and 2.51 times more likely to be sexually assaulted than middle-aged people (45-64 years old), respectively. The risk of sexual assault for girls in low-income households was greater than that of adult women (OR = 10.74). Therefore, sexual assault represents the greatest risk of violence faced by women in Taiwan. The most common forms of violence against women are domestic abuse and sexual violence [8]. Nearly 3 in 4 children, or 300 million children aged 2-4 years, regularly suffer physical punishment and/or psychological violence at the hands of parents and caregivers, and an estimated 1 in 5 women and 1 in 13 men report having been sexually abused as a child at the age of 0-17 years.
A total of 120 million girls and young women under 20 years of age have suffered from some form of forced sexual contact [9]. A study of grade 10 students in Iceland showed that 15% of them experienced some form of abuse, and two-thirds experienced abuse more than once [10]. A Swiss study noted that 40% of girls and 17% of boys reported CSA [11]. In a Swedish study, 65% of girls and 23% of boys reported CSA [12], which is consistent with our study. Numerous studies have demonstrated the impact of poverty or low socioeconomic status (SES) on adolescent development and well-being [13][14][15]. A recent report from the Health Behavior in School-Aged Children study showed that disparities in household affluence continue to have a significant impact on adolescent health and well-being [16]. These findings suggest that adolescents from low-income households have poorer health, lower life satisfaction, higher levels of obesity and sedentary behaviors, weaker communication with parents, less social interaction through social media, and less social interaction from friends and family [17]. Many of these inequalities will have lasting lifelong effects. The findings suggest that these inequalities may be increasing, with widening disparities in several key areas of adolescent health [16,17]. In regard to sexual abuse in adolescents, a few studies have focused on the relationship between economic status (poverty or affluence) and CSA, and the results have been inconsistent [18]. The research has found poverty to be a risk factor for sexual abuse, Sedlak et al. reported that children from families with low SES were twice as likely to experience sexual abuse and three times as likely to be endangered than children from families with higher SES [19]. In their recent study, Lee et al. reported a high risk of severe and multiple types of abuse, including sexual abuse, for children experiencing poverty during childhood. This condition also affects the overall health in adult years, especially for women [20]. However, Oshima et al. found no significant difference in the CSA rates between more affluent and poor families, but a significant difference was reported between poor victims and wealthier victims of childhood sexual abuse for repeated reports of maltreatment to child protective services [21]. A few studies have looked at sexual abuse in low-income households and adolescence. Several research have shown that the least affluent adolescents reported a higher risk of sexual abuse [19,20], whereas one study reported no significant difference in CSA rates between non-poor and poor households [20,21]. Differences in these findings may stem from the differences in research methodology given that Oshima et al.'s data were derived from CSA reports from child protective services [19][20][21]. A low SES is an indicator of social disadvantage; for women, it may independently lead to the risk of sexual abuse. The double-harm hypothesis proposes that two or more concurrent sources of social disadvantage may interact to produce particularly negative outcomes. Therefore, the detrimental effects of SES may be more effective in adolescent girls than in boys [22]. The results of this study support this line of thinking. From the perspective of violent criminology, countries attempt to prevent violence against women by formulating laws related to sexual assault [23]. 
However, the ineffectiveness of existing laws and questions about their appropriateness mean that women still cannot be effectively protected from violence. The Domestic Abuse Act of 2021 expanded the legal system's role in dealing with domestic violence, made common assault an arrestable offense for the first time, and strengthened civil laws related to domestic violence to ensure that common-law partners of any gender, and couples of any gender who have never been married or do not live together, receive the same non-harassment and work orders as married individuals [23]. Young people are the most frequent victims of sexual violence, with 12% to 25% of girls and 8% to 10% of boys under the age of 18 thought to experience sexual violence [24]. In addition, CSA is associated with an increased risk of dating violence in all three forms (psychological, physical, and sexual) among boys and girls [25]. Sexual violence is more likely to occur among young people, women, people with disabilities, and those who have experienced poverty, childhood sexual abuse, and substance abuse [26,27]. Parental addiction, parental mental illness, and exposure to domestic violence, both individually and cumulatively, have been associated with CSA [28]. The shocking incident of two women being kidnapped and murdered in Taiwan at the end of 2020 prompted the passage of the "Stalking and Harassment Prevention Law" [29,30]. Violence against women and girls, irrespective of their social status and cultural level, remains prevalent throughout the world [30]. Previous investigations in Taiwan noted that victims aged 12 to under 18 were the most common age group for sexual assault in 2006-2015 [29] but did not specifically identify girls from low-income households as the most at-risk group [30]. Our study compared the risk of sexual assault between girls and adult women and showed that the risk of sexual assault for girls from low-income households in Taiwan is 10.74 times that of adult women. In Taiwan, according to the latest "Statistical Survey on Intimate Relationship Violence of Taiwanese Women" released by the Ministry of Health and Welfare, 20% of women have been subjected to violence by an intimate partner, of which mental violence is the most common, whereas sexual violence has doubled compared with previous surveys [30]. A slight increase has also been observed in harassment, which is a form of violence in intimate relationships and needs attention in the future [30]. In 2021, a woman in Taiwan was stalked and harassed; with no legal basis and no way to seek help, an unfortunate incident finally occurred, which led to the passage of the third reading of the "Stalking and Harassment Prevention Act", giving Taiwan a legal basis for the protection of women's rights and interests [30]. --- Cause of High Risk of Sexual Assault among High School Girls This issue needs to be discussed starting from the criteria for determining sexual abuse. A defining condition of CSA is that the child cannot give genuine consent [31]; under legal standards, however, even mutually consensual sexual behavior between boys and girls still constitutes sexual assault [31]. Therefore, when medical personnel in Taiwan encounter sexual assault cases involving persons under the age of 18, they are required to record the sexual assault code and report the case according to the law [31].
Previous studies indicated that adolescents who suffered sexual assault were mostly younger than 14 years old, whereas this research showed that the high-risk group for sexual assault included high school girls aged 13-17 years old, which is consistent with the age of female sexual maturity [32]. A Swedish survey revealed that 16.3% of women experienced sexual violence before the age of 18 and that 10.2% experienced completed or attempted sexual assault in adulthood [33]. Perpetrators were more commonly uncles and stepfathers for adolescents, and partners or former intimate partners for adult women; in most cases, sexual assaults occurred in public places, although sex crimes at the perpetrator's residence were more frequent among adolescents [32]. The 2008-2020 Sexual Assault Notification Case Investigation in Taiwan provides the age distribution of sexual assault victims and perpetrators: over the years, most victims were 12 to under 18 years old, and most perpetrators were 12 to under 18 years old or 18 to under 24 years old [34]. Feminist scholars reject biological and essentialist explanations, arguing that gender inequality is the driving force behind sexual violence against women [35]. Sanday, who first proposed the theoretical framework of sexual violence, believed that sexual assault is used as a means to control and dominate women in order to maintain the hierarchical status of men [36]. However, such a theoretical framework cannot fully explain the difference in risk between sexually assaulted girls and adult women. A more reasonable explanation can be obtained from the following three factors [37]: (1) low-income girls face more capable and criminally motivated offenders than adult women; (2) low-income girls are more suitable targets for sexual violence crimes than adult women; (3) girls from low-income households are more likely to face the absence of suppressors who can stop the crime. When all three of these conditions develop in an unfavorable direction, the risk of sexual assault for girls in low-income households is 10.74 times that of adult women, and the risk of sexual assault for girls in other insurance categories is also higher than that of adult women. This study has several limitations. First, Taiwan's National Health Insurance database delays the release of data by two years. Moreover, from 2016 to 2018 the coding system changed from ICD-9 to ICD-10, which may introduce deviations in code conversion. Second, Taiwan's National Health Insurance database lacks information on personal factors such as marriage, education level, and living habits. However, child marriage is not a notable problem in Taiwan, and the secondary-school (ages 12-17) enrolment rate for Taiwanese women from 2000 to 2015 was 93.49-96.28%; thus, the lack of the above variables had little impact on this study. Third, the occupational classification in the health insurance database does not match the classification required for the research, and a more detailed classification could not be obtained; however, the identification of low-income status is certified by the relevant Taiwanese authorities and is therefore credible, although the researchers could not refine this classification further. Finally, after years of promotion in Taiwan, the prevention and treatment of sexual assault now follow a standard medical procedure, and child sexual assault is subject to public prosecution.
Therefore, despite the possibility of bias, the researchers believe that its magnitude is small. --- Conclusions Our results showed that, regardless of whether they are children or adults, women face a higher risk of sexual assault than men, and girls of senior high school age are at the highest risk, especially girls from low-income households. These results highlight the vulnerability to CSA of children, especially girls, living in low-income households. They also underscore the urgency of financially supporting the children of these low-income households, given the severity of the impact of CSA on the future health and well-being of victims. Therefore, the protection of women's personal autonomy is a direction that the government and people from all walks of life need to continue to strive for. Politicians and professionals in health, welfare, and education play an important role in supporting low-income children and their families. For high school students from low-income households, protection must be strengthened through education, social work, and police administration. Future studies should compare sexual violence before and after the coronavirus disease (COVID-19) pandemic (for example, 2016-2020 versus 2020-2024) once updated data are released, given that COVID-19 is expected to exacerbate this phenomenon. --- Data Availability Statement: Data are available from the NHIRD published by the Taiwan NHI administration. Because of legal restrictions imposed by the government of Taiwan concerning the "Personal Information Protection Act", data cannot be made publicly available. Requests for data can be sent as a formal proposal to the NHIRD (http://www.mohw.gov.tw/cht/DOS/DM1.aspx?f_list_no=812 (accessed on 13 October 2021)). --- Institutional Review Board Statement: The study was conducted in accordance with the Declaration of Helsinki and approved by the Institutional Review Board of Tri-Service General Hospital (C202105014). --- Informed Consent Statement: Not applicable. --- Conflicts of Interest: The authors declare no conflict of interest. --- Appendix A Homicide and injury purposely inflicted by other persons (E960-E969) [38]. E-code and perpetrator of child and adult abuse: E967.2 by mother, stepmother or girlfriend; E967.3 by spouse or partner; E967.4 by child; E967.5 by sibling; E967.6 by grandparent; E967.7 by other relative; E967.8 by non-related caregiver; E967.9 by unspecified person.
Objective: To understand the main types of risk of violence against women in Taiwan. Materials and methods: This study used the outpatient, emergency, and hospitalization data of 2 million people in the National Health Insurance sample from 2000 to 2015. International Classification of Diseases, Ninth Revision diagnostic N-codes 995.5 (child abuse) and 995.8 (adult abuse), or E-codes E960-E969 (homicide and intentional injury by others), were used to define the cases for this study, and the risks of first violent injury for boys and girls (0-17 years old), adults (18-64 years old), and elders (65 years old and above) were analyzed. Logistic regression analysis was used for risk comparison. A p value of <0.05 was considered significant. Results: The proportion of women (12-17.9 years old) who were sexually assaulted was 2.71 times that of women under the age of 12, and the risk of sexual assault for girls and adult women was 100 times that of men. Girls insured under labor insurance, as farmers, as members of water conservancy and fishery associations, as low-income households, or as part of the community insured population (with public insurance as the reference group) were significantly more likely to seek medical treatment for sexual assault than adult women. Among them, the risk was greatest for girls from low-income households (odds ratio = 10.74). Conclusion: Women are at higher risk of sexual assault than men regardless of whether they are children or adults, and the highest risk is for girls of senior high school age, especially girls from low-income households. Therefore, the protection of women's personal autonomy is a direction that the government and people from all walks of life need to continue to strive for. For high school students from low-income households in particular, protection must be strengthened through education, social work, and police administration.
Despite advances in medical technology and pharmaceutical therapy, coronary heart disease (CHD) remains the leading cause of mortality and morbidity worldwide (1). Asian Indians, in particular, have been found to have higher levels of risk for CHD both in their country of origin and in the diaspora. In addition, Asian Indians experience a first myocardial infarction at a much younger age (2), and mortality due to CHD is five- to tenfold higher in those aged under 40 years (3). Adherence to dietary best practice recommendations is, among other critical factors, essential for primary and secondary prevention of CHD (4). Recommendations involve consumption of a varied diet high in wholegrain cereals, fruit and vegetables, foods low in salt, and limited consumption of saturated fats, sugar, and foods containing added sugars. In addition, the Australian Guide to Healthy Eating provides people with a visual representation to assist the selection of healthy foods (5). Measuring diet empirically, however, is difficult, particularly in terms of accurately reporting intake in relation to overeating and disease (6). --- Indians and diet Dietary customs and habits among Asian Indians vary depending on their region of origin in India and their cultural and religious beliefs (7). The traditional Indian diet is carbohydrate dense and lacks high-quality protein and antioxidants (8). In addition, the Indian diet comprises large amounts of added sugars (9,10), large portions (11), and late dinners. Dietary patterns associated with immigration and acculturation may contribute to a higher risk of heart disease among Asian Indians (12). Asian Indians make up a steadily growing diaspora estimated at over 20 million worldwide (13). In 2011, Indian-born Australians were the second largest overseas-born Asian group in Australia, and the fourth largest overseas-born group overall (14). In addition to permanent migrants, Australia has a large temporary Asian Indian population in the form of tertiary students coming to study in Australia. Given the evidence of increased CHD among Asian Indians, particularly at a younger age, health professionals, and nurses in particular, can provide dietary education to reduce the risk of heart disease in this population. To date, little research has been done on the complex interplay of psycho-socio-biological factors in the food practices of Asian Indians. In particular, no research has been done on the migrant population of Asian Indian Australians. Therefore, understanding the knowledge, attitudes, and beliefs influencing food practices is vital in order to develop culturally appropriate interventions for diet-related behavior change. Rather than measuring these constructs quantitatively, a more in-depth qualitative approach may provide greater insights into the ways these factors interact. This paper reports and discusses the findings of a qualitative study into knowledge, attitudes, and beliefs relating to food practices and strategies for the prevention of heart disease among Asian Indian Australians. --- Methods --- Participants A convenience sample of migrant Asian Indian Australians who took part in a larger risk factor profile study was recruited to participate in a focus group.
Focus groups are particularly useful for exploring complex issues about which little is known. It is the dynamic of social interaction among the participants of focus groups that helps elicit rich data through the mutual support members experience; that support aids deep discussion and therefore rich findings (15,16). Using a convenience sample, participants who agreed to take a survey associated with the larger study (17) were invited to attend a focus group. Inclusion criteria were adults who identified themselves as Asian Indians and who were either migrants to Australia or born to Asian Indian migrants living in Australia. The sample consisted of Asian Indian Australian adults who were capable of conversing freely in English. This was important as participants came from varying Asian Indian language groups; to provide equal access, the English language was chosen. Furthermore, a high level of English language skills is known to exist in the Asian Indian population (18). The focus groups were conducted at the university after consultation with participants to ensure that the venue was suitable for them. --- Data collection Community leaders of local cultural associations and members of the Indian Medical Associations were contacted from a list provided by the Consul General of India to participate in a larger study titled 'Heart Health and Well Being among Asian Indians Living in Australia', a study undertaken to develop and implement an evidence-based intervention to reduce the risk of heart disease among Asian Indians. Participants were contacted by email and invited to participate in this study if they had previously provided the researchers with their contact details as part of the larger study and indicated an interest in participating in a focus group. A letter providing details of what was required during the focus group session was sent out along with a form to establish the most convenient time and location for the focus group. An information sheet was also sent out outlining the key areas of discussion: the Asian Indian community's practices relating to wellness, heart health, and preventive health; the various health and community resources available; and what participants perceived to be the components of an effective program for reducing the risk of heart disease. Written consent was obtained from each participant prior to conducting the focus groups. --- Measures Open-ended questions were developed based on the risk factor data obtained from a substudy of the Heart Health and Well Being among Asian Indians Living in Australia Project. These questions related to participants' perceptions of their food practices, their impact on heart health, and strategies that would enable people in their community to eat healthily. Focus groups were conducted using a dual-moderation approach, as difficulties have been noted in single-moderator studies where the moderator is required to ask questions as well as keep field notes (19). During the focus group sessions, feedback methods were used by the moderators to reflect back to the participants pertinent issues raised during the dialog. Each focus group lasted for 90 min, and data saturation was reached following the second focus group. Each focus group was digitally audio-recorded and transcribed verbatim to allow independent analysis by the research team. Field notes were compiled by each facilitator for inclusion in the analysis.
Following the focus group sessions, the moderators met to debrief, note any common themes, and discuss the field notes gathered during the session. These notes informed the data analysis process. --- Analysis Following transcription, data were analyzed for emergent themes and subthemes. Three researchers independently analyzed the data and later discussed their results before arriving at a consensus on the essential themes and subthemes (20). Exemplars were selected to illustrate emergent themes and subthemes. Ethical approval for the study was obtained from the University of Western Sydney Human Research Ethics Committee (approval no.: H8403). --- Results Two focus groups were held with a combined sample of 12 participants (Group 1: 6 and Group 2: 6). The majority of the participants were male (n = 9). Participants had migrated from South, North, and Northeast India, thus representing several subcultures of the region. Participants were aged between 35 and 70 years, all had completed a Bachelor's degree, and only one was born in Australia. Two participants were retired but were actively working in their community groups. Participants in the study included a general practitioner, accountants, dietitians, and financial advisors. Their length of stay in Australia ranged from 5 to 40 years. The main themes that emerged from the focus groups were (1) migration as a pervasive factor for diet and health; (2) the importance of food in maintaining vital social fabric; (3) knowledge and understanding of health and diet; and (4) preventing heart disease and improving health. --- Migration as a pervasive factor for diet and health Indian Australians identified the challenges of migration as negatively influencing dietary practices and health. Subsumed within this theme were challenges relating to stress and under-employment, loss of the extended family, and financial pressures. --- Stress and under-employment Migration as a source of considerable stress was discussed by the majority of participants. The stress associated with migration, particularly for skilled migrants, was substantial. Participants discussed the challenge migration presented in terms of the affordability of living in the new environment. One aspect was the under-employment of professionally trained migrants. This under-employment had a perceived impact on the health of the family, particularly on the husband-wife dyad, as both needed to work. As one participant expressed: [P]:... when we came here the whole thing changed, the whole place changed... the women had to look for a job, the children are neglected, the food were prepared in haste. The introduction of fast food, and therefore the westernization of their diet, was implicated in this process, leading to a perceived reduction in health. One participant stated: [M2]:... when I was working... lunch becomes a fast food type of thing... you get used to this... if it's pizza or whatever you want to eat you see, or McDonalds. --- Access to low-cost low-nutrient foods Migration also added financial pressures for new immigrants, leading to increased risk factors for heart disease. In particular, the discussions highlighted that unhealthier food choices were cheaper than more nutritious foods. Examples included ice cream, pizza, and beer all being cheaper to consume. As one participant described his transformation upon migration: [J]:... basically I never had ice cream when I was in India. I came here as a student and I found ice cream was the cheapest to eat...
You know I'm not joking, when I came to Australia I was only 75 kg. In three years I was 120 kg and now I'm 100 kg. Another participant echoed the above comments elucidating on the link between diet and exercise: [P]:... when I came here I was 128 pounds, now I'm 228 [pounds]. You didn't take out what you put in, and you didn't walk too long, we use the cars. Back home we used to walk to work. The drinks [beer] were so cheap when I came here... buy a carton of beer for about $5.99... we used to drink a carton a week. --- Loss of the extended family The loss of the extended family as a major social support was identified. The notion of family had to be redefined by including friends as surrogates for that loss of extended family. As one participant stated: [J]:... Back home in India... my grandmother was the one who took care of me. So I was getting proper food and not like you know fast food kind of thing. Participants identified differences between traditional dietary practice and post-migration practice. For example, the number of times a person eats per day has changed. Prior to immigration, it was common to eat several small meals per day [P]: We have the habit of eating five meals a day, when we came here we just eat three meals because we don't even have time. --- Importance of food in maintaining vital social fabric Participants discussed the role food played within the traditional contexts of family and community. From the patterns of meals and communal eating to maintaining social cohesion, food was seen as integral to Asian Indian culture. As one participant discussed: [M]:... almost every weekend we socialize. So when we invite somebody, we have all those items [food] and we eat as much as possible. Beliefs around the importance of types of foods during social events were also expressed including'sweets' as a culmination to a meal. Two participants illustrated this well: [L]:... some have a sweet tongue... without... sweets they are not satisfied. Responding to this comment, another participant stated: [K]: The meal isn't over. --- Knowledge and understanding of health and diet When asked of their knowledge of the connection between diabetes and heart disease, some participants were not aware of the link. One participant made the following comment: [J]:... My family, there's nobody with heart disease. But with diabetes yes. But to be frank with you I still eat sweets and I don't think that will be a problem in my life. Other misconceptions about health, diet, and heart disease were also expressed, in particular, the issues related to risk factors for heart disease. Being overweight was not necessarily seen as a health-negative issue. Participants discussed the cultural aspects of this notion. When asked how the community in general regarded being overweight, one participant who was a health professional stated: [L]: I think they disregard it probably.... They know, 'I am overweight', but still when they see a piece of sweets they forget about it [the weight]. Fatalism regarding health and health outcomes was noted by participants. This was expressed more in terms of comparing the apparent irony of a person of advanced age with multiple risk factors yet appearing well. [J]:... people tend to compare instances, for example say so and so... had no problems... still he passed away at 50. [another] person was having all sorts of problems, overweight, diabetic and what not, still he is 90 he is still going strong... 
--- Preventing heart disease and improving health Participants had much to say about aspects of interventions that may improve health and dietary outcomes. These centered on the family, community, and the use of media. --- The family as a driver for change The family was singled out specifically as an important unit for primary and secondary prevention strategies. In particular, the woman's role within the household was emphasized as she was considered the primary preparer of food: [M]:... in our community, food is normally prepared by the lady at home... so awareness of those [issues] to the women is more important... for instance my wife, she decides what she should cook and how she could cook. Women were also the ones considered the most knowledgeable concerning dietary issues. One participant commented: [M2]:... my wife is more conscious about health issues than I am... --- Community empowerment The Asian Indian Australian communities were also identified as important contexts for heart disease prevention interventions. One participant stated clearly: [M2]... awareness and education within the community [Asian Indian Australian] is something which we need to do. Discussion included the use of cultural fairs, religious settings, such as Hindu and Sikh temples, and community settings such as grocery stores and restaurants. By way of example, the following participant encouraged the use of cultural fairs emphasizing the large numbers of community that attend: [M]:... a good number attend. I mean you cannot cover all the community... but majority... around 25,000 people... that's a big number that you can get at one place. Religious settings featured as alternative contexts for interventions. The deeply integrated nature of religion with everyday life was emphasized. As one participant expressed: [M2]:... the number of temples which have come up in Sydney since I came here... they [the communities] may go for social events and other things, but here the religious thing is a very important thing. Although the majority of the participants were Hindus, other religions including Sikhism, Buddhism, and Christianity were also discussed. The emphasis was on the role of religious gathering as a context for potential intervention. Other settings included shopping centers, in particular, culturally-specific shopping areas frequented by Asian Indian Australians. [M2]: Maybe community grocery shops you see, not the supermarkets so much because they may not provide that type of thing [dietary intervention]. Like other settings, the timing of delivering such an intervention in the community grocery context was considered important. [J]: Especially the weekends, because on weekends is when much of [the] people go there [Asian groceries]. --- Media as a change agent Media was the third identified area for focus in developing and delivering a dietary-related intervention with particular emphasis on television and radio. [M2]: You've got SBS radio now, a Hindi program... they've started a new service... disability which is again an educational awareness thing. Print media was also discussed. The many Asian Indian languages and dialects were discussed. However, the provision of health information in the most common languages used by the Asian Indian Australian community was seen as important along with the frustration that the government bodies have little understanding of the complex linguistic needs of the Asian Indian communities. [M2]: Within our community we've got about 12 languages or even more... 
18 languages. Unless you have... language specific booklets, information ones, they won't understand... for instance, we persuaded the health department to produce handbooks in Tamil language, which is again a major language. They thought only Hindi was a major language. Aging members of the community were singled out as particularly in need of linguistically-diverse material. [M2]: Older people need it... it's an aging community you see. Language was also considered important when engaging health professionals. The example of a dietician was mentioned. [L]:... in my medical centre we've got a very good dietician, where we send the majority of our people... They speak the same language too, so it's very easy for them. Emphasis was also placed on temporary migrants, including students and the relatives who visit regularly. The following expresses the concerns of the participants well: [L]: Even more... there are a lot of students [that] have come here, have got permanent residency, and their parents are coming regularly... most of them are visitors, but they come every year 'cause they've got 10 year permit. They require this type of help in the local language. --- Discussion This paper presents the findings from a focus group study of Asian Indian Australians and their perceptions of heart disease and diet. The findings from this study provide insight into the challenges of achieving improved cardiovascular health outcomes amidst misconceptions regarding what constitutes a healthy diet. Migration as a substantial catalyst for dietary change, with subsequent impacts on cardiovascular health, is a key finding of this study. In addition, while Asian Indians have similar anthropometric characteristics, their cultural, linguistic, and religious attributes remain quite heterogeneous (21) and have a profound and wide-ranging influence on perceptions of health, heart disease, and dietary practice. An important insight from this study involves how culture forms a vital factor in determining dietary behaviors, as well as how its potential disruption through migration and subsequent acculturative stress can adversely impact cardiovascular health. This finding is congruent with that reported in other literature on migrant health (22,23). Asian Indians have their own culturally-based diets and dietary habits comprising mainly carbohydrate-dense foods (24). Biculturalization due to migration could result in consumption of both Indian and Australian foods (25), with neither replacing the other. For example, rice and roti continue to be consumed as the main meal, with pizza and burgers as snacks, resulting in an even denser carbohydrate diet. These dietary behaviors place the already at-risk Asian Indian population at an even higher risk of cardiovascular disease. Under-employment and changes to the patterns of how income is brought into the family unit add to the challenges of adapting to a new environment and, consequently, to health. In this study, participants expressed concern about unemployment and under-employment and how they affected the affordability of living in a country rated as one of the most expensive in the world (26). In a recent survey conducted in Australia, approximately one-fifth of skilled migrants were either unemployed or under-employed at 6 months following migration, which supports the findings obtained in this study (27).
Similar findings have been reported elsewhere, in particular, that economic hardship hinders healthy adaptation to the new country, leading to acculturative stress and a lower self-reported health status. Costs for fresh foods continue to be higher than so-called 'fast-food' or 'take-away' food, resulting in economically-disadvantaged populations opting for the more affordable yet less healthy 'fast-food' options. Health itself, as a construct, is seen from a perspective different from that of the dominant Anglo-Saxon Australian point of view. In particular, obesity was not readily perceived as a health-related issue. This finding is interesting as abdominal obesity is a well-established risk factor for heart disease (28), and specific cutoffs for abdominal obesity in Asian Indians (29) have been developed to initiate early management. In addition, a sense of fatalism governed perceptions of health. Fatalism describes a belief system in which the individual's locus of control over health behavior is externalized (30). Other studies of Asian Indian populations have reported similar issues (17,31,32). Participants identified the role of the family and community as important factors in developing future interventions. This is in keeping with the importance of family and wider social cohesion as determinants of health (33). The role of women as providers of meals within the family was identified specifically. In Indian society, men may cook; however, women are generally responsible for everyday cooking. A number of the participants stated that it was the responsibility of the wife or mother to shop and cook; therefore, undertaking further research with this group is vital. In addition, establishing a gender-sensitive approach to education regarding food selection and meal preparation is warranted. Community approaches to dietary health promotion, including media and places of worship, were also emphasized. Given the cohesive nature of the Asian Indian Australian communities, such approaches may prove efficacious. Evidence from the literature (34) supports the use of theory-based community interventions, informed and initiated by community members, in improving people's dietary habits and sustaining the change. There is, therefore, an urgent need to develop strategies that both respect unique cultural perspectives on health and engage in the appropriate primary and secondary prevention necessary to ameliorate risk. These strategies to improve dietary behavior should build on existing beliefs and attitudes to reduce the risk of cardiovascular disease. A major strength of the study is the recruitment of Asian Indian participants from different regions of India, given that the cultural, linguistic, and religious attributes of Asian Indians are highly diverse. In addition, the age range of the participants was varied, thus providing a broad perspective on the knowledge, attitudes, and beliefs relating to food practices. The participants in the study were community leaders who were educated and held jobs appropriate to their qualifications at the time of the focus group, although some reported having been under-employed or unemployed previously. In addition, two participants were health professionals who provided their views about their community from a health perspective. Therefore, the sample was able to cover a broad range of Asian Indian migration experiences while capturing the common themes expressed by the participants of both groups.
As such, the study was able to gain greater insights into the role that food plays in their lives. Focus groups have been found to facilitate discussion of complex issues among groups that share a common characteristic (16). Providing the key questions in advance also gave participants the opportunity to reflect on their responses prior to attending the focus group. Despite the evidence obtained from this study, the limitations inherent in undertaking such a study need to be acknowledged. The small sample, comprising primarily men, limits the extent to which the findings can be generalized. While the focus group is an effective method of uncovering data, the information may not capture the depth of experience as well as one-on-one interviews. Furthermore, the moderator has less control over the course of the discussion in focus groups. --- Conclusion Food and associated behaviors are an important aspect of the social fabric. Entrenched and inherent knowledge, attitudes, beliefs, and traditions frame individuals' point of reference around food and recommendations for an optimal diet. There are many interconnected factors influencing diet choice that go beyond culture and religion to include migration and acculturation. Interventions to improve dietary choices and thereby influence cardiovascular health will require a socially cohesive approach, one that includes families and communities and recognizes the social determinants of health. New contribution to the literature 1. Provides insights into the knowledge, attitudes, and beliefs relating to food practices and heart disease in Asian Indian Australians for the first time. 2. Highlights, from the participants' perspective, the impact of migration on dietary choice and health outcomes. --- Conflict of interests and funding The authors received funding from the University of Western Sydney, NSW Australia to conduct this study.
Background: Australia has a growing number of Asian Indian immigrants. Unfortunately, this population has an increased risk for coronary heart disease (CHD). Dietary adherence is an important strategy in reducing risk for CHD. This study aimed to gain a greater understanding of the knowledge, attitudes and beliefs relating to food practices in Asian Indian Australians. Methods: Two focus groups with six participants each were recruited using a convenience sampling technique. Verbatim transcriptions were made and thematic content analysis undertaken. Results: Four main themes emerged from the data: migration as a pervasive factor for diet and health; importance of food in maintaining the social fabric; knowledge and understanding of health and diet; and elements of effective interventions. Discussion: Diet is a complex, constructed factor in how people express themselves individually, in families and communities. There are many interconnected factors influencing diet choice that go beyond culture and religion to include migration and acculturation. Conclusions: Food and associated behaviors are an important aspect of the social fabric. Entrenched and inherent knowledge, attitudes, beliefs and traditions frame individuals' point of reference around food and recommendations for an optimal diet.
brief video scenes that have gone viral on social media around the world, touted to "make you cry." It seems that the nationality and identity of protagonists and audiences matter little for evoking this response. Or do they? Certainly the cultural contexts for these emotions are diverse, but are the emotions that emerge essentially the same, even if their cultural significance varies? We investigated whether individuals from different countries show similar responses to videos like the ones described above. Based on the kama muta model (Fiske, Schubert, & Seibt, 2017;Fiske, Seibt, & Schubert, 2017), we expected similar constellations of emotion terms, sensations, valence, appraisals, and outcomes across cultures. We will briefly summarize the literature, then present the kama muta model, and then report and discuss our studies collecting responses to video stimuli in seven samples from five countries. --- Being Moved: Phenomenology, Elicitors, and Outcomes In English, moved or touched or heartwarming seem to be the best descriptors of the emotion typically evoked by such video sequences. In the scientific literatures on emotions, philosophy, and artistic expression and reception, researchers have used various labels that are more or less synonymous: being moved (Cova & Deonna, 2014;Menninghaus et al., 2015), sentimentality (Tan & Frijda, 1999), elevation (Haidt, 2000), kama muta (Fiske, Seibt, & Schubert, 2017), or, in the musical context especially, chills or thrills (Kone<unk>ni, Wanic, & Brown, 2007). A review of the literature shows some overlapping ideas and observations regarding characteristics of these emotional states. When sufficiently intense, being moved appears to be characterized by at least three types of bodily sensations: goosebumps, chills, or shivers; moist eyes or even tears; and often a feeling of warmth in the center of the chest (Benedek & Kaernbach, 2011;Scherer & Zentner, 2001;Strick, Bruin, de Ruiter, & de Jonkers, 2015;Wassiliwizky, Wagner, & Jacobsen, 2015). The affective character of this emotional experience appears predominantly positive (Hanich, Wagner, Shah, Jacobsen, & Menninghaus, 2014), although it has been argued by some that the emotion entails coactivation of both positive and negative affect (Deonna, 2011;Menninghaus et al., 2015). In addition, the motivation of this experience appears to include approach tendencies, such as increased prosocial or communal behavior and strengthened bonds (Schnall & Roper, 2012;Schnall, Roper, & Fessler, 2010;Thomson & Siegel, 2013;Zickfeld, 2015). Elevation is assumed to motivate affiliation with others as well as moral action tendencies (Pohling & Diessner, 2016). Being moved is assumed to lead to a reorganization of one's values and priorities (Cova & Deonna, 2014), to approaching, bonding, helping, as well as promoting social bonds (Menninghaus et al., 2015) and to increased communal devotion (Fiske, Seibt, & Schubert, 2017). Less consensus has been reached on what exactly evokes such emotional experiences. As the main appraisal pattern, researchers have posited themes of affiliation and social relations, realization of core values, or exceptional realization of shared moral values and virtues (Algoe & Haidt, 2009;Cova & Deonna, 2014;Fiske, Seibt, & Schubert, 2017;Menninghaus et al., 2015;Schnall et al., 2010). Specifically, the elevation framework (Haidt, 2000; see Thomson & Siegel, 2017, for a review) argues that moving experiences are elicited by observing acts of high moral virtue. 
Cova and Deonna (2014) have theorized that the emergence of positive core values evokes being moved. Menninghaus and colleagues (2015) proposed that being moved is elicited by significant relationship or critical life events that are especially compatible with prosocial norms or self-ideals. Frijda (1988) characterized sentimentality as evoked by a precise sequence: Attachment concerns are awakened; expectations regarding their nonfulfillment are evoked, and then they are abruptly fulfilled (see also Kuehnast, Wagner, Wassiliwizky, Jacobsen, & Menninghaus, 2014; Tan, 2009). Appraised situations such as these can arouse strong feelings of being moved or touched (Konečni, 2005; Scherer & Zentner, 2001; Sloboda, 1991). These emotion constructs have typically been posited to occur empathically through narratives, theater, movies, or music, rather than resulting from firsthand encounters. Research assessing moving or touching experiences has been conducted using U.S. American (Schubert, Zickfeld, Seibt, & Fiske, 2016; Thomson & Siegel, 2013), British (Schnall & Roper, 2012; Schnall et al., 2010), French-speaking Swiss (Cova & Deonna, 2014), German (Kuehnast et al., 2014; Menninghaus et al., 2015; Wassiliwizky, Jacobsen, Heinrich, Schneiderbauer, & Menninghaus, 2017), Japanese (Tokaji, 2003), Dutch (Strick et al., 2015), Norwegian (Seibt, Schubert, Zickfeld, & Fiske, 2017), and Finnish (Vuoskoski & Eerola, 2017) participants. Yet each of these studies has used different elicitors and different methods, so, to date, no study has systematically compared responses to moving stimuli with the same measures across a range of cultures. --- The Kama Muta Model: Intensified Communal Sharing as a Universal Elicitor Interviews in many different cultural contexts and languages, as well as ethnographic material from various places and times, suggest that people from a wide range of cultures and times have similar feelings and sensations in a set of situations that is broader than previously assumed, yet sharply demarcated. For example, elevation theory states that elevation is primarily a witnessing emotion (Algoe & Haidt, 2009; Haidt, 2000; Thomson & Siegel, 2017), yet the ethnographic material suggests that in many cultures and times, people report the typical being-moved sensations and motivations when feeling one with a divinity, or with their football team (Fiske, Seibt, & Schubert, 2017). Furthermore, while some theories stress prosocial norms (Menninghaus et al., 2015), moral beauty (Haidt, 2000), or core values (Cova & Deonna, 2014) as central appraisal themes, interviews and ethnographic material suggest that a person who sees a very cute sleeping infant or one who nostalgically remembers her first love can also feel this emotion. Experiments show that seeing cute kittens and puppies also evokes it (Steinnes, 2017). Rather than any specific deed, the affection itself in the perceiver seems to evoke the feeling in these cases. While some theories stress as central attributes of the emotion the coactivation of sadness and joy (Menninghaus et al., 2015), or the contrast between loss and attachment (Neale, 1986), we have found many reports where there is no apparent negative side, as when a guy who is deeply in love proposes to his girlfriend, and both feel this emotion intensely (the "Proposal" video in the current study had this theme).
Kama muta theory predicts that a sudden intensification of communal sharing evokes this emotion, and that it is universal because the underlying social-relational dynamic is universal. This prediction is based on Relational Models Theory (Fiske, 1991, 1992, 2004b), which posits four culturally universal relational models to coordinate social life, implemented in culture-specific ways. These models are Communal Sharing (CS), Authority Ranking (AR), Equality Matching (EM), and Market Pricing (MP), which are based, respectively, on equivalence, legitimate hierarchy, even matching, and proportionality. Individuals in communal sharing relations are motivated to be united and caring. Communal sharing typically underlies close relations among kin, in families, between lovers, and in close-knit teams, but is also used to construct larger and more abstract social groups and identities. Individuals in a communal sharing relation focus on what they have in common, and sense that they share some important essence such as "blood," "genes," national essence, or humanness. Communal sharing is communicated by and recognized from behavior that connects bodies or makes bodies equivalent and thus indexes the sharing of substance: touch, commensalism or feeding, synchronous rhythmic movement, exchange of bodily fluids, transmission of body warmth, and body modification (summarized as consubstantial assimilation by Fiske, 2004b). Communal sharing is also recognized from behavior that responds to the needs of the relational partner without expecting to be repaid, even among strangers. Relational models theory thus has a broad yet precisely characterized notion of communal sharing relationships with different types of entities, such as humans, animals, deities, music, or nature. Communal sharing is operating when people perceive themselves as, in some significant respect, essentially the same as these other entities, often because they have a strong experience of consubstantial assimilation, as in celebrating the Eucharist. Communal sharing relationships can be stable or transient, and perceived by both sides or not. We infer them from acts of kindness and of consubstantial assimilation. This wide range of circumstances fits the wide range of constellations where we found evidence of kama muta experiences. The universal importance of communal sharing makes it likely that there is a positive emotion signaling the event of a communal sharing relation suddenly intensifying (Fiske, 2002, 2010; Frijda, 1988). We posit that this is the emotion that people often call being moved. In a number of languages, labels for this emotion use similar metaphors of passive touch or passive movement (or stirring), or warmth in the chest or heart. In Mandarin, you might say you feel gǎn dòng (感动); in Hebrew, noge'a lalev; in Portuguese, comovido/a; and in Norwegian, rørt. This emotion leads in turn to an increase in communal feelings toward those who evoked the emotion. Individuals make sense of and share this emotion through culture-specific concepts and practices (Barrett, 2014; Wierzbicka, 1999). English speakers sometimes use moved or touched for other experiences than the ones we denote as kama muta; conversely, they may denote kama muta with other terms (e.g., nostalgia, rapture, tenderness). Also, communal sharing intensifications may sometimes go unrecognized and unlabeled, yet still evoke the same motives.
However, we have found that in many languages, there exist one or more words that are typically used for the emotion evoked by sudden intensifications of communal sharing. For scientific purposes, we cannot rely on imprecise and inconsistently used vernacular words from living languages. To give this construct a precise, consistent scientific definition, we name it with a lexeme from a dead language: kama muta (Sanskrit, literally meaning "moved by love"), which may or may not closely correspond to one or more emotion terms in any given language. --- Kama Muta as a Universal Emotion We predict that universally, a kama muta response is elicited by a sudden intensification of communal sharing, and that the emotion in turn makes persons affectively devoted and morally committed to communal sharing with those who evoked the emotion in them, and to a lesser degree with some others. In English, communal sharing relationships are typically labeled and reported as closeness (Aron, Aron, & Smollan, 1992). For Norway and the United States, we found indeed that an appraisal of increased closeness was related to being moved (Schubert et al., 2016;Seibt et al., 2017). However, no evidence has been presented yet on the universality claim, nor on the proposition that kama muta leads to feeling close and communal with the person who evoked it. As explained above, communal sharing is recognized from acts of consubstantial assimilation, or from acts of great care. Consubstantial assimilation, in turn, encompasses hugs, reunions, wishing or imagining another near, kissing, holding hands, sharing food, or dancing or singing in synchrony. Acts of great care are characterized by attending to the needs of another, which can range from simple kindness to heroic sacrifice. Both should lead to perceived closeness. In addition, when experienced between an individual and a group, consubstantial assimilation should be perceived as inclusion, while acts of great care should be perceived as moral acts. Both should make the perceived actor seem particularly human. In both cases, overcoming obstacles on the way to closeness evokes suspense that should increase the perceived suddenness of communal sharing intensification. To start examining the claim that kama muta is universally generated by sudden intensification of communal sharing, we sampled from cultures in different regions of the world. These cultures differ in emotional expressivity, as well as in some factors potentially related to it (some sorts of individualism and collectivism, gender equality, and historic heterogeneity; Matsumoto & Seung Hee Yoo Fontaine, 2008;Rychlowska et al., 2015). In addition, we were especially interested in comparing Western and East Asian cultures, as these have been found to differ markedly in the configuration and dynamics of facial emotional expression (Jack, Garrod, Yu, Caldara, & Schyns, 2012). We build on two prior studies that evoked kama muta through autobiographic memories and through a video (along with other videos eliciting other emotions) in Norway and the United States and measured five appraisals (Seibt et al., 2017). The research question is whether people in a wider range of cultures experience kama muta and whether these experiences are predicted by measures indicating intensified communal sharing. --- Overview of the Current Studies We conducted studies in the United States, Norway, China, Israel, and Portugal. 
An overview of the different samples, including information on their demographics, sample location, and number of stimuli, is provided in Table 1. Apart from being conducted in different languages, the procedures, stimuli, and materials were mostly identical but differed on some occasions as highlighted below. We identified a set of labels for the kama muta experience in each of the five languages. We presented the same set of four videos in all five countries, along with additional videos that were chosen to fit the culture where the study was run, to have both overlap and variety (we also included one comic to increase stimulus variability). We used video stimuli because they had been shown to evoke the emotion in many participants in the United States and Norway (Seibt et al., 2017). We selected them based on a search for keywords such as "moving" or "heartwarming" in various languages, and based on having similar length (90-180 s). Based on the universality claim of kama muta theory, we hypothesized that across all five countries we would detect kama muta experiences as a co-occurrence of using kama muta labels to describe the experience, reporting typical sensations, a positive experience, and feeling communal toward the protagonist as an outcome. We further expected that participants across all five nations would experience kama muta when communal sharing relations suddenly intensify. Specifically, the intensity of kama muta as indicated using the labels identified should be predicted (Hypothesis 1) by the judged positivity of the feeling, more than by its negativity, in all five countries, and (Hypothesis 2) by the sensations of tears, a warm feeling in the chest, and chills/goosebumps in all five countries. We further predicted (Hypothesis 3) that the intensity of kama muta relates to feeling unity and closeness with the protagonist in the video in all five countries. Based on kama muta theory's claim on the central appraisal pattern, we hypothesized that the intensity of kama muta would be predicted (Hypothesis 4) by the appraisal of increased closeness among protagonists in all five countries. All studies presented here were examined and approved by the Internal Review Boards of the respective institutions at which they were performed. For all studies, participants were presented with written information about study procedures, and the contact information of the principal investigator. By proceeding with the study, participants indicated their consent. --- Studies 1-7 --- Method Participants. In total, 671 participants were recruited through various means at five different sites: the United States, Norway, China, Portugal, and Israel. An overview of the study details is presented in Table 1, and descriptive statistics for the respective samples are provided in Supplementary Tables S1 and S2. (Notes to Table 1: exclusion was based on cases where the screen was displayed for shorter than the actual length of the video, with a buffer of 10 s, or for longer than 10 times its length, which allowed for long loading times; some measures not relevant to the present hypotheses were presented in English; in contrast to the other countries, stimuli were not presented in random order.) Participants were excluded based on the duration of video presentation (see Table 1). In the Chinese sample, four cases were excluded because of a computer error. Two participants were excluded because they were younger than 18.
The final dataset consisted of 624 participants (407 females, 178 males, 39 unspecified gender) ranging from 18 to 74 years of age (M = 29.90, SD = 11.71). With a few exceptions in Norway and Portugal, items were completed in the languages of the respective countries; hence, language is ignored as a factor. We drew two samples each from the United States and Norway, because we introduced a few changes after running the first wave in these two countries (see below) and decided to re-run the study in these countries with new stimulus sets and the changes in place, to broaden our evidence base. Nevertheless, the changes were small enough to justify including both samples in the final analysis. Overview and design. The topic of the studies was introduced as emotional reactions and media. After giving informed consent, participants were told that they were going to watch a number of videos. In most samples, participants were required to watch two videos and invited to continue watching (up to 10). In the Chinese sample, participants were instructed to complete all seven. Stimuli were presented in random order except for the Chinese sample. Materials. A total of 26 videos and one comic strip were utilized across all samples. An overview of the allocation and a summary of all stimuli are provided in the Supplementary Material (Table S2). We used one set of 10 videos in both the U.S. I and Norway I samples, and a different set in the U.S. II and Norway II samples. We showed three unique videos in China and two in Portugal. All other videos overlapped among the different samples, and four videos were shown in all five countries. Following each video clip, participants were presented with the questions "How moved were you by the video?" and "How touched were you by the video?" on 5-point scales anchored at not at all and very much. See Table 1 for the respective translations. In the Portuguese sample, only one item was used, while the Israeli version included an additional item asking about "How stirred were you by the video?" Valence was assessed by two items: "How positive [negative] is the feeling elicited by the film?" 1 on the same 5-point scale. For bodily experiences, we asked, "What bodily reactions did the film elicit in you? Mark all the bodily reactions that you were or are still experiencing." Participants answered items on goosebumps, chills, moist eyes, crying, tight throat, and a warm feeling in the chest, along with some filler items, on 5-point scales anchored at not at all and very much. In the first U.S. and Norwegian samples, these sensation items were rated on dichotomous scales and there was no item for crying. Five appraisals were assessed in all studies: "One or several of the characters did something that was morally or ethically very right" (moral), "All or some characters in the movie felt closer to each other at the end (compared with at the beginning)" (closeness), "Somebody who was excluded at first was included at the end" (inclusion), "All or some of the characters overcame big obstacles during the events" (obstacles), and "All or some of the characters became somehow more human during the events" (human). These were rated on 5-point scales ranging from not at all to to a high degree. Afterward we assessed, among some additional responses to the video clips, feelings of closeness to the main character(s) of the video clips and how much unity the video clip elicited on 5-point scales ranging from not at all to to a high degree. 
--- Results According to our hypotheses, the intensity of kama muta should be predicted in all five countries by (H1) the judged positivity of the feeling, more than by its negativity; (H2) the sensations of tears, a warm feeling in the chest, and chills/goosebumps; (H3) feeling unity and closeness with the protagonist; and (H4) the appraisal of increased closeness among protagonists. We tested each of these hypotheses in separate multilevel models for each sample, regressing a kama muta index on these various predictors. We then combined the samples meta-analytically. General modeling strategy. We tested our hypotheses with multilevel regression procedures (lme4 in R). Participant and video were added as random factors. Intercepts were allowed to vary randomly according to both participant and video to model different levels of the dependent variable for the different videos and participants (Judd, Westfall, & Kenny, 2012). For each sample, the unstandardized regression coefficients were standardized and employed as an estimate of effect size r (Bowman, 2012). The seven effect sizes were meta-analyzed utilizing the metafor package (Viechtbauer, 2010) in R. For each relation, a random-effects model was fitted using a restricted maximum likelihood procedure (REML). Effect sizes were tested for differences across samples. Throughout this article, we report standardized effect sizes (r) and their corresponding 95% confidence intervals in brackets [a, b]. We do not present p values for the hypothesized effects because their significance can be easily inferred from the confidence intervals. Detailed information on differences across samples, videos, or gender of the participants is presented in the Supplementary Material. Index of being moved. To evaluate whether ratings of being moved and being touched, or their translations in other samples, could be combined into a common index, we estimated an unconditional three-level hierarchical model in HLM (Hierarchical Linear Modeling Software) for each separate sample (Nezlek, 2016). Reliabilities at Level 1 were sufficient, ranging from .90 to .96 (see Supplement for details). Therefore, ratings of being moved and touched were averaged into the main dependent variable (hereafter, "moved") of the study after subtracting 1 so that the variable ranged from 0 to 4. For the Israeli study, three items were combined, whereas the Portuguese study included only one item, which was utilized as the main dependent variable. Valence of being moved. To assess whether kama muta is experienced as a positive feeling (Hypothesis 1), we regressed being moved on ratings of how positive and negative the feeling was for each sample separately. The interaction of positivity and negativity was not significant in any sample and was therefore dropped from the final model. The final random-effects model indicated an overall effect size estimate of r = .59 [.53, .65] for positivity on being moved (Figure 1). The overall effect size of negativity on being moved was significantly smaller, r = .16 [.08, .23] (Figure 2). Effect sizes differed significantly across samples for positivity, Q(6) = 31.82, p < .001, I² = 82.46 [56.91, 96.69], as well as for negativity, Q(6) = 25.92, p < .001, I² = 75.75 [40.79, 94.89]. Sensations. To test Hypothesis 2, we combined items on goosebumps and chills into a chills score, while ratings on moist eyes, crying, and a tight throat were combined into a tear score.
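For illustration, the general modeling strategy described above can be sketched in R. This is a minimal sketch and not the authors' analysis script: the data frame sample_data, its column names (moved, positivity, negativity, participant, video), and the per-sample effect size table per_sample_effects (with columns r_value and se_value) are hypothetical placeholders, and the simple coefficient standardization shown is only one illustrative way to obtain an r-type effect size.

library(lme4)     # multilevel (mixed-effects) regression
library(metafor)  # random-effects meta-analysis

# Within one sample: regress the kama muta ("moved") index on predictors,
# with crossed random intercepts for participant and video.
fit <- lmer(moved ~ positivity + negativity + (1 | participant) + (1 | video),
            data = sample_data)

# Illustrative scaling of the unstandardized coefficient into an r-type
# effect size (hypothetical standardization; not necessarily the method used).
b <- fixef(fit)["positivity"]
r <- b * sd(sample_data$positivity) / sd(sample_data$moved)

# Across samples: combine the per-sample effect sizes in a random-effects
# meta-analysis fitted by restricted maximum likelihood (REML).
meta <- rma(yi = r_value, sei = se_value, data = per_sample_effects,
            method = "REML")
summary(meta)  # overall r, 95% CI, and Q / I^2 heterogeneity statistics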
Being moved and touched was regressed on the chills score, on the tear score, and on the item on warmth in the chest, without interactions, in three separate models for each sample. The overall effect size of crying on being moved was r = .54 [.46, .63] (Figure 3), followed by warmth, r = .41 [.31, .50] (Figure 4), and finally chills, r = .31 [.25, .37] (Figure 5). Effect sizes for crying differed across the samples, Q(6) = 107.68, p < .001, I² = 90.55 [77.08, 97.88]. The same held true for warmth, Q(6) = 50.35, p < .001, I² = 89.78 [74.60, 97.95], and for chills, Q(6) = 19.08, p = .004, I² = 66.27 [19.91, 92.17]. Communal outcome. Items on experiencing unity and closeness with the protagonists of the videos were combined into a communal outcome index. For each sample, being moved was regressed on communal outcome. The overall effect size of communal outcome was r = .59 [.51, .66], supporting Hypothesis 3 (Figure 6). Effect sizes differed across the samples, Q(6) = 50.91, p < .001, I² = 89.19 [73.35, 97.86]. Appraisals. To test our fourth hypothesis, in a first model, we regressed being moved on the closeness item. The overall effect size was r = .29 [.22, .37] (Figure 7), with effect sizes differing across samples, Q(6) = 24.46, p < .001, I² = 77.24 [43.50, 95.56]. In a second model, being moved was regressed on all five appraisal items. In this joint model, being moved was predicted by increased closeness, r = .12 [.09, .16], perceiving actions as morally right, r = .21 [.17, .25], perceiving someone becoming more human, r = .19 [.11, .27], and perceiving that obstacles were overcome, r = .08 [.04, .13]. Inclusion had no overall effect, r = .01 [-.02, .05]. Effect sizes did not differ significantly across samples, except for becoming more human. --- General Discussion In seven samples from five countries in East Asia, the Middle East, North America, and Northern and Southern Europe, we measured responses to videos. We used a total of 26 videos, and measured the amount of kama muta evoked using appropriate terms translating moved and touched in five languages. In addition, we assessed the valence of the experience, a set of sensations, appraisals, and communal outcomes. As predicted, in each sample, we found that the kama muta index was related to experiencing the emotion as positive when controlling for negativity, and, to a much smaller extent, also as negative when controlling for positivity. Kama muta covaried most strongly with tears, then with a feeling of warmth in the chest, and least strongly with chills or goosebumps. The kama muta index was predicted by judged increases of closeness among the characters in the video and by three other appraisals. It was related to feeling unity and closeness with the characters. We focused in the current study on identifying kama muta across cultures, rather than on explaining differences among cultures. In discussing our results, we will thus focus on the overall picture. We briefly discuss the cultural heterogeneity again in the section on limitations at the end. While there was significant variation in all effects across samples, the effects were positive and significant in each sample individually. The kama muta model derives a universal emotion with many names from a universal relational model (Fiske, 1991; Fiske, Schubert, & Seibt, 2017; Fiske, Seibt, & Schubert, 2017).
Other models of being moved do not discuss the question of cultural differences or similarities regarding this emotion, nor do they address the differences in meaning of vernacular lexemes in different languages (Wierzbicka, 1999). Our cultural comparisons revealed similar appraisals, sensations, valence, and outcomes of kama muta across the five countries. This lends support to the prediction that kama muta is a universal emotion, regardless of whether and how it is labeled in vernacular usage. --- Valence Two aspects are noteworthy about our findings regarding valence: The first is the strong and consistent characterization of kama muta as a positive feeling across all samples. The second is the value of assessing positivity and negativity separately. Across all samples, we found that greater negativity predicted greater kama muta when its shared variance with positivity was controlled for. However, this effect was much smaller than the one for positivity. We would not have found this pattern if we had assessed valence on only one dimension. It is possible that the instances where negativity contributed to being moved were, in fact, not kama muta experiences, but resulted from a broader usage of the terms we used to assess kama muta. It is also possible that some negativity prior to the eliciting event increased kama muta (Fiske, Seibt, & Schubert, 2017). Supporting this reasoning, Schubert et al. (2016) found that, when the linear and quadratic trends were removed, continuous ratings of sadness had no cross-correlation with continuous ratings of being moved while participants watched videos like the ones shown in the present study. Finally, the valence of the feeling may be complex for some people watching some videos. The larger picture is, however, that kama muta is predominantly a positive emotion, elicited by a positive appraisal. Our valence results fit several being-moved models that predict being moved to be a predominantly positive emotion (Cova & Deonna, 2014; Hanich et al., 2014; Kuehnast et al., 2014; Tokaji, 2003), yet are at odds with others that see it as predominantly negative (Neale, 1986). --- Sensations Across five different regions, languages, and cultures, we found the same three sensations to be predictive of kama muta. This supports our model of kama muta as a universal emotion with coordinated changes across several systems, resulting in an experience consisting of several components. We measured tears with a combination of moist eyes, crying, and tight throat; and chills as a combination of chills and goosebumps. Overall, tears were most strongly correlated with being moved. This, along with the fact that being moved was characterized as a predominantly positive feeling, suggests that kama muta weeping is different from sadness weeping. There is no consensus in the literature on crying, and several authors argue that negative components in the being-moved experience, such as helplessness, provoke the tears (Miceli & Castelfranchi, 2003; Vingerhoets & Bylsma, 2015). However, the present data do not support that argument. A feeling of warmth in the chest was the second sensation. At this point, it is unclear what causes this sensation; possibilities include changes in cardiac activity, vagal tone (Keltner, 2009), or feedback from them. This feeling may be related to a gesture we often observe when people are strongly moved: placing one or both hands over the center of the chest (something that people are not always aware of doing).
Chills and goosebumps were the third sensation related to kama muta. Although these skin sensations also occur in fear responses and when having uncanny experiences (and when exposed to low ambient temperature), their combination with tears, warm feelings in the chest, and positivity seems to be specific to kama muta (cf. Seibt et al., 2017). --- Appraisals The main appraisal we tested was one of increased closeness, an operationalization of our construct of a sudden intensification of communal sharing. As predicted, viewers' appraising characters as becoming closer significantly predicted increases in kama muta. In addition, increased closeness remained a significant predictor after controlling for appraisals of morality, becoming more human, inclusion, and overcoming obstacles. When testing all appraisals, morality, increased closeness, becoming more human, and overcoming obstacles each predicted kama muta. How do people judge morality? Acting morally is doing the right thing, and what is the right thing depends on which relational model is applied (Rai & Fiske, 2011): Acts are seen as moral when they fulfill the ideals of the expected relational model and as immoral when the relational model is violated. We believe that the morality appraisal is best understood in this way: Somebody was seen as acting morally because she or he fulfilled the ideals which underlie communal sharing relationships such as compassion, responsiveness to needs, kindness, generosity, and inclusiveness. Communal sharing consists in needbased sharing and consubstantial assimilation: Where one is, people expect the other. However, many individual acts are primarily one or the other: Either the act consists in saving someone, helping and protecting them, or it consists in touching, hugging, kissing, approaching, and synchronizing one's movements to the other. So people observing acts of need-based giving may infer closeness but they are most likely to focus first and foremost on the need-based giving, which is best captured by the morality appraisal. However, morality is not a very sharply defined construct as a folk concept or as a scientific concept (Haste, 1993); so future studies will need to corroborate this interpretation by asking more specific questions. Seeing someone as becoming more human implies that someone can be more or less human (Haslam, 2006). Whereas the dehumanization and infrahumanization constructs have generally been studied as perceptions of groups, here, we assessed humanness judgments about individual characters. Given that this judgment is rather remote from the actions depicted in the videos, it is unclear whether it leads up to the emotion or is a consequence of it. Even though we call them appraisals, we do not believe these judgments, as such, directly
cause the emotion. Rather, we believe these judgments of humanization indicate the perception of an intensification of communal sharing that causes being moved. Perceptions of humanness may contribute to kama muta because they indicate that the characters are seen as relatable and sympathetic, or because they indicate that the characters are seen as sharing something essential in common with the participant (Haslam, 2006; Kteily, Bruneau, Waytz, & Cotterill, 2015; Leyens et al., 2000). Sharing a common essence, in turn, is the core of how we represent communal sharing relationships (Fiske, 2004a). Thus, the findings for the humanness appraisal can be explained by the kama muta model, but they are not a test of the model. Our results lend cross-cultural empirical support to theoretical analyses seeing being moved as evoked by communal feelings or acts: solidarity, a communion of souls, a generous act, or reconciliation (Claparède, 1930); fulfillment of the phantasy of union (Neale, 1986); resolution of attachment concerns (Frijda, 1988); love/acceptance (Panksepp, 1995); reunification (Tan & Frijda, 1999); love, forgiveness, sacrifice, and generosity (Konečni, 2005); prosocial acts or reconciliatory moments (Hanich et al., 2014). Yet many of these models mention not one, but several alternative elicitors, not only the communal ones listed here but also others. The kama muta model traces all kama muta back to a common core: the sudden intensification of communal sharing. Perhaps the most similar theory to ours is the elevation model, which assumes that an act of generosity, charity, gratitude, fidelity, or any strong display of virtue evokes elevation (Algoe & Haidt, 2009). The difference from the kama muta model is best illustrated with an example. As we know from another study (Schubert et al., 2016), the peak of the kama muta experience in the lion video (one of the four videos presented in all five countries) occurs when a lion that had been saved and raised by two young men, and then released in Africa, later recognizes them in the wild, runs toward them, and hugs them repeatedly. We think this act exemplifies communal sharing by showing closeness through a joyous reunion with hugging, laughing, and relief, rather than a virtuous act by the lion or by the men at that moment.
People around the world understand this gesture, without words, and react to it emotionally, often with tears, a warm chest, or goosebumps. In sum, the kama muta model seems to most parsimoniously explain the three appraisals that best predicted being moved across the five cultures. Our model is based on relational models theory, which integrates judgments of morality; acts of touching and other signs of closeness; social identity; humanness; and many other constructs into a common concept, communal sharing: the feeling of equivalence. This led to our theory that the many situations that people are likely to identify as moving, rørt, comovido/a, noge'a lalev, or gǎn dòng (感动) all have something in common, the sudden intensification of communal sharing. This social-relational transition universally elicits the same emotion, kama muta, involving the same physiological sensations and motives. Its cultural significance may vary considerably, but we did not investigate the meanings of kama muta in these five countries. --- Limitations Although the current study focused on intensification of communal sharing, the kama muta model predicts that it is sudden intensifications that evoke kama muta. We assessed this aspect with overcoming obstacles, but the model defines suddenness as an abrupt increase in communal sharing, or salience of communal sharing against a prior or default background of loss, separation, or concern about togetherness. This background can be an obstacle, but it can also be contrary expectations, norms, apprehensions, or a reality against which the foreground of a communal sharing act, event, fulfillment, or fantasy is contrasted (see also Frijda, 1988). The theory that suddenness or sharp contrast is essential still awaits empirical verification, either by developing good measures or by manipulating it experimentally. Across all five languages, people sometimes use the terms we used to assess kama muta to denote other "nearby" emotions or feelings, such as sadness or awe. This is not an insurmountable methodological problem for us, because along with labeling, we look for convergent evidence from appraisals, sensations, and valence to classify an episode as an instance of kama muta. It would be a problem for our model or methods, however, if increased closeness was not perceived in most instances of being moved, rørt, comovido/a, noge'a lalev, or gǎn dòng (感动), because we assume that the vernacular labels for kama muta in these languages do approximately coincide with the kama muta construct. The concepts of equivalence and bias have been put forward with regard to cross-cultural assessment and interpretation (van de Vijver & Tanzer, 2004). In our studies, we observed not only similarities but also considerable variation among the samples, both within and across cultures. This variation or bias may have many sources: the use of different video material, which was confounded with study sample (method bias); differences in the meanings of questions due to cultural and language variations (item bias); differences in sample characteristics like age and socioeconomic status (SES); and of course also differences in kama muta prototypes, precedents, paradigms, precepts, and proscriptions across languages and cultures (construct bias). Due to methodological restrictions, we cannot infer equivalence or measurement invariance from the present data, because we assessed most of our constructs with one or two items.
Our results show that intensifications of communal sharing are universally recognized and evoke a quite similar emotional response, a construct which we denote kama muta. This is a basis for cultural understanding: Even people lost in translation can recognize communal sharing when they see it, and in this way, figure out important relational building blocks in cultures other than their home cultures. Studies like the present one help to make this implicit relational cognition explicit, and can thereby help people navigate their increasingly multicultural societies and understand each other by recognizing something they all have in common, the kama muta emotion-whatever particular meanings they endow it with. --- Authors' Note Ravit Nussinson has previously published under the name Ravit Levy-Sadot. Thomas W. Schubert led the design of the studies, Beate Seibt wrote the first draft of the article, and Janis H. Zickfeld conducted the main analyses. All authors were involved in the translation, in data collection, and in revising the article. We thank the kamamutalab.org for helpful feedback and discussions. --- Declaration of Conflicting Interests The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article. --- Note 1. We provide the original questions for all languages in the supplemental material. Here, we use the English translations, knowing that the terms have different extensions, connotations, prototypes, and context-dependent meanings, reducing direct comparability across languages.
Ethnographies, histories, and popular culture from many regions around the world suggest that marked moments of love, affection, solidarity, or identification everywhere evoke the same emotion. Based on these observations, we developed the kama muta model, in which we conceptualize what people in English often label being moved as a culturally implemented social-relational emotion responding to and regulating communal sharing relations. We hypothesize that experiencing or observing sudden intensification of communal sharing relationships universally tends to elicit this positive emotion, which we call kama muta. When sufficiently intense, kama muta is often accompanied by tears, goosebumps or chills, and feelings of warmth in the center of the chest. We tested this model in seven samples from the United States, Norway, China, Israel, and Portugal. Participants watched short heartwarming videos, and after each video reported the degree, if any, to which they were "moved," or a translation of this term, its valence, appraisals, sensations, and communal outcome. We confirmed that in each sample, indicators of increased communal sharing predicted kama muta; tears, goosebumps or chills, and warmth in the chest were associated sensations; and the emotion was experienced as predominantly positive, leading to feeling communal with the characters who evoked it. Keywords communal sharing, cross-cultural, tears, goosebumps, being moved, kama muta An American soldier being reunited with his daughter, Australian men being welcomed by their lion friend in Kenya, a Thai man's doctor canceling his huge bill in gratitude for a kindness years before, a Norwegian singer commemorating the massacre of July 22, 2011. All of these describe
Introduction This study investigates the escalating commodification of social institutions such as marriage, and of human emotions, within contemporary capitalist society. The Marriage Bargain, a literary work by Jennifer Probst, serves as a poignant representation of this stark reality concerning the evolving characteristics of social institutions and human emotions in a late capitalist society. The research is specifically centered on a Marxist analysis of The Marriage Bargain, which not only addresses the overarching themes of love, romance, and marriage, but also emphasizes the commodification of marriage. Despite the apparent focus on love and romance, the novel provides ample room for a Marxist literary examination. Commodification is the process of objectifying human emotions and is often associated with reification. In Jennifer Probst's novel, The Marriage Bargain, the characters Alexandria Maria McKenzie and Nicholas Ryan utilize marriage as a means to solve their financial difficulties, thereby challenging the traditional notion of marriage as a bond between two individuals. By marrying for practical reasons, such as saving a family home or inheriting a corporation, the characters treat marriage as a commodity rather than a union of emotional and spiritual connection. Marriage, as a social institution, is typically viewed as a compromise between two individuals of opposite sexes, with love, care, and support being key components. However, conflicts and misunderstandings between partners can sometimes lead to divorce. Jennifer Probst's novel, The Marriage Bargain, sheds light on the challenges faced by traditional marriage in modern societies. The utilitarian values of the characters in the story challenge the conventional concept of marriage, exposing the emerging social problem of the reification of human relationships. Late capitalism's economic power redefines traditional human relations, kinship, and marriage in transactional terms. This process highlights the shifting notions of commodification, where abstract human emotions and norms are traded for objective monetary values. This trend results in the commodification of human emotions, reducing human relations to use and exchange values in which they are traded like material goods. In Jennifer Probst's novel, The Marriage Bargain, the protagonists Alexandria Maria McKenzie and Nicholas Ryan enter into a contract that stipulates the terms and conditions of their living arrangement as if they were married. This contractual marriage serves as a means for Nicholas to inherit his uncle's assets. This study aims to examine how the social institution of marriage is being commodified to fulfill one's materialistic desires. Furthermore, the investigation seeks to shed light on the underlying reasons for challenging the conventional concept of marriage and treating human relationships as a commodity. Jennifer Probst is a prominent lesbian novelist who offers a nuanced portrayal of the complex and transitional society of late twentieth-century America. Probst artfully interweaves her personal experiences with social issues, featuring characters from diverse socioeconomic backgrounds. Despite the broad range of topics explored in her works, Probst maintains a stylistic simplicity that blends idealism and realism, rendering her works accessible to a wide readership.
Probst's literary contributions in exploring introspective themes are unparalleled, offering a positive and transformative impact on readers grappling with personal struggles and confusion. Her exceptional contributions have earned her a place among the distinguished authors of the modern era, with her unique perspective and style emerging as a singular voice in contemporary postmodernist literature. --- Research Objectives To analyze the commodification of marriage and human emotions in Jennifer Probst's The Marriage Bargain from a Marxist perspective. To examine how the utilitarian values of the novel's characters challenge the traditional concept of marriage and use it as a commodity to satisfy their material needs. To explore the underlying reasons for the reification of human relationships in late capitalism and how the commodification of marriage serves as an illustrative example of this phenomenon. --- Review of Related Literature Several scholars have scrutinized Jennifer Probst's literary works from various perspectives. Vailas (2004) has shed light on how Probst's writing has effectively disseminated liberal ideals in society. The liberal values that Probst espouses in The Marriage Bargain, however, exhibit certain elements of cynicism. Vailas remarks as follows: On one level, The Marriage Bargain deals with the idea of a young generation of New York who have become enamored with deviant passions which leave them uprooted from the established ideals and norms of society. But, on a more intimate level, it examines the idea of happiness and whether or not man (or woman) is destined to ever find contentment. (12) According to Vailas, The Marriage Bargain portrays certain principles that can guide an individual whose life is in disarray. However, the perspective towards marriage that emerges appears to be one of cynicism, as the characters are portrayed as either unfaithful or marrying for financial benefit. This pessimistic outlook towards life leads to abstract philosophical musings. Smithson (1979) provides a succinct assessment of Jennifer Probst's notable literary works, delineating a gradual shift in tonality and thematic content. Smithson's perspective is presented below: Probst's novels bear the mark of imaginary or uncertain crimes. This can be called an innovation in the field of commercial literature. In The Marriage Bargain, the billionaire Nicholas Ryan is interested in nominal marriage. He is hopeful that this contract marriage is likely to land him in favorable condition. But the result turns out to be unexpected. At last Nicholas is filled with worry and anxiety. Nick's case of contract marriage is quite different. (17) Smithson asserts that Jennifer Probst's representative works contain innovative elements. Although the subject matter of The Marriage Bargain may be unpalatable and shocking to many readers, it is still relatively new. Probst masterfully dramatizes the societal pressure to conceal one's inner desires and personality in her novels. Tammy (2001) approaches The Marriage Bargain by examining the sequence of events within the novel. She believes that the plot's development is the most captivating aspect of the novel. With this in mind, Tammy comments as follows: The eventual denouement of the narrative is relatively disappointing. Alex is forced to choose between the naturalness of human emotions and the pressures of shifting economic conditions. Deciding she has already lost the battle, Alex gives up trying to run her own bookstore and accepts the offer of Nicholas.
The moral dimension of the contract between Alex and Nicholas is noticeably striking. It deserves attention and analysis. (17) In her analysis, Tammy challenges the ordering principle of events in The Marriage Bargain, which she finds to be complicated and frequently decisive, leaving the reader unsure. Despite this, Tammy notes that the novel opens up new possibilities even in moments of disappointment and frustration. The characters Alex and Nicholas remain calm and composed in the face of unfavorable situations. Sander (2001) evaluates Jennifer Probst's novel The Marriage Bargain based on her ability to create new terms and neologisms to convey her original ideas. Sander believes that Probst's work promotes the emergence of a new concept of individual freedom and a demand for more space for creative expression. Probst introduces a new type of interpersonal relationship, which she describes using a newly coined term. This relationship requires understanding and familiarity between married partners to create a greater level of creative expression. Such a relationship can occur between a married woman and her unconventional lover turned fiancé. Probst also uses phrasal expressions and poetic neologisms throughout the novel to convey different implications. These expressions contain the ethos of Victorian protocol, indicating the influence of Victorian mentality on the language used by sober-minded individuals of that era. Bernard (2005) examines the theme of double consciousness in Jennifer Probst's The Marriage Bargain, particularly in relation to women who are aware of their growing passion. He suggests that Probst's text does not lend itself well to feminist analysis due to the author's excessive sobriety. Bernard notes that the character of Alex is intelligent and educated, but also immature and irrational in her decision to marry Nicholas due to financial pressures. He sees a similarity between the author's life and the protagonist's life, and suggests that extreme feminist consciousness harms the character's conscience. The novel is an example of popular fiction with naturalistic fervor and seems to showcase Probst's intellectual prowess through the character of Alexa. Wade (1999) regards The Marriage Bargain as a work that possesses both subtle and straightforward characteristics. For Wade, the primary issue that Jennifer Probst addresses in The Marriage Bargain is the subtle conflict in which Alexa is caught between deterministic forces and conscience: Among Americans of the twenty-first century, and especially among the young, morality and sex are interchangeable terms. Frequently the judgment of right and wrong behavior rests almost exclusively on sexual behavior. Evil is identified with sex: there the devil wields his greatest powers. The relaxed social and sexual rituals of his time occupy the forefront of the novel. (27) Wade argues that Alexa struggles to uphold her moral principles when faced with practical challenges. The novel focuses on the conflict between commercial and ethical values. Alexa's uncertainty is evident when she attends a party and is torn between asserting her independence and seeking her father's approval. By highlighting the negative impact of limited financial opportunities, Probst implies the influence of determinism. Macey (1992) sees both optimism and pessimism in The Marriage Bargain. The novel portrays the pessimistic condition arising from the growing poverty of the working class.
However, Macey believes that there is also a ray of hope in this pessimistic world. He argues that the novel shows that even in the midst of corruption and sickness, there are yearnings and inarticulate strivings for a better world and a life with more dignity. Macey praises the novel's portrayal of financial liberation and pragmatic choices, but he criticizes the lack of reflection on the decency and dignity of human ambition. Knopf (2003) praises Jennifer Probst's personal style in The Marriage Bargain and her ability to convey emotion without being overly sentimental. He notes that the novel combines introspection with stream-of-consciousness techniques and that the narrator's sarcasm and peculiarities contribute to its success. Knopf emphasizes the author's wit and flair, and describes the novel as a matchless piece of art. --- Research Methodology The research methodology employed in this study utilizes the theory of Marxism to examine the issue of proletarians as others. Marxism, which has had a significant influence among workers and intellectuals in capitalist countries, has also been utilized by non-Marxist intellectuals, particularly sociologists and historians in Western countries. Many liberation groups in Third World nations now clearly understand the character of their opponent thanks to Marxism, which has been adjusted to deal with their particular combination of primitive and sophisticated capitalist circumstances. The researcher adopts Marx's dialectical approach, which views actual changes in history as the outcome of opposing tendencies or contradictions. Marx's materialism is also used to analyze the interaction between social conditions, behavior, and people's ideas. Marx's theory of alienation in the labor system is based on four relations, which are investigated here. These relations cover the worker's alienation from productive activity, from the product of that activity, from other human beings, and from the distinctive human potential for creativity and community. Marxist critics' application of the perspective of Marxism in interpreting literary texts is also examined. The study emphasizes the significance of literature in supporting capitalist ideology, since it is consumed mostly by the middle classes. Writers who sympathize with the working classes and their struggle are regarded favorably by Marxist critics. On the other hand, writers who support the ideology of the dominant classes are condemned. The research draws upon the insights of various Marxist theorists, whose interpretations may differ in breadth and sympathy. --- Analysis of Probst's The Marriage Bargain This study employs a critical analysis to investigate the commodification of human relationships in the novel. Nicholas Ryan eagerly anticipates being included in his uncle's will as a beneficiary of his vast properties and wealth. The text tells the story of Nicholas Ryan and his uncle Earl, who is the head of a corporate house with significant wealth and properties. Nicholas is hoping to become his uncle's legitimate heir and inherit his estate, but his uncle puts a condition on his will that requires Nicholas to get married and live with his wife for at least one year before he can inherit everything. Unfortunately, Nicholas has a history of frequently changing romantic relationships, and his uncle is doubtful that he can have a stable family life.
As a result, Nicholas seeks a marriage that is purely transactional, and he looks for a woman who will act as his wife for just one year, with the sole purpose of fulfilling the conditions of his uncle's will. The present study employs the theoretical perspective of Marxism as its primary framework, and the research methodology is based on this approach. The theory of Marxism, as developed by prominent theorists such as Marx, Lukacs, and Adorno, is referenced throughout the analysis. The researcher asserts that Marxism is an appropriate lens for this study, given that The Marriage Bargain examines themes of economic oppression, dispossession, and other forms of social injustice. As such, the theoretical perspective of Marxism is particularly salient to the analysis of the novel. Marxism, the theory Marx developed, explains how society functions as well as the development of human history. Marx (2001) held that all other facets of society are primarily determined by the state of the economy and the structure of the productive system. Marx's theory describes the characteristics of capitalism, which he believed to be profoundly unsatisfactory and wished to eradicate through a bloody uprising in order to build a communist society. Marx (2001) disagreed with reformers who claimed that a simple shift in ideas could transform society, because he argued that prevailing ideas are the outcome of material or economic realities. Nicholas maintains a platonic relationship with his contract wife by refraining from any sexual interaction with her. He seeks a wife who will cohabit with him solely for the purpose of fulfilling a contractual obligation, without any emotional or physical expectations. The contract stipulates that the wife will receive a lump-sum payment at the end of the year for fulfilling her role as a contractual wife. The following excerpt details the peculiarities of this proposed contractual agreement between Nicholas and his prospective spouse: A woman who does not love me. A woman who does not have any animals. A woman who does not want any children. A woman who has an independent career. A woman who will view the relationship as a business venture. A woman who is not overly emotional or impulsive. A woman whom I can trust. (18) The extract describes Nicholas's view of marriage as a tool to secure his inheritance rather than a traditional emotional bond. He seeks a woman who will live with him as his wife without the expectation of sexual or emotional intimacy. Nicholas sees marriage as a commodity that can be traded and converted into monetary value, which sets him apart from traditional views of marriage as involving emotion, attachment, responsibility, trust, and cooperation. According to Marx, the interaction between the forces of production and the relations of production largely determines the kind of society and the course of social evolution. While the latter refers to the social structure of production and who owns or controls the productive resources, the former refers to the technology employed for production. In a capitalist society, the owners of the productive resources are also those who pay the employees. Marx saw that the new social relations of production under capitalism eventually hindered the full development of the new forces of production, leading to contradictions and revolutionary change. David Riazanov, a supporter of Marxism, also emphasized this contradiction in contemporary capitalist society.
Nicholas is in search of a contractual wife who does not require emotional or affectionate attention and is content with a purely transactional relationship. His main objective is to secure his inheritance of his uncle's properties, and he sees marriage as a tool to achieve this end. Despite having fond memories of Gabriella, a sharp conversationalist whom he enjoyed spending time with, he dismisses the idea of marrying her, as he fears she is already falling in love with him. According to Nicholas, the supermodel he is currently dating is ideal for social functions and sex, but not for marriage. He is afraid of emotional attachments and seeks a loveless marriage that would last only a year. Nicholas concludes his quest for a wife when he meets Alexa, who is in dire need of money to save her bookstore from imminent bankruptcy. With the help of Maggie, Nicholas eventually meets Alexa and presents his proposal that she become his wife for a year, which she agrees to. Nicholas explains to Alexa that their marriage is essential for him to secure his inheritance of his uncle's properties. Georg Lukacs' theory of totality is essential to his own thinking and to the subsequent development of Western Marxism. He had a desire for totality even in his early works, and it became the center of his book "History and Class Consciousness," where it is seen as the core of both Hegel's and Marx's methodologies. Lukacs (2001) warns against being too orthodox in interpreting Marxism and emphasizes the importance of the concept of totality. The theory of totality is crucial to later Western Marxists' interpretation of the metaphysical tradition and Marx's philosophy, and to their critique of the modern world. Therefore, understanding Lukacs' theory of totality is helpful in finding the right way into the entire tradition of Western Marxism. After extensive dialogue and deliberation, Alexa ultimately decides to accept Nicholas's proposal. The two parties intend to derive mutual benefit from their union, which has been forged out of economic necessity through the execution of a contract to cohabit as spouses. Upon the conclusion of their one-year marriage, Alexa will receive a specified amount while Nicholas will inherit his deceased uncle's assets. Subsequent to this, their marriage shall be deemed null and void. The following excerpt portrays how their marriage has been arrived at primarily due to financial considerations: "I am marrying you for business reasons, Alexa. Not your family." Her chin tilted up. He made a mental note of the gesture. Seemed like a warning before she charged into battle. "Believe me, I am not happy about this, either, but we have to play the part if people are going to think this is real." His features tightened but he managed a nod. Fine. His voice dripped with sarcasm. "Anything else?" She looked a bit nervous as she shot him a glance, then rose from the chair and began pacing the room. (30)
While her memory of their first kiss rose from the recesses of her mind, she bet he had forgotten the moment completely. Humiliation wriggled through her. No more, she had her money and could save her family home. But what the hell had happened to her list? (32) When materialistic pressures become overwhelming, concerns regarding morality and social decency are often overlooked. However, women like Alexa are hesitant to enter into loveless and commercialized relationships. It is typical for someone in Alexa's position to consider the potential negative consequences of agreeing to live with Nicholas and to contemplate how such a decision would affect her social status and reputation. During their discussion about the proposed marriage arrangement, Nicholas and Alexa have a conversation about the topic of sex. Nicholas suggests that they should be discreet about their sexual activities, which causes Alexa to feel shocked and uncomfortable. To alleviate her concerns, Nicholas explains that he deals with high-end clients and has a reputation to protect, so they must be extremely discreet. Despite feeling odd about Nicholas's proposition, Alexa tries to maintain her composure and not show any change in expression. Alexa finds herself in a predicament where she is unable to disclose to her parents her decision to enter into a contractual marital agreement with Nicholas, made with the intention of securing funds to rescue her bookstore and support her family. She is constrained to fabricate a falsehood due to her inability to reveal the truth. In her soliloquy one evening, Alexa ponders the hypothetical response of her family towards her decision: His offer suggested a real relationship between them, and it made her long for more. She should have introduced her family to a real-life love, not a fake. The lies of the night pressed down on her spirits as she realized she had made a bargain with the devil for cold hard cash. Cash to save her family. But cash nonetheless. (59) Alexa and Nicholas enter into a contractual marriage to overcome financial difficulties, but they have to lie to their parents about it. Alexa feels guilty about keeping this secret from her parents but is compelled by their dire financial situation. However, they both treat marriage as merely a means to an end, disregarding its societal significance. Marx criticized capitalism for alienating workers from their labor and turning them into robotic objects that prioritize profit over human need. He argued that the only way to overcome this alienation and create a democratic, planned society is through a class struggle between the bourgeoisie and the proletariat. However, Luxemburg (2001) questioned the feasibility of abolishing advanced market-based societies and replacing them with a fully planned and controlled society, and criticized the shaky assumption underlying Marx's notion of alienation. Marx's scientific socialism was based on his theory of value and his concept of alienation, which exposed the contradictions of capitalism and the necessity of the class struggle. During their conversation regarding her dire financial situation, Nicholas asks Alexa about the extent of her desire for the money and notes that she does not seem enthusiastic about marrying him and participating in a sham wedding while lying to her family. He questions whether this is all solely for the purpose of business expansion.
This inquiry from Nicholas leaves Alexa perplexed, and she considers disclosing the truth to him: The lack of medical insurance to pay the staggering bills. Her brother's struggle to get through medical school while supporting a new family. The endless calls from collectors until her mother had no choice but to sell the house, already heavily mortgaged. And the weight of responsibility and helplessness Alexa carried along the way. "I need the money," she said simply. "Need? Or want?" She closed her eyes at the taunt. (60) Alexa's financial desperation leads her to accept the offer of a man who wants to use her for a year in exchange for a sum of money. Nicholas, on the other hand, is motivated not by need but by greed, or the desire for money, which leads him to enter into a fake marriage contract. After a few weeks of living together as a contracted couple, the initial terms and conditions agreed upon by Nicholas and Alexa fade away. Alexa begins to feel the negative effects of their commercialized marriage, while Nicholas tries to follow the predetermined rules. Alexa realizes that telling Nicholas the truth would be self-destructive and instead decides to protect herself from his condescending behavior by cultivating his hatred towards her. She believes that this will allow her to maintain her pride and her family's reputation while avoiding his unwanted advances. This shows that when a genuine relationship is based on commercialization and commodification, it can lead to harmful outcomes. When Nicholas fails to uphold their agreement and crosses a line with Alexa, their relationship is poisoned, and she takes deliberate steps to safeguard herself from his mechanical passions and sterile affection. Jameson (2005) critiques structuralism in literary criticism for its failure to consider historical context. He advocates a dialectical criticism that takes into account both synchronic and diachronic aspects of texts. Critics accuse Jameson of trying to create a totalizing theory of interpretation, but Jameson denies making transcendent claims and asserts that his theory is openly ideological and superior to other theories in terms of comprehensiveness. In order to safeguard her family's reputation, Alexa determines that the most effective strategy is to provoke hatred in Nicholas. She adamantly rejects any notion of accepting pity from him. Despite the transactional nature of their marital arrangement, their innate bodily desires cause them to forget the conditions they had established. They consciously maintain a boundary between them even though they are legally married. However, their repressed sexual impulses periodically manifest themselves, ultimately overpowering and overwhelming them. The following passage illustrates how their suppressed sexual impulses and instincts weaken their resolve and motivate them to transgress the boundary they had established: Primitive sexual energy swirled between them like a tornado gaining speed and power. His eyes burned with a sheen of fire, half need, half anger as he stared down at her. She realized he lay between her open thighs, his hips angled over hers, his chest propped up as he gripped at her fingers. This was no longer the teasing indulgence of a brother. This was no old friend or business partner. This was the simple want of a man to a woman, and Alexa felt herself dragged down into the storm with her body's own cry.
(84) Nick and Alexa fail to uphold the terms of their contractual marriage as they succumb to their sexual desires. Despite Alexa's attempt to make Nick hate her, their attraction to each other is too strong. Their agreement does not constrain the power of human emotions and impulses, which are not rule-bound. Nick and Alexa experience natural human desires such as the need for intimacy, care, and sexual satisfaction, yet they impose strict rules on their relationship. The restrictions they place on their marriage lead to the manifestation of intense and unfulfilled desires. Nick struggles to reconcile his bodily impulses with the contractual obligations he has made with Alexa, causing inner turmoil. The following passage illustrates this conflict: Her voice was raspy. Hesitant. Her nipples pushed against the soft fleece with demand. His gaze raked over her face, her breasts, her exposed stomach. The tension pulled taut between them. He lowered his head. The rush of his breath caressed her lips as he spoke right against her mouth. "This means nothing." His body contradicted his words as he claimed her mouth in a fierce kiss. (85) Nick and Alexa's marriage is based on a commercial agreement that does not allow them to seek love or mutual affection. However, they are both overpowered by their passions, and their attempts to control them result in violent and deviant behavior. Despite trying to maintain a distance from each other, they are drawn to one another, and their hunger for sexual satisfaction knows no bounds. As a result, their marriage does not follow the path they expected it to take. In contrast to other theories, Marx asserts that the reasons for a product being considered a commodity can be traced back to human needs, desires, and practices. In other words, the "use value" of a commodity is determined by its ability to satisfy human wants, while its "exchange value" depends on the desire of people to exchange it for something else. Additionally, a commodity's exchange value can only be quantified if it possesses a value derived from the exertion of human labor power, and that value is calculated based on the average labor time necessary to produce similar commodities. In their marriage, intimacy is a threat to both Nick and Alexa, but they are compelled to stay together for practical reasons. Their happiness is tinged with fear, and the tension between them is highlighted in a scene where Nick corners Alexa in the kitchen. Despite the threat of a more intimate touch, Nick wants to fulfill his sexual desire for Alexa and keep her as a long-term partner. This is a departure from their initial contractual marriage arrangement, and Nick is surprised by the turn of events. Alexa also seeks to cheat on him, and their marriage is marked by a reversal of the usual order of things. During their marriage, Alexa announces that she is pregnant, which shocks Nick. Despite his clear reluctance to have a child, Alexa remains hopeful that Nick's feelings will change with time. This is highlighted in an excerpt where she tries to convince him that he may feel differently in the future. However, Nick is reminded of Gabriella's words, which haunt him. Their marriage began as a business transaction, but the unforeseen consequences of such a union have become a reality. It remains to be seen whether Nick will accept the baby that Alexa will give birth to.
In light of the foregoing, it may be deduced that the ramifications of the transformation of human emotions and revered social institutions, such as marriage, into commodities inflict immense suffering upon both Alexa and Nick. The compelling force of human desires renders economic and non-economic incentives irrelevant. The practical truth cannot be disregarded in favor of immediate financial gain. --- Conclusion In conclusion, this research sheds light on the commodification of human emotions in the era of late capitalism. The Marriage Bargain by Jennifer Probst illustrates how sacred institutions like marriage are treated as commodities that can be traded and transacted with money. The core finding of this research is that under harsh economic pressures, human feelings and emotions are no longer the pure bonding between two individuals. Alexa, a woman from a respectable family, enters into a contractual marriage with Nicholas for money. While Nicholas is seeking a wife to inherit his uncle's properties, Alexa is compelled to collect money by hook or crook due to her business's financial difficulties. As their marital life proceeds, both Alexa and Nicholas enter into a sexual relationship, and Alexa gets pregnant. The attempt to commodify human emotions incurs hazards and discomforts, ultimately ruining the beauty of human relationships.
This study reads Probst's The Marriage Bargain from a Marxist perspective to examine how social institutions like marriage and human emotions are commodified in the era of late capitalism. The novel deals with the themes of love, romance, and marriage, but also highlights the commodification of marriage in a capitalist society. The two main characters, Alexandria and Nicholas, use marriage as a means to solve their economic problems, challenging the traditional notion of marriage as a bonding of two souls. This research aims to explore this phenomenon. This qualitative study employs a Marxist literary analysis of the novel, focusing on the commodification of marriage and human emotions in late capitalist society. It examines how the novel's characters challenge the traditional concept of marriage and use it as a commodity to satisfy their material needs. The core finding of this research is that under the harsh economic pressures of late capitalism, human emotions and social institutions like marriage are commodified, and people compromise their ideals for economic gain. The novel shows how marriage is used as a commodity to solve economic problems, and how the traditional concept of marriage is being challenged by the utilitarian values of modern societies. The research concludes that The Marriage Bargain is an illustrative example of the commodification of marriage and human emotions in late capitalism. The exploration of this discourse clarifies how the institution of marriage is being used as a commodity to satisfy material needs. The novel raises an uncommon issue regarding the marital relationship, and the utilitarian attitude of its characters towards their own marriage represents the emerging social problem of the reification of human relationships.
Introduction According to the World Health Organisation, mental health is a condition of comprehensive physical, mental, and social well-being rather than just the absence of sickness or disability. Mental health is shaped by many biological, psychological, social, cultural, and environmental elements that interact in intricate ways. These are often recognized as risk and protective factors that impact the mental well-being of people and groups (Mrazek and Haggerty, 1994). Adolescence is often considered a critical period in an individual's life since it significantly influences their future development and outcomes. This is a critical phase characterized by establishing and sustaining social and emotional behaviours that are vital to one's psychological well-being. These include the adoption of good sleep patterns, engagement in regular physical activity, cultivation of coping mechanisms, problem-solving abilities, and interpersonal skills, as well as the acquisition of emotional management techniques. Supportive settings within the family, educational institutions, and the broader community are equally crucial. According to Kessler et al. (2007), the worldwide prevalence of mental health issues among teenagers is estimated at 10-20%. However, it is worth noting that these illnesses often go undiagnosed and receive inadequate treatment. In July 2020, it was determined that around 17.6% of individuals between the ages of 11 and 16 had symptoms indicative of a potential mental condition. This figure rose to 20.0% for young adults within the age range of 17 to 22. When examining the variations in mental health based on gender, it was found that females were more likely to present with a suspected mental condition than males (England and Improvement, 2020). Approximately 50% of mental health illnesses in the adult population appear to manifest during adolescence, namely by age 14. However, many of these cases go unnoticed and receive no treatment. Around one-sixth of the global population consists of adolescents, which amounts to around 1.2 billion individuals between the ages of 10 and 19. Depression ranks as the primary contributor to morbidity and impairment in the teenage population, while suicide stands as the third leading cause of mortality. The World Health Organisation (Sunitha and Gururaj, 2014) has identified that exposure to violence, poverty, humiliation, and feelings of devaluation might heighten the susceptibility to experiencing mental health issues. According to the findings of the 2017 Mental Health of Children and Young People (MHCYP) survey conducted in England, 15.3% of individuals aged 11-19 exhibited symptoms indicative of at least one mental health condition. Additionally, 6.3% of this demographic matched the diagnostic criteria for two or more mental illnesses. In 2017, the prevalence rates across the age groups of 10-12 and 12-14 exhibited little change.
However, notable disparities in prevalence rates emerged when the factors of both sex and age were considered. The prevalence of mental problems was higher among girls aged 17-19 (23.9%) than males (10.3%). The data from 2020 substantiates the observed disparity, indicating that likely mental problems are more prevalent among older teenage girls (27.2% among females aged 17-22) compared to boys (13.3%) (Mandal and Mehera, 2017).The presence of depression in children is a significant health concern that has a profound impact on their overall development. Major depressive disorder is characterized by a chronic feeling of a dysphoric mood and a diminished interest or pleasure in almost all activities. These emotions are accompanied by various supplementary symptoms that impact food and sleep, activity and focus level, and self-value perceptions. Parents have a significant role in the formation and development of subsequent generations. During adolescence, peers have a crucial role in facilitating the assimilation of values and the acceptance of cultural norms. Additionally, they contribute significantly to promoting healthy emotional and psychological growth in children, ultimately fostering their development into successful individuals. The significance of a mother's role stems not from her unique talents but rather from the substantial amount of time she spends with her children, which allows her guidance to profoundly impact their attitudes, abilities, and behavior. The extent of a mother's dedication to childcare is often assumed to be significantly impacted by her level of economic activity. Temporal limitations result in a reduced availability of childcare for employed women compared to their nonemployed counterparts. The mother assumes the responsibility of making daily choices, guiding her children as they grow, and equipping them with the necessary attributes of bravery and comprehension to confront life's challenges. Ensuring her children's nourishment and proper care is within her jurisdiction. She must provide training that enables individuals to progress according to societal norms and expectations. The individual in question has been endowed by a higher power with the inherent skill and aptitude to provide vitality and inspiration to subsequent cohorts. The advancement seen in industrialized nations may be largely attributable to the significant contributions made by women in such societies (Shah, 2015). Most children who succeed and exhibit a sense of security tend to originate from households characterized by positive parental attitudes and a nurturing parent-child interaction. Mothers provide their children with love, affection, and care from the moment of their birth. The provision of childcare services has emerged as a significant concern in several nations around the globe. It is well acknowledged that a mother figure's affection and care are essential for children's well-being and development. According to popular belief, the family serves as the first educational institution, with the mother assuming the role of the primary educator for each child. During ancient times, particularly under conventional family structures, women were primarily responsible for childcare and domestic duties. The individuals were prohibited from leaving their residences for employment purposes. The responsibility for generating income via breadwinning was exclusively shouldered by male members within the family unit. 
Mothers dedicate significant effort to fostering good personality traits, uncovering latent abilities, and facilitating effective coping mechanisms in challenging circumstances (Shrestha and Shrestha, 2020). Children can form a solid relationship with their biological mother and other members of their immediate family. A growing phenomenon of women joining the labor market is driven by economic constraints or a desire to establish their sense of self. This phenomenon has resulted in a significant transformation of the conventional role of mothers from being primarily responsible for caregiving to assuming the position of primary income earners. Consequently, this shift has also changed the objectives and methods of child upbringing (Rohman, 2013). Based on the Lahore Education Statistics of 2007-08, the total count of female instructors in Lahore was 679,503. In 2015, 773,332 were recorded, signifying a notable rise in the population of female teachers. This increase may be attributed to several causes, the predominant one being the societal perception that teaching is a suitable vocation for women. Female educators can allocate much time to their families while fulfilling their professional responsibilities. Another significant element is that the educational policies implemented in Lahore over the years have prioritized the enrolment of women in the teaching profession. This has been achieved by providing supplementary incentives targeting women (Shrestha and Shrestha, 2020). Balancing the obligations of work with the duties of family life is a well-known struggle encountered by parents raising children today. The market has shown a response to the increasing presence of working women with small children, prompting the ongoing development of work-life programs aimed at catering to the diverse demands of all workers. However, there is still little understanding of the unique work-life experiences of working parents who have children with special needs (Syed and Khan, 2017). Approximately 20% of families consist of children who have particular health or mental health requirements. --- Methodology A cross-sectional study design was used to quantitatively examine the mental health of adolescents with working and non-working moms who are enrolled in public and private schools. The study was conducted at private schools in Lahore. The schools were selected through stratified random sampling. The research sample consisted of teenagers enrolled in private schools in Lahore, with moms who were either employed or not employed. The sample was chosen based on defined inclusion and exclusion criteria. The data were obtained via a self-administered questionnaire provided to the participants. A proforma was devised to gather data about the socio-demographic characteristics of the participants and to conduct a mental health evaluation. The questionnaire was adapted from two validated tools.
The primary objective of the questionnaire was to evaluate the mental well-being of teenagers with moms who are employed and those who are not employed. The dependent variable in this study was the mental health of teenagers, which was assessed using a modified measurement instrument. Data on independent variables were collected through a self-administered questionnaire constructed after a review of the international and national literature. The proforma included socio-demographic variables such as gender, age, institute, and mother's working status, as well as variables related to the mental health assessment, such as the mother's education, participation in extra-curricular activities, type of family, number of siblings, and school environment. Before starting the formal data collection procedure, pilot testing was performed with 10% of the sample size. The proforma was tested for possible revisions; no major changes were made after pilot testing. One question was added in the demographic section: the number of siblings. Data from pilot testing were not included in the final analysis. The data were obtained via self-administered questionnaires without the involvement of paid data collectors. The study recruited adolescents from households with working and non-working moms. Oral consent was obtained from all participants, and only those who agreed to participate in the study procedure were included. After obtaining consent, the participants were given a self-administered questionnaire, and the researcher recorded their responses. Data collection was completed in approximately two months. All filled questionnaires were kept protected in plastic files, and no one had access to them other than the researcher. A codebook was established, and the data were entered into the Statistical Package for the Social Sciences (SPSS) version 26. Following meticulous data entry, the data underwent a thorough error-checking process before further analysis. After data cleaning, certain variables underwent transformation. The data analysis was conducted in two distinct stages, namely descriptive analysis and inferential analysis. Descriptive statistics were obtained for socio-demographic factors. The categorical variables were summarised by calculating frequencies and percentages and then presented in tabular format. Continuous variables were summarised using the mean and standard deviation, assuming a normal data distribution. --- Results A total of 150 responses were included, collected through the self-administered questionnaire. Of the 150 respondents, 67 were boys and 33 were girls. The most common age among respondents was 15 years (13.5%). All of the students were from private schools. Of the total number of respondents, 36.0% were students whose mothers were working, and 64.0% were students whose mothers were non-working. An adapted questionnaire was used to assess adolescents' mental health (MHA); the outcome variable was the mental health assessment of adolescents. Although females were targeted slightly more than males, there was no significant difference, as shown in Figure 1. Therefore, according to the results, there is no major difference between the categories of males and females targeted, and further results show that the working and non-working categories of participants also do not differ much.
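As a rough, non-authoritative illustration of the two-stage analysis described above (descriptive statistics followed by an inferential comparison between the working and non-working groups), the sketch below shows a minimal Python analogue of the SPSS workflow. The column names, the synthetic values, and the choice of Welch's t-test are assumptions made for illustration only; the original analysis was carried out in SPSS version 26 and does not name a specific test.

```python
# Minimal sketch of the two-stage analysis (descriptive, then inferential).
# Column names and data are hypothetical; the study itself used SPSS v26.
import pandas as pd
from scipy import stats

# Hypothetical slice of the coded questionnaire data
df = pd.DataFrame({
    "gender":        ["male", "female", "male", "female", "male", "female"],
    "mother_status": ["working", "working", "non-working",
                      "non-working", "non-working", "working"],
    "mha_score":     [42, 51, 47, 55, 44, 49],  # adapted mental-health assessment score
})

# Stage 1: descriptive statistics
print(df["mother_status"].value_counts(normalize=True) * 100)  # category percentages
print(df["mha_score"].agg(["mean", "std"]))                    # mean and SD of MHA score

# Stage 2: inferential statistics -- compare mean MHA scores between
# adolescents of working and non-working mothers (Welch's t-test here,
# one plausible choice for a two-group comparison).
working = df.loc[df["mother_status"] == "working", "mha_score"]
non_working = df.loc[df["mother_status"] == "non-working", "mha_score"]
t_stat, p_value = stats.ttest_ind(working, non_working, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")  # p >= 0.05 -> no significant group difference
```

A non-significant p-value in such a comparison corresponds to the study's reported finding of no statistically significant difference between the two groups.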
--- Discussion The current research aimed to evaluate the mental health of teenagers with both working and non-working moms. Adolescents' mental health was assessed using tools adapted from previous studies. The study was conducted at private schools in Lahore city. Stratified random sampling was used, and schools were selected through a lottery method. Pilot testing was performed before the formal data collection procedure, including 10% of the sample size (150). Reliability was checked after entering the data into SPSS. The current investigation demonstrated a statistically significant correlation between teenagers' mental well-being and gender. No statistically significant correlation was observed between the mental health of teenagers and demographic variables such as age, educational institution, monthly household income, and others (Singh et al., 2020). The present investigation revealed a marginal disparity in the mental health condition of adolescents with working and non-working moms. However, this discrepancy did not reach statistical significance. The preceding research in India revealed a notable disparity in the mental well-being of adolescents with working moms compared to those with non-working mothers. The obtained p-value indicated no statistically significant difference in the mean score of psychosocial disorders, as reported by Koirala in 2016. One potential reason for the observed findings might be that employed moms dedicate less time to their children, which may hinder the development of emotional bonds between them (Berghuis et al., 2014). Working women face the challenge of balancing their responsibilities in both the household and professional spheres, resulting in heightened stress and anxiety levels within their home life. In such circumstances, they may remain unaware of changes in their children's moods and behaviors. As a result of these factors, the mental health condition of adolescents with working moms tended to be poorer than that of those with non-working mothers. The current investigation demonstrated a statistically significant correlation between teenagers' mental well-being and gender. There was a statistically significant difference in the mean scores for evaluating teenage mental health between male and female pupils (Mahmood and Iqbal, 2015). The preceding research done in Islamabad (Lahore) revealed a notable disparity in the psychological adaptation of pupils. The findings indicated a statistically significant difference between males and females. According to Dr. Khalid Mahmood (2015), there is evidence suggesting that females exhibit greater psychological adjustment than males. Further research done in Lahore similarly indicated that there was no discernible correlation between teenagers' psychological well-being and their moms' employment status (Mahmood and Iqbal, 2015). The study revealed a lack of statistically significant disparity between male and female offspring of employed moms. Furthermore, a lack of correlation was seen among the offspring of moms who were not employed.
The findings of the current research indicate that there is no statistically significant association between the mental health of teenagers and the educational level of their mothers, although a positive, non-significant correlation was observed between the educational attainment of mothers and the mental health evaluation scores of their children. One contributing factor may be the limited opportunities for children of employed moms to interact with their peers and community members. Employed mothers have less time to engage with their children. The limited availability of time has been shown to have a detrimental impact on the mental well-being of children, manifesting in challenges related to communication, attention, emotional comprehension, and the fulfillment of their needs. The limited scope of their social environment may be attributed to the insufficient amount of time that mothers can dedicate to their children's leisure and socialization (Van Droogenbroeck et al., 2018). The present research also revealed a marginal disparity between mental health evaluation and monthly income level; however, this discrepancy did not reach statistical significance. Preceding research done in Germany revealed a notable association between the mental well-being of teenagers and monthly family income. The study conducted by Reiss et al. (2019) found a substantial negative correlation between family income and the prevalence of mental health disorders. One potential reason for the difference in findings may be the smaller sample size of the present investigation compared to the prior study. The present research observed a marginal distinction in the mental well-being of teenagers based on family type; however, this difference did not reach statistical significance. The mean scores for the nuclear and joint family types differed only slightly, with the mean mental health evaluation score somewhat higher among teenagers from nuclear family types compared to those from joint family types. This observation suggests that adolescents from nuclear families may have better mental health outcomes than their counterparts from joint families. One potential explanation for these findings is that adolescents living in nuclear family structures may have limited opportunities for interpersonal engagement with extended family members, leading to decreased socialization (Smithson and Lewis, 2000). The present research also observed a marginal difference between mental health assessment and engagement in extra-curricular activities; however, this distinction did not reach statistical significance. The mean and standard deviation for involvement in extra-curricular activities were determined. Previous research done in Brazil showed a noteworthy correlation between the evaluation of mental health and engagement in extra-curricular activities (Reverdito, 2017). One potential explanation for the difference may be the smaller sample size used in the present research compared to the earlier investigation. The marginal distinction between mental health assessment and engagement in extra-curricular activities suggests that children participating in such activities can cultivate their social skills, critical thinking abilities, leadership qualities, time management proficiencies, and collaborative aptitude to pursue a collective objective (Reiss et al., 2019).
The current investigation revealed a lack of statistically significant correlation between the evaluation of mental health and the quantity of siblings. The present investigation yields comparable findings about evaluating mental health and the influence of sibling count, as seen in prior research. The preceding research in Japan showed a lack of statistically significant correlation between mental health evaluation and the number of siblings (Liu, 2015). The rationale for doing this research may be attributed to the finding that the number of siblings did not provide statistically significant impacts on mental health. This suggests a multifaceted association between the kind of siblings, gender, age, and variations among siblings (Liu, 2015). The present research findings indicate a positive link between the number of siblings and the mental health scores of teenagers. Specifically, it was observed that as the number of siblings grew, the mental health scores of adolescents also increased. This suggests adolescents with fewer siblings tend to exhibit better mental health (Reverdito et al., 2017). The current investigation observed no statistically significant correlation between the evaluation of mental health and birth order. The present study's findings align with those of other research in terms of the relationship between mental health evaluation and birth order. The research done in Japan demonstrated no statistically significant correlation between birth order and mental health evaluation (Liu, 2015). The majority of children included in the present research were found to be middle children. The rationale for doing this research may lie in the observation that middle children often experience a desire to vie for parental attention as they find themselves between younger and older siblings (Kessler et al., 2007). --- Conclusion The present research has shown that there exists no statistically significant disparity between the mental health state of adolescents and the employment position of their mothers. Overall, the research findings suggest that both working and non-working moms have no significant impact on the mental well-being of their offspring. --- Declarations --- Data Availability statement All data generated or analyzed during the study are included in the manuscript. --- Ethics approval and consent to participate Approved by the department Concerned. --- Consent for publication Approved --- Conflict of interest The authors declared absence of conflict of interest.
Mental health plays a vital role in our ability to think, feel, interact, work, and enjoy life individually and collectively. A person's mental health is affected by several things at any moment, some of which are social, psychological, and biological. Children of working mothers may have different degrees of anxiety, depression, and social problems. Adolescents' mental health has been the subject of countless global studies. Still, less is known about the differences in adolescent mental health between children whose mothers work and those whose mothers do not work outside the home. This research aimed to compare students' mental health in public and private schools in Lahore based on their mothers' employment and its correlation with other sociodemographic characteristics. The research was cross-sectional and included 150 randomly chosen people from many different strata. The collected data were entered and analyzed using SPSS version 26.0. The majority of students we checked attended private schools. The study findings revealed no significant association between the mental health status of adolescents and their mothers' working status, especially in the private sector. However, a noteworthy correlation was observed between mental health status and gender. The average score for mental health assessment was not satisfactory. In conclusion, this research found no statistically significant difference in adolescents' mental health across groups depending on their mothers' employment level. The results indicated that a mother's employment or lack thereof had little impact on her children's psychological health. However, when comparing the mental health of male and female students, there was a clear gender gap. Adolescents' mental health was not significantly affected by factors like their mothers' education, the sort of household they were born into, their birth order, or their parents' monthly income. To learn more about this issue, researchers should investigate how teenagers see their parents' parenting styles in the future.
Introduction 1.1 Building empirically-grounded artificial societies of agents requires qualitative and quantitative data to inform individual behaviour and reasoning, and document macro level emerging patterns (Robinson et al. 2007). While quantitative data can be collected through surveys, literature and other available sources, gathering qualitative data to design the behaviour of the agents, their decision making process and their forms of interaction is not a straight-forward task (Janssen & Ostrom 2006). Likewise, macro-level data for model validation requires theoretical analysis about the system that is being modelled (Robinson et al. 2007). 1.2 Modellers commonly use behavioural and social theories, and desk research to cover the qualitative aspects of agent-based models. They may also use surveys and statistical analysis to understand the decision making behaviour of individuals (Sanchez & Lucas 2002;Dia 2002). --- 1.3 One field of research that can also be used to collect data for agent-based models is ethnography (Bharwani 2004). Ethnography is a research method covering many approaches in anthropology. The data is gathered through interviews and field surveys which are then 'coded' [1] for theoretical analysis. The collected data is a rich set for understanding human behaviour and interaction which is also a good source to build artificial humans or agents. Furthermore, the theoretical analysis that is performed on ethnographic data could be a good source of macro level data for model validation by observing whether the same mechanism and patterns concluded from the analysis result from the simulation (Robinson et al. 2007). --- 1.4 Since ethnography provides a rich set of data about the system and its entities, we anticipate it can be used to make richer agent-based models populating them with empirically grounded data. However, this data, although coded for theoretical analysis, is difficult to interpret and decompose in order to build agents and their behavioural rules. Ethnographic data is normally in textual format obtained from interviews, fieldwork, participant observation or formal documents (Yang & Gilbert 2008). --- 1.5 The difficulty in making use of ethnographic information for agent-based modelling and simulation (ABMS) is due to the fact, that in qualitative ethnographic research the interviewees are normally allowed to talk about their concerns in an open manner, which may lead to an overload of information that may also be immensely rich and diverse in terms of content. In addition, the researcher and the interviewees each have their own world-view, which leads to bias, as abstraction and generalization is required to arrive at specifications of behaviour and characteristics suitable for building agent-based models. --- 1.6 The most complete research in the intersection between ABMS and Ethnography is Bharwani (2004). Bharwani (2004) provides a detailed procedure for the fieldwork process which describes how ethnographic data is collected and formalized. Bharwani (2004) used knowledge engineering techniques in the process, allowing a continued engagement with the interviewees. She designed a specific ontology (i.e., architecture) for her particular domain namely, Agro-Climatic systems, to decompose the ethnographic information into a model. Yang and Gilbert (2008) discuss the differences and similarities between ethnographic data and ABMS and propose recommendations for modellers when using ethnographic data. 
They emphasize on the requirement for computer-aided qualitative analysis to manage and structure the data. Another requirement indicated by them is a model of data to represent relationships among actors (Yang & Gilbert 2008). --- 1.7 There are also case specific examples of using qualitative data in agent-based models. Geller and Moss (2008) present a model of solidarity networks in Afghanistan, informing agents' structures, behaviour and cognition by qualitative data. They use an evidence-based approach following rules according to which agents behaviours are directly drawn from empirical studies. Moore et al. (2009) use a combination of ethnography and ABMS to study psychostimulant use and related harms. They also indicate the difficulty in generalizing ethnographic information to build agent-based models. They built a model called SimAmph as a shared ontology to combine ethnography and ABMS for their particular case, which proved to be useful in making the connection between the two domains as well as in facilitating collaborative model development and analysis. 1.8 Thus, from the literature, it appears that a shared ontology or a conceptual framework is one of the main requirements for generalizing and structuring qualitative information, especially ethnographic data for ABMS. To address this requirement, in this research, we use an ABMS framework called MAIA (Ghorbani et al. 2013) which provides a shared ontology for social systems, covering a diversity of social, institutional, physical and operational concepts that are required for building agent-based models. Using MAIA as a template of required concepts may help collect and structure ethnographic data for building agent-based models. Therefore, in this research, we explore this possibility by using this modelling framework to structure ethnographic data collected from interviews, fieldwork and formal documents to build an agent-based model. To underpin this possibility, we use a case study on innovation practices in the Dutch horticulture sector. --- 1.9 The remainder of this paper is as follows. In Section 2, we give a brief overview on ethnography and introduce the MAIA framework. In Section 3, we introduce the horticulture case study. In Section 4, we explain the methodological process of integrating ethnographic processes into ABMS. In Section 5, we discuss the lesson learnt from this process and analyse our methodological process. Finally, we conclude in Section 6. Background --- 2.1 The goal of this research is to propose a methodology for using ethnography to build agent-based models. In this section, we will first explain ethnography. Then, we will introduce the MAIA framework, which will be used as the tool for this methodological process. --- Ethnography --- 2.2 Ethnography is a field of science that spans many methods and schools of approaches in anthropology. The power of ethnographic research is that real people are studied at the level of small communities/groups or individuals, and at the societal level, while the mutual interaction is also considered. This qualitative research aims to address complex phenomena by analysing and interpreting the system from the participants' point of view. Ethnography is often exploratory in nature, using observations to construct the analysis from 'bottom-up'. 
Together, this appears to be what is needed for developing agent-based models, in order to characterize the interaction of the individual and the system: Ethnographic research can range from a realist perspective in which behaviour is observed to a constructivist perspective where understanding is socially constructed by the researcher and subjects. Research can range from an objectivist account of fixed, observable behaviours to an interpretivist narrative describing "the interplay of individual agency and social structure." Critical theory researchers address "issues of power within the researcher-researched relationships and the links between knowledge and power (Ybema et al. 2010). --- 2.3 In ethnography there are several types of methodologies, which can broadly be categorized as either inductive or deductive. An inductive approach to ethnography formulates theories from the 'bottom-up' rather than from the 'top-down'. This means that the researcher starts by observing the community and by looking for repeated patterns of behaviour. If certain themes continue to appear, the researcher can develop a tentative hypothesis that is then verified and which may be turned into a theory. This may require the collection of more corroborating data from other communities within the same society [2]. 'Grounded theory' is an inductive method of analysis commonly applied in ethnography to help scientists generate theories (Corbin & Strauss 2008). Unlike other theories, grounded theory does not start by hypotheses for social behaviour but concludes with them. The grounded theory approach is an iterative process where the analysis of the data may raise new questions that stimulate new data collection (Neumann 2014). While this describes inductive research, some anthropologists also take the deductive approach, using prefixed questionnaires, hypothesis, quantitative data and statistics etc. --- 2.4 The inductive approach is more flexible, however, when it comes to addressing human societies, as it helps the researchers let go of their own preconceived (and often culturally biased) ideas of what the society they are studying is like. While the inductive approach is still used in cultural anthropology today, currently this theory has shifted from'start fieldwork and wait for answers' to'start field work with a few general questions to answer'. This would provide enough frameworks to focus the research, but would leave the questions general enough to allow for the flexibility that studying human culture needs. Some methods play a central role in this inductive approach: Open-ended and semi-structured interviewing: semi-structured interviews are open-ended, but the interview is guided by a list of topics [3]. Such interviews allow discussions that have not been prepared for, while the list guides the discussion. Together, this renders the interview to be both efficient and effective. Participant observation and field work: this method is the foundation of cultural anthropology, and entails the residence of the researcher in a field setting, where the observer blends into the daily life of the people and may closely monitor their activities. --- 2.5 The data produced in ethnography is a combination of written interviews, recordings, documents and personal notes. Structuring, analysing, interpreting and presenting the data is therefore an important step. The richness of data from ethnographic studies can be organized in programs like Atlas.ti [4]. 
In the analysis process, the next step is to generate categories, themes and patterns from the organized data. The processed and organized data can then be inspected and interpreted, and theories can be used to frame and analyse the data to elucidate patterns and give meaning and explanation to the data. The MAIA Framework 2.6 MAIA (Modelling Agent systems based on Institutional Analysis) is a modelling framework that structures and conceptualizes an agent-based model in a high-level modelling language (Ghorbani et al. 2013). The concepts in the framework are a formalization of the Institutional Analysis and Development (IAD) framework of Elinor Ostrom (2009), extended with concepts from other social science theories (Structuration (Giddens 1984), Social mechanisms (Hedström & Swedberg 1996) and Actor-centered institutionalism (Scharpf 1997)). --- 2.7 MAIA has been designed to support the participatory development of agent-based simulations. Since its concepts are taken from various theories, this modelling framework can be used by inexperienced modellers and those who are not familiar with programming. Furthermore, an online tool [5] supports the conceptualization process of agent-based models. In this tool, the MAIA model (i.e., the conceptual model developed using MAIA) is observable and traceable through cards and diagrams and can therefore be used for communication with domain experts and problem owners for concept verification. MAIA has been evaluated in several projects (e.g., the transition in consumer lighting, the wood-fuel market, the e-waste recycling sector, and manure-based bio-gas energy systems) (Ghorbani 2013). --- 2.8 The framework provides a guideline for arriving at a comprehensive overview, if not a model, of a social system by defining five interrelated structures that group related concepts: 1. In the Collective structure, actors are defined as agents by capturing their characteristics and decision criteria based on their perceptions and goals. 2. The Constitutional structure defines roles and institutions. Actors can take multiple roles in social systems. These roles are formalized as unique sets of objectives and capabilities. Roles allow efficient modelling of heterogeneous agents who perform similar tasks. Institutions are defined as the set of rules devised to organize repetitive activities and shape human interaction (Ostrom 1991). In MAIA, institutions are defined using the "ADICO grammar of institutions" proposed by Crawford and Ostrom (1995). In ADICO, 'A' is the attribute, or the actor who is the subject of the institution, 'D' is the deontic type of the institution (prohibition, obligation, permission), 'I' is the aim of the institution, 'C' is the condition under which the institutional statement holds, and 'O' is the sanction for non-compliance with the institution (a short illustrative sketch is given at the end of Section 4.1). 3. The Physical structure is the non-social environment that the agents are embedded in. Its building blocks are physical components. 4. The Operational structure is viewed as an action arena where different situations take place, in which participants interact as they are affected by the environment. These produce outcomes that in turn affect the environment. The agents, influenced by the social and physical setting of the system, perform their actions in the action arena. The action arena contains all the entity actions, ordered by plans, which are in turn ordered by action situations. 5.
The Evaluative structure provides concepts with the help of which the modeller can indicate what patterns of interaction, evaluation, and outcomes she is interested in. The modeller identifies those variables that can serve as indicators for model validity (is it sufficiently realistic?) and model usability (will its implementation help me to explore the question(s) I set out to address?). Figure 2 at the end of this article shows the concepts in MAIA. Extensive specification of MAIA can be found in Ghorbani et al. (2013). Case Study: Horticulture Innovation --- 3.1 The key objective of our study of the horticulture sector is to elucidate the effects social institutions have on innovation practices in Westland, a region that is home to about 70% of all greenhouse acreage in the Netherlands. --- 3.2 The horticulture sector in the Netherlands at large is facing economic difficulties, which have become more severe since the crisis began in 2008 (Schrauwen 2012). The dominant presence of innovation strategies that target cost-reduction and volume-increase brings down the cost of products. They fail, however, to bring the growers sustained benefits, which causes serious problems in the sector. Due to mechanisms in the market, the growers only benefit financially from their innovations for a relatively short period. When their innovations spread in the sector, the market price of their products drops rapidly, because it is subject to fierce price competition, a characteristic of 'cost leadership' market segments. Few growers attempt to increase the value of their products by developing niche product-market combinations, or to expand their activities in the value chain by developing new channels to the market to capture a greater share of the value created between growers and consumers. Such innovation strategies beyond process innovation for unit cost-price reduction are less popular in the sector, despite their potential to counteract the effect of downward spiralling prices in competitive markets. --- 3.3 The goal of this study is to investigate the innovation practices in the Westland horticulture sector to obtain an understanding of how this observed pattern of innovation has emerged and how the underlying behaviour of growers is shaped and maintained. We use grounded theory as our methodology to perform ethnographic fieldwork. Besides using MAIA for data collection and model development, we perform a theoretical analysis using the Bathtub model of Coleman (1986) and several other theories (see Schrauwen 2012). The rationale for adopting a fieldwork approach (rooted in cultural anthropology) is that the organizations and innovation practices are socially embedded, and can be studied as such. Furthermore, the Westland is said to be home to Westlanders who share a common identity with respect to social and business culture, which is shaped by and has shaped their core business for centuries (Kasmire et al. 2013). The Modelling Process --- 4.1 The purpose of our methodological practice is to guide the collection of data for building an agent-based model using an ethnographic approach. This process is divided into two parts. The first part uses MAIA as a template for information collection, which includes field observation, interviews and the study of formal documents. For each of these methods, we make use of the MAIA framework to semi-structure the data collection process. The second part uses the collected information to build a MAIA model.
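Before turning to the data collection itself, the ADICO grammar introduced in Section 2.8 can be illustrated with a minimal sketch of one possible way to encode an institutional statement as a data structure. This is an illustration only, not part of the MAIA specification or its online tool; the example rule and all field names are hypothetical.

```python
# Illustrative sketch: encoding an ADICO institutional statement as a record.
# Not part of MAIA; field names follow the A-D-I-C-O grammar described above.
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Deontic(Enum):
    PROHIBITION = "prohibition"
    OBLIGATION = "obligation"
    PERMISSION = "permission"

@dataclass
class ADICOStatement:
    attribute: str           # A: the actor the institution applies to
    deontic: Deontic         # D: prohibition, obligation, or permission
    aim: str                 # I: the action or outcome targeted by the institution
    condition: str           # C: when the statement holds
    or_else: Optional[str]   # O: sanction for non-compliance (None would indicate a norm or shared strategy)

# Hypothetical example of a formal rule for subsidy applicants
subsidy_rule = ADICOStatement(
    attribute="grower",
    deontic=Deontic.OBLIGATION,
    aim="report investment costs to the subsidy agency",
    condition="when applying for a GMO subsidy",
    or_else="the application is rejected",
)
```

A representation along these lines makes the later distinction between formal rules (with an 'or else'), norms, and shared strategies explicit in the data rather than implicit in prose.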
--- Collecting data using MAIA Structuring interviews with MAIA --- 4.2 In inductive ethnographic research, interviews are normally semi-structured. Therefore, it is common practice to develop a general structure or guideline for the interviews, to ensure that all relevant aspects are addressed. We use MAIA as the general structure for the interviews in order to cover all the information required to build an agent-based model. At the same time, we leave the questions open-ended, so that the interviewees feel free to talk about what seems relevant to them. --- 4.3 The interviews were conducted with various stakeholders in the Westland horticulture sector (Schrauwen 2012): Experts: Experts were interviewed to gain better insight into the sector as a whole and also to evaluate the assumptions that were being made during the analysis and modelling phase. Growers: Fifteen growers were visited at their organization. Each interview took between two and five hours. The growers were either contacted directly or introduced by other respondents. Organizations: The bank, churches, educational institutes, the municipality, LTO GlasKracht and a supermarket were the other actors interviewed in order to find out their influence on the social network of growers, their individual capital and investment, and their knowledge and background. --- 4.4 The concepts that were used to structure the interviews and direct the questions are: -Collective Structure Agent Decisions: What decisions do the growers make regarding their innovation practices? The growers are allowed to talk about their decisions freely without being forced to explain how they make those decisions [6]. Agent personal value: The growers are asked what they care about most when they are making those decisions. Related Agents: During the interviews, the growers are asked about other social entities they may be interacting with. These can be individual actors, such as other growers, or composite actors (i.e., of an organizational type) such as the bank or the municipality. -Operational Structure Actions and Plans: The growers are asked what their general activities are and how often they perform these activities. In this case study, they were asked about their daily, monthly and yearly activities. Where a practice constitutes a process, they were also asked about the events that take place in that process. For example, if a grower decides to apply for a subsidy, what actions does he have to perform during the application process? -Constitutional Structure Roles: The growers are implicitly asked about the different roles they take in their activities. This is not a straightforward question, but one that needs to be extracted from the explanations the growers provide. For example, a grower explains that he has to be a client of the bank to apply for a particular subsidy, or he emphasizes that he would only expand his greenhouse if he has a child who is willing to take over. From these remarks we can identify 'bank client' and 'being a father' as two of the roles the growers may assume under certain conditions. Formal Institutions: While asking about the operational activities and decisions, the subjects are also asked about the formal procedures, rules and regulations they need to go through. This is later used to collect relevant institutional documents.
-Physical Structure Physical Components: During the interviews, the subjects are asked about the physical entities they use in their activities, the ones they own or the ones that influence their actions. It is important to ask about this aspect while the interviewee is talking about the activities he performs, in order to limit the information to what is relevant. The interviews are recorded and coded in Atlas.ti for later analysis. Using MAIA for field observation 4.5 During field observation, it is important to identify the relevant properties of the entities (i.e., agents and physical components) that are addressed during the interviews. The composition of the physical entities and their connections may be observed in the field and defined as physical components in the physical structure of MAIA. Thus, in a fashion similar to setting up the general structure for the semi-structured interviews, the MAIA structures can be used as a template for collecting data during field observation. Using MAIA for studying formal documents --- 4.6 The formal documents are collected according to the information provided by the subjects. To collect the right information for modelling institutions, the ADICO structure (see Section Background) is used as the template. Building a MAIA model 4.7 Upon completion of the previous steps, the collected data is used to build an agent-based model. This process is conducted by extracting relevant information from the data using the MAIA framework. Again, we look at the structures one by one to clarify the process [7]. Collective Structure --- 4.8 The interviewed subjects can be defined as agent-types. Each subject can be defined as one separate agent-type if the simulation is limited to the people interviewed; alternatively, one may group the agents according to some criterion and use each category to define a separate agent-type. In the greenhouse case, the 15 growers that were interviewed were divided into five categories distinguished by their stated priorities, their physical assets and their characteristics. The first category is the niche growers, whose greenhouses are relatively small in size and whose innovation activities are mainly marketing- and product-oriented. The other four categories are the large bulk growers, innovative bulk growers, moderate bulk growers and shop growers (see Schrauwen 2012). --- 4.9 Agents in the simulation are not limited to the interviewees; there may also be social entities that were addressed during the interviews. For example, the European Union was a social entity addressed by the growers, which influences their innovation strategies. This entity is, therefore, also defined as an agent in the simulation. 4.10 From the qualitative data, whether in the form of field observation or interviews, the properties, personal values, intrinsic behaviours and decision-making of the actors are extracted to build the agents in the model. --- Constitutional Structure 4.11 The main aspect of the constitutional structure is the institutions. These can be formal institutions extracted from legal documents, or informal institutions, namely norms of behaviour and shared strategies extracted from the interviews or field observations. The patterns of behaviour observed in interviews can be the result of rules imposed by the society. These are defined as norms or shared strategies. If the rule of behaviour contains an obligation or prohibition, it is by definition considered a norm.
If the actors perform the same routine without any obligation from the system, that routine can be considered a shared strategy. All the formal and informal institutions are modelled as ADICO statements as defined in Section Background. Table 1 shows some of the institutions extracted from the interviews and legal documents. Physical Structure 4.12 Similar to building agents, the physical entities that are addressed by the interviewees are extracted from the text and defined as physical components in the MAIA model. These include energy, the greenhouse and machinery (i.e., the innovative technology they adopt). The properties of these components are identified through field observation in addition to interviews. For example, during fieldwork it became clear that two properties, namely the size of the greenhouses and the type of crops, mainly distinguish growers from each other. Operational Structure 4.13 The events that were described by the interviewees are defined as actions in MAIA. The conditions for performing those actions and the outcomes of the actions should be extracted from the descriptions the subjects provide. The described sequence of actions helps to define agent plans in MAIA. Finally, the modeller has to make a decision about the time loop and the actions that take place per tick. For this study, we decided that in each tick, seven action situations take place according to the following sequence: Daily life: In this action situation, the intrinsic capabilities of actors are exercised: being born, dying, having a child, learning and starting relationships. Cooperating: Within the action situation of cooperating, growers can group together and make a joint decision on investments in innovations. Also, knowledge, norms and values are shared amongst growers that are cooperating, adding to the social capital of the growers. GMO: In this action situation, growers request a GMO (Gezamenlijke Markt Ordening, collective market structuration) subsidy, through which they may recover half of their investments. GMO applications can either be accepted or rejected. Previous subsidy receivers may also be punished in this action situation, based on their previous actions. Loan: In this action situation, the grower can apply for a loan. He has to pay back his loan and report his money level to the bank, which may take over when the grower is in trouble. Innovating: In the innovation situation, the growers make decisions to invest in one of the categories of innovations. They invest their money in that innovation, while adopting a new physical component (i.e., technology) with specific characteristics in their greenhouse. Cultivation: In the cultivation situation, all horticulture-related activities are performed, such as cultivation, employing technologies, and increasing efficiency. The investments of the previous round of innovations affect the cultivation process and produce outcomes in terms of products, efficiency, use of inputs, etcetera. Also, the money level is checked and reported to the bank (if the grower is a member). Selling: In the selling situation, growers calculate the costs and value of their products and calculate a market price. They sell their products to the merchandisers. Products are exchanged for money. Evaluative Structure 4.14 To build the evaluative structure of MAIA, not only the collected data but also the anthropological analysis was used.
We defined a set of variables that can be used to measure and study the possible emergent system elements of the simulation according to this analysis. --- 4.15 The theoretical analysis showed that a phenomenon called 'isomorphism' steers companies towards the same characteristics, which gives rise to similar innovation practices that are not effective in the long run and may even harm the sector. To explore this phenomenon in the simulation, we defined the variable 'homogenization' to calculate the variation in innovation types. This value would be measured through time. The correlation between subsidies and this variable is also identified as a parameter of interest according to the ethnographic analysis. 4.16 One other issue in the analysis was 'decreasing product value'. Many products, especially bulk products, are sold with little margin. This means that the income flowing back to the grower is at risk of being less than the cost, which decreases their capital. A single innovation that does not give good returns may put them in danger and may even cause bankruptcy. Therefore, another variable to keep track of in the simulation is the development of product value (i.e., product price) in relation to time and different innovation types. 4.17 The sector's sustainability is another point of interest in the study. This issue stands on three different pillars, namely the economic, ecological and social pillars. To experiment with these pillars in the simulation, for the economic pillar the ratio between product value and bankruptcy is calculated in relation to subsidies, loans and time. For the ecological aspect, the relation between water, energy and nutrient use, and the amount and value of products is defined as a metric. Finally, to track the social influence, we define two variables: social capital and bankruptcy. 4.18 In this section, we presented an overview of the process of ethnographic data collection and analysis used for conceptualizing an agent-based model of the horticulture sector. We explained how MAIA concepts can be used to inform data collection, and to build an agent-based model. In the next section, we will generalize this methodological procedure to make it applicable to other social studies. Generalizing the Process --- 4.19 Figure 1 shows the general process of using ethnographic data to build an agent-based model using MAIA. Some concepts in the MAIA structures, as illustrated on the left side of the figure, are primarily used to semi-structure the data collection process. The collected data is then decomposed into an agent-based model, again using the MAIA structures. 4.20 As Figure 1 shows, there is a cycle between the ethnographic research and the building of a MAIA model. Although semi-structuring data collection minimizes the need to redo interviews, it may still be necessary to collect further information for the model. This would especially hold for field observations and document collection. 4.21 Besides building the conceptual model, the ethnographic data is also used to perform theoretical analysis. Not only can this analysis be used to further enrich the model, specifically in the evaluative structure (see previous section), it is also used to draw conclusions. These conclusions can be used independently or in combination with the simulation results. Some sort of triangulation can thus be completed, comparing the social analysis with the dynamics generated by running the model.
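To give a flavour of how an evaluative indicator such as the 'homogenization' variable from 4.15 might be computed from simulation output, the sketch below uses the concentration of innovation-type shares across growers. The exact formula is not specified in the study, so this operationalization is an assumption, and all names and data are hypothetical.

```python
# Illustrative only: one possible operationalization of 'homogenization'
# as the Herfindahl concentration of innovation-type shares per tick.
from collections import Counter

def homogenization(innovation_types):
    """Return a value in (0, 1]; 1 means every grower uses the same innovation type."""
    counts = Counter(innovation_types)
    total = sum(counts.values())
    shares = [c / total for c in counts.values()]
    return sum(s * s for s in shares)

# Hypothetical tick-by-tick usage: track the indicator over simulated time.
history = []
for tick_types in [["cost", "cost", "niche"], ["cost", "cost", "cost"]]:
    history.append(homogenization(tick_types))
print(history)  # rising values indicate increasing homogenization
```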
What may be an issue with this triangulation, however, is that the same input data is used for both methods, so they are not completely independent. Discussion 5.1 Building an agent-based model requires both quantitative and qualitative data. Although much of the information can be represented in the form of numeric values, the actual context of the model, which captures the order of events and how agents make decisions and interact, requires qualitative information. Ethnography can provide rich data for building agent-based models at both micro and macro levels. However, it needs structure and interpretation to be actually applicable to this simulation approach (Yang & Gilbert 2008). In this paper we presented MAIA as a tool to collect and structure ethnographic data for ABMS. The process of building an agent-based model for the horticulture sector helped us to identify several benefits of using this tool. --- 5.2 First, the MAIA framework ensures consistency and coherence between the features extracted from the ethnographic process. Since MAIA is constructed as a software meta-model, its soundness, completeness and parsimony have been verified (Ghorbani 2013). Therefore, the modeller can be confident that the collected and structured data is by default consistent in the model. --- 5.3 Second, as Dey (2003) indicates, analysing qualitative data also involves an abstraction process, which may not be a straightforward task given the immense amount of detail provided by ethnography, which mostly concerns individuals. Since MAIA is an abstract template or 'ontology' for a set of concepts, it proved to be highly instrumental in facilitating and documenting this abstraction process. --- 5.4 Third, another contribution of MAIA in making use of ethnographic data is that it helps to identify the normative aspects of the system. The insights people provide about their view of the world through interviews are not based on external reality but are culturally generated and emergent. With the ADICO statements in MAIA, the modeller can extract the norms and shared strategies from the interviews in order to add a cultural/institutional dimension to the simulation. --- 5.5 Fourth, an important contribution of using MAIA is that not only the collected ethnographic data can be used to build an agent-based model; the theoretical analysis performed on the data is also put to use. The theoretical ethnographic analysis helps define the variables that measure the outcomes of the simulation. These variables are covered in the evaluative structure of MAIA. Therefore, besides informing agent behaviour, the methodological process introduced in this paper can help measure the possible outcomes of interest, i.e., macro-level patterns, for the simulation. --- 5.6 Fifth, when an ethnographic researcher uses MAIA, her activities become more structured and tractable. We anticipate this will facilitate the interpretation and discussion of field research, and lead to a growing body of empirically grounded information that can be re-used for modelling and research studies. --- 5.7 Finally, linking the bodies of knowledge of anthropology and agent-based modelling of social systems may be mutually beneficial. We believe the proposed method supports non-computing anthropologists in building agent-based models in order to complement their research methods. To explore the feasibility of this claim, an anthropologist performed the whole process, starting from the ethnographic fieldwork through to the development of the conceptual model.
We observed that MAIA can indeed bring ABMS within the reach of anthropologists who have no prior familiarity with modelling. 5.8 Indeed, to build agent-based models from such data, a major difficulty is the step from a limited number of interviewed individuals to the creation of a whole society. The stories and decision-making are usually personal and related to personal incidents; it is hard to derive well-defined 'types' of agents from them, because those coincidental incidents in life have a large influence. While estimating the percentages of the types of people forming the society is hard, in the eventual ABM these can become parameters for variation. --- 5.9 Finally, it is important to emphasize that the structuring of collected data, although greatly facilitated by MAIA, still depends on the creativity of the modeller. There are many choices and interpretations that the modeller has to make to transform qualitative data into an agent-based model. When MAIA is used, however, there will be both an unambiguous language for communicating about the decisions taken and a traceable record of how the researcher moved from empirical data to the interpreted model and model results. --- Conclusion 6.1 Managing and structuring data, especially qualitative data, is a major challenge for agent-based modelling. This research presented a method to effectively use ethnographic data for building agent-based models. --- 6.2 We used the MAIA framework to semi-structure the data collection procedure and later used the same framework to decompose the information and build a conceptual agent-based model. The conceptual model can then be used to produce running simulations. --- 6.3 Although MAIA facilitated the structuring of qualitative information, another phase of data collection is required, namely one to complete the quantitative aspects of the simulation. This phase is not yet supported by the methodological process presented here. Therefore, the next step of this research is to extend the MAIA framework to support the quantitative data collection process. Figure 2. The UML class diagram for the MAIA meta-model (Ghorbani et al. 2013)
Using ethnography to build agent-based models may result in more empirically grounded simulations. Our study on innovation practice and culture in the Westland horticulture sector served to explore what information and data from ethnographic analysis could be used in models and how. MAIA, a framework for agent-based model development of social systems, is our starting point for structuring and translating said knowledge into a model. The data that was collected through an ethnographic process served as input to the agent-based model. We also used the theoretical analysis performed on the data to define outcome variables for the simulation. We conclude by proposing an initial methodology that describes the use of ethnography in modelling.
INTRODUCTION The COVID-19 pandemic had a drastic impact on family life. Parents worried about their own and their families' health, job losses, and salary reductions, while keeping up their family life in social isolation. Moreover, because of (partial) school closures, families were suddenly faced with the additional pressure of homeschooling their children. There may be considerable variability in how families deal with pandemic challenges and the extent to which they were impacted by COVID-19. For some families, the sequelae of the pandemic may lead to heightened psychological distress and, in turn, an overreliance on less effective parenting practices such as a harsh disciplinary style or even child abuse or neglect (1), with a negative impact on children's wellbeing. Other families, however, may manage relatively well. The current study therefore aims to identify risk and protective factors associated with impaired parenting during the lockdown amidst COVID-19. More specifically, we examined key family factors predicting maternal harsh discipline across three countries, China, Italy, and the Netherlands, using a cross-validation modeling approach (2,3). We particularly focused on the role of support from fathers and grandparents as a protective factor facilitating mothers' adaptability and buffering the effects of pandemic-related distress on caregiving behaviors. Harsh discipline, characterized by parental attempts to control a child using verbal violence (e.g., screaming) or physical punishment (e.g., hitting) (4), can be considered child emotional or physical maltreatment (5,6). Given the long-term negative consequences of maltreatment for children's development (7), examining the predictive performance of factors contributing to harsh parenting is essential for identifying at-risk families and preventing detrimental effects on children during future pandemics. --- Kinship Networks and Harsh Parenting The traditional African proverb "It takes a village to raise a child" may express an underlying truth (8). Mothers, or fathers, do not rear children on their own; rather, childrearing is usually embedded in larger kinship networks (e.g., grandparents, relatives, neighbors) and communities (schools, daycare centers) that offer support with childcare and/or education. This shared child care appears crucial for parental well-being and optimal child development. For example, involvement of nonresidential grandparents decreases parental stress and promotes children's well-being by stimulating prosocial behaviors and academic engagement (9). Similarly, support from relatives, friends, or neighbors reduces parental stress and lowers risk for child abuse and neglect (10). However, during COVID-19, support outside the family unit was abruptly lost due to social distancing, closures of schools and daycare centers, and other pandemic and lockdown restrictions. Parents suddenly needed to rely solely on each other, yet distress triggered by the pandemic may interfere with the ability to provide adequate partner support (11). These circumstances may increase risk for harsh parenting practices. --- Pre-existing Vulnerabilities and Harsh Parenting Families with pre-existing vulnerabilities may be particularly at risk for inadequate or harsh parenting during the pandemic.
For example, economic hardship is an important factor contributing to risk for child abuse and neglect (6), but the level of risk that pandemic-related financial insecurities pose for parenting abilities likely depends on families' financial situation prior to the pandemic (11). Similarly, psychological distress induced by the pandemic may be particularly difficult to regulate for parents with pre-existing mental health problems, another well-known factor elevating risk for harsh parenting (6). Further, major life stressors, such as the COVID-19 pandemic, may lead to marital conflicts and dissolution or intimate partner violence (IPV) (11). The first studies on family functioning during COVID-19 report increased rates of IPV (12), which may spill over to and harm the child because violence is modeled as a way to deal with conflicts that may also emerge in the parent-child relationship (6). Lastly, environmental factors, such as overcrowded living conditions and lack of access to private outdoor space, may further elevate risk for abuse (13), in particular during lockdown amidst COVID-19 when families are required to stay home. --- Protective Factors and Harsh Parenting Protective factors may, however, buffer the negative effects of COVID-19 on parenting abilities. These protective factors may either lie at the level of the individual parent, such as good (pre-existing) mental and physical health, or may be located in the family composition. One potentially important factor buffering the impact of crises, such as COVID-19, on maternal caregiving is allomaternal care, that is, childcare by adults other than the biological mother, including fathers, grandparents, and other group members. Evidence from studies with high-risk families underscores how much allomaternal support matters. For example, father support reduces the adverse long-term effects of maternal depression during a child's infancy on later child behavior problems (14), suggesting that father involvement may compensate for maternal stress. In contrast, in families where father involvement is low or the father is absent, as in the case of single mothers, mothers are at increased risk for abusing or neglecting their children (15,16). Other family members may also offer allomaternal assistance, such as older siblings (17) and grandmothers (18). Research shows that the presence of a grandmother in the same household as a teenage mother increases the quality of mothering and, in turn, the chances of a secure mother-infant attachment relationship (19). Similarly, having a grandmother at hand predicts improved health and cognition among low birthweight infants (20), although under adverse conditions, such as extreme poverty, the presence of grandparents may reduce the life expectancy of offspring because they use scarce resources (21). These findings are in line with the grandmother hypothesis (22), which states that the extended human female postmenopausal lifespan is an evolutionary adaptation that allows grandmothers to provide allomaternal care to their grandchildren in order to increase their fitness. Based on the grandmother hypothesis, it could be expected that shared childrearing may function as a resilience buffer in times of adversity and may also exert protective effects on mothers' caregiving abilities in times of pandemic. --- Cultural Differences Across the Netherlands, Italy, and China Although the cooperative nature of human childrearing is universal (23), it is influenced by cultural and economic factors (24).
For instance, Western-European families are often only partly supported in child care by grandparents, whereas in low- and middle-income countries, for example, grandparental involvement is much stronger (25). Moreover, the probability of grandparental co-residence with children and grandchildren is higher in nonwestern societies with traditions of filial piety (26). In China, co-residence with extended family, including grandparents, is common practice (27) and grandparents are often involved in full-time child care. The grandmother in particular is an important child care provider for Chinese mothers who need to balance the competing demands of childcare and (full-time) work in the absence of adequate child care provisions (28). Chinese fathers also share care with mothers and are more likely than in the past to emotionally invest in their children because the one-child policy has weakened gender roles (29,30). In contemporary China, child rearing is therefore considered a joint mission of mothers, fathers, and grandparents, who together form an intergenerational parenting coalition (27). During COVID-19, this extended family may be a source of resilience as the unexpected burden of the pandemic is shared among more people. Indeed, in a previous study with the same sample, we found that support from grandparents during the lockdown was associated with fewer maternal mental health symptoms (31). From an evolutionary perspective, it has been argued that human childcare practices in the context of extended families enhance children's survival by sharing the costs and load of raising children (18). Exclusive maternal care has even been considered out of step with nature (18) because, according to calculations of evolutionary anthropologists, human children consume more than 13 million calories before they reach adulthood (32), which is far more than a mother can provide. Contrasting with extended families in China, in most western societies, including Italy and the Netherlands, the nuclear family, consisting of parents and children living apart from grandparents and other relatives, is the traditional family form [e.g., (33)]. This may be disadvantageous during the lockdown. Non-residential grandparents, among those most vulnerable to COVID-19, were kept at a distance from children and grandchildren, which increased their chances of survival but posed a problem for working parents who had grandparental childcare support prior to the pandemic. For mothers in nuclear families, father involvement in childcare may be an important resilience factor buffering the effects of the pandemic on maternal caregiving. Yet, father involvement varies across cultures and paternal behaviors should not be presumed to have similar influences on mothers' caregiving behaviors across different cultural groups. For example, Craig and Mullan (34) showed that mothers' and fathers' work arrangements only predicted equal distribution of childcare between parents in countries supporting equal gender divisions. In Italy, where gender inequality is high and the rate of female employment is amongst the lowest in Europe (35), fathers do not adjust for mothers' working hours (34). Italian fathers tend to stick to unequal shares of childcare, prompting Italian families to rely on additional sources of allomaternal support. Due to the modest availability of formal child care and a ubiquitous feeling of compliance, it is customary for Italian grandparents to assist parents and take care of their grandchildren on a regular basis (36).
Contrasting with Italy, the Netherlands shows a lower prevalence of the male breadwinner family. Dutch mothers often switch to a part-time job while fathers keep working full-time after becoming parents (37). This is also known as the one-and-a-half earner household (38). Although Dutch women still bear the largest part of the burden of household chores and child care activities in daily life (38), levels of gender equality are considered quite high (39). The Dutch formal child care system is used by a large proportion of parents (38,40). Nevertheless, many parents in the Netherlands prefer to combine formal child care with some kind of informal child care, the most prevalent form of the latter being non-residential grandparents taking care of their grandchildren (40). Co-residence with grandparents is, however, uncommon in the Netherlands, and COVID-19 separated many Dutch children from their non-residential grandparents, thus reducing sources of allomaternal support. In addition to cultural differences in family composition, culture may also shape parenting practices, since cultural values and norms may affect attitudes about raising children, which may in turn influence parent-child interaction (41). It is therefore important to take into account the role of cultural context (42) when examining parenting during the COVID-19 lockdown. More specifically, parents may acquire certain beliefs about disciplinary styles, such as corporal punishment, within a cultural context, and harsh discipline may occur more often in cultures or countries where the practice of violence is viewed as acceptable or normative. For example, a cross-cultural study on parenting across six countries by Lansford, Chang et al. (43) showed that harsh parenting is most prevalent in countries where physical discipline is perceived as normative by parents. However, other research shows that there are far more cultural similarities than differences in parenting practices and that differences among cultural groups disappear when socioeconomic status is controlled (44). --- Aims and Hypotheses In the current study we examined risk and protective factors predicting harsh parenting among mothers with children aged 1-10 years during the COVID-19 lockdown in China, Italy, and the Netherlands. Examining harsh parenting during the lockdown is important because expressions of violence in a family context have negative effects on children's development and psychosocial adjustment (45,46). Our study extends a previous study in which we examined maternal mental health during the lockdown, but did not examine harsh parenting (31). Initial findings of research on the impact of COVID-19 point to increases in harsh parenting, with pandemic-related distress as a mediator (47). However, social and cultural context may either accentuate or minimize the impact of individual-level and family-level factors predicting harsh parenting. Hence, the constellation of parent and family characteristics as predictors of maternal harshness may not be replicable across countries. In the current study, maternal harsh parenting will therefore be examined across cultures by applying a cross-validation approach (2) for selecting models predicting maternal harshness in each country. Cross-validation allows accurate estimation of how a model would perform on other samples (3). In a predictive modeling context, cross-validation does not select the model predictors based on statistical significance, but based on their predictive performance.
Predictive performance is especially important for the purpose of the current study, because, in the case of future pandemics involving lockdowns, identifying families at risk of harsh parenting or even child abuse is essential. It can be expected that previously identified antecedents of child abuse and neglect, such as parental psychopathology, marital conflict, low socioeconomic status, low father involvement, a large number of children, and poor housing (6,15,16,48), also enhance risk for harsh caregiving in the time of COVID-19. However, in addition to these previously identified antecedents, risk factors more closely related to acute COVID-19-related stress, such as COVID-19-related concerns about health and work, may further elevate risk for maternal harshness, whereas allomaternal support may exert protective effects on mothers' caregiving abilities. Hence, our first hypothesis was that previously identified risk factors for child abuse and COVID-19-related stress about health and work would increase risk for harsh maternal caregiving, whereas involvement of fathers and (co-residential) grandparents would buffer against risk. Second, we hypothesized, in line with the grandmother hypothesis (22,49,50), that grandparental involvement would be particularly beneficial for mothers with young children who are still highly dependent on the physical and emotional availability of caregivers. Third, we expected that high levels of allomaternal support, i.e., support from both fathers and grandparents, facilitate mothers' adaptability and mitigate the effects of pandemic-related distress on caregiving. Lastly, we hypothesized that mothers in the three countries may be differently impacted by the pandemic. This expectation was also based on our previous finding that grandparental support during the lockdown lowered the risk of mental health symptoms for Chinese mothers, but not for Italian and Dutch mothers (31). Although child physical abuse is a global phenomenon, unaffected by cultural-geographical factors (51), factors predicting harsh parenting during COVID-19 may differ across countries due to cultural variations in allomaternal support. Thus, we tested the hypothesis that the constellation of factors contributing to maternal harsh parenting during COVID-19 is subject to influences of family composition and may therefore vary across countries. --- METHODS --- Participants and Design Dutch, Chinese, and Italian parents aged 18 years or older with children between 1 and 10 years were invited to participate by completing an online survey. In each country, parents were recruited by contacting elementary schools. In the Netherlands and Italy, parents were also recruited by contacting day care centers and via social media advertisements (Facebook, LinkedIn, Twitter). Dutch parents were also recruited by distributing the questionnaire among parents who were members of the Dutch I&O research panel (www.ioresearch.nl). The minimum sample size was 400 parents in each country, providing sufficient power to detect moderately sized correlation coefficients (power = 0.80, r = 0.20) between harsh parenting and each of the predictor variables, but we strived for larger sample sizes. Parents who completed the questionnaire but did not meet the inclusion criteria (e.g., they only had children older than 10 years; N = 8 Dutch parents, N = 47 Chinese parents) were excluded. The final sample consisted of 1,156 Dutch parents, 674 Italian parents, and 1,243 Chinese parents.
Fathers were excluded from the analyses for the purpose of the current study, resulting in a sample of 900 Dutch, 641 Italian, and 922 Chinese mothers. Characteristics of the Dutch, Chinese, and Italian samples are presented in Table 1. Permission for the study was obtained from the local ethics committees of the School of Social and Behavioral Sciences of Tilburg University, the Department of Psychology of Padua University, and the Peking University Medical Ethics Board. Participants gave informed consent and were given a chance at winning a gift voucher. --- Procedure Data was collected using Qualtrics in Italy and the Netherlands, and using a web-based platform (https://www.wjx.cn/app/survey.aspx) in China. Timeframes for data collection were April 17-May 10, 2020 for the Netherlands, April 21-June 13, 2020 for Italy, and April 21-April 28, 2020 for China. During these timeframes, governmental pandemic measures in the three countries included remote working, keeping social distance from others, and the closure of schools and daycare centers. In each country, older people in particular were advised to keep their distance. Dutch people were allowed to leave their home if they had no COVID-19 diagnosis or symptoms and if they had not been exposed to infected others. In Italy, too, people were gradually allowed to leave their homes during the period of data collection (after May 4). The Chinese data was collected in the aftermath of the COVID-19 peak, but pandemic restrictions were comparable to those in the Netherlands and Italy. As in Italy and the Netherlands, people worked remotely and were allowed to leave their homes, but were advised to keep social distance. We focused on recruitment in the regions that were most affected by COVID-19, that is, North Brabant (the Netherlands), Lombardy (Italy), and Henan, Hubei, and Shenzhen city (China), although parents from other regions in Italy and the Netherlands were also allowed to participate. --- Measurements --- Parent-Child Conflict Tactics Scale The Parent-Child Conflict Tactics Scale (CTSPC) (52) was administered in order to assess maternal harsh disciplinary style. The CTSPC measures psychological and physical maltreatment and neglect of children by parents, as well as sensitive modes of discipline. For the purpose of the current study, we focused on the psychological aggression (five items) and physical assault (four items) subscales. An example item of the psychological aggression scale is "I shouted, yelled, or screamed angrily at my child", while an example item of the physical assault scale is "I slapped my child on the hand, arm, or leg". One item of the original 5-item physical assault subscale was excluded in order to prevent feelings of discomfort in parents. Mothers rated how often they used the different types of disciplinary behavior in the past two weeks on a 6-point scale, ranging from 'never' to 'more than 5 times'. A harsh parenting score was calculated by summing the nine items of the psychological aggression and physical assault subscales. Confirmatory factor analyses for ordered categorical item scores indicated that a 1-factor harsh discipline model fitted the data (RMSEA (95% CI) = 0.067-0.08; CFI = 0.969; SRMR = 0.057). The estimated reliability was good (McDonald's Omega = 0.99). --- Allomaternal Support Participants were asked to indicate whether or not they received support in child care from residential or non-residential grandparents.
In Italy and the Netherlands, very few mothers reported receiving support from residential grandparents (Italy: 3.0%, N = 19; the Netherlands: 1.1%, N = 10), whereas approximately half of the Chinese sample reported a cohabitating grandparent (China: 53.1%, N = 490). Despite governmental recommendations to keep a safe distance from grandparents, some mothers reported child care by nonresidential grandparents (Italy: 15.3%, N = 98; the Netherlands: 8.3%, N = 75; China: 0.5%, N = 4). Since the number of parents receiving support from nonresidential grandparents was very low, we decided to combine support from residential and nonresidential grandparents. In addition, involvement of the father in household tasks and child care was assessed by asking about the degree of maternal and paternal contributions to 20 household chores or child care activities. Activities included: homeschooling, clearing the table, large purchases, loading the dishwasher/washing dishes, grocery shopping, cooking, small purchases, paying bills, cleaning up the house, chores in and around the house, making beds, washing and dressing the child, cleaning the house, bringing the child to bed, soothing the child at night, making a list for grocery shopping, washing clothes, ironing, washing the car, and taking out the trash. Mothers were asked to rate their own contribution and the contribution of their child's father to these tasks in the past week on a scale ranging from 1 (almost exclusively mother) to 5 (almost exclusively father). Cronbach's Alpha was 0.90. The average of these 20 item scores was used as a measure of father involvement, with higher scores representing greater involvement of the father. --- Work Changes and Stress Participants reported on changes in their employment that occurred due to the COVID-19 outbreak, such as loss of hours or job, or decreased job security. Mothers reported on the following work changes: moved to remote working, loss of hours, decreased pay, loss of job, decreased job security, disruptions due to childcare challenges, increased hours, increased responsibilities, increased monitoring and reporting, loss of health insurance, reduced ability to afford childcare, reduced ability to afford rent/mortgage, having to fire or furlough employees, and a decrease in the value of retirement, investments, or savings. A total score was calculated by summing reported negative changes. In addition, participants reported on the level of distress they experienced due to the employment and financial impacts of the COVID-19 outbreak on a Likert scale ranging from 1 (no distress) to 10 (severe distress). The correlation between work changes and work-related distress was r = 0.35, p < 0.001. --- General Psychopathology Mental health was measured with the Brief Symptom Inventory 18 (BSI-18, omitting suicidality), measuring somatization (six items), depression (five items), and anxiety (six items), and a subset of 10 questions of the posttraumatic stress disorder (PTSD) checklist for DSM-5. Because these four latent mental health constructs were highly correlated (r ranging from 0.776 to 0.961), aggregate psychopathology scores were computed by averaging all 27 item scores. Confirmatory factor analysis for ordered categorical data supported this decision by indicating that one general psychopathology factor adequately explained the correlational structure of the four latent psychopathology factors (RMSEA = 0.06; CFI = 0.974; SRMR = 0.043). In addition, health concerns specifically related to COVID-19 were measured.
Parents rated the level of distress they experienced due to COVID-19-related symptoms or potential exposure experienced by themselves or by their family or friends. A score representing general COVID-19-related health concerns was calculated by averaging the two items measuring concerns for self and for family and friends. The correlation between health concerns for self and health concerns for others was r = 0.825, p < 0.001. --- Statistical Analysis All analyses were conducted using the freely available software R [version 4.0.2; (53)]. Means and standard deviations were computed for continuous and normally distributed characteristics, and median and range were used for nonnormally distributed continuous variables. Categorical characteristics were expressed in frequencies and percentages. For continuous characteristics, the differences between the three countries were tested using one-way analyses of variance and interpreted using the Eta squared effect size. Chi-square tests were used for categorical characteristics and interpreted using the Cramer's V effect size. The 9-item harsh discipline scale was used as the primary outcome measure in all cross validation analyses. The R-package xvalglms (2) allowed for conducting linear regression analyses using K-fold cross validation. Cross validation allows for estimating how a model would perform on other samples. This out-of-sample predictive performance is more accurately determined by cross validation than by traditional model fit measures such as R-squared (3). Other advantages of cross validation are that (1) it prevents overfitting the model to the idiosyncrasies of the data collected, (2) regression model assumptions that are often violated [e.g., a linear relation between a predictor and the outcome; homoscedastic and normally distributed residuals; (2)] are no longer required, and (3) it does not rely on p-values to determine the significance of a predictor, thereby preventing the problems related to p-hacking [e.g., inflated false positive rates; (54)]. Our cross validation analyses involved two steps. In the first step, ten folds and 200 repeats were used to determine which combination of the 15 predetermined effects showed the best predictive performance in each of the three countries. This project's open science framework page includes a list of the predetermined effects, as well as the R-scripts (https://osf.io/9w8td). The inclusion or exclusion of each of those 15 effects corresponds to a total of 2^15 = 32,768 different regression models. Given that interaction effects were investigated, incorrectly specified models were excluded (i.e., those including interaction effects without the corresponding main effects), resulting in a final set of 13,311 regression models. For each country, each of those 13,311 models was fit to each of the 200 repeatedly drawn training datasets. In each repeat, the full data was split randomly into ten parts. One of those parts served as the training data, the remaining nine as the test data used to validate the model estimated on the training data. The predictive performance on these test datasets was evaluated in terms of the root mean square error of prediction (RMSEp). For each country, the model that most often showed the lowest prediction error across the 200 repeats was considered to have the best predictive performance.
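To make this first step concrete, the sketch below shows a repeated K-fold comparison of a few candidate regression models. It is illustrative only: the authors used the R package xvalglms, the candidate models and variable names here are hypothetical, and the splitting scheme is the conventional one (train on K-1 folds, test on the remaining fold), which may differ in detail from the procedure described above.

```python
# Minimal sketch of repeated K-fold model comparison by prediction error.
# Assumptions: a pandas DataFrame `df` with a 'harsh_parenting' outcome and
# hypothetical predictor columns; candidate models are sets of predictors.
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold

candidates = {
    "M1": ["psychopathology", "marital_conflict"],
    "M2": ["psychopathology", "marital_conflict", "work_stress"],
    "M3": ["psychopathology", "marital_conflict", "father_involvement"],
}

def repeated_kfold_wins(df, outcome, candidates, n_repeats=200, n_splits=10, seed=1):
    """Count, over repeated K-fold runs, how often each candidate model
    achieves the lowest mean RMSE on the held-out folds."""
    wins = {name: 0 for name in candidates}
    y = df[outcome].to_numpy()
    for rep in range(n_repeats):
        kf = KFold(n_splits=n_splits, shuffle=True, random_state=seed + rep)
        rmse = {name: [] for name in candidates}
        for train_idx, test_idx in kf.split(df):
            for name, cols in candidates.items():
                X = df[cols].to_numpy()
                model = LinearRegression().fit(X[train_idx], y[train_idx])
                pred = model.predict(X[test_idx])
                rmse[name].append(np.sqrt(np.mean((y[test_idx] - pred) ** 2)))
        best = min(rmse, key=lambda name: np.mean(rmse[name]))
        wins[best] += 1
    return wins  # e.g., {'M1': 34, 'M2': 120, 'M3': 46}
```

The model with the most "wins" across repeats plays the role of the winning model per country in the text above.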
In the second step of our analyses, the best fitting model of each of the three countries was validated on the data of the other two countries, in order to determine the cross-cultural validity of the factors predicting harsh discipline in each country. For each country's winning model, the importance of the predictors was evaluated based on standardized regression coefficients resulting from a robust regression analysis to handle the violation of the homoscedastic residuals assumption in standard OLS regression. --- RESULTS --- Descriptive Characteristics Table 1 presents the characteristics of Chinese, Italian, and Dutch families during the COVID-19 pandemic, including age of the mother, marital status, and employment. Significant differences between countries were found for almost all characteristics, because the large sample size of the study makes these statistical tests sensitive enough to detect very small differences between countries. Effect sizes of between-country differences on socioeconomic/demographic variables (age of youngest child, age of mother, education, marital status, number of children, employment) were small. However, as expected, there were large differences between countries in childcare involvement of grandparents. In China, 53.6% of the mothers indicated that one or more grandparents provided support, whereas this percentage was considerably lower in both the Netherlands (9.4%) and Italy (18.3%). Figure 1 provides a visual representation of the differences between countries on the continuous characteristics listed in Table 1. See Supplementary Table 1 for additional information regarding the quarantine situation and COVID-19 diagnoses among parents. Figure 2 shows for each country the distribution of the harsh discipline total scores. Harsh parenting differed significantly between the three countries: Dutch mothers used less harsh parenting than Chinese and Italian mothers. Supplementary Tables 3, 4 and 5 present the correlations between the two subscales of the CTSPC (psychological aggression and physical assault), childcare --- Cross Validation Table 2 shows for each country the top three regression models in terms of minimizing the prediction error (RMSE) in the cross validation analyses. The number of wins indicates the percentage of the 200 cross validation repeats in which a particular model showed the lowest prediction error (RMSE) of all 13,311 investigated models. The cross validation procedure identified a unique winning model for each of the three countries. In Italy, number of children, education, house with garden, general psychopathology, and marital conflict were important predictors. In the Netherlands, the following predictors were found: number of children, work change, general psychopathology, and marital conflict. In China, income, education, work stress, general psychopathology, marital conflict, father involvement, and the interaction between grandparental involvement and age of the youngest child were important predictors (see Supplementary Table 2). Table 2 presents the standardized regression coefficients (β) and Wald test p-values according to three robust regression analyses, including for each country the predictors of the winning model identified through cross validation. In all countries, marital conflict and psychopathology showed a substantial positive association with harsh parenting, although there were considerable between-country differences in the identified predictors.
In line with our expectations, harsh parenting was partly explained by the interaction between childcare offered by grandparents and age of the youngest child. Figure 3 illustrates this interaction effect, showing that grandparental childcare was associated with less harsh parenting by Chinese mothers, especially when the youngest children were still young. To determine the cross-cultural predictive validity of each country's winning model, a second series of cross-validation analyses was conducted, evaluating the predictive performance of each winning model when predicting harsh parenting in the other two countries. Figure 4 visualizes the resulting prediction error distributions for each of the fitted top models and each of the three datasets. Unsurprisingly, for each dataset, the country's own best model showed the lowest prediction error in 100% of the cross-validation repeats. The distributions in the bottom row of Figure 4 show that the Dutch and Italian models perform poorly in predicting harsh parenting in China. Interestingly, the overlapping distributions of the Dutch and Italian models in the Italian data suggest that the Dutch predictors can predict harsh care of Italian mothers reasonably well. --- DISCUSSION In the current study we examined risk and protective factors predicting maternal harsh parenting during the COVID-19 lockdown in China, Italy, and the Netherlands. We applied a cross-validation approach (2) for selecting which combination of 15 predetermined effects showed the best predictive performance in each country. Predictive modeling pointed to marital conflict and maternal psychopathology as shared risk factors predicting harsh parenting in each of the three countries. Despite these common factors, cross-validation identified a unique winning model for each of the three countries, indicating that the models with the best predictive performance differed between countries. In the Netherlands, work changes and number of children in the home predicted harsh parenting in addition to psychopathology and marital conflict, whereas in Italy, number of children, education, and house with garden were considered important predictors of maternal harsh parenting. In contrast, harsh parenting used by Chinese mothers was best predicted by education, income, and work-related stress of the mother. In addition, father involvement and grandparental involvement for mothers with a young child were considered important protective factors lowering the risk for harsh parenting in China. Our findings extend our previous study in which we examined maternal mental health during the lockdown in China, Italy, and the Netherlands, but did not assess harsh parenting (31). Results indicate that, in addition to marital conflict and maternal psychopathology as shared risk factors, models predicting harsh parenting during COVID-19 include distinct risk factors that are not replicated across cultures, possibly due to cultural variations in family composition and allomaternal support. Hence, although harsh parenting is a global phenomenon (51), the constellation of factors predicting maternal harshness during COVID-19 is not identical. First results of COVID-19 studies indicate that the pandemic drastically impacted on family life and that COVID-19-related distress can increase harsh parenting practices [e.g., (47)].
Our cross-validation results extend the results of initial studies by indicating that there were considerable between-country differences in the identified predictors of maternal harshness. In our cross-validation approach, model predictors were not selected based on statistical significance, but based on their performance in predicting harsh parenting in each country. This predictive modeling context contrasts with the traditional explanatory data analysis approach used by previous COVID-19 studies and enables the identification of a risk factor model that most accurately predicts harsh care during the lockdown in each of the three countries. Our finding that each country has a unique constellation of factors predicting harsh parenting indicates that we should be careful with generalizing findings on disrupted parenting during the lockdown to other countries. The predictive performance of models predicting harsh care during COVID-19 is not the same across countries, implying that there is no universal risk factor model that can be used for the identification of at-risk families across countries. In line with our expectations, we found that grandparental involvement lowered the risk for harsh parenting among Chinese mothers. Interestingly, grandparent involvement interacted with the age of the child. The grandparent effect was particularly pronounced for Chinese mothers with younger children, which is in line with previous studies showing that grandparental involvement is particularly advantageous for children in the post-weaning phase. For example, one study (50) showed a positive grandmother effect on the nutritional status of Aka children in Congo, with the effect most evident during the critical 9-36-month post-weaning phase. This post-weaning phase may be a critical period demanding high levels of allomaternal support because maternal caregiving decreases while toddlers are still heavily dependent on care. Moreover, toddlerhood is also the period characterized by increases in parent-child conflict related to the child's burgeoning autonomy and parental disciplinary strategies (55), thereby increasing the caregiving load for parents. According to the grandmother hypothesis (22), the prolonged post-reproductive lifespan of grandmothers is the result of evolution favoring post-reproductive individuals who increase their fitness by assisting their own offspring to reproduce successfully (49). Our results add to these findings and suggest that, under the adverse COVID-19 conditions, grandparents indirectly promote children's wellbeing by exerting protective effects on the rearing environment. Grandparental involvement was, however, an important predictor only in the top winning model predicting maternal harshness in China, not in the Netherlands and Italy. This is consistent with our previous study with the same sample, in which we found that grandparental support lowered mental health problems only in Chinese mothers (31). Hence, no grandparent effect was observed in Italy and the Netherlands, possibly because in these countries the nuclear family is the most common family constellation, and nonresidential grandparents were kept at a distance from parents and grandchildren during the lockdown. Another remarkable difference between the Dutch and Italian vs. the Chinese models, potentially related to cultural variations in family structure, was that the number of children contributed to harsh care in the Netherlands and Italy, whereas this factor was considered unimportant in the Chinese model.
Although previous research has identified a large number of children in the home as a risk factor for child maltreatment (48), these studies were predominantly conducted in Western societies with nuclear families. In extended families, grandparents or other kin may assist with child care in the home environment, thus sharing the caregiving load and allowing parents to have more children without increasing the risk for child maltreatment (49). In China, where the extended family is considered traditional, a large number of children may therefore be a less important predictor of maltreatment. These results suggest that the antecedents of harsh parenting during the lockdown may be different across countries due to cultural variations
in family composition. This interpretation is supported by our observation that Dutch risk factors predicted harsh care of Italian mothers reasonably well, possibly because in both countries the nuclear family is most prevalent, whereas Dutch and Italian models performed poorly in predicting harsh parenting in China. It should be noted that many countries are multicultural and include multiple ethnic groups. Hence, our findings not only indicate that there is no universal risk factor model that can be used for the identification of at-risk families, but also warrant caution against accepting one model for COVID-19-related risk factors within one country. Cultural variations in family composition may accentuate or minimize the importance of risk and protective factors, possibly leading to between- and within-country differences in the constellation of risk factor models. In addition to the potential role of family composition, employment rates of mothers may also have resulted in a differential constellation of predictors across the three countries. The employment rate of the Chinese mothers in the current sample was very high (93.6%), which matches the above-world-average female labor force participation in China (56). Moreover, the vast majority of women are involved in full-time employment, as part-time work has not yet been widely introduced or promoted in China (57). As a consequence, the need for allomaternal support may be high in China: Chinese mothers may need support with childcare from either grandparents or the father in order to meet the demands from work (58). This may explain why Chinese mothers who benefitted from support from highly involved fathers showed lower levels of harsh parenting, whereas father involvement was not considered an important predictor in Italy and the Netherlands. In line with this explanation, we found that father involvement was higher in China than in Italy and the Netherlands. Another unexpected finding was that work-related stress or work-related changes predicted harsh parenting in the Netherlands and China, but not in Italy. In Italy, the male breadwinner model is most prevalent and female employment rates are rather low (59). Although the work-related changes and stress reported by Italian mothers were quite high and the majority of mothers were employed, their partners' financial and job security may have lowered maternal stress regarding financial resources and buffered the effect of mothers' work stress on parenting abilities. During COVID-19, older adults in particular were advised to keep social distance, and (non-residential) grandparents who were involved in child care prior to the pandemic suddenly refrained from babysitting. Although this may have been a necessary precaution in order to avoid exposure to the virus, loss of allomaternal support from grandparents may have had a negative impact on parents (31) as well as children. The unexpected loss of grandparental support during the lockdown may have increased parenting stress, which may in turn lead to an overreliance on less effective disciplinary strategies, such as harsh discipline. Although grandparental involvement in child care exerts positive influences on children's health and well-being (9), the role of grandparents in caregiving is still sidelined in policy decisions. Research on caregiving has also focused mainly on the mother as the primary caregiver and has neglected the role of other caregivers such as grandparents.
Our finding that high levels of allomaternal support from grandparents and fathers reduce the risk for harsh maternal caregiving during the lockdown in China underscores the importance of shared care, and may inform policies regarding child care during future pandemics. Adopting approaches to build a pandemic-proof community of care and strengthening networks of support inside and outside the family unit may help at-risk parents during future pandemics. Some strengths and limitations should be noted. One strength of the study is that we examined the cross-cultural validity of factors predicting harsh care using large samples from three different countries. Examining parenting during the pandemic across countries is important because COVID-19 is a global crisis and understanding factors predicting harsh care will help identify at-risk families during future pandemics. Yet, it is unclear whether results from individual countries are replicable across countries. Another strength is the use of cross-validation, which enabled us to identify those predictors that best predict maternal harshness in our data, but also perform well in predicting harsh parenting in various random subsets of the data. Cross-validation therefore revealed models that can be used to predict harsh parenting during future pandemics. This contrasts with standard statistical analyses that risk overfitting their regression models, resulting in models that fit the initial data very well, but are difficult to replicate in future research. Another strength is that allomaternal support from the father was measured with a 20-item task division questionnaire, enabling us to study how the degree of paternal involvement impacts on maternal caregiving. However, it should be noted that grandparental involvement was measured dichotomously and we were not able to differentiate between maternal and paternal grandparents. Effects of grandparental involvement may be even more pronounced with continuous measures, which offer more statistical power. A second limitation is that some variables did not have sufficient within-country variability to test whether they contributed to harsh care. For example, in the Netherlands almost all parents reported living in a house with a private garden. In contrast with our expectation that lower quality housing would predict harsh care, living in a house with a garden was related to higher levels of harsh parenting in Italy. This effect, however, only approached significance in the robust regression analysis, was absent in China, and may therefore be the result of confounding factors that we did not control for in the current study. In addition, it should be noted that the Chinese, Italian, and Dutch samples showed differences in sociodemographic variables, such as age and employment. However, due to the large sample size, statistical tests were sensitive enough to detect very small differences between countries. It is not very likely that this has influenced the results, as effect sizes were small and we controlled for sociodemographic variables in all analyses. The analyses also mainly focused on predictive models, in which multivariate associations are more important than mean-level differences between the countries. Furthermore, Italy was affected to a larger extent by COVID-19 than the Netherlands and China. During data collection, China was in the aftermath of COVID-19, whereas the number of infections was still high in Italy and the Netherlands.
Pandemic restrictions concerning closures of schools and day care centers, social distancing, and remote working were, however, the same across countries. Moreover, our results show that COVID-19-related health concerns did not contribute to the prediction of harsh parenting. It is therefore unlikely that the constellation of factors predicting harsh care differed across countries due to differences in COVID-19 severity. Furthermore, it should be noted that the threshold parameters in the harsh parenting factor model for ordinal items were not invariant across countries, implying that factors other than harsh parenting were influencing the differences between countries on some harsh parenting item scores. The deviation from invariance, however, seemed small, and invariance did hold for the factor loadings. This analysis suggests that mean differences between countries on the harsh parenting scale should be interpreted with care. Lastly, we examined only maternal harshness and excluded fathers from the current analyses, although we did examine paternal involvement in child care. Future COVID-19 studies should involve fathers. Moreover, future research should also examine the impact of lockdowns in families at risk for maltreatment. Allomaternal support may be particularly important in at-risk families. For example, a high-quality relationship with involved grandparents may play a buffering role for children in at-risk families. In conclusion, during COVID-19 parents were presented with unprecedented challenges. For some families, pandemic-related distress may interfere with adequate parenting. Examining risk and protective factors for impaired parenting is therefore important and will help identify at-risk families during COVID-19 and future pandemics. Our study showed that the constellation of factors predicting maternal harsh parenting during the COVID-19 lockdown is not identical across countries. Although marital conflict and maternal psychopathology are shared risk factors, the predictive performance of models predicting harsh parenting during COVID-19 differed across countries. Hence, the constellation of factors predicting maternal harshness during COVID-19 is not universal. This information will be valuable for the identification of at-risk families during future pandemics. Importantly, our results indicate that shared childrearing can buffer against risks for harsh parenting during adverse circumstances such as COVID-19, thus motivating the development of pandemic-proof support approaches, customized for individual countries, to assist parents with childcare and reduce parenting stress during future pandemics. During the lockdown, in the absence of any childcare support from the community, the concept "It takes a village to raise a child" (8) may have had more meaning than ever. Mothers do not rear children on their own, and allomaternal support from fathers, grandparents, and the community may be needed to establish resilience at a family level. Hence, building a pandemic-proof community of care may help prevent harsh caregiving practices and their detrimental effects on children's well-being during future pandemics. --- DATA AVAILABILITY STATEMENT The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
--- ETHICS STATEMENT The studies involving human participants were reviewed and approved by the School of Social and Behavioral Sciences of Tilburg University, the Department of Psychology of Padua University, and the Peking University Medical Ethics Board. The patients/participants provided their written informed consent to participate in this study. --- AUTHOR CONTRIBUTIONS MR: conceptualization, investigation, validation, data curation, writing-original draft, funding acquisition, supervision, project administration, and resources. PL: software, methodology, validation, data curation, formal analysis, visualization, and writing-original draft. MV-V: investigation, writing-review, and editing. MB-K and MvIJ: methodology, supervision, writing-review, and editing. PDC and JG: investigation, data curation, writing-review and editing, resources, and funding acquisition. All authors contributed to the article and approved the submitted version. --- SUPPLEMENTARY MATERIAL The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fpsyt.2021.722453/full#supplementary-material --- Conflict of Interest: The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. Publisher's Note: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
Background: The COVID-19 pandemic drastically impacted on family life and may have caused parental distress, which in turn may result in an overreliance on less effective parenting practices. The aim of the current study was to identify risk and protective factors associated with impaired parenting during the COVID-19 lockdown. Key factors predicting maternal harsh discipline were examined in China, Italy, and the Netherlands, using a cross-validation approach, with a particular focus on the role of allomaternal support from the father and grandparents as a protective factor in predicting maternal harshness. The sample consisted of 900 Dutch, 641 Italian, and 922 Chinese mothers (age M = 36.74, SD = 5.58) who completed an online questionnaire during the lockdown. Results: Although marital conflict and psychopathology were shared risk factors predicting maternal harsh parenting in each of the three countries, cross-validation identified a unique risk factor model for each country. In the Netherlands and China, but not in Italy, work-related stressors were considered risk factors. In China, support from the father and from grandparents were protective factors for mothers with a young child. Our results indicate that the constellation of factors predicting maternal harshness during COVID-19 is not identical across countries, possibly due to cultural variations in support from fathers and grandparents. This information will be valuable for the identification of at-risk families during pandemics. Our findings show that shared childrearing can buffer against risks for harsh parenting during COVID-19. Hence, adopting approaches to build a pandemic-proof community of care may help at-risk parents during future pandemics.
Introduction Neighborhood reputations are based on common perceptions of neighborhood disorder and common perceptions about a neighborhood's ability to cope with disorder (Sampson 2012). Once recognized, neighborhood reputations shape individual sentiments about neighborhood quality; these sentiments then guide residential mobility decisions (e.g., Lee, Oropesa, and Kanan 1994; Speare 1974), reinforce stigmas in urban communities and perpetuate urban spatial inequalities, influence growth machine politics (Baldassare and Protash 1982; Temkin and Rohe 1996), and potentially affect the resiliency of a community in the wake of catastrophe (e.g., Hartigan 2009). Numerous studies examine the determinants that influence individual sentiments regarding neighborhood quality and residential satisfaction (e.g., Amerigo and Aragones 1997; Dassopoulos, Batson, Futrell, and Brents 2012; Galster and Hesser 1981; Grogan-Kaylor et al. 2006; Hipp 2009; Lovejoy, Handy, and Mokhtarian 2010; Parkes, Kearns, and Atkinson 2002), but research on the dynamic processes that reinforce or alter residential sentiments during a crisis period is largely absent from the literature. This article contributes to an emerging area of urban community and disaster research by advancing a thesis that helps explain how neighborhood reputations function during crisis periods, when residents are forced to reassess the correspondence between objective circumstances and their residential sentiments. During times of neighborhood crisis, caused for example by natural disasters or sharp economic downturns, neighborhood reputations are more likely to be relied upon to guide the thoughts and actions of residents, and in the process, individual sentiments about their neighborhoods are apt to be altered in more favorable or less favorable ways as residents actively evaluate whether the purported reputation is living up to expectations. To begin to examine this premise empirically, our study uses survey-based data collected during the most recent housing foreclosure crisis to analyze how objective neighborhood circumstances, together with measures of neighborhood reputation, influence individual assessments of the quality of their neighborhood. The Great Recession, which officially began in December 2007 (Muro et al. 2009), triggered a housing foreclosure crisis throughout the US. For this study, we focus our examination of the relationship between housing foreclosure rates, neighborhood reputations, and resident sentiment on a strategic location: Las Vegas, Nevada. Following nearly 20 years of the nation's most rapid population growth and urban sprawl (CensusScope 2000), Las Vegas was one of the most heavily impacted metropolitan areas, with some of the highest unemployment and home foreclosure rates in the nation (Bureau of Labor Statistics 2011; Center for Business and Economic Research 2011). Yet, Las Vegas is an advantageous area to study the effects of neighborhood reputation on the making of resident sentiment, not only because it is an especially hard-hit area, but because many newly built master-planned communities throughout Las Vegas have untested and potentially precarious neighborhood reputations (e.g., Knox 2008). In the wake of a deep economic recession, as residents face difficult decisions about their homes and neighborhoods, understanding how neighborhood reputations shape residential satisfaction will open new ways of thinking about what manifests neighborhood resiliency and neighborhood change.
--- Boom and Bust: Las Vegas and the Foreclosure Crisis The Las Vegas metropolitan area led the nation in population growth during the 1990s at 66.3%, almost double the rate of population growth of second-ranked Arizona (CensusScope 2000). Population growth in the Las Vegas metropolitan region continued apace in the 2000s, with roughly half a million people arriving between 2000 and 2007. In this context of population growth, transiency was also high. In 2000, Nevada ranked highest among all states in residential mobility, with 25% of the population having moved to Nevada from another state within the past five years. Between 2000 and 2004, Nevada had the highest domestic annual rate of net migration in the country (Perry 2006). As a result of such rapid population growth and the attendant economic boom, the Las Vegas housing market flourished between 1990 and 2006. With approximately 6,000 newcomers per month arriving in Las Vegas at the height of the boom, home prices reached all-time highs in 2006, as many residents moved into newly developed master-planned communities equipped with additional amenities and homeowners associations. The median price of a single-family home was $349,500 in January 2007. Just four years later, following the economic bust and housing crisis, the median price of single-family homes in January 2011 was $132,000, an astonishing 62% decline (Greater Las Vegas Realtors Association 2007, 2011). This was the largest decline of any metropolitan area in the United States (Community Resources Management Division 2010). Ultimately, problems with subprime lending began to emerge in urban areas that had large racial and ethnic concentrations, mid- and low-level credit scores, new housing construction, and high unemployment rates (Rugh and Massey 2010; Mayer and Pence 2008). For Las Vegas, it was a booming housing market, relaxed lending standards, low short-term interest rates, and irrational exuberance about housing prices that contributed to rapid rates of home value appreciation and concentrations of subprime lending (Muro 2011; Mayer and Pence 2008). Recent scholarship has also identified the role of metropolitan residential segregation and racial and ethnic targeting of subprime lending as a primary contributor to the housing crisis (Hyra, Squires, Renner, and Kirk 2013). However, even with a large and growing Hispanic population, Las Vegas ranks relatively low on both black/white and Hispanic/white segregation levels (Frey 2010), suggesting that the bustling housing market was the most likely driver of subprime lending and the housing collapse in Las Vegas. With the largest concentration of subprime mortgage originations in the country (Mayer and Pence 2008), the Las Vegas housing market was a ticking time bomb for a housing bust. Subprime mortgage products were designed to provide home ownership opportunities to the most credit-vulnerable buyers, including those with no established credit history, little documentation of income, and/or smaller down payments. In addition to subprime lending, mortgage companies also made it easier for current homeowners to refinance loans and withdraw cash from houses that had appreciated in value (Mayer and Pence 2008). As a result, since 2007 approximately 70,000 housing units have been foreclosed upon, with nearly 6,000 new foreclosures occurring every quarter (Community Resources Management Division 2010). Up until 2006, Nevada had a very low loan delinquency rate, particularly among subprime borrowers.
This was partly because borrowers in the robust Nevada housing market could often avoid foreclosure by quickly selling their homes to eager buyers (Immergluck 2010). However, between 2007 and 2010 the foreclosure rate in Nevada increased by about 3 percentage points a year (Community Resources Management Division 2010). Such rapid and chaotic economic stress raises questions about the changing quality of neighborhood life for Las Vegas residents in this recessionary climate. --- Neighborhood Reputations: Disorder and Collective Efficacy Neighborhoods are often the environment wherein residents develop identities, forge relationships with peers, and create meaning and coherence in their lives. A neighborhood's reputation, that is, shared beliefs among residents about the positive or negative qualities of a residential area, can influence people's views about themselves and the broader community. Neighborhoods with positive reputations are vital to the sustainability of healthy cities. When residents feel a sense of pride and satisfaction with their neighborhoods, they report a greater sense of attachment to the local community, higher overall life satisfaction, better mental and physical health, and greater political participation, and they are more likely to invest time and money in maintaining that positive image of the community (Adams 1992; Hays and Kogl 2007; Sampson, Morenoff, and Gannon-Rowley 2002; Sirgy and Cornwell 2002). Consequently, when residents are dissatisfied with their neighborhoods, they report a lower quality of life, are less invested in the community, and are more likely to engage in outmigration, which hinders long-term stability and reduces the capacity of a neighborhood to be resilient when challenges arise (Bolan 1997; Oh 2003; Sampson 2003). Residents' shared perceptions about various neighborhood qualities (e.g., convenient location and access to good schools) affect a neighborhood's reputation, but two essential neighborhood characteristics in particular form the foundation of any neighborhood reputation. The first is whether residents jointly feel that physical disorder is problematic for the neighborhood (e.g., abandoned property, broken windows, crime), and the second is residents' shared expectations about the collective ability of the neighborhood to address problematic issues (Sampson 2012). Through the lens of social disorganization theory, researchers have long studied the effects of neighborhood structural characteristics and physical signs of disorder on crime rates (Hipp 2010; Kubrin and Weitzer 2003; Sampson and Groves 1989), but an important distinction is warranted between objective observations of physical disorder (i.e., whether or not there is graffiti on the buildings and trash and litter on the streets) and people's stated sentiments about whether those conditions are problematic. The latter, people's shared evaluation of the problem, constitutes an important aspect of a neighborhood's reputation. According to Robert Sampson's recent work on the stability and change of Chicago neighborhoods, "perceptions of disorder" are what "molds reputations, reinforces stigma, and influences the future trajectory of an area" (2012:123; also see Hunter 1974:93). Perceived neighborhood disorder, independent of actual objective measures of disorder, greatly affects the character of a neighborhood over time.
Sampson (2012:144-145) finds that, in predicting future neighborhood conditions (e.g., poverty levels, crime rates, and outmigration), perceived neighborhood disorder is at least as strong a predictor as prior (i.e., lagged) neighborhood conditions. In the case of crime, prior perceptions of disorder are actually a much stronger predictor of future neighborhood crime rates than prior levels of crime. Adams (1992) also finds that residents' perceptions of crime and disorder have greater influences on neighborhood satisfaction than the actual existence of such crime and disorder. The second aspect of a neighborhood's reputation is collective efficacy. Collective efficacy is "the linkage of cohesion and mutual trust among residents with shared expectations for intervening in support of neighborhood social control" (Sampson 2012:127). Neighborhood cohesion among residents is believed to be a local resource for organizing around problems when they occur (Morenoff, Sampson, and Raudenbush 2001; Kubrin and Weitzer 2003; Larsen et al. 2004). Prior work has shown that, like perceived neighborhood disorder, perceived social trust and neighboring are meaningful to residents in their assessments of neighborhood quality (Grogan-Kaylor et al. 2006; Parkes et al. 2002). Neighboring fosters mutual support and trust among neighborhood residents (Sampson et al. 1989), and forming social ties helps foster attachments to an area (Austin and Baba 1990; Hipp and Perrin 2006; Kasarda and Janowitz 1974; Parkes et al. 2002; Sampson 1988, 1991). Neighborliness reflects attachment through various activities that range from helping a neighbor in need to organizing to address a shared neighborhood problem (Woldoff 2002). As residents participate in neighborhood activities, they develop a shared sense of community and positive communal feelings (Ahlbrandt 1984; Guest and Lee 1983; Hunter and Suttles 1972; Kasarda and Janowitz 1974; Riger and Lavrakas 1981). Metropolitan context has implications for neighborhood reputations. Much of the research on neighborhood disorder and collective efficacy has taken place in Chicago, a city with many longstanding and historic neighborhoods. But Las Vegas is a different kind of metropolitan area, with many newly built "master-planned communities" (MPCs). These MPCs typically have homeowners associations (HOAs) and additional amenities that are not commonly associated with neighborhoods in cities like Chicago. These newer MPCs are also less likely to have firmly entrenched reputations, and this will likely increase the variability in how residents respond to a crisis. Although new, MPCs in Las Vegas are certainly not without reputations. Many MPCs are actually provided with simulacra-based reputations of community life through marketing strategies before any homes are even sold. This is because neighborhood qualities that are associated with communal bonds and collective efficacy have not been lost on the developers of contemporary master-planned communities. Today, developers of MPCs seek to enhance the marketability of their properties by providing amenities and design features that are intended to provide buyers with "a sense of community." Knox (2008:99) keenly recognizes this as a product-branding process in which developers synthetically attempt to instill upon a neighborhood a positive community-oriented reputation in order to sell buyers, not only on the quality of the homes, but on the quality of the entire neighborhood (also see Freie 1998).
HOAs are also popular with these MPCs because the fees they solicit, and the rules they enforce, are meant to ensure a degree of consistency in the quality of the neighborhood brand. The high rate of urban development prior to the Great Recession, the magnitude of the foreclosure crisis in the Las Vegas area, and the unique characteristics of MPCs make Las Vegas an advantageous place to study the making of residential sentiments for several reasons. First, the making of residents' sentiment in an unsettled period is important because these sentiments will likely facilitate neighborhood resiliency or neighborhood change during the recovery period. Second, given the highly volatile conditions in Las Vegas, our ability to discern the effects of objective neighborhood circumstances (like foreclosure rates) on subjective residential sentiments is enhanced. In other words, the objective reality of the crisis is likely to be physically more salient in Las Vegas than elsewhere, making the effects more visible. Third, people's preconceived ideas about their neighborhoods are more likely to be challenged and subjected to dissonance because of the relative newness of many Las Vegas neighborhoods and their relatively unproven statuses. As alluded to above, the stability of a neighborhood's reputation typically exerts an inertia-type effect on individual sentiments during settled periods, but when crises strike, newer and older neighborhoods alike have their reputations tested. We elaborate on this dynamic below. Fourth, homeowners associations common among MPCs are likely to act as intermediate institutions when crises strike. That is, HOAs may take steps to protect property values in ways that bolster resident sentiments toward their neighborhoods, or, conversely, the powerlessness of HOAs to deflect the foreclosure crisis could create an even greater disjuncture in expectations that further erodes resident sentiment. The uniqueness of Las Vegas makes it possible to more clearly observe these key dynamics in action. --- Neighborhood Reputations during a Crisis High foreclosure rates and the accumulation of real estate owned properties (REOs) have detrimental effects on neighborhoods (Apgar and Duda 2005; Immergluck and Smith 2006; Schuetz, Been, and Ellen 2008). In many neighborhoods, foreclosed homes are boarded up and vacant, with unkempt yards and real-estate signage to indicate the neighborhood's diminished status. As a result, these properties create opportunities for criminal activity, discourage remaining residents from investing in their properties, potentially damage neighborhood social capital, and ultimately lower a neighborhood's perceived quality (Leonard and Murdoch 2009). These spillover effects result in neighborhood property devaluation, as foreclosed homes typically sell at much lower prices and appreciate much more slowly than traditionally sold homes (Forgey, Rutherford, and VanBuskirk 1994; Pennington-Cross 2006). Based on data collected on foreclosures and single-family property transactions during the late 1990s, Immergluck and Smith (2005) estimated that each foreclosure within a city block of a single-family home resulted in a 0.9%-1.4% decline in that property's housing value. Ordinarily, foreclosures may pose a serious threat to neighborhood stability and community well-being, and during the Great Recession unprecedented levels of housing foreclosures became an objective symbol of genuine neighborhood crisis.
Despite the potential effects of housing foreclosures on assessments of neighborhood quality and the remaking of a residential area's reputation, little is known about how a metropolitan-wide foreclosure crisis affects individuals' perceptions of their neighborhoods. As with high levels of perceived neighborhood disorder and low levels of perceived collective efficacy, we can reasonably expect high levels of foreclosures to be negatively associated with individuals' assessments of their neighborhoods. Yet, new realities and new ways of life emerge during unsettled periods, and these changes can challenge prior views and perceptions (e.g., Swidler 1986; Elder 1974). To more fully understand the potential for change during these unsettled times, it is important to focus on how objective neighborhood circumstances, like foreclosure rates, may alter the relationship between a neighborhood's reputation and individual sentiments. Neighborhood reputations are generally stable during non-crisis periods and are highly predictive of future neighborhood change, even more predictive than objective measures of neighborhood conditions (as reported above). But, importantly, during a crisis period, when objective neighborhood circumstances cannot be easily ignored, the salience of a neighborhood reputation might weaken and come to matter less in shaping people's perceptions. This could be especially true in Las Vegas, where the reputations of many new MPCs are untested. From this perspective emerges the foreclosure crisis hypothesis: Housing foreclosures will significantly mediate the relationship between neighborhood reputation (measured via collective efficacy and neighborhood disorder) and (a) individual assessments of neighborhood quality and (b) individual satisfaction with neighborhood property values. Thus, the effects of the crisis will have more influence on the sentiments of residents than perceived neighborhood reputations. Objective circumstances may carry greater significance during a crisis because residents are forced to evaluate the correspondence between the objective situation and what they thought they knew about their homes, investments, and neighbors. However, disaster research reminds us time and again that individuals, families, neighborhoods, and communities are quite resilient when crises strike. It is common, for example, for areas affected by natural disasters to rebound within a few years to achieve a full functional recovery in terms of returning to, or in some cases exceeding, pre-disaster levels of population, housing, and economic vitality (Cochrane 1975; Friesema et al. 1979; Haas et al. 1977; Pais and Elliott 2008; Wright et al. 1979). A surprisingly unexplored factor that is potentially a major facilitator of resiliency is a neighborhood's reputation, especially collective efficacy, as people are much more likely to need to rely on others during a crisis. Positive neighborhood reputations might guard against high foreclosure rates in the first place, or, as a crisis unfolds, residents may filter the situation through their commonly shared beliefs about their community. Relying on preconceived beliefs for guidance during a crisis may produce the kinds of behaviors and outcomes consistent with the neighborhood's reputation. From this perspective, families and neighborhoods are more or less resilient because individuals respond to crises in ways that create a correspondence between reputation and reality.
In support of this perspective emerges the neighborhood resiliency hypothesis: Neighborhood reputations (i.e., collective efficacy and neighborhood disorder) will significantly mediate the relationship between neighborhood foreclosure rates and (a) individual assessments of neighborhood quality and (b) individual satisfaction with neighborhood home values. Thus, neighborhood reputations will have more influence on the sentiments of residents than housing foreclosures. The evaluation of the foreclosure crisis hypothesis and the neighborhood resiliency hypothesis is an important first step toward a more comprehensive understanding of the reciprocal connection between disasters and neighborhood reputations: disasters have the power to fundamentally alter neighborhood reputations through the collective changes of individual sentiments, and yet existing neighborhood reputations are potentially able to mitigate the effects of disasters on individuals and families. Ultimately, individual sentiments regarding their neighborhoods are the intervening link between disaster and changes to neighborhood status. Although we are unable to fully capture the entire reciprocal cycle (from existing neighborhood reputation, through the crisis period, to the altered neighborhood reputation), we do focus keenly on the linchpin in the process: individual sentiments regarding their neighborhoods. --- Data and Methods --- Study Area The data for this study come from the Las Vegas Metropolitan Area Social Survey (LVMASS). LVMASS provides individual-level data gathered from respondents living in 22 neighborhoods in the Las Vegas metropolitan area of Clark County, Nevada, in 2009. Clark County has a population of roughly 1.95 million people and is home to 72% of the population of Nevada (U.S. Census Bureau 2010). Our sample includes neighborhoods in each of the four distinct municipal jurisdictions composing the Las Vegas metropolitan area: eight in the City of Las Vegas, four in North Las Vegas, four in Henderson, and six in unincorporated Clark County. Our data on housing foreclosures came from the Housing and Urban Development (HUD) Neighborhood Stabilization Program (NSP) authorized under Title III of the Housing and Economic Recovery Act of 2008. The data provide the approximate number of foreclosure starts for all of 2007 and the first six months of 2008. We use these data to calculate the proximate foreclosure rates at the census tract level, matching the NSP data to the LVMASS survey data by census tract identifiers to create a multilevel data set of individual respondents clustered within Las Vegas neighborhoods. --- Sampling Frame For the LVMASS, we used a stratified cluster sampling design to ensure that our sample included neighborhoods with socioeconomic diversity. Using a cluster sample stratified by income quartiles, our study resulted in 22 distinct neighborhoods. Our primary goal was to capture neighborhood-level data from "naturally occurring" neighborhoods that were geographically identified in the same way that most residents identify with their neighborhood. We diverge from studies that rely strictly on census-based boundary definitions and instead collected information from independent neighborhoods that lie within census tracts.
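As a rough, purely illustrative sketch of what an income-stratified cluster sample of tracts can look like in code, the example below draws a fixed number of tract clusters per income quartile. It is not the LVMASS sampling procedure; the data frame tracts, its median_income column, and the number of clusters per stratum are hypothetical.

```r
## Illustrative sketch only: cluster sample of census tracts stratified by
## income quartile. The data frame `tracts`, its columns, and the sample
## sizes are hypothetical, not the study's actual sampling code.
set.seed(2008)

# assign each tract to an income quartile
tracts$income_quartile <- cut(
  tracts$median_income,
  breaks = quantile(tracts$median_income, probs = seq(0, 1, 0.25), na.rm = TRUE),
  include.lowest = TRUE,
  labels = paste0("Q", 1:4)
)

# draw a handful of tract clusters within each income quartile
n_per_stratum <- 5
sampled_tracts <- do.call(rbind, lapply(
  split(tracts, tracts$income_quartile),
  function(d) d[sample(nrow(d), min(n_per_stratum, nrow(d))), ]
))
```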
In the fall of 2008, through extensive field work, we identified neighborhoods by key physical characteristics within selected census tracts, including contiguous residences, interconnected sidewalks, common street signage, common spaces, common mailboxes, street accessibility, visual homogeneity of housing communities, and barriers separating housing areas such as gates, waterways, major thoroughfares, and intersections. (At the time of sampling, there were a total of 345 census tracts in the Las Vegas metropolitan area. Using data from both the 2000 Census and the 2005-2009 American Community Survey 5-year estimates, we compared our study neighborhoods within the 22 census tracts to the remaining 323 census tracts along several socio-demographic characteristics, including median household income, percent poverty, racial composition, percent married, percent 65+, educational attainment, median year house was built, and percent owner-occupied housing units. We found no significant differences between our study tracts and those not included in the study, leading us to conclude that the neighborhoods we included in this study are representative of the Las Vegas metropolitan area in general.) For inclusion as a study neighborhood, we specified that there must be at least 50 visibly occupied homes to avoid non-response and invalid addresses. Our final sampling frame of household addresses was compiled from the Clark County, Nevada Assessor's Office, which maintains electronic records of all residential addresses. We then randomly selected a range of 40 to 125 addresses from the sampling frame in each neighborhood. The final study population included 1,680 households in 22 neighborhoods and resulted in 664 individual respondents and a 40% response rate. The household member with the most recent birthday and over the age of 18 was asked to complete the survey. After excluding cases with values missing on our key dependent variables, our final analytic sample for this study was 643 Las Vegas households. Among those that responded to the survey, there were no statistical differences along any of our observed independent variables between those with missingness on our dependent variables and those without missingness. --- Survey Instrument For this study, each household received a letter offering an incentive (a family day pass to a local nature, science, and botanical gardens attraction) for participating in the study, along with a website address for a web-based survey and a telephone number to complete the survey by phone. After exhausting the telephone and web-based responses, we used mailed surveys and door-to-door field surveys. The survey was made available in English and Spanish and administered by trained survey administrators. --- Sample Characteristics Table 2 shows descriptive statistics of the total sample. Residents in our sample have a mean age of 54 years and an average length of residence in their neighborhood of 11.7 years. Our sample is 73% non-Hispanic white and 27% non-white. Most of our respondents were employed (93%) and homeowners (80%). Nearly 33% of our sample held at least a college degree, followed by 41% with some college education and 26% with a high school degree or less. Our analytic sample characteristics differ slightly from 2010 population statistics of the Las Vegas metropolitan area (U.S. Census Bureau 2010). In addition to our sample being older and slightly more educated than the average resident, we also have more homeowners in our data.
Because our random sampling methodology did not discriminate by housing type (single-family vs. multi-family housing), our sample returned very few multi-family housing units. As a result, we have undersampled those most likely to be renting and living in apartment complexes, including younger residents, those with lower incomes, and those with shorter residential tenure. These sampling disparities may bias results toward more established middle-class homeowners in the Las Vegas metropolitan area if controlling for demographic and socioeconomic characteristics does not fully capture attitudinal differences concerning neighborhoods between middle-class and working-class households. --- Dependent Variables The majority of our survey instruments were replicated from the Phoenix Area Social Survey (PASS), including our key dependent variables. The first dependent variable in the LVMASS comes from a survey question that captures the perceived quality of life in the neighborhood. Residents were asked to rate the overall quality of life in their neighborhood as "Very Good," "Fairly Good," "Not Very Good," or "Not at all Good." Neighborhood Quality was coded 1 (Not at all Good) to 4 (Very Good). The second dependent variable comes from a four-point Likert scale that asks respondents to rate their satisfaction with the economic value of homes in the neighborhood. Specifically, respondents indicated whether they were "Very Satisfied," "Somewhat Satisfied," "Somewhat Dissatisfied," or "Very Dissatisfied" with the economic value of the homes in their current neighborhood. We arrange the responses from the most negative response of 1 (Very Dissatisfied) to the most positive response of 4 (Very Satisfied). This measure taps residents' perceptions of home values, not actual home values, as most home prices were in decline at this time. For the regression analyses we maintain the ordinal level of measurement of these variables. --- Key Neighborhood-Level Independent Variables First, from the 2008 NSP data, we assess census tract foreclosure rates from the number of new foreclosure starts that occurred 6 to 18 months preceding the LVMASS. These are the first data since the Great Recession to allow scholars the opportunity to examine the relationships between neighborhood-level foreclosure rates and residential neighborhood sentiments. To test the reliability of HUD's estimated foreclosure rate at the local level, HUD asked the Federal Reserve to compare HUD's estimate to data the Federal Reserve had from Equifax showing the percent of households with credit scores that were delinquent on their mortgage payments for 90 days or longer. Analysis by Federal Reserve staff found that, when comparing the HUD-predicted county foreclosure rates to the Equifax county-level rates of delinquencies, HUD's data and the Equifax data had high intrastate correlations; for the state of Nevada, the correlation was 0.88 (Department of Housing and Urban Development 2008). After merging the NSP data with the LVMASS data, the average neighborhood foreclosure rate is 21.6%, which corresponds closely to the average foreclosure rate of 22% across the 345 census tracts reported for the Las Vegas metropolitan area in the NSP data. Harding (2009) identifies three distinct phases of the foreclosure process: a period of delinquency leading to foreclosure, a period wherein the bank takes possession of the property (i.e., it becomes a REO: Real-Estate Owned property), and the resale period after the REO transaction.
Our foreclosure measure best captures the later stages of the first step in this process. Prior research suggests a lagged foreclosure effect on the property values of nearby residents in the neighborhood. Harding et al. (2009) find that the maximum negative effect of a foreclosure on the home values of nearby properties occurs right around the time of the REO transaction, whereas Gerardi et al. (2012) find that the negative effect of foreclosures on nearby properties peaks before the distressed properties complete the REO transaction. Our measure of foreclosure starts up to 18 months prior to the launch of the LVMASS should therefore overlap closely with the period in which we would expect peak foreclosure effects on sentiments concerning one's neighborhood and property values. At a minimum, the temporally variant nature of the foreclosure process (and its effects) means our measure certainly captures a period when the crisis is unfolding, but it may or may not capture the exact peak of the crisis and could therefore underestimate the full magnitude of the crisis. However, the timing of the LVMASS and our foreclosure measure is also advantageous because it captures the first wave of mass foreclosures. If the study had been conducted a year or two later, at the absolute peak, there might not have been enough neighborhood variation to detect statistically significant effects. Second, we construct a measure of neighborhood disorder from an index of five items asked in the LVMASS. We asked respondents whether vacant land, unsupervised teenagers, litter or trash, vacant houses, and graffiti in their neighborhoods are a big problem (coded 3), a little problem (coded 2), or not a problem (coded 1). The index ranges from 5 (Lowest Disorder) to 15 (Highest Disorder), is normally distributed, and has a Cronbach's alpha of 0.74, indicating sufficient internal consistency among items. To create a neighborhood-level measure, we then calculated each neighborhood-specific mean from this scaled index. Third, our measure of collective efficacy or "neighborliness" was composed of five items that assessed respondents' evaluations of neighborly interactions. The items were: "I live in a close-knit neighborhood," "I can trust my neighbors," "My neighbors don't get along" (reverse coded to match the direction of the other items), "My neighbors' interests and concerns are important to me," and "If there were a serious problem in my neighborhood, the residents would get together to solve it." Responses ranged from strongly disagree to strongly agree. The index ranges from 5 (Least Neighborly) to 25 (Most Neighborly), is normally distributed, and has a Cronbach's alpha of 0.79. We calculated neighborhood-specific means to create a neighborhood-level measure of collective efficacy for each of our 22 neighborhoods. (We acknowledge the absence of non-resident input in our measures of neighborhood reputation. Non-resident viewpoints are important to consider when policy decisions are being made about urban development and resource redistribution that affect neighborhoods. However, we assume a good deal of correspondence between resident and non-resident perceptions of neighborhood reputation, and although there is likely to be some slippage between resident and non-resident viewpoints, it is unlikely that these viewpoints would be so discrepant as to render a fundamental misinterpretation of neighborhood reputation. Of course, we have no way of testing this directly, but we welcome further empirical inquiry on this matter.) --- Control Variables Previous studies indicate that homeownership and length of residence are important predictors of neighborhood attachment (Kasarda and Janowitz 1974; Sampson 1988; Adams 1992; Rice and Steel 2001; Lewicka 2005; Brown et al. 2004; Schieman 2009). Therefore, we included a dichotomous variable for homeownership (vs. renting) and a continuous variable for length of current residence in years. We also controlled for variables that approximate life-cycle stage and indicate socioeconomic status. Age is a continuous variable. Race is coded White (1) and Non-White (0). Education is categorized into "High School Degree or Less" (reference category), "Some College Education," and "College Degree or More."
Marital Status was a binary variable indicating Married (1) vs. Non-Married (0). Finally, employment status was a dichotomous variable distinguishing employed from unemployed respondents at the time of survey completion. We find that roughly 7% of the sample was unemployed, which is consistent with the unemployment rate of 7.4% for Las Vegas reported in the 2007-2011 American Community Survey 5-year estimates (U.S. Census Bureau 2011). Additional descriptive statistics are provided in Table 2. --- Analytic Approach Multilevel methods are employed for this study to address the issue of non-independence caused by the clustering of residents within neighborhoods. Multilevel models address the issue of non-independence by appropriately adjusting the standard errors of the independent variables. [Footnote 3: We acknowledge the absence of non-resident input in our measures of neighborhood reputation. Non-resident viewpoints are important to consider when policy decisions are being made about urban development and resource redistribution that affect neighborhoods. However, we assume a good deal of correspondence between resident and non-resident perceptions of neighborhood reputation, and although there is likely to be some slippage between resident and non-resident viewpoints, it is unlikely that these viewpoints would be so discrepant as to render a fundamental misinterpretation of neighborhood reputation. Of course we have no way of testing this directly, but we welcome further empirical inquiry on this matter.] More specifically, for this study we estimated several multilevel models for ordinal response variables. These multilevel ordinal logistic models assess the effects of neighborhood foreclosure rates, neighborhood disorder, and neighborhood collective efficacy on individual sentiments regarding neighborhood quality and neighborhood property values. The specification of these multilevel ordinal logistic models maintains the proportional odds assumption required by ordinal logistic regression (Raudenbush and Bryk 2002:320). Importantly, by taking a multilevel approach, this study is also able to determine the proportion of variation in residents' sentiments that exists across neighborhoods, and we are then able to determine how much of that neighborhood-level variation is explained by our key independent variables. The analysis proceeds in five steps. First, for both dependent variables a null model with no predictor variables is estimated to determine the amount of neighborhood-level variation in residents' sentiments toward their neighborhoods. Second, we include individual-level control variables to minimize any conflating of the variance components that may be attributed to the compositional characteristics of the neighborhoods (e.g., socio-demographic characteristics). Third, we introduce into the model our measures of neighborhood reputation-perceived neighborhood disorder and collective efficacy-to (a) assess the total effect of neighborhood reputation on residents' neighborhood sentiments, and (b) determine how much neighborhood variation in the response variables is accounted for by the inclusion of neighborhood reputation (using the model with just individual-level control variables as the comparison model). Fourth, we remove the measures of neighborhood reputation and add neighborhood foreclosure rates into the model to assess the same empirics as for neighborhood reputation.
Finally, we estimate the complete model that includes all the individual-level control variables and our measures for neighborhood reputation and neighborhood foreclosure rates. The objective of this final model is to assess the mediation effects of our key neighborhood-level variables. We rely on the KHB-method to examine the statistical significance of these key mediation effects (Breen, Karlson, and Holm 2013).
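The multilevel ordinal (cumulative logit) specification described in the analytic approach can be written, in one common parameterization, as follows. This is a generic sketch consistent with the Raudenbush and Bryk framework cited above rather than the authors' exact equations; the symbols (theta_c, beta, gamma, u_j, tau_00) are ours, and sign and scaling conventions differ across software.

```latex
% Two-level proportional-odds model: resident i in neighborhood j,
% ordinal outcome Y with thresholds \theta_c for categories c = 1, ..., C-1.
\log\!\left(\frac{\Pr(Y_{ij} \le c)}{\Pr(Y_{ij} > c)}\right)
  = \theta_c - \bigl(\beta^{\top} x_{ij} + \gamma^{\top} w_j + u_j\bigr),
  \qquad u_j \sim N(0, \tau_{00})

% A common convention for the intraclass correlation in logit models fixes the
% level-1 variance at \pi^2/3 on the latent-response scale:
\mathrm{ICC} = \frac{\tau_{00}}{\tau_{00} + \pi^{2}/3}
```

Here x_ij collects the individual-level controls, w_j the neighborhood-level covariates (disorder, collective efficacy, and the foreclosure rate), and tau_00 is the between-neighborhood variance whose proportional reduction across models is reported in the results; whether the authors used exactly this latent-scale ICC convention is not stated in the text.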
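Before turning to the results, the following sketch illustrates how the survey-based reputation indices and their neighborhood-level aggregates described above could be constructed. It is a minimal illustration, not the authors' code: the column names, the assumed 1-5 coding of the collective efficacy items, and the neighborhood identifier are all hypothetical.

```python
import pandas as pd

# Hypothetical item names standing in for the LVMASS survey items described above.
DISORDER_ITEMS = ["vacant_land", "unsupervised_teens", "litter", "vacant_houses", "graffiti"]  # coded 1-3
EFFICACY_ITEMS = ["close_knit", "trust_neighbors", "dont_get_along",
                  "shared_concerns", "solve_problems"]  # assumed coded 1-5 (strongly disagree to strongly agree)

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of the summed scale)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

def build_reputation_measures(df: pd.DataFrame) -> pd.DataFrame:
    """Sum the items into the 5-15 disorder and 5-25 collective efficacy indices,
    then attach each neighborhood's mean as the level-2 measure."""
    out = df.copy()
    # Reverse-code the negatively worded item so all items point toward more neighborliness.
    out["dont_get_along"] = 6 - out["dont_get_along"]
    out["disorder_index"] = out[DISORDER_ITEMS].sum(axis=1)
    out["efficacy_index"] = out[EFFICACY_ITEMS].sum(axis=1)
    neighborhood_means = out.groupby("neighborhood_id")[["disorder_index", "efficacy_index"]].mean()
    return out.join(neighborhood_means, on="neighborhood_id", rsuffix="_nbhd_mean")
```

In this sketch, calling cronbach_alpha(df[DISORDER_ITEMS]) would reproduce the kind of internal-consistency check reported above (0.74 and 0.79), and the "_nbhd_mean" columns correspond to the neighborhood-specific means used as level-2 covariates.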
--- Results According to the descriptive statistics in Table 2, a majority of residents (84%) reported a fairly good or very good level of neighborhood quality despite the ongoing foreclosure crisis. Sentiments regarding neighborhood property values are also generally positive in that a slight majority (56%) report being either very satisfied or somewhat satisfied with current home values. Unfortunately, a baseline measure is unavailable to determine whether these reported satisfaction levels are below pre-recession levels. According to the intraclass correlation coefficient (ICC) calculated from the null intercept-only model (not shown), approximately 30% of the variation in the sentiments regarding neighborhood quality exists across neighborhoods, and approximately 8% of the variation in sentiments regarding home values exists across neighborhoods. In both instances, there is greater variation in resident sentiment within neighborhoods than across neighborhoods, which is likely to be the case when studying neighborhood effects within a single metropolitan area. Yet, there is sufficient between-neighborhood variation for the primary objective of examining the relative role of neighborhood reputation versus neighborhood foreclosure rates in shaping individual-level sentiments. Neighborhood reputations are reflected in shared individual perceptions regarding problematic issues in the area and whether there is a commonly held belief among residents in the collective ability of the neighborhood to address issues if problems arise. On average, neighborhood reputations in Las Vegas during the foreclosure crisis are at a 50/50 level on both measures, as the overall means fall approximately halfway on the aggregated scale (e.g., ave. neighborhood disorder = 7.68; ave. collective efficacy = 13.04). This means that half of Las Vegas neighborhoods enjoy a generally positive reputation, whereas the other half generally has poorer than average reputations. There is also noteworthy geographic variation in projected neighborhood foreclosure rates, as the rates range from a low of 15% to a high of nearly 30%. The bivariate correlations between the two components of neighborhood reputation (disorder and collective efficacy) and foreclosure rates are high (.809 and -.737, respectively). These correlations indicate that neighborhoods with higher levels of perceived disorder have higher foreclosure rates, and that neighborhoods with lower levels of perceived collective efficacy have higher foreclosure rates. Note that collinearity is not a concern in the regression models, as the variance inflation factor for the foreclosure rate (VIF = 3.45) is below even the modest cut point for concern (e.g., 4). Table 3 provides the results from an analysis that disentangles the relative influence of neighborhood foreclosure rates and neighborhood reputation on individuals' sentiments regarding the general quality of the neighborhood and regarding property values. The results from six multilevel ordinal regression models (three for each outcome) are presented in Table 3. The models that contain only individual-level controls (not shown) provide the baseline variance components that are used for comparative purposes with the results that appear in Table 3. First, note that there are several individual-level effects that are generally robust throughout the analysis.
More highly educated individuals are more critical of the quality of their neighborhood, whereas age is positively associated with an individual's satisfaction with the current property values. 4 Homeownership is positively associated with neighborhood quality, although after conditioning on neighborhood reputation, homeownership fails to attain statistical significance. On the other hand, homeownership is negatively associated with the satisfaction level of the neighborhood's property values, suggesting a greater level of insecurity homeowners feel about what is usually their most valuable financial asset. The neighborhood-level variances from the models with only the individual-level controls are 1.004 for neighborhood quality and .200 for neighborhood property values. The respective intraclass correlation coefficients are .30 and .06, which are very similar to the ICCs from the intercept-only models, meaning the compositional effects stemming from these individual-level characteristics are minimal. Model 1a and Model 1b in Table 3 report the effects of neighborhood reputation on individual sentiments regarding the general quality of their neighborhoods, and their satisfaction toward property values in the neighborhood, before accounting for the foreclosure rate. The effects from perceived problems with neighborhood disorder and collective efficacy on assessments of neighborhood quality are strong and statistically significant beyond a 99.9% confidence level. For example, a one-unit difference in perceived neighborhood disorder (i.e., nearly a standard deviation) is associated with a 32% decline in the average resident's odds of reporting a more favorable response toward neighborhood quality (e.g., "fairly good" rather than "not very good") [(1 - exp(-.391)) × 100 ≈ 32%]. A one-unit difference in collective efficacy is associated with a 38% increase in the odds of reporting a positive response toward neighborhood quality compared to a negative assessment [(exp(.321) - 1) × 100 ≈ 38%]. The effect of collective efficacy is also strong when considering assessments of neighborhood property values in Model 1b. There, a one-unit difference in collective efficacy is associated with a 33% increase in the odds of reporting being "somewhat satisfied" versus "somewhat dissatisfied" with neighborhood property values [(exp(.285) - 1) × 100 ≈ 33%]. The effect of perceived neighborhood disorder fails to attain statistical significance in Model 1b, suggesting a lesser role of perceived disorder than collective efficacy in property assessments. Considering these measures together, we can say that neighborhood reputation does a very good job of explaining neighborhood-level variation. The proportional reduction in neighborhood-level variance is 97% [(1.004 - .030)/1.004] for assessments of neighborhood quality, and 99.5% [(.200 - .001)/.200] for neighborhood property values. Even when starting from modest intraclass correlation coefficients to begin with, the reduction in level-two variance attributed to neighborhood reputation is noteworthy. Model 2a and Model 2b in Table 3 assess the relationship between foreclosure rates and assessments of neighborhood quality and neighborhood property values prior to adjusting for neighborhood reputation. As expected, the effects of foreclosure are negative and statistically significant.
A one percentage point increase in a neighborhood's foreclosure rate is associated with a 22% decline in the average resident's assessment of the quality of their neighborhood and an 11% decline in the average resident's satisfaction with neighborhood property values. Foreclosure rates also explain neighborhood variation in residents' sentiments, but the explanatory power of foreclosure rates is not as impressive as it is for neighborhood reputation. Foreclosure rates account for 74% of the neighborhood variation in assessments of quality, but only 7% of the neighborhood variation in the assessments of property values. The theoretical motivation for this study concerns the role of neighborhood reputation in shaping individual sentiments during a crisis period. One perspective advanced here, via the foreclosure crisis hypothesis, suggests that the effects of neighborhood reputation may be largely filtered through objective neighborhood circumstances when a crisis strikes, causing the effects of neighborhood reputation to be less salient than during ordinary times. In support of this perspective, we should expect objective measures of neighborhood foreclosure during an economic crisis to significantly mediate the effects of neighborhood reputation. According to the results in Model 3a and 3b in Table 3, we find rather limited support for this perspective. When foreclosure rates are added to the model with the covariates for neighborhood reputation, the effect of neighborhood disorder attenuates by over a third when examining sentiments of neighborhood quality (b = -.391 vs. b = -.250), and when considering assessments of property values, the mediation effect of foreclosure on perceptions of neighborhood disorder is upwards of 88% of the initial effect [e.g., (-.056 + .007) / -.056]. However, several patterns in the results temper these findings. First, although foreclosure rates do attenuate the effects of neighborhood disorder, the initial effect of disorder on property assessments is not statistically significant and the direct effect of foreclosure in Model 3b also fails to attain statistical significance. Second, the attenuation of collective efficacy after adjusting for foreclosure in both Model 3a and 3b is minimal. Drawing on disaster recovery research, this study also posited an alternative hypothesis regarding the role of neighborhood reputations during a crisis. According to the neighborhood resiliency perspective, the relationship between foreclosure rates and the sentiments of residents may be mediated once adjusting for neighborhood reputation because neighborhood reputations may act as a guide for residents during the crisis. This should be especially true of collective efficacy, as neighbors may be more likely to witness the kinds of behaviors that conform to their preconceived beliefs. According to Model 3a and 3b, we find fairly strong support for this perspective, as the foreclosure rate is notably attenuated in both models (-.245 vs. -.077 and -.116 vs. -.028); and rather impressively, the effect of collective efficacy remains robust and statistically significant at a high level (.321 vs. .301 and .285 vs. .278). Thus, collective beliefs about a neighborhood's ability to prevent and address problematic issues appear to be a resounding aspect of a neighborhood's reputation that continues to shape individual sentiments during a crisis period.
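Before the formal test reported next, it may help to lay out the decomposition logic that the KHB method formalizes. The notation below is ours and the arithmetic simply restates figures already reported in the text; the actual KHB estimator additionally corrects for the rescaling of logit coefficients discussed in the following paragraph.

```latex
% For a focal neighborhood variable x and mediator z, compare the coefficient of x
% from the reduced model (z excluded, but placed on the scale of the full model)
% with the coefficient of x from the full model (z included):
\underbrace{\beta_x^{\text{reduced}}}_{\text{total effect}}
  = \underbrace{\beta_x^{\text{full}}}_{\text{direct effect}}
  + \underbrace{\bigl(\beta_x^{\text{reduced}} - \beta_x^{\text{full}}\bigr)}_{\text{indirect (mediated) effect}}

% Example from Models 1a and 3a, disorder mediated by the foreclosure rate:
% indirect = (-0.391) - (-0.250) = -0.141, the quantity whose significance is tested below.
```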
To formally test the statistical significance of these mediation effects we use the KHB-method (Breen, Karlson, and Holm 2013). We rely on the KHB-method because the method typically used for assessing mediation effects in linear models (e.g., the Sobel test) cannot be used in the context of nonlinear probability models (e.g., those using a logit link) because the change in the mediated coefficient is not only influenced by the mediators but also by a rescaling of the logit coefficients in relation to the error variance. The KHB-method distinguishes the change in the focal coefficient due to true mediation from the change that is due to rescaling. Robust standard errors for the decomposition effects (indirect, direct, and total) are used to obtain cluster-adjusted p-values. The results of this formal test confirm our preliminary conclusions. On the one hand, we find that only neighborhood disorder is significantly mediated by foreclosure rates (-.391 + .250 = -.141; p < .05, one-tail) when assessments of neighborhood quality are the outcome. When satisfaction with neighborhood home values is the outcome, the mediation effect (-.056 + .007 = -.049) is not statistically significant at even the p < .10 level. Collective efficacy is not significantly mediated by foreclosure rates in either model. These findings provide fairly limited support for the foreclosure crisis hypothesis: a foreclosure crisis only appears to modestly shape the relationship between a neighborhood's perceived level of disorder and a resident's assessment of neighborhood quality. Conversely, we find that the effects of foreclosure rates on individual assessments of neighborhood quality and satisfaction with home values are significantly mediated by collective efficacy and neighborhood disorder (p < .001, two-tail). Collective efficacy accounts for nearly 58 percent, and neighborhood disorder accounts for 42 percent, of the mediated effect of foreclosure on neighborhood quality (-.245 + .077 = -.168). Collective efficacy also accounts for nearly all of the mediated effect of foreclosure rates (-.116 + .028 = -.088) on home value satisfaction. These statistically significant effects support the neighborhood resiliency hypothesis: neighborhood reputations appear to have mitigated the local response of residents to the foreclosure crisis. --- Conclusion and Discussion The motivation for this study is based on the premise that neighborhood reputations matter in people's lives, and that they matter especially during unsettled times when preconceived beliefs are likely to be more heavily relied upon to guide residents. Yet simultaneously, this study recognizes that the objective realities wrought by a crisis will also force residents to reevaluate, and potentially remake in a new light, these previously held beliefs. Among those familiar with living in disaster areas, this is known as "the new normal." Understanding this dynamic interplay involves paying close attention to the way collective behaviors and shared beliefs function during a catastrophe. This study advances our understanding of this interplay by examining how objective realities and neighborhood reputations shape individual sentiments as a foreclosure crisis unfolds.
Central to our premise, on the one hand, is whether a particular crisis creates a large enough disjuncture between current beliefs and new realities to significantly alter resident sentiment toward their neighborhood, or on the other hand, whether residents largely respond to the crisis in a manner consistent with the neighborhood's current reputation, thereby either minimizing or maximizing the potential harm of the crisis. These are not mutually exclusive possibilities, but these two perspectives do lead to alternative hypotheses. The former perspective-the foreclosure crisis hypothesis-posits that commonly held beliefs about a neighborhood are affected by the realities of the crisis, and as a result, the reputational effects of a neighborhood should wane once the crisis-related effects are taken into consideration. The latter perspective-the neighborhood resiliency hypothesis-places more emphasis on the durability of a neighborhood's reputation by anticipating a robust, and largely unaffected, correspondence between collective beliefs and individual sentiments despite any objective crisis-related circumstances. The findings from our study provide qualified support for both perspectives. On the one hand, the effects of perceived neighborhood disorder on the sentiments residents feel toward the general quality of the neighborhood, and their comfort with current property values in the neighborhood, are greatly attenuated once we control for the neighborhood foreclosure rate. In other words, objective realities presented by the foreclosure crisis do affect the collective importance residents place on perceived levels of disorder when assessing, and perhaps reassessing, the quality of their neighborhoods. This finding supports the foreclosure crisis hypothesis. On the other hand, we also find that the effects of neighborhood foreclosure rates on the sentiments residents feel toward their neighborhood are significantly mediated by both neighborhood disorder and neighborhood collective efficacy-with collective efficacy accounting for the majority of the mediation effects in this study. This means that the effects from neighborhood foreclosure rates are in large part filtered through the neighborhood's current perceived status to influence how residents respond to the crisis. This finding supports the neighborhood resiliency hypothesis. Of particular note in this study are the salient and robust collective efficacy effects. It is quite remarkable that, during the foreclosure crisis, neighborhood collective efficacy not only significantly mediates the effects of foreclosure rates but also continues to independently shape individual sentiments. This is remarkable because Las Vegas is a highly transitory city with considerable speculation about the reputational authenticity of some of the area's newer master-planned communities (cf. Knox 2008). If collective efficacy is this salient under nascent conditions, it is also quite possible that collective efficacy will be even more important during a crisis for cities with many older well-established neighborhoods. Moreover, collective efficacy might be the key differentiating factor among otherwise homogenous master-planned communities in other areas of the county. The results of this study should be considered in light of several limitations. First, the LVMASS data are cross-sectional.
As such, they represent a snapshot of residents' perceptions of neighborhood quality of life, satisfaction with the economic value of their homes, and attitudes toward neighborliness during the midst of the Las Vegas foreclosure crisis. While the results of this research have demonstrated a robust link between housing foreclosures and residents' sentiments, as well as evidence that neighborhood collective efficacy mediates the effects of housing foreclosures, these data do not allow us to test the complete cycle from the crisis event through current neighborhood status to individual sentiments and then back full circle to neighborhood change. Our findings do capture, however, the all-important first stage in this process, and we look forward to collecting longitudinal data that will allow us to model the complete process. Second, omitted variable bias is always a concern with observational data, and as a result, we caution readers against inferring definitive causality from our results. It is possible that factors other than foreclosure rates and neighborhood reputations affect resident sentiments. For example, residential sorting could bias our foreclosure effects downward if many dissatisfied homeowners had time to relocate before the LVMASS. This also could mean that those residents remaining in hard-hit neighborhoods might be more content with their neighborhoods (e.g., for sentimental reasons), biasing the effects of neighborhood reputation upward. However, a more plausible scenario, at least for the beginning of the crisis, would be that dissatisfied homeowners would be unable to move because (a) their properties are underwater (i.e., they owe more than the market value of the property), and/or (b) they simply can't sell because of the lack of buyers. Frustrated residents in this situation would very likely feel resentment toward the neighborhood, and importantly, this effect would offset any downward bias attributed to residential out-migration. Third, although LVMASS data do not allow us to examine different neighborhood amenities, we suspect that some neighborhoods may be more protected from economic distress and report less negative neighborhood experiences than others because of particular amenities. Future research should explore in more detail whether master-planned communities and/or those with homeowners associations are buffered from the negative effects stemming from housing foreclosures. If these communities are commodified in ways that shield them from property value decline (Le Goix and Vesselinov 2012) through covenants, conditions, and restrictions (CCRs), then they might also be shielded from neighborhood quality decline during an economic downturn. On the other hand, to the extent that master-planned communities produce a housing price premium, and to the extent that HOAs fail to mitigate foreclosures, the effects of major boom-and-bust cycles may be especially pronounced in these types of neighborhoods. Future research should explore these possibilities in more detail. Lastly, our results need to be put into context. A foreclosure crisis is a relatively weak and slow-moving crisis scenario compared to several recent natural disasters.
Although the prospects for a full housing recovery in Las Vegas remain very much in question-wavering somewhere between the "Sunburnt" city envisioned by Hollander (2011) and that of the "business-as-usual" growth machine (Pais and Elliott 2008)-it would be rather surprising for housing foreclosures alone to completely refashion an area's reputation. In fact, recent evidence suggests that housing values are moving back toward pre-recession levels, with the Las Vegas metro area leading the pack in property value increases (Firki and Muro 2013;Friedhoff and Kulkarni 2013). Comparatively, it is perhaps more difficult to imagine how collective efficacy, or any other reputational characteristic, can spur resiliency when an entire community is physically leveled and fully displaced. Yet, time and time again, communities are rebuilt from utter devastation, and it would be equally difficult to imagine how this is possible without an appreciation for the collective trust residents have in their neighbors. What remains entirely unknown are the crisis thresholds and event conditions for when neighborhood reputations matter the most for disaster resiliency.
This study examines how two major components of a neighborhood's reputation-perceived disorder and collective efficacy-shape individuals' sentiments toward their neighborhoods during the foreclosure crisis triggered by the Great Recession. Of central interest is whether neighborhood reputations are durable in the face of a crisis (neighborhood resiliency hypothesis) or whether neighborhood reputations wane during times of duress (foreclosure crisis hypothesis). Geo-coded individual-level data from the Las Vegas Metropolitan Area Social Survey merged with data on census tract foreclosure rates are used to address this question. The results provide qualified support for both perspectives. In support of the neighborhood resiliency hypothesis, collective efficacy is positively associated with how residents feel about the quality of their neighborhoods, and this relationship is unaltered by foreclosure rates. In support of the foreclosure crisis hypothesis, foreclosure rates mediate the effects of neighborhood disorder on resident sentiment. The implications of these findings for community resiliency are discussed.
Most studies examining the role of social inequalities for adolescent overweight and obesity in the United States focus on differences in family income. An underlying assumption motivating this area of research is that money protects individuals from obesity in today's "obesogenic" society. Several theoretical and practical arguments have been used to buttress this supposition, yet most prior studies do not support this assumption. They do not find a significant negative association between family income and adolescent overweight or obesity in nationally representative samples of U.S. youth (Goodman et al., 2003b;Gordon-Larsen et al., 2003;Martin, 2008;Troiano & Flegal, 1998;Wang & Zhang, 2006;Zhang & Wang, 2007). Nonetheless, many scholars still assert that money should be an important resource that protects adolescents from being overweight. We seek to better understand how socioeconomic resources matter for adolescent weight in two important ways. First, we consider whether a different stratified resource is important: parents' education. Second, we consider whether adolescent weight is associated with the financial and educational resources in schools -another social context that is highly influential for adolescent well-being (Teitler & Weiss, 2000). Both families and schools are highly stratified with regard to both income and parents' education. Thus, we investigate how these two resources in these two contexts influence adolescents' risk of being overweight by analyzing data from the National Longitudinal Study of Adolescent Health (Add Health). --- FAMILY AND SCHOOL-LEVEL RESOURCES INFLUENCING ADOLESCENT WEIGHT Researchers often study family income and parents' education together as indicators of socioeconomic status and predict that they have a similar association with adolescent weight given that family income and parents' education are positively correlated (Balistreri & Van Hook, 2009;Goodman, 1999;Goodman et al., 2003a;Goodman et al., 2003b;Haas et al., 2003;Kimm et al., 1996;Strauss & Knight, 1999). We depart from this general approach and seek to unpack how parental education and income are distinctly associated with adolescent weight. This line of investigation and the hypotheses we derive are driven by prior research, which suggests that family income and parents' education may influence adolescent weight in different ways. We also examine how income and education are associated with adolescent weight across two contexts: the family and school levels. Research on the latter is relatively novel, which leads us to be more speculative about how school-level income and education are associated with adolescent overweight. Our hypotheses about their importance derive from empirical results in two prior studies and from weaving together several strands of prior research on schools. Bringing schools' socioeconomic resources into the research on adolescent overweight and obesity is a significant contribution given the paucity of research on this topic and the strong influence of schools on adolescents' lives (Teitler & Weiss, 2000). --- Family Resources and Adolescent Overweight Because most American adolescents are dependent on their parents, adolescent stratification is a function of their families' socioeconomic status, meaning their parents' income and education. We argue that parents' education and family income capture unique resources and patterns that can influence adolescent weight.
Family income provides families with the power to purchase goods and services, depending on their relative prices. In general, "healthy" food is relatively expensive and "bad" food is cheap (Drewnowski & Specter, 2004). Furthermore, the costs for adolescents' physical activity are rising as schools implement pay-to-play policies for organized sports (McNeal, 1998). As such, scholars have argued that greater family income can affect an adolescent's ability to maintain a healthy weight because it increases the family's ability to purchase "healthy" weight-related goods (Cawley, 2004). This theoretical perspective is pervasive in the literature and leads to the argument that family income should be negatively correlated with adolescent weight. Despite the dominance of the supposition of a negative correlation between family income and adolescent weight, income could also be positively correlated with adolescent weight. Instead of using money to promote a healthy weight, families and adolescents could spend their money on goods that generate risks for adolescent overweight, such as video games, or meals prepared away from home. The empirical evidence regarding the association between income and adolescent weight is mixed and generally does not fit with the dominant perspective that the correlation between income and adolescent weight is negative. Only one study finds a significant, negative association between family income and adolescents' weight in a nationally representative sample (Goodman, 1999), while other studies find a negative association only for narrowly defined adolescent subpopulations (Balistreri & Van Hook, 2009;Goodman et al., 2003b;Gordon-Larsen et al., 2003;Kimm et al., 1996;Miech et al., 2006;Troiano & Flegal, 1998;Zhang & Wang, 2007). Evidence supporting a positive association is also scarce: only one study finds a positive association among a nationally representative sample of adolescents (Haas et al., 2003). The majority of studies find no link between either family income or poverty and adolescent weight (Goodman et al., 2003b;Gordon-Larsen et al., 2003;Martin, 2008;Troiano & Flegal, 1998;Wang & Zhang, 2006;Zhang & Wang, 2007). More consistent is a small body of literature demonstrating that parents' education is significantly and negatively associated with adolescent weight among U.S. adolescents (Goodman, 1999;Goodman et al., 2003b;Haas et al., 2003;Martin, 2008;Sherwood et al., 2009). Some may interpret this finding as a different way of measuring family income, but that interpretation ignores evidence that parents' education captures other resources, net of family income, that shape adolescents' health and well-being. First, schooling contributes to learned effectiveness -a sense of control to accomplish goals, including those that are health-related (Mirowsky & Ross, 2003). More highly educated parents, thus, have more learned effectiveness, which should make them more likely to believe that they can influence their child's weight. Further, prior research shows that when parents try to regulate what their children eat (Ogden et al., 2006) and how active they are (Arluk et al., 2003), their children are generally leaner and less likely to be overweight. Second, education provides parents with general capabilities, skills and knowledge (Becker, 1993) and correlates with the volume and breadth of their health-related knowledge (Link et al., 1998). We expect that education is positively correlated with an understanding of obesity's etiology and possible consequences.
Research bears this out. Less educated parents tend to rely on folk understandings about what signifies a healthy weight for youth (Jain et al., 2001) and underestimate the incidence of youth overweight and obesity (Goodman et al., 2000). One factor that might explain this is that more highly educated parents are more likely to engage with medical professionals about their child's health (Lareau, 2003). Furthermore, a better understanding of obesity has been shown to prevent weight gain. Highly educated adults are less likely to be obese because of their greater awareness of the association between diet and disease (Nayga, 2000). We anticipate that this awareness carries over into how more educated parents feed and socialize their adolescent children and, thus, could influence adolescents' own weight-related choices. Together, the arguments and empirical evidence lead us to expect that parents' education but not income will be related to adolescent overweight because of the knowledge, skills, experiences, and perspectives that are associated with more formal education. We do not discount the importance of income. Instead, we hypothesize that income matters at the school level because money strongly shapes the amenities and stressors in adolescents' nonfamilial environments. --- School Resources and Adolescent Overweight We focus on schools because, outside of families, schools are the primary social institutions that organize adolescents' lives. During the academic year, adolescents spend the majority of their day at school (Zick, 2010). Schools also shape adolescents' daily activities and friendships through their extracurricular offerings (Guest & Schneider, 2003) and by organizing students into grade levels and academic tracks (Kubitschek & Hallinan, 1998). Schools also influence what adolescents eat, do and value (Story et al., 2006;von Hippel et al., 2007). Most adolescents eat at least one meal per day at schools, which serve breakfast and lunch and have vending machines available on campus (Delva et al., 2007). Adolescents' physical activity is affected by the availability and quality of a school's physical education courses, extracurricular activities, and exercise facilities (Leviton, 2008;Sallis et al., 2001). Finally, school-based cliques influence students' weight-related norms and values (Ali et al., 2011b;Paxton et al., 1999), which in turn shape their dieting and weight-control behaviors (Ali et al., 2011a;Mueller et al., 2010). Despite the large role that schools have in adolescents' lives, few studies have examined how school-level resources influence adolescent weight. Two exceptions focus on parallel resources to those that we investigate at the family level: the average family income of schools (Richmond & Subramanian, 2008) and the average education level of parents in the school (O'Malley et al., 2007). In these studies, both school-level resources have a significant, negative association with adolescent weight. Unfortunately, neither study addresses whether the associations they uncover are confounded by other school-level resources, including the alternate measure of school socioeconomic resources. Other potential confounders are also not addressed well. O'Malley and colleagues (2007) control for many school characteristics, the adolescent's race/ethnicity and parents' education in their statistical models, but not other individual- or family-level factors that predict adolescent weight.
Richmond and Subramanian (2008) account for a limited number of individual- and family-level predictors of adolescent weight, but only include one school-level confounder: the school's racial/ethnic composition. A primary contribution and strength of this study is the examination of whether the average family income and parental education level in schools are related to adolescent overweight net of each other and other school-, family-, and individual-level confounders. Furthermore, as the first study to examine parallel income and educational resources across family and school contexts, we can present a more complete picture of how these resources are related to weight across the two most important social institutions in adolescents' lives. An additional contribution is that we develop explanations for how and why school-level income and parents' education are associated with adolescent overweight. Given the paucity of research on this topic, we offer new, but speculative arguments and predictions, bringing in related research where possible. We hypothesize that the average family income of a school better predicts adolescent overweight than does the average education level of parents within a school. Furthermore, we expect the estimated effect of school-level income to be nonlinear. We hypothesize that poor schools are particularly risky for adolescent overweight relative to both middle- and high-income schools. These suppositions stem from several factors. First, school-level income is highly correlated with school funding, despite states' redistributive efforts (Corcoran et al., 2004;U.S. Government Accountability Office, 1997). Further, school funding plays a direct and important role in a school's food provisions and ability to maintain facilities and curricula that promote physical activity. Richer schools generally offer healthier à la carte and vending options than poorer schools (Delva et al., 2007) and can fully finance school physical education programs and extracurricular activities (Leviton, 2008;Story et al., 2006). Poorer schools have frequently had to cut physical activity programs given recent pressure to focus on academic test scores (Leviton, 2008;Story et al., 2006). Yet physical education and extracurricular programs are particularly important for middle and high school students given that physical activity falls precipitously during adolescence (Must & Tybor, 2005). In addition, after-school programs (of any kind) could help adolescents maintain a healthy weight because their participation limits adolescents' time available for snacking and watching television (von Hippel et al., 2007). Second, school poverty may also be associated with adolescents' weight indirectly. Poor schools have a greater prevalence of juvenile delinquency, disorder, and classroom disruption (Mrug et al., 2008), making them stressful environments that induce individuals' stress response. Unfortunately, chronic activation of the stress response increases abdominal fat (Anagnostis et al., 2009;Bjorntorp & Rosmond, 2000;Fraser et al., 1999). This further buttresses our hypothesis that poor schools are adverse weight-related environments. We speculate that a school's average parental education level and the prevalence of highly educated parents, in particular, could indirectly be associated with adolescent weight.
Because highly educated parents make more demands for school improvements (Lareau, 2003), we expect that schools would face more pressure to maintain or improve aspects related to adolescent weight as the average of parents' education increases. Yet these efforts may be futile if the associated financial costs are high. For example, seemingly simple suggestions like eliminating advertisements for and availability of high-calorie foods and beverages in schools come at a cost because many schools rely on food industry subsidies to fund academic and extracurricular programs (Nestle, 2002). Therefore, we predict that school poverty constrains the relative influence of parents' collective education within a school. --- The Intersection between School Poverty and Own Parents' Education Our study asks one final question about family- and school-level resources: Do family-level parental education and school-level poverty work in conjunction to produce a joint association with adolescent weight? We speculate that school-level poverty modifies the association between family-level parental education and adolescent weight. This proposed interaction is motivated by theories of resource multiplication and resource substitution (Ross & Mirowsky, 2006). Resource multiplication theory argues that various resources accumulate to impact health (Ross & Mirowsky, 2006). In our study, this would imply that adolescents of highly educated parents in rich schools have more opportunities for maintaining a healthy weight. Those opportunities would cascade and amplify the effects of each other. As such, differences in adolescent weight by parents' education would be larger in rich versus poor schools. Conversely, resource substitution theory predicts that various resources can have a compensatory dynamic that offsets the risks (or advantages) of another resource for one's health (Ross & Mirowsky, 2006). In our analysis, this would lead us to expect that adolescents with more educated parents are better buffered against the weight-related risks of attending a poor school. In this scenario, differences in adolescent weight by parents' education are greatest in poor schools and relatively diminished in rich schools. A priori, we think both processes are plausible. In summary, we argue that the relationships between financial resources, educational resources and adolescent weight are complicated. We agree with scholars who argue that money is important for weight and we agree that parental education is important. But we argue that the function and relative importance of these resources varies across families and schools. We offer an initial examination of these parallel resources by exploring whether there is any evidence for the differential associations we propose across the family and school levels by analyzing cross-sectional, nationally representative data. --- DATA AND METHODS Add Health is a United States school-based sample of 20,745 1994-1995 7th-12th graders from over 140 high schools and middle schools (Udry, 2003). The original sample, which was followed up in 1995-1996, 2001-2002 and 2007-2008, includes oversamples of Cubans, Puerto Ricans, Chinese, and high socioeconomic status African Americans (Harris et al., 2003). Human subjects approval for this study was obtained from the Pennsylvania State University's IRB. We received an expedited review for secondary data. Our analysis relies on the 1994-1995 Wave 1 data.
This is the only survey wave when parents were interviewed (and, thus, family income measured) and when school-level characteristics were obtained. Changes in family and school resources cannot be assessed. In addition, a significant proportion of adolescents are not in their Wave 1 schools by Wave 2. Some students have made normative transitions from middle school to high school and some have made non-normative transfers to other schools (Riegle-Crumb et al., 2005). In addition, adolescents who were high school seniors in Wave 1 were not followed in Wave 2. Thus, for nearly a third of our sample, Wave 1 school characteristics no longer characterize their Wave 2 schools. By Waves 3 and 4, Add Health respondents are no longer in secondary school. Despite these limitations, Add Health is still the best data source for our study. No other nationally representative data set collected since Add Health contains the requisite information on adolescents' schools and families or has data on so many factors that are confounded with socioeconomic status and weight. We make the following sample restrictions. We randomly select one adolescent per family using STATA's random number generator if a family contributes more than one sibling to the Add Health sample. We do this because siblings cannot be treated as independent observations and, to estimate a more complicated three-level HLM model (with individual students nested within families within schools), we would need to drop 70% of the sampled families because they have only one sampled adolescent in Add Health. We exclude adolescents who were pregnant or had an unknown pregnancy status between 1994 and 1996 to avoid confounding due to the joint determination of weight and fertility. Finally, we drop adolescents who did not have a valid sampling weight or did not attend an Add Health school. We utilize multiple imputation to replace any missing data on analytic variables, which replaces missing values with predictions from information observed in the sample (Rubin, 1987). We use the supplemental program "ice" within STATA 9.0 (Royston, 2005a, b) to create five imputed data sets. The imputation models include all of the variables included in the empirical models, as well as each parent's occupation and adolescents' Wave 2 weight. We estimate the empirical models for each imputed data set and then combine the results, accounting for variance within and between imputed samples to calculate the coefficients' standard errors (Acock, 2005;Rubin, 1987). The final sample is 16,133 adolescents in 16,133 families attending one of 132 schools. Given Add Health's design (Chantala & Tabor, 1999), there are, on average, 128 interviewed students per school (range: 16-1,443; interquartile range: 67-136). Overall, 33% of our sample has missing data on at least one analytic variable. The variable with the most missing data is family income, the variable that we use to assess poverty status. Among sample members, 26% have missing data on income: 15% because parents did not complete a parent questionnaire and 11% due to item nonresponse. Because this is a primary study variable of interest, we confirm in several supplemental tests that neither missing income data nor our imputation procedure biases our study results.
The robustness checks include (1) estimating the models on a listwise deletion sample, (2) substituting an alternative indicator of poverty (i.e., parents cannot pay bills) that has less missing data (only 2.3% due to item non-response), and (3) including flags for whether family income or any other data are missing. Regarding the latter, we find that the flag for missing family income is never statistically significant, but the flag for any missing data is positive and statistically significant. Yet our substantive conclusions about our key variables do not change with any of the three robustness checks. (Results available upon request.) Despite the importance of schools for adolescents, some may worry that our school measures are simply capturing neighborhood characteristics. Yet American schools, especially high schools, typically draw from multiple neighborhoods. For the schools in our sample, the median number of census block groups (each containing approximately 1,000 residents) per school is 29 (range: 2-286), the median number of census tracts (a common measure of U.S. neighborhoods containing approximately 4,000 residents) is 15 (range: 2-231), and the median number of counties is 3 (range: 1-9). In our sample, schools are not reducible to neighborhoods. --- Measures Adolescent Overweight-This dichotomous variable is based on adolescents' Wave 1 self-reported height and weight, which we use to construct age- and sex-specific BMI percentiles using U.S. Centers for Disease Control and Prevention guidelines (Ogden et al., 2002b). We then classify adolescents as overweight or obese (BMI ≥ 85th percentile) versus normal weight or underweight. In supplemental models, we also predict BMI z-scores with a linear model and arrive at the same substantive conclusions. (Results available upon request.) Family Resources-We measure parents' education as years of completed schooling (Ross & Mirowsky, 1999). In two-parent families, it is the average of both parents' education. These data are first obtained from the parent, but are supplemented with the adolescent's report when parent-reported data are missing. If both reports are missing, the measure is multiply imputed. Supplemental models find nearly identical results using maternal education. For income, we create a dichotomous variable indicating that the family is poor (=1) based on parental reports of the total, pre-tax income the family received in 1994, the family's composition, and the U.S. Census Bureau official poverty thresholds for 1994 (United States Census Bureau, 2005). We focus on poverty rather than other income specifications for ease of interpretation and comparability to other studies. That said, we also estimated models using several alternative measures to ensure that our findings are insensitive to how family income is operationalized. The specific measures are as follows: (1) a linear measure of the family's originally reported total, pre-tax income, (2) the started log of income (i.e., ln[income + 1]) to have a more normal distribution of income and allow for nonlinearities whereby a $1 increase in income is more consequential at the bottom versus the top of the income distribution, (3) five dichotomous variables to indicate where, within six income percentile categories, the family income falls to examine nonlinearities throughout the income distribution, and (4) a linear measure of the family's income-to-needs ratio, which is calculated as the ratio of the family's income to the U.S.
Census Bureau's official 1994 poverty threshold for their family type. The substantive results are identical across these measurements (see Appendix Table 1). School Resources-School-level parental education is measured as the median of parents' years of schooling for attending students. School income is defined as the percentage of students in poverty, to parallel our measure of family poverty. We operationalize this variable by aggregating Wave 1 family poverty data for children attending each sampled school to calculate the percent who are poor. In supplementary analyses, we also investigate aggregations of the four other family-level measures of income (described above). Results from these supplementary models, shown in Appendix Table 1, reinforce our theoretical emphasis on school poverty; all of the nonlinear models demonstrate that the key differences are at the bottom of the school income distribution. Control variables-The models control for the adolescent's age (measured in years), racial/ethnic identity (non-Latino, white = reference, African American, Latino, Asian, other), parental obesity (neither parent obese [reference category], both parents obese, mother obese, father obese), and dummy variables for whether they are female (=1), disabled (=1), born in the United States (=1), and/or athletic (=1). Most individual-and family-level control variables derive from the adolescent's Wave 1 self-reports. Racial/ ethnic identity is based on questions with predetermined categories, but with the option to select more than one. Adolescents' athleticism is based on reports of participating in an organized school sport and/or playing an active sport or exercising five or more times a week during the past week. We include this variable because BMI conflates fat mass with fat-free mass (i.e., muscle and bones). Parental obesity is based on the parent's report of whether the adolescent's biological mother and/or father is "obese." We also control for school characteristics to guard against confounding with school resources and to account for Add Health's complex survey design. These include the school's size, regional location (west, midwest, south, or northeast [reference category]), urbanicity (suburban, rural, or urban [reference category]), whether it is a public school (yes =1), and the school's racial/ethnic composition (% African American, % Latino, % Asian, % other, % non-Latino white [reference category]). The school's racial/ethnic composition is derived from aggregating across attending students' characteristics, while the others are derived from Add Health's administrative data. --- Statistical analysis We use hierarchical logistic regression models in HLM 6.0 to model the effects of both family-level (i.e., "level 1") and school-level (i.e., "level 2") resources for adolescent overweight. Hierarchical models separate between-group (here, defined as schools) and within-group variance to provide accurate estimates of parameter effects and standard errors, adjusted for the non-independence of people in the same group (Bryk & Raudenbush, 1992). We estimate four models. The null model identifies the extent to which adolescent overweight clusters within schools. The second model includes all individual and family characteristics, as well as the school-level variables used in Add Health's sampling design (i.e., size, region, urbanicity, and school type). The third model adds school-level income and school-level parental education, as well as their racial/ethnic composition. 
Fourth, we add interactions between parents' years of schooling and school-level poverty. --- RESULTS We begin by describing our analytic sample using weighted descriptive statistics presented in Table 1. Similar to national estimates for the mid-1990s (Ogden et al., 2002a;Troiano et al., 1995), 25% of our sample is either overweight or obese. For the remainder of the text we refer to this group as "overweight." On average, the adolescents' parents have completed 13.2 years of schooling and approximately 19.6% of the adolescents live in poverty. Given that school resources are aggregates of these adolescent data, it is not surprising that the average level of student poverty and the school mean of parental education are similar to the individual estimates. The sample characteristics generally fit with national patterns. In Table 2 we show calculated correlations between parents' years of schooling, the dichotomous variable for family poverty, and the log of family income to ensure that there is sufficient variation in these family- and school-level resources to estimate their independent effects. The correlation between parents' education and family poverty is -0.34, while the correlation between parents' education and the log of family income is -0.73. These estimates suggest that, while there is notable overlap, there is also sufficient variation to distinguish between these two types of family resources with 16,133 cases. Table 2 also shows that the correlation between family- and school-level poverty is 0.35, the correlation between the log of family income and the school's mean log of family income is 0.46, and the correlation between parents' years of schooling and the school's median years of parents' schooling is 0.43. Thus, there is sufficient variation between family- and school-level resources to examine their differential effects on adolescent overweight. Results from multivariate, hierarchical logistic regression models predicting adolescent overweight are presented in Table 3. We estimate a null model (Model 1) without any covariates to identify the extent to which adolescent overweight differs across schools. The estimated variance between schools (i.e., the intraclass correlation coefficient) is statistically significant, suggesting that there are school-level differences in the prevalence of overweight. The intraclass correlation provides empirical justification for our exploration of school-level factors in a hierarchical model. Model 2 adds family poverty and parental education to the model. Similar to many prior studies, we find that living in a poor family is not significantly related to whether an adolescent is overweight. In supplemental models, we omit parental education from the model and find that family poverty is still not statistically significant. (Results available upon request.) Therefore, issues of multicollinearity are not driving the null finding for family poverty. In contrast, the association between parental education and adolescent overweight is statistically significant regardless of whether we include family poverty in the model or not. With each additional year of parents' schooling, the odds that an adolescent is overweight decline by about 5% (1 - e^-0.055 = 0.053, i.e., 5.3%). This suggests that, for adolescent overweight, how much money a family has is less important than parents' formal schooling. Model 3 adds school-level resources and racial/ethnic composition. As expected, the findings for school resources are the opposite of what we find for families.
The median level of parental education in a school is not significantly associated with adolescent overweight, but school-level poverty is. The odds that an adolescent is overweight increase by about 1% (e^(0.013) = 1.013, i.e., 1.3%) with each percentage-point increase in the share of students at one's school who are poor, or by 19.5% with a one-standard-deviation increase in school poverty (s.d. = 15%). This result is rather robust given that Model 3 includes such a wide range of individual-, family-, and school-level confounders. The significant negative association between (own) parents' education and adolescent overweight is only minimally reduced and remains statistically significant in Model 3. Overall, the results in Model 3 indicate that adolescents are at greater risk of overweight if they attend poor schools and if they have parents with less education.

The remaining question is whether these two risk factors moderate each other. The results in Model 4 show that they do. We find a small, but significant and positive, interaction between school-level poverty and one's own parents' education. The association between parents' education and adolescent overweight is not uniform across different levels of school poverty. To clarify the patterns, Figure 1 shows the predicted probability of adolescent overweight as parents' education increases for students who attend schools with average, low, and high proportions of poor students when all other variables in Model 4 are held constant at their mean or modal values. Average school poverty equals the school mean (20%), whereas low and high poverty are defined as one standard deviation (15%) below (i.e., 5%) or above the mean (i.e., 35%), respectively. Figure 1 indicates that the benefits associated with increased parental education are smallest in the poorest schools and greatest in the richest schools. In the poorest schools, the predicted probability of being overweight among students whose parents have 12 versus 16 years of completed schooling is 0.47 and 0.45, respectively. In other words, the risks are almost exactly the same and relatively high, regardless of whether adolescents have parents who are high school or college graduates. Conversely, in the richest schools, the predicted probability of adolescent overweight is lower overall, and the same four-year difference between parents who are high school and college graduates nets a larger reduction in the predicted probability of adolescent overweight. In summary, the protective effects of parents' education are greatest in richer schools and fit patterns of resource multiplication (Ross & Mirowsky, 2006).

--- CONCLUSIONS AND DISCUSSION

This study aims to clarify how the risk of adolescent overweight is associated with socioeconomic stratification across the two primary social institutions in adolescents' lives. We contribute to the literature on socioeconomic status and adolescent weight, showing that these patterns are quite complicated, requiring investigators to draw from different research strands to understand why and how family- and school-level poverty and educational resources could matter for adolescent weight both alone and net of each other. Our findings make three significant contributions to knowledge about how socioeconomic stratification influences adolescent overweight. First, our analysis demonstrates that, net of confounders and each other, parents' education, but not family poverty, is associated with adolescent overweight.
We speculate that highly educated parents influence their child's weight by using their learned effectiveness, knowledge, and skills to help adolescents better navigate obesogenic environments. Educated parents also likely transmit their knowledge to their children, which may help adolescents make better weight-related choices themselves. We suspect that family poverty does not predict adolescent weight because, with more money, parents could just as easily buy a bigger cable TV package, meals out, or a house in a distant, new suburb versus buying goods that encourage more physical activity or a healthier diet. In essence, it may take knowledge or a particular outlook for parents to even consider the weight-related dimensions of their purchases.

Second, we demonstrate that poverty matters for adolescent overweight, but at the school level. We speculate that school poverty shapes the weight-related structural features of schools.
It likely diminishes a school's ability to offer students healthier food choices and physical activity options and may necessitate food industry corporate sponsorships (Nestle, 2002). The stressful nature of poor school environments may also contribute to adolescent overweight, given that repeated activation of the stress response increases abdominal fat. An alternative explanation is that poor schools may engender or reinforce weight-related norms that are more accepting of, or less averse to, adolescent overweight. Schools and the peer groups they foster help define whom adolescents see as appropriate references for social comparisons (Crosnoe, 2000).

Some may worry that the significant association between school poverty and adolescent weight simply reflects rich parents self-selecting into better schools, especially given that we aggregate family-level data to measure school poverty. Our analysis actually speaks to this concern. If this were the case, then it would imply that school poverty mediates the association between family poverty and adolescent obesity. Thus, family poverty would have to be a significant predictor of adolescent overweight before measures of school resources are included in statistical models (see Model 2, Table 3). We find no such association, even when we utilize other specifications of family income in supplementary analyses. (Results available upon request.) We also estimated supplemental models that include a dummy variable for whether the parent agreed with the following statement (48% did agree): "You live here because the schools here are better than they are in other neighborhoods." This variable is never statistically significant in models predicting adolescent overweight, nor is its interaction with family- or school-level poverty. Further, our study results remain unchanged when statistical models include this additional confounder. (Results available upon request.) This further suggests that endogenous sorting into schools does not drive our results.

Our third contribution is that we demonstrate that school-level poverty moderates the association between adolescents' own parents' education and their body weight. This is one of our most important and intriguing findings. School poverty impinges on the protective role of increased parents' education. In high-poverty schools, parental education has an almost negligible association with adolescent overweight. This supports Ross and Mirowsky's (2006) model of resource multiplication: the effectiveness of one resource is hampered when other resources are limited. This finding speaks to the power of larger social environments for setting the opportunities and constraints that youth and their families must navigate. In more obesogenic school environments, parents' educational resources may be overwhelmed.

In summary, the manuscript notably advances our understanding of families and schools as stratified health contexts that shape adolescent weight. Our findings are also instrumental in helping explain the counterintuitive null association between family poverty and adolescent overweight. It is not that poverty does not matter. Instead, it matters at a larger level of social organization than the family: the school context, both directly and as a moderator. With additional data, future research should explore the mechanisms that undergird these observed patterns. In addition, future research should consider the degree to which these patterns vary by sex and race/ethnicity.
Prior research finds that family income is statistically significant and negatively correlated with adolescent weight among white girls, but not white boys, while such sex differences are muted among other racial/ethnic groups (Gordon-Larsen et al., 2003; Wang & Zhang, 2006; Zhang & Wang, 2004). Such research would further reveal how these resources in these two stratified health contexts "get under the skin."

Our findings must be considered within the boundaries of three study limitations. First, the study design is cross-sectional because Add Health does not have longitudinal data on family or school resources. That said, we do not expect the causal process to work in the opposite direction, whereby adolescent overweight affects family and school resources (i.e., as a "health selection" process; Haas, 2006; Palloni, 2006). Furthermore, the cross-sectional measures of poverty and parental education that we use ensure that we assess these school-level resources when students are observably within them. This is particularly important given that many adolescents change schools outside of the formal, age-graded process (Riegle-Crumb et al., 2005). Second, with only 132 schools, we have less power to adjudicate between the relative roles of poverty and parental education at the school level. Finally, the measure of income in Add Health is rather limited because it is based on one question. As such, the estimated coefficients for both family- and school-level poverty could be downwardly biased if there is significant and systematic measurement error. This suggests that, with a more detailed measure of income, we might find a significant effect of family poverty, but it also implies that we may be underestimating the role of school poverty for adolescent overweight.

Despite these limitations, our study makes significant contributions to understanding which socioeconomic resources matter for adolescent overweight. We move beyond thinking about resources solely in terms of their volume and begin to consider variations in their meaning, operation, and effectiveness across different health-related contexts. The study's findings push us to more fully consider why money and education matter independently, as well as the environments within which these different resources are embedded. It is simpler to measure and construct hypotheses about the meaning of different family resources, but given that adolescents spend significant portions of their day outside the home (and, of that time, mostly in schools), it is important to consider the resources of environments outside the home and their associated risks or benefits. Such nuanced lines of investigation are imperative for the development of effective interventions for adolescent overweight and improving population health more generally.

• Family-level parental education and school-level poverty predict adolescent overweight.
• Family-level poverty and school-level parental education do not predict adolescent overweight.
• Family education and school poverty interact: the benefits of having better educated parents are reduced in poor schools.

All models include sex, age, race/ethnicity, nativity, disability, athleticism, parental obesity, region, urbanicity, school size, whether the school is public, and school racial/ethnic composition.

--- Appendix Table 1
Select coefficients from hierarchical logistic regression models predicting adolescent overweight with different measures of income (N = 16,133)
The current study examines how poverty and education in both the family and school contexts influence adolescent weight. Prior research has produced an incomplete and often counterintuitive picture. We develop a framework to better understand how income and education operate alone and in conjunction with each other across families and schools. We test it by analyzing data from Wave 1 of the U.S.-based National Longitudinal Study of Adolescent Health (N = 16,133 in 132 schools) collected in 1994-1995. Using hierarchical logistic regression models and parallel indicators of family- and school-level poverty and educational resources, we find that at the family level, parents' education, but not poverty status, is associated with adolescent overweight. At the school level, the concentration of poverty within a school, but not the average level of parents' education, is associated with adolescent overweight. Further, increases in school poverty diminish the effectiveness of adolescents' own parents' education for protecting against the risks of overweight. The findings make a significant contribution by moving beyond the investigation of a single socioeconomic resource or social context. They push us to more fully consider when, where, and why money and education matter independently and jointly across health-related contexts.
Introduction

The focus of this study is to explore socioeconomic inequality (SEI) and the educational management information system (EMIS) among students from rural backgrounds in China. China's fast economic expansion has come at the cost of increasing socioeconomic disparity and a low level of information technology in education. SEI is also related to per capita gross domestic product (GDP), which increased at an average of 8.6% between 1979 and 2014. The improvement in ordinary people's living conditions remains modest, and the wealth gap between the affluent and the poor is expanding (Cheuk et al., 2021). Only one percent of Chinese families held more than one-third of total household wealth and education in 2014, while the poorest one-fourth owned less than 2% (Xie and Jin, 2015). Meanwhile, inhabitants of some areas of China have been exposed to substantially increased risks of severe weather, environmental pollution, and restricted access to education as a consequence of aggressive economic growth (Liu et al., 2010; Cho et al., 2022; Ye et al., 2022), threatening their quality of life, education, and health (Han et al., 2021). Furthermore, the human toll is expected to rise to 1,563 per million persons per year by 2060 in China, and the loss of GDP in 2060 due to increased health spending, constrained access to education and information technology, and reduced labor productivity is expected to be around 2.1%, according to the Organisation for Economic Co-operation and Development (OECD) (OCDE et al., 2016). Figure 1 presents, in pictorial form, a conceptual understanding of this process of interlinking disparities.

The idea constitutes a new domain with largely unstudied potential in the systematic literature, because there is an SEI problem in the education sector with respect to EMIS. The interconnection of SEI and EMIS as an academic field matures with qualitative methods and techniques applied through an ethnographic lens. SEI has been the object of various studies over the last two decades, and it has a direct relationship with information technology as well as education. In light of this, in the Chinese cities with the fastest-expanding economies, such as Beijing and Shanghai, pollution-induced disparity and the accompanying environmental inequality (EI) between the affluent and the poor can be seen (Xie and Jin, 2015). Nevertheless, SEI and EMIS are still missing from the perspective of students, and this remains a disadvantaged academic domain; for instance, ethnic minority communities and low-SES, rural SEI groups bear a disproportionate burden in education and in their access to information management systems (IMS). A considerable body of research has explored how people with low socioeconomic status (SES), low income, and low education or non-professional occupations are more likely to be exposed to environmental catastrophes such as air pollution, flood, drought, and extreme heat, while IMS exerts a higher level of influence on education, which could reduce such exposure (Li et al., 2018; Park et al., 2018; Ur Rahman et al., 2021; Zhuo et al., 2021). Hajat et al. (2015) concluded that this literature has concentrated on the effects of EI on these three primary SES indicators without taking into consideration possible influencing variables such as family income, education, and MIS.
Vassilakopoulou and Hustad (2021) examined the most recent decade of information systems (IS) research on the digital divide in settings with strong technical infrastructures and economic conditions. They found that models of digital disparities were present and that SEI affects the digital divide across different societies. This ethnographic qualitative research paper identifies the gap in the previous literature and then traces the cyclic process of socioeconomic inequalities regarding EMIS for rural-background Chinese students, which is depicted in Figure 2. As a consequence, previous studies may have failed to explain SEI and the consequent differences in SES effectively. Several social epidemiological factors are missing from the SEI domain, such as wealth and income, education, and IMS, which are stronger indicators of wellbeing inequality in society, especially for the labeling of a new generation of students. Over the last two decades, several academic authors have claimed that disparities in environmental consequences and family wealth, rather than household income, are more robust indicators of SES in China (Pastor-Satorras et al., 2015; Chu et al., 2020). A family's wealth may represent its capacity to acquire an apartment in a chosen location, which could influence household members' exposure to education and its requirements in the technological world. To acquire better knowledge of SES and SEI in China, it is necessary to look at whether different forms of SEI have an impact on EMIS and its exposure in general among students (Zheng and Yin, 2022). This is the problem posed in terms of SEI and EMIS among rural students at the middle school level (Figure 3).

SEI research conducted at the level of rural students captures the critical spatial factors of EMIS, which are still ignored in China, a country with high gross domestic product (GDP) growth. For instance, Wang et al. (2021) pointed out that information and communication technology (ICT) has had a significant influence on the economy and society in recent decades. More precisely, although ICT is critical for promoting socioeconomic development (SED), it can have a detrimental impact on SED in neighboring regions (education, schools, and academic achievement), meaning that China's provinces face a digital divide that might produce high socioeconomic growth while inequality remains constant. That research concluded that practical policy proposals for the future growth of ICT are important to end EMIS-related inequality among rural communities, reduce the negative impacts of the digital divide, and maximize the advantages of ICT-based SED. Furthermore, Zehavi et al. (2005) found many social inequities exposed by several academic studies. However, the interaction and entanglement of digital technology, structural stratification, and the established propensity for "othering" in cultures of education, especially through the lens of an intersectional feminist approach, are rarely narrated, and this is a dire need in the Chinese rural context. We propose that IS research move beyond simplistic notions of digital divisions to examine digital technology as implicated in complex and intersectional power systems, and that, as part of a future research agenda, we improve our sensitivity to the positionality of individuals and groups within social orders (Zheng et al., 2022).
There are other implications for practice and policy, such as going beyond single-axis analyses of digital exclusion, and students' education in relation to IS is an excellent lens for future study. In light of this, Stewart (2021) argued that academic institutions, academics, administrators, educators, and students have thoroughly appreciated the emergency remote teaching (ERT) strategy, as academic communities around the world switched to ERT. That literature overview brings together four significant themes that emerged from a thematic analysis of the findings: ERT experiences; the digital divide and massive educational/socioeconomic disparities; routinely encountered ERT difficulties, issues, and challenges; and frequently made ERT changes. The study recommends to future researchers that technology is the best tool to teach students without socioeconomic inequalities (Ma et al., 2022). This problem is a long-standing challenge for Chinese rural students and communities, especially in the education sector, and it could be addressed with the particular remedies outlined here (see Figure 3).

--- Theorizing social class

The concept of social class was coined by Karl Marx, and Krieger (2001) explained the socioeconomic domain in the form of social class. Social class arises from a person's economic contacts, which lead to the formation of social groups. The production, distribution, and consumption of goods, services, and information, as well as the relationships between them, affect these interactions. As a result, social class is founded on a person's position in the economy, whether as an employer, employee, self-employed person, or unemployed person (in the formal and informal sectors alike). Furthermore, the exploitation and dominance of people are part of social class.

--- Conflict theory of extension in the form of exploitation and domination

Wright describes the relation between exploitation and domination as part of class theory. This conceptualization is most closely aligned with Marxism (or neo-Marxism) (Muntaner et al., 2002; Muntaner and Lynch, 2020), and it describes the processes by which some social classes control the lives and activities of others (domination), as well as the processes by which capitalists (owners of the means of production) gain economic benefits from the labor of others (exploitation) (Wright, 2015). In this perspective, the main distinction between social classes is between those who own and control the means of production and those who are paid to utilize them. Additional subcategories may be added, such as the education of the parents; the children of the non-dominant class are fully exploited in this way, and their educational capabilities do not grow because they have no access to a high level of education and technology (Breen and Goldthorpe, 2001). The theory of exploitation and domination is applied here to SEI among Chinese rural students in relation to EMIS. Some classes have every educational opportunity: information technology, a modern school system, a high level of facilities, and wealthy living. On the other hand, some students have no access to these facilities due to low SES.

Figure caption: Conceptual perspectives of advanced literature and in-depth themes representation.
--- FIGURE 4 The exploitation and dominance of the people in social classes (reproduced with permission from Krieger, 2001).
Similarly, the lens of social class theory deductively explains the importance of EMIS for rural-background students in China (Ma and Zhu, 2022). In light of this, Wright's theory of the relative power of social classes is more powerful for overcoming socioeconomic inequalities among societies (Wright, 2015; see Figure 5).

For an overview of existing techniques, the study is directed at addressing SEI and EMIS with the help of social class theory and its subtheory of class exploitation and domination. It is to be noted that the importance of SEI is not ignored in the domain of EMIS, because family wealth, economic status, and their relationship with students' education are challenging topics in Chinese academic research. This is a common problem encountered when using such a qualitative method to explore an in-depth understanding of SEI and its interconnected challenges for Chinese rural students at the middle school level (Chen et al., 2021). China is working to overcome such social epidemiological challenges for urban students, but rural students have been ignored, and this in-depth, subjective study explored the challenges and solutions from the emic and etic perspectives of the students. Second, we examine the existing Chinese household wealth database and highlight current research limitations, such as the absence of high-spatial-resolution and economically representative family wealth data that may aid the new domain of EMIS research in China. Third, using a qualitative methodology, we explore various approaches (e.g., emic and etic perspectives) for constructing appropriate SEI proxies throughout the research, which global educationists have also highlighted for solving this type of phenomenological challenge among rural-background students. Fourth, concerning SEI and EMIS research in China, we address the advantages and disadvantages of current new SEI proxies for assessing SES, household wealth, and access to education for all, and their relation to Karl Marx's social class theory. Fifth, in relation to SEI proxy development in China, we summarize the challenges to data availability and quality, including ethical and privacy concerns, and recommend that policymakers improve the quality and availability of EMIS while removing SEI in the provision of IS at the school level (Xiong et al., 2022). Finally, we wrap up our research and provide recommendations for future research into new SEI proxies to aid EMIS investigations in China and overcome student socioeconomic disparities.

--- Research design

The qualitative research process was ethnographic. First, we reviewed the Chinese and worldwide literature to construct SEI, SES, EMIS, IS, and IT and then systematically specified the studies emphasizing social class, domination, and exploitation related to these specific themes. Consequently, a theoretical framework was developed to position the debate on inequalities and their relationship with rural students in China. The population and sample of students were taken from one Chinese province. In this particular study, we selected rural-area schools and their students in Hainan Province (see Figure 6). In this research, an interpretive viewpoint was applied.

--- FIGURE 5 Relative power of social classes theory and socioeconomic inequality (reproduced with permission from Wright, 2015).
A subjectivist presumption, which generates reality within a social context, is the foundation of the interpretive viewpoint (Bell and Bryman, 2005). The research used a constructivist methodology, which provides the epistemological underpinning of the method (Davis and Sumara, 2002). Similarly, Guba (1989) distinguishes between conventional and constructivist belief systems, in which socially created realities are based on the society's dominant belief system and are viewed and understood differently by many people. Socially produced reality is not regulated by natural laws, yet it is a truth. When an individual's view is based on a single fact, it is not acceptable; rather, a consensus of persons is acceptable under a constructivist approach that emphasizes truth. Constructivist views rely on a monistic subjectivist epistemology that postulates questions, with humans posing these questions about the social environment and then discovering the final answer in their own time (Guba and Lincoln, 1989). As a result, dialectical repetition employs a hermeneutic technique that is considered constructivist. Similarly, analysis and criticism, reiteration, reanalysis, and recritique are pragmatic criteria for reaching logical knowledge and building strong thinking skills.

The research was based on the author's subjective interpretation of earlier ideas on the link between Chinese rural middle school students' socioeconomic disparities, education, and EMIS. The laddering methodology, further explored in the data-gathering procedure, was used to reduce bias in the data. According to Bell and Bryman (2005), the interpretive viewpoint is commonly employed in qualitative research. According to Eriksson and Kovalainen (2015), it is conceptually reliant on explanation. The quality of interpretive research rests on human sensibility and complexity rather than preset categories and variables (Eriksson and Kovalainen, 2015). In this study, a qualitative ethnographic technique was applied. Ethnographic study is a way for researchers to gain a more profound knowledge of a field by immersing themselves in it to build in-depth information and analyze people's culture and social environment. Its goal is to "make the unfamiliar familiar" through "making sense of public and private, overt and obscure cultural meanings" (Grech, 2017).

A sample of 10 male (middle school) and 10 female (high school) students was chosen. The students from the rural region ranged from 11 to 14 years old, and both male and female students took part in the research. The respondents had similar features such as age, class, rural school system, and government educational system, and they were chosen purposively from among school students (the population). Participants were from rural backgrounds. It should be highlighted that all male and female students were from low socioeconomic status (SES) backgrounds, except one student who was from a high-SES background; the participant described this information during the interview. In-depth and unstructured interviews were used to gather information from participants. Voices, knowledge, and perspectives are prioritized in this strategy (Smith, 1999). During the interviews, a laddering strategy was also applied.

FIGURE 6 The qualitative research data for the illiteracy rate in China (reproduced with permission from Hannum et al., 2021).
It is one of the psychological interview strategies that is quite useful for field research. The laddering approach has the advantage of allowing researchers to explore the participants' behavior: "This strategy entails asking the interviewee follow-up questions based on their prior responses to acquire a better understanding of the respondents' perspectives" (Veludo-de-Oliveira et al., 2006; see Figure 7). Interviews lasted 30 minutes and were conducted with all of the participants. The interviews were done in Chinese to understand the respondents' opinions fully and were then translated into English for reporting in this research. The school heads and the students' parents gave their informed agreement, and then the students were asked to agree to an interview. In this particular study, students from the following rural schools in Hainan Province were interviewed: "Tengqiao Middle School in Sanya Haitang District, National Middle School in Jiyang District, Meishan Middle School, Meishan Primary School, Baogang Middle School, Yacheng Middle School, and Nanbin Primary School in Yancheng District." During this period, the participants were advised that their real identities would not be revealed in the reports and that pseudonyms would be used instead.

The thematic analysis approach helped assess outcomes in earlier investigations. This strategy can provide the reader with detailed, contextual, and culturally sensitive facts. The thematic analysis technique was employed for data analysis: discovering, interpreting, and reporting patterns or themes within the acquired data. Narratives were used to describe the outcomes, which helped to clarify them (Braun and Clarke, 2006). One of the benefits of thematic analysis is that it allows researchers to identify patterns in the respondents' statements via a flexible, inductive, and ongoing process of connecting with narratives. As demonstrated in the thematic data analysis funnel system (Figure 8), all content is categorized into fluid categories of deductive and inductive themes, subthemes, and coding.

--- Data analysis and findings

The results, which portray the genuine voices of participants (all names and identities are classified for anonymity and confidentiality), are shown below. The theoretical link between rural students' SEI viewpoint on EMIS and its relationship with social class, dominance, and exploitation is described in this data analysis section (Figure 9). The participants replied that income, education, and employment are typically used to establish traditional metrics of SES. Similarly, some participants claimed that household SES had influenced their educational achievements during school; for example, they could not use information technology because they had no exposure to it, while other schools in the urban areas have information technology. Furthermore, some respondents described family wealth as essential to getting a good education, noting that most schools in the urban areas have access to education and information technology. One verbatim response is written below:

"I wish my father would be a milliner, and I study in the high-level school system" (EMISS-2).

FIGURE 7 The laddering approach to explore the participants' behavior (reproduced with permission from Veludo-de-Oliveira et al., 2006).
FIGURE 8 The thematic data analysis funnel system.
Furthermore, some participants stated that their middle school is good and that they have information technology without discrimination based on SES. Family affluence may affect the individual but not the school, and the Chinese government provides almost equal educational opportunities.

"I am happy with my parents' socioeconomic status, and there is no difference in the school regarding getting an education" (EMISS-10).

Some participants suggested that the impact of household wealth on a student's educational exposure is greater. This might be readily explained by household income, which represents a family's economic wellbeing and is positively connected with educational exposure. The distribution of economic values showed substantial divergence among the students in the school. Family wealth is more unequally distributed between urban and rural China. In this regard, one student said that household income might be better for getting an education, and SES could create disparities between the education of rural and urban students. The respondent's words are narrated below:

"I noted that my classmate has high parents' SES, and his teacher taught him after his school time. I wish that I should learn many languages, such as the English language and the American English Language accent, but I could not" (EMISS-20).

According to some participants, there is no strong connection between parents' high SES and access to education. Teachers are from the same family income level, and they have not created disparities among students in teaching their lessons. On the other hand, other participants replied that household wealth or income is a more important SES indicator for educational exposure and getting an education. Given its greater stability and more prominent effect on living standards over time, family wealth may reflect a higher degree of education.

"I know there is no educational discrimination because of parents' SES in middle school" (MISS-15).

Because of the issue of socioeconomic disparity in educational achievement, this concept represents a novel and mostly unexplored area of research in the academic literature. In light of this, Jiang found that SES has influenced Chinese students' educational achievements (Jiang, 2021). Pesando (2021) claimed that social consequences and family wealth are significant predictors of getting an education. The current study found that SES has a relationship with educational achievement; SES and family prosperity may have an impact on educational achievement. Furthermore, household circumstances and SES influenced students' educational achievements at the school level (Figure 10).

The participants revealed that shifts in income adequately reflect living standards and also influence the educational achievements of middle school students. Some students agreed that socioeconomic inequalities exist among students, and some students have a good understanding of EMIS compared to low-income students. High-socioeconomic-status students can buy computers, laptops, and mobile phones to get an online education. Household wealth inequalities reflect structural and chronic poverty, which further stops students from learning in the online education system. Similarly, household wealth inequality takes the form of transient poverty, which is less volatile and more reliable. Wealth, rather than income, is a better predictor of EMIS and information technology.
"I wish that I have a fast computer and internet connection for the learning of education" (EMISS-12). However, the participants were from industrialized family backgrounds, and they described the socioeconomic inequalities that exist in some school systems. The city school system was different from online information-sharing system, and the city school gave all the facilities to their students whenever we were studying online. Now, we shifted to rural area schools, and there is no such reasonable EMIS for students learning. Theme: The socioeconomic status and education. --- FIGURE 10 Theme: Socioeconomic inequalities and educational information management system. "I believe that the city school system was sound good, and its EMIS performance was better than the rural school system. My parents shifted village home from the urban industrial areas, and now I feel a difference regarding EMIS and information system in this rural area school" (EMIS-18). The participants agreed that some of our classmates dropped out of school because of their parents' low household income, which is a dangerous sign for the overall personal career of the students. Income drops the family's living standards, and their students do not remain relatively consistent with getting an education. The participants conditioned here and said that if household income is more robust, it is a good sign for the students' once future or personal careers. The true words of one participant were quoted. "I agree that socio-economic inequality can stop some from getting an education in their career" (EMIS-9). The relationship between EMIS and family salary is interrelated with the instructive career. Members advance cited that family salary makes a great pointer to one's mental capacity since, without the pressure of money, a person can examine and teach very well. Within the final month, a few students' guardians were challenged within the Hainan territory for their children's data innovation get to and web of things. These guardians were from the moo SEI, and they might not give EMIS contraptions to their understudies. Moreover, a few members have talked about how long-term fabric amassing is superior to short-term wage since long-term fabric accumulation sustains the children's instruction, and they can purchase EMIS contraptions for way better learning results within the school. The moo SES of understudies certainly uncovered the non-appearance of different data technologyrelated contraptions. "I have no information technology-related tools for accessing the EMIS system. The school is closed, and our educational activities are not sustained due to no computer system and mobile phones for the online management information system" (EMISS-7). Wang et al. (2021) revealed that ICT significantly influenced the economy and society in recent decades. Although ICT is critical for promoting SED, the inequality of the digital divide is present. Furthermore, Zehavi et al. (2005) found that many social inequities and digital technology interactions are different in the structure of society, which influence educational culture among students. Stewart (2021) argued that academic institutions, academics, administrators, educators, and students have thoroughly appreciated the ERT strategy. Such implementation is not fruitful due to socioeconomic disparities in the educational institution. The results of the current study were in link with the previous literature. 
Similarly, SEI exists among the students, and high-income students have a good understanding of EMIS compared to low-income students. Likewise, high-socioeconomic-status students have the capacity to buy computers, laptops, and mobile phones to get an online education (Figure 11).

Theme: Conflict theory of extension in the form of exploitation and domination.

Participants replied that EMIS-related exploitation and domination are present to some extent. Similarly, some participants claimed that low-socioeconomic-status students are exploited with respect to online educational learning: they could not use information technology because they have no exposure to it, while people of high economic status have information technology access in their homes. Furthermore, some respondents described EMIS as essential for getting a good education, and most students have access to information technology. The authentic verbatim is written below:

"I wish I would have information technology gadgets at my home, and I would not be exploited in my class learning" (EMISS-16).

The participants answered that they have no access to EMIS and that their education is exploited due to a lack of access to information technology, which is one sort of discrimination based on low SES. Family exploitation may affect a student's education at the school level, and the Chinese government should give information technology tools to every student to protect their academic life.

"I do not feel happy with my educational grades because last year I did not attend classes due to no access to information technology or EMIS" (EMISS-6).

The study participants suggested that exploitation in the school influences students' exposure to EMIS. Students from dominant, high-household-income families represent themselves in the school, and their EMIS exposure is higher than that of low-household-income students. For the students in the school, the distribution of economic values showed a significant disparity. The data analysis shows how complex the exchange between the individual, his social context, and the educational system really is. It also reveals the persistence of the meritocratic ideal of individual agency in teacher, parent, and even student discourses. However, at the same time, the problematization of minority and working-class habitus and the culturalization of "educational failure" appear to pull the plug on this argument, because it presupposes that an individual student is (strongly) determined by his or her home environment. From that viewpoint, pupils', parents', and even teachers' agency appears to play a minor part. In this discussion, we begin by expanding on some limitations and strengths of our paper, while in a second section the broader social implications of the findings are discussed: the urban and rural Chinese education systems are unequal, and family SES is unequally distributed. This creates dominance among students, and low-SES students have no access to EMIS at home during online classes. The true words of the participant are narrated below:

"I noted that my classmate has a dominant level in the class, and she has access to the current EMIS in the home.
I wish that I should learn about EMIS in the school" (EMISS-11).

Moreover, the participants revealed that the dominant attitude of some students is due to high SES in the class, as they already have access to EMIS. EMIS and parents' high SES have a strong connection with students' educational achievements. Teachers are from the same family income level, and sometimes they exploit students of low SES in the classroom. On the other hand, participants said that household wealth or income is more important for EMIS. The compensatory potential of the school and its staff is assumed to be exceptionally modest or even missing. However, the research shows that a more comprehensive approach centering on consistency between the home and school environments can make a difference, and the discussions made clear that teachers feel like mere pawns in an educational system that is strongly influenced by sociodemographic changes in the broader society. On the one hand, teachers become demotivated or experience feelings of futility, while on the other hand, teachers' ideas about the low teachability of ethnic minority students are reflected in pupils' feelings of futility, demotivation, and even mental withdrawal from education.

"I feel that high SES is more in the female students, and teachers are not doing discrimination based on a gender level" (EMISS-15).

Vassilakopoulou and Hustad (2021) described a lack of technology-enabled information in terms of SEI; digital-divide inequality exists across different socioeconomic groups. The current study found that access to IS differs among rural schools. Similarly, the study found that, to remove SEI based on information technology, systems should also be accessible to students in rural areas. Additionally, exploitation in the school influences EMIS. Students from dominant, high-household-income families represent themselves in the school. The distribution of economic values showed a significant disparity related to EMIS. The factors mentioned above create dominance among students, and low-SES students were exploited with respect to home-based EMIS and online classes. The theory of neo-Marxism suggests that domination controls the lives and activities of others (Muntaner et al., 2002; Muntaner and Lynch, 2020). In this regard, Wright (2015) described gaining economic benefits from others as a form of exploitation in society. Our results conclude that SES brings exploitation and domination among students. For instance, the dominant attitude of some students is due to high SES, and these students have access to EMIS.

--- Conclusion

The study aimed to explore SEI regarding EMIS among rural-area students at the middle school level. Our research shows that SEI is present with regard to EMIS, and that household wealth and income bring unequal educational learning across different schools in China. Moreover, family wealth and SES also affected students' educational learning at school and at home. Family wealth and SES-based exploitation are present in EMIS among male and female students. Household wealth is significant for EMIS, and it is recommended that future researchers conduct a quantitative study to measure the exact facts and figures for EMIS. The statistical outcomes of SEI research may point to a spatial solution that could considerably mitigate this problem.
However, only a few primary studies have been undertaken on SEI and EMIS in China, and this research is limited to schools in Hainan's districts and is not generalizable to all Chinese schools. Legal, policy, and information technology-based measures should be arranged for male and female students, along with data quality and availability for low-SES students, to overcome exploitation among rural school students. Significant in this process will be bringing the differences present in society into the classroom by using social diversification of the staff and of the content of the educational program, and by engaging and raising the accountability of all people, communities, and educational organizations involved. Research can offer vital insights for all actors involved.

--- Practical recommendations

1. Information technology-based data quality and availability for low-SES students are mandatory at the middle school level.
2. It is recommended that exploitation could be overcome among rural students if the government provides equal opportunities for access to EMIS.
3. This study does not generalize to the whole population of China because it is limited to a few schools, not all schools in China.
4. A correlational analysis could be conducted between SEI and EMIS for rural Chinese schools.

--- Data availability statement

The original contributions presented in this study are included in the article/supplementary material; further inquiries can be directed to the corresponding author.

--- Ethics statement

Ethical review and approval were not required for the study on human participants in accordance with the local legislation and institutional requirements. Written informed consent for participation was not required for this study in accordance with the national legislation and the institutional requirements.

--- Author contributions

The author confirms being the sole contributor of this work and has approved it for publication.

--- Conflict of interest

The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

--- Publisher's note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors, and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
There is currently ample systematic literature on socioeconomic inequalities across different disciplines. However, this study relates socioeconomic inequality (SEI) to rural students' educational management information systems (EMIS) in different schools in China. The dynamic force of information technology cannot be constrained in the modern techno-based world. The study was qualitative and ethnographic. Data were collected through an interview guide and analyzed with thematic analysis. Ten male and ten female students were interviewed, based on the data saturation point. A purposive sampling technique was used for the selection of rural schools and students. This study summarizes the findings and brings together in-depth emic and etic findings based on new Marxist conflict theory and the power lens of exploitation and domination. The study found that SEI creates disparities in EMIS. Household income inequality has influenced the educational achievements of rural students. Gender-based SEI was not present among students. Family wealth and SES-based exploitation are present regarding EMIS among male and female students. Household wealth is significant for EMIS. The study puts forward a recommendation to policymakers that exploitation could be overcome among students if the government provides equal opportunities for access to EMIS.
Introduction

Given the globalization of health professions education (Schwarz 2001; Harden 2006; Norcini and Banda 2011), health professions educators need to pay attention to cultural differences and values, and the events that shape them. If people feel it is inappropriate to bring their identity or ideological background into educational environments, students may remain "physically and socially within...a culture that is foreign to, and mostly unknown, to the teacher" (Hofstede 1984), and teachers' cultural assumptions will prevail. The term 'cultural hegemony' describes this power of a dominant class to present one authoritative definition of reality or view of culture in such a way that other classes accept it as a common understanding (Borg et al. 2002; Gramsci 1995). Thus, an implicit consensus emerges that this is the only sensible way of seeing the world. Groups who present alternative views risk being marginalized, and learning may suffer (Arce 1998; Monrouxe 2010; Hawthorne et al. 2004). Therefore, leaders of cross-cultural health professions education need to avoid inadvertently encouraging learners to leave their cultural background at the classroom doorstep (Beagan 2000). The term cross-culturalism refers to exchanges beyond the boundaries of individual nations or cultural groups (Betancourt 2003), as opposed to multiculturalism, which deals with cultural diversity within a particular nation or social group (Burgess and Burgess 2005). This research applies the concept of cross-culturalism to faculty learning and developing a leadership community of practice (Burdick 2014).

This research is conceptually orientated towards the critical theory research paradigm (Bergman et al. 2012) and the concept of 'critical consciousness.' Kumagai and Lypson argued that cultural education in medicine must go beyond traditional notions of 'competence' (Kumagai and Lypson 2009) to reflective awareness of differences in power and privilege in society, and a commitment to social justice (Freire 1993). To avoid tacitly imposing cultural assumptions, faculty need to facilitate diverse viewpoints. The ability to do so is most important in online education due to its lack of nonverbal communication and emphasis on written learning (De Jong et al. 2013). Discourse theories also fall within the scope of critical theory. Stemming from the parent disciplines of linguistics, sociology, and psychology, this family of theories holds that language and other symbols and behaviors express identity, culture, and power (Hajer 1997). Those symbols and signs reflect the order of society at a micro-level, which in turn reflects social structure and action at a macro-level (Fairclough 1995; Alexander 1987). Discourse theories provide heuristics, which can be used to explore relationships between power, privilege, and identity. Our research question was: How do participants' sociopolitical backgrounds enter online discussions focused on health professions education and leadership to generate critical consciousness? We selected the Foundation for the Advancement of International Medical Education & Research® (FAIMER®) as the setting because its purpose is to develop "international health professions educators who have the potential to play a key role in improving health professions education at their home institutions and in their regions, and ultimately help to improve world health" (FAIMER, September 24, 2013).
This group of individuals, participating in communal activity and continuously creating a shared identity by engaging in and contributing to the practices of their communities (Norcini et al. 2005), forms a community of cross-cultural practice (Burdick et al. 2010). --- Methods --- Educational setting and participants The FAIMER Institute (Burdick et al. 2010; FAIMER September 24, 2013; Norcini et al. 2005) provides a 2-year fellowship, which each year develops a cohort of 16 mid-career health professions faculty from Latin America, Africa, the Middle East, and Asia to act as educational scholars and agents of change within a global community of health professionals. There are 3-week and 2-week residential sessions, 1 year apart, in Philadelphia, and two 11-month online discussions conducted via a list serve. Both formal and informal meetings during the residential sessions foster cross-cultural understanding by encouraging fellows to share information about their ethnicity, religion, political influences, food, dress, and language. Respect for differences is supported by structured 'Learning Circle' activities (Noble et al. 2005; Noble and Henderson 2008) and sessions covering a range of topics related to education and leadership. Internet connectivity is problematic in remote areas, so a list serve is used for online discussions. These discussions had two major elements in 2011-2012, when this study was done. First, Fellows reported progress on educational innovation projects they had implemented at their home institutions with the guidance of faculty project advisers. Second, teams of 5-6 current Fellows selected topics, and then collaboratively designed and implemented six 3-week e-learning modules to deepen their health professions education and leadership expertise. Faculty e-learning advisers, mainly from the U.S., and an alumni faculty adviser facilitated the online discussions, whose participants included 32 first and second year Fellows and any of the 150 program alumni who wished to take part. The list serve also provided an informal resource and social support network for Fellows (e.g., congratulations for professional or personal milestones; condolences on personal or national tragedies; holiday greetings). To help those who were not native English speakers, had limited time, or were using mobile devices with limited editing functions, Fellows were encouraged to post short comments and not be overly concerned with English grammar. Fellows were required to post "at least one substantive comment that advances the topic" during the e-learning modules, but were not given any specific guidelines to deliberately post cross-cultural comments. --- Methodology It has been argued that qualitative research is of good quality when epistemology, methodology, and method are internally consistent (Carter and Little 2007). Located within the critical theory paradigm (Lincoln et al. 2011), this research had a subjectivist epistemology. Discourse theory holds that our words are never neutral; each has a historical, political and social context (Fiske 1994). Researchers use their 'critical reflexivity' to explore the relative value of different subject positions. Critical discourse analysis methodology allows them to explore dialectical tensions within participants' written language. We now describe the methods we used to do that.
--- Critical reflexivity ZZ, a FAIMER Institute Fellow from Pakistan, was educated as a physician in Pakistan, trained as an Internist in the United States, returned to academic medicine in Pakistan, and 10 years later immigrated to the United States. PM is a U.S. faculty member of the FAIMER Institute with extensive experience of academic leadership development involving gender and minority participants (Morahan et al. 2010). DV, RN, and TD (from the Netherlands, Canada, and U.K.) are extensively involved with cross-cultural education and one (TD) has published on critical discourse (Dornan 2014). All authors had extensive experience of online education. ZZ's cross-cultural experience and understanding of participants' situations inevitably influenced her interpretation of posts to the list serve. In order for this background to serve as a resource to the project, her co-researchers, including PM who is one of the residential FAIMER faculty advisor, joined in an explicit, conscious process of critical reflexivity, reading data, joining periodic Skype calls, commenting on documents, emailing reflexive comments to one another, and helping each other identify their preconceptions. PM contributed the perspective a of faculty advisor involved with the list-serve. --- Identification of text for analysis ZZ compiled all posts to the list serve between August 1, 2011 and August 1, 2012 related to the topics of the e-learning modules, social posts, information requests, and spontaneously generated discussions (but not congratulatory posts, as they consisted of single words or short phrases like "Congratulations"; "Well done") into a 1286-page document. She used her reflexive understanding of the posts to identify those which referred to sociopolitical issues, including religion and gender. Guided by this initial review, the authors compiled a list of keywords and used them to text-search the document to identify any text missed in the first pass. The words were: Terror(ism), Liberal(ism), Conservat (ism), Religion, Islam, Hinduism, Buddhism, Christian, Eid, Christmas, New Year, Chinese New Year, Diwali, Basant, Easter, Carnivale, Lent, Passover, Female, Women, Democra(cy), Dictator(ship), Multicultural(ism), and Diversity. ZZ ensured that entire posts, including associated back-and-forth dialogue between participants, were included, checking with another author (PM) who had actively participated in the discussions. The posts containing these concepts were compiled into an 11-page transcript. --- Methodological framework The content analysis drew insights and analytical tools from critical discourse methodology, which is consistent with the critical paradigm in which this research was conducted. Discourse theory holds that our words are never neutral: each has a historical, political and social context (Fiske 1994). Qualitative analysis can identify connections between texts and social and cultural structures and processes (Fairclough 1995). Gee specified features of the structure and content of text, which identify how social structures and processes influence social action (Gee 2014) and said they could be combined with a general thematic analysis not rooted in any particular linguistic methodology (Gee 2004). --- Analytical procedures The researchers used analytical tools developed by Gee (2014) to explore how language built identities, relationships, and the significance of events. 
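To make the text-identification step described above more concrete, the following is a minimal, hypothetical sketch of how a keyword pass over a compiled corpus of posts could be run. The file handling, the reduced keyword list, and the simple prefix-style patterns are illustrative assumptions, not the authors' actual tooling, and only a subset of the keywords named above is shown.

```python
import re

# Illustrative subset of the keyword list described above; the study's full list
# also covered religious festivals, gender terms, and further political systems.
KEYWORD_PATTERNS = [
    r"terror(ism|ist)?",
    r"liberal(ism)?",
    r"conservat(ive|ism)?",
    r"religio(n|us)",
    r"islam(ic)?",
    r"democra(cy|tic)",
    r"dictator(ship)?",
    r"multicultural(ism)?",
    r"diversity",
]

def find_keyword_posts(posts):
    """Return the posts containing any keyword, with the terms that matched.

    `posts` is assumed to be a list of strings, one list-serve post per element.
    """
    compiled = [re.compile(p, flags=re.IGNORECASE) for p in KEYWORD_PATTERNS]
    hits = []
    for post in posts:
        matched = sorted({m.group(0).lower() for rx in compiled for m in rx.finditer(post)})
        if matched:
            hits.append({"post": post, "matched_terms": matched})
    return hits

if __name__ == "__main__":
    sample_posts = [
        "Congratulations on your award!",
        "The debate about democracy versus dictatorship continues in my country.",
    ]
    for hit in find_keyword_posts(sample_posts):
        print(hit["matched_terms"], "->", hit["post"][:60])
```

In the study itself, any post flagged in this way was then read in full, together with the surrounding back-and-forth dialogue, before being added to the transcript for analysis.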
The researchers all read the 11-page transcript, searching systematically for the 'situated,' or contextual, meaning of words, identifying typical stories that invited readers or listeners to enter into the world of a writer, looking beyond what contributors were saying to identify what their discourse was 'doing,' and exploring how metaphors were used. They worked independently of one another, highlighting material of interest and annotating it with marginal comments. They exchanged and discussed comments to identify and explore areas of agreement and disagreement. ZZ kept notes about the discussions, archived the comments into a single dataset, and maintained an audit trail back to the original data. She then wrote the narrative of results, proceeding from description to interpretation to explanation while constantly comparing these explanations to the original textual materials. The other authors contributed their reflexive reactions to the evolving narrative of results. --- Results Although FAIMER's mission includes fostering cross-cultural education, less than 1% of the text (11 pages) was explicitly sociopolitical. Participants from 16 countries in Africa and the Middle East (Ethiopia, Nigeria, Kenya, Cameroon, Egypt and Saudi Arabia), Latin America (Mexico, Colombia, Chile), Asia (India, Sri Lanka, Pakistan, Bangladesh, China, and Indonesia), and the United States contributed to the sociopolitical discussions. They contributed posts, typically in response to events in their home countries, which did not necessarily relate to the topics of the formal discussions. In other words, the geo-political contributions appeared spontaneously, without a specific request by faculty facilitators. These conversations soon petered out for several reasons. There was limited back-and-forth dialogue between an initiating participant and other participants, which limited the depth of the discussions. Posts were greeted not with positive or negative responses, but with silence, and faculty did not ask for more information or build on what had been said. Within the limited discussions that did take place, we identified four strands (parts of conversation within an email thread). Participants discussed experiences related to political events in their countries (political strand); highlighted gender issues (gender-related strand); discussed religion in their home countries (religion-related strand); and offered glimpses into the impact of cultural factors on their lives (general cultural strand). The following paragraphs elaborate those four topics, and Table 1 provides examples of specific posts. --- Political strand Political text concerned two main topics: terrorist attacks in India and Pakistan, and the Arab Spring in Egypt. There were two additional posts (from Egypt and Saudi Arabia) about local governments fostering progress and a view from the U.S. on the value of democracy. Arab Spring Egyptian woman chronicling her lived experiences through the Egyptian revolution using the metaphor of childbirth South American participant providing global context of events and encouragement using the metaphor of breast feeding "When I gave birth to my kids, I went through a normal delivery, and refused to take pain killers...I wanted to experience labor pain, which is unbearable; yet I enjoyed every single moment of it...with all those intermingled feelings of suffering, curiosity, serenity, fear, happiness, just waiting for the moment of listening to the first cry".
She metaphorically then linked child birth to the electoral process: "Today, while I was impatiently waiting for announcing Egypt's first civil president, the same feeling was projected on me: Egypt was giving birth...very painful...laborious..." "Well, I think that movement to change the model of government in your country is IMPORTANT FOR ALL OF US (INCLUDING LATINO AMERICA) because that kind of change has effects in all middle east country (at same manner that the movement to fall the dictator), effects in economics fields around the world, effects in the way to reorganize and how to obtain a common view of your country where are different points of view about it (that is a common situation in a lot of countries around the world)...So the problem is for all Egyptians not only for the president and his government and if the homework is well done this condition could be a wave more bigger than the last and I hope that it be great. All my prayers for you and your country in this new endeavour. And the image about the pain when the women had given birth could be compensated with the image when the newborn goes to her mamas to take breastfeeding (what a lot of happiness!!! between both)..." Participant from U.S reflecting on western roles of men and women "After I selected the (4) employees, I realized the trouble of having (4) females who are trying to prove themselves in a very masculine culture. Competition was as evident as the sun from first day...and it was hell. Complaints everyday...unhealthy climate, poor relationships, poor communication...the [good of the] unit was the last thing they ever thought of considerably". "Western culture has evolved more and more into a self-directed, self-centered, individualistic culture of science, savage capitalism and alpha male/alpha female thinking." Discussing differences in east-west health care practices-CDA tool: activities conforming to social norms or routinization Participant discussing examining women "Exposure of body parts is not allowed or only minimal exposure is allowed (e.g. in UK we were trained to examine the patient with tops off so that both breasts, chest and axillae could be properly examined. In [my country], patients will only allow the affected breast to be examined and despite request will not allow the contralateral breast to be examined. Men cannot do gynecological examination on women even in an emergency." "Asking to take off clothes and wear a gown may be considered a norm in one society but a totally unacceptable behavior (or request by a doctor-even with the best of intentions) in another society of culture. We do come across such incidents in our conservative societies and this does conflict with what we were taught (and practiced) in the West." Religionrelated discourse Participant view on impact of religion in guiding professional outlook Although the "charter of professionalism" started with Hippocratic oath, it is right that most of the religions have their own versions. As has been mentioned there are Hindu religion guidelines on ethics (selfless dedication to preservation of human life) as well as Chinese (skill with benevolence, the persons who undertake this work should bear the idea of serving the people of the community/world). Although Quran is taken as the main guidance book for all ethics in Islam, still the first written book on medical ethics was way back in ninth century when Ishaq bin Ali-Rahawi wrote the book "Adaab Al-Tabib" (Conduct of a Physician) (854-931 AD). 
Al Razi (Rhazes) is also well-known in the world of ethics as far as muslim ethic are concerned. Maimonedes is a well known name in Jewish ethics. Percival's "Medical Ethics" was published in 1794 and AMA code of medical ethics in 1847, and so on. So most of the societies and religions have their contribution to the field of ethics (and for us professionalism as well). It is nice to hear so many different views on how professionalism is perceived in different corners of the world. Although, overall the main principles of do no harm, do good, justice, altruism and patient autonomy are part of all cultures however some subtle differences still remain (some serious). Excerpts are part of a back-and-forth dialogue. --- Terrorism As shown in Table 1, a participant from India broke into the on-line discussion by announcing a terrorist bomb attack. A participant replied empathically that such events are part of normal life in Pakistan. Then participants who had experienced bomb blasts or other forms of terrorism due to the Tamil guerrilla war and drug-related violence in South America joined the discussion. As participants contributed their experiences, geographic borders became irrelevant. Participants wrote of terrorism as anti-social behavior; a life of living with terror; lack of safety; vigilance; not allowing oneself to be terrorized; life going on despite bomb blasts; hopes of terrorism ending, and peace returning. The text in Table 1 shows that participants did not comment on socioeconomic and political factors contributing to terrorism and relevant to healthcare. Terrorists were characterized as radicalized zealots who do not deserve sympathy or understanding: 'Thankfully, except for the person who was carrying the bomb, no one else was injured.' The net effect of this conversation was to create solidarity between participants who were potential victims of terrorism and emphasize the "otherness" of terrorists, but it did not relate the terrorism discussion to medical education. --- The Arab spring In a second part of this strand, vivid metaphors of childbirth and breastfeeding described the local political environment during the Arab Spring (Table 1). The metaphors gave readers a unique window into the life of someone they knew, who was now caught up in an uprising that held the world's attention. A South American participant picked up on the metaphor, expressed support, and offered opinions about social change. Later, a participant from the Middle East wrote that "Boundaries are boundaries-they are there to define the environment and mobilizing them is not always a choice" and asked "is it always feasible especially if it requires moving boundaries and making it safe?" A U.S. faculty participant reminded participants of a debate about democracy versus dictatorship during another module but back-and-forth dialogue did not result. The conversation explored differences in Fellows' political environments but did not analyze their relevance to medical education. --- Gender-related strand Table 1 contains example text from a conversation about gender issues in treating women patients, which began during an e-learning module on Professionalism. Male and female participants participated in a candid and uninhibited way, describing social norms in their different countries.
A participant from Bangladesh wrote that "Shaking hands is culturally and religiously governed, male doctors usually don't shake hands with women patients, they exchange salam (Assalamu Alaikum-peace be upon you!). But it is not mandatory. Our present [female] Prime Minister Sheikh Hasina shake hands with all, but previous [female Prime Minister] Begum Khaleda Zia shakes hand only with ladies! So there is difference in same culture!" Participants from many countries discussed cultural restrictions imposed by male leaders to prevent women from receiving adequate medical care. Participants from India, Pakistan, Saudi Arabia and Egypt shared differences in physical examination of women patients (Table 1): "Exposure of body parts is not allowed or only minimal exposure is allowed (e.g. in UK we were trained to examine the patient with tops off so that both breasts, chest and axillae could be properly examined. In [my country], patient will only allow the affected breast to be examined and despite request will not allow the contralateral breast to be examined. Men cannot do gynecological examination on women even in an emergency." Another participant wrote, "Asking to take off clothes and wear a gown may be considered a norm in one society but a totally unacceptable behavior (or request by a doctor-even with the best of intentions) in another society of culture. We do come across such incidents in our conservative societies and this does conflict with what we were taught (and practiced) in the West." In other posts, one participant offered a view about women physicians saying: "In India specially, the attire is important-at the hospital such as ours the female residents cannot come in skirts etc.-not as a rule but as an unwritten norm." Women's rights were touched on briefly: "USAID is also funding many projects on gender equality in Pakistan and a lot of work is being done by Pakistani females in this regard. A great example of how they are succeeding in their mission is that of one Pakistani Film producer, Sharmeen, who received an Oscar award for her film 'Saving Face' a few days ago. This film is regarding women who were disfigured because someone threw acid on their faces. Sharmeen brought this to the attention of the world through her film and this film also earned her an Oscar award, first time any Pakistani has won this award. Yesterday Pakistani parliament passed a law that will now lead to fine of one million rupees and life sentence or death sentence to any one who would carry out such a brutal act." Other posts touched on women trying to make their mark in a'masculine' work environment. Taken as a whole, the discussions identified and compared social norms in different cultures, exploring a spectrum of stances, from conservatism to liberal feminism. Explicit links were made to medical education but the relevance of the discussion was often left implicit. --- Religion-related strand Some participants wrote of the influence of religion-the Muslim, Hindu, Buddhist or Taoist faiths-on their professional identities. 'God' and 'Allah' were mentioned on several occasions, either in social posts or in the Professionalism e-learning module. The Muslim faith was discussed more frequently than other religions; participants emphasized the significance of moderation and how the Islam religion preaches "never be radical or extreme." One participant described Buddhism as preaching "ethical behavior which is compassion, loving kindness, the giving up from self-centeredness and greed." 
Another described the Hindu oath from fifteenth century BCE in the context of medicine: "the basic expectation from a physician is 'selfless dedication to preservation of human life', sometimes even at the cost of one's life!" A participant from China discussed how he related with the ancient Chinese mantra of "8 Chinese characters, and that it means that, 'Medical work is a kind of skill with benevolence, the persons who undertake this work should bear the idea of serving the people of the community/world in their mind'. This has been recognized as the standard for the health care workers in ancient China, and is still mentioned today." The pattern noted in the previous strand, of exchanging experiences and norming, was again apparent, but in-depth exploration of the relevance of those cross-cultural issues to medical education was lacking. --- General cultural strand Posts during the Professionalism e-learning module addressed the topic of primary socialization. One participant posted about "the process of being raised by the origin family, since, we see and understand the world by what they do and convey to us and share concerning their values. All those values they have are, dialectically fruit of the sociocultural and political system." Another participant used capital letters to emphasize the significance of the Asian culture of respect: "the deep rooted culturally driven perception of RESPECT and the socially rejected CRITICISM against hierarchy, where feedback could be perceived as disrespect." Participants shared the "insider" view of culture in their countries, discussing what an "outsider" would find strange if they did not share the knowledge and assumptions that render communications and actions natural and taken-for-granted by insiders. For example, participants noted that in some of these countries, especially in rural areas, a paternalistic doctor and patient relationship is the norm. --- Discussion --- Principal findings and meanings The most striking finding of this research was not what was present in the data, but what was absent. A thorough search of a large corpus of posts to a cross-cultural discussion forum found that less than 1% of the text addressed cross-cultural issues. More detailed analysis showed that, even when cross-cultural topics were introduced, participants' responses to them tended to be rather muted. When more lively discussions took place, superficial comparisons of social norms, and solidarity between participants, were more likely to emerge than an exploration of how contrasting cultural perspectives illuminated the practice of medical education. Links between cross-cultural issues and the FAIMER curriculum were rarely made. That is not to denigrate the importance of telling stories, whose value is increasingly recognized (King 2003) because they lead to better understanding of other people's lives, which may foster cultural tolerance. The silence which greeted some posts may be an example of 'situational silence,' in which institutional expectations constrain participants from responding (Lingard 2013). It may also signify cultural hegemony, when dominant cultural expectations make it difficult for people to identify themselves with positions that deviate from expected norms. Under those conditions, the discourse of faculty development may be restricted to uncontroversial subject matter (Lingard 2013; Dankoski et al. 2014). It is noteworthy that the mostly U.S.
FAIMER faculty made very few contributions (fewer than 10) to the cross-cultural discussions. Whether this faculty'silence' was related to cultural hegemony or lack of facilitation skills remains to be explored (Dankoski et al. 2014). --- Relationship to other publications Considerable theory and research show that cultural exchanges as part of curriculum are essential for transformative learning because they disrupt fixed beliefs and lead people to revise their positions and reinterpret meaning (Teti and Gervasio 2012;Kumagai and Wear 2014;Frenk et al. 2010). Otherwise, cultural hegemony imposes powerful influences on what and how people think about their society (Teti and Gervasio 2012). The role that silence, humor and emotions play in enhancing or inhibiting transformational learning (Lingard 2013;Dankoski et al. 2014;McNaughton 2013) has been little studied in crosscultural health professions education settings. Transformative learning is the cognitive process of effecting changes in our frame of reference-how we define our worldview where emotions are involved (Mezirow 1990). Adults often reject ideas that do not correspond to their particular values, so altering frames of reference is an important educational achievement (Frenk et al. 2010). Frames of reference are composed of two dimensions: points of view and habits of mind. Points of view may change over time as a result of influences such as reflection and feedback (Mezirow 2003). Habits of mind, such as ethnocentrism, are harder to change (Mezirow 2000). Transformative learning takes place by discussing with others the "reasons presented in support of competing interpretations, by critically examining evidence, arguments, and alternative points of view" (Mezirow 2006). This learning involves social participation-the individual as an active participant in the practices of social communities, and in the construction of his/her identity through these communities (Wenger 2000). When circumstances permit, transformative learners move toward a frame of reference that is a more inclusive, discriminating, self-reflective, and integrative of experience (Mezirow 2006). Emancipatory learning experiences must empower learners to move to take action to bring about social and political change (Galloway 2012), therefore, in designing transformative learning, simply mixing participants from different cultures or including a topic addressing ideological backgrounds of participants may not be enough (Beagan 2003;Kumastan et al. 2007) to foster critical consciousness. While information and communications technology has enabled globalization of health professions education, several factors impact outcomes. The inhibiting power of cultural hegemony can make participants hesitate to interrupt curriculum-related discussions and contribute cultural observations. Participants' culture or media preference, and their individualist and collectivist cultural traits can also affect communication styles (Schwarz 2001;Al-Harthi 2005). Pragmatic issues also play a role, such as participants' previous experience with using online settings for learning, professional development, or communities of practice (Dawson 2006). On a facilitator's part, lack of confidence in facilitating cross-cultural discourse, especially in the online environment, can also adversely impact such discourse (Dankoski et al. 2014). 
Recent reports note the need for training of both faculty and learners to let go of the concept of objectivity, scrutinize personal biases, acquire skills to "make the invisible visible" (Wear et al. 2012) and unseat the existing hidden curriculum of cultural hegemony. Faculty need to find the balance between task completion and discussion of'stories,' and acknowledge and take advantage of the tension between the opposing discourses of standardization and diversity (Frost and Regehr 2013). --- Limitations and strengths One factor that likely affected the cross-cultural discourses in this study was the perceived safety of disclosure. This may be particularly pertinent in the online setting, where current participants did not personally know all Fellows, and where privacy and security cannot be guaranteed. Fellows from two countries, whose governments are widely thought to be authoritarian (but not fellows from other countries), told us they were fearful of putting sensitive topics on the list serve due to government surveillance and IT monitoring, however this was limited to Fellows from two counties. We were also limited to the voices appearing in the online discussion; there may have been additional communication outside the list serve (e.g., personal emails between participants and faculty). Participants may more likely support and repeat mainstream stories of experiences common to many, while they may not share stories of vulnerability. Pragmatic group level usability issues, such as information overload and challenges in accessing the list serve, may also have lowered frequency of posts; such parameters are known to affect discourse structure and sense of community (Dawson 2006). Useful future research could include in-depth interviews seeking to understand why some participants felt comfortable sharing information about their lives while others did not, and exploration of the impact of culture and the online technology on this participation. Though instruments have been developed to measure participants' global cultural competence (Johnson et al. 2006;Kumastan et al. 2007), sense of community (Center for Creative Leadership 2014), and classroom community strength (Dawson 2006) Kumas-Tan's work shows that current instruments measuring cultural competency ignore the power relations of social inequality (Johnson et al. 2006;Kumastan et al. 2007). This would add another dimension to future research. Additionally, we realize that technology itself is a cultural tool; while not the focus of this study, the results, together with other studies we are conducting, are providing useful information for designing further studies to explore this issue. While we did not attempt an exhaustive documentation of the cross-cultural discourses over years, the discourse over a 1-year period was sufficient to provide initial insights. This report provides a base line for us and others studying the nature of cross-cultural interactions in professional community of practice settings. --- Implications for health professional educators These observations lead to fundamental questions: Should a person's cultural background or current events in his or her home country be brought up in an online e-learning environment for faculty development and fostering a professional community of practice? Is it possible to do this in an online discussion, or should this be left to face-to-face learning activities? What has it to do with health professions education? Is this a distraction for other faculty? 
Should learning environments maintain cultural hegemony by limiting such discourse? Should faculty actively facilitate or not? If we conclude that cultural issues should be addressed in online cross-cultural discussions, then we need to look at the depth of these discussions; in our sample, they remained non-analytical and relatively superficial. Future interventional research could include addressing how to foster discussions about participant social identity (Burford 2012), the impact of doing so on learner engagement, and the facilitation skills needed to provide a safe environment for such discussions. While we may be able to keep a group of learners 'on task' by prescribing cultural hegemony, we may miss a critical opportunity to transform the frames of reference of both learners' and educators' (Frenk et al. 2010) and to 'unmask illusions of pure objectivity' (Wear et al. 2012). Letting go of the need to keep contributions "culture-free" may empower participants to talk (or write). Moreover, knowing each other's stories makes participants in a teaching/learning setting feel they are part of a group, which can stimulate participation and reduce dropout rates (Tinto 1997). Allowing room for spontaneous stories, such as the terrorist bombings in India or the Arab rising in Egypt, can also help a group understand and accept limited participation from those who may be preoccupied with current events in their countries or lack regular access to the internet because of various conditions. Openness to sharing cultural perspectives may be an important way to foster cultural competence, a Liaison Committee on Medical Education (LCME) mandated goal for all U. S. and Canadian medical schools (Association of American Medical Colleges, Liaison Committee on Medical Education 2003). Attention to informal discussions in online --- Conflict of interest None.
Introduction Unwanted pregnancies and unsafe abortions can seriously affect any sexually active women and have negative impacts on women's personal and conjugal life, their families, and societies. Due to unsafe abortions, thousands of women die and millions more suffer long-term reproductive problems, including infertility. The incidence of unwanted pregnancies and unsafe abortions is likely to continue to increase until women's need for modern contraception is met [1]. To estimate women's need for family planning services (i.e., modern contraception) and assess women's ability to obtain their reproductive desire, recently the concept 'unmet need for modern contraception' (UNMC) has been introduced [2]. Globally this important tool is widely used for advocacy, developing family planning policies, and implementing and/or monitoring family planning programs [3]. Conceptually, UNMC captures those sexually active or fecund women who are not using modern contraceptive tools but intend to conceive a child later, or to abstain of having any more children [3,4]. Since the degree of UNMC is one of the basic indicators for evaluating the effectiveness of family planning program in any country, women having UNMC are the logically important targets for such program management [3,4]. In 2012, the global community launched the Family Planning 2020 (FP2020) initiative at 'London Summit for Family Planning' which is built on the principle that all women, regardless of their place of residence and economic status, should enjoy their human right to access safe and effective, voluntary contraceptive services and commodities [4]. Since then, the FP2020 movement has focused on 69 poorest countries, and consequently, the global coverage of modern contraceptives among reproductive age married women has been increased by 30.2 million from 2012 (270 million) to 2016 (more than 300 million) [5]. However, the usage of modern contraceptives among married women was increased slowly in Asia (from 51 percent to 51.8 percent) between 2012 to 2017, compared to their counterparts in the African region (from 23.9 percent to 28.5 percent) [6]. On the other hand, the overall UNMC was reported to be 21.6 percent among the FP2020 focused countries in 2017 with a coverage of over 25 percentage in most of the Southern Asian and Sub-Saharan African countries [6]. This higher percentage of UNMC indicate a significant barrier in achieving Sustainable Development Goal 3.7 (SDG-3.7). The high dominance of UNMC backpedals the achievement of a higher proportion of demand satisfied by modern methods, which is one of the major health related indicators of the SDG (SDG Indicator 3.7.1) [5,7]. Efforts to reduce the extent of UNMC effectively require the region-wise assessment of the socio-demographic characteristics of the population and the identification of underlying factors that directly influence unmet needs [8,9]. Though some country-specific studies [8,10,11] and much earlier literature [2,[12][13][14] reported that different socio-economic factors of women; limited choice and access to family planning methods; fear of side effects of using contraceptives; child marriage; urban-rural disparities; spousal age difference; and religious or cultural constraints, etc. have the potentiality to shape the level of UNMC among reproductiveaged women [8,11,12,15]. Moreover, some of these studies included all sexually active women regardless of their age and marital status [6,15]. 
Compared with older married women, however, young married women (aged 15-24 years) have been reported to experience disproportionately higher levels of UNMC owing to their distinct fertility preferences (e.g., a partner's desire for more children or for male children, avoidance of pregnancy complications at older ages, and persistently high child mortality), and such preferences vary from culture to culture [2,12,13,15]. Hence, the actual association between different socio-economic factors and UNMC, especially for younger married women, might not be reflected properly when all reproductive-aged women are included. The percentage of UNMC is still high among the younger married women of low- and lower-middle-income countries (LMICs), particularly in the South Asian, Southeast Asian, and Sub-Saharan African regions [2]. Yet hardly any study has explored and compared the prevalence and associated factors of UNMC among the young married women of these regions. Although Ahinkorah et al. (2020) [16] investigated socio-demographic variations in unmet need for contraception among younger women, that study was conducted regardless of marital status, was confined to the Sub-Saharan African region, and considered any type of contraception. Such limitations of the existing literature impede international comparability and underline the need for region-by-region investigation of UNMC and its associated socioeconomic factors among the young married women of LMICs. On the other hand, comprehensive comparative research examining the present prevalence of UNMC and identifying the associated factors will assist policymakers in the individual regions to adapt and implement successful family planning programs based on their respective cultural contexts and socioeconomic factors. Addressing these issues, this comparative study investigated the coverage of modern contraceptive usage and UNMC among young married women, and further identified the socioeconomic factors associated with UNMC in the LMICs of the Asian and Sub-Saharan African regions. --- Methods --- Data sources Data from the latest Demographic and Health Surveys (DHS) with available information on family planning, conducted in 32 LMICs of Southern Asia and Sub-Saharan Africa (from 2014 onwards), were used. Five countries from South Asia (Afghanistan, Bangladesh, India, Nepal, and Pakistan), four from Southeast Asia (Cambodia, Myanmar, Philippines, and Timor-Leste), 13 from West and Central Africa (Angola, Benin, Cameroon, Chad, Congo Democratic Republic, Ghana, Guinea, Liberia, Mali, Nigeria, Senegal, Sierra Leone, and Togo), and 10 countries from East and Southern Africa (Burundi, Ethiopia, Kenya, Lesotho, Malawi, Rwanda, Tanzania, Uganda, Zambia, and Zimbabwe) were included. The DHS are publicly available, nationally representative, cross-sectional surveys conducted in LMICs with multistage (usually two-stage) cluster sampling. Along with other information on maternal and child health outcomes and interventions, the DHS regularly gather information on family planning and reproductive health. Detailed administrative procedures, training, sampling strategies, and methodology of the DHS have been described elsewhere [17,18]. --- Study population This cross-sectional study was limited to currently married younger women aged 15-24 years.
After excluding the missing information on outcomes or covariates, a total of 100,666 married women, with complete information from 32 LMICs of Asia and Sub-Saharan Africa (SSA), were finally selected for this study (Table 1). Modern contraception methods. Modern contraception methods include contraceptive pills, condoms (male and female), intrauterine device (IUD), injectables, hormone implants, sterilization (male and female), patches, diaphragms, spermicidal agents, and emergency contraception [17]. Prevalence of modern contraceptive usage was determined as the percentage of women of reproductive age who report themselves or their partners as currently using at least one of the modern contraception methods. --- Measurements Unmet need for modern contraception. UNMC, the third core indicator of FP2020 initiative, was measured as the percentage of fecund women of reproductive age who want no more children or to postpone having the next child, but are not using any contraceptive method, plus women currently using a traditional method of family planning [3]. Women using any of the traditional methods (like-abstinence, the withdrawal method, the rhythm method, douching, and folk methods) were also assumed to have a UNMC. Again, pregnant women with a mistimed or unwanted pregnancy were also considered in need of contraception. --- Exposures. Based on the empirical literature [8,9,19,20], this study considered four variables i.e., educational level, type of earning from work, exposure to media (family planning messages), and household level decision-making autonomy-as the proxy variables to indicate the socioeconomic status of respondents. Educational level was classified as no education, primary, secondary, and higher. Type of earning from respondent's works was categorized into not working, working and paid (cash paid, or in-kind paid, or both), and working but not paid. Exposure to family planning messages via mass media refers to hearing family planning messages via listening radio, watching TV, and reading newspaper for the last few months. 'Exposure to media' was dichotomized by assigning a value of 1 if the respondent heard family planning messages from at least one of the mass media, and 0 if they did not. Women's household-level decision-making autonomy was measured using their responses to four questions that asked who makes decisions in the household regarding obtaining health care for herself, making large purchases, visiting family and relatives, and using contraception. Response categories were the respondent alone, the respondent and her husband/partner jointly, her husband/partner alone, someone else or other. For each of the four questions, a value of 1 was assigned if the respondent was involved in making the decision, and 0 if she was not; then, the values were summed and dichotomized as 'participated' and 'not participated'. And finally, the household wealth index, which was measured by the DHS authority using principal component analysis of the assets owned by households, and the detailed analytical procedures were described elsewhere [21]. The score was categorized into five equal quintiles (poorest, poorer, middle, richer, and richest) with the first, representing the poorest 20%, and the fifth, representing the richest 20%. --- Controlling variable. 
Based on the previous studies [8,9,11,12,15], the following controlling variables were used in the analyses along with the predictor variables: partner was more educated than the wife (yes, no); spousal age difference (less than 5 years, 5 to 9 years, 10 years and more); the number of living children (no child, 1 to 3, 4 and more); whether respondents married before 18 years old (yes, no); and place of residence (urban, rural). --- Statistical analysis Frequency distribution and univariate analysis were used to compare the proportion of UNMC with the socio-economic status of respondents. Multilevel logistic regression models with a random intercept term at community- and country-level were used to estimate adjusted odds ratios (ORs), along with 95% confidence intervals (CIs), for the relationship between exposures and UNMC. Models were adjusted for: whether the partner was more educated than the wife; spousal age difference; number of living children; marriage before 18 years old; and place of residence. For all analyses, P < 0.05 was set as the significance level. The complex survey (DHS) design was considered in all the analyses using Stata's 'SVY' command. Data management and statistical analysis were conducted in Stata version 16.1/MP. --- Result --- Country-specific coverage of modern contraceptive and percentage of unmet need Overall, 100,666 young married women from Asia and Sub-Saharan Africa were included in this analysis. The mean age (±SD) of the study population was 21.17 (±2.23) years, while their mean age (±SD) at marriage was 17.15 (±2.50) years. The pooled estimate from 32 LMICs showed that about 37% of young married women used modern contraceptives and 24% of women had UNMC (S1 Table). From the country-specific estimates, the overall percentage of modern contraceptive usage was highest in South Asia (SA) (44.7%; CI: 43.9%-45.6%), followed by Eastern and Southern Africa (ESA) (42.7%; CI: 41.6%-43.8%), and Southeast Asia (SEA) (36.5%; CI: 34.8%-38.3%) (S1 Table). Modern contraceptive usage varied from 11.6% (Pakistan) to 54.4% (India) in SA, from 17.9% (Timor-Leste) to 58.1% (Myanmar) in SEA, from 2.3% (Chad) to 19.7% (Ghana) in WCA, and from 23.5% (Burundi) to 58.5% (Zimbabwe) in ESA (Fig 1). On the other hand, the UNMC was highest in SA (24.6%; CI: 24.0%-25.1%), compared to WCA (24.2%; CI: 23.5%-25.0%), SEA (24.0%; CI: 22.6%-25.5%), and ESA (21.5%; CI: 20.7%-22.4%) (S1 Table). The proportion of UNMC ranged from 20.5% (Bangladesh) to 41.5% (Nepal) in SA, from 14.8% (Myanmar) to 28.5% (Timor-Leste) in SEA, from 15.6% (Nigeria) to 39.5% (Togo) in WCA, and from 11.0% (Zimbabwe) to 32.6% (Uganda) in ESA (Fig 1). The younger married women of Asia possessed higher socioeconomic status in all the selected aspects than their counterparts in SSA. The percentage distribution of the different socioeconomic characteristics of the study population is displayed in S2 Table. On the other hand, S3 Table showed that the UNMC was significantly higher among the South Asian younger women who had unpaid work (29.1% vs 22.4%), no media exposure (25.4% vs 23.9%), and high decision-making autonomy (27.5% vs 23.0%) than among those with paid work, media exposure, and medium decision-making autonomy.
In contrast, women from Eastern and Southern Africa who had secondary or higher education (17.5% vs 24.6%), possessed medium decision-making power (18.7% vs 39.7%), and belonged to the richest wealth quintile (17.2% vs 24.6%) reported a significantly lower proportion of UNMC compared to those with no education, high decision-making autonomy, and the poorest households (S3 Table). --- Socioeconomic factors affecting unmet need for modern contraception To investigate the socioeconomic factors associated with UNMC, the models (unadjusted and adjusted) of the multilevel logistic regression analyses from the four regions and the pooled data have been presented in Tables 2 and S4, respectively. From the adjusted analysis (Model II), women's secondary and higher education level [AOR: 1.37] was positively and significantly associated with UNMC among the women of SA and WCA (Table 2). In ESA, while women with high decision-making autonomy had 1.94 times higher odds, women with medium-level decision-making autonomy had 14% lower odds of experiencing UNMC, compared to women with low decision-making autonomy. Additionally, women's not-working status, no media exposure, and poorest wealth index showed positive and significant associations with UNMC in ESA (Table 2). However, from the pooled data of the 32 low- and lower-middle-income countries of Asia and SSA, the adjusted analysis revealed that primary education level, secondary and higher education level, not-working status, no media exposure, high decision-making autonomy, and the poorest wealth index had positive associations, whereas women's medium decision-making autonomy showed a negative but significant association with UNMC (S4 Table). --- Discussion To the best of our knowledge, this is the first comprehensive study to explore and compare the socioeconomic factors associated with UNMC among younger married women across 32 LMICs of the Asian and African regions. Younger married women from the SA region were ahead of their counterparts in the SSA regions in terms of modern contraceptive usage, but UNMC was most highly reported among the SA young married women, followed by the WCA, SEA, and ESA regions. Socioeconomic factors such as higher educational level (in SA and WCA), not-working status (in SA and ESA), no exposure to media (in SA and ESA), and higher decision-making autonomy (in SA, WCA, and ESA) showed positive and significant associations with UNMC. The poorest households were positively associated with UNMC among the women of SA and ESA, whereas they showed a negative association with UNMC in the SEA and WCA regions. Similar to our study, a recent World Family Planning (2017) investigation indicated that SA had a higher rate of modern contraceptive use than Africa and SEA [22]. Again, UNMC was found to be higher where modern contraceptive prevalence was low [22], i.e., in the WCA, SEA, and ESA regions, which is also similar to our study findings, except for SA. Younger married women hold new norms about family planning and family size owing to the development of their empowerment status [19], which can outpace the availability and use of contraceptives [22]. This might be one of the plausible reasons behind the stable or increasing prevalence of UNMC among the younger married women of SA. On the contrary, WCA reported higher UNMC in our study, which might be explained by the high-level usage of traditional contraceptive methods [23], and unawareness of, limited availability of, and the cost of modern contraception [24].
Regionally, the women of Africa (Benin 36%, Burkina Faso 27%, Burundi 33%, Cameroon 33.2%, DR Congo 40%, Ghana 34%, Liberia 32%, and Uganda 33%) and Southern parts of Asia (Afghanistan 28%, Nepal 26%, Pakistan 30%, and Sri Lanka 22%) experienced more UNMC than other regions (Southeast Asia and East Asia <unk> 20%) [6]; which was nearly consistent for younger married women of this study. While exploring the associated socioeconomic factors, the UNMC of younger married women of SA, WCA, and ESA showed positive association with both higher educational level, and high decision-making autonomy. A comparative study of Kerry MacQuarrie conducted in 41 developing countries also reported similar positive relationship between higher educational attainment and UNMC [2], but this outcome is contradicted with the some studies from Pakistan [8], Ethiopia [14], and some African countries [9]. The possible reason behind such contradiction might be the age of study population. The study population of these aforementioned studies [8,9,14] included reproductive aged women (15-49 years), whereas our study as well as the study of Kerry MacQuarrie [2] considered only the younger married women. Even though young women can be educated and aware of contraceptive usage, factors like-pregnancy expectations early in marriage, male child preferences, limited access to modern spacing contraceptives (such as-oral contraceptive pills, intrauterine devices, condoms, and sterilizations, etc.), family resistance to adopt contraceptives, and husband's reluctance on family planning issues etc. can increase their UNMC [25]. NGO conducted yard-meeting, counseling services from family planning workers, and teaching basic family planning education at schools etc. might be effective to eradicate the existing reluctance, resistance, and primitive misconceptions of using modern contraception among the spouses, and other family members. Similar to our study, one of the studies from Southern Asia [19] showed that reproductive aged women (15-49 years) with higher decision-making autonomy used modern contraceptives frequently and experienced less UNMC. Women's decision-making autonomy greatly depends on their age at marriage. Women marrying at a premature age usually possess lower social standing in the household, whereas later marriage provides a woman proper authority inside the home, ability to negotiate with household members, and strong involvement in decision-making after marriage [26]. So, in many LMICs, when younger women try to raise their voices, especially for their reproductive rights, and try to make a decision regarding her choice of using family planning methods, they experience different types of spousal violence [20]. On the other hand, couples possessing an equalitarian power structure at household and women holding medium level of authority within the home appeared to be more effective in satisfying their unmet contraception demand [27]. That is why, high decision-making authority showed positive association with UNMC among the married young women of SA, WCA, and ESA region in this study. By promoting community-based outreach campaigns and multisectoral programs for family planning focusing on couple's egalitarian decision-making power structure in the household might reduce the level of UNMC [27]. In both SA and ESA, unemployment, and poorest wealth-index were positively associated with the experience of UNMC among the study population. These findings were accordant with some empirical study results [2,8,9,15]. 
Employed women, as well as, women from the economically advantaged household are usually able to increase their opportunity cost of bearing and rearing a child, compared to the unemployed and poor women [8]. But the scenario is different in most of the low-income countries. Because rearing babies by baby sitters are too much costly and they are not always available in LMICs [28]. In such cases, the mother's sole responsibility of bearing and rearing a child reduces their time devoted to paid work and consequently, they may have to forego their source of income. Thus, unemployed and poor mothers try to avoid the extension of their family size and focus on the cost management of the household. Moreover, compared to the poor families, solvent households possess better access to modern contraceptives and most of the family planning services [8]. Therefore, similar to the studies of Nigeria, Pakistan, and Zambia [9], our study observed an increased likelihood of UNMC among the unemployed women and poor households, compared to the employed and rich ones. Introducing home-craft markets and promoting different micro-finance programs will create more employment opportunities for younger women. Establishing healthcare complex in remote and rural areas will provide better access to family planning services among the underprivileged population. Additionally, Non-Government organizations (NGOs) and local governing bodies should supply modern contraceptives at a low cost to the economically disadvantaged regions. Consistent with previous studies [8,29], lack of exposure to family planning messages via media was found to be one of the major socio-economic determinants of UNMC in this study. The plausible reasons might be the lack of knowledge about the advantages of contraceptive usages, negative perceptions, and the excessive fear of side-effects of contraception [8,29,30]. A qualitative study from the rural areas of India [25] revealed that young married couples without proper media access have the misperception about the usage of oral contraceptive pills and intrauterine devices. For the last two decades, Governments of LMICs have been implementing a lot of actions to convince people concerning to the efficacy of birth control programs via extensive media campaign, where the messages of celebrities and influential personalities of the society are communicated to people to persuade them about the benefits of family planning programs. But, due to the lack of access to media among rural women, the effort of the government does not seem to be fruitful to change the perception of rural and superstitious people about the side effects of using contraceptives [8,30,31]. However, the access to media has to be increased through intervening programs, and family planning messages, advertisements, and campaigns via mass media should be accelerated. Such campaigns and messages may be helpful to remove the superstitions and fear of side-effects of using contraceptives among rural and lowly educated people. Additionally, this will eventually increase the awareness about their sexual and reproductive health, the acceptability of using the modern contraceptive, and the autonomy in fertility decision making. The prime strength of this study comes from using the large and nationally representative surveys from 32 LMICs of Asia and Sub-Saharan Africa. 
To our knowledge, this is the first study to estimate country-wise UNMC among younger women and to examine and compare its association with women's socioeconomic status across regions. Most importantly, this study was limited to younger women who were married, because including unmarried women could have under-represented estimates in some regions: some African and South Asian countries have limited reproductive health data for unmarried women, and many unmarried women with sexual experience may feel uncomfortable reporting it, which could bias the measurements. On the other hand, the study sample was limited to younger married women aged 15-24, and the data depended mainly on their verbal reports. Moreover, women's perception of wanting the next pregnancy or spacing it may change during pregnancy or depend on different life circumstances. Additionally, the possibility of social desirability bias remains because of the self-reported nature of the data, whose validity and reproducibility are unknown. Finally, as this was a cross-sectional study, it was not possible to draw causal inferences, only associations. --- Conclusion The highest coverage of modern contraceptive use among younger married women was reported in SA and the lowest in WCA, yet women from SA and ESA experienced the highest and lowest proportions of UNMC, respectively. In SA, socioeconomic factors such as higher education, unemployment, lack of media access, high decision-making autonomy, and poor wealth index showed a positive association with UNMC, whereas medium decision-making autonomy and poor wealth index showed a negative association with UNMC in SEA. High decision-making autonomy increased women's UNMC in both WCA and ESA. Additionally, higher education in WCA, and unemployment, no media exposure, and poor wealth index in ESA were positively associated with women's experience of UNMC, which is a noteworthy contribution to the field. To achieve Sustainable Development Goal (SDG) target 3.7, i.e., ensuring universal access to sexual and reproductive healthcare services by 2030, the international community must continue existing campaigns to increase modern contraceptive use worldwide, and policy makers of the respective LMICs can implement versatile intervention programs to reduce UNMC among younger married women based on the findings and suggestions of this comparative study. --- All the datasets are available at: https://dhsprogram.com/data/available-datasets.cfm.
Modern contraceptive methods are effective tools for controlling fertility and reducing unwanted pregnancies. Yet the unmet need for modern contraception (UNMC) remains high in most developing countries. This study aimed to compare the coverage of modern contraceptive usage and the UNMC among young married women of low- and lower-middle-income countries (LMICs) of Asia and Sub-Saharan Africa, and further examined the likelihood of UNMC across these regions. This cross-sectional study used Demographic and Health Survey (DHS) data on family planning from 32 LMICs of South Asia (SA), Southeast Asia (SEA), West-Central Africa (WCA), and Eastern-Southern Africa (ESA). Multilevel logistic regression models were used to investigate the relationship between UNMC and women's socioeconomic status. Out of 100,666 younger married women (15-24 years old), approximately 37% used modern contraceptives, and 24% experienced UNMC. Regionally, women from SA reported the highest modern contraceptive usage (44.7%) and the highest UNMC (24.6%). Socioeconomic factors such as higher education (in SA and WCA), unemployment (in SA and ESA), no media exposure (in SA and ESA), and higher decision-making autonomy (except SEA) showed positive and significant associations with UNMC. The poorest households were positively associated with UNMC in SA and ESA, while negatively associated with UNMC in SEA. UNMC was most commonly reported among young married women in SA, followed by the WCA, SEA, and ESA regions. Based on these findings, versatile policies, couples counseling campaigns, and community-based outreach initiatives might be undertaken to minimize UNMC among young married women in LMICs.
Background Alcohol use is a major cause of mortality and morbidity among young people, being implicated in large proportions of unintentional injuries [1][2][3][4], as well as of violent behaviours resulting in homicides and suicides [5,6]. Underage alcohol drinking has also been associated with school drop-out [7] and unsafe sex [3], which in turn predict poor general health later in life [8]. Studies in the United States, Australia, and Europe have indicated that early onset of alcohol use is a predictor of substance abuse and alcohol dependence in adulthood [9][10][11]. Although most of these behaviours are associated with socioeconomic characteristics among youths [12], little evidence exists in the literature in support of a socioeconomic gradient of alcohol use during adolescence [13]. However, some differences emerge when investigating different drinking dimensions. Some studies among young people have reported a direct relationship between household income and frequency of alcohol consumption [14,15], but an inverse relationship between the occupational level of the father and the quantity of alcohol consumed on a typical drinking occasion [16]. Other studies have suggested that low socioeconomic status may be associated with problematic drinking in youth [17][18][19]. Given social differences in profiles of alcohol use and the recognized need to reduce the social gap in the burden of risk factors [20], an evaluation of preventive programmes across social strata is desirable. Since most preventive programmes are delivered at the community level (e.g. in schools) rather than at the individual level, measures of social disadvantage should be assessed accordingly, at the collective level. In fact, recent studies in the United States reported complex associations between community-level indicators of socioeconomic status and underage drinking [21][22][23]. Besides, research has shown that neighbourhood socioeconomic position influences health-related behaviours [24,25]. Several potential mechanisms have been hypothesized, such as the availability of health, social and community support services and the provision of tangible support (e.g. transportation, leisure and sporting facilities) [26]. Therefore, it is plausible that the context into which a preventive programme is brought will influence its effectiveness. However, this effect has rarely been considered in the evaluation of school-based interventions against alcohol use, for instance by comparing the intervention's effects between areas of different social levels. The purpose of the present study was to analyse whether the social environment at the level of the school area affects the effectiveness of preventive school curricula on alcohol use. The EU-Dap (EUropean Drug Abuse Prevention) study was the first European trial designed to evaluate the effectiveness of a new school-based programme ("Unplugged") for substance use prevention. Participation in the programme was associated with a lower occurrence of episodes of drunkenness and alcohol-related behavioural problems 18 months after baseline, compared to usual curricula, while average alcohol consumption was not impacted [27,28]. Since the socioeconomic status of the living environment has been associated with adolescents' educational achievements and health behaviour [29,30], we hypothesized a different preventive impact of the intervention in environments of different socioeconomic levels.
--- Methods The EU-DAP trial (ISRCTN-18092805) took place simultaneously in nine centres of seven European countries: Austria, Belgium, Germany, Greece, Italy, Spain and Sweden. The research protocol complied with the ethical requirements applicable at the respective study centres. --- Experimental Design and Sample The study was a cluster randomised controlled trial among students attending junior high school in the participating regional centres: one urban community from each involved country (the municipalities of Wien, Merelbeke, Kiel and Bilbao, the north-west region of Thessaloniki, and the Stockholm region of Sweden) and three urban communities from Italy (the municipalities of Turin, Novara, and L'Aquila). One hundred and seventy schools were selected on the basis of inclusion criteria and willingness to cooperate. Schools were sampled in order to achieve a balanced representation of the underlying average socioeconomic status of the population in the corresponding catchment area. Prior to randomisation, schools within each regional centre were ranked by social status indicators and classified as being of high, medium, or low socioeconomic level on the basis of tertiles of the corresponding distribution. This stratification was done independently by each regional centre using the most reliable and recently available data. Different indicators were used (Table 1). Indicators of the social conditions of the population in the catchment area of the school were used in Greece and Sweden. In Germany, Belgium and the two Italian centres of Turin and Novara, the type of school was used, because there is a clear social class gradient in the corresponding school systems. In the remaining regional centres a combination of area and school indicators was used. Schools in each centre were randomised to either the intervention or a "usual curriculum" (control) group within the socioeconomic stratum. Students in the intervention group participated in the EU-Dap substance abuse preventive programme, consisting of 12 one-hour sessions designed to tackle adolescents' use of alcohol, tobacco and illicit drugs. This new curriculum is based on a Comprehensive Social Influence model [31] and focuses on developing and enhancing interpersonal and intrapersonal skills. Sessions on normative education and information about the harmful health effects of substances are also provided. Details on the curriculum's theory base and content have been provided elsewhere [32]. Ordinary classroom teachers were trained during a 3-day course in interactive teaching techniques. Thereafter, they administered the intervention sessions over three months. The protocol of the programme implementation was carefully standardised. Students in the control group received the programme normally in use at their schools, if any. In October 2004, 7079 students aged 12-14 years (3532 in control schools and 3547 in intervention schools) participated in the pre-test survey. Post-test data were collected in May 2006, i.e. at least 18 months after baseline. Data from baseline and follow-up surveys were matched using a self-generated anonymous code [33], leaving an analytical sample of 5541 students (78.3%). Additional information on the study design and study population has been published elsewhere [34].
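As an illustration of the blocked allocation described above, the sketch below (with invented school identifiers and centre names; this is not the trial's actual allocation code) shuffles schools within each centre-by-SES block and alternates arm labels so that both arms are represented in every stratum:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=2004)  # fixed seed so the allocation is reproducible

# Hypothetical roster: centre and SES tertile are known for each eligible school before allocation.
schools = pd.DataFrame({
    "school_id": range(12),
    "centre":    ["Turin"] * 6 + ["Kiel"] * 6,
    "ses_level": ["low", "low", "medium", "medium", "high", "high"] * 2,
})

def allocate_block(block: pd.DataFrame) -> pd.DataFrame:
    """Randomly order the schools of one (centre, SES) block and alternate arm labels."""
    block = block.sample(frac=1, random_state=int(rng.integers(1_000_000))).copy()
    arms = ["intervention", "control"] * ((len(block) + 1) // 2)
    block["arm"] = arms[:len(block)]
    return block

allocation = (schools
              .groupby(["centre", "ses_level"], group_keys=False)
              .apply(allocate_block)
              .sort_values(["centre", "ses_level", "arm"]))
print(allocation)
```

Because the allocation alternates within each block, every centre-by-SES stratum contributes schools to both arms, which is what makes the later within-stratum comparisons possible.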
--- Data collection and measures Self-reported substance use, along with relevant cognitive, attitudinal, and psychometric variables, was assessed by an anonymous paper-and-pencil questionnaire, administered in the classrooms without teachers' participation. Students were reassured about the confidentiality of their reports and the anonymous code procedure was explained. Apart from language adaptation, the same questionnaire and assessment procedures were used across all countries and all data collection points. Most questions were retrieved from the "Evaluation Instruments Bank" (http://eib.emcdda.europa.eu/), accessed in 2004. A test-retest evaluation of the survey instrument was conducted during a pilot study [33]. The outcomes of interest in the present analysis were: average frequency of current alcohol consumption, past 30-day prevalence of episodes of drunkenness, intention to drink and to get drunk within the next year, and occurrence of problem behaviours related to the use of alcohol. The latter was assessed by asking the students whether they, in the past 12 months, had experienced any of 11 problems, including fighting and injury, because of their drinking. Intentions to drink alcohol or to get drunk within the next year were reported by the students on a 4-point scale ranging from "Very likely" (1) to "Very unlikely" (4). In addition, we explored some individual psycho-social characteristics: perceived school performance, exposure to siblings' alcohol use, and perceived parents' tolerance concerning alcohol drinking. The questions used for the assessment of outcomes and predictors have been fully described in previous reports [34,35]. We dichotomized the frequency of alcohol consumption into "Any current drinking" versus "No current drinking", as well as into an indicator of frequent drinking ("Drinking at least weekly" versus "Drinking less than weekly or not at all"). Intentions to drink and to get drunk were also dichotomized into "Very likely" or "Likely" versus "Unlikely" or "Very unlikely". Since the baseline prevalence of each alcohol-related behavioural problem and of episodes of drunkenness was very low, we collapsed these responses into two dichotomous outcomes of "No alcohol-related problems" versus "Any problem" in the past 12 months, and "No episodes of drunkenness" versus "Any episode" in the past 30 days, respectively. Perceived school performance, based on self-comparison of own grades with those of the classmates, was coded as "Worse" versus "As good or better". Exposure to siblings' alcohol use was dichotomized, and students without siblings were considered unexposed to this influence. Perceived parents' tolerance concerning alcohol drinking was dichotomized into "Would not allow me to drink at all" versus "Others". Assessed socio-demographic characteristics included gender, age, school grade and the family living situation, coded as "Living with both parents" versus "Other living situation". [Table 1: Indicator of social status, number of enrolled schools and students at baseline, by regional centre.] --- Statistical analysis We performed descriptive statistical analyses to summarize the main characteristics of the study sample. We tested the baseline equivalence of outcomes and predictors of interest by experimental condition, separately by socioeconomic level, using chi-square tests with the appropriate degrees of freedom.
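A minimal sketch of the baseline-equivalence check just described, using a synthetic pupil-level table (invented variable names, not the EU-Dap data): each dichotomised baseline variable is cross-tabulated against the experimental condition within every socioeconomic stratum and tested with a chi-square test.

```python
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency

rng = np.random.default_rng(42)
n = 900

# Synthetic stand-in for the baseline survey.
students = pd.DataFrame({
    "arm":       rng.choice(["intervention", "control"], n),
    "ses_level": rng.choice(["low", "medium", "high"], n),
    "any_current_drinking": rng.binomial(1, 0.15, n),
})

# Chi-square test of arm vs. baseline outcome, run separately within each SES stratum.
for ses, sub in students.groupby("ses_level"):
    table = pd.crosstab(sub["arm"], sub["any_current_drinking"])
    chi2, p, dof, _ = chi2_contingency(table)
    print(f"SES {ses}: chi2 = {chi2:.2f}, df = {dof}, p = {p:.3f}")
```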
Odds Ratios (OR) and their corresponding Confidence Intervals (95% CI) were estimated as measures of association between experimental conditions and behavioural outcomes, separately for each socioeconomic level of the school area. A multilevel logistic regression model was fitted to account for the hierarchical structure of the data, with one random effect at the classroom level and one at the regional centre level [36]. We tested several established predictors of substance use as potential confounding variables. These included gender, age, family living situation, family alcohol use, perceived school performance, perceived parents' tolerance concerning alcohol drinking, and the baseline status of the behaviour under study. Models were adjusted for variables on which the intervention and control group significantly differed at baseline and for the baseline status of the outcome. We also formally tested for statistical interaction by including in the regression model a cross-product term between the treatment condition and the socioeconomic status indicator, coded as dummy variables. A significant test statistic based on the likelihood ratio test for this interaction term is evidence that treatment effects vary by school socioeconomic level. All analyses were performed using the statistical package MLwiN 2.2 [37]. All outcome analyses were intent-to-treat. --- Results The sample consisted of 5541 students, 49.1% of whom were female. Mean age was 13.2 years. At baseline, gender and age distributions differed among social levels (data not shown). Schools in the lowest level had a higher percentage of male and older students. Students in schools of high socioeconomic level were more likely than students in other schools to drink at least monthly (17.2% vs. 14.6%, p = 0.01) and to intend to drink (43.7% vs. 39.0%, p < 0.01), while students in schools of low socioeconomic level were more likely to report recent episodes of drunkenness (7.0% vs. 4.0%, p < 0.01), intention to get drunk (20.0% vs. 17.6%, p = 0.03), and alcohol-related problem behaviours (4.2% vs. 3.0%, p = 0.02). The only difference among social levels with regard to the considered psychosocial variables was that students in schools of low socioeconomic level were more likely than other adolescents to perceive their school performance as worse than average (10.7% vs. 6.1%, p < 0.01). Figure 1 shows the sample size and the equivalence of some baseline characteristics by experimental condition, separately by socioeconomic level. Within levels of socioeconomic environment we found different distributions between the control and intervention groups for gender, age, family living situation, frequency of alcohol consumption, and intentions to drink and to get drunk. Controls in the lower social level had higher proportions of well-known predictors of alcohol use (male gender, older age and early drinking experience) compared to students in the intervention group. Missing information at baseline was negligible for all assessed characteristics (at most 2.1%, data not shown). Participation in the programme was associated with a significantly lower prevalence of episodes of drunkenness and of intention to get drunk, compared to usual curricula, among students attending schools in a low socioeconomic context (Table 2). For both outcomes the estimated OR was approximately 0.60.
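The estimates just quoted come from the modelling strategy described in the Methods. As a simplified, single-level illustration of the interaction test (synthetic data and invented variable names; the trial itself fitted multilevel models with classroom and centre random effects in MLwiN, which this sketch does not reproduce), a likelihood-ratio comparison of nested logistic models could look like this:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

rng = np.random.default_rng(1)
n = 1200

# Synthetic pupil-level data standing in for the trial dataset.
students = pd.DataFrame({
    "treat":      rng.integers(0, 2, n),                     # 1 = intervention
    "ses":        rng.choice(["low", "medium", "high"], n),  # SES level of the school area
    "male":       rng.integers(0, 2, n),
    "age":        rng.integers(12, 15, n),
    "drunk_base": rng.binomial(1, 0.05, n),
})
p = 0.10 - 0.04 * students["treat"] * (students["ses"] == "low") + 0.05 * students["drunk_base"]
students["drunk_fu"] = rng.binomial(1, p.clip(0.01, 0.99))

# Nested logistic models: main effects only vs. treatment-by-SES interaction.
m_main  = smf.logit("drunk_fu ~ treat + C(ses) + male + age + drunk_base",
                    data=students).fit(disp=0)
m_inter = smf.logit("drunk_fu ~ treat * C(ses) + male + age + drunk_base",
                    data=students).fit(disp=0)

# Likelihood-ratio test for the interaction (two extra parameters).
lr_stat = 2 * (m_inter.llf - m_main.llf)
df_diff = m_inter.df_model - m_main.df_model
print("LR =", round(lr_stat, 2), "df =", df_diff,
      "p =", round(stats.chi2.sf(lr_stat, df_diff), 3))
```

A small p-value for this likelihood-ratio statistic would indicate that the programme effect differs across socioeconomic strata, which is the quantity the trial's interaction test targets.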
The same students had an OR of 0.68 for reporting behavioural problems due to their drinking, but this effect was only marginally significant (p = 0.06). Concerning the frequency of alcohol consumption, the estimated effects did not reach statistical significance within sub-groups, but the estimates were consistently lower among students attending schools in disadvantaged contexts. No significant programme effects emerged for students in schools of medium or high socioeconomic level. Interactions between intervention condition and socioeconomic status at the area level were found to be statistically significant only for intention to get drunk (p = 0.02). --- Discussion In a multi-centre trial among European students we found some evidence that the effectiveness of a comprehensive social influence school-based preventive programme on problematic drinking might differ by socioeconomic environment of the school. The differences indicated a higher preventive impact of the curriculum on episodes of drunkenness and intention to get drunk among students attending schools in a socially deprived context, compared to students in a medium or high social context. The effects of the programme on the frequency of alcohol consumption and the intention to drink were weak and not statistically significant in subgroups, in line with results on the whole study sample [28]. However, even for these outcomes the direction of the estimated effects suggested a higher impact of the curriculum in schools in a low social context. The absence of statistical significance in most interaction tests is compatible with homogeneity of the effects among social strata. However, given the overall pattern of associations, consistently indicating the most favourable effect in areas with a low social index, it is also plausible that an existing difference was not detected due to limitations of the study, in particular the imperfect classification of social status of the living environment and the limited sample sizes. Few studies have examined how socioeconomic characteristics influence the effectiveness of school-based substance use prevention. If only life skills training approaches are considered, the evidence is extremely scant and based on observations limited to low social class contexts. In fact, evaluation studies have reported preventive effects on alcohol use in low socioeconomic contexts for Botvin's "LifeSkills Training" [38][39][40] as well as for the "keepin' it REAL" curriculum of the Drug Resistance Strategies Project [41]. [Table 2: Results from multilevel models adjusted for gender, age, family living situation and baseline status of the outcome: odds ratios (OR) and 95% confidence intervals (95% CI) of alcohol-related behaviour for students in the intervention group compared to the controls, by socioeconomic level of the school area. The EU-Dap Study, 18-month follow-up.] None of these studies provided a comparison of the programme impact with upper social context populations. As an exception, the original edition of Project ALERT was proven equally effective in schools with populations of high and low average social level, but the programme resulted in only short-lived effects for alcohol use [42]. To our knowledge, only one recent study has investigated how neighbourhoods influence the effectiveness of a school curriculum in preventing alcohol use [43]. This study reported that living in poorer neighbourhoods decreased the programme's effectiveness in one ethnic subgroup of the sample.
A possible explanation for the indication of an effect modification of social environment on problematic drinking in our study is that the curriculum was more relevant to schools whose populations had, on average, a low socioeconomic status. It is also plausible that neighbourhood disadvantage correlates with a lack of educational resources and of social and familial support to adolescents. Therefore, the relative "preventive gain" from school prevention would be higher in these under-privileged contexts. Differential teachers' response to training is another possible explanation. Teachers in schools from socially disadvantaged communities may have taken greater advantage of the training, improving their capability to conduct interactive teaching to a larger extent than teachers in communities of medium or high socioeconomic status. It is also possible that contamination occurred to a larger extent in control schools from medium or high socioeconomic areas, if these schools conducted other health promoting interventions based on skill-enhancing methods similar to the "Unplugged" curriculum. There are three major weaknesses in this study. First, the sample size was calculated to study the programme effects on the whole sample. Economic and organizational difficulties made it impossible to sample a number of schools sufficient to explore differential effects across sub-groups. Given the need to employ a multilevel analysis, the study had limited statistical power to detect intervention effects for specific subgroups. Despite the lack of power, tendencies in the results were consistent in indicating a higher effectiveness in socially disadvantaged contexts, with a significant interaction for one outcome. Secondly, the participating centres classified the socioeconomic status of school areas using the best locally available indicators and sources of information, which, however, differed among centres and were not validated. This may have led to measurement error and misclassification of social grouping for some schools. However, since schools in the EU-Dap study were randomised within blocks of social level, the misclassification would be independent of the experimental arm as well as of the study outcome. The most probable consequence of this unconditional classification error would be to bias the effect estimates towards the value expected under the null hypothesis, i.e. an underestimation of the effect modification. Thirdly, intervention and control arms within each socioeconomic level differed on some potential confounders related to baseline characteristics, despite the random assignment. This was probably due to group allocation of a relatively low number of schools within each socioeconomic block. Therefore, some residual confounding within strata could be present, although all analyses were adjusted for measured baseline factors that could constitute potential confounders. Lack of information on socioeconomic status at the individual level could also be acknowledged as a limitation. However, this was rather the consequence of a deliberate choice, since children's reports of parental occupation or education are generally not reliable [44]. This was one of the few studies designed to consider differential effects of a preventive programme across socioeconomic groups.
In fact, the assignment of schools to treatment or control conditions was accomplished through block randomisation that controlled for the socioeconomic characteristics of the environment, thus achieving a balanced representation of social strata in the study sample. Also, many different behavioural aspects of alcohol use were investigated. --- Conclusions The innovative school curriculum evaluated in the EU-DAP study seems to have a beneficial preventive effect on problem drinking, motivating its further dissemination in schools in areas of lower socioeconomic level. Since higher prevalence rates of unhealthy behaviours among lower socioeconomic groups contribute substantially to socioeconomic inequalities in health [45], universal prevention programmes that are effective in lower socioeconomic groups may be useful in reducing this socioeconomic gap, one of the major priorities of public health policy in Europe. --- Authors' contributions FF and MRG designed the study. MPC and MRG drafted the paper. FF and RB contributed to revising the paper. MPC performed the statistical analysis. The members of the EU-Dap Study Group carried out the intervention and collected the data. MPC has overall responsibility for the paper. All authors contributed to and approved the final manuscript. --- Competing interests The authors declare that they have no competing interests.
Background: Although social environments may influence alcohol-related behaviours in youth, the relationship between neighbourhood socioeconomic context and the effectiveness of school-based prevention against underage drinking has been insufficiently investigated. We study whether the social environment affects the impact of a new school-based prevention programme on alcohol use among European students. Methods: During the school year 2004-2005, 7079 students 12-14 years of age from 143 schools in nine European centres participated in this cluster randomised controlled trial. Schools were randomly assigned to either a control condition or a 12-session standardised curriculum based on the comprehensive social influence model. Randomisation was blocked within socioeconomic levels of the school environment. Alcohol use and alcohol-related problem behaviours were investigated through a self-completed anonymous questionnaire at baseline and 18 months thereafter. Data were analysed using multilevel models, separately by socioeconomic level. Results: At baseline, adolescents in schools of low socioeconomic level were more likely to report problem drinking than other students. In this group, participation in the programme was associated with decreased odds of reporting episodes of drunkenness (OR = 0.60, 95% CI = 0.44-0.83) and intention to get drunk (OR = 0.60, 95% CI = 0.45-0.79), and marginally with decreased odds of alcohol-related problem behaviours (OR = 0.70, 95% CI = 0.46-1.06). No significant programme effects emerged for students in schools of medium or high socioeconomic level. Effects on the frequency of alcohol consumption were also stronger among students in disadvantaged schools, although the estimates did not attain statistical significance in any subgroup. Conclusions: It is plausible that comprehensive social influence programmes have a more favourable effect on problematic drinking among students in underprivileged social environments.
Introduction This study analyzed the influence of economic capital, culture capital, social capital, social security, and living conditions on children's cognitive ability. With the deepening development and reform of education, cultivating children's human capital has become a key concern for many families. A fundamental aspect of this cultivation is children's cognitive ability, the ability of human beings to extract, store, and use information from the objective world; it mainly involves abstract thinking, logical deduction, and memory (Autor 2014). As documented, there is a significant correlation between family factors and children's cognitive ability (Zimmer et al. 2007; Kleinjans 2010; Li 2012, 2017; Saasa 2018; Fan et al. 2019; Wang and Lin 2021). Specifically, there are three capital theories that focus on the impact of family on children's cognitive ability, namely economic capital, cultural capital, and social capital (Bourdieu and Wacquant 1992; Farkas 2003). The impact of cultural capital is particularly important (Li and Zhao 2017; Yao and Ye 2018; Zhang and Su 2018; Hong and Zhang 2021), since economic capital realizes its value only through cultural capital (Hong and Zhao 2014). In addition, there are great differences in economic capital, social capital, and cultural capital between urban and rural families, which leads to the urban-rural education gap (Jin 2019). These findings concentrate on the influence of family capital on children's cognitive ability, but social security and living conditions are not touched upon. In contrast, this study investigated the influence of all of these factors on children's cognitive ability, in particular social security and living conditions. Each form of family capital has corresponding measurement indicators. In particular, economic capital includes family income (Yang and Wan 2015; Fang and Hou 2019; Hou et al. 2020), health investment (Shen 2019; Wu et al. 2021), and education expenditure (Lin et al. 2021; Fang and Huang 2020), and refers to the sum of economic-related resources owned by a family (Xue and Cao 2004). Culture capital is reflected not only in the diplomas obtained by family members, but also in the educational concepts, attitudes, and expectations of parents for their children (Guo and Min 2006); it includes three forms: concrete culture capital, such as family parenting (Zhang et al. 2017; Huang 2018), lifestyle (Wu et al. 2020), education expectation (Gu and Yang 2013; Wang and Shi 2014; Xue 2018; Zhou et al. 2019), and participation (Wei et al. 2015; Liu et al. 2015; Liang et al. 2018); objectified culture capital, including books (Hong and Zhao 2014; Yan 2017); and institutionalized culture capital, referring to the educational diploma obtained (Xie and Xie 2019; Zhu et al. 2018). From the perspective of micro social networks, the social capital referred to in this paper is defined as a resource embedded in the network (Granovetter 1973); social capital is treated as a new form of capital through which actors can obtain better professional positions or business opportunities and thus affect their income returns (Lin 2005). Specifically, social capital includes occupation (James 2000; Teacherman 2000; Fang and Feng 2005; Zhou and Zou 2016; Zhu and Zhang 2020), social communication (Putnam 2000; Liang 2020; Yang and Zhang 2020), information utilization (Cao et al. 2018; Zheng et al. 2021), and human expenditure (Wang and Gong 2020).
Social security can improve residents' household consumption (Fang and Zhang 2013; Yang and Yuan 2019) and alleviate economic poverty (Guo and Sun 2019) through income redistribution, which can increase families' economic capital and affect investment in children. Thus social security, including medical insurance (Chen et al. 2020), endowment insurance (Xue et al. 2021), and government support (Liu and Xue 2021; Yin and Fan 2021), affects children's cognitive ability. Living conditions refer to the family infrastructure and facilities that affect children's lives, including safe drinking water, sanitary toilets, clean energy, waste treatment, and sewage treatment (Zhao et al. 2018). In particular, exposure to air pollution (Chen et al. 2017a; Schikowski and Altug 2020; Nauze and Severnini 2021), water (Chen et al. 2017b; Gao et al. 2021), and fuel (Cong et al. 2021; Chen et al. 2021) also affects cognitive ability. Other factors include family structure (Zhang 2020; Jiang and Zhang 2020), family size (Liu and Jin 2020; Fang et al. 2020), and family health (Li and Fang 2019). Unlike previous work, this study applied instrumental variables and two-stage least squares regression analysis to address endogeneity when assessing the influence of numerous factors on children's cognitive ability. The robustness of the results was assessed by controlling the sample size and adding variables. In addition, children's individual and social characteristics affect cognitive ability. For example, the performance of girls is better than that of boys, although the gender difference is decreasing (Hao 2018). The older the migrant child, the worse the academic performance (Wang and Chu 2019). The number of siblings has a significant impact on youths' cognitive ability (Tao 2019). Accordingly, this study also investigated heterogeneity in these influences by gender and urban location. This study examined the impact of numerous factors, including social security and living conditions, on children's cognitive ability, using data from the China Family Panel Studies in 2018. Rather than the ordinary least squares method, the study used two-stage least squares regression to address endogeneity. In addition, we explored heterogeneity by gender and urban location in the impact of those factors on children's cognitive ability. The results may provide guidance for the government, society, and families in improving children's cognitive ability. The remainder of this paper is organized as follows. Section 2 describes the data, variables, and summary statistics. Section 3 outlines the basic model for the influence of those factors on children's cognitive ability. Section 4 describes the instrumental variable test, endogeneity test, empirical results, and robustness test. Section 5 presents the heterogeneity analysis by gender and urban location. Section 6 concludes. --- Data, Variable, and Summary Statistics --- Data This study used data from the China Family Panel Studies (CFPS), a tracking survey of individuals, families, and communities implemented by the China Social Science Investigation Center of Peking University, which aims to reflect changes in China's society, economy, education, and health. The data sample covers 25 provinces/cities/autonomous regions, and the respondents include all family members. In the implementation of the survey, a multi-stage, implicit stratified, population-proportional sampling method was used. The main study population of this work was children aged 6-16.
Since the respondents to the CFPS personal self-administered questionnaire are children over nine years old, and children's perception of their own situation is not necessarily accurate, this study mainly used the children's proxy questionnaire, combined with relevant variables such as the parents' situation from the personal self-administered questionnaire and basic household information from the family questionnaire. The basic information on families, parents, and their children in 2018 was extracted and matched. --- Explained Variables Following Li and Shen (2021), Wu et al. (2020), and Dong and Zhou (2019), children's Chinese and math scores were used in this study to measure Chinese cognitive understanding ability and math reasoning cognitive ability, respectively, based on the "How about Chinese score" and "How about math score" items in the CFPS questionnaire, both of which are ordinal categorical variables (1 for "fail", 2 for "intermediate", 3 for "good", and 4 for "distinction"). --- Explanatory Variables In this study, the main explanatory variables were divided into five parts: economic capital, culture capital, social capital, social security, and living conditions. Economic capital was measured by family income, children's health investment, and education investment. All are continuous variables, to which 1 was added before taking the natural logarithm. Culture capital was measured by the questions "How many books do you have in your family?", "What is the highest degree you have completed?", "What level of education do you want your child to attain?", "How often do you discuss what's happening at school with your child?", and "When your children's grades are not satisfactory, which way do you usually deal with them?". These represent family books, education, educational expectation, educational participation, and parenting style, respectively. There are three aspects of culture capital, namely objective, institutional, and concrete culture capital (Bourdieu and Passeron 1977). For family education and education expectation, 0 is for illiterate/semi-illiterate, 1 for nursery, 2 for kindergarten, 3 for primary school, 4 for junior middle school, 5 for senior middle school, 6 for junior college, 7 for undergraduate, 8 for master, and 9 for doctor. For parenting style, we redefined scolding the child, spanking the child, and restricting the child's activities as 0, and contacting the teacher, telling the child to study harder, helping the child more, and doing nothing as 1; that is, 0 is for stern parenting and 1 is for gentle parenting. Family books and children's education participation are continuous variables, and 1 was added to the number of books before taking the natural logarithm. In addition, family lifestyle, an ordered variable, consists of smoking, drinking, exercise, and lunch break. Social capital was measured by "nature of work", "information utilization", "social communication", and "human expenditure". For job, 1 is unemployed, 2 is agricultural work, and 3 is non-agricultural work. We used the questions "Do you use a mobile phone?", "Do you use mobile devices?", and "Do you use a computer to surf the Internet?" to measure information utilization, defined as follows: 0 means none is used, 1 means at least one is used, 2 means at least two are used, and 3 means all three are used.
The questions of "How good do you think your relationship is?" and "How do you rate your trust in your neighbors?" were used to measure the social communication. We summed and then averaged the answers to these two questions and obtained a continuous variable. Human expenditure is a continuous variable and was added 1 before taking the natural logarithm. Social security was measured by the participation of medical and endowment insurance and government support. Among them, medical and endowment insurance are continuous variables. For government support, 0 is for not accepting subsidies, and 1 is for accepting the subsidies. Living conditions were measured by the questions about "water for cooking", "cooking fuel", and "indoor air purification", and the answer 0 is for no and 1 is for yes. Specifically, for tap water, 0 represented no tap water use, and 1 is for tap water use. For cooking fuel, 0 is for no use of clean fuel, and 1 is for clean fuel use. For air purification, 0 is for no air purification, and 1 is for use of air purification. In addition, for gender, 0 is for women and 1 is for men. The registered residence was redefined: 0 is for rural, and 1 is for urban. The registered marital status was redefined: 0 is for unmarried, and 1 is for married. For nationality, 0 is for others, and 1 is for Han nationality. Family age, the child's age, and family size are the continuous variables. For family health, 1 denotes unhealthy, 2 relatively unhealthy, 3 average, 4 relatively healthy, and 5 very healthy. We used the question "How many times a week do you eat with your family?" to measure parenthood, which is a continuous variable. In addition, we consider parents' cognitive ability as proxy variable of genes. According to the CFPS in 2018 for the children's questionnaire, the respondents may be father or mother. Following Li and Zhang (2018), we select two dimensions of father's or mother's word ability and mathematical ability to construct parents' cognitive ability indicators. To compare, we standardized the scores of word ability and mathematical ability, and added up to obtain a comprehensive cognitive ability, which is recorded as family cognitive ability. Table 1 shows the summary statistics of variables. By deleting invalid values, 2647 final valid samples were included. As shown in Table 1, for children's characteristics, approximately 54% of children were boys, 46% were girls, 43% lived in urban areas, 57% lived rurally, and the children's age ranged from 6 to 16. For family characteristics, approximately 35% were male, 65 were female, 96% had a spouse, the family age ranged from 18 to 78, and the average family size was 5. For family economic capital, the mean values of family income, children's health investment, and education investment are 10.74, 4.30, and 7.27, respectively. Education investment is significantly greater than health investment. For family culture capital, approximately 89% of families adopted a mild parenting approach, the frequency of families talking with their children is 3.26, the average educational level of the family is primary school, and the family education expectation is undergraduate. The average value of family lifestyle is 1.87, indicating that families account for at least two of smoking, drinking, exercise, and lunch break. The average number collected books in the family is 2.51. 
Institutionalized and materialized cultural capital are not high, but the level of morphological cultural capital is relatively high, indicating that families pay more attention to education. For family social capital, family non-agricultural employment is significantly greater than agricultural employment or unemployment; the average values of family information utilization and human expenditure are 1.79 and 7.372, respectively; the average social communication score is 6.83; overall, family social capital is moderate to good. For family social security, every family has at least one kind of medical insurance and endowment insurance, and at least half of the respondents have received government subsidies. For living conditions, the shares of households with tap water, clean fuel, and air purification are 73%, 70%, and 3%, respectively; tap water and clean fuel are widespread, while air purifiers are rare. In addition, children's Chinese and math cognitive abilities were both moderate, with the average cognitive ability in math higher than that in Chinese. For family cognitive ability, the average Chinese and math cognitive ability scores are 18.33 and 8.74, respectively, and the overall level of family cognitive ability is not high. We included the standardized and aggregated comprehensive family cognitive ability in Table 1, with a maximum of 4.70 and a minimum of -3.53. --- Basic Model This study included 29 characteristics as covariates. To investigate the effect of those factors on children's Chinese cognitive ability and math cognitive ability, respectively, we established the following model:

$$E_{ni} = \beta_0 + \sum_{k=1}^{3}\beta_{k1}C_{ki} + \sum_{j=1}^{6}\beta_{j2}F_{ji} + \sum_{l=1}^{20}\beta_{l3}S_{li} + \varepsilon_i \qquad (1)$$

where $E_{ni}$ is the $n$-th cognitive ability of child $i$ ($n = 1, 2$, where 1 is for Chinese and 2 for math); $C_{ki}$ is the $k$-th child characteristic of child $i$ ($k = 1, 2, 3$); $F_{ji}$ is the $j$-th item of family information for child $i$ ($j = 1, 2, \ldots, 6$); $S_{li}$ is the $l$-th item of family capital and family cognitive ability for child $i$ ($l = 1, 2, \ldots, 20$); $\beta_{k1}$, $\beta_{j2}$, and $\beta_{l3}$ are the corresponding parameters; and $\varepsilon_i$ is the regression error term. Using this model, we first obtained results by ordinary least squares (OLS) regression. However, because of reverse causality and confounding factors, we had to find a proxy variable for genes and instrumental variables to address endogeneity, and to verify them against the relevant assumptions. Thus, we used two-stage least squares (2SLS) as the main empirical approach and compared it with ordinary least squares (OLS). As a robustness check, we repeated the analysis after adding variables and controlling the sample size. In addition, heterogeneity by gender and urban location was examined based on two-stage least squares (2SLS). Concerning the genes and environment shared between parents and children, we note the following. On the one hand, the social environments experienced by children and by their parents differ. Specifically, the children studied in this paper were born in the 21st century, so they did not experience major social changes and disasters, whereas their parents experienced great social changes, for example the Cultural Revolution, educational reform, and natural disasters. On the other hand, inequality in family resources leads to inequality in children's cognitive ability and early skills, which depend partly on genetics (Plomin and Stumm 2018; Silventoinen et al. 2020).
Thus, these two factors usually produce an interesting phenomenon: the higher the importance of one, the smaller that of the other. However, as reported by Houmark et al. (2020), the relative importance of genes depends on how parental investment, whether from parents or from society, is distributed among children. As also reported by Ronda et al. (2020), the worse the childhood environment, including family resources, the weaker the role of genes. In addition, it has been shown that cognitive ability can be developed through acquired cultivation (Hu and Xie 2011; Kuang et al. 2019; Zhou et al. 2021); however, the cognitive ability in this paper refers to children's word understanding ability and mathematical reasoning ability, measured by the scores of Chinese and math tests, respectively, and not by IQ test scores, although IQ test scores largely depend on genes. Furthermore, as observed from the samples in the CFPS data, the Chinese and math cognitive abilities of children with the same family ID were inconsistent. In particular, since the 2018 China Family Panel Studies data applied in this work do not provide genetic information, we take parents' cognitive ability as the proxy variable for genes in the regression analysis. In this study, the proxy variable meets the following two conditions: (1) after introducing the proxy variable (parental cognitive ability), there is no correlation between family capital and genes; indeed, following Zheng et al. (2018), family capital is an acquired environmental factor; (2) once genes are observed, parents' cognitive ability no longer mainly explains children's cognitive ability. Specifically, parental cognitive ability is highly correlated with their genes, and parental cognitive ability is not collinear with other explanatory variables. As checked, parental cognitive ability is not related to the random error, and family cognitive ability can be used as a proxy variable to reflect genetic differences. Following Cui and Susan (2022), instrumental variables and two-stage least squares regression are applied. In particular, when the exposed group and the non-exposed group are not comparable, some background variables need to be used to stratify the total group so that the exposed sub-group and the non-exposed sub-group are comparable. Instrumental variable analysis can control such bias in observational studies (Geng 2004; Brookhart et al. 2006). The instrumental variables and two-stage least squares analysis in this paper are presented in Section 4.2. --- Results --- Results from OLS Using the survey data of the CFPS in 2018, we successively incorporated family cognitive ability and family capital into the regression and applied the ordinary least squares (OLS) method to investigate the influence of family economic capital, culture capital, social capital, social security, living conditions, and family cognitive ability on children's Chinese and math cognitive ability. After excluding the influence of collinearity, the results are shown in the second to fifth columns of Table 2. As shown in the second and third columns of Table 2, the effect of family cognitive ability on children's cognitive ability was significant (0.054, p < 0.01), i.e., shared genes partly determine children's cognitive ability. As shown in the fourth and fifth columns of Table 2, the effect of family cognitive ability is no longer significant, i.e., the role of genes is weakened by family capital. This has also been confirmed by Ronda et al. (2020).
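The OLS estimates reported in this section come from fitting Equation (1). A minimal synthetic sketch of such a fit with statsmodels is shown below; the column names are invented, the data are simulated, and only a subset of the 29 covariates is included:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500

# Simulated stand-in for the recoded CFPS extract (a subset of the covariates in Equation (1)).
df = pd.DataFrame({
    "child_age":   rng.integers(6, 17, n),
    "child_male":  rng.integers(0, 2, n),
    "ln_income":   rng.normal(10.7, 1.0, n),
    "parent_edu":  rng.integers(0, 10, n),
    "edu_expect":  rng.integers(3, 10, n),
    "ln_books":    rng.normal(2.5, 1.0, n),
    "social_comm": rng.normal(6.8, 1.5, n),
    "family_cog":  rng.normal(0.0, 1.0, n),
})
df["chinese_score"] = (2.0 + 0.1 * df["parent_edu"] + 0.1 * df["edu_expect"]
                       + 0.05 * df["family_cog"] + rng.normal(0, 0.8, n)).clip(1, 4)

# OLS fit of a truncated version of Equation (1) for the Chinese score.
ols_chinese = smf.ols(
    "chinese_score ~ child_age + child_male + ln_income + parent_edu"
    " + edu_expect + ln_books + social_comm + family_cog",
    data=df).fit()
print(ols_chinese.params.round(3))
```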
Besides, children's age (-0.053, p < 0.01) and gender (-0.287, p < 0.01) have a significant influence on Chinese cognitive ability, while only children's age (-0.096, p < 0.01) has a significant influence on math cognitive ability. The influence of children's age and gender on the two cognitive abilities is negative in both cases, while family age (0.005, p < 0.05; 0.007, p < 0.01) has a positive effect on children's cognitive ability for Chinese and math. For family culture capital, family education (0.081, p < 0.01; 0.085, p < 0.01), education expectation (0.122, p < 0.01; 0.163, p < 0.01), and family books (0.020, p < 0.1; 0.019, p < 0.1) have a positive impact on the two cognitive abilities. Among them, education expectation has the greatest impact, followed by family education and family books; the influence of education expectation and family education on math cognitive ability is greater than that on Chinese, while the influence of family books is the opposite. The more frequently families participate in education (0.082, p < 0.01; 0.058, p < 0.01), the better their children's cognitive abilities, and the impact on Chinese cognitive ability is greater than the impact on math. For family social capital, the impact of social communication on both children's Chinese (0.045, p < 0.01) and math (0.038, p < 0.01) cognitive abilities is positive. For living conditions, only tap water (0.089, p < 0.05) exhibited a positive impact on children's Chinese cognitive ability. In general, cultural capital has the greatest impact, followed by living conditions and social capital. However, the influence of family economic capital is not significant. The above results are based on ordinary least squares (OLS). --- Endogeneity Test In Equation (1), to avoid endogeneity caused by omitted variables, we included the children's characteristics and family information, comprising age, gender, nationality, residence, marriage, and family size. These variables have been shown to affect children's cognitive ability in previous studies. In this model, the main endogeneity problems may be caused by confounding factors and mutual causality. For example, children of high cognitive ability may have better genes than those of low cognitive ability; even if children of high cognitive ability do not receive acquired training, they are still more likely to attain high cognitive ability because of their genes. However, as summarized by Miettinen and Cook (1981), confounding factors are independent risk factors, and the distribution of confounding factors differs between the exposed and non-exposed populations. Therefore, we take family cognitive ability as a proxy variable for genes. Family books and family medical insurance passed the test for endogenous variables, while family cognitive ability did not. Possible causes are confounding factors or mutual causality. Regarding mutual causality, family books and family medical insurance may affect children's cognitive ability; conversely, children of higher cognitive ability may have more books bought for them by their parents to support and encourage them, and the medical insurance decision may also change (Zhang and Li 2021). Therefore, we addressed these problems by selecting appropriate instrumental variables. Specifically, we adopted instrumental variables (IVs) and two-stage least squares (2SLS).
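One common way to run the kind of endogeneity check mentioned above is a control-function (Wu-Hausman style) test: the suspect regressor is regressed on its instrument plus exogenous controls, and the first-stage residual is added to the outcome regression; a significant residual term points to endogeneity. The sketch below uses invented names and simulated data purely for illustration; the study's actual test may differ in detail:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 800

# Simulated data in which `ln_books` is endogenous because of an unobserved factor `u`.
u = rng.normal(0, 1, n)
df = pd.DataFrame({"bookiv": rng.normal(2.5, 1.0, n), "parent_edu": rng.integers(0, 10, n)})
df["ln_books"] = 0.6 * df["bookiv"] + 0.5 * u + rng.normal(0, 1, n)
df["chinese_score"] = (2.0 + 0.2 * df["ln_books"] + 0.1 * df["parent_edu"]
                       + 0.6 * u + rng.normal(0, 0.5, n))

# Control-function check: first-stage residual added to the outcome equation.
first_stage = smf.ols("ln_books ~ bookiv + parent_edu", data=df).fit()
df["v_books"] = first_stage.resid
augmented = smf.ols("chinese_score ~ ln_books + v_books + parent_edu", data=df).fit()
print("t(v_books) =", round(float(augmented.tvalues["v_books"]), 2),
      " p =", round(float(augmented.pvalues["v_books"]), 4))
```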
We used the lag variable Bookiv as the instrumental variable for family books and the average participation rate of medical insurance (Mediv) in 28 provinces as the instrumental variable for medical insurance. Our instrumental variables satisfy the assumptions of IVs (Angrist et al. 1996). Specifically, Bookiv is highly correlated with family books, and its impact on children's cognitive ability is realized through family books rather than directly. Mediv is highly correlated with family medical insurance, and the average participation rate does not have a direct impact on children's cognitive ability. No other confounding factors exist between the instrumental variables and children's cognitive ability. As in the previous literature, the factors that affect children's cognitive ability were included in the regression to avoid the influence of confounding factors. To ensure that the IV estimation was reliable, we used the weak instrumental variable test, and, as the results show, family books and medical insurance are endogenous variables. Furthermore, the Cragg-Donald Wald F statistic is 30.984, which is clearly greater than 10. As shown in the sixth and seventh columns of Table 2, children's age (-0.055, p < 0.01) and gender (-0.284, p < 0.01) have a significant influence on their Chinese cognitive ability. The influence of children's age and gender on the two cognitive abilities is negative, while the influence of family age (0.006, p < 0.05; 0.008, p < 0.01) is positive. For family culture capital, family education (0.087, p < 0.01; 0.090, p < 0.01), education expectation (0.116, p < 0.01; 0.158, p < 0.01), and books (0.101, p < 0.05; 0.089, p < 0.1) have a positive impact on the two cognitive abilities. Similarly, education expectation has the greatest impact, followed by family education and books; the influence of education expectation and family education on math cognitive ability is greater than that on Chinese, while the influence of family books is the opposite. The more frequently families participate in education (0.078, p < 0.01; 0.055, p < 0.01), the better their children's cognitive abilities, and the impact on Chinese cognitive ability is greater than on math. For family social capital, the impact of social communication on both children's Chinese (0.048, p < 0.01) and math (0.039, p < 0.01) cognitive abilities is positive. In addition, for family social security, medical insurance (-1.427, p < 0.01; -1.273, p < 0.01) has a negative impact on both Chinese and math cognitive abilities, while endowment insurance (0.229, p < 0.01; 0.183, p < 0.05) has a positive impact on both. Tap water (0.091, p < 0.1) has a positive impact on children's Chinese cognitive ability. After introducing the instrumental variables, the impact of family books and medical insurance on children's cognitive ability increased. The above results are based on two-stage least squares (2SLS). --- Robustness Checks To verify the reliability of the estimated results, we carried out robustness checks using three methods. Specifically, we controlled the sample size and the number of explanatory variables and took family health and family relationship into account. Family health refers to the self-evaluation of family health: 1 for unhealthy and 5 for healthy. Family relationship is a continuous variable measured by the number of meals with family members.
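A minimal sketch of the 2SLS idea itself, reusing the same kind of simulated setup (invented names; for brevity only one endogenous regressor with one instrument is shown, whereas the study instruments both family books and medical insurance). The first-stage F statistic illustrates the weak-instrument check mentioned above:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 800

u = rng.normal(0, 1, n)  # unobserved confounder that makes plain OLS biased
df = pd.DataFrame({"bookiv": rng.normal(2.5, 1.0, n), "parent_edu": rng.integers(0, 10, n)})
df["ln_books"] = 0.6 * df["bookiv"] + 0.5 * u + rng.normal(0, 1, n)
df["chinese_score"] = (2.0 + 0.2 * df["ln_books"] + 0.1 * df["parent_edu"]
                       + 0.6 * u + rng.normal(0, 0.5, n))

# Stage 1: regress the endogenous variable on the instrument plus exogenous controls.
fs = smf.ols("ln_books ~ bookiv + parent_edu", data=df).fit()
df["books_hat"] = fs.fittedvalues

# First-stage strength: F statistic for the single excluded instrument (rule of thumb: F > 10).
fs_restricted = smf.ols("ln_books ~ parent_edu", data=df).fit()
F = (fs_restricted.ssr - fs.ssr) / 1 / (fs.ssr / fs.df_resid)
print("first-stage F =", round(float(F), 1))

# Stage 2: replace the endogenous regressor with its fitted value.
# Note: standard errors from this manual second stage are not valid as-is;
# dedicated IV routines (e.g. IV2SLS in the linearmodels package) correct them.
second = smf.ols("chinese_score ~ books_hat + parent_edu", data=df).fit()
print(second.params[["books_hat", "parent_edu"]].round(3))
```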
As shown in the second and third columns of Table A1 in Appendix A, children's age (-0.055, p < 0.01; -0.098, p < 0.01), children's gender (-0.284, p < 0.01, for Chinese), family age (0.007, p < 0.05; 0.009, p < 0.01), family education (0.086, p < 0.01; 0.089, p < 0.01), education expectation (0.115, p < 0.01; 0.157, p < 0.01), books (0.105, p < 0.05; 0.093, p < 0.05), education participation (0.076, p < 0.01; 0.054, p < 0.01), social communication (0.044, p < 0.01; 0.035, p < 0.01), medical insurance (-1.450, p < 0.01; -1.287, p < 0.01), endowment insurance (0.236, p < 0.01; 0.188, p < 0.05), and tap water (0.092, p < 0.05, for Chinese) still have a significant influence on children's cognitive ability. Family health (0.038, p < 0.05; 0.041, p < 0.05) has a positive impact on the two cognitive abilities. Similarly, as shown in the fourth, fifth, sixth, and seventh columns of Table A1 in Appendix A, the significance remains unchanged. Therefore, the results based on 2SLS are robust. --- Heterogeneity Analysis Heterogeneity was checked to determine the influence of family factors on children's Chinese and math cognitive abilities. --- Heterogeneity in Gender As shown in Table A2 in Appendix A, for family culture capital, the influence of family education (0.100, p < 0.01; 0.102, p < 0.01, for girls) and education participation (0.133, p < 0.01; 0.104, p < 0.01, for girls) on girls' cognitive ability is greater than that on boys'. The influence of family education expectation on girls' (0.157, p < 0.01) Chinese cognitive ability is greater than that on boys' (0.093, p < 0.01), while the influence of family education expectation on boys' (0.162, p < 0.01) math cognitive ability is greater than that on girls' (0.157, p < 0.01). Family books (0.135, p < 0.1) only have a significant impact on girls' Chinese cognitive ability. For family social capital, social communication has the greatest impact on girls' cognitive ability (0.054, p < 0.01; 0.049, p < 0.05, for girls). For social security, medical insurance (-1.958, p < 0.05; -1.619, p < 0.05, for girls) and endowment insurance (0.298, p < 0.05; 0.271, p < 0.05, for girls) have the greatest impact on girls' cognitive ability. For living conditions, only tap water has a positive impact on boys' math cognitive ability (0.145, p < 0.05). In addition, the larger the family size, the greater the impairment of boys' math cognitive ability. Therefore, girls' cognitive ability is more sensitive to culture capital, social capital, and social security, while boys' cognitive ability is more sensitive to living conditions. --- Heterogeneity in Urban Location As shown in Table A3 in Appendix A, for family culture capital, the influence of family education on the cognitive ability of rural children (0.101, p < 0.01; 0.116, p < 0.01) is greater than that of urban children (0.065, p < 0.05; 0.069, p < 0.05). Family education expectation has the greatest impact on rural children's math cognitive ability (0.191, p < 0.01) and urban children's Chinese cognitive ability (0.123, p < 0.01). Family books only affect the math cognitive ability of urban children (0.108, p < 0.1).
Family education participation has the greatest impact on rural children's Chinese cognitive ability (0.092, p < 0.01) and the least impact on urban children's Chinese cognitive ability (0.054, p < 0.1). For social communication, the impact on the cognitive ability of rural children (0.057, p < 0.01; 0.039, p < 0.05) is greater than that of urban children (0.041, p < 0.05; 0.035, p < 0.1). Medical insurance (-1.468, p < 0.01; -1.087, p < 0.05) and endowment insurance (0.243, p < 0.05; 0.193, p < 0.1) have a significant impact on the cognitive ability of urban children but not on that of rural children. For living conditions, only tap water (0.149, p < 0.1) was significant for urban children's Chinese cognitive ability. Therefore, rural children's cognitive ability is more sensitive to culture capital and social capital, while urban children's cognitive ability is more sensitive to social security and living conditions. --- Conclusions This study used data from the 2018 China Family Panel Studies to analyze the impact of numerous factors on children's Chinese and math cognitive ability. Firstly, children's and family characteristics have a significant impact on children's Chinese and math cognitive ability. Among them, children's age, gender, and family size have a negative effect on children's cognitive ability, while family age has a positive impact. For family culture capital, education, education expectation, books, and education participation all have a positive impact on children's cognitive ability. For family social capital, the more family social communication, the higher children's cognitive ability. For family living conditions, the use of tap water is conducive to the improvement of children's cognitive ability. What is more, the influence of family cognitive ability on children's cognitive ability is attenuated by family capital, which means that the impact of genes is weakened. The above results are based on ordinary least squares (OLS). After introducing the instrumental variables Bookiv and Mediv to address endogeneity, the results changed in two respects. On the one hand, the influence of family books on children's cognitive ability increased significantly. On the other hand, the impact of medical insurance and endowment insurance on children's cognitive ability became significant: medical insurance was negative, and endowment insurance was positive. In addition, the two-stage least squares (2SLS) results are robust to controlling the sample size and adding variables. Moreover, the influence of these factors on children's Chinese and math cognitive ability is heterogeneous across gender and urban location. With regard to gender, girls' cognitive ability is more sensitive to culture capital, social capital, and social security, while boys' cognitive ability is more sensitive to living conditions. Specifically, family education, education expectation, books, education participation, social communication, and medical and endowment insurance have a greater impact on girls' cognitive abilities, while tap water is significant for boys' math cognitive ability. With regard to urban location, rural children's cognitive ability is more sensitive to culture capital and social capital, while urban children's cognitive ability is more sensitive to social security and living conditions.
Specifically, family education, education expectation, education participation, and social communication have a greater impact on rural children's cognitive ability, while family books, medical insurance, endowment insurance, and tap water matter more for urban children's cognitive ability. Some open problems remain. Because the initial sample was imbalanced, the proportions of agricultural-residence and non-agricultural-residence samples remained slightly unbalanced after data processing, and the heterogeneity across urban location may therefore introduce a slight bias into the full-sample model. The error terms of the model may not be independently and identically distributed. In addition, there may be further heterogeneity in the influence of these factors on children's Chinese and math cognitive ability, and a full mediation analysis would be worthwhile in future work. In this study, we take family cognitive ability as a proxy variable for genes, but the empirical results are worth re-examining with data that directly include both genetic and environmental measures. These findings provide theoretical support for further narrowing the cognitive differences between children.
The aim of this study was to analyze the influence of economic capital, culture capital, social capital, social security, and living conditions on children's cognitive ability. However, most previous studies focus only on the impact of family socio-economic status or culture capital on children's cognitive ability, using ordinary least squares regression analysis. To this end, we used data from the 2018 China Family Panel Studies and applied proxy variables, instrumental variables, and two-stage least squares regression analysis to a sample of 2647 children aged 6 to 16. The results showed that family education, education expectation, books, education participation, social communication, and tap water had a positive impact on both the Chinese and math cognitive ability of children, while children's age, gender, and family size had a negative impact on cognitive ability, and the impact of genes was attenuated by family capital. In addition, these results are robust, and heterogeneity was found by gender and urban location. Specifically, in terms of gender, girls' cognitive ability is more sensitive to culture capital, social capital, and social security, while boys' cognitive ability is more sensitive to living conditions. In terms of urban location, rural children's cognitive ability is more sensitive to culture capital and social capital, while urban children's cognitive ability is more sensitive to social security and living conditions. These findings provide theoretical support for narrowing the cognitive differences between children from many angles, and in particular for giving greater weight to social security and living conditions.
--- Data Availability Statement: Data used in this paper can be obtained from the China Family Panel Studies, http://www.isss.pku.edu.cn/cfps/ (accessed on 13 March 2022). --- Author Contributions: Conceptualization, X.D.; methodology, X.D.; analysis, X.D. and W.L.; investigation, X.D. and W.L.; data curation, W.L.; writing-original draft preparation, W.L.; writing-review and editing, X.D.; supervision, X.D.; project administration, X.D.; funding acquisition, X.D. All authors have read and agreed to the published version of the manuscript. --- Conflicts of Interest: The authors declare no conflict of interest. --- Appendix A
INTRODUCTION Falsehood flies, and truth comes limping after it. Jonathan Swift, The Examiner No. 14, Thursday, 9th November 1710. Politicians have always been 'economical with the truth' and newspapers have toed an editorial line. However, never in recent times has confidence in our media seemed lower. From the Brexit battle bus in the UK to suspected Russian meddling in US elections, from fake news to alternative facts, it seems impossible for the general public to make sense of the contradictory arguments and suspect evidence presented both in social media and in traditional channels. Even seasoned journalists and editors seem unable to keep up with the pace and complexity of news. These problems were highlighted during Covid, when understanding of complex epidemiological data was essential for effective government policy and individual responses. As well as the difficulty of media (and often government) in understanding and communicating the complexity of the situation, various forms of misinformation caused confusion. There are obvious health impacts of this misinformation due to the taking of dangerous 'cures' (Nelson, 2020) and vaccination hesitancy (Lee, 2022a), as well as its role in encouraging violence against health workers (Mahase, 2022). In addition, a meta-review of many studies of Covid misinformation identified mental health impacts as also significant (Rocha, 2021). If democracy is to survive and nations are to coordinate to address global crises, we desperately need tools and methods to help ordinary people make sense of the extraordinary events around them: to sift fact from surmise, lies from mistakes, and reason from rhetoric. Similarly, journalists need the means to help them keep track of the surfeit of data and information so that the stories they tell us are rooted in solid evidence. Crucially, in increasingly politically fragmented societies, we need to help citizens explore their conflicts and disagreements, not so that they will necessarily agree, but so that they can more clearly understand their differences. These are not easy problems and do not admit trite solutions. However, there is existing work that offers hope: tracking the provenance of press images (ICP, 2016), ways to expose the arguments in political debate (Carneiro, 2019), even using betting odds to track the influence of news on electoral opinion (Wall, 2017). I hope that this paper will show that we can make a difference and will offer challenges for future research. --- THE B-MOVIE CAST OF MISINFORMATION Deliberate misinformation is perhaps the most obvious problem we face. There are extensive data science studies by academics and data journalists attempting to understand the extent and modes of spread (e.g. Albright, 2016; Vosoughi, 2018). Crucially, false information appears to spread more rapidly than true information, possibly because it is more novel (Vosoughi, 2018). Although there is considerable debate as to the sufficiency of their responses, both Facebook and Twitter are constantly adjusting algorithms and policies to attempt to prevent or discourage fake news (Dreyfuss, 2019; NPR, 2022; Twitter, 2022). Within the HCI community there has been considerable work exploring the human aspects of the spread of misinformation online (Flintham, 2018; Geeng, 2020; Varanasi, 2022), ways to visualise it (Lee, 2022b), tools to help end-users identify it (Heuer, 2022), and CHI workshops on the topic (Gamage, 2022; Piccolo, 2021).
--- Bad Actors Much of the focus on misinformation is on 'bad actors': extremist organisations, 'foreign' powers interfering in elections, or simply those aiming to make a fast buck. In the context of misinformation, 'bad' can mean two things: 1. They are intrinsically bad people, bad states, or bad media. 2. They use bad methods and/or spread bad information (including misinformation and hateful or violent content). The first of these can be relative to clear criteria such as human rights or terrorism, but may simply mean those we disagree with; and, of course, the boundary between the two may often be unclear. When the two forms of 'bad' coincide, the moral imperative is clear, even though implementation may be harder. Forced in part by government and popular pressure, social media platforms have extensive mechanisms both to attempt to suppress bad information and to suspend the accounts of those who promulgate it (Guardian, 2018). Probably the most high-profile example of the latter was Twitter's suspension of @realDonaldTrump. This was met both with widespread relief and with caution about its potential impact on free speech (Noor, 2021), especially given Twitter's arguments for why the account was suspended when it was (Twitter, 2021). Of course, sometimes bad actors may spread true (or even good) information. In some cases this is simply because few are altogether bad. For example, consider those who believe and then promulgate Covid conspiracy theories: many will be well meaning, albeit deeply misguided, and some of the information may be accurate. However, true information can also be cynically used to give credence to otherwise weak or misleading arguments; for example, a recent study of cross-platform misinformation (Micallef, 2022) found a substantial proportion of cases where a YouTube video with true information about Covid was referenced by a tweet or post that in some way misinterpreted the material or used it out of context. In addition, many astroturfing accounts will distribute accurate information as a means to create trust before disseminating misinformation. It can be hard to distinguish these cases, and it is not uncommon for politicians or other campaign groups to inadvertently retweet or quote true, or at least defensible, information that originated from very unsavoury groups, thus giving those groups credence. --- When Good Actors Spread Bad Information As we saw in the last example, those we regard as 'good' actors can also sometimes spread bad information. Sometimes this is deliberate. An extreme case is during war, when misinformation campaigns in an enemy country are regarded as a normal and indeed relatively benign form of warfare (Shaer, 2017). In peacetime, deliberate misinformation is likely to be less extreme, more often a matter of stretching or embroidering the truth, or of selective reporting. It may also be accidental. For example, Figure 1 shows a "Q&A" (a form of fact check) on the BBC news web site following a claim made by Boris Johnson in January 2018 regarding UK contributions to the EU budget. The overall thrust of the Q&A is correct: the net amount sent to the EU at that time was substantially less than the £350 million figure that Johnson claimed. However, the actual figures are wrong: the Q&A suggested that around two thirds of the gross figure was returned, when the actual proportion was closer to a half. This is probably because at some point a journalist lost track of which figure the half referred to, but the overall effect was a substantially incorrect figure (BBC, 2018).
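To make this kind of slip concrete, the small calculation below uses purely hypothetical round numbers (they are not the actual UK/EU budget figures) to show how quoting the same sum against the wrong base quietly turns "about a half" into "about two thirds".

```python
# Purely illustrative numbers -- NOT the real UK/EU budget figures.
gross = 350        # headline weekly gross contribution, as claimed
rebate = 90        # hypothetical rebate deducted before any money is sent
returned = 85      # hypothetical EU spending that flows back

sent = gross - rebate            # what is actually transferred: 260
comes_back = rebate + returned   # money that never leaves or that returns: 175

print(comes_back / gross)  # 0.50 -> "about half of the gross figure"
print(comes_back / sent)   # 0.67 -> "about two thirds", but of the post-rebate figure
# Reporting the second ratio as if it were a share of the gross figure is exactly
# the kind of base confusion that produces a substantially incorrect published number.
```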
Note that this Q&A pop-up is no longer in the news item; instead there is a link to a 'Reality Check' page which is correct, but with no explicit retraction. In between the deliberate and the accidental are the subtle biases: assumptions of journalists that play out in the selection of which stories to report and also in the language used. For example, in crime or conflict reporting, passive language may be used ("the assailant was shot", or "shells fell on") compared with active language ("AAA shot BBB" or "XXX fired shells on") depending on which side is doing the shooting or bombing. Personally, while I may despair or be angry at the misinformation from those with whom I disagree, I am most upset when I see poor arguments from those with whom I agree. This is partly pride, wanting to be able to maintain a moral high ground, and partly pragmatic: if the arguments are poor then they can be refuted. In an age of adversarial media, any mistakes, misrepresentation or hyperbole can be used to discredit otherwise well-meaning sources and promote alternatives that are either ill-informed or malicious. This was evident in the US during the 2016 presidential campaign, when many moderate Republican supporters lost faith in the reputable national press in favour of highly partisan local papers; a trend which has intensified since (Gottfried, 2021; Meek, 2021). --- SEEKING TRUTH --- The Full Cast We have already considered the 'B-movie' bad/good guy roles of the producers and influencers, both of whom can mislead whether ill-intentioned or ill-informed. In reality, even the 'bad' actors may be those with genuinely held, albeit unfounded, beliefs about 5G masts or a communist take-over of the US government. Of course, those of us who would consider ourselves 'good' actors may still distort or be selective in what we say, albeit for the best of reasons. In addition, those who receive misinformation and are confused or misled by it may differ in levels of culpability. It is easier to believe the things that make life easier, whether it is the student grasping at suggestions that the impact of Covid may be exaggerated in order to justify a party, or the professional accepting climate change scepticism to justify buying that new fuel-hungry car. Of course, the purveyors of news and information are under pressure, and may not be wholly free in what they say, or may run risks if they do. Even in the last year we have seen many journalists, bloggers and authors arrested, sanctioned, stabbed and shot. Perhaps more subtle is the interplay within the ecology of information: journalists and social media modify what and how they present information in order to match the perceived opinions and abilities of their readership. --- Two Paths The greatest effort currently appears to be focused on fighting back against bad actors. This includes algorithms to detect and counter misinformation, such as Facebook's intention to weed out anti-vaccination content. These are predominantly aimed at the bad actors. However, in addition we need to think about doing better: ways for the good actors to disseminate and understand information so that they are in a better position to evaluate sources and to ensure that they do not inadvertently create bad information.
We'll look briefly at four areas where appropriate design could help us to do better: • echo chambers and filter bubbles • better argumentation • data and provenance • numeric data and qualitative-quantitative reasoning These are not the only approaches, but I hope they will stimulate the reader to think of more. --- Echo Chambers and Breaking Filter Bubbles Social media was initially seen as a way to democratise news and information sharing and to allow those in the 'long-tail' of small interest groups to find like-minded people on the global internet. However, we now all realise that an outcome of this has been the creation of echo chambers, where we increasingly only hear views that agree with our own. In some ways this has always been the case, both in the choice of friendship groups for informal communication and in the audiences of different newspapers. However, social media and the personalisation of digital media have both intensified the effect and made it less obvious - you know that a newspaper has a particular editorial line, but do not necessarily recognise that web search results have been tuned to your existing prejudices. This is now a well-studied area, with extensive work analysing social media to detect filter bubbles and understand the patterns of communication and networks that give rise to them (Terren, 2021; Garimella, 2018; Cinelli, 2021). Notably, one of these studies (Garimella, 2018) highlighted the role of 'gatekeepers', people who consume a broad range of content but then select from it to create partisan streams. Perhaps more sadly, the same study notes that those who try to break down partisan barriers pay a "price of bipartisanship" in that balanced approaches or multiple viewpoints are not generally appreciated by their audiences. In addition, there has been work on designing systems that in different ways attempt to help people see beyond their own filter bubbles (e.g. Foth, 2016; Jeon, 2021), but on the whole this has been less successful, especially in actual deployment. Indeed, attempts to present opposing arguments can end up deepening divides if they are too different and come too soon. --- Argumentation It is easy to see the flaws in arguments with which we disagree: we know they are wrong and can thus hunt for the faults - the places where our intuitions and the argument disagree are precisely the places where we expect holes in the reasoning. Of course, we all create bad arguments. It is very hard to notice the gaps in one's own reasoning, and also hard to notice the fallacious arguments of others when one agrees with their final conclusions. Of course, those who disagree with us will notice the gaps in our arguments, thus increasing their own confidence and leading them to discount our opinions! It is crucial therefore to have tools that both help the public to interrogate the arguments of politicians and influencers, and also to help those who are aiming to create solid evidence-based work (including academics) to ensure valid arguments. There is of course long-standing work on argumentation systems, such as IBIS (Noble, 1988), and work in the NLP community to automatically analyse arguments. Much of this is targeted towards more professional audiences, but there are also steps to help the general public engage with media, such as the Deb8 system (Carneiro, 2019) developed at St Andrews, an accessible argumentation system that allows viewers of a speech or debate to collaboratively link assertions in the video to evidence from the web.
This is an area which seems to have many opportunities for research and practical systems aimed at different audiences, including the general public, journalists, politicians, academics, and fact checkers. This could include broad advice, for example ensuring that fact checkers clearly state their interpretation of a statement before checking it, to avoid inadvertently debunking a strawman misinterpretation. Similarly, we could imagine templates for arguments; for example, given an implication of the form "if A then B", it is important to keep track of the assumptions. In particular, while more formal logics and some forms of argumentation schemes focus on low-level argumentation, it seems that the tools needed should perhaps focus on the higher-level argumentation, the information and assumptions that underlie a statement, more than on the precise logic of the inference. In addition, in the AI community there are now a variety of tools to help automatically detect possible bias in data or machine learning algorithms. Maybe some of these could be borrowed to help human reasoning, for example shuffling aspects of situations (e.g. gender, political party or ethnicity) to help us assess to what extent our view is shaped by these factors. --- Data and Provenance One form of misinformation is the deliberate or accidental use of true information or accurate data divorced from its context. For the spoken word or text, this might be a quotation; for photographs or video, the choice of a still, a segment, or even parts edited together that give a misleading impression. Indeed, the potential for digital media to be compromised in different ways has led some to look to technology such as blockchains to prevent tampering, or to the use of analogue or physical representations (Haliburton, 2021). One example of work addressing this issue was the FourCorners project (ICP, 2016), a collaboration between OpenLab Newcastle, the International Centre for Photography and the World Press Photo Foundation, which embeds provenance into photographs, allowing interrogation such as "what are the frames before and after this photograph?" and "are there other photos at the same time and place?". One can imagine similar things for textual quotes, in the manner of Ted Nelson's vision of transclusion (Nelson, 1981), where segments quoted from one document retain their connection back to the original. This is an area I've worked on personally in the past with the Snip!t system, originally developed in 2003 following a study of user bookmarking practice (Dix, 2003). Snip!t allowed users to 'bookmark' portions of a web page and automatically kept track not just of the quoted text, but of where it came from (Dix, 2010). Later work in this area by others has included both commercial systems, such as Evernote, and academic research, such as Information Scraps (Bernstein, 2008). Currently there is an explosion of personal knowledge management (PKM) apps, some of which, such as Readwise (readwise.io) and Instapaper (instapaper.com), help with the process of annotating documents. However, these systems are mostly focused on retaining the context of captured notes and quotes; we desperately need better ways to retain this once the quote is embedded in another document or web page. This connection to sources is also important for data. In the example from the BBC in Figure 1, the journalist had clearly lost track of the original data on UK/EU funding and so misremembered aspects of it.
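As a toy illustration of the kind of provenance record such a tool might carry with a clipped quote or image, here is a minimal sketch; the field names and the example URL are hypothetical and are not the data model of Snip!t, FourCorners, Evernote, or any other system mentioned above.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Snippet:
    """A clipped quote or figure that keeps its link back to the source."""
    content: str                      # the quoted text or a reference to the image
    source_url: str                   # where it was clipped from
    source_title: str                 # title of the page or document
    captured_at: datetime             # when it was clipped
    context_before: str = ""          # surrounding text, to guard against out-of-context quoting
    context_after: str = ""
    derived_from: list["Snippet"] = field(default_factory=list)  # chain of earlier versions

    def attribution(self) -> str:
        return f'"{self.content}" ({self.source_title}, {self.source_url}, captured {self.captured_at:%Y-%m-%d})'

# When the snippet is pasted into a new article, the attribution (and the chain
# in `derived_from`) travels with it rather than being lost in plain text.
quote = Snippet(
    content="net contributions were substantially lower than the headline figure",
    source_url="https://example.org/budget-analysis",   # hypothetical URL
    source_title="Example budget analysis",
    captured_at=datetime(2018, 1, 18, tzinfo=timezone.utc),
)
print(quote.attribution())
```

The point of the sketch is simply that the attribution, the surrounding context, and the chain of earlier versions stay attached to the content wherever it is reused.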
Can we imagine tools for journalists that would help them keep track of the sources of data and images? Indeed, it would be transformative if everyday office tools such as word processors and presentation software made it easy to keep references to imported images. In work with humanities and heritage, we have noted how file systems have barely altered since the 1970s (Dix, 2022) - the folder structures allow us to store and roughly classify, but there is virtually no support for talking about documents and about their relationships to one another. Semantic desktop research (Sauermann, 2005), which seemed promising at the time, has never found its way into actual operating systems. Happily, there are projects, such as Data Stories (2022), that are helping communities to use data to tell their own stories, so that the online world can allow open discourse and interpretation whilst connecting to the underlying data on which it is based. Furthermore, one of the popular PKM apps, Obsidian (obsidian.md), supports semi-structured metadata for every note. --- Numeric Data and Qualitative-Quantitative Reasoning Going back to the example in Figure 1, part of the problem here may well simply be that journalists are often more adept with words than with numbers. We are in a world where data and numerical arguments are critical. This was true of Covid, where the understanding of exponential growth and probabilistic behaviour was crucial, but it is equally so for issues such as climate change. One of the arguments put forward by climate change sceptics is that it is hard to believe in long-term climate models given that forecasters sometimes struggle to predict whether it is going to rain next week. This, at first sight, is not an unreasonable argument, although anyone who has dealt with stochastic phenomena knows that it is often easier to predict long-term trends than short-term behaviour. Indeed, it is also relatively easy to communicate this - we can all say with a degree of reliability that a British winter will be wetter and colder than the summer, even though we'll struggle to know the weather from day to day. This form of argument is not about exact numerical calculation, nor about abstract mathematics, but something else: informal reasoning about numerical phenomena. Elsewhere I've called this qualitative-quantitative reasoning (Dix, 2021a, 2021b), and it seems to be a critical, but largely missing, aspect of universal education. Again, this is an area that is open to radical contributions; for example, iVolver (Nacenta, 2017) allows users to extract numerical and other data from visualisations, such as pie charts, in published media. My own work has included producing table recognisers in the commercial intelligent internet system OnCue in the dot-com years (Dix, 2000) and, more recently, investigating ways to leverage some of the accessibility of spreadsheet-like interfaces and simple ways to allow users to combine their own data (Dix, 2016). --- CALL TO ACTION We are at a crucial time in a world where information is everywhere and yet we can struggle to see the truth amongst the poorly sourced, weakly argued, deliberately manipulated or simply irrelevant. However, there are clear signs of hope in work that is being done and also opportunities for research that can make a real difference. Of course, as academics we are also in the midst of a flood of scholarly publication, some more scholarly than others!
There are calls for us to 'clean up our own act' too, including the rigour of academic argumentation (Basbøll, 2018) and the transparency of data and materials (Wacharamanotham, 2020). As well as being a problem we need to deal with within academia, this is also an opportunity to use our own academic community as a testbed for tools and techniques that could be used more widely.
Many of the issues in the modern world are complex and multifaceted: migration, banking, not to mention climate change and Covid. Furthermore, social media, which at first seemed to offer more reliable 'on the ground' citizen journalism, has instead become a seedbed of disinformation. Trust in media has plummeted, just when it has become essential. This is a problem, but also an opportunity for research in HCI that can make a real difference in the world. The majority of work in this area, from various disciplines including data science, AI and HCI, is focused on combatting misinformation - fighting back against bad actors. However, we should also think about doing better - helping good actors to curate, disseminate and comprehend information better. There is exciting work in this area, but much still to do.
INTRODUCTION Gender, as a fundamental social construct, influences every facet of human life, shaping identities, roles, and interactions within societies (Ram et al., 2023; Pulerwitz et al., 2010). The intricate interplay between societal norms, power structures, and cultural influences has engendered a diverse landscape of gender realities that are often veiled beneath the surface (Nandigama, 2020; Srivastava, 2020). In light of the growing recognition of gender's pivotal role in shaping individual experiences and social structures, this article endeavors to embark on an in-depth analysis of gender dynamics. By delving into the complexities of how gender operates within various contexts, this study aims to uncover the multifaceted dimensions of gender realities and shed light on the underlying mechanisms that influence behaviors, perceptions, and opportunities (Nalini, 2006; Verma et al., 2006). As societies progress and evolve, gender-related discussions have gained momentum, bringing to the forefront issues such as gender equality, representation, and violence. Yet, a comprehensive understanding of the nuanced interplay between gender constructs, power dynamics, institutional norms, and cultural influences remains a critical endeavor (Iyer et al., 2007; Apriyanto, 2018). This article stands as a response to the pressing need for an expansive exploration of gender dynamics, one that transcends conventional narratives and delves into the less visible realms of gender interactions. By employing a multidimensional approach that acknowledges the intersectionality of gender with other dimensions of identity, including race, class, and sexuality, this analysis aims to illuminate the complexity of gender realities that often defy simplistic categorizations (Narwana & Rathee, 2017; Prusty & Kumar, 2014). This study recognizes that the exploration of gender dynamics extends beyond academic inquiry; it has implications for policy, advocacy, and social change. To craft effective strategies that address gender-based inequalities and discrimination, it is imperative to unravel the intricate tapestry of gender's influence on personal lives and societal structures. By unveiling the hidden intricacies of gender dynamics, this research seeks to contribute to a deeper comprehension of the challenges faced by individuals across the gender spectrum, ultimately fostering more informed conversations, evidence-based policies, and a more equitable world (Ahoo & Sagarika, 2020; Scott et al., 2017). The concept of gender, far from being confined to binary categorizations, exists along a spectrum, encompassing diverse identities, expressions, and experiences. The conventional understanding of gender as a simple dichotomy has been challenged by evolving societal awareness, acknowledging the need for a more inclusive and nuanced perspective. This study recognizes that gender dynamics are not static but fluid and contextual, influenced by historical legacies, cultural contexts, and socio-economic structures (Gordin & True, 2019; Gupta et al., 2017). While progress has been made in addressing gender disparities, inequalities persist. These inequities are rooted in deeply ingrained gender norms and power imbalances that permeate institutions, policies, and everyday interactions. This investigation seeks to unravel the intricate threads that form the fabric of gender dynamics, unveiling the often hidden and subtle mechanisms that perpetuate these disparities.
By examining the complexities of gender dynamics, we aim to contribute to a more profound understanding of the lived experiences of individuals and communities, facilitating a more empathetic and informed approach to fostering gender equality (Park & Maffi, 2019;Goodrich, 2020). As we delve into this exploration, it becomes evident that gender is not isolated; it intersects with other aspects of identity and inequality. Marginalized groups often face compounded discrimination due to the intersections of race, class, and gender, making the study of gender dynamics an essential step towards dismantling systemic oppression. Through this investigation, we strive to bridge the gap between academic discourse and real-world impact, offering insights that can inform policy decisions, advocacy initiatives, and transformative social change. In the subsequent sections of this article, we present a comprehensive framework for analyzing gender dynamics, drawing on interdisciplinary perspectives and methodologies (Van, 2010; Mutenje et al., 2016). Our analysis is grounded in the understanding that unraveling gender realities requires a multifaceted approach, acknowledging both the visible manifestations and the underlying structures that perpetuate gender norms and disparities. By exploring these dimensions, we aim to contribute to a more holistic understanding of the complexities of gender dynamics and advance the discourse surrounding gender equality and social justice. Gender dynamics, as a fundamental aspect of social structures, play a profound role in shaping individuals' lives and societal norms. In the diverse and intricate landscape of India, where traditions, cultures, and socioeconomic contexts intermingle, understanding the multifaceted nature of gender realities becomes particularly crucial. This article embarks on an in-depth analysis of gender dynamics within the Indian context, aiming to illuminate the complexities, challenges, and opportunities that define gender relations and identities in this diverse nation. India, known for its rich cultural tapestry, is also a country grappling with deeply rooted gender inequalities. The complexities of gender dynamics extend beyond mere biological distinctions, encompassing cultural norms, historical legacies, and contemporary shifts. This study seeks to unveil the nuanced interplay between these factors, delving into the myriad ways in which gender influences roles, expectations, and power dynamics within Indian society. In a country characterized by its diverse ethnicities, languages, and traditions, gender dynamics are influenced by regional variations and historical contexts. While there have been advancements towards gender equality, persistent challenges such as gender-based violence, unequal access to education, and limited political representation continue to shape the gender landscape in India. As such, this study aims not only to reveal the existing gender realities but also to provide a comprehensive understanding of the factors that perpetuate or challenge gender disparities (Tyaqi & Das, 2018). Recognizing the intersectionality of gender with other dimensions of identity, such as caste, class, and religion, is crucial. These intersections further complicate the dynamics of gender relations, often leading to compounded discrimination and marginalized experiences. By adopting an intersectional lens, this analysis seeks to untangle the intricate threads that weave together the tapestry of gender experiences in India. 
By uncovering the layers of gender dynamics, this research contributes to a more informed dialogue and evidence-based policymaking. As India continues to strive for progress and equality, an exploration of gender realities is essential to address deeply ingrained inequalities and foster a more inclusive and equitable society. Through this examination, we aim to deepen our comprehension of gender dynamics within the Indian context, providing insights that inform both academic discourse and practical interventions. --- B. METHOD This study adopts a qualitative research design to conduct an in-depth analysis of gender dynamics within India. Qualitative research is chosen for its capacity to explore the intricate aspects of gender realities. Semi-structured in-depth interviews and focus group discussions are conducted with a diverse sample to capture a range of perspectives. Thematic analysis is employed to identify recurring patterns and themes in the data, facilitated by NVivo software. Ethical guidelines are followed, obtaining informed consent and ensuring confidentiality. Researchers' reflexivity is acknowledged, and limitations include contextual and interpretation biases. This qualitative approach aims to unravel the complexities of gender dynamics, providing comprehensive insights into cultural norms, power relations, and individual experiences shaping gender realities in India. --- C. RESULTS AND DISCUSSION The process of thematic analysis delved into the rich tapestry of narratives from diverse participants, uncovering a spectrum of insights that collectively illuminate the multifaceted and often paradoxical gender dynamics deeply embedded in India's societal fabric. As the data was meticulously examined, five overarching themes emerged, each offering a window into the complex interplay of cultural norms, power dynamics, and individual experiences that intricately shape gender realities within the nation. 1. Cultural Perceptions and Gender Norms: The participants' voices resonated with a consistent theme underscoring the profound influence of cultural norms on gender identities and roles. These deeply ingrained norms perpetuate distinct expectations for men and women, often confining them within predetermined roles that restrict opportunities and reinforce unequal power dynamics. Narratives unveiled the tug-of-war between tradition and progress, as individuals grapple with the juxtaposition of longstanding norms against modern aspirations. 2. Intersectionality of Identity: The vivid mosaic of gender dynamics is further nuanced by the intersection of gender with other dimensions of identity. Participants emphasized how factors like caste, class, and religion intersect with gender, shaping unique experiences and magnifying inequalities. The narratives poignantly revealed that these intersections, while often ignored, have profound implications, leading to layered discrimination and impacting access to resources and opportunities. 3. Evolving Masculinities and Femininities: The research unfurled the evolving perceptions of masculinity and femininity, signaling a shifting socio-cultural landscape. Traditional definitions of gender roles are gradually making way for more fluid expressions of identity. However, this evolution is not without resistance, as traditional notions of gender are deeply entrenched. The narratives provided insight into the tension that arises when these progressive shifts challenge deeply rooted conventions. 4.
Educational Empowerment: Amid the complexity, education emerged as a beacon of change and empowerment. Narratives highlighted how education offers a platform to challenge gender norms, empowering women and marginalized groups to pursue opportunities beyond traditional boundaries. Yet, a stark dichotomy emerged: while education is seen as a powerful tool for change, disparities in educational access persist, particularly in rural areas, where the transformative potential of education remains unrealized. 5. Gender-based Violence and Discrimination: The themes of gender-based violence and discrimination reverberated throughout the narratives, emphasizing the pervasive nature of abuse. Participants shared heart-wrenching stories of harassment, unequal treatment, and systemic barriers that reinforce gender inequalities. The narratives laid bare the urgency of addressing the systemic and cultural factors perpetuating gender-based violence and discrimination. Beyond these focal themes, cross-cutting insights intertwined with the broader analysis. The impact of media in shaping and challenging gender stereotypes emerged as a double-edged sword. While media can promote progressive ideals, it can also perpetuate harmful norms. Additionally, the discussions on policies and legal frameworks highlighted a complex landscape of opinions on their effectiveness, underlining the need for comprehensive strategies that encompass both systemic reform and cultural change. The rich tapestry of themes and cross-cutting insights collectively underscores the intricacies of gender dynamics within the Indian context. The intersectionality of identity adds layers of complexity, magnifying the challenges faced by marginalized communities. The evolving definitions of masculinity and femininity reflect a society in transition, where progress coexists with resistance. Cultural norms were revealed as both influential and constraining, emphasizing the need for cultural change alongside policy reform. Education's transformative potential and the pressing concerns of gender-based violence and discrimination together represent a call to action. This comprehensive exploration challenges policymakers, advocates, and society at large to address deeply ingrained inequalities and work towards a more inclusive and equitable future. --- Social Interaction and Community Engagement In the rapidly urbanizing landscape of India, where concrete jungles often dominate the horizon, the significance of green spaces transcends mere aesthetics. Beyond providing environmental benefits, green spaces serve as crucial platforms for fostering social cohesion, nurturing a sense of community, and cultivating a shared sense of belonging. This segment of the study delves into the intricate ways in which green spaces facilitate social interactions, encourage community bonding, and engender a deep-rooted sense of belonging among individuals across diverse walks of life. Green spaces, ranging from parks and gardens to community squares, stand as communal havens that draw people from various backgrounds. They act as natural magnets, offering a neutral ground for individuals to converge, communicate, and engage in a myriad of activities. This phenomenon is particularly pronounced in densely populated urban areas, where green spaces become a respite from the hustle and bustle of city life, allowing residents to connect on a human level. The verdant expanse of green spaces provides a canvas for fostering connections beyond individual identities.
Picnics, group exercises, cultural events, and impromptu gatherings become catalysts for forging bonds among neighbors who might not otherwise cross paths. These spaces dissolve social barriers, facilitating interactions between generations, economic classes, and cultural backgrounds. As community members engage in shared activities and collaborate on various initiatives, a sense of collective identity emerges, knitting together a fabric of unity that transcends differences. Perhaps most notably, green spaces nurture a profound sense of belonging among those who frequent them. The communal ownership of these areas fosters a feeling of stewardship and responsibility, strengthening ties between individuals and the land they share. Green spaces often become canvases for community expression, where murals, sculptures, and gardens serve as testaments to collective identity. This sense of belonging extends beyond the immediate vicinity of the green space, fostering a ripple effect that contributes to broader social cohesion within neighborhoods and even entire cities. The role of green spaces in promoting social interactions and community bonding is especially significant in the context of India's diverse cultural landscape. These spaces serve as platforms where cultural celebrations, performances, and festivals unfold, allowing people to share their heritage with one another. This cultural exchange enhances understanding and appreciation among diverse groups, thereby fostering an environment of inclusivity and mutual respect. --- Recreational Opportunities: Assessing the Impact of Availability of Recreational Activities on Social Engagement in India In the dynamic cultural milieu of India, where social interactions are deeply woven into the fabric of daily life, the presence and accessibility of recreational opportunities play a pivotal role in shaping the vibrancy of communities. This segment of the study scrutinizes the spectrum of available recreational activities and their influence on fostering social engagement, connecting individuals across diverse backgrounds, and contributing to the collective wellbeing. India's recreational landscape is a tapestry woven from diverse threads, encompassing both traditional and contemporary activities. From traditional dance forms and religious celebrations to modern sports and entertainment, the array of options reflects the multifaceted nature of the nation's interests. Festivals, community events, sports tournaments, cultural workshops, and outdoor adventures serve as canvases for shared experiences, where people congregate to celebrate, compete, and connect. Recreational activities act as a social glue, binding individuals together in shared pursuits. Festivals, for instance, transcend religious and regional boundaries, creating platforms for people to unite in celebration. Sports leagues and tournaments not only promote physical fitness but also provide avenues for camaraderie, teamwork, and friendly competition. These activities offer individuals common ground, a space where relationships form, and social networks expand. Recreational opportunities transcend individual pursuits, extending their reach into the realm of collective engagement. Participation in these activities often requires interaction and collaboration, leading to the cultivation of a sense of belonging within a larger community. 
Whether through volunteering at cultural events, joining book clubs, or participating in local sports teams, individuals engage in a collective endeavor that nurtures social bonds and mutual support. The availability of a diverse range of recreational activities fosters inclusivity by accommodating a wide spectrum of interests and talents. Individuals of varying ages, backgrounds, and abilities find avenues to express themselves and engage with their community. In this way, recreational activities contribute to breaking down social barriers and creating spaces where diversity is celebrated. The advent of the digital age has also introduced new dimensions to recreational activities. Virtual spaces, social media platforms, and online gaming communities offer avenues for connection that transcend physical boundaries. While fostering virtual connections, these platforms also raise questions about the nature of social engagement in the digital realm and its implications for in-person interactions. --- Equality and Access Within the intricate tapestry of India's socio-environmental landscape, the equitable distribution of green spaces emerges as a potent lens through which to examine issues of social justice and environmental equity. This segment of the study delves into the multifaceted dynamics surrounding the availability of green spaces, particularly their accessibility and benefits for marginalized communities. The investigation seeks to unveil how these spaces can serve as tools for bridging disparities and fostering environmental equity across diverse contexts within India. Green spaces, often emblematic of natural respite and recreation, take on an added dimension as symbols of social justice. As these spaces offer moments of tranquility and interaction, their accessibility becomes a matter of equitable distribution of resources. Examining the distribution of green spaces and their accessibility within different communities provides insights into the allocation of amenities that contribute to social wellbeing. Green spaces, while providing havens for leisure, exercise, and community engagement, can sometimes perpetuate inequalities if their distribution disproportionately favors privileged communities. Investigating how marginalized communities access and benefit from green spaces is pivotal to understanding broader issues of social justice. The availability of these spaces to all members of society, regardless of economic status, becomes an essential gauge of a society's commitment to inclusivity. The availability of green spaces is intricately linked to environmental inequalities, often reflecting patterns of urban planning and development. Communities with limited access to green spaces may also face exposure to environmental hazards, further exacerbating disparities. By exploring the spatial relationships between green spaces, marginalized neighborhoods, and environmental risks, this inquiry sheds light on the intersections between social justice and environmental concerns. Green spaces are not merely physical entities but spaces imbued with cultural and social significance. Investigating their equitable distribution extends beyond access to encompass the preservation of cultural heritage. These spaces can serve as anchors for cultural expression and community identity, promoting social cohesion and resilience within marginalized communities. The equitable distribution of green spaces intertwines with issues of environmental justice and public health.
Disparities in green space accessibility can impact air quality, mental well-being, and physical health outcomes, disproportionately affecting marginalized communities. Exploring these correlations deepens our understanding of how green spaces contribute to a broader framework of social and environmental justice. In the intricate web of India's urban and rural landscapes, the accessibility of green spaces becomes a lens through which to examine the extent of equitable distribution and social inclusivity. This section of the study delves into the multifaceted factors that impact access to green spaces, shedding light on how proximity, transportation, and physical barriers collectively shape individuals' opportunities to connect with nature and communal spaces. The geographic proximity of green spaces to residential areas profoundly affects their accessibility. As urban centers expand, ensuring that green spaces are conveniently located becomes paramount. Analyzing the spatial distribution of green spaces relative to population densities provides insights into the effectiveness of urban planning in promoting equitable access. Proximity is a crucial determinant, influencing whether individuals, particularly those from marginalized communities, can integrate these spaces into their daily lives. The role of transportation infrastructure in mediating access to green spaces cannot be underestimated. Availability of efficient public transport and pedestrian-friendly routes can bridge the gap between neighborhoods and distant parks. Examining transportation options, including walking, cycling, and public transit, offers a nuanced understanding of how communities navigate physical distances to engage with nature. Conversely, inadequate transportation options can create barriers, limiting green space access predominantly to those with private vehicles. Physical barriers, such as highways, water bodies, and infrastructure limitations, can fragment communities and impede access to green spaces. Analyzing the presence of such barriers and their impact on different demographics underscores the intersectionality of accessibility challenges. Marginalized communities often disproportionately bear the brunt of these barriers, reinforcing patterns of exclusion. Evaluating how urban development addresses or perpetuates these barriers reveals a complex interplay between urbanization and social equity. Social and cultural dimensions can either enhance or inhibit green space accessibility. Community perceptions, safety concerns, and cultural norms can influence individuals' decisions to frequent these spaces. A deeper examination of these dynamics reveals the interplay between societal values and accessibility, offering insights into potential strategies to bridge gaps and increase inclusivity. Climate and weather patterns introduce another layer of complexity. Extreme heat or monsoons can influence individuals' willingness to travel to green spaces. Assessing how these seasonal variations impact different communities reveals the need for adaptable strategies that ensure year-round access. --- Policy and Planning Implications By providing a holistic comprehension of the socioeconomic implications associated with urban green spaces, this framework emerges as a valuable tool for urban planners and policymakers. It offers nuanced insights into optimizing the design, allocation, and management of green spaces within urban landscapes. 
Central to its findings is the call for integrated policies that effectively harness the potential of green spaces to enhance both the well-being of urban inhabitants and the overarching sustainability of cities. This framework underscores the pivotal role of green spaces as more than just aesthetic additions, positioning them as essential components of thriving, resilient, and socially inclusive urban environments. The multifaceted framework, rooted in a comprehensive analysis of the socioeconomic dynamics linked to urban green spaces, holds significant implications for urban development and governance. As urban centers continue to expand, the insights drawn from this framework offer practical guidance for decision-makers. Optimizing Urban Green Space Design: The framework illuminates the intricate interplay between green spaces, community well-being, and economic vitality. It provides urban planners with a roadmap for designing green spaces that cater to diverse needs, from recreational opportunities and cultural expression to health and social interaction. By understanding the nuanced ways in which different communities engage with these spaces, planners can create environments that foster inclusivity and address local demands. Strategic Allocation and Management: With land at a premium in urban settings, the framework's insights into the socioeconomic impacts of green spaces guide informed decisions regarding land allocation. It aids policymakers in striking a balance between commercial development and green infrastructure. Moreover, the framework advocates for strategic management that aligns with the evolving needs of communities. This approach not only enhances green space utility but also maximizes their potential to stimulate local economies. Urban Well-being and Quality of Life: The recognition of green spaces as contributors to urban well-being is pivotal. The framework underscores how access to nature and recreational opportunities can mitigate stress, boost mental health, and enhance overall quality of life. Urban policymakers can leverage these findings to prioritize the creation and preservation of green spaces, safeguarding the health and vitality of city dwellers. Sustainability and Climate Resilience: Embracing the insights from this framework also aligns with sustainable urban development goals. Green spaces play a crucial role in mitigating the urban heat island effect, improving air quality, and contributing to overall climate resilience. By incorporating these considerations into urban planning, policymakers can foster environments that are both socially and environmentally sustainable. In essence, this comprehensive framework serves as a compass for urban planners and policymakers, guiding them toward the creation of cities that are not only economically vibrant but also socially inclusive, environmentally resilient, and conducive to the well-being of all residents. Its holistic perspective underscores the integral nature of green spaces in shaping the cities of tomorrow, inviting collaboration across disciplines to realize a harmonious urban future. The implications and potential applications of the framework for urban planners and policymakers can be developed further: Equitable Urban Development: The framework's emphasis on socioeconomic impacts underscores the importance of equitable urban development. It highlights the potential of green spaces to bridge social disparities by providing accessible spaces for people from all walks of life.
Urban planners can utilize this understanding to ensure that green spaces are strategically located in underserved communities, addressing historical inequalities and promoting social cohesion. Community Engagement and Empowerment: One of the framework's underlying principles is community engagement. By involving local residents in the design and management of green spaces, urban planners can empower communities to shape their environments. This not only enhances the sense of ownership but also fosters a stronger bond between residents and their neighborhoods, leading to more sustainable and resilient communities. Economic Opportunities: The framework sheds light on the economic benefits that green spaces can generate. From creating jobs in park maintenance and recreational services to boosting nearby property values, green spaces have a tangible impact on local economies. Urban planners can leverage this data to advocate for investments in green infrastructure, highlighting the potential return on investment and long-term economic growth. Health and Well-being Initiatives: Given the growing concern about urban health challenges, the framework's insights into the positive impact of green spaces on physical and mental health are invaluable. Policymakers can use this information to support health and wellbeing initiatives. By integrating green spaces into health programs and campaigns, cities can proactively address health issues and reduce the burden on healthcare systems. Climate Change Mitigation and Adaptation: Green spaces are essential components of climate change mitigation and adaptation strategies. The framework's acknowledgment of their role in reducing urban heat, improving air quality, and enhancing resilience is crucial. Urban planners can incorporate these findings into broader climate action plans, contributing to the overall sustainability and climate readiness of the city. Tourism and Cultural Preservation: Green spaces often possess cultural and historical significance. The framework's exploration of how green spaces contribute to cultural expression and identity opens avenues for cultural preservation and tourism. Urban planners can collaborate with local communities to design green spaces that honor heritage while providing spaces for cultural events and celebrations. Cross-sector Collaboration: The framework's multidimensional insights necessitate collaboration across sectors. Urban planners, policymakers, environmentalists, public health experts, and community advocates can unite to harness the full potential of green spaces. This collaboration extends beyond government bodies to include NGOs, academic institutions, and private sector entities, fostering innovative solutions and holistic approaches. Long-Term Urban Vision: By integrating the framework's findings into urban development plans, cities can establish a long-term vision that prioritizes the well-being of residents. This vision goes beyond immediate gains, focusing on creating resilient, vibrant, and socially inclusive urban environments that stand the test of time. In summary, the framework's comprehensive exploration of the socioeconomic impacts of urban green spaces extends its significance beyond theoretical insights. It equips urban planners and policymakers with practical tools to craft more resilient, equitable, and sustainable cities. 
As cities evolve and face increasingly complex challenges, this framework offers a roadmap to navigate the intricate tapestry of urban development while prioritizing the needs and aspirations of the people who call these cities home. The discussion of policy implications and societal shifts is pivotal. The study's insights underscore the need for policies that challenge traditional norms, promote inclusivity, and empower marginalized communities. Furthermore, the narratives suggest that societal shifts are underway, albeit with challenges. This highlights the importance of continued education, awareness campaigns, and grassroots efforts to facilitate change. --- D. CONCLUSION In conclusion, this in-depth analysis of gender realities underscores the intricacies of a multifaceted landscape shaped by cultural norms, evolving identities, educational empowerment, and discrimination. The findings offer a nuanced understanding of the complex web of interactions that define gender dynamics in India. The study's insights call for collaborative efforts from policymakers, civil society, and communities to challenge discriminatory norms, promote inclusivity, and work towards a more equitable and just society for all genders.
This journal article presents a comprehensive exploration of gender dynamics through an in-depth analysis of gender realities. By delving into the intricate interplay of cultural norms, evolving identities, educational empowerment, and gender-based discrimination, this study sheds light on the complexities shaping gender experiences. The research employs qualitative methods, including semi-structured interviews and thematic analysis, to capture diverse perspectives across India. The findings reveal a nuanced spectrum of gender dynamics, emphasizing the intersectionality of identity, the evolving definitions of masculinity and femininity, and the impact of educational opportunities. The study underscores the challenges posed by gender-based discrimination and violence, while also highlighting the potential for progress through policy interventions and societal shifts. Overall, this research contributes to a deeper understanding of the intricate fabric of gender dynamics, urging concerted efforts towards fostering gender equality and social justice.
Introduction The incidence of melanoma in Denmark has increased by over 4% per year during the past 25 years and by 2012, the yearly incidence was approximately 30 per 100,000 person-years. 1 Melanoma is the fourth and sixth most common cancer type, respectively, in women and men in Denmark. 2 Despite a higher incidence rate among persons with higher socioeconomic position, lower socioeconomic position has been associated with poorer survival in this patient group, [3][4][5] and we need to know more about where in the cancer pathway these survival disparities occur. A possible explanation is delayed diagnosis in patients with lower socioeconomic position, and more knowledge is needed in order to detect cancer early in all patient groups and to identify groups at high risk of delayed diagnosis. A late diagnosis may result in advanced cancer stage at time of diagnosis, and hypothesized explanations are delay in recognizing symptoms of the cancer, delayed health care seeking or later referral to specialized care among patients with lower socioeconomic position. The presence of other chronic disease, which is more frequent among patients with lower socioeconomic position, may influence the timing of cancer diagnosis either through increased surveillance owing to more frequent health care contacts due to the health condition in question or conversely by decreasing individual resources to manage further health problems. Histological type of the tumor may also be differentially distributed according to socioeconomic group because some tumor types occur mainly among people with a certain lifestyle or risk behavior in relation to sun exposure. Furthermore, patients with lower socioeconomic position also tend to live in more rural rather than urban areas, where access to health care services may be lower. Several studies have shown that patients living in neighborhood areas with lower socioeconomic position tended to be diagnosed at a later stage of melanoma. 4 Besides results from two Swedish studies, 6,7 evidence is sparse from nationwide, population-based studies about the effect of individual-level socioeconomic factors, such as education and income, on stage of cancer in melanoma patients. The role of comorbidity has only rarely been investigated, and only a few studies have looked at major geographical differences in combination with the socioeconomic factors. This study presents results from Denmark, where most primary and secondary health care services including all cancer treatments are tax-paid and thereby free of charge, with the aim of minimizing differential access to diagnosis and treatments. A referral from primary to secondary care is required, and the general practitioners play the role of gatekeepers to the rest of the health care system. Data are obtained from a nationwide Clinical Quality register with a coverage of approximately 95% of all Danish patients with melanoma in recent years 8 and unique individual socioeconomic information from national administrative registers. The aim of the study was to investigate whether educational level, disposable income, cohabiting status or region of residence is associated with cancer stage and further to analyze the role of comorbidity and tumor type in these potential relations. --- Methods --- Study population From the Danish Melanoma Database (DMD), we identified 13,626 patients diagnosed with their first invasive melanoma between 2008 and 2014.
DMD is a clinical register containing prospective and systematically collected data related to clinical observations, diagnostic procedures, tumor characteristics, treatments and outcomes. It was established in 1985 and now has a national coverage of approximately 93-96%. 8 --- Clinical variables Information on cancer T-, N- and M-stage; tumor location; histological subtype; tumor thickness; and ulceration was obtained from the DMD. The clinical stage at diagnosis was categorized according to AJCC's 6th (2008-2013) and 7th edition (2013-2014), 9,10 and for the analyses, cancer stage was divided into early (clinical stage I-IIA) and advanced-stage cancer (clinical stage IIB-IV). This cut-point is in accordance with the Danish follow-up program for melanoma, where stage IA is assessed as low-risk cancer and IB-IIA as intermediate-risk cancer, while stage IIB-IV includes the thickest tumors (stage IIB and IIC), tumors with regional spread (stage III) or distant metastases (stage IV), all of which have the highest risk of relapse and dismal outcome. 11 Tumors were grouped into histological subtypes: superficial spreading malignant melanoma, lentigo maligna melanoma, nodular melanoma, other and unknown/unclassified. Data on comorbid conditions were obtained from the Danish National Patient Register, which is an administrative register containing data from all hospitalizations at somatic wards in Denmark since 1977. 12 Diagnoses other than melanoma were retrieved, and the Charlson comorbidity index (CCI) 13 was calculated. The CCI covers 19 selected conditions with a score from 1 to 6 by degree of severity, and these conditions were summed from 10 years before until 1 year before the date of the melanoma diagnosis. The CCI was grouped into 0 (none), 1-2 and 3+. --- Sociodemographic variables Individual-level sociodemographic factors were obtained by linking the unique personal identification number (assigned to all Danish residents) of the study population to the registers of Statistics Denmark, which contain data on each individual and are updated annually. [14][15][16] We retrieved information on educational level, income and cohabiting status 1 year before diagnosis for each patient. Education was divided into three categories based on Statistics Denmark's recommendations of categorizing the individual's highest attained education level: short education (7/9-12 years of basic or youth education), medium education (10-12 years of vocational education) and longer education (short, medium and longer higher education [>13 years of education]). Yearly disposable income per adult person in the household was calculated and categorized into three groups based on quartiles of the disposable income per person in the population: 1st quartile (<150,708 Danish crowns [DKK]), 2nd-3rd quartile (150,708-279,715 DKK) and 4th quartile (>279,715 DKK). Persons with a high negative income (exceeding 50,000 DKK) were excluded from the analyses. One thousand DKK equals approximately 135 Euros. Cohabiting status was defined as living with a partner (married or cohabiting) or living without a partner (single, widow/widower or divorced). In the absence of marriage, cohabiting was defined as two adults of the opposite sex with a maximum age difference of 15 years living at the same address, who either had no family relation or had a mutual child. Information about age, sex and region of residence was obtained from the Civil Registration System.
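As a rough illustration of the variable derivations described above, the following Python sketch shows one way the analysis variables could be constructed from a patient-level table. It is only a sketch under assumed column names (ajcc_stage, cci_score, income_dkk); the DMD and Statistics Denmark registers use their own coding, and the handling of values falling exactly on the income cut-points is an assumption.

import pandas as pd

# Sketch only: column names are hypothetical, not the register variable names.
def derive_analysis_variables(df: pd.DataFrame) -> pd.DataFrame:
    out = df.copy()

    # Early (clinical stage I-IIA) vs advanced (IIB-IV); patients with missing
    # or unclassifiable stage are excluded elsewhere in the analysis.
    early = {"IA", "IB", "IIA"}
    out["advanced_stage"] = (~out["ajcc_stage"].isin(early)).astype(int)

    # Charlson comorbidity index grouped into 0 (none), 1-2 and 3+.
    out["cci_group"] = pd.cut(out["cci_score"], bins=[-1, 0, 2, float("inf")],
                              labels=["none", "1-2", "3+"])

    # Disposable income grouped by the population quartile cut-points reported above.
    out["income_group"] = pd.cut(out["income_dkk"],
                                 bins=[float("-inf"), 150_708, 279_715, float("inf")],
                                 labels=["1st quartile", "2nd-3rd quartile", "4th quartile"])
    return out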
From the study population, we excluded 105 patients because there was no match on any sociodemographic information, and a further 178 persons were excluded because they had a high negative income in Statistics Denmark's registers. A further 328 patients under the age of 25 years were excluded as those persons might not have reached their final educational level. This yielded 13,015 patients (Table 1). For the adjusted analysis, 2,597 patients (20%) with missing TNM information or unclassifiable clinical stage and 260 patients with unknown educational level were excluded, which resulted in a study group of 10,158 patients (Table 2). --- Statistical analyses The associations between socioeconomic and -demographic factors and cancer diagnosis stage were analyzed in a series of logistic regression models. First, the associations between sociodemographic factors and cancer stage were adjusted for age and sex. Second, the results were mutually adjusted for other sociodemographic factors, except for educational level, which was not adjusted for income, because income was hypothesized to be a clear mediator between education and cancer stage. Third, the model included additional adjustment for tumor type, and the fourth model also adjusted for comorbidity (CCI index). Interactions between single socioeconomic variables and sex, age, comorbidity and localization of the tumor were tested one pair at a time with Wald test statistics. A significant interaction existed between education and comorbidity, with a higher effect of comorbidity on stage for patients with longer compared to short education; however, this was driven by a very small group of patients with long education level and comorbidity 3+, and therefore results were not stratified on this basis. There was an interaction between sex and cohabiting status; however, it was only borderline significant (P < 0.07) and sex-stratified data are not shown. Because data completeness was higher in 2013-2014 (start of the DMD as a clinical quality register) than in 2008-2012, we repeated the analyses including only these two most recent years to ensure that the interpretation of the results was close to what was found from analyzing the whole cohort. In supplementary analyses, we repeated all the analyses with the outcome variable clinical stage dichotomized into stage I vs II-IV in order to ensure that the results were the same even if the cut-point for early vs advanced cancer was changed. This yielded estimates that were close to what is reported in Table 2, and the interpretation of the results from the two categorizations was the same. The analyses were carried out in SAS 9.4 with the PROC GENMOD procedure, and the level of significance was P < 0.05. --- Ethics Use of data for this project was approved by the Danish Health Authorities under the Capital Region of Denmark (J.no.: 2012-58-0004). --- Results The descriptive statistics in Table 1 show clinical and sociodemographic factors distributed according to the main exposure of interest: educational level. More patients with short compared to long education tended to have higher cancer stages, and thereby also thicker tumors and ulceration, and more short-educated patients had nodular malignant melanoma and comorbidity. Patients with the shortest education also tended to be older, to have lower income, and to live alone and outside the Capital Region.
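Referring back to the modelling strategy in the Statistical analyses section above, a minimal sketch of the four increasingly adjusted logistic models is given below. The original analyses were run in SAS 9.4 with PROC GENMOD; this sketch uses Python's statsmodels instead, and the formula variable names are illustrative assumptions rather than the register variables.

import numpy as np
import statsmodels.formula.api as smf

def fit_adjustment_sequence(df):
    """Fit the four increasingly adjusted logistic models and return odds ratios."""
    formulas = {
        "model_1_age_sex": "advanced_stage ~ C(education) + age + C(sex)",
        # Education is deliberately not adjusted for income, since income is
        # treated as a mediator between education and cancer stage.
        "model_2_sociodemographic": "advanced_stage ~ C(education) + age + C(sex) + C(cohabiting) + C(region)",
        "model_3_plus_tumor_type": "advanced_stage ~ C(education) + age + C(sex) + C(cohabiting) + C(region) + C(histology)",
        "model_4_plus_comorbidity": "advanced_stage ~ C(education) + age + C(sex) + C(cohabiting) + C(region) + C(histology) + C(cci_group)",
    }
    odds_ratios = {}
    for name, formula in formulas.items():
        fit = smf.logit(formula, data=df).fit(disp=False)
        odds_ratios[name] = np.exp(fit.params)  # exponentiated coefficients are the ORs
    return odds_ratios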
Table 2 shows that patients with shorter education, with lower income, living without a partner, of male sex, of higher age, with comorbidity and who lived in the Northern, Central or Zealand region of Denmark had an elevated odds ratio (OR) of being diagnosed with advanced-stage cancer when adjusted for sex, age and sociodemographic factors. For example, the OR for advanced-stage cancer in patients with short compared to longest education was 1.50 (1.25-1.67) and for lowest vs highest income level the OR was 1.59 (1.33-1.89), while the OR for advanced cancer stage was 1.52 (1.30-1.78) for patients living in Zealand compared to the Capital region (Table 2, model 2). When adjusting for tumor type and comorbidity (Table 2, models 3 and 4, respectively), the ORs for advanced-stage cancer by socioeconomic and -demographic factors were only a little lower than the ORs in model 2, i.e., for short vs longer education the adjusted OR was 1.40 (1.20-1.63) in the fully adjusted model. The estimates for region of residence were lower when adjusted for tumor type (model 3) than the confounder-adjusted estimates (model 2); however, this reduction in ORs was not found when restricting data to patients with diagnosis year 2013-2014 (data not shown). Patients with a high comorbidity burden had a higher OR of advanced cancer (comorbidity 3+ vs no comorbidity, adjusted OR = 1.54 [1.24-1.93]). --- Discussion The results of the present study showed that patients who were socially disadvantaged in terms of education, income or partner status had an increased risk of being diagnosed with advanced-stage melanoma. Region of residence was also associated with a higher risk of advanced stage for those living in the Northern, Central or Zealand health care region. The effects of the socioeconomic factors seemed unexplained by differential distribution of comorbidity or tumor types among different socioeconomic groups. It is an important finding that several different indicators of socioeconomic position were related to cancer stage at diagnosis, and this adds evidence to the current literature. Studies from the USA, Europe and New Zealand consistently showed that patients living in neighborhood areas with lower socioeconomic position tended to be diagnosed with a more advanced stage of melanoma. 4,[17][18][19][20] These studies were, however, based on socioeconomic measures at area level, with the risk of misclassification. Larger differences in health outcomes may be found in populations from the USA because of its insurance-based system vs the mostly tax-based health care systems that exist especially in the Northern European countries, which should be considered when directly comparing inequality results. A nationwide population-based Swedish study with individually measured educational information reported a dose-response relation between three levels of education and disease stage, with effect estimates close to our results. 7 Besides this, a few other smaller studies linked data on individual-level education to tumor thickness, which is a measure of local advancement of the disease, and reported short education and unemployment to be associated with thick tumors. 4 Being married or living with a partner has earlier been associated with an early diagnosis of melanoma. 4,21 In a nationwide population-based Swedish study, findings of advanced disease among those living alone were most pronounced among men. 6
We found a similar trend of sex difference (data not shown), and especially men living without a partner seem to be a vulnerable group in terms of diagnostic delay. A questionnaire study from the USA on the link from socioeconomic position to advanced melanoma points to the following underlying reasons for such an association: patients with short education were more likely to believe that melanoma was not very serious, they had less knowledge of skin symptoms of melanoma, they were less likely to have routinely examined their skin and to have ever been told by a physician that they had atypical moles or that they were at risk of skin cancer, or had been instructed by a physician how to look for signs of melanoma. 22 However, results from older studies from Northern Europe are conflicting on the association between socioeconomic position and knowledge and understanding of melanoma. Other studies indicate that higher socioeconomic position is associated with more use of specialist health care services in general, 23 and lower access to specialist dermatologists or specialized hospital treatment among patients with lower socioeconomic position could be an explanatory factor for their delayed diagnosis. Taking several socioeconomic factors into account, we found that patients with residency in three out of five geographical health care regions had a higher risk of advanced-stage cancer. In a recent Swedish study, differences in stage distribution were found across smaller geographical areas, 24 and further, in the population-based Swedish study, rural/other urban areas had higher melanoma-specific survival compared to metropolitan areas. 7 Each of the five Danish Regions has responsibility for primary and secondary health care, and the organization of referral to specialized care might thus differ between regions. Furthermore, the outer areas of Denmark have fewer primary care and specialized doctors per inhabitant and longer distances to care. For instance, in the Zealand region, there is currently what corresponds to approximately 16 specialized treatment centers for dermatology/plastic surgery compared to approximately 27 centers per 100,000 inhabitants in the Capital Region. 25 That being said, region of residence may also be a mixture of unmeasured social factors and cultural/behavioral factors as well as a measure of organization of care. Comorbidity did not seem to explain the socioeconomic difference in stage at diagnosis, although it was a significant independent risk factor for being diagnosed with advanced cancer. The findings point to lower awareness or decreased resources for dealing with health problems other than the comorbid disorder. A similar association was found for melanoma screening in primary practice in France, where chronic disease was associated with non-participation. 26 A Danish population-based study showed an interaction between comorbidity and cancer stage, with increased mortality among patients with advanced melanoma and high comorbidity, 27 underlining the importance of a focus on comorbidity in detection and treatment of melanoma. We adjusted the socioeconomic and geographical results for histological type of the cancer, because it was hypothesized that some tumor types occur mostly in groups of people with a certain lifestyle or risk behavior.
Lentigo maligna melanoma and superficial spreading melanoma are related to sun exposure, and it could be speculated that sun habits are changing in a direction where more people from lower socioeconomic groups are exposed to the sun or, in particular, to sunbed use. 23 However, it was found that more of the patients with the longest education were diagnosed with superficial spreading malignant melanoma, whereas more patients with short education had nodular melanoma - even though the risk profile of nodular melanomas is primarily related to biology rather than behavior. As nodular melanomas are often fast growing and sometimes amelanotic, increased awareness hereof is crucial. 28 Tumor type seemed to explain part of the geographical differences in cancer stage, but not when looking at the data solely from 2013 to 2014. We suggest that missing data on tumor histology in the early study period drives the finding, since a larger proportion of tumors with unknown/unclassified histology appeared in the Northern and Central regions (19 and 23%, respectively, for the whole study period vs 8% in the Capital Region, data not shown), which may bias the effect of tumor type. Strengths of the current study include the population-based data from both a clinical database and administrative registers, which minimize selection bias, information bias and misclassification of both exposure and outcome measures. Limitations are some missing clinical data for patients diagnosed during the years 2008-2012 (before onset of the DMD as a Clinical Quality register); however, there was an equal distribution of missing/unclassified TNM stage in the groups of patients with lower and higher socioeconomic position. Furthermore, we checked that the main results were similar for the study period as a whole and for the years 2013-2014. To measure comorbidity, we used the CCI with summarized data of hospital diagnoses, and therefore milder diseases not treated or followed up in a hospital setting were not included. This may have resulted in some misclassification, with the risk of an underestimation of the true effect of comorbidity on outcome. Another limitation is that we did not have information on contacts with primary practicing doctors, which could have pointed to some explanation of why there is a socioeconomic difference in cancer stage - patients' delay in health care seeking or doctors' delay in referral to specialized care. These relations should be further investigated in future studies. The incidence of melanoma is increasing 1 - an increase that has recently been shown across all socioeconomic groups, but with the highest increase of regional-distant disease among patients from the lowest socioeconomic areas in the USA, 29 and reducing socioeconomic and sex inequalities in stage at diagnosis would result in substantial reductions in deaths from melanoma. 19 Results from our study document important socioeconomic and -demographic differences in stage at diagnosis. Initiatives should be directed at socially disadvantaged groups, men and older people in order to increase awareness of symptoms of melanoma. In primary care, increased attention should be paid to patients from these groups in order to discover skin changes or melanoma at an early stage. Additional efforts to improve early diagnosis of nodular melanomas would improve the early vs advanced ratio and thus have the potential to affect mortality significantly.
The newly suggested amendment to the diagnostic ABCD rule with EFG for Elevated, Firm and Growing nodule should be applied, and "when in doubt, cut it out" should be taught to both patients and doctors. 28 Further studies should investigate regional differences in delay, effects of the number of specialized doctors per inhabitant as well as different referral patterns from primary to secondary health care across health care regions. --- Disclosure The authors report no conflicts of interest in this work.
Background: Socioeconomic differences in survival after melanoma may be due to late diagnosis of the disadvantaged patients. The aim of the study was to examine the association of educational level, disposable income, cohabiting status and region of residence with stage at diagnosis of melanoma, including adjustment for comorbidity and tumor type. Methods: From the Danish Melanoma Database, we identified 10,158 patients diagnosed with their first invasive melanoma during 2008-2014 and obtained information on stage, localization, histology, thickness and ulceration. Sociodemographic information was retrieved from registers of Statistics Denmark and data on comorbidity from the Danish National Patient Registry. We used logistic regression to analyze the associations between sociodemographic factors and cancer stage. Results: Shorter education, lower income, living without a partner, older age and being male were associated with increased odds ratios for advanced stage of melanoma at the time of diagnosis, even after adjustment for comorbidity and tumor type. Residence in the Zealand, Central and Northern regions was also associated with advanced cancer stage. Conclusion: Socioeconomically disadvantaged patients and patients with residence in three of five health care regions were more often diagnosed with advanced melanoma. Initiatives to increase early detection should be directed at disadvantaged groups, and efforts to improve early diagnosis of nodular melanomas through increased awareness of the Elevated, Firm and Growing nodule rule and "when in doubt, cut it out" should be implemented. Further studies should investigate regional differences in delay, effects of the number of specialized doctors per inhabitant as well as differences in referral patterns from primary to secondary health care across health care regions.
Introduction With an estimated 2.3 million HIV-infected persons, India has the third largest HIV burden of any country in the world [1]. One of the goals of the current third phase of the National AIDS Control Program (NACP-III) in India is to halt and reverse the HIV epidemic by 2012 by implementing an integrated strategy focusing on prevention, care and treatment of HIV/AIDS [2]. This goal can be achieved by maintaining the primary prevention continuum, effectively tracking HIV incidence in various sub-populations and implementing appropriately evaluated prevention and therapeutic interventions. Projections for the year 2031, marking 50 years of the AIDS pandemic, have indicated that almost three times the current resources will be required to control the epidemic by focusing on high-impact tools, efforts to attain behavior change and efficient and effective treatment [3]. All such efforts would require a high level of utilization of services and programs by the stakeholders and their continued participation in the program. Retention in prevention programs, cohort studies and clinical trials is critical yet can be very challenging. Losses to follow-up (LTFU) might result from participants' loss of interest, inadequate oversight by the study investigators or the absence of built-in mechanisms for tracking the study participants being lost [4]. Recent studies have shown that in resource-poor countries, investigators can achieve high retention rates over long follow-up periods in marginalized or "hard to reach" populations by employing special efforts which are expensive and management intensive [4,5]. Health program managers and research scientists have to take the necessary steps to ensure that their clients return to the health facility at the assigned time points. Hence, understanding the dynamics of client retention is likely to help in planning measures to retain people in prevention programs and in research settings requiring long follow-up such as cohort studies and clinical trials. Our long-term prospective study provided an opportunity to estimate levels of retention and their predictors using a modeling approach in the context of various HIV prevention and research program-related scenarios such as those described below. We present three possible scenarios in the area of HIV prevention and research wherein retention is crucial: --- 1) Primary prevention through Voluntary Counseling and Testing (VCT): We hypothesized that high uptake of voluntary counseling and testing services for HIV, an important primary prevention strategy of the National AIDS Control Program of India, would contribute to reliable estimation of HIV burden in various sub-populations and may guide in deciding strategies for secondary prevention and control of AIDS. We studied factors affecting retention in the three HIV prevention and research scenarios described above among men enrolled in a high-risk cohort of patients having a current or past history of sexually transmitted infections (STI) in Pune, India. We explored demographic, behavioral and biological factors that might predict retention in the modeled scenarios of primary prevention programs, cohort studies and clinical trials. --- Methods --- Ethics Statement The cohort studies were approved by the national and international scientific, ethics and regulatory committees or boards of the National AIDS Research Institute, India, and Johns Hopkins University, USA.
All participants were enrolled after obtaining written informed consent as approved by the Ethics Committee. Between 1993 and 2002, as part of collaborative studies between the National AIDS Research Institute in Pune, India, and Johns Hopkins University in the United States of America, cohort studies were undertaken in the industrial city of Pune, located in the high HIV prevalence western state of Maharashtra in India. Using this dataset, we carried out a case-control analysis to study factors affecting retention of clients in HIV prevention research and programs. The "cases" in the three distinct modeled scenarios were selected from the cohort of male STI clinic attendees. The overall aim of the parent cohort study was to prepare sites and generate baseline data for undertaking Phase I, II and III HIV prevention clinical trials. Men with a current or past history of STI, female sex workers (FSWs) and non-sex-worker females (non-FSWs) attending STI clinics were enrolled in the parent cohort study after they received their HIV negative report. Thus all those who tested HIV negative were offered enrollment in a longitudinal study requiring quarterly visits for a period of two years, as described in our previous papers [6,7]. In this paper, we describe predictors of retention among men in the STI cohort using case-control analysis. Three modeled scenarios of primary prevention, cohort study and clinical trials were identified as described previously. --- Participants "Cases" represented individuals who were "retained" in the hypothetical scenarios created for the retention analysis of primary prevention, cohort studies and clinical trials described above. Age- and recruitment-time-matched "controls" were selected from the STI cohort in a 1:1 ratio. Defining the outcome variable "retention" in three distinct modeled scenarios: 1. Retention in primary prevention scenario: Individuals who returned for their first follow-up at 3 months after they received their HIV test report. 2. Retention in cohort studies scenario: Individuals who reported for follow-up to the study clinics at least once at the end of the first year and then at the end of the second year. 3. Retention in clinical trials scenario: Individuals who completed at least three scheduled visits both during the first year and the second year after enrolment. In the parent cohort study from which this analysis was done, only standard counseling, offering of the HIV test and giving a scheduled date for the next follow-up visit were carried out. No additional efforts were made to contact the participants either telephonically or through home visits to specifically improve retention. --- Statistical analysis Univariate and multivariate conditional logistic regression analyses were performed to identify demographic (religion, marital status, education, employment), behavioral (living away from family, alcohol consumption, number of FSW partners, age at first sex, involvement in commercial sex work) and biological (tattooing, diagnosis of various types of STI, syndromic diagnosis of genital ulcer and discharge-type diseases) factors associated with retention in the three modeled scenarios, respectively. The comparison of baseline characteristics of individuals in all three scenarios was done using the Chi-square or Fisher's exact test, whichever was applicable. The variables that were found to be significantly associated with retention in the univariate models were retained in the multivariate models.
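The three retention definitions above map directly onto each participant's visit history. The Python sketch below illustrates one possible operationalization; the data layout, the 2-4-month window treated as the 3-month follow-up, and the variable names are assumptions rather than details taken from the study protocol.

from datetime import date
from typing import Dict, List

def months_since(enrolled: date, visit: date) -> int:
    # Whole months elapsed between enrollment and a given visit.
    return (visit.year - enrolled.year) * 12 + (visit.month - enrolled.month)

def classify_retention(enrolled: date, visits: List[date]) -> Dict[str, bool]:
    """Apply the three pre-set retention definitions to one participant."""
    months = sorted(months_since(enrolled, v) for v in visits)
    year1 = [m for m in months if 0 < m <= 12]
    year2 = [m for m in months if 12 < m <= 24]
    return {
        # Scenario 1: returned for the first follow-up about 3 months after the HIV report.
        "primary_prevention": any(2 <= m <= 4 for m in months),
        # Scenario 2: at least one visit during the first year and one during the second year.
        "cohort_study": bool(year1) and bool(year2),
        # Scenario 3: at least three scheduled visits in each of the two years.
        "clinical_trial": len(year1) >= 3 and len(year2) >= 3,
    }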
As an exception, the variable 'number of FSW partners', although not significant in the univariate model, was retained in the multivariate model due to its known relationship with retention [8,9]. Forest plots generated in Excel were used to produce Figures 1a-c for the multivariate analysis [10]. Data were analyzed using Intercooled Stata version 10.0. --- Results Between 1993 and 2002, a total of 14,137 individuals visited the STI clinics in this study. Of these, 10,801 (76%) were men, 3252 (23%) were women and 83 (0.5%) were eunuchs or trans-genders. Of all the 10,801 screened male STI patients, 8631 (80%) were found to be HIV-uninfected and were enrolled in the parent cohort study. The present case-control analysis is restricted to these enrolled men. Most of the men were employed (89%), belonged to the Hindu religion (81%), were living with their families (77%) and nearly 50% were 'ever married'. More than half of these men reported a history of alcohol consumption and 84% reported having had FSW contact in their lifetime. The median age at initiation of sex among them was 19 years. Thirty-two percent of the men presented with a diagnosis of genital ulcer disease (GUD) (data not shown in the tables). --- Profile of men in case-control analysis in three modeled scenarios A total of 1286, 940 and 896 cases and an equal number of matched controls were considered in the three respective modeled scenarios of primary prevention, cohort study and clinical trials (Table 1). Cases and controls differed significantly for various baseline demographic and behavioral characteristics. --- Predictors of retention in scenario 1: primary prevention Marital status, education, employment, diagnosis of GUD and diagnosis of any STI were found to be associated with retention in the univariate analysis (Table 2). In the multivariate analysis (Fig. 1a), men who were married and monogamous (p = 0.03), employed (p = 0.02) and those with a clinical diagnosis of GUD (p = 0.04) were less likely to return for the follow-up visit. In contrast, male STI patients reporting a higher level of education (p < 0.001) and those who had more than three FSW partners were more likely to report back for follow-up (p = 0.03). --- Predictors of retention in scenario 2: cohort study In the univariate analysis, marital status, education, employment, alcohol consumption and involvement in sex work were observed to be associated with retention (Table 1). In the multivariate analysis (Fig. 1b), men who were married and monogamous (p = 0.001), employed (p = 0.001), who gave a history of alcohol consumption (p = 0.002) or who were involved in sex work (p = 0.001) were 30% less likely to be retained in the cohort study. All these variables were found to be independent predictors of lower retention. However, men who were educated to high school and beyond were almost 2 times more likely to be retained in the cohort study scenario (p < 0.001). --- Predictors of retention in scenario 3: clinical trials Marital status, living away from the family, education, employment, alcohol consumption, number of FSW partners, age at first sexual intercourse and diagnosis of STI were significantly associated with retention in the clinical trial scenario in the univariate analysis (Table 1). In the multivariate analysis, independent predictors of retention were living away from the family (p = 0.04), being employed (p = 0.003) and a habit of alcohol consumption (p < 0.001).
More educated male patients, those who had more than three FSW partners and those who initiated sex at an older age were almost 1.5 times more likely to be retained and to maintain the rigorous follow-up schedule of a clinical trial scenario (Fig. 1c). --- Discussion We have used data from large cohort studies of STI patients in Pune, India, in modeled scenarios to study the extent and determinants of retention among male STI patients, who constitute an important bridge population in HIV transmission in India [11]. We have identified demographic, behavioral and biological factors that might predict adherence/non-adherence of male STI patients to suggested visit schedules. We expect that this knowledge would be very useful for designing specific strategies that might assist in optimizing retention in HIV prevention research and programs. It is possible to identify potential defaulters for retention and implement appropriate interventions. This might be less expensive than tracking patients or research participants after enrollment. Being employed was a common predictor of lower retention across all three study models. A higher level of education predicted a greater likelihood of retention across all three modeled scenarios. The education level among high-risk men in India is low [12][13]. Additional efforts are required for less educated or illiterate men to effectively retain them in primary prevention programs and clinical trials. Similar observations have been made in other studies among men who have sex with men [14][15][16]. Our observation also corroborated a similar observation in the NIMH HIV prevention trial [17]. As the majority of VCT center attendees in Government sector facilities in India are less educated [18], special efforts to improve their retention in primary prevention will be required. Additionally, we observed that retention was lower among employed men although the education level is expected to be high among them. Paucity of time could be the logical limiting reason for employed men not returning for repeated follow-up visits, as reported by several investigators [19][20][21]. To facilitate retention, it might be necessary to keep the health facilities and research clinics open and available outside routine work hours. Presence of GUD, history of commercial sex work and living away from the family were predictors of lower retention in the primary prevention, cohort study and clinical trial models, respectively. Alcohol consumption predicted lower retention in the cohort study and clinical trial models, while married monogamous men had a lower likelihood of retention in the primary prevention and cohort study models. It is well known that in therapeutic programs, benefits are generally immediate and more readily visible. In contrast, the success of prevention programs lies in better, sustained and prolonged utilization of services, which indicates 'retention needs'. Retention in primary prevention and allied research is expected to depend on many factors and strategies, such as retention counseling, quality of delivery of programmatic and research activities, and participant-related factors such as motivation, costs and the time required to be spent by them. As prevention programs mature and new prevention trials are undertaken, the need to identify potential drop-outs has to be addressed as a priority.
Optimizing retention of the end-users is crucial for assessing efficacy [22], and hence strategies should be considered to address various factors influencing retention during implementation of prevention programs and research. Predictors of retention identified in the study could be used for developing an instrument to identify clients who are likely to fail to return for required follow-up visits, either in prevention programs or in prevention research. Using such an instrument could be a cost-effective strategy to minimize 'drop-outs' rather than using expensive measures to track participants or patients who are lost to follow-up later. It has been suggested that both prevention and adherence science need to expand beyond individual boundaries to learn more about motivational and structural strategies that can be applied to large populations, so that prevention technologies have adequate time to prove useful when implemented in the communities [5]. Therefore it is relevant to explore individual factors as well as those related to an individual's family or societal environment that can hinder retention in prevention or research programs. Poor sexual health seeking behavior among men despite their high-risk behavior poses a grave challenge [23]. We observed that married, monogamous men were less likely to be retained in the prevention program and cohort study scenarios in this study. The precise reasons for this observation may have to be explored through qualitative studies. The important role of spouses in men's health seeking has been reported [24]. Several studies have also reported that men who are living away from their spouse, as well as divorced or single individuals, have high-risk behaviors [12] and a higher dropout rate from the offered prevention umbrella [25][26][27][28]. Our observation that men who were 'living away from family' were less likely to be retained in the clinical trials scenario provides supporting evidence for this possibility. All these observations are strongly suggestive of better health seeking by men having family support. We feel that a couple-centered approach and the involvement of female partners in male-oriented programs may contribute to the success of programs for men. However, this approach has the inherent limitation that men will have to share information about their health and sickness with their spouses. Counseling sessions in programs and research could focus on specifically discussing the role of spouses and families not only in improving health seeking, but also in keeping up with the visit schedules of programs or studies they are participating in. Among the behavioral characteristics, men who reported having more than three female sex worker partners were more likely to return for follow-up in the primary prevention and clinical trial scenarios. This probably reflects men's 'self-perception' of their risk behavior. Health seeking in terms of regular and frequent follow-up is perhaps better among men practicing high-risk behavior. Focused attention would need to be given to men reporting high-risk behavior less frequently. There is an opportunity to effectively intervene to achieve behavioral change through prevention programs. In India, male commercial sex work is all but invisible and not much is currently known about the status of male sex workers, although some studies have reported high HIV prevalence among them, indicating a need to develop new, innovative interventions [29] targeted towards men in commercial sex work.
In the present study among male STI patients, men reporting commercial sex work were less likely to be retained in the cohort study scenario. This is a high-risk population, and a reliable estimate of HIV incidence in this category of men is an important public health need. Additionally, this population would also be targeted for Phase IIb or III studies of HIV prevention technologies, and their retention in future clinical trials would be very critical. Lower age at sex initiation has been reported to be associated with early HIV infection in this cohort [30]. Hence, emphasis should be placed on targeting younger men in prevention programs and ensuring their continued retention in the programs to sustain safer behavior. Alcohol intake has been reported as a predictor of non-retention in several studies [17,31,32]. It was no surprise to find that men who gave a history of alcohol consumption were less likely to be retained in our study as well. Long-term commitment might be a challenge in cases of alcohol addiction. It might be important to emphasize identification of alcohol-consuming behavior at the entry point of prevention settings and to make special efforts to ensure retention of alcohol-consuming individuals under the HIV prevention umbrella. The diagnosis of GUD was an independent predictor of return for a follow-up visit within 3 months of enrollment, i.e. the primary prevention scenario. This observation has specific public health significance because it provides opportunities for complete treatment of GUD and appropriate counseling for behavior change. We have already reported a decline in HIV acquisition risk with the decline in GUDs [7]. GUDs are "visible or noticeable" STIs that could motivate a person to seek further medical advice, and hence such individuals are probably more likely to return to the study clinics. However, it has been reported that non-GUD STIs are also associated with high HIV prevalence [33][34]. Hence, it is advisable that men with clinically invisible or non-apparent STIs should also be targeted for HIV prevention interventions and retention counseling. Interactive counseling approaches directed at a patient's personal risk, the situations in which such a risk is likely to occur and the use of goal-setting strategies are effective in STI/HIV prevention [35]. Shepherd et al [36] have provided evidence that by enhancing access to treatment and interventions through mechanisms such as counseling, education, and provision of condoms for prevention of STIs, especially GUD among disadvantaged men, the disparity in rates of HIV incidence could be lessened considerably. As part of the clinical interview, healthcare providers should routinely and regularly obtain sexual histories from their patients and plan retention management measures along with implementing measures for risk reduction. It is important to ensure that clients continue to practice safe behavior through sustained follow-up. We recommend that counselors working with participants and beneficiaries of research studies and programs should specifically take into consideration clients' occupation, current marital relationship, habit of alcohol consumption and the possibility of non-GUD STI, and identify cases that may have a potential for being lost to follow-up. This strategy may prove to be cost-effective, less cumbersome and an easier way to ensure high retention. In future, the predictors identified in this study could be used to develop a counseling checklist with measurable indicators of failure in retention.
Such a tool would require validation studies in prevention programs and clinical trial settings. The recruitment of participants in this study was through public-sector STI clinics, which limits the generalizability of the findings. The profiles of clients visiting public and private sector facilities are known to be different [37][38]. Since VCT was primarily offered in a research context in this study, lessons learnt may have some limitations in terms of applicability to primary prevention programs rolled out to the masses. Hence the predictors of retention identified in this study will have to be understood appropriately in the context of patients receiving health care in other facilities. Secondly, the study essentially involves men; in India, men are not only the key decision makers in communities and families but also the major contributors to HIV transmission [20,39]. The National Family Health Survey III data [40] in India have shown that 10-15% of Indian men are at risk of HIV infection. Hence studies to identify predictors of retention among men gain significance. However, the predictors of retention among women are likely to be different, and they must be explored. We conclude that achieving high levels of retention and preventing drop-outs was a challenge in all three scenarios of primary prevention, cohort studies and clinical trials. Knowledge of the identified predictors of sub-optimal retention could be useful in developing appropriate retention checklists or tools for the above-mentioned prevention and research programs to minimize potential drop-outs.
Background: Retention is critical in HIV prevention programs and clinical research. We studied retention in the three modeled scenarios of primary prevention programs, cohort studies and clinical trials to identify predictors of retention. Methodology/Principal Findings: Men attending Sexually Transmitted Infection (STI) clinics (n = 10,801) were followed in a cohort study spanning a ten-year period (1993-2002) in Pune, India. Using pre-set definitions, cases with optimal retention in the prevention program (n = 1286), cohort study (n = 940) and clinical trial (n = 896) scenarios were identified from this cohort. An equal number of controls matched for age and period of enrollment was selected. A case-control analysis using conditional logistic regression was performed. Being employed was a predictor of lower retention in all three modeled scenarios. Presence of genital ulcer disease (GUD), history of commercial sex work and living away from the family were predictors of lower retention in the primary prevention, cohort study and clinical trial models, respectively. Alcohol consumption predicted lower retention in the cohort study and clinical trial models. Married monogamous men were less likely to be retained in the primary prevention and cohort study models. Conclusions/Significance: Predicting potential drop-outs among beneficiaries or research participants at the entry point of prevention programs and research, respectively, is possible. Suitable interventions might help in optimizing retention. Customized counseling to prepare the clients properly may help in their retention.
Background Race [1,2] as well as socioeconomic status (SES) [3-10] impact health outcomes. There is, however, a debate regarding whether the effects of race on health outcomes are fully due to the lower SES of minority groups [11][12][13][14][15][16][17][18][19]. Similarly, while race [20][21][22][23][24][25][26][27] and SES [28,29] both impact cancer incidence and outcomes, it is still unknown to what degree SES explains the racial gap in cancer outcomes [30][31][32][33][34][35][36]. While poor cancer beliefs are more common in racial minority groups as well as individuals with low SES [37][38][39][40], we still do not know whether all of the racial disparities in cancer beliefs are due to SES differences between races or whether race influences cancer beliefs above and beyond SES. A considerable amount of research suggests that SES only partially explains the racial differences in health [11][12][13][14][15][16][17][18][19], a finding which has also been shown for cancer outcomes [24,25,[30][31][32][33][34][35][36]. In other terms, it is still unknown whether it is "race and SES" or "race or SES" that shapes cancer disparities [11,24,25]. If it is race or SES, then racial differences in cancer outcomes are fully explained by SES. In such a case, eliminating the SES gap across racial groups would be enough to eliminate the racial gap in cancer outcomes. If it is race and SES, however, SES would only partially account for racial differences in outcomes [12][13][14][15][16][17][18][19]. In this case, elimination of the racial gap in cancer outcomes would require interventions and programs that go beyond equalizing SES across racial groups [41][42][43][44][45]. A residual effect of race that is above and beyond SES may be due to racism, discrimination, and culture [41,45]. Thus, understanding whether SES fully mediates the effect of race on cancer outcomes has practical implications for public and social policy, public health programs, as well as clinical practice. A considerable body of research has shown that low SES people and Black individuals have a higher risk of cancer [35,[46][47][48], probably due to environmental exposures and behaviors such as poor diet, drinking, and smoking [49]. At the same time, Blacks and low SES people have lower health literacy [50], lower trust in the health care system, and a lower perceived risk of cancer [51,52]. As a result, compared to high SES and White individuals, Black and low SES people have a lower tendency toward cancer screening behaviors [53]. Consequently, when cancer is diagnosed, it is at a more advanced stage, which reduces survival and worsens the prognosis [35]. These all result in what we know as racial and SES disparities in cancer outcomes [35,[46][47][48]54]. Aims: To expand the current knowledge on this topic, we used a national sample of American adults to test the separate and additive effects of race and SES on fatalistic cancer beliefs, perceived risk of cancer, and cancer worries. Such knowledge will help with designing and implementing the most effective policies, programs, and practices that may eliminate the racial and SES gaps in cancer beliefs, cognitions, and emotions. --- Methods --- Design and Setting This was a cross-sectional study using data from the Health Information National Trends Survey (HINTS-5, Cycle 1, 2017). HINTS is a national survey which has been periodically administered by the National Cancer Institute (NCI) since 2003.
The HINTS study series provides a nationally representative picture of Americans' cancer-related information [55]. HINTS-5, Cycle 1 data were collected between January and May 2017 [56-58]. --- Ethics All participants provided informed written consent. Westat's Institutional Review Board (IRB) approved the HINTS-5 study protocol (Westat's Federalwide Assurance (FWA) number = FWA00005551, Westat's IRB number = 00000695, the project OMB number = 0920-0589). The National Institutes of Health (NIH) Office of Human Subjects exempted HINTS from IRB review. --- Sampling The HINTS sample is composed of American adults (age ≥ 18) who were living in the US and were not institutionalized. HINTS-5, Cycle 1 used a two-stage sampling design in which the first stage was a stratified sample of residential addresses. Any non-vacant residential address was considered eligible. The address list was obtained from the Marketing Systems Group (MSG). In the second sampling stage, one adult was sampled from each selected household. The sampling frame was composed of two strata based on the concentration of minorities (areas with high and areas with low concentrations of racial and ethnic minorities). Equal-probability sampling was applied to sample households from each stratum [55]. --- Surveys The surveys were mailed to the participants' addresses. A monetary incentive (included in the mailing) was given to participants to increase the participation rate. Two toll-free numbers were provided for respondents to call: one for English calls and one for Spanish calls. The overall response rate was 32.4% [55]. --- Study Variables The study variables included race, age, gender, educational attainment, income, family history of cancer, health insurance status, cancer worries, fatalistic cancer beliefs, and perceived risk of cancer. Outcome measures included fatalistic cancer beliefs, perceived risk of cancer, and cancer worries. Race/ethnicity was the independent variable. Educational attainment and income were mediators. Age, gender, family history of cancer, and health insurance status were covariates. --- Independent Variable Race/ethnicity. Race/ethnicity was the independent variable of interest and was treated as a dichotomous variable (0 = non-Hispanic White, 1 = non-Hispanic Black). --- Covariates Demographic Factors. Age and gender were the demographic covariates. Age was an interval measure ranging from 18 to 101. Gender was treated as a dichotomous variable (0 = female, 1 = male). Health Insurance Status. Availability of health insurance was measured using the following insurance types: (1) insurance purchased from insurance companies; (2) Medicare (for people 65 and older, or people with disabilities); (3) Medicaid, Medical Assistance, or other government-assistance plans; (4) TRICARE and any other military health care; (5) Veterans Affairs; (6) Indian Health Services; and (7) any other health coverage plan. Insurance status was operationalized as a dichotomous variable (0 = no insurance, 1 = any insurance, regardless of type). Family History of Cancer. History of cancer in the family was assessed using the following single item: "Have any of your family members ever had cancer?" The answers included yes, no, and do not know. --- Dependent Variables Fatalistic Cancer Beliefs. Fatalistic cancer beliefs were measured using the stem "How much do you agree or disagree with each of the following statements?"
followed by these items: (1) "There's not much you can do to lower your chances of getting cancer"; (2) "It seems like everything causes cancer"; (3) "There are so many different recommendations about preventing cancer, it's hard to know which ones to follow"; and (4) "When I think about cancer, I automatically think about death". Answers were four-point Likert items ranging from strongly disagree to strongly agree. A sum score was calculated, with a possible range from four to sixteen. Fatalistic cancer beliefs were operationalized as an interval measure, with higher scores reflecting more fatalistic beliefs [59]. Perceived Risk of Cancer. Perceived risk of cancer was measured using the following item: "How likely are you to get cancer in your lifetime?" Responses were on a five-point Likert scale ranging from (1) very unlikely to (5) very likely. Perceived risk of cancer was operationalized as an interval measure, with a higher score indicating higher perceived cancer risk [60]. Cancer Worries. Cancer worries were measured using the following item: "How worried are you about getting cancer?" Responses were on a five-point scale ranging from (1) not at all to (5) extremely. Cancer worries were operationalized as an interval measure, with a higher score indicating more cancer worries [61]. --- Mediators Educational Attainment. Educational attainment, one of the main SES indicators, was a mediator in this study. It was treated as an interval variable ranging from 1 to 5: (1) less than high school graduate, (2) high school graduate, (3) some college education, (4) completed bachelor's degree, and (5) post-baccalaureate degree, with a higher score indicating higher SES. Income. Income, one of the most robust SES indicators, was the other mediator in this study. It was treated as an interval variable ranging from 1 to 5: (1) less than $20,000; (2) $20,000-34,999; (3) $35,000-49,999; (4) $50,000-74,999; (5) $75,000 or more, with a higher score indicating higher SES. --- Statistical Analysis For data analysis, we used Stata 15.0 (Stata Corp., College Station, TX, USA). For the univariate analysis, we reported means or relative frequencies (proportions) with their standard errors (SE). For the multivariable analysis, we ran three structural equation models (SEM) [62], one for each outcome. Specific models were fitted for fatalistic cancer beliefs, perceived risk of cancer, and cancer worries. Race was the main independent variable. Gender, age, insurance status, and having a family member with cancer were the covariates. Educational attainment and income were the mediators. To test whether educational attainment and income fully explain the effect of race on the outcomes, we ran models in the pooled sample without and with educational attainment and income as mediators. Path coefficients, SE, 95% CI, z-values, and p-values were reported. SEM uses maximum likelihood estimation to handle missing data [63,64]. Conventional fit statistics such as the comparative fit index (CFI), the root mean square error of approximation (RMSEA), and the chi-square to degrees of freedom ratio were used. A chi-square to degrees of freedom ratio of less than 4.00, a CFI of more than 0.90, and an RMSEA of less than 0.06 were indicators of good fit [65,66]. We did not define our mediators and outcomes as latent factors for several reasons.
First, income is only one of the underlying mechanisms by which education improves health and behaviors. Due to labor market discrimination, differential correlations exist between educational attainment and income across racial groups. Overall, educational attainment has a stronger correlation with income among Whites, as their education is more strongly rewarded by society with high-paying jobs [67-69]. As our findings showed, education and income functioned differently as partial mediators of the effect of race on our outcomes. Similarly, unique patterns of determinants were found for each of our outcomes, supporting our decision not to conceptualize SES and our outcomes as latent factors. Despite not using latent factors, our decision to use SEM for data analysis was based on the following advantages of SEM over regression models: (1) SEM uses data more efficiently in the presence of missing data, (2) SEM enabled us to decompose the effects of race on education, income, and the direct effects on our outcomes, and (3) SEM allowed the error variances of education and income to be correlated, a feature not available in regression analysis. --- Results --- Descriptive Statistics Table 1 summarizes descriptive characteristics of the participants. Participants had an average age of 49 years (SE = 0.34). Slightly more than half (52%) of the participants were female. Of all participants, 87% were non-Hispanic White and 13% were non-Hispanic Black. About 92% of the participants had insurance. --- Bivariate Correlations Race was correlated with age, educational attainment, and income. Educational attainment was positively correlated with income and negatively correlated with fatalistic cancer beliefs. Income was also negatively correlated with fatalistic cancer beliefs. Cancer worries and perceived risk of cancer were positively correlated with each other; however, neither was correlated with fatalistic cancer beliefs (Table 2). --- Fatalistic Cancer Beliefs Model 1 was fitted for fatalistic cancer beliefs and showed an acceptable fit (χ² = 97.276, p < 0.001, CFI = 0.923, RMSEA = 0.06). According to this model, race (b = 1.68; p < 0.001), educational attainment (b = -0.65; p < 0.001), and income (b = -0.33; p < 0.001) were all associated with cancer beliefs. Black, less educated, and low-income individuals had worse (more fatalistic) cancer beliefs. This model showed that SES indicators only partially mediate the effect of race on poor cancer beliefs. Race was directly associated with poor cancer beliefs, on top of its indirect effects through low educational attainment and low income (Table 3, Figure 1A). --- Perceived Risk of Cancer Model 2 was fitted with perceived risk of cancer as the outcome and showed an acceptable fit (χ² = 95.541, p < 0.001, CFI = 0.914, RMSEA = 0.06). According to this model, race (b = -0.55; p < 0.001) and income (b = 0.07; p = 0.005), but not educational attainment (b = 0.02; p = 0.714), were associated with perceived risk of cancer, with non-Hispanic Blacks and those with low income reporting lower perceived risk of cancer. This model showed that low income only partially mediates the effect of race on perceived risk of cancer. That is, race was directly associated with perceived risk, in addition to showing an indirect effect through income (Table 4, Figure 1B).
--- Cancer Worries Model 3 was fitted with cancer worries as the outcome. This model also showed an acceptable fit (χ² = 94.999, p < 0.001, CFI = 0.917, RMSEA = 0.06). According to this model, race (b = -0.36; p < 0.001), but not educational attainment (b = -0.05; p = 0.126) or income (b = 0.02; p = 0.232), was associated with cancer worries. Non-Hispanic Black individuals had lower cancer worries, net of their SES. Based on this model, SES indicators did not mediate the effect of race on cancer worries; race was directly associated with cancer worries, independent of educational attainment or income (Table 5, Figure 1C). --- Discussion In a nationally representative sample of non-Hispanic White and Black American adults, this study found that SES does not fully explain the racial differences in fatalistic cancer beliefs, perceived risk of cancer, and cancer worries. That is, race has direct effects on cancer-related cognitions, emotions, and perceptions that go beyond its effects on SES. As a result, eliminating SES gaps would not be enough to eliminate the racial gap in cancer outcomes. Low-SES individuals and Blacks are at an increased risk of cancer compared to high-SES and White individuals [20,28]. Despite their higher risk, they have less accurate cancer beliefs, lower perceived risk of cancer, and fewer cancer worries [56,57,70-73]. This pattern suggests that Blacks may discount their risk of cancer, possibly to minimize cognitive dissonance, particularly because cancer evokes high levels of fear [74-78]. These psychological processes may contribute to low uptake of cancer screening, possibly through avoidance of cancer anxiety and worries [74-80]. Blacks experience other types of adversities as well. For instance, while age increased Whites' chance of having a conversation about lung cancer with their doctors, Blacks' chance of discussing lung cancer with their doctor did not increase with age, which may increase the risk of undiagnosed cancer in high-risk Black individuals [58]. In another study, perceived risk of cancer was associated with higher cancer screening for Whites but not Blacks [21]. It has been shown that eliminating racial disparities in cancer screening may contribute to eliminating disparities in cancer outcomes, particularly mortality [23]. This combination puts the health and well-being of Black and low-SES individuals in jeopardy. At the same time, it imposes enormous direct and indirect costs on the US health care system. This is not only paradoxical but troubling. Being at high risk of cancer, combined with fatalistic cancer beliefs, low perceived risk of cancer, low cancer worries, poor cancer knowledge, and low self-efficacy regarding cancer prevention, is a real public health and policy challenge [37-40,61,81,82]. This challenging reality invites policy makers, public health practitioners, and clinicians to invest heavily in enhancing the cancer beliefs, cognitions, and emotions of low-SES and Black individuals, the groups that most need these interventions but currently lack them. That means that, instead of universal programs, we need interventions that disproportionately target low-SES and Black individuals.
If SES could fully explain (i.e., mediate) the effects of race on health, then reducing racial inequalities in health would be easier, as equalizing racial groups' access to SES resources would fully eliminate them [11]. The reality is that such efforts, while effective, are not enough [11,41,42]. We are not arguing that such efforts are not needed or that they are not effective in reducing racial gaps. Instead, our argument is that these differences will not be eliminated if the only focus is SES. Even with equal SES, racial groups will show differential outcomes [41,42]. This is mainly because SES resources better serve Whites than non-Whites, particularly Blacks, and high-SES Blacks still have high health needs [43-45,83-85]. This disadvantage of Blacks, also known as "Minorities' Diminished Returns", suggests that we tend to over-estimate the effects of enhancing SES on racial disparities [43-45,84,85]. The ultimate solution to racial disparities includes policies that focus on racism and structural aspects of society, rather than merely addressing racial gaps in access to SES resources [86-91]. Racism and discrimination are possible reasons why racial minority groups have worse cancer beliefs, cognitions, and emotions, above and beyond SES. Another explanation for this phenomenon may be health literacy, and cancer literacy in particular [33,92]. Finally, some of the racial and ethnic differences in cancer beliefs, cognitions, and emotions may be due to culture [93-95]. Additional research is needed to decompose the role of structural and social factors, culture, and knowledge (e.g., health literacy) in racial differences that go beyond SES differences. Stigma, mistrust, and fear should not be left out when we address race and SES disparities in cancer emotions and cognitions [96]. Eliminating SES differences across racial groups is not enough to eliminate the racial gap in health, and cancer is not an exception to this rule. The effect of race beyond SES is mainly due to racism and discrimination. Society treats racial groups unequally based on skin color; non-White groups are perceived as inferior and are discriminated against. Discrimination is a known risk factor for poor health [97,98]. Barriers beyond SES should not be ignored as a major cause of racial disparities in cancer outcomes [99]. Mass media campaigns enhance cancer control via cancer education that targets marginalized groups. Such efforts should simultaneously target racial minorities and low-SES people, instead of focusing on either SES or race alone; addressing one and ignoring the other may not be the optimal solution to the existing problems. Cancer-related cognitions, emotions, and perceptions have major implications for prevention and screening. Service seeking and pro-health behaviors collectively reduce the prevalence and burden of cancer [93]. Such cognitions, emotions, and perceptions are among the reasons Blacks and low-SES individuals have higher cancer risk, are diagnosed late, have lower adherence to cancer screening and treatment, and die more often from cancer [93]. According to this study, race and SES jointly cause disadvantage in cancer outcomes through their effects on cancer cognitions, emotions, and perceptions.
All these processes in turn contribute to the disproportionately high risk and burden of cancer in low-SES and Black individuals [100]. Poor access to the health care system may partially explain the poor cancer outcomes of marginalized groups, including low-SES and Black individuals [95]. This study controlled for two indicators of access to health care. Although we did not directly measure stigma, our SES constructs correlate with stigma; thus, our study may have indirectly captured the confounding role of access and stigma. This argument is based on the fact that individuals who regularly use health care have lower stigma and higher trust toward the health care system and health care providers [101]. Low-SES individuals and Blacks have higher stigma and lower trust in the health care system [102], which is one of the reasons they have worse cancer beliefs, cognitions, and emotions as well as a higher cancer burden [103]. --- Study Limitations and Future Research The current study had some limitations. First, the sample size was disproportionately lower for Blacks, which may have implications for statistical power. To address this issue, we ran all of our models within the pooled sample rather than running models separately across racial groups. Second, the study was cross-sectional in design; we can infer association but not causation. Third, this study only included individual-level factors. Fourth, this study missed some potential confounders such as personal history of cancer. Fifth, some of the study constructs were measured using only one or a few items. Future work should use more sophisticated and comprehensive measures with higher reliability and validity. There is also a need to study whether these patterns differ across age groups and cohorts. Finally, there is a need to replicate these findings for each type of cancer and for other racial and ethnic groups. Despite these methodological and conceptual limitations, this study still makes a unique contribution to the existing literature on the additive effects of race and SES on cancer beliefs, cognitions, and emotions. The current study was also limited in how it measured the dependent variables, namely fatalistic cancer beliefs, perceived risk of cancer, and cancer worries. Cancer beliefs were measured using the following items: (1) "There's not much you can do to lower your chances of getting cancer", (2) "It seems like everything causes cancer", (3) "There are so many different recommendations about preventing cancer, it's hard to know which ones to follow", and (4) "When I think about cancer, I automatically think about death". While all four items reflect "fatalistic cancer beliefs", some of them also reflect confusion about cancer information or low perceived self-efficacy in preventing cancer ("There's not much you can do to lower your chances of getting cancer"; "There are so many different recommendations about preventing cancer, it's hard to know which ones to follow"). The wording of some of the items may also be problematic. For example, we do not know whether item #2 is taken literally or not. Particularly because of the term "seems", this item may simply suggest that there is a barrage of information out there that is hard to interpret. Item #3 reflects cancer misbelief but may also reflect poor self-efficacy in determining the validity of cancer information.
Due to the surfeit of information available from various sources, it can be hard for many individuals to assess the validity of that information. Responses to these items may therefore be confounded by a sense of frustration about one's own ability to determine the validity of certain claims, some of which are well known for having been reversed, even by top medical facilities. Item #4 reflects cancer beliefs but may also be an indication of the fear associated with cancer; it may or may not literally mean that all cancer diagnoses are lethal. --- Implications The results reported here have major implications for research, practice, and policy making. They advocate for looking beyond SES as the root cause of cancer disparities across racial groups in the US. Although SES is one of the major contributors to racial disparities in cancer, it is not the sole factor. Racial disparities in cancer are the result of race and SES rather than race or SES. Therefore, US policies should address social and structural processes and phenomena such as racism as well as poverty and low educational attainment. Eliminating racial disparities in cancer is not achievable via a single line of interventions that focus on SES; instead, multi-level solutions are needed that address race as well as SES. Policies that only focus on economic and social resources are overly simplistic and will not eliminate the sustained and pervasive disparities by race and SES [41,42]. --- Conclusions To conclude, only some of the racial disparities in cancer beliefs, cognitions, and emotions are due to racial differences in SES. Policy makers, practitioners, public health experts, and researchers should consider race as well as SES as factors that jointly cause disparities in cancer outcomes. Racism, discrimination, culture, access to the health care system, and other individual and contextual factors may play a role in shaping racial disparities in cancer outcomes. --- Author Contributions: S.A.: conceptual design, analysis, manuscript draft. P.K. and H.C.: interpretation of the findings, revision. --- Conflicts of Interest: The authors declare no conflict of interest.
Aim: To determine whether socioeconomic status (SES; educational attainment and income) explains the racial gap in cancer beliefs, cognitions, and emotions in a national sample of American adults. Methods: For this cross-sectional study, data came from the Health Information National Trends Survey (HINTS) 2017, which included a nationally representative sample of American adults. The study enrolled 2277 adults who were either non-Hispanic Black (n = 409) or non-Hispanic White (n = 1868). Race, demographic factors (age and gender), SES (i.e., educational attainment and income), health access (insurance status, usual source of care), family history of cancer, fatalistic cancer beliefs, perceived risk of cancer, and cancer worries were measured. We ran structural equation models (SEMs) for data analysis. Results: Race and SES were associated with perceived risk of cancer, cancer worries, and fatalistic cancer beliefs: non-Hispanic Black race, low educational attainment, and low income were associated with more fatalistic cancer beliefs, lower perceived risk of cancer, and fewer cancer worries. Educational attainment and income only partially mediated the effects of race on cancer beliefs, emotions, and cognitions. Race was directly associated with fatalistic cancer beliefs, perceived risk of cancer, and cancer worries, net of SES. Conclusions: The racial gap in SES is not the only reason behind the racial gap in cancer beliefs, cognitions, and emotions. The racial gap in cancer-related beliefs, emotions, and cognitions is the result of race and SES rather than race or SES. Eliminating the racial gap in socioeconomic status will not be enough to eliminate racial disparities in cancer beliefs, cognitions, and emotions in the United States.
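As a hedged illustration of the scoring and mediation logic described in the preceding study, the sketch below sums the four fatalistic-belief items into the 4-16 score and then compares regressions of that score on race with and without the SES mediators. This is a simplified Baron-Kenny-style check in Python, not the authors' Stata structural equation models, and the file and column names are assumptions.

```python
# Simplified illustration of the scoring and mediation logic described above.
# This is NOT the authors' Stata SEM; it approximates the idea of comparing
# the effect of race on fatalistic beliefs with and without SES mediators
# using ordinary regressions. File and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("hints5_cycle1_subset.csv")  # hypothetical extract

# Fatalistic cancer beliefs: sum of four 4-point Likert items (range 4-16).
belief_items = ["belief_q1", "belief_q2", "belief_q3", "belief_q4"]
df["fatalistic_beliefs"] = df[belief_items].sum(axis=1)

covars = "age + C(gender) + C(insured) + C(family_cancer_history)"

# Total effect of race (0 = non-Hispanic White, 1 = non-Hispanic Black).
total = smf.ols(f"fatalistic_beliefs ~ race + {covars}", data=df).fit()

# Direct effect after adding the SES mediators (education, income).
direct = smf.ols(
    f"fatalistic_beliefs ~ race + education + income + {covars}", data=df
).fit()

print("Total effect of race:  %.3f" % total.params["race"])
print("Direct effect of race: %.3f" % direct.params["race"])
# If the direct effect stays sizable and significant after adding SES,
# SES only partially mediates the race effect, mirroring the paper's
# SEM-based conclusion.
```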
Introduction Climate change is a pressing problem affecting all countries across the globe. As one of the most vulnerable countries and one of the largest greenhouse gas emitters, India faces a complex policy challenge in addressing climate change (Thaker, 2017). The impacts of climate change have become more obvious in recent years in the form of flash floods, cyclones, droughts and landslides, and they are predicted to worsen in the coming years. In times of such climate emergency, it becomes crucial to examine how actors (scientists, activists, journalists and environmental NGOs) communicate this issue. Research over the years has positioned the media as the focal point of climate change communication, as the public's understanding of and engagement with the issue is mostly based on how the media represent it (Carvalho, 2007; Junsheng et al., 2019; Wolf & Moser, 2011, p. 2). The transition from traditional media to social media has opened up new ways of communicating and engaging the general public about a range of topics. Yet making climate change meaningful to the masses has proven challenging (DiFrancesco & Young, 2011). Despite all the communication efforts from various actors over the years, it remains an abstract issue, far removed from the day-to-day lives of most people (S. J. O'Neill & Hulme, 2009). Researchers attribute this to the lack of visibility of its causes and stakeholders' indirect experience of its impacts (Doyle, 2007; O'Neill & Smith, 2014; Wang et al., 2018). It is well known that visuals and images strengthen the public's understanding of complex issues, but when it comes to climate change, their use is deeply contested. The time lag between cause and effect has made the visual depiction of climate change problematic (Doyle, 2011). Leiserowitz (2006) argued that the lack of "vivid, concrete and personally relevant affective images" makes people perceive it as a disconnected and faraway issue. Until recently, the visual language of climate change was mostly dominated by graphs and scientific figures (O'Neill & Smith, 2014). While the cumulative nature of climate change poses problems for its visual representation, a considerable array of imagery associated with climate change is extensively used across online platforms today (Wang et al., 2018). Environmental NGOs play a critical role in bridging the communication gap between the scientific community, government officials and the local public on climate change issues (Jeffrey, 2001). Earlier studies of climate change communication by non-governmental organizations (Doyle, 2007) found that the popular iconographies of climate change seen today were produced through the cumulative impact of NGOs' campaigning choices. The popularity of digital media has prompted environmental NGOs to employ more visuals to engage the public on social networking sites, as visuals are considered central to digital media consumption. There have been many studies on the visual representation of climate change across various media platforms (Culloty et al., 2018; Lehman et al., 2019; O'Neill & Smith, 2014; Wang et al., 2018). Theoretical perspectives on visual climate change communication, however, remain limited. The most widely used framework for climate change communication is framing theory as proposed by Entman (1993), but it has mostly been applied to the analysis of climate change news in print media.
Framing assumes that media coverage and representation influence how people perceive an issue (Culloty et al., 2018). The present study examines how NGOs represent climate change visually on their social media (Instagram) pages. To understand the visual framing, the study used the seven principles of visual climate communication proposed by Climate Outreach in its 2015 report, on which the research questions are based. The seven principles cover the portrayal of 'real' people; new climate narratives; the causes of climate change at scale; emotionally powerful climate impacts; climate impacts in a local context; the problematic visuals of protests; and understanding the audience (Corner et al., 2015). --- Methodology The present study employed visual content analysis to explore the visual representation of climate change on the social media pages of environmental NGOs in India (Metag, 2016). Through this analysis, the study aimed to investigate how NGOs negotiate the visual limitations of climate change to communicate the issue on an image-centric platform such as Instagram, and to examine how closely the content aligns with the visual principles for effective climate change communication proposed in Climate Outreach's 2015 report. To develop the sample frame, the site search function was used with the key terms "climate change", "NGO" and "India" (Site: Instagram.com "climate change" "India" "NGO") across two popular search engines (Google and Yahoo). Out of the 23 Instagram accounts that emerged in the initial search results, the researcher purposively selected four NGO accounts, namely Green Yatra, Greenpeace India, Climate Change India and Climate Front India, that fulfilled the following criteria: popularity (more than 500 followers), activity level (a minimum of 100 posts) and #climatechange-tagged content. From each NGO account, the thirty most recent Instagram posts as of 20 October 2022 that carried any of the following hashtags were selected for the study: #climatechange, #climatecrisis, #globalwarming or #climateaction. Repeated posts and posts containing promotions or advertisements related to the organization were excluded from the selection process. Thus, a total of 120 posts were retained for coding. --- Coding procedures Coding was mostly based on existing codes that have emerged in the literature on climate change visuals (DiFrancesco & Young, 2011; Doyle, 2011; Lehman et al., 2019; León et al., 2022; O'Neill & Smith, 2014) and in other Instagram studies (Cohen et al., 2019). Table 1 presents the categorization codes and sub-codes used for coding the visual posts. Only the first image of each post series was coded. The posts were analyzed along with their captions and were grouped into four categories: type of imagery used, the subject of the image, its geographic context and its thematic focus (DiFrancesco & Young, 2011). Imagery type is the type of visual component used in the post and was categorized into four main codes: visual image (photographs/illustrations/artwork); text only (quotes/data-driven/news/narrative story); text combined with image; and video (Cohen et al., 2019). Image subject was coded into human subjects (human/illustrated figure) and non-human subjects. The human subjects were further categorized under codes such as identifiable/unidentifiable, victims/having agency, or locals/activists (Doyle, 2007; S. O'Neill, 2020; O'Neill & Smith, 2014).
If the identity of the portrayed human subject was not well known or mentioned anywhere (in the post or captions), it was coded as 'unidentifiable'. The non-human subjects were coded into nature (greenery/urban or industries/disaster or pollution); animals (wild/domestic); and others. Image subject codes were not mutually exclusive. When an image contained more than one visual element, only the most meaningful element was coded. Image context is the setting shown in the imagery and was coded as local, national or general. Post themes were coded into Causes, Impacts, Solutions and others (DiFrancesco & Young, 2011). --- Potential limitations and ethical considerations While Instagram remains a popular social media platform for NGOs to connect with the public on issues such as climate change, relying solely on it may result in an incomplete understanding of the broader NGO landscape and of communication efforts on other important social media platforms such as Facebook and Twitter. Additionally, the selection of NGOs based on popularity and activity level introduces a risk of bias and may overlook important contributions from lesser-known organizations. This limitation could affect the generalizability of the findings and may not capture the whole picture of NGOs' communication patterns on the issue. Ethical considerations were addressed by analyzing only publicly accessible content on Instagram. However, it is important to note that the study used content shared by NGOs without obtaining explicit informed consent from individual users. Although efforts were made to adhere to ethical guidelines, it is still possible that individual privacy could be compromised. Future research could address these ethical issues and explore ways to obtain informed consent when studying social media content. --- Results and Discussion --- Types of imagery Figure 1 illustrates the types of imagery used in the study. Out of the 120 posts analyzed, approximately 60% featured text combined with images, 20% consisted of visuals only, 11% were text only, and 9% were in video format. The prominence of different imagery types varied across NGOs; for example, Greenpeace India used more video content than the rest. However, text combined with image remained the most used imagery type across all four NGO accounts. Photographs were predominant, followed by illustrations and posters. Of the overlaid text content, 38% was rated as educational, 25% as opinions or quotes, 16% as motivational, 13% as warnings, 3% as humorous and 5% as other. The images accompanying the texts mostly comprised photographs (69%) and illustrations (31%). Similarly, photographs dominated the visual-only posts (84%). Eleven percent of the posts were text only, containing mostly quotes and data-driven information. Videos, in general, were rarely used. Figures 2(a) and 2(b) provide illustrations of the various types of imagery used to depict climate change in India. --- Image subject The imagery used by the NGOs covered both human and non-human subjects; however, human subjects dominated, appearing in almost half of the visuals, as shown in Figure 3. A further 32% of the visuals focused on nature, 11% covered animals and the remaining 7% focused on other elements (e.g., food). Of all the human figures (85.1% real people and 14.8% illustrated figures), locals dominated the sample, followed by activist groups.
However, this pattern changed when individual accounts were analyzed separately (e.g., in the Climate Front India account, activists were more predominant). Most human subjects shown were unidentifiable, with no description in the posts or captions, except for a few in Greenpeace India's posts. Celebrities and officials appeared in insignificant numbers. Males and females were represented almost equally, whereas other genders were absent. Most of the posts depicted young and middle-aged people, followed by children. None of the imagery featured human figures with a visible physical disability. Most of the posts (70.3%) showed humans as having agency, while 22.2% portrayed them as victims and 7.4% as perpetrators. Of the imagery that contained nature, 50% featured urban environments (industries/pollution), 29.4% greenery, and the remaining 20.5% fell under the other nature code. Figures 4(a) and 4(b) illustrate the image subjects related to humans: in the Greenpeace India post, the image depicts an 'ordinary man' riding a bicycle, while in the Green Yatra post, humans are portrayed as 'victims'. --- Image context Figure 6 presents the findings pertaining to the context of the images. Of the 120 posts, 50 (41.6%) carried general content, 32% carried content related to India nationally, and 25% carried localized content specifying villages, cities and states in India. A large portion of the locally based content (64%) was produced by Greenpeace India alone, whereas Climate Change India produced more general, non-context-sensitive content (46%). Examples of image context are illustrated in Figure 7, showcasing general content related to climate change in India. --- Post themes The findings related to post themes are shown in Figure 8, and Figures 9(a), 9(b), 9(c) and 9(d) present examples of post themes used in climate change campaigns in India. The proportion of posts focusing on solutions (47.5%) was higher than for causes (24%) and impacts (17.5%). Around 11% of the posts consisted of posters and quotes that did not fit any of these frames. The solution posts covered diverse topics including sustainable lifestyles, forest and water conservation, wildlife protection, and reviving traditional food culture. Around 35% of the solution posts showed climate activism. The cause frame mostly covered visuals concerning pollution, food wastage, deforestation, and coal usage. Impacts were mostly illustrated through visuals of natural disasters (floods/droughts), water scarcity, animal suffering, and heat waves. --- Discussion The present study looked into the Instagram accounts of four environmental NGOs working on climate change issues in the Indian context, namely Green Yatra, Greenpeace India, Climate Change India and Climate Front India. The findings were analyzed to understand how the types of imagery used by the four NGOs negotiate the visual complexities of climate change. The NGOs' use of climate change imagery was discussed on the basis of the seven principles proposed by Climate Outreach in its 2015 report: the portrayal of 'real' people; new climate narratives; the causes of climate change at scale; emotionally powerful climate impacts; climate impacts in a local context; the problematic visuals of protests; and understanding the audience (Corner et al., 2015).
These climate change visual principles, grounded in a substantial body of work in the visual communication and climate change communication disciplines, are a helpful heuristic for analyzing the main findings of the present study (Wang et al., 2018). The abstract nature of climate change, owing to the lack of visual evidence, creates difficulties in communicating it through visuals (Doyle, 2011). The environmental NGOs studied here employ a wide array of imagery, such as visuals only, text combined with visuals, text only and video, to represent climate change issues on Instagram. Although the study included only NGOs working in India, there were considerable differences in how each addressed climate change. Most of the visuals in their Instagram posts are accompanied by text, reinforcing the limits of visuals alone in representing climate change. A standard approach for visualizing climate change is to use universally recognizable icons such as polar bears, glaciers and smokestacks (Schroth et al., 2014). However, the findings showed limited use of such "clichéd" iconographies, with only a few NGO posts featuring polar bears or smokestacks. This may be the result of the decade-long arguments in the climate communication literature (Doyle, 2011; Manzo, 2010; O'Neill & Smith, 2014) around the problematic use of symbolic and iconic photographs in climate change communication. On the other hand, while such images are criticized as "psychologically distant", the public finds them among the easiest to understand images of climate change (Lehman et al., 2019). Images of floods, cracked ground, forest fires and animal deaths were the other impact visuals used in NGO communication in India. Such images capture people's attention and create a sense of the importance of climate change (S. J. O'Neill et al., 2013). Flood images have been ranked as most important in many studies (Lehman et al., 2019). However, communicators still struggle to understand how such images could empower people to act on climate change. Research (Corner et al., 2015) has identified seven principles for effective, evidence-based visual climate change communication. The presence of a human figure is important in climate change imagery. Showing 'real' humans in climate change visuals can be effective in evoking emotions (Corner et al., 2015). Previous literature showed that most climate change visuals portray humans as separated and disconnected from the environment (Doyle, 2011). According to Ockwell et al. (2009), people fail to internalize climate change visuals because they lack a human element. The findings of the present study revealed that almost half of the climate change imagery in the NGO posts had at least one human figure in it, although the ratio varied across individual accounts. A considerable number of illustrations were also used to portray humans. However, research has shown that increasing public engagement is possible only when real people doing real things are represented (O'Neill & Smith, 2014). Such images are considered 'authentic' and can evoke emotions in the public (León et al., 2022). Most humans portrayed by the NGOs on their Instagram pages are ordinary, non-identifiable people. This aligns with previous studies, which found that identifiable people are shown less on social media platforms than in traditional media (León et al., 2022).
The findings also showed that certain communities, such as people with a visible physical disability, were not given proper coverage in the NGO Instagram posts. New climate change narratives are necessary to draw more attention. Although the 'classic' images of smokestacks, polar bears or deforestation are useful in communication, audiences often find them cynical (Corner et al., 2015). Images that tell real-life stories are an effective attempt to remake the visual representation of climate change in the public mind (Corner et al., 2015). The NGOs in India have made considerable attempts to include people's narratives in their climate change posts. This is most evident with Greenpeace India, which overlaid quotations from affected people on its visuals, with the full story given in the accompanying captions. Such communication attempts have been shown to be more effective than historical narratives. Then again, such images have been criticized for evoking only feelings but not actions (S. O'Neill, 2020). On the other hand, personal stories of successful adaptation or mitigation activities were found to be effective in fostering engagement among 'resistant audiences' (León et al., 2022). Humor is another way to offer diverse interpretations of climate change; however, only a few of the NGO posts under study had humorous content. For a long time, visuals of smoking chimneys dominated the cause frame of climate change (Wang et al., 2018). This has changed as NGO campaigners' focus has shifted to changing individual behaviors. Research has shown that the general public does not connect behaviors such as driving a car or scooter, eating meat or wasting food with climate change. The causes of climate change therefore need to be shown at scale (Corner et al., 2015). The majority of the cause-related posts in the study showed congested traffic, landfills or smoking chimneys. Research over the years has repeatedly demonstrated the power of climate impact visuals in making climate change relevant (Lehman et al., 2019). Climate change impact visuals started becoming more prominent in the 1990s with images of melting ice, floods and drought (Wang et al., 2018). Research has shown that fear-inducing and negative impact photographs, though they create a sense of urgency about the issue, can be overwhelming (Nicholson-Cole, 2005; Ojala et al., 2021). Impact frames were less common in the NGOs' Instagram content; their focus was more on climate solutions such as sustainable lifestyles, clean energy and reviving traditional food culture. Research indicates that such solution images are more effective when coupled with emotionally powerful impact visuals (Corner et al., 2015); however, no such visual framing was found in the sample. The majority of the impact visuals covered animal suffering and were not exclusively in the Indian context. People are more likely to act when they find the issue connected to their local context and immediate surroundings (Hulme, 2015). However, emphasizing locally based impacts, though effective, may reduce people's concern about the wider issue if the intensity of the situation is not also conveyed (Hulme, 2015). Activists and protesters are the other key subjects found in climate change communication. It has become a common sight to see activists become the face of the issue they represent (e.g., Greta Thunberg).
However, research has shown that such images attract widespread pessimism and do not engage the public beyond those who are already involved (O'Neill & Smith, 2014). Protesters and activists occupied the majority of the content in the Climate Front India and Climate Change India accounts. Though they are crucial for representing marginalized sections in climate change communication and act as a watchdog for government projects (Syahrir, 2021), such images tend to reinforce the idea that climate change is for 'them', not 'us' (Corner et al., 2015). Overall, the content on the Instagram accounts of the selected NGOs showed variation in how climate change was framed and communicated visually. Greenpeace India shares content mostly in line with the visual principles proposed by Climate Outreach: it emphasizes local yet relevant social and environmental issues while using photographs of local people, and most of its posts contain the voices of local people as quotes accompanying the visuals. Green Yatra uses illustrations and data to represent the issue visually; though photographs are used, they are mostly stock photos accompanied by information- and data-rich text. Climate Change India used visuals that demand an urgent call to action; its visuals mostly cover animals and frame humans as perpetrators. Climate Front India covers climate activists and protesters in its content, with visuals mostly consisting of photographs of protesters holding placards. Thus, the study reveals diverse visual framing of climate change across NGO communication. This opens up the need for a more in-depth understanding of the climate change visuals used across various social media platforms by various actors. Since the present study only explores the imagery used by NGOs to communicate the climate change issue, future studies could look into its impact and effectiveness on users, which would be beneficial for planning more audience-centric communication strategies. --- Conclusion The historical favoring of visuals within environmental discourse poses difficulties for environmental organizations (NGOs) in communicating temporally complex environmental issues such as climate change to skeptical governments and a disinterested public (Doyle, 2007). But the proliferation of increasingly image-centric digital platforms indicates that climate change imagery will be essential for fostering public engagement both in the present and in the future (Wang et al., 2018). People understand and perceive issues based on what the media represent, and increasingly that means digital media. The content analysis of climate change-related Instagram posts from four NGOs working in India (Greenpeace India, Green Yatra, Climate Change India and Climate Front India) found diverse use of imagery on the topic despite its problematic visual shortcomings. The lack of central visual tropes was negotiated through a diverse choice of imagery with accompanying text in the Instagram posts. Around half of the imagery in the sample featured humans; however, the majority were staged photographs, contrary to the suggestions outlined in the Climate Outreach report. The classic narratives of climate change, such as polar bears and melting glaciers, were rarely found in the sample. On the other hand, local narratives and stories were more evident, especially in Greenpeace India's posts. Much of the NGOs' communication effort was directed towards changing individual behavior by focusing more on climate change solutions.
The causes and impacts of climate change were given limited focus by the NGOs. Despite the fact that the NGOs selected for the study are based in India, they showcased great diversity in addressing the issue. Much of the content carried generalized themes with little reference to Indian and local contexts. Locals and ordinary people were given more emphasis, unlike in traditional media, which has tended to focus on celebrities and politicians. Protesters and activists were the key players in some posts, especially those of Climate Front India; though they are crucial for representing marginalized sections in climate change communication and for acting as a watchdog for government projects (Syahrir, 2021), such images tend to reinforce the idea that climate change is for 'them', not 'us' (Corner et al., 2015). It also turned out that much of the visual content aligning with the seven principles of climate change communication came from the Greenpeace India account. This suggests potential variation in the communication patterns of NGOs working on climate change and opens up the need to look into the communication strategies of the various actors in climate change communication.
The rising accessibility of mobile phones and the proliferation of social media have revolutionized the way climate change is communicated. Yet the inherent invisibility and temporal complexity of climate change pose challenges when trying to communicate it on visual media platforms. This study employs visual content analysis to investigate how environmental non-governmental organizations (NGOs) in India address these limitations on their Instagram pages. Four environmental NGOs based in India were selected, and their thirty most recent climate change-related Instagram posts were analyzed by imagery type, subject, context and theme. The findings revealed that these NGOs employed a diverse range of climate change imagery, often accompanied by overlaid text, to negotiate the lack of standardized visual tropes. Moreover, a significant majority of the analyzed imagery that followed the visual principles advocated by Climate Outreach came from a single NGO account, suggesting potential variation in visual communication strategies among different NGOs.
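As a hedged illustration of the kind of coding and tallying that lies behind the percentages reported in the study above, the sketch below builds a tiny stand-in table of coded posts and computes imagery-type shares and per-account theme distributions with pandas. The records, field names and category labels are illustrative assumptions modeled on the coding scheme described in the Methodology, not the study's dataset.

```python
# Hypothetical sketch of tallying a manually coded sample of Instagram posts
# into the kinds of proportions reported above (imagery type, theme).
# The records below are illustrative stand-ins for the 120 coded posts.
import pandas as pd

coded_posts = pd.DataFrame([
    # account, imagery_type, subject, context, theme (all hypothetical codes)
    {"account": "Greenpeace India", "imagery_type": "text+image",
     "subject": "human", "context": "local", "theme": "solution"},
    {"account": "Green Yatra", "imagery_type": "visual only",
     "subject": "nature", "context": "general", "theme": "cause"},
    {"account": "Climate Front India", "imagery_type": "text+image",
     "subject": "human", "context": "national", "theme": "solution"},
    {"account": "Climate Change India", "imagery_type": "video",
     "subject": "animal", "context": "general", "theme": "impact"},
])

# Share of each imagery type across all posts (cf. the ~60% text+image figure).
print(coded_posts["imagery_type"].value_counts(normalize=True).round(2))

# Theme distribution broken down by NGO account.
print(pd.crosstab(coded_posts["account"], coded_posts["theme"],
                  normalize="index").round(2))
```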
Introduction Indigenous children in Canada (including First Nations, Métis and Inuit) are at a disproportionately higher risk of overweight and obesity compared to their non-Aboriginal Canadian counterparts. 1,2 Defined as the accumulation of excess body fat, obesity is associated with poor health outcomes including compromised immune function, mental health disorders, type 2 diabetes, cardiovascular disease, sleep apnea and decreased quality of life. 3-7 According to the 2009-2011 Canadian Health Measures Survey, approximately one-third of Canadian children and youth between 5 and 17 years of age are classified as overweight (body mass index [BMI] ≥ 25 kg/m² to < 30 kg/m²) or obese (BMI ≥ 30 kg/m²), with Indigenous children and youth being twice as likely to be classified as obese in comparison. 4 Corroborating this pattern, the Public Health Agency of Canada reports that 20% of First Nations children living outside of First Nations reserves and 16.9% of Métis children have a BMI ≥ 30, compared to 11.7% of non-Indigenous Canadian children. 2,4 While the etiology of obesity is multifactorial and complex, a social determinants of health framework provides a starting point for unpacking the distal causes of child obesity, as well as identifying targets for prevention and treatment. 8,9 However, the health disparities experienced by Indigenous peoples highlight the fact that these social determinants are experienced differently by Indigenous populations and must be explored alongside more culturally relevant factors. Several Indigenous-specific social determinants of health models have been developed as a result, including an ecological model by Willows et al. 8 that includes causal factors related to households, schools, communities and the macrosocial context. Greenwood and de Leeuw 9 use a web diagram to demonstrate that there are multiple interrelated social determinants of Aboriginal peoples' health operating at various socioecological levels. One factor noted in these models that has been gaining increased attention in obesity research is the importance of food security for weight status. Food insecurity is defined as a situation in which availability of, or access to, nutritionally adequate and culturally acceptable food is limited or uncertain. 10,11 While the relationship between food insecurity and obesity may seem paradoxical, research is increasingly linking the two, as food insecurity results in a lack of affordable nutritious food choices, which may in turn result in obesity. 12-16 Adults and children have distinct experiences of food insecurity, as children are more vulnerable to resultant behavioural problems, such as decreased school attendance and performance, and poorer overall health and nutrition, despite parents' efforts to minimize food insecurity's impact. 13,17,18 A possible relationship between food insecurity and obesity may be especially relevant for Indigenous children, as Indigenous households are three times more likely to experience food insecurity than non-Indigenous households. 19,20 The 2007/2008 Canadian Community Health Survey found that 20.9% of Indigenous households were food insecure, with 8.4% experiencing "severe" food insecurity. 20 In comparison, 7.2% of non-Indigenous households were food insecure and 2.5% experienced severe food insecurity. 20 Much of this discrepancy can be explained by the higher prevalence of sociodemographic risk factors in Indigenous households (e.g.
household crowding, lower household income), 19 many of which have also been found to be related to obesity. 21 Previous qualitative research with off-reserve Métis and First Nations parents found that food insecurity was perceived by community members to be an important cause of obesity in their communities. 22 In those interviews, food insecurity was thought to result not only from low income, but also from the high price of fresh food in some locations and a lack of transportation. For some, the loss of traditional food and of knowledge about its preparation was also important, leading to poorer diets. 22 However, the association between food insecurity and obesity in Indigenous children has not been quantitatively examined. Moreover, it is important to consider this relationship in the context of other potentially important effects, including household characteristics, school-level factors, geography and cultural factors. In this paper, we make use of the 2012 Aboriginal Peoples Survey (APS) 23 to examine the association between household food security status and obesity among off-reserve First Nations and Métis children and youth in Canada, independent of other household, school, geographic and cultural factors. --- Methods --- Data and participants The 2012 APS was a postcensal, national survey of the population aged 6 years and older identified in the 2011 National Household Survey, 24 and living outside of First Nations reserve communities as well as select Indigenous communities in the North. 21,23 This study focussed on First Nations and Métis children and youth aged 6 to 17 years. Inuit children and youth were excluded, as the geography-driven factors affecting their food security status, as well as their unique BMI profiles and body fat distribution, require independent investigation. 25,26 After excluding the Inuit population and adults aged 18 years and over, the final sample included 6900 individuals. Questions for children aged 6 to 14 years were answered by the "person most knowledgeable" (PMK) about the child, generally a parent or guardian. Youth aged 15 to 17 years were interviewed directly. Details about the sampling, data collection and weighting are available in the APS concepts and methods guide. 23 --- Main variables --- Obesity status The dependent variable was weight status based on BMI categorization using Cole's BMI cut-offs. 27 BMI was calculated using PMK-reported height and weight of children; the APS asked, "How tall is [your child] without shoes on?" and "How much does [your child] weigh?" in order to calculate BMI. 28 Weight status categories included normal, overweight and obese. --- Food insecurity The 2012 APS measured household food insecurity over the past 12 months using a series of six statements to which the PMK responded "often true," "sometimes true" or "never true." The statements captured whether households were able to afford balanced meals, whether meals had been downsized or skipped because there was not enough money for food, the frequency of these events, and how often household members experienced hunger. These responses were used by Statistics Canada to categorize households into four levels of food security: high, marginal, low and very low. 28 In the analyses, the "high" and "marginal" food security categories were combined into one category.
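The outcome and exposure coding described above can be illustrated with a short, hedged sketch. It is not the study's SAS code: the real analysis applied Cole's age- and sex-specific BMI cut-offs, whereas the function below uses fixed adult thresholds purely as placeholders, and the file and column names are illustrative assumptions.

```python
# Hypothetical sketch of the outcome and exposure coding described above.
# The real analysis used Cole's age- and sex-specific BMI cut-offs; the
# fixed adult thresholds below are placeholders only, and all file and
# column names are illustrative.
import pandas as pd

df = pd.read_csv("aps_2012_children_subset.csv")  # hypothetical extract

# BMI from PMK-reported height (m) and weight (kg).
df["bmi"] = df["weight_kg"] / df["height_m"] ** 2

def weight_status(bmi: float) -> str:
    """Placeholder categorization; Cole's cut-offs vary by age and sex."""
    if bmi >= 30:
        return "obese"
    if bmi >= 25:
        return "overweight"
    return "normal"

df["weight_status"] = df["bmi"].apply(weight_status)

# Collapse Statistics Canada's four food security levels into three,
# combining 'high' and 'marginal' as in the analyses described above.
collapse = {"high": "high/marginal", "marginal": "high/marginal",
            "low": "low", "very low": "very low"}
df["food_security_3cat"] = df["food_security"].map(collapse)

print(pd.crosstab(df["food_security_3cat"], df["weight_status"]))
```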
--- Covariates In addition to household food insecurity, covariates included demographic, household, school, geographic and cultural variables previously identified as having potential relationships with food insecurity or obesity. The demographic variables included were Indigenous identity group (First Nations or Métis), age (6-11 or 12-17 years) and gender (male, female). Household socioeconomic characteristics included annual household income and mother's educational attainment. Household income was divided by the number of household members to provide a "per capita" household measure, which was included as quartiles (less than $9510; $9510-$16 680; $16 690-$27 260; and $27 280 and above). Other household characteristics included family structure (two-parent, lone-parent or other), as well as household crowding, which was measured based on the number of people per room. The APS included questions about the school environment. Respondents were asked to indicate their level of agreement using a four-point scale (strongly disagree, disagree, agree, strongly agree) with nine statements. Aspects of a positive school environment were captured by asking: 1) "Overall, respondent feels/felt safe at school"; 2) "Overall, respondent is/was happy at school"; 3) "Most children enjoy/enjoyed being at school"; and 4) "The school provides/provided many opportunities to be involved in school activities." Negative aspects of the school environment were captured by agreement with 1) "Racism is/was a problem at school"; 2) "Bullying is/was a problem at school"; 3) "The presence of alcohol is/was a problem at school"; 4) "The presence of drugs is/was a problem at school"; and 5) "Violence is/was a problem at school." For each child, responses to the positive and negative environment questions were averaged, so that higher scores indicate more positive or more negative environments. Regional and urban/rural geography were also part of the analysis, as research strongly suggests the importance of broader environmental factors. Lastly, the cultural variables, "exposure to Indigenous language" and "family members' attendance of residential schools," were also included to capture their potential influence on children's weight status. It has been suggested that cultural characteristics such as language retention are important for Indigenous peoples' health in general, and previous research using the 2006 APS found that parental residential school attendance was predictive of obesity among Métis children. 9,22 Children who were reported to be exposed to an Aboriginal language at home or outside the home were coded as "exposed." The APS asked whether the child's PMK (usually a parent) or the PMK's mother or father (the child's grandparent) had attended Indian residential or industrial schools. Those who did not respond to these questions (17%) were retained as a separate category called "not stated." --- Statistical analyses We used Pearson chi-square tests to assess bivariate associations between the independent variables and obesity. Thereafter we used a binary multivariate logistic regression to test the likelihood of children and youth having a BMI in the "normal" range, versus being "overweight" or "obese," conditional on the independent variables that we found to have significant bivariate associations with overweight and obesity. A total of five nested models were fitted, including different groups of predictor variables. We performed our statistical analysis using SAS software version 9.4. 29 We used bootstrap weights provided by Statistics Canada and balanced repeated replication (BRR) to adjust variance estimates for the survey's complex sampling design.
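The analysis itself was run in SAS 9.4, but the sequential-model logic and the replicate-weight variance estimation described above can be sketched in Python. In the sketch below, the data frame column names (e.g. overweight_or_obese, very_low_food_security, brr_wt_*), the grouping of predictors into the five nested blocks, and the omission of a Fay adjustment in the BRR variance formula are all illustrative assumptions rather than the authors' actual specification.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical predictor blocks for the five nested models (Models I-V).
MODEL_BLOCKS = [
    ["low_food_security", "very_low_food_security", "first_nations", "age_12_17", "female"],        # I
    ["income_q2", "income_q3", "income_q4", "lone_parent", "other_family", "crowded"],               # II
    ["pos_school_score", "neg_school_q2", "neg_school_q3", "neg_school_q4"],                         # III
    ["region_atlantic", "region_quebec", "region_prairies", "region_bc", "region_north", "rural"],   # IV
    ["indigenous_language_exposure", "residential_school_family", "residential_school_not_stated"],  # V
]


def fit_weighted_logit(df, outcome, predictors, weight_col):
    """Weighted logistic regression; only point estimates are used (naive SEs are ignored)."""
    X = sm.add_constant(df[predictors], has_constant="add")
    model = sm.GLM(df[outcome], X, family=sm.families.Binomial(),
                   freq_weights=df[weight_col])
    return model.fit()


def brr_standard_errors(df, outcome, predictors, full_weight, replicate_weights):
    """Variance estimation by refitting the model with each replicate (bootstrap) weight."""
    beta_full = fit_weighted_logit(df, outcome, predictors, full_weight).params
    replicate_betas = np.vstack([
        fit_weighted_logit(df, outcome, predictors, w).params.values
        for w in replicate_weights
    ])
    # Mean squared deviation of replicate estimates around the full-sample estimate
    # (no Fay adjustment assumed here).
    variance = ((replicate_betas - beta_full.values) ** 2).mean(axis=0)
    return beta_full, pd.Series(np.sqrt(variance), index=beta_full.index)


def run_nested_models(df, outcome="overweight_or_obese", full_weight="aps_weight",
                      replicate_weights=None):
    """Fit Models I-V cumulatively and report odds ratios with BRR-based 95% CIs."""
    replicate_weights = replicate_weights or [c for c in df.columns if c.startswith("brr_wt_")]
    predictors = []
    for i, block in enumerate(MODEL_BLOCKS, start=1):
        predictors += block
        beta, se = brr_standard_errors(df, outcome, predictors, full_weight, replicate_weights)
        report = pd.DataFrame({
            "OR": np.exp(beta),
            "CI_low": np.exp(beta - 1.96 * se),
            "CI_high": np.exp(beta + 1.96 * se),
        })
        print(f"Model {i}:")
        print(report)
```

Calling run_nested_models(aps_df) on a suitably prepared data frame would reproduce the Model I-V sequence conceptually. Naive model-based standard errors would understate sampling variability under the APS's complex design, which is the reason the replicate weights, rather than the fitted model's own standard errors, are used for variance estimation.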
--- Results Table 1 presents the distribution of the sample across the study variables. Over 80% of children and youth lived in food-secure households, while approximately 9% experienced low food security and 6.8% were severely food insecure. There were significant differences in the percentage of children and youth classified as normal, overweight and obese for all of the covariates examined (Table 1). At the individual level, among those who experienced very low food security, 27.7% were overweight and 19.2% were obese. Age was a critical factor for weight status, as 47.3% of Aboriginal children between the ages of 6 and 11 years were either overweight or obese compared to 30% of youth aged 12 to 17 years. A larger proportion of males fell into the overweight or obese classification (40.3%) compared to females (34.5%). Indigenous identity also had a marginal impact on the likelihood of overweight or obese weight status, as 40% of First Nations children fell into these weight categories, compared with 34% of Métis children. Children and youth who were exposed to an Aboriginal language were more likely to be overweight or obese (40.5%) compared to those who had no exposure (34.5%). The family-level variables also tell an interesting story.
The proportion of overweight or obese children does not largely differ based on mother's educational attainment; 41% of children whose mothers had less than secondary school graduation were overweight or obese, and approximately 35% of children whose mothers obtained a post-secondary certificate, diploma or degree fell into these weight categories. Almost half (44%) of children from the lowest income quartile were overweight or obese. The proportion of children from two-parent families classified as overweight or obese (35.6%) was almost six percentage points less than children from lone-parent families (41.3%), but similar to the proportion of overweight and obesity among children who lived in "other" family structures (i.e. children or youth living alone, with a relative or nonrelative) (35.7%). Of children and youth living in households where there was more than one person per room, 40.0% were classified as overweight or obese compared to 37.2% of children living in households with one or fewer people per room. While 17% of the sample did not respond to the question about a family member attending residential schools, children whose family members had attended residential schools had a higher proportion of overweight or obese status (40.3%) compared to those who did not (36.2%). The regional and urban/rural geography variables showed that almost 40% of Aboriginal children and youth living in the Atlantic provinces, Quebec and Ontario were either overweight or obese. In small population centres, the proportion of children and youth who were overweight or obese was 42.5%, followed by medium population centres (38.9%), large population centres (35.7%) and rural areas (34.4%). The bivariate relationships between the school environment variables and overweight were unclear. Children and youth in school environments that were rated the most positive were the most likely to be obese (18.4%), although those in the third quartile were the least likely to be obese (12.8%). Those rating their school environments the least negatively were the most likely to be obese (24.1%), while those with the most negative school environment rating were the least likely (13.0%). We investigated the adjusted associations between these variables and children's weight status using sequential multivariate logistic regression (Table 2). In Model I, only food security and demographic variables were included, and those with very low food security had higher odds of being obese or overweight (OR = 1.54, 95% CI: 1.11-2.15). In Model II, other household variables were added, and the effect of food security fell below significance. Mother's educational attainment, family structure and crowding had no significant independent effects, but those in the third (OR = 0.76, 95% CI: 0.59-0.97) and fourth (OR = 0.72, 95% CI: 0.55-0.95) income quartiles were significantly less likely to be overweight or obese than those in the first (lowest) quartile. School environment variables were added in Model III. A positive school environment rating was unrelated to overweight or obesity, while those in the second, third and fourth quartiles of "negative" school environment were more likely to be overweight or obese than those in the first quartile. Those whose school environments were rated the most negatively were the most likely to be overweight or obese, relative to those who rated their school environments the least negatively (OR = 1.43, 95% CI: 1.11-1.84). Model IV added geographic variables. 
Rural or urban residence had no effect independent of the other variables, but First Nations and Métis children in British Columbia (OR = 0.65, 95% CI: 0.50-0.86) and the three territories (OR = 0.68, 95% CI: 0.49-0.95) were less likely to be overweight or obese, controlling for the other variables in the model. Lastly, Model V included the two cultural variables: exposure to an Indigenous language and family members having attended residential schools. Neither had a significant independent effect on obesity status. --- Discussion This study provides additional evidence that Indigenous children and youth are at higher risk of overweight and obesity than are other Canadian children. Among youth aged 12 to 17 years in our study sample, 30% were classified as either overweight or obese, compared with 20.7% of all Canadian youth in 2013. 30 First Nations and Métis girls were less likely to be overweight or obese than were boys, an observation that is consistent with previous literature on weight status and sex/gender. 16,31,32 Given that Indigenous children and youth are at a higher risk of overweight and obesity and the potential for weight to impact health outcomes over the life course, [3][4][5][6][7] it is important to understand the distal and "upstream" determinants that drive their weight status. The data shown here support the importance and utility of a socioecological perspective for those ends. 8 There has been little exploration of the relationship between food security and weight status among Indigenous children and youth, despite research suggesting its importance for the health of Aboriginal peoples more generally. 33 Research on the relationship between food insecurity and obesity or overweight among children and youth has thus far been inconclusive, as studies have found either a positive association between food insecurity and obesity 15,[34][35][36] or insignificant results. [37][38][39] There are only a few Canadian studies examining the food insecurity-obesity relationship. 14,40,41 Overall, this study found that food insecurity is indeed a risk factor for overweight or obesity among Indigenous children, with children in very food-insecure households having significantly higher odds of being overweight or obese; however, this excess risk was not independent of household socioeconomic and demographic characteristics. A more negative school environment rating was also associated with higher odds of overweight or obesity, independent of demographic, household and geographic factors. Understanding these results requires further investigation, but it has been suggested elsewhere that schools with negative climates may also be less likely to offer effective opportunities for physical activity. 42 Regional geography appeared to have an impact on weight status, as children and youth living in British Columbia or the three territories were significantly less likely to be overweight or obese compared to children living in Ontario, controlling for household socioeconomic characteristics. Similar variation has been observed previously, and some research suggests that a greater emphasis on outdoor physical activity and the availability of facilities may be partially responsible for the observed difference in weight status across provinces. 43 In addition, socioeconomic status 44,45 as well as being born outside of Canada 44 has been associated with a lower BMI among adults in several provinces, including British Columbia. Somewhat surprisingly, however, there was no difference in the odds of being overweight or obese between Indigenous children and youth living in rural areas and those living in small, medium or large population centres, suggesting that the more important factors were operating at the household and school levels. Given previous literature on the determinants of Indigenous peoples' health, we had expected to find that exposure to an Indigenous language, as a measure of cultural preservation, would be protective against being overweight or obese, and that having a family member who attended residential schools would be a risk factor. Although neither had an independent effect, it must be recognized that these measures included in the APS are only weak measures of cultural attachment or preservation. Further research is necessary to understand whether cultural factors might be related to overweight and obesity at the population level, and if so, in what way. --- Strengths and limitations No other studies to date have examined the relationship between food insecurity and obesity among Aboriginal children and youth at the population level. This study used a national survey with the largest available sample size of Indigenous children and youth. A key limitation of this study, as well as many others investigating the food insecurity-obesity relationship, is that the design is cross-sectional and does not allow us to establish causation or explore how the relationship changes over time. Subjective BMI data were collected, as caregivers were asked to report their children's height and weight. This may have resulted in an underestimate of the prevalence of obesity, as research shows that parents tend to underestimate their children's weight and overestimate their height, leading to a lower BMI than when objectively measured. 45,46 Covariates not measured in this study, such as physical activity and diet, could be responsible for confounding effects. Additionally, given that this is not a well-studied topic, we were not able to compare this association in Aboriginal children and youth with any similar associations in the general Canadian population. It is also difficult to compare our results with those of other studies, because different measures are used to assess food insecurity. The United States uses the United States Department of Agriculture food security scale, 47 which is different from the measures used in the APS or the Canadian Community Health Survey, limiting comparisons.
Moreover, while the literature discusses the importance of including culture and access to traditional foods in an Aboriginal definition of food security, 8,9 the APS food security questions do not include these dimensions. --- Conclusion We concluded that off-reserve Indigenous children and youth who are in households with very low food security are indeed at higher risk for overweight and obesity, but that this excess risk is not independent of household socioeconomic status: it was reduced once household income, adjusted for household size, was taken into account. This suggests that household socioeconomic status is a major contributor to the high risk of overweight and obesity among First Nations and Métis children and youth. We also found that being in a negative school environment is associated with obesity risk, independent of demographic, household and geographic factors. Given the complexity of childhood obesity and overweight, the available data limited our ability to identify conclusively the factors that are most important, including the potential role of food insecurity. There is a lack of longitudinal data to help us understand the interplay of various factors over the life course in different populations. Among Indigenous peoples specifically, community-based participatory research and research using qualitative methods would strongly complement quantitative investigations. Previous research on interventions in Aboriginal communities demonstrates the strength of such an approach. 33,41,42 --- Conflicts of interest The authors report no conflicts of interest. The authors alone are responsible for the content and writing of the paper. --- Authors' contributions JB conceived the idea for the paper, conducted the literature review and preliminary data analysis, and wrote the first draft. MC assisted with the data analysis and manuscript draft, revised the paper and is principal investigator (PI) on the supporting grant. YG conducted the data analysis, and revised and commented on later drafts. PW supervised the data analysis and is co-PI on the supporting grant. All authors read and approved the final manuscript.
Introduction: Indigenous children are twice as likely to be classified as obese and three times as likely to experience household food insecurity when compared with non-Indigenous Canadian children. The purpose of this study was to explore the relationship between food insecurity and weight status among Métis and off-reserve First Nations children and youth across Canada. Methods: We obtained data on children and youth aged 6 to 17 years (n = 6900) from the 2012 Aboriginal Peoples Survey. We tested bivariate relationships using Pearson chi-square tests and used nested binary logistic regressions to examine the food insecurity-weight status relationship, after controlling for geography, household and school characteristics and cultural factors. Results: Approximately 22% of Métis and First Nations children and youth were overweight, and 15% were classified as obese. Over 80% of the sample was reported as food secure, 9% experienced low food security and 7% were severely food insecure. Off-reserve Indigenous children and youth from households with very low food security were at higher risk of overweight or obese status; however, this excess risk was not independent of household socioeconomic status, and was reduced by controlling for household income, adjusted for household size. A negative school environment was also a significant predictor of obesity risk, independent of demographic, household and geographic factors. Conclusion: Both food insecurity and obesity were prevalent among the Indigenous groups studied, and our results suggest that a large proportion of children and youth who are food insecure are also overweight or obese. This study reinforces the importance of including social determinants of health, such as income, school environment and geography, in programs or policies targeting child obesity.