This paper presents an agent-based model to study the effect of grievance, net risk, social, sympathy, and political influence on the likelihood of student protests emerging at South African universities. Studies of student protests have been conducted in several fields, but no ABM has been used to explore the factors contributing to student protests. Student protests have proved to be disorderly, frequently leading to property damage, academic program cancellations, and injuries. Simulation experiments demonstrated that inequality level, number of activists, activist's influential size, number of friendship ties, suspend delay, and sympathy are elements that determine the model of social conflicts, since they are statistically significant in the logistic regression. For university administration to effectively handle disruptive student protest actions, risk management policies should focus on understanding the network structures that integrate students' interactions, in order to monitor the spread of opinions that initiate protest mobilization.
INTRODUCTION Student protests at Public Higher Education Institutions (PHEI) in South Africa continue to be prevalent, even after more than two decades of democracy, for example, the #FeesMustFall protest (Luescher, Loader, & Mugume, 2017). Students are becoming impatient when faced with current high tuition fees, decreased funding opportunities, inadequate student residences, and significant academic and financial exclusions, given the current political and socio-economic landscape fueled by the promise presented by the National Plan for Higher Education (2001) document (Dominguez-Whitehead, 2011); hence we are currently witnessing a high volume of protests directed at the state within our institutions. Recent student protest actions have proven to be unruly, frequently leading to property damage, academic program cancellations, intimidation of non-protesting students, and injuries (Peté, 2015). Several studies of student protest have been conducted in a variety of fields, including social and political studies (Oxlund (2010); Dominguez-Whitehead (2011)), but no agent-based model (ABM) has been suggested to predict student protests at higher education institutions. The construction of such a model will aid in the forecasting of student protests. Studying how social conflicts emerge from their social context and how they lead to protest remains a centrally important topic in political studies, history, social psychology, and sociology (Lemos, Lopes, & Coelho, 2014a). Moreover, studies that seek to evaluate communities through the framework of complex adaptive systems have increased in the last decade. The most widely adopted approach to modelling complex systems is the bottom-up technique, which represents a fundamental characteristic of ABM (Ormazábal, Borotto, & Astudillo, 2017). A number of studies of conflict or violent collective behavior have shown how ABM, through crowd simulation, can support the development of useful techniques to examine protests (Bhat & Maciejewski, 2006; Epstein, 2002; Lacko et al., 2013). In 2002, Epstein developed a widely adopted classical agent-based computational model of civil violence, and since then, crowd simulation has evolved. For example, Epstein's model was adopted by, among others, Lemos, Lopes, and Coelho (2014b) and Kim and Hanneman (2011). The Agent-Based Modeling Simulation (ABMS) approach is ideal when modeling a complex scenario, for example, studying the behavior of actual protest participants, which involves the interaction of heterogeneous agents (Pires, 2014). This study aims to design, implement, and simulate a theoretically grounded ABM that predicts the emergence of student protests, in order to gain an in-depth understanding of the macro-level behavioral dynamics of a complex student protest system at Public Higher Education Institutions (PHEI) in South Africa. The proposed model will assist in identifying micro-level behavioral patterns which may result in protest action. Understanding this emergent behavior will assist university management in several ways, such as identifying behavioral patterns that may result in a protest and subsequently preventing damage to property, intimidation of staff and non-protesting students, and possible injuries (Peté, 2015). The structure of this article is as follows: In the second section, an overview of the agent-based modeling method is provided. Then, the article presents an investigation of ABMs of social conflicts proposed by other scholars.
Hypotheses and a conceptual model are then introduced, followed by the description and implementation of the model. The article then presents the findings of the simulation experiment and closes with a conclusion. --- AGENT-BASED MODEL ABM is a modelling paradigm in the early-majority stage of adoption that is gaining popularity in several fields, enabling the modelling of complex dynamic systems such as student protests, artificial financial markets, pedestrian movement, and population dynamics (Macal & North, 2008). An agent-based model is normally used as a bottom-up, individual-based approach to simulate heterogeneous and autonomous decision-making agents that use behavioral rules to interact with their artificial world (Kiesling, Günther, Stummer, & Wakolbinger, 2012). ABM can be utilized as a methodology to simulate behavioral patterns which are challenging to model using mathematical equations (Dada & Mendes, 2011). In addition, interactions of the agents within an ABM are represented by a set of behavioral rules, and the emerging behavioral actions or patterns are observed at the macro level. Agents' social interactions may have non-linear influences which can be a challenge to represent using analytical mathematical equations (Lu, 2017). In ABM, social interactions can be categorized as micro-level, meso-level, and macro-level. The micro-level represents agent-to-agent or agent-to-environment interactions at a local level (Démare, Bertelle, Dutot, & Lévêque, 2017). At the micro-level, students can exchange their discrepancies in resource allocation (levels of inequality) to formulate their dissatisfaction or grievances. The meso-level represents interactions between agents and their group of conformity (Kiesling et al., 2012). For instance, at the meso-level, student activists can influence a group of students that are linked to their political group, or a student can influence friends that are linked to their social group, as well as neighbors, during protest recruitment. The macro-level represents the emergence of overall patterns of the system at a global level (Démare et al., 2017). For instance, the macro-level represents the overall emerged behavioral pattern used to provide model users with insights about student protests under conditions which are systematically represented within an environment. Co-evolution and the emergence of social structures are the outcomes of agent-based model simulation (ABMS), which are mostly used in predicting human behaviour. System patterns accumulated from a series of changing behavioral rules applied by heterogeneous agents can lead to co-evolving or emergent phenomena (Narasimhan, Roberts, Xenitidou, & Gilbert, 2017). Emergence represents the overall simulated system behavioral patterns resulting from the continuous interactions of agents at the individual level over time (Narasimhan et al., 2017). Co-evolving social structures are caused by peer-to-peer behavioral influence among agents. The agent-based modelling approach represents a process that changes over a period of time (Dulac-Arnold et al., 2020). ABM contributes to the development of knowledge and understanding of significant processes and methods that enable complex adaptive systems to be analysed. ABMs are used to formulate theories rather than to regenerate the exact occurrence of events or to provide accurate predictive models (Dulac-Arnold et al., 2020). ABM aids in exploring the significance of several parameters under certain artificial world settings and various agent rule sets (Dulac-Arnold et al., 2020).
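To make the bottom-up mechanism described above concrete, the following minimal Python sketch shows how a simple micro-level rule, applied repeatedly by heterogeneous agents, produces a macro-level pattern that can only be observed globally. The agent attributes, activation threshold, and random neighbor sampling are illustrative assumptions for this sketch, not the student protest model described later in this article.

```python
import random

class Student:
    def __init__(self):
        self.grievance = random.random()   # heterogeneous micro-level state
        self.active = False

    def step(self, neighbors):
        # Micro-level rule: activate when own grievance plus peer pressure
        # crosses an (assumed) threshold.
        peer_pressure = sum(n.active for n in neighbors) / max(len(neighbors), 1)
        self.active = self.grievance + 0.5 * peer_pressure > 0.8

def run(num_agents=100, steps=50):
    agents = [Student() for _ in range(num_agents)]
    for _ in range(steps):
        for a in agents:
            neighbors = random.sample(agents, 5)   # stand-in for a lattice neighborhood
            a.step(neighbors)
    # Macro-level observation: the fraction of active (protesting) agents.
    return sum(a.active for a in agents) / num_agents

print(run())
```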
Agent-based models of social conflicts, such as student protests, can help researchers gain a better understanding of how social gatherings aimed at addressing social inequities mobilize a large number of people (Lemos, 2018). Furthermore, a social conflict model may show how individuals choose to organize a collective group based on their perceived grievances (Kim & Hanneman, 2011; Lemos, 2018). The social network aspect of ABM, in particular, may be utilized to better understand the function of newly emerging technologies like social media as a valuable protest mobilization tool (Filippov, Yureskul, & Petrov, 2020; Waldherr & Wijermans, 2017). Furthermore, protest simulation models may show how a network of social and political groups in deprived communities can be used to mobilize people in order to address accumulated grievance (Akhremenko, Yureskul, & Petrov, 2019). --- SOCIAL CONFLICTS Social conflict is an appropriate theory for investigating the causes of protests within societies. Social conflict studies present how conflicts emerge, their variations, and their societal effects. In the study of Lemos, Coelho, and Lopes (2017), social conflict is defined as a confrontation or social dynamic of imposing will to produce a desired end. For example, conflict can take the form of a protest or a disagreement between a few individuals, or go as far as an international conflict, such as a world war (Pires, 2014). Inequality in the distribution of resources and power has been identified as a main source of conflicts between population groups (Pedersen, 2004). Furthermore, social conflict generates beliefs, unity, and sympathy among interacting individuals or groups within society (Marx, 2020). Weber suggested two classifications of social action: instrumental rationality, in which objectives are attained through rationally chosen actions, and value-oriented rationality, whereby values are attained through conscious belief (religious, ethnic, political, and so on) (Fukuda, 2018). Three social stratification dimensions, namely economic class, status group, and political party, have been proposed by other researchers (Protsch & Solga, 2016). These stratification dimensions show significant variations in how people behave or think. When it comes to protests, status groups offer chances for compassion and the mobilization of others who share the same grievance, while political parties offer forums that encourage the action of those who feel wronged (Bischof, 2012). To conceptually study the interaction between class and race, as well as to further examine the patterns of protest waves, Kim and Hanneman (2011) suggested an ABM to model the crowd dynamics of workers' protests. According to Kim and Hanneman (2011), the motivation to protest is driven by a grievance, which is symbolized by relative deprivation brought on by wage disparities, a perception of an increased risk of being detained, and group affiliations, represented by ethnic and cultural identities. The simulation experiment results in Kim and Hanneman (2011) show that wage disparities (or grievances) have a significant impact on how frequently protests occur. However, the analysis of Kim and Hanneman (2011) only considers the neighborhood social contacts of agents, without integrating influences from network structures and activists. Similarly, the study of Ormazábal et al.
(2017) developed an ABM to explore the dynamics of social conflicts by extending Epstein (2002)'s ABM of revolution, incorporating money distribution to condition each individual's level of grievance. The study of Ormazábal et al. (2017) aimed to evaluate the effect of inequalities in the distribution of resources on social mobilizations. Furthermore, Ormazábal et al. (2017) ascertained that protest outbursts and their strength are significantly reduced when the dispersion of resources is even. However, unlike the workers' protest model of Kim and Hanneman (2011), the ABM of Ormazábal et al. (2017) lacks factors that explore ethnic and cultural identities as well as social and political network structures. Furthermore, the study of Fonoberova, Mezić, Mezić, Hogg, and Gravel (2019) presented an ABM of civil violence to explore the effect of non-neighbourhood links on protest dynamics when varying cop agents' density and network degree. The ABM proposed by Fonoberova et al. (2019) does not integrate the effect of friendship links, political ties, or the influence of empathy towards aggrieved neighbors. Pires and Crooks (2017) adopted a geosimulation approach by proposing an ABM that incorporates social interaction over a spatial environment. The model proposed by Pires and Crooks (2017) seeks to explore the effect of the local interactions of heterogeneous individuals and their environmental characteristics, constructed from empirical data on an actual geospatial landscape and on the population and daily activities of Kibera residents, on the emergence of riots. Rumor was utilized as an external factor to trigger the riots. The simulation results in Pires and Crooks (2017) indicate that youth are more attracted to rioting behaviour, which is evidence that their model captures the right dynamics, and further provides support to existing empirical evidence and theories of riots. Although this model captures adequate social interactions of protesting civilians and simulates more realistic dynamics of crowd patterns, it does not explore the effect of risk as a participation cost. In addition, their model does not incorporate network structures to explore the effect of social influence, political influence, and sympathy. The goal of this research was to construct an ABM of student protests that builds on Epstein's ideas. The model incorporates the hardship resulting from resource distribution disparities, which is computed as a function of RD. Furthermore, the model investigates the impact of integrating a sympathetic effect which arises from a Moore neighborhood network graph, a political effect denoted by directed activist links, and social influence indicated by undirected friendship ties. --- HYPOTHESES The conceptual framework of the model that predicts student protest is presented in Figure 1. In the social conflict context, the first latent variable proposed is the grievance factor, which can influence the willingness of students to participate in protest action (Epstein, 2002; Raphiri, Lall, & Chiyangwa, 2022). Klandermans, Roefs, and Olivier (2001) applied the theory of relative deprivation to investigate grievance development in the South African context, where grievance was defined as the effect of objective conditions (such as perceived living conditions) and subjective conditions (in relation to others over time). A number of policies have been implemented to address inequalities caused by apartheid policies in South Africa, but these inequalities are still prevalent.
Ortiz and Burke (2016) argued that, for governments to be legitimized, they need to address the grievances of protesters, such as by reducing inequalities within society. Lemieux, Kearns, Asal, and Walsh (2017) theorized that high grievance will (a) increase the probability of any form of participation in political activities in general, (b) increase interest in participating in protest action, and (c) increase participation in conflict activities. Thus, the first hypothesis considering the conceptual model in Figure 1 reads as follows: H1: Grievances resulting from discrepancies in living conditions have a positive influence on students' decision to participate in protest action. At a personal level, the perceived risk of punishment has been identified as influencing people's decision to engage in protest. Other scholars have operationalized risk as consequences rather than the probability that a potential punishment will be imposed on the individual (Lemieux et al., 2017). The study of Lemieux et al. (2017) further theorizes that when risk is high: (a) it reduces the probability of any form of participation in political activities in general, (b) it reduces interest in participating in protest, and (c) it reduces participation in conflict activities. Therefore, H2: Perceived net risk resulting from risk aversion and the probability of being suspended negatively influences students' decision to participate in protest activities. When an individual is integrated into a network structure, the likelihood that one will be targeted with messages during the social movement mobilization process increases. For instance, van Stekelenburg and Klandermans (2013) emphasized that individuals with friendship links or acquaintances that are actively involved in a protest action are more likely to participate in social movement actions than others. Therefore, H3: Students with friendship ties to actively protesting students develop social influence which positively contributes to protest occurrence. Historically, activists heavily relied on mass media to stay connected to a larger public, but nowadays they have established their own platforms on Twitter and Facebook for protest mobilization and interactions with their followers (Poell & Van Dijck, 2015). For example, the Arab Spring revolutions, the Occupy protests, and the #FeesMustFall protests managed to attract a larger number of people because activists' influential sizes were higher as a result of social media. Therefore, H4: Student activists with a larger influential size develop a high level of political influence, which positively contributes to protest occurrence. van Stekelenburg and Klandermans (2013) argued that an individual's first step in protest participation is guided by consensus mobilization, whereby the general society is divided into people who sympathize with the cause and others who do not. The more effective consensus mobilization has been, the bigger the number of sympathizers a protest can attract. Therefore, H5: Sympathetic students who are exposed to other active students develop a sympathy influence which positively contributes toward the decision to participate in protest action. --- DESCRIPTION OF STUDENT PROTEST MODEL The "Overview, Design concepts, and Details" (ODD) protocol (Grimm et al., 2010) is used to describe the proposed student protest simulation model. The ODD protocol provides mechanisms to standardize model description and to make it more understandable and repeatable. --- Purpose The purpose of the model presented in this study was to simulate the effects of grievance on the possibility of student protests.
It used the factors of relative deprivation (RD), net risk, social influence, political influence, and sympathy influence on the likelihood of students engaging in protest action. To achieve this purpose, the classical model of civil violence proposed by Epstein (2002) was extended to incorporate the new decision parameters. For the proposed model to be simulated and analyzed, several equations are assumed to integrate the state variables, behaviour, and scale of model entities. --- Entities, State Variables and Scales Figure 2 presents an abstract flow diagram that outlines the behavioural states of the student agents. The go submodel repeatedly executes: 1) the action rules for both student and officer breeds, 2) the move procedure, 3) decrementing the suspend term of laid-off students and the suspend delay of rested officers, 4) incrementing the time step, and 5) displaying and updating the macro or behavioural state of model entities using plots or interface controls provided by NetLogo. --- Model Details --- Initialization The setup procedure initializes the fixed parameters and properties of each entity used in the model. The development environment allows model observers to use sliders and switches to adjust global scales and categorical parameter inputs. The hypothetical environment simulated in this study does not denote any particular terrain, as there is no integration of spatial data to feature mountains, buildings, or forests that guide agents' movement. The lattice only portrays an abstract virtual environment focusing on the interaction of students. Table 1 provides the attributes of the student and officer breeds used in parameter sweeping when conducting the simulation experiments. --- Submodels In this study, five main submodels are implemented to compute relative deprivation (grievance), net risk, and the network influences classified as social, political, and sympathy. These fundamental submodels form the decision variables in predicting the likelihood of protest occurrence. --- a. Grievance The ABM of student protest represents grievance as a function of a person's intuitive relative deprivation (RD). This model used consumable resources as a proxy for relative deprivation: consumable resources were viewed as a measure of a person's ability to access resources, whereby each unit of consumable resource reflected a distinct set of goods that were consumed. Each student's possible range of resources in relation to a certain reference group (neighbors, social links, political groups) within the society is $[0, x^*]$, where $x^*$ denotes the maximum resource available in the society. For each student $i$, $[0, x_i]$ represents the range of resources accessible to $i$, whereas $[x_i, x^*]$ denotes the range of resources inaccessible to $i$. The relative frequency of students with resource accessibility above $x$ can be computed as $1 - F(x)$, where $F(x) = \int_0^x f(y)\,dy$ is the cumulative resource distribution. Grievance, as the feeling of RD of student $i$ at time $t$, is defined in (1): $RD_{i,t}(x) = \int_{x_i}^{x^*} \left[ 1 - F(y) \right] dy$. (1)
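As a numerical illustration of Equation (1), the following Python sketch approximates a student's relative deprivation from an empirical resource distribution. The uniform resource draw and the grid-based integration are assumptions made for this example only.

```python
import numpy as np

# Numerical sketch of Equation (1): grievance as relative deprivation,
# RD_i = integral from x_i to x* of (1 - F(y)) dy, using an empirical CDF.
rng = np.random.default_rng(42)
resources = rng.uniform(0, 100, size=1000)   # assumed resource endowments
x_star = resources.max()

def empirical_cdf(y, sample):
    return np.mean(sample <= y)

def relative_deprivation(x_i, sample, n_grid=500):
    ys = np.linspace(x_i, x_star, n_grid)
    integrand = 1.0 - np.array([empirical_cdf(y, sample) for y in ys])
    return np.trapz(integrand, ys)

# A poorly resourced student accumulates a larger grievance than a rich one.
print(relative_deprivation(10.0, resources))   # high RD
print(relative_deprivation(90.0, resources))   # low RD
```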
--- b. Perceived Net Risk Students decide to participate in protest action if the grievance outweighs the cost or consequences. For student $i$ at time $t$, the cost of participation was quantified by the perceived net risk ($NR_{i,t}$) of being suspended from academic activities. $NR_{i,t}$ is a function of the layoff probability estimate, risk aversion (RA), and the maximum layoff term ($J$). The layoff probability estimate ($P_{i,t}$) of student $i$ at time $t$ is calculated based on (2): $P_{i,t} = 1 - \exp\left( -k \cdot \frac{V_{O,t}}{V_{AS,t}} \right)$, (2) where the constant $k = 2.3$, and $V_{AS,t}$ and $V_{O,t}$ represent the local numbers of active students and law enforcement officers, respectively, at time $t$ within the vision radius. The vision radius is determined by the number of patches, based on the Moore (or indirect) neighborhood network (Klancnik, Ficko, Balic, & Pahole, 2015), which each student can see and which can host other students and officers. Risk aversion (RA) is a uniformly distributed value ranging from 0 to 1 which is heterogeneous and remained fixed for each student during the simulation experiments, whereas the maximum layoff term ($J$) was a fixed and homogeneous value across all students. The net risk ($NR_{i,t}$) of student $i$ at time $t$ is represented by (3): $NR_{i,t} = RA_i \cdot P_{i,t} \cdot J$. (3) --- c. Social Influence The propagated opinion among aggrieved individuals in the integrated social friendship network (formed by undirected friendship ties) was quantified as the difference between relative deprivation ($RD_{i,t}(x)$) and net risk ($NR_{i,t}$) at time $t$. Social influence ($SInf_{i,t}$) of student $i$ takes the same form as the other network influences, given in (4): $SInf_{i,t} = \sum_{\in ASN_t} w_1 \cdot \left( RD_{i,t}(x) - NR_{i,t} \right)$, (4) where $\in ASN_t$ represents the number of active students over time within the friendship network structure of student $i$, and $w_1$ denotes the global social influence weight, which was constant. --- d. Political Influence The political network structure in the ABM of student protest is modelled in the form of a directed graph $G = (A, E)$, with $A \in \{A_1, \ldots, A_{NUM\_ACTIVISTS}\}$ and $E \in \{E_1, \ldots, E_{POLITICAL\_INFLUENCE\_SIZE}\}$, whereby $A$ and $E$ represent the set of activist nodes, ranging from one to NUM_ACTIVISTS, and the set of directed edges, ranging from one to POLITICAL_INFLUENCE_SIZE, respectively. Each edge represents an ordered pair of nodes $(a, n)$ directed $a \rightarrow n$ (influential opinion is directed from an activist $a$ to a normal student $n$). The constructed political network graph incorporates activists that have a positive out-degree, denoted $\deg^+(a)$, which was defined by POLITICAL_INFLUENCE_SIZE in this model, and a zero in-degree ($\deg^-(a) = 0$). Each activist acts as a source with directed links pointing to a proportion of randomly selected students whose internal property POLITICAL_PARTICIPATION? is equal to TRUE. Activists still maintain their individual undirected friendship ties. A student can be linked to multiple activists. Equation (5) was used to represent political influence ($PInf_{i,t}$) in this model: $PInf_{i,t} = \sum_{\in NAP_i} w_2 \cdot \left( RD_{i,t}(x) - NR_{i,t} \right)$, (5) whereby $\in NAP_i$ represents the number of incoming opinions (quantified as $RD_{i,t}(x)$ minus $NR_{i,t}$) from political sources, which are directed activist links, for student $i$, and $w_2$ denotes the global political influence weight, which was constant. --- e. Sympathy Influence In the constructed ABM of student protest, students can be sympathetic towards other active neighboring students located in their vision radius. For student $i$, the Moore neighbourhood graph is $G = \{(x, y) : |x - x_0| \le r, |y - y_0| \le r\}$, where $(x_0, y_0)$ denotes the student's position and $r$ the vision radius. Sympathy influence ($SyInf_{i,t}$) of student $i$ was calculated using (6): $SyInf_{i,t} = \sum_{\in AV_{i,t}} w_3 \cdot \left( RD_{i,t}(x) - NR_{i,t} \right)$, (6) where, similar to the other network structures, propagated opinions between protesting neighbors were calculated as the difference between relative deprivation ($RD_{i,t}(x)$) and net risk ($NR_{i,t}$) at time $t$, $\in AV_{i,t}$ denotes the set of active students in the neighbourhood graph of student $i$ at time $t$, and $w_3$ denotes the global sympathy influence weight, which was constant.
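A hedged sketch of how these submodels could be assembled into a per-student activation rule is shown below. It follows the Epstein-style threshold logic implied by Equations (1) to (6), but the threshold value, the layoff term J, and the example inputs are illustrative assumptions rather than the paper's NetLogo implementation.

```python
import math

K = 2.3            # constant k from Equation (2)
J = 30             # maximum layoff (suspension) term, homogeneous (assumed value)
THRESHOLD = 0.1    # Epstein-style activation threshold (assumed)

def layoff_probability(officers_in_vision, actives_in_vision):
    """Equation (2): P = 1 - exp(-k * V_O / V_AS)."""
    ratio = officers_in_vision / max(actives_in_vision, 1)
    return 1.0 - math.exp(-K * ratio)

def net_risk(risk_aversion, p_layoff):
    """Equation (3): NR = RA * P * J."""
    return risk_aversion * p_layoff * J

def network_influence(weight, peer_opinions):
    """Equations (4)-(6) share one form: weight * sum of (RD_j - NR_j) over peers."""
    return weight * sum(rd - nr for rd, nr in peer_opinions)

def decides_to_protest(rd, nr, social_inf, political_inf, sympathy_inf):
    """Activate when grievance net of risk, plus the three network influences,
    exceeds the threshold."""
    return (rd - nr) + social_inf + political_inf + sympathy_inf > THRESHOLD

# Example usage with assumed values.
p = layoff_probability(officers_in_vision=1, actives_in_vision=4)
nr = net_risk(risk_aversion=0.2, p_layoff=p)
s = network_influence(0.1, [(0.6, 0.2), (0.5, 0.3)])   # w1 and peer opinions assumed
print(decides_to_protest(rd=0.7, nr=nr, social_inf=s, political_inf=0.0, sympathy_inf=0.0))
```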
--- MODEL IMPLEMENTATION The model constructed in this study was coded using NetLogo 6.1, an ABM integrated development environment (IDE) (Wilensky, 1999). Simulation experiments were carried out in NetLogo BehaviorSpace. In NetLogo, a model is implemented by simply dragging and dropping components onto the IDE's graphical user interface and by writing source code in the NetLogo programming language, which uses a simplified, English-like syntax. The NetLogo IDE further includes a model documentation tab. The IDE allows developers to implement, simulate, and observe the model. In addition, the IDE provides helpful and easy-to-follow tutorials and documentation materials. BehaviorSpace aids in executing the model in the background and provides model users with a platform to run several scenarios while conducting parameter sweeping and storing the simulated data in a comma-separated values (.csv) file. Figure 3 shows the user interface of the agent-based model of student protest implemented in this study. --- Experiments The model presented in this study uses several combinations of parameters to simulate the various conditions that lead to the emergence of student protest behavior. The fixed global parameter values used in the model during the simulation experiments were adopted from social conflict research (Epstein, 2002; Kim & Hanneman, 2011; Moro, 2016; Ormazábal et al., 2017; Raphiri et al., 2022). Quick runs of the model were done using NetLogo's graphical user interface for debugging and testing the code, as well as for instant visualization of simple system dynamics. Systematic simulation experiments were carried out using NetLogo BehaviorSpace to enable parameter sweeping while storing model output in a csv file for further in-depth analysis. This systematic simulation performed by BehaviorSpace also helped improve model execution time. Each experimental scenario with a given combination of parameters was repeated 10 times for 250 time steps. Table 2 shows the frequencies of the varied parameters used during the simulation experiments. The effect of grievance as a function of relative deprivation was computed by varying the INEQUALITY_LEVEL parameter, while SUSPEND_DELAY was utilized to reduce the risk of active students being suspended by law enforcement officers. The variation of MAXFRIENDS was used to calculate the social influence, whereas political influence was based on the variation of NUM_OF_ACTIVISTS and POLITICAL_INFLUENCE_SIZE. Sympathy influence was activated using the SYMPATHY_ACTIVATION? parameter. Table 2 contains the factors used in the experiment design. With the other parameters kept constant, the main focus was to evaluate how the inequality level, number of activists, activist's influential size, number of friendship ties, suspend delay, and sympathy affect the dynamics of student protests. Running the simulation experiments was challenging and time consuming due to resource constraints, such as the lack of a high-performance computing desktop. A desktop computer with an eight-core processor was used to run the model. As illustrated by NetLogo's BehaviorSpace tool in Figure 4, simulating ten experiments with the same combination of parameters took an average time of between 25 and 60 hours. Simulation experiments that took more time to complete were encountered when running scenarios with network structures containing larger average degrees.
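The following Python sketch mirrors the BehaviorSpace workflow described above: it sweeps combinations of the Table 2 parameters, repeats each scenario 10 times, and appends the outcomes to a csv file for later analysis. Here run_model() is a hypothetical placeholder for a headless NetLogo run, and the swept values are assumptions for illustration.

```python
import csv
import itertools

def run_model(params, steps=250):
    ...  # placeholder: would invoke the NetLogo model headlessly for 250 ticks
    return {"protest_occurred": 0}

sweep = {
    "INEQUALITY_LEVEL": [0.2, 0.5, 0.8],          # all values assumed
    "NUM_OF_ACTIVISTS": [0, 5, 10],
    "POLITICAL_INFLUENCE_SIZE": [10, 50],
    "MAXFRIENDS": [2, 6],
    "SUSPEND_DELAY": [5, 15],
    "SYMPATHY_ACTIVATION?": [True, False],
}

with open("experiments.csv", "w", newline="") as f:
    writer = None
    for combo in itertools.product(*sweep.values()):
        params = dict(zip(sweep.keys(), combo))
        for repetition in range(10):              # 10 repetitions per scenario
            row = {**params, "rep": repetition, **run_model(params)}
            if writer is None:
                writer = csv.DictWriter(f, fieldnames=row.keys())
                writer.writeheader()
            writer.writerow(row)
```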
--- Model Calibration Model calibration was performed in verification, validation, and sensitivity analysis stages. First, iterative programmatic testing was conducted throughout the model implementation phase to ensure that the code was free from syntax and logical errors and behaved as expected (Anderson & Titler, 2014). Model-to-model validation was carried out to ensure that the dynamical patterns of the implemented model correspond to theories presented by Kim and Hanneman (2011)'s ABM of worker protest and other similar computational models when similar parameter values are used. Sensitivity analysis was carried out to explore the dependence of the model output on parameter variations and to evaluate the degree of influence of each input parameter on the observed output (Iooss & Saltelli, 2017). The sensitivity analysis assisted in gaining an in-depth understanding of the various dynamics represented in the implemented model and the robustness of the output to parameter uncertainty. --- Results As illustrated in Figure 5 and Figure 6, in each simulation run a line graph indicating the average grievance (shown in red) and net risk (drawn in green), both calculated from the inequality level and suspend delay, is plotted at each time interval. The dynamics of grievance and net risk over time in the social conflict scenario were recorded with the system decision parameters set to low, medium, and high. High inequality rapidly increased grievances and reduced the risks of protesting, because grievance remained above net risk in most time steps. An increase in the decision factors (i.e., number of activists, maximum activist's influential size, and number of social friendship links) results in a rapid increase in the network influence values (political, social, and sympathy), as shown in Figure 7. An increase in the number of activists, as well as maximizing their influential size, which may be regarded as optimizing mobilization resources, resulted in an increase in the political influence value, especially when students are sympathetic towards one another. As mentioned earlier, the focus of this research was to evaluate how the various factors presented in the proposed conceptual model assist in predicting the emergence of student protests. Therefore, a logistic regression model evaluating the effect of each parameter on the probability of a protest emerging was fitted using Python. As demonstrated in Table 3, all predictor variables in the model were statistically significant, with p-values less than 0.01. The accuracy of the logistic regression classifier on the test set is 0.949. A precision of 0.96 and 0.95, and a recall of 0.98 and 0.89, for the negative and positive classes respectively, can be observed in Figure 8.
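A minimal sketch of the reported analysis follows, assuming the BehaviorSpace output was flattened into a csv file with one row per run: a logistic regression relating the swept parameters to a binary protest-occurrence outcome. The column names are assumptions, no train/test split is shown (unlike the paper), and statsmodels is used here because it reports the coefficients and p-values of the kind summarized in Table 3.

```python
import pandas as pd
import statsmodels.api as sm

# Column names are hypothetical stand-ins for the Table 2 factors;
# boolean flags are assumed to be encoded as 0/1 in the csv.
df = pd.read_csv("experiments.csv")
predictors = ["INEQUALITY_LEVEL", "NUM_OF_ACTIVISTS", "POLITICAL_INFLUENCE_SIZE",
              "MAXFRIENDS", "SUSPEND_DELAY", "SYMPATHY_ACTIVATION?"]
X = sm.add_constant(df[predictors].astype(float))
y = df["protest_occurred"]

result = sm.Logit(y, X).fit()
print(result.summary())                       # coefficients and p-values (cf. Table 3)

accuracy = ((result.predict(X) > 0.5) == y).mean()
print(f"in-sample accuracy: {accuracy:.3f}")  # the paper reports 0.949 on a test set
```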
--- Grievance from Inequality Level The findings of this study show that growing levels of inequality have an impact on students' grievances; the inequality level was statistically significant in predicting the likelihood that students would take part in a protest action (Coef. = 3.4779; p < 0.05). This is consistent with the research of Lemieux et al. (2017), who claimed that a certain degree of resentment tends to enhance the likelihood of an individual participating in political activities like protest actions. This result thus supports hypothesis (H1). --- Perceived Net Risk Based on Suspend Delay The findings also highlighted the significance of the contribution of the perceived suspension delay, which is used to compute net risk (Coef. = -0.8446; p < 0.05), in predicting the likelihood of students engaging in protest behavior. This is in line with the findings of Lemieux et al. (2017), who claimed that when risk reaches a certain level, (1) it reduces the likelihood of engaging in any political activity, (2) it decreases interest in taking part in protest, and (3) it reduces engagement in conflict-related activities. Therefore, this finding supports (H2). --- Social Influence as a Function of Friendship Ties According to the study's findings, bidirectional friendship networks are a statistically significant factor in the likelihood that a student will engage in protest behavior (Coef. = -0.2933; p < 0.05). This finding is further supported by the research of van Stekelenburg and Klandermans (2013), which found that people are more likely to take part in social movement activities if they have friends or acquaintances who are actively engaged in a demonstration. As a consequence, the outcome supports hypothesis (H3). --- Political Influence Based on Number of Activists and Activist's Influential Size The simulation findings show that both components of the political influence calculation were statistically significant determinants of the likelihood that a student would engage in protest behavior (Coef. = -0.2121; p < 0.05 and Coef. = -0.1953; p < 0.05). These results are consistent with earlier work. Political discourses like protest, according to Singh, Kahlon, and Chandel (2019), are largely articulated via several levels of concern, including the spread of political influence and political mobilization. According to Poell and Van Dijck (2015), the widespread use of social media platforms has made it possible for activists to mobilize huge numbers of people for protests, which has increased protest participation. Consequently, this outcome supports (H4). --- Sympathy Influence The findings of this study imply that sympathetic influence contributes statistically significantly to the likelihood that students would engage in protest behavior (Coef. = 3.3129; p < 0.05). According to the research of van Stekelenburg and Klandermans (2013), the initial stage in a person's protest involvement is determined by consensus mobilization, in which the general public is divided into those who support the cause and those who do not. The more supporters a demonstration can gather, the more successful consensus mobilization has been. As a result, this finding supports (H5). --- CONCLUSION The literature review showed that recent student protests have been disruptive, leading to property destruction, academic program cancellations, intimidation of non-protesting students, and injuries, to name a few outcomes. In this study, we adapted Epstein's (2002) ABM of civil violence to the context of student protests.
Simulation experiments demonstrate that inequality level, number of activists, activist's influential size, number of friendship ties, suspend delay, and sympathy are elements that determine the model of civil violence, as evidenced by their statistical significance in the logistic regression model. We found that when these independent variables increased in the various scenarios studied, both the volume of outbursts and the strength of protests increased. The results of this research imply that university administrations and policy-makers should design their risk management strategies and policies to concentrate on understanding the network structures that integrate the student interactions which support the spread of students' grievances. The interception of such channels by policymakers will aid in reducing the disparities in resource distribution, and subsequently lessen the grievances that cause frustration among students. --- FUTURE RESEARCH DIRECTIONS The model created in this study offers a general tool to explore some of the processes that influence the genesis of protest activities, although it is as yet unclear whether the model can be applied to actual events. Future research may thus use empirical data to confirm the model assumptions and build more plausible models that can also be used to forecast student protest events. Additionally, the model developed here included endogenous decision-making factors that influence whether students protest or not. As was shown during the #FeesMustFall and #RhodesMustFall social movements, using exogenous mechanisms to launch messaging, such as tweets about active protest activities, would have a huge influence on student participation in protests. --- Joey Jansen van Vuuren (PhD) heads the research in the Department of Computer Science, Faculty of Information and Communications Technology at Tshwane University of Technology and is the Vice Chair of the IFIP (International Federation for Information Processing) Working Group 9.10. She is also one of the South African coordinators of the BRICS Integrated Thematic Group Computer Science and Information Security (ITG-CSIS). Her research focuses on cybersecurity, education, government, policy, and culture. She was the coordinator of the South African Cybersecurity Centre of Innovation for the Council for Scientific and Industrial Research (CSIR), which initiated several cybersecurity government initiatives in South Africa. The centre also focused on the promotion of research collaboration, cybersecurity education, and the exchange of cyber threats. She was also involved in the development of cybercrime strategies for the South African Police Service. Previously, as the Research Group Leader for Cyber Defence at the CSIR, she gave strategic research direction for the research conducted for the South African National Defence Force and government sectors on cyber defence. She has spent over 25 years in academia and research, and she has published various journal papers, conference papers, and book chapters on cybersecurity governance. She has presented in numerous forums, including national and international conferences, at some of which she was an invited keynote speaker. Bertie Buitendag is a passionate researcher in ICT education, knowledge management systems, and organizational knowledge sharing and collaborative practices. His research interests also include Living Labs, knowledge networking, data exchange technologies, and the semantic web.
Efforts aimed at the abandonment of Female Genital Mutilation/Cutting (FGM/C) in the communities where it is deeply rooted have extensively considered and addressed women's perceptions of the issue, leaving those of men barely acknowledged. Although the practice is generally confined to the secret world of women, this does not mean that men cannot be influential. Indeed, men can play an important role in prevention. In order to address this gap, and against the background of extensive ethnographic field work, a transversal descriptive study was designed to explore Gambian men's knowledge of and attitudes towards FGM/C, as well as related practices in their family/household. Results show ethnic identity, more than religion, to be the decisive factor shaping how men conceive and value FGM/C. The greatest support for the practice is found among traditionally practicing groups. A substantial proportion of men intend to have it performed on their daughters, although they report low involvement in the decision-making process, with very few taking the final decision alone. Only a minority are aware of the health consequences of FGM/C, but those who understand its negative impact on the health and well-being of girls and women are quite willing to play a role in its prevention.
Introduction Female Genital Mutilation/Cutting (FGM/C) is defined by the World Health Organization (WHO) [1] as all procedures involving partial or total removal of the external female genitalia, or injury to the female genital organs, for nontherapeutic reasons. The WHO classifies the practice into four types: type I (clitoridectomy), type II (excision), and type III (infibulation) are ordered according to a growing level of severity, while type IV comprises all other harmful procedures performed on the female genitalia for nonmedical purposes (e.g., pricking, piercing, incising, scraping, and cauterization). According to the WHO's latest data, 140 million women and girls worldwide are thought to have been subjected to the practice, and 3 million girls are at risk of having it performed every year. FGM/C constitutes an extreme form of discrimination and a violation of the human rights of girls and women, with health consequences now acknowledged and documented. In the short term, the practice can result in shock, haemorrhage, infections, and psychological consequences, while in the long term it can lead to chronic pain, infections, keloids, fibrosis, primary infertility, an increase in delivery complications, and psychological sequelae/trauma [2][3][4][5][6][7]. FGM/C has been practiced for centuries, acquiring a deep cultural meaning. Under a shared vision of the world in which life is understood in cycles, FGM/C has been linked in many societies with the moment at which a girl becomes a woman. During the rite of passage to adulthood, within a ceremony kept secret from outsiders, especially men, initiates were taught about the cultural and social wealth of their community, as well as their roles and responsibilities as women, mothers, and wives, establishing gender power relationships [8]. The physical cutting would be the proof that a girl had been granted all the necessary teachings that make her worthy of belonging to her community. FGM/C had become a synonym of cleanliness, femininity, beauty, and purity, a way to protect virginity, guarantee "family honour," and ensure marriageability [9, 10]. In The Gambia, the overall prevalence is estimated at 76.3% [11], meaning that it affects approximately 3 out of 4 women. However, this global figure obviates important discrepancies between regions and ethnic groups, as shown in Tables 1 and 2. Its impact on health has been assessed in two clinical studies conducted in-country by the first author of the present paper, which revealed that 1 out of 3 girls and women presented injuries as a consequence of the practice [12] and that the risk of complications during delivery and for the newborn increased 4.5 times for women with FGM/C [13]. Whilst these girls and women will need specific medical care for decades to come, prevention is an urgent step. However, strategies need to be carefully designed in order to respect the deep cultural value of the practice within the communities where it is performed. Although traditionally the practice was part of the rite of passage to womanhood among certain ethnic groups, as extensively described in ethnographic research conducted by the first author of this paper [8], over the past generation several changes have been occurring. In a recent study, Shell-Duncan et al. [14] found that the physical cutting is increasingly becoming divorced from the traditional ritual.
FGM/C is no longer a condition to ensure marriageability, but mainly a way to facilitate entry into a social network and gain access to social support and resources, with peer pressure playing a major role in its perpetuation. In order to gather evidence to inform prevention strategies, many studies have focused on women's perceptions of the practice, but much is still unknown about the role played by men in its perpetuation. However, their perception of the "secret world of women" might bring important elements to understanding the context in which the practice occurs, as well as enlighten effective ways to involve them in prevention. What lies under their support for the practice? Do they establish a parallelism with male circumcision, the cutting off of the penis's foreskin (prepuce)? Indeed, in all the societies where FGM/C is found, male circumcision is also performed [15], sometimes linked to the rite of passage to adulthood as a keystone component of the socialization process. It has a similar hygienic and aesthetic meaning and an analogous power to preserve ethnic and gender identities [2, 3, 8, 16]. A deep situation analysis on FGM/C conducted in The Gambia in 1999 [17] revealed that some respondents established a parallelism between the two practices. Since Islam endorses male circumcision as an acceptable practice and makes no distinction between genders, some would argue that female circumcision is also prescribed. Acknowledging this gap, a new line of research is now emerging, interested in exploring how men position themselves on the matter, with the objective of assessing their potential inclusion in preventive actions and programmes. The results obtained so far have shown different, and sometimes contradictory, levels of involvement and support towards FGM/C that seem to be influenced by sociodemographic variables such as ethnicity and religion [18][19][20]. Others have highlighted that men and women blame each other for the continuation of the practice and position themselves as victims [21]. In a recent study conducted in The Gambia with health care professionals [22], it was discovered that FGM/C found higher support among men. While women would give more strength to the deep cultural roots of the tradition, men seemed to privilege a moral perspective, prioritizing the view that the practice is mandated by religion and attenuates women's sexual feelings, contributing to family honour. This study intends to contribute to this field of research by exploring the knowledge and attitudes of Gambian men towards FGM/C, as well as practices in their family and household. It is expected to help increase the understanding of the social environment embedding the practice, in order to inform prevention strategies that might successfully accelerate its abandonment. --- Materials and Methods --- Design of the Study. A transversal descriptive study was designed with the main objective of assessing the knowledge and attitudes of Gambian men on FGM/C, as well as related practices in their family/household, exploring possible associations with sociodemographic characteristics. A secondary objective was to empower and promote knowledge ownership among the native population, through a strategy designed to build capacity on FGM/C and social research skills. For this reason, the study was integrated into the Practicum of Community Medicine of the School for Enrolled Community Health Nurses and Midwives (ECHN/M) at Mansakonko, Lower River Region.
Students were given the responsibility for data collection, under the supervision of their tutors and trainers from Wassu Gambia Kafo (WGK), the non-governmental organization that supported the study. To ensure the accuracy of this process, students received specific training in social research skills from a team consisting of a medical anthropologist and ECHN/M tutors. Furthermore, prior to their involvement in this study, students had already been trained in FGM/C identification, management, and prevention, as their school is one of the health schools that has integrated FGM/C into its academic curriculum, an initiative of WGK. The survey was implemented through questionnaires administered face to face. Taking into consideration the sensitivity of the topic, it was considered that the best strategy to avoid resistance was to administer the questionnaires in the communities where these students were doing their practicum, and in their home villages. In this way, it was ensured that (1) they were known and respected; (2) they shared the same cultural background as the interviewees; and (3) they were able to speak the local language, which contributed to creating an environment of trust conducive to conducting the interviews. The selection of the communities where the practicum was conducted was the responsibility of ECHN/M tutors. As a consequence of this strategy, the survey was implemented in three regions of the country: Lower River Region, North Bank Region, and West Coast Region. According to the 2003 Census, the population in the first two regions is predominantly rural (approximately 80%), while in West Coast Region it is mainly urban (60%) [23]. As stated in Table 1, the FGM/C prevalence rates in these regions are 90.6%, 49.2%, and 84.5%, respectively [11]. --- Research Population. The overall sample is composed of 993 men. The study intended to capture men with heterogeneous profiles in terms of occupation, age, ethnicity, religion, and marital status, from both rural and urban areas. Because this study was integrated into a strategy to build the capacity of students and tutors in social research, a quota sampling method was considered the most feasible to apply. Each student was requested to administer the questionnaire to 30 men. --- KAP Questionnaire. The data collection tool was a questionnaire with nineteen closed-ended questions, designed to gather information on men's knowledge and attitudes with regard to FGM/C, related practices in their families/households, and sociodemographic data. The questionnaire was developed by a researcher and medical anthropologist, having as background former ethnographic studies conducted in the country since 1989 [8]. Although the questionnaire was drawn up in English, the official language of The Gambia, students were carefully instructed on how to administer it in local languages whenever needed, in order to ensure an accurate understanding of the questions and of what was meant by "FGM/C." In The Gambia, the practice is generally conceived as equivalent to types I and II as established by the WHO, which are the most prevalent in the country (66.2% and 26.3%, resp. [12]). Each ethnic group has specific words to distinguish the "cutting" and the "sealing" formed during the healing process after cutting and repositioning the labia. --- Variables.
The five socio-demographic variables comprised occupation (agriculture, livestock, and fishery sector; services sector; health professionals; education professionals; students), age, ethnic group (Mandinka, Wolof, Fula, Djola, Serahule, and Serer), religion (Muslim, Christian), and marital status (married, single). The variables analyzed, chosen from the questionnaire, are presented below. Among them, Q1, Q5, Q8, Q13, and Q15 were selected as active variables for the cluster analysis. --- Ethical Aspects. The study was submitted to and approved by The Gambia Government/Medical Research Council Laboratories Joint Ethics Committee (Ref: R08002). The purpose of the research was carefully explained and clarified by the students to the interviewees. The administration of the questionnaires only took place after the respondents' signature or thumb print on an informed consent form that was kept under the custody of WGK. The identity of the participants was protected through rigorous confidentiality. --- Statistical Analysis. A descriptive analysis was carried out on the main variables, and prevalence proportions (%) and 95% confidence intervals (95% CI) were calculated for the overall sample and, in order to detect differences, for each of the socio-demographic variables (occupation, age, ethnic group, religion, and marital status). Prevalence proportions were compared with the Chi-squared test or Fisher's exact test when appropriate. Unspecified data ("other religion" and "other ethnic group") were not taken into account in the analysis. Statistically significant differences were considered at P < 0.05. A multiple correspondence analysis (MCA) and a cluster analysis were conducted to detect underlying groups of individuals according to their knowledge and attitudes regarding FGM/C, as well as related practices in their families/households, as defined by the active variables. The five socio-demographic variables were included as supplementary information, allowing the identification of opposite profiles of men towards the practice. The information was computerized via EpiData. Descriptive univariate and bivariate analyses were conducted with SPSS Version 19, while MCA and cluster analysis were performed with SPAD version 5.6. --- Methodological Issues. The main methodological issue regarding this study has to do with the sensitivity of the topic itself, as it is common to find resistance to talking openly about FGM/C, especially to an outsider. This was addressed by giving Gambian students the responsibility for interviewing people in communities where they were known and respected. Another methodological issue relates to the fact that the Serahule sample size was quite small (only 12 individuals).
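As a worked example of the descriptive statistics described above, the following Python snippet computes a prevalence proportion with a 95% confidence interval and a Chi-squared test of independence. The counts are hypothetical and do not reproduce the study's data.

```python
import numpy as np
from scipy.stats import chi2_contingency
from statsmodels.stats.proportion import proportion_confint

# Prevalence proportion with a 95% CI (normal approximation by default).
n_yes, n = 695, 993                  # hypothetical: men reporting FGM/C at home
prevalence = n_yes / n
ci_low, ci_high = proportion_confint(n_yes, n, alpha=0.05)
print(f"prevalence {prevalence:.1%} (95% CI {ci_low:.1%}-{ci_high:.1%})")

# Chi-squared test comparing the practice across two hypothetical groups.
table = np.array([[300, 120],        # group A: practiced / not practiced
                  [150, 250]])       # group B: practiced / not practiced
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, P = {p:.4f}")   # significant if P < 0.05
```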
--- Results The socio-demographic characteristics of the respondents are shown in Table 5. The sample was composed predominantly of young men, their mean age being 36.5 years, with Muslim affiliation (96.2%). The majority were married (74.4%) and worked in agriculture, livestock, and fishery (51.3%) or in the services sector (20.6%). However, the sample also included education and health care professionals (7.8% and 7.0%, resp.) and a few students (7.6%). With regard to ethnicity, 41.2% were Mandinka, 19.9% Wolof, 17.6% Fula, 9.7% Djola, and 1.2% Serahule. The prevalence proportions and 95% CI of knowledge, attitudes, and practices, according to socio-demographic variables, are presented in Table 3. FGM/C appears in this study as a widespread practice, with a prevalence rate (70.0%) not far from the most recent official data (76.3%). A total of 61.8% of men embrace its continuation, and 60.7% intend to have it performed on their daughters in the future. Although FGM/C is mainly performed by families affiliated with Islam (72.5% versus 27.3% of Christians, P < 0.05), prevalence proportions differ amongst Muslims with different ethnic backgrounds. With statistically significant differences, the traditionally practicing groups (Mandinka, Djola, Fula, and Serahule) are the ones reporting the highest prevalence rates in their families/households, expressing the highest support for the continuation of the practice and the strongest willingness to have it performed on their daughters. Also with statistically significant differences, almost 60% of Mandinka consider FGM/C to be equivalent to male circumcision, a parallelism that is established by 47.3% of Djola, 43.8% of Fula, and 33.3% of Serahule. Whilst 75% of Serahule and 72.8% of Mandinka believe that the practice is mandated by Islam, only 56.0% of Fula and 36.4% of Djola do so. Serer and Wolof, who are also Muslims but traditionally nonpracticing groups, neither establish a connection between the practice and Islam nor acknowledge a parallelism between FGM/C and male circumcision; indeed, around 95% of Wolof and 90% of Serer deny it (P < 0.05). Interestingly, though not statistically significant, men over 60 years old establish the relation between FGM/C and Islam and its equivalence with male circumcision in a higher percentage than other age groups. In the overall sample, almost 72.0% of men do not know that FGM/C has a negative impact on the health and wellbeing of girls and women. The highest awareness is found among Wolof men (47.9%, P < 0.05) and among health and education professionals (48.0% and 46.3%, P < 0.05). Although not a statistically significant trend, awareness of FGM/C health consequences decreases with age, with the lowest levels found among men over 60 years old (15.4%). Also interesting, but not statistically significant, is the finding that men between 31 and 45, who have the highest awareness of FGM/C health consequences, are also the least supportive of the practice, with a lower intention to have it performed on their daughters and the highest willingness to see men intervening in its prevention. The negative impact that the practice has on the health and welfare of girls and women is, indeed, the major reason given by the 72.9% of the overall sample who are against its perpetuation. This study also reveals that 39.8% of girls are subjected to FGM/C before completing their fourth year. This is mainly reported by men between 31 and 45, whilst men above 60 report the practice as occurring when the girl has already completed 4 years of age (67.4%, P < 0.05). A minority of men take part in this decision-making process, especially if they are not married (married 39.3%, single 21.1%, P < 0.05). Only 8.0% take the final decision on subjecting their daughters to the practice, and 6.2% join their wives in this decision (Table 4). FGM/C appears mainly as a women's choice (75.8%) or a decision of other relatives and community members (10.0%). Since there is no statistically significant association with the socio-demographic variables, this information is not shown in Table 4. --- Cluster Analysis.
The cluster analysis revealed statistically significant differences by ethnicity and religious affiliation, allowing the identification of two profiles of respondents, identified as Clusters 1 and 2 (Tables 5 and 6, and Figure 1). Cluster 1 is composed of those men who declare, at rates statistically significantly higher than in the overall sample, that FGM/C is practiced in their families/households (99.7% versus 67.4%); that they are involved in the decision-making process (37.0% versus 25.6%); that they intend to have it performed on their own daughters (92.5% versus 60.9%); that they are not aware of the practice having health consequences (82.9% versus 71.7%); and that they do not think men have a role to play in its prevention (68.8% versus 48.4%). This cluster comprises almost two-thirds of the overall sample (65.1%) and overrepresents men of Mandinka, Fula, Serahule, and Djola ethnic origins, with Muslim affiliation. Cluster 2 comprises the remaining one-third of the total sample and is composed of those men whose knowledge, attitudes, and practices are opposite to those expressed by men in Cluster 1. This group gathers those who report, at rates statistically significantly higher than in the overall sample, that FGM/C is not practiced in their families/households (80.7% versus 30.0%); that they are not involved in the decision-making process (87.2% versus 74.4%); that they do not intend to have it performed on their daughters (96.0% versus 39.1%); that they are aware that the practice has health consequences (47.0% versus 28.3%); and that they believe men have a role to play in its prevention (86.1% versus 51.6%). In this group, Wolof and Serer ethnic origins are overrepresented, together with the Christian religion (7.5% versus 3.5%). --- Discussion. Seen through men's eyes, the secret world of women remains embedded in concepts shaped by culture and ethnic tradition and influenced by religion. All the ethnic groups included in this study follow Islam, but each of them establishes a different relation between FGM/C and religion. While those from traditionally practicing groups tend to consider the practice a religious injunction, or "Sunna", finding in this a justification for its continuation, almost all those from traditionally nonpracticing groups deny that the practice is an obligation of Islam. FGM/C is, in fact, a pre-Islamic practice. Even within traditionally practicing groups, perceptions diverge substantially. The Mandinka ground FGM/C in its perceived mandatory character in Islam and readily consider it equivalent to male circumcision. The Serahule share the same religious conviction but do not establish the equivalence with the male practice, in contrast to the Djola, for whom religion does not seem to be significant but the parallel with male circumcision is more evident. Although these groups share the same nationality and religion, their ethnic identities are built on different cultural values and social norms, which are the decisive factors shaping men's conception of the practice. Ethnicity's power to influence knowledge, attitudes, and practices with regard to FGM/C had already been shown in a previous study conducted with Gambian health care professionals by the same authors [22]. Amongst older men, FGM/C is seen as a practice mandated by religion, equivalent to male circumcision, and without health consequences. But a window of opportunity for change is found among the younger generations.
Men between 31 and 45 are the least supportive of the practice, have the lowest intention to have it performed on their daughters and the highest willingness to play a role in its prevention, and are also the group most aware of FGM/C health consequences. Can this increased knowledge be linked to these less supportive attitudes, and can a prevention strategy be built on this foundation? This and other findings from this study suggest that it can. Indeed, among the group of men who are against the continuation of the practice, health consequences are presented as the major reason to stop it. Health and education professionals, who are the most aware of FGM/C health consequences, show more willingness to participate in prevention. The fact that the majority of men are not active in the decision-making process concerning the practice does not mean that they do not have the power to influence it. The finding that 60.7% of men intend to have FGM/C performed on their daughters in the future, but that only 34.8% actually participate in the decision-making process and only 14.2% take the final decision, alone (8.0%) or with their wives (6.2%), suggests that decision-making is not a simple one-way process. Indeed, fieldwork evidence reveals that women who decide that their daughters will not undergo the practice face not only peer pressure but also feelings of helplessness when not actively supported by their husbands, as well as by other influential male leaders from their communities. In a patriarchal society, although men might not actively participate in the FGM/C decision-making process, they are still decision-makers. The finding that decisions concerning FGM/C can be made by multiple actors, including women, men, relatives, and community members, corroborates the results of Shell-Duncan et al. in a study recently conducted in The Gambia and Senegal [14]. These authors explain that the multiplicity of decision makers and peer pressure among women make individuals less able to act upon their intentions to carry on with the practice or not. In the secret world of women, avoiding discrimination is a powerful motive to perpetuate FGM/C, and this social force must be acknowledged. However, men's power to influence it should also not be disregarded. Over the past generation, FGM/C practices have changed in many ways in Gambian societies. The group ritual in the "bush" is giving way to individual ceremonies behind closed doors [14]. Field experience reveals that the traditional knife, once used to perform FGM/C on a number of girls without being sterilized, is being replaced with individual razor blades as a result of HIV/AIDS awareness campaigns. Similarly, traditional herbs and charms, used to manage bleeding, relieve pain, and accelerate the healing process, are being complemented with modern drugs. Nowadays, some babies and girls are taken to health facilities when health complications cannot be managed at the community level, in contrast to the secrecy that characterized the seclusion period in the past. Sometimes FGM/C is even performed by health professionals themselves: medicalization is already a reality in the country [22]. Finally, the age at which the practice is performed is declining: our study reveals that over 40% of Gambian girls are subjected to FGM/C before their fourth birthday. This reduction may be explained by the belief that wounds heal faster and pain is less severe for babies than for older girls.
This paper suggests that new actors can be called on stage to play an important role in FGM/C prevention. May knowledge be shared and synergies be built, in order to promote positive changes that lead to the abandonment of the practice. --- Conclusions. Although sharing the same religious beliefs, men from traditionally practicing and traditionally nonpracticing groups see the relation between the practice and Islam in different ways and have diverse perceptions of its parallel with male circumcision. Differences are also significant within traditionally practicing groups, showing how ethnic identities are the decisive factors shaping how men conceive of and value FGM/C. The decision whether to subject a girl to the practice appears as the result of a complex process involving multiple actors. Although few men are active participants in this process, their intention to have FGM/C performed on their daughters is likely to influence it. Support for the practice is highly dependent on ethnic identity, being much higher among men from traditionally practicing groups. However, awareness of FGM/C health complications is likely to positively influence men's willingness to play a role in its prevention. In this line of thought, a strategy that acknowledges men's ethnic background and focuses on increasing their understanding of FGM/C's negative impact on health might well be an effective way to influence and promote a positive change in the secret world of women.
Social psychology is a branch of psychology that primarily studies human behavior in society. Conformity and obedience, two significant concepts in social psychology, are frequently manifested in real-life situations; the circumstances that emerged during COVID-19 clearly illustrate both phenomena, and people still experience these two situations often. Starting from their definitions and the classic experiments behind them, this paper studies these two phenomena and aims to investigate the causes of conformity and obedience. The methodology of this paper is literature review and theoretical analysis. This paper finds that such phenomena (obedience and social conformity) can be attributed to several factors: cultural influence, internet influence, workplace pressure, educational influence, and the existence of authority.
Introduction. This paper focuses on obedience and conformity, two classical concepts in psychology. Obedience refers to the behavior whereby people, to avoid being punished or blamed, tend to obey others' instructions and accept their thoughts. Conformity refers to the behavior whereby people, living in a group and a society, are inclined to change or even forgo their own views to cater to others and to the group. Evidently, people cannot always maintain their own thinking patterns or express their own ideas. Many have lost the ability to think individually and critically on account of conformity and obedience, as can be seen in many examples: during the COVID-19 period, many people were credulous about experts' ideas and thus stopped thinking for themselves; on the internet, people are easily swayed when encountering new arguments. Therefore, driven by numerous such cases, this paper aims to analyze why people are so credulous and why they tend to believe the opinions of others rather than think independently. In other words, this paper seeks the causes of conformity and obedience in society. The main methodology used in this article is literature review. At the end of the paper, several ways to ameliorate these situations are proposed, which, hopefully, could be of help to the relevant individuals. --- 2. Conformity and Obedience. --- Case During COVID-19 in Chinese Societies. During COVID-19, it was palpable that people were remarkably susceptible to others' opinions. Hearing more and more news and learning about COVID through other information sources, they lived in fear of infection. They therefore expected others, especially so-called experts, to offer them solutions and ideas, and they were very credulous, rarely distinguishing among the various arguments the experts gave. This was typical on social media. To be more specific, when social media platforms such as TikTok posted news and information about COVID, thousands of commenters could be seen below each video praising the ideas. For instance, at the end of 2022, when COVID was prevalent and influential throughout Chinese societies, many so-called specialists contended that COVID infection would not carry any symptoms. However, such an argument could hardly bear further verification: evidently, most Chinese people felt uncomfortable as symptoms developed, and many caught colds, coughs, fevers, and other derived illnesses. Accordingly, people's earlier convictions collapsed, and they then complained that the problems should be ascribed to the experts who had misled them. Although some individuals had once cast doubt on these experts' arguments, in the end it turned out that almost everyone had believed them. --- Conformity and the Asch Effect. Conformity is the tendency of an individual to align their attitudes, beliefs, and behaviors with those of others within a group [1][2]. Conformity can take the form of overt social pressure or of subtler, unconscious influence. It is powerful irrespective of its form: it can alter a person's thoughts, change a person's behaviors, and even resolve conflicts within a group. Conformity affords individuals comfort and dignity when individual judgment is subordinated to the collective. The Asch effect illustrates conformity.
In the Asch experiment, subjects were told that they were participating in a study of perceptual judgment, during which they were asked which of three lines shown was the longest [3]. The answers were obvious, and the judgment should have required little thought; nevertheless, the other participants, who had been instructed by the experimenter to give incorrect answers, all indicated that the longest line was the one that was actually second in length. Influenced by these other "participants," the real subject, who had already decided on the correct choice, eventually gave the same wrong answer in a bid to go along with the rest. In this situation, the subject knew the uniform and incorrect responses of the other members of the group before making his own response, and as a result he might give the same answer as the others to cater to the group. This is defined as the Asch effect. --- Obedience and Milgram's Obedience Experiment. Obedience refers to changing one's behavior on account of the command of an authority figure, sometimes while under the supervision of others. Stanley Milgram, an American social psychologist, conducted the famous Milgram obedience experiment in the 1960s at Yale University; its purpose was to test people's obedience under the command of authority [4]. The setup was simple. Two individuals came to the lab to participate in a "memory experiment." One of them, unknown to the other, was the experimenter's confederate and, through a rigged drawing, became the so-called victim, or "learner." The other person, assigned the role of "teacher," watched while a set of electrodes was attached to the learner. Then, Milgram gave the unsuspecting teacher a small 45-volt sample shock to demonstrate what it would feel like and to enhance the experiment's credibility. The "teachers" were told that such transient shocks would not cause the experimental subject any permanent harm. The "teachers" were then asked to test the learner's ability to recall previously learned material; if the learner made an error in answering the questions, the teacher was required to shock him. The stated purpose of the experiment was to assess the effects of punishment on retention and learning; the real question was whether the teachers would stop shocking the learners as the voltage gradually increased. The teacher was placed before a shock generator with a series of switches, each labeled to indicate a voltage, arranged from 15 to 450 volts in 15-volt increments [5][6]. Before the experiment, Yale University psychiatrists made predictions: they expected that the most common reaction would be for teachers to ultimately refuse to press the shock button as the voltage increased; moreover, they predicted that the majority of teachers, approximately 68%, would not go beyond 150 V, that 4% would reach 300 V, and that only one in a million would press 450 V. Nevertheless, the result was astonishing. Over 60% of participants ("teachers") continued to obey the authority's persistent instructions all the way to the last button of the electric shock generator. Experts in human behavior and laypersons alike had underestimated the teachers' degree of obedience.
--- Reason Analysis. Though conformity and obedience are observed under different circumstances, the two concepts point to one phenomenon: people lose some of their own thoughts, or hide them and cannot express them openly, gradually losing part of their ability to think individually. --- Culture Impact. Cultural differences may affect people's thinking patterns. An individualistic culture, such as that of most Western countries, is more likely to accommodate people who think for themselves. Such a culture emphasizes personal values, morality, political philosophy, and individual thinking patterns. People's own thoughts and actions are therefore more readily accepted, valued, and propagated, and people in those countries tend to hold that a person's interest matters more than that of a society or a country. Individuals tend to reject authority figures from other groups, societies, or governments. Consequently, those individuals are less likely to display conformity and obedience. On the contrary, a collectivist culture, found in countries such as China and Cuba, tends to emphasize collective interest: a person's values should be consistent with society's values, and group thinking is therefore more important. In such a culture, individuals' rights and thinking are constrained by groups, and people may therefore lose some of the ability to form and embody their own thoughts. Collectivism holds that individuals belong to society, that individual rights are subject to group power, and that individual interests are subordinated to collective, class, and national interests. The more collectivist the society, the more likely its members are to act in unison. In the Asch paradigm discussed above, people with a more collectivist orientation show more conforming behavior. Also, when asked to do something, people in collectivist cultures are more likely to obey the rules, whereas in individualistic cultures breaches of social rules are far more frequent, which can be a troublesome phenomenon. --- Internet Impact. When a certain event arises, netizens collide with one another owing to their different motives, positions, and interests, forming a turbulent pattern of public opinion. The formation of public opinion is a process of mutual interaction; however, the members of the group are independent individuals with complex and variable psychology, which poses challenges to the command of authority [7]. Behind reversals of network public opinion, a group consciousness exists that drives the development of public opinion. As the internet is used more frequently, more netizens encounter different arguments online, and conformity arises as a result. Netizens may come to doubt group morality, that is, the totality of the conscious responses of various classes, strata, or interest communities to social and economic conditions, the political system, and cultural life. Congruence of ideas is the hallmark of group thinking on the internet: individuals appear to agree with one single idea, often proposed by some so-called expert, as in the case mentioned earlier concerning the symptoms of COVID-19 infection. Hence, as long as no sufficiently strong counterforce takes a clear form, the prevailing idea is generally accepted by the many, and the minority who hold different views may not share them openly on the internet.
Members thereby acquire the illusion that everyone is in agreement, which is itself a kind of conformity. For example, the emergence of many rumors can be attributed to conformity on the internet: although some people hold different ideas, they tend to believe the thoughts of the majority, and sometimes the majority blurts out unattributable, unreasonable rumors. --- Working Pressure. Working pressure is ubiquitous in contemporary society, and, compared with earlier generations, people today are more confined to the workplace [7]. In the workplace, the tendency of people nowadays to become more critical, thoughtful, and independent increases the likelihood that employers will dismiss such employees. Therefore, although people today have more individual thinking and self-awareness than past generations, which leads them to think more in the workplace, the tendency discussed above can still result in conformity: for example, during decision-making in a company, most employees will still withhold their ideas and avoid sharing them if someone with greater authority and reputation holds a different view. Also, most workers are required to obey their managers' or leaders' instructions in order to avoid any possible conflicts. --- Cognitive Patterns in Childhood. The pattern of education shapes children's lives and futures, and education strongly determines the ways individuals think. Owing to a relative lack of social experience and of the ability to identify, judge, and choose, a blind herd mentality easily develops, especially among teenagers. Blind conformity causes teenagers to blindly imitate the words and behaviors of others, which leads to weak self-awareness, poor independence, and a deficiency of creativity. Teenagers are in a period of growth and ability development, and in such a period they are especially likely to experience conformity. This may be due to the following reasons [8]. First of all, teenagers lack a deep and well-rounded understanding of the world; hence, they do not yet possess the capability to think deeply and independently, so rather than do something wrong and be criticized, they tend to believe others' words and thoughts. Cognitive dissonance can also arise during this period. Secondly, teenagers, like all human beings, are social, so their life, study, work, and so on take place within a collective. In the process of development, growing teenagers involuntarily feel fearful when deviating from groups. Palpably, the fear of deviating from the group is another reason why teenagers choose to conform to others. --- The Figure of Authority. The existence of authority can also explain obedience. It is entirely understandable that people conform to authority. The authority may be a concrete entity, for example, the government: people are obliged to obey the laws, rules, and other legislation enforced by the government, regardless of their rationality. Authority may also be virtual. Knowledge, especially knowledge written in books, is considered beyond doubt; at least, most people will not spontaneously or overtly cast doubt on authoritative knowledge, irrespective of their own thoughts. Another consideration demonstrates this: when obeying authority, people usually obtain benefits; in other words, at the very least they are not punished.
On the contrary, when resisting authority, people may well be punished: consider the penalties and fines imposed for breaches of social rules and legislation. In operant-conditioning terms, obedience is thus maintained by negative reinforcement: the bad outcome, the punishment, is avoided when people obey the rules, and punishment follows when they do not. --- Approaches to Be More Creative and Critical. Here are a few suggestions and approaches this paper proposes that could make individuals more critical and reduce conformity and obedience. For starters, people should be encouraged to come up with different ideas, and this depends on culture: living in a culture that promotes individual thinking, people can be more creative. Secondly, teenagers should be encouraged to think more individually. Being a mere test taker is not a good outcome; instead, it is essential that children be equipped with the ability to think critically, and that they be brave enough to do so. Thirdly, the government should be more accommodating and accept, or at least not reject, different but reasonable ideas. It could create channels that allow citizens to offer distinct ideas and thoughts, thereby helping the whole society develop a better atmosphere and make more progress. --- Conclusion. This paper focuses on the causes of conformity and obedience, beginning with an analysis of their definitions. The reasons leading to the two situations can be ascribed to culture, the internet, contemporary pressure, cognitive patterns in childhood, and authority figures. The paper contributes new ideas and discussions to social psychology. Nevertheless, there are some limitations and flaws in the overall content and methodology of this paper; in particular, the article lacks first-hand investigation, such as interviews and questionnaires of individuals. In light of the continuing progress made in social psychology by preeminent researchers in the relevant spheres, it is expected that future studies will focus on the comparison of obedience and conformity.
In 2020, global injustice took center stage during the uprising of the Black Lives Matter movement and other social movements. Activists are calling attention to longstanding disparities in health outcomes and an urgent need for justice. Given the global socio-political moment, how can health researchers draw on current critical theory and social movements to create structures for equitable outcomes in health research and practice? Here, we demonstrate principles for effective health research and social justice work that build on community-engaged approaches by weaving critical Indigenous approaches into structural project designs. Our project, "Health Resilience among American Indians in Arizona", brought new and seasoned researchers together to collect and analyze data on healthcare providers' knowledge concerning American Indian health and well-being. Four years after the conclusion of the project, the team developed a post-project self-assessment to investigate the lasting impacts of project participation. In this communication, we discuss the principles of defining and measuring capacity building together. This work responds to the call from Indigenous scholars and community leaders to build an internal narrative of change. While we do not present the full instrument, we discuss building a strong foundation using the principles of engagement for planning and implementing justice and change.
Introduction. The complexity of global and local crises is immense; no single solution can resolve urgent, health-related problems. The 2020 uprising of social movements like Black Lives Matter, which began after Minneapolis police officer Derek Chauvin murdered George Floyd, brings to light many years of longstanding inequalities in life and health. Indigenous peoples have also faced health disparities and threats to their well-being and lives at higher rates than their nonindigenous counterparts. The relationships between social justice and health are not new to people who live with these threats, nor have these relationships gone unexplored by social scientists and public health researchers. Public health researchers and social scientists have turned to community-engaged approaches to address unequal health outcomes. These approaches have well-documented success in advancing health through multiple forms of knowledge [1,2]. Engaged approaches are strongest when they incorporate methods for collapsing divides between communities and institutions, specifically when they include people who have more or less economic and political power. Indigenous leaders suggest a need for research designs that are developed "with" instead of "on" people, in ways that provide opportunities for "counter-storytelling" [3]. It is in this confluence of approaches that new narratives of impact emerge. This communication presents the principles behind the development of an internal measure of capacity building, in order to discuss a foundation for justice in engaged research that can contribute to narratives of change. --- Community-Engaged and Indigenous Approaches. Community-based participatory research (CBPR) and other engaged approaches share the goal of collapsing divides between researchers and the researched by equally incorporating different skills and forms of knowledge. These approaches differ from those that exclusively center professional research skills [4][5][6][7]. However, the underlying principles of engagement do not always result in equitable outcomes of project research and practice. Even when working toward justice, the benefits of health-focused projects are sometimes unequally weighted. For example, people who already hold leadership positions in hospitals, health clinics, and schools, as well as academic researchers, may obtain more measurable benefits from grant-funded projects than community researchers who bring valuable insider knowledge and skills [8,9]. For instance, the evaluation of a project may only measure the impact on people who are considered "beneficiaries" rather than collaborators. A more inclusive look at capacity might reveal that tenure-track university researchers have gained additional grant experience, which advances their scholarship, promotions, publications, and other benefits. The short-sightedness of measuring only one type of capacity building leaves equity in the benefits of funding unexplored. Developing a more inclusive narrative of how all partners benefit from a project allows groups to understand whether benefits are equally weighted [10]. One way of capturing a more inclusive picture of how multiple communities benefit from engagement is to involve all project partners in an evaluative process that incorporates multiple perspectives and reflects on long-term impact, in a way that aligns with and draws on Indigenous approaches to research [11,12].
This shift also places trust building and skill development at the center of project goals rather than considering them side effects of partnership projects. Our goal in developing a collaborative process for capacity building was, in part, to understand more fully what capacity building looked like for all members of the team instead of focusing only on members of a "community" outside of leadership. We present the layered definition of capacity building that our group developed together in the Results section. The authors of this article include members of the original research and post-project evaluation teams. Here, we share principles and strategies to continue the conversation on how to measure and interpret capacity building through a process of narrative creation, and we expand our discussion of the principles used to create a process fully engaged with a research team and with community-based and Indigenous theories of change. --- Context. The parent project for the development of the evaluation was called "Health Resilience among American Indians in Arizona" (hereafter "Health Resilience") and was funded by the National Institutes of Health (NIH) and the National Institute on Minority Health and Health Disparities (NIMHD), under the Center for American Indian Resilience (CAIR). "Health Resilience" built on prior engaged studies to investigate health and patient-provider perceptions. The project leads (a health scholar and a medical anthropologist) developed a strategy to identify and hire community members to join the team as researchers. Past employment and degree status were intentionally not considered in the hiring process, to reflect a community-engaged strategy [13]. Community researchers became paid employees, beginning with a multiday intensive training session and continuing into data collection, analysis, implementation, and dissemination. The "Health Resilience" project team included community and academic researchers of different tribal affiliations, ages, genders, and life experiences. Data collection included semi-structured interviews, focus groups, and Wellness Mapping activities [14]. Researchers obtained permissions through American Indian-led local organizations and a university institutional review board (IRB). The team also gained permission from the Navajo Nation Human Research and Review Board, the Hopi Tribe, and the Indian Health Service. Once the multiyear project concluded, the team analyzed data and wrote about, presented, and continued "Health Resilience" in different ways. --- Building a New Instrument Together. Researchers built capacity by creating their own instrument based on their experiences with the project and the questions they had about lasting change after the conclusion of "Health Resilience". This emergent model reflected a process similar to "counter-storytelling" [3] as an epistemological approach to building knowledge. Indigenous researchers have discussed how to make research and evaluation "culturally responsive", an approach called culturally responsive Indigenous evaluation (CRIE) [15]. This differs from a research strategy that applies a vetted evaluation instrument to a local site. In the case of "Health Resilience", we specifically created the process to define and understand capacity and relationship building while developing an internal narrative.
We did not use a CRIE-specific research design, though we did develop our instrument through a similarly emergent process focused on capacity and relationship building, in line with Indigenous approaches to research. Understanding why the group chose to develop a narrative requires replacing a traditional research focus with one steeped in community engagement and the values of Indigenous-focused research. A recent systematic review found that CBPR projects in Indigenous contexts yielded improved research and capacity; at the same time, it identified a need for projects that improve rigor by defining research questions in partnership [16]. Over half of the projects reviewed relied exclusively on researcher-defined questions that had not been developed with community member involvement. Research with Indigenous peoples has been identified as an area where existing evaluation tools are lacking, and where there are opportunities to use principles of engagement to create a new narrative rather than relying on an existing one [12]. Community-, culture-, and language-focused programs tend to show improvements in health, relationship building, and the sustainability of change [17]. In one example, a community-engaged approach was used to develop a "healing model of care" for Indigenous people by building crucial partnerships [18]. This work led to dynamic partnerships that contributed methodologically and in practice to evidence-based models of care. In another example, a research partnership with a Native American community developed procedures for measuring CBPR, including a focus on measuring "level of participant involvement" and "community voice" for use in assessing the trust and "trustworthiness" of an engaged and participatory project [5]. Indigenous researchers highlight the importance of building a narrative for evaluation together [5]. The development of the assessment allowed for an organic structure open to emergent narratives of change [19,20]. The team positioned "community" as a series of overlapping groups including academic researchers, and highlighted the need to "integrate knowledge and action for all partners", as suggested for community engagement in tribal contexts [21]. In our team, there was no definitive line between community and academic partners, in that some community members had, or have since obtained, positions affiliated with the only university in the region, while others had full-time appointments there. Over half of the researchers were Indigenous or American Indian, with different tribal affiliations. This framework, with its broad understanding of community beyond the academic/non-academic divide, allowed team members to rethink how people work together to challenge and shift power structures in campus-community partnerships. --- Defining "Communities" in Engaged Research. A collaboration that crosses institutional, tribal, social, and other boundaries relies on trust, transparency, and tangible reciprocity to function well. Achieving these principles requires project designs and measures that include different communities as both targets of health programming and recipients of the benefits of community-engaged work [3,22].
Community-engaged and Indigenous-focused approaches to research attempt to decolonize research and methods by softening the traditional categories of researchers and the researched and by positioning people who may not have academic degrees as "cultural experts" on their neighborhoods and social spaces [23][24][25]. These approaches recognize the benefits of including people from outside institutional settings: for instance, noninstitutional team members learn new skills like team-building, and institutional team members learn how to collaborate with and learn from people in the communities of study. By limiting the definition of "community" to people outside of academia and clinical settings, researchers can fail to appreciate how researchers themselves constitute a community, or a series of overlapping communities. Project leadership at health clinics, hospitals, universities, or other entities constitutes a set of communities in the same way that neighborhoods encompass a set of overlapping communities. Community-engaged research offers an opportunity to examine benefits and capacity changes over time in multiple communities, including those who already occupy positions of power. Ongoing collaborative evaluation helps to ensure that outcomes reach through barriers to support equity where it is needed most. The development of evaluation in this setting rests on current challenges to empirical models of data collection and analysis raised by Indigenous researchers, and on foundations that shift the researcher-researched roles to develop new narratives [24,26]. Decolonizing the implicit bias of power in definitions of "community" in research is necessary for the health and justice of the communities involved in research activities [26]. --- Project Assessment in Relation to Parent Project. Methods for the parent project were designed collaboratively with a group of researchers. The parent project grew out of a series of community-engaged projects, which had raised questions about patient-provider communication and American Indian resilience. Upon funding, the project leads developed a hiring strategy to recruit and select researchers from the communities of study who were skilled at listening and critical thinking, regardless of education or work experience. Once the team came together, there was an intensive, multiday, collaborative training. Intensive research for the parent project lasted 6-8 weeks, with additional weeks for transcription and ongoing analysis. The post-project activity described here happened long after the conclusion of the parent project. At the conclusion of the project, members of the team continued to stay in contact and work with one another in different capacities. We were curious about how the project had impacted all of us, and wanted to ask ourselves and each other about the impacts that stayed with us. Existing evaluation models help groups explore and understand community coalition functioning and capacity building [10,27]. In addition, the creation of the process presented an opportunity for group members to continue their work together in the development of an instrument and the dissemination of results. Engaged project approaches and designs include ongoing, iterative evaluation embedded within the processes of research and implementation [10]. Development of the evaluation was part of the primary collaborative training that the team led and participated in together. Leadership was provided by one of the project principal investigators (PIs).
This PI is not an American Indian person, although the other two PIs are. She designed and developed the project based on learnings from Indigenous scholars and practitioners, and on experience developing best practices for community engagement with partners outside of academia. The process of development and evaluation, described step by step below, occurred on the foundation of a project structure set in a communicative environment. 1. During research and analysis, team members discussed doing a post-project evaluation together. No one person drove this process; instead, it was part of the ongoing discussions during analysis meetings. 2. After the project ended, several members of the group stayed in touch through professional and friendship connections. One project lead and the person who had served as research specialist discussed checking in with the group to determine the remaining interest in evaluating the long-term impacts of the project. 3. A project lead sent an inquiry via email to the team. All but one team member responded that they would be interested in participating; the team was unable to locate the remaining member. Everyone was invited to participate in the discussion, which was possible because of the trust established during the parent project. 4. The group circulated an inquiry asking people to create questions they would like to have answered about the lasting impacts of the project and post-project reflections. All team members responded to the email with ideas. 5. An outside researcher organized the questions into an online survey and analyzed the results. This decision was intentional, so that the organization of the questions would be done by someone who had not been involved in the original project and therefore did not have a heavy investment in project outcomes. Only the outside researcher had access to the raw data, to analyze the results and share them with team members. 6. All members of the team, with the exception of two people who were not available, participated in the development and writing of the manuscript. --- Materials and Methods. Methods specific to the assessment included project partners reconnecting to create an evaluation instrument to assess capacity at an individual level and to evaluate lasting impacts. The project fostered ongoing engagement and capacity building for team members outside of and within university settings. These relationships provide opportunities for employment, raises or promotions, entrance into graduate programs, new partnerships, new funding proposals, and other tangible opportunities. Working together on equal ground to develop the evaluation instrument demonstrated our ability to work together professionally in the years after the completion of the research. The final questions and answers reflected our ability to quickly pick up where we had left off at the end of the project, and reflected the use of our relationships and skills to complete the evaluation. This process was evaluative in practice and in outcomes. --- Results. The most salient outcome of this process was the development of our own way of thinking through and defining lasting capacity building. Capacity building has been defined elsewhere as community-level outcomes resulting from leveraging a community's individual and organizational resources toward addressing collective problems, drawing on resources, relationships, and leadership [28].
Approaches to community engagement can add skills, knowledge, experiences, new partnerships, and a breadth of experience that deepen individual- and community-level knowledge and experience. Once a community builds on existing skills to increase capacity, many outcomes can reach beyond a project and transform into new partnerships, projects, grants, relationships, and wellness [29]. We drew on these definitions to develop an operationalized concept of capacity building that reflects increases in skills, knowledge, and the ability to design and perform related projects; new and/or improved skills that lead to different employment and educational opportunities; new generative relationships of trust; and new knowledge that leads to outcomes beyond the stated project goals. Capacity building includes human relations skills within a collaborative working team, recognition of project timelines and priorities, and a respectful communication style. Our view of capacity building began with the understanding that it can impact people on an individual level (e.g., a person obtaining a higher-paying job), on a community level (a community leveraging partnerships to bring resources to a neighborhood), and even on a larger, policy level (new partnerships that lead to local, state, and federal policy change). Capacity building through engagement can add to the skill set and opportunity landscape of community members who may not hold positions of power or have degrees, as well as of people who already hold leadership and research positions, by providing a learning experience that leads to better and more holistic relationships between individuals and organizations as trust and experience grow. Our perspective on capacity building reaches beyond the idea that capacity is owned and distributed by people in positions of power. Certainly, people in positions of power can provide training opportunities and access to resources; however, equally important is the ability of project leads and researchers to learn how to engage with communities outside of these entities to challenge inequality. In light of the focus on power and position in engaged work, scholars and partners critique public health programming that begins with the assumption that researchers hold knowledge and community partners can (and should) learn from them [30,31]. A layered reading of capacity building assumes that differently positioned groups can learn from one another toward a common goal. Researchers and healthcare providers can learn from communities, just as community groups may learn from investigators [30]. --- Discussion. In the burgeoning area of community-engaged work, project teams develop plans based on collaborations between researchers and the researched; however, the relationships between researchers and community partners are not always central to the functioning of these projects. There are numerous reasons for the breakdown of equitable and productive relationships between individuals who already have access to resources before a project begins, including policies that result in unequal pay, structures that involve community members only as recruiters, and other system-level issues. Indeed, in our own work, we have witnessed projects in which inequality between researchers is borne out through unequal employment advancement, lack of transparency in authorship roles in publications, and the weight of ongoing project success falling on people who are already in positions of power.
Partnerships and relationships between people working together in a community-engaged project are central to the ability of projects to positively impact health and well-being [29,32]. Scholars and researchers who work in health knowledge and implementation must consider how engaged practice impacts project team members and communities in multiple ways, and how the benefits of research and practice are distributed. Highlighting capacity building as a focus of research and practice is one way to measure and understand what works, and to identify possible improvements [30]. This is a multifaceted alternative to more popular approaches to program assessment, which typically deal in hierarchical dichotomies such as patient/provider or client/provider, and it cogently answers many critical concerns raised by Indigenous researchers [15,24,26]. In our engaged assessment, we implemented a communal-reflective component of community building [24]. The inclusion of multiple team members demystified the barriers usually constructed between academic cultures, students, and community members, opened a space for exploring intersectionality, and challenged assumed boundaries of power. This established a balance between flexibility and commitment through collaborative efforts bridging professional and social gaps among different community spheres of influence, which brought about a sense of equitable distribution of knowledge and power. Team members developed relationships that continue through collaborative work. These relationships reflect a greater ability to engage in ongoing work based on the relational aspects of the research, qualities that Indigenous researchers define as key principles in understanding and conducting research [31,32]. --- Conclusions. We recommend that projects aiming to engage multiple communities and to reduce unexamined boundaries of power create a group process of instrument development and analysis. This activity could be implemented annually for multiyear projects, and again after the conclusion of the project, to explore and measure who benefits from research. It would enable groups to adjust structures to meet the needs of the different and overlapping communities participating in research and practice, and it would help to integrate into practice a framework that erodes the assumed barrier between a community of need and others. This would be a summative process that could cause community engagement practices to become more deeply woven into present and future project benefits and keep partnerships accountable in areas where benefits could be more equitably distributed. --- Conflicts of Interest: The authors declare no conflict of interest.
Context: Sex education/family life education (FLE) has been one of the most controversial issues in Indian society. Due to the increasing incidence of HIV/AIDS, RTIs/STIs, and teenage pregnancies, there is a rising need to impart sex education. However, introducing sex education at the school level has always received a mixed response from various segments of Indian society. We attempt to understand the expectations and experiences of youth regarding family life education in India by analysing data from the District Level Household and Facility Survey (DLHS-3: 2007-08) and the Youth Study in India. We used descriptive methods to analyse the extent of access to FLE and its socio-demographic patterning among Indian youth. We found a substantial gap between the proportion of youth who perceived sex education to be important and those who actually received it, revealing a considerable unmet need for FLE. Youth who received FLE were relatively more aware of reproductive health issues than their counterparts. The majority of Indian youth, irrespective of age and sex, favoured the introduction of FLE at the school level, preferably from standard 8th onwards. The challenge now is to develop a culturally sensitive FLE curriculum acceptable to all sections of society.
Introduction. Sex is a very sensitive subject, and public discussion of sexual matters is considered taboo in Indian society. Given this context, introducing sex education at the school level has always attracted objections and apprehensions from many quarters. Family life education (FLE), or sex education, refers to a broad programme designed to impart knowledge and training regarding the values, attitudes, and practices affecting family relationships [1,2,3,4,5,6]. It aims to develop the qualities and attitudes on which successful family life depends. The real purpose behind family life/sex education is to support the transition of a male child into manhood and of a female child into womanhood. Education that provides knowledge of the physical, social, moral, behavioural, and psychological changes and developments during puberty is termed adolescent family life education. It teaches adolescents, within a social context, about the roles of boys and girls in family and society and about their responsibilities and attitudes towards each other. Many psychologists believe that sex education begins at an early age and continues throughout an individual's life. The purpose of sex education should be to facilitate the best possible integration between the physical, emotional, and mental aspects of the personality, and the best possible integration between individuals and groups. Sex education also imparts essential information about conception, contraception, and sexually transmitted diseases. It is a continuous process of developing attitudes, values, and understanding regarding all situations and relationships in which people play roles as males or females [7]. The major objectives of family life/sex education (FLE) can be broadly described as follows: 1) to develop emotionally stable children and adolescents who feel sufficiently secure and adequate to make decisions regarding their conduct without being carried away by their emotions; 2) to provide sound knowledge not only of the physical aspects of sexual behaviour but also of its psychological and sociological aspects, so that sexual experience will be viewed as a part of the total personality of the individual; 3) to develop attitudes and standards of conduct which will ensure that young people and adults determine their sexual and other behaviour by considering its long-range effects on their own personal development, the good of other individuals, and the welfare of society as a whole [7]. Beyond biological specifics, sex education should also cover social and moral behaviour and proper attitudes and values towards sex, love, family life, and interpersonal relations in society. Due to the growing incidence of HIV/AIDS, RTIs/STIs, and teenage pregnancies, there is a need to impart sex education to youth. The problem of over-population also demands family life education, with family planning as a priority, as many young people are about to be married and should be aware of the responsibilities they will have. A study on child abuse in India, conducted by the Ministry of Women and Child Development, reports that 53 percent of boys and 47 percent of girls surveyed faced some form of sexual abuse [8]. Therefore, family life education might help the vulnerable young population become aware of their sexual rights and empower them to protect themselves from any undesired act of violence, sexual abuse, or molestation.
India's National Population Policy also reiterates the need for educating adolescents about the risks of unprotected sex [9]. Furthermore, the provision of family life education might result in multiple benefits for adolescent boys and girls: delayed initiation of sexual activity, reductions in unplanned and early pregnancies and their associated complications, fewer unwanted children, reduced risks of sexual abuse, greater completion of education and later marriages, reduced recourse to abortion and to the consequences of unsafe abortion, and a curb on the spread of sexually transmitted diseases, including HIV [10]. Adolescence (10-19 years) is an age of opportunity, marked by the transition from childhood to adulthood, wherein young people experience substantial physiological changes after puberty but do not instantly assume the various associated roles, privileges, and responsibilities of adulthood. This crucial period in the lives of young people presents a prospect to promote their development and to equip them with appropriate knowledge, attitudes, beliefs, and skills (KABS) to help them successfully navigate the various risks and vulnerabilities of life and realize their full development potential [11]. Current statistics indicate that almost one in every five people on the globe is an adolescent: adolescents comprised 18 percent (1.2 billion) of the world's population in 2009, with 88 percent living in developing countries, particularly in South Asia, East Asia, and the Pacific region [11]. India has the largest adolescent population (243 million), followed by China (207 million), the United States of America (44 million), and Indonesia and Pakistan (41 million each). Interestingly, more than 50 percent of the adolescent population lives in urban areas, a share expected to reach the 70 percent mark by 2050, with the largest increase likely to occur in the developing world. This scenario indicates considerable demographic and socioeconomic challenges, particularly for developing countries like India, in meeting adolescents' specific needs for improved survival and general health conditions, nutritional status, and sexual and reproductive health. Recent literature on adolescents has documented that, although adolescence is a relatively healthy period of life, adolescents often engage in a range of risky and adventurous behaviours that might influence their quality of health and probability of survival in both the short and the long term over the life course [12]. These include early pregnancy, unsafe abortions, sexually transmitted infections (STIs) including HIV, and sexual abuse and violence. Pregnancy-related problems are a leading cause of death among adolescents aged 15-19 years, mainly due to unsafe abortions and pregnancy complications [13]. However, the sexual and reproductive health needs of adolescents and youth are poorly understood and grossly underappreciated, owing to limited scientific evidence compounded by the unpreparedness of the public health system, which may jeopardize initiatives to advance the health and well-being of adolescents. Adolescents and youth in India experience several negative sexual and reproductive health outcomes, such as early and closely spaced pregnancy, unsafe abortions, STIs, HIV/AIDS, and sexual violence, at an alarming scale.
One in every five women aged 15-19 years experiences childbearing before 17 years of age, often closely spaced; the risk of maternal mortality among adolescent mothers was twice as high as among mothers aged 25-39 years [14,15]. Importantly, adolescents and youth comprise 31 percent of the AIDS burden in India [16]. Furthermore, multiple socioeconomic deprivations further increase the magnitude of health problems for adolescents and limit their opportunity to learn and to access appropriate health care services. This scenario calls for a serious and comprehensive public health initiative to provide Indian adolescents and youth with accurate and age-appropriate essential information and skills for a responsible lifestyle, which might help reduce risky sexual behaviour, early pregnancy, HIV/AIDS, STIs, etc. Recently, recognizing the need of the time, the Government of India has experimented with the provision of an Adolescent Education Programme (AEP) to lay the foundation for a responsible lifestyle, including healthy relationships and safe sex habits, among adolescents and youth. However, this initiative attracted mixed reactions from different sections of Indian society. There is scant scientific literature that throws light on the level of knowledge, perceptions and viewpoints on issues related to family life education among Indian adolescents and youth. Are adolescents and youth in India really prepared to understand and benefit from this new experiment? Hence there is a need for studies that scrutinize and critically evaluate the knowledge, attitudes, perceptions, skills and experiences of family life education among Indian adolescents. --- Controversy Over Introducing Sex Education in Schools With the view to generate awareness and inculcate necessary skills among adolescents and youth, a scheme for an adolescent education programme in the school curriculum was promoted by the National AIDS Control Organization (NACO) and the Ministry of Human Resource Development (MHRD), Government of India, which led to a major controversy in 2007. Ardent opponents argued for a ban on starting sex education in schools on the grounds that it corrupts the youth and offends 'Indian values' [17,18]. They contended that it may lead to promiscuity, experimentation and irresponsible sexual behaviour [19]. The critics also suggested that sex education may be indispensable in Western countries, but not in India, which has rich cultural traditions and ethos. On the contrary, proponents argued that conservative ideas have little place in a fast-modernizing society like India, where attitudes towards sex education are changing rapidly. As a fallout of this controversy, several Indian states including Gujarat, Madhya Pradesh, Maharashtra, Karnataka, Kerala, Rajasthan, Chhattisgarh and Goa declared that the course content suggested by MHRD was unacceptable and thus banned the programme [1]. At the same time, attempts to introduce sex education at the school level in India met with opposition from fundamentalists arguing that it may degrade tender minds and destroy the rich family systems in India. Furthermore, some teachers and principals were threatened: 'if you don't stop sex education, neither will you remain in your jobs, nor will your schools survive'. However, the other side of the coin (the case for sex education) reflects a supportive campaign arguing that introducing sex education may in fact help preserve the rich heritage and culture of India.
Adolescents should be scientifically educated about the facts and myths related to sexual activities that may lead to a number of health-related risks. Being vulnerable to the various physical, emotional and psychological transitions, adolescents and youth must have proper knowledge of sex education that may empower them to grow into healthy, productive and responsible adults [20]. Though some politicians and religious leaders have opposed the introduction of sex education in schools, studies have shown that Indian adolescents and youth do not have sufficient information about sexual matters, thereby increasing the possibility of falling prey to various forms of sexual violence. TARSHI (Talking About Reproductive and Sexual Health Issues), a non-governmental organization running a helpline on sexual information, received over 59,000 calls from men seeking information on sexual anatomy and physiology [1]. An analysis of these data showed that 70 percent of the callers were below 30 years of age, while 33 percent were in the age group of 15 to 24 years, which indicates that young people do have the need but lack an adequate, authentic source from which to receive appropriate and correct information in a positive manner. The WHO report (2003) on family life, reproductive health and population education documented that the promotion of family life/sex education has resulted in a delayed age of entering into sexual relationships, a reduced number of partners, increased practice of safer sex and use of contraception, and other positive behaviours [10]. It was further noted that sex education in schools did not encourage young people to have sex at an earlier age; rather, it delays the start of sexual activity and encourages young people to have safer sex. However, both the critics and proponents of introducing family life/sex education in Indian schools propagate an analogous ideology of 'sexual restraint', i.e., delaying the initiation of sexual activity among adolescents before marriage, which may also help to curtail the menace of HIV/AIDS and sexually transmitted diseases and restrict the pace of population growth [21]. India has become the second largest hub of the HIV/AIDS pandemic in the world. The proponents of sex education stressed the need for providing knowledge about HIV/AIDS, teenage pregnancies and sexual health. In a survey of college students conducted by the All India Educational and Vocational Guidance Association, it was reported that 54 percent of males and 42 percent of females did not have adequate knowledge regarding matters of sex [7]. About 30 percent of males and up to 10 percent of females are sexually active during adolescence before marriage, though social attitudes clearly favour cultural norms of premarital chastity [14]. We need to accept the fact that we are living in a complex world leading complicated lives. Preventing access to pornographic movies or erotic content on television shows is hardly feasible, but adding a single chapter to the school curriculum is relatively simple and practical [22]. The mass media, being highly influential, has been part of both the solution and the problem in the area of sex and youth. It has been part of the solution because it has helped bring sexual topics into discussion. Radio and television have been the media that opened doors to deliberations on several topics previously considered taboo.
A survey conducted in Mumbai found that 88 percent of the boys and 58 percent of the girls among college students had received no sex education from their parents, and their sources of information were books, magazines, and youth counsellors [7]. The Internet is the greatest culprit in making pornography easily accessible in recent times. Studies have shown that the vast majority of parents do not accept the responsibility of providing sex education to their sons or daughters [23]. However, another study states that 68 percent of parents believe that they should be the primary sex educators of their children, followed by schools [24]. The apparent stigma attached to any discussion of sex in India is due to the fact that people tend to view sex education in a narrow sense, that is, as the mere explanation of anatomical and biological differences. Ideally, home is the best place for sex education, and the attitudes of parents are of vital importance. When a child senses that the subject is forbidden, he or she becomes more curious about it, which can lead to misleading information if parents feel embarrassed talking about sex with their children. --- Available Evidence The emerging scientific evidence across the globe documents the substantial positive influence of sex education in promoting the overall health and well-being of adolescents and youth. A recent study from Nigeria underscores the paramount significance of providing sexual education to youth, which helped them develop critical thinking and insights on a range of family life/sexual issues like premarital sex and pregnancy, abortion, teacher-student relationships and lesbianism [25]. Another study, in Indonesia, suggests mixed viewpoints on the pros and cons of sex education among youth [26]. Proper information about sexuality should be provided to youth to help them grow healthy and responsible. A study conducted in Venezuela highlighted the importance of imparting sex education to youth, as it helped to prevent adolescent pregnancy, abortion, HIV/AIDS and sexual abuse [27]. A study in India revealed that the majority of school teachers were in favour of imparting sex education to school children [28]. Fourteen years of age was considered the most appropriate age for imparting sex education by 28.6 percent of school teachers. School teachers and doctors were considered the most appropriate persons for providing sex education. Another study from India that attempted to assess the impact of sex education on students noted that doctors were the first choice to impart sex education, followed by school teachers; the preferred mean age to start sex education was 15-16 years [29]. A study conducted in seven private co-educational schools to understand adolescent attitudes towards issues of sex and sexuality in India showed wide lacunae in the knowledge of sex and sexuality matters among adolescents [30]. The majority of mothers believed that discouraging pre-marital intercourse should be the most important objective of sex education, and those who felt that their own sex education was inadequate were in support of providing sex education for their children [31]. Parents should provide sex education to their children in a friendly and informal atmosphere so that children may get rid of the idea that sex is dirty and be aware of their responsibilities [32].
A survey conducted in the Hyderabad and Secunderabad cities of India found that the major sources of information on sexual matters among adolescents were books and films, followed by friends [33]. An important observation emerging from this study is that, in spite of exposure to sex education, many adolescents did not have correct knowledge of the reproduction process. This raises serious questions regarding the content, technique and format of the sex education being imparted in certain institutions, which failed to have the desired impact on adolescents and youth. Family life education for boys and girls at the adolescent stage should be constructive enough to contribute to healthier emotional growth, and it must prepare them to enter into a responsible adulthood [34]. Adolescent boys and girls need sound and correct knowledge about sexual matters. In general, boys' knowledge of sexual issues exceeds that of girls, perhaps because boys try to satisfy their curiosity more readily [23]. It was also found that educated parents help their children clarify their doubts and anxieties about sexual matters in a more realistic way. The findings of the National Family Health Survey show that the majority of men and women in India favour family life education [35]. More than two-thirds of adults approve of teaching school children about the physical changes in their bodies that come with puberty, although there is somewhat less approval of children learning about puberty in the opposite sex. According to the Youth Study in India [36], 83 percent of young men and 81 percent of young women (aged 15-24 years) felt the need to impart family life education. However, there exists a substantial rural-urban differential in the reported need for family life/sex education. Only 23 percent of unmarried women and 17 percent of unmarried men had actually received family life education. --- Youth Ready for Sex Education? Though a few micro-level studies have been conducted in India to examine the knowledge, attitudes and perceptions of adolescents toward family life education, there still exists a huge gap in the appropriate understanding of various issues of family life/sex education and its effective implementation. Since there are both supporters and opponents of introducing sex education in Indian schools, it is most important to understand the perceptions and attitudes of youth on this controversial issue. This study is an attempt in the same direction, using evidence from two nationally representative sample surveys to analyse the perceptions and experiences of family life education among young women in India. These large-scale household surveys [36,37], conducted across India and various parts thereof, provide a unique opportunity for the first time to gauge the attitudes of the younger generation. In this study, the terms sex education, family life education, and adolescent life education are used interchangeably. The present study broadly attempts to gauge the views, perceptions, aspirations and experiences of adolescents and youth regarding family life education. The specific objectives are as follows: 1) To study the perception of family life education (FLE) among adolescents and youth. 2) To examine the experiences of youth who received family life education. 3) To evaluate the awareness of reproductive health (RH) issues among youth and the impact of FLE on their awareness.
--- Ethics Statement The study was based on an anonymous public-use data set with no identifiable information on the survey participants; therefore no ethics statement is required for this work. --- Data and Methods The data for the present analysis come from two major household surveys in India. The District Level Household and Facility Survey (DLHS-3) [37] in 2007-08 is perhaps the largest demographic and health survey ever carried out in India, with a sample size of 720,320 households covering 601 districts of the country. Perceptions and knowledge about family life education, family planning, RTI/STI, HIV/AIDS and reproductive health issues were collected in this survey. About 160,550 unmarried women were interviewed in DLHS-3 using a structured interview schedule. The second survey is the 'Youth in India: Situation and Needs' study conducted in 2006-07 in six Indian states [36]. The main objective of this survey was to gather evidence on key transitions experienced by youth as well as their awareness, attitudes and life choices. The study was conducted in the following selected Indian states: Andhra Pradesh, Bihar, Jharkhand, Maharashtra, Rajasthan and Tamil Nadu. In all, 50,848 married and unmarried young women and men were successfully interviewed from 174,037 sample households. Unmarried men and women as well as married women (15-24 years) were interviewed, whereas the age group for married men was extended to 15-29 years, in this first-ever landmark survey on youth in India. The literature suggests that the attitudes and behaviour of youth are usually influenced by socio-economic, cultural and demographic characteristics. The pertinent socio-economic and demographic characteristics considered in this study include age group (15-19 and 20-24 years), type of residence (rural and urban), religion (Hindu, Muslim, Christian and others), caste (Scheduled Caste, Scheduled Tribe, Other Backward Classes and others), education (non-literate, 1-5 years of schooling, 6-9 years and 10 years or above), economic status of the household as represented by the wealth index, and employment status (not working, agriculture, manual, non-manual). Awareness about contraceptives has been computed based on modern methods (sterilization, pills, condom, IUD, etc.) and traditional methods (rhythm, withdrawal, abstinence, etc.); a schematic sketch of this coding is given below. --- Findings Table 1 presents the composite picture concerning the perceived importance of family life education (FLE) and perceptions regarding the age and school standard at which it should be introduced in India. Nearly four-fifths of unmarried women (15-24 years of age) perceived FLE to be important. DLHS-3 asked women for their opinion on the age and school level at which FLE should be introduced. The majority of women reported that FLE should be provided in the age group 15-17 years (38 percent) and initiated from the 8th to 10th standards (55 percent). Information regarding the preferred sources of FLE among unmarried women who perceived FLE to be important was also collected. The majority of respondents reported that the main source of FLE should be parents (81 percent), followed by teacher/school/college (55 percent), sibling/sister-in-law (50 percent), and friends/peers (30 percent). Health care providers/experts (10 percent), husband/partner (4 percent), and youth club/NGO workers (3 percent) were chosen as other preferred sources of information on FLE among unmarried women in India.
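Returning to the contraceptive-awareness coding described under Data and Methods, the following minimal Python sketch shows one way such binary indicators can be derived from survey responses. The variable names and the yes/no response format are hypothetical illustrations, not the actual DLHS-3 codebook:

```python
# Hypothetical coding of contraceptive awareness; the actual DLHS-3
# variable names and response codes differ.
MODERN_METHODS = ["sterilization", "pill", "condom", "iud"]
TRADITIONAL_METHODS = ["rhythm", "withdrawal", "abstinence"]

def aware_of_any(respondent: dict, methods: list) -> bool:
    """Return True if the respondent reported knowing at least one listed method."""
    return any(respondent.get(method) == "yes" for method in methods)

# Example respondent record (illustrative only).
respondent = {"sterilization": "yes", "pill": "no", "rhythm": "no"}
print(aware_of_any(respondent, MODERN_METHODS))       # True: knows a modern method
print(aware_of_any(respondent, TRADITIONAL_METHODS))  # False: knows no traditional method
```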
Table 2 indicates the proportion of women who actually received FLE and their experiences regarding the same. Around 50 percent of women actually received FLE, the overwhelming majority of them from schools or colleges. The other sources were NGO programmes, youth clubs, government programmes, etc. Among the women who received FLE, the majority reported that the teacher/trainer explained it in an understandable way and that FLE answered or clarified many of their questions. It is important to note that around 40 percent of women felt embarrassed while attending family life/sex education classes. Table 3 presents the percentage of unmarried women aged 15-24 years who perceived FLE to be important, and of those who actually received FLE, by selected demographic and socioeconomic characteristics in India. The prevalence of perceived importance of FLE was relatively high among youth (81 percent) in India. However, only 49 percent of women actually received FLE. Relatively mature unmarried women (20-24 years) residing in urban areas, with more than ten years of education, engaged in non-manual occupations, and coming from better-off families had a higher prevalence of perceived importance of FLE, as well as of receiving FLE, than others. In general, the perceived importance of FLE among youth in India is relatively high, with strong demographic and socioeconomic differentials, while the actual experience of FLE among youth is extremely limited. Knowledge and awareness of reproductive health issues among unmarried women were also collected in DLHS-3. On average, women who received FLE had much better awareness of various reproductive health issues, such as RTI/STI, the possibility of finding out the sex of a baby before birth, and knowledge about reducing the chances of HIV infection, compared with women who did not receive any FLE (Table 4). In general, women who received FLE were relatively more aware of methods of contraception than their counterparts. For instance, among women who received FLE, nearly 98 and 27 percent were aware of any modern and any traditional method of contraception, respectively. These figures decline to 89 and 12 percent, respectively, among women who did not receive FLE. Table 5 illustrates young people's opinions on family life/sex education across men and women, married and unmarried. Around 83 percent of young men and 78 percent of young women felt that it is important to impart family life/sex education to youth. A slightly larger proportion of unmarried youth (84 percent of men and 81 percent of women) than married youth (79 percent of men and 75 percent of women) reported family life/sex education to be important. The majority of young men and women observed that family life/sex education should be provided to adolescents in the age group 15-17 years. Regarding the best person to impart family life/sex education, preferences differed between men and women: the majority of young men reported that the best person to provide FLE would be a teacher, whereas most young women suggested that parents are the ideal persons to provide such education. Around 21 percent of young men and 11 percent of young women reported that friends can be the main source of family life/sex education. Table 6 indicates that nearly 15 percent of young men and 14 percent of young women received family life/sex education, the majority through schools or colleges.
Among those who received formal family life/sex education, the majority felt that FLE answered many of their anxieties and queries and that the teacher/trainer explained the subject well. Twenty-one percent of men and 37 percent of women also reported that they felt embarrassed while attending family life/sex education. This suggests that the curriculum and the method of teaching should be context-specific and culturally sensitive. --- Discussion and Conclusion The present study attempts to unravel the divergent views, perceptions and aspirations of adolescents and youth regarding family life education, its perceived importance, and the potential effects of family life education on an array of reproductive health issues, using nationally representative household sample surveys in India. Young people (10-24 years) constitute about 315 million individuals and represent about 31 percent of India's population. They represent not only India's future in the socio-economic and political realms, but also the nation's ability to harness the demographic dividend. In the course of the transition to adulthood, young people face significant risks related to sexual and reproductive health. Adolescent life education programmes intend to ensure the rights of this large section of adolescents and youth, and to develop them into healthy and responsible members of the family and society. Adolescents in all societies learn their responsibilities towards the family by observing and following the behaviour of others. Due to the rapid social changes occurring all over the world, the young generation faces an enormous challenge in coping with the consequences of attrition in the traditional family system, social life, and values. In this volatile environment, family life education can help adolescents experience a successful transition from childhood to adulthood. One of the most significant findings of the study is that the majority of youth perceived family life education to be important. This highlights that Indian adolescents realize the range of potential health risks and challenges lurking before them and demand the appropriate knowledge, skills and training to lead a responsible and healthy lifestyle. However, the study points out that only half of unmarried women actually received any form of family life education; this critical mismatch between the potential demand for FLE and the apparent lack of provision needs to be addressed. One of the crucial issues that deserves attention relates to the major sources of FLE. The study indicates that the majority of unmarried women who perceived FLE to be important reported parents as the preferred providers of FLE, followed by teacher/school/college, brother/sister/sister-in-law, and friends/peers, respectively. Therefore, it becomes apparent that FLE need not only be part of the formal school curriculum; it should equally be provided in the first place by parents at home, to eliminate the misconceptions, inhibitions and doubts of adolescents on various aspects of family/sex life. The study also indicates that relatives and friends/peers could be important avenues that need to be appropriately tapped to help adolescents learn about the basic issues of family/sex life skills safely and comfortably, whether at home, at school or in the neighbourhood.
Addressing the discourse on the implementation of FLE in the school curriculum in India, several scholars, administrators and politicians have mooted the adverse impact of FLE and how it may denigrate 'rich Indian cultural values' and ethos. However, our findings effectively nullify these apprehensions and convincingly illustrate that, among youth who received FLE, awareness of various reproductive health issues and knowledge of contraceptive methods were far better and more comprehensive than among their counterparts who had no FLE. This further shows that the provision of FLE will benefit not only today's adolescents but many generations to come, by avoiding the menace of RTI/STI, unwanted pregnancies, HIV/AIDS, etc. In the era of globalization and modernization, there still persists a steep socioeconomic divide in the knowledge, attitudes and perceptions of individuals in Indian society. The same holds true with regard to the benefits of FLE. Whether it relates to the perceived importance of FLE or to the actual prevalence of FLE among unmarried women in India, we found substantial differentials across socio-economic groups. This indicates that, even after more than six decades of planned development efforts in India, a large proportion of the population (those living in rural areas, the illiterate, and marginalized social groups) continues to lag behind when it comes to the adoption of modern attitudes and healthy sexual behaviour. Hence, it is crucial for policy makers and programme managers to take note of these socioeconomic hierarchies in Indian society while designing and implementing any FLE programme. However, most political and religious leaders in India are unfortunately not in favour of sex education at the school level. The Rajya Sabha of the Indian Parliament constituted a committee to examine the implementation of the Adolescent Education Programme (AEP). The committee categorically opined against the implementation of sex education at the school level and felt that the AEP might cause irreparable damage to the future of India by polluting young and tender minds, and could invariably promote promiscuity. The report also raised serious objections to the study materials and kits prepared for the implementation of the AEP in Indian schools. While denouncing the case for the implementation of any AEP at the school level, the report emphasized the need to pass on the message of no sex before marriage among adolescents, declaring it immoral, unethical and unhealthy. In addition, it held that students should be made aware of the marriageable age, which is 21 years for boys and 18 years for girls, and that any indulgence in sex outside the institution of marriage is against the social ethos [38]. Finally, we summarize the key issues that emerge from this study. There exists a wide gap between the proportion of women who perceive FLE to be important and those who actually received any sex education. It was also true that women who received family life education had better knowledge and awareness of reproductive health issues than their counterparts. The level of awareness and knowledge regarding family life education is higher among the educated, the better-off sections, and those living in urban areas. The growing population, changing lifestyles and increasing incidence of HIV/AIDS pose great challenges. In order to prepare the youth to face these challenges, introducing sex education is an important step.
The nation-wide surveys clearly illustrate that an overwhelming majority of young women and men are in favour of introducing family life education. The government and civil society should initiate a national debate to arrive at a consensus on this issue among various sections of society. The study strongly argues for the necessity of formulating an appropriate policy on family life education, so as to address the unmet need for scientific learning and training on matters of family and sexual life among adolescents and youth. --- Author Contributions Conceived and designed the experiments: NT TVS. Performed the experiments: NT TVS. Analyzed the data: NT. Contributed reagents/materials/analysis tools: NT TVS. Wrote the paper: NT TVS.
Background. Countries need vital statistics for social and economic planning. The World Health Organization (WHO) recommends at least 80% coverage before registration data on births and deaths are used for social and economic planning. However, registration remains low in developing countries. National coverage for Kenya in 2014 was 62.2% for births and 45.7% for deaths, with wide regional differentials. Kilifi County in the coastal region of Kenya reported rates below the national coverage, at 56% for births and 41% for deaths in 2013. Objective. To determine the level of knowledge and practice and the reasons for low coverage of birth and death registration in Kilifi County. Method. This is a descriptive cross-sectional study that employed a multistage cluster random sampling procedure to select a sample of 420 households, from which household heads and women with children below five years old were surveyed. Results. Of the 420 households sampled, almost all respondents (99%) were aware of birth registration, while 77% were aware of death registration. Their main sources of information were assistant chiefs, at 77% for both birth and death registration, and family and friends, at 67% for deaths and 52% for births. Coverage was 85% for birth registration and 63% for death registration. More deaths occurred at home (55%) than in hospital (44%), while 55% of deliveries occurred in hospital and 44% at home. The main reasons for not registering deaths were ignorance (77%) and transport and opportunity costs (21%), while for births they were ignorance (42%), travel and opportunity costs (41%), lack of identification documents (9%), and home deliveries (7%). Conclusion. Registration of births and deaths has improved in Kilifi County. The drivers are legal requirements and the requirements to access social rights. The reasons for not registering are ignorance and opportunity costs. The community should be sensitized on the importance of registration, home deliveries and deaths should be addressed, and the efficiency of registration should be increased. Further research is recommended to determine the severity of teenage pregnancy and orphanhood in the county.
Introduction Vital statistics are necessary for determining population changes, public administration, policy formulation, planning, and the implementation of development policies. Ideally, birth registration is part of an effective civil registration system that acknowledges the existence of the person before the law, establishes the child's family ties, and tracks the major events of an individual's life, from live birth to marriage and death. A birth certificate provides some, albeit minimal, protection against early marriage, child labour, recruitment into the armed forces, or detention and prosecution as an adult [1]. The data are required to formulate programs relating to maternal and child health, including nutrition, immunization, and universal education. The World Health Organization (WHO) recommends at least 80% coverage as the criterion for using registration data on births and deaths. However, coverage of birth and death registration remains unacceptably low, especially in developing countries. Globally, each year, about two-thirds of the 57 million annual deaths go unregistered, and as much as 40% (48 million) of 128 million births go unregistered, representing one out of three children [2]. Although it can be argued that censuses and other large sample surveys may be useful in supplementing demographic data in countries where the vital registration system is still in its infancy, they are expensive to perform on a routine basis, are frequently marred by politics, disputes about figures, underfunding, and topographical challenges, and should rather serve as complements within a comprehensive health information system [3,4]. Civil registration of vital events in Kenya started in 1904 but was limited to Europeans and Americans. After independence in 1963, however, registration was made compulsory for all residents of Kenya. The Civil Registration Service (CRS) is the government agency responsible for the registration of births and deaths. Assistant chiefs are the government registration agents for vital events that occur at home or in the community, while health care workers are responsible for events that occur in health institutions. The agents submit notifications to civil registrars in civil registration offices for registration and the issuance of birth and death certificates. Despite this clear path, registration coverage in Kenya is below WHO-recommended levels. National coverage was 62.2% for births and 45.7% for deaths in 2014, with wide regional differentials, which suggests that the factors determining coverage may vary by county. However, very few community studies have been conducted in Kenya to determine the factors responsible for the low coverage. This study was undertaken to identify the factors responsible for the low registration of births and deaths in Kilifi County, where coverage was below the national level at 56 and 41 percent, respectively, in 2013. The study assessed knowledge, attitude, and practice (KAP) of birth and death registration in Kilifi County. --- Materials and Methods --- Study Site. The study was conducted in Kilifi County. The constitution of Kenya divides the territory of Kenya into 47 geographical units, of which Kilifi County is one. Kilifi County is located in the coastal region of Kenya and has an area of 12,245 km². According to the Kenya national population census, the county's population was 1,109,735 in 2009 [5], with a growth rate of 3.1 percent per annum. The main economic activities are tourism and fishing, owing to its proximity to the Indian Ocean.
It has fertile soils and a good weather pattern, making it suitable for agricultural farming. --- Sampling and Sample Size. This study employed a multistage cluster random sampling procedure. A sample of 420 households was drawn from twelve (12) sublocations selected from four subcounties. The four subcounties were randomly selected from the six subcounties that make up Kilifi County. Thirty-five (35) households were then selected systematically from each sublocation; a schematic sketch of this design is given below. The main tool for the study was a household survey questionnaire with vital-event sections covering deaths in the five years preceding the survey and births of children below five (5) years old. Interview guides were also used to collect qualitative data to explain the survey findings. --- Study Design and Target Population. The KAP survey was cross-sectional and targeted household heads and women with children below five (5) years old. The respondents were interviewed to elicit information on household characteristics, registration of deaths that occurred in the last five years, and births of children below five years old. The targeted women also provided information on their experience with the civil registration system (recent/current bottlenecks in civil registration of births) and reasons for not registering births. To address the known limitations of a quantitative design, qualitative techniques (key informant interviews, KII, and focus group discussions, FGD) were employed to contextualize and supplement the survey findings and to explain the practices and attitudes captured in the survey questionnaire responses. The qualitative study targeted registration agents, to understand the CRS system, and community elders, to understand the sociocultural context. The qualitative interviews were aimed at describing and understanding the community's own perceptions and experience of birth and death registration [6]. --- Results To achieve the study objectives, 420 households were surveyed, six focus group discussions were held with members of the community, and seven key informant interviews were conducted with birth and death registration agents. From Table 1, out of the 420 households sampled, 88 (21%) were in urban and 332 (79%) in rural areas. Over 10 and 13 percent of the households surveyed were more than 10 km away from the assistant chief's office and a health facility (the registration agents), respectively. About 84.3 percent of the households had at least one child under the age of five (5) years, while more than a quarter (27%) of households had experienced a death in the last five (5) years. Of the 420 respondents to the household questionnaire, 164 (39%) were heads of household and 237 (56%) were spouses of the heads of household. Two hundred and sixty-four (63%) of the respondents were female, while 37 percent were male. Over 80 percent of respondents were between 25 and 49 years old, about 10 percent were 50 or more years old, and less than nine percent (8.8%) were between 20 and 24 years old. Thirty percent (30%) of respondents had no formal education, 55 percent had some primary education, and less than 14 percent (13.9%) and less than 2% had some secondary and tertiary education, respectively (Table 2). More than three-quarters (76%) of respondents were from monogamous families and fewer than 17 percent from polygamous ones; 4 percent had never married, while about 3 percent were widowed, divorced, or separated. The median household size was 7 persons. Two hundred and eighty-four (68%) of the respondents had children below 5 years (Table 2).
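The multistage design described under Sampling and Sample Size can be illustrated with a minimal Python sketch. Only the stage counts (4 of 6 subcounties, 12 sublocations, 35 households per sublocation, giving 420 households) come from the study; the sampling frames, the 500-household sublocation listing, and the even split of three sublocations per subcounty are hypothetical assumptions:

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

# Stage 1: randomly select 4 of the 6 subcounties in Kilifi County.
subcounties = [f"subcounty_{i}" for i in range(1, 7)]
sampled_subcounties = random.sample(subcounties, 4)

# Stage 2: select 12 sublocations in total (assumed 3 per sampled subcounty).
sublocations = [f"{sc}/subloc_{j}" for sc in sampled_subcounties for j in range(1, 4)]

def systematic_sample(frame, n):
    """Select n units at a fixed interval from a random start (systematic sampling)."""
    step = len(frame) // n
    start = random.randrange(step)
    return frame[start::step][:n]

# Stage 3: systematically select 35 households per sublocation
# from a hypothetical 500-household listing.
households = []
for sl in sublocations:
    frame = [f"{sl}/household_{k}" for k in range(1, 501)]
    households.extend(systematic_sample(frame, 35))

assert len(households) == 12 * 35 == 420  # matches the study's sample size
```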
Of the 420 respondents interviewed, almost all (99.5%) were aware of birth registration, while more than three-quarters (322; 76.7%) were aware of death registration. Their main sources of information on death registration were assistant chiefs (77%) and family and friends (67%). The main sources of information on birth registration were likewise the assistant chief (77.2%) and members of the family and friends (51.7%). However, only 11 percent of respondents reported having heard of death registration from health workers, compared with 37.8 percent for birth registration (Table 3). Almost all (97%) of the 322 respondents aware of death registration and of the 419 respondents aware of birth registration knew at least one place where the respective civil event could be registered. Among the respondents aware of birth registration, a higher percentage (94%) knew how to register a birth, compared with only 69 percent of those aware of death registration who knew the process of registering a death. The majority (66%) of the respondents who were aware of death registration were also aware of the importance of death registration. Fifty-nine percent cited the legal requirement, and 2 percent cited obtaining a burial permit. A sizable percentage mentioned individual benefits: 37 percent cited succession and 14 percent honouring the deceased. However, 47 respondents (11%) had no idea why registration of deaths is done. For birth registration, almost all (93%) of the 419 respondents who were aware of birth registration were also aware of the importance of registering births. The reasons advanced for registering births varied: to meet the legal requirement (40%); to meet school requirements, such as registering for national examinations and accessing bursaries for orphans (71%); to acquire a national identification card (ID) and passport (36.4%); and because it is good to obtain a birth certificate for any eventuality (17.5%). However, 28 (6.7%) of the respondents could give no reason why births should be registered (Table 3). The survey recorded 140 deaths and 671 births in the five years prior to the survey. Of the 140 deaths, 79 (56%) were male and 61 (44%) female, and more deaths occurred at home (55%) than in hospital (44%). Of the births, about 333 (50%) were male and 338 (50%) female, and more occurred in hospital (55%) than at home (44%) (Table 4). The reasons for not registering deaths were ignorance (54%: did not know where to register and did not know the importance), not having heard of death registration (23%), distance to the registration office, long waits in the queue and costs associated with travel (21%), and deaths that occurred at home (9%). For birth registration, the reasons for not registering were ignorance (42%), transport and associated costs (21%), distance to the registration office and waiting time (20%), lack of identification documents (9%), and home deliveries (7%). --- Discussions --- Knowledge of Civil Registration System. Knowledge about death registration in Kilifi County is high at 77% but is lower than that of birth registration (99%). This can be explained by the number of interventions in Kilifi County; most CRS partners are implementing interventions addressing issues of late and low registration coverage for births, while there are none for death registration. The main drivers of death registration in the region are the legal requirement of a burial permit for body disposal, and succession.
The low level of awareness found can be associated with the low level of education among residents of this area; only about 15 percent of the respondents had attended school beyond primary level, and 30 percent had no formal education. Other similar studies have found that where awareness about the registration of civil events is low, coverage is also low [3,7]. According to UNICEF, unregistered children tend to be found in areas where there is little awareness of the value of birth registration [8]. The respondents found in this study to be ignorant about civil registration (especially death registration) and about where to register births and deaths, who have no individual or legal incentives to register and are not clear about the registration process, are therefore unlikely to register a birth or death if and when it occurs in their households. This shows that awareness of civil registration among the people is one of the important reasons for the low coverage of birth and death registration in Kilifi County. The study also established that the main sources of knowledge on birth and death registration are the registration agents, family, and relatives. The role of the media in this regard is minimal. This is not surprising, as the study found that most (84.5%) of the respondents had not attended school beyond primary level and were therefore not amenable to sources of information outside their social circles. Any awareness campaign on civil registration targeting the community should therefore work through communal activities, including meetings, weddings, and funerals. --- 4.2. Practice. The crude birth rate (CBR) and crude death rate (CDR) are indicators of levels of living or quality of life [9]. In this study, the crude birth and death rates for Kilifi County were 38.9 and 8.3 per 1000 population, respectively. The CBR compared well with the World Bank figure of 37.6 per 1000 population for the year 2011; however, the CDR was at variance with the World Bank's 11.8 per 1000 population for the year 2011. This may be attributed to the underreporting of deaths, especially of neonates, who are regarded as a bad omen and are interred immediately after death, as explained by a respondent in the FGDs: 'Stillbirths and deaths of newborn babies are mostly not reported, especially when they take place at home. They are buried immediately due to cultural beliefs: they are a bad omen.... Sometimes back they used to be buried under the bed or under a big tree and the whole thing (death) is forgotten.' For a CDR of 11 per 1000 population [10,11] and the 3,360 people represented in this study (420 households with a household size of 8), the expected number of deaths is 37 per year, or 185 deaths in five years, as opposed to the 140 deaths reported in this study. The 45 people whose deaths were not reported (185 deaths computed minus 140 deaths reported) might have suffered the 'scandal of invisibility' [12]. The study found the coverage of birth and death registration in Kilifi to be 85 percent and 84 percent, respectively. However, the death coverage is at variance with the World Bank-based figure of 63.3 percent (computed from the World Bank CDR of 0.011), which can be attributed to the low death reporting implied elsewhere in this report. Birth registration was found to vary by place of residence, age of the mother, marital status, and education level of the household head.
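Written out, the expected-deaths computation above is a direct application of the crude death rate; all figures come from the text (CDR of 11 per 1000, 3,360 people represented, five-year recall period):

\[
\text{Expected deaths per year} = \mathrm{CDR} \times \text{population} = \frac{11}{1000} \times 3360 = 36.96 \approx 37
\]
\[
\text{Expected deaths in five years} = 36.96 \times 5 = 184.8 \approx 185, \qquad 185 - 140 = 45 \ \text{unreported deaths}
\]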
Birth registration was higher among children in urban areas than in rural areas, which can be attributed to stronger links to the mainstream mechanisms of society, such as health services. About 60 percent of births in urban areas occurred in hospitals, compared with 54 percent in rural areas. Kumar et al. reported a similar finding in eastern Uttar Pradesh, where birth registration coverage varied by place of residence (urban areas reporting better coverage than rural areas), parents' level of education, socioeconomic status, and the marital status of the mother [13]. However, the study found low uptake of event certificates: 19 and 13 percent for deaths and births, respectively. This can be interpreted as a low individual incentive to obtain death and birth certificates in the area. UNICEF reported similar findings, with 85 percent of registered children under the age of five not having a birth certificate [8]. The low demand for birth and death certificates can be explained by the low level of awareness of their importance reported in this study. The proportion of deaths with a death certificate is higher than that of births with a birth certificate. This may be because succession and inheritance usually follow shortly after a death, whereas a birth certificate is only required when the child joins school at age six or seven. The birth and death registration rates reported in this study are high compared with the 56 percent and 41.1 percent for births and deaths, respectively, reported in the annual vital statistics of 2013. The striking gap between the rate of death registration in this study and the vital statistics published in 2010-2013 can be explained by the underreporting of deaths noted in this study, or by a speculated loss of forms/data between the agent and the Civil Registration Office (CRO). --- Reasons Advanced for Birth and Death Registration. The drivers of the registration of deaths and births in this area are as follows: the need to satisfy legal requirements, to meet school and bursary requirements, to acquire a passport and ID, and succession. The perception centres on the need for a certificate to achieve something else [3,14]. For example, actual death registration is highly driven by the legal requirement to obtain a burial permit for disposal of the body, while for birth registration the key motivating factor is the requirement of a birth certificate to access social rights, including education, in future [3]. This indicates that to increase birth and death registration in this area, appropriate incentives are required; a lack of sufficient incentives or pressures on citizens to register leads to low coverage [7]. The reasons advanced by respondents for not registering vital events and, where events were registered, for not obtaining birth and death certificates were distance to the registration office and the associated costs, waiting time and the opportunity cost of gainful work, and ignorance of the importance of birth and death registration [7,13]. 4.4. Challenges in Birth and Death Registration. The death and birth registration system faces various challenges that affect its optimal performance, especially completeness and data quality. The issues include the following: (i) Shortage of Registration Materials. An assistant chief explained that he is sometimes forced to record the details of a reported vital event in a notebook and transfer them to the notification forms when supplies are replenished. (ii) Competing Priorities.
Registration of vital events is not always a priority in the registration agents' work; it is secondary to their other tasks, including clinical work for health workers and public administration duties for assistant chiefs. (iii) Limited Knowledge of Event Registration among Agents. It was established that some agents, especially chiefs, ask for more information than is necessary (details of the child's father) as a requirement to register a birth, while some agents in health facilities do not understand their role as registration agents (e.g., a nurse refusing to fill in a notification form). The agents should be sensitized on death and birth registration, especially on the requirements for event registration. --- Conclusion Death and birth registration in Kilifi County has improved. However, gaps in awareness, a lack of clarity about the registration process, and individual perceptions are contributory factors to suboptimal civil registration in Kilifi. The leading reasons for not registering are distance, long queues or overnight stays for the services, and the associated costs. Others are ignorance (never having heard of civil registration, not knowing the importance and process of registration) and inertia (too many births and deaths occurring at home). The drivers of registration are legal requirements and the requirements to access social rights, including education. The reasons for not registering are ignorance, opportunity and travel costs, and deaths and deliveries occurring at home. Generally, respondents perceived the registration of births and deaths as expensive, in terms of both travel and opportunity costs, and as having little or no immediate benefit to the individual, and thus not a priority. To improve coverage for both births and deaths, the study recommends the following: (i) Enhance awareness campaigns on civil registration among residents of the area. The most effective channels of awareness creation are community gatherings, including weddings and burials, and community health workers (CHW). The messages should include the registration procedure, the place to register, and the importance of birth and death registration as incentives. (ii) Extend the birth and death registration network to the subcounty level, either by opening registration offices or by introducing mobile birth and death registration services. This will reduce the distance residents have to travel to access registration services, consequently cutting the cost of transport and associated expenses and reducing time away from daily work. (iii) Enforce the law on birth and death registration. (iv) Avoid stock-outs of registration materials (application forms). (v) Undertake a validation exercise/study to ascertain system efficiency, in particular the link between the registration agents and the Civil Registration Offices. --- Data Availability Data is available on request. --- Additional Points Paper Context. Birth and death registration in Kilifi County in Kenya is below national coverage. The reasons for suboptimal coverage were not known. This study has identified the reasons and recommended measures to address them. If the measures are implemented, coverage would improve to more than 80%, and the data from the civil registration system could therefore be used for social and economic planning in Kenya. --- Consent Respondents' consent for publication is not required. --- Conflicts of Interest The authors declare that they have no conflicts of interest.
Older adults' use of information and communication technology (ICT) is challenged or facilitated by perceptions of usefulness, technology design, gender, social class, and other unspoken and political elements. However, studies on the use of ICT by older adults have traditionally focused on explicit interactions (e.g., usability). This article therefore analyzes how symbolic, institutional, and material elements enable or hinder older adults' use of ICT. Our ethnographic methodology includes several techniques with Spanish older adults: 15 semi-structured interviews, participant observation in nine ICT classes, online participant observation on WhatsApp and Jitsi for 3 months, and nine phone interviews due to COVID-19. The qualitative data were analyzed through Situational Analysis. We find that the elements hindering or facilitating ICT practice are implicit-symbolic (children's surveillance, paternalism, fear, optimism, low self-esteem, and contradictory speech-acts), explicit-material (affordances, physical limitations, and motivations), and structural-political (management, the pandemic, teaching, and media skepticism). Furthermore, unprivileged identities hampered ICT practices: female gender, blue-collar jobs, illiteracy, and elementary education. However, being motivated to use ICT prevailed over having unprivileged identities. The study concludes that society and researchers should perceive older adults as capable with technologies and should examine beyond explicit elements. We urge exploration of how older adults' social identities and situatedness affect ICT practice. Concerning explicit elements, Spanish authorities should improve and adapt ICT facilities at public senior centers and in older adults' homes, and ICT courses should prioritize tablet and smartphone training over computers.
INTRODUCTION He (my husband) does not let me touch the cellphone, as I damage it. He claims that I bust everything I touch, so I get afraid of it! (Natalia, 88-year-old woman). An older couple over 65 years old talk about their life stories, troubles, reasons, and expectations, embodying their experiences with mobile phones. Natalia does not feel confident with the device, since her husband is responsible for the devices at home but does not help her learn to use them. This quote represents what discrimination in technology entails: it is related not only to age but also to other social identities, like gender. This study focuses on aging and ICT because the older Spanish population is expected to grow from 19.2 to 25.2% by 2033 (INE, 2018a). Moreover, we face a large digital divide in Spain: 36 percent of people 65-74 years old do not use the internet, while 99.5 percent of those between 16 and 24 years old do (Pérez Díaz et al., 2020:37). The article broadly aims to unpack what enables or hinders Spanish older adults from using ICT. Our method is qualitative and inductive, to acknowledge older adults' diverse social backgrounds. To explore this, we employ three analytical frameworks: explicit, implicit, and structural barriers or facilitators. These frameworks are in a way inspired by other studies, but the concepts together have not yet been used in the field of aging and technology. The explicit, implicit, and structural frameworks should not be understood as micro-meso-macro: our frameworks are not hierarchical, and the conceptualization emerges from our interpretation of the data of this research. We interpreted the data through the Situational Analysis suggested by Clarke and Friese (2007) to identify and represent the complexity and variation of the data and to reveal hidden perspectives. This type of analysis uncovered the implicit and structural frameworks, beyond explicit elements. The explicit framework examines the bodily and material experiences of older adults practicing ICT, i.e., what is easily observable, such as a physical barrier or a clear preference for a technology. For example, Gitlow (2014) finds that vision loss, hearing loss, and fine motor difficulties hamper ICT use. Explicit experiences can also be found in a usability study or in a survey that explores the attitudes of older people toward a product. Prominent theories like the (Senior) Technology Acceptance Model (Renaud and Van Biljon, 2008:216) analyze clear-cut attitudes (perceived usefulness and ease of use) and their influence on the adoption of technology. These models exemplify how a relationship with ICT is usually researched through visible interactions and explicit surveys. Another example of an explicit-material study is found in Álvarez-Dardet et al. (2020), who sent questionnaires to Spanish older adults to assess perceived barriers, frequency and type of use, and attitudes toward smartphones, PCs, and tablets. Our article takes a step forward by analyzing implicit and structural elements. The implicit framework delves into the unspoken and symbolic elements present in ICT use, e.g., an invisible patronizing relation between a son and an older adult that prompts the older person to stop using a technology. This framework is inspired by the field of socio-gerontechnology, which critically analyzes technological solutions for older people and aging policies (Peine et al., 2021).
Instead of embracing determinist notions of technology, the field appraises the entanglement of social, political, biological, psychological, and other factors in the interplay of technology and aging (Peine et al., 2021). An example of implicit use of technology is found in Greenhalgh et al. (2013), who encountered unexpected uses of tablets (older adults' pragmatic customization, combining new with legacy devices), which calls into question the study of technologies against predefined yardsticks. Peine and Neven (2019:19) assert that producers of gerontechnology should not only use concepts like 'acceptance' or 'input' to measure the impact of technology. Instead, they should look into socio-material practices to examine race, ethnicity, gender, and other unspoken elements mediating technology practice, which this study attempts. An example of an implicit element hampering ICT use is low self-confidence, especially when first using a technology. Confidence is difficult for people to report explicitly, and thus we consider it implicit. Vaportzis et al. (2017:5) found that their participants emphasized their fear of using tablets and other technology due to low confidence; confidence was the primary barrier to ICT. Horst et al. (2021) unpack confidence with technology as a psychographic feature that could reduce social isolation for older adults. They conclude that less tech-savvy participants with low self-confidence report frequent feelings of isolation. The structural framework unpacks the political and institutional factors that go beyond older adults' choices when using ICT. It takes an approach similar to the implicit framework in that it moves beyond material interactions, but it differs in that structural elements are large-scale and can hardly be changed by an individual. For example, a structural element can involve the political management of a senior center, policies on aging, or the pandemic. In this regard, Twigg and Martin (2014) highlight the need to study how the 'politics of age' spread into everyday lives, and Lassen and Moreira (2014) researched how innovation policy prioritizes certain conceptions of aging while sidelining others. In the realm of technology and aging, López Gómez and Criado (2021) look into the politics of participatory methods through Spanish examples of telecare. Similarly, McGrath and Corrado (2019) examine the factors that trigger technology adoption among older people with age-related vision loss. Among these, they find institutional/political factors regarding privacy, centered around the provision of personal information when purchasing applications. Investigating other political issues in technology use, Bergschöld (2018) unfolds how nursing students learn the sociopolitical consequences of gerontechnology for older adults with dementia. The experiences of older adults with ICT are unpacked through the concepts of 'barriers and facilitators' that hinder or enable technology use. Analyzing barriers and facilitators should not be understood as a way to prescribe how to trigger the use of ICT in the older population, since ways to support older adults are not necessarily technological. As Selwyn et al. (2003:577) say, there is a political interest in assuming that ICT is inextricably useful and desirable for older adults, which is often untrue. Rather, we attempt to critically problematize the use of ICT by examining barriers and enablers. We performed the study before and during COVID-19, but these periods are not central points of comparison.
Information and communication technology in this study comprises landline phones, cellphones, radio, TV, smartphones and apps, laptops, and computers. We understand ICT use as a sociotechnical practice whereby the social context is as relevant as individual use or technical features. Within the social context, we find social identities affecting ICT practice. ICT usage remains connected to social class, age, gender, social capital, and other identities. We paid special attention to the social identities of the participants because we believe they are important to technology use. Our goal is not to analyze social identities separately, because social identities are present across the explicit and implicit frameworks. In the structural framework, particular stories and participants' social identities are not disclosed because we shed light on the political structures that determine ICT use from upper positions. Several authors draw on social identities and ICT. Cotten et al. (2016) note that older adults are frequently compared with other age cohorts as a whole, so they demonstrate that racial and socioeconomic variations occur within the older population, showing that it is not a homogeneous group. Tan and Chan (2018:129), researching Singaporean older adults, show that social and cultural capital are particularly central both in those who are reluctant toward ICT and in those comfortable with it. In Colombia, Mexico, and Peru, education affects ICT use more than income (Gutiérrez and Gamboa, 2010:358), and rural older adults faced social and technical hurdles with ICT but gained social engagement and independence (Baker et al., 2017:16). In Spain, González et al. (2015:4) conclude that age, education, and income do not affect older people's interest in using ICT. Rather, high involvement with computers prompts Spanish older adults with limited education and computer skills to view ICT positively as a means to communicate, entertain themselves, and stay informed. Apart from social identities, vast quantities of literature focus on the explicit experiences of older adults with ICT, e.g., studies of usability, physical barriers, specific motivations, etc. However, little is known about experiences within the structural and implicit frameworks, especially among Spanish older adults. The general aim of this research is to disclose what enables and hinders ICT practice. For that, we highlight the social identities of the participants, and we present the experiences of Spanish older people with ICT in three frameworks. In other words, this research unpacks how social identities are entangled in the implicit, explicit, and structural enablers of or barriers to ICT use for Spanish older adults. --- MATERIALS AND METHODS Our methodology involves three stages that are further elaborated below. As an overview, we first interviewed older adults to comprehend their experiences with tablets and mobile phones. Grasping these experiences allowed us to later intervene in a community of older adults to help them learn ICT. Consequently, we participated in and observed ICT courses for older adults until COVID-19. The outbreak led to online participatory observation to learn how they dealt with confinement through ICT. The rationale for the different methods was to apply our knowledge in a community setting and adapt to the pandemic. Furthermore, the different methods gave rise to different insights and interpretations of the data. We chose ethnography because it provides deep details about the participants' stories.
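As a minimal sketch of this analytical setup, the lines below show how coded fieldwork fragments can be grouped by framework while keeping track of their source, which is the essence of the cross-source comparison described in the following paragraphs. The Excerpt structure, code names, and example entries are hypothetical illustrations, not our actual NVivo project.

from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Excerpt:
    # A coded fragment of fieldwork data; hypothetical structure for illustration.
    source: str     # "interview", "course observation", or "online observation"
    code: str       # analytical code, e.g., "low self-confidence"
    framework: str  # "explicit", "implicit", or "structural"

excerpts = [
    Excerpt("interview", "physical barrier", "explicit"),
    Excerpt("course observation", "low self-confidence", "implicit"),
    Excerpt("course observation", "inoperative facilities", "structural"),
]

# Group coded fragments by framework while keeping their sources, so each
# pattern can be checked across interviews, courses, and online observation.
by_framework = defaultdict(list)
for e in excerpts:
    by_framework[e.framework].append((e.source, e.code))

for framework, items in by_framework.items():
    print(framework, "->", items)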
The explicit/implicit/structural frameworks are based on the fieldwork: interviews and observations gave explicit information (e.g., physical barriers and motivations), whereas participatory methods uncovered implicit feelings or structural-institutional experiences with ICT, such as surveillance and management (Flow Chart 1). The participants come from diverse class, education, and gender backgrounds, especially in the first interviews, whereas the participants in the ICT courses came from homogeneous backgrounds (rurality, illiteracy, women, etc.). A summary of the demographics, with further details below, can be found in Table 1. After and during the interviews, courses, and online participation, the ethnographers took notes by means of explorative jottings (in a field notebook and without a predefined sheet), pictures, and screenshots. The data were later documented in shared documents for the rest of the authors and translated from Spanish into English. We changed the names of the participants to protect anonymity, and the ethical committee of the Universidad Politécnica de Madrid approved this research. --- First Stage: Interviews The lead author first interviewed 15 older adults from October to November 2019. Interviews lasted 1 h 7 min on average. He took a few photographs of the devices while they were being used and recorded audio for later transcription and analysis. For this, informed consent was given. The inclusion strategy was to interview adults older than 65, living in Madrid, with at least a basic command of ICT (they had a mobile phone and relatively understood what an ICT is). We recruited the first four older adults through an H2020 European Institute of Innovation and Technology (EIT Health) project. Then, we sought older adults in our networks (friends, relatives, and colleagues who gave us contact information for older adults). The first author eventually interviewed 11 more. Recruiting a participant through a friend or colleague enabled us to meet at the participant's home, which helped contextualize daily ICT usage. Other interviews took place in cafeterias or a hospital. Half of the participants came from rural Spain to Madrid 60 years ago, the mean age was 78.5 (ages ranged from 65 to 90), 10 were women, and two-thirds were blue-collar workers with elementary education. We initially focused on low-income older adults, but we later kept the inclusion criteria open for several reasons. Firstly, we could not infer their income by observation, and we preferred not to ask out of politeness. Secondly, the type of job or level of education can be more representative of social class than income. A person can have a low annual income but simultaneously own properties or have savings, which better indicate social class, privileges, or vulnerability to discrimination. The interviews were semi-structured. We initially asked the participants about their life stories and daily life: life in the neighborhood, work history, childhood, hobbies, etc. This information enabled us to understand, for example, how being a woman/man and a blue-/white-collar worker affected their lives. Then, we shifted the discussion to smartphones and tablets, as these ICT were the focus at the time. We discussed general and application use, motivations, barriers, the learning process, feelings, and emotions about ICT (see interview guide in Supplementary Material 1). As the interviews progressed, the participants included other technologies (radio, TV, computers, and landline phones) when talking about phones/tablets, so we broadened the scope.
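As a small worked illustration of how the sample summaries just quoted (mean age 78.5, range 65-90, 10 women, two-thirds blue-collar) are computed, the sketch below uses invented participant records; the values are placeholders, not the actual interviewees.

from statistics import mean

# Hypothetical (age, gender, occupation) records standing in for the 15
# interviewees; the values are invented placeholders for illustration only.
participants = [
    (65, "F", "blue-collar"), (90, "F", "blue-collar"), (78, "M", "white-collar"),
    (81, "F", "blue-collar"), (72, "M", "blue-collar"),
]

ages = [age for age, _, _ in participants]
print(f"mean age: {mean(ages):.1f}, range: {min(ages)}-{max(ages)}")
print("women:", sum(1 for _, gender, _ in participants if gender == "F"))
print("blue-collar share:", sum(1 for _, _, job in participants if job == "blue-collar") / len(participants))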
--- Second Stage: Participatory Observation in ICT Courses The insights generated from the previous interviews acquainted us with the interactions of older adults with ICT; however, verbal reports of experiences with technologies, without observation, remained insufficient. Consequently, the lead author approached a different group of older adults at a senior center. In Madrid, there are several public senior centers where different activities (computers, yoga, games against memory decline, etc.) take place for dwellers over 65 years old. These centers are financially and logistically supported by the City Council. The courses are organized routinely, and the older adults go through an enrolment process and must attend the courses if accepted. The activities are consistent with active aging policies to support the health and prevent the loss of autonomy of older adults. He observed and participated in two ICT courses from January to February 2020, since we were not allowed to have more than one attendee. In the courses, the ethnographer observed the students dealing with computers and smartphones: emotions, complaints, motivations, physical struggles, teaching style, the center's management, and ICT affordances. Later, aside from observing, he simultaneously helped students carry out their tasks. Helping the students allowed the ethnographer to build rapport with them. The students started to be more open and to share their life stories (which strongly intersect with ICT use, as this article unfolds). This was eventually helpful to grasp their barriers with and enablers of ICT. We attended nine classes in total at the senior center: six focused on computers and three on smartphones. Four classes were for elementary students of ICT and five for advanced students. Each level had a different teacher, and there were approximately nine attendees, mostly women in elementary ICT and men in the advanced course. Their age was 65+. The tasks were emailing and using YouTube, Drive, and Word, and each lesson lasted 1 h 30 min. The classroom had 11 old yet well-functioning computers. The main problem rested in the inoperative Wi-Fi connection provided by the municipality, which is later analyzed as a barrier for the students. The senior center is in the neighborhood "Orcasitas," situated within the poorest 31 percent of areas in Spain and the poorest 10 percent in Madrid (INE, 2018b), and where the students live. Through talks with the center's manager, we came to know about the students' overall lower-class background: the majority of them have a low level of education and rural origins, they live in social housing, and they are known for being politically active (during the fieldwork, the students participated in demonstrations asking the authorities to remove asbestos from their blocks). This group differs from the first older adults interviewed, who had more diverse backgrounds. The diversity of people enabled us to compare experiences with ICT from different angles; e.g., a woman who worked as a seamstress or as a housewife probably lived different barriers and enablers with technologies than a white-collar man. Furthermore, broadening the inclusion criteria allowed us not to focus on a particular group whose social identities are difficult to infer. When COVID-19 grew, between March and May 2020, the lead author kept in touch with the same students of the ICT courses and the teacher via WhatsApp groups and video chats on Jitsi. We chatted, exchanged videos and pictures, and solved riddles and games.
These interactions were captured to observe the limitations and positive influences of ICT, how WhatsApp and Jitsi interceded, how teaching ICT evolved and could be otherwise, and skepticism toward the management of the pandemic through social networks or TV. The pandemic itself was not an analytical category; rather, we studied how COVID-19 transformed the ways we experienced technologies and how participants expressed emotions and feelings through ICT. WhatsApp and Jitsi enabled us to overcome the COVID-19 challenge and to research ICT practice by getting involved and observing online on the platforms. However, for the students who did not know the ethnographer, online research hindered an empathetic relationship (unlike in-person interactions). We also lacked insights from participants who were members of the WhatsApp groups but were not active. We also phone-interviewed seven students of the advanced class, one elementary student, and one person from our networks. The ethnographer contacted them on WhatsApp to arrange the interviews, and the advanced students participated more in the chat, which explains the gap between the numbers of advanced and elementary students interviewed. With the interviews, we aimed to unpack the role of ICT during the beginning of COVID-19 through the oral accounts that we lacked on WhatsApp and Jitsi. In particular, we discussed barriers with and enablers of ICT, how the participants kept in touch with loved ones and the doctor, app use, changes in the perception of ICT, and emotions and feelings about the pandemic expressed through technologies (see Supplementary Material 1.1). The interviews lasted around 20 min, and we transcribed the relevant fragments. --- Data Analysis We initially wanted to unfold the dichotomy of barriers/facilitators in ICT use. The data generated in the fieldwork (pictures, transcriptions, and field notes) were first transcribed, coded, and illustrated in nodes on NVivo by the lead author. NVivo is a software package that enables a researcher to analyze and code data, mostly qualitative. Even though the data came from different methods and participants, the data were grouped in nodes transversal to the different sources, i.e., we triangulated the data. The triangulation of methods consists of using and converging data from different participants and settings (Patton, 1999:1189), and it enabled us to capture different dimensions of the experiences with ICT (e.g., before and during COVID-19). The other authors reviewed and discussed the codes. The resulting main nodes were barriers and facilitators, grouping subthemes (physical barriers, motivations, affordances, usability flaws, etc.). However, these elements fell short, and we later aimed to analyze beyond explicit patterns (visible or spoken barriers/facilitators). Therefore, we used the Situational Analysis of Clarke and Friese (2007) to reinterpret the nodes of NVivo. Situational Analysis is a methodology that uses situational maps to analyze a particular situation by identifying and representing the complexity and variation of data, revealing marginalized perspectives, and empirically decentering the "knowing subject." Situational maps intend to display the major elements of situations (e.g., discourses, humans and non-humans, controversies, collective elements, and symbolic/sociocultural, political/economic, temporal, and spatial factors) and incite analysis of intersections and divergences among them (Ibid.).
These maps are suggested along with Social Worlds and Positional Maps, but we only used situational maps because they were sufficient to uncover which elements are at stake in the interactions of older adults with ICT. The maps are not intended to illustrate "findings" or truths about a circumstance. Rather, maps are to be viewed as an interpretation of a situation that is constantly changing. By mapping the major elements in ICT practice, we identified the different technologies used; participants (teachers and older adults); affordances; facilitators (e.g., motivations); physical and silent barriers (e.g., cognitive decline and surveillance); institutions; the pandemic; lack of facilities in homes and the classroom; contradictory uses of ICT; social identities (e.g., gender roles); and attitudes toward ICT (e.g., adversity). The latter produced a messy map that can be seen in Supplementary Material 2. Then, the lead author connected the elements by colored lines to unwrap intersections and divergences. The rest of the authors reviewed the connections, and we perceived that social identities were not categories per se but cut across other categories. We then organized the elements in an ordered template retrieved from Clarke et al. (2018) that can be found in the next table. The template prompted us to separate individual and collective human actors; non-human elements; silent actants; discourses upon non-humans; debates; and symbolic, political, temporal, and spatial elements. Finally, splitting up the elements in the ordered table allowed us to identify three overarching analytical frameworks hindering or facilitating ICT use. The first deals with implicit-silent elements (e.g., unwritten rules, gender roles, low self-esteem, and contradictions). The second encompasses explicit-material experiences (e.g., physical limitations, ICT affordances, clear preferences, etc.). The third spans political and economic structures (e.g., lack of facilities, COVID-19, and economic limitations). These frameworks serve to analyze the experiences of older adults with ICT from visible, unspoken, and institutional angles that form the following section (Table 2). One participant illustrates how the frameworks intersect. We met her in the elementary ICT course and, during COVID-19, she showed perceptible barriers to typing (low vision and hand tremors), so she used voice recordings instead. She seemed not to grasp some unwritten rules of WhatsApp: there is no need to reply to every message. Her remaining obstacles were gendered and educational: illiteracy, widowhood, depression, loneliness, and inexperience in ICT (her husband used to handle ICT). She represents an example of embodying ICT through an intersection of explicit, implicit, and structural barriers. The following analysis is divided into three sections; however, some examples like this one might better be discussed through an intersection of the explicit, implicit, and structural frameworks (this is reflected in the discussion section). The next table is shown to visualize in a nutshell how the different experiences of the participants are organized (Table 3). --- ANALYSIS: EXPERIENCES OF SPANISH OLDER ADULTS WITH ICT --- Implicit Situations This type of interaction involves symbols or unspoken representations observed by the researchers that facilitate or hamper technology use. We generated these interpretations from situations, e.g., a participant using a computer in a class or a comment given by a participant (the words did not matter per se, but the representation of the comment, such as an emotion).
Social identities, such as lack of education in ICT, gender, or social class, are represented in these invisible experiences with ICT. --- Implicit Situations: Surveillance and Paternalism Participants often mentioned their children in the interviews in ways that hindered their use of ICT. We observed a strong patronizing relationship between adult children and older adults. Adult children taught them how to use ICT, bought the devices, checked the bills, supervised them, etc. For example, most of the initial participants used relatives' former mobile phones, and nine older adults stated they were taught ICT by relatives. Numerous adult children required their parents to carry the cellphone outside for safety. But several participants did not take the cellphone outside in order not to be monitored by their children. As such, ICT disrupts Ana's life: I do not take the phone outside, otherwise, they know where I am. […] When I am home, they never call me, but when I go out, they call me (Ana, widow). Other older women portrayed surveillance differently. Josefa (with vast work experience in ICT) refuses geo-localization. In contrast to the positive views, low self-confidence arose among the older adults inexperienced in ICT, which hindered their use. The participants who kept their cellphone instead of getting a new smartphone lacked the self-confidence to learn the latter. They associated this decision with age and felt comfortable with cellphones for their simplicity. One linked simplicity to low self-esteem: I asked him for a phone for rednecks, to call and receive calls (Alberto, older man). We observed the students (from the lower class and with elementary education) following the indications of the teacher cautiously. In this sense, they revealed a fear of improvisation and self-doubt, which can stem from the old school system. Moreover, they often deleted the whole text when mistyping something, instead of placing the cursor at the wrong character, possibly reflecting a lack of self-confidence. Linking fear of economic fraud and lack of experience in ICT, an older woman expressed reluctance: as the cellphone entails so much fraud, I do not want to get into trouble (Laura). Additionally, one person dreaded testing apps during COVID-19: I am scared of getting into a place and busting something or being charged (Francisco). --- Implicit Situations: Contradictory Speech-Act Although several older adults had a negative attitude toward ICT, we experienced inconsistencies associated with low self-esteem that remain common among women and lower-class participants. They said they did not know the differences between cellphones and smartphones, but later they demonstrated awareness. Others ignored what a smartphone was, despite possessing one. It seemed they expressed blindness toward ICT, regardless of competent performance and frequent use: If I ask them a direct question on ICT use, they ignore it or disuse it. But after follow-up questions or observations, they know what I mean! They are quick to say NO! It may be due to low self-confidence (Field note of an interview). --- Explicit Interactions This type of experience entails material situations that were either easily visible to the researcher or explicitly commented on by the participant. For example, a flaw in the design of a product or the hands of a participant are clearly observable things that hampered ICT practice. We perceived other situations as facilitators. These interactions were associated with different social identities.
To begin with, the older adults possessed different devices: all of the initial participants possessed a mobile phone, except two. Half had smartphones and used the phone at least once a day, and five had a tablet. In the computer courses, almost all students possessed a smartphone, regardless of low experience with ICT, illiteracy, lower social class, etc., because of prior interest in ICT and relatively young age (65-70). Few advanced students owned a laptop or a desktop, and those were the only ones with Wi-Fi access at home. All the participants approached during COVID-19 had a smartphone, 3/9 had a tablet, and 5/9 had a computer. --- Explicit Interactions: Physical Limitations Ten initial participants faced severe bodily hurdles. Limited memory hindered recall of the steps to reach apps and phone numbers, and numerous students could not remember their email passwords. One saved the password written on a paper and, when typing the password in the box, she wrote "password + actual password," which showcased significant literacy limitations. Another male older adult, sobbing, complained about his limited memory caused by a stroke: I get blank dullness and sometimes I cannot solve issues […] I remember her phone number, but not my sons' (Juan). Several participants experienced poor thumb performance due to finger clubbing and hand tremors. They could not click accurately on the touchscreen or keyboard. They used to perform manual labor, symbolizing a convergence of lower social class and poor health. This condition limited an older woman who worked all her life as a seamstress. She faced triple discrimination related to social class, gender, and age: her fingers are thicker than mine since she worked as a seamstress, which has affected her hands. That makes her struggle with typing (Field note at the computer courses). Furthermore, many students could not click twice on the mouse and keep it still because of hand tremors, so they could not progress. It was also confusing for them that sometimes one click and sometimes two are needed, depending on the app, folder, etc. Many others struggled with low vision, such as myopia and cataracts: they could not find things on the phone screen, which forced them to wear glasses. Other limitations included deafness, which hindered hearing calls. --- Explicit Interactions: Clear-Cut Motivations to Use Smartphones, Tablets, and Landlines Although Ana rejected ICT, she tracked the updates of the Real Madrid club through her smartphone. Carmen used the smartphone to visit her husband in his nursing home. Despite their gender barriers, elementary education, and lower class, they actively used smartphones and other ICT. Other reasons to use smartphones included communication with relatives and health tracking through the official online e-appointment mobile application for primary and specialized care (i.e., "Cita Sanitaria Madrid"), which proved useful to delay appointments during COVID-19. Smartphones were deemed beautiful and useful, which acted as a facilitator. The owners of smartphones rejected cellphones, which makes us consider that smartphones, regardless of usability problems, prove more helpful than cellphones: I'm keener on smartphones than the classics, it's another style, not that brick, it's more useful, for example, you can keep it in the pocket. I stare at phone stores to see how beautiful they are (Ana). Tablets were mentioned mostly among the initial participants.
Elena and her partner used the tablet to check recipes, fashion, transport, trips, and the weather forecast. Her partner was not fond of the tablet but preferred the computer. The latter shows a gender disparity: he defined the computer as more sophisticated, and he handles the sophisticated devices at home. This can hinder Elena from adopting ICT. Another participant frequently used her tablet for entertainment: board games, online newspapers, etc. She connected a political stance to her tablet: How can I get rid of the kings? -playing solitaire on the tablet- […] I cannot bear the kings laughing (Carmen). Carmen preferred a tablet over a smartphone for its size and entertainment apps. During COVID-19, Begoña enjoyed the increased free time that enabled her to spend more time with the tablet, the pandemic acting in this way as a facilitator. Seven of the initial older adults opted for landlines over mobile phones. They considered landlines easier to use and longer-lasting. They remembered landline numbers better, and two participants remarked on the landline's cheaper flat rate. These features enabled ICT practice. Landline users overall rejected ICT, as they did not feel familiar with the technologies, and this was highly marked by social identities: these participants did not possess ICT experience or higher education, nor did they belong to the upper class. --- Explicit Interactions: WhatsApp and Jitsi Interceding During COVID-19 All elementary and advanced students felt familiar with WhatsApp. However, its design hampered the use of the app. The elementary students communicated by voice messages, since they struggled with typing due to small keyboards and illiteracy. During COVID-19, we shared games, riddles, jokes, and computer tasks, and talked about how we were doing. The advanced students participated more on WhatsApp than the elementary group, since they handled ICT better. WhatsApp could act as a facilitator. The students employed WhatsApp video calls to keep in touch with relatives and friends. Luisa reflected on a positive outcome of the outbreak and WhatsApp: My neighbors and I communicate more on WhatsApp. We are more united because of the coronavirus. Before, we did not see each other in a week (Luisa, advanced student). The students, the teacher, and the ethnographer continued meeting during confinement on Jitsi, a videoconferencing app. The app proved problematic for the elementary students, since only one participated. The latter might derive from a fear of trying new things. In contrast, the advanced students seemed eager to employ Jitsi; two even asked the ethnographer to set up a call for their families. Many students deemed Jitsi the only new app learned in confinement, i.e., COVID-19 did not heavily promote the use of new apps for at least nine older adults (Figure 1). Jitsi posed barriers for a few students. The ethnographer shared the call's link on WhatsApp and, since many did not own computers, they logged in through smartphones. On smartphones, they needed to download the Jitsi app, and a few students struggled with the information displayed in English when downloading it. The owners of computers initially could not log in because they mistyped the link without an accent. Furthermore, none had WhatsApp installed on their computer to open the link directly. With help, most of the students completed the process, except for a couple. During the calls, they were unfamiliar with videoconferencing, so conversations often overlapped.
These practices embody cultural and economic barriers in ICT: lack of Spanish instructions (Jitsi designers expect everybody to understand what "download" means), computers, and experience. --- Explicit Interactions: Affordances of Mobile Phones This section examines specific software, hardware, and smartphone models. The older adults with smartphones were eager to use the camera. For example, Félix and Natalia, reluctant users of ICT, had a smartphone only to take pictures. Pedro and Elena appreciated photographing their granddaughter and, once, provided proof of their leaky roof to the insurance company. Most of the students handled the camera well, but a few elementary students did not know how to take videos. A male advanced student mastered his iPhone 7 and was facilitated by its font, large enough and easy to read. Regarding barriers, Ana had low vision and disregarded adjusting the font in the sports app Marca, using only the teams' logos for guidance. She could not handle the keyboard in WhatsApp either, so she used voice messages. A Sony user complained about its small letter size. We got confused when adjusting the letter size, since the phone calls it "font size" instead of "letter size," a counterintuitive translation in Spanish. The edge of the Sony touchscreen barely worked, which made the older person disuse certain letters. In the elementary course, many students could not send voice messages and pictures on WhatsApp but were enthusiastic when the ethnographer taught them. The students also got confused by the two microphones appearing together on WhatsApp (one to convert audio to text and another to send voice messages), which proved problematic for those struggling with hand tremors or finger clubbing. The advanced students managed their smartphones well but often pressed the start button because of the small touchscreen and hand limitations, which, for example, made them lose track of emails being written. More barriers were found with another student, who had an LG with information displayed in English, despite the system's setup being in Spanish. Other students only had access to an online gallery that required an internet connection, which only a few had. Carmen experienced several challenges with her Sony XA1 Ultra and grumbled about not getting notified when the billing period ended or when she neglected to hang up. Regarding cellphones, one-third of the initial participants had an Alcatel 2008G. They considered it easy to unlock, mute, place and take a call, and dial numbers (for those with proper vision). It costs around 40 euros, and Spanish phone stores recommend the cellphone as senior-friendly. However, its owners generally did not master ICT, had elementary education, and belonged to the lower class. Its letters are too small, and three letters appear on each number of the keyboard, which hinders texting. The problem does not only lie in usability, but in why the cellphone is marketed as senior-friendly. Older adults may ultimately reject other ICT because they do not feel the Alcatel 2008G helps their lives. --- Structural Barriers We consider that structural barriers stem from the institutional or societal level and are hardly transformed by an individual. For instance, the management of the senior center and its decisions on the design of the ICT classes, or the pandemic, are factors that influence an older adult using a technology. Here we did not observe structural facilitators, apart from the opportunity for the older dwellers to learn ICT.
Except for two examples, the point of this section is not to unfold the stories of particular participants or their social identities, because our goal is to shed light on how the practice of ICT is arranged politically. This framework is rooted in the highly specific context of the senior center (where the fieldwork was held), so these reflections are difficult to generalize elsewhere. --- Structural Barriers: Political Design and Teaching ICT at the Senior Center The center where the computer courses took place had flawed and exclusionary facilities. In the advanced course, a disabled woman could not reach the mouse or keyboard from her wheelchair. She required her husband's help and got lost following the teacher's assignments. She did not switch to the elementary course because she wanted to remain beside her husband, who preferred the advanced course. The ICT classroom thus excluded people with functional diversity. Not adapting the facilities depends on the municipal government, and it has likely prevented many older people from learning ICT. When the center's Wi-Fi did not work, a tiny "x" appeared on top of the Wi-Fi symbol and was hardly noticeable. The students could not connect to the internet, and they often ignored the source of the problem. The Wi-Fi proved to have other shortcomings: It is really tough to reach the public municipality's Wi-Fi. Every 2 weeks the students need to log in again, typing their names, emails (whose accounts they do not remember), phone numbers, etc. The system messages you a different password each time. The students must go back and forth to the messages inbox, copy/paste the password, go to the website… all being unnecessary (Field note of the senior center). The decision to modify the Wi-Fi relies upon the political powers that, for the time being, neglect senior-friendly design. Thereby, the municipal government does not seem to invest much effort in centers for older adults. Another political issue stems from a transit app for knowing when buses arrive. In the courses, the students were eager to learn how the EMT Madrid app works so as not to wait for buses unnecessarily. The political controversy lies in that they are forced to use the app because there are no metro stations near the students' homes, which lack access to other parts of the region. The method of ICT instruction revealed inequalities. During COVID-19, the main teacher shared Word and Excel assignments on WhatsApp. Only a few of the advanced students completed them, since the rest did not own computers or have Wi-Fi access at home. We thereby questioned the relevance of the computer lessons: The students started by writing emails, but do older adults around 75 need an email account? If they want to buy stuff online, ok, but not sure about other things. […] A student was more interested in learning smartphones than computers as she does not have one at home […] They say phones are easier to handle than computers (Field notes of the senior center). Lessons on smartphones could engage the students better. Computers often cost more than smartphones and require genuine interest and experience in ICT. The problem lies in the students' economic limitations and lack of ICT experience. Nevertheless, changing the course relies upon the managers. Hence, it is political, and it seems the managers did not consider the realities of older adults. --- Structural Barriers: Political Skepticism During COVID-19 Information source retrieval affected the relationships between ICT and older adults during the pandemic.
COVID-19 affected us all in a manner that we could not control, and in this way it became structural. Some participants perceived WhatsApp and Facebook as fake news spreaders, so they used other means: I prefer to check if it's real in the newspaper. Most of it -on WhatsApp- is fake news (Teresa during COVID-19). Others preferred TV or radio to retrieve information, since these are switched on all day at home, except for a few who grumbled: TV lies to us, e.g., Severo Ochoa. […] They show what they want us to know. There is so much coronavirus information […] It overwhelms me (Teresa). COVID-19 polarized the political situation in Spain. This scenario arose on a Jitsi videoconference when an alt-right supporter argued with other students about the necessity of criticizing the government during the crisis. In this sense, the pandemic became political and hampered the older adults' trust in technologies to get informed and communicate. --- CONCLUSION This study contributes to a field wherein the analysis of material-explicit experiences of older adults with ICT prevails: usability tests, surveys of attitudes toward ICT, predictive models of technology adoption such as the "Senior Technology Acceptance and Adoption Model" (Renaud and Van Biljon, 2008:216), etc. Acknowledging the usefulness of these for policy-making and technology design, this article goes beyond them by uncovering how social identities enable or hinder ICT practice through implicit, explicit, and structural experiences. First, we introduce a summary of the results related to other studies. This is followed by a theoretical discussion. Implicit situations included barriers and opportunities. Children surveilling older adults through smartphones, and inconsistent ICT use and speech, hindered their relations with technologies. The latter, in our view, derives from misperceiving older adults as inoperative with ICT: younger people do not expect older adults to handle technologies, so older adults embrace this social norm and reject ICT. Low self-esteem and fear determined ICT usage among women, people inexperienced with technologies, and lower-class older adults, so we suggest that ICT training can empower unprivileged groups, as Ratzenböck (2017:25) suggests for older women. Czaja et al. (2006) point out that self-confidence decreases as we age, which is roughly corroborated by our research. Vaportzis et al. (2017) point in the same direction. On top of this, we add that low self-confidence is higher among those who are not tech-savvy or have unprivileged identities. We found opportunities among the students of the center, who viewed computers and smartphones positively. The students had motivations to attend the computer courses, and their unprivileged background did not affect their interest in technologies, which resonates with González et al. (2015:4). Therefore, motivation, education, and social capital can be better enablers of ICT practice than income, which Tan and Chan (2018:129) and Gutiérrez and Gamboa (2010:358) also argue. Regarding the explicit interactions, physical barriers included limited vision, memory, hearing, and thumb performance, which is in line with previous research on barriers to the use of ICT (Gitlow, 2014). Privileged participants (upper-class and men) had tablets and computers. They used smartphones to track health and communicate, whereas tablets were employed for entertainment. Having privileged identities portrayed strong facilitators of technology use. Cotten et al.
(2016) also unpack privileges when it comes to using ICT; they argue that, among the older population in the Detroit area, there are striking differences between those who are white and have higher incomes and those who are African American and poor. However, the assumption that an unprivileged background leads to disuse of ICT is wrong, at least among these participants. We find that motivation or belonging to a strong social network might facilitate ICT use better than social class or gender. Lower-class participants without experience in ICT embraced landlines, which differs from Petrovčič et al. (2016:100), who found gender to be a better predictor than class. In this way, social identities play a big role in triggering or hindering ICT use. WhatsApp and Jitsi helped the students get over confinement and facilitated technology use, but posed cultural and economic barriers for those lacking computers. The participants appreciated smartphones over cellphones, though they complained about smartphones' small interfaces and keyboards. Older adults, therefore, do not need out-of-date devices to adopt ICT but training. Regarding structural barriers, it should be noted that the barriers are very specific to the context of this study in Orcasitas (Madrid). The political or institutional agents most likely work differently in other places. We found that reaching an inclusive design of facilities and ICT for older adults concerns the decision makers of the senior center and governments. In the center, we encountered exclusionary conditions for people with functional and class diversity: the Wi-Fi, transport, and classroom design. The instruction method disregarded older adults' desires, since it did not include smartphone training. These elements hindered the older adults living in the Spanish neighborhood "Orcasitas" from learning ICT, and thereby we struggle to find facilitators. We wonder whether these conditions would exist in a senior center located in a wealthier area. Moreover, authorities should also consider that COVID-19 polarized older adults politically and structured how they trust technologies. An example can be found in Rudnik et al. (2020), who discuss the experiences of the "oldest old" engaging with and sharing political issues through ICT. Technology represents an opportunity for increased engagement in politics but offers drawbacks such as overwhelming information or being left out of politics if the older person is not tech-savvy. Rudnik et al. (2020) acknowledge that political participation is frequently unrecognized in research. Similarly, our participants debated the responsibility of the government for the pandemic in a video call, meaning that structural issues surround the manner in which older adults learn and practice ICT. As López Gómez and Criado (2021) and Bergschöld (2018) do, we draw attention to the political arena when it comes to studying technology for older adults. A summary of the findings can be seen in Figure 2. --- Theoretical and Analytical Discussion After the summary of the results, we discuss how our theoretical and methodological approaches can be compared with others. Concerning social identities, our premise is that older adults comprise complex and intersecting identities that influence ICT use, i.e., they are not simply lower class or old. Conceiving of identities as intersectional helps comprehend ICT practice not only as a matter of age but as related to gender, education, etc. For this, the notion of intersectionality is helpful.
Crenshaw (1991) introduced the notion to analyze the intersections of social identities and their discriminations. The theory posits that identities are fluid and shaped by structures and social processes, and that people cannot be reduced to single categories, nor can single categories depict understandings of individuals (Hankivsky, 2011). The notion could enable us to analyze how age, race, class, gender, and other social identity markers intersect, depending on how these markers eventually discriminate against or privilege older adults in their use of ICT. However, this study shows that appearing unprivileged can be deceiving for somebody who is otherwise privileged. We opened our inclusion strategy with the participants: we could not infer discriminations by examining their incomes, as they might be privileged through, e.g., property ownership. Moreover, qualitatively analyzing which identities reign over others in ICT usage proved blurry, e.g., uncertainty over whether gender or class prevails. We did not ask the participants how they fit into gender, class, etc., out of politeness, so the identities are depicted based on observation, which might have led to misrepresentation. Star (1990:47) argues that dichotomies (e.g., privileged-unprivileged) are exclusionary: marginalized groups are not simply left out, but they are in a "high tension zone" aiming to be inside. Star (1990:26) also examines the role of standards in fostering marginality. For her, non-marginalized networks create standards in technologies that deny multiplicity and contingency in favor of stability and unity, and dismiss how standards could be "otherwise." The seamstress woman and the students struggling with the Wi-Fi password represent how ICT disregards standards for people with gendered, literacy-related, class, and physical disadvantages. Star (1990:45), therefore, sheds light on the "high-tension zone" to unpack the properties of conventional and standardized networks, which this article attempts to reveal, e.g., with the political networks at the senior center. Although we cannot generalize, the ethnographic approach enabled us to uncover unspoken experiences from different angles and provided us with reflections that other researchers may encounter. The implicit, explicit, and structural frameworks emerged from the situational analysis performed; for that reason, we encourage researchers to analyze situations as Clarke and Friese (2007) suggest, to avoid "analytical paralysis." The implicit and structural frameworks could be associated with López Gómez's (2014) notion of the socio-material arrangements that older adults use to appropriate technologies. Socio-material arrangements are the mundane and material things that we do to keep our lives in order. Older people use technologies through these preexisting, small arrangements in their daily lives, and these are not often regarded by the gerontechnology industry. The notion is inspired by Actor-Network Theory, which posits that observations should emerge from the voices of the researched participants and not from conceptions that researchers pre-assume. We employ implicit and structural markers, and the industry does not commonly consider them either. However, the implicit/structural frameworks differ from arrangements in that ours are observations by the researchers, and hence they do not emerge explicitly from the participants. López's approach could be better associated with the explicit framework of this article, since that framework attempts to represent the participants' stories more directly.
--- Limitations The structural, explicit, and implicit frameworks are formed upon the elements that surround and constitute the field. They might not work in different contexts but might serve as inspiration. The frameworks may resemble triadic reciprocality (Wagner et al., 2010:871) and the "Senior Technology Acceptance and Adoption Model" (Renaud and Van Biljon, 2008:216), but these only delve into material interactions. However, some elements may have been left out by our conceptions of implicit, explicit, and structural. We have divided the analysis into three sections, but these could probably be represented as one. Certainly, the participants' stories are simultaneously embedded in, for example, political conflicts, patronizing relationships with their children, and problems/facilitators with the design of their mobile phones. The older woman presented at the beginning of the analysis section is an example. However, splitting them up helps untangle ICT use in different realms. These dimensions need to be considered, and we assert that the understanding of older adults' uses of technology does not finish here; more needs to be studied. Concerning the dichotomy of barriers/facilitators, we found more barriers than facilitators in technology use, given that older adults often struggle more than youngsters with technologies due to design flaws and other invisible/political elements. Some may argue that an experience with technology is not necessarily unpacked through barriers-facilitators. An experience can be expressed, for example, through an emotion, a story, or an encounter. While this is true, the dichotomy helped us narrow down the experiences into tangible issues rather than fuzzy concepts. This eventually allows the understanding of ICT practice for a broad range of fields like design, gerontology, research, etc. The sample size was not large and should not be taken as a full representation of the older Spanish population. In a different scenario with wealthier older adults, the research could have led to different results: better adoption of ICT, more perceived usefulness, etc. Our participants with previous white-collar jobs at least pointed in that direction. If the seamstress woman (with limitations in her hands) had had a desktop job, she might now have higher dexterity with touchscreens. Nonetheless, the sample size allowed us to dive deeply (ethnographically) to discuss the implications and uses of ICT in a segment of older people who may share social class, gender, and other experiences with other older adults. Ethnography, as a method that collects data from people, entails a certain bias, and this possibly limited our research findings. The authors do not live the daily experiences of an old person, and that could affect the results. However, we could take distance from the research without becoming too involved, and all authors reviewed the collection and interpretation of the data to reduce possible bias. Further limitations are that this investigation lacked caregivers, older adults' children, older adults living in nursing homes or in a frail condition, and senior centers in wealthier areas. Future research could include these populations to get different and meaningful experiences with ICT. --- Implications This study gives voice to underrepresented older adults and enables engineers and designers to understand the complexity of older people.
When designing technologies, they should consider that social identities and symbolic, political, and economic elements mediate ICT practice. Moreover, designers should not stigmatize older people by marketing out-of-date products to them (e.g., the Alcatel 2008G or the similar Jitterbug Flip), since these discourage older people from using current technologies. Even though this study focuses on ICT, engineers should understand that solutions to improve older adults' lives are not always technological, as Selwyn et al. (2003:577) comment. Similarly, stakeholders should not assume that the only ways to improve older adults' lives are related to health monitoring or healthcare. These assumptions are often based on ageist premises, so we suggest stakeholders first study older adults' needs and co-create things with them to improve their lives. Apart from considering the symbolic and political elements that affect privileged and unprivileged older people, Spanish policymakers should pay attention to material issues. In particular, they need to enhance ICT facilities at public centers for older adults and in their homes. These, alongside nursing homes, remained jeopardized by COVID-19, so the administration should strengthen supervision. They should also understand that tablet and smartphone training can help mitigate isolation. Finally, society (including older adults) must avoid ageist norms around ICT, such as thinking that older people are unable to use technologies, which are often untrue and which older adults embrace as their own. The stereotype eventually prompts older people to disuse technology. --- DATA AVAILABILITY STATEMENT The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation. --- ETHICS STATEMENT The studies involving human participants were reviewed and approved by the Universidad Politécnica de Madrid, as this study belongs to the project "POSITIVE: Maintaining and improving the intrinsic capacity involving primary care and caregivers." The patients/participants provided their written informed consent to participate in this study. Written informed consent was obtained from the individual(s) for the publication of any potentially identifiable images or data included in this article. --- AUTHOR CONTRIBUTIONS XF and EV-M presented the idea of the research. MG-H and SA developed the theory and data analysis method and designed the investigation. MG-H performed the methods and produced the final manuscript. SA, XF, and EV-M supervised the fieldwork, the findings, and the final manuscript. All authors contributed to the article and approved the submitted version. --- SUPPLEMENTARY MATERIAL The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fpsyg.2022.874025/full#supplementary-material --- Conflict of Interest: The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. Publisher's Note: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors, and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
The presence of mining companies in a region has positive impacts on regional development, job creation, and economic growth. However, mining companies also risk causing negative impacts in the form of declining environmental quality and social conflict. This study aims to analyze the potential for social conflict in coal mining areas and alternative solutions. The research was conducted in Indragiri Hulu, Riau, using a mixed method combining questionnaires, in-depth interviews, physical observation, and a literature review. Triggers of social conflict were found in the form of land ownership issues, licensing and land acquisition issues, and issues over the use of public facilities. […] operations are more easily achieved because the positive social, economic, and environmental benefits are safeguarded collaboratively. The roles of Commission VII of the Indonesian House of Representatives (DPR RI) and the Ministry of Energy and Mineral Resources are crucial to the success of that goal.
--- Introduction The presence of mining companies contributes positively to economic and regional growth (Sulastri et al., 2018). The state receives the economic contribution of coal mining through Non-Tax State Revenue (PNBP). Overall PNBP realization in 2021 reached 151.6%, equivalent to 183.91 trillion rupiah, above the target of 121.2 trillion rupiah (Ministry of Energy and Mineral Resources, 2021). The PNBP target for minerals and coal in 2021 was 39.11 trillion rupiah, and realization was the most significant proportion at 192.2%, valued at 75.16 trillion rupiah. Royalties of 43.563 trillion rupiah were the most significant contributor to mineral and coal PNBP. Mineral and coal PNBP contributed 2.73% to the 2021 APBN (state budget), while overall mining sector PNBP reached 6.89% (Ministry of Finance of the Republic of Indonesia, 2021, pp. 186-187, 191). People living near coal mines should receive comparable economic benefits, given that natural resources are wealth controlled by the state and used for the greatest prosperity of the people, as mandated by Article 33 paragraph (3) of the Republic of Indonesia's Constitution of 1945. The law's mandate is clear and rigorous, but there are still many harmful effects of coal mining on residents and the surrounding ecosystem. According to Bakri et al. (2023), mining activities harm the environment and deprive communities of a vital source of income, leading to social tensions. Ahmad & Nurdin (2022) further highlight that social conflicts in the mining area of Bima Regency are primarily caused by inadequate socialization efforts that exclude members of society and by the government's inconsistent enforcement of regulations. A study conducted by Pambudi et al. (2023) on coal mining districts found that social conflicts often arise due to a lack of community involvement, resulting in a diminished sense of ownership toward mining companies. This issue has far-reaching consequences and has significantly contributed to the negative perception surrounding the mining industry. Various negative impacts of coal mining often recur in various regions of Indonesia, as if no lessons have been learned from similar incidents (Pambudi et al., 2023a). According to the historical records of the South Kalimantan Archaeological Center (2017), the first coal mine in Indonesia was inaugurated on Kalimantan Island on September 28, 1849, by the Governor-General of the Dutch East Indies, Jan Jacob Rochussen. According to the Ministry of Energy and Mineral Resources (2023), Indonesia had 1,178 active coal Mining Business Permits (IUP) in 2021. In this context, this study was carried out to address the question, "What factors can trigger social conflict in coal mining areas, and what are the alternative solutions?" Previous studies examined the causes of social conflict as well as its consequences. This study is unique because it examines the elements that cause social conflict while presenting solutions based on reactive actions and on mitigating and adapting programs. Social observations, questionnaires, in-depth interviews, and literature studies were conducted to answer the research questions and objectives. The study was conducted at PT. X in Indragiri Hulu Regency, Riau Province, from August to November 2021. Questionnaires were distributed to 205 respondents, and interviews were conducted with 31 male and female informants aged 20 to 60 who had resided in the mine area for at least five years. Figure 1 depicts the sites of social observation for administering surveys and conducting in-depth interviews. Social observation was done by observing the community, conducting interviews with people living around the mining location and in-depth interviews with the leaders of PT X, government, and community figures, and making an inventory of company documents alongside a study of relevant reference literature.
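Before turning to the analysis, the sketch below gives a rough illustration of how the questionnaire and interview evidence could be cross-checked against each other in the triangulation described next. The issue labels follow the abstract above, but the counts are hypothetical assumptions, not the study's actual data.

# Cross-check whether conflict triggers reported in the questionnaires
# (n = 205) also surface in the interview codes (n = 31); all counts here
# are invented placeholders for illustration only.
questionnaire_mentions = {
    "land ownership": 120,
    "licensing and land acquisition": 95,
    "public facilities": 60,
}
interview_codes = {
    "land ownership": 18,
    "licensing and land acquisition": 11,
    "public facilities": 9,
}

for issue, q_count in questionnaire_mentions.items():
    i_count = interview_codes.get(issue, 0)
    # An issue counts as corroborated when it appears in both data sources.
    corroborated = q_count > 0 and i_count > 0
    print(f"{issue}: questionnaire {q_count}/205, interviews {i_count}/31, corroborated: {corroborated}")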
Data on social conditions are interpreted based on education, the contribution of mining companies to local communities, the role of local communities in mining companies, and community interactions with the environment. The obtained results were analyzed using data triangulation techniques, which drew on several data sources: primary data from physical observations, social observations, and interviews, as well as secondary data from company reports, government agency/institution reports, and scientific articles from national and international journals. This triangulation technique, based on quantitative and qualitative methods, was chosen to obtain data as detailed and in-depth as possible, reflecting the actual conditions at the research location. Triangulation was also chosen to cross-check existing evidence at the research location, to increase validity, and to obtain a complete picture of the research topic. The research method scheme is presented in Figure 2. The following stage is exploratory, descriptive analysis, which involves summarizing findings and discussing them against references from past studies. The exploratory, descriptive study was performed to offer an overview of social conditions in the mining area, which was then assessed against the literature to design relevant and effective policies to lower the likelihood of social conflict. All study data collected and analyzed were combined to pinpoint the knot of social problems at the research site and the solutions that could be developed to tackle them. --- Theory of Social Vulnerability in Mining Areas Coal use is prone to causing conflict, both horizontal (conflict between communities) and vertical (conflict between communities and the state or enterprises) (Dimas et al., 2014). Conflict emerges when social and environmental carrying capacity is limited (Lezak et al., 2019). Social conflict can also arise when natural resources are used for the current generation without regard for their availability to future generations (National Human Rights Commission, 2017). Mining management geared only toward short-term objectives is prone to producing conflict, since it is exploitative and ignores environmental issues, leaving the community to bear the impact of numerous environmental disturbances. Environmental disturbances as an externality of natural resource use frequently result in people losing their rights, including the right to obtain clean water, the right to breathe fresh air, the right to adequate education, and the right to live comfortably and quietly, free of noise, tension, and interference (Halomoan, 2008; Pranadji, 2005). Aside from that, environmental disruptions in numerous locations have disturbed the cultural activities of local populations (Maridi, 2015). Forest ecosystems, particularly soil and vegetation cover, are viewed as more than just commercial commodities. Furthermore, land and woods provide a significant social function and serve as the identity of local communities, since they can connect their lives with spiritual and religious aspects (Kristiyanto, 2017; Suparmini et al., 2013). The spiritual relationship between humans and land and vegetation is priceless and cannot be measured economically, particularly in monetary terms (Halomoan, 2008). The conversion of forest areas and natural ecosystems for other uses is therefore the primary cause of various social disputes in numerous regions of Indonesia.
Natural ecosystems are inevitably drawn upon, particularly under a decentralized system of government. Decentralization emphasizes the importance of natural resources (SDA) for regional economies, mainly because most areas rely on natural resources to generate income (Hidayat, 2011). During the New Order era, regions gave little thought to natural resource management because the central government distributed regional assistance funding equally (Nuradhawati, 2019). In a decentralized system, regions must compete for local revenue (PAD), and natural resources are the most preferred means of raising PAD (Setyaningsih, 2017). Most high-PAD areas rely on natural resources, such as coal. However, after the passage of Law No. 11 of 2020 on Job Creation, control over coal mining has been shifted to the center, or re-centralized, such that the regional role is less substantial than it was from 1998 to 2019. In connection with the centralization of natural resource management, governance becomes crucial. The government must carefully manage natural resource assets to increase benefits and reduce the risk of conflict and economic inequality (Zainuddin et al., 2010, pp. 457-458). Using natural resources produces externalities in the form of environmental disturbances and economic disparities, especially those felt by local communities, and this ultimately has the potential to trigger conflict (Dimas et al., 2014, p. 229). Social inequality in diverse mining locations is no longer a secret; it persists and has even reached increasingly concerning levels. The economic disparity generated by the significant number of migrant workers employed in coal mining companies exacerbates this social inequality (Fitriyanti, 2016). Migrant workers, both contract and permanent, are generally brought in from outside the region because they have special skills and abilities (Stiglitz, 2000, p. 1442). Of course, this increases competition for jobs and diminishes employment opportunities in local towns. Economic disparities will emerge due to stratification within the mining workforce if local communities fail to adjust to changes in social structure (Apriyanto & Harini, 2012). It is unavoidable that in some locations most workers are immigrants, with locals working in support industries such as restaurants and markets. This situation must be addressed by training skilled and qualified human resources to compete in the mining industry. The Indonesian House of Representatives is one of the parties best placed to advocate for implementing such a program. As a legislative institution, the Indonesian House of Representatives has legislative, budgetary, and supervisory powers at the national and regional levels. The legislative function is responsible for developing legislative programs, drafting and debating draft laws (RUU) or draft regional regulations (Raperda), enacting laws or regional regulations, and approving or disapproving Perpu. Given these functions, the Indonesian House of Representatives can stimulate the creation of rules that can be used to manage the environment and maintain society's socioeconomic stability. The Indonesian House of Representatives is strategically positioned to act as mediator in the social dynamics of natural resource management, particularly coal mining, through rules with legal force.
--- Potential Conflict and Social Tension in Mining Areas The most common negative impacts found in mining areas, especially coal, include (1) social conflict, (2) habitat fragmentation, (3) environmental pollution, (4) flood and landslide disasters, (5) human and animal conflict, (6) marginalization of local communities, and (7) disruption of public health (Fachlevi et al., 2016; Fatmawati et al., 2017; Haq & Har, 2022; Oktorina, 2018; Rachman, 2013; Yamani, 2012). These seven factors fall within the environmental and social elements of mining, which vary depending on the social and ecological boundaries established in the Environmental Impact Analysis (AMDAL) document. Mining activities have the potential to negatively affect these ecological and social boundaries. As a result, it is vital to examine social and ecological boundaries during the company licensing process to map risks and establish mitigation strategies. This finding is consistent with the results of research by Dewi (2020), which states that the government has established a policy that all businesses and/or activities must prepare an AMDAL document containing social and ecological boundaries, which must be addressed in the environmental management plan (RKL) and environmental monitoring plan (RPL), evaluated periodically, and reported twice every year. Through the RKL and RPL documents, periodic evaluations should be carried out to determine trends in changes in social and ecological conditions at the location. It must be understood that social and ecological risks in mining activities can become challenges or even obstacles to company operations. Ideally, the number of IUPs is directly proportional to improvements in the quality of life of communities around coal mines. However, according to the National Human Rights Commission (2017), the existence of coal mines often leads to social conflicts, which often result in lengthy problems that reach the legal realm and even lead to loss of life. According to Nggeboe (2004, p. 46), coal mining risks triggering social conflict if the land acquisition process is detrimental to the community. According to Azwari and Rajab (2021), coal mining often results in social conflict because local communities feel they are not involved in the process before and during mining, both as workers and as recipients of corporate social responsibility (CSR) programs. The research findings of Subarudi et al. (2016) on coal mining conflict resolution identified poor licensing management, which causes the community to suffer losses, as one root cause of social conflict. Social conflicts over permits often arise when mining enters an area. Siburian (2012) found that the core of social conflict in coal mining lay with local populations who were rarely taken on as employees, even though they were unemployed and needed permanent work to make ends meet. Essentially, every coal mining company has allocated jobs to local people, but their share of the overall workforce remains too small (Aprilia et al., 2019, pp. 30-31). Consistent with Permadi et al. (2019), the findings of the in-depth interviews indicate that social conflict can take the form of tension between enterprises and local populations. Tensions frequently emerge due to competing interests in road access between mining and plantation firms. People have also blocked main highways out of disappointment with road damage that had not been repaired.
The community expressed disappointment by blocking the main highway, resulting in queues of hundreds of vehicles transporting mining and plantation products. The first road closure involved a roadblock on the route to the mining site. This occurred because the only route from the major highway to the mining site passed through land owned by a local resident who claimed he had yet to receive land acquisition funds. The landowner charged IDR 100,000.00 per passing truck; if trucks did not pay, the road, and with it distribution, was blocked. Second, during the harvest season for palm oil plantations, there is fighting over road access: many vehicles carrying palm oil parked to block the road, preventing trucks carrying coal from passing and disrupting the company's operations. Third, highway access was blocked because people believed that coal trucks were the leading cause of road damage, even though 12 companies were using the type B highway as a supply chain route. Based on information collected through in-depth interviews with the community, the village government, and the leader of PT. X, the public does not fully understand that 12 companies use this highway. The community considers that the highway is used only for public activities and by PT. X, and that the existence of PT. X accelerates road damage while the company makes no significant maintenance efforts. According to PT. X, this perception arose, hardened, and eventually expressed itself as a protest movement blocking the highway, with people taking command of the movement and demanding that the firm build a dedicated route not connected to the main road. The conditions at the site are consistent with the findings of Dewi's (2020) research, which identified social conflict in mining and, on further investigation, found that certain parties acted as provocateurs and escalated the conflict. --- Reactive Action-Based Social Conflict Mitigation In the case of PT. X, the conflict over this highway peaked and reached a critical point, so mediation was carried out, led directly by the Batang Peranap Sub District Muspika. During this conference, leaders of the 12 road-using companies in the area sat down with community representatives, and it was agreed that the 12 companies would form a working group dedicated to road maintenance. The twelve companies formed an entity with an institutional framework to enhance the coordination and implementation of road maintenance programs. They make regular payments managed by the institution and collaborate on road maintenance operations such as road watering during the dry season and road compaction using dirt, sand, and stone materials compacted with heavy equipment. This arrangement has been running since early 2020 and has successfully defused the social tensions caused by road damage. The social conflict resolution formulated by the 12 companies is considered groundbreaking and has succeeded in untangling the tangled threads of the conflict. Similar conditions were also studied by McIntyre and Schultz (2020), who found that, ideally, all activities and/or businesses operating in an area should form institutions tasked with mitigating and managing social conflict so that the distribution of the burden becomes more proportional. The process of reducing social tension at PT. X was lengthy and required a high level of mediation intensity. The protracted succession of negotiation processes yielded benefits, but the opportunity and social costs were significant.
According to in-depth interviews with the leadership of PT. X, which is in charge of overseeing all operational activities in the field, although PT. X and 11 other companies struggled to deal with the social tension, many people believe that the disruption of the supply chain of the 12 enterprises affected by the main road blockade does not concern them. Essentially, if local communities were involved as workers at PT. X in large numbers, the local community's sense of ownership would also be high, so that social tensions and the risk of conflict could be reduced early, before movements emerge that hamper the supply chain. In reality, on the ground, people have sources of income from various sectors other than mining. This variety of income sources for local communities has positive and negative impacts. The positive consequence is less reliance on coal mining; the negative consequence is that the community may be less committed to managing reclamation and post-mining sites in the future. Apart from the low level of community reliance on mining, the low level of commitment is influenced by PT. X's limited involvement of the community in various corporate initiatives, including participation in the production of reclamation and post-mining documents. If this occurs, the reclamation and post-mining zones risk turning into dead cities. Soelarno (2022, pp. 72-74) explained that the term dead city refers to a condition where an area previously had mining activity, which was a source of economic life and crowds, but when the mining permit expired, all of that passed: there was no longer any economic activity or crowds, and the area became barren and empty, seemingly lifeless because the ecosystem had been disturbed. The potential emergence of ghost cities must be avoided because it risks causing social and economic turmoil triggered by the loss of jobs, sources of income, and environmental damage. This risk can be mitigated if sustainability theory is applied throughout the mining process. Sustainability is defined not only in terms of output but also in terms of the lives of humans and other organisms that will continue to exist in the area after the enterprise has closed. The most effective strategy to ensure the long-term viability of local community life is to offer community development and empowerment (PPM) programs that are productive, instructive, and empowering. Through such programs, the community is provided with improved capacity in knowledge, insight, skills, and creativity, which is projected to become capital for them to better their standard of living in terms of work prospects, income sources, and entrepreneurial abilities. Taušová et al. (2017, p. 361) support this argument, stating that natural resource extraction companies have a responsibility to improve the welfare of local communities near mining sites in exchange for extracting natural resources, replacing what is taken with the transfer of knowledge, skills, or technology to improve quality of life. Through this mechanism, the presence of a mine can bring positive changes to the lives of the community and of the other species that, in theory, lived in the area long before the mine was established to carry out extraction. --- Social Conflict Mitigation The occurrence of social conflict at mining sites has the potential to disrupt security and public order. Social conflict over natural resource management is more common in nations with substantial natural resource potential and a heavy reliance on raw material export commodities (Alfamantar, 2019). Social strife in these nations often leads to violence (Safa'at & Qurbani, 2017).
In general, unequal economic distribution and uneven geographic growth generate social conflict (Zárate-Rueda et al., 2022). The significant risk that social disagreement escalates into violence necessitates special consideration. Economic equality and development are two strategies for preventing social conflict (Muhammad et al., 2018). This approach provides for a process of communication and discussion among the existing elements: society, government, academia, practitioners, and industry (Pambudi et al., 2023). The debate and discussion process can generate proposals, opinions, and reactions based on horizontal, equal exchanges, which can provide input to strengthen collaboration between parties in achieving equitable development (Taušová et al., 2017). --- Local Community Empowerment Walker (2012) defines community empowerment as strengthening a community's talents and potential in terms of soft skills and thinking ability. Community empowerment generally entails knowledge and technology transfer (Pujo et al., 2018). The community empowerment process aims to raise the community's standard of living relative to conditions before the empowerment process (Darwis & Rusastra, 2011). Community empowerment delivers social, economic, technological, and knowledge interventions to secure a higher degree of welfare, a stronger economy, and environmental circumstances that support all life processes (Ani et al., 2017). Institutions or groups can carry out the process of empowering local communities. Community empowerment occurs through open and participatory collaboration among parties (Dreier, 2014). The community empowerment process, in essence, involves, on one side, those who transfer knowledge and technology, typically government institutions and higher education institutions, and, on the other, the groups or communities on the receiving end, namely local communities (Mauldya et al., 2020). Empowering local communities is typically carried out continuously or sustainably, with several sequential stages of activity that eventually merge into a unified effort of grounding knowledge and technology transfer (Saleh & Mujahiddin, 2020). As a result, community empowerment is carried out by groups that have scientific and/or technological skills for community groups that still face these two limitations. --- Social Conflict Mitigation Based on Local Community Empowerment PT. X is committed to developing the capacity of local human resources through education, among other things through community development and empowerment programs (PPM). In 2020, PT. X provided PPM in the form of educational scholarships to seventeen youths from the mine ring area; the scholarships ran for one year (two semesters), with the possibility of extension until graduation for recipients who excel. The seventeen recipients pursued higher education within and outside of Indragiri Hulu Regency. Table 1 shows the PPM provided by PT. X in detail. The PPM funds allocated by PT. X amount to IDR 700.00 per ton of coal. According to Table 1, the types of PPM activities performed are diverse and touch various aspects of society. This approach allows for the equitable allocation of the PPM program, preventing social envy towards specific community groups. Following the sustainable development premise that no one is left behind, the mechanism for delivering PPM to the community is fairly varied.
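Because the allocation rule quoted above is linear in production, the implied PPM budget is easy to illustrate. In the minimal sketch below, the per-ton rate comes from the text, while the annual production figure is a purely hypothetical assumption for illustration; the company's actual tonnage is not reported here.

```python
# Illustrative PPM budget under the allocation rule quoted in the text
# (IDR 700 per ton of coal). The production figure below is hypothetical.

PPM_RATE_IDR_PER_TON = 700.0

def ppm_budget(annual_production_tons: float) -> float:
    """PPM funds implied by the per-ton allocation rule."""
    return annual_production_tons * PPM_RATE_IDR_PER_TON

assumed_production = 1_000_000  # tons/year, a hypothetical assumption
print(f"PPM budget: IDR {ppm_budget(assumed_production):,.0f} per year")
# -> IDR 700,000,000 per year at one million tons
```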
According to Miller and Spoolman (2016, pp. 84-90), the spirit of sustainable development is the integration of social, economic, and ecological factors that must function in harmony, side by side, and mutually enhance each other. The various forms of PPM activity should ideally refer to these principles so that the benefits are more comprehensive and can improve people's living standards and environmental quality. According to local community informants, apart from providing PPM, PT. X has also implemented a CSR program by providing fruit plant seedlings. Seedlings are distributed to each head of family, who is invited to take three. The fruit species provided by PT. X are durian, mango, jackfruit, water guava, longan, rambutan, and sapodilla. PT. X distributes the seedlings to the community in partnership with the local village government. Each family head receives three fruit plant seedlings, collected at the village office and registered by village officers before planting in the family's yard or garden. This program was carried out twice, in 2017 and 2020; in 2020, recipients were advised to plant the seedlings in their yards to boost productivity and to diversify household food sources. This program is critical for boosting family food independence, particularly for food rich in vitamins, fiber, and minerals. Pambudi (2020, p. 415) supports CSR programs of this kind as a means of increasing food independence. Homestead land is an environmental asset that has the potential to be used as a source of non-staple food because of its easy accessibility and its very close reach for all family members. Pambudi and Fardiani (2021) provide examples of planting various types of food source plants (fiber, vitamins, medicines, and spices) managed communally, known as pawon urip, which have succeeded in increasing family resilience in the aspects of physical health, food and nutrition fulfillment, and social interaction. The success of family food security as implemented through pawon urip is greatly influenced by the type of plant and its suitability to environmental conditions. The seven plant species provided by PT. X were chosen based on suitability to local climate and ecosystem conditions. The steps chosen by PT. X are sound, so the potential for the seedlings to thrive is high, and they can produce fruit as expected. This argument is supported by Pambudi and Utomo (2019, p. 167), who state that the suitability of the microclimate and ecosystem, including the availability of nutrients, supports the speed of growth and development of a plant species so that it can produce fruit and/or seeds optimally. Pambudi et al. (2021) emphasized that the characteristics of the microclimate are a determining factor in the success of cultivating a plant species, so careful attention must be paid to matching the characteristics of the ecosystem to the species to be cultivated. A CSR program based on providing biological assets, in this case plant seedlings, can be measured in four stages: (1) implementation, (2) cultivation or care, (3) cultivation results, and (4) exploitation of cultivation results, including post-harvest processing. This study analyzed the first phase, namely the implementation of the CSR program. The first phase has been completed, allowing for future review, analysis, and recommendations for comparable operations. Figure 3 shows the findings of the CSR implementation analysis.
Figure 3 shows that the CSR implementation of PT. X generally provides satisfaction to the local community. Satisfaction is determined along five dimensions: CSR program type, plant seedling selection, usefulness, convenience of participation, and the technicalities of seedling distribution. Among these five factors, the community is most satisfied with plant seedling selection, the usefulness of the CSR programs, the type of CSR program, and the convenience of participation; however, the community was less satisfied with the technicalities of seedling distribution. The researchers undertook an extensive analysis to determine the source of this dissatisfaction, and it emerged that most respondents wished to receive more than three seedlings. People are dissatisfied with the maximum limit of three seedlings; they believe quotas for the number of seedlings should be set based on yard size rather than generalized. Some people's dissatisfaction is understandable because the arguments offered are quite sensible. This finding is reinforced by Jasińska & Jasiński (2022), who suggest that CSR initiatives should ideally be developed through dialogue between communities, businesses, and local governments. The outcomes of such discussion are then addressed internally to establish priorities based on the company's vision, mission, work program, and budget capability. Some local people have felt the benefits of PT. X's CSR program and believe that the fruit seedlings provided will help satisfy their family's fruit needs and enhance their income; however, some people have yet to feel the benefits. Most people who have yet to benefit from the CSR program did not take part in collecting seedlings in 2017, and/or their seedlings were not adequately cared for, so their growth and development were not optimal. Of the seven species distributed, sapodilla, longan, and mango have begun bearing fruit. If properly cared for, these three varieties can give fruit in around three years. In theory, PT. X facilitates the community's ability to make yards more productive and beautiful through the distribution of fruit plant seedlings; still, success depends on the community's sincerity and tenacity in the care process. The researchers' argument is supported by Kostruba (2021, pp. 123-124), who stated that CSR aims to improve the quality of social life of the community around the company, but that its implementation requires active and proportional cooperation between both parties to achieve this goal. --- Social Conflict Mitigation Based on Alleviating Critical Local Community Problems One of the PPM programs implemented, and one with a very strategic role, is the construction of drilled wells. The analysis of community satisfaction with the drilled wells is shown in Figure 4. Figure 4 shows that the community is satisfied with the establishment of a communal drilled well, particularly in terms of accessibility, that is, ease of access to the drilled well location, which covers water collection, continuity of clean water, and user comfort. Aside from that, the general public regards the physical structure of the drilled well as average; in fact, the majority is unsatisfied with the aesthetics and cleanliness of the site. Because a drilled well is a public amenity rather than a monumental construction, the emphasis is on its benefits rather than its physical structure and aesthetics.
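Satisfaction summaries of the kind reported in Figures 3 and 4 can be reproduced from questionnaire data with a few lines of analysis code. The sketch below is illustrative only: the column names and the 1-5 Likert coding are assumptions, since the actual instrument is not reproduced in the text.

```python
# Illustrative summary of per-aspect satisfaction scores from a questionnaire
# like the one described (n = 205 in the study). Column names and the 1-5
# Likert coding are hypothetical stand-ins for the actual instrument.
import pandas as pd

aspects = ["program_type", "seedling_selection", "usefulness",
           "ease_of_participation", "seedling_distribution"]

# One row per respondent; toy values in place of the real survey data.
responses = pd.DataFrame({a: [4, 5, 4, 3, 2] for a in aspects})

summary = pd.DataFrame({
    "mean_score": responses[aspects].mean(),
    "pct_satisfied": (responses[aspects] >= 4).mean() * 100,  # scores 4-5
}).sort_values("mean_score", ascending=False)
print(summary)
```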
This is consistent with Simpen et al. (2021, pp. 76-78), who stated that the main aspects to be considered in constructing a drilled well are type capacity, optimum discharge, optimum drawdown, and constant discharge; the physical structure and aesthetics are only accessories that do not affect quality, quantity, and continuity. According to information from PT. X, confirmed by the village government and local community, drilled wells were constructed at two locations in Pematang Benteng Village. Each drilled well has a depth of around 135 meters and can be used by the community around the clock. Water from each drilled well is stored in a communal water tank with a capacity of around 10,000 liters. The drilled wells were built between 2017 and 2020 in response to the company's concern about villages that frequently struggle to meet their water needs during the dry season. The village authority informed the company about the problem, which was satisfactorily resolved. Establishing these two communal drilled wells has remedied the difficulties of people who frequently had trouble accessing clean water, so that there are no longer any houses without such access. The company and the village government said that the determination of the well locations and the 135-meter depth was based on the analysis of a team of competent experts, so that the quantity and quality of the water could be guaranteed to be sustainable. Through the construction of these communal drilled wells, PT. X has realized SDG indicator 6.1.1 (c), namely the proportion of the population with access to safe and sustainable drinking water services. The communal drilled wells demonstrate the company's commitment to meeting the SDGs, particularly the most fundamental necessity: clean water. Access to safe drinking water is vital for both individual and social needs. Regarding social impact, providing access to clean water is critical to a community's productive activities. Darwis and Rusastra (2011, p. 141) underlined that effective community empowerment initiatives must begin with data synergy, institutional structuring, the establishment of supporting infrastructure, and synergy in program implementation. Accessibility of clean water is part of that supporting infrastructure, on which productive activities in the empowerment process depend. In general, the solution to this problem requires the support and involvement of Commission VII of the Indonesian House of Representatives in collaboration with the Ministry of Energy and Mineral Resources. The Indonesian House of Representatives can play a strategic role in developing a national legislative program (prolegnas) based on a priority scale of interests and emergencies that threaten the safety and security of life in mine-affected communities. This can be done in collaboration with the Ministry of Energy and Mineral Resources to prepare and discuss a bill on harmonizing natural resource management so as to realize harmonious development, namely economic growth, improved living standards and community welfare, and environmental preservation. Aside from that, Commission VII of the Indonesian House of Representatives can play an optimal supervisory role, for example where a mining company is not committed to implementing statutory regulations. --- Conclusion Land ownership disputes, incomplete land acquisition, and concerns over the use of public facilities that are deemed unjust are common factors causing social conflict in coal mining communities.
Reactive actions, mitigation through empowerment initiatives, and adaptation through improving the local community's sense of ownership in the enterprise can all be used to address social conflict. CSR and PPM projects pursued with total commitment can provide empowerment and a greater sense of ownership. Companies can provide CSR and PPM to build economic independence so that, when the IUP ends, communities can live a decent and better life than before; in turn, the community develops a sense of ownership in the company, and the ex-mining area can become an economic center while environmental conditions are maintained. PPM programs that are on target and concrete are not charity; they increase community independence and serve as a tremendous marketing tool for businesses. By implementing suitable PPM, companies can indirectly protect themselves and increase the sustainability of their business processes. PPM can also be a key instrument for reducing social friction and maintaining the stability of firm operations, which affects workers' income, safety, and comfort. With proper and proportional PPM, the community's sense of ownership in the company grows, and people willingly contribute to ensuring that the company can function sustainably. To support the program to increase educational participation rates in mining areas, and to strengthen the PPM program in a more concrete way that increases local community independence, Commission VII of the Indonesian House of Representatives can work more closely with the Ministry of Energy and Mineral Resources to determine the amount of PPM costs that each mining company must pay, based on standards tied to IUP area and production capacity. In line with that policy, Commission VII can cooperate with the ESDM Ministry to monitor and supervise PPM program implementation so that it becomes more concrete and impactful. This is essential because, thus far, many PPM programs take the form of charity and need to become more educative for the community. In addition, Commission VII of the Indonesian House of Representatives can cooperate with the ESDM Ministry and BRIN to strengthen research and the implementation of applied technology in mines owned by BUMN, to raise the success rate of reclamation and post-mining efforts, which in the future can serve as a role model for other mining companies. Commission VII also needs to collaborate with the ESDM Ministry to strengthen policy implementation and supervision of CSR and PPM provision and of reclamation and post-mining work, so as to realize productive, conducive, collaborative, and sustainable mining areas.
Background: During the COVID-19 lockdown, a large proportion of the women exposed to intimate partner violence had to live with their abusers full-time. This study analyzes the new official complaints that were filed during the lockdown in Spain. Methods: Data from the Comprehensive Monitoring System for Cases of Gender Violence from the Ministry of the Interior, Spain. Using logistic regression models, the complaints registered during the lockdown were compared to those registered in the previous year. Subsequently, we analysed the association between the seriousness of the incident reported and the period in which the complaint was filed. Results: Official complaints decreased by 19% during the lockdown. The probability of complaints during lockdown mainly increased when victims had a relationship with the abusers [odds ratio (OR) = 1.33] and when they lacked social support (OR = 1.22). The probability that the complaints were associated with previous jealousy (OR = 0.87), previous harassment behaviours (OR = 0.88) or the victim's fear for minors' safety (OR = 0.87) decreased. In addition, during lockdown the probability increased that the complaints filed were due to incidents of severe physical violence (OR = 1.17), severe psychological violence against women with minors in their charge (OR = 1.22), and severe violence due to threats (OR = 1.53) when the woman had previously suffered harassment. Conclusions: The decrease in new complaints during the studied period and the increase in their severity evidence difficulties in seeking help due to the lockdown. In situations of confinement, it is necessary to design measures that protect women who lack social support and those who live with the aggressor.
Introduction --- During the last few decades, a substantial body of knowledge has been produced on the effects of different crises (economic, natural or socio-political) on the intimate partner violence suffered by women at the hands of their partner or ex-partner (IPV).[1][2][3] The factors identified as triggers of IPV during crises act either directly (unemployment, economic difficulties) or through intermediate mechanisms, such as deteriorating mental health or an increase in alcohol consumption.4,5 Since 2020, the world has been in the midst of a near-unprecedented pandemic. The measures adopted to address the SARS-CoV-2 pandemic entailed a drastic change in social relations. Previous studies pointed to an increase in IPV during this period.6 Social isolation, men's frustration, unemployment, and the use of alcohol and other drugs may have exacerbated IPV, while limiting support and access to the resources needed to face these issues.[7][8][9][10] The opportunities to leave a violent relationship are greater the more support is diversified across informal networks and formal services.11 This diversification when searching for help, which can be influenced by individual, interpersonal and contextual factors,12 has been drastically altered during the pandemic too. In the process of dealing with IPV, a formal complaint against the abuser allows civil measures to be implemented to help protect the victims, and criminal proceedings against the abuser to begin. In Spain, 25% of the women killed due to IPV in 2019, and 22% of women exposed to IPV, had filed a complaint against the abuser.13 The presence of minors in the home, physical IPV and the severity of the violence are all factors that promote a complaint being filed in search of safety and protection.14,15 However, during the months of the COVID-19 lockdown, complaints decreased;16 this is despite the continued presence of minors in the home and the fact that the first studies carried out on emergency services in contexts similar to ours pointed to an upsurge in the severity of IPV.17,18 In Spain, services for IPV victims were considered essential,19 and, to complement this, increased access to the Security Forces and Corps was implemented to facilitate the filing of complaints. Despite these emergency measures, it was more difficult to implement a planned and safe strategy to respond to the consequences, for a woman and her children, of reporting the abuser during lockdown. Judicial processes were also more broadly disrupted, and social assistance was focused on responding to the essential needs of the moment. When women are unable to implement active strategies to safely exit violent situations, the mechanisms that are put in place are avoidance behaviours which allow victims to survive that IPV.20,21 An internationally observed pattern was repeated during the lockdown in Spain.22 Calls to the 016-helpline increased by 47% compared to the analogous period the previous year, while IPV complaints to the police decreased by 15%,16 but so far we do not have a full characterization of the use of these resources. Since 2007, the State Security Forces and Corps, the Foral Police of Navarra and more than 500 local police forces have registered and collected information on all IPV complaints via the Comprehensive Monitoring System for Cases of Gender Violence (VioGén System).23
This information has allowed a longitudinal database to be generated, which offers a unique opportunity for the analysis of formal IPV complaints to the police across a large part of Spain. In this context, this study's objectives were: (i) to analyse the main characteristics of the new complaints for IPV filed during the COVID-19 lockdown in Spain in comparison to the new complaints filed during the same period in 2019; and (ii) to analyse whether there is an association between the period in which a complaint is filed (under lockdown or not) and the severity of the violence reported. --- Methods In order to analyse the main characteristics of the new IPV complaints filed during the COVID-19 lockdown, a retrospective case-control study was carried out. The cases were new formal IPV complaints registered between 15 March and 21 June 2020, and the controls were new complaints of the same period in 2019. With the aim of identifying whether the confinement was associated with the severity of the IPV reported, a cross-sectional study of new complaints filed in the same periods was carried out. Data came from the VioGén System, for which the Secretary of State for Security of the Ministry of the Interior is responsible, via the Police Risk Assessment Form, version VPR5.0. The period analysed runs from 15 March to 21 June of the years 2019 and 2020. The original database includes 23 549 new IPV complaints, defined as any act of violence against women perpetrated by a man who is or has been her spouse, or who is or has been linked to her by a similar emotional relationship, even without cohabitation. The analysed database contains 22 078 records. The excluded records were evenly distributed across the two study periods (2019/20). The study was approved by the Ethical Committee of Alicante University (Ref. 2020-07-08). To achieve the objectives of the study, the following dependent variables were defined: • Period in which the new complaint was filed: lockdown period/previous period. • Severity of the violence (psychological, physical, sexual, threats) reported in the new complaint: mild-moderate/severe-very severe. The assumptions that group the different types of IPV based on severity are shown in Supplementary table A1.24 The independent variables are listed in table 1. These are classified into five categories: study period, type of IPV, variables related to the victim, involvement of minors, and variables related to the abuser. First, after obtaining the frequencies and percentages of missing values (table 1), an analysis of the patterns of these values and their relationships with the different variables was carried out for all variables. This analysis showed that the mechanism of generation of missing values was Missing At Random (MAR). Considering the results, weights were applied using the Inverse Probability Weighting (IPW) technique, which is suitable for MAR.25 The variables used as predictors to calculate the weights were: the nationality of the victim, the current relationship with the abuser, the age of the victim and the number of coexisting forms of IPV. The descriptive analysis includes (i) the complaints filed during the confinement period and during the analogous period the previous year (Supplementary table A1), and (ii) the severity of the different types of IPV (physical, sexual, psychological, threats) collected in the complaints, considering the previously described covariates (Supplementary table A2).
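As a rough illustration of the IPW step described above, the sketch below models the probability that a record is complete and weights complete records by the inverse of that probability. It is a minimal sketch only: the VioGén microdata are available on request, so all column names here are hypothetical stand-ins, and the predictors of completeness are assumed to be fully observed.

```python
# Minimal sketch of Inverse Probability Weighting (IPW) for MAR missingness:
# a logistic model predicts record completeness; complete records are then
# weighted by the inverse of that predicted probability. Column names are
# hypothetical stand-ins for the VioGen fields named in the text.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def add_ipw_weights(df, analysis_cols):
    """Return a copy of df with a 'complete' flag and an 'ipw' weight column."""
    df = df.copy()
    # A record is complete if it has no missing values on the analysis variables.
    df["complete"] = df[analysis_cols].notna().all(axis=1).astype(int)

    # Completeness is modelled from the predictors named in the study
    # (nationality, current relationship, age, number of IPV forms), all
    # assumed here to be fully observed.
    model = smf.logit(
        "complete ~ C(nationality) + C(current_relationship)"
        " + victim_age + n_ipv_forms",
        data=df,
    ).fit(disp=0)

    p_complete = model.predict(df)
    df["ipw"] = np.where(df["complete"] == 1, 1.0 / p_complete, 0.0)
    return df

# Usage (hypothetical): df = add_ipw_weights(df, ["severe_physical", "minors_in_charge"])
```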
Hypothesis contrasts were performed using the Phi (φ) statistic for the binary variables, and the F statistic for the age variable. Subsequently, in order to estimate the association of the covariates with the dependent variables, we used logistic regression models. Model 1 has the period in which the complaint was filed as the dependent variable. Models 2-5 have as dependent variable the severity of physical (Model 2), psychological (Model 3), threats (Model 4) and sexual (Model 5) violence, with the period in which the complaint was filed as the main independent variable. For all the models, the first step was to perform bivariate analyses, and variables were selected with P < 0.25. Subsequently, the selected variables were entered into multivariate models and estimates were made in phases until reaching the final variables, always keeping the variable under study (lockdown), nationality (v7) and age of victim (v9) in the model. Prior to the construction of the final models, we evaluated the collinearity between all the independent variables. In the models evaluating the severity of the different IPV types (Models 2-5), we explored the possible interactions of the variable 'lockdown period' with the remaining covariates selected in the previous steps. Estimates were made using Robust Standard Errors.26 All the analyses were performed using SPSS 26.027 and Stata 14.2.28 --- Results Of the 22 078 total new complaints registered in the VioGén System, 12 177 occurred during March-June 2019 and 9901 during the lockdown period, March-June 2020. The most frequent type of IPV in both periods was psychological (9931 in 2019 and 8058 in 2020), followed by physical violence (8675 and 7142), the presence of threats (6459 and 4784) and sexual violence (1053 and 799). During the lockdown period, the frequency of new complaints significantly increased where the victim was a foreign woman (38.1% in 2019 vs. 40.0% in 2020), had a current relationship with the abuser (63.2% vs. 70.8%), lacked social support (17.2% vs. 20.8%), or had previously reported other abusers (15.6% vs. 17.9%) (Supplementary table A2). The frequency significantly decreased for complaints in which women had informed the abuser of their intention to break off the relationship (53.6% vs. 52.1%) and in which the victims reported fear for the integrity of the minors (12.6% vs. 10.9%). In the lockdown period, the frequency of complaints in which the abuser had previously shown exaggerated jealousy (47.5% vs. 43.9%) or harassing behaviours (34.2% vs. 29.1%) towards the victim decreased. Table 2 shows the independent effect of the covariates on the probability of an IPV report having been filed during the lockdown period. In the complaints filed during confinement, we identified a greater probability that the victim maintained a relationship with the abuser [odds ratio (OR) = 1.33] and that she lacked social support (OR = 1.22). During lockdown, the number of severe IPV complaints (physical, psychological, threats, sexual) decreased compared to the reference period (detailed results in Supplementary table A3). In relative terms, during lockdown, there was a significant increase in the percentage of serious physical IPV complaints (18.4% vs. 16.9%) and in the percentage of severe IPV complaints due to threats (67.6% vs. 61.9%). Table 3 shows the association between lockdown (ref: analogous period of the previous year) and the probability of registering a report with severe violence vs. moderate violence, for the different types of violence and independently of the other covariates.
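The kind of model behind Tables 2 and 3 can be sketched as a weighted logistic (binomial GLM) fit with robust standard errors, with coefficients exponentiated into odds ratios. The sketch below continues the hypothetical data frame from the previous snippet; the variable names are stand-ins for the study's covariates, not the actual VioGén field names.

```python
# Sketch of one severity model (e.g., Model 2: severe physical IPV), fitted
# with the IPW weights and robust (heteroskedasticity-consistent) standard
# errors, then summarized as odds ratios. Variable names are hypothetical.
import numpy as np
import statsmodels.api as sm
import statsmodels.formula.api as smf

model = smf.glm(
    "severe_physical ~ lockdown + C(nationality) + victim_age"
    " + current_relationship + no_social_support",
    data=df,                              # df with the 'ipw' column from above
    family=sm.families.Binomial(),
    freq_weights=df["ipw"].to_numpy(),    # IPW weights treated as frequencies
).fit(cov_type="HC1")                     # robust standard errors

odds_ratios = np.exp(model.params)        # exponentiated coefficients
or_ci = np.exp(model.conf_int())          # 95% CIs on the OR scale
print(odds_ratios["lockdown"], or_ci.loc["lockdown"].tolist())
```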
During lockdown, the probability that the new complaints filed would record severe physical IPV increased (Model 2) (OR = 1.17). Regarding psychological IPV complaints (Model 3), a significant interaction was identified between the lockdown period and the presence of minors in the victim's charge (Supplementary figure 1). To understand that interaction, simple effects were obtained by estimating the differences of the predicted values (between the points connected by each line in figure 1). Having or not having dependent children at a time other than lockdown does not significantly change a victim's estimated likelihood of suffering serious violence [coef diff = −0.01 (−0.03; 0.01)]. However, the difference between having or not having children in charge when the victim was in the lockdown period was statistically significant [coef diff = 0.03 (0.01; 0.05)]. In the threats category (Model 4), a significant interaction was identified between the lockdown period and previous harassment by the abuser. For victims at a time other than lockdown, the difference between having suffered previous harassment by the abuser or not was statistically significant [coef diff = 0.04 (0.01; 0.06)]. For victims during lockdown, the difference between having suffered previous harassment or not was statistically significant too [coef diff = 0.09 (0.06; 0.12)], and significantly higher than the previous difference (Supplementary figure 2), as shown by the interaction coefficient in that model. For reports of sexual violence, lockdown did not increase the probability of reporting serious sexual violence (Model 5).
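The simple-effects contrasts reported above can be reproduced from a fitted interaction model by predicting probabilities over a small covariate grid and differencing within each period. The sketch below, again with hypothetical variable names, mirrors the Model 3 contrast (minors vs. no minors, within and outside lockdown); confidence intervals would in practice come from the delta method or a bootstrap, omitted here for brevity.

```python
# Sketch of the simple-effects computation for the lockdown x minors
# interaction (Model 3): predicted probabilities of severe psychological IPV
# are contrasted within each period. Variable names are hypothetical.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

model = smf.glm(
    "severe_psych ~ lockdown * minors_in_charge + victim_age",
    data=df, family=sm.families.Binomial(),
).fit()

# Small prediction grid; other covariates held at a reference value.
grid = pd.DataFrame({
    "lockdown":         [0, 0, 1, 1],
    "minors_in_charge": [0, 1, 0, 1],
    "victim_age":       [df["victim_age"].mean()] * 4,
})
grid["p_severe"] = model.predict(grid)

# Simple effects: minors vs. no minors, within each period.
effect_outside  = grid.loc[1, "p_severe"] - grid.loc[0, "p_severe"]
effect_lockdown = grid.loc[3, "p_severe"] - grid.loc[2, "p_severe"]
print(f"Minors effect outside lockdown: {effect_outside:+.3f}")
print(f"Minors effect during lockdown:  {effect_lockdown:+.3f}")
```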
--- Discussion During the COVID-19-induced lockdown in Spain, new IPV complaints decreased by 19% when taking the same months of 2019 as a reference. New complaints were associated with IPV incidents where the couple's relationship was continuing, the woman lacked social support and she had reported other abusers previously. There was a lower likelihood of complaints due to previous extreme jealousy or harassment behaviour, but a higher likelihood of complaints in which the abuser had a history of IPV perpetrated against other partners. In the complaints filed during lockdown, the probability that these included situations of risk to children's safety decreased. In the new complaints filed due to physical violence during lockdown, the probability that this violence was serious increased. The probability of complaints as a result of serious psychological violence in this period also increased for women with minors in their care. The probability of reporting serious threats was higher, especially for women who had previously suffered harassment from the abuser. During the lockdown period, when exceptional isolation and stay-at-home measures against COVID-19 were in force in Spain, IPV complaints were considerably reduced, as shown by our results. Increases in other indicators during this period, such as 016-helpline calls,16 indicate that this decrease in complaints does not necessarily reflect a decrease in IPV itself, but rather a change in help seeking, or that lockdown reduced the possibilities of perpetrating IPV in non-cohabiting relationships. In fact, our results show that the women who filed complaints during lockdown were the most exposed to violence: they had a current relationship with the abuser, they lacked social support, or they knew about the process because they had filed prior complaints. During lockdown, official complaints where the abuser showed previous extreme jealousy or continued harassing behaviour decreased. As reflected in different theoretical frameworks,29 jealousy and harassment appear and generate violence due to the abuser's insecurity and lack of control over the victim. The stay-at-home order, the victim's lack of social contact, as well as the strategies implemented by women to avoid conflict,30 were likely to have increased abusers' perception of control, partially averting violent crises caused by jealousy. Contact with the police services and officially reporting the abuser has been identified as a key moment in women's decision-making when trying to leave a violent relationship.31 During the lockdown, although IPV reports decreased, the severity of the IPV reported increased. It is important to bear in mind that our study analyzes new complaints; therefore, this greater severity is not due to an increase in violence after a first formal complaint, but rather to a possible delay in filing the complaint or to contextual elements, such as the victim's social isolation, that triggered or increased risk factors for serious violent behaviour.32 An IPV victim files a formal complaint against her abuser when she sees that the mechanisms that she has put in place to reduce, alleviate or survive the IPV are not working.[33][34][35] The fact that during lockdown there was an increased number of complaints involving serious threats, especially in those situations where the victim reported having been harassed in the previous months, suggests the possible recurrence of previously established IPV, or the appearance of new forms of IPV inflicted through threats.36 Serious threats can be made without the physical presence of the abuser, perhaps explaining why complaints including serious threats were more frequent during lockdown than other forms of IPV. In the new complaints filed during lockdown, women's fear for minors' safety was observed less frequently, despite the presence of minors in the home increasing the likelihood of complaints reflecting severe psychological IPV. Although the data do not provide information on cohabitation, it is possible that social distance increased psychological violence against women in those cases of shared custody where the other parent had difficulty obtaining access to the children.37 Irrespective of the period in which a report was filed, the probability of reporting a serious IPV incident was greater when there was harassment and extreme jealousy in the previous months. Jealousy is one of the risk factors most predictive of women being murdered as a result of IPV.38 The fact that, during lockdown in Spain, IPV murders decreased by 74% compared to the analogous period of the previous year,16 could in part be related to a decrease in jealousy due to a greater perception of control over the victim, and to separations being postponed, since in 35% of intimate femicides in Spain the couple were not cohabiting.13 After the lockdown in Spain, murders of women and children due to gender violence increased by 50% compared to the average number of murders registered in the previous 5-year period.13 It is necessary to bear in mind that violence follows established dynamics,39 and that the violence held in check by the pandemic may emerge over time.
At the same time, the socioeconomic crisis caused by the pandemic may generate an increase in new cases of IPV, as has occurred in other recent crises,3 largely due to the fact that complex crises can have direct effects on IPV risk factors. The variables associated with severe violence in our study support the arguments of other authors about the dynamics of violence; these authors affirm that serious assaults occur in many cases after continued exposure to violence and are associated with risk factors which increase the probability of femicide.38 In our results, regardless of the period in which the complaint was filed, aggression, jealousy and harassment had been present for at least six months before a serious complaint was filed. The women who reported severe IPV presented a chronic deterioration in their general wellbeing, a higher probability of suicide attempts, and incidences of the abuser inflicting multiple forms of IPV on them; finally, they lacked social support. All this suggests that serious IPV attacks are neither ad hoc nor spontaneous. If we consider that our analysis includes only new complaints, where the woman had not previously reported the abuser, it is notable that new formal complaints for severe IPV occur after a long process of psychological deterioration. The study presented here must be understood within the framework of its limitations. The information collection system covers 79% of the Spanish population. Although there were a limited number of missing values in the different variables, their combination in the multivariate analyses caused a loss of records, which may have led to bias in the estimates made. It should be noted that the mechanism of missing value generation was MAR; the IPW technique was used to give greater weight to those complete records which, given their characteristics, were more likely to have contained missing values. In addition, robust estimators were used for the regression models. These techniques are appropriate for the type of problem faced; however, it can never be said with certainty that all bias in the results has been eliminated. The change in the complaints profile suggests access barriers during the lockdown and that specific groups of women were exposed to more serious violence. In situations of forced isolation, it is necessary to design measures that protect the most vulnerable women from IPV and prevent the escalation of violence. These measures should be directed mainly at women with a lack of social support, and at those who live with the aggressor. It is imperative that these women receive attentive follow-up, adapted to the difficult circumstances of the pandemic. --- Supplementary data Supplementary data are available at EURPUB online. --- Data availability The data underlying this article, the VioGén System database, were provided by the Secretary of State for Security, from the Ministry of the Interior, Spain, by permission. VioGén data will be shared on request to the Secretary of State for Security, from the Ministry of the Interior of Spain. Conflicts of interest: None declared.
While there is a small but growing body of work that examines the religious and spiritual lives of bisexuals, there is a strong need for additional research that further explores the intersectionality of these distinct identities. Motivated by the feminist notions that the personal is political and that individuals are the experts of their own experiences (Unger, 2001), the specific aim of this study is to better understand the intersection of multiple identities experienced by bisexual individuals. Relying upon data collected by Herek, Glunt, and colleagues during their Northern California Health Study, in this exploratory study we examine the intersection of bisexual, religious/spiritual, and political identities by conducting an archival secondary analysis of 120 self-identified bisexual individuals. Among the significant findings, results suggest that higher LGB self-esteem scores and openness about sexual orientation correlated with higher levels of spirituality. Further, attraction to same-sex partners was associated with perceiving sexual orientation as a choice, identifying as bisexual at a younger age, a greater likelihood of disclosing one's sexual orientation, a lower likelihood of viewing religion as socially important, and a higher score on the belief statement. We discuss the implications of these results and make suggestions for future research on the role of religion and spirituality in bisexual lives.
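The correlational findings summarized above (for instance, LGB self-esteem and outness versus spirituality, n = 120) correspond to a standard secondary analysis of archival survey scores. A minimal sketch of such an analysis follows; the variable names and toy values are hypothetical, since the Northern California Health Study dataset is not reproduced here.

```python
# Minimal sketch of a secondary correlation analysis of archival survey data,
# of the kind reported in the abstract above. Variable names and values are
# hypothetical stand-ins for the actual study measures (n = 120).
import pandas as pd
from scipy.stats import pearsonr

df = pd.DataFrame({   # toy stand-in for the archival respondent scores
    "lgb_self_esteem": [3.2, 4.1, 2.8, 3.9, 4.5],
    "outness":         [2.0, 3.5, 1.5, 3.0, 4.0],
    "spirituality":    [2.5, 3.8, 2.0, 3.6, 4.2],
})

for predictor in ["lgb_self_esteem", "outness"]:
    r, p = pearsonr(df[predictor], df["spirituality"])
    print(f"{predictor} vs spirituality: r = {r:.2f}, p = {p:.3f}")
```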
Recently, however, researchers have begun to move away from studying the LGBT community in its "entirety" and have started to focus more on the experiences of bisexual individuals apart from the larger sexual minority community. Specifically, scholars (Carr, 2011; Fassinger & Arseneau, 2007) have argued against grouping individuals into categories based solely on their gender or sexual orientation since, in particular, doing so neglects the unique experiences of bisexual women and bisexual men. Such distinctions add complexity and possible complications to any discussion of the influence of religion and spirituality in the lives of LGBT individuals. Even as the study of sexual minorities follows sociopolitical trends (e.g., situational homosexuality, the HIV/AIDS epidemic, same-sex marriage), so has the inclusion and/or exclusion of bisexuality in research been impacted by such trends (Rust, 2002). Added to the mix is the use of a variety of theories and perspectives to understand sexuality and sexual orientation. As an example of this trend, recent research has incorporated principles of Positive Psychology in the exploration of non-heteronormative identities (Savin-Williams, 2008). To further this discussion, the authors of this paper offer a discussion of the unique experiences of male and female bisexuals from a feminist perspective, one that recognizes the religious, sociopolitical, cultural, and historical experiences that have impacted bisexual individuals from both the heterosexual and LGBT communities. This paper will attempt to address these issues via a feminist perspective by exploring the connections between bisexuality, political view, and religiosity/spirituality. --- Theories of Bisexuality Alfred Kinsey and his colleagues suggested that sexual orientation existed on a continuum from homosexuality to heterosexuality and recognized bisexuality as a separate experience (Kinsey, Pomeroy, & Martin, 1948/1998). According to Rust (2002), the term bisexual historically referred to the combination of homosexuality and heterosexuality, or sexual attraction towards same-gender as well as different-gender individuals. However, the American Psychological Association (2008) reported that sexual orientation is more than just sexual attraction to women, men, or both; it includes emotional and romantic attraction as well. Furthermore, some individuals do not accept the concept of bisexuality at all and assume that bisexuals are either gays or lesbians who are not ready to come out due to societal homonegativity, or that they are simply heterosexuals who are experimenting (Rust, 2002). Even within the lesbian and gay community, some lesbian activists believed that bisexual women were not as invested in the lesbian feminist movement due to their occasional and pseudo-treasonous attraction to men. Additionally, bisexuals have also been stigmatized as being the conduit between gay men and heterosexual women for the spread of HIV/AIDS (Donaldson, 1995; Rust, 2002; Udis-Kessler, 1995). Nonetheless, despite all these disparate views of bisexuality, between the 1960s and 1980s the movement toward greater bisexual awareness and acceptance began to take shape (Donaldson, 1995; Rust, 2002; Udis-Kessler, 1995), and by the 1990s bisexuality was starting to be addressed in both research and practice.
However, throughout the 1980s and 1990s, the debate over whether or not to include "bisexual" in the title of gay and lesbian organizations continued and, according to Weiss (2003), this lack of inclusion still occurs in some groups. --- Feminist Theory and Bisexual Identity In considering the bisexual experience of negotiating identities from a feminist perspective, the notions that (a) individuals' experiences are influenced by their sociopolitical context (the personal is political) and (b) individuals should be considered experts of their personal experiences (Unger, 2001) are invaluable. In addition, the feminist theory of intersectionality [defined as the intersection of social categories that are based on the subjective experiences of privilege and oppression (Bowleg, 2012; Collins, 2000; Frazier, 2012; Warner, 2008)] was used to further understand these overlapping identities. Bisexuals have traditionally been grouped together with gay, lesbian, and transgender individuals, and their unique experiences are often overlooked (Fassinger & Arseneau, 2007; Meezan & Martin, 2009). While the term 'LGBT community' is often bandied about, as Edwards (2003) suggested, this acronym is often seen more as a media tool and a label to group people together than as a means leading to greater understanding of sexual and gender minorities. Furthermore, Fassinger and Arseneau (2007) described how the intersection of gender and sexual orientation and the labels individuals place on themselves may vary depending on levels of internal and external acceptance, sociopolitical experience, and culture, among other variables. Due to the different experiences of females and males regarding gender-role socialization, Fassinger and Arseneau (2007) argued against using the term bisexual regardless of gender, since it does not sufficiently address these individuals' unique life experiences. For instance, Savin-Williams and Diamond (2000) found that women were more likely to identify as bisexual and, in general, were more likely to identify with this sexual orientation prior to the onset of sexual activity, whereas men tended to label themselves after becoming sexually active. However, since bisexual individuals have been under-studied, the intersection of gender and sexual orientation, among other identities (i.e., religious identity), is just starting to be addressed (Jefferies, Dodge, & Sandfort, 2008; Toft, 2009; Unger, 2001). According to Fassinger and Arseneau (2007), assumptions about bisexual individuals are often made, since there is limited research that focuses solely on the bisexual experience. For example, Weiss (2003) reported that suppositions about bisexuals remain (such as the previously mentioned experimentation, or not being ready to come out as gay or lesbian), as well as assumptions about bisexuals trying to gain more power and privilege by not identifying as gay or lesbian. According to Clarke and Peel (2005), advances in feminist theory as well as gay and lesbian psychology have been made in response to societal stigma and oppression. However, little has been done to address the postulations about bisexual individuals. In addition, Smiley (1997) reported that more awareness of bisexuality as a culture in and of itself is needed to improve both research and clinical practice. As with other hidden identities, Corrigan and Matthews (2003) described the pros and cons gays and lesbians face while deciding how and when to come out.
With an invisible identity, individuals are often assumed to be part of the majority and, according to Ochs (2007), most people do not realize how many LGBT individuals they actually know. Bisexuals may face the same negotiations as gays and lesbians in having to weigh the psychological advantages of disclosing one's identity against such disadvantages as legalized discrimination and oppression. On the other hand, bisexual individuals may have the unique experience that their privilege and oppression may vary depending on partner status; bisexuals may receive heterosexual benefits when with other-sex partners and experience more oppression when with same-sex partners (Fassinger & Arseneau, 2007). Encouragingly, scholars have recognized the need to be more inclusive of bisexuals within LGBT identity research. For instance, Mohr and Kendra (2011) revised the Lesbian, Gay, and Bisexual Identity Scale to be more inclusive towards bisexual individuals and to use less pejorative language in their measure. In addition, Weinberg, Williams, and Pryor (2001) reported that although bisexual individuals may go through an acceptance process, their identity process differs from that of gays and lesbians. In their study, Weinberg et al. interviewed a group of bisexual individuals at three time intervals (1983, 1988, and 1996) and found that, among the changes (i.e., sexual activity, gender of partners, and types of relationships) participants reported over time, half of the respondents were involved with one gender or were in monogamous relationships. Additionally, for a variety of reasons the participants limited their involvement with the bisexual community and became more definitive, over time, with regard to their bisexual identity. The authors also found that with age, bisexual individuals appeared to become more certain of their sexual orientation by reviewing their lives rather than by focusing on their current experiences. Ochs (2007) found that some bisexuals do not want to be confined by a label, that neither bisexuality nor gender is a binary experience and, therefore, that these terms do not adequately capture these phenomena. For instance, as one participant reported to Ochs, her partner status was not limited to male or female, but extended to a relationship with a man, a woman, a transgender individual, or an intersex individual, fully dependent on characteristics other than biological sex. Further, Ochs found that for some individuals, the identification as bisexual or as lesbian may be made for political purposes. Additionally, some bisexual women recognize that lesbian feminists have political clout and choose to identify as lesbian, whereas others prefer the label bisexual to prove that there are more than two sexual orientations (Ochs, 2007). --- Religious and Spiritual Identity To gain further insight into the intersection of identities, in this case the impact of religion and spirituality on bisexual experience, the feminist view that environment has an impact on individuals' experiences (Cosgrove & McHugh, 2000) is utilized to highlight how the cultural and social norms experienced by bisexuals may facilitate or impede the integration of identities. Therefore, it is essential to consider the interaction between what bisexual individuals experience, based on the larger culture they live in, and how they identify themselves (Unger, 2001).
Previous researchers have examined the process individuals go through in negotiating identities that were assumed to be mutually exclusive. For instance, Ritter and O'Neill (1989) suggested that gay and lesbian individuals from Judeo-Christian denominations may believe they have to choose between their religion and their sexual orientation, while Hunsberger (1996) found that conservative Christian, Jewish, Muslim, and Hindu religions tend to be intolerant towards gay and lesbian individuals. More recently, however, scholars have found that accepting one's sexual orientation does not have to come at the cost of her or his religious identity (Buchanan, Dzelme, & Hecker, 2001; Lease, Horne, & Noffsinger-Frazier, 2005; Rodriguez, 2006; 2010). Further, some religions (i.e., Neo-Paganism) may coincide with a bisexual identity as well as a feminist perspective (Harper, 2010). According to Cole (2009), feminist theory suggests that the multiple identities found among an individual's characteristics (e.g., religion, race/ethnicity, political view, sexual orientation) are inseparable. For example, individuals may experience privileges based on certain aspects of their identities and encounter oppression due to other characteristics. For some bisexuals, these experiences may be mediated by which identities are salient versus those that are not. Specifically, a bisexual Christian in an other-sex relationship may experience more privilege than a bisexual Christian in a same-sex relationship. As per the discussion above regarding bisexual privilege, both of these individuals may experience Christian privilege since they belong to the predominant religion in the United States. However, the latter may face more oppression due to heterosexism in the larger society. Therefore, the complex relationship between bisexuals and their religious and spiritual experience is an area for future exploration due to varying levels of privilege and oppression. Researchers have started to investigate the relationship between religion, spirituality, and sexual orientation (i.e., Rodriguez & Ouellette, 2000), though few have focused specifically on the intersection of religion and bisexuality. Lease et al. (2005) reported that lesbian, gay, and bisexual (LGB) individuals who were involved with gay-affirming religious organizations were less likely to experience internalized homonegativity, were more likely to identify as spiritual, and therefore were less likely to have psychological health concerns. Yip (2007) reviewed data from a number of studies to examine the experiences of Christian and Muslim LGB individuals, and suggested that individuals create their own interpretations free from homonegative sentiments, that connecting with LGB-affirming religious groups facilitates the negotiation of multiple identities, and that LGB religious organizations have started to work with other LGB groups (both religious and secular) to enhance their political investments. According to Dworkin (1997), Jewish LGB individuals who live in a predominantly Christian society may experience multiple forms of coming out (e.g., disclosing one's sexual orientation as well as disclosing one's religion), and individuals may need to weigh the benefits associated with identity acceptance against the risks of giving up an invisible identity. While these scholars have utilized research as a tool towards social change for LGBT individuals, the unique experience of bisexual individuals has not sufficiently been addressed.
More recently, scholars have been addressing the intersection of bisexuality and religious identity. For example, Jefferies et al. (2008) used grounded theory to explore the experiences of Black bisexual men and found that, although most participants believed that their bisexual identity would not be accepted within their religious communities, others who attended gay- and lesbian-affirming religious groups believed their bisexuality would be accepted. This study also found that most individuals differentiated between their spirituality and their religion's intolerance towards LGB individuals. Specifically, some participants tended to use their religion or faith to manage the stress of negotiating intersecting identities, whereas others discussed the protection they received from God (Jefferies et al., 2008). In agreement with Lease et al. (2005), Jefferies et al. described the psychological benefits that affirming religious or spiritual experiences may have for Black bisexual men. Toft (2009) qualitatively examined the experience of bisexual Christians who were in the process of negotiating their multiple identities and found that bisexual individuals vary in their self-definition; involvement in religion was either limited or re-defined, and the fluidity of their bisexuality facilitated negotiating their sexual orientation with their religious identity. Although Jefferies et al. and Toft have started to research the religious experiences of bisexual individuals, more research is needed. --- Political Identity As discussed above, the sociopolitical context of an individual's experience shapes their perspective; in feminist theory this is referred to as "the personal is political" (Unger, 2001). Although political identity is not a social category within the theory of intersectionality, it may influence how individuals identify. According to Crenshaw (1989; 1994), political intersectionality describes how experiences with oppression tend to result in political activation. Udis-Kessler (1995) described how, during the 1970s, some feminists believed that sexual relationships with men could be used to fight against patriarchy; thus, lesbians were not considered useful in the movement. Later, however, lesbian-feminist groups suggested that women should always be put first and that being a lesbian was a way to practice feminism, and this led to the "women-only" trend of lesbian feminism. In turn, lesbian feminists did not view bisexual women as committed to the cause; bisexuals were viewed as imposters. This triggered the creation of bisexual feminism (Udis-Kessler, 1995). With the advent of bisexual feminism, bisexual organizations started to develop, the bisexual movement became a separate political force and, by the 1980s, bisexuals were fighting for recognition in gay and lesbian communities, resulting in the inclusion of the "B" in LGBT organizations (Udis-Kessler, 1995). Similarly, Donaldson (1995) discussed the bisexual movement from the male perspective and addressed how the Quaker religion facilitated his experience. According to Donaldson, he led a discussion on bisexuality at an annual Quaker conference, which resulted in the formation of a bisexual Quaker committee; in the 1970s, this religiously based group fought for bisexual rights within other religious organizations. Also in the 1970s, although bisexuality was described in the popular media as "chic", in the larger gay and lesbian communities there was limited acceptance of bisexual individuals (Donaldson, 1995).
It was during this time that awareness of the bisexual movement was bolstered by articles about the fluidity of bisexuality and how being bisexual defied dichotomous labels. However, this support did not last very long, and stigma regarding bisexuality began to rise. According to Donaldson, the number of bisexual men who were actively involved in the bisexual movement may have been influenced by the AIDS epidemic in the 1980s (e.g., bisexual men were stigmatized as AIDS carriers) and by lesbian feminists' refusal to work with men. It may be argued, however, that lesbian feminists' refusals had less to do with gender than with the differing and, at times, fractious sociopolitical agendas of lesbians and gay men. Nonetheless, fewer men remained involved in the movement. According to Rust (2002), due to the bisexual political movement, researchers became more focused on the HIV epidemic, same-sex marriage, and bisexual culture. However, now that more scholars have included bisexual individuals in their research or have become invested in understanding bisexuality, research has moved past explaining sociopolitical assumptions to understanding the bisexual experience. For instance, Rosario, Schrimshaw, Hunter, and Braun (2006) found that although youth who oscillate between gay or lesbian and bisexual identities tend to move towards a gay or lesbian identity over time, those who identified as bisexual to begin with were more likely to maintain this identity. Hence, researchers have determined that bisexuality is a separate identity rather than a transitional phase (Rosario et al., 2006; Rust, 2007). However, more research is needed to understand these nuances of bisexuality. --- Intersectionality Intersectionality is a feminist concept with historical roots partially credited to the Combahee River Collective (Cole, 2009). This Black feminist group focused on the intersection of race, sex, sexual orientation, and socioeconomic status, among other identities which influenced their political identity and their desire to fight against multiple forms of oppression (Combahee River Collective, 1982). The Collective recognized that the lesbian feminist movement left too many people behind and suggested that the fights against racism, sexism, heterosexism, and class oppression needed to occur simultaneously. Cole (2009) suggested that when considering intersectionality, psychologists should consider the following questions: "who is included… what role does inequality play… where are the similarities?" These same questions apply to LGBT research and, more specifically, to the bisexual experience. Perhaps due to the interaction between sexual orientation and gender as well as political power, there are times when researching LGBT individuals together makes sense, as long as each group is considered equally. However, the question of inequality should also be considered. Even within the LGBT "community," inequality remains; although bisexuals have fought for inclusion within the larger group, their needs have often been excluded. Cole (2009) proposed that subgroups of people who have been neglected should be given a voice. Therefore, the experience of bisexual individuals as a unique group should be considered independently rather than by comparing and contrasting their experience to that of gay, lesbian, transgender, or heterosexual individuals.
Further, when considering inequality, scholars must consider the intersection of a bisexual identity with other relevant identities to account for the interplay of privilege and oppression, which may be experienced simultaneously. As discussed above, a bisexual Christian man who is partnered with a female may choose to keep his bisexual identity invisible in order to maintain power and privilege. However, a bisexual Jewish woman in a relationship with a female may have to negotiate her identities based on the interplay of power and oppression. Lastly, Cole (2009) suggests that to understand intersectionality, researchers must also address the similarities between and within groups. By focusing solely on the diversity within groups, it is easy to overlook the similarities between groups. For instance, the phrase 'coming out' is often associated with LGBT individuals, but, as previously noted, individuals with other hidden identities may also have a coming out experience (Dworkin, 1997). Therefore, by considering areas of commonality we can start to break down boundaries. This paper will thus attempt to address these issues by exploring the intersectionality between three key identities: bisexuality, religion/spirituality, and political view. --- The Current Study Although there is a small but growing body of work that examines the religious and spiritual lives of bisexuals (i.e., Donaldson, 1995; Harper, 2010; Rodriguez, 2006; Toft, 2009), there is a strong need for additional empirical research (employing both qualitative and quantitative research methodologies) to better explore the intersections of bisexual and religious/spiritual identities. Increasingly, quantitative methods have been used by psychologists to examine feminist concerns, especially when it comes to examining complex relationships (Peplau & Conrad, 1989; Warner, 2008). The present paper is an exploratory study that uses the lens of feminist theory to assess the relationship between bisexual, religious/spiritual, and political identities. Motivated by the feminist notions that the personal is political and that individuals are the experts of their own experiences (Unger, 2001), the specific aim of this study is to better understand the intersection of multiple identities experienced by bisexual individuals. Specifically, this research was designed to examine bisexuality as it intersects with both political and religious/spiritual identities. What is the relationship between bisexual identity and religious/spiritual identity? What is the relationship between bisexual identity and political identity? What is the relationship between political identity and religious/spiritual identity? What demographic variables play a role in influencing these three identities? These are the research questions that the current study attempts to answer. The specific aims and research questions for this study were assessed by conducting an archival secondary analysis of data from the Northern California Health Study (NCHS) conducted at the University of California, Davis (UC Davis). Dr.
Greg Herek and his colleagues, from August 1994 through December 1995, conducted this study to better understand the relationship between hate crime victimization, non-hate crime victimization, psychological well-being, world-view, and victimization-related beliefs within a large sample of gay men, lesbians, and bisexuals, as well as a small number of transgender and heterosexual individuals (Herek, Cogan, & Gillis, 2002; Herek, Gillis, & Cogan, 2009; Herek, Gillis, Cogan, & Glunt, 1998). One of Herek's colleagues on the NCHS, Eric Glunt, was interested in assessing religiosity and spirituality within a GLB sample. Thus, embedded within the NCHS was a battery of religious and spirituality questions that were randomly given to a third of the study's sample (n = 761). These data were never analyzed by the NCHS team; they therefore serve as an ideal data set for an analysis of intersecting bisexual, political, and religious/spiritual identities, as there was a sizeable subsample of bisexual individuals who participated in the survey. For additional information about the methodology utilized during the NCHS, including detailed descriptions of the non-probability sampling strategy, data collection, and the structure of the survey instrument, please see Herek, Gillis, and Cogan (1999; 2009) and Herek, Gillis, Cogan, and Glunt (1998). --- Method Participants The sample size for the NCHS was 2,259, with 1,170 women and 1,089 men (Herek, Gillis, & Cogan, 1999). Of these participants, a sub-sample of 761 individuals answered a series of questions regarding their spirituality, religious beliefs, and religious behaviors. Of these 761 participants, 120 self-identified as bisexual. This subgroup of 120 bisexual individuals (n = 67 females and n = 53 males) forms the research sample for the current paper. --- Measures The measures used in this study were all taken directly from the religious/spirituality version of the NCHS survey and were divided into four subsets: demographic variables, measures of political identity, measures of bisexual identity, and measures of religious and spiritual identity. There was very little missing data in the NCHS dataset, but to ensure maximum power for subsequent data analysis, mean replacement was used to correct any missing data points found within each of the continuous variables (a minimal sketch of this preprocessing appears after the list of bisexual identity measures below). Demographic variables-The demographic variables of interest were age, level of education, sex, race/ethnicity, and satisfaction with their standard of living. Because the sample was predominantly White, the race/ethnicity variable was recoded for analysis purposes into a binary dummy variable with 1=White and 0=non-White. Bisexual identity variables-The specific measures of bisexual identity are described below:
• Degree Out Regarding Bisexuality to family, friends, and co-workers (a single 10-point Likert scale item ranging from not out at all [0] to completely out to everyone [9]).
• Age Came Out to Self (continuous variable in years).
• Have a Choice in Being Bisexual (a single 5-point Likert scale item ranging from no choice at all [0] to a lot of choice [4]).
• LGB Community Consciousness (scale consisting of five 5-point Likert scale items [0-4]; potential scale range = 0 to 20, with 20 indicating a high level of involvement in the gay, lesbian, and bisexual community).
• LGB Self-Esteem (scale consisting of five 5-point Likert scale items [0-4]; potential scale range = 0 to 20, with 20 indicating a high level of self-esteem regarding one's sexual orientation).
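As a concrete illustration of the preprocessing described above (mean replacement for continuous variables and the White/non-White dummy coding), the short Python sketch below shows how such steps are commonly carried out. The column names are hypothetical and the snippet is not the NCHS team's actual code.

```python
import pandas as pd

# Hypothetical slice of the survey; column names are illustrative only.
survey = pd.DataFrame({
    "age": [25.0, 41.0, None, 33.0],
    "lgb_self_esteem": [14.0, None, 9.0, 18.0],
    "race": ["White", "Black", "White", "Hispanic"],
})

# Mean replacement: fill each continuous variable's gaps with its mean,
# preserving the full sample size for later multivariate analyses.
continuous = ["age", "lgb_self_esteem"]
survey[continuous] = survey[continuous].fillna(survey[continuous].mean())

# Dummy-code race/ethnicity as described above (1 = White, 0 = non-White).
survey["white"] = (survey["race"] == "White").astype(int)
print(survey)
```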
Participants were also asked about Bisexual Attraction-specifically, whether they found themselves attracted more to the same sex, more to the other sex, or equally attracted to both women and men. For analysis purposes, each of these three attraction targets was recoded into a binary dummy variable: attracted to the same sex (1=Yes, 0=No), attracted to the other sex (1=Yes, 0=No), and equally attracted to both sexes (1=Yes, 0=No). Political identity variables-Political identity was assessed by asking participants their political view (a single 7-point Likert scale item ranging from very conservative [1], through middle of the road [4], to very liberal [7]). Participants were also asked to describe their political affiliation (Democrat, Republican, Independent, etc.). Given that the data were primarily drawn from Northern California (a "blue state" that historically leans Democratic in national elections), political affiliation was recoded into a binary dummy variable where 1=Democrat and 0=Not Democrat. Religious and spirituality identity factor variables-For the purposes of the current paper we analyzed 16 religiosity and spirituality questions from the NCHS survey. Although the Cronbach's alpha reliability of this 16-item scale as a whole was an acceptable .686, the wide diversity and theoretical range of the questions precluded using the total religiosity/spirituality score for analysis purposes. An exploratory Principal Components factor analysis was conducted using a Varimax rotation with Kaiser normalization. This factor analysis resulted in six distinct factors, each with an eigenvalue greater than one. Additionally, the resulting factor model showed clear theoretical underpinnings and strong face validity, and accounted for 72% of the total variance. The six factors uncovered were the following:
1. Religiosity-consisted of five items including "My religious beliefs are what really lie behind my whole approach to life" and "I enjoy reading religious or spiritual books."
2. Spirituality-consisted of two items including "I have a sense of harmony with the universe" and "I feel a spiritual connection to all living things."
3. Religion as Oppression-consisted of three items including "Traditional religion has been a repressive force in women's lives" and "I believe that organized religion has done more harm to the world than good."
4. Alternative Religious Beliefs-consisted of two items including "The idea of a female divine or goddess is an important part of my spiritual beliefs" and "I think that it is important to center my spiritual beliefs around the idea of Mother Earth and fertility."
5. Religion Socially Important-consisted of two items including "Being involved in religious activities is an important way to develop good social relationships" and "Being involved with a church or synagogue helps to establish a person in the community."
6. Atheism-consisted of two items including "Spiritual and religious beliefs do little more than mask reality" and "Religion is little more than a kind of social control over people."
For data analysis purposes, the six factors were converted into scale variables by summing the individual scale items that loaded on each factor. Please note that items with negative factor loadings were reverse-coded to ensure similar directionality for each of the resulting factor scale variables; due to mean replacement, there were no missing data issues to address.
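To make the factor procedure above concrete, here is a hedged Python sketch of the same pipeline on simulated data: retain components with eigenvalues greater than one, extract that many varimax-rotated factors, and sum each factor's items into a scale, reverse-coding negatively loading items. It assumes scikit-learn 0.24 or later (for the rotation option) and synthetic 0-4 item scores; it is an illustration, not a reconstruction of the original analysis.

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA, FactorAnalysis
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for the survey: 16 items scored 0-4 by 120 respondents.
rng = np.random.default_rng(0)
items = pd.DataFrame(rng.integers(0, 5, size=(120, 16)),
                     columns=[f"item_{i + 1}" for i in range(16)])

# Kaiser criterion: count eigenvalues > 1 on the standardized items.
z = StandardScaler().fit_transform(items)
eigenvalues = PCA().fit(z).explained_variance_
n_factors = int((eigenvalues > 1).sum())

# Extract that many factors with a varimax rotation (scikit-learn >= 0.24).
fa = FactorAnalysis(n_components=n_factors, rotation="varimax").fit(z)
loadings = pd.DataFrame(fa.components_.T, index=items.columns)

# Assign each item to its dominant factor, reverse-code items that load
# negatively (a 0-4 item reverses as 4 - x), and sum items into scales.
dominant = loadings.abs().idxmax(axis=1)
scales = {}
for f in range(n_factors):
    members = dominant[dominant == f].index
    scale = pd.Series(0.0, index=items.index)
    for m in members:
        scale += items[m] if loadings.loc[m, f] > 0 else 4 - items[m]
    scales[f"factor_{f + 1}"] = scale
print(pd.DataFrame(scales).describe().round(2))
```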
The descriptive statistics for each of the six factor variables can be found in Table 1 and, as can be seen, each had an acceptable reliability coefficient. Other religious and spirituality identity variables-In addition to the six religious/spiritual factor variables described in the previous section, several other religious variables were assessed:
• Church Attendance in the past year (a single 6-point Likert scale item ranging from never [0] to more than one time per week [5]).
• Current Religious Beliefs (a five-point variable broken down as follows: 1) I belong to a religion, 2) I believe in God, but don't belong to an organized religion, 3) I believe in the spiritual, but not religion or God, 4) I am an agnostic, and 5) I am an atheist).
• Religious Support (an aggregate score of three 4-point Likert scale items [0-3] measuring support from a specific church/religious group, support from a clergy member, and support from someone who is an active participant in organized religion; the potential range of this scale was 0 to 9, with a score of 9 indicating high levels of religious support).
• Ever Attended an LGB-Positive Religious Organization (coded as a binary dummy variable with 0=No and 1=Yes).
• Participants were also asked what religion they currently identified with today. For analysis purposes this question was recoded into two different binary dummy variables: currently identify as Christian (1=Yes, 0=No) and currently identify with any established religion (1=Yes, 0=No).
--- Data Analysis Plan In addition to running frequencies and descriptive statistics for all identity variables of interest, the exploratory multivariate analyses conducted for this study began with a series of correlational analyses assessing the relationships between the demographic variables and the bisexual, political, and religious/spiritual identity measures. After determining which demographic variables could potentially influence subsequent analyses, we then ran partial correlations to control for any statistically significant demographic covariates uncovered during the previous wave of analyses. We relied on the standard .05 cutoff to determine the statistical significance of all the above-mentioned results, and also used a .10 cutoff to identify results that approached a trend level of significance.
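Since the analysis plan above hinges on partial correlations with demographic covariates, the following Python sketch shows one standard way to compute them: correlate the residuals of the two focal measures after regressing out the covariates. The variable names are hypothetical stand-ins for the study's measures, and the pearsonr p-value shown ignores the degrees of freedom spent on the covariates, as noted in the comments.

```python
import numpy as np
import pandas as pd
from scipy import stats

def partial_corr(df, x, y, covars):
    """Partial correlation of x and y controlling for covars: correlate
    the residuals left after regressing each focal variable on covars."""
    Z = np.column_stack([np.ones(len(df)), df[covars].to_numpy(float)])
    resid = {}
    for v in (x, y):
        beta, *_ = np.linalg.lstsq(Z, df[v].to_numpy(float), rcond=None)
        resid[v] = df[v].to_numpy(float) - Z @ beta
    # Caveat: pearsonr's p-value does not subtract the covariate degrees
    # of freedom, so it is slightly anti-conservative here; dedicated
    # partial-correlation routines adjust for this.
    return stats.pearsonr(resid[x], resid[y])

# Hypothetical data mimicking the study's structure (n = 120).
rng = np.random.default_rng(1)
data = pd.DataFrame({
    "outness": rng.normal(size=120),
    "spirituality": rng.normal(size=120),
    "age": rng.normal(32, 9, 120),
    "education": rng.integers(1, 7, 120),
    "sex": rng.integers(0, 2, 120),
})

r, p = partial_corr(data, "outness", "spirituality", ["age", "education", "sex"])
print(f"partial r = {r:.3f}, p = {p:.3f}")  # p < .05 significant, p < .10 trend
```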
--- Results --- Basic Demographic and Identity Information We present the descriptive statistics for all of the continuous demographic, political identity, bisexual identity, and religious/spiritual identity variables in Table 1. As noted previously, 55.8% of the bisexual participants in this current study were female. Racially, the participants were predominately White (n = 81; 67.5%), with 13 (10.8%) identifying as mixed race, twelve (10%) as Hispanic, seven (5.8%) as Black, four (3.3%) as Native American, and three (2.5%) as Asian. As can be seen in Table 1, the sample had an average age of 32, was highly educated (only three participants had less than a high school education; 83% had at least some college or more), and was generally only somewhat satisfied with its standard of living. Intercorrelations among the demographic variables uncovered several relationships involving age. Age was positively correlated with higher levels of education (r = .388, p = .0001) and showed trend-level positive correlations with being White (r = .164, p = .074), being male (r = .168, p = .067), and being more satisfied with one's standard of living (r = .15, p = .10). Bisexual identity-Seventy-two percent (n = 60) of the bisexual participants in the current study reported that they found themselves more attracted to the same sex, 19% (n = 23) found themselves more attracted to the other sex, and 21% (n = 25) were equally attracted to both men and women. The bisexual participants in this current study scored in the middle of both the LGB community consciousness and self-esteem measures, and they tended to believe that being bisexual was not a choice (scoring on the low end of this particular measure). Study participants also tended to score on the low end regarding how open (or "out") they were regarding their bisexual orientation to their family, friends, and coworkers. Political Identity-Given that the sample was drawn predominantly from Northern California, it comes as no surprise that the majority of the research participants identified as Democrats (n = 66; 55%) and leaned more toward the liberal end of the political spectrum. Fifteen of the participants identified as Independent (12.5%), ten as belonging to the Green Party (8.3%), eight as Republican (6.7%), and four as having a "Mixed" political affiliation (3.3%). While ten participants identified as having an "Other" political affiliation (i.e., Libertarian, Peace and Freedom, "Other"), seven participants (5.8%) identified as having no political affiliation at all. Religious/Spiritual Identity-Over 78% of the bisexual participants in the current study (n = 94) did not currently identify with an established religion. Of the 22% who did, eighteen (15% of the total sample) identified as Christian (with eleven Protestants, four Catholics, and three Christians of unspecified denomination), seven (5.8%) identified as belonging to a non-Western religion (i.e., Buddhist, Pagan, or Wiccan), and only one bisexual individual identified as Jewish. Interestingly, while there was not much current involvement in established religion, when dealing with issues of belief only 7.5% of the sample identified as atheist (n = 9) and only 5.8% identified as agnostic (n = 7). Over 84% of the bisexual individuals surveyed reported some level of religious and/or spiritual belief: 21.7% noted that they belong to an established religion (n = 26); 34.2% (n = 41) believed in God but did not belong to an organized religion; and 28.3% (n = 34) believed in the spiritual, but not religion or God. Seventeen participants (14.2%) noted that they had belonged to an LGB-positive religious organization at some point in their lives. --- Determining Significant Covariates for Subsequent Analyses The next step in the multivariate analyses was to determine whether any of the demographic variables of interest needed to be used as covariates in the subsequent analyses of the bisexual, political, and religious/spiritual identity measures. For the measures of political identity, correlational analyses showed statistically significant relationships between sex and political view (r = -.30, p = .001), level of education and political view (r = .225, p = .014), and level of education and being a Democrat (r = .21, p = .021). These results indicate that the female participants tended to be more politically liberal, whereas the male participants tended to be more politically conservative.
Additionally, those with higher levels of education tended to identify as Democrats and leaned more towards the liberal end of the political spectrum. There were no statistically significant relationships between the demographic variables of race, age, or satisfaction with standard of living and the measures of political identity. For the measures of bisexual identity, correlational analyses showed statistically significant relationships between sex and choice (r = -.201, p = .028), and between sex and the age at which the participants decided they were bisexual (r = -.277, p = .002). These results indicate that women tended to view their sexual orientation as more of a choice than men, and women tended to be older than men before deciding that they were bisexual. Level of education was significantly correlated with the age at which the participant first told someone else that they were bisexual (r = .233, p = .01) and the age at which the participant decided they were bisexual (r = .194, p = .034). These results indicate that participants with higher levels of education tended to be older before deciding that they were bisexual and then telling others about their sexual orientation. The participant's age was significantly related to LGB self-esteem (r = -.183, p = .046) and the age they first told someone else that they were bisexual (r = .271, p = .003), while also showing a trend level of significance with the age that they decided they were bisexual (r = .15, p = .10). These results indicate that older bisexual study participants tended to have lower levels of self-esteem regarding their sexual orientation and that they tended to wait until they were older before disclosing their bisexuality to others or coming out as bisexual to themselves. There were no statistically significant results for either the race or the satisfaction with standard of living variables. For the measures of religious and spiritual identity, age was significantly (inversely) correlated with having alternate religious beliefs (r = -.204, p = .026) and positively associated with attending religious services (r = .190, p = .037). It showed trend levels of significance with currently identifying as belonging to an established religion (r = .161, p = .078), ever belonging to an LGB-positive religious organization (r = .153, p = .096), and currently identifying as Christian (r = .168, p = .066). Sex was significantly correlated with viewing religion as oppression (r = -.236, p = .009), having alternate religious beliefs (r = -.178, p = .051), attending worship services (r = .181, p = .048), and identifying as Christian (r = .237, p = .009). Level of education was significantly related to viewing religion as oppression (r = .212, p = .02), atheism (r = -.238, p = .009), and religious support (r = .188, p = .04), while showing trend levels of significance with church attendance (r = .160, p = .08) and having alternate religious beliefs (r = -.173, p = .059). For the bisexuals participating in the current study, these results indicate that being older is related to a lower likelihood of having alternate religious beliefs but a higher likelihood of belonging to an established religion, currently identifying as Christian, having once belonged to an LGB-positive religious organization, and attending church more often. Given the direction of these correlations, being female was significantly related to viewing religion as oppression and having alternate religious beliefs, whereas being male was related to attending worship services more often and currently identifying as Christian.
A higher level of education was related to viewing religion as oppression, not identifying as an atheist, receiving more religious support, attending worship services more often, and not having alternate religious beliefs. Being White was only significantly correlated with higher levels of spirituality (r = .222, p = .015) and with viewing religion as oppression (r = .185, p = .044), while standard of living was only correlated at the trend level of significance with being an atheist (r = -.153, p = .095). As a result of the correlational analyses discussed above, we determined that the demographic variables of age, education, and sex would be used as covariates in the subsequent analyses. --- Bisexual, Political and Religious/Spiritual Identity Analyses The partial correlation analyses of selected bisexual and political identity measures by selected religious/spiritual identity measures, controlling for the statistically significant covariates of age, level of education, and sex, are presented in Table 2. There were no statistically significant results for being a Democrat, religion as socially important, being an atheist, currently being Christian, ever attending an LGB-positive religious organization, or currently belonging to an established religion; therefore, these variables were dropped from the resulting analysis. However, we uncovered multiple statistically significant and trend-level findings. Participants who viewed bisexuality as a choice had higher levels of religious/spiritual belief. Those who were older when they decided that they were bisexual were more likely to view religion as oppression and to have received higher levels of religious support. Those who were older when they began disclosing their bisexuality to others were less likely to have alternative religious beliefs. Those who scored higher on the measure of community consciousness showed higher levels of religiosity and higher levels of spirituality, and were more likely to have alternative religious beliefs. Those who scored higher on the measure of LGB self-esteem showed higher levels of spirituality and were more likely to have alternative religious beliefs. Those bisexual participants who were more open about their sexual orientation showed higher levels of both religiosity and spirituality, were more likely to have alternate religious beliefs, and were less likely to attend church regularly. Finally, those with a more liberal political view showed higher levels of spirituality and were also more likely to have alternate religious beliefs. We ran additional analyses assessing whether the bisexual participants were more attracted to same-sex partners, to other-sex partners, or to both sexes equally. We first correlated the binary attraction target variables with the demographic variables (race, age, education, sex, and satisfaction with standard of living) to determine whether any of the demographic variables needed to be included as covariates in the subsequent analyses. Those bisexual participants who identified as being more attracted to the same sex tended to be older (r = .151, p = .099), had significantly higher levels of education (r = .183, p = .045), were more likely to be male (r = .212, p = .02), and tended to be more satisfied with their standard of living (r = .197, p = .031).
Those bisexual participants who identified as being more attracted to the other sex tended to be less satisfied with their standard of living (r = -.153, p = .096), whereas those who identified as being equally attracted to both men and women were more likely to be younger (r = -.207, p = .023), to have lower levels of education (r = -.178, p = .05), and to be female (r = -.167, p = .068). Although attraction was not related to race, these results indicated that we needed to use the demographic variables of age, education, sex, and satisfaction with standard of living as covariates in the partial correlational analyses to follow. Table 3 presents the partial correlations of attraction target by selected bisexual, political, and religious/spiritual identity measures, controlling for the statistically significant covariates of age, education, sex, and satisfaction with standard of living. While many of the identity variables of interest were dropped from the resulting table due to a lack of statistically significant findings, we nonetheless observed multiple statistically significant results. Those bisexual participants who self-reported that they were more attracted to members of the same sex were less likely to view their sexual orientation as a choice, were younger when they decided that they were bisexual, were more open about their sexual orientation with their friends, family, and co-workers, were less likely to view religion as being socially important, and were more likely to score higher on the belief statement. Those bisexual participants who self-reported that they were more attracted to members of the other sex were more likely to view their sexual orientation as a choice, were older when they decided that they were bisexual, were less open about their sexual orientation, and showed higher levels of spirituality. Those bisexuals who reported that they were equally attracted to both men and women were more likely to see bisexuality as a choice and were more likely to view religion as being socially important. --- Discussion Relying on the feminist theoretical notions that the personal is political and that individuals are the experts of their own experiences (Unger, 2001), that the environment impacts an individual's choices (Cosgrove & McHugh, 2000), and that the multiple identities found among an individual's characteristics should not always be considered separately (Cole, 2009), the present study is one of the first to quantitatively explore the connections between bisexual, political, and religious/spiritual identities. Relying upon a sizeable N of self-identified male and female bisexuals, this exploratory, archival secondary data analysis uncovered a number of significant findings. Consistent with long-standing assertions that the experiences of bisexual individuals may be heavily influenced by their gendered experience, women were much more likely to experience their bisexual attractions as a choice and to come to self-identify at later ages, perhaps reflecting assertions that women who are attracted to the same gender are more likely to experience perceived shifts in their identities and attractions (consistent with emerging work on sexual fluidity in women).
Given the historical marginalization of women in many branches of Christianity, it is perhaps not surprising that more liberal political views, endorsement of alternative religious beliefs, and a greater perception of religion as an oppressive force were more common among the bisexual women in the sample. Not surprisingly, the experience of religion as oppressive was linked to an older age at first self-identification as bisexual, while alternative (LGB-affirming) religious beliefs were linked to earlier disclosure, higher community consciousness, higher self-esteem, and more liberal political views, suggesting a possible buffering effect of exposure to more LGB-affirming experiences in one's environment against the challenges of bisexual identity development. Bisexual individuals who were predominantly attracted to those of the other sex endorsed items that suggested a more complex identity development process in terms of lower degrees of outness, older age at self-identification, and greater perception of their orientation as a choice. In contrast, those who were primarily attracted to the same sex were more likely to experience their bisexuality as innate, which was associated with earlier markers of internal identity development and the greater disclosure that may come with a pattern of attractions more similar to that of gay- and lesbian-identified peers. Those with equal attraction to men and women demonstrated their own unique pattern, with a tendency to endorse their orientation as a choice (similar to those with primarily other-sex attractions) yet no clear patterns in terms of other identity variables, perhaps indicating the diverse experiences of this under-researched subgroup. --- Study Limitations and Suggestions for Future Research As with nearly all research, this study has several methodological limitations that should be taken into account in subsequent research projects on the religious and spiritual lives of bisexuals. Please note that the limitations found within the current study are presented along with suggestions for future social scientific research that address each of the drawbacks listed below. Racial minorities were not well represented, nor is the study inclusive of non-Western religious experiences-Racial/ethnic minorities were not well represented in the current study, as almost 68% of the sample (n = 81) self-identified as White. While this is most likely a demographic artifact of collecting the data primarily from Northern California, future research in this area needs to be more sensitive to the inclusion of racial/ethnic minorities. While we are pleased to note that the literature on LGBT people of faith continues to expand, the majority of the research (both qualitative and quantitative) conducted to date focuses primarily on Christianity and to a lesser extent on Judaism. Future research should attempt to expand the study of LGBT religiosity and spirituality to non-Western religions such as Buddhism, Daoism, Hinduism, and Islam, and even to Neo-Pagan religions such as Wicca and Shamanism. Non-random sampling-Historically, it has been extremely difficult to conduct research using a representative sample of gay men and lesbians (Gonsiorek, 1991). An unknown subset of the gay and lesbian population is not open about their sexual orientation and is unwilling (or unable) to volunteer to participate in social scientific research.
Gonsiorek (1991) also points out that previous research has shown that sexual minorities who do volunteer for psychological research are not always representative of the larger LGBT population. Because of these inherent difficulties, random sampling and/or random selection are rarely viable options in social scientific research studies of the LGBT community. Care has been taken, however, to acknowledge that any generalizations made from this research may not necessarily apply to bisexual individuals as a population. The general nature and sheer variety of non-probability sampling techniques used by Herek, Glunt, and their colleagues have ensured that, while the sample collected was not a probability sample, it was probably one of the stronger non-probability samples of GLB individuals ever collected during the course of psychological research. Future research that relies on such large datasets is imperative to enable us to continue to increase our understanding of the role of faith in the lives of LGBT individuals. There are drawbacks to using and relying on someone else's data-While the dataset utilized for this research project was one of the largest and most complete religious and spiritual datasets ever compiled on LGB individuals to date, the data contained gaps that we were not able to overcome in the study design and secondary data analyses described here, including the fact that the scale items compiled by Glunt do not correspond with any established measures of religiosity or spirituality. Additionally, it is important to note that correlation is not causation: while the relationships uncovered here are fascinating and warrant future exploration, additional research using diverse methodological approaches is needed to establish causality between the variables described here. --- Conclusion Despite the study limitations noted above, this current project had multiple strengths and advances the current literature on bisexuality in several unique ways. First, it introduces a feminist theoretical framework to the psychological study of bisexual religiosity and spirituality. Second, it expands our understanding of the religious and spiritual lives of bisexuals. Third, it expands our understanding of the relationships between political outlook, sexual orientation, and religiosity/spirituality. Finally, it looks across multiple identities (bisexual, political, religious/spiritual) to better understand the multiplicity and intersectionality within bisexual lives.
Some scholars add authors to their research papers or grant proposals even when those individuals contribute nothing to the research effort. Some journal editors coerce authors to add citations that are not pertinent to their work and some authors pad their reference lists with superfluous citations. How prevalent are these types of manipulation, why do scholars stoop to such practices, and who among us is most susceptible to such ethical lapses? This study builds a framework around how intense competition for limited journal space and research funding can encourage manipulation and then uses that framework to develop hypotheses about who manipulates and why they do so. We test those hypotheses using data from over 12,000 responses to a series of surveys sent to more than 110,000 scholars from eighteen different disciplines spread across science, engineering, social science, business, and health care. We find widespread misattribution in publications and in research proposals with significant variation by academic rank, discipline, sex, publication history, coauthors, etc. Even though the majority of scholars disapprove of such tactics, many feel pressured to make such additions while others suggest that it is just the way the game is played. The findings suggest that certain changes in the review process might help to stem this ethical decline, but progress could be slow.
Introduction The pressure to publish and to obtain grant funding continues to build [1-3]. In a recent survey of scholars, the number of publications was identified as the single most influential component of their performance review, while the journal impact factor of their publications and order of authorship came in second and third, respectively [3]. Simultaneously, rejection rates are on the rise [4]. This combination, the pressure to increase publications coupled with the increased difficulty of publishing, can motivate academics to violate research norms [5]. Similar struggles have been identified in some disciplines in the competition for research funding [6]. For journals and the editors and publishers of those journals, impact factors have become a mark of prestige and are used by academics to determine where to submit their work, who earns tenure, and who may be awarded grants [7]. Thus, the pressure to increase a journal's impact factor score is also increasing. With these incentives, it is not surprising that academia is seeing authors and editors engaged in questionable behaviors in an attempt to increase their publication success. There are many forms of academic misconduct that can increase an author's chance for publication, and some of the most severe cases include falsifying data, falsifying results, opportunistically interpreting statistics, and fake peer review [5, 8-12]. For the most part, these extreme examples seem to be relatively uncommon; for example, only 1.97% of surveyed academics admit to falsifying data, although this probably understates the actual practice, as these respondents report higher numbers of their colleagues misbehaving [10]. Misbehavior regarding attribution, on the other hand, seems to be widespread [13-18]; for example, in one academic study, roughly 20% of survey respondents had experienced coercive citation (when editors direct authors to add citations to articles from the editors' journals even though there is no indicated lack of attribution and no specific articles or topics are suggested by the editor) and over 50% said they would add superfluous citations to a paper being submitted to a coercive journal in an attempt to increase its chance for publication [18]. Honorary authorship (the addition of individuals to manuscripts as authors, even though those individuals contribute little, if anything, to the actual research) is a common behavior in several disciplines [16, 17]. Some scholars pad their references in an attempt to influence journal referees or grant reviewers by citing prestigious publications or articles from the editor's journal (or the editor's vita) even if those citations are not pertinent to the research. While there is little systematic evidence that such a strategy influences editors, the perception of its effectiveness is enough to persuade some scholars to pad [19, 20]. Overall, it seems that many scholars consider authorship and citation to be fungible attributes, components of a project one can alter to improve their publication and funding record or to increase journal impact factors (JIFs). Most studies examining attribution manipulation focus on the existence and extent of misconduct and typically address a narrow section of the academic universe; for example, there are numerous studies measuring the amount of honorary authorship in medicine, but few in engineering, business, or the social sciences [21-25].
And while coercive citation has been documented in some business fields, less is known about its prevalence in medicine, science, or engineering. In addition, the pressure to acquire research funding is nearly as intense as publication pressure, and in some disciplines funding is a major component of performance reviews. Thus, grant proposals are also viable targets of manipulation, but research into that behavior is sparse [2,6]. However, if grant distributions are swayed by manipulation, then resources are misdirected and promising areas of research could be neglected. There is little disagreement with the sentiment that this manipulation is unethical, but there is less agreement about how to slow its use. Ultimately, to reverse this decline of ethics we need to better understand the factors that drive attribution manipulation, and that is the focus of this manuscript.

Using more than 12,000 responses to surveys sent to more than 110,000 academics from disciplines across the academic universe, this study examines the prevalence and systematic nature of honorary authorship, coercive citation, and padded citations in eighteen different disciplines in science, engineering, medicine, business, and the social sciences. In essence, we do not just want to know how common these behaviors are, but whether there are certain types of academics who add authors or citations, or are coerced, more often than others. Specifically, we ask what the prevailing attributes are of scholars who manipulate, whether willingly (e.g., padded citation) or not (e.g., coercive citation), and we consider attributes like academic rank, gender, discipline, level of co-authorship, etc. We also look into the reasons scholars manipulate and ask their opinions on the ethics of this behavior. In our opinion, a deeper understanding of manipulation can shed light on potential ways to reduce this type of academic misconduct.

--- Background

As noted in the introduction, the primary component of performance reviews, and thus of individual research productivity, is the number of articles published by an academic [3]. This number depends on two things: (i) the number of manuscripts on which a scholar is listed as an author and (ii) the likelihood that each of those manuscripts will be published. The pressure to increase publications puts pressure on both of these components. In a general sense, this can be beneficial for society as it creates incentives for individuals to work harder (to increase the quantity of research projects) and to work better (to increase the quality of those projects) [6]. There are similar pressures and incentives in the application for, and distribution of, research grants, as many disciplines in science, engineering, and medicine view the acquisition of funding as both a performance measure and a precursor to publication, given the high expense of the equipment and supplies needed to conduct research [2,6]. But this publication and funding pressure can also create perverse incentives.

--- Honorary authorship

Working harder is not the only means of increasing an academic's number of publications. An alternative approach is known as "honorary authorship," and it refers specifically to the inclusion of individuals as authors on manuscripts, or grant proposals, even though they did not contribute to the research effort. Numerous studies have explored the extent of honorary authorship in a variety of disciplines [17,20,[21][22][23][24][25].
The motivation to add authors can come from many sources; for instance, an author may be directed to add an individual who is a department chair, lab director, or some other administrator with power, or they might voluntarily add such an individual to curry favor. Additionally, an author might create a reciprocal relationship, adding an honorary author to their own paper with the understanding that the beneficiary will return the favor on another paper in the future, or an author may simply do a friend a favor and include their name on a manuscript [23,24]. In addition, if the added author has a prestigious reputation, this can also increase the chances of the manuscript receiving a favorable review. Through these means, individuals can raise the expected value of their measured research productivity (publications) even though their actual intellectual output is unchanged.

Similar incentives apply to grant funding. Scholars who have a history of repeated funding, especially funding from the more prestigious funding agencies, are viewed favorably by their institutions [2]. Of course, grants provide resources, which increase an academic's research output, but there are also direct benefits from funded research accruing to the university: overhead charges, equipment purchases that can be used for future projects, graduate student support, etc. Consequently, "rainmakers" (scholars with a record of acquiring significant levels of research funding) are valued for that skill. As with publications, the amount of research funding received by an individual depends on the number and size of the proposals put forth and the probability of each getting funded. This metric creates incentives for individuals to get their names on more proposals, on bigger proposals, and to increase the likelihood that those proposals will be successful. That pressure opens the door to the same sorts of misattribution found in manuscripts: honorary authorship can increase the number of grant proposals that include an author's name, and adding a scholar with a prestigious reputation as an author may increase the chances of being funded.

As we investigate the use of honorary authorship we do not focus solely on its prevalence; we also ask whether there is a systematic nature to its use. To begin to understand whether systematic differences exist in the use of honorary authorship, the first set of empirical questions to be investigated here is: who is likely to add honorary authors to manuscripts or grant proposals? First, it makes sense that academics who are early in their career have less funding, lack the protection of tenure, and thus need publications more than someone with an established reputation. Scholars of lower rank and without tenure may be more likely to add authors, whether under pressure from senior colleagues or in their own attempt to sway reviewers. Tenure and promotion depend critically on a young scholar's ability to establish a publication record, secure research funding, and engender support from their senior faculty. Because they lack the protection of rank and tenure, refusing to add someone could be risky. Of course, senior faculty members also have goals and aspirations that can be challenging, but junior faculty have far more on the line in terms of their career.
Second, we expect research faculty to be more likely to add honorary authors, especially to grant proposals, because they often occupy positions that are heavily dependent on a continued stream of research success, particularly research funding. Third, we expect that female researchers may be less able to resist pressure to add honorary authors because women are underrepresented in faculty leadership and administrative positions in academia and lack political power [26,27]. It is not just their own lack of position that matters; the dearth of other women among senior faculty and in leadership positions leaves women with fewer mentors, senior colleagues, and administrators with similar experiences to help them navigate these political minefields [28,29]. Fourth, because adding an author waters down the credit received by each existing author, we expect manuscripts that already have several authors to be less resistant to additional "credit sharing." Simply put, if credit is equally distributed across authors, then adding a second author cuts your perceived contribution in half, but adding a sixth author reduces your contribution by only about 3 percentage points (from 20% to roughly 17%); see the worked example below. Fifth, because academia is so competitive, the decisions of some scholars have an impact on others in the same research population. If your research interests are in an area in which honorary authorship is common and considered to be effective, then a promising counter-strategy to the manipulation undertaken by others is to practice honorary authorship yourself. This leads us to predict that the obligation to add honorary authors to grant proposals and/or manuscripts is likely to concentrate more heavily in some disciplines. In other words, we do not expect it to be practiced uniformly or randomly across fields; instead, some disciplines will be heavily engaged in adding authors and others less so. In general, we have no firm predictions as to which disciplines are more likely to practice honorary authorship; we predict only that its practice will be lumpy. However, there may be reasons to suspect some patterns; for example, some disciplines, such as science, engineering, and medicine, are much more heavily dependent on research funding than others, such as the social sciences, mathematics, and business [2]. For example, over 70% of the NSF budget goes to science and engineering and about 4% to the social sciences. Similarly, most of the NIH budget goes to medical researchers and a smaller share to other disciplines [30]. Consequently, we suspect that the disciplines that most prominently add false investigators to grant proposals are more likely to be in science, engineering, and the medical fields. We do not expect that division to be as prominent in the addition of authors to manuscripts submitted for publication.

There are several ways scholars may internalize the pressure to perform, which can lead to different reasons why a scholar might add an honorary author to a paper. A second goal of this paper is to study who might employ these different strategies. Thus, we asked authors for the reasons they added honorary authors to their manuscripts and grants; for example, was this person in a position of authority, or a mentor, or did they have a reputation that increased the chances for publication or funding? Using these responses as dependent variables, we then ask whether they are related to the professional characteristics of the scholars in our study.
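The credit-sharing arithmetic in the fourth hypothesis can be made explicit. Assume, purely for illustration (the equal-split rule is our simplification, not something measured in the surveys), that perceived credit is divided equally among the k listed authors:

```latex
\mathrm{share}(k) = \frac{1}{k}, \qquad
\Delta(k) = \mathrm{share}(k) - \mathrm{share}(k+1)
          = \frac{1}{k} - \frac{1}{k+1}
          = \frac{1}{k(k+1)} .
```

Going from one author to two costs Delta(1) = 1/2, i.e., 50 percentage points, while going from five to six costs only Delta(5) = 1/30, about 3.3 percentage points (20% down to roughly 16.7%). Because the marginal cost of sharing credit shrinks on the order of 1/k^2, crowded bylines should offer the least resistance to one more name.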
The hypotheses to be tested mirror the questions posed for honorary authors. We expect junior faculty, research faculty, female faculty, and projects with more co-authors to be more likely to add coauthors to manuscripts and grants than full professors, male faculty, and projects with fewer co-authors. Moreover, we expect the practice to differ across disciplines. Focusing specifically on honorary authorship in grant proposals, we also explore the possibility that the use of honorary authorship differs across funding opportunities and agencies.

--- Coercive citation

Journal rankings matter to editors, editorial boards, and publishers because rankings affect subscriptions and prestige. In spite of their shortcomings, impact factors have become the dominant measure of journal quality. These measures include self-citation, which creates an incentive for editors to direct authors to add citations even if those citations are irrelevant, a practice called "coercive citation" [18,27]. This behavior has been systematically measured in business and social science disciplines [18]. Additionally, researchers have found that coercion sometimes involves more than one journal; editors have gone as far as organizing "citation cartels," in which a small set of editors recommend that authors cite articles from each other's journals [31].

When editors decide to coerce, whom might they target; who is most likely to be coerced? Assuming editors balance the costs and benefits of their decisions, a parallel set of empirical hypotheses emerges. Returning to the various scholar attributes, we expect editors to target lower-ranked faculty members because they may have a greater incentive to cooperate: additional publications have a direct effect on their future cases for promotion and, for assistant professors, on their chances of tenure as well. In addition, because they have less political clout and are less likely to openly complain about coercive treatment, lower-ranked faculty members are more likely to acquiesce to the editor's request. We predict that editors are more likely to target female scholars because female scholars hold fewer positions of authority in academia and may lack the institutional support of their male counterparts. We also expect the number of coauthors to play a role, but contrary to our honorary authorship prediction, we predict editors will target manuscripts with fewer authors rather than more. The rationale is simple: authors do not like to be coerced, and when an editor requires additional citations on a manuscript having many authors, the editor makes a larger number of individuals aware of the coercive behavior, whereas coercing a sole-authored paper upsets a single individual. Notice that we are hypothesizing the opposite sign in this model than in the honorary authorship model; if authors are making the decision to add honorary authors, then they prefer to add people to articles that already have many co-authors, but if editors are making the decision, then they prefer to target manuscripts with few authors to minimize the potential pushback. As was true in the model of honorary authorship, we expect the practice of coercion to be more prevalent in some disciplines than others.
If one editor decides to coerce authors, and if that strategy is effective or is perceived to be effective, then there is increased pressure for other editors in the same discipline to coerce as well, just to maintain their ranking: if one journal climbs in the rankings, others, who do nothing, fall. Consequently, coercion begets additional coercion and the practice can spread. But a journal climbing the rankings in one discipline has little impact on other disciplines, and thus we expect to find coercion practiced unevenly; prevalent in some disciplines, less so in others. Finally, as a sub-conjecture to this hypothesis, we expect coercive citation to be more prevalent in disciplines for which journal publication is the dominant measure for promotion and tenure; that is, disciplines that rely less heavily on grant funding. This means we expect the practice to be scattered, and lumpy, but we also expect relatively more coercion in the business and social science disciplines.

We are also interested in the types of journals that have been reported to coerce, and to explore those issues we gather data using the journal as the unit of observation. As above, we expect differences between disciplines, and we expect those differences to mirror the discipline differences found in the author-based data set. We also expect a relationship between journal ranking and coercion because the costs and benefits of coercion differ for more and less prestigious journals. Consider the benefits of coercion. The very highest-ranked journals have high impact factors; consequently, to rise another position in the rankings requires a significant increase in citations, which would require a lot of coercion. Lower-ranked journals, however, might move up several positions with relatively few coerced citations. Furthermore, consider the cost of coercion. Elite journals possess valuable reputations, and risking them by coercing might be foolhardy; journals deep down in the rankings have less at stake. Given this logic, it seems likely that lower-ranked journals are more likely to have practiced coercion.

We also look to see if publishers might influence the coercive decision. Journals are owned and published by many different types of organizations, the most common being commercial publishers, academic associations, and universities. A priori, commercial publishers, being motivated by profits, are expected to be more interested in subscriptions and sales, so the return to coercion might be higher for that group. On the other hand, the integrity of a journal might be of greater concern to non-profit academic associations and university publishers, but we do not see a compelling reason to suppose that universities and academic associations will behave differently from one another. Finally, we control for some structural differences across journals by including each journal's average number of cites per document and the total number of documents it publishes per year.

--- Padded citations

The third and final type of attribution manipulation explored here is the padded reference list. Because some editors coerce scholars to add citations to boost their journals' impact factor scores, and because this practice is known by many scholars, there is an incentive for scholars to add superfluous citations to their manuscripts prior to submission [18].
Provided there is an incentive for scholars to pad their reference lists in manuscripts, we wondered if grant writers would be willing to pad reference lists in grant proposals in an attempt to influence grant reviewers. As with honorary authorship, we suspect there may be a systematic element to padding citations. In fact, we expect the behavior of padding citations to parallel honorary author behavior. Thus we predict that scholars of lower rank (and therefore without tenure) and female scholars will be more likely to pad citations to assuage an editor or sway grant reviewers. Because the practice also encompasses a feedback loop (one way to compete with scholars who pad their citations is to pad your own), we expect the practice to proliferate in some disciplines. The number of coauthors is not expected to play a role, but we do expect knowledge of other types of manipulation to be important. That is, we hypothesize that individuals who are aware of coercion, or who have been coerced, are more likely to pad citations. With grants, we similarly expect individuals who add honorary authors to grant proposals to also be likely to pad citations in grant proposals. Essentially, the willingness to misbehave in one area is likely related to misbehavior in other areas.

--- Methods

The data collection method of choice for this study is the survey, because it would be difficult to determine whether someone added honorary authors or padded citations prior to submission without asking that individual. As explained below, we distributed surveys in four waves over five years. Each survey, its cover email, and its distribution strategy was reviewed and approved by the University of Alabama in Huntsville's Institutional Review Board. Copies of these approvals are available on request. We purposely did not collect data that would allow us to identify individual respondents. We test our hypotheses using these survey data and journal data. Given the complexity of the data collection, both survey and archival journal data, we begin by discussing our survey data and the variables developed from the survey. We then discuss our journal data and the variables developed there.

Over the course of a five-year period and using four waves of survey collection, we sent surveys, via email, to more than 110,000 scholars in total from eighteen different disciplines (medicine, nursing, biology, chemistry, computer science, mathematics, physics, engineering, ecology, accounting, economics, finance, marketing, management, information systems, sociology, psychology, and political science) from universities across the U.S. See Table 1 for details regarding the timing of survey collection. Survey questions and raw counts of the responses to those questions are given in S1 Appendix: Statistical methods, surveys, and additional results. Complete files of all of the data used in our estimates are in the S2, S3 and S4 Appendices.

[Table 1. Timing of the four survey waves; columns cover honorary authors in manuscripts, honorary authors in grant proposals, and padded citations in grant proposals. Four waves of surveys were sent to the 18 disciplines over a five-year period. The first wave (shaded orange) focused on coercive citation in business and the social sciences; some of these data were used in a published study on coercive citation [18]. The second wave (pink) was early in the spring of 2012 and surveyed the health care disciplines. The third wave (green) was distributed in the fall of 2012 and asked about honorary authorship in STEM disciplines and the social sciences. The fourth wave (shaded blue) filled in the rest of the data, collecting honorary authorship data from business and coercive citation data from the sciences.]

Potential survey recipients and their contact information (email addresses) were identified in three different ways. First, we obtained contact information for management scholars through the Academy of Management, using the annual meeting catalog. Second, for economists and physicians we used the membership services provided by the American Economic Association and the American Medical Association. Third, for the remaining disciplines we identified the top 200 universities in the United States using U.S. News and World Report's "National University Rankings" and hand-collected email addresses by visiting those university websites and copying contact information for individual faculty members in each of the disciplines. We also augmented the physician contact list by visiting the websites of the medical schools in these top 200 schools. With each wave of surveys, we sent at least one reminder to participate. The approximately 110,000 surveys yielded about 12,000 responses, for an overall response rate of about 10.5%. Response rates by discipline can be found in Table A in S1 Appendix.

Few studies have examined the systematic nature of honorary authorship and padded citation, and thus we developed our own survey items to address our hypotheses. Our survey items for coercive citation were taken from prior research on coercion [18]. All survey items and the response alternatives with raw data counts are given in S1 Appendix. The complete data are made available in S2-S4 Appendices.

Our first set of tests relates to honorary authorship in manuscripts and grants and is made up of several dependent variables, each related to the research question being addressed. We begin with the existence of honorary authorship in manuscripts. This dependent variable is composed of the answers to the survey question: "Have YOU felt obligated to add the name of another individual as a coauthor to your manuscript even though that individual's contribution was minimal?" Responses were in the form of yes and no, where "yes" was coded as 1 and "no" as 0. The next dependent variable addresses the frequency of this behavior, asking: "In the last five years HOW MANY TIMES have you added or had coauthors added to your manuscripts even though they contributed little to the study?" The final honorary authorship dependent variables deal with the reason for including an honorary author in manuscripts: "Even though this individual added little to this manuscript he (or she) was included as an author. The main reason for this inclusion was:" and the choices were that the honorary author is the director of the lab or facility used in the research, occupies a position of authority and can influence my career, is my mentor, is a colleague I wanted to help out, was included for reciprocity (I was included or expect to be included as a co-author on their work), has data I needed, has a reputation that increases the chances of the work being published, or had funding we could apply to the research. Responses were coded as 1 for the main reason given (only one reason could be selected as the "main" reason) and 0 otherwise.
Regarding honorary authorship in grant proposals, our first dependent variable addresses its existence: "Have you ever felt obligated to add a scholar's name to a grant proposal even though you knew that individual would not make a significant contribution to the research effort?" Again, responses were in the form of yes and no, where "yes" was coded as 1 and "no" as 0. The remaining dependent variables regarding honorary authorship in grant proposals address the reasons for adding honorary authors to proposals: "The main reason you added an individual to this grant proposal even though he (or she) was not expected to make a significant contribution was:" and the potential responses were that the honorary author is the director of the lab or facility used in the research, occupies a position of authority and can influence my career, is my mentor, is a colleague I wanted to help out, was included for reciprocity (I was included or expect to be included as a co-author on their work), has data I needed, has a reputation that increases the chances of the work being published, or was a person suggested by the grant reviewers. Responses were coded as 1 for the main reason given (only one reason could be selected as the "main" reason) and 0 otherwise.

Our next major set of dependent variables deals with coercive citation. The first coercive citation dependent variable was measured using the survey question: "Have YOU received a request from an editor to add citations from the editor's journal for reasons that were not based on content?" Responses were in the form of yes (coded as 1) and no (coded as 0). The next question deals with the frequency: "In the last five years, approximately HOW MANY TIMES have you received a request from the editor to add more citations from the editor's journal for reasons that were not based on content?"

Our final set of dependent variables from the survey data investigates padding citations in manuscripts and grants. The dependent variable that addresses an author's willingness to pad citations in manuscripts comes from the following question: "If I were submitting an article to a journal with a reputation of asking for citations to itself even if those citations are not critical to the content of the article, I would probably add such citations BEFORE SUBMISSION." Answers were on a Likert scale with five potential responses (Strongly Disagree, Disagree, Neutral, Agree, and Strongly Agree), where Strongly Disagree was coded as 1 and Strongly Agree as 5. The dependent variable for padding citations in grant proposals uses responses to the statement: "When developing a grant proposal I tend to skew my citations toward high impact factor journals, even if those citations are of marginal import to my proposal." Answers were on the same five-point Likert scale, coded in the same way.

To test our research questions, several independent variables were developed. We begin with the independent variables that cut across honorary authorship, coercive citation, and padded citations. The first is academic rank. We asked respondents their current rank: Assistant Professor, Associate Professor, Professor, Research Faculty, Clinical Faculty, and other. Dummy variables were created for each category, with Professor being the omitted category in our tests of the hypotheses.
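To make the coding concrete, the sketch below shows one way the survey responses described above could be mapped to the dependent variables (plus the gender dummy used later). It is a minimal sketch: the file name and column labels are hypothetical placeholders, not the labels used in the S2-S4 Appendices.

```python
import pandas as pd

# Hypothetical file and column names; the released data in the
# S2-S4 Appendices use their own labels.
df = pd.read_csv("survey_responses.csv")

# Binary DV: felt obligated to add an honorary author to a manuscript.
df["honorary_ms"] = (df["felt_obligated_author"] == "yes").astype(int)

# Count DV: times honorary authors were added in the last five years.
df["honorary_ms_count"] = pd.to_numeric(df["times_added"], errors="coerce")

# Main-reason dummies: exactly one reason could be marked as "main",
# so at most one of these equals 1 for each respondent.
for reason in ["director", "authority", "mentor"]:
    df[f"reason_{reason}"] = (df["main_reason"] == reason).astype(int)

# Ordered DV for padded citations: five-point Likert scale coded 1-5.
likert = {"Strongly Disagree": 1, "Disagree": 2, "Neutral": 3,
          "Agree": 4, "Strongly Agree": 5}
df["pad_before_submission"] = df["pad_item"].map(likert)

# Gender dummy as described in the text: male = 1, female = 0.
df["male"] = (df["gender"] == "male").astype(int)
```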
The second general independent variable is discipline: Medicine, Nursing, Accounting, Economics, Finance, Information Systems, Management, Marketing, Political Science, Psychology, Sociology, Biology, Chemistry, Computer Science, Ecology, Mathematics, Physics, and Engineering. Again, dummy variables were created for each discipline, but instead of omitting a reference category we include all disciplines and then constrain the sum of their coefficients to equal zero. With this approach, the estimated coefficients tell us how each discipline differs from the average level of honorary authorship, coercive citation, or padded citation across the academic spectrum [32]. We can conveniently identify three categories: (i) disciplines that are significantly more likely to engage in honorary authorship, coercive citation, or padded citation than the average across all disciplines, (ii) disciplines that do not differ significantly from that average, and (iii) disciplines that are significantly less likely than the average to engage in these practices. We test for potential gender differences with a dummy variable: male = 1, female = 0.

Additional independent variables were developed for specific research questions. In our tests of honorary authorship, there is an independent variable addressing the number of coauthors on a respondent's most recent manuscript. If the respondent stated that they had added an honorary author, they were asked: "Please focus on the most recent incidence in which an individual was added as a coauthor to one of your manuscripts even though his or her contribution was minimal. Including yourself, how many authors were on this manuscript?" Respondents who had not added an honorary author were asked to report the number of authors on their most recently accepted manuscript. We also include an independent variable regarding funding agencies: "To which agency, organization, or foundation was this proposal directed?" Again, for those who had added authors, we requested that they focus on the most recent proposal in which they used honorary authorship; those who responded that they had not practiced honorary authorship were asked where they sent their most recent proposal. Their responses include NSF, HHS, corporations, private nonprofits, state funding, other federal grants, and other grants. Regarding coercive citation, we included an independent variable for the number of co-authors on the respondent's most recent coercive experience; thus, if a respondent indicated they had been coerced, we asked: "Please focus on the most recent incident in which an editor asked you to add citations not based on content. Including yourself, how many authors were on this manuscript?" If a respondent indicated they had never been coerced, we asked them to state the number of authors on their most recently accepted manuscript.

Finally, we included control variables measuring the respondent's performance in, and exposure to, these arenas. For the analyses focusing on manuscripts we used acceptances: "Within the last five years, approximately how many publications, including acceptances, do you have?" The more someone publishes, the more opportunities they have to be coerced, add authors, or add citations; thus, scholars who have published more articles are more likely to have experienced coercion, ceteris paribus.
In our tests of grants we used two performance indicators: 1) "In the last five years approximately how many grant proposals have you submitted for funding?" and 2) "Approximately how much grant money have you received in the last five years? Please write your estimated dollars in box; enter 0 if zero."

We also investigate coercion using a journal-based dataset, Scopus, which contains information on more than 16,000 journals from these 18 disciplines [33]. It includes information on the number of articles published each year, the average number of citations per manuscript, the rank of the journal, the disciplines that most frequently publish in the journal, the publisher, and so forth. These data were used to help develop our dependent variable as well as our independent and control variables for the journal analysis. Our raw journal data are provided in S4 Appendix: Journal data.

The dependent variables in our journal analysis measure whether or not a specific journal was identified as one in which coercion occurred, and the frequency of that identification. Survey respondents were asked: "To track the possible spread of this practice we need to know specific journals. Would you please provide the names of journals you know engage in this practice?" Respondents were given a blank space to write in journal names. The majority of our respondents declined to identify journals where coercion has occurred; however, more than 1200 respondents provided journal names, and in some instances respondents provided more than one journal name. Among the population of journals in the Scopus database, 612 were identified by our survey respondents as journals that have coerced; some of these journals were identified several times. The first dependent variable is binary, coded as 1 if a journal was identified as one that has coerced and 0 otherwise. The frequency estimates use the count (the number of times a journal was named) as the dependent variable.

The independent variables measure various journal attributes, the first being discipline. The Scopus database identifies the discipline that most frequently publishes in any given journal, and that information was used to classify journals by discipline. Thus, if physics is the most common discipline to publish in a journal, it was classified as a physics journal. We look for a publisher effect using the publisher information in Scopus to create four categories: commercial publishers, academic associations, universities, and others (the omitted reference category). We also control for differing editorial norms across disciplines. First, we include the number of documents published annually by each journal. All else equal, a journal that publishes more articles has more opportunities to engage in coercion, and/or it interacts with more authors and is more likely to be reported in our sample. Second, we control for the average number of citations per article, which absorbs some of the overall differences in citation practices across disciplines.

Given the large number of hypotheses to be tested, we present a compiled list of the dependent variables in Table 2. This table names the dependent variables, describes how they were constructed, and lists the tables that present the estimated coefficients pertinent to those dependent variables. Table 2 is intended to give readers an outline of the arc of the remainder of the manuscript.
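Before turning to the results, here is a sketch of how the journal-level variables just described might be assembled. The file and column names are hypothetical placeholders, and the merge key would depend on how journal names were recorded:

```python
import pandas as pd

# Hypothetical files: one row per Scopus journal, and one row per
# respondent naming of a journal that has coerced.
scopus = pd.read_csv("scopus_journals.csv")
named = pd.read_csv("named_journals.csv")

# Count how many times each journal was named.
mentions = named["journal"].value_counts().rename("times_named")

# Attach counts to the journal frame; journals never named get 0.
journals = scopus.merge(mentions, left_on="title",
                        right_index=True, how="left")
journals["times_named"] = journals["times_named"].fillna(0).astype(int)

# Binary DV: named at least once as a journal that has coerced.
journals["coerced"] = (journals["times_named"] > 0).astype(int)
```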
--- Results

--- Honorary authorship in research manuscripts

Looking across all disciplines, 35.5% of our survey respondents report that they have added an author to a manuscript even though the contribution of that author was minimal. Fig 1 displays tallies of some raw responses to show how the use of honorary authorship, for both manuscripts and grants, differs across science, engineering, medicine, business, and the social sciences.

To begin the empirical study of the systematic use of honorary authorship, we start with the addition of honorary authors to research manuscripts. This is a logit model in which the dependent variable equals one if the respondent felt obligated to add an author to their manuscript "even though that individual's contribution was minimal." The estimates appear in Table 3. In brief, all of our conjectures are observed in these data. As we hypothesized above, the pressure on scholars to add authors who do not add substantially to the research project is more likely to be felt by assistant professors and associate professors relative to professors (the reference category). To understand the size of the effect, we calculate odds ratios (e^β) for each variable, also reported in Table 3. Relative to a full professor, being an assistant professor increases the odds of honorary authorship in manuscripts by 90%, being an associate professor increases those odds by 40%, and research faculty are twice as likely as a professor to add an honorary author. Consistent with our hypothesis, we found that females were more likely to add honorary authors, as the estimated coefficient on males was negative and statistically significant; the odds that a male feels obligated to add an author to a manuscript are 38% lower than for a female. As hypothesized, authors who already have several co-authors on a manuscript seem more willing to add another, consistent with our hypothesis that the decrement in individual credit diminishes as the number of authors rises. Overall, these results align with our fundamental thesis that authors are purposively deciding to deceive, adding authors when the benefits are higher and the costs lower.

Considering the addition of honorary authors to manuscripts, Table 3 shows that four disciplines are statistically more likely to add honorary authors than the average across all disciplines. Listing those disciplines in order of their odds ratios and starting with the greatest odds, they are: marketing, management, ecology, and medicine (physicians). There are five disciplines in which honorary authorship is statistically below the average; starting with the lowest odds ratio, they are: political science, accounting, mathematics, chemistry, and economics. Finally, the remaining disciplines, statistically indistinguishable from the average, are: physics, psychology, sociology, computer science, finance, engineering, biology, information systems, and nursing. At the extremes, scholars in marketing are 75% more likely to feel an obligation to add authors to a manuscript than the average across all disciplines, while political scientists are 44% less likely than the average to add an honorary author to a manuscript.
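As a sketch of the kind of estimation behind Table 3 (reusing the hypothetical survey frame from the Methods sketch, with n_coauthors and n_publications as stand-in column names), the model can be fit and its coefficients converted to the odds ratios discussed above. The Sum contrast imposes the sum-to-zero constraint on the discipline dummies described earlier:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Logit for the binary honorary-authorship outcome. Treatment coding
# makes full professors the reference category for rank; Sum coding
# makes each discipline coefficient a deviation from the grand mean
# (the omitted level's deviation is minus the sum of the others).
logit_ms = smf.logit(
    "honorary_ms ~ C(rank, Treatment(reference='Professor'))"
    " + C(discipline, Sum) + male + n_coauthors + n_publications",
    data=df,
).fit(disp=False)

# Odds ratios are e^beta: values above 1 raise the odds of adding an
# honorary author, values below 1 lower them.
ci = logit_ms.conf_int()
print(pd.DataFrame({
    "odds_ratio": np.exp(logit_ms.params),
    "ci_low": np.exp(ci[0]),
    "ci_high": np.exp(ci[1]),
}).round(3))
```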
Table 2. Dependent variables, how they were constructed, and the tables reporting their estimates.

Honorary Authorship: Manuscripts
- Added honorary author to manuscript: binary variable = 1 if the respondent has added an honorary author to a research manuscript in the last five years; = 0 otherwise (Table 3).
- Number of times added authors to manuscripts: count variable; the number of times the respondent has added honorary authors to manuscripts in the last five years (Table 4).

Honorary Authorship: Grant Proposals
- Added honorary author to grant proposal: binary variable = 1 if the respondent has added an honorary author to a grant proposal in the last five years; = 0 otherwise (Table 5).

Reasons Added Honorary Authors to Manuscripts
- Director: binary variable = 1 if the primary reason this honorary author was added to a manuscript was that the individual "was the Director of the lab or facility used in the research"; = 0 otherwise (Table 6).
- Authority: binary variable = 1 if the primary reason was that the individual "occupies a position of authority and can influence my career"; = 0 otherwise (Table 6).
- Mentor: binary variable = 1 if the primary reason was "this is my mentor"; = 0 otherwise (Table 6).

Reasons Added Honorary Authors to Grant Proposals
- Reputation: binary variable = 1 if the primary reason this honorary author was added to a grant proposal was that "their reputation increases the chances of receiving funding"; = 0 otherwise (Table 7).
- Director: binary variable = 1 if the primary reason was that the individual "was the Director of the lab or facility used in the research"; = 0 otherwise (Table 7).
- Authority: binary variable = 1 if the primary reason was that the individual "occupies a position of authority and can influence my career"; = 0 otherwise (Table 7).

Coercive Citations: Individual Data
- Existence of coercive citation: binary variable = 1 if the respondent was coerced by an editor to add superfluous citations to the editor's journal in the last five years; = 0 otherwise (Table 8).
- Frequency of coercive citation: count variable; the number of times the respondent was coerced by editors to add superfluous citations to the editors' journals in the last five years (Table 9).

Coercive Citations: Journal Data
- Journals that have coerced: binary variable = 1 if the journal was named as having coerced; = 0 otherwise (Tables 10 and 11).
- Frequency journals coerced authors: count variable; the number of times a journal was identified as one that practiced coercion in the last five years (Tables 10 and 11).

Padded Citations
- Padded citations in manuscripts: ordered categorical variable; response to the statement, "If I were submitting an article to a journal with a reputation of asking for citations to itself even if those citations are not critical to the content of the article, I would probably add such citations BEFORE SUBMISSION." Strongly agree = 5; agree = 4; neutral = 3; disagree = 2; strongly disagree = 1 (Table 12).
- Padded citations in grant proposals: ordered categorical variable; response to the statement, "When developing a grant proposal I tend to skew my citations toward high impact factor journals even if those citations are of marginal import to my proposal." Strongly agree = 5; agree = 4; neutral = 3; disagree = 2; strongly disagree = 1 (Table 13).

To bolster these results, we also asked individuals to tell us how many times they felt obligated to add honorary authors to manuscripts in the last five years.
Using these responses as our dependent variable, we estimated a negative binomial regression with the same independent variables used in Table 3. The estimated coefficients and their transformation into incidence rate ratios are given in Table 4. Most of the estimated coefficients in Tables 3 and 4 have the same sign and, with minor differences, similar significance levels, which suggests the attributes associated with a higher likelihood of adding authors are also related to the frequency of that activity. Looking at the incidence rate ratios in Table 4: scholars occupying the lower academic ranks, research professors, females, and authors of manuscripts that already have many authors more frequently add honorary authors. Table 4 also suggests that three additional disciplines, nursing, biology, and engineering, have more incidents of adding honorary authors to manuscripts than the average of all disciplines; consequently, the disciplines that most frequently engage in honorary authorship are, by effect size, management, marketing, ecology, engineering, nursing, biology, and medicine.

Another way to measure effect sizes is to standardize the variables so that the changes in the odds ratios or incidence rate ratios measure the impact of a one-standard-deviation change in the independent variable on the dependent variable. In Tables 3 and 4, the continuous variables are the number of coauthors on the particular manuscripts of interest and the number of publications of each respondent. Tables C and D (in S1 Appendix) show the estimated coefficients and odds ratios with standardized coefficients. Comparing the two sets of results is instructive. In Table 3, the odds ratio for the number of coauthors is 1.035: each additional author increases the odds of the manuscript having an honorary author by 3.5%. The estimated odds ratio for the standardized coefficient (Table C in S1 Appendix) is 1.10, meaning an increase in the number of coauthors of one standard deviation increases the odds that the manuscript has an honorary author by 10%. Meanwhile, the standard deviation of the number of coauthors in this sample is 2.78, and 3.5% x 2.78 = 9.73%; the two estimates are very similar (the algebra behind this equivalence is sketched below). This similarity repeats itself when we consider the number of publications and when we compare the incidence rate ratios across Table 4 and Table D in S1 Appendix. Standardization also tells us something about the relative effect size of different independent variables: in both models, a standard deviation increase in the number of coauthors has a larger impact on the likelihood of adding another author than a standard deviation increase in publications.
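The near-agreement of the 10% and 9.7% figures above is not a coincidence. For a logit coefficient beta with odds ratio e^beta, the odds ratio associated with a one-standard-deviation change sigma in the regressor is:

```latex
e^{\beta\sigma} \;=\; \left(e^{\beta}\right)^{\sigma}
\;\approx\; 1 + \beta\sigma
\quad\text{when } \beta\sigma \text{ is small.}
```

With e^beta = 1.035 and sigma = 2.78, the exact value is 1.035^2.78 = e^(2.78 ln 1.035), which is approximately 1.100, the 10% reported for the standardized coefficient, while the quick calculation 3.5% x 2.78 = 9.73% is its first-order approximation.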
--- Honorary authorship in grant proposals

Our next set of results focuses on honorary authorship in grant proposals. Looking across all disciplines, 20.8% of the respondents reported that they had added an investigator to a grant proposal even though the contribution of that individual was minimal (see Fig 1 for differences across disciplines). To probe more deeply into that behavior, we begin with a model in which the dependent variable is binary, whether or not a respondent has added an honorary author to a grant proposal, and thus use a logit model. With some modifications, the independent variables include the same variables as the manuscript models in Tables 3 and 4. We remove a control variable relevant to manuscripts (total number of publications) and add two control variables to measure the level of exposure a particular scholar has to the funding process: the number of grants funded in the last five years and the total amount of grant funding (dollars) in that same period.

The results appear in Table 5 and, again, we see significant participation in honorary authorship. The estimates largely follow our predictions and mirror the results of the models in Tables 3 and 4. Academic rank has a smaller effect: being an assistant professor increases the odds of adding an honorary author to a grant by 68%, and being an associate professor increases those odds by 52%. On the other hand, the impact of being a research professor is larger in the grant proposal models than in the manuscripts model of Table 3, while the impact of sex is smaller. As was true in the manuscripts models, the obligation to add honorary authors is also lumpy, some disciplines being much more likely to engage in the practice than others. We find five disciplines in the "more likely than average" category: medicine, nursing, management, engineering, and psychology. The disciplines that tend to add fewer honorary authors to grants are political science, biology, chemistry, and physics. Those indistinguishable from the average are accounting, economics, finance, information systems, sociology, ecology, marketing, computer science, and mathematics. We speculated that science, engineering, and medicine were more likely to practice honorary authorship in grant proposals because those disciplines are more dependent on research funding and more likely to consider funding a requirement for tenure and promotion. The results in Tables 3 and 5 are somewhat consistent with this conjecture. Of the five disciplines in the "above average" category for adding honorary authors to grant proposals, four (medicine, nursing, engineering, and psychology) are dependent on labs, and on funding to build and maintain such labs, for their research.

--- Reasons for adding honorary authors

Our next set of results looks more deeply into the reasons scholars give for adding honorary authors to manuscripts and to grants. When considering honorary authors added to manuscripts, we focus on a set of responses to the question: "what was the major reason you felt you needed to add those co-author(s)?" When we look at grant proposals, we use responses to the survey question: "The main reason you added an individual to this grant proposal even though he (or she) was not expected to make a significant contribution was:". Starting with manuscripts, although nine different reasons for adding authors were cited (see the survey in S1 Appendix), only three were cited more than 10% of the time. The most common reason our respondents added honorary authors (28.4% of these responses) was that the added individual was the director of the lab. The second most common reason (21.4% of these responses), and the most disturbing, was that the added individual was in a position of authority and could affect the scholar's career. Third among the reasons for honorary authorship (13.2%) were mentors. "Other" was selected by about 13% of respondents. The percentage of raw responses for each reason is shown in Fig 2.
[Fig 2. Reasons for adding honorary authors to grants and manuscripts. Each pair of columns presents the percentage of respondents who selected a particular reason for adding an honorary author to a manuscript or a grant proposal. Director refers to responses stating, "this individual was the director of the lab or facility used in the research"; authority, "this individual occupies a position of authority and can influence my career"; mentor, "this is my mentor"; colleague, "this is a colleague I wanted to help"; reciprocity, "I was included or expect to be included as a co-author on their work"; data, "they had data I needed"; reputation, "their reputation increases the chances of the work being published (or funded)"; funding, "they had funding we could apply to the research"; and reviewers, "the grant reviewers suggested we add coauthors."]

To find out whether the three most common responses were related to the professional characteristics of the scholars in our study, we re-estimate the model in Table 3 after replacing the dependent variable with the reasons for adding an author. In other words, the first model displayed in Table 6, under the heading "Director of Laboratory," is a regression in which the dependent variable equals one if the respondent added the director of the research lab in which they worked as an honorary author and zero if this was not the reason. The second model indicates those who added an author because he or she was in a position of authority, and so forth. The estimated coefficients appear in Table 6 and the odds ratios are reported in S1 Appendix, Table E. Note that the sample size is smaller for these regressions because we include only those respondents who say they have added a superfluous author to a manuscript.

The results are as expected. The individuals most likely to add the director of a laboratory are research faculty (they mostly work in research labs and centers) and scholars in fields in which laboratory work is a primary method of conducting research (medicine, nursing, psychology, biology, chemistry, ecology, and engineering). The second model suggests that the scholars who add an author because they feel pressure from individuals in a position of authority are junior faculty (assistant and associate professors, and research faculty) and individuals in medicine, nursing, and management. The third model suggests assistant professors, lecturers, research faculty, and clinical faculty are more likely to add their mentors as honorary authors. Since many mentorships are established in graduate school or through post-docs, it is sensible that scholars who are early in their career still feel an obligation to their mentors and are more likely to add them to manuscripts. Finally, the disciplines most likely to add mentors to manuscripts seem to be the "professional" disciplines: medicine, nursing, and business (economics, information systems, management, and marketing).

We do not report the results for the other five reasons for adding honorary authors because few respondent characteristics were statistically significant. One explanation for this lack of significance may be the smaller sample size (less than 10% of the respondents indicated one of these remaining reasons as the primary reason they added an author), or it may be that even if these rationales are relatively common, they are distributed randomly across ranks and disciplines.
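The re-estimation just described amounts to swapping the dependent variable while keeping the Table 3 specification. A sketch, reusing the hypothetical frame and reason dummies from the Methods sketches:

```python
import statsmodels.formula.api as smf

# Keep the Table 3 specification but swap in each "main reason" dummy
# as the outcome, restricting the sample to respondents who added an
# honorary author. Names are hypothetical stand-ins.
added = df[df["honorary_ms"] == 1]
rhs = ("C(rank, Treatment(reference='Professor')) + C(discipline, Sum)"
       " + male + n_coauthors + n_publications")
for reason in ["reason_director", "reason_authority", "reason_mentor"]:
    fit = smf.logit(f"{reason} ~ {rhs}", data=added).fit(disp=False)
    print(reason, round(fit.prsquared, 3))
```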
Turning to grant proposals, the dominant reason for adding authors to proposals even though they are not actually involved in the research was reputation. Of the more than 2100 individuals who gave a specific answer to this question, 60.8% selected "this individual had a reputation that increases the chances of the work being funded." The second most frequently reported reason for grants was that the added individual was the director of the lab (13.5%), and third was people holding a position of authority (13%). All other reasons garnered a small number of responses.

We estimate a set of regressions similar to Table 6, using the reasons for honorary grant proposal authorship as the dependent variable and the independent variables from the grant proposal models of Table 5. Before estimating those models, we also add six dummy variables reflecting different sources of research funding, to see if the reason for adding honorary authors differs by type of funding. These dummy variables indicate funding from the NSF, HHS (which includes the NIH), research grants from private corporations, grants from private non-profit organizations, state research grants, and a variable capturing all other federally funded grants. The omitted category is all other grants. The estimated coefficients appear in Table 7 and the odds ratios are reported in Table F in S1 Appendix.

The first column of results in Table 7 replicates and adds to the model in Table 5, in which the dependent variable is: "have you added honorary authors to grant proposals." The reason we replicate that model is to add the six funding sources to the regression to see if some agencies see more honorary authors in their proposals than others. The results in Table 7 suggest they do. Federally funded grant proposals are more likely to include honorary authors, as the coefficients on NSF, NIH, and other federal funding are all positive and significant at the 0.01 level. Corporate research grants also tend to have honorary authors included. The remaining columns in Table 7 suggest that scholars in medicine and management are more likely to add honorary authors to grant proposals because of the added scholar's reputation, but there is little statistical difference across the other characteristics of our respondents. Exploring the different sources of funds, adding an individual because of his or her reputation is more likely in proposals directed to the Department of Health and Human Services (probably because of the heavy presence of medical proposals, and honorary authorship is common in medicine) and statistically less likely in proposals directed toward corporate research funding.

[Table 7 note: Logit regressions; the dependent variable is binary: 1 = added an author to a grant proposal because of his or her reputation, or added the director of the laboratory as a co-author, or added someone in a position of authority (even though they were not materially involved in the research); 0 = some other reason for adding the author. Independent variables include academic rank, discipline, gender, funding agency, number of grants, and total grant funding received in the last five years. * Indicates significance at the 5% level; ** at the 1% level.]

Table 7 also shows that lab directors tend to be added as honorary authors on grant proposals by assistant professors and on proposals directed to private corporations.
While position of authority (i.e., political power) was the third most frequently cited reason to add someone to a proposal, its practice seems to be dispersed across the academic universe, as the regression results in Table 7 do not show much variation across rank, discipline, past experience with research funding, or the funding source to which the proposal was directed. The remaining reasons for adding authors garnered a small portion of the total responses, and there was little significant variation across the characteristics measured here. For these reasons, their regression results are not reported.

--- Coercive citations

There is widespread distaste among academics concerning the use of coercive citation. Over 90% of our respondents view coercion as inappropriate, 85.3% think its practice reduces the prestige of the journal, and 73.9% are less likely to submit work to a journal that coerces. These opinions are shared across the academic spectrum, as shown in Fig 3, which breaks out these responses by the major fields: medicine, science, engineering, business, and the social sciences. Despite this disapproval, 14.1% of the overall respondents report being coerced.

Similar to the analyses above, our task is to see if there is a systematic set of attributes of scholars who are coerced, or if there are attributes of journals that are related to coercion. Two dependent variables are used to measure the existence and the frequency of coercive citation. The first is a binary dependent variable, whether respondents were coerced or not, and the second counts the frequency of coercion, asking our respondents how many times they have been coerced in the last five years. Table 8 presents estimates of the logit model (coerced or not) and the odds ratios, and Table 9 presents estimates of the negative binomial model (measuring the frequency of coercion) and the accompanying incidence rate ratios.

With but a single exception (the estimated coefficient on female scholars was opposite our expectation), our hypotheses are supported. In this sample, it is males who are more likely to be coerced; the effect size estimate indicates that being male raises the odds of being coerced by 18%. In the frequency estimates in Table 9, however, there was no statistical difference between male and female scholars. Consistent with our hypotheses, assistant professors and associate professors were more likely to be coerced than full professors, and the effect was larger for assistant professors. Being an assistant professor increases the odds of being coerced by 42% relative to a professor, while associate professors see about half of that, a 21% increase in their odds. Table 9 shows assistant professors are also coerced more frequently than professors. The number of co-authors had a negative and significant coefficient, as predicted, in both sets of results. Consequently, comparing Tables 3 and 8, we see that manuscripts with many co-authors are more likely to add honorary authors but are less likely to be targeted for coercion. Finally, we find significant variation across disciplines. Eight disciplines are significantly more likely to be coerced than the average across all disciplines; ordered by their odds ratios (largest to smallest) they are: marketing, information systems, finance, management, ecology, engineering, accounting, and economics.
Nine disciplines are less likely to be coerced, and ordered by their odds ratios (smallest to largest) they are: mathematics, physics, political science, chemistry, psychology, nursing, medicine, computer science, and sociology. Again, there is support for our speculation that disciplines in which grant funding is less critical (and therefore publication is relatively more critical) experience more coercion. In the top coercion category, six of the eight disciplines are business disciplines, where research funding is less common, and in the "less than average" coercion disciplines, six of the nine disciplines rely heavily on grant funding. The anomaly (and one that deserves greater study) is that the social sciences see less than average coercion even though publication is their primary measure of academic success. While they are prime targets for coercion, the
--- Coercive citations: Journal data
To achieve a deeper understanding of coercive citation, we reexamine this behavior using academic journals as our unit of observation. We analyze these journal-based data in two ways: 1) a logit model in which the dependent variable equals 1 if that journal was named as having coerced and 0 if not, and 2) a negative binomial model where the dependent variable is the count of the number of times a journal was identified as one where coercion occurred. As before, the variance of these data substantially exceeds the mean, and thus Poisson regression is inappropriate. To test our hypotheses, our included independent variables are the dummy variables for discipline, journal rank, and dummy variables for different types of publishers. We control for some of the different editorial practices across journals by including the number of documents published annually by each journal and the average number of citations per article.
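The overdispersion point is easy to check directly: a Poisson variable has variance equal to its mean, so when the sample variance of the counts far exceeds the sample mean, Poisson is the wrong model. A toy check with synthetic counts (not the journal data) follows.

import numpy as np

# Negative-binomial-style counts are overdispersed by construction.
times_named = np.random.default_rng(3).negative_binomial(1, 0.3, 500)
print(times_named.mean(), times_named.var())  # variance far exceeds the mean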
The results of the journal-based analysis appear in Table 10. Once again, and consistent with our hypothesis, the differences across disciplines emerge and closely follow the previous results. The discipline journals most likely to have coerced authors for citations are in business. The effect of a journal's rank on its use of coercion is perhaps the most startling finding. Measuring journal rank using the h-index suggests that more highly rated journals are more likely to have coerced and to have coerced more frequently, which is the opposite of our hypothesis that lower-ranked journals are more likely to coerce. Perhaps the chance to move from being a "good" journal to a "very good" journal is just too tempting to pass up. There is some anecdotal evidence that is consistent with this result. If one surfs through the websites of journals, many simply do not mention their rank or impact factor. However, those that do mention their rank or impact tend to be more highly ranked journals (a low-ranked journal typically doesn't advertise that fact), but the very presence of the impact factor on a website suggests that the journal, or more importantly the publisher, places some value on it and, given that pressure, it is not surprising to find that it may influence editorial decisions. On the other hand, we might be observing the results of established behavior. If some journals have practiced coercion for an extended time, then their citation count might be high enough to have inflated their h-index. We cannot discern a direction of causality, but either way our results suggest that more highly ranked journals end up using coercion more aggressively, all else equal. There seem to be publisher effects as well. As predicted, journals published by private, profit-oriented companies are more likely to be journals that have coerced, but coercion also seems to be more common among academic associations than university publishers. Finally, we note that the total number of documents published per year is positively related to a journal having coerced, while the impact of the average number of citations per document was not significantly different from zero.
The unit of observation is a journal. The dependent variable for the logit model is binary: 1 = journal named as having coerced, 0 = not so named, and for the frequency model the dependent variable is the number of times a journal was named as one that has coerced. Independent variables include the total number of documents published by the journal in a year, the average references per document, the type of publisher, academic disciplines, and the journal's ranking as measured by the h-index. * Indicates significance at the 5% level; ** significant at the 1% level. https://doi.org/10.1371/journal.pone.0187394.t010
The result that higher-ranked journals seem to be more inclined than lower-ranked journals to have practiced coercion warrants caution. These data contain many obscure journals; for example, there are more than 4000 publications categorized as medical journals, and this long tail could create a misleading result. For instance, suppose some medical journals ranked between 1000 and 1200 most aggressively use the practice of coercion. In relative terms these are "high" ranked journals because 65% of the journals are ranked even lower than these clearly obscure publications. To account for this possibility, a second set of estimates was calculated after eliminating all but the "top-30" journals in each discipline. The results appear in Table 11 and generally mirror the results in Table 10. Journals in the business disciplines are more likely to have used coercion and used it more frequently than the other disciplines. Medicine, biology, and computer science journals used coercion less. However, even concentrating on the top 30 journals in each field, the h-index remains positive and significant; higher-ranked journals in those disciplines are more likely to have coerced.
--- Padded reference lists
Our final empirical tests focus on padded citations. We asked our respondents: if they were submitting an article to a journal with a reputation of asking for citations even if those citations are not critical to the content of the article, would they "add such citations BEFORE SUBMISSION." Again, more than 40% of the respondents said they agreed with that sentiment. Regarding grant proposals, 15% admitted to adding citations to their reference list in grant proposals "even if those citations are of marginal import to my proposal." To see if reference padding is as systematic as the other types of manipulation studied here, we use the categorical responses to the above questions as dependent variables and estimate ordered logit models using the same descriptive independent variables as before. The results for padding references in manuscripts and grant proposals appear in Tables 12 and 13, respectively. Once more, with minor deviation, our hypotheses are strongly supported.
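Because the padding responses are ordered categories rather than binary outcomes, an ordered logit is the natural estimator. The following is a minimal sketch using statsmodels' OrderedModel (available in statsmodels 0.13+); the data and column names are synthetic assumptions, not the survey variables.

import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(2)
n = 800
df = pd.DataFrame({
    # 0 = never ... 3 = definitely would pad, as an ordered category
    "pad_response": pd.Categorical(rng.integers(0, 4, n), ordered=True),
    "asst_prof":    rng.integers(0, 2, n),
    "business":     rng.integers(0, 2, n),
    "was_coerced":  rng.integers(0, 2, n),
})

mod = OrderedModel(df["pad_response"],
                   df[["asst_prof", "business", "was_coerced"]],
                   distr="logit")
res = mod.fit(method="bfgs", disp=0)
print(res.summary())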
Table 11 reports results after cutting the sample to include only the top 30 journals in each discipline as measured by the h-index. The dependent variable for the logit model is binary: 1 = journal named as having coerced, 0 = not so named, and for the frequency model the dependent variable is the number of times a journal was named as one that has coerced. Independent variables include the total number of documents published by the journal in a year, the average references per document, the type of publisher, academic disciplines, and the journal's ranking as measured by the h-index. * Indicates significance at the 5% level; ** significant at the 1% level. https://doi.org/10.1371/journal.pone.0187394.t011
Tables 12 and 13 show that scholars of lesser rank and those without tenure are more likely to pad citations in manuscripts and skew citations in grant proposals than are full professors. The gender results are mixed: males are less likely to pad their citations in manuscripts but more likely to pad references in grant proposals. It is the business disciplines and the social sciences that are more likely to pad their references in manuscripts, and business and medicine that are more likely to pad citations on grant proposals. In both situations, familiarity with other types of manipulation has a strong, positive correlation with the likelihood that individuals pad their reference list. That is, respondents who are aware of coercive citation and those who have been coerced in the past are much more likely to pad citations before submitting a manuscript to a journal. And scholars who have added honorary authors to grant proposals are also more likely to skew their citations to high-impact journals. While we cannot intuit the direction of causation, we show evidence that those who manipulate in one dimension are willing to manipulate in another.
--- Discussion
Our results are clear: academic misconduct, specifically misattribution, spans the academic universe. While there are different levels of abuse across disciplines, we found evidence of honorary authorship, coercive citation, and padded citation in every discipline we sampled. We also suggest that a useful construct to approach misattribution is to assume individual scholars make deliberate decisions to cheat after weighing the costs and benefits of that action. We cannot claim that our construct is universally true because other explanations may be possible, nor do we claim it explains all misattribution behavior because other factors can play a role. However, the systematic pattern of superfluous authors, coerced citations, and padded references documented here is consistent with scholars making deliberate decisions to cheat after evaluating the costs and benefits of their behavior. Consider the use of honorary authorship in grant proposals. Out of the more than 2100 individuals who gave a specific reason as to why they added a superfluous author to a grant proposal, one rationale outweighed the others: over 60% said they added the individual because they thought the added scholar's reputation increased their chances of a positive review. That behavior, adding someone with a reputation even though that individual isn't expected to contribute to the work, was reported across disciplines, academic ranks, and levels of experience in grant work. Apparently, adding authors with highly recognized names to grant proposals has become part of the game and is practiced across disciplines and ranks. Focusing on manuscripts, there is more variation in the stated reasons for honorary authorship.
Lab directors are added to papers in disciplines that are heavy lab users, and junior faculty members are more likely to add individuals in positions of authority or mentors. Unlike grant proposals, few scholars add authors to manuscripts because of their reputation. A potential explanation for this difference is that many grant proposals are not blind reviewed, so grant reviewers know the research team and can be influenced by its members. Journals, however, often have blind referees, so while the reputation of a particular author might influence an editor, it should not influence referees. Furthermore, this might reflect the different review processes of journals versus funding agencies. Funding agencies specifically consider the likelihood that a research team can complete a project and the project's probability of making a significant contribution. Reputation can play a role in setting that perception. Such considerations are less prevalent in manuscript review because a submitted work is complete: the refereeing question is whether it is done well and whether it makes a significant contribution. Turning to coercive citations, our results in Tables 8 and 9 are also consistent with a model of coercion that assumes editors who engage in coercive citation do so mindfully; they are influenced by what others in their field are doing, and if they coerce, they take care to minimize the potential cost that their actions might trigger. Parallel analyses using a journal database are also consistent with that view. In addition, the distinctive characteristics of each dataset illuminate different parts of the story. The author-based data suggest editors target their requests to minimize the potential cost of their activity by coercing less powerful authors and targeting manuscripts with fewer authors. However, contrary to the honorary authorship results, females are less likely to be coerced than males, ceteris paribus. The journal-based data add that it is higher-ranked journals that seem to be more inclined to take the risk than lower-ranked journals and that the type of publisher matters as well. Furthermore, both approaches suggest that certain fields, largely located in the business professions, are more likely to engage in coercive activities. This study did not investigate why business might be more actively engaged in academic misconduct because there was little theoretical reason to hypothesize this relationship. There is, however, some literature suggesting that ethics education in business schools has declined [34]. For the last 20-30 years, business schools have turned to the mantra that stockholder value is the only pertinent concern of the firm. It is a small step to imagine that citation counts could be viewed as the only thing that matters for journals, but additional research is needed to flesh out such a claim. Again, we cannot claim that our cost-benefit model of editors who try to inflate their journal impact factor score is the only possible explanation of coercion. Even if editors are following such a strategy, that does not rule out additional considerations that might also influence their behavior. Hopefully, future research will help us understand the more complex motivations behind the decision to manipulate and the subsequent behavior of scholars. Finally, it is clear that academics see value in padding citations, as it is a relatively common behavior for both manuscripts and grants.
Our results in Tables 12 and 13 also suggest that the use of honorary authorship and padded citations in grant proposals, and of coercive citations and padded citations in manuscripts, are correlated. Scholars who have been coerced are more likely to pad citations before submitting their work, and individuals who add authors to manuscripts also skew their references on their grant proposals. It seems that once scholars are willing to misrepresent authorship and/or citations, their misconduct is not limited to a single form of misattribution. It is difficult to examine these data without concluding that there is a significant level of deception in authorship and citation in academic research. It would be naïve to suppose that academics are above such scheming to enhance their position, and the results suggest they are not. The overwhelming consensus is that such behavior is inappropriate, but its practice is common. It seems that academics are trapped: compelled to participate in activities they find distasteful. We suggest that the fuel that drives this cultural norm is the competition for research funding and high-quality journal space, coupled with the intense focus on a single measure of performance, the number of publications or grants. That competition cuts both ways: on the one hand, it focuses creativity, hones research contributions, and distinguishes between significant contributions and incremental advances. On the other hand, such competition creates incentives to take shortcuts to inflate one's research metrics by strategically manipulating attribution. This puts academics at odds with their core ethical beliefs. The competition for research resources is getting tighter, and if there is an advantage to be gained by misbehaving, then the odds that academics will misbehave increase; left unchecked, the manipulation of authorship and citation will continue to grow. Different types of attribution manipulation continue to emerge: citation cartels (where editors at multiple journals agree to pad one another's impact factors) and journals that publish anything for a fee while falsely claiming peer review are two examples [30,35]. It will be difficult to eliminate such activities, but some steps can probably help. Policy actions aimed at attribution manipulation need to reduce the benefits of manipulation and/or increase the cost. One of the driving incentives of honorary authorship is that the number of publications has become a focal point of evaluation, and that number is not sufficiently discounted by the number of authors [36]. So, if a publication with x authors counted as 1/x publications for each of the authors, the ability to inflate one's vita would be reduced. There are problems of course, such as who would implement such a policy, but some of these problems can be addressed. For example, if the online, automated citation counts (e.g., h-index and impact factor calculators such as SCOPUS and Google Scholar) automatically discounted their statistics by the number of authors, it could eventually influence the entire academe. Other shortcomings of this policy are that simple discounting does not allow for the differential credit that may be warranted, nor does it remove the power disparity across academic ranks. However, it does stiffen the resistance to adding authors, and that is a crucial step.
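A toy calculation makes the 1/x discounting proposal concrete; the paper list below is invented for illustration.

# Each of x authors receives 1/x publication credit, so adding
# honorary authors no longer inflates anyone's publication count.
papers = [
    {"title": "Paper A", "n_authors": 1},
    {"title": "Paper B", "n_authors": 4},
    {"title": "Paper C", "n_authors": 10},
]
raw_count = len(papers)                               # 3 under today's counting
discounted = sum(1 / p["n_authors"] for p in papers)  # 1 + 0.25 + 0.1 = 1.35
print(raw_count, round(discounted, 2))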
An increasing number of journals, especially in medicine, are adopting authorship guidelines developed by independent groups, the most common being those set forth by the International Committee of Medical Journal Editors (ICMJE) [37]. To date, however, there is little evidence that those standards have significantly altered behavior, although it is not clear if that is because authors are manipulating in spite of the rules, if the rules are poorly enforced, or if they are poorly designed from an implementation perspective [21]. Some journals require authors to specifically enumerate each author's contribution and require all of the authors to sign off on that division of labor. Such delineation would be even more effective if authorship credit were weighted by that division of labor. Additional research is warranted. There may be greater opportunities to reduce the practice of coercive citation. A fundamental difference between coercion and honorary authorship is the paper trail. Editors write down such "requests" to authors; therefore, violations are easier to document and enforcement is more straightforward. First, it is clear that impact factors should no longer include self-citations. This simple act removes the incentive to coerce authors. Reuters makes such calculations and publishes impact factors including and excluding self-citations. However, the existence of multiple impact factors gives journals the opportunity to adopt and advertise the factor that puts them in the best light, which means that journals with editors who practice coercion can continue to use impact factors that can be manipulated. Thus, self-citations should be removed from all impact factor calculations. This does not eliminate other forms of impact factor manipulation, such as posting accepted articles on the web and accumulating citations prior to official publication, but it removes the benefit of editorial coercion and other strategies based on inflating self-citation [38]. Second, journals should explicitly ban their editors from coercing. Some journals are taking these steps, and while words do not ensure practice, a code of ethics reinforces appropriate behavior because it more closely ties a journal's reputation to the practices of its editors and should increase the oversight of editorial boards. Some progress is being made on the adoption of editorial guidelines, but whether they have any impact is currently unknown [39,40]. Third, these results reinforce the idea that grant proposals should be double-blind reviewed. Blind review shifts the decision calculus towards the merit of a proposal and reduces honorary authorship incentives. The current system can inadvertently encourage misattribution. For example, scholars are often encouraged to visit granting agencies to meet with reviewers and directors of programs to talk about high-interest research areas. Such visits make sense, but it is easy for those scholars to interpret their visit as a name-collecting exercise: finding people to add to proposals and collecting references to cite. Fourth, academic administrators, Provosts, Deans, and Chairs need to have clear rules concerning authorship. Far too many of our respondents said they added a name to their work because that individual could have an impact on their career. They also need to have guidelines that address the inclusion of mentors and lab directors in author lists. Proposals that include name-recognizable scholars for only a small proportion of the grant should be viewed with suspicion.
This is a consideration in some grant opportunities, but that linkage can be strengthened. Finally, there is some evidence that mentoring can be effective, but there is a real question as to whether mentors are teaching compliance or how to cheat [41]. There are limitations in this study. Although surveys have shortcomings such as self-reporting bias and self-selection issues, there are some issues for which surveys remain the data collection method of choice. Manipulation is one of these issues. It would be difficult to determine if someone added honorary authors or padded citations prior to submission without asking that individual. Similarly, coercion is most directly addressed by asking authors if editors coerced them for citations. Other approaches, such as examining archival data, running experiments, or building simulations, will not work. Thus, despite its shortcomings, the survey is the method of choice. Our survey was sent via email, and the overall response rate was 10.5%, which by traditional survey standards may be considered low. We have no data on how many surveys were filtered as spam or otherwise ended up in junk mail folders, or how many addresses were obsolete. We recognize, however, that there is a rising hesitancy among individuals to click on an emailed link, and that is what we were asking our recipients to do. For these reasons, we anticipated that our response rate might be low and compensated by increasing the number of surveys sent out. In the end, we have over 12,000 responses and found thousands of scholars who have participated in manipulation. In S1 Appendix, Table A presents response rates by discipline, and while there is variation across disciplines, that variation does not correlate with any of the fundamental results; that is, there does not seem to be a discipline bias arising from differential response rates. A major concern when conducting survey research is that the sample may not represent the population. To address this possible issue in our study, we performed various statistical analyses to determine if we encountered sampling bias. First, we compared two population demographics (sex and academic rank) to the demographics of our respondents (see Table B in S1 Appendix). The percentage of males and females in each discipline was very close to the reported sex of the respondents. There was greater variation in academic ranks, with the rank of full professor being over-represented in our sample. One should keep this in mind when interpreting our findings. However, our hypotheses and results suggest that professors are the least likely to be coerced, use padded citations, and use honorary authorship; consequently, our results may actually underestimate the incidence of manipulation. Perhaps the greatest concern of potential bias innate in surveys comes from the intuition that individuals who are more intimately affected by a particular issue are more likely to respond. In the current study, it is plausible that scholars who have been coerced, or felt obligated to add authors to manuscripts, or have added investigators to grant proposals, are upset by that consequence and more likely to respond. However, if that motivation biased our responses, it should show up in the response rates across disciplines, i.e., disciplines reporting a greater incidence of manipulation should have a higher percentage of their population experiencing manipulation and thus higher response rates.
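That discipline-level check reduces to a rank correlation between two vectors indexed by discipline; a minimal sketch follows, with invented placeholder numbers rather than the study's data, and the study's actual coefficient is reported just below.

from scipy.stats import spearmanr

response_rate = [0.12, 0.09, 0.11, 0.08, 0.10, 0.13]      # survey response rate by discipline
share_manipulated = [0.15, 0.22, 0.18, 0.25, 0.20, 0.14]  # share reporting manipulation
r_s, p_value = spearmanr(response_rate, share_manipulated)
print(r_s, p_value)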
The rank correlation coefficient between discipline response rates and the proportion of scholars reporting manipulation is r_s = -0.181, suggesting virtually no relationship between the two measures. In the end, we cannot rule out the existence of bias, but we find no evidence that suggests it affects our results. We are left with the conclusion that scholars manipulate attribution, adding honorary authors to their manuscripts and false investigators to their grant proposals, and that some editors coerce scholars to add citations that are not pertinent to their work. It is unlikely that this unethical behavior can be totally eliminated because academics are a competitive, intelligent, and creative group of individuals. However, most of our respondents say they want to play it straight, and therefore, by reducing the incentives of misbehavior and raising the costs of inappropriate attribution, we can expect a substantial portion of the community to go along. With this inherent support and some changes to the way we measure scientific contributions, we may reduce attribution misbehavior in academia [42].
--- Supporting information
Perinatal depression (PND) can interfere with HIV care engagement and outcomes. We examined experiences of PND among women living with HIV (WLWH) in Malawi. We screened 73 WLWH presenting for perinatal care in Lilongwe, Malawi, using the Edinburgh Postnatal Depression Scale (EPDS). We conducted qualitative interviews with 24 women experiencing PND and analyzed data using inductive and deductive coding and narrative analysis. Women experienced a double burden of physical and mental illness, expressed as pain in one's heart. Receiving an HIV diagnosis unexpectedly during antenatal care was a key contributor to developing PND. This development was influenced by stigmatization and social support. These findings highlight the need to recognize the mental health implications of routine screening for HIV and to routinely screen for and treat PND among WLWH. Culturally appropriate mental health interventions are needed in settings with a high HIV burden.
Introduction
The scale-up of antiretroviral therapy to all pregnant and breastfeeding women living with HIV, known as Option B+, has the potential to dramatically improve maternal health and end mother-to-child HIV transmission (MTCT) [1]. In Malawi, all pregnant women diagnosed with HIV in antenatal care (ANC) begin lifelong antiretroviral therapy (ART) under Option B+ [2]. However, women who initiate ART during pregnancy under Option B+ are one-fifth as likely to return to HIV care after their initial visit compared to non-pregnant women initiating ART in Malawi [3]. In the short term, poor maternal mental health has the potential to undermine the delivery of Option B+ by affecting initiation of and retention in HIV care [4]. Long term, poor maternal mental health and disrupted HIV care may increase the risk of MTCT and have negative effects on women's quality of life and psychological well-being [4]. Globally, adults living with HIV are at an increased risk of depression, with the association being stronger among newly diagnosed patients and among women [5]. A systematic review conducted in high-, middle-, and low-income countries found that pregnant and postpartum women living with HIV are at particularly high risk for perinatal depression (PND) due to multiple bio-psychosocial risk factors [4]. These risk factors include increased stress, HIV-related stigma, a lack of social support, concerns about disclosing their HIV status, and concerns about their infant's health and HIV status [4]. Through Option B+, more women are becoming aware of their HIV status and initiating ART during the perinatal period. Simultaneously, many are experiencing PND. PND is known to affect 13.1% of women in low- and middle-income countries, with as many as 19.2% of women having a depressive episode within the first three months postpartum [6,7]. Among women living with HIV in Sub-Saharan Africa, a meta-analysis found a pooled prevalence for PND of 42.5% for prenatal women and 30.7% for postpartum women, indicating a high prevalence among this population [8]. PND is known to have detrimental effects on both mothers and infants [4,9]. For example, behavioral traits associated with depression (e.g., neglecting ANC) can lead to adverse effects on fetal health and child development [9]. Among women living with HIV, PND is also associated with increased risk for HIV progression as a function of dietary changes, impaired immune function, and suboptimal ART adherence and engagement in HIV care [10,11]. Given the connections between HIV, PND, and maternal and infant health, there is a great need for a fuller, qualitative understanding of the PND experience of women living with HIV in low-income settings [12,13]. Understanding the social etiology of PND will guide efforts to intervene on and alleviate PND among women living with HIV. Addressing PND among women living with HIV may also improve women's retention in HIV care, a global health priority, as well as broader maternal and child health outcomes [14]. This study aims to understand the experience of PND among women living with HIV in Malawi.
--- Methods
--- Study site and population
We completed in-depth interviews about PND with women seeking pre- or postnatal care at five ANC clinics (two urban and three rural) in Lilongwe, Malawi between July and August 2018. All women living with HIV seeking pre- or postnatal care at the study sites who screened positive for PND and who were over the age of 18 were eligible for the study.
HIV status was confirmed by women's health passports. PND was defined as depression occurring during pregnancy or the first 12 months postpartum [14,15], and was assessed using the Edinburgh Postnatal Depression Scale (EPDS), which was previously validated in Chichewa and is used to screen for antenatal depression in this region [16-19]. Women were classified as having PND if they received a score of ≥ 10 on the EPDS. Consecutive women were screened by a trained counselor at each site until a total of 24 women with PND (4 at site A; 5 each at sites B-E) who agreed to participate in the study were identified [20]. A sample size of 24 was decided upon to achieve data saturation [21]. Women reporting suicidal ideation on the EPDS were referred to mental health specialists as appropriate.
--- Data instruments
We developed a semi-structured interview guide to explore women's experiences of PND, its determinants and manifestations, and its impact on HIV care engagement. This guide began by presenting PND symptoms and asking women if they had seen someone with these symptoms and how they would describe them, then presented vignettes, or short stories with hypothetical characters, and closed by asking women about their personal experiences with depression. Vignettes were used due to the sensitivity of the study topic [22]. The vignette in the interview guide centered on a woman with a new child experiencing signs of PND and receiving an HIV diagnosis. The interviewer then asked how this woman would be treated in her community and how the woman being interviewed would handle the situation. The data collector then asked how the woman had been feeling in her most recent pregnancy and whom she had confided in. The guide closed with a discussion of depression treatments, namely how the woman thought those experiencing depression would be most helped. The guide was created in English and translated into Chichewa. A trained, female research assistant from Malawi conducted all interviews and met with the study team weekly to discuss the data collection process.
--- Analysis
All interviews were conducted and audiotaped in Chichewa, simultaneously transcribed and translated to English by AK, and uploaded to NVivo v.12 for data analysis [23]. We used a combination of thematic and narrative analysis [24]. Analysis and interpretation began during data collection as interviews were transcribed and translated [25]. After reviewing the first few transcripts, two research assistants (KL; JD) based in the United States created a codebook to begin categorizing data that included both descriptive and interpretive codes [24]. Using these descriptive and interpretive codes, the first author coded the data using a hybrid of data-driven (i.e., inductive) coding and concept-driven (i.e., deductive) coding, with concepts coming from prior literature, the research team's previous experience, and the research questions [26]. The first author also analyzed words and phrases that were significant, listed their meanings, and created in vivo codes to capture phrases women used to describe their PND [13,26]. The first author met regularly with the Malawian interviewer and translator and other members of the research team to clarify when translation was unclear, refine the codebook, and identify themes.
The analysis also involved creating analytic memos to document the development of the team's understanding of women's experiences of living with both PND and HIV and to identify emergent patterns and relationships between codes, which assisted in connecting the data [26,27]. The first author then created matrices to identify and analyze similarities and differences between participants for key themes [28]. Lastly, using narrative analysis, the first author wrote an HIV diagnosis narrative based on the participant 'Ruth.' Because women's stories often began with their HIV diagnosis, a life-changing event, the narrative structure assisted in analysis by establishing chronology [24,29,30]. We use the illustrative case of Ruth to highlight processes over time within one woman's experience, as a complement to thematic summaries and quotes.
--- Ethics
This study was approved by the Institutional Review Boards at the University of North Carolina at Chapel Hill and at the Malawi National Health Sciences Research Committee. If participants were literate, written consent was obtained. If participants were illiterate, oral consent was obtained and an impartial witness was present.
--- Results
--- Demographic information
73 women were screened and 24 (33%) had elevated symptoms of PND. We conducted 24 in-depth interviews with these women, of whom 14 were pregnant and 10 were postpartum at the time of the interview (Table 1). The proportion of women with PND was higher at the three rural sites (range: 45-71%) compared to the two urban sites (13-29%) (Table 1). Of the 73 women screened, 14 (19% of all women; 58% of those with PND) reported suicidal ideation. Reports of suicidal ideation were also higher at the three rural sites (27-63% of all women) than at the urban sites (0-24% of all women). Women were, on average, 27 years old, and most had more than one child, were married, unemployed, and had at least some primary education (Table 2). One woman was beginning ART at the current appointment, while the remainder had already initiated ART. Most women (71%) had received their HIV status over two years ago, and none had been screened for or diagnosed with PND previously.
--- Women's experience of PND
Here we present the experiences of women living with both HIV and PND in Malawi. We highlight the narrative of one respondent, 'Ruth,' alongside others' stories to demonstrate how PND often manifests and how a woman's HIV diagnosis is a key contributor to her development of PND. Ruth's story represents a typical case, which helps provide insight into women's PND experiences. "Pain in my heart": A double burden. Ruth was a pregnant participant who had received an HIV diagnosis during a past pregnancy over two years ago. When Ruth and others described how their PND presented itself in their lives, they often described a culmination of symptoms causing "pain in my heart." When talking about her depression during her interview, Ruth said that her heart was troubled; her depression would not stop in her heart and was persisting because it had come from the combination of her HIV diagnosis, pregnancy, and marital issues. Other women used similar phrases referring to the heart, describing pain in their hearts to convey how depression felt to them and how this feeling persisted over time, disrupting their lives and not allowing them to tend to other tasks.
When a prenatal woman who had been diagnosed in her current pregnancy was asked if she knew about depression, she described her own experience as follows: "from the time I started my HIV treatment, I feel a lot of pain in my heart. It's not like I am worried about anything, but I just feel so much pain in my heart, as if I have been shocked by something" (prenatal, living with HIV <6 months). This pain persisted because she did not feel that she could confide in anyone about her HIV status and because her husband had abandoned her once she disclosed her diagnosis. A woman diagnosed with HIV within the last two years said that she kept feeling an overwhelming pain in her heart that kept her from working and that this pain stemmed from her overthinking and worry. This worry was about raising her children alone, as her husband had abandoned her after she became HIV-positive. Importantly, women explained that understanding how others felt in their heart and helping other women strengthen their heart were potential mechanisms for addressing PND. The persistence of women's depression was often expressed through their double burden of living with both HIV and PND. This double burden was closely tied to having an unexpected HIV diagnosis, with the diagnosis making the depression harder to handle. During her interview, Ruth described the persistence of her depression and the pain in her heart as a large burden. Discussions of this double burden were most common when women were asked to imagine a woman with PND and HIV and one with only PND. HIV and PND were both thought of as diseases, one being physical and one being mental: "one of them [with only PND] is just depressed but her body is okay whereas the other one [with both PND and HIV] is depressed and her body has viruses" (prenatal, living with HIV over 2 years). All women said that living with both would be more difficult and would be different from only having PND. Women also expressed that they could handle their PND more easily if it did not co-occur with HIV. As a woman's HIV diagnosis was a key contributor to her developing PND, without this diagnosis, many women believed they would not have developed it or would have experienced a milder form. One woman stated that receiving an HIV diagnosis made her PND worse: "you get very depressed because you think of the problems you already have at home and then you have even more problems now. The depression increases" (postnatal, living with HIV over 2 years). Without an HIV diagnosis, PND was also perceived to have an endpoint, whereas PND stemming from HIV was expected to be lifelong because the primary cause (i.e., their HIV status) was also lifelong. Ruth and others felt that this double burden combined with stigma and marital issues led them to have suicidal thoughts, as they claimed that committing suicide would rid them of all of their problems. Fourteen out of 24 women (58%) expressed suicidal ideation, and such thoughts were more common among currently pregnant women and those lacking support from their partner. Of the 14 women with suicidal thoughts, nine had passive or low-risk suicidal ideation and five had active or high-risk suicidal ideation. In addition to suicidal thoughts, women also revealed that their burdens could affect their ART adherence. Ruth explained that "if you are depressed, you cannot manage to do those things [take ART medication] because of the depression and you're hurt in your heart every day, because when someone is depressed the heart always hurts.
For you to be bothered about your life, you just say whatever happens will happen." Women's PND resulted in hopelessness, which could lead them either to forget to take their medication or to lack the motivation to go to the clinic. At the same time, Ruth said that women often think too much when they have PND, and it is harder to remember one's medication when one has so many thoughts. Yet for Ruth and many others, suicidal thoughts did not result in suicide attempts, and these barriers did not result in a lack of adherence to ART. High reported adherence was most common among women in urban areas and among postnatal women. Ruth was motivated to take her ART "so that the baby [she was] expecting should not go through the burden that [she was] going through" by contracting HIV. She also felt responsible for her other children and wanted to remain healthy so that she could raise them. Adherence was often made easier when women had disclosed their status to others, as they then felt accountable to those to whom they had disclosed and felt encouraged to accept their HIV status and begin and remain in treatment. Intersecting identities: HIV-positive, pregnant, and in an unstable marriage. Ruth had known she would be tested for HIV as part of routine ANC but did not suspect that she had contracted HIV. In discussing her HIV diagnosis, she stated that "depression is inevitable because you have been diagnosed with HIV at a time you were not expecting it." Ruth found it difficult to accept the reality of having HIV when she did not anticipate a diagnosis. For Ruth and others who received their HIV diagnosis during pregnancy, the diagnosis compounded pre-existing anxieties about being a mother, predisposing them to develop PND. One woman in her fourth pregnancy began "wondering how [she] would be able to look after [her children] with the HIV and [kept] wondering if this was the end of [her] life" after her diagnosis (postnatal, living with HIV 1-2 years). She continually expressed her worry about her diagnosis and about how she would be able to parent while living with HIV. In addition to being pregnant when receiving her diagnosis, Ruth learned that her husband was also HIV-positive but had been hiding his diagnosis from her. Upon learning of Ruth's diagnosis, her husband acted as though she "went wayward and brought the virus into the marriage." Like Ruth, most women were in relationships in which they believed they had contracted HIV from their husband. Yet most women's husbands reacted negatively to their wives' HIV diagnosis and denied their own responsibility. Women's HIV diagnoses combined with their pregnancy often exacerbated pre-existing marital issues and created new ones. These marital issues meant that women lacked social support from a partner, contributing both to their developing PND and to its acceleration and exacerbation: "I sometimes do not talk to him because something is troubling me. To think the same person who transmitted the virus to me is the one who is insulting me. . ." (prenatal, living with HIV < 6 months). For Ruth, the combination of not anticipating her HIV diagnosis, being pregnant when she received her diagnosis, and unknowingly contracting HIV from her husband resulted in her feeling overwhelmed and as though her challenges were insurmountable. It was then difficult for her to accept the reality of having HIV in the midst of her overwhelming circumstances.
Many other women noted an inability to accept an HIV diagnosis or a denial of their diagnosis as contributing to feeling depressed. While an inability to accept a diagnosis and an active denial of a diagnosis may be different, women used these two descriptions interchangeably. Stigma and social support. In addition to their marital issues, many women felt stigmatized and unsupported by others. One woman stated that "people who can encourage us to live a life with little depression are rare" (prenatal, living with HIV over 2 years). Both HIV and PND were emotionally charged topics in women's communities, and women often received mixed reactions from community members. Mixed or negative reactions largely stemmed from others not accepting women's HIV diagnosis, meaning that they did not accept the women as their full selves with an HIV diagnosis. One woman directly linked HIV stigma to her development of depression: "I sometimes ask myself if [my community knows] about my HIV status and if that is the reason they treat me in the way they do. I then get depressed because I think too much" (postnatal, living with HIV over 2 years). Feeling stigmatized by others often led women to overthink, and overthinking was both a cause and a symptom of PND. While HIV stigma was a prominent determinant of some women's depression, others felt directly stigmatized for their depression because they felt that they were perceived as a bad mother. Women were sometimes warned that they "shouldn't be sad, because the baby [will] also be sad" (prenatal, living with HIV over 2 years), or were described as sick, mad, in trouble, lazy, or panicked. Importantly, once women were stigmatized due to their PND, they sometimes began to worry that people were stigmatizing them due to their HIV status even if people did not know their status, which created an internal cycle of worry. Still others felt a lack of love or ambivalence towards them once people knew about their HIV or PND. Ambivalence towards women in these circumstances was often as negative as active stigmatization, as women were not supported and thus unable to move towards accepting their status and lessening their PND. Yet some women found sources of support. Linked to their denial or struggle to come to terms with their HIV status, women did not immediately disclose their HIV status or feelings of depression to others. Rather, they turned to prayer. Fourteen of the 24 women listed prayer as a source of support in response to their HIV diagnosis. One woman said that she "would just pray for God to remove [her] worries because. . .He is the one who can remove [her] anxieties" (prenatal, living with HIV over 2 years), indicating that both she and those around her could not remove her anxieties related to her HIV. After prayer, many women turned to one or two specific people for support. Ruth confided in one of her in-laws, and most women talked to a family member, friend, or their husband. While most did not find support in their husbands, four women found encouragement in talking with them, including two women who noted that their husbands were HIV-positive. Women who had been living with HIV for longer were more likely to have found sources of support. Additionally, if women knew others who were living with HIV, those individuals were likely to serve as sources of support. Throughout women's discussions of social support, they described it as being composed of three components: interaction, encouragement, and offloading or sharing worries.
Interaction was seen as a distraction that helped women stop worrying about their HIV and PND. Encouragement was discussed as both needed by women and provided by them to others so that "[they] get strengthened in [their] heart" (prenatal, living with HIV over 2 years). Encouragement thus helped ease their depression and their worries. Offloading was identified as a critical component for preventing suicide. Some had not found anyone with whom they could share their worries: "I have never met anyone whom I could share my worries with. . . sometimes I get depressed but can't tell anyone" (postnatal, living with HIV over 2 years). Thus, women needed a combination of all three components of social support, as they all served different functions in alleviating PND.
--- Discussion
In our population, women with PND experienced a unique double burden of HIV and PND, which was commonly expressed as having pain in their hearts and as worry. Women's HIV diagnosis, especially when it was unexpected and received during routine ANC, was a key contributor to their developing PND. Women's unexpected diagnosis intersected with their pregnancy and marital relationships to contribute to their PND. These relationships were then influenced, positively or negatively, by women's social interactions and relationships. Our study sheds light on the experience of women living with both PND and HIV in a low-income country with high HIV prevalence. Three main themes emerged from our interviews: a double burden of having a physical and mental illness, as expressed through having pain in one's heart; women's intersecting identities of being HIV-positive, pregnant, and in an unstable marriage; and the key roles of stigmatization and social support (or lack thereof) in influencing the development and trajectory of PND. First, given the many contributing factors to women's PND, it is not surprising that women would experience a large burden. Yet, while the co-occurrence of HIV and depression is often cited in the literature about this population, there is a lack of discussion around what this co-occurrence means for women's experiences and the burden it creates [4, 31-33]. This finding reemphasizes that these two epidemics often collide in the lives of women, and their intersection deserves global attention through increased screening and treatment [14]. Second, the high interconnection of contributors to PND is supported in other qualitative work [13]. Women's HIV diagnosis has been noted as a source of depression in prior literature in sub-Saharan Africa, with women reporting a 3.5-fold higher number of mental health issues, especially depression, after an HIV diagnosis [31,32]. We add to this literature by finding the HIV diagnosis to be particularly burdensome when it is unexpected and received during routine ANC. Research from South Africa found that PND is worse among women living with HIV because they worry about having children to look after while living with the disease [13]. The role of a woman's marriage was quite prominent in our data. It is possible that women living with HIV and experiencing PND may, in particular, lack support from their partner. Another study in Lilongwe, Malawi found that women who had not disclosed their HIV status to their partner had twice the prevalence of PND, which likely indicates a poor relationship with, and limited emotional support from, a partner [13,34].
Thus, marriage may relate to depression specifically among HIV-positive women, as a woman's HIV status may indicate an inability to have previously negotiated condom use with a partner and a lack of emotional support within the marriage [35]. Third, women's stigma related to HIV and PND cut across all of their identities mentioned above [35,36]. Most prior work focuses on HIV-related stigma specifically and finds that women reporting greater stigma related to HIV in Malawi are significantly more likely to report depression [36]. This indicates that in addition to women having an unexpected HIV diagnosis, HIV may contribute to PND through stigma. A study in South Africa found that the relationship between stigma and depression among HIV-positive women persists after controlling for marital status and pregnancy intention, indicating that stigma is a distinctly important component of women's identities contributing to PND [37]. Stigma related to depression plays a role as well. One study in South Africa found that psychological or emotional illnesses, including depression, hold an additional layer of stigma and increase stress and perceived stigma upon disclosure of status [38]. Yet, this work was not specific to PND [38]. While we found depression-related stigma to be present, we argue that there is unique stigma related to PND, as women were worried about being perceived as a bad mother due to their depression. Additionally, while the literature documents well how stigma affects depression, it is less well understood how ambivalence or a lack of support can be a contributing factor. Multiple women did not mention overt stigma but indicated that ambivalence and a lack of support also contributed to their PND. Regarding the association between PND and HIV, most literature reports that depression is associated with lower ART adherence, which often serves as the motivation for why PND should be addressed among HIV-positive women. Yet, while the women in our study said that other depressed women may not adhere to ART, most claimed that they did not have issues with adherence themselves. It is possible that the women choosing to participate in our interviews were more likely to be those well engaged in care; it is also possible the women felt some social desirability pressure to report good adherence for themselves, while feeling freer to note that others might have such a difficulty. One explanation, as noted by another study in Lilongwe, Malawi, is that because these women are in a population with high engagement in care, PND is not associated with lower ART adherence [39]. Another explanation our study found is that during the perinatal period, women may be more motivated to adhere to ART, as they want to have a healthy pregnancy and to remain healthy to raise their children. It is important to note this motivating factor, as it may be protective for women's retention in ART during the perinatal period.
--- Recommendations
Moving forward, the literature suggesting that HIV and depression care should be combined in treatment and that providers should attend to the emotional and psychological needs of women needs to be translated to the perinatal population and extended beyond ART counseling [32,36]. Additionally, examples exist that address PND at the community level using pre-existing primary care structures and training lay health workers in counseling [40-42].
Given that HIV-infected women often experience a double burden of physical and mental illness that is influenced by social support, stigmatization, and family dynamics, these interventions could be expanded and adapted to women living with HIV. Specifically, the Friendship Bench, an individual talking therapy based on problem-solving therapy and delivered by lay health workers, has been found to be effective in Zimbabwe among a general population suffering from depression, the majority of whom were living with HIV [41]. Importantly, the Friendship Bench and similar interventions draw on pre-existing structures of primary care and ART counseling services and train lay health workers, indicating that they are scalable given the high proportion of women living with HIV who also have PND. Additionally, they focus on counseling and providing social support, which is often a critically missing piece of women's current experiences.
--- Limitations
Our findings should be considered in the context of certain limitations. First, while the use of vignettes is an appropriate strategy for discussing sensitive topics, it creates difficulty in disentangling women's personal experiences from their perceptions of others' experiences. Second, the data were translated directly into English from audio recordings in Chichewa, which may have resulted in inconsistencies in translation. However, the translator was an active member of the research team and was available for discussion, which aided immensely in data analysis and interpretation [43]. Third, it is possible that there was social desirability bias in discussing ART adherence. Lastly, we have limited generalizability, as we only spoke with women engaged in ART and with women who were willing to talk about their PND at five clinics in Malawi. However, our five sites do capture a diverse patient population, and all women who screened positive for PND agreed to be interviewed.
--- Conclusions
By improving our understanding of the social etiology of PND among women living with HIV, we will be better able to construct interventions that are specifically designed for women living with both PND and HIV and that are responsive to women's experiences [13]. In conclusion, our findings indicate a great need for programs to recognize and address mental health issues during routine HIV testing and ART treatment, and to recognize women's whole identities and experiences in assessing the burden of PND in this population.
Substantial scholarship has been generated in medical anthropology and other social science fields on typically developing child-parent-doctor interactions during health care visits. This article contributes an ethnographic, longitudinal, discourse-analytic account of the interactions among a child with autism spectrum disorder (ASD), a parent, and a doctor that occur during pediatric and neurology visits. The analysis shows that when a child with ASD walks into the doctor's office, the tacit expectations about the visit may have to be renegotiated to facilitate the child's, the parent's, and the doctor's participation in the interaction. A successful visit then becomes a hard-won achievement that requires the interactional and relational work of all three participants. We demonstrate that communicative and sensory limitations imposed by ASD present unique challenges to all the participants and consider how health care disparities may invade the pediatric encounter, making visible the structural and interactional processes that engender them.
Autism spectrum disorder (ASD, American Psychiatric Association 2013) is defined in biomedicine as a neuro-developmental syndrome of wide phenotypic variability (Muhle et al. 2004). From a sociocultural perspective, it is a "contested category" (Silverman 2012:16): a lived experience, a way of being in the world, and a form of neurodiversity (Eddings Prince 2010; Grinker 2010). Because of this dualistic framing, ASD refracts in important ways the social science research on child-parent-doctor interactions during health care encounters (Tates and Meeuwesen 2001). This literature, however, focuses on health care encounters that meet the sociocultural expectations of normative development under implicitly "default" conditions (i.e., most children in these studies are typically developing, white, and middle-class). We extend the scope of existing research by examining the health care encounters of African American children diagnosed with ASD. We describe the interactional work of mothers to ensure that their children's developmental and health care needs are addressed by medical professionals in ways that acknowledge the children's subjectivities. We offer a description of the mothers' emic perspectives not only on their child's health, development, and health care but also on the hidden intersections of race and disability. Such perspectives are exemplified in a mother's reflection, drawn from our data, about her young son's future: "He has the rest of his life being black and being labeled autistic" (Solomon and Lawlor 2013:107). As Mattingly (2010) notes: "Structural conditions, identities and power/knowledge discourses are concretely realized … not as necessary or inevitable workings of macrostructures, but as intimate dramas of dismissal, often in the face of great need" (p. 85). We examine the in situ linkages between the structural and interactional processes (Institute of Medicine 1999) that shape African American children's health care visits and describe the challenges and dilemmas that these visits hold for the participants. --- Background Health Care Encounters: Typically Developing Children Extensive research has been conducted on typically developing (TD) child-parent-doctor interactions. Metaphors of "social choreography" and "dance of three partners" (Aronsson and Rindstedt 2011; Gabe et al. 2004; Tates et al. 2002) capture the coordination of the participants' social actions during pediatric visits. The dance, however, does not involve the children to the same degree as the adults: Most often it is the parent who is the primary informant about the child's health (Clemente et al. 2008; Stivers 2007; Tates and Meeuwesen 2001). Nevertheless, communication during the visits is fundamentally triadic: Although the child is often an over-hearer rather than a speaker, he or she is always a "coauthor" (Duranti 1986) of the illness experience. In the last decade, researchers have called for greater attention to how children learn to "both participate in their medical visits and to be appropriately socialized into the role of an autonomous, accountable patient" (Stivers and Majid 2007:424). Stivers and Majid's (2007) study of children's video-recorded visits with pediatricians in community practices in Los Angeles identified a gradual attribution of competence to the child by the adults, realized through the child's increased opportunities (22% for each additional year of age) to verbally participate in the interaction.
When the parent was black, however, the odds that the child was selected as an informant were 78% lower than when the parent was white, while no effect of race was found in the willingness of the children to answer the doctor's questions. These findings suggest that black children are treated by providers as less-competent informants, which decreases their opportunities to be socialized into patient roles (Stivers and Majid 2007). These opportunities may become even scarcer when a child has been diagnosed with ASD. --- Complexities of Health Care: Children with ASD While there is a vast literature on health care utilization of children with ASD (e.g., Croen et al. 2006; Kogan et al. 2008; Liptak et al. 2006), research on health care encounters of both children and adults with ASD is scarce and consists mainly of clinical case studies (Accordino and Walkup 2015; Radcliff 2013; Smith et al. 2012). To our knowledge, this article is the first to provide an analysis of social interactions during pediatric visits involving a child with ASD. The realization that limitations faced by individuals with ASD present challenges to their health care is a relatively new development in autism research (Lajonchere et al. 2012; Volkmar et al. 2014a). Verbal children may have difficulties in conversational turn-taking and meeting the listener's informational needs (American Psychiatric Association 2013). One-third of all children with ASD are "minimally verbal" (i.e., have little or no spoken language by school age) (Bauman 2010; Tager-Flusberg and Kasari 2013). Difficulties may stem from the children's reactivity to sensory stimuli, stereotypic behaviors and mannerisms, self- and other-injurious behavior, and wandering and elopement tendencies (American Psychiatric Association 2013; Baranek et al. 2005; Solomon and Lawlor 2013). Although limitations associated with ASD predictably present problems when there is an illness or an injury, research in this area is mostly lacking (Accordino and Walkup 2015). This is of special concern because many children with ASD have gastrointestinal, seizure, and sleep disorders, and other co-occurring conditions (Lajonchere et al. 2012). These conditions are often difficult to diagnose and manage clinically because children with ASD may express their discomfort and pain differently than TD children (American Psychiatric Association 2013; Volkmar et al. 2014a). This may lead to a misdiagnosis of underlying problems, over-medication with psychotropic drugs, and decreased participation in family and community life (Accordino and Walkup 2015; Bauman 2010; Goldson and Bauman 2007). The role of health care providers in addressing this problem is multifaceted. Practice standards and policy mandates call on them to occupy a central role in the management of ASD (Committee on Children with Disabilities 2001a, 2001b; Myers and Johnson 2007; Volkmar et al. 2014b). The providers themselves voice uncertainty about delivering health care for children with ASD, feeling less competent in treating them compared to children with other neuro-developmental conditions (Golnik et al. 2009; Heidgerken et al. 2005). For their part, families struggle to find providers with the skills to treat their children (Lajonchere et al. 2012), reporting less shared decision-making than families of children with special health care needs but without ASD (Bethell et al. 2014).
Children with ASD are less likely to receive comprehensive health care services and specialty care compared to TD peers and are more likely to have unmet health care needs compared to both TD children and children with special health care needs but without ASD (Kogan et al. 2008; Liptak et al. 2006, 2008; Tregnago and Cheak-Zamora 2012). These unmet needs occur in the context of greater health care utilization (Croen et al. 2006; Zablotsky et al. 2015), which suggests that it is not the amount but the quality of communication that makes health care effective. --- Methods, Sample, and Data The data are part of a three-year ethnographic, mixed methods study of African American families' experiences of ASD diagnosis and services in Los Angeles County, California ("Autism in Urban Context: Linking Heterogeneity with Health and Service Disparities," National Institute for Mental Health, R01 MH089474, 2009-2012, O. Solomon, P.I.). Part of a research tradition in occupational science and medical anthropology, the study draws on narrative, phenomenological, and interpretive approaches to understand families' illness and disability experiences (Jacobs et al. 2011; Lawlor 2003, 2012; Lawlor and Mattingly 2009, 2014; Mattingly 2010, 2014). Four California Department of Developmental Services regional centers, a university-affiliated hospital, and a center for developmental disabilities in Los Angeles County served as the study sites. Twenty-three families with a total of 25 children who were diagnosed with ASD participated in the study. Enrolled children were eight years old or younger at recruitment and ranged from four to 11 years during data collection. The children had an ASD diagnosis (American Psychiatric Association 2000) by a licensed professional and a projected need for interventions at one of the study sites. Their primary caregivers self-identified as African American and were all mothers except in one family, where it was the father. Adult participants included 22 mothers, 15 fathers and stepfathers, 17 extended family members, and 68 professionals, including physicians, occupational therapists, speech pathologists, teachers, and service coordinators. Ethical approval for the study was obtained from the University of Southern California Health Science Campus Institutional Review Board. The data were collected via participant observation in the home, clinic, school, and community. The description of the data collected for the larger study can be found in Solomon and Lawlor (2013) and Angell and Solomon (2014). Four of the 23 participating families made it possible for us to observe their children's visits with seven physicians: two developmental pediatricians, two gastroenterologists, one family practitioner, a pediatric neurologist, and a pediatric cardiologist. The observations included the entire visit from the time the child and the mother walked into the clinic until they left. For this article, we analyzed a sub-corpus of 16 observations of the children's health care visits, 12 interviews with the mothers following these visits, four interviews with the physicians, and field notes. Most of our analysis focuses on one family whom we followed over a three-year period: Noah, who was six years old at the start of the study, and his mother, Stella. We chose this family because nine of the 16 visits that we observed involved Noah and Stella: six with Dr. Saito, a developmental pediatrician, and three with Dr. Tran, a neurologist.
The challenges and achievements that Noah, Stella, and Dr. Saito and Dr. Tran experienced during these visits, while particular and unique to this child, this mother, and these physicians, provide a conceptual framework to "think with" about the experiences of other children with ASD, their parents, and their doctors. --- Findings Collaborative Co-construction of the Child's Action and Subjectivity The title of this article ("You can turn off the light if you'd like") repeats the first words that Dr. Tran said to Noah during one of the visits when he entered the examination room and saw that Noah had switched off the light. This comment projects several consequential actions: Dr. Tran acknowledges Noah's sensitivity to the fluorescent light and ratifies an accommodation to meet Noah's sensory needs. Three video-recorded visits over a six-month period, one with Dr. Tran and two with Dr. Saito, took place in various degrees of semidarkness because Noah turned off the light and the adults did not turn it back on. There was also an observed, but unrecorded, visit with a specialist where Noah turned off the light while waiting for the doctor, prompting Stella's light-hearted comment: "You are being energy efficient. Being a conservationist." In that visit, leaving the light off was not an option afforded by the doctor; he turned the light back on as he entered, without a comment. These ratifications of Noah's turning off the light evince what Rapp and Ginsburg (2011) call "the paradox of recognition": Parents of children with disabilities often seek to "demedicalize" their children to "situate them in a more holistic and communitarian context" (Ginsburg and Rapp 2013:187). This is a consistent theme in our data, but we found that the parents are not the only ones reckoning with this paradox; some of the doctors were engaged in these processes as well. The following examples drawn from Noah's video-recorded visits with Dr. Saito describe how this pediatrician's acceptance of Noah's sensory needs, combined with Stella's ability to keep Noah occupied and calm, are linked to the "interactional achievement" (Schegloff 1995) of the visits. --- Dr. Saito: Visit 1 Noah and Stella are in the examination room. The room is dark because Noah turned off the light. Only an examination lamp is on, shining a bright light on the wall. When Dr. Saito enters, smiling, Noah looks away from the toy with which he has been playing and says "Hi?" "H::i! Goo::d!" says Dr. Saito, acknowledging Noah's greeting. Dr. Saito does not comment on the darkness in the room and Stella does not offer any explanation. He shakes hands with Stella and offers Noah his raised hand for a high five. It takes several seconds to coordinate their hands for a high five, but neither the doctor nor the boy gives up. "Good job," says Dr. Saito when their hands touch. "Good job!" Stella says, smiling. Then she begins her update: "We a::re-." As he sits down, Dr. Saito asks the question that Stella has already begun to answer: "How are we doing overall?" Stella sighs: "One step back from where we've been. His behavior has become aggressive since you and I last spoke. If he does not get what he wants he actually comes charging at me." Dr. Saito's expression turns serious. He opens Noah's medical chart and begins to write. Three minutes later, Stella points with frustration to the ceiling where the light would have been: "The light sensitivity thing is just-he is taking light bulbs out in my house!
He is now climbing on the vanities in the bathrooms, so wherever Noah is, I have to be. If I don't take a shower before he wakes up, I don't get a shower until he goes to bed." The visit lasted 28 minutes, and the light was never turned on. During the visit, Stella and Dr. Saito discussed Noah's complex challenges: gastrointestinal problems, unpredictable aggression, spitting at school, and public displays of sexuality. But Stella and Dr. Saito also talked about Noah's developmental progress: naming more objects and becoming continent. Throughout this conversation, Noah lay on his side on the exam table playing with a bead-maze toy, his back turned to his mother and the doctor. Stella's hand rested on Noah's hip, as if to both calm him and hold him in place. This corporeal arrangement in the semi-dark room was what made this health care visit possible, providing Noah with a way of being self-regulated and occupied while his mother and the doctor talked (see Figure 1). Twelve minutes into the visit, Dr. Saito had to examine Noah, still in semi-darkness, using the light from the exam lamp. Through all the manipulations of his body, Noah followed Dr. Saito's instructions with no protest. Thirteen-and-a-half minutes into the visit, when for a second Noah turned the light on, Dr. Saito asked Stella, "This is just a new thing with the light?" The example illustrates the participants' complex interactional work during a health care visit for a child with ASD. The assumptions about what a health care visit is expected to be become suspended and new ground rules are followed (e.g., the light should remain off if this is what-as it is tacitly agreed on-Noah needs to stay calm). These ground rules are linked to how Noah's subjectivity is framed by Stella and Dr. Saito in light of his sensory, communicative, and behavioral challenges. The "interactional achievement" (Schegloff 1995) of the visit is predicated on a transactional engagement of the doctor, the child, and the mother: The doctor's acceptance of Noah's need to have the light off affords Stella's ability to hold Noah in place on the examination table, while Noah's ability to occupy himself with a toy makes it possible for his mother and the doctor to talk. After the visit, Stella reflected: "I absolutely adore this child, but being a parent is hard work. I mean when I say 'hard work,' it's exhausting, especially when you have a child with special needs." During the visit, Stella comments on her vigilance regarding Noah's comportment and safety: "Wearing heels was not perhaps the wisest thing to do today." Minutes later, she half-seriously says to Noah: "What are you thinking about doing? I may have the heels on and I may have tripped a couple times, but I can catch up with you." Another mother in the study, Monica, shared similar concerns about wearing clothes and shoes that provided ease of movement to manage her daughter's behavior during a health care visit: "I came really ready, gym shoes, yoga pants, cotton shirt, hair in a ponytail, like I'm ready to work, you know." After this visit, Dr. Saito reflected on his experiences: "It's too easy to get frustrated and be dismissive of some of the difficult autistic children, because they're running around the office more, they may be more destructive, it's hard, very hard to do an exam, you don't know how far you're getting through." In spite of these difficulties, Dr. Saito described children with ASD as intentional actors with rich subjectivity: They have the logic going on in their own brain.
They're interpreting the world in a different way, they're speaking a different language, so the burden is upon me to understand them as much as it is for them to understand our world. No, it's not about decreased intelligence, it's a different world. That's what I keep in mind. That helps me. Dr. Saito portrays ASD both as a disorder and a way of being, a view strikingly consistent with the "paradox of recognition" (Rapp and Ginsburg 2011). --- Dr. Saito: Visit 2 A month later, Noah and Stella were back in Dr. Saito's office. Noah again turned off the light, and the room was lit only by an examination lamp (see Figures 2-6). Sitting on the examination table, Noah has been shining the lamp on different parts of the room. Dr. Saito enters, smiling. "Hi, Doctor Saito!" says Stella, smiling. Noah turns and looks at Dr. Saito. Dr. Saito, looking at Noah, says: "Hello!" Noah turns away and begins to hum. Stella says: "Noah, say 'Hi'!" Dr. Saito raises his hand in a greeting, positioning it for a high five. Noah softly says "Hi." Dr. Saito looks at Noah and says, "How are you?" Noah looks away and does not answer (Figure 2). Stella puts her hand under Noah's chin and turns his face toward Dr. Saito (Figure 3). "Eye contact!" she says. Dr. Saito leans forward and looks directly into Noah's face (Figure 4). When Noah does not reply, Dr. Saito places his hand on top of Noah's forehead, lifts Noah's face slightly to have a more direct eye gaze and softly asks: "How are you?" (Figure 5). Stella voices a reply: "Good?" Dr. Saito offers another reply "Okay?" and moves his hand away from Noah's forehead. Noah reaches for Dr. Saito's hand and holds it for a few seconds, a gesture resembling a high five (Figure 6). Stella says to Noah: "Say 'good.'" "Good," Noah says softly. Stella says "Yeah!" Dr. Saito says, "Good job!" and, smiling, turns to Stella and asks her, "How are you doing? Still in the dark?" Stella laughs. "Yes!" Discussion of a surgery that Noah is scheduled to have in two weeks follows. As in the earlier visit, the light was never turned on. With the exception of Dr. Saito's semi-humorous question to Stella, "How are you doing? Still in the dark?," the fact that the three of them were in the dark again was never framed as a problem. These appear to be the ground rules of having a health care visit with Noah: do not make any changes in the physical environment that may set off problematic behaviors, and do not make the unusual nature of the situation explicit. These ground rules, however, are not always designed to obscure the child's limitations. Consider Dr. Saito's greeting "How are you?" directed to Noah, compared to the greeting from the previous visit, "How are we doing overall?" Directed to both Stella and Noah, "How are we doing overall?" does something subtle: It indexes Noah as an "owner" of his experience but does not directly select him as a speaker and thus does not make him accountable if he does not reply. The greeting of the second health care visit, "How are you?" does the opposite: It makes Noah accountable both to know "how he is" and to reply. If Noah does not reply, it would produce a conditionally relevant absence (Schegloff 1996) (i.e., when an action is noticeably and accountably absent). When Noah does not respond, Stella physically molds his body, directing him to use eye contact and moving his head to face Dr. Saito (Solomon 2011). When Noah softly replies, "Good," he has become a speaking and participating child. After this visit, Stella reflected: He (Dr.
Saito) took the time to interact with him, where a lot of time the doctors are like, "What's the problem?," you know, "What's going on?" Talk to him! See, engage him! He'll say something to you, you may not understand it right away, but he'll tell you something, which is really important because, I think, if any child-, if you lower yourself to their eye level, they can relate to you. But Dr. Saito, he interacts with Noah, he talks to him. Stella's insistence that Noah be treated like any child at his eye level matches Dr. Saito's view of children with ASD as social and intentional. As Stella makes clear in another interview, this perspective is not shared by every physician who sees Noah. For example, Stella remembers another doctor's (we call him Dr. Simpson) response to Noah's tantrum during a visit: Noah had a meltdown that was probably ten times what you saw in there. And Noah wanted to write all over their door and I actually had to wash their door down (laughing). And at the end of the first time they met him, Dr. Simpson came over, shook my hand and said, "God bless you for having so much patience! Just-, God bless you!" I'm just like, I'm like (high-pitched, incredulous tone), "Wha::::t?" Stella appears to be taken aback by Dr. Simpson's thanking her for her patience during the visit. In another interview, she fleetingly comments that Dr. Simpson's "bedside manner is just atrocious." It seems significant that the doctor's appreciation does not translate into asking someone from the office staff to wash the door on which Noah drew with his marker, holding Stella publicly accountable for Noah's behavior. Stella's exasperation is shared by another mother in the study around the issue of adaptability of the clinic's environment to the child's needs and challenges. Monica, whom we quoted earlier, shared in an interview her frustration with how her daughter Roxanne's psychotherapist treated her during a visit: When you deal with a special needs person, you need to be prepared for that. I didn't like the way Clara (the psychotherapist) treated me like I didn't have a good attitude, but the first time we went in there I could tell that she judged us already, Roxanne was hyper and I couldn't control her. Roxanne ran around the clinic, she ran out onto the roof, and that first time she didn't end up having a tantrum in the clinic but when we left, you know how she gets, in the car she let it out. Monica's reflections on being judged and Stella's account of Dr. Simpson's gratitude stand in stark contrast with Stella's experiences of Noah's visits with Dr. Saito and Dr. Tran. It would be unthinkable to imagine either Dr. Tran or Dr. Saito thanking Stella for her patience because it would mark the doctor as not having the patience, and stand in conflict with the "paradox of recognition" (Rapp and Ginsburg 2011) that resides in simultaneously seeing Noah's challenges and his way of being in the world. In an interview, Stella reflected on what makes the long drives to see Dr. Saito and Dr. Tran worth it: "It just fell into place for Noah. Dr. Saito and Dr. Tran work well together even though they've never met. I think Noah has a really good group of doctors. Dr. Saito and Dr. Tran are just phenomenal when it comes to him." This quote exemplifies Stella's recognition of the visits as an extended process of health care that has an important impact on Noah's development, health, and well-being. In the next section, we examine the institutional practices that may hinder this hard-won achievement. 
--- The Transactional Nature of Health Care Encounters Noah's three semi-annual visits with Dr. Tran show how the transition to an electronic health record system alters the situated practices of a health care visit. We identify the changes in participation (Goodwin 2007) across the visits and show how the participants orient toward each other and the relevant objects in their physical environment: the medical record and the computer (see Figures 7-14). --- Dr. Tran: Visit 1 Noah and Stella enter the exam room and sit down at a child-sized table by the door. Noah begins to draw with markers that his mother carries for him in her backpack. Stella sits next to him, making encouraging comments. Eleven minutes later, Dr. Tran enters carrying Noah's medical chart. He sits in a chair behind Noah, joining him and Stella in the child-centered part of the room (Figure 7). Noah is drawing, supervised by Stella who, because Noah is occupied in an activity, is able to speak with Dr. Tran. The visit begins with updates on medication (Figure 8). While talking with Dr. Tran, Stella also talks to Noah, encouraging him to draw. Dr. Tran stands up and looks over Stella's shoulder, observing Noah's drawing (Figure 9). Stella says to Dr. Tran: "He knows how to write his name now. And there's one major thing, we are now completely potty trained. Number one and two!" Dr. Tran, writing in the chart, says, "That's great" (Figure 10). Stella goes through her list of concerns about Noah's sleep, his emerging sexuality, and unexpected responses to medication. Dr. Tran writes in the medical chart, alternating between looking up at Stella and looking down at the chart. When Stella says, "Another concern I'm having is …" or, "Some of my concern is …" Dr. Tran looks up at her from the chart. This "choreography of attention" (Tulbert and Goodwin 2011) provides a predictable interactional framework that renders Stella's concerns recognized and acknowledged by the doctor. Five minutes into the visit, Noah draws on the wall and Stella takes the marker away. From this moment on, maintaining the conversation becomes increasingly difficult because Noah's calm engagement in an activity will become impossible. Noah, upset, lies down on the floor and begins to cry. Stella lifts him up and puts him back in the chair. Noah becomes more upset, crying and stomping his feet. As his cries grow louder, Stella begins to count backward from 30. At this point, Stella directs all her attention and energy to managing Noah's behavior, and the conversation between her and Dr. Tran stops. Dr. Tran observes Stella and Noah and writes in the chart. When Stella stops counting, Noah asks "Pa::h" (it may be "pen" or "please"). But Stella cannot risk Noah drawing on the wall again and she puts the marker away. Noah goes into a full-blown tantrum. Stella sits down with him on the floor, holding him with her arms and legs like a human restraint. Over Noah's cries, Stella and Dr. Tran have a hard time hearing each other. Stella explains how she learned this method of calming Noah down: Noah's former teacher gave me the instruction to put him on the floor, to put your legs over his. But then I was being butted in the head. And I really need your-not necessarily your help-I've talked to Regional Center and they've talked about other methods, something with the diet but the problem is, he's not eating.
Stella's hesitant request for "not necessarily your help" with this problem shows her awareness that this issue is more in the jurisdiction of the regional center that authorizes behavioral services than in Dr. Tran's, who provides medication management. The medications are intended, as the American Academy of Pediatrics recommends, to help the child "benefit more optimally from educational interventions" (Myers and Johnson 2007:1163), rather than being an intervention on their own. The dilemma that Stella, Dr. Tran, and Dr. Saito have been facing is that Noah's educational interventions have been limited and inconsistent. In the absence of a continuous, well-planned intervention program, the medical management of Noah's behavior took the primary role, which it was not designed to have. Our data reveal the complex linkages between structural and interactional dimensions of health care disparities (Institute of Medicine 1999). Although ASD alone is associated with a risk for substandard care (Bethell et al. 2014), disparities in autism interventions and services experienced by many African American children may further jeopardize the interactional achievement (Schegloff 1995) of their health care visits. The necessity to manage the child's sensory, communicative, and behavioral challenges may become the primary focus during the visit, essentially stopping the conversation between the mother and the doctor. The range and severity of a child's health and behavioral challenges may become impossible to consider and address within the limited time of a health care visit. --- Dr. Tran: Visit 2 Six months later, Stella and Noah once again sit at the child-sized table while Noah works on a puzzle. When Dr. Tran walks in, he goes straight to a computer on the other side of the room, sits on a chair, and begins to type. Unlike the past visit when Dr. Tran joined Noah and Stella in the child-centered part of the room, this time Stella and Noah have to join Dr. Tran by the computer. This puts into motion a different participant organization than in the previous visit: It turns from being child-centered to being "computer-centered." Stella sits down, facing Dr. Tran at an angle. Noah sits in her lap, and they will now have to maintain his calm demeanor without the help of occupations such as drawing or doing a puzzle. It is now much harder for Noah to be calm and unobtrusive. Stella, excited, begins her update with a story about Noah's developing sociality: When I picked him up from school today, one of the boys in his class was crying because his aide was leaving. Noah out of the blue went over to where the tissues are kept, grabbed a tissue, went over to where the little boy was and wiped the tears off of his face! He had actually acknowledged the little boy! Other than that he won't acknowledge the kids are there. Dr. Tran types on the computer as she speaks. This second visit is organized around the competing demands introduced by the computer and Dr. Tran's new obligation to type the notes rather than to write them down. He no longer has the flexibility to move around the room and observe Noah. He appears torn between wanting to look at Stella while listening to her-the choreography of attention (Tulbert and Goodwin 2011) of the previous visit-and having to type on the keyboard. In the middle of the visit, Dr.
Tran stops typing long enough for the screen saver to turn on, as he listens to Stella's story about the second time in a few weeks that Noah used his own feces to "paint" on the walls of his room. Dr. Tran offers several medications to manage this behavior as well as another problematic behavior, running away at school. Throughout this visit, Noah is draped across Stella's body, and for the most part, he remains relatively calm. He vocalizes at times as if in protest but never becomes upset as in the previous visit. Several times throughout the visit, he hits himself on the forehead, but neither Stella nor Dr. Tran comment on it. The visit is "interactionally achieved" (Schegloff 1995) because Dr. Tran is able to shift his gaze and body orientation from the computer to Stella and Noah (Figure 11) and back to the computer without much disruption in the interaction (Figure 12). Stella is able to contain Noah during the visit in the absence of a child-centered organization of the room. Noah is able to remain relatively calm and content throughout the visit, which allows his mother and Dr. Tran to talk. At the end of the visit, Stella says to Noah: "You did so well." The third visit, however, will prove to be more challenging. --- Dr. Tran: Visit 3 Six months later, Noah and Stella are once again in Dr. Tran's exam room. Stella sits at the child-size table by the door while Noah stands leaning against her. She asks him to point to animals on a wooden puzzle and then to point to his hair, shirt, and shoes. Dr. Tran enters and sees that Noah has turned off the light. Before uttering a greeting, he says, "You can turn off the light if you'd like. It's okay with me." He walks straight to the computer and begins to type. There is no chair by the computer, so Dr. Tran remains standing. Noah, unanchored by a drawing activity or by his mother's body, begins to explore the room. Stella tries to hold Noah in her lap as she did in the previous visit, but this time she is unable to hold him in place. What follows next can be interpreted as either a child's disruptive behavior or a creative way to distract Dr. Tran from the computer. As Noah moves around the room, he touches Dr. Tran's shoes and pants, then the computer mouse, keyboard, and screen. He runs back and forth from the window to the door, opens cabinets and drawers, stands on the trash can, turns on the water, grabs Dr. Tran's glasses, and even wraps his arms around Dr. Tran's neck and shoulders, hanging from his back. Of the many behaviors in which Noah could have engaged, most of these appear aimed at attracting Dr. Tran's attention. Noah's overtures, however, make Dr. Tran appear less and less attentive to Stella's updates. "I have to type," Dr. Tran says one time, trying to physically disengage from Noah when he takes Dr. Tran's hand. Stella, who has been trying to manage Noah's behavior, now sits alone on the examination table. She continues to give Noah directions but has given up her efforts to physically manage him. She discusses Noah's medications, often speaking to Dr. Tran's back and gesticulating as he types on the keyboard (Figure 13). To make eye contact with Stella, Dr. Tran has to turn around, sometimes leaving one hand, as a pivot, on the mouse, poised to type as soon as he turns back to the computer (Figure 14). The analysis of this visit shows that the transactional nature of the child-mother-doctor interactions may not only enhance but also diminish the participants' engagement.
Noah is left without the child-centered space where he can engage himself in an activity or his mother's body where he can remain calm. Dr. Tran must manage the new demands of entering information into Noah's electronic health record while trying to listen to Stella and to preserve the choreography of attention (Tulbert and Goodwin 2011) that the two of them have enacted in previous visits. With Dr. Tran's diminished ability to interactionally acknowledge Noah's challenges and developmental victories, Stella has to work harder to get through her list of concerns. At the end of the visit, Dr. Tran, visibly exhausted, says, "I'll see him back at the end of the year." --- Discussion and Conclusion This article contributes an ethnographic, discourse analytic account of the situated health and development-focused work accomplished by the mother, the doctors, and the child with ASD. Our analysis reveals that the orchestration of pediatric health care visits for children with ASD presents new challenges and involves interactional, relational, and spatial work that differs in significant ways from findings reported for TD children. Specifically, although there are three primary social actors, the interaction between them cannot be categorized as triadic, as is the case in the TD children's visits. Rather, there are three separate dyadic interactions-child-mother, child-doctor, and mother-doctor-that take place during most of the visits described here. This may partly explain Liptak and colleagues' (2006) finding that outpatient visits of children with ASD are twice as long as visits of other clinical populations of children. Moreover, the child's communicative, sensory and other limitations imposed by ASD present challenges to all the participants in the visit. We demonstrated how, when a child with autism walks into a doctor's office, the encounter becomes a precarious, hard-won interactional achievement (Schegloff 1995) rather than a predictable event. We outlined the transactional nature of the work required from all three participants to achieve a "successful" health care visit. We considered the import of the physical environment in this achievement (e.g., the quality of light, the child-centeredness of the room, and the prominence of the computer). We described how structural and interactional dimensions of health care disparities become intertwined and amplified when the child's behavioral and educational services are inconsistent and showed how this impacts the child's ability to participate in the visit and how it increases the range and magnitude of concerns that the mother and the physicians have to address. Our analysis points to what is "at stake" (Kleinman 1988:55) in these visits as Stella vigilantly orchestrates interactions with the doctors to stay clear of "harsh judgment" while paying "respectful regard" to Noah's subjectivity and humanity (Lawrence-Lightfoot and Davis 1997:52). Much has been written in anthropology and other social sciences about ways in which parents of children with disabilities "come to locate, interpret and often advocate for (their) personhood" (Gray 2002; Landsman 2003:1948; Ortega 2009). Stella's interactional and relational work during the visits insists on Noah's fundamental humanity while contesting potential "othering" categorizations related not only to ASD, but also to being African American. In one of the interviews, she told a story about filling out a form where she had to check a box indicating that Noah was black.
"Instead of checking any of those off, I made my own box and I put 'Human,'" she said. This quote explicitly marks Stella's commitment to another kind of vigilance: to oversee and anticipate the intersections of race and disability (i.e., the direct and implicit ways in which race can exacerbate potential impediments to her son's care). There were many more subtle and nuanced ways, however, in which this work was undertaken. The "conditionally relevant absence" (Schegloff 1996) of the intrusion of race, disability, and disparities within the moments of health care visits was also evident. Stella's actions often embodied her determination to insist on Noah's humanity while avoiding the boxes denoting race and disability both on forms and during interactions with the doctors. Many of her elicitations and narrations of her son's actions were designed to illuminate Noah's capacities and attributes as first and foremost a child. The physicians' actions often supported and enhanced Stella's interactional and relational work during the visits, enacting the expectation that Noah was a competent co-participant in obvious ways such as engaging in social greetings, but also in more subtle ways evident in statements like "I have to type." These actions constituted Noah's sociality and potentiality as a relational participant in the health care visit, standing in stark contrast to commonly held presumptions of the lack of sociality and diminished subjectivity in children with ASD. The study has limitations in that health care visits data were collected for only four families of 23 who participated in the larger study. Some parents invited us to most of their children's appointments that took place during data collection, while other families never did. The families may have invited us to observe visits mostly with providers with whom they had positive relationships that would not have been jeopardized by the presence of an outside observer. This potentially provided an overly positive view of the phenomena. Even this view, however, allowed a window into the challenges that the families faced in seeking health care for their children. In spite of the consistently unmet health care needs of children with ASD and the mandate for health care providers' central role in managing this medically complex and chronic condition, there have been few practical strategies to improve the provision of health care for children with ASD and their families. This article aimed to show that the health care visit is a promising site where such strategies can begin to be developed.
The black-white disparity in preterm birth has been well documented in the USA. The racial/ethnic composition of a neighborhood, as a marker of segregation, has been considered an underlying cause of the racial difference in preterm birth. However, past literature using cross-sectional measures of neighborhood racial/ethnic composition has shown mixed results. Neighborhoods with static racial/ethnic compositions over time may have different social, political, economic, and service environments compared to neighborhoods undergoing changing racial/ethnic compositions, which
may affect maternal health. We extend past work by examining the contribution of neighborhood racial/ethnic composition trajectories over 20 years to the black-white difference in preterm birth. We used natality files (N = 477,652) from birth certificates for all live singleton births to non-Hispanic black and non-Hispanic white women in Texas from 2009 to 2011 linked to the Neighborhood Change Database. We measured neighborhood racial/ethnic trajectories over 20 years. Hierarchical generalized linear models examined relationships between neighborhood racial/ethnic trajectories and preterm birth, overall and by mother's race. Findings showed that overall, living in neighborhoods with a steady high proportion non-Hispanic black was associated with higher odds of preterm birth, compared with neighborhoods with a steady low proportion non-Hispanic black. Furthermore, while black women's odds of preterm birth were relatively unaffected by neighborhood proportions of the Latinx or non-Hispanic white population, white women had the highest odds of preterm birth in neighborhoods characterized by a steady high proportion Latinx or a steady low proportion non-Hispanic white. Black-white differences were the highest in neighborhoods characterized by a steady high proportion white. Findings suggest that white women are most protected from preterm birth when living in neighborhoods with a steady high concentration of whites or in neighborhoods with a steady low concentration of Latinxs, whereas black women experience high rates of preterm birth regardless of proportion white or Latinx. --- Introduction Preterm birth is the leading cause of neonatal and infant mortality [1,2]. Preterm babies are at increased risk for adverse health outcomes, neurodevelopmental problems, and disability [3,4]. The black-white difference in preterm birth has been documented in the USA for decades [5,6]. For 2016, non-Hispanic black women were 50% more likely to deliver preterm compared to non-Hispanic white women (13.8% vs. 9.0%) [5]. Researchers have investigated underlying reasons for the black-white difference in adverse birth outcomes; yet, the difference has remained unexplained after adjustment for individual-level factors in most population-based studies [7][8][9][10]. Following an earlier focus on individual-level factors of racial difference in preterm birth, research has turned to neighborhood factors to potentially explain the difference. Researchers have posited the racial/ethnic composition of a neighborhood as a significant underlying cause of the racial difference in preterm birth [11][12][13]. For a variety of reasons stemming from institutional and interpersonal racism-including socioeconomic inequality, racial prejudice, and housing market discrimination-neighborhood racial/ethnic composition is highly variable [14,15]. Non-Hispanic blacks tend to reside in neighborhoods with a high proportion of racial/ethnic minorities (hereafter, referred to as "predominantly non-white neighborhoods") rather than neighborhoods with a high proportion of non-Hispanic whites (hereafter, referred to as "predominantly white neighborhoods") [15]. Predominantly non-white neighborhoods on average have limited social, economic, educational, and healthcare opportunities and are considered less safe and less comfortable [16][17][18][19].
A review study [14] by Williams and Collins found that the physical separation of racial groups is likely to adversely affect the practice of a broad range of health behaviors and access to high-quality medical care, which disproportionately contributes to poor health among blacks. Given that preterm birth is associated with neighborhood deprivation and maternal stress [20][21][22][23][24][25][26], it has been hypothesized that living in predominantly non-white neighborhoods prior to or during pregnancy increases risk of preterm birth (hereafter, the neighborhood deprivation hypothesis). The overall picture of associations between neighborhood racial/ethnic composition and preterm birth becomes more complicated when considering women's race/ethnicity. According to the ethnic density hypothesis, the health of ethnic minorities is improved when living in neighborhoods with a higher concentration of one's own racial/ethnic group. One's own racial/ethnic group in the residential neighborhood provides opportunities for social interaction, access to culturally accessible resources, and material, logistic, and social support that promote individual health [27][28][29]. In addition, living in predominantly non-white neighborhoods may place minority residents at lower risk of racial discrimination and exclusion [30]. For example, some studies [31,32] showed that living in neighborhoods with a higher proportion of black people was associated with a lower level of discrimination among black women compared to living in neighborhoods with a low proportion of black people. In summary, the ethnic density hypothesis posits that the higher social capital, social support, shared culture, culturally accessible resources, and lower racial discrimination that minority residents may have in neighborhoods with many people who "look like them" may override the social and material deprivation that also often occurs in predominantly non-white neighborhoods [33]. The ethnic density effect was first investigated as a correlate of mental health outcomes. In a classic study [34], an inverse association between living in one's own ethnic neighborhood and rates of psychiatric hospital admission was found for blacks and whites. Recently, literature has shown a significant association between one's own racial/ethnic composition in the neighborhood and better physical health outcomes, albeit with associations differing by age and race/ethnicity [33,[35][36][37]. Literature examining the association between the proportion of one's own racial/ethnic group in the neighborhood and risk of preterm birth, however, shows mixed findings: a positive [38,39], inverse [33,40,41], or insignificant association [33,42] overall or within a subgroup. Also, because most of the studies focused only on a non-white sample, their analyses did not inform whether neighborhood racial/ethnic composition contributes to the black-white difference in preterm birth [33,38,39,43]. There are a few birth outcome studies examining interactions between neighborhood racial/ethnic composition and women's race, but they used birth records from 1990 to 2003 and found inconsistent results [11,41,44,45]. Neighborhoods are not static but dynamic entities that evolve through time due to changes in immigrant settlement patterns and mobility in general [40,46,47]. In particular, the Latinx population has increased rapidly during recent decades, and Texas is one of the three states where more than half of the US Latinx population resides [48,49].
The proportion of African Americans has decreased in Texas over the past 60 years [50]. Theoretically, historically and predominantly non-white neighborhoods may have health-compromising environments (such as less investment in a broad range of services and medical care, safety concerns, lack of social support, and advertising for unhealthy foods and substances) than neighborhoods that currently, but not historically, have a high proportion of racial/ethnic minorities, as a result of limited economic, social, and political power. Neighborhoods undergoing changing racial/ethnic compositions may also face benefits (e.g., social integration, diversity) and stressors (e.g., racial income inequality, unstable social change) [51][52][53][54][55], compared to neighborhoods with static racial/ethnic compositions over time. Using Texas census tract-level data from 1990 to 2010, our study, currently under review, found that neighborhoods varied considerably on neighborhood economic status according to neighborhood racial/ethnic trajectories [56]. For example, neighborhoods with changing (either increasing or decreasing) proportions of whites over 20 years were more likely to have high-poverty trajectories than neighborhoods with a consistently high proportion of whites during the same period [56]. Thus, consideration of neighborhood racial/ethnic trajectories may reflect economic, social, and environmental changes caused by the historical patterns of migration and mobility in Texas, compared to using a single point-in-time measure of neighborhood racial/ethnic composition. Furthermore, living in neighborhoods with a high proportion of Latinx people (hereafter, referred to as "predominantly Latinx neighborhoods") may contribute to reducing the risk of black mothers' preterm birth and ultimately the black-white difference. This is because Latinxs are the largest racial/ethnic minority group in Texas, and predominantly Latinx neighborhoods could provide black women with higher levels of social capital and minority-focused resources and less racial discrimination compared to predominantly white neighborhoods [11]. This study examined whether neighborhood racial/ethnic trajectories over 20 years explain black and white mothers' preterm birth and the black-white difference in effect estimates of preterm birth, because no prior study has examined neighborhood racial/ethnic trajectories in relation to preterm birth and its black-white difference. The following hypotheses guided the study: (1) living in historically Latinx or non-Hispanic black neighborhoods will serve a protective role against preterm birth for black, but not white, women; (2) living in historically non-Hispanic white neighborhoods will serve a protective role for white, but not black, women; and (3) living in historically Latinx or non-Hispanic black neighborhoods will reduce the black-white difference in effect estimates of preterm birth compared to the difference in historically non-Hispanic white neighborhoods. --- Methods --- Data Individual-level data were obtained from Texas natality files for all live, singleton births in Texas from 2009 to 2011. Census tracts were used as approximations of neighborhoods, consistent with past studies [26,57]. Neighborhood data for the years 1990 through 2010 came from the Neighborhood Change Database (NCDB), the gold standard in examining census tract-level data over time.
We did not use the 1970-1980 decennial censuses because 1250 out of 5265 census tracts had records missing neighborhood racial/ethnic composition in either 1970 or 1980 (i.e., rural areas that had not yet been assigned tract geocodes). Neighborhood data were linked to natality files on the basis of tract geocodes derived from women's residential addresses on birth certificates. Our analytic sample included all singleton births to non-Hispanic black and non-Hispanic white women during the 3-year period (N = 513,148; black = 127,763; white = 385,385). We excluded records missing length of gestation or birth weight (n = 9756), those with gestational age < 22 or > 44 weeks (n = 4337), and those with biologically implausible birthweight-gestation combinations (n = 2275) based on the Alexander et al. guideline [58]. We also excluded records missing 2010 geocodes (n = 21,536), those missing neighborhood racial/ethnic compositions from 1990 through 2010 (n = 1472), those not falling into any of the neighborhood racial/ethnic trajectories (n = 1946), and those missing maternal age, nativity, parity, or education (n = 393). The final sample was 471,433 births (92% of the total). --- Individual Measures Our dependent variable is preterm birth. We categorized births as preterm (< 37 weeks completed gestation) or term (≥ 37 weeks). Individual-level covariates included maternal age, race, nativity, parity, marital status, education, and timing of prenatal care (first trimester care; no first trimester care). Young and older women may have higher risks of preterm birth than women of 20-34 years of age, possibly due to social disadvantage, biological immaturity, and unhealthy behaviors for young women [59] and preexisting chronic diseases, medical problems, and infertility for older women [60,61]. Multiparas had greater perinatal risks than primiparas [61], especially among older women [62]. Black women and low socioeconomic status (SES) women may have higher risks of preterm birth than white or high SES women because of psychosocial stress [63][64][65][66] operating through immune system function/susceptibility to infection and health behaviors [67,68]. Unmarried women may experience stress from relationship instability, lack of psychosocial support, and socioeconomic disadvantage, which possibly leads to adverse birth outcomes, compared to married women [69]. Appropriate timing of prenatal care during pregnancy is important to prevent preterm birth by providing services to manage preterm labor and improving health behaviors and knowledge about early warning signs of pregnancy complications [70,71]. --- Neighborhood Measures Neighborhood racial/ethnic trajectories are the main exposure variables. We used census tract-level trajectories of the Latinx population, non-Hispanic black population, and non-Hispanic white population between 1990 and 2010. We first created three cross-sectional measures of neighborhood-level proportions of the Latinx, non-Hispanic black, and non-Hispanic white populations based on the distribution of the 1990, 2000, and 2010 data (tertiles). Next, using the three cross-sectional measures for each time period and population, we categorized neighborhoods into five trajectories defined a priori [26]: (1) steady low trajectory, (2) steady moderate trajectory, (3) steady high trajectory, (4) increasing trajectory, and (5) decreasing trajectory. First, many neighborhoods would have a steady proportion of each population over time (i.e., steady low, steady moderate, or steady high). Census tracts in a steady low trajectory had either a low proportion or a combination of low and moderate proportions with no discernible pattern during all time periods.
Census tracts in a steady moderate category had a moderate proportion of the specified race/ethnicity at all three time periods. And census tracts in a steady high category had either a high proportion or a combination of high and moderate proportions with no discernible pattern during all time periods. Second, some neighborhoods would experience either an increase or decrease in the population of a particular racial/ethnic group over the three decades (i.e., increasing or decreasing). Census tracts in an increasing trajectory had a low or moderate proportion in 1990 and became and remained moderate or high after 1990. Census tracts in a decreasing trajectory had a high or moderate proportion in 1990 and became and remained moderate or low after 1990. Neighborhood-level covariates are neighborhood poverty and population density (as a proxy of urbanization [72]). Advantaged neighborhoods and urban neighborhoods tend to have greater political power to build health-promoting environments (e.g., easy access to high-quality education, employment, information, and resources) [73][74][75], which are plausibly related to prepregnancy health and adverse birth outcomes [26,76]. A cross-sectional measure of neighborhood poverty was created based on the 2006-2010 American Community Survey. We classified census tracts with less than 5% poverty as low poverty (reference group), those with 5% to 20% poverty as moderate poverty, and those with more than 20% poverty as high poverty. The cutoffs were based on the US Census definition of poverty areas. Neighborhood-level population density was calculated as people in a census tract per square mile using the 2010 Census data. --- Statistical Analysis We first examined the prevalence of preterm birth overall and by sample characteristics. We then estimated hierarchical generalized linear models (HGLM) to examine the associations between neighborhood racial/ethnic trajectories and preterm birth (1) without adjustment for covariates and (2) with adjustment for all individual- and neighborhood-level covariates. To determine whether the black-white difference in odds ratio of preterm birth is explained by neighborhood racial/ethnic trajectories, we compared odds ratios of preterm birth associated with maternal race in the model that included all the individual-level variables and excluded neighborhood racial/ethnic variables (i.e., the covariate model) to those in the models that included neighborhood racial/ethnic trajectories. If the odds ratios decreased from the individual-level model to the neighborhood-level models, it would indicate that neighborhood racial/ethnic trajectories partly explain the black-white difference in odds ratio of preterm birth. Finally, we tested for cross-level interactions between race and neighborhood racial/ethnic trajectories. Where interaction terms were statistically significant at alpha < 0.10, we stratified the sample by women's race and conducted HGLM to examine racial differences in associations between neighborhood racial/ethnic trajectories and preterm birth. In addition, we stratified the sample by each of the neighborhood racial/ethnic trajectory variables and conducted HGLM to estimate the odds ratio for preterm birth among black versus white women (1) without adjustment for covariates and (2) with adjustment for all individual- and neighborhood-level covariates.
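To make the tertile-based trajectory coding described above concrete, the following is a minimal Python sketch of one plausible reading of the classification rules; it is illustrative only and not the authors' SAS code. The column names, the toy data, and the handling of ambiguous sequences (what counts as "no discernible pattern") are our assumptions.

import pandas as pd

RANK = {"low": 0, "moderate": 1, "high": 2}

def tertiles(series):
    # Cut one census year's tract-level proportions into tertiles.
    return pd.qcut(series, q=3, labels=["low", "moderate", "high"])

def classify(levels):
    # Map a (1990, 2000, 2010) tuple of tertile labels to a trajectory.
    a, b, c = (RANK[x] for x in levels)
    if a == b == c:
        return ("steady low", "steady moderate", "steady high")[a]
    # Increasing: low/moderate in 1990, then moderate/high in 2000 and 2010.
    if a <= b <= c and c > a and a <= 1 and min(b, c) >= 1:
        return "increasing"
    # Decreasing: high/moderate in 1990, then moderate/low in 2000 and 2010.
    if a >= b >= c and c < a and a >= 1 and max(b, c) <= 1:
        return "decreasing"
    # Non-monotone mixes of low/moderate or of high/moderate count as steady.
    if {a, b, c} <= {0, 1}:
        return "steady low"
    if {a, b, c} <= {1, 2}:
        return "steady high"
    return "unclassified"  # births in such tracts were excluded (n = 1946)

# Toy example; the real inputs are NCDB tract-level proportions.
tracts = pd.DataFrame({
    "pct_black_1990": [0.05, 0.40, 0.80, 0.10],
    "pct_black_2000": [0.06, 0.55, 0.75, 0.35],
    "pct_black_2010": [0.04, 0.70, 0.82, 0.60],
})
labels = tracts.apply(tertiles)  # tertile label per tract per year
tracts["black_trajectory"] = [classify(r) for r in labels.itertuples(index=False)]
print(tracts["black_trajectory"].tolist())

Classifying on tertiles rather than fixed percentage cutoffs defines low, moderate, and high relative to the distribution of tracts within each census year, so a tract's trajectory reflects its standing relative to other Texas tracts rather than absolute demographic change.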
As additional information, we also interpreted results against a Bonferroni-adjusted significance threshold, because multiple null hypothesis significance testing is vulnerable to type I error. The adjusted threshold (i.e., 0.0167) was obtained by dividing the conventional alpha of 0.05 by 3. We conducted multilevel modeling because births were nested within neighborhoods. On average, there were 93 births per census tract, and 1793 out of 5099 census tracts had 100 or more births. In addition, while the intra-class correlation coefficient was only 2.4% (on the latent scale, ICC = τ00/(τ00 + π²/3) = 0.08/(0.08 + 3.29) ≈ 0.024), findings of the unconditional model showed significant residual variance between neighborhoods (τ00 = 0.08, p < 0.001). Thus, the decision to use multilevel modeling was appropriate. We developed our final model by successively testing random intercept models with individual- and neighborhood-level correlates, random coefficient models assessing whether the slopes for women's race had a significant variance component, and cross-level interaction models for women's race and neighborhood racial/ethnic trajectories. Final models were decided with consideration of model fit, significance of slopes, and model parsimony. We used SAS software version 9.4 for all analyses. --- Results --- Description of Sample Table 1 presents the characteristics of non-Hispanic black and white mothers with singleton births in Texas from 2009 to 2011 (N = 471,433). The majority of mothers were 20 to 34 years old, three quarters were non-Hispanic white, more than 40% were primiparous, and more than 60% were married. Over one-third of births were to mothers without first trimester prenatal care, and about 30% of births were to mothers with a college degree. About a third of births were to mothers living in neighborhoods with a steady low proportion of the Latinx population, or neighborhoods with a steady high proportion of the non-Hispanic black or non-Hispanic white population. Nearly a quarter of births were to mothers living in high poverty neighborhoods. About one in ten mothers delivered a preterm birth. The prevalence of preterm birth was higher among teens, older mothers, non-Hispanic black mothers, mothers delivering their 5th child or more, unmarried mothers, mothers without first trimester care, and low-educated mothers and fathers. The prevalence was also high for mothers living in neighborhoods with a steady high proportion of the Latinx or non-Hispanic black population, neighborhoods with a steady low proportion of the non-Hispanic white population, and high-poverty neighborhoods. The black-white difference in odds of preterm birth was substantial (1.62 times higher among black mothers), and especially notable among older women, women in neighborhoods with a steady low Latinx population, and women in neighborhoods with a steady high or increasing proportion of the non-Hispanic white population. Living in neighborhoods with a steady high proportion of the non-Hispanic black population or a decreasing non-Hispanic black population (compared to a steady low proportion) was associated with increased odds of preterm birth in unadjusted models (OR = 1.47 and 1.11, respectively). After adjustment for individual and neighborhood characteristics, living in neighborhoods with a steady high proportion of the non-Hispanic black population remained associated with a 5% increase in the odds of preterm birth (see the neighborhood black trajectories model in Table 2). All categories of the neighborhood non-Hispanic white trajectories (compared to a steady high proportion) were significantly associated with preterm birth in unadjusted models (ORs = 1.18-1.71).
After adjusting for covariates, all categories of the neighborhood non-Hispanic white trajectories (compared to a steady high proportion) were still significantly associated with the odds of preterm birth (ORs = 1.06-1.18) (see the neighborhood white trajectories model in Table 2). The odds of preterm birth associated with being a non-Hispanic black woman (vs. non-Hispanic white) were robust to including racial/ethnic trajectories (adjusted ORs = 1.49, 1.46, and 1.45, respectively). However, in general, model fit was better with inclusion of the racial/ethnic trajectory measures in the models. Using the adjusted p values based on a Bonferroni adjustment, living in neighborhoods with a steady high proportion of the Latinx population and all categories of the neighborhood non-Hispanic white trajectories (compared to a steady high proportion) were significantly associated with preterm birth. --- Cross-level Interaction of Women's Race with Neighborhood Racial/Ethnic Trajectories in Preterm Birth Cross-level interactions between women's race and neighborhood Latinx or non-Hispanic white trajectories were significantly associated with the odds of preterm birth (see Supplemental Table 1). Regarding the cross-level interaction between women's race and neighborhood Latinx trajectories, being black (vs. being white) in neighborhoods characterized by a steady moderate or high proportion of the Latinx population, or an increasing Latinx population, was not as risky for preterm birth (OR = 0.93, 0.88, and 0.92, respectively) as in neighborhoods characterized by a steady low Latinx population. In sum, the interactions between race and the non-Hispanic white trajectories and between race and the Latinx trajectories were significant at alpha 0.05. There was no significant interaction between race and neighborhood non-Hispanic black trajectories (results not shown). To illustrate further the significant cross-level interactions between race and the trajectories, we performed models stratified by women's race. Table 3 presents the logistic regression results estimating odds ratios of preterm birth associated with neighborhood Latinx trajectories and neighborhood white trajectories by women's race. In general, for black mothers, neighborhood context in the form of racial/ethnic trajectories did not matter for the odds of preterm birth. White mothers, however, had the highest odds in neighborhoods characterized by a steady high Latinx population or a steady low population of whites. In other words, white women seem to benefit from being racially segregated. Based on the adjusted p values, living in neighborhoods with a steady moderate or high proportion of the Latinx population and all categories of the neighborhood non-Hispanic white trajectories remained associated with preterm birth among non-Hispanic white women. Table 4 displays the logistic regression results estimating odds ratios of preterm birth associated with women's race within each subgroup of neighborhood racial/ethnic trajectories. The adjusted black-white odds of preterm birth were lowest among mothers in neighborhoods with a steady high trajectory of the Latinx population (OR = 1.39) compared with other Latinx trajectories (ORs: 1.47-1.58). Notably, the CIs for steady high and steady low did not overlap.
In contrast, the adjusted black-white odds of preterm birth were highest among mothers in neighborhoods with a steady high (OR = 1.59) or increasing (OR = 1.55) trajectory of the non-Hispanic white population compared with other non-Hispanic white trajectories (ORs: 1.34-1.49), with mostly nonoverlapping CIs. When using the adjusted p values, the same results were found. --- Discussion The present study examined the contribution of neighborhood racial/ethnic compositions over time to the black-white difference in effect estimates of preterm birth. A few studies have examined the black-white difference in preterm birth using cross-sectional measures of neighborhood racial/ethnic compositions, with inconsistent results [11,41,44,45]. This study extends past work by examining trajectories of neighborhood racial/ethnic composition over 20 years and interactions between women's race and neighborhood racial/ethnic trajectories. Our findings highlight that neighborhood racial/ethnic trajectories were associated with risk of preterm birth, but with different directions of association by mother's race. Prior literature has hypothesized that predominantly non-white neighborhoods tend to have limited accessibility to healthcare services, lack educational and employment opportunities, and be considered less safe and less comfortable [16-19], characteristics that are associated with greater risk of preterm birth among both non-Hispanic black and white women [20,25,26]. In contrast, according to the ethnic density hypothesis, minorities have better health status in neighborhoods with a higher proportion of one's own racial/ethnic group. This is plausibly because minorities may experience less racial discrimination and have more opportunities for social interaction, culturally appropriate health care services, and material, logistic, and social support in such neighborhoods [27-32]. Our findings support both hypotheses. Living in neighborhoods with a steady high proportion of non-Hispanic blacks (compared to a long-term low proportion) was associated with greater odds of preterm birth overall. Living in neighborhoods with a steady low, moderate, increasing, or decreasing proportion of non-Hispanic whites (compared to a steady high proportion of whites) was associated with higher odds of preterm birth among white women. These findings corroborate the neighborhood deprivation hypothesis. We also found partial support for the ethnic density hypothesis. Living in predominantly Latinx neighborhoods was associated with greater risk of preterm birth for white, but not black, women. The findings could be interpreted as indicating that, for non-Hispanic black women as a racial minority group, social and material deprivation in predominantly Latinx neighborhoods might be offset by a higher level of social support, minority-focused services, and less discrimination [16-19]. Our results also showed that living in predominantly white neighborhoods is not protective against adverse birth outcomes among black women and that the black-white difference in odds of preterm birth was elevated in predominantly white neighborhoods. This is possibly due to the mediating role of racism-related stress, social isolation, and low accessibility to culturally sensitive resources, which minorities often experience [27-29].
For example, compared to black women in predominantly non-white neighborhoods, black women in predominantly white neighborhoods may be exposed to stressful race-related experiences and have to overcome racial barriers to being involved in the community and accessing resources [30-32]. Future research should examine such mechanisms using measures of exposure to neighborhood-level racial discrimination in daily life, interactions with neighbors, social cohesion, safety, and accessible resources prior to or during pregnancy. Testing these potential mediators, both objectively measured (e.g., number of healthcare facilities, crime rates) and perceived (e.g., perceived neighborhood safety, perceived quality of care), could help explain the complexity of the black-white difference in preterm birth associated with neighborhood racial/ethnic compositions. Predominantly non-white neighborhoods are generally considered economically and socially disadvantaged neighborhoods [16-19]. Policies (local, state, and national) often perpetuate systemic gaps between non-Hispanic white residents and residents of color, leading to increased health disparities, particularly for non-Hispanic black women. Racially inclusive policies and strategies can improve the health of mothers and prevent preterm birth among black women in predominantly non-white neighborhoods by focusing on limited opportunities for adequate and affordable housing, access to healthcare, high-quality minority-focused resources, sustainable income, and quality education, all key determinants of health [77,78]. Key strategies for Texas and its municipalities could include [79-82]: (1) supporting access to health care by extending Medicaid coverage to one year postpartum; (2) funding community health care workers in partnership with local health clinics and hospitals; (3) increasing access and funding for more postpartum depression screenings and mental health treatment; (4) utilizing community organizations, churches, and other cultural institutions to engage community voice when local governments and neighborhood associations are considering policy changes to secure parental leave and affordable and accessible child care; and (5) providing implicit bias training for realtors, housing agents, healthcare staff and workers, teachers and education administrators, and city council members. These policies and key strategies would increase protective factors for black women in both steady high non-Hispanic black and steady high non-Hispanic white neighborhoods, improving the health of black mothers. We note several limitations to the present study. We were limited in our ability to examine the ethnic density hypothesis for black women, as the black population in Texas is relatively small and not as highly concentrated as the non-Hispanic white and Latinx populations (e.g., nearly 2/3 of all neighborhoods had 10% or fewer black residents). Birth certificate data do not include potentially important information such as the length of time a mother lived in a neighborhood, household income, health insurance status, and residential mobility. In particular, given that 17% of all females in Texas moved in the past year according to the 2010 ACS, future longitudinal research needs to account for the length of time a woman lives in her neighborhood and the racial/ethnic composition of her previous neighborhood if she recently moved.
In addition, we chose to focus on neighborhood racial/ethnic compositions, but other important neighborhood characteristics, such as racial segregation, crime, and neighborhood socioeconomic trajectories, should be considered in future research. Furthermore, our sample was limited to non-Hispanic black and white women, as approved by two IRBs. We have plans for future research to include births to Latinx women and to examine those births by nativity. Finally, causal inferences cannot be made with cross-sectional data. Further research replicating our study with longitudinal data and consideration of residential mobility is warranted to clarify the effect of neighborhood racial/ethnic trajectories on preterm birth. Despite these limitations, this study has several strengths. First, this study included all singleton births to non-Hispanic black and white women in Texas from 2009 to 2011. We measured neighborhood racial/ethnic trajectories, which had not been investigated in past work examining associations between neighborhood racial/ethnic compositions and adverse birth outcomes. Our work also adds to a body of literature demonstrating that living in a steady high concentration of whites protects white, but not black, women and that living in a steady low concentration of Latinxs protects white, but not black, women. Given evidence that the black-white difference in preterm birth varies by neighborhood racial/ethnic trajectories, researchers and policymakers should identify the mechanisms through which white women benefit from living in white neighborhoods and the reasons why what works for white women does not work for black women. --- Conflict of Interest The authors declare that they have no conflicts of interest. Publisher's Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Background Social relationships play a fundamental role in individuals' lives and health, and social isolation is prevalent among older people. Chronic non-communicable diseases (NCDs) and frailty are also common in older adults. Aims To examine the association between number of NCDs and social isolation in a cohort of community-dwelling older adults in the UK, and to consider whether any potential association is mediated by frailty. Methods NCDs were self-reported by 176 older community-dwelling UK adults via questionnaire. Social isolation was assessed using the six-item Lubben Social Network Scale. Frailty was assessed by the Fried phenotype of physical frailty. Results The median (IQR) age of participants in this study was 83.1 (81.5-85.5) years for men and 83.8 (81.5-85.9) years for women. The proportion of socially isolated individuals was 19% in men and 20% in women. More women (18%) than men (13%) were identified as frail. The number of NCDs was associated with higher odds of being isolated in women (unadjusted odds ratio per additional NCD: 1.65, 95% CI 1.08, 2.52, p = 0.021), but not in men, and the association remained robust to adjustment, even when accounting for frailty (OR 1.85, 95% CI 1.06, 3.22, p = 0.031). Discussion The number of self-reported NCDs was associated with higher odds of social isolation in women but not in men, and the association remained after considering frailty status. Conclusions Our observations may be considered by healthcare professionals caring for community-dwelling older adults with multiple NCDs, for whom enquiring about social isolation as part of a comprehensive assessment may be important.
Background Social relationships are important in individuals' lives and health, and have previously been associated with physical and psychological wellbeing [1]. Social isolation is considered an objective measure of the scarcity or absence of regular social contacts and relationships with relatives, friends and neighbours, and a lack of social connection and involvement with the wider society [2-4]. As such, social isolation is distinct from loneliness, which is a subjective, negative evaluation of the discrepancy between one's desired and actual quantity and quality of social relationships [5-7]. Previous studies have reported that social isolation is prevalent and increasing among older adults [8,9]. This is a growing public health concern, as social isolation has been associated with a number of both physical and psychological adverse health outcomes, such as poor physical capability, myocardial infarction, stroke, depression and mortality [10-16]. Therefore, recent studies have highlighted the importance of developing and implementing interventions aimed at reducing social isolation (as well as loneliness) in older populations [17,18]. In addition, an increase in life expectancy and a subsequent ageing population have led to a higher prevalence of chronic, non-communicable diseases (NCDs) [19]. The coexistence of two or more NCDs in one patient is defined as multimorbidity [20,21], a phenomenon that increases with age [22]: a study utilising a survey of members of a health maintenance organisation aged 65 and over found the average person had 8.7 chronic diseases [23], while a Canadian study reported that the number of chronic diseases varied from 2.8 in young patients to 6.4 among older patients recruited from regional general practices [24]. The World Health Survey carried out between 2002 and 2004 in 70 countries worldwide showed that about 50% of middle-aged (50-64 years) to older (≥ 65 years) adults were multimorbid, having two or more NCDs; approximately a quarter had three, and one tenth had four or more NCDs [25]. A study by Kingston et al. using data from two population-based English cohorts of older adults living in the community (i.e. the English Longitudinal Study of Ageing [ELSA] and the Cognitive Function and Ageing Studies II) reported a 45.7% prevalence of multimorbidity (defined as having two or more NCDs) in 2015 for individuals aged 65-74 years, and estimated that this prevalence might increase to 52.8% by 2035 [26]. While a number of studies have focused on the link between multimorbidity and loneliness [27-32], studies looking at potential associations between the number of coexisting NCDs and social isolation are very rare. A recent systematic review of observational studies examining the link between multimorbidity and loneliness, social isolation, and social frailty (i.e. the lack of resources to meet one's basic social needs) highlighted the lack of studies examining the association between multimorbidity and social isolation [33]. The occurrence of NCDs in older adults is often accompanied by frailty [34,35], a multi-dimensional geriatric syndrome that can be defined as a state of increased vulnerability resulting from decreased physiological reserves, multi-system dysregulation and a limited capacity to maintain homeostasis [36,37]. Frailty is associated with higher risks of falls, disability, hospitalisation and mortality [38], and it has been reported to predict increased social isolation [39].
It is thus possible that any link between NCDs and social isolation might be mediated by frailty. In the current study, we therefore investigated whether the number of self-reported NCDs is associated with social isolation in a cohort of community-dwelling older adults in the UK. We also sought to explore whether any observed associations were removed by adjustment for the presence of frailty. --- Methods Participants were recruited from the Hertfordshire Cohort Study (HCS), a population-based sample of men and women born between 1931 and 1939 in Hertfordshire and originally recruited to study the relationship between growth in infancy and the subsequent risk of adult diseases [40,41]. Between 2019 and 2020, 176 participants from the HCS (94 men and 82 women) were visited at home by a trained fieldworker who administered a questionnaire that included information on medical history, medication use, lifestyle and social isolation. The visits also included measurements of height and weight to calculate body mass index (BMI), and of grip strength, assessed three times for each hand using a Jamar dynamometer (the maximum measurement was used for analysis) [42]. The Short Physical Performance Battery (SPPB) tests were also performed, including the assessment of gait speed, measured using an eight-foot course with no obstructions for an additional foot at either end. Participants were asked to walk at their customary pace and the time taken was recorded using a stopwatch; the use of assistive devices, such as canes, was permitted if necessary. Gait speed was determined by dividing the distance traversed by the time between the first and last step [43]. Social isolation was assessed using the 6-item Lubben Social Network Scale (LSNS-6), which has been validated to assess social networks and social support and to screen for social isolation in older people [44]. The LSNS-6 tool measures the number and frequency of social interactions with friends (three items) and family members (three items). Each answer is assigned a score ranging from 0 ("none") to 5 ("nine or more"), and the overall final score ranges from 0 (indicating high isolation or few social resources) to 30 (indicating low isolation or many social resources). Social isolation was defined as an LSNS-6 score < 12, in accordance with Lubben et al. [44]. The LSNS-6 has been shown to have good internal consistency across samples of community-dwelling older adults [44-46]. The number and types of NCDs were assessed by asking the question: 'Have you been told by a doctor that you have any of the following conditions?'. The following conditions were recorded: high blood pressure, diabetes, lung disease (asthma, COPD, emphysema, chronic bronchitis), rheumatoid arthritis, multiple sclerosis, cancer, vitiligo, depression, Parkinson's disease, heart disease (heart attack, angina, heart failure), peripheral arterial disease (claudication), osteoporosis, thyroid disease, and stroke. Any other serious illnesses were also recorded. Frailty was defined as the presence of at least three of the following Fried frailty criteria [38]: unintentional weight loss, weakness, self-reported exhaustion, slow gait speed and low physical activity. Weight loss was assessed by asking the question: 'In the past 3-6 months, have you lost any weight unintentionally? If yes, how much?'. Weakness was defined as a maximum grip strength of < 27 kg for men and < 16 kg for women [47].
Exhaustion was assessed by asking the following question: 'How often in the last week did you feel "everything I did was an effort" or "I could not get going"?'. Participants who reported feeling as described above either a moderate amount of the time or most of the time were identified as exhausted. Slow gait speed was defined as ≤ 0.8 m/s. Physical activity was assessed by the average amount of time (in minutes per day) spent walking outside, cycling, gardening, playing sports or doing housework in the last 2 weeks. Low physical activity was defined as an activity time in the bottom fifth of the HCS sex-specific distribution (≤ 58 min/day for men and ≤ 90 min/day for women). Frailty assessed using Fried's criteria has predictive validity for adverse health outcomes, including disability [38,48]. Smoker status was categorised as never smoked, ex-smoker or current smoker depending on the participants' answers to the questions 'Do you currently smoke?' and 'Have you ever been a smoker?'. Participants were asked how often they currently drank different types of alcohol (beer, wine, spirits, etc.) and how much they normally drank each time. This was used to estimate their alcohol consumption in units per week. Marital status was also ascertained and dichotomised for analysis as 'currently married' and 'single, divorced, separated or widowed'. Lastly, social class was determined at the HCS baseline (1998) from the participants' current or most recent occupation for men and never-married women, and from the husband's occupation for married women; occupations were classified as non-manual (classes I-IIINM) or manual (classes IIIM-V) according to the 1990 OPCS Standard Occupational Classification scheme. --- Statistical analysis Descriptive statistics for continuous variables were expressed as median and interquartile range (IQR); categorical variables were expressed as frequency and percentage. Differences between men and women were assessed using Mann-Whitney tests, Pearson's χ² tests or Fisher's exact test, as appropriate. Logistic regression analyses were used to examine the associations between the number of NCDs and the social isolation outcome. The regression analyses were undertaken with and without adjustment for the following demographic and lifestyle confounders: age, BMI, social class, marital status, smoker status and alcohol consumption, and then further adjusted for frailty. A p value of ≤ 0.05 was considered statistically significant. The analyses were conducted using Stata version 16. --- Results Data on NCDs, social isolation, and frailty were available for 176 participants (94 men and 82 women). Table 1 provides the demographic characteristics of the participants. The median (IQR) age of participants in this study was 83.1 (81.5-85.5) years for men and 83.8 (81.5-85.9) years for women. BMI was slightly higher in men (median 27.3, IQR 24.9-29.8) than in women (26.2, 23.7-29.3), although the difference was not statistically significant. The median (IQR) number of NCDs was 2 (1-2) in men and 2 (1-3) in women, and essentially equal proportions of men (19%) and women (20%) were identified as socially isolated on the LSNS-6, while more women (18%) than men (13%) were identified as frail according to Fried's criteria.
None of these differences, however, was statistically significant; the main significant differences were that men were more likely to be currently married than women (72% vs 48%, p < 0.001) and consumed more alcohol units in a week than women (median 2.8, IQR 0.2-8.6 for men and 1.0, 0.0-4.4 for women, p = 0.006), while a smaller proportion of men than women had never smoked (54% vs 70%), a difference of borderline significance (p = 0.053). Table 1 also presents the number and proportion of participants with each of the NCDs. Table 2 displays the relationships between the number of NCDs and social isolation. There was no association between the number of conditions and being isolated in men, before or after adjustment. In contrast, a greater number of NCDs was associated with higher odds of being isolated in women in the unadjusted model (OR per additional NCD 1.65, 95% CI 1.08, 2.52, p = 0.021). This association persisted after adjustment for confounders, i.e. age, BMI, social class, marital status, smoker status and alcohol consumption (OR 1.93, 95% CI 1.11, 3.34, p = 0.020), and it remained robust when Fried frailty was added to the model (OR 1.85, 95% CI 1.06, 3.22, p = 0.031). Finally, we also considered whether these relationships were altered after adjustment for the presence of anxiety or depression according to the EuroQoL (moderately or extremely anxious/depressed vs not anxious/depressed); associations were similar after this adjustment (data not shown). --- Discussion We found a high prevalence of social isolation in our population of community-dwelling older adults, in line with previous estimates for social isolation among older adults ranging between 15 and 40% [49,50], and virtually identical to the 19% prevalence of social isolation reported in ELSA participants with a mean (SD) age of 70.3 (16.8) years [51]. These data were collected just prior to the start of the COVID-19 pandemic; the prevalence of social isolation is now likely to be even higher. We also found that a greater number of NCDs in women was associated with higher odds of being isolated, and this association was not affected by the presence of frailty. In contrast, no associations were found between the number of NCDs and being socially isolated in men. We were interested to consider whether any possible association between the number of NCDs and social isolation could be explained by the presence of frailty, after previous work in ELSA found that social isolation predicted higher frailty levels, and higher frailty levels predicted greater social isolation [39]. In our study, adjustment for frailty did not remove the associations between social isolation and NCDs in women, possibly because there were low numbers of individuals living with frailty in our population sample. Our results hence suggest that even before the onset of frailty, having a greater number of NCDs is associated with social isolation in women, but interestingly not in men. Despite the paucity of literature on the topic, one previous study by Kristensen et al. found that, in a population of German adults with a mean (SD) age of 63.47 (11.44) years, the onset of multimorbidity was actually associated with larger social networks [27]. This diverges from what we found in our study; the discrepancy may be ascribed to the fact that our population sample is significantly older than the one examined by Kristensen and colleagues.
As these authors have highlighted, the onset of physical ill health may have caused an increased need for social contact, especially through support and help [27]. This is to some extent corroborated by another study, conducted in New Zealand with participants aged between 35 and 86 years, which reported that patients with multimorbidity tend to describe social networks mainly consisting of family, support groups, and health care professionals [52]. Being considerably older, our participants are very likely to be well beyond the onset of NCDs and may have already lived with two or more conditions for a long time, by which time their social networks may have decreased in size. It must be noted that Kristensen et al. did not examine possible sex differences [27,33]. Lastly, Tisminetzky et al. reported that, among American participants with an average age of 61 years, individuals with 4 or more comorbidities were more likely to have a limited social network compared to those with one or fewer conditions [53]. However, the participants in this study were not only notably younger than ours but were also hospitalised individuals rather than community-dwelling adults. The sexual dimorphism of our findings is striking. We found that the number of NCDs was associated with social isolation in women but not in men. It is possible that the number of NCDs is linked to isolation in women only because women tend to have a greater prevalence and incidence of mobility disability than men [54,55]: it has been previously reported that social isolation is high among adults with disability [56] and that people with disability have fewer friends, less social support, and are more socially isolated than the general population [56-60]. Women reporting NCDs may be affected by different medical conditions from men, specifically those affecting physical performance to a greater extent [56,57]; for example, arthritis is more common in women, although we could find no statistically significant difference in the prevalence of rheumatoid arthritis between the sexes in our sample, possibly due to the low proportion of men and women with this condition. It is also possible that co-existing depression/anxiety may mediate relationships between NCDs and social isolation; again, we found no evidence of this in our sample. In our study, we used a simple count of NCDs rather than a complex measure such as the Charlson comorbidity index [61]. A systematic review of measures of multimorbidity found that simple counts of diseases perform almost as well as complex measures in predicting outcomes such as mortality and health care utilisation [62]. In addition, the mechanisms leading from disease to social isolation can vary substantially, as there can be not only physical but also psychological reasons for social isolation. For instance, vitiligo, a skin disease characterised by a total or partial loss of melanocytes, does not cause decreased mobility (as can be the case for stroke and heart disease, which may thus account for social isolation); however, vitiligo, like other chronic skin conditions, is often associated with social stigmatisation and lower social acceptance [63,64], which can in turn lead to social isolation. Similarly, high blood pressure may not directly be associated with social isolation, but medications prescribed to treat this condition may have a number of side effects (e.g. sedation, fatigue, and insomnia) [65], which can hamper one's social life and thus induce social isolation.
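As a concrete illustration of the measures used in this study (the simple NCD count, the LSNS-6 < 12 isolation cutoff, and the three-of-five Fried frailty flag) and of the unadjusted versus adjusted logistic models, here is a minimal sketch. The data frame, toy values, and variable names are hypothetical, and the plain logistic regression shown is in the spirit of, not identical to, the Stata analysis reported above.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical analysis frame: one row per participant (toy values only).
df = pd.DataFrame({
    "lsns6_total": [10, 25, 8, 30, 14, 11, 22, 9, 16, 7, 28, 13],  # 0-30; lower = more isolated
    "n_ncds":      [3, 1, 4, 0, 2, 1, 2, 5, 1, 3, 0, 4],           # simple disease count
    "fried_count": [1, 0, 3, 0, 3, 0, 1, 4, 0, 3, 0, 2],           # Fried criteria met (0-5)
    "age":         [84, 82, 85, 81, 83, 86, 82, 84, 83, 85, 81, 86],
})

# LSNS-6 score below 12 defines social isolation (Lubben et al.).
df["isolated"] = (df["lsns6_total"] < 12).astype(int)

# Fried phenotype: frail if at least three of the five criteria are met.
df["frail"] = (df["fried_count"] >= 3).astype(int)

# Unadjusted, then frailty-adjusted, logistic models for the NCD count.
# BFGS is used only because this toy sample is tiny.
unadj = smf.logit("isolated ~ n_ncds", data=df).fit(method="bfgs", disp=False)
adj = smf.logit("isolated ~ n_ncds + age + frail", data=df).fit(method="bfgs", disp=False)

print("unadjusted OR per additional NCD:", np.exp(unadj.params["n_ncds"]))
print("adjusted OR per additional NCD:  ", np.exp(adj.params["n_ncds"]))
```

In the study itself the adjusted model also included BMI, social class, marital status, smoker status and alcohol consumption; they are omitted here only to keep the toy example short.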
Further work including qualitative analysis (rather than complex measures of morbidity) may be beneficial to the investigation of the relationship between multimorbidity and social isolation in this group. Our study has a number of limitations. Our study population may not be entirely representative of the wider UK population, since all recruited participants were born in the county of Hertfordshire, were still living in their homes, and were all Caucasian. Nevertheless, it has been previously demonstrated that the HCS is representative of the general population with regard to anthropometric body build and lifestyle factors, such as smoking and alcohol intake, in line with data from the European Prospective Investigation into Cancer and Nutrition (EPIC) cohort [66]. In addition, a 'healthy' responder bias is evident within the HCS [40]. Social class was determined at the HCS baseline from the participants' then current or most recent occupation for men and never-married women, and that of the husband for married women: this is a crude assessment which might not be reflective of participants' actual occupation and, therefore, social class. An additional limitation of this study is the cross-sectional design of most of its analysis. Lastly, NCDs were self-reported and therefore recall bias cannot be ruled out. However, our study also has a number of strengths. Firstly, the LSNS-6 provides a reliable measurement of social isolation; Rasch analysis showed unidimensionality of the overall scale, high person and item reliability, and good fit of individual items with only one exception [67]. Secondly, we assessed frailty using the accepted and objective Fried criteria [68]. We are aware that other methods have been developed to assess frailty, but the existing literature exploring the relationships between frailty and social isolation using different screening tools is limited [69]. Lastly, the HCS is a population of community-dwelling older adults who have been extensively phenotyped and well characterised with regard to lifestyle and past medical history. --- Conclusions In a cohort of community-dwelling older adults in the UK, we found that the self-reported number of NCDs was associated with social isolation in women only, and that this association was not affected by frailty assessed using Fried's criteria. Healthcare professionals looking after older adults in a community setting might take our observations into consideration when completing Comprehensive Geriatric Assessments for individuals affected by NCDs. Future studies may benefit from investigating this association longitudinally and in larger populations, and from exploring whether the association is mediated by impaired physical function and mobility disability. Qualitative studies exploring these relationships in greater detail in women would also be extremely valuable. Funding This work was funded by the Medical Research Council. --- Availability of data and material The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request. Code availability Not applicable. --- Declarations --- Conflict of interest Professor Cyrus Cooper has received lecture fees and honoraria from Amgen, Danone, Eli Lilly, GSK, Kyowa Kirin, Medtronic, Merck, Nestlé, Novartis, Pfizer, Roche, Servier, Shire, Takeda and UCB outside of the submitted work. Professor Elaine Dennison has received speaker honoraria from UCB, Pfizer, Lilly and Viatris.
Dr Harnish Patel has received lecture fees and honoraria from Health Conferences UK, Abbott and Pfizer outside of the submitted work. Gregorio Bevilacqua, Karen A Jameson, Jean Zhang, Ilse Bloom, Nicholas R Fuggle and Kate A Ward have no relevant interests to declare. --- Ethical approval --- Consent for publication Not applicable. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. Publisher's Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Purpose-To assess whether reactions to genetic explanations for disparities in lung cancer incidence among family members of African American patients with lung cancer are associated with willingness to participate in clinical genetics research. Methods-Data are reported for 67 self-identified African Americans ages 18 to 55 years who completed a telephone survey assessing reactions to explanations (i.e., genetics, toxin exposure, menthol cigarettes, race-related stress) for lung cancer disparities. The majority were female (70%) and current smokers (57%), and most were patients' biological relatives (70%). Results-Family members rated the four explanations similarly, each as believable, fair and not too worrisome. Participants also indicated a high level of willingness to participate in genetics research (M = 4.1 ± 1.0; scale 1-5). Endorsements of genetic explanations for disparities as believable and fair, and of toxin exposure as believable, were significantly associated with willingness to participate in genetics research. Conclusion-These results suggest that strategies to encourage African Americans' participation in genetics research would do well to inform potential participants of how their involvement might be used to better understand important environmental factors that affect health disparities.
INTRODUCTION Minority participation in genetics research is limited. 1,2 Efforts to increase minority participation in genetics research have included targeting high-risk populations through cancer registries, cultural tailoring of study materials, addressing issues of trust in the target population, and incorporating flexible intervention and evaluation methods. 3 However, these methods have at best produced only modest improvements in minority participation. 4,5 While active recruitment methods (e.g., tumor registries) have been shown to be more effective in increasing minority recruitment than passive accrual approaches (e.g., self-referral), 6,7 it is still unclear why minority participation in genetics research remains low. It has been widely suggested that African Americans may be apprehensive about participating in genetics research due to a legacy of research abuse in the United States (e.g., the Tuskegee syphilis study) and fears that this research will be used as a means to label groups as inferior and foster discrimination. 1,8-13 Indeed, research has indicated that African Americans who have negative perceptions of genetics research also report less interest in genetics research and testing. 14,15 However, these suggestions have not always been substantiated by empirical evidence. There is evidence suggesting that minority groups may be concerned about participating in research linking genes, race/ethnicity, and health outcomes. 8,16,17 For example, African Americans have indicated low levels of belief in messages surrounding medications deemed to be effective specifically for African Americans. 18 African Americans also have reported skepticism about race-based medication information, 19 fears of a racist conspiracy 18 and high levels of suspicion regarding the safety and effectiveness of race-based medications. 17,19,20 New discoveries linking genetic variation to racial differences in health outcomes, often termed "disparities," 21,22 may elicit similar levels of disbelief and skepticism, as well as emotional responses such as worry or anger, that may in turn exacerbate minority groups' negative responses to, and decrease their willingness to participate in, clinical genetics research. 16,18,19 Indeed, genetic factors are increasingly being examined in an effort to explain racial/ethnic health disparities in common health conditions such as lung cancer. 23-27 In the United States there are sizeable disparities in lung cancer incidence and mortality, with African Americans disproportionately affected. 28,29 While cigarette smoking is the leading preventable cause of lung cancer, 30 racial/ethnic disparities in lung cancer incidence cannot be explained by differences in smoking behavior alone. 31,32 When comparing the smoking patterns of African Americans with those of whites, we find that historically African Americans begin smoking at older ages and smoke fewer cigarettes per day than whites. 29 Yet African Americans are more likely to be diagnosed with and die from lung cancer than whites. 28 For this report we focus on explanations for disparities in lung cancer incidence where conjectures about different causal factors are well described and relatively straightforward to convey to lay audiences. 26,27,33 Ongoing epidemiologic research suggests that common polymorphisms in a number of genes may increase genetic susceptibility to the harms of environmental exposures such as cigarette smoking and increase risk for diseases like lung cancer. 34,35
For example, results of research conducted by Mechanic and colleagues 26 suggest that common genetic variations in TP53 may account for increased risk for lung cancer and worsened lung cancer prognosis among African Americans. It has also been reported that African Americans may be more likely than whites to carry a less-efficient DNA damage-induced G2-M checkpoint, which may be associated with an increased risk of lung cancer among African Americans. 27 Other common explanations for disparities in lung cancer incidence include racial differences in menthol cigarette use, exposure to toxins, and race-related stress. The majority of African Americans smoke menthol cigarettes (70-80%) as compared with white smokers (20-30%). 36 It has been suggested that menthol numbs the lungs, allowing more smoke to be inhaled with each puff. 37 Hence, African Americans may smoke fewer cigarettes but take in a greater amount of harmful chemicals. 38 Further, a greater proportion of African Americans live in poverty than whites (25% vs 9%, respectively), 39 and they may be more likely to live and work near environmental toxins and pollutants than whites. 40-42 Race-based discrimination has been linked to increased levels of stress and health conditions such as hypertension among African Americans. 43-45 Accordingly, increased rates of lung cancer among African Americans may also result from increased smoking behavior in response to prolonged exposure to race-related stress. 46 The conundrum for clinical genetics research is that adequate minority participation in this research is essential to fully understand the multifactorial influences on lung cancer disparities. Genetic research occurs in a socio-political context 23,47,48 that may influence how minority groups interpret this information and their willingness to participate in related research. Moreover, individuals exposed to such information may have preexisting health beliefs and attitudes that lead them to discredit the messages, particularly when they do not align with their personal worldviews. Clinical genetics research generally recruits family members of those affected by cancer. 49-51 A loved one's diagnosis with lung cancer may influence how an individual responds to explanations for lung cancer disparities. Smoking status may also influence these responses, such that smokers may perceive explanations for lung cancer disparities differently than nonsmokers. Thus, it is important to consider these factors in evaluating African Americans' responses to genetic explanations for lung cancer disparities. In this report we describe an observational study designed to assess reactions to different explanations (i.e., genetics, toxin exposure, menthol cigarettes, race-related stress) for disparities in lung cancer incidence among family members of African American patients with lung cancer. We also examined whether reactions to these explanations were associated with willingness to participate in clinical genetics research. --- MATERIALS AND METHODS --- Eligibility Data were collected via structured telephone surveys with self-identified African American patients with lung cancer who were receiving care from the Washington Cancer Institute at the Washington Hospital Center and their family members aged 18 to 55. Patients and family members who self-identified as African American or Black and were born in the U.S. were eligible for the study.
We excluded foreign-born Blacks because previous research has indicated that U.S.-born and foreign-born Blacks have different cultural beliefs and health habits, as well as different health outcomes, including cancer. 52,53 Family members included both biological and non-biological relatives of the patient, as well as friends considered family by the patient. In order to complete the family survey, participants had to meet the criteria for either a current smoker or a never smoker. A current smoker was defined as someone who had smoked at least 100 cigarettes in their lifetime and had smoked at least 7 in the past 7 days. A never smoker was defined as someone who had not smoked at least 100 cigarettes in their lifetime. --- Recruitment and study procedures Patients were approached by a recruiter during their clinic visits and asked about their willingness to be contacted to complete a telephone survey about their general well-being and to enumerate family members' smoking status. We elected to have only African American/Black recruiters employed by the Washington Cancer Institute who already had regular contact with the patient population. Patients who agreed to complete the survey provided written consent to have their personal health information forwarded to the National Human Genome Research Institute. A trained interviewer contacted patients within one week to complete the survey. For patients who were extremely ill, a proxy completed the survey. Only families of patients who provided current mailing addresses and telephone numbers for at least one current smoker were included in the study. These patients were asked for permission to contact all identified smokers and up to two never smokers. To maximize recruitment reach, family members were eligible regardless of their geographic distance from the patient. This necessitated using telephone interviews for data collection. Family members were mailed an introductory packet that included a letter describing the study purpose, information on how they were identified, and a toll-free telephone number to call to decline participation. Family members who did not call to decline participation were contacted to determine their willingness and eligibility to complete the survey. Call attempts to family members were conducted within a 21-day window, and family members were given options to complete the survey in the evenings and on weekends. Patients and family members who self-identified as current smokers were offered free print smoking cessation materials, and all participants who completed the patient or family survey received a $35 gift card. The study procedures were approved by the National Human Genome Research Institute and MedStar Research Institute-Georgetown University Oncology Institutional Review Boards. --- Family member survey The family member survey took 30-40 minutes to complete and was formatted to ask different questions of smokers and never smokers. All family member surveys were audio-taped and administered by trained African American/Black telephone interviewers. The primary purpose of the survey was to assess family members' reactions to four common explanations (i.e., genetics, toxin exposure, menthol cigarettes, race-related stress) for racial/ethnic disparities in lung cancer incidence. In order to control for order effects associated with using multiple explanations for disparities, family members were administered one of four versions of the survey, which varied in the order in which the explanations were presented; one plausible counterbalancing scheme is sketched below.
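As a concrete reading of the eligibility rules and the order counterbalancing just described, here is a minimal sketch. The function names are ours, and the rotation scheme is one plausible way to generate four explanation orders, not necessarily the one specified in the study protocol.

```python
# Smoking-status screen for the family survey (thresholds taken from the text).
def smoking_status(lifetime_cigs: int, past7_cigs: int) -> str:
    if lifetime_cigs >= 100 and past7_cigs >= 7:
        return "current smoker"   # survey-eligible
    if lifetime_cigs < 100:
        return "never smoker"     # survey-eligible
    return "neither"              # e.g., former smokers: not eligible for the family survey

EXPLANATIONS = ["genetics", "toxin exposure", "menthol cigarettes", "race-related stress"]

# Hypothetical counterbalancing: rotate the starting explanation across the
# four survey versions to vary presentation order.
def survey_version(version: int) -> list[str]:
    k = version % 4
    return EXPLANATIONS[k:] + EXPLANATIONS[:k]

print(smoking_status(150, 10))   # -> 'current smoker'
print(survey_version(2))         # order beginning with 'menthol cigarettes'
```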
--- Measures Patient demographics-Stage of lung cancer diagnosis, age, and gender were obtained from patient medical records. As part of the telephone survey, patients reported race, highest level of education, marital status, current smoking status and exposure to other household smoking. Family member demographics-Family members self-reported their race, age, smoking status, gender, highest level of education, marital status, employment status, exposure to household smoking, and smoking behaviors as part of the telephone survey. Biological relationship to the patient was reported by the patient during the patient survey. Open-ended explanations for lung cancer disparities-Prior to capturing reactions to the four targeted explanations, interviewers read a short narrative to family members describing lung cancer disparities. Participants were asked to provide their opinions, in an open-ended format, about why lung cancer disparities exist. Responses to the narrative were transcribed and evaluated to identify emerging themes, which were used to develop the codebook for coding participants' responses into close-ended categories. Inter-rater reliability was calculated for 20% of the coded material; all items that contained variability reached acceptable reliability (kappa ≥ 0.75). Code frequencies were examined to assess whether explanations other than the four targeted in the current report were commonly cited for disparities in lung cancer incidence. Reactions to explanations for disparities-Interviewers read a short narrative to family members that outlined the four targeted explanations for disparities in lung cancer incidence. A description of each narrative is provided in Table 1. After each narrative, family members were asked to rate the believability and fairness of the explanation and the level of worry about personal lung cancer risk it elicited. Believability assessed the plausibility or credibility of the explanation, with participants asked to rate: "On a scale from 1 to 7 where 1 is not at all and 7 is completely believable, how much do you believe the statement that racial differences in lung cancer may be due to blacks having more of the risk versions of some genes than whites?" The fairness item assessed the impartiality of the explanation. Participants were asked: "On a scale from 1 to 7 where 1 is not at all and 7 is very fair, how fair is the statement?" Worry was assessed by asking participants: "On a scale from 1 to 7 where 1 is not at all and 7 is very worried, how much does the statement make you worry about your own risk for lung cancer?" Willingness to participate in clinical genetics research-Willingness to participate in genetics research was assessed with one item. Participants were asked: "If you were invited to participate in a clinical research study, in which you had to provide a blood sample, to identify genetic risk factors for lung cancer, how likely is it that you would participate?" Response categories were: 1 = Definitely Not; 2 = Probably Not; 3 = Possibly; 4 = Probably; and 5 = Definitely Would. --- Statistical analyses We generated descriptive statistics stratified by smoking status to characterize the family members' demographics. Partial correlations were used to evaluate relationships between the reactions to explanations and the main outcome (i.e., willingness to participate in clinical genetics research) while adjusting for smoking status and level of education.
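To illustrate the partial-correlation analysis just described (reaction ratings vs. willingness, adjusting for smoking status and education), below is a minimal residualization sketch. The toy data and variable names are hypothetical, and this is one standard way to compute a partial correlation, not the authors' SAS code.

```python
import numpy as np

def partial_corr(x: np.ndarray, y: np.ndarray, covariates: np.ndarray) -> float:
    """Correlate x and y after regressing both on the covariates (plus intercept)."""
    Z = np.column_stack([np.ones(len(x)), covariates])
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]   # residualize x
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]   # residualize y
    return float(np.corrcoef(rx, ry)[0, 1])

rng = np.random.default_rng(0)
n = 67  # the number of family members who completed the survey

# Toy stand-ins for a 1-7 believability rating, a 1-5 willingness item,
# and the two covariates (smoking status, education level).
believability = rng.integers(1, 8, n).astype(float)
willingness = np.clip(np.round(believability / 2 + rng.normal(1.5, 1.0, n)), 1, 5)
smoker = rng.integers(0, 2, n).astype(float)
education = rng.integers(1, 5, n).astype(float)

covs = np.column_stack([smoker, education])
print(partial_corr(believability, willingness, covs))
```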
The intraclass correlations for the reactions to explanations and willingness to participate in clinical genetics research approximated zero, indicating that family members' responses were essentially independent. Thus, analyses did not control for family clustering. Also, we tested whether the means for reactions to each explanation were different across the four survey versions, and the results indicated no order effects on these variables. Effect sizes were calculated using Cohen's r statistic (small = 0.14, medium = 0.36, large = 0.51). 54 All statistical analyses were conducted using SAS (Version 9.2; SAS Institute, Cary, NC). A p value of 0.05 was established as the level of significance. --- RESULTS --- Patient demographics A total of 147 patients were approached for the study, of whom 93 self-identified African American patients provided written consent to be contacted for the patient survey. A total of 70 patients completed the telephone survey. The majority were female (64%) and had been diagnosed with early-stage lung cancer (64%). The mean age was 62 ± 11 years, 29% were married, 54% had a high school education or less, and 11% reported being current smokers at the time of the screening survey. Only 4% of patient surveys were completed by a proxy on behalf of the patient. The majority of patients who completed the survey identified at least one family member who smoked (64%), 59% were eligible to have all identified family members contacted about the study, and 41% had at least one family member who completed the study. --- Description of family referral Patients collectively identified 158 family members. They gave permission to contact 147 of these family members, and 139 family members were successfully referred; that is, the patient was able to provide complete contact information for the family member. Figure 1 shows the cascade of family referral. A total of 33 referred family members were ineligible to complete the family survey due to age (n = 25) or smoking status (n = 8). There were 106 family members deemed eligible to participate, and 67 family members completed the family survey (63%). The 67 survey completers came from 29 families, with between 1 and 6 (median 2) family members per family completing the survey. There were no significant differences in gender, smoking status, or biological relationship between eligible family members who completed the survey and those who did not. A description of eligible family members who completed the survey and those who did not is provided in Table 2. --- Family member characteristics The majority of survey completers were female (70%), 57% were current cigarette smokers, and 70% were biologically related to the patient. The mean age was 43 ± 9.3 years, 46% were married, and about a third of family members reported exposure to other household smokers. On average, smokers reported smoking 9.2 ± 5.4 cigarettes in a typical day. The majority smoked their first cigarette within 30 minutes of waking (74%) and reported smoking menthol cigarettes (89%). There was a significant association of smoking status with employment status and education level. Smokers were less likely to be employed full-time for pay and reported lower levels of education than never smokers. The sample characteristics of family members by smoking status are shown in Table 3. --- Explanations for disparities A total of 60 interviews were included in the analysis of the open-ended data; seven recordings were excluded due to recording errors.
Evaluation of the open-ended data indicated that the four targeted explanations were the most frequently mentioned explanations for disparities. The proportion of family members citing one of these explanations ranged from 15% to 32%. Other explanations for disparities were mentioned less frequently: for example, 12% of participants cited racial/ethnic differences in attention to self-care and 8% indicated differences in lifestyle factors as explanations for lung cancer disparities.

--- Reactions to explanations for disparities

Overall, family members endorsed explanations of racial differences in genetic risk, toxin exposure, menthol cigarette use, and race-related stress as believable and fair but not particularly worrisome (see Table 4). While toxin exposure was endorsed as the most believable explanation for lung cancer disparities (mean = 5.6 ± 1.6; scale: 1 = not at all and 7 = completely believable), reactions to genetics as an explanation were relatively favorable, with mean scores for believability and fairness of 5.2 ± 1.5 and 5.3 ± 1.6, respectively. Race-related stress was endorsed as the least believable, fair, and worrisome explanation (means = 5.1 ± 1.9, 5.0 ± 2.1, and 3.7 ± 2.2, respectively). Nonetheless, there were no significant differences in evaluations across the four explanations.

--- Willingness to participate in clinical genetics research

Family members indicated a high level of willingness to participate in clinical genetics research (mean = 4.1 ± 1.0; scale: 1 = definitely not and 5 = definitely would). Partial correlations between willingness to participate in clinical genetics research and reactions to the four explanations, controlling for smoking status and education level, are presented in Table 3. Endorsements of genetics as a believable and fair explanation were positively associated with willingness to participate in genetics research. Endorsements of toxin exposure as a believable explanation were also positively associated with willingness to participate. Worry about menthol use as an explanation tended to be positively associated with willingness to participate; however, this association was not statistically significant. None of the reactions to race-related stress as an explanation for racial differences in lung cancer was significantly associated with willingness to participate.

--- DISCUSSION

To our knowledge, this is the first empirical report to assess African Americans' responses to genetic explanations for racial/ethnic health disparities. Our sample of African American relatives of patients with lung cancer did not respond negatively to explanations invoking racial/ethnic differences in genetic variation to explain lung cancer disparities. Reactions to genetic explanations were similar to reactions to the other common explanations. Additionally, favorable endorsements of genetics as a believable and fair explanation were significantly associated with willingness to participate in genetics research. Previous research has indicated that minorities have negative perceptions about the use of race-based medicine. 8,[16][17][18][19][20] However, our results suggest that genetics as a basis for racial/ethnic health disparities need not limit minority participation in genetics research. The ever-increasing media coverage describing anticipated health benefits of genetic discovery may be increasing African Americans' receptivity to messages linking genes, race, and health.
55,56 However, it is important to note that the genetics explanation we used focused on racial/ethnic differences in susceptibility to the harmful effects of smoking. Messages that consider the role of gene-environment interactions in health disparities, rather than genetics-only messages, may be more acceptable to minority populations and may buffer against skepticism about the use of genetics research to foster discrimination. While none of the explanations for lung cancer disparities elicited a particularly strong emotional reaction in terms of high worry about personal lung cancer risk, participants rated genetic explanations as having the highest impact on lung cancer worry. This is not surprising, as 70% of family members who completed the survey were biologically related to the patient. Previous research has shown African American women with a family history of breast or ovarian cancer to be less likely to participate in BRCA1/2 genetic services compared to similar white women. 57 However, here the level of worry generated by genetics as an explanation for disparities did not significantly inhibit willingness to participate in genetics research. This suggests that reactions to genetic explanations and willingness to participate in genetic research may differ across health conditions. Limitations of the study must be considered in interpreting these results. The data reported are cross-sectional, and thus no causative inferences can be made. The sample was relatively small. Sixty-three percent of eligible family members completed the survey. This response rate is somewhat lower than other studies involving survey-only methodology. 15,58 Yet minority participation in research requiring more commitment, such as genetic testing and participation in cancer genetics registries, has been much lower. 2,6,59 In the current report, willingness to participate in research was based on a hypothetical scenario and may not accurately reflect how participants would respond to an actual invitation to research. Indeed, previous research has indicated that minorities express high levels of willingness to participate in genetics research, but actual participation remains limited when they are invited to take part. 2,6,59,60 Further, these families were dealing with the diagnosis of lung cancer; therefore, results may not be generalizable to African Americans without personal experiences of lung cancer, suggesting the need for replication in larger and more diverse samples of African Americans and in non-clinical settings. Our small sample did not enable us to control analyses for patient characteristics (e.g., stage of lung cancer, age, smoking history) in examining associations of reactions to explanations with willingness to participate in genetics research. Future studies with larger samples are needed to examine the role of such factors in family members' beliefs about lung cancer. Also, we had limited demographic information (i.e., gender, smoking status, relationship to patient) on family members who did not complete the survey. Factors such as education level are commonly found to be negatively associated with research participation. 5 As part of this research we addressed four common explanations for disparities in lung cancer incidence. The narratives and questions used to assess reactions were developed as part of the study protocol.
The investigators aimed to provide scientific evidence for each explanation while not biasing respondents by presenting any one explanation as more plausible than the others. However, variability in the certainty associated with each explanation may have influenced family members' responses. These items need to be evaluated for reliability and validity for future use. Further, while results of our open-ended survey indicated our four targeted explanations were the most commonly cited, there may be additional explanations and reactions that African Americans consider important in understanding lung cancer disparities, which might be associated with willingness to participate in genetics research. These might have been elicited had we used focus groups or face-to-face interviews rather than telephone surveys. In the future, it is important to examine the influences of other factors, such as smoking status, causal attributions for lung cancer, and experiences with discrimination, on beliefs about the role of these factors in disparities and willingness to participate in genetics research. Additional work is also needed to examine cognitive and emotional reactions to genetic explanations for disparities in health outcomes that do not have a well-established and highly stigmatizing behavioral risk factor such as cigarette smoking but for which disparities are increasingly being attributed to genetics (e.g., breast cancer, asthma). The current research is a first step in gaining information about how African American smokers and never smokers personally affected by lung cancer might respond to clinical genetic research related to health disparities in lung cancer. In order to fully understand the role of genetics and other environmental factors in lung cancer disparities, more minority participation is needed. The results of this report suggest that developing messages that inform participants how their involvement in genetics research might be used to better understand the impact of other environmental factors on lung cancer disparities may generate more willingness to participate in genetics research.

--- Table 1: Narratives used to describe each explanation

Genetics: Genes are passed down in families through DNA from one generation to the next. Recently it has been found that there are versions of genes that make it hard for some people to get rid of harmful chemicals in cigarette smoke. A person who has these risk versions of genes may be more likely to get lung cancer. Some of these gene versions may be more common among blacks than whites. Therefore, racial differences in lung cancer may be due to blacks having more of the risk versions of some genes than whites.

Toxin exposure: Being around harmful chemicals in the environment for a long period of time can increase the risk of a person getting lung cancer. Blacks are more likely to live and work in neighborhoods that have more harmful chemicals than whites. Therefore, racial differences in lung cancer may be due to blacks living and working in more harmful environments than whites.

Menthol cigarettes: It has been suggested by scientists that menthol numbs the lungs and makes it easier for a smoker to take in more cigarette smoke and harmful chemicals with each puff. Over time the extra amount of harmful chemicals taken into the body increases the chance of a smoker getting lung cancer. Blacks are more likely to smoke menthol cigarettes than whites.
Therefore, racial differences in lung cancer may be due to blacks smoking more menthol cigarettes than whites.

Race-related stress: Stress can make it harder for a person's body to fight off disease. People of all different races may experience racism or discrimination; however, blacks are more likely to deal with racism than whites. It may be the case that racial differences in lung cancer may be due to blacks having to deal with more race-related stress than whites.
Bailey, Nicholas and Winchester, Nik 2018. Framing social justice: the ties that bind a multinational occupational community. British Journal of Sociology 52(4), pp.
Introduction

Central to the conceptualisation and practice of social justice is the determination of who can legitimately raise claims (Fraser, 2007). Influential accounts largely frame social justice, and hence potential claimants, in terms of geographical bounds, most notably citizens of the nation state (Miller, 1999; Rawls, 1999; Walzer, 1983), or upscale to the level of the globe in varieties of cosmopolitanism (Brock, 2009; Held, 2010). The re-visioning of space and belonging inherent in the processes of globalisation and increased mobilities poses a challenge to this reliance on static, bounded, geographical framings (Benhabib, 2003). Through examination of a group of multinational workers caught at the intersection of competing appeals to nationality and commonality, we offer an alternative lens through which to engage with such complex cases. We argue that accounts of social justice and the determination of the relevant frame for claims-raising need to appeal to principles of inclusion based on structural relations, expressed in the application of principles, and notions of community, based in understandings of belonging. Extant accounts focus on the former and largely neglect the latter. We demonstrate how the inclusion of empirical data and thick sociological description (Geertz, 1973) can deliver a grounded and credible understanding adequate to advance complex real-life transnational contestations of social justice framings, in a way that is not available to philosophical reflection and the application of principles alone. Social injustice can be experienced along multiple dimensions including, for example, class, race, gender and ethnicity. We proceed by examining a group of multinational workers (seafarers) and their experience of discrimination on the basis of nationality with respect to inequity in terms and conditions of employment. In so doing, we also draw attention to the relevance of broader rights and entitlements (such as access to free or subsidised healthcare) in framing their understandings. The treatment of seafarers as either different national workers or similar multinational co-workers has significant implications for understanding social justice in complex transnational spaces. We demonstrate the benefit of inserting a sociological narrative, whilst reflecting on theorisations of social justice.

--- Social justice and globalisation

Social justice is standardly taken to be concerned with the equitable distribution of goods within a social group (Miller, 1999; Rawls, 1999). Different accounts seek to explicate what 'equitable distribution' amounts to and the nature of these goods. Taken for granted is the idea that social justice only applies within the context of a set of social relations and arrangements that delimit the extent of the group and those entitled to consideration under the aegis of justice. Traditionally the nation state has been taken to provide the bounds within which relations of social justice hold - on the grounds that states contain the institutional arrangements necessary to ensure equitable redistribution, including the securing of human rights, and encompass a community of individuals that, in some sense, share a common fate (Miller, 1999, 2009; O'Neill, 2000). In a more interconnected world, the increased porosity of economic, political and cultural boundaries generates problems for nation-based accounts, in that understandings of community are becoming increasingly complex and pluralised.
Hence the idea of a nationally bounded conception of social justice is argued to be unsustainable (Fraser, 2007; Goodin, 2008; Held, 2010). A standard response to what is perceived to be injustice of an intrinsically cross-border form is to upscale to a cosmopolitan perspective (Cabrera, 2005; Pojman, 2006). Other theorists appeal to both the national and global scales while privileging one or the other (Arneson, 2005; Beitz, 2005; Blake, 2001; Chan, 2004). What such approaches share is the treatment of the 'social' in 'social justice' as a matter of theoretical conceptualisation. The theorist seeks to determine criteria by which a boundary may be drawn. The community of entitlement is legislated through the application of principles; individuals are part of a community of entitlement or not. This focus on scale and legislation by principles fails to engage with the significance of the society or community supporting the particular form of life within which justice claims are raised, and has led to a form of prescriptive determination focused on a hierarchy of geographical boundaries; particularly, the polarisation of the national and global scales. Consequently such accounts are unable to address satisfactorily the complex social reality of, inter alia, migrant workers, diasporic groups and mobile workers operating across national borders (Benhabib, 2003). This, we suggest, is due to the lack of sociological understanding of the 'social' and the role of individuals and groups as interacting agents. Attention to the particular social arrangements, in a way that provides detailed understanding of the nature of the community as the source of justice claims, is lost, or marginalised, in the context of the application of principles derived from philosophical reflection alone. We address this deficit by providing a sociologically rich description of workers operating in a complex social space to advance understandings of social justice claims - in a manner that both engages with, and goes beyond, debates on scale. We contend that any account of social justice can be enriched by an empirically informed account of the collective belonging underpinning the community of entitlement under consideration. Rather than engage in an exclusively theoretical debate about the merits of this approach, we proceed by means of demonstration. In the following we detail the multiple layers of seafarers' social reality - understood as both the institutional structures that shape shipboard organisation and practice, and seafarer understandings of their place in the collective enterprise. The maritime sector provides an instance of a highly globalised context where workers aboard ship are employed in mixed-nationality crews to work aboard foreign-registered ships operating internationally. Moreover, these workers, employed through a global seafarer labour market, are employed on different terms and conditions dependent upon their nationality (Couper, 1999; ILO, 2004). Importantly for understanding social justice beyond national boundaries, workers from different countries with the same internationally recognised qualifications may be working side-by-side, doing the same tasks, but be on very different terms and conditions, including basic pay, contract duration and leave entitlement. Seafarers have claimed that such practices are discriminatory, i.e. differential treatment on the basis of nationality, and unjust.
When presented with these claims, managers in the shipping industry rehearse a number of counter-arguments, rejecting claims of unfair and unjust treatment. Referring to relative purchasing power and appropriate reference groups, they seek to ground justice claims in national belonging, with transnational co-presence rejected as a source of justice claims (Winchester and Bailey, 2012). The challenge is to provide a basis for privileging a given frame of reference over others, or to bring the different conceptions into dialogue.

--- Approaches to social justice

Debates on social justice have been framed by the understanding that the nation state is the apposite scale for theorising justice. Many accounts hold that the 'social' in 'social justice' does, and should, refer to social relations characterised as state-based (Miller, 1999, 2009; O'Neill, 2000; Walzer, 1983). Important for the discussion here is the manner in which claims are grounded. An influential approach is through appeal to, or generation of, universal principles that, by definition, are applicable irrespective of context (Rawls, 1971, 2011). By comparison, communitarian accounts acknowledge the importance of viewing individuals as socially and contextually embedded, but assume a pre-established community, typically the nation state, as grounding justice claims (Walzer, 1983). By contrast, others have drawn attention to the relevance of the 'real' world and the need to confront theory with social practice (Kymlicka, 1989). Indeed, Carens identifies three advantages afforded by engagement with real life contexts: First, it can clarify the meaning of abstract formulations. Secondly, it can provide access to normative insights that may be obscured by theoretical accounts that remain at the level of general principle. Thirdly, it can make us more conscious of blinkers that constrain our theoretical visions when they are informed only by what is familiar (Carens, 2000: 2). Miller's (1999) influential work draws on sociological and psychological data as touchstones in arguing that social justice needs to be framed at the national level - as the state represents the limit of affective ties sufficient to ground a shared conception of fairness and equity. Hence Miller explicitly rejects the idea that obligations to individuals beyond state borders are ones of social justice. Other writers, adopting a cosmopolitan approach, attempt to ground their theories in an account of human nature, for example in terms of some shared universal property such as a capacity for reason or common humanity (Brock, 2009; Held, 2010). Despite these different approaches, such accounts tend to draw, if at all, on limited empirical data and reflect the dominant competing understandings of the post-Westphalian international order (Boucher and Kelly, 1998). Feminists and postmodernists concerned with reconceptualising social justice by incorporating considerations of recognition as well as redistribution - while similarly acknowledging the importance of treating individuals as concrete and socially embedded - have adopted more radical stances. Moving away from reliance on geographic scale for framing claims-making, they have instead drawn the boundary outwith scale.
The 'all-affected' principle (Benhabib, 2004; Young, 2000) seeks to articulate the group entitled to consideration as all those 'affected' by the injustice, whereas the 'all-subjected' principle (Fraser, 2008: 65) draws the boundary according to individuals existing within a 'structure of governance' (ibid.: 65). While such approaches may serve to reinforce the links between social justice and democratic accountability as a practical basis for circumscribing a community of entitlement (i.e. demarcating those that can legitimately make justice claims), both principles are extremely vague and do little to close down contesting conceptions of who should be included (Näsström, 2011; Winchester and Bailey, 2012). In attempting to formulate general principles, such approaches treat the 'social' in 'social justice' as a matter of theoretical conceptualisation. The theorist seeks to determine criteria by which a boundary may be drawn. The community of entitlement is legislated through the application of principles; an individual is either part of a community of entitlement or not. This reduces the concept of community to structuralist determination. To use Fraser's terminology, one is subjected and so becomes a subject of justice. Missing from such accounts are details of those social arrangements and interactions that shape group and individual perceptions and produce community understandings of justice. Drawing on empirical data, we present a grounded account that warrants the framing of seafarers as a primary reference group for raising social justice claims. In particular, we show how elucidation of notions of belonging and community, involving both structural conditions and individual perceptions, deepens our understanding and conceptualisation of social injustice in this complex context. The account that emerges is not from some abstracted pre-social initial position but from the lived reality of individuals in social relations, living and working within a particular set of organisational arrangements. In so doing, we demonstrate the relevance of sociology to understanding social justice. Thus we go beyond Carens's point that data should be used to finesse or check theory, and claim that appeal to rich sociological data is necessary to develop a robust account. The paper serves to make a methodological contribution to debates on social justice, to address the broader issue of the 'framing' of social justice, and to contribute to a substantive discussion on social justice in the global maritime sector. In the next section we provide an overview of the data used to support our argument.

--- Context and approach

--- The Data

We draw on 10 years' experience of research in the maritime sector, previous empirical research and secondary analysis of an ESRC archived qualitative dataset (not collected by the authors) on the topic of 'transnational communities' 2 . The data was collected 'to examine the social dynamics of multinational crewing aboard merchant vessels' (Kahveci et al., 2001). While not a central focus of the original study, the dataset yields valuable insight into seafarer understandings of themes relevant to social justice. The dataset comprises transcripts of 194 in-depth semi-structured interviews and five focus groups undertaken with male seafarers 3 between the ages of 18 and 65, of all ranks and 24 nationalities reflecting the constitution of the seafaring workforce, working aboard 14 internationally trading ships of different types and sizes operating on a variety of routes 4 .
The interviews were conducted over a two-year period and undertaken, face-to-face, by seven researchers who spent between 12 days and three months aboard the individual ships. The data was imported into NVivo and 120 of the interviews were indexed and thematically analysed. Initial coding was undertaken by two final-year undergraduate students in accordance with a coding frame jointly developed by them and one of the authors as part of a summer studentship. The nodes developed were then further and systematically interrogated by the authors using analytic induction (Bloor, 1978). The number of interviews coded was determined by the fact that no new themes were emerging (Guest et al., 2006). Quotes from the interviews are presented with an identifier indicating nationality and rank. Next we provide detail of the organisational arrangements that structure life aboard ship. Drawing on the wider research literature, we then show how the maritime sector has been seen as a distinct community standing apart from those ashore. Having done so, we utilise our data to demonstrate the nature of the shipboard community and the extent to which seafarers conceive of each other as the 'same', while recognising difference.

--- Social justice, community and difference

--- The structuring of shipboard life

Work organisation aboard ship is standardised and hierarchical, with the captain in command. Work functions are divided into three primary groupings: deck, engineering and catering. Below the captain are senior and junior officers, and below them the ratings (manual workers). The physical layout of the ship reflects these hierarchies, with more senior personnel occupying higher decks and different messing and social spaces for officers and ratings. Notable divisions also exist amongst work groups with their different work spaces, e.g. navigation bridge and engine room. While some ships have fully mixed nationality crews, others employ different national groups in different ranks, i.e. officers of one nationality and ratings of another (ILO, 2004). Shipboard organisational arrangements are highly standardised throughout the industry. For each position aboard ship there are clearly defined roles and associated internationally recognised certification. Employers recruit individuals to fill positions on the basis of required certificates and in compliance with the ship's minimum crewing requirements established by the country whose flag it flies (ILO, 2004). The existence of an internationally recognised certification regime enables employers to recruit crew members globally, often at the lowest cost. There are, however, structural factors that mean some countries, like the Philippines, maintain a prominent position in the labour market. Terms and conditions of employment are primarily determined by rank and nationality (ILO, 2004). Onboard, seafarers work seven days a week, typically 12-16 hours per day, commonly for nine months at a time. Most will be doing shift work, but some will be on day work, e.g. 7am-7pm, and on call to assist with the mooring operations of the ship when entering or leaving port. Consequently, there may be no more than two or three individuals off duty at the same time. This picture hardly presents the idea of a 'community' - rather, it could be argued that we have outlined an image of differentiation and division, with little to unite fellow workers. Indeed, it has been claimed that crew members tend to form relatively shallow relationships (Fricke, 1973).
Our contention is that there are, nonetheless, key features of this group that make it appropriate to refer to them as a community of entitlement. Before presenting our data, the following section uses existing research literature to demonstrate that seafarers can be seen as a distinct and separate group with their own ways of being and doing.

--- A group apart

Research on seafaring and family life documents how seafarers see their life at home and onboard as two distinct life worlds (Thomas and Bailey, 2006; Sampson, 2013). One of Thomas and Bailey's (2006) informants summarised this generally held belief as follows: I always found it was very much a two life existence, wouldn't go so far as saying it was Dr. Jeckyl, Mr. Hyde exactly, but it's very different… There's no comparison between the two. (ibid.: 620) Other social scientists have utilised Goffman's concept of the total institution (1961) to describe life aboard ship (Encandela, 1991; Fricke, 1973). Not only are these workplaces physically and socially separated from life ashore, but the social organisation of life and work onboard is also embedded in a distinctive maritime culture. While work practices and organisation aboard ship, and the system of certification, have evolved over recent decades to address technological developments and global practices, shipboard life is nonetheless deeply entrenched in maritime tradition (Gould, 2010; Knudsen, 2005). Organisational structures, the organisation of work and training structures coalesce to produce a distinctive 'form of life' with its own norms, values and ways of acting that are distinct and separate from life ashore. Moreover, entry into this closed community is by way of gatekeepers and recognised certification. The cooperative form, its institutions and structure of governance offer a nascent account of collective belonging grounded in a sector of the world economy. This sector traverses geographical boundaries through the inherent mobilities of both work site and workers. And so, we argue that there are prima facie grounds for viewing seafarers as a distinct community and a suitable framing for inter-group comparison. Theories of social justice which give credence to social context terminate the discussion at this point. If the account of structure points towards a transnational community then social justice claims should be located at this level; if not, then other scales become determining. Such an approach absents the idea of community from the perspective of the agent, i.e. how the purported members of a community see each other as common fellows. This is particularly important for our discussion for two reasons. Firstly, the structure of the maritime industry creates a series of entrances and exits for each individual - commonly each seafarer alights on a particular vessel for a defined period of time, leaving on fulfilment of a contract. This contrasts with the idea of community in terms of long-term (indeed, non-time-bounded) cooperation by its members (Rawls, 2001). Secondly, occupation could be seen as a thin form of belonging in contrast to the thicker accounts based on shared nationality and lasting co-presence. In the following section we utilise our data to explore how the occupation generates a meaningful sense of community through a narrative of demonstrable competence in pursuit of a shared goal, inflected with collective confrontation with danger.
--- Communities of belonging and their grounding

From a structural perspective, life at sea can be seen as distinct from life ashore - making it cogent to argue that references made between workers should be between seafarers rather than with any other group. However, by contrast, shipboard work organisation leads to differentiation and appears to work against a strong account of shipboard community. Rather than undermining the notion of seafarers as a community, we argue that it is this very individualism, forged by the organisational structure, that allows for the emergence of a form of occupational identity and shared identification. Data reveal that seafarers working in multinational crews - with minimal numbers, segregated by hierarchy, work department and often nationality and language - ground their shipboard relationships on the perception of an individual's competence and contribution to work, independently of an assessment of their personality, background or nationality. This is not to deny that individuals may form friendships or animosities on the basis of such characteristics. Rather, relationships onboard and acceptance into the professional community are reported to be grounded in an individual's contribution to the common enterprise of operating the ship and, importantly, amplified by the emergence of multinational crewing. Further, given that a seafarer's presence on the ship is founded on the understanding of an equitable return for their skills and labour as the basis of participating in the employment contract, we argue that issues of fairness in terms of pay and conditions appropriately relate to seafarers as a reference group.

--- Gaining acceptance

With clearly defined roles and skillsets, it could be expected that the organisational structures regulate behaviour to produce work teams that achieve effective outcomes. However, where organisation is weak or fails, it may take an act of trust to achieve an outcome based on collaboration (Barbalet, 2009). In the maritime industry there is a widely expressed view that the actual competencies underlying these global standards of certification are in fact variable (Bloor et al., 2014). Additionally, there have been reported cases of seafarers working with fraudulently obtained certification (Chapman, 1992; Obando-Rojas et al., 2004). This is reflected in reservations amongst seafarers as to the confidence they express in certain of their colleagues on the basis of certification alone. The UK is a lot more stringent on who they let in charge of ships, and let's face it…in developing countries they can buy a ticket [certificate of competence]…and a lot of people do. [British, Junior Engineer] Furthermore, seafarers routinely join ships at short notice with little or no knowledge of the company, the ship or the other crew members onboard, and yet are expected to work with, and place trust in, those others to carry out their tasks safely and competently. In the context of multinational crews that may not share a common national culture or language this raises further challenges. Morita and Burns' (2013) ethnographic study identifies a number of features that are pertinent to developing trust, including validated credentials and access to positive information about skills, experience, and safety orientation. English is the lingua franca at sea, but levels of fluency vary considerably. As such, seafarers report that it is less easy to gain insight into an individual's level of competency verbally (e.g.
by talking through a job); rather, one has to see skills in action. Our data show that trust or confidence in colleagues is built through demonstrations of competence and/or willingness to do a fair share of the work. As such, trust - and hence acceptance within the community - has to be more clearly earned in multinational crews: There are benefits when you work with a mix. The only thing is you have to show them your knowledge and that you can do your job. [Filipino, Electrician] I think as you get more experience with dealing with foreign nationals you tend to be a little bit more maybe fastidious on the checking that you do when they're actually attending to the task…you've got to ensure yourself that these people are capable of doing what it is you're asking them to do... [British, Chief Officer] Trust is achieved by demonstration of ability and willingness to work - not only to slot into the shipboard structure but, as noted above, to perform tasks in a way that is seen as competent and recognised as in accordance with good maritime practice. At sea it doesn't really make a lot of difference [which nationalities you work with] as long as who you work with does his share, it doesn't really matter what nationality he is…it is just a case of getting the job done. [Tanzanian, Chief Engineer] For me I don't mind about nationality, just as long as you do your job. [Filipino, Third Engineer] Acceptance within the community is not simply based on the presentation of an internationally recognised certificate, but on the demonstration of competence that validates the claims within that certificate. Belonging inheres within practice.

--- A dangerous occupation

The relationship between competence and belonging is deepened through a recognition of common features of the social context. Seafaring is a dangerous occupation with high levels of occupational injury and mortality (Walters and Bailey, 2013), and is perceived to be so by those onboard ship. Nobody knows what will happen next, especially because of the danger of the sea and you don't know if everything is going to be alright in the morning when you wake up. If you're a seaman then you're stepping one foot into the grave already like that. [Filipino, Able Seaman] All crews they are like us…because…we are exposed to danger [Sierra Leonean, Second Cook] And, with minimal crewing levels, each person is crucial to the successful operation of the vessel; there is no spare capacity. A crew thus comprises a tightly interconnected group dependent upon each other to perform their roles to ensure the effective running of the ship, as noted by one participant: [A]t the end of the day we're all here to do a job and that to get the ship from point A to point B. [British, Third Engineer] Competence deepens belonging, beyond the successful completion of organisational aims, to the self-preservation of a group entwined in a common fate. In this manner seafarers aboard ship can be viewed as a distinctive community, both physically and socially separated from land and their homes, where the defining features of membership are competence and work ethic as defined by the established norms, values and practices of this particular community. Whilst organisational structures define formal criteria of work and entrance into the community, practice and competence establish a deeper sense of belonging; a belonging that transcends a given ship and reflects an industry-wide identification.
As seafarers frequently move between ships and companies, they reflect a sense of shared belonging deriving from a common industry-wide organisational structure and embedded maritime culture. As such, it is our contention that the community of seafarers deployed via the global labour market represents a relevant site for raising social justice claims in relation to terms and conditions of employment.

--- The dialectic of same and other

In the preceding section we emphasised commonality in terms of the structural context of work and the interactional bases of belonging. This approach runs counter to the emphasis on geographical and, in particular, national scale within extant theories of global social justice, replacing it with one of transnational belonging that crosses boundaries. These ties define the community of fate and align social justice claims along a transversal axis. However, seafarers have multiple bases of belonging including, importantly, nationality. In the next section we show how perceptions of otherness interact with perceptions of sameness and lead to the raising of claims of injustice.

--- Identity and the perception of 'others'

Crews aboard the majority of the world's ocean-going ships are now composed of individuals of different nationalities. This multi-nationalism is viewed largely positively by the seafarers interviewed, with many commenting on the benefits of working with others of a different nationality. To the extent that cultural differences were remarked upon, it was frequently in terms of food eaten, religions followed, and sociability. Typically, however, it was commented that a seafarer's nationality was unimportant in respect of work practice, as all seafarers were viewed as being onboard to do a job and earn a living, and in that respect they were described as 'all the same': Because, we are all humans, see, whether you are Philippine, Greeks, it is, when we are on the ship, we are all seamen…We are one family. [Sierra Leonean, Chief Cook] There are no difference at all. First of all, we are doing the same job, all of us, we are doing the same job… [Croatian, First Engineer] The recognition of difference appears coterminous with its opposite, 'all being the same'. Seafarers recognise that others are different, and different in multiple ways. But what enables them to say they are all the 'same' is the recognition of commonality that transcends rank, work department or nationality in this situated but generalised context - namely their contribution to the joint endeavour of safely and efficiently operating their ship in the international merchant fleet navigating the world's oceans.

--- A sense of injustice

This dialectic of sameness and otherness is emphasised when seafarers reflect on fairness across their community. The narrative of interpersonal and intra-community comparison is shot through with national contextual comparators, viewed from the common base of occupational community. When a seafarer's participation in the shipboard community is judged on the basis of their perceived competence and contribution to work, it is hardly surprising that interpersonal comparisons tend to focus on issues of pay. In responding to noted inequalities, some take the seafarer community as the sole point of reference: The thing what's not great though, especially as we're with these guys, is the way they're paid. All right what you get paid is good for their country, but I'd feel ashamed if one of them turned around to me and said 'how much do you get paid?'
I'd feel ashamed to tell them, 'cos I think they're - they're paid a lot less than what they're worth. [British, Junior Engineer] We should be paid for the job not for the nationalities…I don't understand this because you live there [referring to home country], but you don't have to live there… I think this should be paid for the job; for your skills, for your knowledge, for your experience. Not for the nationality, it shouldn't be taken into account. [Polish, Second Officer] Others refer to the national context as important in understanding equity. It is important to recognise that even in these cases the occupational community grounds the claim, and differential treatment is only argued to be fair when taking into account relative purchasing power in the respective countries. I think it makes you see how lucky you are sometimes. The Polish officers think they're quite well off and everything. I think they're on our sort of wages in US dollars; something like that, so it's probably half or a third of us for doing the same job, while back home they're really well off. [British, Fourth Engineer] In framing claims in this manner, attention is drawn to the relevance of otherness and sameness in the redemption of justice claims. Fairness becomes not solely an internal comparison within the community but also a recognition of otherness in lives, making visible that seafarers live two lives, one aboard ship and one at home. However, it is appeal to the shipboard community that serves to ground the claim and assert its legitimacy; a legitimacy grounded in the structural arrangements of the seafarer labour market, industry-wide organisational arrangements and seafarer understandings. By contrast, reference to the national offers nuance in respect of the substance of the claim. Claims of inequity go beyond pay, as terms and conditions of employment vary even more widely. The most visible sign of inequality between workers is contract length and time spent onboard. Individuals from Western Europe typically spend 3-4 months aboard ship followed by 2-4 months paid leave. By comparison, seafarers from developing countries commonly work 9-12 months and are entitled to a month's paid leave. They have the extended length of trip… they are going to look at it as 10 months I've got to do, 4 months I've got to do. I don't feel it's fair that they should have to do that. [British, Chief Engineer] As well as formal contractual arrangements, there are also broader differences in rights and entitlements depending upon seafarers' home countries. For instance, those from richer nations often have the right to free or subsidised nationally based healthcare, whereas seafarers from developing countries are more likely to have to pay for their own: When I go home from the company I don't have any medical facilities but those who are staying in UK have medical facilities… [Pakistani, Second Engineer] The ability to exercise other rights, such as to join a trade union, claim compensation for mistreatment or repatriation in case of abandonment, also varies significantly by nationality (Bailey, 2003; ILO, 2004). Reference to such differences highlights that seafarers have multiple identifications, in terms of both sameness and difference, that ground perceptions of inequity. From the data, we have identified several perceived inequities, with the national read through the transversal community in a way that demonstrates how the experience of these differences contributes to further the seafarers' perception of injustice.
That is, as workers engaged in a common enterprise, reliant upon each other for their safety, well-being and security, it is made visible on a daily basis that some can exercise rights and access goods and services not available to others, in a way that may cause hardship, anger, shame and/or embarrassment. Moreover, the experience of work, especially in terms of duration and reprieve, as an embodied reality, is impressed upon an individual's consciousness in terms of energy and vitality expended, and marked by the changing sequence of 'the others', one's colleagues and team members, due to the existence of inequality predicated on nationality.

--- Discussion

Influential accounts of social justice have attempted to restrict the concept of social justice to the national scale (Miller, 1999; Walzer, 1983). Through the examination of a concrete example and presentation of empirical data, we have offered an alternative analytic frame. Seafarers working in multinational crews, traversing the world's oceans, express a sense of belonging with the community of international seafarers, with perceptions of demonstrable competence, emergent trust and common fate forming the narrative of their existence and self-identification as 'seafarers'. This is an identification grounded in everyday practices onboard particular ships, but one that is experienced as an emphatic sense of belonging to a community of seafarers rather than a fragmented series of short-term contractual events. For the seafarer, the community of belonging is neither the globe nor the nation state, but neither is it solely the particular organisation. Whilst employment contracts appear insufficient to ground social justice claims, we argue that belonging to a transversal community is wider and richer in content and can make reasonable claim to operate as a legitimate frame for raising claims of social injustice. Likewise, the quotidian multinational organisation of labour within the maritime sector and the perceptions of seafarers, as presented, argue against the national scale as determining. Our account similarly takes issue with cosmopolitan approaches which assume that the only response to globalising conditions consists in re-siting the basis of inclusion in a metanarrative of global belonging, such as shared capacity for reason or common humanity. While sociologists and cultural theorists have approached the issue of mobility and globalisation by developing empirically informed, nuanced accounts of sameness and difference and cosmopolitan ways of being and understanding, others have stressed the need to link such findings via theory to broader social, political and economic issues arising from the processes of globalisation, to bring about change (Alexander et al., 2014; Hall, 2000). Our account seeks to do this by providing an empirically informed approach to social justice theory. The community of belonging of multinational seafarers is based in shared practices and lived commonality. The ties may be transversal and the groups may form and dissipate regularly, but the narrative of belonging to the community of seafarers remains present. Indeed, it could be argued that the short-term contracts and continual churn of crews across organisations and vessels serve to emphasise this belonging to a community at the level of professional practice, rather than to a particular employing organisation.
By exploring the lived reality of seafarers via the collection and analysis of empirical data, other, and arguably richer, forms of belonging become apparent that question the counter-posing of the national and the global prevalent in theoretical debates. We have more sympathy with accounts that seek to obviate scale. Indeed, our discussion shares some features with arguments concerning issues of 'subjection' and 'being affected' (Fraser, 2008; Young, 2000). A criticism of these approaches is that those within the structures of governance are somewhat undifferentiated and that prior ties do not appear to make a difference; justice claims are operationalised through a subjection by a structure which trumps other claims of any form. By contrast, our approach seeks to show both the formation of meaningful ties through shared practice that operate across boundaries and, in this case, the continuing relevance of national ties as points of interpersonal comparison. The seafarers are in effect holding both sets of ties as elements of social justice claims. However, these claims are not of the same order. The community of belonging raises and grounds justice claims; prior national ties have the potential to affect the elucidation of the substantive accounts of fairness. To restate: the claim derives from, and is grounded in, the occupational community; the exposition of the claim (i.e. what is fair in a particular instance) reflects the primacy of the community and, in this case, the continuing significance of the national. As one seafarer argued: To see the salary and then compare it to the developed world, and know that we do - that we [Ghanaians] work with them. Not to be on the same salary scale, you know, but the difference should be a little closer. [Ghanaian, Third Officer] In a previous paper we discussed managers' responses to claims of injustice made by seafarers (Winchester and Bailey, 2012). Many of the responses took the form of either rejecting non-national scales of reference or suggesting that the national scale trumps any other apparently legitimate scale. In our view, the claims deriving from shared practice appear too strong simply to be dismissed in this manner. In a normative vein, the claims for justice, and the specific invocation of equity in the first instance, lie across boundaries based in transversal forms of belonging grounded in practice. National scales introduce claims for contextual relevance in respect of the nature of equity across the group. In these claims the practices of the group and its community of belonging intermingle with the other, land-based 'world' of the seafarers. In this account, communities of entitlement are not always closed and fixed, but can be more or less permeable. Hence, within substantive accounts of social justice, where to draw the boundary of inclusion, and what factors to admit for consideration, is not resolved by prior determination or theoretical reflection but is a dialogical accomplishment; one that can be aided by sociological elucidation of the grounding relations. This, as we have shown, is very much implicit within the understandings of the seafarer community and its dialectic of same and other and, importantly, made explicit by the social researcher. However, the starting point is not one of differentiated others, but of the equal standing of members of the community based in shared practices. In this way transversal equity (i.e.
that of the community as the grounding of social justice) is prior to the narrative of difference based on other sites of belonging. The latter can only contextualise and render nuance to claims of substantive fairness but cannot ground the claim. The significance of our data is that it gives substance to the claim that social identity, and entry to or exclusion from the community of seafarers, is grounded in perceptions of competence and commitment under conditions of collective risk - perceptions shaped by and formed within a particular set of organisational structures that are common and institutionalised across this industrial sector. Commensurate with this is the perception of seafarers as individuals with skills, applying effort to do a job. A natural corollary of this is that social justice is grounded and framed at the level of this community and should relate to the individual, their skills and the work they do, not their nationality. It is only by introducing sociological data into these debates that these issues have been drawn out; not only in respect of a detailed analysis of the socio-structural context in which social justice claims play out, but also by enunciating seafarer perceptions and feelings of inequity. In this we seek to go beyond claims that sociology should leave ethical theory to philosophy (Abend, 2008) and contend that the sociological and the ethical should be seen as mutually constitutive.

--- Conclusion

In a globalised world, claims of social injustice often transcend national boundaries. In such cases a key area of contestation is the determination of the appropriate and legitimate frame, i.e. who is entitled to raise claims and on what basis. We have examined the case of seafarers in the global maritime industry, where claims for equitable treatment are based on perceived commonality but rebutted on the grounds of national belonging. We have presented data from seafarers demonstrating that they self-identify as a community of entitlement, where belonging is based in perceptions of competence and willingness to contribute to the joint endeavour, in a way consistent with established sector-wide maritime tradition and practice and, importantly, shaped by the sector-wide organisational structures and arrangements. Having argued that seafarers, as a community, warrant the status of legitimate claimants to social justice, a fuller theory of social justice requires the elucidation of the mechanisms by which claims could be addressed. Whilst beyond the scope of this paper, we point to several elements present in the sector that are necessary to developing a fuller account. First, the maritime sector is subject primarily to international regulation as developed by the International Maritime Organisation (IMO) and International Labour Organisation (ILO), with labour organised globally by the International Transport Workers' Federation (ITF). Second, the ILO has developed an internationally accepted minimum wage for seafarers, while the ITF has leveraged commonality to set minima in respect of terms and conditions and secured some basic rights. Third, there is the recent development and ratification of the ILO's Maritime Labour Convention, which attempts to delineate an extensive 'bill of rights' for seafarers. In developing our argument we have focused on seafarers; however, the need to identify who can legitimately raise claims to social justice applies equally to other groups - we could, for instance, have examined the case of other transnational workers, e.g.
air crew, construction workers in the Middle East, or migrant domestic workers in Singapore or Hong Kong. Indeed, we have argued that any account of social justice would benefit from a detailed examination of the nature of the community that underpins it. The thesis advanced is that social justice relates to a community whatever its basis, be it in terms of geographical boundaries underpinned by legal and constitutional arrangements and notions of citizenship, or through some more amorphous but socially grounded set of social relations. And, in giving an account of social justice, appeal needs to be made to the empirical elucidation of those relations that define the community. Hence our contention is that, in the matter of social justice, theorists should not declaim principles without detailed understanding of the social world and a recognition of the relations that bind those for whom they speak. To this end, sociological methods and data are an essential element.

--- Author Biographies

Nicholas Bailey is Lecturer in the School of Social Sciences, Cardiff University. His research interests and publications have focused on the maritime sector and seafaring as a lens to explore issues relating to work and family life, risk, health and safety, and equality and rights, in the context of global social and economic processes.
This study looks into the complex interactions that exist between Indonesian Micro, Small, and Medium-Sized Enterprises (MSMEs) and training, hiring, employee engagement, social entrepreneurship performance, sustainable business practices, and the social impact on local communities. The study employs Structural Equation Modeling (SEM-PLS) through a quantitative analysis encompassing 487 MSMEs to explore a broad range of hypotheses. The findings highlight the paradoxical relationship that exists between sustainability and training, underscoring the necessity for HR procedures to be approached with delicacy. High employee engagement and successful hiring emerge as key factors that influence the performance of social entrepreneurship and sustainable business practices. Moreover, the research highlights the positive effects of MSMEs involved in social entrepreneurship on sustainable practices and the larger community, underscoring the connection between sustainability and social entrepreneurship. While the practical consequences direct strategic HR planning and the reform of training programs, the theoretical implications cover the advancement of Sustainable Human Resource Management (SHRM) and the enrichment of social entrepreneurship theory. The research offers significant perspectives for MSMEs aiming to harmonize HR procedures with sustainability goals and promote constructive societal influence.
INTRODUCTION The convergence of social entrepreneurship, sustainable HR practices, and the micro, small, and medium-sized enterprise (MSME) sector presents an intriguing terrain for investigation in today's fast-paced corporate world. MSMEs are essential to the Indonesian economy because they play a significant role in reducing poverty, fostering job creation, and driving economic progress in both developed and developing nations (Kadarisman, 2019;D. Sari et al., 2023). While MSMEs have a large economic impact, it is becoming more widely acknowledged that their influence goes beyond financial indicators (Koeswahyono et al., 2022;Kurniawan et al., 2023). MSMEs are crucial for environmental and social responsibility. Attention centers on the intricate interactions that occur between sustainable human resource practices in Indonesian MSMEs and their effects on social entrepreneurship performance, business sustainability, and community well-being (Iskandar & Kaltum, 2022b; N. T. P. Sari & Kusumawati, 2022). Due to its sociocultural variety, Indonesia presents particular potential and problems in the MSME sector. MSMEs can be major forces in economic growth, but it is critical that their operations reflect the values of social responsibility and sustainability (Glänzel & Scheuerle, 2016;Wardhani et al., 2023). MSME businesses are seen as significant contributors to societal well-being in addition to being economic engines (Castellas et al., 2018;Eikenberry & Kluver, 2004;Mia et al., 2022). The Indonesian government has been pushing for sustainable development programs in recent years, emphasizing companies that share its social and environmental objectives (Kadarisman, 2019; N. T. P. Sari & Kusumawati, 2022;Tria Wahyuningtihas et al., 2021). The MSME sector, as the backbone of the economy, plays a unique role in determining the sustainable destiny of the country (Febrian & Maulina, 2018). This study investigates how social entrepreneurship success, business sustainability, and wider social effects on local communities are influenced by employee recruitment, training, and engagement. Employee training plays a significant role in raising the caliber of human resources, which benefits organizational performance (K. Nkundabanyanga et al., 2014). Training initiatives can boost staff members' energy and inventiveness, improving their capacity for original thought and self-renewing behavior (Tabasum & Shaikh, 2022). This is especially crucial for social entrepreneurs, as they frequently need to come up with creative solutions to pressing societal issues (Rahmi et al., 2022; N. T. P. Sari & Kusumawati, 2022). The sustainability of a company can be significantly influenced by recruitment, particularly green recruitment (Kumar et al., 2022). "Green recruitment" refers to hiring people who are committed to sustainable practices and environmentally conscious. This can improve the company's standing, draw in like-minded clients, and support the long-term viability of the enterprise (Mathis & Jackson, 2016;Ozkazanc-Pan & Clark Muntean, 2018). Employee engagement and performance are directly correlated, which aids in the organization's achievement of its objectives (Abolnasser et al., 2023;Ahmed et al., 2020). The strong emotional bond that staff members have with their company motivates them to work harder at their jobs (Tabasum & Shaikh, 2022).
Employee retention, a critical component of organizational performance, is boosted by engaged workers' propensity to stay with the company (Alhmoud & Rjoub, 2019;Awolusi & Jayakody, 2021). Organizational sustainability is greatly influenced by green human resource management (GHRM) techniques, such as green HR planning, green job design and analysis, green recruiting and selection, green employee relations, and green training methods (Akhtar et al., 2023;Bahuguna et al., 2023;Gharbi et al., 2022;Yong et al., 2020). These practices support the sustainability of the business as a whole, preserve organizational capabilities, boost profitability, and enhance employee and customer satisfaction (Kumar et al., 2022). Government support, community/local population engagement, employee engagement, and organizational contribution are some of the factors that influence social entrepreneurship performance (Hidzir et al., 2021). These factors are interrelated and form a model that can be used to understand and improve social entrepreneurship performance. Social entrepreneurship can greatly impact local communities. It can support regional economic growth, generate employment, and offer services (Reyes & Campo, 2020). Furthermore, the social impact of social entrepreneurship efforts can be amplified by enlisting the participation of local residents (Iskandar et al., 2021;Iskandar & Kaltum, 2022a). Given the complexity of today's business environment, the necessity for businesses to include sustainable practices in their core values is becoming increasingly obvious. Human resource (HR) practices play a significant role in this scenario (Gulzar, 2017;Mathis & Jackson, 2016). Hiring tactics, training plans, and employee engagement projects are now viewed as strategic tools that can advance social impact and sustainability rather than as merely functional requirements (Amah & Oyetuunde, 2020;Antony, 2018;Malik et al., 2022). Nonetheless, there is still much to learn about the complex interaction dynamics in the unique setting of Indonesian MSMEs. A major problem is that the intricate relationship between sustainable human resource practices and the socioeconomic impact of MSMEs in Indonesia is not well understood (Kourilsky & Esfandiari, 1997;Lin-Lian et al., 2022;Nafukho & Helen Muyia, 2010). One facet of this issue is the disregard for human resource procedures in MSMEs, which more frequently concentrate on daily operational concerns (Hermawati, 2020;Nurani et al., 2020). The fundamental problem is the underappreciation of human resources' strategic importance as an internal organizational driver with a larger social influence. MSMEs face particular issues because they operate in different industries; thus, a one-size-fits-all approach to sustainable HR practices might not work. This is compounded by the absence of tailored solutions that consider the characteristics of MSMEs in Indonesia. MSMEs have substantial potential to positively impact society; the challenge is to realize it. Further research is needed on how human resource practices might be adjusted to achieve the best possible results for businesses and the communities that MSMEs serve (Omar, 2020;Tabatabaei et al., 2017;Zhao & Huang, 2022). The core issue is knowledge gaps that make it difficult to develop appropriate practices, policies, and interventions to improve MSMEs' social impact and sustainability. It is imperative that this gap be closed if Indonesia's MSME sector is to grow overall.
The significance of Micro, Small, and Medium-Sized Enterprises (MSMEs) in Indonesia's socio-economic structure makes this research imperative. While these businesses undoubtedly contribute financially, it is critical that they adopt sustainable and socially conscious business methods (Febrian & Maulina, 2018). Multiple elements highlight this. As the foundation of the Indonesian economy, MSMEs significantly boost employment and GDP. However, research into sustainable methods is urgently needed because these enterprises are susceptible to both environmental changes and economic shocks. For these businesses to be resilient and sustainable, such principles must be embedded in their operations. MSMEs have the ability to be effective change agents since they are significant members of the local community. Employing sustainable Human Resource (HR) practices can augment their influence on social entrepreneurship, thereby aiding in the development of communities (Campos, 2021). The need to maximize this potential for societal well-being gives rise to urgency. Aligning MSMEs with sustainable HR practices is not only a local requirement but also a global one in an era where global sustainability is the primary focus. MSMEs must adapt to remain relevant in the global market as investors and customers increasingly prioritize socially conscious businesses. The paucity of studies specifically examining the impact of HR practices on social entrepreneurship in Indonesian MSMEs underscores the urgency of this matter. It is essential to comprehend and modify HR strategies in light of the changing socioeconomic environment in order to address current issues. Although sustainable HR practices are important, there is a clear knowledge gap about their particular consequences for MSMEs in Indonesia. The difficulty lies in understanding how HR procedures, especially those pertaining to hiring, training, and employee engagement, can be used to boost community development and social entrepreneurship in addition to enhancing corporate performance. By investigating the connection between these HR practices and their effects on the social well-being of the larger local community, this study aims to close this gap. Based on the results in Table 1, the age distribution of the respondents reveals a notable presence of people in their mid-career phase, with 36.06% of the sample falling into the 26-35 age group. This age distribution ensures a thorough investigation of viewpoints on sustainable HR practices in Indonesian MSMEs. The majority of respondents (36.17%) have 5-10 years of experience, suggesting a cohort of professionals with a moderate degree of experience. This variation in experience levels helps to provide a more nuanced view of how sustainable HR practices are implemented at various phases of a career. The large number of respondents (38.20%) with master's degrees is noteworthy. This educated population offers a good starting point for discussions about the integration of sustainable HR practices and may reflect the importance that the Indonesian MSME sector places on knowledge and skills. The preponderance of micro (36.50%) and small businesses (37.77%) shows the importance of investigating sustainable HR practices in the setting of smaller organizations. But the inclusion of large businesses (10.89%) and medium-sized businesses (14.78%) ensures a comprehensive analysis of these practices across a range of company sizes. --- b.
Data Analysis Partial least squares structural equation modeling (PLS-SEM) was applied in SmartPLS version 4 to examine the study data. Based on the previously developed theoretical framework, we utilized the Confirmatory Composite Analysis (CCA) method to support this research, thereby underpinning the robustness of the model architecture and the latent variable indicators. The PLS-SEM methodology evaluates the outer and inner models through two stages of analysis. The construct validity and coherence of the survey instrument indicators are evaluated using a variety of statistical methods. Two different metrics were employed to evaluate the instruments' validity: convergent and discriminant validity. Instrument reliability is measured using metrics such as Composite Reliability (CR) and Cronbach's alpha (CA). Latent variables are considered reliable in accordance with the CCA approach if both the CR and CA values exceed 0.70. Convergent validity is assessed under the CCA method using the Average Variance Extracted (AVE) measure. According to the criteria of Hair et al. (2019), convergent validity is deemed adequate when the AVE exceeds 0.50. Before the questionnaire was finalized, a preliminary version was distributed to PhD holders in entrepreneurship who had published high-caliber papers in Scopus-indexed journals. Following that, thirty ad hoc sample responses to the questionnaire items were collected. In this study, three independent factors and three dependent variables are present. A list of the criteria for validity and reliability is given in Table 3 above. A total of twenty-eight questionnaire items were employed in this study. Convergent validity, a measure of the questionnaire's validity, was calculated using the partial least squares approach. The degree to which an indicator accurately reflects a dimension is the gauge of convergent validity. As per Hair et al. (2019), an evaluation tool is deemed to possess convergent validity if the Average Variance Extracted (AVE) value is more than 0.5. Factor loadings are shown for each item in the table, and they are all greater than 0.70. As expected, every construct's composite reliability and AVE value exceed 0.70 and 0.50, respectively. Statistically, the Heterotrait-Monotrait ratio (HTMT) can be used to evaluate the discriminant validity of research instruments. For assessing discriminant validity in PLS-SEM studies, note that Ringle et al. (2012) recommended the HTMT ratio as a more accurate statistic. It is crucial to verify that the HTMT ratio does not exceed 0.90 in order to establish the instrument's discriminant validity. Table 4 indicates the validity of the research instrument used to evaluate the model, as the HTMT ratio values for each pair of latent variables are all below 0.90. The goal of the structural or internal assessment is to quantify how well the conceptual model predicts the variance of the endogenous variables. Figure 1 depicts the internal model and the construction process, together with the four measurement analyses that were conducted. --- Figure 1 Internal Model Assessment The objective of the internal or structural assessment is to ascertain how well the conceptual model predicts the variance of the endogenous variables. Four measurement analyses are conducted in order to achieve this.
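For reference, the standard formulas behind the outer-model criteria applied earlier in this subsection, which the paper uses but does not state explicitly, are as follows (given here as a clarifying sketch using the definitions common in the PLS-SEM literature, e.g. Hair et al., 2019, rather than the authors' own notation):

CA = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma_{x_i}^{2}}{\sigma_{T}^{2}}\right), \qquad CR = \frac{\left(\sum_{i=1}^{k}\lambda_i\right)^{2}}{\left(\sum_{i=1}^{k}\lambda_i\right)^{2} + \sum_{i=1}^{k}\left(1-\lambda_i^{2}\right)}, \qquad AVE = \frac{1}{k}\sum_{i=1}^{k}\lambda_i^{2}

where \lambda_i is the standardized loading of indicator i, k is the number of indicators for the construct, \sigma_{x_i}^{2} is the variance of indicator i, and \sigma_{T}^{2} is the variance of the summed scale. The decision rules reported above then amount to CA > 0.70, CR > 0.70, AVE > 0.50, and, for discriminant validity, HTMT < 0.90.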
The combined influence of the exogenous constructs on the endogenous constructs was assessed using the R-square (R2) value, also known as the coefficient of determination. Additionally, the bootstrap technique with 5,000 subsamples was used to evaluate the statistical significance of the direct and indirect path coefficients. To show that there is a statistically significant association between latent variables, this evaluation uses the t-statistic and the corresponding p-value, the latter requiring a value of less than 0.1. At this point, the research approach described by Hair et al. (2019) was used to test the study's hypotheses. The measurement and overall effectiveness of the structural model were then assessed, and the robustness of the model was verified using a Goodness of Fit study. The Chi-Square ratio, NFI, and SRMR values are evaluated in the analysis. In addition to the predictive relevance analysis discussed above, another method employed in this work is the blindfolding methodology, which is based on cross-validated redundancy and was fully explained by Sarstedt, Straub, and Hair in 2012. Examining and analyzing partial least squares structural equation modeling (PLS-SEM) in relation to structural equation modeling is one of the main goals of this work. --- RESULTS AND DISCUSSION Hair et al. (2019) specifically advise that, before performing a more thorough analysis, researchers make sure there are no missing or outlier data from the questionnaires distributed to research participants. A total of 500 surveys were initially sent out; however, after the author and the enumerators entered the data, it was discovered that some responses contained missing data or outliers, or that respondents had not completed the form. After the missing and outlier data were eliminated, 487 questionnaires were judged suitable and accurate. When SEM-PLS is used as the data analysis method, the sample size should be five to ten times the total number of research indicators (Hair et al., 2019). With 28 indicators in total, the minimum sample required for PLS-SEM in this study is therefore between 140 and 280; on this basis, the 487 samples are found to be eligible. In the PLS-SEM test series, the second requirement is to confirm that there is no multicollinearity among the variables used to form a construct. As stated by Hair et al. (2017), multicollinearity is not a concern if the VIF value is below 3.0. The findings, which show that the multicollinearity assumption is not violated, are displayed in the table below. Table 4 The study's multicollinearity criteria have satisfied all pertinent requirements, as per Hair et al. (2017). Every one of the resultant constructs has an inner VIF value below 3.0, as Table 4 above illustrates. The VIF values of the training, hiring, and employee engagement variables on social entrepreneurship performance and sustainable business are all below 3.0, indicating that these variables are acceptable. Furthermore, values below 3.0 were also found for the constructs associated with the dependent variables. The Goodness of Fit (GoF) in the study model will also be examined as a proposed criterion. Model fit evaluation can be performed in SmartPLS, according to Hair et al. (2017, 2019).
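To make the VIF check concrete, the sketch below computes VIF values for three hypothetical exogenous constructs using the standard definition VIF_j = 1/(1 - R_j^2). This is an illustration only; the paper itself computes VIF in SmartPLS, and the data and construct names here are invented.

import numpy as np
from statsmodels.stats.outliers_influence import variance_inflation_factor

# Hypothetical standardized construct scores for training, recruitment, and
# employee engagement; 487 rows to mirror the study's sample size.
rng = np.random.default_rng(42)
X = rng.normal(size=(487, 3))
X[:, 1] += 0.4 * X[:, 0]  # induce mild collinearity between the first two constructs

# variance_inflation_factor regresses column j on the remaining columns and
# returns 1 / (1 - R_j^2).
for j, name in enumerate(["training", "recruitment", "engagement"]):
    print(f"{name}: VIF = {variance_inflation_factor(X, j):.3f}")
# Values below 3.0 would satisfy the criterion of Hair et al. (2017) applied above.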
The assessment of model fit is essential for determining the overall utility of the structural (inner) and measurement (outer) models. The root mean square residual covariance (RMS theta) and the standardized root mean square residual (SRMR) are therefore examined; the SRMR should be below 0.10 or, more conservatively, 0.08. In addition, the normed fit index (NFI) should reach a minimum of 0.90. Table 5 displays the calculated model's NFI value of 0.842, which approaches the recommended threshold, and its SRMR value of 0.085, which is below the recommended threshold of 0.10. Given the study's findings, the model is judged to satisfy the Goodness of Fit assumptions. --- a. Internal Model Architecture By applying the coefficient of determination (R-square), one can determine the extent to which other factors impact the dependent variable. As per Chin (1998) and Hair et al. (2019), a dependent latent variable of the structural model with an R2 value of 0.67 or higher indicates that the influencing independent factors have substantial explanatory power; values between 0.33 and 0.67 are classified as moderate, and values between 0.19 and 0.33 as weak. For H1, the negative coefficient indicates that sustainable business practices (SBS) tend to decline as training (TRA) increases, and the significant t-statistic means the hypothesis is supported. This suggests that there is a statistically significant inverse link between organizational sustainability and training. Like H1, the negative coefficient suggests that social enterprise performance (SEP) tends to decline as training (TRA) increases. At the 0.05 level, the t-statistic of 2.521 is significant, indicating that the hypothesis is supported. This suggests that there is a statistically significant inverse link between social enterprise performance and training. We therefore conclude that H1 and H2 are supported. Business sustainability (SBS) tends to rise as recruitment (RET) increases, according to the positive coefficient. The hypothesis is supported by the highly significant (p<0.001) t-statistic of 5.253. This suggests that recruiting and organizational sustainability have a statistically significant positive link. According to the positive coefficient, social enterprise performance (SEP) tends to climb when recruitment (RET) does. The hypothesis is supported by the t-statistic of 5.559, which is highly significant (p<0.001). This suggests that recruitment and social enterprise performance have a statistically significant beneficial association. We therefore conclude that H3 and H4 are supported. Business sustainability (SBS) tends to rise as employee engagement (EET) does, according to the positive coefficient. The hypothesis is supported by the highly significant (p<0.001) t-statistic of 7.812. This suggests that employee engagement and organizational sustainability have a statistically significant positive link. Social entrepreneurship performance (SEP) tends to rise as employee engagement (EET) does, according to the positive coefficient. At the 0.001 level, the t-statistic of 3.231 is significant, indicating that the hypothesis is supported. This suggests that employee engagement and social entrepreneurship performance are positively correlated in a statistically meaningful way. We therefore conclude that H5 and H6 are supported. The positive correlation shows that business sustainability (SBS) tends to rise in tandem with social enterprise performance (SEP). The hypothesis is supported by the highly significant (p<0.001) t-statistic of 5.969. This suggests that social entrepreneurial success and organizational sustainability have a statistically significant positive link.
The positive correlation suggests that social impact on local communities (SIL) tends to improve as social entrepreneurial performance (SEP) does. Supporting the hypothesis, the t-statistic of 3.275 is significant at the 0.001 level. This suggests a statistically significant positive correlation between social impact on local communities and the performance of social entrepreneurs. We therefore conclude that H7 and H8 are supported. The positive correlation suggests that social impact on local communities (SIL) tends to improve as social entrepreneurial performance (SEP) does. The hypothesis is supported by the highly significant (p<0.001) t-statistic of 6.192. This suggests a statistically significant positive correlation between social impact on local communities and the performance of social entrepreneurs. We conclude that H9 is supported. To sum up, the statistical analysis provided support for every hypothesis. The results show a strong correlation between social entrepreneurship performance, employee engagement, training, and recruitment, as well as how these factors affect organizational sustainability and the social impact on local communities. The study's findings advance knowledge of sustainable HR practices in MSMEs in Indonesia from the standpoint of social entrepreneurship. --- Discussion The study's conclusions offer a nuanced understanding of the connections among Indonesian MSMEs' social impact on local communities, performance in social entrepreneurship, hiring practices, employee engagement, and training. In this section, we examine how to interpret these findings, connect them to earlier studies, and discuss the theoretical ramifications. An intriguing finding is the inverse association of training with both social entrepreneurship performance and sustainable business practices. Training has historically been linked to successful organizational outcomes (Lumunon et al., 2021;Oloan, 2022). This finding, however, raises the possibility that not all training programs will support the objectives of social entrepreneurship and sustainability. This makes it necessary to examine training programs' orientation and substance more closely. Do these programs focus on teaching about sustainability, or are they lacking this crucial element? Serious consideration must be given to the paradoxical negative link regarding training identified in H1 and H2. Training has long been thought to be an effective means of improving organizational performance. Nonetheless, it seems that not all training programs make an equal contribution when it comes to sustainability and social entrepreneurship. It is necessary to investigate how training programs relate to sustainability goals in terms of content, orientation, and alignment. To make sure that their training initiatives encourage employees to think sustainably, organizations might need to review and rethink their curricula. The literature on the significance of human resource practices in fostering organizational sustainability is consistent with the favorable effects of recruitment practices on social entrepreneurship and sustainable company performance (Mathis & Jackson, 2016;Sutanto & Kurniawan, 2016). Broader organizational goals are aided by efficient recruitment techniques that take into account a candidate's commitment to sustainability goals in addition to talent.
The positive correlations found in H3 and H4 demonstrate how strategically important recruitment is to the advancement of social entrepreneurship and sustainable business practices. Organizational goals are greatly aided by efficient hiring procedures that take into account candidates' beliefs and commitment to sustainability (Baten, 2017;Chandani et al., 2016). This is consistent with the claim that a company's early personnel lifecycle has a significant impact on how things turn out in the long run (Chandani et al., 2016;Sendawula et al., 2018;Yuswardi & Suryanto, 2021). Likewise, the research emphasizing the crucial role engaged employees play in organizational success is echoed by the strong positive association between employee engagement and both sustainable business practices and social entrepreneurship performance (Nugroho, 2023;Tabasum & Shaikh, 2022;Winasis et al., 2020). Engaged employees are more likely to actively participate in social entrepreneurship and take ownership of sustainability efforts (Iskandar & Kaltum, 2022b). The idea that social entrepreneurship can act as a catalyst for positive organizational and community outcomes is reaffirmed by the positive relationship shown between social entrepreneurship performance and sustainable business practices as well as local community social impact. MSMEs that actively participate in social entrepreneurship are seen as having a good impact on the community as well as encouraging sustainable practices in their operations (Burkett, 2013;Castellas et al., 2018;Iskandar & Kaltum, 2021;Krupa et al., 2019;McLoughlin et al., 2009;Troise et al., 2022). --- a. Theoretical Contribution This research adds empirical support to the body of knowledge on particular HR practices that have an impact on sustainability, which helps to advance the field of sustainable HRM. According to the findings, a targeted approach to HR procedures, such as hiring, training, and employee engagement, is essential to promoting long-term business strategies. By offering empirical evidence for the favorable correlation between social entrepreneurial performance, sustainable company practices, and social effects on local communities, this research contributes to the theoretical framework of social entrepreneurship. This calls into question the conventional understanding of entrepreneurship, which primarily considers financial gains. Given the contradictory effects of training on sustainability practices, it is necessary to reconsider the purpose and method of training within the framework of sustainable human resource management. Conventional training programs may enhance certain abilities, but they might not place as much emphasis on the attitudes and values that encourage sustainable behavior. These findings open up new avenues for future research on the subject matter and efficacy of sustainability-focused training. --- b. Practical Implications The study's findings highlight the necessity of strategic HR planning that specifically incorporates sustainability objectives for practitioners. HR specialists should coordinate employee engagement campaigns, recruitment tactics, and training plans with the organization's sustainability goals. This entails not just seeking out people who have a strong commitment to sustainability, but also cultivating an environment that values and promotes sustainable behavior. The contradictory training-related findings emphasize how crucial it is to review and even restructure training initiatives.
Employers should make sure that training programs contain elements that help employees develop a sustainability mindset in addition to building skills. The necessity for recruitment procedures that specifically take candidates' ideals and dedication to sustainability into account is highlighted by the positive effects of hiring on social entrepreneurship and sustainable company practices. In order to guarantee that the staff is aligned with company values, HR professionals should integrate sustainability criteria into the hiring process. The significance of employee engagement activities from a strategic perspective is underscored by the robust positive correlation observed between employee engagement and both sustainability and social entrepreneurship performance. Organizations ought to invest in programs that encourage employee participation and enable them to actively contribute to sustainability objectives and social entrepreneurship endeavors. --- c. Limitations and Future Research Directions There are certain limitations that should be acknowledged even if the study's findings offer insightful information. Causal inferences are limited by the data's cross-sectional character. Future studies investigating the temporal relationship between HR practices, sustainability, and social entrepreneurship may employ a longitudinal approach. Furthermore, because the study's focus was on MSMEs in Indonesia, caution should be exercised when extrapolating the results to other settings. To evaluate the robustness of the associations, future research could expand this analysis to include diverse cultural and economic contexts. Additionally, this study examined the impact of particular HR practices, such as employee engagement, recruiting, and training, on sustainable outcomes. Future studies should look into other elements like leadership and organizational culture to provide a more thorough understanding of the mechanisms influencing sustainability in MSMEs. --- CONCLUSION To sum up, this study offers a thorough evaluation of how HR procedures, sustainability, and social entrepreneurship relate to MSMEs in Indonesia. The necessity to reevaluate training programs to make sure they are in line with sustainability goals is highlighted by paradoxical findings pertaining to training. Sustainable business practices and social entrepreneurship can be fueled by strategic levers such as high employee engagement and effective recruitment practices. By highlighting the connections between social entrepreneurship and sustainable human resource management, this study enhances the theoretical frameworks in both fields. In order to improve sustainable outcomes, HR professionals can use the study's practical findings to inform their strategic planning, recruitment, and employee engagement initiatives. This research offers pertinent insights for companies seeking to traverse the challenging landscape of HR practices in the pursuit of sustainable and socially impactful company operations, as companies around the world grapple with the sustainability imperative. Strong explanatory power is indicated by the high R2 values for social entrepreneurship performance (0.613), social impact on local communities (0.643), and sustainable business (0.534) in their respective models.
Furthermore, the corresponding adjusted R2 values (0.602, 0.652, 0.694) imply that the models successfully take into account the number of predictors, hence enhancing the robustness of the associations investigated in the research. These results confirm that, in the context of the study, the variables selected to explain variations in social entrepreneurship performance, sustainable business practices, and the social impact on local communities are reliable. --- b. Forecasting Model's Applicability Based on recommendations from Hair et al. (2017, 2019), this study evaluated the model using the Q2 redundancy measure while accounting for the reflective component of the metric. Hair's Q2 value indicates how well the model predicts outcomes outside of the sample. For a given dependent construct reflecting endogenous variables in structural equation models, a Q2 value larger than zero indicates the path model's predictive relevance. Given the data, Table 7 demonstrates the predictive power of the model. --- c. Bootstrapping Test When the t-statistic at the 95% confidence level is greater than the critical value (1.96), the hypothesis is considered significant. The SmartPLS bootstrapping procedure was used to obtain the findings reported here. The construct hypothesis analysis is displayed in Table 8, together with the beta value, mean, standard deviation, t-value, and p-value. A p-value threshold of 0.05 was therefore used to make the decision.
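A minimal sketch of the bootstrapping logic described above, assuming hypothetical data and a single path estimated by ordinary least squares (the paper itself uses SmartPLS; the construct names and effect size below are invented for illustration). Q2 itself is conventionally obtained by blindfolding as Q2 = 1 - SSE/SSO, the ratio of squared prediction errors to squared observations.

import numpy as np

rng = np.random.default_rng(0)
n, n_boot = 487, 5000  # sample size mirrors the study; 5,000 resamples as reported

# Hypothetical standardized scores: employee engagement (EET) -> sustainable business (SBS).
eet = rng.normal(size=n)
sbs = 0.4 * eet + rng.normal(scale=0.9, size=n)

def path_coef(x, y):
    # OLS slope; for standardized variables this approximates a single path coefficient.
    return np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)

boot = np.empty(n_boot)
for b in range(n_boot):
    idx = rng.integers(0, n, size=n)  # resample cases with replacement
    boot[b] = path_coef(eet[idx], sbs[idx])

est = path_coef(eet, sbs)
se = boot.std(ddof=1)  # bootstrap standard error of the path coefficient
print(f"beta = {est:.3f}, SE = {se:.3f}, t = {est / se:.2f}")
# |t| > 1.96 at the 95% confidence level is read as a significant path.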
Female sex workers (FSW) living with HIV in sub-Saharan Africa have poor engagement to HIV care and treatment. Understanding the HIV care and treatment engagement experiences of FSW has important implications for interventions to enhance care and treatment outcomes. We conducted a systematic review to examine the HIV care experiences and determinants of linkage and retention in care, antiretroviral therapy (ART) initiation, and ART adherence and viral suppression among FSW living with HIV in sub-Saharan Africa. The databases PubMed, Embase, Web of Science, SCOPUS, CINAHL, Global Health, Psycinfo, Sociological Abstracts, and Popline were searched for variations of search terms related to sex work and HIV care and treatment among sub-Saharan African populations. Ten peer-reviewed articles published between January 2000 and August 2015 met inclusion criteria and were included in this review. Despite expanded ART access, FSW in sub-Saharan Africa have sub-optimal HIV care and treatment engagement outcomes. Stigma, discrimination, poor nutrition, food insecurity, and substance use were commonly reported and associated with poor linkage to care, retention in care, and ART initiation. Included studies suggest that interventions with FSW should focus on multilevel barriers to engagement in HIV care and treatment and explore the involvement of social support from intimate male partners. Our results emphasise several critical points of intervention for FSW living with HIV, which are urgently needed to enhance linkage to HIV care, retention in care, and treatment initiation, particularly where the HIV prevalence among FSW is greatest.
Introduction Globally, female sex workers (FSW) remain disproportionately burdened by HIV and are a key population for engaging in HIV care and treatment, both to improve these women's health and to stem ongoing HIV transmission. The HIV prevalence among FSW worldwide is 12% (Baral et al., 2012). FSW have more than 13.5-times increased odds of HIV infection compared with women of reproductive age in the general population of low and middle income countries (Baral et al., 2012). When compared to other regions, sub-Saharan Africa holds the highest HIV prevalence among FSW, with nearly 40% of FSW living with HIV (Baral et al., 2012). Expanded antiretroviral therapy (ART) access among the general population has led to substantial improvements in the overall health and well-being of those living with HIV. ART adherence can significantly maintain or restore immune function while also reducing viral load and the likelihood of onward transmission (Cohen et al., 2011;Grinsztejn et al., 2014;Group et al., 2015). The numerous benefits of ART are, however, reliant on successful engagement in HIV care and treatment, including linkage to care shortly after HIV diagnosis, retention in pre-ART care, timely initiation onto ART, and optimal ART adherence for viral suppression (Gardner, McLees, Steiner, Del Rio, & Burman, 2011;McNairy & El-Sadr, 2012;Mountain et al., 2014b). FSW living with HIV must be linked to care and initiate treatment to receive the individual immunological and clinical benefits of ART, such as viral suppression. Also, given the evidence supporting ART for treatment as prevention (Granich et al., 2010;Smith, Powers, Kashuba, & Cohen, 2011), FSW who are virally suppressed decrease the likelihood of ongoing transmission to their sexual partners (Cohen et al., 2011;Gardner et al., 2011). As countries in sub-Saharan Africa begin to implement universal testing and treatment strategies to provide immediate ART to all those testing HIV-positive, there has been recent attention to estimating the proportion of key populations, such as FSW, at each step of the HIV treatment cascade from HIV diagnosis to viral suppression on ART. Currently, prevalence data are limited on linkage and retention in HIV care prior to ART initiation (Mountain et al., 2014b). A recent meta-analysis of HIV care continuum estimates found that, among FSW living with HIV, ART initiation ranges from 19% in Kenya to 48% in Rwanda, with current ART use ranging from 23% in Kenya to 70% in Burkina Faso (Mountain et al., 2014a;Mountain et al., 2014b). Strategies that enhance linkage and retention in HIV care, initiation of ART, and viral suppression through ART adherence are urgently needed to maximise the benefits of ART among this key population. While estimates of HIV care and treatment engagement outcomes among FSW are improving, our understanding of barriers and facilitators of linkage to HIV care and treatment for FSW living with HIV is limited. Among general populations of people living with HIV in low-income countries, reasons found for not being linked to care or initiating ART have included poor health provider communication and barriers to accessing services (e.g. transportation, cost) (Fehringer et al., 2006;Harris et al., 2011;Layer et al., 2014a;Layer et al., 2014b;Tuller et al., 2010; U.S. Agency for International Development [USAID], 2013).
Among people living with HIV in high-income countries, barriers have included depression, social instability, substance use, and literacy levels (Harris et al., 2011;Winter, Halpern, Brozovich, & Neu, 2014). FSW frequently experience stigma, discrimination, and violence, which likely exacerbates these known barriers to HIV care and treatment (Baral et al., 2012;Chersich et al., 2013;Scambler & Paoli, 2008;Scheibe, Drame, & Shannon, 2012). Despite the implementation of lifelong ART for all women who are pregnant or breastfeeding for the prevention of mother-to-child transmission (Option B+) in many countries within sub-Saharan Africa, FSW may face additional stigma and discrimination when attending antenatal care visits without a male partner (Beckham et al., 2015;Beckham et al., 2016). To improve HIV care and treatment outcomes for FSW living with HIV in sub-Saharan Africa, it is imperative to systematically review the existing evidence on FSW's experiences with linkage and retention in care and treatment initiation and adherence in this region. For this systematic review, our objective was to examine and synthesise the findings in the quantitative and qualitative literature regarding the care experiences and factors associated with linkage to and retention in HIV care, treatment initiation, and ART adherence and viral suppression among FSW living with HIV in sub-Saharan Africa. --- Methods --- Search strategy To identify articles on the HIV care and treatment experiences and determinants of FSWs living with HIV, we used established criteria for systematic reviews, as defined by the Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) Statement (Moher, Liberati, Tetzlaff, & Altman, 2010). We also consulted guidelines and strategies for conducting systematic reviews with findings from both qualitative and quantitative data (Petticrew & Roberts, 2008;Harden, 2010;Atkins, Launiala, Kagaha, & Smith, 2012). Details of the article selection process are shown in Figure 1. We conducted our search in two phases. The first phase sought to identify all articles in English peer-reviewed journals published between 2000 and 2015 that described the experiences of FSW living with HIV with engagement in HIV care and treatment. Articles were included if they provided results that specifically described the social, behavioural and/or health system experiences of FSW living with HIV. This included articles with FSW living with HIV as a sub-sample within a research study on FSW, key populations, or people living with HIV. Articles were excluded if they did not disaggregate the results of FSWs living with HIV. Articles that focused solely on biological or clinical measurements of disease progression or transmission among this population were also excluded. We conducted our initial search on 22 November 2013 and updated the search on 30 July 2015. We used the databases of PubMed, Embase, Web of Science, SCOPUS, CINAHL, Global Health, Psycinfo, Sociological Abstracts, and Popline. Our search terms were: --- Selection criteria and data extraction Phase 1 had three review steps: 1. Title review, 2. Abstract review, 3. Full-text review. Each review step included two independent reviewers who evaluated whether or not the article should be included, based on the following a priori inclusion criteria: --- (a) Included female sex workers living with HIV.
We excluded articles that were solely about transgender-identified female sex workers because their experiences were likely to differ from those of cisgender female sex workers. We included articles that specifically mentioned sex worker or prostitute. We excluded articles that included only mentions of transactional sex, but no indication that the women self-identified as FSW. --- (b) Included original empirical quantitative or qualitative data. We excluded articles that were commentaries, letters-to-the-editor, systematic reviews, or meta-analyses and did not present any original data. We also excluded articles where we could not disaggregate which results were from FSW living with HIV. --- (c) Included data collected after 2000. We limited our search to data that is more reflective of the current era of HIV care and treatment. --- (d) Article published in English. After the two primary reviewers finalised their decisions, any discrepancies were discussed among all four reviewers (DC, RZ, PF, KL) until a final decision was agreed upon. The initial title and abstract review resulted in a total of 72 articles that met the inclusion criteria and went through final paper review. For full-text review, two reviewers separately extracted key data about each paper using a standardised data abstraction sheet and made a final recommendation for inclusion. Reviewers used the same a priori inclusion criteria for each step of Phase 1. At the conclusion of Phase 1, we had 46 articles published in English-language peer-reviewed journals since 2000 that were focused on the experiences of FSW living with HIV. In Phase 2, we aimed to identify a sub-sample of the 46 articles from Phase 1 that addressed determinants related to linkage to HIV care, retention in HIV care, ART initiation, and ART adherence and viral suppression among FSW in sub-Saharan Africa. To do so, we had two reviewers independently review the extractions of each of the 46 articles to identify whether or not an article used data collected in sub-Saharan Africa and had data on this population's HIV care and treatment experiences. Any discrepancies were discussed by all reviewers until a final decision was made. Of the 46 articles from Phase 1, we identified ten articles from sub-Saharan Africa that fit the inclusion criteria and are the focus of the results presented below. To synthesise the findings from the ten articles, we first organised results by the care and treatment steps of the HIV care cascade: linkage and retention in care, ART initiation, and ART adherence and viral suppression. Given the focus on the care experiences and determinants of care among FSW living with HIV, the HIV testing step of the HIV care cascade was not included as part of our results. We also developed a conceptual framework to organise results using a multilevel framework that includes individual, interpersonal, and structural levels. For quantitative articles, we assessed how each outcome was measured and compared reported determinants of HIV care and treatment. For qualitative studies, we identified themes highlighted by each article and findings related to HIV care and treatment engagement outcomes. Because qualitative articles tended to provide more in-depth information on these women's lives, we used the articles to provide illustrative quotes and gain a deeper understanding of these women's HIV care and treatment experiences. --- Results The ten articles in the final sample were all published after 2011 (Table 1).
The sample size of FSW ranged from 20 to 870. They came from Rwanda (n = 1), Zimbabwe (n = 2), Benin (n = 2), Burkina Faso (n = 1), Nigeria (n = 1), Swaziland (n = 1), Kenya (n = 1), and Uganda (n = 1). Overall, critical barriers and facilitators were noted at the individual, interpersonal, and systems levels for engagement in HIV care and treatment among FSW (Figure 2). At the individual level, the articles describe that substance use and ART knowledge and attitudes influence linkage and retention in care, ART initiation, and ART adherence. At the interpersonal level, peer and intimate partner support were important determinants of engagement in care and treatment. At the structural level, articles within our review emphasised experiences of stigma and discrimination from healthcare workers and poor health systems, such as long waiting lines and distance to clinics, as barriers to linkage and retention in care and ART initiation. Food security and underlying poverty experienced by FSW also played an important role in engagement in care and treatment. --- Linkage and retention to HIV care The evidence on linkage and retention in care was limited, as only two articles quantitatively described linkage to HIV care or retention in HIV care among FSWs living with HIV in sub-Saharan Africa. Both the Rwanda and Nigeria articles assessed linkage to care as receiving any HIV-related medical care (Braunstein et al., 2011;Lawan, Abubakar, & Ahmed, 2012). Although the majority of FSWs in these study populations were receiving HIV care, FSWs' positive experiences with healthcare providers and ART knowledge positively influenced HIV care engagement and retention specifically among FSW in Rwanda (Braunstein et al., 2011). Overall, these two studies showed that most FSW living with HIV had reported linking to HIV care. The article from Kano, Nigeria found brothel-based FSW were most likely to receive HIV care within ART clinics in public hospitals rather than receiving care at medicine stores, faith-based health centres or traditional healers (Lawan et al., 2012). Among FSW not in care in Rwanda, many women believed that HIV care was not necessary until they were symptomatic or had worsened immunological health (Braunstein et al., 2011). In multivariable analyses, factors associated with being out-of-care in Rwanda included breastfeeding, having a known HIV-infected sexual partner, and reported condom use at last sex. Among FSW in care, structured interviews showed that FSW generally had positive attitudes towards ART as well as knowing the purpose and benefits of ART (Braunstein et al., 2011). --- ART initiation Overall, our search showed that six articles examined ART initiation (Braunstein et al., 2011;Diabaté et al., 2011;Konate et al., 2011;Cowan et al., 2013;Diabaté et al., 2013;Mtetwa, Busza, Chidiya, Mungofa, & Cowan, 2013). Five of the articles used quantitative methods to assess determinants of ART initiation, while one article used qualitative methods to provide a more in-depth understanding of care experiences related to ART initiation among FSW in Zimbabwe (Mtetwa et al., 2013). These articles highlighted important barriers to ART initiation, including stigma and poor nutrition. One article reported on a research project that provided ART as part of a tailored intervention package for FSW in Burkina Faso (Konate et al., 2011). Both professional and non-professional (bar waitresses, fruit sellers, etc.)
FSW who were part of the open cohort in Burkina Faso received ART, in addition to treatment adherence support from clinical psychologists and group education sessions (Konate et al., 2011). With that support, approximately 30% of FSW living with HIV initiated ART. Within three urban and rural areas of Zimbabwe, 56% of FSW on ART reported receiving treatment within a primary care ART clinic and 41% reported receiving treatment at a hospital (Cowan et al., 2013). FSW living with HIV in Zimbabwe discussed discrimination and hostility from hospital staff and reported financial and logistical barriers to treatment (Mtetwa et al., 2013). This included the negative attitudes they would receive during examinations and counselling, as described by one Zimbabwean participant: She opened my file and I saw her face just changed instantly, and she actually frowned and looked at me like I was disgusting her. Her first words to me were, 'so you are a prostitute and you actually have the guts to come here to waste our time and drugs on you, why do you do such things anyway? Why can't you find a man of your own and get married'? (Mtetwa et al., 2013). These types of harsh judgements by healthcare providers were a significant barrier to women's motivation to initiate ART. Other types of public humiliation from hospital staff that women in Zimbabwe described included public announcements in the waiting room stating that all sex workers should move to the back of the waiting line or stand in a separate line (Mtetwa et al., 2013). Besides poor treatment by healthcare providers, FSW in Zimbabwe described other barriers related to the financial burden of initiating ART. They said that initiating ART carried a prohibitive financial burden due to the costs of regular testing and doctors' visits. Nutrition was also revealed as a barrier to treatment, as FSW worried that being on ART would require more nutritious diets than their current ones, and that food would therefore become more of a financial burden. FSW also perceived travel time for receiving treatment as burdensome, encroaching on their available time to earn money (Mtetwa et al., 2013). --- ART adherence and viral suppression ART adherence and viral suppression were assessed in six articles (Braunstein et al., 2011;Konate et al., 2011;Benoit et al., 2013;Fielding-Miller, Mnisi, Adams, Baral, & Kennedy, 2014;Mbonye, Rutakumwa, Weiss, & Seeley, 2014;Goldenberg et al., 2016). Lack of adequate food was often linked to the difficulty of adhering to ART (Braunstein et al., 2011;Fielding-Miller et al., 2014). Additionally, substance use was strongly associated with gaps in ART treatment and the likelihood of a detectable viral load (Mbonye et al., 2014). One article highlighted the importance of intimate partner support for ART adherence (Benoit et al., 2013). Reported adherence was relatively high among FSW who participated in an intervention where ART was provided in Burkina Faso (Konate et al., 2011). Within the first six months post ART initiation, over 80% of FSW achieved adherence levels of 95% or higher, measured by pill counts. Nearly all FSW who initiated ART had reached viral suppression within six months. ART adherence continued to increase to 92% at 12 months post ART initiation.
Approximately 30% of FSW in Rwanda reported ever missing a pill since initiating ART, while 14% reported missing pills in the prior three days (Braunstein et al., 2011). Qualitative interviews among FSW in Swaziland revealed that FSW were counselled at the clinics to consume 'healthy foods' in order to manage their HIV infection and disease progression (Fielding-Miller et al., 2014). FSW specifically expressed anxiety about being unable to take their HIV medication on an empty stomach. One FSW described the importance of eating prior to taking ART to facilitate drug absorption: You don't necessarily have to eat tasty food to take the pills. Just any food that will settle in the stomach and allow for digestion of the pills because you cannot take the pills on an empty stomach (Fielding-Miller et al., 2014, p. 5). FSW living with HIV in Swaziland felt they could not regularly afford or access food, particularly healthy foods such as fruits and vegetables; food insecurity therefore presented a potential barrier to their adherence to their medication. Substance use was found to be associated with gaps in ART among FSW living with HIV in Uganda. These women described their concern about the effect of alcohol on their health and adherence (Mbonye et al., 2014). Some of these FSW expressed their desire to stop using alcohol and felt it was necessary to leave sex work to abstain from alcohol. FSW also openly discussed that drinking inhibited their ability to remain adherent to ART because it limited their ability to remember to take their pills (Mbonye et al., 2014). A study in Kenya identified that FSW who were on ART reported receiving support from intimate partners-including financial and emotional support-that enabled them to better adhere to ART (Benoit et al., 2013). Several FSW on ART reported that their intimate partners would buy their medications when needed. One FSW described how she and her intimate partner would share the responsibility of travelling to the clinic for ART. Some FSW also stated that they received reminders from their partners to take medications. One FSW explicitly shared that her intimate partner would send reminders by cell phone to take her pills when they were not together. Additionally, intimate partners encouraged their FSW partners to maintain a healthy lifestyle, such as reducing alcohol use, exercising, and eating healthy foods (Benoit et al., 2013).
At the individual level, we found that substance use can impede engagement in HIV care and treatment, while accurate ART knowledge and positive attitudes towards treatment improve engagement throughout the HIV care continuum. At the interpersonal level, social support from peers or intimate partners can lead to optimal ART adherence and ultimately viral suppression. At the structural level, stigma and discrimination from healthcare workers and poor health systems adversely affect linkage and retention in care and ART initiation. Furthermore, food security and poverty were found to be substantial factors affecting ART initiation and adherence. Although more research is needed to address the broad range of FSW populations and settings, these results emphasise critical entry points for interventions to enhance HIV care and treatment for FSW living with HIV in sub-Saharan Africa. Stigma and discrimination occurred during linkage to HIV care and ART initiation for FSW living with HIV. FSW often face multiple levels of stigma and discrimination related to the social and structural context of sex work (Scambler & Paoli, 2008;Logie, James, Tharao, & Loutfy, 2011;Baral et al., 2012;Scheibe et al., 2012;Chersich et al., 2013). Therefore, it is not surprising that this stigma and discrimination continue to occur and are perhaps exacerbated among FSW living with HIV. In our review, FSW highlighted that stigma and discrimination specifically within the healthcare setting were significant barriers to their engagement in care and treatment, which is likely due to their sex work practices and HIV status combined (Logie et al., 2011;USAID, 2013). Interventions focused on healthcare service providers to reduce stigma and discrimination, such as sex work sensitisation training, are urgently needed to improve HIV care and treatment outcomes for FSW, a finding that has been highlighted in other studies (Zulliger et al., 2015). Social support, especially from peers or intimate male partners, could help overcome some of the barriers related to stigma and discrimination. Peer support and health navigation hold strong promise for improving engagement throughout the HIV care continuum. Peer support, in addition to social environment cohesion among FSW, has been associated with FSW's willingness to engage in HIV testing and treatment initiation (Hong, Fang, Li, Liu, & Li, 2008;Deering et al., 2009). HIV care and treatment interventions should also build on the intimate partner dynamics among FSW. Strategies that enhance trust and communication between partners and partner engagement in care could improve the quality of emotional support for ART initiation (Fleming, Barrington, Perez, Donastorg, & Kerrigan, 2015;Syvertsen et al., 2015). Nutrition plays an important role in ART initiation and adherence among FSW living with HIV in sub-Saharan Africa. Food insecurity has been intrinsically linked with sex work (Oyefara, 2007;Weiser et al., 2007;Anema, Vogenthaler, Frongillo, Kadiyala, & Weiser, 2009), and lack of adequate food and poverty often motivate women to engage in sex work. While engagement in sex work can be income generating, FSW may continue to struggle with food insecurity. Our findings indicate that FSW-particularly within Rwanda and Swaziland-understood the importance of eating healthy foods or any food at all in order to prevent negative ART side effects (Braunstein et al., 2011;Fielding-Miller et al., 2014).
While healthful diets are important, a frequent barrier to treatment initiation or continuation among FSW was a lack of food. Food supplements and context-specific nutritional counselling are valuable interventions to improve food security and to promote ART initiation (Mamlin et al., 2009). It is important, however, that this messaging be realistic given FSW's available resources; failure to do so can introduce additional barriers to ART among FSW who are food-insecure. The synergistic relationship between substance use and sex work is well known. Substance use is often associated with women entering into sex work (Wechsberg, Luseno, Lam, Parry, & Morojele, 2006; Strathdee et al., 2015), and others may use substances to facilitate soliciting clients and to cope with the challenges of engaging in sex work (de Graaf, Vanwesenbeeck, van Zessen, Straver, & Visser, 1995; El-Bassel, Witte, Wada, Gilbert, & Wallace, 2001; Chersich et al., 2007; Gupta, Raj, Decker, Reed, & Silverman, 2009; Li, Li, & Stanton, 2010). Findings from our review demonstrate that substance use, particularly alcohol use, is also a barrier to engaging in HIV care and treatment among FSW. To date, there are few interventions focused on reducing substance use while improving ART uptake (Deering et al., 2009; Donastorg, Barrington, Perez, & Kerrigan, 2014), and none with FSW in sub-Saharan Africa. Substance use can impair cognitive functions, which in turn may adversely affect health-seeking behaviour such as receiving and initiating HIV care and treatment (Chitwood, McBride, French, & Comerford, 1999; Tucker, Burnam, Sherbourne, Kung, & Gifford, 2003; Sohler et al., 2007; Simmonds & Coomber, 2009; Lancaster et al., 2016). Integrating substance use treatment with HIV care and treatment programmes, as resources allow, may reach FSW and improve HIV care and treatment outcomes. FSW could be an ideal population to benefit from investigational treatment and prevention modalities. Preliminary trial results have suggested the potential effectiveness of long-acting injectable antiretrovirals for viral suppression (Spreen, Margolis, & Pottage, 2013; Kerrigan, Mantsios, Margolis, & Murray, 2016; Margolis et al., 2016a; Margolis et al., 2016b). If effective, long-acting injectables hold great promise for improving adherence by alleviating the burden of a daily pill for HIV treatment. This review identified important gaps within the current literature that could enhance our understanding of HIV care and treatment experiences for FSW living with HIV in sub-Saharan Africa. ART initiation and sustained adherence, with the goal of viral suppression, are critical not only for improving health outcomes but also for preventing onward transmission (Cohen et al., 2011; Gardner et al., 2011). Our systematic review reveals the limited literature among FSW living with HIV in sub-Saharan Africa, where HIV prevalence is the highest globally. The lack of evidence on linkage to and retention in HIV care among FSW is also concerning as countries move towards providing universal treatment; universal treatment strategies may otherwise continue to widen current disparities in linkage and retention among FSW. Furthermore, our review emphasises the large variation in engagement in HIV care and treatment among diverse FSW populations and settings in the region. The majority of articles within our review provide insights from cross-sectional quantitative or qualitative data.
Further research, including longitudinal research, is imperative to provide more clarity on the temporality of the determinants affecting engagement and disengagement in HIV care and treatment for FSW living with HIV in sub-Saharan Africa. There are limitations to this systematic review. First, our search criteria for sex work may have affected the final set of articles presented within this review; women who engaged in transactional sex but did not self-identify as sex workers were not included within the population of our final set of articles. Second, our search was restricted to peer-reviewed published literature. Our findings do not include the grey literature and non-peer-reviewed journals that may offer additional insights into the HIV care experiences of FSW living with HIV. Third, the articles included in our review covered varying typologies of FSW; interpretations of our findings must therefore be further explored within specific FSW populations. Finally, there may be other important factors influencing these women's HIV care and treatment experiences that have yet to be researched or published. Thus, this review should be considered a synthesis of the current published literature and not a definitive review of all determinants of the care and treatment experiences of FSW living with HIV. Nonetheless, our findings highlight several future lines of research and potential interventions to improve HIV care and treatment experiences for FSW living with HIV in sub-Saharan Africa. --- Conclusions This systematic review revealed important barriers and facilitators to engagement in HIV care and treatment among FSW in sub-Saharan Africa. The evidence showed that stigma, discrimination, poor nutrition and food insecurity, and substance use impeded FSW's engagement in these critical steps of the HIV care continuum. Developing tailored interventions that address these known barriers for FSW living with HIV in sub-Saharan Africa is crucial to prevent ongoing transmission and improve health outcomes among this population. --- Figure: Key determinants and HIV care and treatment experiences among female sex workers living with HIV at the individual, interpersonal, and structural levels throughout the HIV care continuum, as adapted from Zulliger et al. (under review). HIV care continuum steps shaded in grey were the focus of the present systematic review. --- Figure: Flow chart of study selection for inclusion in the systematic review --- Table 1: Characteristics of studies including determinants and care experiences of HIV care and treatment of female sex workers living with HIV
The study sought to investigate the social and cultural determinants that affect the uptake of Universal Health Coverage (UHC) in Masinga sub-county, Machakos County. Universal health coverage is a major milestone in the attainment of not only the Millennium Development Goals (MDGs) but also the Sustainable Development Goals (SDGs). In Kenya, the government rolled out the UHC program in 2018 with four counties acting as pilots for the rest; Machakos was one of them. The objective of the study was to examine the effects of geographical access on universal health care in Masinga sub-county. A descriptive research design was adopted in which a total of 350 respondents were chosen from all the 7 locations of Masinga sub-county. Sampling combined stratified and systematic random sampling. A pilot study was done in Ndalani location, Yatta sub-county, to test the validity and credibility of the study instruments before the actual research. In the actual research, the respondents were issued with self-administered questionnaires containing both closed-ended and open-ended questions; the former sought to capture specific details from the respondents while the latter gave them leeway to elaborate on their answers. Quantitatively, the responses were fed into the SPSS program and analysed; they were presented using percentages, graphs, and charts. The study established that geographical access plays an important role in either enhancing or inhibiting the utilization of UHC. It recommended that the government step up sensitization programs to get as many people as possible to utilize the program. It was expected that the study would benefit not only the policy makers, but the county government and the NGOs operating in the area as well.
INTRODUCTION According to WHO (2019), UHC means that all people and communities receive the health services they need without suffering financial hardship. It comprises a variety of essential, quality health services, including health promotion, preventive, rehabilitative, and palliative services. It assists people in accessing health services that address the basic and important causes of illness and death, and it ensures that service quality is adequate to improve people's health status. Facilitating people's access to health services without their incurring a heavy financial burden is believed to reduce the likelihood of falling into poverty in the process of trying to meet medical expenses, which can otherwise require a person to use all of his or her savings and assets, interfering not only with their own future but also with that of their children. The objective of UHC is part and parcel of the SDGs set in 2015; the progress of all nations is therefore assessed in terms of whether or not they are able to meet UHC and other health-based objectives. Sound health facilitates the learning of both young people and adults, helps households escape financial burdens, and provides the basis for long-term economic growth and development (Barasa, Rogo, Mwaura, & Chuma, 2018). The global idea of universal health coverage (UHC) originated in Germany in 1883, when the sickness insurance law was successfully legislated, giving citizens the right to quality health services and requiring employers in the country to provide their employees with health insurance cover (Nxumalo, Tseng & Griffiths, 2018). The idea of UHC was later introduced in Cuba in the 1960s, facilitating the elimination of many diseases such as polio, measles, malaria, and mother-to-child transmission of HIV, and leading to major gains in health security in the country. The Cuban government has given all its citizens free preventive and curative health services through free universal health coverage. The Cuban UHC policy is also based on medical internationalism, which focuses on solidarity with the world population by sending medical personnel to other countries in the Americas and elsewhere (Nyikuri, Tsofa & Okoth, 2017). In Africa, universal health coverage policy rests on the 2012 United Nations resolution on UHC, which puts health forward as a key factor in sustainable development (Barasa et al., 2018). The resolution encourages all continents, Africa included, to provide access to quality and affordable health care services for sustainable development (Obatha & Wiley, 2019). Algeria was the first country in Africa to enact a law on Universal Health Coverage (UHC) for its people, in 1975. The UHC produced good indicators on the maternal mortality ratio, the child mortality rate, and life expectancy: both maternal and child mortality dropped drastically with the introduction of UHC, while life expectancy increased due to good health care services (Tsiachristas et al., 2019). South Africa is another African country which joined the UHC partnership, in 2016, and its health system has greatly improved people's health through the provision of quality and affordable services; UHC is a major agenda item in the country's Vision 2030 (Waithaka, Tsofa & Barasa, 2018). Another country in Africa which has successfully implemented the UHC agenda is Rwanda, after recovering from the genocide of 1994, which had left the country in a very poor state of health.
Tema, Vito, Zanella, Gurioli, Lanza & Sulpizio (2017) state that under UHC all citizens are covered for their health needs. Many African nations have attempted to implement social health insurance schemes, the majority of which insure formal-sector employees who have pooled their resources together. In Kenya, attempts have been made to introduce UHC since 1963. A household health and utilization survey conducted in 2007 revealed that only 10% of Kenyans had insurance cover; this involved not only those in urban areas but also those in rural areas (MOH, 2019). Another demographic study done in Kenya showed that by 2008 only 9.8% of Kenyans had enrolled for health insurance (KNBS, 2010), and by 2015 only 25% of Kenyans had been covered by UHC (Barasa et al., 2018). Mwaura, Njeri, Barasa, Ramana, Coarasa, Rogo & Khama (2017) add that health insurance in Kenya falls under both mandatory and voluntary schemes. According to Obadha et al. (2019), poverty in Kenya has reached severe levels, preventing many poor Kenyan citizens from accessing health services when they are sick. In 2010, the Kenyan Government introduced a health policy in its new Constitution which gave all citizens a right to access quality and affordable medical care. To realize the universal health coverage agenda, the government of Kenya selected four pilot counties, namely Kisumu, Nyeri, Isiolo and Machakos, for the UHC study; the results were to be used in implementing UHC services in the other counties and the whole country (Kamau, 2018). In spite of these efforts, however, access to universal coverage still remains a mirage to many people across the country, particularly in rural areas. It is against this background that this research chose to evaluate the household factors hindering access to universal health coverage, taking Masinga Sub-County in Machakos County as a case. --- METHODOLOGY This study adopted a descriptive survey design to collect and analyze data. This design is usually appropriate for studies that intend to collect both qualitative and quantitative data. According to Mugenda (2003), this method is appropriate because it eliminates the researcher's manipulation of the variables and enables the researcher to describe the state of affairs of the problem under investigation and the relationships between the variables. It also suits research that requires a detailed explanation of phenomena. In this case, the design suited the research because it sought to describe in detail the social and cultural factors that hinder the UHC program in Masinga Sub-County. The study targeted households in Masinga Sub-County; according to the KNBS (2019), the sub-county had a total of 36,251 households, spread across its 7 locations, namely Kivaa, Masinga, Muthesya, Ndithini, Kithyoko, Kangonde and Ekalakala. A sample is a sub-set of the population that can be analyzed at reasonable cost and used to make generalizations concerning the population parameters (Mugenda, 2003). The sample was drawn using a stratified sampling technique, with the seven locations acting as strata. The sample size was 350 households across the seven locations, with each location contributing 50 households.
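The stratified-then-systematic selection summarised above (and detailed in the next paragraph) can be illustrated with a short Python sketch. This is a minimal illustration under stated assumptions: the household lists and location sizes are hypothetical placeholders, not the authors' actual sampling frame, and the random starting offset is one common way to seed a systematic sample.

import random

def systematic_sample(households, interval=5, target=50):
    # Start at a random offset within the first interval, then take
    # every interval-th homestead until the target count is reached.
    start = random.randrange(interval)
    return households[start::interval][:target]

# Hypothetical frame: the seven locations act as strata, each holding a
# list of household identifiers (list sizes here are placeholders).
locations = {name: [f"{name}-HH{i:04d}" for i in range(5000)]
             for name in ["Kivaa", "Masinga", "Muthesya", "Ndithini",
                          "Kithyoko", "Kangonde", "Ekalakala"]}

sample = {loc: systematic_sample(hhs) for loc, hhs in locations.items()}
print(sum(len(s) for s in sample.values()))  # 350 respondents: 50 per location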
Systematic random and stratified random sampling techniques were used to select the sample of 350 households, with each location contributing 50: Kivaa 50, Masinga 50, Muthesya 50, Ndithini 50, Kithyoko 50, Kangonde 50 and Ekalakala 50. Stratified random sampling was used to group the respondents into strata of 50 per location; selection within each stratum was then done through a systematic random technique in which every fifth homestead was picked until the total of 50 per location was reached. Primary data were obtained using a researcher-developed questionnaire with both closed-ended and open-ended questions, which the researcher and research assistants administered to the households and key informants at convenient times: in the early hours of the morning, in the evening, and over the weekends. The study also undertook interviews with nurses, sub-chiefs and chiefs, using a structured questionnaire and an interview guide. Secondary data were sought through desk search techniques from existing sources and previous research studies (Mugenda, 2003): relevant literature available in the library, along with various documents, publications and reports, including journals and magazines, was reviewed for information on previous UHC studies. A pilot test involving 10 respondents was done in Ndalani location, Yatta sub-county, to evaluate the completeness, precision, accuracy and clarity of the questions to the respondents; Ndalani was chosen because it borders Masinga sub-county and has similar demographic characteristics. Of the 10 pilot respondents, 5 were household respondents, 2 were sub-chiefs, 1 was a chief and 2 were nurses. This was done to ensure the reliability of the data collection tools used in this study. The questionnaires were administered to the respondents, and the whole exercise was conducted within three weeks. --- FINDINGS The age of the respondents was considered an important element of the research because a good distribution of age brackets would be representative of the entire population. The findings were presented in Figure 1 below. --- Figure 1: Respondents' Age The age of the respondents, as indicated in the figure, was distributed with a majority, 62%, being between 20 and 30 years; another 20% were above the age of 30; and the remaining 18% were aged below 20 years. This was considered an appropriate, representative sample to generate the anticipated responses because the respondents were well distributed across all age brackets. Education levels play an important role insofar as they moderate socio-cultural practices and beliefs. Therefore, this study sought to establish the educational levels of the respondents in a bid to understand their views on the matter under investigation.
The results were presented in Figure 2 below: --- Figure 2: Education Levels A majority of the respondents, 65%, indicated having certificate-level qualifications, another 15% had diplomas, and a further 15% had degrees. A paltry 5% indicated having no academic qualification. This representation was considered largely literate and thus able to respond competently to the questions. Since the study targeted households, it was important to know the number of children in the households, because healthcare cuts across all ages in a household. The responses were presented in Figure 3 below; among the respondents were some who reported that they had no children in their households. This representation was considered appropriate as it covered the various household scenarios. --- Effects of Geographical Access on Healthcare Coverage As much as the government may have opened up opportunities for free medical healthcare, one pressing challenge is that of accessibility. To this end, the respondents were asked several questions aimed at determining the effects of geographical access on healthcare coverage. --- Figure 4: Distance to Health Facility from Respondent's Home From the responses, it was apparent that distance is a constraining factor in the access of healthcare: 60% of the responses indicated that the facility was far and another 30% stated that it was very far. Only 10% indicated that it was near, because they resided near Masinga town. Asked further about the specific distance, 10% indicated that it was 1 kilometre away, 60% stated it was up to three kilometres away, while another 30% indicated that it was more than 5 kilometres away from their homes. They were further asked to state how distance was a challenge in accessing healthcare services; they stated that it was expensive to travel to health facilities that were far away and that they might not have the requisite funds to do so. This discouraged them, and as such they only visited these facilities when their conditions got worse. They stated that to cope with this challenge, they were forced to hire motorbikes to take them to healthcare facilities faster, which was risky and at the same time more expensive. When asked to state what they thought should be done to solve the problem of distance, the respondents stated that the government ought to build more health facilities closer to them so that those who reside in far-flung areas are not disadvantaged by distance. Others stated that the government ought to deploy mobile clinics that rotate regularly through designated places in the various parts of the sub-county so that more people can be covered. These findings were in tandem with those of Anand's (2008) study in China, which established that geographical factors inhibited access to medical healthcare: those who resided in rural areas hardly frequented hospitals because of the logistical nightmare of transportation and other related costs. It was found that, because of this, there was a huge disparity in mortality rates between rural and urban areas, since in rural areas geographical factors played a central role in discouraging or inhibiting the uptake of medical services. An interview with a key respondent, a nurse, revealed the following: "As much as the UHC program is in progress, it is challenging for medical workers to reach out to people who are in far-flung areas... just as it is difficult for them to visit the health facilities, so it is also challenging for us to reach them."
(KIS 003). The relationship between distance and facility selection in urban settings was less clear, as women had more health service options within reasonable travel distance. Factors such as market or employment location may also influence health facility selection. In a dense urban setting in Senegal, women were willing to travel further to obtain family planning services from higher-quality facilities. In urban Sierra Leone, few women cited facility distance, compared to reputation or cost, as a primary consideration when selecting health providers for their children; that study took place in adjacent neighborhoods with multiple clinics and hospitals within a 1 km radius, potentially attenuating the role of distance. The relationship between distance and facility selection for delivery in urban settings is also complex. Health service availability and access do not ensure use: while urban women are more likely than rural women to deliver in facilities, the percentage of urban women delivering outside of a facility remains high. A Nigeria DHS study found that nearly half of all urban women sought health services outside of a facility. In an urban settlement in Lagos, Nigeria, women had multiple facility options within the city, but over half (51.4%) delivered outside of facilities. According to Winston et al. (2016), poverty further complicates the relationship between health service access and bypassing behaviors in urban environments. The urban poor often live on city outskirts and must travel further for health services. Inability to pay service fees or the cost of consumables poses a challenge for many women who would prefer to seek another provider if cost were not an issue. Further, insufficient funds to cover fees often result in women choosing to treat a sick child at home rather than visit a health facility. The urban poor living in informal settlements face the added challenge of insufficient quality health care options. Studies from two informal settlements in Nairobi, Kenya, found that women did not consider distance or transport a hindrance to visiting the nearest facility within the settlement. However, facilities within informal settlements are often private, making cost a challenge for obtaining care, and are often poorly equipped to provide adequate services. Focus groups from an informal settlement in Nairobi, Kenya, found that poor women recognize the safety of hospital services but prefer home delivery due to the challenges of traveling outside the settlement to reach the main road for transport, transport and facility-related costs, and perceived negative provider treatment. --- CONCLUSION AND RECOMMENDATION The purpose of this study was to investigate the social and cultural hindrances to access to Universal Health Coverage in Masinga Sub-County, Machakos County. The study sampled 350 respondents: the researcher grouped the respondents into strata of 50 per location, and selection was then done through a systematic random technique in which every fifth homestead was picked until the total of 50 per location was reached. The findings of the study informed the specific objectives and answered the study questions. The study found that socio-cultural factors can either positively or negatively influence the uptake of the Universal Health Care program. It was also established that, as much as the program is well intentioned, it still needs a lot of sensitization in order to foster full acceptance by the vast majority of people.
At the same time, determinants such as religion, traditional beliefs and geographical factors ought to be mitigated by the county and national governments so that more people can benefit from the program. The study recommended the construction of more health facilities and improved infrastructure within the sub-county to facilitate enrolment and easy access to UHC services. The administration should also be involved in door-to-door campaigns to sensitize and mobilise the community for enrollment in, and access to, UHC services; this can be done in collaboration with the government through the offices of the sub-chiefs and chiefs.
Introduction Patients with low socioeconomic status have been reported to have poorer outcomes than those with high socioeconomic status after several types of surgery. The influence of socioeconomic factors on weight loss after bariatric surgery remains unclear. The aim of the present study was to evaluate the association between socioeconomic factors and postoperative weight loss. Materials and methods This was a retrospective, nationwide cohort study with 5-year follow-up data for 13,275 patients operated with primary gastric bypass in Sweden between January 2007 and December 2012, linking data from the Scandinavian Obesity Surgery Registry, Statistics Sweden, the Swedish National Patient Register, and the Swedish Prescribed Drugs Register. The assessed socioeconomic variables were education, profession, disposable income, place of residence, marital status, financial aid, and heritage. The main outcome was weight loss 5 years after surgery, measured as percentage total weight loss (%TWL). Linear regression models, adjusted for age, preoperative body mass index (BMI), sex, and comorbid diseases, were constructed. Results The mean TWL 5 years after surgery was 28.3 ± 9.86%. In the adjusted model, first-generation immigrants (%TWL, B -2.4 [95% CI -2.9 to -1.9], p < 0.0001) lost significantly less weight than the mean, while residents in medium-sized (B 0.8 [95% CI 0.4-1.2], p = 0.0001) or small towns (B 0.8 [95% CI 0.4-1.2], p < 0.0001) lost significantly more weight. Conclusions All socioeconomic groups experienced improvements in weight after bariatric surgery. However, as first-generation immigrants and patients residing in larger towns (>200,000 inhabitants) tend to have inferior weight loss compared to other groups, increased support in the pre- and postoperative setting for these two groups could be of value. The remaining socioeconomic factors appear to have a weaker association with postoperative weight loss.
Introduction Gastric bypass surgery is a safe and effective treatment for morbid obesity [1,2]. Mean weight loss remains high even after long-term follow-up [3]. There are groups of patients, however, that experience a lesser degree of long-term weight loss [4]. While age, sex and obesity-related comorbidities, such as diabetes, have been reported to influence postoperative weight loss [5][6][7][8][9][10], the influence of socioeconomic factors remains unclear [11,12]. A low socioeconomic status has been reported to be associated with higher complication rates and poorer outcomes after surgical procedures [13][14][15]. Recent studies have shown that the same applies to gastric bypass surgery, with an increased risk for postoperative complications and less improvement in quality of life [16,17]. The recognition of preoperatively identifiable risk factors for inadequate postoperative weight loss may help in identifying groups of patients who require increased support in the pre- and postoperative setting. The aim of the present study was to identify socioeconomic factors associated with suboptimal weight loss 5 years after surgery. --- Methods The Scandinavian Obesity Surgery Register (SOReg) is a nationwide register for metabolic surgery, containing virtually all patients operated with metabolic surgery in Sweden since 2007 [18]. From the SOReg, all primary gastric bypass procedures from June 1, 2007 until December 31, 2012, were identified and assessed for inclusion in the study. Pre-established exclusion criteria were age <18 years, missing information on weight 5 years after surgery, and operation at a centre not routinely performing a 5-year follow-up. Based on personal identification numbers (unique to all Swedish citizens), data from SOReg were cross-linked to the Swedish National Patient Register, the Swedish Prescribed Drug Register, and Statistics Sweden. The Swedish National Patient Register covers inpatient and outpatient care with high validity for the variables included in the present study [19]. The Prescribed Drug Register covers all prescribed drugs in Sweden, based on ATC codes [20]. Baseline characteristics, perioperative data, and follow-up data were obtained from the SOReg, the Swedish National Patient Register and the Swedish Prescribed Drug Register. Patient-specific data on socioeconomic factors (education, profession, disposable income, residence, marital status, financial aid, and heritage) were obtained from Statistics Sweden, which reports quality-assured and validated personal data on socioeconomic factors (https://www.scb.se/en/About-us/main-activity/quality-work/statistics-sweden-has-quality-certification/). Educational level was divided into four groups based on the highest completed education at the time of surgery: primary education (≤9 years of schooling), secondary education (completed 11-12 years of schooling), higher education ≤3 years (completed college or university degree with ≤3 years of education), and higher education >3 years. Profession was reported in accordance with the International Standard Classification of Occupations from 1988 (ISCO-88) and further classified into the following subgroups (based on the respective ISCO-88 groups): Senior officials and management (group 1), Professionals and technicians (groups 2 and 3), Clerical support workers (group 4), Service and sales workers (group 5), Manual labour (groups 6-8), and Elementary occupation (group 9).
The place of residence was divided into three categories: large city (>200,000 inhabitants) and municipality near a large city; medium-sized town (≥50,000 inhabitants) and municipality near a medium-sized town; and smaller town or urban area (<50,000 inhabitants) and rural municipality, in accordance with the definition of the Swedish Association of Local Authorities and Regions. Disposable income (total taxable income minus taxes and other negative transfers) was divided into percentiles (lowest 20th, 20th to median, median to 80th, and highest 80th) based on the disposable income of all adults in Sweden during the year of surgery. Marital status, financial aid, and heritage were all based on accepted standards as described previously [16]. Comorbidity at baseline was defined as continuous treatment (pharmacological or with positive airway pressure) for sleep apnoea, hypertension, dyslipidaemia, dyspepsia/GERD, and depression. Diabetes was defined according to the American Diabetes Association [21]. Cardiovascular comorbidity was defined as a diagnosis of ischaemic heart disease, angina pectoris, arrhythmia, or heart failure at any time prior to surgery. --- Procedure The surgical technique for laparoscopic gastric bypass is highly standardized in Sweden, the majority being antecolic, antegastric Roux-en-Y gastric bypass with a small (<25 mL) gastric pouch, an alimentary limb of 100 cm, and a biliopancreatic limb of 50 cm [22]. In open cases, the gastric pouch and small bowel are handled similarly. --- Outcome The main outcome was weight loss 5 years after surgery, defined as the percentage of total weight loss (%TWL). Secondary outcomes were the percentage excess BMI loss (%EBMIL = 100 × [preoperative BMI - BMI 5 years after surgery]/[preoperative BMI - 25]) and the proportion of patients achieving satisfactory weight loss (defined as EBMIL ≥ 50%). --- Sensitivity analysis Risk factors for loss to follow-up were analyzed as a sensitivity analysis. A further analysis was performed including only patients operated on at centres with >75% follow-up rates for the same year of surgery. --- Statistics Categorical values were presented as numbers and percentages, continuous values as mean ± standard deviation for values with normal distribution, and median with interquartile range (IQR) for values without normal distribution. The association between patient-specific risk factors and weight loss was evaluated using linear regression analyses, with the regression coefficient (B) and 95% confidence interval as measures of association. The socioeconomic factors were further evaluated in a linear regression model adjusted for preoperative factors (age, BMI, sex, and comorbidity) known to influence weight loss. The association between patient-specific risk factors and the chance of achieving an EBMIL ≥ 50% was evaluated with logistic regression; all factors evaluated were also entered into a multivariable logistic regression model. The models were also tested for multicollinearity using linear regression, with a variance inflation factor (VIF) >5 considered to indicate an issue with multicollinearity. Due to the multiplicity of variables analyzed, the Bonferroni-Holm method was used to compensate for multiple comparisons [23]. IBM SPSS version 25 (IBM Corporation, Armonk, New York, USA) was used for all statistical analyses. --- Results During the inclusion period, 29,524 patients operated with a primary gastric bypass procedure were identified.
After exclusion of patients who died before the 5-year follow-up (n = 336), patients operated on at a centre not routinely performing a 5-year follow-up (n = 4326), and patients without a weight registered at the 5-year follow-up (n = 11,587), 13,275 patients remained in the study group (53.4% of patients with a potential 5-year follow-up). --- Operative data and weight results The mean age at surgery was 42.3 ± 11.1 years, the mean preoperative BMI was 42.5 ± 5.3 kg/m², 77.6% were women, and 49.8% suffered from an obesity-related comorbid condition. In all, 94.6% of the operations were managed with a laparoscopic approach (n = 12,561), 1.3% were converted to open surgery (n = 167), and 4.1% were primarily open procedures (n = 547). The mean operation time was 84 ± 38.9 min, with a median postoperative hospital stay of 2 days (IQR 2-3 days). At 1, 2, and 5 years after surgery, the mean BMI was reduced to 29.2 ± 4.6 kg/m², 28.8 ± 4.8 kg/m², and 30.4 ± 5.3 kg/m², respectively (p < 0.0001 for all, compared to baseline). At 5 years, the average reduction in BMI was 12.1 ± 4.8 BMI units, corresponding to an average %TWL of 28.3 ± 9.9% and a %EBMIL of 71.6 ± 26.1%. At that time point, satisfactory excess weight loss (≥50% EBMIL) was achieved in 10,572 patients (79.6%). --- Factors affecting postoperative weight loss at 5 years Lower %TWL was associated with an occupation other than service and sales work, higher disposable income, living in larger cities, receiving financial aid other than social benefits, and being a first-generation immigrant, as well as with older age, male gender, and obesity-related comorbidities. Higher %TWL was seen with higher BMI and single status (Table 1). An occupation other than service and sales work, clerical support work or management; receiving financial aid; being a first-generation immigrant; disposable incomes in the lowest 20th and highest 80th percentiles; older age; male gender; higher BMI; and obesity-related comorbidity (other than dyspepsia/GERD) were associated with a lower %EBMIL. After correction for multiple comparisons, disposable income and receiving social benefits no longer remained significant factors (Table 2). After adjustment for factors previously known to affect weight loss after bariatric surgery (age, BMI, sex, and obesity-related comorbidities), higher education, living in larger cities, and being a first-generation immigrant were independently associated with a lower %TWL and %EBMIL. An occupation as a professional or technician and receiving social benefits were independently associated with a lower %TWL, but not with a lower %EBMIL. After correction for multiple comparisons, place of residence and being a first-generation immigrant remained significant risk factors (Table 3). Receiving disability pension/early retirement, receiving social benefits, and being a first-generation immigrant were all independently associated with a lower chance of achieving a postoperative EBMIL ≥ 50%, while employment as a senior official or manager, higher income, and residence in small towns were associated with a higher chance (Table 4). Amongst first-generation immigrants, all non-Nordic subgroups had less weight loss in terms of both %TWL and %EBMIL; patients born outside Europe also had a lower chance of achieving a postoperative EBMIL ≥ 50% (Table 5). No multicollinearity issue was detected in either of the multivariable models.
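As a quick, hedged illustration of the outcome measures defined in the Methods, the following Python sketch applies the %TWL and %EBMIL formulas to the reported group means and adds a small implementation of the Holm-Bonferroni step-down adjustment mentioned in the Statistics section. The p-values in the example are placeholders, and applying the formulas to mean BMIs only approximates the reported patient-level means (28.3% and 71.6%), since a mean of ratios is not the ratio of means.

def pct_twl(bmi_pre, bmi_post):
    # Percentage total weight loss; with height fixed, the BMI ratio
    # equals the weight ratio.
    return 100 * (bmi_pre - bmi_post) / bmi_pre

def pct_ebmil(bmi_pre, bmi_post, reference=25.0):
    # Percentage excess BMI loss relative to a reference BMI of 25.
    return 100 * (bmi_pre - bmi_post) / (bmi_pre - reference)

def holm_bonferroni(pvals, alpha=0.05):
    # Holm's step-down procedure: sort p-values ascending and reject
    # hypotheses while the i-th smallest p-value <= alpha / (m - i).
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    reject = [False] * m
    for rank, i in enumerate(order):
        if pvals[i] <= alpha / (m - rank):
            reject[i] = True
        else:
            break
    return reject

# Reported group means: preoperative BMI 42.5, 5-year BMI 30.4.
print(round(pct_twl(42.5, 30.4), 1))    # ~28.5, close to the reported 28.3%
print(round(pct_ebmil(42.5, 30.4), 1))  # ~69.1, vs the reported 71.6%
print(holm_bonferroni([0.0001, 0.004, 0.03, 0.2]))  # placeholder p-values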
--- Sensitivity analysis Loss to follow-up was more common in patients with a low disposable income, those receiving social benefits, residents of medium-sized towns, unmarried patients, patients with a higher BMI, younger patients, males, and those without comorbidities (except for depression) (Supplementary Table 1). However, when including only patients from centres with a >75% follow-up rate, results very similar to those of the main analyses were seen (Supplementary Table 2). --- Discussion Among the socioeconomic variables studied, being a first-generation immigrant and living in a larger city were independently associated with less weight loss (measured by %TWL and %EBMIL). With these exceptions, socioeconomic factors had less impact on weight loss than other patient-specific factors, which is consistent with previous smaller studies reporting a lack of association [11,12]. First-generation immigrants experienced significantly less weight loss at 5 years than other groups of patients, and fewer patients in this group achieved satisfactory weight loss. After adjustment for other potential risk factors, the risk for less weight loss among patients born outside of the Nordic countries, and in particular outside of Europe, was equivalent to the effect of strong patient-demographic factors such as age, sex, and metabolic comorbidities. This group of patients may also experience higher complication rates [16] as well as less improvement in HRQoL [17]. Although there may be a difference in the response to bariatric surgery between ethnic groups [11,24], the inferior weight loss among first-generation immigrants could be related to difficulties in their ability to understand and apply preoperative information (health literacy), failure to appreciate the importance of patient involvement, lack of a supportive network, and simple misunderstandings due to language or cultural mismatch between care providers and patients [25]. Furthermore, inherited eating habits and a different food culture could be of importance. Finally, the motivation of patients to undergo bariatric surgery is known to differ [26,27]. Although immigrants from countries outside of Europe had a tendency towards less weight loss, first-generation immigrants from other parts of Europe also achieved less weight loss than patients born in Sweden. This finding suggests a psychosocial rather than a strictly biological explanation for these differences in outcome. Patients residing in larger cities had lost less weight 5 years after surgery than patients residing in small towns or municipalities. This group of patients has also been reported to be lost to follow-up more often and to report less improvement in health-related quality of life after bariatric surgery [17,28]. The explanation is likely to be multifactorial, including behavioural and sociopsychological factors not considered in the present study. Part of the explanation may lie in the chronic stress and higher cortisol levels associated with urban life [29], less time for exercise due to congestion and increased travelling times, and a higher availability of energy-dense food, often called "junk food". In the unadjusted analyses, receiving social benefits was associated with less weight loss, and patients receiving social benefits or disability pension/early retirement were less likely to achieve satisfactory weight loss.
Both groups are composed of individuals who often have a difficult economic situation and a higher proportion of physical or mental disabilities that influence their ability to follow diet and exercise recommendations postoperatively. Furthermore, these socioeconomically challenged patients often have a weaker social network and lower health literacy [30]. In fact, lower health literacy may contribute to poor outcomes from non-communicable disease among socioeconomically weaker groups [31]. Moreover, a weak association was seen between education, profession, and weight loss. Although this could be related to longer working hours and a poor work-life balance, the slightly lower weight loss among patients with higher education and professionals/technicians contradicts previous reports and is likely to be due to inequality of access to bariatric surgery rather than a direct association [32]. In a previous American study on US veterans, the average income in the neighbourhood of the patient was reported to influence outcome after bariatric surgery [33]. In our study, higher personal income was associated with a slightly greater EBMIL but lower TWL, signalling a potential confounding effect of BMI. Indeed, after correction for other relevant factors, including BMI, no correlation was seen. The association between average neighbourhood income and bariatric surgery outcome is more likely to be explained by other factors associated with residence in poorer neighbourhoods, such as health literacy, lack of a supportive network, and poor access to healthcare. Indeed, it is known that patients with higher incomes have better access to bariatric surgery [34]. In addition to socioeconomic factors, several patient-specific factors also influenced 5-year weight loss. Older age, male gender, and obesity-related comorbidities other than dyspepsia/GERD were all associated with lower postoperative weight loss as well as a reduced chance of achieving satisfactory weight loss (EBMIL ≥ 50%). Preoperative BMI had a strong impact on weight loss, but the impact was highly dependent on the outcome measure. When weight loss was measured as EBMIL, patients with a higher BMI at the time of surgery had less weight loss, in accordance with the results of several previous studies addressing EBMIL as an outcome measure [5,7,8,35]. On the other hand, patients with a higher preoperative BMI lost a greater proportion of their total weight, supporting the results of studies using total weight loss as an outcome measure [6]. Given the link between TWL and other outcomes after bariatric surgery [36], both differences in total weight and differences in excess BMI need to be considered when evaluating weight loss after bariatric surgery. The greater weight loss among younger patients and those without obesity-related comorbidities is in line with previous studies [5,7,9] and may be related to other factors, such as mobility, covariation with other risk factors (such as comorbid disease and age), established insulin resistance with higher circulating insulin levels, and the weight-gaining effects of medication. Clinical depression has also been reported to be associated with poorer follow-up attendance, which in turn is known to be associated with poorer long-term weight results [28,37]. Women had significantly greater weight loss and more often experienced satisfactory weight loss after surgery than men.
Although this result contradicts that of a recent Swiss study including 444 patients [6], it is supported by older studies [11]. Women also attend follow-up visits more often than men [28] and experience better improvement in health-related quality of life [17]. The better compliance and results among women may well be the result of different motivations for surgery. Furthermore, preoperative information, perioperative care, and long-term follow-up programmes are likely to be better adapted to the needs of women, since more women than men undergo bariatric surgery. Although several groups with postoperative weight loss below the mean were identified in this study, it is important to point out that all subgroups showed good weight loss results, confirming the benefits of bariatric surgery. The relatively poor weight loss results among certain subgroups warrant further research to gain more information about the specific reasons. Meanwhile, since several of the groups experiencing a poorer weight-related outcome also tended to miss follow-up visits [28], bariatric surgical centres should concentrate on improving follow-up attendance rates, motivating and supporting these patients, and adapting follow-up programmes to meet the requirements of individual patients. The results of the present study suggest that certain socioeconomic groups, in particular first-generation immigrants, are at particular risk for a poorer outcome and constitute a group likely to benefit from more intense perioperative support, as well as directed information adapted to cultural aspects and native language. --- Strengths and limitations The major strengths of this study lie in the large number of patients included and the high quality of the data. Furthermore, most previous studies have measured weight loss only as either TWL or EBMIL, but as is evident in the present study, both measures are highly dependent on preoperative BMI, though in different ways. EBMIL allows comparisons of patients with varying initial and excess weights, but has the disadvantage of underestimating successful weight loss in patients with very high BMIs. TWL may be a better option under these circumstances, but it may not always provide sufficient clinically relevant information to reflect weight loss success or failure [38]. The inclusion of both measures in this study is thus a strength. There are, however, limitations that must be acknowledged. There were many patients whose weight at the 5-year follow-up was not registered. Maintaining a high follow-up rate over a long period after bariatric surgery is a great challenge [39]. For the purposes of research and patient well-being, however, follow-up is important, since patients lost to follow-up are often those with inferior weight loss [28]. Even though a second analysis including only centres with high follow-up rates showed very similar results, the high loss to follow-up may still constitute a potential source of bias. The present study was also limited to socioeconomic and demographic definitions that were decided prior to starting the study. For this reason, cognitive and behavioural factors known to influence weight loss could not be evaluated [40,41]. --- Conclusion All socioeconomic groups experienced improvements in weight after bariatric surgery.
However, as first-generation immigrants and residents of larger cities tend to have inferior weight loss, increased support in the pre- and postoperative setting for these two groups could be of value. The remaining socioeconomic factors appear to have a weaker association with postoperative weight loss. --- Ethics The study was approved by the Regional Ethics Committee in Stockholm and followed the standards of the 1964 Helsinki Declaration and its later amendments.
A global report found that the quality of dying in Hong Kong lagged behind that of other high-income economies. This study aims to examine the service gaps by conducting a qualitative exploratory study from multiple stakeholders' perspectives. Purposive and snowball sampling strategies were used to maximize variation in the sample. We interviewed 131 participants, including patients, family members, health care providers, administrators, lawyers, and policy makers. The situation analysis helped identify the facilitators and barriers at the individual, organizational, and socio-cultural levels that affect service development. Findings showed that awareness of palliative and end-of-life care is growing, but the existing care is limited in terms of acceptability, coverage, variation in practices, continuity, and sustainability. A number of political, economic, socio-cultural, environmental, and legal factors were also found to hinder service development. Findings of this study demonstrated that the development of palliative and end-of-life care services involves a paradigm shift relating to society as a whole. The overarching theme is to formulate a government-led policy framework. Furthermore, a public health approach is advocated to create a supportive environment for service development.
Introduction Palliative and end-of-life care has been considered an ethical practice and an integral part of care for all types of chronic progressive diseases [1][2][3][4]. The literature has shown that the end-of-life care needs of patients with chronic diseases and frail older adults are poorly addressed in the current disease-centred biomedical model of care [5][6][7]. The progressive, deteriorating nature of these conditions calls for a new model of care that ameliorates symptoms and promotes dignity at the end of life, to counter the phenomenon of the medicalization of death [8]. The health care services in Hong Kong are renowned for their cost-effectiveness; the city's infant mortality rate is among the lowest and its life expectancy the highest in the world [9]. Although palliative care has been part of the health services for nearly four decades, the 2015 Quality of Death Report published by the Economist Intelligence Unit showed that the quality of end-of-life care in the city lagged behind that of many other high-income regions [10]. Out of 80 included regions, Hong Kong was ranked 22nd, lower than several other Asia-Pacific economies, including Taiwan (6th), Singapore (12th), Japan (14th), and South Korea (18th). The report emphasized that the ranking of Hong Kong was relatively low among the high-income regions, and that the poor rating was associated with low healthcare spending, a lack of policy evaluation, inadequate capacity to deliver palliative care services, and poor community engagement related to end-of-life care services [8]. These findings were alarming to local society because they appear contradictory to the reputable and highly advanced health services in the territory. In recent years, awareness has been growing of the need to improve palliative and end-of-life care in local society. With over 90% of deaths occurring in hospitals, some clinical departments, such as oncology, geriatrics, and emergency departments, are seeking solutions to improve the care for seriously ill patients and their family members. Collaboration between palliative care and non-palliative care services to address the needs of these patients has been underscored owing to the huge service demand. The Hospital Authority, a statutory body that governs public hospitals, has recently formulated a strategic service framework on palliative care services to guide service development [11]. Likewise, many non-government organizations and professional organizations have conducted various programmes to promote public education or community-based end-of-life care services at their own initiative, with the support of philanthropic bodies. Given these endeavours, overhauling the existing palliative and end-of-life care services is a timely initiative. This paper reports the barriers and challenges in the macro-environment that hinder the development of palliative and end-of-life care in Hong Kong. We used the PESTEL (Political, Economic, Socio-Cultural, Technological, Environmental, and Legal) framework for the situation analysis [12]. PESTEL is usually a precedent to other situation analyses, identifying specific aspects of the macro-environmental context that may exert an influence on the implementation of initiatives. --- Materials and Methods --- Study Design and Participants A qualitative exploratory approach, through face-to-face semi-structured interviews, was used to gain a full understanding, from the perspectives of multiple stakeholders, of current end-of-life care in Hong Kong.
The focus of the interviews was to explore participants' experience with end-of-life care and the perceived factors that affect its development. --- Sampling and Participants Purposive and snowball sampling strategies were used to maximize variation in the sample in terms of demographic characteristics and experience related to end-of-life care. Individual interviews were conducted with care recipients, including patients, their family members, and bereaved families; individual or focus group interviews were conducted with health care providers of different ranks and disciplines from various hospitals and organizations, and with people in any other relevant roles or disciplines involved in service development, depending on their availability. --- Procedures The team sent invitations via email or post to health professionals and administrators in different departments, hospitals and organizations, and to members of professional groups, to invite them to participate in the study. A poster about the study was displayed in public areas and on social media to invite people in different capacities to join. This process was done to ensure that a heterogeneous group with different voices could be included in the sample. The interviews were conducted by the first author (H.C.) or a trained research assistant at times and places convenient to the participants. Each individual interview lasted approximately 60 min, whereas each focus group interview lasted approximately 120 min. All participation was on a voluntary and anonymous basis. All participants completed a written consent form before they participated in the study. The study was conducted in accordance with the Declaration of Helsinki and was approved by the University Survey and Behavioural Research Ethics Committee. --- Data Analysis Interviews were audio-recorded with the participants' consent and transcribed verbatim to ensure an accurate record of the sharing. Data collection and analysis were conducted in an iterative process. Thematic analysis was used to identify the key issues that emerged from the qualitative data [13]. First, the investigators read through the transcripts to obtain an overall picture. Then, an initial list of codes was identified on the basis of the framework of the situation analysis. The codes were compared constantly for similarities and differences to identify ambiguities and overlaps. This step facilitated the identification of persistent patterns and differences within and across the codes. The codes were then collated into broader themes that captured the latent content in context. The process required repeated reviews and refinement. QSR NVivo version 11.0 (QSR International, Melbourne, Australia) was used to support data management. The trustworthiness of the study was enhanced by various strategies [14]. All the audio-recordings, transcripts, field notes, and data reconstruction products generated during the study were kept, thereby providing an audit trail. The research team independently reviewed the findings to prevent idiosyncratic data interpretations. Peer debriefing with other scholars and clinicians in the field was conducted to achieve external validation. Rich and thick descriptions of each subtheme were provided to enable transferability. --- Results A total of 131 participants were interviewed between March and December 2017.
They included 25 patients with different life-limiting conditions; 15 family members who were taking care of sick relatives or had taken care of dying relatives; 50 health professionals from various disciplines; 15 frontline care staff; 15 administrators at the management level in hospitals, care homes, and non-government organizations; and 11 participants with diverse backgrounds, such as journalists, academics, lawyers, and volunteers. Table 1 shows the demographic characteristics of the participants. The findings were categorized on the basis of the PESTEL framework, with a detailed description as follows. --- Political: Low Priority on the Policy Agenda Participants generally considered the development of palliative and end-of-life care services in the local community to be fragmented, with limited coverage. Many of them ascribed the underdeveloped services to the absence of a policy framework to direct overall service development. In recent years, some government departments have begun to recognize the importance of end-of-life care. However, many other social issues, such as housing and education, are more pressing, and end-of-life care remains a low priority on the policy agenda. Several participants experienced in this field used the term "bottleneck" as a metaphor for the current situation, in which the performance of palliative and end-of-life care remains limited by inadequate policy guidance, resulting in a lose-lose situation for the healthcare system, healthcare providers, and clients. --- Economic: Lack of Consistent Funding to Support Care Services Given the absence of government policies specifically addressing the development of palliative and end-of-life care services, government funding for this area has fluctuated. Participants recalled that funding for palliative care was among the first to be suspended during the economic recession of the past decade. Some participants also noted that public health care funding for end-of-life care is a low priority and is directed mainly at inpatient care in public hospitals. At present, relevant initiatives are mainly supported by charitable foundations. Because these funding bodies avoid supporting similar projects continuously, programmes had to be discontinued once their funding ended, even when the services were beneficial to society. The one-off funding mode also affects service sustainability and staff stability. By contrast, some participants were sceptical, suspecting that palliative and end-of-life care had been regarded as a means of decreasing healthcare utilization and costs. --- Socio-Cultural: Unfavourable Culture for Promoting Palliative and End-of-Life Care The socio-cultural factors are complicated and can be further divided into several layers at the societal, familial, and professional levels. --- Denial of Death Participants noted that death is a cultural taboo in the local community. People avoid talking about it for fear of attracting bad luck; raising the topic for discussion can therefore be considered ominous. Such avoidance has even diffused into daily life. For example, the number "four," which is pronounced like the word for death, is avoided in the block numbers of an estate or the floor numbers of a building. Thus, some healthcare providers were hesitant to discuss prognoses and end-of-life care with patients or their family members for fear of being seen as not treating patients actively. Public education about death and dying issues is inadequate.
When patients become critically ill, family members are generally emotionally unprepared because they had never considered that the patient's condition might deteriorate. Consequently, they may attribute the "sudden" health changes to malpractice. Complaints about poor communication related to end-of-life care are increasing. --- Myths about Filial Piety Some participants maintained that the traditional belief in filial piety also contributes to the death-denying culture. Many family members feel obliged to try every means to extend a patient's life, regardless of the cost and consequences. Some family members thought that they needed to do something at the very least, because refusing life-sustaining treatments is deemed as giving up on the patient. By contrast, some patients and older adults who understood the limitations of medicine wished for comfort care at the end of life. They stated that it was their family members who felt uncomfortable with end-of-life care discussions. --- Strong Belief in Medical Authority Society strongly believes that medical doctors are the authority in treatment decisions; thus, paternalism prevails. Such a belief is rooted in an old Chinese saying, "medical doctors possess parents' hearts." People therefore generally trust that medical doctors will make the best decisions for patients. Nevertheless, some patients and family members shared that they were confused by incongruent advice on goals of care from different health care providers. --- Technological: Less Alluring than Biomedical Sciences Compared with using biomedical sciences to treat diseases, palliative and end-of-life care, which highlights compassionate and humanistic care, seems less alluring as a career path for health professionals. --- Cure-Oriented Approach The rapid advancement of medicine further contributes to the death-denying culture. Over the years, much of the health care resources have been invested in top-notch medical devices and advanced technology. Patients and family members were eager to search for information on various treatments, such as targeted therapy, immunotherapy, organ transplantation, and complementary and alternative therapies, as if a cure should exist for every condition. Likewise, the mortality rate is a major key performance indicator of medical services, and a patient's death is treated as a failure of the healthcare team. A medical doctor stated that part of the monthly departmental meeting was devoted to reviewing what treatments had been attempted before a patient's death, regardless of the patient's condition. The focus was on ruling out the possibility of premature death due to negligence, with little attention to the quality of care in the dying process. --- Lack of Professional Training and Education Current palliative and end-of-life care service development and promotion have been highly reliant on committed individuals. In participants' experience, most health care providers did not recognize the value of end-of-life care services. This problem was apparent in specific units or specialties, such as surgical departments, intensive care units, cardiac care units, paediatric units, and emergency departments, even though patients with serious illnesses accounted for a high proportion of their clients. Some participants observed that awareness and knowledge of palliative care and end-of-life care issues among their colleagues in the healthcare field was no better than that of laypersons.
Participants noted that palliative and end-of-life care accounted for only a few hours of their pre-registration professional training, or was absent altogether for allied health professionals. Relevant on-the-job training was offered on a self-selection basis. In addition, training quotas were limited; some participants learnt through self-directed study or attended courses or overseas exchange programs at their own expense. Thus, training was limited in terms of availability and continuity. --- Under-Researched Areas Several end-of-life care programs were initiated in some hospitals and long-term care homes, but the practices varied. In participants' experience, although family objection may reduce participation rates in research, rejection by funding bodies and ethics committees was the lethal blow to these incubated ideas. Funding bodies questioned the value of research in this field because the programs were presumed to result, as a matter of course, in significant improvements in patients' outcomes. Moreover, research ethics committees, intending to protect patients who were mentally incompetent, had reservations about accepting proxy informed consent from family members for participation in research. Empirical research to evaluate the effects of these programs could therefore hardly be supported. --- Environmental: Undesirable Environment for Providing Holistic Care --- Cramped Environment in Hospitals The environment in public hospitals is generally cramped. Sometimes, the emergency department and the corridors of the wards are fully occupied by patients in beds. One doctor participant used the term "battlefield" to describe the hospital environment. Another participant, whose father died from a sudden and massive stroke, was shocked by his unexpected death, and she was even more upset that her father was sent to the mortuary immediately after he died. She and her family members did not have time to mourn for her father at the bedside. Other participants also mentioned that the process of transferring deceased patients to the mortuary was dehumanizing. The trolley for carrying deceased patients was made of stainless steel and thus looked cold and impersonal. Hospital workers were sometimes rude when placing dead bodies onto the trolley. The mortuaries in some public hospitals also made bereaved family members miserable; for example, some are located on a lower ground level next to garbage dumps or parking lots. --- Poorly Prepared for Home Care Some participants cautioned against the presumption that home would be a better place for end-of-life care than the ward environment. In most cases, patients' homes were also crowded and poorly equipped. They cited examples in which patients had to lie on the floor after being discharged from the hospital or could not bathe because of limited space. Some family members were anxious when patients were discharged because they lacked caregiving skills training or the equipment and facilities for taking care of patients. Family members commonly begged doctors to postpone the date of discharge, and patients were often readmitted to the hospital shortly after discharge. Some participants also noted that inconvenient transportation for sick people added to the burden. The challenges participants identified for dying at home mainly concerned the liability of death outside hospitals and the logistical problem of transferring a dead body using a small lift in a residential building.
The police generally need to investigate the causes of deaths that occur outside a hospital to rule out mistreatment or abuse. One participant who had experienced a relative dying at home found the police interrogation humiliating. Some participants worried that neighbours might become superstitious if a patient died at home, affecting property prices. A health professional participant involved in home care services believed that dying at home was a privilege in the local context: it was considered difficult and costly for a family to arrange for a medical doctor to visit the patient at home regularly so that medical attendance within 14 days before the patient's death could be established and an autopsy waived. --- Revolutionized Long-Term Care Some residential care homes for the elderly (RCHEs) had sought funding to build a single bedroom in which family members could accompany a dying resident. Although some participants appreciated this private, comfortable space, others stated that these rooms had been stigmatised by residents. Participants also worried that the process of a police investigation at an RCHE might make other residents, relatives, or neighbours sceptical about the quality of care. --- Legal --- Uncertainty about Advance Directives (AD) There was no specific legislation on ADs in Hong Kong. Some participants said that such legislation might not help, because overseas experience suggested that it does little to promote awareness and completion of ADs. Nevertheless, other participants urged for a specific law to protect healthcare teams who follow the document, as well as to protect patients' right to self-determination in treatment decision-making. Some health professional participants raised concerns about liability even though the legal status of ADs is currently recognised under the common law framework, whereas others were hesitant to follow an AD if the patient's family members had not reached a consensus on the treatment decision. Another concern was the difficulty of prognostication in chronic progressive diseases, which poses the challenge of determining the right time for the transition from curative care to end-of-life care. For example, reservations were expressed about withholding tube feeding from a person with advanced dementia even if he had indicated advance refusal. Some participants shared their unsuccessful experience of seeking a medical doctor to witness their signing of an AD. They wished to complete an AD before their condition became critical. However, the doctors were resistant because they believed that patients could not yet think through end-of-life care issues at that stage of the disease and might change their minds later on. There were occasions in which patients completed an AD with the support of private general practitioners, although the process was rather costly and the document was not respected by public hospitals. --- Limited Powers of Attorney and Guardians Some participants were confused by the current clinical practice of consulting family members on treatment decisions for patients, because hospital guidelines state clearly that these are medical decisions based on patients' best interests and that family members have no legal right regarding them. At present, the legal powers of guardians are limited to providing consent to medical and dental treatment in the interests of a mentally incapacitated person, according to the Mental Health Ordinance.
A guardian cannot refuse treatment on a patient's behalf if the medical team considers it to be in the patient's best interests. By contrast, the Powers of Attorney Ordinance only allows an attorney to manage financial matters, not medical care. Some participants pointed out that treatment decision-making at the end of life is sometimes value-laden; thus, whether a decision is in the patient's best interests is subject to individual interpretation. --- Absolute Duties of Ambulance Men Participants who worked as ambulance men for the Ambulance Command under the Fire Services Department worried that the treatment refusals stated in an AD contradict the rescue duties stipulated for them by law. They shared feelings of powerlessness when family members pleaded with them not to proceed with resuscitation procedures, because they are bound by their assigned duties. Even in situations where a doctor had signed the Do Not Attempt Cardiopulmonary Resuscitation (DNACPR) form to verify that a patient was terminally ill, they hesitated to follow this medical order because the form, a document developed by the Hospital Authority, appears to be an internal hospital document for personnel use only. --- Legal Concerns of Dying in Place According to the Coroners Ordinance (Cap. 504), deaths that occur outside hospital or nursing home settings must be reported to the Coroner. Participants were doubtful about the idea of dying in place, even when they knew it was the patient's wish, because these reportable deaths are subject to police investigation, autopsy, and post-mortem examination, and the body must be kept in a public mortuary for a period of time. Deaths that occur at home may be exempted from these investigations if the deceased had been diagnosed with a terminal illness or had been attended to by a medical doctor within 14 days before his or her death. However, one participant noted that the current understanding of the term "terminally ill" does not include chronic advanced or progressive diseases. --- Discussion Palliative and end-of-life care is gaining recognition as a basic right for all who have serious illnesses [1][2][3][4][5][8]. Nevertheless, the findings of this study illustrate that the development of palliative and end-of-life care is shaped by a range of macro-environmental factors at the societal level. As noted among the top-ranked regions in the Quality of Death Report, government support is the key foundation for robust development. For example, the UK government formulated the first national strategy for end-of-life care in 2008, and a national palliative and end-of-life care partnership was set up to enable the health and social care sectors to continue to improve the quality of care grounded in their experience and reflection [15]. In Australia, national consensus statements set out the guiding principles and essential elements of high-quality end-of-life care [16]. The Singapore government has formulated a national palliative care strategy to guide the entire service development [17]. The Irish experience suggests that the support of policymakers is crucial for maintaining a substantial government budget for service development to widen access [18].
After reviewing the national strategies and frameworks related to palliative care in four top-ranked countries, Morrison (2018) identified the following keys to success: involving policy makers in strategy planning to overcome challenges in the infrastructure, implementing a standardized monitoring system to uphold quality evidence-based care, and maintaining ongoing government investment to ensure sustainability [19]. The overarching theme, therefore, is to formulate a government-led policy framework that demonstrates government leadership in guiding service development. Moreover, the study findings revealed that palliative and end-of-life care development is intertwined with a range of economic, technological, environmental, and legal issues. This result is consistent with the Institute of Medicine report Dying in America: Improving Quality and Honoring Individual Preferences Near the End of Life, which found that various socio-cultural, economic, and health system factors hinder the quality of end-of-life care in the United States [20]. Owing to this inherent complexity, institutional policies and professional societies alone have had little effect on improving the provision of palliative and end-of-life care [19,21]. Collaboration across organizations and care sectors is imperative to ensure equitable access to quality palliative and end-of-life care and consistency in care practices [18]. This need for change is congruent with the global movement toward adopting a public health approach to advocate the development of end-of-life care [22][23][24]. This holistic approach requires partnerships among government departments, public and private organizations, and communities to create a supportive environment through the formulation of a policy framework, the revision of laws, public and professional education, and the re-engineering of services. One recent example in local society is that the Food and Health Bureau has just launched a public consultation on legislating ADs and making legislative amendments to facilitate the wish of dying in place [25]. This consultation underscores the importance of extending the focus beyond the medical and social care sectors in the development process, with an emphasis on empowering the entire community to support changes. Sallnow and associates (2016) summarized the effects of the public health approach to end-of-life care in three aspects: practical changes to the caring process, individual attitudes toward and understanding of death and dying issues, and capacity building in the wider community [24]. These impacts are imperative for mobilizing community resources for sustainable practice and universal access. The public health approach will therefore maximize the synergistic effects of the efforts of different parties in promoting palliative and end-of-life care. We acknowledge participation bias as a limitation of this study: people who were willing to participate were likely those interested in the topic. We attempted to address this problem through purposive sampling, inviting a wide range of people with different backgrounds and experiences of palliative and end-of-life care services. During the study process, contradictory or disconfirming evidence was sought to explore conflicting accounts, viewpoints, and rival explanations. This approach contributed to a comprehensive understanding of the phenomenon of interest, thereby avoiding premature closure.
--- Conclusions A number of initiatives for enhancing palliative and end-of-life care have proliferated in recent years in Hong Kong. However, the findings of this study show that the immediate experience of care for seriously ill patients mostly remains suboptimal, with limitations in coverage, acceptability, continuity, and sustainability. The situation analysis identified a number of political, economic, socio-cultural, technological, environmental, and legal factors that hinder the further development of these services. The government therefore urgently needs to formulate a policy framework to shape palliative and end-of-life care and to promote its development by implementing strategies based on a public health approach in a broad context. --- Conflicts of Interest: The authors declare no conflict of interest.
- Historically, reforms that have increased the duration of job-protected paid parental leave have improved women's economic outcomes.
- By targeting the period around childbirth, access to paid parental leave also appears to reduce rates of infant mortality, with breastfeeding representing one potential mechanism.
- The provision of more generous paid leave entitlements in countries that offer unpaid or short durations of paid leave could help families strike a balance between the competing demands of earning income and attending to personal and family well-being.
Context: Policies legislating paid leave from work for new parents, and to attend to individual and family illness, are common across Organisation for Economic Co-operation and Development (OECD) countries. However, there exists no comprehensive review of their potential impacts on economic, social, and health outcomes.
Methods: We conducted a systematic review of the peer-reviewed literature on paid leave and socioeconomic and health outcomes. We reviewed 5,538 abstracts and selected 85 published papers on the impact of parental leave policies, 22 papers on the impact of medical leave policies, and 2 papers that evaluated both types of policies. We synthesized the main findings through a narrative description; a meta-analysis was precluded by heterogeneity in policy attributes, policy changes, outcomes, and study designs.
Findings: We were able to draw several conclusions about the impact of parental leave policies. First, extensions in the duration of paid parental leave to between 6 and 12 months were accompanied by attendant increases in leave-taking and longer durations of leave. Second, there was little evidence that extending the duration of paid leave had negative employment or economic consequences. Third, unpaid leave does not appear to confer the same benefits as paid leave. Fourth, from a population health perspective, increases in paid parental leave were consistently associated with better infant and child health, particularly in terms of lower mortality rates. Fifth, paid paternal leave policies of adequate length and generosity have induced fathers to take additional time off from work following the birth of a child. How medical leave policies for personal or family illness influence health has not been widely studied.
Conclusions: There is substantial quasi-experimental evidence to support expansions in the duration of job-protected paid parental leave as an instrument for supporting women's labor force participation, safeguarding women's incomes and earnings, and improving child survival. This has implications, in particular, for countries that offer shorter durations of job-protected paid leave or lack a national paid leave entitlement altogether.
Parental and medical leave policies allow employees to take time off work for pregnancy, birth, and adoption, for personal illness, or to care for sick children, parents, and spouses. By 2013, all Organisation for Economic Co-operation and Development (OECD) countries other than the United States offered some form of national paid leave policy. Over the past 2 decades, there have been hundreds of changes to legislation governing paid leave from work. Although recent trends are toward more generous benefits and government-mandated leave, there is still substantial variation in allowances and benefits, both cross-nationally and subnationally. This variation can contribute to paid leave policies having different effects with respect to the various economic and labor, social, and health outcomes that they plausibly influence. Paid leave might also affect sociodemographic groups differently or vary across contexts depending on the public policy environment. Because decisions to implement or amend paid leave policies should adopt a holistic view that considers the best available evidence, the objective of this review is to evaluate the empirical literature concerning the impact of leave policies, including those that regulate parental and medical leave, on economic and labor, social, and health outcomes in OECD countries. For the purposes of this review, parental leave policies refer to leave associated with pregnancy and birth, while medical leave policies refer to leave for personal illness or to care for sick children, parents, and spouses. Paid parental and medical leave policies, although they might be adopted for a variety of reasons, are typically designed to help reconcile work and family responsibilities and to simultaneously improve both economic and labor market outcomes and health outcomes. Access to paid leave might promote entry into the labor force by caregivers and those with chronic conditions, by allowing workers to take a leave of absence from work without necessarily sacrificing their tenure and career prospects. When these policies provide job protection, they might increase job retention and facilitate the return to work after a period of leave, thereby contributing to household income and savings. However, the impact of paid leave might vary based on the length of leave provided, among other policy attributes, and there may be countervailing effects to consider. For example, employment and earnings might decrease with lengthy and recurrent employment interruptions afforded by more generous paid leave policies. Moreover, employers might be biased in their hiring practices, against women of childbearing age in particular, who they presume are at an increased likelihood of taking leave; these discriminatory hiring practices might concentrate women in lower-paying or part-time positions, contributing to wage and benefit gaps when comparing women to men and mothers to nonmothers. 1 The uptake and impact of paid leave might also vary depending on macro-level factors, including economic and labor market conditions. From a population health perspective, paid leave policies have the potential to influence health over the life course. 2 Paid leave might facilitate preventive care. For example, parental leave might promote immunizations for and breastfeeding of infants. Similarly, medical leave policies might facilitate caring for family members with chronic conditions, as well as use of health services for those with covered health conditions.
By reducing conflict between work and family responsibilities, job-protected paid leave might reduce stress related to pregnancy, personal illness, and the demands of caregiving for family members. From a social standpoint, when paid parental and medical leave policies are universal and designed for equal access, they can reduce inequalities in uptake, with potentially beneficial effects for families and children. The availability of paid leave might disproportionately benefit socially disadvantaged groups that lack the resources to take time off work. 3 Paternal leave policies in particular help to promote gender equity by encouraging new fathers to participate in child-rearing and by facilitating mothers' participation in the labor market; 4 nonetheless, fathers may be less likely to utilize paternal leave if they experience workplace stigma associated with asking for leave. 5 Similarly, access to longer-term sick or medical leave might ease the onus of caregiving that is disproportionately placed on women and reduce gender inequalities in labor force participation. To the best of our knowledge, the effects of parental and medical leave policies have not been systematically reviewed. While there is a large body of literature on economic outcomes, 1 and a smaller one on social and health outcomes, 2,6,7 there exists no comprehensive review that describes and synthesizes the interdisciplinary evidence concerning the impact of parental and medical leave policies on socioeconomic and health outcomes. This systematic review aims to describe the potential impact of parental and medical leave policies across economic, social, and health outcomes, with the intention of informing further research that places the benefits of paid leave policies in relation to their costs. --- Methods We conducted a systematic review and narrative synthesis of the literature. We searched CINAHL, PsychInfo, Web of Science, and Medline databases for papers investigating the effects of parental and sick leave policies. The keyword searches included the terms "maternity leave," "maternal leave," "paternity leave," "paternal leave," "parental leave," "medical leave," "personal leave," "family leave," "paid leave," "child care leave," "sick leave," "sick pay," "sickness benefits," "sickness insurance," and "FMLA," combined with terms restricting the searches to articles considering the impact of these policies (ie, "association," "impact," "effect," "correlation," "increase," "decrease," "reduction," "outcome") rather than descriptions of the policies themselves. We did not include any search terms that would restrict the outcome, since we were looking for a broad range of economic, health, and social outcomes. We applied several exclusion criteria. First, during the abstract review we excluded papers with outcomes that did not fall into the 3 outcome categories of interest (economic, health, and social outcomes), such as fertility patterns. Second, with respect to the policy exposures, we excluded studies that explicitly examined access to short-term, often employer-funded sick days, instead focusing on the impact of longer-term sick and medical leave policies (hereafter called medical leave policies) that permit longer-term sickness-related absences from work to address personal or family illness. 8 Third, we excluded papers that examined individuals' access to leave through an employer, because nonlegislated workplace or employer policies are not as generalizable as aggregate state- or country-level policies.
Fourth, we excluded studies that examined individuals' utilization of leave, rather than access or reforms to state- or national-level leave policies, because there is a greater risk of confounding of individual-level leave-taking by socioeconomic status and other characteristics. Finally, we excluded papers that described policies outside of OECD countries, as well as articles without original research (review articles) and non-peer-reviewed, gray literature (Table 1). A title-abstract review was followed by a full-text review to decide on the final included articles. Each title and abstract was reviewed by 1 reviewer (MD, DJ, or JL). We assessed the reliability of the title-abstract search by randomly assigning 150 abstracts to 2 reviewers and assessing the percent agreement concerning which papers should proceed to full-text review, which was determined to be very high (95%). We retrieved and reviewed full-text articles that cleared the title-abstract review. When the result of the full-text review was equivocal, articles were discussed among all authors before a final decision was made to include or exclude the paper. One reviewer extracted information on the years of the study, study context, study design, eligibility criteria, data source and sample, the type of outcome, and policy details from each included paper. Evaluation designs were classified as multivariable regression adjustment, pre-post and interrupted time series (ITS) designs, difference-in-differences (DD) and fixed-effects regression approaches, regression discontinuity (RD), and other model-based analyses. Additionally, we extracted information on the methods used for statistical analysis and qualitative conclusions. We synthesized the main findings through a narrative description; heterogeneity in policy attributes, policy changes, outcomes, and study designs precluded quantitative meta-analysis of the study results. We assessed the methodological strengths, limitations, and potential for biases in the literature, which was based mainly on the design of the evaluation. In general, quasi-experimental studies with a clear identification strategy were considered to be of higher quality than standard regression adjustment approaches that lacked a strategy for addressing sources of unmeasured confounding. --- Results The review process is summarized in Figure 1. 9 This search retrieved 12,106 articles. An additional 7 studies were included from the references of full-text papers, informal web searches (Google Scholar), and reference libraries of authors and colleagues. After removing duplicates we retained 5,538 articles. The abstract and title screening excluded 5,254 articles, leaving 284 articles for full-text screening. After the full-text review, we included 85 parental leave and 22 medical leave studies, as well as 2 papers that investigated both types of policies, which are classified with the parental leave policies for simplicity. The most common reason for excluding a study was related to the measurement of the exposure, with the review including only studies measuring the impact of a leave policy rather than individual-level leave-taking. Results are presented separately for studies concerning parental leave (Online Appendix Table 1) and studies concerning medical leave for personal or family illness (Online Appendix Table 2).
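The screening reliability check described above is a simple percent-agreement calculation. As a minimal illustrative sketch, the decision vectors below are hypothetical stand-ins for the two reviewers' actual include/exclude judgments on the 150 double-screened abstracts:

```python
# Percent agreement between two independent screeners on the same abstracts.
# 1 = proceed to full-text review, 0 = exclude. Decisions are hypothetical.
reviewer_a = [1, 0, 0, 1, 0, 1] * 25  # 150 screening decisions
reviewer_b = [1, 0, 0, 1, 1, 1] * 25

agreements = sum(a == b for a, b in zip(reviewer_a, reviewer_b))
percent_agreement = 100 * agreements / len(reviewer_a)
print(f"{percent_agreement:.0f}% agreement")  # 83% here; the review reported 95%
```

Note that raw percent agreement does not correct for chance agreement; a statistic such as Cohen's kappa would, but the review reports the uncorrected figure.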
--- Policy Characteristics For the purposes of this review, we have grouped policies affecting leave associated with pregnancy and birth as parental leave policies and policies affecting leave associated with personal or family illness as medical leave policies. These definitions are more in line with the policy framework in European countries, where employees' rights to leave for birth, for personal sickness, and for family sickness are distinct. In the United States the right to take leave, whether paid or unpaid, and for any purpose, often stems from the same policy; hence leave taken for one purpose reduces time available for other purposes. Parental Leave Policies. The paid leave policies available in OECD countries vary in length, benefits, and eligibility, although globally the trend is toward increasing generosity over time. Most of the OECD countries comply with the International Labour Organization's standard of providing at least 14 weeks of paid leave and a wage replacement rate of at least two-thirds of the wage. Aside from the United States, Australia was the only other outlier among OECD countries in terms of whether paid leave was nationally mandated. 10 Before the country enacted a paid parental leave scheme guaranteeing 18 weeks of leave at the national minimum wage rate in 2011, it had taken an approach similar to the United States, based on enterprise-level bargaining for leave. 11 Today, the United States is the only OECD country lacking a national paid parental leave policy. In lieu of a national paid leave benefit, some US businesses are required by the federal Family Medical Leave Act (FMLA) of 1993 to provide at least 12 weeks of unpaid leave to workers depending on eligibility criteria. Since the FMLA was enacted, California (2004), New Jersey (2009), and Rhode Island (2014) have passed legislation and enacted policies providing paid leave for durations of 4-6 weeks at wage replacement rates of 55%-60%. 2,12,13 New York recently joined this group in 2018, although its policy will not take full effect until 2021, when the duration of paid leave will be extended to 12 weeks. Paid leave was enacted by Washington, DC, and Washington state in 2017, although these policies have not yet gone into effect. Hawaii and Puerto Rico have specific provisions as part of the temporary disability insurance scheme allowing for 6-8 weeks of maternity leave as well. 2 Similar to the United States, the Canadian government legislates leave policies nationally, with provinces enacting their own laws. Fifty weeks of maternity/parental leave paid at 55% of average insured earnings are available in all provinces other than Quebec, but the eligibility criteria for job-protected leave vary substantially across provinces according to the minimum weeks of continued employment required, among other characteristics. In 2006 Quebec opted out of the federal employment insurance program and established the Quebec Parental Insurance Plan, the most generous program in the country, which provides 50 weeks of maternity/parental leave with benefits covering up to 70% of wages. 14 In other OECD countries, paid leave entitlements as of 2016 vary from 12 weeks (Mexico) to 3 years or more (Czech Republic, Finland, Hungary, and Slovakia), with a wage replacement rate greater than or equal to 85% for at least part of the leave.
15 Most policies in OECD countries require employees to have demonstrated some labor force attachment prior to taking leave, such as working a certain number of hours or some length of time before they become eligible, although there are exceptions. In the United States, only about half of employees qualify for the 12 weeks of unpaid leave through the FMLA because people working for smaller employers and those who have worked less than 1,250 hours and/or 12 months are not covered. 16 The United States is the only OECD country that has an employer size requirement for leave eligibility. However, these criteria have also been modified by some US state laws, by either extending the duration of unpaid leave or easing the eligibility thresholds. 17 Other labor force attachment criteria are more flexible; Austria, Finland, Germany, Italy, the Netherlands, Poland, and Slovenia do not have tenure requirements. Beyond maternity leave, parental leave policies can consist of family entitlements used flexibly by either parent, individual entitlements that can be transferred between parents, or nontransferable individual entitlements. In Canada, leave is a family entitlement, whereas in the United States the leave mandate is an individual 12-week entitlement unless spouses work for the same company, in which case the amount of leave that can be used to take care of a newborn child becomes limited to a combined, 12-week family entitlement. At least a portion of most leave entitlements is transferable between parents, but some countries provide nontransferable entitlements to fathers, which we refer to as paternal leave. Paid paternal leave shorter than 2 weeks following birth is commonly available among OECD countries, but evaluations of these policies are scarce, with exceptions including an evaluation of Spain's adoption of 13 days of paternal leave in 2007. 18 It may be difficult to detect the impact of short periods of leave available for fathers because the short duration could likely be made up regardless of the policy, by using vacation days or other forms of excused absence, or because it is of insufficient length to have impact. Conversely, a few countries offer longer paternal leave and other incentives to encourage fathers to participate in child care, such as bonus time off or obligatory paternal leave policies. Portugal, Finland, Iceland, Sweden, and Norway offer paternal leaves ranging from 4 weeks to 3 months. In 1993, Norway was the first country to specifically allocate a 4-week leave for fathers, 19 and Sweden similarly implemented a "daddy month" in 1995. 20 More recently, in 2007, Germany adopted a policy where the 12 months of paid parental leave available is extended by 2 months if fathers use at least 2 months of the entire leave. 21 Multiple evaluations of the impact of such longer paternal leaves and incentivizing policies exist. Medical Leave Policies. Across the OECD, countries have changed many aspects of their paid medical leave policies for personal or family illness, including wage replacement rates, durations of leave, and eligibility criteria. However, in comparison to parental leave policy changes, these policies have been evaluated by only a small number of studies, which in most cases examined the impact of restricting benefits, sometimes explicitly to curb rising costs. 
For example, Sweden amended its sick leave legislation multiple times between 1992 and 2008, including the introduction of a sick pay period paid by the employer (1992), an unpaid qualifying day (1993), modified compensation levels in several years, and assessments of working capacity (2008). [22][23][24] Sweden also reformed its sickness absence policy in 1995 in order to mitigate rising costs by excluding nonmedical criteria for sick listing, requiring more information on certificates, and requiring that a consultant physician examine all certificates for episodes of more than 28 days. 25 Similarly, in 2009 Estonia cut sickness benefits by reducing compensation levels from 80% to 70% and having payments start on the fourth day instead of the second. 26,27 Italy also reduced the wage replacement rate for sick leave compensation in the public sector. 28 Not all reforms were intended to limit medical leave benefits. For example, to address poor labor market attachment among youth and early exits from the labor force, Finland introduced a partial sickness benefit that allowed workers to combine part-time sick leave with part-time work. 29,30 In Germany, statutory short-term sick pay for private sector employees was increased from 80% to 100% of forgone gross wages in 1999, after it was reduced from 100% to 80% in the first 6 weeks in 1996. 31,32 To prevent discrimination against young women, Norway removed the employer pay liability for short-term (first 16 days) sick leaves for pregnancy-related absences. 33 Medical leave benefits in the United States are relatively modest vis-à-vis other OECD countries. Only 5 states-California, Connecticut, Massachusetts, Oregon, and Vermont-and Washington, DC, currently mandate employer-funded, short-term paid sick leave that can be used for personal or family sickness, ranging from 24 to 40 hours of leave available annually. Five states-California, Hawaii, New Jersey, New York, and Rhode Island-provide longer paid medical leave through a state family medical leave policy or temporary disability insurance program that can be used for 1 or more of the following purposes: personal sickness, the sickness of a family member, or bonding with a newborn child. In California workers were allowed to take paid time off to care for an ill family member as part of the state's Paid Family Leave Insurance program, which went into effect in 2004 and provided up to 6 weeks of paid leave with a 55% wage replacement for employees qualifying for state disability insurance; 34 however, it was not until 2015 that the state included provisions in its labor law for paid absences for employees' personal sickness. --- Impacts of Parental Leave Policies In the following section we consider evidence on the impact of unpaid leave policies, paid maternity and parental leave policies, and paternal leave policies. Unpaid Leave. With only a handful of states providing any form of paid leave to new parents, the United States has been the primary setting for investigating whether federally mandated unpaid leave following childbirth is associated with better economic, socioeconomic, and health outcomes. Our review identified several studies that evaluated the impact of the federal FMLA, which provides 12 weeks of unpaid leave, on various labor market and health outcomes.
[35][36][37][38][39][40][41][42] Although the policy may encourage leave-taking 41,43 and return to work with the same employer, 42 most studies did not suggest that the provision of unpaid leave was accompanied by substantial changes in labor market outcomes. For example, using a DD design applied to data from the National Longitudinal Survey of Youth, in 2003 Baum concluded that the FMLA did not affect employment or wages. 35 These results corroborate earlier null findings by Waldfogel, 41 presumably because the leave is unpaid and short in duration, giving new mothers less control over their decisions about whether and when to return to work. 35 Using a similar design, a 2010 study by Goodpaster indicated that the introduction of the FMLA may have increased the probability that women left the labor force 1 year after giving birth, 36 whereas a 2012 study by Schott suggested women were more likely to return to work on a part-time basis. 39 Work from Han and Waldfogel in 2003 showed that the FMLA was associated with a small impact on leave-taking for women, particularly among college-educated and married mothers, and had no impact for men. 37 One DD study assessed the impact on a variety of outcomes of state-level reforms that expanded the coverage or duration of unpaid leave over and above that provided by the FMLA; results showed that these laws decreased the probability that mothers were working in the short term (ie, 2 to 4 months after birth), but increased employment in the longer term (ie, at 9 months and 4 years after birth). There was little evidence, however, for any effect on the mode of child care at 4 years, breastfeeding, maternal depression, maternal parenting scores, household income, cognitive outcomes, or behavioral outcomes. 44 With respect to the impact of the FMLA on population health and health services, a recent study showed that US state laws providing relatively short periods of unpaid leave of 13 weeks or less were associated with a lower probability of cesarean deliveries compared to states without maternity leave laws in the pre-FMLA period. This is perhaps because these laws eliminated "bonus" time routinely given to mothers delivering by cesarean, 45 although they might also have reduced the risk of cesarean delivery by making leave prior to delivery possible. For birth outcomes, one study showed that the FMLA was associated with minor improvements in birth weight and the prematurity rate, as well as a decrease in the infant mortality rate, measured in the first year of life, among college-educated white mothers. 38 Collectively, this research suggests that unpaid leave provided through the FMLA had little, or perhaps even negative, effects on women's labor force participation, employment, and wages, contrary to its intended influence on preserving job tenure. Additionally, the few studies that showed benefits of the program, either in terms of economic or health outcomes, indicated that improvements were concentrated among socioeconomically advantaged groups, leading some authors to conclude that "unpaid maternity leave policy may actually increase disparities because it only benefits those mothers who can afford to take it." 38 Few studies have evaluated the implications of unpaid leave policies outside of the United States, where they are less common. Cross-national work suggests there was no impact on infant and child health of extending unpaid or non-job-protected leave.
46,47 For example, in a cross-national study using aggregate data from 16 OECD countries spanning the period from 1969 to 1994, unpaid leave was not associated with reductions in rates of infant mortality, 48 a conclusion corroborated by similar analyses of more recent data. 46 An evaluation of a 1992 policy that increased the length of low-paid or unpaid parental leave in West Germany found that the reform decreased the time that fathers spent with their children, by about a half hour on a weekday, 18 to 30 months after childbirth. 49 A Spanish study examined the interaction between a national policy allowing parents to take unpaid leave from work to care for children up to 3 years of age and complementary regional policies with different flat-rate benefits, showing usage rates were higher in the regions that provided the highest economic incentive to use parental leave. 50 Paid Maternity and Parental Leave Protections and Economic Outcomes. The majority of studies included in our review examined the impact of paid parental leave on labor and economic outcomes, including employment decisions in the short and longer term after childbirth, overall participation in the labor force, and wages and earnings. Starting with the question of whether more generous paid leave policies induce mothers to take or extend their time away from work, research consistently showed that expansions in the duration of paid leave were accompanied by attendant increases in leave-taking and longer durations of leave. 3,10,12,14,[51][52][53][54][55] Several studies examined the immediate economic targets of paid leave policies, including employment in the short term, typically among women who were employed prior to childbirth. Because new mothers appear to avail themselves of paid maternity and parental leave benefits, a consequence of longer paid leave entitlements is that women may be less likely to be employed and at work immediately before and in the short term after childbirth and are more likely to be providing direct care. 14,[56][57][58] Access to longer periods of paid leave might help to forestall early returns to work, with research indicating that the timing of a mother's return to work peaks around the time that paid leave benefits expire. [59][60][61] A comparison of policies in Hungary, where the parental leave mandate was universal, and Poland, where it was means tested, suggests that providing universal coverage might reduce maternal employment in the short term, presumably by increasing eligibility and uptake of program benefits. 62 These findings are substantiated by a study examining the impact of replacing a means-tested child-rearing benefit program with a universal parental leave benefit in Germany that increased payment amounts and decreased the pay period; the 2007 reform increased household income among those with an infant and expedited women's return to work, particularly among mothers with lower prebirth incomes. 57,[63][64][65] Several studies examined the impact of extending paid leave on women's labor force participation and employment-related outcomes in the medium to long term. Cross-national analyses showed that increasing the duration and benefit level provided by paid leave policies increased rates of women's labor force participation, [66][67][68] although it is unclear whether this resulted from the reforms prompting labor force entry or, conversely, inhibiting labor force exit. 
For example, a DD analysis applied to aggregate data from 9 OECD countries, where the mean duration increased from 10 to 33 weeks between 1969 and 1993, showed that an increase in the duration of paid leave was associated with an increase in the female employment-to-population ratio. 69 Examining the effect of country-specific reforms on employment outcomes can help to distinguish the relevance of individual paid leave policy components, including the duration of job protection, the duration of leave, and the wage replacement rate. For example, a study examining 3 policy reforms occurring in Austria between 1990 and 2000 suggested that the time when women returned to work after childbirth was most responsive to changes in the duration of job-protected paid leave; employment decisions seemed less sensitive to reforms that changed either the duration of cash benefits or the period of job protection, but not both. 70 A policy that held the duration of leave constant but increased the wage replacement rate available to new mothers in Japan, from 25% to 40%, did not affect job continuity. 71 Waldfogel and colleagues looked specifically at job retention, with analyses using data from the United States, Britain, and Japan indicating that maternity leave eligibility increased the probability that women returned to work with the same employer. 72,73 A German reform, though it decreased employment 10 months after childbirth, when mothers were still eligible for paid leave, was associated with increased employment rates a year and a half after birth and had no impact more than 2 years after birth. 74 Although few studies examined the potentially nonlinear effect of parental leave generosity on women's labor force participation, 3 cross-national analyses showed that more generous parental leave policies increased the probability of working, but with diminishing returns to longer durations of leave. 68,69,75 Using aggregate data from 16 European countries for the period between 1970 and 2010, Akgunduz and Plantenga showed that the duration of weighted leave (the combined length of maternal and parental leave, weighted by the wage replacement rate) had a positive impact on women's labor force participation for durations as high as 45 weeks, although the optimal benefit was achieved at 28 weeks of weighted leave for mothers between 25 and 34 years old. 68 Concerning the employment impact of US reforms, an evaluation of the 1978 Pregnancy Discrimination Act, which extended temporary disability insurance programs by providing wage replacement benefits to pregnant women directly before and after birth, showed that the policy increased the labor force participation rate of pregnant women, women with children under the age of 1, and women with children ages 1-6 years. 76 The effects of introducing paid family leave in California in 2004 were mixed, with conflicting findings from DD analyses that compared the change in outcomes before and after the reform in California relative to other control states. An analysis of employed men and women from the National Longitudinal Survey of Youth suggested that the reform was associated with an increase in employment among women 12 months after childbirth, probably because the policy increased job continuity. 12 In 2013 Rossin-Slater and colleagues, using data from the Current Population Survey, showed that the reform did not substantially impact employment. 3
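The DD logic invoked throughout this section can be made concrete with a small worked example. The following is a minimal, self-contained sketch rather than any cited study's actual specification: the data, variable names, and effect size are all hypothetical, and real evaluations use survey microdata with rich controls and standard errors clustered at the state level.

```python
# Minimal difference-in-differences (DD) sketch. A reform state ("treated")
# is compared with control states before and after a policy change; the
# coefficient on treated x post is the DD estimate of the policy effect,
# valid under the parallel-trends assumption. All values are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 4000
df = pd.DataFrame({
    "treated": rng.integers(0, 2, n),  # 1 = reform state
    "post": rng.integers(0, 2, n),     # 1 = observed after the reform
})
# Simulated outcome (e.g., employment 12 months after childbirth) with a
# built-in true policy effect of +0.05.
df["employed"] = (
    0.55
    + 0.02 * df["treated"]                # fixed difference between states
    + 0.01 * df["post"]                   # common time trend
    + 0.05 * df["treated"] * df["post"]   # true policy effect
    + rng.normal(0, 0.10, n)
)

model = smf.ols("employed ~ treated * post", data=df).fit(cov_type="HC1")
print(model.params["treated:post"])  # DD estimate; approximately 0.05
```

Cross-national and fixed-effects variants generalize this idea to many countries and reform dates, but the identifying assumption, that treated and control units would have trended in parallel absent the reform, is the same.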
Another study, however, also making use of the Current Population Survey and published in 2015, concluded that California's paid leave policy increased the labor force participation rate, but also the unemployment rate for young women, potentially because of discrimination in hiring. 77 Evidence concerning the impacts of paid leave policies on wages and earnings is mixed. Whether reforms have negative, null, or positive effects might depend on the structure of the program and the point at which wages and earnings were measured. Since many paid leave policies do not fully replace wages, policies that stimulate leave-taking might decrease earnings in the short term, with cross-national work suggesting that longer periods of leave (approximately 9 months) are associated with a reduction in earnings. 69 In Austria, for example, an evaluation of 1990 and 1996 federal policy reforms that changed the length of paid maternity leave suggested an inverse relation between the length of paid maternity leave and earnings in the short term. 78 Programs that provide job flexibility by facilitating part-time returns to work might also be associated with lower earnings. For example, the introduction of a part-time parental leave program in France was associated with a decrease in wages 1-2 years after childbirth, but no decrease in employment, although the wage results were not consistent across model specifications with varying controls. 79 It is important, however, to also consider the medium- and longer-term implications of paid leave policies, which could increase wages and earnings by preserving job tenure. A July 2000 Austrian reform that extended to 30 months the duration that women could receive cash benefits after birth did not negatively impact the wages women received from their first job after birth. Additionally, a study evaluating the 1984 national policy change that increased the length of paid parental leave from 14 to 20 weeks in Denmark suggested that the reform was associated with an increase in maternal income 5 years after childbirth. 80 An evaluation of California's 2004 paid leave reform suggested that it increased wage income 1 to 3 years after birth. 3 Additionally, maternity leave eligibility was associated with higher wages approximately 2 years after childbirth in the United States and Britain, although these differences were eliminated by 5 to 8 years after childbirth, suggesting that it took several years for women lacking access to paid leave to make up for lost earnings. 72 Other evidence suggests that the longer-term impact of extending paid leave on earnings is modest or null. 12,52,72,78 A couple of studies examined the association between paid parental leave and measures of poverty across OECD countries, with results suggesting that more generous policies were associated with lower poverty, particularly among single mothers. 81,82 Interestingly, several studies have examined impacts on wage differentials and inequalities in wages by gender, as well as gender dynamics within the household. With respect to wages, this research suggests that the proportion of household income earned by women increased with access to longer (more than 24 weeks) durations of leave. 83 Some research evaluated whether longer durations of paid leave might help safeguard women from the "motherhood penalty," referring specifically to the loss in employment, wages, and annual earnings experienced by women for each subsequent child, relative to men and nonmothers.
One cross-national study with a cross-sectional design suggested that the negative association between having young children and employment was larger in countries with longer durations of paid parental leave, whereas another showed that longer durations of paid leave were associated with smaller earnings penalties. 84,85 Looking at employment gaps between mothers and nonmothers, a model-based analysis using data from the European Community Household Panel predicted that an increase in the number of years of leave available to mothers of infants led to small increases in these inequalities. 86 Results from analyses by Pettit and Hook suggest this effect may be nonlinear. 87 Their multilevel analyses of 19 countries included in the Luxembourg Income Study suggested that longer parental leaves were associated with a lower employment gap between mothers and nonmothers; however, benefits diminished with extended leave provisions of 3 years or more. 87

With respect to household gender dynamics, a cross-sectional study including 32 countries suggested that countries offering longer durations of paid parental leave had more egalitarian gender divisions of housework, not including time spent on child care. 88 Subsequent research, measuring housework using data from the Multinational Time Use Study, suggests that the relation may depend on the nature of paid leave available, and specifically whether leave is available to fathers. One study found that a longer duration of parental leave was associated with less time spent on cooking for men and more time spent on cooking and housework for women, which suggests that longer parental leave may exacerbate gender inequalities in time spent on housework; however, women spent less time on cooking if men had access to parental leave. 89 Similarly, Hook showed that the duration of parental leave was associated with less time spent on unpaid work among men, whereas having access to parental leave specifically for fathers was associated with more time spent on unpaid work. 90 This evidence suggests that longer periods of parental leave may deepen specialization within the household and reinforce social norms governing housework and child care, whereas having designated leave for fathers may contribute to a more balanced distribution of unpaid work within the household. However, a fixed-effects regression analysis showed that an increase in the duration of parental leave was associated with increased paternal time spent on child care, specifically for fathers with less education; the impact of increasing paternal leave was similar in magnitude, although less precisely estimated. 91

--- Paid Maternity and Parental Leave and Child Health and Development
Given the potential for paid leave policies to influence caregiving and economic outcomes, there is a growing body of literature that has examined the population health impact of paid leave, with most research investigating the question of whether extending leave benefits reduces mortality within the first year of life. Evidence on outcomes measured in the neonatal period, between birth and the first 28 days of age, is mixed. For example, a study by Ruhm in 2000 did not provide evidence that increases in paid leave influenced the incidence of low birth weight, whereas a 2005 study by Tanaka did find an effect. 47,48
Recent work by Stearns showed that the US Pregnancy Discrimination Act of 1978 decreased the incidence of low-birth-weight infants, particularly for unmarried mothers, as well as early-term and small-for-gestational-age births. 92 The potential for parental leave policies to influence neonatal outcomes may be limited by the extent to which paid leave can be taken prior to birth, which could facilitate access to prenatal care and other health-promoting interventions.

Several cross-national studies have examined whether national expansions of paid leave influenced rates of infant mortality. This work shows that increases in paid parental and/or maternity leave lowered rates of infant mortality, with benefits largely concentrated in the postneonatal period from 1 to 12 months of age. For example, in separate studies, Ruhm and Tanaka showed that a 10-week extension of paid leave was associated with a roughly 2.5% decrease in the infant mortality rate. 46-48,66 An evaluation of paid maternity leave provided through state temporary disability insurance programs, which was mandated by the 1978 US Pregnancy Discrimination Act in 5 states with existing temporary disability insurance programs, did not reduce infant mortality rates. 92 However, this act was unlikely to affect very early or very low birth weight births due to the short amount of antenatal leave available; thus, the lack of a pronounced impact of the reform is unsurprising. Paid leave also appears to lower child mortality measured in the first 5 years of life. Tanaka indicated that a 10-week extension of paid leave benefits lowered child mortality rates by 3%, estimates similar to those from Ruhm. 47,48

The mechanisms that potentially connect paid parental leave to improvements in infant and child mortality might include health-promoting behaviors such as breastfeeding and immunization, parenting behaviors, and utilization of health services, as well as increased income. Longer leave durations were associated with improvements in the prevalence and duration of breastfeeding in the United States and Canada. 17,93 For example, the 2004 introduction of California's paid leave program was associated with increases in rates of exclusive and overall breastfeeding through the first 3, 6, and 9 months following birth. 17 A 2007 German policy that, among other components, increased financial support to new parents was associated with longer durations of breastfeeding, although there was no impact on the probability of initiating breastfeeding. 94 With respect to parenting behaviors, a recent evaluation of California's paid leave program suggested the reform reduced the incidence of abusive head trauma admissions among children less than 2 years of age, with the proposed mechanism being lower levels of stress and abusive behavior. 95 Research on the use of health services is sparse. An ecological study using cross-sectional data from 185 countries found a positive relation between the length of paid maternity leave and vaccination coverage, although the study design precludes causal inference. 96 Immunization coverage was not influenced by paid leave in the study by Tanaka; however, the duration of job-protected paid leave was already relatively high in many of the OECD countries included. 47 The extension of parental leave in Sweden from 12 to 15 months did not affect the probability that the child was admitted to the hospital within the first 16 years after birth. 97
The effects of leave policy on child development and health over the life course are less clear and, given the lack of evidence, challenging to synthesize. There was little evidence that increased parental leave benefits in Canada influenced children's temperament or motor and social development. 98 Paid leave policies might influence educational outcomes. For example, an evaluation of a 1977 Norwegian reform that introduced 4 months of paid maternity leave and extended the duration of unpaid leave measured longer-term educational impacts, with results supporting a substantial reduction in high school dropout rates measured at age 30. 99 The 1998 policy in Sweden that extended paid parental leave from 12 to 15 months was associated with better scholastic performance at age 16 years, but only for children of more highly educated mothers. 97 However, most of the literature examining school performance suggests null effects of longer parental leaves. A 1992 Norwegian reform that extended the duration of paid parental leave from 32 to 35 weeks did not influence children's school performance. 19 Similarly, educational attainment did not improve after several reforms extending paid maternity leave benefits in Germany between 1979 and 1992, or after a 1984 reform in Denmark, which extended parental leave benefits from 14 to 20 weeks. 52,80 A cross-national analysis of 20 OECD countries did not provide evidence for a positive association between longer parental leave and school performance. 100

--- Paid Maternity and Parental Leave and Maternal Health
Research evaluating the impact of parental leave on maternal health is limited. A few studies have examined women's mental health. The expansion of unpaid and paid leave in the United States and Canada, respectively, was not associated with postpartum depression. 44,93 There was no evidence for an impact of the Canadian reform on women's self-reported health in Canada. 93 Looking at life course effects, a study using data from the Survey of Health, Ageing, and Retirement in Europe showed that women who were exposed to more generous federal maternity leave policies at the time of first childbirth reported fewer symptoms of depression after the age of 50 years. 101 This life course effect of parental leave policies on women's mental health warrants further research.

--- Paternal Leave Policies
Historically, the expansion of gender-neutral leave policies, whether paid or unpaid, has not coincided with a marked increase in uptake by fathers, who unlike mothers tend to take few days off from work following childbirth. The duration of unpaid leave in the United States, for example, was not associated with leave-taking among men. 37 In West Germany, the expansion of unpaid leave in 1992 actually decreased paternal child care time in the longer term, 18 to 30 months after childbirth. 49 However, targeted policies have increased fathers' leave-taking following childbirth. In Norway, for example, descriptive evidence suggests that the 1993 federal policy change that added 4 weeks of parental leave for fathers was associated with an increase in leave-taking. 53 In Sweden, 1995 and 2002 reforms that reserved 1 month of paid paternal leave led to substantial increases in paternal leave-taking, although the 2008 introduction of a gender equality bonus did not. 20,102 Additionally, the introduction of 13 days of paid paternal leave in 2007 in Spain appeared to increase leave-taking among fathers. 18
In terms of economic implications, a study by Cools and colleagues suggested that the 1993 reform in Norway did not influence fathers' work hours and earnings when children were 2 to 5 years old, 19 whereas another study implied earnings may have declined in the medium term, 5 years after birth, 53 although the effects on earnings from these 2 studies were similar in magnitude. Cross-national analyses of 24 European countries showed that father-friendly parental leave policies were associated with fewer working hours among less educated fathers. 103 However, a German reform adding 2 additional "partner months" in 2007 was not associated with a change in fathers' labor force participation rate. 74 Only 1 study, Cools and colleagues' study of Norway's 1993 reforms, evaluated the impact of a paternal leave policy on outcomes for children; that analysis indicated that paid paternal leave improved children's school performance, but only in families where the father was more educated than the mother. 19

Several studies have examined the implications of paternal leave policies for social outcomes and gender dynamics, including the distribution of care responsibilities within the family. The 1993 Norwegian reform was associated with a reduced frequency of conflicts over housework and a more shared division of washing clothes, although there was no impact on views on gender equality or views on public responsibility for child care; 104 the greater sharing of housework among new parents may have also influenced patterns of household work among their children, with some evidence that household work declined among children born after the 1993 Norwegian reform, particularly among girls. 105 Other research indicates that expanding paternal leave quotas in Norway between 1996 and 2010, from 4 to 10 weeks, caused women to return to work faster, potentially by encouraging a more equitable division of paid and unpaid work among parents. 106 The 1995 Swedish "daddy month" reform did not increase shared responsibilities for child care, including taking leave to care for sick children. 20 The 2007 reform in Germany that provided an additional 2 months of parental leave conditional upon fathers' uptake increased paternal child care time at 1 year and 18 to 30 months after birth, although it did not influence paternal housework. 49 The latter study also showed that paternal child care time only increased when the wage replacement rate, rather than just the duration of leave, increased. 49

--- Impacts of Longer-Term Sick and Medical Leave Policies for Personal or Family Illness
As shown in Online Appendix Table 2, most studies evaluated the immediate impact of sick leave and medical leave policies on policy uptake and personal absences from work. This research generally showed that personal sick leave was responsive to changes in policy, with laws that restricted eligibility or benefits typically associated with reductions in mostly short-term leave-taking behavior, 22,25-27,107-111 and vice versa. 28,31,112 Just as generous parental leave policies might have the perverse effect of discouraging labor force attachment and reducing earnings, 69 medical leave policies, from the policymakers' perspective, should be optimally designed to achieve the right balance between work absence and presence.
In other words, policies need to be sufficiently supportive to facilitate time away from work to address personal or family illness and promote health, but restrictive enough to discourage unnecessary sick leaves, or the "shirking" of work responsibilities. Three evaluations of the German Employment Promotion Act of 1996, which reduced sick pay from 100% to 80% of gross wages in the first 6 weeks for private sector employees, and was subsequently repealed in 1999, showed a positive relation between sick pay and sickness absences. 32,111,113 One study suggested that the 1996 act that limited sick pay did not affect self-rated health; 111 another showed that revoking the act increased the average number of absence days among private sector employees, including employees in partnerships and men, as well as workers with a disability certificate, who anticipated job loss, or who reported low health satisfaction, but did not influence health or wellbeing. 31 These results suggest that the reform may have discouraged unnecessary leave-taking without adversely affecting health. A few studies have evaluated whether more flexible medical leave policies might help strike the right balance. For example, the 2014 study by Kausto and colleagues demonstrated that the introduction of partial medical leave allowing employees to work part-time while recovering from sickness had a strong effect on workforce participation, especially for people suffering from mental disorders. 29 Nonetheless, whether medical leave policies influence health, particularly for those who modify their behavior in response to changes in eligibility conditions or compensation, remains largely unknown. Few studies have evaluated the impact of sick leave policies to care for family members. One study of parents with chronically ill children found that California's introduction of paid family leave did not have a substantial impact on the probability of taking any leave, the duration of leave, or the frequency of unmet need, presumably because few parents were aware of the policy. 34 --- Bias Assessment The extent to which individual studies were subject to bias was based primarily on the study design used to evaluate the impacts of leave policies. The majority of studies appeared to be at low to moderate risk of bias. 114 This was partly determined by the selection of studies according to the design of our review. Specifically, with respect to the definition of the treatment, we focused on the impact of population-level leave policies or access to a leave policy. We explicitly excluded studies that aimed to assess whether individuals' utilization of leave influenced outcomes. Our rationale was that those who take advantage of social programs are likely to differ from those who do not for a variety of reasons that might influence their subsequent socioeconomic and health status; this "selection" into the treatment makes evaluations of individual-level leave-taking more susceptible to confounding. By contrast, changes in an employer, state, or national-level leave policy are arguably more exogenous. Furthermore, study designs based on institutional changes do not measure the impact of individual-level leave-taking; they are analogous to an intention-to-treat (ITT) effect. The ITT effect, although it evaluates the impact of "assignment" to a particular policy reform irrespective of whether individuals actually avail themselves of the benefits, might be more policy relevant. 
Evaluations of population-level interventions affecting access to leave are not, however, immune to confounding. Studies that used standard regression adjustment to control for measured confounders were, in general, at greater risk of bias. This includes, for example, a study that used data from the US National Longitudinal Survey of Youth to compare rates of job retention for mothers with and without employer-based access to maternity leave. 73 Similarly, Stier and Mandel in 2009 used multilevel regression to estimate the association between living in a country with longer vs shorter paid parental leave and women's share of household income, using data from 21 countries included in the Luxembourg Income Study. 83 Common to these approaches is the strong and unverifiable assumption that all common causes of the treatment and outcomes of interest are measured and appropriately controlled. It is plausible, for example, that countries with more generous paid leave policies are more economically developed and also offer other entitlements that might affect outcomes, including levels of wage inequality. Other analyses (see, for example, Hanel 57) utilized matching methods, including propensity score matching. Although these techniques might offer other benefits (eg, limiting extrapolation beyond regions of "common support"), they follow a similar philosophy to the standard multivariable regression approach and assume that information on measured characteristics is sufficient to create exchangeable treatment and control groups.

Most studies applied quasi-experimental techniques to identify the causal effect of a paid leave reform; these approaches are distinguished from standard regression approaches by their potential to address unmeasured confounding. The simplest strategy for addressing unmeasured differences between countries that might affect whether or not a country adopts a particular policy, as well as the outcome of interest, is to compare outcomes before and after the implementation of a policy within a treated country. This before-and-after or pre-post design was commonly used to evaluate leave policies. 10,20,22,25-27,59,66,104,115 The validity of the pre-post comparison depends on the assumption that pre-reform outcome trends are a valid substitute for trends in the post-policy period had the policy not been implemented. To the extent that other factors may have changed coincidentally with the policy itself, this strategy might not yield rigorous evidence of a policy effect. For example, the 2009 Estonian reform to sickness benefits was implemented in response to a recession, which may have had an independent effect on the primary outcome, sickness absence, potentially confounding effect estimates. 26,27 When longer time series were available, studies sometimes incorporated more sophisticated interrupted time series (ITS) methods to model the counterfactual trend for what would have happened after the implementation of a policy had it not been implemented. For example, Ziefle and Gangl applied ITS to data from the German Socio-Economic Panel between 1984 and 2010 to evaluate the impact of 7 changes in the length and/or benefits of federal parental leave policies. 59 Similar to pre-post comparisons, however, identification of a causal effect relies on the ability to accurately model trends in the outcome before the policy intervention, as well as the assumption that these trends would have continued had the policy not been adopted.
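To illustrate the counterfactual logic of these ITS analyses, here is a minimal sketch of segmented regression on simulated monthly data; the outcome, reform date, and effect sizes are invented for illustration and are not drawn from Ziefle and Gangl or any other study in this review.

```python
# Interrupted time series sketch: fit level and slope changes at a reform date,
# treating the extrapolated pre-reform trend as the counterfactual.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
months = np.arange(120)               # 10 years of monthly observations
reform = 60                           # policy implemented at month 60
post = (months >= reform).astype(int)
t_since = np.where(post == 1, months - reform, 0)

# Outcome with a pre-existing trend plus a level shift and slope change.
y = 50 + 0.10 * months + 3.0 * post + 0.05 * t_since + rng.normal(0, 1, months.size)
df = pd.DataFrame({"y": y, "t": months, "post": post, "t_since": t_since})

fit = smf.ols("y ~ t + post + t_since", data=df).fit()
# 'post' is the immediate level change; 't_since' is the change in slope.
print(fit.params[["post", "t_since"]])

# Counterfactual series: the pre-reform trend continued past the reform date.
# The causal reading requires that this trend would in fact have continued.
counterfactual = fit.params["Intercept"] + fit.params["t"] * months
```

The 'post' and 't_since' coefficients correspond to the level and slope changes that such analyses report; everything rests on the extrapolated pre-reform trend being a credible counterfactual.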
The most frequently applied quasi-experimental technique, particularly among parental leave evaluations, was the difference-in-differences design. Rather than using the pre-policy trends in the treatment group to account for secular changes, as with ITS, these studies used a comparison group that did not experience a policy change to infer what would have occurred in the treatment group had it not enacted the policy. An essential condition for drawing causal inference is the "parallel trends" assumption that the change in the outcome in the control group represents what would have happened in the treatment group had it not enacted a particular reform. Changes in other social, political, and economic conditions that coincide with the policy change of interest and also affect the outcome lead to biased estimates. Among the many applications of the DD design were subnational studies that capitalized on state or provincial variations, such as US studies examining the impact of state FMLA and paid leave policies; 3,12,17,35,36,38,77 studies that examined federal policy changes to leave policies, sometimes exploiting variations across age cohorts; 18,19,52,53,61,93 and cross-national studies leveraging variations in leave policies across countries and time periods. 46-48,69 Studies tested the robustness of their DD analyses through a variety of approaches, including the use of propensity score matched and synthetic control groups; 79,92 difference-in-difference-in-differences (DDD); 35,92 multiple control groups; 3,52,116 and negative control and other placebo tests. 38 These sensitivity analyses can help rule out bias as an explanation for observed estimates of policy impact.

A few evaluation studies used the regression discontinuity (RD) approach to take advantage of situations where eligibility for a new policy was determined by the value of an observed continuous characteristic, such as a child's birthdate. The intuition for the RD method is that the administrative thresholds that determine eligibility for a particular benefit are arbitrary and, therefore, individuals who fall just above the eligibility cutoff should be similar to those who fall just below it with respect to all measured and unmeasured characteristics. Assuming families are not manipulating the timing of their births to take advantage of new policies offering more generous benefits, individuals on either side of the threshold are essentially randomized to the treatment or control condition; thus, the RD design is one of the most rigorous methods for estimating the causal effect of a policy reform on those very close to the cutoff. As such, the 3 studies that exploited discontinuities in eligibility criteria to evaluate the effects of leave policies were judged to be at low risk of bias. This includes an evaluation of the 1977 introduction of paid maternity leave in Norway, 99 a 1984 reform that increased the length of paid parental leave in Denmark, 80 and a 1990 policy change that increased the length of paid maternity leave in Austria. 78

--- Discussion
Our ability to take leave from work to care for a newborn, a sick family member, or even our own health depends on family leave policies, mandated by our employers or governments, which provide job protection and wage replacement for a fixed duration of time. There have been hundreds of reforms to these policies over the past few decades, particularly as rates of labor force participation by women, who were more frequently caregivers, have increased across OECD countries.
We identified 109 peer-reviewed studies that have evaluated the impact of leave policies in OECD countries, with the bulk of the literature focusing on parental leave after the birth of a child. Most studies assessed the impact of leave policies on the proximal economic and labor market targets they were primarily intended to influence, such as leave-taking, employment, wages, and labor force participation, with fewer studies assessing social or health impacts, particularly those occurring years after the initial episode of leave-taking. The literature on medical leave policies, particularly leave to care for family members, was sparse, and only provisional conclusions can be drawn regarding the impact of these policies. Thus, most of our discussion focuses on the impact of parental leave policies. Conceptually, the optimal design of leave policies should strike a balance between the competing demands of earning income and attending to personal and family well-being, including child-rearing. If leave policies are too restrictive, they might discourage labor force entry or sustained participation. For instance, new mothers living in US states that had not extended the federally mandated 12 weeks of unpaid leave provided by the FMLA were less likely to continue working after giving birth, possibly due to the relatively short duration of leave provided. 36 For others, restrictive policies and those with low wage replacement might precipitate a premature return to work. Evidence suggests that when longer, job-protected leave is an option at a low or unpaid rate, people may opt for shorter leaves regardless of policy, reflecting the need for income and labor force attachment. 117 For example, Burtle and Bezruchka describe how the average Californian mother takes only 40% to 50% of the paid leave available, because the wage replacement rate is low and leave is not job protected unless covered by the FMLA. 2 Similarly, mothers in Australia often did not avail themselves of the full 52 weeks of unpaid leave they were entitled to, 11 perhaps because the unaffordability of remaining out of the workforce outweighed the desire to care for newborns longer. Conversely, there is an economic argument that overly lengthy leave available at full pay might encourage extensive interruptions from work, which might depress long-term wages. 69 By treating reforms to leave policies as natural experiments and comparing their impacts, we can try to identify which types of policy designs facilitate leave-taking without having harmful effects on job retention and other outcomes. In practice, our comparison of the impact of leave policies allows us to draw several conclusions. First, legislated, paid parental leave policies are well accessed by mothers, with consistent evidence that expansions in the duration of paid leave up to 12 to 18 months were accompanied by attendant increases in leave-taking and longer durations of leave. Second, there was little evidence that extending the duration of paid leave had negative employment or economic consequences. To the contrary, research indicates that more generous paid leave policies have the potential to increase women's labor force participation, employment, and job retention; some studies suggest these positive effects might diminish after roughly 28 full-time equivalent weeks of paid leave. 
Several studies showed that longer durations of paid leave could increase wages and income in the longer term; a multiyear positive effect on income may play a critical role in healthy development of children when it comes in the first 3 years of life. Third, unpaid leave does not appear to confer the same benefits as paid leave. Evaluations of unpaid leave provided in different US states or OECD countries demonstrate that unpaid leave has little impact, or in some cases even a negative effect, on women's labor force participation, employment, and wages. Fourth, from a population health perspective, increases in paid parental leave were consistently associated with better infant and child health, at least in terms of lower mortality rates. Fifth, whereas gender-neutral paid leave policies have not increased leave-taking on the part of new fathers, paid paternal leave policies of adequate length and generosity have induced fathers to take additional time off from work following the birth of a child. Our assessment of the literature also underscores several research gaps that might warrant further attention. With respect to the main effects of leave policies, the majority of the literature has focused on the economic and labor market consequences of parental leave reforms. Far less attention has been paid to other policies or outcomes. Accordingly, we highlight several specific areas for future work. First, future research is needed to inform strategies to encourage partners to take leave after the birth of a child and to share care responsibilities more equitably. Second, although the effects of sick and parental leave policies are difficult to disentangle in some contexts, including the United States, where medical and parental leave policies are combined, rigorous evaluations of sick leave policies are needed. Third, comparatively fewer studies have evaluated other outcomes plausibly affected by leave policies, including child health, maternal or paternal health, or social outcomes, particularly as they are experienced over the life course. As a few studies included in our review illustrated, it is challenging but feasible to evaluate the longer-term effects of leave policies on these outcomes. Fourth, with respect to heterogeneity, it is largely unclear if certain population subgroups are more likely to benefit from leave policies than others because extant work has rarely assessed effect measure modification by sociodemographic or other characteristics. Restrictive leave policies might exacerbate social inequalities in the use or duration of paid leave taken, as well as downstream outcomes, as illustrated by the 2011 study by Rossin showing that the FMLA was associated with improvements in child health, but only among college-educated white mothers. 38 Understanding how leave policies affect social groups who struggle the most with the dual demands of work and care is a fruitful area for future work. Whether contextual factors-including other public policies such as those affecting the nature, quality, and affordability of child care and health care-moderate the impact of leave policies is also unknown. Fifth, with some exceptions, 118 few studies have quantified the costs and benefits of paid leave policies from the perspective of employers. Finally, the pathways explaining observed effects, including the impact of leave policies on infant and child mortality, have not been adequately explored. There were limitations to our review. 
In particular, our review did not evaluate the systematic underrepresentation of null or negative findings in the literature (ie, publication bias). Additionally, although we did not impose any restrictions on language and translated the non-English-language studies identified through the databases searched, we likely identified a selected sample, and relevant evaluations may not have been captured.

In conclusion, the economic, social, and health effects of parental leave depend on the duration, the wage replacement rate, and the baseline scenario against which the policy is implemented. Hands-off approaches that rely on individual employees' negotiations, as in the United States, may improve outcomes unevenly because of inaccessibility to a large portion of the population. Mothers are more likely to take up legislated leave with moderate duration and a high wage replacement rate to care for newborns while remaining in the labor force. Work on related health benefits is limited; however, there are clear benefits of mothers' adequate paid leave for infant and child health. Fathers are more likely to take up leave when they are incentivized through their own, nontransferable paid leave and high wage replacement rates. Finally, more generous baseline scenarios in terms of social benefits may influence the impact of leave policies on maternal-child health and social outcomes, pointing to the need for studies in low-welfare contexts such as the United States.
Objectives: This research studies the impact of an influenza epidemic in the slum and non-slum areas of Delhi, the National Capital Territory of India, by taking proper account of slum demographics and residents' activities, using a highly resolved social contact network of the 13.8 million residents of Delhi. Methods: An SEIR model is used to simulate the spread of influenza on two different synthetic social contact networks of Delhi, one in which slum and non-slum regions are treated the same in terms of their demographics and daily sets of activities, and the other in which slum and non-slum regions have different attributes. Results: Differences between the epidemic outcomes on the two networks are large. Time-to-peak infection is overestimated by several weeks, and the cumulative infection rate and peak infection rate are underestimated by 10-50%, when slum attributes are ignored. Conclusions: Slum populations have a significant effect on influenza transmission in urban areas. Improper specification of slums in large urban regions results in underestimation of infections in the entire population and hence will lead to misguided interventions by policy planners.
--- Strengths and limitations of this study
▪ A detailed social network has been used for the first time to study epidemics in slums and the larger urban population in which they reside.
▪ Omitting the effect of slums will lead to misguided interventions by policy planners.
▪ With over a billion people living in slums, the results have broader impact.
▪ Owing to lack of space, the effect of interventions will be discussed in a follow-on paper.

--- INTRODUCTION
Slums are characterised by overcrowding, lack of clean water, poor sanitation and poor medical facilities. This, combined with low vaccination rates, poor education and self-medication, results in high vulnerability to infections. Diseases like cholera, malaria, dengue, Ebola and HIV are common in slums across the world. 1-3 According to the United Nations, 4 the global number of slum residents is more than 1 billion, which is over one-third of the world's urban population and a seventh of all humanity, and this number is estimated to double to about 2 billion by 2030. India has about one-third of the global slum population. A study of eight cities in India finds that in Delhi, 48% of households in slums have five or more people sleeping per room, compared with 19% of non-slum households. 5 The overcrowded living conditions facilitate the spread of infectious diseases, especially airborne infections like influenza. 6 7

In general, high-density areas in developed and developing countries are associated with poverty and higher incidence of diseases. In the US, counties in 14 states show a correlation between higher census tract-level poverty and higher influenza-related hospitalisation. 8 9 Yousey-Hindes and Hadler 10 find the mean annual incidence of paediatric influenza-related hospitalisation in high-poverty and high-density areas to be at least three times higher in New Haven, Connecticut, whereas Kumar et al 11 detect a steeper, earlier influenza rate increase in high-poverty census tracts in New Haven. Thus, it is not surprising that understanding and improving the health and lives of slum dwellers has been identified as one of the most pressing developmental challenges of the 21st century. 6 12 This research takes a step in this direction by quantifying the effect of slum population attributes (including residents' activity patterns) on the spread of influenza. Our methods also address several of the challenges cited in Pellis et al, 13 including developing more realistic heterogeneous populations and determining their effects on epidemic outcomes in and outside slums.

The focus of this research is the slum population of Delhi, the National Capital Territory of India. See figure 1 for maps of Delhi and a zoom-in of some slum zones (regions). Spread and control of influenza on a synthetic social network of Delhi was studied in Xia et al, 14 but that work did not model the special attributes of the slum population, such as larger household size and different types of daily activity schedules; both slum and non-slum residents were treated in the same manner. For our study, this is the baseline population, and to the best of our knowledge, the resulting social network is a state-of-the-art representation of the daily contacts/interactions of human agents in Delhi, India. However, our study also uses a second network constructed as part of this research: a highly refined social network of Delhi, which accounts for slum demographics and activities. There are 298 geographic slum regions in Delhi, containing about 13% of Delhi's population.
A slum region is defined in the India Census as a residential area where dwellings are unfit for human habitation due to overcrowding, lack of ventilation, light or sanitation, or a combination of factors that are detrimental to safety and health. Agent-based simulations, where each individual is represented as an agent, are conducted for both social networks of Delhi, that is, the original one in Xia et al 14 and the refined one. The goals are to understand how epidemic dynamics differ between slum and non-slum populations, and how these dynamics differ from those in a networked population that ignores the effects of slums, and later use this understanding to design appropriate interventions. To the best of our knowledge, there have been no epidemic simulations of realistic, large urban areas (eg, with several millions of people) that include the effects of slums (their geographic locations, compositions, unique home characteristics, and agent-resolved activity patterns) on infectious disease dynamics. However, many researchers have pointed out the importance of studying slums; for example, see Desai et al, 3 Firdaus, 15 Go et al, 16 Sur et al, 17 Riley et al 18 and Sclar et al. 12

--- METHODS
--- Population generation: We start with a pre-existing Delhi synthetic population, 14 developed from LandScan and census data for Delhi, a daily set of activities of individuals, their demographics and the locations of activities collected through surveys by MapMyIndia.com. New data produced in this study include ground survey data on Delhi slum residents' demographics and daily activity sets collected by Indiamart.com. We developed the survey instruments and our commercial partners in India gathered the data. Data on the geographic extents of slum zones in Delhi, in the form of spatial polygons, were obtained from MapMechanic.com. The geographic extent of Delhi and representative slum zone locations and sizes are provided in figure 1. Note the irregular shape of Delhi and the slum zones in figure 1, as well as the locations of slum zones.

The population generation process, augmented for the slums and slum residents, assigns slum-specific characteristics, demographics and activities to the individuals whose homes are in those regions. In particular, those who live in slums are those whose home locations are in slum zones. These home locations have (latitude, longitude) coordinates as attributes, and hence homes can be identified as being inside or outside of slums. Since each human agent is assigned a home (and hence home location), each person can be identified as a slum or non-slum resident according to whether the person's home is located in a slum zone, using data such as those shown in figure 1. Thus, the number of individuals is the same in the population without slums and the population with slums. The slum population constitutes about 13% (1.8 million) of the entire Delhi population of 13.8 million people. Over half (186) of the slum zones have fewer than 5000 people; 59 zones have between 5000 and 10 000; and the remaining 53 zones have between 10 000 and 49 490 people. See online supplementary figures S1-S3 for distributions of age, populations of the 298 slum zones and household size, respectively. For slum and non-slum individuals, we divided activities into the following six categories: home, work, shopping, school (for youths), college (for older adolescents and adults) and other.
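The home-location test described above reduces to a point-in-polygon query. A minimal sketch, assuming zone boundaries are available as polygons (the coordinates below are invented; the study's actual boundaries came from MapMechanic.com spatial polygons):

```python
# Slum/non-slum assignment sketch: a person is a slum resident if the
# (longitude, latitude) of their home falls inside any slum-zone polygon.
from shapely.geometry import Point, Polygon

slum_zones = [
    Polygon([(77.20, 28.60), (77.22, 28.60), (77.22, 28.62), (77.20, 28.62)]),
    Polygon([(77.10, 28.55), (77.12, 28.55), (77.12, 28.57), (77.10, 28.57)]),
]

homes = {
    "person_1": Point(77.21, 28.61),   # inside the first zone
    "person_2": Point(77.30, 28.70),   # outside all zones
}

def is_slum_resident(home: Point) -> bool:
    """True if the home location lies within any slum-zone polygon."""
    return any(zone.contains(home) for zone in slum_zones)

for pid, home in homes.items():
    print(pid, is_slum_resident(home))
```

Applying such a test to every synthetic home location partitions the 13.8 million agents into the roughly 1.8 million slum residents and the remainder of the population.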
We combine all these data sets to produce a highly resolved, geolocated and contextualised population of Delhi with slums integrated in it, using the methodology described in refs. 19 and 20. More details on the survey datasets and population generation methodology are provided in the online supplementary information.

--- Resulting networks: Social contact networks are generated from populations as follows. Each person has a home location and a set of daily activities that includes one or more of work, shopping, school, college and other. Each activity means that an individual 'visits' a particular location for the activity, with a start and an end time. When two people visit the same location, and their visit times overlap, they interact, meaning that disease or virus may be transmitted from an infectious person u to a susceptible one v. This means that a network representation of the population has an undirected edge {u,v}. This is precisely how the social contact network is formed. Each edge in the network is labelled with the activities of the two individuals in the interaction, and the duration of the interaction. The duration of interaction is used in the epidemic computations below. Each individual has attributes from the population generation process that include age, gender and household ID. But now, the activities of a person that result in interactions with others are encoded as the network edges. The original social contact network of Delhi, called Network 1, treats the slum regions like any other region in Delhi in terms of household sizes, assignment of demographics and individual activities. The enhanced Delhi network, called Network 2, produced as part of this work, includes 298 slum geographic regions (zones) in Delhi.

We provide selected comparisons between the two networks, with additional data. First, a major difference between Networks 1 and 2 is that Network 2 has more home-related contacts because the average non-slum household size is 5.2 whereas in the slum regions it is 15.5. Second, there is a 15% increase in the number of daily activities in the slum network: 33 890 156 individual activities per day in Network 1 versus 39 077 861 activities in Network 2. According to the activity survey, slum individuals have more varied activities than non-slum individuals (see online supplementary figure S4, category 'other'). The increased number of activities of slum individuals translates into a ∼10% increase in average interactions (degree) and average density of ties among individuals (clustering coefficient): 30.4 and 0.680, respectively, for Network 1 versus 33.4 and 0.733, respectively, for Network 2. Looking in more detail at the different types of interactions (eg, a contact between slum and non-slum persons is called a 'slum-non-slum' contact), we observe the interplay between the number of contacts of different types and the durations of contacts. The effect of the contact durations is significant because the probability of disease transmission between an infected person and a susceptible one is an exponential function of contact duration. Using the average duration of a slum-non-slum contact as a baseline, we observe that the average durations of slum-slum and non-slum-non-slum contacts are 2.5 and 3.2 times the baseline, respectively (see online supplementary figure S5, the last set of bars under x-axis value of 'all'). These last two values are non-intuitive because there are more people in a slum house than in a non-slum house.
Consequently, one would expect the average contact duration between two slum people to be greater than that for two non-slum people. However, non-slum residents have longer contact durations at work (see online supplementary figure S6, category work). We will see in the results below that the lesser contact duration between two slum individuals compared with two non-slum individuals is more than offset by the larger average number of contacts of slum individuals compared with non-slum people, that is, 67.4 vs 28.3 (see online supplementary figure S7). This latter difference is a result of the greater variety of activities of slum people. The online supplementary information addresses these and other features and differences among Delhi subpopulations. Disease model: We use an SEIR model where each of the 13.8 million individuals can be in one of four states at any given time: Susceptible (S), Exposed (E), Infectious (I) and Removed or Recovered (R). We seed the epidemic in a susceptible population with 20 initial infections that are randomly chosen. Results are not sensitive to the number of initial infections; that is, varying the number of index cases did not change the outcomes of our experiments. An infectious node u spreads the disease to each susceptible neighbour v independently with a certain probability, referred to as the transmission probability. The transmission probability is a function of the duration of contact. This probability is selected to simulate mild, strong and catastrophic influenza. For the Delhi contact network, the transmissibility values corresponding to mild, strong and catastrophic influenza are calibrated to be 0.0000215, 0.000027 and 0.00003, respectively. 21 (Transmissibility is a multiplier on the contact duration of an interaction; the greater the transmissibility, the greater the transmission probability, for the same duration of contact.) Mild, strong and catastrophic transmission rates correspond to R0 equal to 1.05, 1.26 and 1.40 for Network 1 and 1.123, 1.39 and 1.54 for Network 2, respectively. 14 22 The incubation period follows the distribution: 1 day (30%)/2 days (50%)/3 days (20%) and infectious period follows: 3 days (30%)/4 days (40%)/5 days (20%)/6 days (10%). 20 23 If a susceptible node becomes exposed, it stays exposed for the incubation period and then switches to an infectious state for the infected duration, after which it is recovered or removed. Note that this state transition is irreversible and is the only possible disease progression. We refer to Newman, 24 Dimitrov and Meyers 25 for more details on stochastic models for epidemics. For every experiment, 25 runs are simulated and their mean results are reported; in the text, the full range of results is given to address variance. Each simulation run consists of seeding 20 individuals with influenza chosen uniformly at random, from a specified set of individuals (slum residents, non-slum residents, or the entire population). From these initially infected individuals, disease propagates across edges to infect susceptible agents. The details of the simulation parameters are provided in online supplementary table S1. We used EpiFast, a fast discrete event simulator for disease propagation over a contact network. 26 It is implemented in C++/Message Passing Interface (MPI) and uses a parallel algorithm, which enables scaling on distributed memory systems. 
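As a concrete illustration of the transmission step described above, the sketch below assumes the per-contact infection probability takes the form p = 1 - (1 - r)^d, where r is the calibrated transmissibility and d the contact duration. This specific functional form is our assumption for illustration (the text specifies only that the probability is an exponential function of duration with transmissibility acting as a multiplier), not a statement of EpiFast internals; likewise, the time units of d are not stated in the text.

```python
# Per-contact transmission step of the SEIR process, on a toy edge list of
# (infectious person, susceptible person, contact duration) tuples.
import numpy as np

rng = np.random.default_rng(2)

def transmission_prob(transmissibility: float, duration: float) -> float:
    """Probability an infectious person infects a susceptible contact,
    assuming the form p = 1 - (1 - r)^d (our illustrative assumption)."""
    return 1.0 - (1.0 - transmissibility) ** duration

# Calibrated transmissibility for "strong" influenza from the text.
r_strong = 0.000027

# Toy contacts; durations here are treated as seconds for illustration.
edges = [("u1", "v1", 1800.0), ("u2", "v2", 28800.0)]
newly_exposed = {
    v for (u, v, d) in edges
    if rng.random() < transmission_prob(r_strong, d)
}
print(newly_exposed)

# Exposed individuals then draw an incubation period (1 day 30% / 2 days 50% /
# 3 days 20%) and an infectious period (3 days 30% / 4 days 40% / 5 days 20% /
# 6 days 10%), matching the distributions given in the text, before recovery.
incubation = rng.choice([1, 2, 3], p=[0.3, 0.5, 0.2])
infectious = rng.choice([3, 4, 5, 6], p=[0.3, 0.4, 0.2, 0.1])
```

Under these assumptions, an 8-hour (28 800 s) contact at the strong-influenza transmissibility yields an infection probability of roughly 0.54, while a 30-minute contact yields roughly 0.05, which is why contact duration matters so much in the comparisons above.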
EpiFast uses a disaggregated, agent-based model, which can represent each interaction between pairs of individuals and hence is used for studies of disease transmission. 14 Disaggregated models require neither partitions of the population nor assumptions about large-scale regularity of interactions, unlike compartmental models, which require both. We have covered the population generation process, the network generation process, the influenza transmission model and simulations. It is evident that there are heterogeneities in all aspects of this work. Agent-based models are well suited to capture spatial irregularities in slum zone geometries and locations, individual-level characteristics including activities and demographics of slum and non-slum residents, and connectivity patterns among individuals in the social networks. Other types of models, including differential equation-based (ie, uniform mixing), 27 compartmental 28 and patch models, 29 where counts of agents in each state are maintained, do not have the ability to model individual traits at the level of granularity that agent-based models provide. For example, in the work by Pandey et al 28 on the spread of Ebola, counts of people in different states (eg, infectious) are broken down into compartments such as those in the general community, in hospitals, and those that are healthcare workers.

--- RESULTS
We start with a comparative analysis of epidemic dynamics on Networks 1 and 2 to understand the impact of integrating slums on epidemic measures. The latter parts mainly focus on Network 2. We also examine the effect of seeding the infections in slum versus non-slum regions on epidemic outcomes. In comparisons between the two networks, we use the same initially infected agents across corresponding diffusion instances in Networks 1 and 2, for comparability of results.

--- Epidemics on Network 1 vs Network 2
Figure 2 shows the mean epidemic curves, that is, the number of new infections per day, for the two networks under mild, strong and catastrophic influenza cases. Note that the cumulative infection rate and peak infection rate are highly underestimated, and the time to peak overestimated, when Network 1 is used. We used two-sample Student's t-tests to test the difference between the two networks under the same transmission rate for time to peak infection, peak infection rate and cumulative infection rate. All tests are statistically significant, with p values smaller than 2.2e-16. The 95% CIs are also calculated. In the case of Network 1, mild influenza does not cause an epidemic, whereas for Network 2 it does. The time to peak infection is 131 (95% CI 120 to 141) days earlier on average in Network 2 for mild influenza, 34 (95% CI 31 to 37) days earlier for strong influenza and 23 (95% CI 21 to 25) days earlier for catastrophic influenza, compared with Network 1. Network 1 takes much longer to reach the peak of the infection, on the order of a few to many weeks, compared with Network 2 for all transmission probabilities, with differences being starker for mild influenza. This means that the speed of virus transmission is much faster in the actual population (with slums) than what policy planners would expect in a slum-free population. Similarly, the peak infection rate is underestimated by a significant fraction under every influenza model in Network 1, as shown in figure 2.
The peak infection rate is 162.6% (95% CI 161.8% to 163.4%) greater on average in Network 2 for mild influenza, 47.6% (95% CI 47.4% to 47.8%) greater for strong influenza, and 33.2% (95% CI 33.2% to 33.4%) greater for catastrophic influenza, compared with those in Network 1. The cumulative infection rate (or attack rate) is also underestimated. Under the same transmission rate, the cumulative infection rate is 78.5% (95% CI 73.4% to 80.7%) greater on average in Network 2 for mild influenza, 16.1% (95% CI 16.1% to 16.2%) greater for strong influenza, and 11.0% (95% CI 11.0% to 11.1%) greater for catastrophic influenza, compared with those in Network 1.

Figure 3 shows the cumulative infection rates for different subgroups (slum, age and gender) in the two networks. An agent can be part of many of these subgroups; for example, a woman aged 34 who resides in a slum. [Figure 3 caption: 'Slum' and 'Non-slum' refer to slum and non-slum regions, respectively; 'Male' and 'Female' denote the total number of males and females in Delhi, respectively; the age subgroups are denoted 'AG1' to 'AG9', where 'AG1' refers to all individuals between ages 0 and 10 in Delhi, 'AG2' to individuals between ages 11 and 20, and so on.] The cumulative infection rate in the slum, as shown by Network 2, is higher by more than 20 percentage points compared with non-slums, under all influenza models. Network 1 makes no distinction between slum and non-slum regions, and thus both regions face the same infection rate in Network 1. Subpopulations by gender and age also show higher infection rates in Network 2. The difference in subpopulation-level infection rate between the two networks drops as the transmission rate increases from mild to catastrophic. A similar result was reported in Kumar et al. 11 This outcome can be explained by the fact that when viral transmission is very high, as in the case of catastrophic influenza, the higher contact rates and other characteristics of 13% of individuals in the network do not matter as much. The high transmission rate dominates the impact arising from changes in the network structure. Females, young children and the oldest adults encounter relatively higher infection rates under all scenarios in both networks. Online supplementary figures S8 and S9 show differences in contact rates encountered by men and women at home and outside home, and with children, respectively. The higher contact rates of females with children and with others explain the incidence of higher infection rates among females.

--- Effect of seeding infections in slum versus non-slum regions
Rapid urbanisation is accelerating the growth of urban slums and squatter settlements, especially in developing countries, and these areas are easy targets for the seeding of epidemic-prone diseases such as influenza and Ebola. 30 For example, the West Point slum in Liberia was a focal point in the Ebola epidemic 31 and in 2012 a cholera outbreak in the coastal slums of West Africa killed hundreds and infected more than 25 000 people. 2 To understand the effect of seeding infections in slum versus non-slum areas, we randomly selected 20 individuals for seeding infections in (1) slum regions only; (2) non-slum regions only; and (3) the total Delhi population. For each case we simulated all three influenza models to check the robustness of the results across transmission rates.
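A minimal sketch of this seeding design, assuming the population is represented by integer person IDs partitioned into slum and non-slum residents (the partition below is a placeholder for the synthetic-population assignment, and the simulator call is elided):

```python
# Seeding experiment sketch: 20 index cases drawn uniformly at random from
# (1) slum residents, (2) non-slum residents, or (3) the whole population,
# with 25 simulation runs per condition, as in the text.
import numpy as np

rng = np.random.default_rng(3)

slum_ids = np.arange(0, 1_800_000)               # ~1.8 million slum residents
nonslum_ids = np.arange(1_800_000, 13_800_000)   # rest of Delhi's 13.8 million
pools = {"slum": slum_ids, "non-slum": nonslum_ids,
         "all": np.arange(0, 13_800_000)}

for condition, pool in pools.items():
    for run in range(25):
        seeds = rng.choice(pool, size=20, replace=False)
        # run_seir(network, seeds, transmissibility) would propagate the
        # epidemic from these index cases; the simulator is omitted here.
        if run == 0:
            print(condition, seeds[:3])
```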
Figure 4 shows the epidemic curves for the entire Delhi population under different seeding conditions and three different transmission probabilities, for Network 2. The simulation results indicate that the initial seeding conditions make no difference to the overall infection rate of the entire population. Nor do they alter the peak infection rates. However, they do alter the time at which the infections peak. In particular, seeding in the slums results in faster spread of the influenza contagion. For strong influenza, the average time to peak occurs at day 118 (range 114-125) when seeding is carried out in slums, whereas it is 131 (range 124-144) when seeding is carried out in non-slums and 128 (range 117-135) when seeding is randomly carried out in the entire population.

--- Demographics and network structure impact infection rates in slums
In order to design effective interventions, it is important to examine which slum-specific attributes help explain changes in cumulative infection rates. We use a simple linear regression model to study the relationship between infection rate and slum-specific features. There are a total of 298 slum zones in Delhi. For each zone, we calculated the cumulative infection rate for each influenza type, that is, mild, strong and catastrophic, to be used as the response variable in the regression. For slum-specific explanatory variables, we calculated for each slum zone: the zone population; the average degree for each activity type (ie, home, work, shopping, other, school, college) and for all activities in total; the number of edges in the zone; the network density in the zone (the total number of edges within the zone divided by the maximum possible number of edges); the average degree within the zone for each of the six activity types and in total; the average degree of nodes connected to non-slums for activity types 2-6 and in total; and the average household size in the zone. Next we identified the mutually correlated variables, which were the average degree for home, the average degree for shopping, the average degree within the slum for home, and the average degree connected to non-slums over all activities. We removed the correlated variables and then conducted variable selection using a bidirectional elimination method (the 'both' direction of stepwise selection in R), which is a combination of forward and backward elimination, testing at each step for variables to be included in or excluded from the model using the Akaike Information Criterion (AIC); a code sketch of this procedure appears below. According to AIC, the average number of contacts within a slum zone and the average household size of slums are significant for all influenza models. Table 1 reports the estimates of regression coefficients as well as other statistics in detail. A companion paper will use this information to design effective interventions in slum-specific regions to control the spread of influenza in the Delhi population.

--- DISCUSSION
Even though slum regions contain only 13% of the total population of Delhi, omitting their effects leads to underestimates of the cumulative infection rates and peak infection rates, and overestimates of the time to peak infection. These results are robust under all influenza models considered here. The spread of the virus is faster and larger in the actual population (with slums modelled, as in Network 2) than what policy planners would expect from a slum-free model (Network 1).
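The code sketch referenced in the Results section above: a greedy bidirectional (forward/backward) selection by AIC, mirroring the 'both' direction of R's stepwise procedure, applied to simulated slum-zone data. Covariate names echo the text, but all values are invented, so the selected set will not reproduce Table 1.

```python
# Bidirectional stepwise selection by AIC on simulated slum-zone covariates.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 298  # number of slum zones
df = pd.DataFrame({
    "avg_degree_within": rng.normal(30, 5, n),
    "avg_household_size": rng.normal(15.5, 2, n),
    "zone_population": rng.normal(8000, 4000, n),
    "network_density": rng.uniform(0.5, 0.9, n),
})
df["infection_rate"] = (0.3 + 0.004 * df["avg_degree_within"]
                        + 0.006 * df["avg_household_size"]
                        + rng.normal(0, 0.05, n))

def stepwise_aic(data, response, candidates):
    """Greedy bidirectional elimination: each round, make the single add-or-drop
    move that lowers AIC the most; stop when no move improves AIC."""
    selected = []
    best_aic = smf.ols(f"{response} ~ 1", data=data).fit().aic
    while True:
        best_move = None
        moves = ([selected + [v] for v in candidates if v not in selected] +
                 [[v for v in selected if v != drop] for drop in selected])
        for trial in moves:
            rhs = " + ".join(trial) if trial else "1"
            aic = smf.ols(f"{response} ~ {rhs}", data=data).fit().aic
            if aic < best_aic:
                best_aic, best_move = aic, trial
        if best_move is None:
            return selected, best_aic
        selected = best_move

chosen, aic = stepwise_aic(df, "infection_rate",
                           ["avg_degree_within", "avg_household_size",
                            "zone_population", "network_density"])
print(chosen, aic)
```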
Our results show that (1) if slums' attributes are not appropriately integrated in the population, estimates of epidemic measures such as epidemic size, its peak value and time to peak are underestimated (by magnitudes of 10-50% or more; eg, for strong influenza, accounting for slums increases the peak infection rate by 47.6%); (2) infection rates by subpopulation show that the slum subpopulation has infection rates that are 20 percentage points higher than those in the non-slum subpopulation, and females and young children encounter higher infection rates in the overall population; and (3) the epidemic size and peak infection rate are independent of where the infection is seeded, that is, slum or non-slum. However, the time-to-peak infection changes with seed locations. Average time-to-peak infection is 118 (range 114-125) days when infection is seeded in slum regions compared with an average of 131 (range 124-144) days when it is seeded in non-slum regions. The averages are based on 25 runs for each case and the ranges show the minimum and maximum time-to-peak values across the 25 runs. The qualitative aspects of these results are important because they may extend to other cities and countries, and possibly to other infectious diseases. We also show that initial conditions in terms of seeding locations have no significant impact on the total attack rate and peak infection rate. However, seeding in slums results in faster initial spread of the disease contagion. Among all subpopulations, slums are most vulnerable to the spread of influenza. Thus, special attention and appropriate interventions should be applied to slums in order to control the spread of the virus both in slum areas and in the entire region. Furthermore, these results may be useful in analysing epidemics in other countries and regions with slum populations. This research demonstrates the need to model slum populations, who are more vulnerable to infectious diseases due to large family size, crowded environments and higher mixing rates within and outside slum regions. Ignoring the influence of slum characteristics on their urban environment underestimates the speed of an outbreak and its extent and hence leads to misguided interventions by public health officials and policy planners. Moreover, since the models show that time to peak infection decreases by weeks when slum regions are modelled appropriately, policy planners have significantly less time to react to an outbreak. Finally, the results may have even broader impact for policy planners since over a billion people in the world live in slums. Consequently, these results may be useful for policymaking in several countries, and possibly for other infectious diseases.

--- Model limitations

This model does not consider differences in susceptibility that might occur due to age, pre-existing conditions, and comorbidities. The entire population is assumed to be susceptible at the beginning of the simulation, except for the index cases. In reality, infants, older people, hospital workers and others may be more susceptible.

Contributors CK, AM, AV and HM designed and conceived the study. SC, YC, DX, MK and JC built the social network, carried out the experiments and simulations. JC, SC and AV performed the data analysis. JC, SC, YC, MK, CK, AM, HM, AV and DX helped with reviewing the results, and writing of the paper.
--- Competing interests None declared. Ethics approval Virginia Tech Institutional Review Board reviewed and approved the survey protocol. Provenance and peer review Not commissioned; externally peer reviewed. Data sharing statement Data pertaining to epidemic curves, regression analysis and statistical analysis can be obtained by contacting the corresponding author through email.
Drug markets in disadvantaged African American neighborhoods have altered social and sexual norms as well as sexual networks, which impact an individual's risk of contracting a sexually transmitted infection. Presently, we describe the prevalence of sexual partnerships with males involved with illegal drugs among a sample of non-drug-dependent females. In 2010, 120 Black females aged 18 to 30 years completed a semistructured HIV-risk interview. Descriptive statistics revealed approximately 80% of females perceived neighborhood drug activity as a major problem, 58% had sex with a male drug dealer, 48% reported sex with a male incarcerated for selling drugs, and 56% believed drug dealers have the most sexual partners. Our results suggest sexual partnerships with males involved in the distribution of drugs are prevalent. These partnerships may play a substantial role in the spread of sexually transmitted infections among low-risk females, as drug dealers likely serve as a bridge between higher HIV-risk drug and prison populations and lower HIV-risk females. However, the significance of partnerships with males involved in drug dealing has received little attention in the HIV and drug abuse literature. Presently, there is a need for more research focused on understanding the extent to which the drug epidemic affects the HIV risk of non-drug-dependent Black female residents of neighborhoods inundated with drugs. Special consideration should be given to the role of the neighborhood drug dealer in the spread of sexually transmitted infections.

Keywords: drug markets; African Americans; HIV risk; drug dealers; females

Black females are disproportionately affected by sexually transmitted infections (STIs), including the human immunodeficiency virus/acquired immune deficiency syndrome (HIV/AIDS). For example, the current rate of gonorrhea is 19 times greater and the rate of
Chlamydia is 9 times greater among African American females, compared to White females. The rate of HIV/AIDS among African American females is 18 times the rate for White females, and AIDS is among the leading causes of death among African American women between 10 and 54 years of age (Centers for Disease Control and Prevention [CDC], 2010, 2011). Data suggest that the majority of African American women living with HIV/AIDS contracted the disease through sexual contact with an infected male partner. Furthermore, an overwhelming majority of these cases occur among socially and economically disadvantaged females (CDC, 2011). In the United States, high rates of STIs, the HIV/AIDS epidemic, and the drug epidemic are inextricably linked (CDC, 2011; Celentano, Latimore, & Mehta, 2008). In addition to the elevated rates of STIs found in many low socioeconomic status (SES) neighborhoods, rates of drug use and sales are extraordinarily high as well. In disadvantaged African American communities, in particular, economic and social conditions linked to drug market activity likely influence partner selection, the sexual availability of women, and the type of male sexual behavior that a woman tolerates (Adimora & Schoenbach, 2005). To this end, the purpose of the current article is to describe attitudes about issues related to drug dealing, the prevalence of sexual partnerships with males involved in illegal drug economies, and STI rates among a sample of non-drug-dependent African American females residing in low SES neighborhoods.

--- STIs and Sexual Partner Characteristics

A variety of factors (e.g., individual, interpersonal, and contextual factors) have been shown to contribute to the high rates of STIs affecting African Americans (Adimora & Schoenbach, 2002, 2005; Adimora, Schoenbach, & Doherty, 2006; Andrinopoulos, Kerrigan, & Ellen, 2006; Doherty, Adimora, Schoenbach, & Aral, 2007; Doherty, Schoenbach, & Adimora, 2009; Farley, 2006; Jennings & Ellen, 2004; Johnson & Rapheal, 2009; Laumann & Youm, 1999). In particular, there is a growing base of literature suggesting that sexual partner characteristics, which are influenced by social and economic factors, may be important contributing factors to the disproportionately high rates of STIs experienced by African American females (Adimora & Schoenbach, 2002, 2005; Akers, Muhammad, & Corbie-Smith, 2011; Andrinopoulos et al., 2006; Doherty et al., 2007; Doherty et al., 2009; Farley, 2006; Jennings & Ellen, 2004; Johnson & Rapheal, 2009; Latkin, Curry, Hua, & Davey, 2007; Laumann & Youm, 1999; Sullivan et al., 2011; Zierler & Krieger, 1997). Among the sexual partner characteristics linked to STI status are a male partner's history of incarceration, substance use/abuse, and sexual concurrency (Adimora, Schoenbach, & Doherty, 2006; Johnson & Rapheal, 2009). For example, in a study by Adimora, Schoenbach, and Doherty (2006), more than 25% of the heterosexually transmitted HIV cases among African American females were associated with sex with a nonmonogamous partner. Furthermore, Johnson and Rapheal (2009) suggested that the bulk of the racial disparity in AIDS among women may be explained by male incarceration. For African American females, the type of men with whom they are sexually involved seems to be a risk factor for STIs.
--- Neighborhood Drug Activity and STI Risk

Currently, there is mounting evidence supporting the significance of neighborhood factors in STI transmission patterns (Latkin et al., 2007; Sullivan et al., 2011; Thomas, Levandowski, Isler, Torrone, & Wilson, 2008). In African American communities, neighborhood drug activity is one characteristic that may be relevant to the distribution of STIs. Although drug use and drug dealing occur at all levels of SES and across race/ethnicity, publicly visible drug dealing is primarily concentrated in socially and economically deprived African American communities (Friedman et al., 2003; Saxe et al., 2001; Wallace & Muroff, 2002; Wilson, 1996). Moreover, some data suggest African Americans are more likely to report witnessing drug sales and drug activity in their neighborhoods than any other group (Saxe et al., 2001; Wallace & Muroff, 2002). A study by Latkin et al. (2007) found that drug market activity was perceived as a neighborhood problem by an overwhelming majority of African American study participants. In many urban, economically disadvantaged African American neighborhoods, the drug epidemic has altered social structures, norms and behaviors, and sexual partnerships of core members of drug networks, as well as those of non-drug-dependent residents (Adimora & Schoenbach, 2005). Consequently, practices and behaviors that are ridiculed in mainstream American culture may be more acceptable and commonplace in these socially disorganized neighborhoods. Andrinopoulos et al. (2006) provided some evidence of altered social norms. Specifically, the men in their study reported that they participated in the drug trade to gain social status and respect. In addition, Friedman et al. (2003) found one out of four young people held nonhostile attitudes about drug dealers. Undoubtedly, drug market activity is one aspect of social disorganization that is particularly relevant to the well-being of African Americans living in low SES urban neighborhoods. Presently, there is a plethora of literature on the intersection of the drug and HIV/AIDS epidemics, which focuses on understanding and reducing the high rates of STIs and HIV found among substance users/abusers and their partners (Cavanaugh et al., 2011; Celentano et al., 2008; Latka et al., 2001). However, a plausible, yet understudied, pathway through which drug market activity may impact the HIV/STI risk of non-drug-dependent residents of high-risk communities is through its influence on norms and sexual partnerships. Despite not being core members of drug networks, young adult African American females residing in socially disorganized neighborhoods may be at increased risk for adverse health and social outcomes by virtue of where they live, as the physical places where networks form influence those networks (Doherty, Padian, Marlow, & Aral, 2005). Given that African American adolescents and young adults tend to recruit sexual partners from their immediate environments (Zenilman, Ellish, Fresia, & Glass, 1999), it stands to reason that low-risk females residing in neighborhoods inundated with drugs will form sexual partnerships with higher risk males, such as males involved with drugs through either drug use or drug dealing. In the current exploratory study, we look beyond the drug-dependent individual. The purpose is to describe attitudes toward and the prevalence of sexual partnerships with males involved with drugs among a sample of non-drug-dependent African American females.
These partnerships may play a substantial role in the spread of STIs among young adult African American females.

--- Method

Procedures

The current article includes data on 120 African American females. Data were drawn from a larger cross-sectional study aimed at examining racial differences in individual HIV-risk behaviors, sexual partner characteristics, and rates of STIs among 240 African American and White females. Females between 18 and 30 years of age were recruited through street recruitment in economically disadvantaged neighborhoods, advertisements in local newspapers, and word of mouth. To be eligible for participation in the study, the females had to meet the following criteria: (a) identify as African American or White; (b) be between the ages of 18 and 30; (c) reside in Baltimore City; (d) have no history of regular drug use, excluding alcohol and marijuana; and (e) report being heterosexual or bisexual. Each participant received remuneration for her time and effort. The Johns Hopkins Bloomberg School of Public Health Institutional Review Board approved the study with standard annual reviews.

--- Data Collection

After obtaining informed consent, face-to-face interviews were conducted by trained interviewers. The study assessment battery included (a) a detailed HIV-risk behavior interview; (b) urine testing for Chlamydia, gonorrhea, and the presence of psychoactive drugs (e.g., cocaine, marijuana, opiates, methadone, methamphetamine, and 3,4-methylenedioxymethamphetamine [MDMA]); and (c) a neuropsychological assessment. Each assessment required approximately two hours to complete. All assessments were conducted in a private interview room at the research site. Participants were asked to return within two weeks of their assessment to receive the results of their STI test. HIV/STI counseling was provided to all participants. Specifically, trained counselors provided information and referrals to available and free or low-cost medical services as needed. In addition, the counselors offered free condoms, advice about drug use and safer sex, and a chance to ask questions.

--- Measures

Current employment and financial status-Participants were asked to respond to the following questions: (a) Are you currently employed? (0 = yes or 1 = no) and (b) How difficult has it been for you to pay monthly bills? For the bill-pay item, response options ranged from very difficult (1) to not at all difficult (4). For the purposes of data analyses, we created a dichotomous variable (i.e., 0 = not difficult and 1 = difficult).

Neighborhood drug activity-Two items comprised the neighborhood drug activity variable: (a) I have seen people using or selling drugs in my neighborhood and (b) in my neighborhood, the people with the most money are drug dealers. For the purposes of data analysis, a dichotomous variable was created (0 = no problem/no risk and 1 = problem/risk). Individuals who endorsed one or both of the statements were placed in the problem/risk category.

Male partner characteristics-Information was gathered on male sexual partner characteristics. Participants were asked, "In your lifetime, have you ever had sex with (a) a male who sold or packaged drugs, (b) a male incarcerated for selling drugs, (c) a male who used cocaine or heroin, and (d) a male who had been incarcerated?" Response options included no (0) and yes (1).

Lifetime STIs-Participants were asked to endorse each STI they had been diagnosed with or treated for in their lifetime.
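As a concrete illustration of the dichotomizations described in the Measures subsection, the sketch below (with assumed column names and invented responses; not the study's data or code) recodes the 4-point bill-pay item and one agreement item into binary indicators.

```python
# Hedged sketch of the Measures recoding: assumed column names and invented
# responses, shown only to make the dichotomization rules concrete.
import pandas as pd

survey = pd.DataFrame({
    # 1 = very difficult ... 4 = not at all difficult
    "bill_pay": [1, 2, 3, 4, 2],
    # 1 = strongly disagree ... 4 = strongly agree
    "dealers_most_money": [4, 3, 2, 1, 4],
})

# One plausible split: responses on the 'difficult' half of the scale (1-2)
# are coded 1 = difficult, the rest 0 = not difficult.
survey["bill_pay_difficult"] = (survey["bill_pay"] <= 2).astype(int)

# Agreement items: agree/strongly agree (3-4) -> 1, else 0.
survey["dealers_most_money_agree"] = (survey["dealers_most_money"] >= 3).astype(int)

print(survey)
```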
Each participant responded to a series of items, which included, "In your lifetime, have you ever been diagnosed with or received treatment for (a) Chlamydia, (b) gonorrhea, (c) herpes, (d) syphilis, (e) genital warts, and (f) pelvic inflammatory disease?"

Attitudes toward and beliefs about drug dealers-Attitudes and beliefs were assessed by asking participants a series of questions about their personal attitudes and behaviors, and the perceived attitudes of their friends and family. These statements included the following: (a) I date men who sell drugs because they can give me money and buy me things; (b) men who are involved in selling drugs take care of their women; (c) most men I date use drugs other than alcohol and marijuana; (d) in my community, men who sell drugs have the most money; (e) in my community, drug dealers have the most women/girlfriends; (f) I would never date someone who sells drugs; (g) my friends think it is ok to date men who sell drugs; (h) most of my friends date or have dated men who sell drugs; and (i) my family does not approve of me dating men who sell drugs. Participants were asked to indicate the extent to which they agreed with each of the above statements. Response options ranged from strongly disagree (1) to strongly agree (4). For the purposes of data analyses, a dichotomous variable (i.e., agree = 1 and disagree = 0) was created.

Biological assays-On-site urinalysis for drug metabolites used the Multi-Drug 12 Panel Test, an all-inclusive point-of-use screening test for the rapid detection of tetrahydrocannabinol (THC)/marijuana, cocaine and its metabolite benzoylecgonine, phencyclidine (PCP), morphine and its related metabolites derived from opium (opiates), methamphetamines (including ecstasy), methadone, amphetamines, barbiturates, benzodiazepines, and tricyclic antidepressants (TCA) in human urine. Results of the drug test were available within 5 min. The Johns Hopkins University International STI, HIV, Respiratory, and Biothreat and Emerging Diseases Research Laboratory performed the STI tests. Each urine sample was tested for Chlamydia and gonorrhea.

--- Results

Using SPSS 19, descriptive statistics were obtained. The mean age of the sample was 23.5 years (SD = 3.4). The majority of the sample (84%) completed high school. Approximately 38% reported being employed. The majority of the sample reported not having enough money (57%) or difficulty paying monthly bills (66%). Approximately half of the females were mothers. Forty-seven percent of the sample tested positive for marijuana. Table 1 provides a summary of the sample characteristics.

--- Attitudes Toward and Beliefs About Drug Dealers

When asked about their beliefs about and attitudes toward drug dealers, approximately 30% of the females sampled reported that drug dealers earn the most money and 55% indicated that drug dealers have the most sex partners. Approximately 54% of the sample reported that their friends approved of them dating drug dealers, whereas about 77% indicated that their families disapproved of them dating men involved in selling drugs. Neighborhood drug activity was a major concern for the majority (82%) of the females in the study.

--- Sexual Partner Characteristics

Approximately 58% of females reported having sex with a male involved in drug dealing and 48% reported having sex with a male incarcerated for selling drugs. Sex with a male who used cocaine or heroin was reported by 5% of the sample (see Table 2).
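The group comparison reported next (marijuana users versus nonusers) is a standard chi-square test of independence; a hedged Python sketch follows. The cell counts are invented for illustration and are not the study's data (the authors used SPSS 19).

```python
# Illustrative chi-square test of independence, mirroring the kind of
# comparison reported in the text; counts are invented, not study data.
from scipy.stats import chi2_contingency

# Rows: marijuana user (yes / no); columns: ever had sex with a male who
# had been incarcerated (yes / no). Invented counts summing to N = 120.
table = [[40, 16],
         [34, 30]]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.3f}")
```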
Compared to nonmarijuana users, a greater proportion of marijuana users reported having sex with a male who had been incarcerated (χ2 = 4.42; p < .05). Other sexual partner characteristics did not differ across marijuana use groups.

--- STI Prevalence

Approximately 7% tested positive for Chlamydia or gonorrhea. Self-report data indicated that 46% of participants had an STI in their lifetime, and of those reporting ever having an STI, 18% indicated having two or more STIs in their lifetime. The number of cases for self-reported STIs ranged from two for syphilis to 40 for Chlamydia. One participant reported being HIV positive.

--- Discussion

In the present study, approximately 82% of females reported neighborhood drug activity as a major problem. The majority of females in the study indicated sex with a male who sold drugs (58%) or a male who had been incarcerated for selling drugs (48%). Approximately one out of three females reported that drug dealers in their community earn the most money and approximately one out of two indicated that drug dealers have the most female sexual partners. Finally, 47% of females tested positive for marijuana and more than half of the participants reported having an STI in their lifetime. The current study's findings suggest the drug epidemic in African American communities is likely impacting the norms, behaviors, and sexual partnerships of nonillicit drug-using residents, which may increase their risk for STIs. For example, in our sample, there is some indication of nontraditional social hierarchies and acceptable behaviors. Specifically, 30% of females reported that drug dealers in their community have the highest income. Moreover, our findings that approximately half of the females were sexually involved with a drug dealer and believed their friends approved of them dating drug dealers suggest drug dealing does not carry the same stigma for African American females residing in disadvantaged communities as it does for mainstream Americans. This should not be surprising, given that, since the 1980s, drug dealing among under- or unemployed African American males has remained prevalent, and peer attitudes toward drug dealing tend to be nonhostile (Centers & Weist, 1998; Floyd et al., 2010; Friedman et al., 2003; Friedman et al., 2007; Stanton & Galbraith, 1994; Whitehead, Peterson, & Kaljee, 1994). Despite the negative images of the drug dealer often portrayed by the media, in disadvantaged Black neighborhoods with drug markets, the male drug dealer tends to hold a unique social position. High social status, which is directly tied to income, provides the male drug dealer with more opportunities for sexual relationships with multiple partners at varying levels of risk for HIV (i.e., two out of three study participants reported that drug dealers have the most sexual partners). For example, given their social status, drug dealers may be more likely to engage in concurrent or serially monogamous relationships with nonillicit drug-using females. In addition, it is not uncommon for users of heroin and cocaine to exchange sex for drugs, further increasing the drug dealer's risk for contracting and transmitting STIs (Chen, McFarland, & Raymond, 2011; Friedman et al., 2003; Inciardi, 1990). Incarceration is another factor contributing to the heightened HIV risk of males involved in illegally distributing drugs.
In the United States, African Americans, who account for approximately 12% of the population, account for 33.6% of arrests for the illegal sale, manufacturing, and possession of drugs (U.S. Department of Justice, 2010a, 2010b). Males entering correctional facilities are exposed to a population with a high prevalence of risky behaviors and infectious diseases. For example, while incarcerated, heterosexual males are at increased risk of engaging in consensual or forced same-sex relationships (Thomas et al., 2008). Upon reentering the community, these individuals return to heterosexual relationships. Consequently, males who are incarcerated for drug-related offenses may serve as bridges between high HIV-risk populations (e.g., drug users and prisoners) and lower risk females. In our study, almost half of the females reported sex with a male incarcerated for the distribution of illegal drugs. The present study's findings are in line with research indicating that young adult African American females tend to engage in disassortative mixing (Adimora & Schoenbach, 2005; Adimora, Schoenbach, & Doherty, 2006; Senn, Carey, Vanable, Urban, & Sliwinski, 2008). Disassortative mixing occurs when higher risk and lower risk persons form partnerships (Laumann & Youm, 1999). Sexual partnerships between high-risk drug-dealing males, who are connected to core members of drug networks and prison populations, and lower risk young adult females are an example of disassortative mixing. Such disassortative sexual mixing has been found to promote heterosexual transmission of HIV and may help explain the extraordinarily high rates of STIs found among this sample of African American females (Adimora & Schoenbach, 2005; Laumann & Youm, 1999). The current study demonstrates the need to systematically examine how the drug epidemic in low SES Black communities impacts sexual networks beyond core members of drug networks to include non-heroin-using and non-cocaine-using females. However, the study has several limitations. The small sample size likely resulted in underestimates of particular variables. The small sample size also limits our ability to test statistical models. In addition, we used a convenience sample, which limits the generalizability of the findings. Despite these important limitations, this study is unique in that it considers the effects of the drug epidemic on the HIV risk of nonillicit drug-using females and, thereby, provides a platform for future research. In conclusion, drug markets in disadvantaged African American communities continue to be a major problem and their impact extends beyond drug users. Based on our findings, it can be argued that neighborhood drug market activity is influencing the norms and sexual partnerships of non-drug-dependent African American females. It is our hope that this article serves as an impetus for careful consideration of the impact of the enduring drug epidemic on sexual partnerships of low-SES African American young adult women who do not use hard drugs (e.g., heroin, cocaine, or methamphetamines). Specifically, more research into the relationship between involvement with a drug dealer and STI status among African American females is needed. This knowledge may yield valuable insight into why this group remains disproportionately affected by STIs.

Table 1. Descriptive Characteristics (N = 120).
This study provides a much-needed empirical study of workplace inclusion of underresourced employees of low socioeconomic status. Based upon a conservation of resources perspective, we have examined the centrality of resources as a key inclusion process for well-being outcomes for employees with insufficient resources. In the context of misuse of institutional power over operative workers within highly segmented and hierarchical work settings, this study validates the importance of economic inclusion for fostering workers' well-being via fair employment practices. This study also offers new knowledge on the integrative resource model of workplace inclusion research by validating workers' personal resource of learning orientation as an internal condition for strengthening the positive effect of economic inclusion on well-being.
--- Introduction

Almost half of humanity lives on less than USD 5.50 a day and is increasingly deprived of the resources needed to achieve economic and social well-being (Oxfam International, 2021). To create a more equitable society, business leaders need to establish more inclusive workplaces for resource-deprived workers. While resource accessibility and integration are paramount to reducing inequality for those workers, we know little about what and how resources ought to be distributed in workplaces. Workplace inclusion for workers with a low socioeconomic status is significantly underresearched. By adopting the conservation of resources (COR) theory, which promotes economic, social, and personal resources relevant to workplace inclusion (Hobföll, 2002; Hobföll et al., 2018), this study explores how managers may grant workers equal access to their valuable resources. Following the sociological approach to workforce inclusion, we regard resources and inclusion as closely related concepts that are particularly important for underresourced workers. Notably, status construction theory suggests that individuals' resource accessibility reinforces the differentiation of shared identity and social values attached to individuals in society (Ridgeway, 1991; Ridgeway & Correll, 2006). The low social value that resourced members project onto less-resourced members reinforces the latter's limited access to resources in a vicious cycle of resource distribution (cf. Bapuji et al., 2018). This becomes particularly salient in a highly segmented and hierarchical work system such as that of the Bangladeshi factories in the Global South, operating under the global production network (Alamgir et al., 2021; DiTomaso et al., 2007). Underpinned by this sociological perspective on workforce diversity, Mor Barak and Cherin (1998) define inclusion as the degree to which individuals feel they can access resources and information as part of organizational processes, and Nishii (2013) defines a climate of inclusion as employee-shared perceptions of a work environment that grants every worker (regardless of background) equal access to resources via fair employment practices and inclusive interactions. Prior inclusion studies have mainly solicited data from HR professionals, skilled employees, and managers in relation to employees' sociopsychological processes and outcomes in the context of workplace diversity, largely overlooking processes necessary for unskilled, underresourced employees as part of workplace diversity research (with the exceptions of Fujimoto et al., 2019; Janssens & Zanoni, 2008; van Eck et al., 2021). Similar to previous workplace inclusion research, COR research in employment contexts has been limited to a primary focus on the mainstream work context in Global North settings, such as people experiencing psychological contract breaches, layoffs, job burnout, and depression in professional work, to discover employees' recovery processes (see Hobföll et al., 2018). In the context of the Global South, low-wage workers experience "hunger, starvation and the survivability of their families" (Alamgir et al., 2021, p. 10) and are increasingly resource deprived within the global production network. Surprisingly, little is known about workplace inclusion in an underresourced context. We therefore theorize on and investigate workplace inclusion from a COR perspective in one of the most underresourced workplaces: Bangladeshi garment factories.
COR theory emphasizes the common response of a group of people withstanding traumatic challenges as an inclusive transactional model of how they integrate various resources (i.e., economic, social, and personal resources) to build their self-resilience (Hobföll, 1998, 2001, 2011; Lyons et al., 1998). Previous resource-based research has examined traumatic circumstances that deprive people of their resource capacity, such as the Holocaust (Antonovsky, 1979); war trauma, terrorism, and military occupation (Hobföll et al., 2018); and refugee experience (Nashwan et al., 2019). Emerging from such contexts, COR theory explains the power of individual gains of centrally valued resources that build resilience for managing stress and overcoming traumatic conditions (Hobföll, 2001). In the context of our study, we argue that COR theory is particularly relevant to factory workers in the Global South for regaining their dignity and self-worth through the way they obtain and retain resources (cf. Bain, 2019). By adopting the COR perspective, we therefore explore how managers may design workplace inclusion initiatives that integrate resources to foster workers' inclusion experience and well-being. We conduct a quantitative study supplemented by a preliminary qualitative study showing the importance of fair employment practices (i.e., managerially initiated fair human resource management practices, such as fair pay, training, and grievance procedures) via integrative economic and personal resourcing mechanisms (cf. Alamgir & Banerjee, 2019). By filling substantial gaps in the COR and workplace inclusion literature relating to what and how resources are required to enhance the inclusion of significantly exploited workers at workplaces, we advance knowledge in three significant ways. First, we provide a much-needed empirical study of workplace inclusion among one of the most underresourced employee groups in the world, who face traumatic conditions. COR perspectives enrich workplace diversity research by providing more nuance on how economic, social, and personal resources might act as the basis of an inclusion mechanism that reduces inequality by breaking the link between resource disadvantages and the categorical group, which is highly relevant for employees with significant resource deprivations (cf. Tajfel & Turner, 1986). To the best of our knowledge, our study is the first COR-based examination of inclusion that explores the centrality of resources as a key inclusion process and well-being outcome for those underresourced employees. In the context of misuse of economic power over operative workers within highly segmented and hierarchical work settings, we validate the significance of managerially initiated economic inclusion via fair employment practices (e.g., economic resources from fair pay, fair training investment, medical services) for fostering workers' well-being. We did not find positive effects of non-rank sensitive inclusion (as a social resource: a psychological sense of community) on well-being. This contradictory finding indicates that social resources alone cannot foster workers' well-being, highlighting the different needs of underresourced employees compared to resourced employees.
Second, we offer new knowledge on the integrative resource model of workplace inclusion by validating workers' job-related learning orientation (LO) as an internal condition for strengthening the positive effect of economic inclusion on life satisfaction, inferring the power of human agency to achieve a "workable modus vivendi" in objective circumstances (Archer, 2003, p. 5). Additionally, our findings showed that LO did not strengthen the relationship between managerial consideration and well-being via non-rank sensitive inclusion as a social resource. This finding informs scholars that personal resources such as LO can only act as a conditioning factor when workers in the Global South, such as Bangladesh, are given adequate economic resources. Third, we extend research on the COR perspective of business ethics for the well-being of factory workers in the Global South by signifying a micro-level managerial intervention that provides economic resources via employment practices to address persistent exploitation (cf. Bowie, 2017). Next, we discuss the concepts of economic, social, and personal resources relevant to workplace inclusion before developing and presenting our conceptual model and associated hypotheses based on the extant literature and initial interviews.

--- Economic, Social, and Personal Resources for Workplace Inclusion

The COR theory of integrative resources for vulnerable populations has led us to develop an integrative resource model of inclusion and associated hypotheses. We have tied economic, social, and personal resources to workplace inclusion and well-being for factory workers in the Bangladeshi readymade garments sector.

--- Economic Resources for Inclusion

Factors such as economic stability, job security, and material accessibility are related to economic resources within COR theory (Hobföll et al., 2018). Based on a socioeconomic approach to addressing economic deprivation, we define economic inclusion of vulnerable and underprivileged groups as access to the financial and material resources that can provide a greater sense of control and future prospects while cultivating a sense of belonging (Atkinson, 1998; Wagle, 2005). As such, economic inclusion promotes job security as a sign of inclusion, symbolizing the organizational acceptance of each employee as an insider in the work system (cf. Pelled et al., 1999). For instance, previous research highlights that, in many cases, cost competitiveness and market volatility within the Bangladeshi readymade garments sector have been managed at the expense of workers through inhumane economic work practices (Alamgir et al., 2021). This necessitates greater economic inclusion among workers in such a context.

--- Social Resources for Inclusion

The social support and attachment practices that allow for healing through communal sharing are generally referred to as social resources (Hobföll, 2014). Relating social resources to social inclusion for workers of low socioeconomic status, we adopt Janssens and Zanoni's (2008) concept of non-rank sensitive inclusion as a manifestation of social resources, meaning relational inclusion for hierarchically segregated workers. This inclusion is distinct from rank sensitive inclusion, which often means inviting employees to take part in joint organizational information and decision-making processes (Nishii, 2013; Shore et al., 2018).
Furthermore, non-rank sensitive inclusion also helps engender a psychological sense of community that emphasizes meeting workers' emotional and symbolic sense of belonging through mutually supportive relationships in a workplace (Sarason, 1974). In the midst of a life-threatening work context, non-rank sensitive inclusion supplements economic inclusion. For instance, as reported in Alamgir and colleagues' recent study of the impact of the COVID-19 pandemic on Bangladeshi factory work practices, workers were subject to managerial verbal and behavioral abuse, which points to the need for more attention to the managerial approach to addressing unfair work practices (Alamgir et al., 2021).

--- Personal Resources for Inclusion

Personal resources refer to the degree of personal orientation and key skills in life such as mastery, self-resilience, efficacy, optimism, and job skills (Hobföll et al., 2018). To exemplify personal resources: during the COVID-19 crisis, while working in unhygienic workplaces, female workers individually negotiated the norms of protection from the virus by taking their own initiatives to protect their lives (Alamgir et al., 2021), indicating their resilience for survival in light of their economic needs. COR theory (Hobföll, 1988, 1989, 1998) signifies those personal resources used by individuals to manage their stress (Hobföll, 2001). We envisage that workers will leverage the impact of economic and non-rank sensitive inclusion on employee well-being with resilience and mastery in important life tasks (Hobföll, 2012). Considering their ability to perform their job as an essential life skill for their survival, we regard the workers' job-related learning orientation (LO) as a personal resource that leverages external resources to promote well-being (Elliot & Church, 1997).

--- Conceptual Model and Hypotheses Development

Supplemented by our preliminary explorative interviews (method outlined later), this section presents our conceptual model and hypotheses development. Figure 1 shows our proposed conceptual model. We first develop our hypothesized direct relationships in relation to (1) fair employment practices and well-being and (2) managerial consideration and well-being. We then develop moderated mediation hypotheses on how economic inclusion, non-rank sensitive (social) inclusion, and LO may offer more nuance to the direct relationships. Our conceptual model also corroborates a recent study conducted by Alamgir et al. (2021) in the same Global South work context, reporting the oppressive economic and social conditions that demand the resiliency of readymade garments factory workers in Bangladesh.

--- Fair Employment Practices and Well-Being

Previous research on employees of low socioeconomic status has stressed unique ways to provide sustainable employment (George et al., 2012; Silverthorne, 2007). In light of their deprived employment conditions, we hypothesize the centrality of fair employment practices that influence their well-being. In the workplace inclusion literature, fair employment practices are referred to as managerially initiated fair human resource management practices, such as fair pay, fair promotion, and fair training investment, to ensure that resource distribution is unrelated to identity group membership and to reduce the tendency of groups with more resources to command more power over groups with fewer resources (Nishii, 2013; Randel et al., 2018).
Previous research on Bangladeshi factory workers has reported that employment practices such as immediate employment termination, the absence of medical insurance, and labor unions' powerlessness have left workers with no sense of control in their pursuit of respectable employment practices (Sharma, 2015). The COVID-19 crisis exposed workers' potential death by hunger as a result of insecure employment practices such as delayed pay and unhygienic practices (Alamgir et al., 2021). Our initial interviews also corroborated those studies, identifying employment practices, mainly fair pay, training investment, grievance procedures, and safety measures, as vital sources of life satisfaction. Therefore, we propose that the job security of underresourced employees obtained through fair employment practices is central to protecting their own and their families' livelihoods. As part of diversity climate research, researchers have found that perceptions of fair employment practices are positively related to employees feeling senses of hope, optimism, and resilience (Newman et al., 2018), as well as to job satisfaction (Hicks-Clarke & Iles, 2000; Madera et al., 2013), organizational commitment (Gonzalez & DeNisi, 2009; Hopkins et al., 2001), psychological empowerment, freedom of identity, and organizational identification (Chrobot-Mason & Aramovich, 2013). In this study, we examined the link between fair employment practices and life satisfaction. Life satisfaction is acknowledged as "a subjective global evaluation of whether one is content, satisfied and/or happy about one's life", referring to overall subjective well-being (Cheung & Lucas, 2015, p. 120; Schmitt et al., 2014). We regard life satisfaction as an appropriate well-being indicator for Bangladeshi garment workers, who often experience extreme adversities in their work and nonwork domains as a whole. Hence, we hypothesized:

Hypothesis 1: Fair employment practices are positively and significantly related to Bangladeshi garment workers' life satisfaction.

--- Managerial Consideration and Well-Being

A recent review of research on low-wage employees has reported the importance of managerial consideration in meeting their specific needs and addressing their marginalized voices (van Eck et al., 2021). In a COVID-19 study, factory owners/managers showed indifference to workers' well-being, while workers tolerated devastating work and employment practices to keep their jobs in order to survive (Alamgir et al., 2021). Based upon non-rank sensitive inclusion and previous studies, we adopted managerial consideration from the concept of managerial individual consideration, a key component of transformational leadership (Avolio & Bass, 1995), defined as a leader's ability to treat followers individually with care and concern, continually striving to meet their needs and to develop each individual's full potential (Avolio et al., 1999; Bass, 1985). Inclusive leadership theories have defined inclusive leaders as inviting and appreciating others' contributions (Nembhard & Edmondson, 2006), being open, accessible, and available (Carmeli et al., 2010, p. 250), fostering power sharing (Nishii & Mayer, 2009), and facilitating a sense of belonging and uniqueness in followers (Randel et al., 2018). We argue that a leader's consideration for each worker's welfare may authenticate the leader's inclusiveness for workers under significant exploitation in the work system.
We suggest that managerial individual consideration provides nuance to the behavioral aspect of inclusive leadership and is particularly meaningful for significantly exploited employees seeking to obtain a sense of self-worth within the hierarchically segregated work system. Our preliminary interview data also indicated a lack of managerial consideration for workers' overworked and stressful experience, such as managers not giving breaks, harsh words from supervisors for having small chats, rude supervisor behavior, and threats to job security. In light of inclusive leadership for workers within the highly segmented and hierarchical work system, we proposed that managerial individual concerns, such as giving personal attention and helpful behaviors, might help workers disavow their preconceived socioeconomic status and rank within the work system and view themselves as valued individuals, which, in turn, might increase their level of well-being. Hence, we hypothesized:

Hypothesis 2: Managerial consideration is positively and significantly related to Bangladeshi garment workers' life satisfaction.

We further contend that the degree of fairly implemented employment practices as a foundation for economic resources, managerial consideration as a foundation for social resources, and employees' LO as a foundation for personal resources may contribute to their resource gain processes (Hobföll, 2012, p. 5; Parks-Yancy et al., 2007). We contend that employees need these resources to enhance their ability to manage and mitigate their trauma and stress levels (Gallo & Matthews, 2003; Gallo et al., 2005). Particularly in the context of significant resource loss, as in the case of Bangladeshi factory workers, COR theory and empirical results indicate that resource gain carries significant impact in loss circumstances for individuals managing their stress (Hobföll, 2002, 2012; Hobföll & Lilly, 1993). Past researchers have indicated that access to a broad range of resources is positively related to well-being (Diener et al., 1995; Diener & Fujita, 1995). Thus, we predicted that possession of economic, social, and personal resources would be an important determinant of workers' well-being (cf. Hobföll et al., 1990).

--- Fair Employment Practices, Economic Inclusion, and Life Satisfaction

We envisage fair employment practices in the areas of pay, grievance procedures, training investment, and career opportunities as avenues to foster a sense of economic inclusion among Bangladeshi garment workers. In light of the industry being notorious for its low wages and sudden layoffs of factory workers (Bain, 2019), employment practices may be used to enhance workers' senses of economic security, independence, stability, and sufficiency (Grossman-Thompson, 2018; Kossek et al., 1997). Although low-income employees are often regarded as low-status and disposable labor with restricted employment opportunities (Collinson, 2003; Grossman-Thompson, 2018), a previous study reported that managerially initiated reforms of employment practices (e.g., offering above-minimum-wage jobs, adequate pay to cover basic necessities, child care support, housing subsidies, or job training) were successful initiatives that fostered the economic security of the working poor (Kossek et al., 1997). Recent research has also verified the importance of material safety (e.g., payment security) as part of workplace inclusion initiatives for low-wage workers (van Eck et al., 2021).
We contend that the perception of economic or material resource gain as a result of fair employment practices may produce a critical condition for workers to reverse their perceptions of status-biased treatment and shape their perceptions of inclusion in the workplace. In our preliminary interviews, the workers (mostly female workers) reported inadequate economic resources in both direct and indirect pay (e.g., lack of training investment, onsite medical support, maternity and child care facilities, and the inability of female workers to apply for promotion), demonstrating a potential overlap between perceptions of fair employment practices and economic inclusion. Our qualitative insights also indicate that the patriarchal society of Bangladesh inherently places women in subordinate positions, warranting female-related inclusive employment practices within the Bangladeshi context. Notably, economists and psychologists have found income and relative income to be key determinants of subjective well-being (Clark & Lelkes, 2005). Economic factors have also been significantly associated with life satisfaction, and those with higher incomes have reported more life satisfaction than those with lower incomes (Bellis et al., 2012; Cheung & Lucas, 2015; DeNeve & Cooper, 1998; Diener, 2000). Due to economic hardship, individuals of low socioeconomic status have consistently reported weaker psychological states (e.g., stress and depression) and poorer health than individuals of high socioeconomic status (Adler et al., 1994; Calixto & Anaya, 2014; Everson et al., 2002; Verhaeghe, 2014). Therefore, we tested the effect of fair employment practices on life satisfaction through economic inclusion. Accordingly, we hypothesized:

Hypothesis 3: Bangladeshi garment workers' sense of economic inclusion positively mediates the relationship between fair employment practices and their life satisfaction.

--- Managerial Consideration, Non-rank Sensitive Inclusion, and Life Satisfaction

Managerial consideration reflects social support that affects individuals' social interactions and provides actual assistance and a feeling of attachment to a person or group that is perceived as caring or loving (Hobföll & Stokes, 1988). Social support is the most recognized aspect of social resources for managing stress in the midst of extremely stressful circumstances (Palmieri et al., 2008; Schumm et al., 2006). It fosters a healthier social identity and lends a sense of being valued, which, in turn, promotes psychological well-being (Hobföll et al., 1990). Previous studies on the working poor reported supervisors personally providing counseling and helping to meet employees' needs (e.g., addressing individual transportation and domestic violence problems) as key diversity initiatives that created strong social bonds within the workplace (Kossek et al., 1997). A past study also reported the managerial approach of addressing the individual needs of operative workers as a key dimension of non-rank sensitive inclusion, breaking the norm of exploitation and simultaneously building symbolic and emotional connections among the workers (Holck, 2017; Janssens & Zanoni, 2008). Recent research has also verified the importance of non-task-oriented involvement for building social relationships for greater inclusion of low-wage workers in workplaces (van Eck et al., 2021).
Thus, we predicted that managerial consideration, as a social resource for the workers, would foster non-rank sensitive inclusion, which would, in turn, enhance workers' psychological well-being. We elaborated on the concept of non-rank sensitive inclusion by adopting the concept of a psychological sense of community. It has been regarded as a feeling that members matter to one another and a shared faith that members' needs will be met through their commitment to be together, such that one does not experience sustained feelings of loneliness (Clark, 2002; McMillan & Chavis, 1986; Sarason, 1974). Therefore, a psychological sense of community (here, non-rank sensitive inclusion) is a social resource that fills members' needs for belonging and personal relatedness in a group (Boyd & Nowell, 2014; Nowell & Boyd, 2010). This concept reflects widely shared definitions of inclusion that emphasize members' insider status. COR theory (Hobföll, 1989, 2001) suggests that social resources positively impact affective states in a group, such as creating reservoirs of empathy, efficacy, and respect among workers, which then foster collective well-being (Hobföll et al., 2018). Therefore, we predicted that a crossover of social resources from managers giving individual workers personal attention, help, and guidance would foster a psychological sense of community among the workers and foster well-being. For example, our initial interviews and a more recent study of the COVID-19 crisis reported a significant need for managerial consideration of the workers' overwhelmingly heavy burdens to foster a sense of close relationships at the workplace (Alamgir et al., 2021). Workers' non-rank sensitive inclusion (sense of community) has been found to enhance psychological well-being (Davidson & Cotter, 1991; Peterson et al., 2008; Pretty et al., 1996; Prezza & Pacilli, 2007). Hence, we hypothesized:

Hypothesis 4: Bangladeshi garment workers' non-rank sensitive inclusion positively mediates the relationship between managerial consideration and their life satisfaction.

--- Employee Learning Orientation

Workers of low socioeconomic status tend to lack certain characteristics, such as a can-do attitude and diligence (Shipler, 2005), and report lower perceived mastery and well-being as well as higher perceived constraints (Lachman & Weaver, 1998). We recognize that the lack of these personal characteristics is largely attributable to the significantly exploitative practices and mistreatment exhibited within the Bangladeshi garment factories. Yet we also acknowledge the employees' capacity to establish a modus vivendi through their inner discourse between a given environment and their well-being (cf. Archer, 2003). Along with psychological studies that challenge Western-focused trauma treatment of post-stressful events (e.g., war), researchers have called for a more complex psychological model that considers individuals' varying mental conditions and their need for stress-management capacities amidst significant challenges (Attanayake et al., 2009; De Schryver et al., 2015). For example, our initial interviews reported on the workers' resilience in becoming financially independent and their desire to obtain more training and feedback to learn new skills and support themselves and their family members with extra earnings.
COR research conducted in other contexts (e.g., disaster) has reported that resource loss enhances individuals' motivation to cope with the negative impact of a disruptive event (Freedy et al., 1992; Kaiser et al., 1996). Past COR-related research also indicates that personal orientation integrated with immediate negative (and positive) events can affect the extent of life satisfaction (Suh et al., 1996). Similarly, we argue that employees having higher job-related LO levels should strengthen (i.e., moderate) the resourcing effects of inclusion on well-being (Hobföll, 2012) by leveraging those resources to enhance their life satisfaction. As such, we theorize that employees will gain varying perceptions of economic inclusion depending on their perceptions of fair employment practices, as well as on their degrees of LO, which influences their well-being. Notably, most of our interview participants also perceived regular income to be highly valuable for developing their lives. This further suggests that Bangladeshi workers with a higher degree of job-related LO would interact positively with perceived fair employment practices, leading to increased well-being via their economic inclusion. As such, we propose:

Hypothesis 5: LO moderates the positive effects of fair employment practices on life satisfaction through economic inclusion in such a way that this mediated relationship is stronger for Bangladeshi garment workers with higher LO levels.

Furthermore, based on COR theory, we contend that those with high LO levels would capitalize on managerial consideration to strengthen their learning and show a greater ability to cope with stressful situations than those with low LO levels (cf. Hobföll et al., 1990). An employee's LO has been reported to influence social interactions with a mentor in such a way that having an LO level similar to that of the mentor tends to yield the highest levels of psychosocial support from the mentor (Godshalk & Sosik, 2003; Liu et al., 2013). Past research also indicates that, due to their underresourced condition, workers with low socioeconomic status tend to show more relationship-dependent and other-oriented behaviors than those with high socioeconomic status, who tend to display more social disengagement and independence (Kraus & Kelner, 2009; Kraus et al., 2009). Our interview participants also echoed their desire to learn new skills from their supervisors and co-workers if this was supported by management. Hence, we theorized that Bangladeshi garment workers with high LO levels would not only be more receptive to managerial consideration but would also regard other workers as valuable social resources for managing their stress. Thus, we predict that, by receiving personal managerial care and concern, workers from the Global South with high LO levels would reinforce their relationship-dependent and other-oriented behaviors within a factory, thus promoting their non-rank sensitive inclusion (psychological sense of community) and well-being. Therefore, we hypothesize:

Hypothesis 6: LO moderates the positive effects of managerial consideration on life satisfaction through non-rank sensitive inclusion (psychological sense of community) in such a way that this mediated relationship is stronger for Bangladeshi garment workers with higher LO levels.

As outlined earlier, we assess our integrative resource model and its associated hypotheses in the context of Bangladeshi garment workers representing the Global South.
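Hypotheses 5 and 6 describe first-stage moderated mediation. As a hedged illustration of how such an index of moderated mediation can be estimated with percentile bootstrapping (one common approach, cf. Hayes, 2015; not the authors' analysis code), the sketch below simulates data for Hypothesis 5 with assumed variable names (FEP, EI, LO, LS).

```python
# A minimal bootstrap sketch (assumed, not the authors' analysis) of the
# first-stage moderated mediation in Hypothesis 5: fair employment
# practices (FEP) -> economic inclusion (EI) -> life satisfaction (LS),
# with learning orientation (LO) moderating the FEP -> EI path.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 300  # simulated respondents, for illustration only
df = pd.DataFrame({"FEP": rng.normal(size=n), "LO": rng.normal(size=n)})
df["EI"] = 0.5 * df["FEP"] + 0.3 * df["FEP"] * df["LO"] + rng.normal(size=n)
df["LS"] = 0.6 * df["EI"] + 0.2 * df["FEP"] + rng.normal(size=n)

def index_of_moderated_mediation(d: pd.DataFrame) -> float:
    # First stage: does LO moderate the FEP -> EI path?
    a3 = smf.ols("EI ~ FEP * LO", data=d).fit().params["FEP:LO"]
    # Second stage: effect of EI on LS, controlling for FEP and LO.
    b = smf.ols("LS ~ EI + FEP + LO", data=d).fit().params["EI"]
    return a3 * b  # product of paths: index of moderated mediation

boot = [index_of_moderated_mediation(df.sample(frac=1.0, replace=True))
        for _ in range(1000)]
lo_ci, hi_ci = np.percentile(boot, [2.5, 97.5])
print(f"index = {index_of_moderated_mediation(df):.3f}, "
      f"95% bootstrap CI [{lo_ci:.3f}, {hi_ci:.3f}]")
```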
Thus, before embarking on our preliminary qualitative and main quantitative study, we provide the context of Bangladeshi garment workers' well-being.
--- The Context of Bangladeshi Garment Workers' Well-Being
Bangladesh is the second largest RMG-exporting country after China (Asian Center for Development, 2020). Over the 2020-21 fiscal year, the RMG sector accounted for 81.16% of the country's exports. The RMG industry is the second largest employer in Bangladesh, employing over 4 million people, with the highest number of women of any industry in Bangladesh (Asian Center for Development, 2020; BGMEA, 2021; International Labour Organization [ILO], 2020). The gender composition of RMG workers is 60.5% women and 39.5% men (ILO, 2020). However, the ratio of women in leadership was reported to be below 10% for supervisors and only around 4% for managers in the RMG industry (ILO, 2019). At the institutional level, Bangladeshi apparel workers' well-being ought to be governed by the country's constitution and the Bangladesh Labour Act of 2006. The Act has been amended multiple times with the ILO toward meeting international labor standards. The amendments to the Bangladesh Labour Act of 2006 took place in 2013 and 2018 and involved workers' rights, freedom of association to form trade unions, and occupational health and safety conditions (ILO, 2016; Sattar, 2018). After the tragic collapse of the Rana Plaza building in 2013, the Bangladeshi government, the Employers Federation, the Bangladesh Garment Manufacturers and Exporters Association (BGMEA), the Bangladesh Knitwear Manufacturing and Exporting Association (BKMEA), the National Coordinating Committee for Workers' Education, and the Industrial Bangladesh Council (representing unions) signed the National Tripartite Plan of Action (NTPA) to make necessary reforms on fire safety and structural integrity in the garment sector. Although major buyers, retailers, and key fashion brands were not signatories of the agreement, some MNCs formed the Accord and the Alliance (the two competing MNC governance initiatives) to improve workplace safety in Bangladesh, especially for the RMG industry (As-Saber, 2014). Despite considerable institutional enforcement and the MNCs' initiatives over the years (Berg et al., 2021), the European Union has reported MNCs' significant shortcomings in respecting labor rights and asked the Bangladeshi government to further amend the Labour Act to address forced labor, minimum wage requirements, and violence against workers (Haque, 2020). Recent studies on the sector have also documented a misrepresented compliance framework that assures MNCs' and local authorities' legitimacy and gives the impression that management has made changes to workers' conditions (Alamgir & Alakavuklar, 2018; Alamgir & Banerjee, 2019; Alamgir et al., 2021). Bangladeshi RMG workers' wages are still among the lowest in the world (Barrnett, 2019). During the COVID-19 pandemic, their "live or be left to die" employment conditions disclosed inhumane employment practices (e.g., unpaid and unstable wages; unhygienic practices) that confirmed the superficial compliance arrangements (Alamgir et al., 2021). The harsh conditions have led to deaths and several injuries as a result of garment workers' protests demanding their pay and a revised minimum wage (Connell, 2021; Uddin et al., 2020).
Moreover, female factory workers, who make up the majority of the garment workforce, still work in repressive environments, where they are exploited by their managers and supervisors, have no freedom or opportunities for promotion at work (Islam et al., 2018), and experience economic, verbal, physical, and sexual violence (Naved et al., 2018) in order to survive within the gendered global production network (Alamgir et al., 2021). To the best of our knowledge, although macro-level initiatives by multiple actors have been the central focus of efforts to improve these workers' conditions, little attention has been paid to micro-level workplace inclusion from a resourcing perspective that directly affects their well-being. To date, several studies have highlighted macro-level interventions related to including factory workers in the Global South from various ethical viewpoints. Notably, Kantian ethics have been applied to MNCs to promote global practices that uphold workers' rights (e.g., Arnold & Bowie, 2003) and the government's responsibility to collaborate with employers to provide an acceptable minimum wage/living wage to workers (Brennan, 2019). Altruistic utilitarian ethics have been applied to relational corporate social responsibility to promote workers' economic and social development (Renouard, 2011). New governance perspectives have been applied to laws (Rahim, 2017), sweatshop regulation (Flanigan, 2018), and various actors to promote bottom-up interdependent approaches, address inactive, ineffective, or inequitable governance mechanisms, and improve working conditions (Niforou, 2015; Schrage & Gilbert, 2019). Alamgir et al. (2021) called for a more granular investigation of the causes of failure, beginning with how workers perceive the work practices intended to improve their economic and social well-being. We extend this research route via a micro-level exploration of workplace inclusion mechanisms for factory workers from the COR perspective.
--- Method
To build a rich understanding of our focal COR constructs in the context of Global South workplace inclusion and to inform our main quantitative study, we conducted preliminary interviews with 23 Bangladeshi garment workers. These interviews gave us grounding to corroborate our model development and to construct our measurement items in light of the existing literature and the Global South context. The subsection that follows discusses the research approach undertaken for conducting these preliminary interviews and their integration into our conceptual model and hypothesis development.
--- Preliminary Interview Approach
Upon receiving ethical clearance, and with the assistance of a local nonprofit organization specializing in garment workers in Bangladesh, the research team sought approval from a range of RMG factories to interview their workers. Nine factories granted us permission to approach and interview those workers who voluntarily agreed to participate in our study. One of the research team members then conducted interviews with 23 garment workers in the local Bengali language. The interviews were translated into English by another Bangladeshi researcher in Australia. As shown in Table 1, nineteen of our participants were female workers, and the remaining four were male. All participants had completed primary schooling, and some had received secondary school certificates.
All participants had worked for at least 2 years and had similar salaries, and they worked across a range of small to large (well-established) RMG factories in Bangladesh. We asked our interview participants broad questions, such as "What have you received from this factory that made you feel valued?", "What makes an inclusive workplace?", "What causes you the most stress in your employment?", and "What sort of practices influence your life satisfaction?" Interviews lasted for 20-30 min and were audio-recorded (with permission) and transcribed. We analyzed our data in light of what constitutes an inclusive workplace and what causes employees to feel stress in their employment. We agreed that workers expressed their sense of resource deprivation in terms of how managers treated them. The participants described an inclusive workplace in terms of fair employment practices, such as managerial treatment with equal respect regarding work safety, better pay, job security, voices being heard, and transparent policies, and in terms of work relationships, such as being an "integral part of [the] workplace" and "a team member of the factory." In our interviews, we did not initially adopt the COR resource perspectives of economic, social, and personal dimensions. After discussing key patterns in the data, we connected the informants' inclusion references with the COR perspective to categorize how workplaces have fostered various resources and well-being. Since inclusion highly connotes resource accessibility, it also made sense for us to use the COR perspective to articulate how different resources promoted workplace inclusion. We identified economic, social, and personal resources as common resources that created an inclusive factory. In relation to economic resources, some participants reported on the fairness of employment practices with economic implications, saying, "If someone is working late, s/he is just a hard worker," whereas other participants reported receiving "wages every month while working in different RMG factories. I could work overtime and earn extra money for each additional hour. I am happy, and, above all, I am respected in my neighborhood." Some participants highlighted social resources in the form of managerial consideration for their job-related development, such as receiving clear job instructions and permission to attend outside training; others highlighted a lack of such concern, in the form of poor training instruction, continuous pressure to meet shipment deadlines, and verbal insults. We identified job-related LO in their desire to learn more skills from supervisors and attend skills training in and outside their factories despite a lack of support. For example, one worker mentioned, "I have not attended any training yet. However, I certainly want to develop required skill[s] if I am given any opportunity in [the] near future." Notably, these initial findings partly align with a recent study in a similar context that reported exploitative economic and social practices and called for managerial urgency in addressing poor work conditions (Alamgir et al., 2021). Table 2 includes sample quotes related to economic, social, and personal resources with reference to employment practices and managerial consideration at the workplace. The emphasis on economic resources (via employment practices), social resources (via managerial consideration), and personal resources (via LO) formed the basis for a quantitative study of inclusion processes, examining how those resources interconnect with the workplace inclusion research domains.
Our qualitative data also contributed to the measurement items for economic inclusion and managerial consideration in our main study; we specify this later in the Measures subsection. The sample quotes from Table 2 are grouped by resource type as follows.
Economic resources (economic inclusion) and well-being via fair employment practices:
"The work at the RMG factories is very tiring and creates fatigue. Compared to work pressure, the salary and other non-financial benefits are not enough. The tiring work and not so friendly workplace have been a matter of concern for me and my co-workers for a long time now. However, I must admit that I have managed to help myself and my family with the earnings from working in this industry."
"Jobs in factories are not truly permanent. There are times when the factory has limited orders from buyers or buyers are not paying the factory management on time, and we face difficult times. During such times, I am always worried about losing my job. I have seen many colleagues losing their jobs during such times. So there is no job security here and I am always worried about losing my job. After all, my family depends on this income."
"I think fairness and fair pay is a critical aspect. Fair employment practices and compensation, treatment of employees, ensuring employees are given flexibility at work and opportunities to learn are essential not only for job satisfaction but also for overall life satisfaction. There are even times when I had not received my salary for two months consecutively as buyers were not paying the factory timely."
"I think organizations that have better HR and wellbeing rules and regulations have a positive impact on their employees. The factory must have a robust procedure for grievances and a work culture that treats workers fairly in terms of compensation and benefits."
Social resources (non-rank sensitive inclusion) and well-being via managerial consideration/supportive practices:
"Support from the factory is also important for me. Suppose any worker needs some money on an urgent basis or in distress. In such case, we will be able to talk to the supervisors and management team for some advance payment or other related support to overcome distress. However, when I find that my work pressure is getting too much, I try to talk to my supervisors for workload adjustment. They hardly adjust that so I make a sick leave (sick call) for one or two days."
"We are always under continuous pressure to meet buyers' specifications and management's guidelines. With that, when management's anxiousness passes down to us, we become more stressed. Forget about flexibility or spare time between work; sometimes, we cannot even go for the lunch break due to work pressure. Working continuously for long hours without any break is tiring. And it causes a lot of physical and mental stress for me."
"Giving the employees [the chance] to raise their voice on the issues they are facing is very important. Also, the management needs to have an open mind to listen to and take action on the complaints made by the employees. Ensuring a work culture where employees are not afraid of sharing their views and concerns is also very important. Sometimes, I feel like not sharing my problems with anyone at work because I am afraid of how the supervisors will take this."
"My supervisors or HR department do not take any step in supporting me to cope with the work stress."
Personal resources (job-related LO) and well-being:
"Behavior of some of the supervisors was very rough. They neither supported me inside the factory to learn nor spared me for one or two days to learn those skills outside the factory. Even when there were some opportunities for training within and outside the organization, my supervisors did not allow me to attend the training sessions during my regular working hours."
"RMG is a big industry and there are many firms where policies are better and transparent. Hence, I believe if I can learn and develop my skills properly, there are ample opportunities. The work pressure is intense, and many times, I was mistreated by my superiors. There are even times when I had not received my salary for two months consecutively as buyers were not paying the factory timely. However, although the work has been stressful, I have managed to make myself financially independent and help my family from the earnings of this job."
"Most importantly, I am not dependent on my husband. I take care of all the financial needs of my children. Moreover, over these years, I have managed to save some amount of money for the future as well. I have struggled a lot to reach where I am now financially, but when I think about how I have managed to come so far, especially being a woman, I feel good."
--- Quantitative Data and Sample
The Bangladeshi garment workforce comprises an estimated 4 million workers across 4379 garment factories in Bangladesh (Bangladesh Garment Manufacturer & Exporters Association, 2020; International Labour Organization, 2020).
As discussed previously, the majority of workers in the Bangladeshi garment sector are at the lower ends of their operational and organizational hierarchies, thus making this group particularly relevant to the aim of this study. Prior to administering our actual surveys, we returned to our initial interview participants, showed them our survey, and sought their feedback on the measures, including their content validity. This led to minor changes in the wording of some survey items. As part of this study's inclusion criteria, fitted to our research objectives, we sought factories that had been in operation for more than 10 years in Bangladesh and exported their entire production overseas. The same local nonprofit organization that assisted us with the recruitment of interview participants provided us with a list of 50 similar types of large-scale, export-oriented garment manufacturers in Bangladesh that fit our inclusion criteria. Using the IBM SPSS complex sample function, we generated a refined list of 25 randomly selected garment manufacturers to mitigate any potential bias that could arise due to sample selectivity (IBM Corp, 2017). Of the 25 factories approached, 18 agreed to participate in this study. The participating factories also reported to the research team that they had to comply with various human resource and production rules to meet local and international exporting standards. The unit of analysis of this study was garment workers at the lower ends of their operational and organizational hierarchies (i.e., employees of low socioeconomic status) who are vulnerable and least likely to have access to or control of valuable resources and employment practices (Berger et al., 1972; DiTomaso et al., 2007; Earley, 1999). Mid-level line managers/supervisors of the participating factories assisted the research team in announcing the voluntary two-wave survey to their lower-end factory workers. Next, the research team, together with factory management, informed the respondents that their responses would be kept anonymous, with no implications for their employment, and that non-participation would have no adverse consequences. As a best practice, to minimize common method variance biases, we carried out the surveys in two waves (Podsakoff et al., 2003). The second survey distribution (T2) was therefore conducted a month after the first time point (T1). The first survey distribution (T1) comprised our independent, control, and moderating variables, and the second survey (T2) comprised our mediating and outcome variables. Given the short space of time between the two survey waves, only five participants dropped out between the survey periods. As such, our data collection resulted in valid responses from 220 lower-end garment factory workers across 18 participating factories. Trained researchers conducted the surveys using traditional pen and paper in the local Bengali language. Both sets of surveys (T1 and T2) were translated by a Bangladeshi researcher in Bangladesh and then translated back by another Bangladeshi researcher working in Australia.
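As a minimal illustration of the factory-selection step described above (the study used the IBM SPSS complex sample function; this Python sketch with hypothetical factory labels mirrors only the simple random draw of 25 from 50):

import random

factories = [f"factory_{i:02d}" for i in range(1, 51)]  # the 50 eligible manufacturers
random.seed(7)  # any fixed seed; used here only to make the sketch reproducible
selected = random.sample(factories, k=25)  # random selection without replacement
print(sorted(selected))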
The sample comprised 75% female and 25% male garment workers, with 41% of respondents below the age of 25, 40% between 26 and 34, 13% between 35 and 44, and the remaining 6% over 45 years. Of the participants, 58% had completed Year 5 schooling, 28% Year 10 schooling, and 14% Year 12 schooling, and 1% had a diploma degree. In terms of work experience, 25% of the respondents had less than 1 year, 35% had between 1 and 2 years, 21% had between 3 and 4 years, and the remaining 19% had 5 years or more. A plurality of the respondents (i.e., 47%) had a monthly income between 5000 and 8000 BDT, and this figure ranged between 8001 and 11,000 BDT for 33%, between 11,001 and 14,000 BDT for 15%, and above 14,000 BDT for the remaining 5%.
--- Measures
Table 3 shows the items used in this study and their associated squared multiple correlation values. Average variance extracted (AVE) and composite reliability values for our study constructs are presented in Table 4. All items representing this study's key constructs were measured using a 1 = "strongly disagree" to 7 = "strongly agree" Likert-type scale. The scales used for measuring the variables in our study are highlighted below.
Independent variables (measured in T1): The five-item fair employment practices measure was adopted from Nishii's (2013) climate-for-inclusion items. The managerial individual consideration measure comprised four items adopted from Bass's (1985) study. Furthermore, as part of testing the face validity of the managerial consideration scale in the context of under-resourced employees, we conducted exploratory interviews with 23 garment workers in Bangladesh in relation to what they valued as inclusive managerial behavior at work. Based on the feedback received from these interviews, we included two additional items ("I can share my personal thoughts/suggestions" and "Provides clear and timely guidance for my work") that holistically captured our individual managerial consideration construct. When we ran a confirmatory factor analysis (CFA), the six items of the managerial individual consideration scale showed acceptable composite reliability (0.91) and AVE values (0.51; see Table 2). Additionally, the content validity of these items was established by interviewing seven more garment factory workers (five female and two male) and several professional individuals in Bangladesh (e.g., two human resources employees, one industrial relations academic, one industry consultant, and one professional with industry experience in the Bangladesh garment sector).
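Composite reliability (CR) and AVE have simple closed forms given standardized loadings. The following minimal Python sketch (not the authors' AMOS output; the six loadings are hypothetical placeholders) shows how the two indices quoted above are computed:

import numpy as np

loadings = np.array([0.78, 0.74, 0.70, 0.69, 0.72, 0.65])  # hypothetical standardized loadings
error_var = 1.0 - loadings**2                  # standardized item error variances

ave = np.mean(loadings**2)                     # AVE: mean squared loading
cr = loadings.sum()**2 / (loadings.sum()**2 + error_var.sum())  # composite reliability

print(f"AVE = {ave:.2f}, CR = {cr:.2f}")       # these placeholders give AVE ~0.51, CR ~0.86

With roughly equal loadings near 0.70, AVE lands near the reported 0.51; the reported CR of 0.91 implies a somewhat different loading pattern than these placeholders.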
All participants regarded the six items representing the managerial consideration measure in our study as satisfactory. A CFA of the managerial consideration scale was also conducted, which indicated a good model fit (IFI = 0.99, TLI = 0.98, CFI = 0.99, RMSEA = 0.05, pclose = 0.31). This confirmed the unidimensionality of the six-item managerial consideration scale (Hair et al., 2010).
Moderator variable (measured in T1): Employee LO was measured through six items adopted from Elliot and Church's (1997) study.
Mediating variables (measured in T2): Clark (2002) developed a scale for psychological sense of community based upon conventional definitions of community developed by Glynn (1981), McMillan and Chavis (1986), and Chavis and Wandersman (1990). We adopted Clark's five-item scale as a measure of non-rank sensitive inclusion. Based upon the exploratory interviews with 23 garment factory workers in Bangladesh in relation to what they valued as inclusive at work, we developed a four-item economic inclusion scale relating to economic independence, sufficiency, and regularity for the workers' own and their families' livelihoods. Additionally, we established the content validity of these items by interviewing seven more garment factory workers (five female and two male) and five professional individuals in the Bangladeshi garment industry (i.e., two human resources employees, one industrial relations academic, one industry consultant, and one professional with industry experience in the Bangladesh garment sector).
Outcome variable (measured in T2): We used Diener et al.'s (1985) five-item life satisfaction scale to measure life satisfaction in T2. Based on the feedback from interviews with sample garment factory workers, we also included one job satisfaction item in the survey to measure the important work aspect of life satisfaction among Bangladeshi garment workers.
Control variables (measured in T1): We included age, gender, income, job experience, and education of respondents as control variables in our proposed conceptual model, as previous studies have suggested that a range of sociodemographic factors can affect low-income workers' perceptions of their organizations and employee inclusiveness (Habib, 2014).
--- Results
--- Reliability and Validity
As shown in Table 4, all constructs showed acceptable internal consistency (i.e., ≥ 0.7; Hair et al., 2010). We conducted a multifactor CFA using version 25 of the IBM AMOS software. The multifactor CFA helped us to generate values such as AVE and composite reliability (i.e., internal reliability). Using Fornell and Larcker's (1981) technique, we tested discriminant validity by comparing the square roots of the AVE values with the inter-construct correlations (as shown in Table 3). The square roots of the AVE values for each construct were greater than the inter-construct correlations, thereby demonstrating discriminant validity (see Table 3; Hair et al., 2010). Further, all AVE values were greater than 0.5, showing convergent validity (Bagozzi & Yi, 2012).
--- Controlling for Common Method Bias
We followed best practices to reduce common method variance (CMV) bias by ensuring respondent anonymity, informing respondents that there were no right or wrong answers to prevent evaluation apprehension, and conducting our surveys at two different time points (T1 and T2) (Podsakoff et al., 2003). However, given that we used single-source and cross-sectional data, CMV bias was still a possibility in our study.
Thus, to test whether CMV could affect our structural equation modeling (SEM) path estimates, we added a common latent factor (CLF) to the multifactor CFA model and compared the standardized regression weights before and after the addition of the CLF (Gaskin, 2013). Some of these weights differed by more than 0.2 (i.e., for all five items), which suggested that CMV might be an issue in our data set. To minimize and control for CMV bias, we included the common latent factor measure while estimating our hypothesized relationships. All hypothesis-testing results were thus adjusted for CMV factors, providing more accurate path estimates (Podsakoff et al., 2003).
--- Path Results
We ran SEM to estimate the direct and mediated path effects. The fit statistics of our SEM indicated a good fit to the data (IFI = 0.95, TLI = 0.94, CFI = 0.95, RMSEA = 0.05, pclose = 0.22; Hair et al., 2010). In addition, the ratio of the chi-square value to its degrees of freedom (Hair et al., 2010; Holmes-Smith et al., 2006) was acceptable at 1.6, below the cutoff of 3.00 (Bagozzi & Yi, 2012). Additionally, we ran a power test of our model that showed a value of 1, which suggested that a sample size of 220 was adequate to test our hypothesized relationships (MacCallum et al., 1996).
--- Direct and Mediated Path Effects
We applied the bootstrapping (N = 2000) method using 95% bias-corrected confidence interval procedures in SEM with AMOS 25 to estimate the direct and mediated hypothesized paths within our proposed model. Given that this procedure involves resampling the data multiple times to estimate the entire sampling distribution of the path effect, it provides confidence intervals with stronger accuracy (Zhao et al., 2010). As a robustness check of our SEM path results, we also used Hayes's Process version 3 with IBM SPSS 25 to check whether the directions and path estimates were similar (Hayes et al., 2017). The Process analyses showed, as expected, slight variation in the path estimates relative to the SEM path results. Nonetheless, all directions and the significance of the path effects remained the same. In Hypothesis 1, we stated that fair employment practices significantly and positively relate to employee life satisfaction. As shown in Table 5, the direct path effect (b = 0.06, p > 0.05) of fair employment practices on employee life satisfaction was not significant, and we thereby reject our first hypothesis. In Hypothesis 2, we posited that managerial consideration significantly and positively relates to employee life satisfaction. As represented in Table 5, the direct path effect showed nonsignificant results (b = 0.07, p > 0.05), so we reject our second hypothesis. In Hypothesis 3, we stated that economic inclusion would positively mediate the relationship between fair employment practices and life satisfaction (inclusive of job satisfaction). As shown in Table 5, the mediated path estimate (b = 0.11, p < 0.01) of fair employment practices on life satisfaction through economic inclusion was positive and significant, therefore supporting our third hypothesis. Specifically, we identified fair employment practices as conduits to economic inclusion, which fostered workers' well-being. Contradicting previous research findings, we found no direct effects of fair employment practices on employee attitudes. Instead, employee perceptions of economic inclusion in the implementation of fair employment practices were necessary to foster their well-being.
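To make the logic of these interval estimates concrete, the following self-contained Python sketch reproduces the generic percentile-bootstrap procedure on synthetic data. It is illustrative only (the published estimates came from AMOS 25's bias-corrected bootstrap and Hayes's Process), and all variable names are hypothetical stand-ins; it bootstraps both the simple indirect effect (a1*b, cf. Hypothesis 3) and the index of moderated mediation (a3*b, cf. Hypotheses 5 and 6 below):

import numpy as np

rng = np.random.default_rng(42)
n = 220  # same sample size as the study

# Synthetic stand-in data with a built-in moderated indirect path X -> M -> Y
X = rng.normal(size=n)                           # e.g., fair employment practices
W = rng.normal(size=n)                           # e.g., learning orientation (LO)
M = 0.3 * X + 0.2 * X * W + rng.normal(size=n)   # mediator (e.g., economic inclusion)
Y = 0.4 * M + 0.05 * X + rng.normal(size=n)      # outcome (life satisfaction)

def paths(idx):
    x, w, m, y = X[idx], W[idx], M[idx], Y[idx]
    # First stage: M ~ 1 + X + W + X*W (ordinary least squares)
    A = np.column_stack([np.ones(len(x)), x, w, x * w])
    a = np.linalg.lstsq(A, m, rcond=None)[0]     # a[1] = a1, a[3] = a3
    # Second stage: Y ~ 1 + M + X
    B = np.column_stack([np.ones(len(x)), m, x])
    b = np.linalg.lstsq(B, y, rcond=None)[0][1]  # b path
    return a[1] * b, a[3] * b                    # indirect effect at W = 0; index of mod. med.

# Percentile bootstrap over 2000 resamples (the article used bias-corrected CIs)
boot = np.array([paths(rng.integers(0, n, n)) for _ in range(2000)])
for name, col in [("indirect effect a1*b", 0), ("index of moderated mediation a3*b", 1)]:
    lo, hi = np.percentile(boot[:, col], [2.5, 97.5])
    print(f"{name}: 95% CI [{lo:.3f}, {hi:.3f}]")

A 95% interval that excludes zero corresponds to the decision rule applied to the path estimates in Table 5.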
Our qualitative data provided a granular understanding of this finding, as the workers mostly reported unfair employment practices with economic implications, especially in the areas of insufficient medical services, training investment, safety measures, childcare facilities, and overtime pay (see Table 1). For example, these are representative quotes from the workers:
"One critical practice that would be beneficial to workers is the provision of a healthy and safe workplace. This can be achieved by having an onsite doctor or medical facility or by providing medical insurance for the workers. Most of the workers are female and a good number of them are working mothers. Factories should implement practices that support working mothers, such as providing maternity and childcare facilities." (Female, 4 years in the factory, USD100-150 per month)
"The factory's expectation is that employees will be able to cope with changes in requirements and learn new design requirements by themselves. If there are any changes in machinery requirements, the organization arranges training for only a few people and for a short time. We have to learn by observing and learning from those few people." (Male, 4 years in the factory, USD100-150 per month)
In Hypothesis 4, we posited that social resource (non-rank sensitive inclusion) allocation would positively mediate the relationship between managerial consideration and life satisfaction. Our mediated path effect proved to be nonsignificant (b = 0.01, p > 0.05), so we thereby reject our fourth hypothesis. Our finding that the positive attitudinal effects of managerial consideration through non-rank sensitive inclusion were not supported contradicts past inclusion research, which assumed that managerial initiatives that meet an individual's need for belonging would foster well-being.
--- Moderated Mediation Effects
To test for moderated mediation (i.e., a boundary condition effect), we used IBM SPSS Process version 3, as AMOS 25 has limitations in terms of calculating the index of moderated mediation (Hayes, 2013). This statistical software package not only enables assessment of moderation but can also test for moderated mediation effects (Hayes, 2013). As shown in Table 5, the interaction between fair employment practices and LO on economic inclusion is positive and significant (b = 0.48, p < 0.05), therefore demonstrating the moderating effect of LO. As shown in Table 4, zero did not fall between the upper and lower confidence interval limits for the index of moderated mediation (0.10), which provided support for our fifth hypothesis. Given this support, we probed this moderated mediation relationship further between low (-1 SD) and high (+1 SD) values (see Table 5 and Fig. 2), which revealed that for employees with higher levels of LO, the positive mediated effect of fair employment practices on life satisfaction through economic inclusion was stronger. Some of our qualitative findings also supported this result (see Table 3). The participants demonstrated their willingness to learn and develop their skills, and their resilience in managing their lives, particularly in economic areas, despite enormous work pressure and a lack of managerial support for their upskilling. For example, our qualitative data reported on the workers' resilience in becoming financially independent and their desire to learn new skills despite insufficient training and a lack of living wages: "Although I received training initially, it proved to be insufficient.
I only received verbal instructions without any practical demonstration. It was difficult for me to start working. Also, when I began to learn my work on my own, I thought of receiving some support from the factory. However, I sadly did not even get any proper feedback." (Female, 4 years in the factory, USD100-150 per month)
The following quotation demonstrates a worker's resilience in coping with life circumstances via job-related LO, through her own initiative to find employment and look after her family members: "However, I must admit that I have managed to help myself and my family with the earnings from working in this industry. I was a housewife before starting work in the RMG industry, and my husband was the only wage-earner. The amount of money my husband used to earn was not sufficient to take care of the needs of my children. Since I started working, I have managed to take care of my children. It has helped my husband to get rid of some financial burdens from his shoulders." (Female, 4 years in the factory)
Testing the effect of the interaction between managerial consideration and LO on non-rank sensitive inclusion yielded nonsignificant results (b = 0.05, p > 0.05; see Table 5). Furthermore, the index of moderated mediation for this relationship was also nonsignificant (zero fell between the upper and lower confidence interval limits), so we thereby reject our sixth hypothesis.
--- Discussion
This research has provided an integrative resource perspective of workplace inclusion in light of the growing population of workers with low socioeconomic status worldwide. Based on the COR perspective, we identified the centrality of economic resources, intersected with personal resources, as a key workplace inclusion process for the well-being of employees with insufficient resources. Within hierarchically and geographically segregated societies where economic inequality is deeply entrenched (Bapuji et al., 2018), economic inclusion via fair employment practices requires multiple human agencies to adopt ethics and respectfully treat every worker as an end, not a means, of economic growth (cf. Bowie, 2017). The recently proposed stakeholder governance perspective (Amis et al., 2020b), economic value creation by stakeholders (Bapuji et al., 2018), the economic-social-economic cycle of inclusion (Fujimoto & Uddin, 2021), and inclusion concepts for low-wage workers (van Eck et al., 2021) encourage corporate leaders and other stakeholders to prioritize the interests of the most exploited stakeholders in order to meet their economic needs. Key stakeholders prioritizing fair employment practices at low-wage workplaces may reverse the hierarchical discriminatory trends within the workforce, which are particularly imposed on female workers in patriarchal society (cf. Markus, 2017; Ridgeway, 1991). Our study promotes the collective advancement of micro-level initiatives that might break the macro-micro-macro status construction process, in which the macro-level inequitable context allows for micro-level inequitable resource distribution that in turn reinforces macro-status differences (Barney & Felin, 2013; Ridgeway et al., 1998). Our COR perspective for the least resourced workers also supports John Rawls's perspective of guaranteeing basic liberties by improving the situation of the least well-off members of society as a central management approach (Amis et al., 2021). Notably, our study did not find an influence of managerial consideration and non-rank sensitive inclusion on life satisfaction.
This finding contradicts previous studies that identified the importance of symbolic and emotional connections and non-task-oriented involvement in building social relationships for low-skilled workers in the Western context (Holck, 2017; Janssens & Zanoni, 2008; van Eck et al., 2021). The context of this study, Bangladeshi workers in the RMG industry facing significantly more severe employment conditions than those in previous studies in Western settings, may provide insight into why social resources alone cannot result in better life satisfaction for those workers (cf. Alamgir et al., 2021). Since, to the best of our knowledge, this workplace inclusion study is the first to take place in the Global South, it signifies the importance of contextual differences between the Global South and the Global North in promoting inclusive workplaces and the well-being of low-skilled workers.
--- Theoretical Contributions
Our study has made a number of theoretical contributions to COR and workplace inclusion research related to workers of low socioeconomic status in the Global South. First, we provided a COR concept of inclusion for those workers by departing from mainstream diversity and inclusion concepts of the sociopsychological tradition (Tajfel & Turner, 1986), addressing underrepresented resource/inclusion concepts that are more relevant to the Global South's operant workers (cf. Amis et al., 2021; Bapuji et al., 2018). Based on status construction theory, the economic resource differences between low- and high-skilled jobs are a major root of shared beliefs within high- and low-status groups, including the self-perception of those individuals with fewer economic resources as belonging to a low-status, less valued group (Ridgeway, 1991; Ridgeway & Balkwell, 1997; Ridgeway et al., 1998). To address the widespread economic inequality attached to the personal beliefs of workers who experience incomparable economic inequality in the global supply chain, this study highlighted organizational leaders' central roles in demarcating employment practices that foster the equitable economic and social status of those workers via direct and indirect material creations (Amis et al., 2020a; Bapuji, 2015). In the COR and workplace inclusion literature, we confirmed the utility of combined resources via managerially initiated fair employment practices to create inclusive workplaces for workers of low socioeconomic status. This study has bridged the gap between COR theory and practice on how economic resources via fair employment practices and personal resources via job-related LO jointly promote the well-being of workers of low socioeconomic status in the Global South. Our finding confirmed COR theory's inclusive transaction model for resource-deprived workers by extending its scope from professional workers' recovery processes to how exploited workers obtain their livelihood via economic and personal resources (cf. Hobföll et al., 2018). This study has also bridged the gap between workplace inclusion theory and practice in relation to how managerially initiated fair employment practices (e.g., pay, training, opportunities to voice grievances), which are associated with economic inclusion (e.g., access to economic independence) more than social inclusion (e.g., a sense of bonding), promote well-being for workers of low socioeconomic status.
Departing from the social inclusion of skilled employees, our findings emphasize economic resource accessibility/economic inclusion as a critical pathway for the least resourced workers to obtain meaningful lives according to how their leaders treat them in their workplaces. The context of this study limits our theoretical contribution to the economic inclusion of low-skilled workers at workplaces in the Global South. However, our approach should also be relevant to low-skilled workers in the Global North, as we witness mass protests fueled by a combination of economic woes, growing inequalities, and workers' job insecurity in both the Global South and the Global North (United Nations, 2020). Second, we signified inclusive leadership for promoting rank sensitive inclusion (e.g., joint organizational decision-making) rather than non-rank sensitive inclusion (e.g., social bonds that meet the need for belonging) to enhance the economic well-being of the least resourced workers (cf. Janssens & Zanoni, 2008; Nishii, 2013; Shore et al., 2018). In particular, the importance of managerial consideration for employment practices in the provision of direct or indirect economic resources indicates the importance of inclusive leadership that promotes a joint decision-making process with employees in establishing fair employment practices that promote their job or economic security (cf. van Eck et al., 2021). In line with researchers advocating for revising management theory through a prism of economic inequity for societal well-being (Bapuji et al., 2020), we advocate for a perspective of economic inclusion using an ethical lens. This can improve management and leadership theories by promoting managerial joint decision-making on workplace inclusion with the workers most deprived of resources. Third, this study introduces an integrative resource model to inclusion research by validating workers' personal LO resources as an internal condition for strengthening the positive effect of economic inclusion on well-being. We highlight that resilient vulnerable employees are able to manage stressful environments when corporate leaders and other stakeholders enhance fair employment practices for their well-being, using the COR perspective. We identified a boundary condition effect of LO on the mediated relationship between fair employment practices and well-being through economic inclusion, confirming COR theory on the substantial effects of personal resilience and mastery in traumatic contexts, regardless of actual wealth conditions, in determining life satisfaction (Freedy et al., 1992; Johnson & Krueger, 2006; Kaiser et al., 1996). Whereas previous workplace inclusion researchers have focused on organizational practices, climate, and leadership in fostering positive attitudes, we suggest that a high level of LO, or personal resources/orientation, plays an important role in strengthening the positive effects of fair employment practices on the well-being of resource-deprived employees. We lend credence to the COR theory (Hobföll, 2012) that well-being is born of a relationship between personal and environmental resources and aligns with multiple levels of economic value creation (cf. Bapuji et al., 2018). These levels promote individual-environment (e.g., managerial support, fair employment practices, labor laws) interactions to accumulate valuable resources for less-resourced individuals.
Notably, personal LO resources were not an internal condition for strengthening the positive effect of non-rank sensitive resources, or social inclusion, on well-being. This result highlights that personal LO only enhances life satisfaction when the workers are given adequate economic resources (e.g., fair pay and medical support), or economic inclusion, to strengthen their job security. As the integrated notion of resources has been incorporated into workplace inclusion research, more attention to personal resources is warranted to examine how external and internal factors interact to influence the impact of an inclusive workplace on employee well-being. The theorization of inclusion in addressing economic inequality in workplaces is still at an early stage of development. Thus, we presented burgeoning contextual knowledge of inclusion by adopting a COR perspective to address the persistent socioeconomic exploitation of employees with insufficient valuable resources.
--- Practical Implications
Despite macro-level actors undertaking initiatives to improve working conditions, the workers' "live or be left to die" exploitation reported during the recent COVID-19 pandemic suggests that macro-level initiatives may have been poorly implemented in practice. Notably, the significant shortcomings of the MNCs' governance initiatives (the Accord and the Alliance), with their misrepresented compliance framework/impression management, have taught us the inadequacy of organizational-level change alone, requiring micro-level managerial practices to address the workers' inequality (cf. Amis et al., 2021). MNC representatives, instead of operating at the macro level to address labor conditions, ought to interact more with factory managers, factory supervisors, and factory workers at the factory level to produce fair employment practices that meet workers' immediate economic needs in particular. This study has pointed to managerially initiated fair employment practices aligned with equitable economic resource distribution to those workers, notably in the areas of fair pay for their work volume, onsite doctors, medical and childcare facilities, training investment, and employees' ability to voice their grievances about overtime and stressful labor. As economic value deprivation and a lack of freedom of speech are significant concerns for those workers, more training investment accompanied by more frequent grievance procedures could bolster their sense of economic and social inclusion and their mastery over their economic independence to improve their life conditions (Dhooge, 2016). Within the Bangladeshi RMG industry, training investment will be an important indirect form of economic resources to retain productive workers while their wages remain low to attract international buyers (e.g., Basterretxea & Albizu, 2011). The importance of fair employment practices in this globalized context signifies the need for regular monitoring and governance by multiple leaders internal and external to the factories in order to enhance economic value via fair employment practices that help the workers survive. The well-being of those factory workers is underscored by the gravity of broader societal inequality being shaped by multiple actors at the organizational, industry, national, and global levels (e.g., factory managers, owners, MNC representatives, local government, international NGOs, and the ILO), thus requiring multiple leadership initiatives to influence their workplace inclusion (cf. van Eck et al., 2021).
Unfortunately, international NGOs and the United Nations have been criticized for their Western ethnocentric approach in their operations in culturally diverse and institutionally complex contexts (Schwarz & Fritsch, 2015), exacerbated by their donor organizations demanding rules, regulations, and work practices that are shaped by their national norms (Heiss & Kelley, 2017). Elite local and international stakeholders (e.g., MNCs, nation-states, or the ILO) have also been criticized for reforming local conditions in their own interest without consulting the vulnerable members (Meardi & Marginson, 2014; Munir et al., 2018). Thus, the unreliability of macro-governance and compliance in the Bangladeshi RMG industry further promotes the importance of leadership initiatives within the workplace to improve the economic conditions of those workers. Moreover, the protection of factory workers' well-being is critical to keeping their business, because international buyers, customers, and shareholders increasingly prefer to source their products from ethically compliant RMG factories in light of recent order cancelations based on the perceived lack of ethical compliance in the RMG industry (Ahmed et al., 2020; Boufous et al., 2021). Moving on from superficial safety compliance in factories and Western or distant approaches, therefore, external and internal leadership must continually monitor and improve fair economic distribution to those hierarchically and geographically segregated outsourced offshore workers so as to create more inclusive workplaces for their well-being (cf. Alamgir et al., 2021; Bapuji, 2015). Furthermore, in light of widespread globalized business, we contend that existing MNC practices, education, and the communication of academic and practitioner publications often overlook unethical employment practices regarding the least resourced workers in outsourced Global South operations. The COR/ethics perspective for exploited workers profoundly challenges business and societal leaders' economically domineering approach that heavily exploits the lives of those least resourced in workplaces and society. Our study signified a leadership and managerial approach to break the rationalized, insufficiently challenged economic exploitation of those hierarchically and geographically segregated workers in order to create inclusive workplaces that are more relevant for their context.
--- Conclusion
In light of exacerbating global inequality, this study promotes the COR/inclusion perspective for business leaders to creatively address the much-felt economic deprivation of vulnerable workers across the globe, more so in the Global South. In light of the seemingly grim COR ethical perspectives adopted by business leaders of wealthy corporations, we call for more research and practice devoted to addressing the human dignity and workplace inclusion of exploited workers so that they can live equitably. Our findings demonstrated that managerially initiated economic inclusion, rather than social inclusion practices, fostered worker well-being in contexts where workers are vulnerable and exploited. Therefore, we call for more business leaders to adopt an integrative resource perspective of economic inclusion that collaborates with vulnerable employees' resources (e.g., willingness to upskill at their jobs) to promote a more exemplary inclusive workplace for low-skilled employees. To this end, we promote greater economic inclusion via fair employment practices, even at the expense of corporate profit.
--- Declarations
Conflict of interest: No conflict of interest.
Ethical standards: Yes.
Golf is an important and growing industry in South Africa that currently fosters the creation of an informal job sector of which little is known regarding the health and safety risks. The purpose of the study is to investigate the prevalence and significance of musculoskeletal pain in male caddies compared to other golf course employees while holding contributing factors such as socioeconomic status, age, and education constant. Cross-sectional data were collected and analyzed from a convenience sample of 249 caddies and 74 non-caddies from six golf courses in Johannesburg, South Africa. Structured interviews were conducted to collect data on general demographics and musculoskeletal pain for two to three days at each golf course. On average, caddies were eight years older, earned 2880 rand less per month, and worked 4 h less per shift than non-caddies employed at the golf courses. Caddies were approximately 10% more likely to experience lower back and shoulder pain than non-caddies. Logistic regression models show significantly increased adjusted odds ratios for musculoskeletal pain in caddies for the neck (3.29, p = 0.015), back (2.39, p = 0.045), arm (2.95, p = 0.027), and leg (2.83, p = 0.019) compared to other golf course workers. The study findings indicate that caddying, as a growing informal occupation, carries a higher risk of musculoskeletal pain. Future policy should consider the safety of such a vulnerable population without limiting their ability to generate an income.
Introduction
Golf has shown tremendous growth worldwide, becoming one of the largest sports-related travel markets [1]. Offering 345 playable days a year and considerably low membership costs and green fees, South Africa has rapidly developed into a competitive golf industry market [2]. With one of the highest average numbers of full-time employees globally, at 42 employees per 18-hole course, South Africa's golf industry has the potential for significant economic growth and opportunity. Unfortunately, South Africa's golf courses have the lowest salary costs globally, making up only 23% of their operating budget compared to 30-40% in Europe [2]. With impending development and future business performance, it is imperative that the occupational health of golf course workers be investigated. Caddying is considered a low-skill job with poor working conditions. In South Africa, caddies are part of the informal workforce: they are not legally employed by the golf course but arrive at the course in the hope of being hired by a golfer. This means that caddies could wait at the golf course all day and not work a single round of golf. Informal workers generally have no control over their work environment, while the formal economy is governed by policies and legislation. Despite the caddies being informal workers, the expectations from the golf course are similar to those for regular employees. Most caddies are required to wear uniforms and abide by company codes of conduct and performance. In contrast, the golf courses are not responsible for maintaining adequate working conditions or a safe working environment. Adequate working conditions would include access to protective equipment such as shoes, hats, gloves, and sunscreen, and basic human necessities such as clean drinking water or a space to break or rest [3]. A safe working environment, for example, would include policies and support to protect against verbally abusive patrons [3]. This unique circumstance makes caddies a vulnerable population susceptible to exploitation and injury. Caddies are traditionally exposed to many risk factors associated with musculoskeletal pain and other physical problems during their work. Given the unique structure of their employment, the serious lack of occupational health and safety equipment, and inconsistent working hours, the prevalence of musculoskeletal pain may be vastly different from that of formally employed caddies. A study investigating caddies in South Korea found that 44.8% of caddies complained of musculoskeletal pain or ailments resulting from the repetitive standing, walking, and carrying of golf bags required by their job [4]. This study is comparable because caddies in South Korea have an informal employment structure and limited control over their occupational health and safety equipment and environment. To our knowledge, there have been no studies investigating musculoskeletal pain experienced by caddies in South Africa. Other previous international studies of musculoskeletal pain in caddies may not be comparable to the South African sample because those studies investigated caddies with formal employment and regular extended working hours. Knowledge of the rates and contributing factors related to occupational musculoskeletal pain specific to the golf industry in South Africa could provide much-needed support for policy development to increase preventative measures for this emerging profession.
The aim of this cross-sectional study was to assess the prevalence and estimate the adjusted odds ratio of musculoskeletal pain in male caddies compared to other golf course workers, and to investigate the association with sociodemographic characteristics and work activities. Exploring the relationship between occupation and pain among caddies is a useful first step toward the development of appropriate interventions and policies.
--- Materials and Methods
--- Participants
Seven golf courses were selected and approached, with six agreeing to participate in the study. Convenience sampling was used to survey caddies at each golf course located in Johannesburg, South Africa. The study was conducted over a 2- to 3-day period at each golf course to increase the probability of capturing the greatest number of individuals and decrease the non-response rate. All individuals present on the day of data collection, and who consented, participated in the study. Of the 329 participants registered, 323 completed the survey; 249 of them identified as caddies and 74 as non-caddies. This study included only male individuals because all caddy workers were male. Therefore, no females were included in the non-caddy group, which comprised groundskeepers, restaurant staff, and administrative employees who were formally employed by the golf course.
--- Measurements
Structured face-to-face interviews were performed by trained fieldworkers, with local language translation possible, using electronic REDCap data capture software after informed consent was obtained. The questionnaire consisted of 268 detailed questions about socio-demographics (including age, education, living costs, income, and food security), occupational history (length of shifts, number of shifts a week), occupational exposures (history of injuries), alcohol and drug use, baseline health, healthcare access, and mental health screening. Musculoskeletal pain was measured by structured questions adapted from the validated Nordic Musculoskeletal Questionnaire [5], such as "have you at any time in the last 12 months had trouble (ache, pain, discomfort, numbness) in the neck" and "have you had trouble at any time during the last 7 days (with neck pain)." Data about pain were also collected from different areas of the body, including the shoulder, elbow, hand and wrist, upper back, lower back, hip, knee, and ankle. Participants were able to answer yes or no, and specify the right, left, or both appendages if applicable. Participants were also asked if they felt their pain was due to their occupation, and if so, what they felt the contributing factors or actions were. This was an open-ended question which was captured in free text by the interviewer.
--- Data Analysis
Descriptive statistics such as means and standard deviations were used to summarize continuous variables, while categorical variables were presented as frequencies and percentages. Potential confounding or predictor variables were identified: age, body mass index (BMI), chronic illness, education, primary provider, number of dependents, housing, monthly income, days a week worked, and average length of shift. Other variables such as smoking status, alcohol consumption, distance walked during shift, weight of golf bag, and access to drinking water were considered but were not viable due to substantial non-response (>20% of participants). The variables for pain were compared between occupational groups using descriptive statistics.
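As an illustrative sketch of these screening and comparison steps and of the adjusted odds-ratio estimation stated in the study aim (the published analysis was run in R; the Python below is a stand-in, and the file name and column names such as work_type, neck_pain, and bmi are hypothetical):

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("caddy_survey.csv")  # hypothetical survey export

# Screen out candidate variables with substantial non-response (>20% missing),
# mirroring the exclusion of smoking status, distance walked, bag weight, etc.
usable = [c for c in df.columns if df[c].isna().mean() <= 0.20]
df = df[usable]

# Descriptive comparison: prevalence of neck pain by occupational group
print(pd.crosstab(df["work_type"], df["neck_pain"], normalize="index"))

# Adjusted logistic regression: odds of neck pain for caddies vs. non-caddies,
# holding candidate confounders constant (the published models retained
# covariates via a >10% change-in-estimate rule and AIC, described below)
model = smf.logit(
    "neck_pain ~ work_type + age + bmi + education + monthly_income",
    data=df.dropna(subset=["neck_pain"]),
).fit()
print(np.exp(model.params))  # exponentiated coefficients = adjusted odds ratios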
All statistical analysis was conducted in R software (University of Auckland, Auckland, New Zealand). All univariate and bivariate analyses were completed prior to the regression analysis, and a significance level of 5% was applied to all tests. A logistic regression model was fitted to investigate the effect of working as a caddy on developing musculoskeletal pain. For these models, the categorical variables for pain were condensed into four categories: neck, back, arm, and leg. Shoulder, elbow, hand, and wrist pain were joined to create the variable arm pain; upper and lower back pain were joined to create the variable back pain; and hip, knee, and ankle pain were joined to create the variable leg pain. Pain was not differentiated by how many limbs or areas were affected; a single yes in any area within a category was coded as positive for pain. Four models were created, each focusing on a separate location of pain: neck, back, arm, and leg. Initially, a simple regression model was built to determine the unadjusted effect of work type (caddy or non-caddy) on musculoskeletal pain, to which variables were added individually and analyzed. The variables were added in the same order for each model, following these groupings: demographics, health indicators, and job-related factors. Variables were kept in the model if they produced a 10% or larger change in the work type coefficient or the work type standard error, and the Akaike Information Criterion (AIC) did not increase by more than 2. Using these parameters, "primary income" and "number of dependents" were not kept in any of the models. When "chronic illness" was added to the arm pain model, there was no change in AIC, work type coefficient, or work type standard error; however, due to the conceptual link between pain, injury, and chronic disease, it was kept in the model. When "BMI" was added to the leg pain model, it did not cause a 10% or larger change in the work type coefficient or its standard error, but the AIC decreased from 405 to 395, so it was kept in the model. "Days worked a week" was significant only in the leg pain model, which was therefore the only model to retain this variable. After the final models were created, the variables "primary income", "number of dependents", and "days worked a week" were re-added to the models where applicable and compared with the final model. Re-adding these variables to all four pain models caused no significant change in the work type coefficient and increased the AIC. The sample size used for each model is indicated, as some cases were removed due to missing data. --- Results --- Sample Demographics, Health Behaviours, and Job-Related Factors A description of the sample population is presented in Table 1. Half of the caddies had a monthly income of less than 2849 rand ($182). The non-caddies earned nearly double, with a mean of 5729 rand ($367). This disparity in monthly income highlights the socio-economic instability of caddies and has previously been linked to food insecurity [3]. Non-caddies worked five to seven days per week, with a median of five days and eight hours per day. By contrast, caddies worked a median of three days a week for five hours per day. The informal nature of caddies' work means that they often wait at the golf course for an opportunity to work, so the time reflected in a typical working day does not indicate how much time is spent at the golf course waiting for work.
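Returning to the covariate-retention rule described under Data Analysis, the following is a minimal, hypothetical sketch of that stepwise procedure (Python/statsmodels in place of the authors' R session; work_type is assumed to be coded 0/1, and all variable names are illustrative):

import statsmodels.formula.api as smf

def retain_covariates(df, base="back_pain ~ work_type", candidates=()):
    # Fit the unadjusted model, then add candidates one at a time, keeping a
    # variable only if it shifts the work-type coefficient or its standard
    # error by >= 10% while the AIC does not increase by more than 2.
    model, formula, kept = smf.logit(base, data=df).fit(disp=0), base, []
    for var in candidates:
        trial = smf.logit(formula + " + " + var, data=df).fit(disp=0)
        b0, se0 = model.params["work_type"], model.bse["work_type"]
        b1, se1 = trial.params["work_type"], trial.bse["work_type"]
        changed = abs(b1 - b0) / abs(b0) >= 0.10 or abs(se1 - se0) / se0 >= 0.10
        if changed and (trial.aic - model.aic) <= 2:
            model, formula = trial, formula + " + " + var
            kept.append(var)
    return model, kept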
The difference in regular working hours between the two occupational categories is shown in Figure 1. Overall, caddies have shorter working hours than non-caddies. The two occupational groups also differ in age: with a mean age of 48, caddies represent an older population than the non-caddies, as shown in Figure 2. --- Analytic Comparisons Overall, caddies reported a higher prevalence of musculoskeletal pain, the most commonly affected areas being the lower back (38%), shoulders (35%), and ankles (32%). Of the caddies who responded, 60% attributed their pain to carrying heavy golf bags, and 33% identified walking as the action causing their pain. Of those caddies who reported lower back and ankle pain, over 40% were forced to take time off because of the discomfort. The non-caddies reported a lower prevalence of back (29%), shoulder (24%), and neck (22%) pain. Of the non-caddies who responded, 9% attributed their pain to carrying golf bags and 13% to walking; other self-identified causes of pain were chemicals (4%), ergonomic factors (26%), and other (48%). --- Logistic Regression Models The logistic regression models were created to quantify the effect that being a caddy has on the odds of developing musculoskeletal pain compared with other golf course workers. Data from the entire sample were used in each of the models created, each estimating the odds for a different pain location; holding all other variables constant, work type was the only variable consistently significant in each model. The work category coefficients and odds ratios are captured in Table 2. Ultimately, the odds of a caddy experiencing musculoskeletal pain were 2.39 to 3.29 times the odds of a non-caddy, depending on pain location. --- Discussion Musculoskeletal conditions, including pain, cause a significant global burden [6]. The Global Burden of Disease 2010 study showed that lower back pain ranked highest for disability and sixth for overall burden, while neck pain ranked fourth for disability and 21st for burden [6]. Despite the increased focus on musculoskeletal pain globally, there remains a significant deficit in research specific to South Africa's working-age population, and even less investigating the specific mechanisms of pain in informal occupations. To our knowledge, there has not been a recent national South African survey estimating the prevalence of musculoskeletal pain. The first step in understanding the magnitude of the problem is increasing the related research, especially among vulnerable populations such as low socioeconomic groups [7,8]. Caddies represent a vulnerable population of men working in an informal capacity, with little structure in income or consideration of safety. The role of the work environment in developing musculoskeletal pain in caddies has not previously been investigated in South Africa. Caddies in South Africa are not working long hours but waiting for hours, and sometimes days, for the opportunity to be hired by a golfer. This presents a much different environment for pain and injury than a traditional caddy role, which may include multiple games per day. In contrast, other studies have investigated musculoskeletal pain in caddies in formally employed roles with substantially better equipment and different work environments.
Caddies are likely entering the job with few employment options and limited income stability [7]. The socio-economic, health, and job factors were pivotal in determining the direction of influence in this relationship. The adjusted odds ratios present a strong case that the physical work being completed by caddies is affecting their rate of musculoskeletal pain compared with other golf course workers. The most common locations for musculoskeletal pain in caddies are the shoulder, ankle, and lower back [3]. Caddies have self-identified that these pain locations are likely related to actions they take to perform their job, which includes walking approximately six km per game and carrying a golf bag of approximately 15 kg [9]. Carrying a golf bag over one's shoulder puts direct pressure and strain on the shoulder and neck muscles and alters a person's upright posture. Walking with a heavy golf bag demands greater muscle activation, and overloading these muscles can lead to musculoskeletal pain [9]. Gosheger et al. found that carrying a golf bag for approximately four to five hours is physically demanding and commonly results in shoulder, back, and ankle injuries in persons who carried their bag on a regular basis [9]. Golf tourism has increased in South Africa and stands to continue to grow [10]. The golf courses charge relatively low green fees and membership fees and spend little on labour and wages compared with global competitors. Many golf courses do not offer motorized carts or pushcarts for the golf bags and so capitalize on workers presenting at the course to be hired directly by the golfer. This allows golf courses to take minimal or no responsibility for the safety of the persons who work on their course. Most courses expect caddies to wear a uniform, which they provide; however, they do not provide equipment for adequate occupational safety, such as shoes, hats, or gloves. Any change must also consider the current relationship between the golf course and the caddy. Caddies currently have the freedom to create their own schedules and to select the persons they work for; such advantages may provide reasons not to push for a more structured form of employment [11]. This reasoning has been highlighted previously in a similar case in India, in which caddies did not want to seek formal employment and preferred the current informal structure, with suggestions for minor changes [11]. South Africa presents its own unique situation that must be considered before suggesting policy or procedural change; this would include investigating the perceptions, requirements, and objectives of the caddies themselves. This study has some limitations. First, due to the cross-sectional design, the relationship between musculoskeletal pain and working as a caddy should not be considered causal. The method of data collection may have introduced bias, including recall bias and interviewer bias. Selection bias may also have influenced the data, as a convenience sample was used: all individuals present on the day of data collection who consented to participate were included in the study and thus were not randomly selected. This design may have caused information bias, as information provided by those present might differ from that of those who were absent, which could have changed the findings of this study. It cannot be ruled out that the participants are not representative of all caddies, making it difficult to generalize the results.
Based on the information generated in this report, a larger study specific to musculoskeletal injury in caddies should be considered to further investigate the relationship and provide appropriate recommendations. In South Africa, and generally across the world, there have been very few studies addressing the impact of work exposures on health outcomes in caddies. --- Conclusions Caddies are part of the expanding informal economy in South Africa. This vulnerable group has been shown to have a significantly increased occurrence of musculoskeletal pain after adjusting for potentially confounding factors. As the golf industry expands, so should policy regarding the unique relationship between caddies and the golf course. It is clear that caddies represent a marginalized and vulnerable population with a considerably increased risk of musculoskeletal pain compared with formally employed golf course employees. Caddies should be shown methods of carrying bags that reduce additional stress on the body. In addition, golfers should be encouraged to use lighter bags, and golf courses could provide bag trolleys. Caution must be taken to ensure that new policy does not encourage golf courses to remove caddies completely, as caddying has become their main means of income. One must also consider and respect the direction of change considered acceptable by both golf course and caddy; collaboration is needed to ensure safety and continued partnership for both. --- Author Contributions: Conceptualization of main project, N.N., T.K., K.W., V.N.; Conceptualization of this manuscript topic, J.G., N.N.; methodology, J.G., F.M.; formal analysis, J.G.; investigation, J.G.; resources, N.N.; data curation, J.G.; writing-original draft preparation, J.G.; writing-review and editing, J.G., F.M., N.T., and N.N. All authors have read and agreed to the published version of the manuscript. --- Conflicts of Interest: The authors declare no conflict of interest.
Objectives Cardiovascular disease is a major cause of morbidity and mortality in Ghana, and urban poor communities are disproportionately affected. Research has shown that knowledge of cardiovascular disease (CVD) is the first step to risk reduction. This study examines knowledge of CVD and its risk factors, and the determinants of CVD knowledge, in three urban poor communities in Accra, Ghana. Methods Using the Cardiovascular Disease Risk Factors Knowledge Level Scale, which has been validated in Ghana, we conducted a cross-sectional survey with 775 respondents aged 15-59 years. CVD knowledge was computed as a continuous variable based on correct answers to 27 questions, with each correct response assigned one point. Linear regression was used to determine the factors associated with CVD knowledge. Results The mean age of the participants was 30.3±10.8 years and the mean knowledge score was 19.3±4.8. About one-fifth of participants were living with chronic diseases. Overall, 71.1% had good CVD knowledge and 28.9% had moderate or poor CVD knowledge. CVD knowledge was low in the symptoms and risk factor domains. A larger proportion received CVD knowledge from radio and television. The determinants of CVD knowledge included ethnicity, alcohol consumption, self-reported health and sources of CVD knowledge. CVD knowledge was highest among the minority Akan ethnic group, current alcohol consumers and those who rated their health as very good/excellent, compared with their respective counterparts. CVD knowledge was significantly lower among those who received information from health workers and from multiple sources. Conclusion This study underscores the need for health education programmes to promote practical knowledge of CVD symptoms, risks and treatment. We outline health-system and community-level barriers to good CVD knowledge and discuss the implications for developing context-specific and culturally congruent CVD primary prevention interventions.
BACKGROUND Cardiovascular disease (CVD) is a major cause of death globally. 1 Low- and middle-income countries (LMICs) are disproportionately affected because more than three-quarters of global CVD deaths occur in these regions. 2 By 2020, it is projected that CVD mortality will increase by 120% for women and 137% for men; by 2030, almost 23.6 million people will die from CVD, mainly from heart disease and stroke. [2][3][4][5] In Ghana, CVD is a major cause of death and accounts for as much as one-fifth of all deaths. 6 Research shows that small reductions in risk factors at the population level translate into substantial reductions in CVD events and deaths. [7][8][9] Population-based intervention approaches reduce the burden of CVDs and close the gap in CVD burden between high-income and low-income areas. 10 Effective population-based approaches to CVDs have the potential to reduce the number of people who require drug treatment. 11 12 Primary intervention strategies, such as public health education, are critical to reduce the incidence and prevalence of CVD. [13][14][15] Research has shown that lay knowledge of risk factors of diabetes, hypertension and stroke is poor in many countries, [13][14][15][16][17] including Ghana. 18 19 This partly leads to poor management and treatment outcomes, such as delay in presentation to hospital for early diagnosis and treatment, 17 high case fatality rates, 20 underdiagnosis and high rates of disability, 16 high incidence of CVD 20 and premature deaths. 16 --- Strengths and limitations of this study ► This study enrolled individuals living in three urban poor communities in Accra, Ghana. ► We adopted a cross-sectional design, using the Cardiovascular Disease Risk Factors Knowledge Level Scale, to understand the level and determinants of cardiovascular disease (CVD) knowledge. ► A limitation of our study is that the CVD knowledge scale that we adopted has an imbalance between scale-item weight and the incidence of existing risk factors of CVD in Ghana. ► Another limitation is that participants' responses on sources of CVD knowledge may have been affected by recall bias. --- Promoting CVD knowledge is, therefore, important because it is the first step to risk reduction across global communities. 16 20 A few studies carried out in urban poor communities in Accra, Ghana have shown high CVD prevalence rates, 18 19 and these rates are higher than in the general population. Since knowledge of CVD is a first step to enhancing primary prevention, it is important to examine what people know about CVD in these communities. This study examines knowledge of CVD and risk factors and determinants of CVD knowledge in three urban poor communities in Accra, Ghana. --- METHODS --- Study design This was a cross-sectional study. We used the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) cross-sectional checklist when writing our report. 21 --- Study setting The study was conducted in Agbogbloshie, James Town and Ussher Town. All three communities are located in the Ashiedu Keteke submetropolis of the Accra Metropolitan Assembly and are close to the central business district. 22 Agbogbloshie is a migrant, multiethnic community; many of the residents lack access to formal healthcare, clean water and sanitation, and live in makeshift housing structures. Many of the inhabitants work as traders and artisans.
22 James Town and Ussher Town are indigenous Ga communities and have relatively better access to healthcare through a government health facility (ie, the Ussher Town Polyclinic), which serves both communities. The main economic activities in these two communities are fishing and petty trading. 19 All three communities are characterised by high population density and low socioeconomic status, with an average monthly income of 126.13 Ghana Cedis (US$28.6; 2011). 19 Although about three-quarters of residents have attained Junior High School (middle school) education or above, the quality of education is generally low due to the dominance of poorly resourced public schools in these communities. Previous studies have shown a high prevalence of hypertension in these three communities, with low levels of awareness, treatment and control. The sampling followed a two-stage design. The first stage involved random selection of enumeration areas (EAs) proportionate to the population sizes of the three localities: a total of 5 EAs were selected from Agbogbloshie, 8 from James Town and 16 from Ussher Town. After this, all the structures in the sampled EAs were numbered and a household listing exercise was conducted. Households on the list were cumulated, and this constituted the sampling frame. The second stage was based on systematic sampling of 40 households from each of the 29 EAs, resulting in a total of 1160 sampled households. All household members in their reproductive ages (15-59 years for men and 15-49 years for women) were eligible for interviews. Details of the sampling procedure have been provided elsewhere. 22 --- Measures --- Cardiovascular disease knowledge We adapted the Cardiovascular Disease Risk Factors Knowledge Level Scale developed by Arikan et al 23 and validated in the urban Ghanaian context. 24 The knowledge scale consists of 27 items (table 1). The first 19 items focus on the risk factors of CVD; items 20 and 21 address CVD symptoms; and items 22 to 27 focus on CVD prevention, treatment and control. Items 8, 16 and 27 were negatively worded, and these were recoded in the analyses. The items in the scale were presented to the participants in a true-false question format composed of full sentences. The participants were asked to answer 'yes', 'no' or 'I don't know' to each item. Every correct answer corresponded to 1 point, and every wrong answer or 'I don't know' corresponded to 0 points. The scale has a high internal consistency coefficient (Cronbach's alpha) of 0.81 and has been shown to have good indices of validity. 23 Scores were also expressed as percentages: a score of <50% was classified as poor CVD knowledge, between 50% and 69% as moderate CVD knowledge, and ≥70% as good CVD knowledge. 25 --- Chronic diseases Respondents who had been diagnosed with any of the following conditions were coded as living with a chronic disease: arthritis, asthma, diabetes, heart disease, high blood cholesterol, hypertension, kidney disease and stroke. --- Sources of knowledge The questions on sources of knowledge formed a multiple-response set, and participants were asked to state whether they had heard of CVD from each of the following sources: television, radio, friends/relatives, schools/teachers, health workers and other sources (eg, pamphlets/posters, newspapers/magazines, community meetings, mosques/churches, drama/performance and the workplace).
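As a rough illustration of the scoring rule described earlier in this section, the sketch below (Python; the keying of items is an assumption for illustration, with positively worded items keyed to 'yes' and the negatively worded items 8, 16 and 27 keyed to 'no') computes a respondent's score and knowledge category:

NEGATIVE_ITEMS = {8, 16, 27}  # negatively worded, reverse-keyed items

def cvd_knowledge(responses):
    # responses: dict mapping item number (1-27) to 'yes', 'no' or "I don't know".
    score = 0
    for item in range(1, 28):
        answer = responses.get(item, "I don't know")  # "I don't know" scores 0
        correct = "no" if item in NEGATIVE_ITEMS else "yes"
        score += int(answer == correct)
    pct = 100 * score / 27
    level = "good" if pct >= 70 else "moderate" if pct >= 50 else "poor"
    return score, level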
The multiple responses were recategorised into a single variable (with seven categories) called 'sources of knowledge'. The categories were: television, radio, friends/relatives, schools/teachers, health workers, other sources and multiple sources. --- Physical activity Physical activity was measured as the number of days respondents spent doing moderate-intensity activities, including sports, fitness or recreational leisure activities. We recategorised these into three groups: physically inactive, partially active (engaged in physical activities less than three times a week) and fully active (engaged in physical activities three or more times a week). 26 --- Smoking, alcohol consumption and self-rated fat intake The question on smoking focused on whether participants currently used (smoked, sniffed or chewed) any tobacco products, such as cigarettes, cigars, pipes, chewing tobacco or snuff, at the time of the survey. Those who responded 'yes' were coded as 'smokers' and those who responded 'no' as 'non-smokers'. The question on alcohol consumption focused on consumption of any alcoholic drink in the 30 days prior to the survey; the responses were 'yes' (if a participant consumed alcohol in that period) and 'no' (if not). For self-rated fat intake, participants were asked to rate the level of fat in their diet over the 12 months prior to the survey, with responses of low, medium and high. Previous studies have shown that self-rated fat intake is a valid measure for evaluating diet quality at the population level, and it also provides a simple method for identifying people with the worst diet quality. 27 28 --- Sociodemographic variables Sociodemographic variables included in the analysis were sex, age, level of education, religion, locality, occupation, marital status and ethnicity. --- Data analysis Means and SDs were used to summarise continuous variables, and frequency distributions were used to summarise categorical variables. We used multiple linear regression to examine the predictors (sociodemographic characteristics, chronic diseases, physical activity, alcohol consumption, smoking, self-rated fat intake and sources of CVD knowledge) of CVD knowledge, with significance levels set at p<0.05, p<0.01 and p<0.001. The variables included in the multivariable analysis and their corresponding reference categories were theoretically selected based on previous studies. For instance, previous studies showed that older people have more CVD knowledge than youths; hence, we made the youths (15-24 years) the reference category. The data were analysed using STATA V.12. --- Patient and public involvement Patients were not involved at any stage of the research for this study. --- RESULTS --- Background characteristics The mean age of the participants was 30.3±10.8 years. --- Health profiles Table 3 shows that about one-fifth (20.5%) of the respondents were living with at least one chronic disease (hypertension 17.4%, diabetes 5.9%, asthma 3.1%, stroke 2.3%, heart disease 0.4%, arthritis 0.3%, high blood cholesterol 0.3% and kidney disease 0.3%). Less than one-tenth (7.2%) were smokers, more than half (52.4%) were current consumers of alcohol, and about one-tenth (9.9%) engaged in physical activity three or more times a week. More than half (57.9%) reported that their diet in the last 12 months was medium in fat, and 16.6% consumed foods that were high in fat within this period.
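Returning briefly to the regression specification described under Data analysis, the following is a minimal, hypothetical sketch of the model (Python/statsmodels in place of the authors' Stata workflow; all column names are illustrative):

import statsmodels.formula.api as smf

def fit_knowledge_model(survey):
    # survey: hypothetical pandas DataFrame with one row per respondent.
    # knowledge_score is the 27-item score; youths (15-24 years) are the
    # reference age category, matching the text above.
    formula = (
        "knowledge_score ~ C(age_group, Treatment(reference='15-24'))"
        " + C(ethnicity) + C(alcohol) + C(self_rated_health)"
        " + C(knowledge_source)"
    )
    return smf.ols(formula, data=survey).fit()

# print(fit_knowledge_model(survey).summary())  # coefficients are the betas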
With regard to self-reported health, more than 4 in 10 rated their health as very good or excellent, and slightly more than 1 in 10 rated their health as poor (48.5% and 11.8%, respectively). With respect to sources of CVD knowledge, almost a third of participants (28.9%) had heard of CVD from the radio, 19.1% from television, 13.8% from friends and relatives, 6.6% from health workers and 19.2% from multiple sources (table 4). --- Cardiovascular disease knowledge The mean knowledge score was 19.3±4.8 (table 5). With respect to CVD risk factors, the majority of respondents (92.8%) knew that cigarette smoking increases the risk of CVD, and that eating fruit and vegetables every day and exercising regularly can reduce the risk of the disease. However, less than half linked fatty foods to increased blood cholesterol levels. Also, about 46.0% said that CVD could not be affected by heavy alcohol use, and slightly more than half (50.2%) agreed that family history constitutes a CVD risk. The results further showed that knowledge of CVD symptoms was low. Specifically, less than 60% knew that shortness of breath is a sign of CVD, and slightly more than half attributed feelings of chest pain or discomfort to CVD symptoms. With respect to CVD prevention, treatment and control, 72.0% said that keeping blood pressure under control reduces the risk of heart disease, and more than half (55.9%) said that people with high blood pressure need to use blood pressure medicine for life. However, less than half (46.8%) said that taking blood pressure medicine for 1 month can cure the condition, and more than 4 in 10 (43.0%) said that blood pressure should be checked only when people have chest pain or headaches. Overall, about 71.1% of the respondents had good CVD knowledge, 20.5% had moderate CVD knowledge and less than one-tenth (8.4%) had poor CVD knowledge. --- Determinants of CVD knowledge Table 6 shows that ethnicity, alcohol consumption, self-reported health and sources of CVD knowledge were determinants of CVD knowledge. Those who were Akan had more CVD knowledge than the Ga-Dangme (β=2.98, p<0.01); however, CVD knowledge did not differ significantly between the Ewe, the Ga-Dangme and those who belonged to other ethnic groups. CVD knowledge was higher among current consumers of alcohol than among their counterparts (β=1.82, p<0.05). Those who rated their health as very good/excellent had more CVD knowledge than those who rated their health as poor. Those who received CVD information from health workers and from multiple sources had significantly lower CVD knowledge than those who received CVD information from the radio (β=−4.26, p<0.05 and β=−5.10, p<0.01, respectively). --- DISCUSSION Our study showed that knowledge of CVD risk factors and CVD symptoms was low among the participants. The determinants of CVD knowledge included ethnicity, alcohol consumption, self-reported health and sources of CVD knowledge. CVD knowledge was highest among the minority Akan ethnic group, current consumers of alcohol, those who rated their health as very good or excellent, and those who received CVD information from sources other than television, radio, school and health workers. The study communities are made up of predominantly lower socioeconomic status (SES) individuals. Research in Cameroon, Canada, India and Japan showed that people with low SES have poor knowledge of CVD because they are less likely to have access to educational or informational material about CVD and other health issues.
[29][30][31][32] As is often characteristic of low-income settings, and particularly of this study's setting, the environment is obesogenic, consumption of healthy foods is low, few individuals engage in health-enabling habits (eg, physical activity, moderate alcohol consumption) and health-seeking behaviours are poor. [33][34][35][36] There is also a tendency to get health information and advice from friends, relatives or peers, to self-medicate, and to healer-shop across biomedical and alternative health systems. 18 37 This combination of factors is implicated in a person's risk of developing CVD and its complications. Crucially, early recognition of CVD symptoms is an important step that must occur before treatment can be obtained, and individuals' inability to recognise the symptoms of CVD may contribute to delay in presentation to hospital for early diagnosis and treatment, and to poor prognosis. 17 20 Late presentation of serious conditions is common in Ghana and has been implicated in poor prognosis for cancers. 38 With respect to the determinants of CVD knowledge, our study showed that age was not a determinant of CVD knowledge among the respondents. Some studies have shown that older people living with cardiovascular disease have more opportunities to access information about CVD. 39 Other studies have reported either no age-related associations or an inverse relationship between age and CVD knowledge. 29 40 The present findings raise cause for concern because CVD knowledge was low among the youth, even though research in Africa has shown an increase in the incidence of CVD in this age group. 13 14 41 Also, CVD risk is reported to be high among youth in the study communities, and youth engage in limited physical activity and other risk-protective behaviours. 35 There is, therefore, a need to improve public knowledge of CVD, especially among the youth, with the overall goal of promoting lifestyle changes before disease progression occurs; this may help to reduce the incidence of CVD among youth and older community members. There was no significant difference in CVD knowledge between those living with at least one chronic condition and those without chronic disease. While people living with chronic conditions were expected to have experiential knowledge of the conditions, due to more regular interactions with healthcare professionals, the data also showed that only 6.6% of participants received CVD knowledge from health workers. This raises questions about whether people living with CVD get general or personalised information and counselling on the treatment and management of the condition. The current evidence suggests that Ghana's health system's responses to non-communicable diseases (NCDs) have not been comprehensive or integrated. 42 Within the study communities, previous research suggests that community health workers have poor CVD knowledge. 19 This lack of knowledge is likely to shape the quality and outcome of professional biomedical care. For example, respondents who heard about CVD from health workers had lower CVD knowledge than those who sourced information from radio and television. The mass media play an important role in disseminating information on chronic diseases in Ghana, and radio is a popular source of information for many Ghanaians. 43 However, the content of media reportage on NCDs is drawn from disparate national and international sources and might be inaccurate or irrelevant to the Ghanaian sociocultural setting.
43 Therefore, while radio has wider coverage for disseminating CVD information in the study communities compared with health workers, the content of the information might be as problematic as that from community health workers who have poor CVD knowledge. --- Limitations This study has four key limitations. First, participants' responses on sources of CVD knowledge may have been affected by recall bias. Second, the lower scores for items under CVD symptoms and some other items may be due to inadequate community-based CVD prevention programmes in these communities and the country at large. Furthermore, the CVD knowledge scale has an imbalance between scale-item weight and the incidence of existing risk factors of CVD in Ghana; for instance, only one question was asked about overweight even though evidence shows that overweight/obesity is an increasing public health challenge in the country. Finally, we measured physical activity as the number of days respondents spent doing moderate-intensity activities, including sports, fitness or recreational leisure activities. Based on this, 21.5% of the study participants were partially or fully active, compared with 22.4% of participants who had skilled manual occupations. This discrepancy may be because we used moderate-intensity activities as proxies for physical activity, and this measure did not take into account other forms of physical activity, such as those associated with skilled manual occupations. This may have led to an underestimation of physically active participants in this study. Despite these limitations, this study provides an important overview of CVD knowledge in three urban poor communities in Accra, and the findings are generalisable to the three study communities. --- CONCLUSION This study examined the level and determinants of CVD knowledge in three urban poor communities in Accra, Ghana. The results suggest an urgent need for CVD education in these communities in order to promote the prevention and management of this condition. Presently, Ghana's health system is weak and not NCD-competent. Our findings suggest that people are not drawing knowledge from health workers but from social networks and the mass media. A key strategy will be to invest in health-systems strengthening: existing initiatives that aim to enhance universal health coverage, such as Community-Based Health Planning and Services (CHPS) and the National Health Insurance Scheme (NHIS), can be leveraged for community-based CVD prevention and control. 19 Under the CHPS programme, community health workers (CHWs) have been trained to provide care for general and reproductive health problems; they can equally be trained to provide diagnostic and primary care services for CVD. 19 The NHIS can be expanded to include the provision of NCD medicines and technologies (eg, laboratory tests, blood glucose meters, blood pressure monitors) in primary care facilities to improve early detection, treatment and continuity of care. Successful CVD intervention programmes in LMICs incorporate task-shifting approaches, public health and peer education, and training of healthcare and allied professionals. 11 A second strategy will be to build capacity at the community level to improve knowledge and health-protective practices. The mass media and social and religious networks are major sources of health, illness, CVD and NCD knowledge for the study communities.
19 Mass media sources can be improved through training of journalists and developing task-shifting strategies in social and faith-based spaces. 43 Faith-based organisations in the study communities provide health programmes for congregants and also involve non-health professionals in their healthcare activities. 24 CHWs can be trained to deliver information, screening and support services in these spaces. Finally, establishing and supporting CVD patient groups can empower individuals living with CVD 37 43 as well as create powerful platforms for patient-led CVD advocacy for the wider community. Contributors OAS and AdGA conceptualised the study. MKK drafted the background section. OAS analysed the data. OAS and CA drafted the results, discussion and conclusion sections. RBA and PYA assisted with data analysis and draft of results. AdGA led and supervised the interpretation of the data and writing of the manuscript. All authors read and approved the final manuscript. OAS and AdGA are responsible for the overall content as the guarantors. --- Data availability statement Data are available upon reasonable request. The datasets used and/or analysed for this study are not available on a public repository as they contain identifiable and sensitive information making it impossible to protect participants' confidentiality. Researchers interested in accessing this data may contact the corresponding author. --- Patient and public involvement Patients and/or the public were not involved in the design, or conduct, or reporting, or dissemination plans of this research. Patient consent for publication Not applicable. Ethics approval Ethical approval for the study was obtained from the Noguchi Memorial Institute for Medical Research-Institutional Review Board in August 2013 (certified protocol number 105/12-13). Written informed consent was sought from participants prior to their inclusion in the study. Provenance and peer review Not commissioned; externally peer reviewed.
This community service programme on household financial management aims to broaden housewives' understanding of how to record and manage family finances. Every household aims for financial prosperity, and prosperous finances require good, efficient financial planning. A prosperous household should also maintain a savings or investment fund, since this shapes the family's future. The method used in this activity was training combined with discussion and question-and-answer sessions on household financial management. The housewives were trained using an illustrative family case problem; the 15 participants who attended were active and responsive throughout the training and the exercises they worked on. As the financial managers of their families, the participating housewives must understand the concept of financial planning for the future in order to build prosperous families.
INTRODUCTION Financial management is one of the most dominant factors determining whether harmony and welfare can be seen in a household or in an organization. The welfare of a household can be shaken if the wife, as the financial manager of the household, does not carry out her financial planning properly. Inefficient management of household finances creates many problems and conflicts, such as loss of trust between husband and wife and even divorce. Financial management seeks to ensure that the family's financial cycle proceeds according to the family's financial plans and goals, and that income and expenditure stay in balance. If financial planning is neglected, problems arise, such as an imbalance between income and expenditure in which spending grows larger than income. Debt can also become the biggest source of expenditure, which in turn triggers family hardship and conflict. People often think that accounting and bookkeeping can be practised only in a business entity, even though financial management in the household reflects the same practices and values expressed in accounting, such as transparency and accountability. According to Anggraeni (2012), managing the household economy (ERT) is the act of planning, implementing, monitoring, evaluating, and controlling the acquisition and use of family economic resources, especially finances, to achieve an optimal level of need fulfilment and to ensure the stability and growth of the family economy. Managing family finances looks very simple and easy to practise; however, in practice, not everyone can manage them properly and efficiently.
The problem is not the size of the salary or income received, but how a housewife directs existing funds according to a scale of priority needs. If the family expenditure budget is not managed properly, the family ends up 'digging one hole to fill another', borrowing to cover earlier obligations. Life then feels perpetually short of money, even when income rises. Family financial planning is a skill of organizing and allocating family funds clearly in proportion to the distribution of family needs, covering short-, medium-, and long-term needs. Managing family finances fundamentally involves setting fund allocations: the financial controller must estimate current needs, future needs, and unexpected needs. Current needs are expenses that must be met immediately, for example kitchen expenses, motorbike instalments, cellphone credit, and electricity costs. Future needs are expenditures that will fall due later, for example children's education costs through to graduation, pilgrimage fees, and children's wedding costs. Unexpected needs are uncertain expenses that may arise at any time but for which a budget must still be prepared, for example medical costs when a family member falls ill. A budget that is not prepared in proportion to needs creates unfavourable conditions that affect family life and, in turn, leads to a family that is not economically prosperous. Whoever holds the family's funds must try as far as possible to meet existing needs; a family is prosperous when it can enjoy a decent and reasonable life. After participating in this activity, all participants are expected to think and act constructively, especially in managing family income, and to be able to use financial funds according to the family's priority scale. The target of this programme is housewives in the Emplasmen Aeknabara Village area; some of these housewives work, some do not, and some run their own businesses, with an average educational background up to senior high school (SMA). Manurung (2013) found that accounting plays an important household role for accountant families (educators and practitioners), namely in planning each household budget, record keeping, decision making, and long-term planning in the household. Money plays a very important role in the continuity of human life because basic needs must be met for life to proceed properly. According to Apriyanto and Ramli (2020), money is so important in modern civilization that it can make people happy but can also be a source of disaster; many divorces and family problems arise from money problems. Basic needs such as clothing, food, and shelter all require money to be fulfilled.
A larger family income does not guarantee that all the needs of a family, or of the organization it runs, will be met: some families still experience financial deficits at the end of the month, and the resulting shortfall leads to debts that become problems for the family. According to Siregar (2019), when housewives can manage family finances appropriately (the right use, at the right time, at the right price and of the right quality), family welfare can be achieved; finances that cannot be managed properly remain a source of problems. Managing household finances is not as easy as flipping one's palm. Moreover, when family income is erratic and finances are managed carelessly, the saying 'a peg larger than the pillar' (spending exceeding income) comes to describe household finances. According to Siagian and Khair (2018), economic stability in the family is one of the factors that determine family happiness, because income that is insufficient for life's needs can be the main cause of quarrels in the family. --- METHOD This research was conducted in Emplasmen Aeknabara Village, Bilah Hulu District, Labuhan Batu Regency, on 15 February 2023. The target audience was housewives in Emplasmen Aeknabara Village. The method applied was elaboration and brief training on the basics of managing household finances, using guideline materials and explanations of household bookkeeping and the budgets that will be needed. The participating housewives were given the following activities: • household financial management training; • a question-and-answer session on the material provided on how to manage household finances; • determining how to plan finances. The author presented the material directly to the housewives, building understanding of household financial management by giving examples and explaining household financial reports. For this activity to succeed, the target audience, especially housewives, must build competence in financial management. Accounting knowledge is studied not only as preparation for work but can also be applied in everyday life to manage personal finances efficiently, and can later be carried into the household when a family is formed. --- RESULTS Global economic movements often bring high and fluctuating inflation, and the community must be able to anticipate its future impact. Sharp increases in household expenses hurt some families because of the instability between current income and expenses. Housewives who act as financial managers in a family must therefore understand how to plan efficient household financial management. Through the training and socialization activities, the author provided information on the following topics: 1. teaching on household financial management; 2. a question-and-answer session on how to manage household finances; 3. teaching on how to plan household finances for the future. The community service was carried out so that the community, especially housewives, can learn how to keep household financial records.
Housewives in Emplasmen Aeknabara Village receive income every month, but the amount is not fixed; this is the main problem that the housewives most often complain about. Most of their families earn their livelihoods as construction workers and casual daily labourers (BHL), so wages are usually received daily rather than monthly. The counselling activity was carried out on 15 February 2023, from 14.00 to 16.30 WIB, in Emplasmen Aeknabara Village and was attended by 15 participants. In stage 1, the author explained financial planning and efficient financial management to the housewives. Financial planning is an individual's activity of managing income to achieve financial goals in an orderly way. Financial planning must first distinguish needs from wants: if someone prioritizes wants over needs, the family will experience financial decline. The financial planning to be implemented is as follows: 1. Recognizing the financial condition: a housewife must understand long-term assets, that is, assets with high future resale value, because comparing these assets with existing debts shows whether the financial condition is good. If the long-term assets are worth more than the debts, the family's financial condition is good. 2. Determining wants: list the want to be targeted first, how much money will be prepared for it, and a timeframe for achieving it, so that income is set aside consistently each month. 3. Prioritizing the main want and preparing the budget: the budget must state the purpose of every cost that will be incurred. --- Figure 1. Explanation of household financial planning --- In stage 2, practical calculation exercises, planning, and a question-and-answer session were carried out, using the following illustration for a monthly income of Rp 7,000,000: a. Investment/savings fund = Rp 7,000,000 x 10% = Rp 700,000; b. Insurance fund = Rp 7,000,000 x 10% = Rp 700,000; c. Children's education fund = Rp 7,000,000 x 20% = Rp 1,400,000; d. Household consumption fund = 40% of income = Rp 2,800,000; e. Debt fund = debt service is capped at 20% of monthly income (Rp 7,000,000 x 20% = Rp 1,400,000); since the current KPM instalment is Rp 1,000,000, additional borrowing can be at most Rp 400,000. This session drew good feedback: participants asked questions enthusiastically, and the mothers who attended showed high interest in the activities. In the assessment after the training, 90% of the women gave a good rating and demonstrated an understanding of how to keep simple, efficient financial records.
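The stage-2 illustration can be restated as a short calculation; the sketch below (Python; the 20% debt ceiling and the other shares are those used in the training example above) reproduces the figures for a monthly income of Rp 7,000,000:

INCOME = 7_000_000
SHARES = {
    "investment/savings": 0.10,
    "insurance": 0.10,
    "children's education": 0.20,
    "household consumption": 0.40,
    "debt ceiling": 0.20,
}

funds = {name: int(INCOME * share) for name, share in SHARES.items()}
existing_instalment = 1_000_000  # current KPM instalment
room_for_new_debt = funds["debt ceiling"] - existing_instalment  # Rp 400,000

for name, amount in funds.items():
    print(name, "= Rp", format(amount, ","))
print("additional borrowing = Rp", format(room_for_new_debt, ","))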
The purpose of this activity is for the community, especially mothers in Emplasmen Aeknabara Village, to understand and allocate their finances in efficient proportions: investment/savings funds at 10-20% of income, education funds at approximately 20% of income, funds for household needs at a maximum of 60% of income, picnic funds at approximately 10% of income, entertainment funds at approximately 5% of income, protection funds at approximately 10% of income, and social/infaq/zakat funds at 5-10%. In this way, housewives who manage income every month can gauge how much should be spent and can reduce the economic problems that commonly occur in the community. --- CONCLUSION During the implementation of 'Counselling on Financial Management for Housewives in Emplasmen Aeknabara Village', the authors received a positive response from the people present. Before the counselling, the housewives' understanding was simply that when they receive monthly or daily income, they immediately spend the money to pay the required costs, without understanding the efficient allocation of the financial cycle. With habits like this, the state of one's finances remains unknown, because the allocation is undirected and gives rise to the saying 'a peg larger than the pillar'. Through this activity, the community began to understand the importance of allocating the correct financial percentages, as well as the importance of recording household finances every month. This must be done so that the family can see its financial condition and the health of its wallet; with directed financial recording, the family's finances can move toward prosperity. Most of the people have problems with their financial condition because the income or wages they receive are uncertain: wages are received per day, and the jobs they hold are mostly casual daily labour (BHL) and construction work. With such jobs, income is uncertain, and the community allocates its funds only to daily expenses without saving. Facing financial constraints like these, the authors explained that in a family, husband and wife must support each other. If a husband earns only erratic wages, the wife can help increase the family's income, for example by opening a small business at home or on social media; this opens opportunities to be more productive in allocating family finances.
INTRODUCTION A major challenge in Computational Social Science [6,12,15] lies in modelling and explaining the temporal dynamics of human communications. Which interactions lead to more successful communication or productive meetings? How can we infer temporal models of interactions? How can we explain what these temporal interactions really mean? Current statistical analysis techniques do not explore the full temporal aspect of time-series data generated by interactive systems, and they certainly do not address complex queries involving temporal dependencies. We investigate Markov reward models (also called discrete-time Markov chains with rewards) for human-human interactions in social group meetings, and how to interpret them. We identify various queries predicating over the temporal interactions between different roles, the impact of different sentiments on interactions or on decision making, causality between particular states, etc. We use probabilistic computation tree logic (PCTL) with rewards [4,11], a probabilistic temporal logic, to formalize these queries. We then use the PRISM tool [11], a symbolic probabilistic model checker, to analyse the formal queries and thus interpret the temporal interaction models. Probabilistic model checking [4] is a well-established verification technique that explores all possible states of a Markov model in a systematic and exhaustive manner and computes the probability that a temporal property of the system under analysis holds. We can ask queries such as 'What is the average count of the project manager's interventions until a decision is taken?', 'What is the probability of a decision being taken without anybody commenting about their understanding?', or 'What is the average interaction count from one decision to another without a negative sentiment being expressed in the interim?'. Figure 1 illustrates the method we propose for probabilistic modelling and analysis of social group behaviour. The main contribution of this paper is to empirically demonstrate the expressiveness of probabilistic temporal logic properties and probabilistic model checking for the analysis of the temporal dynamics of social group interactions in meetings. --- RELATED WORK Our work is most closely related to the Markov Rewards Model by Murray [13,14] for analyzing and querying social sequences. In that work, social interactions are represented as a sequence of states, and particular states are associated with rewards or costs that are dependent on the query being asked. A Value Iteration algorithm is then used to estimate the expected value of every state, with a state's value indicating how it is related to the outcome of interest being queried. In our work, we use the same state representation as Murray, but show that our probabilistic model checking framework allows us to ask queries that would be difficult or impossible to express in the Markov Rewards Model framework. More generally, our approach is an example of social sequence analysis [7], where the goal is to analyze patterns in social sequences or to compare social sequences to one another. These social sequences might unfold at the macro scale (over days or weeks) or at the micro scale (over minutes or hours); the present work is concerned with social sequences at the micro scale. The past decade has seen an increasing amount of work on developing technologies for supporting meetings, including the use of machine learning for making predictions on meeting data.
This includes detection of decision items [10] and classification of dialogue act types [8], in addition to predictions for many other meeting phenomena [16]. The field of Social Signal Processing (SSP) consists of work that examines social interaction through primarily nonverbal cues [18], such as gesture, gaze, and prosody. There is also a growing inter-disciplinary field of meeting science that aims to understand the processes that take place before, during, and after meetings [1].

Figure 1: Overall process of modeling and analysis of group interactions

--- CORPUS The dataset used in this paper is the Augmented Multi-party Interaction (AMI) meeting corpus [5]. Each meeting group in the corpus consists of four people, and the group completes a sequence of four meetings where they are role-playing as members of a company that is designing and marketing a product. Each person in the group is assigned a role; the roles are Project Manager (PM), Marketing Expert (ME), User Interface Designer (UI), and Industrial Designer (ID). Despite the artificial scenario and the assigned roles, the speech is spontaneous and unscripted, and each group is free to make decisions as they see fit. We discuss further aspects of the corpus in Section 4.1, where we describe the state representation used in this work. --- PROBABILISTIC TEMPORAL MODELLING AND ANALYSIS OF INTERACTION In this section we describe the state representation used in our Markov models, the probabilistic temporal logic properties and reward structures used for formalising queries about group interactions captured by the Markov model, and the probabilistic model checker PRISM used for formally analysing these queries. --- Markov models of social group interactions In our representation of social sequences in meetings, each state is labelled by a 5-tuple consisting of the following information: (1) the participant's role in the group: PM (Project Manager), ME (Marketing Expert), UI (User Interface Designer), and ID (Industrial Designer); (2) the dialogue act type, taking one of the 15 values listed and briefly described in Table 1; (3) the sentiment being expressed: nosentiment, positive, negative, posneg (both); (4) whether the utterance involves a decision: nodecision, decision; (5) whether the utterance involves an action item: noaction, yesaction. In addition to the complex states described above, there are START and STOP labelled states representing the beginning and the end of a meeting. Example states include the following: • <PM-bck-positive-nodecision-noaction> describes the situation where the project manager makes a positive backchannel comment, unrelated to a decision or action; • <PM-el.ass-nosentiment-nodecision-yesaction> represents the project manager eliciting feedback about an action item. The Markov aspect of the Markov models is that the probability of a given state depends only on the preceding state in the sequence. The state transition probabilities are estimated directly from the transition counts in the data. This way we obtain a discrete-time Markov model of the behaviour seen in the meeting data, where the state labels and the transition probability function are defined as above, and the initial state is labelled START. A path in a Markov model is a non-empty sequence of states such that the transition probability from one state to the next one in the sequence is strictly greater than zero.
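To make this estimation step concrete, the following is a minimal sketch (the helper name and the toy state sequences are hypothetical, not taken from the AMI data) of deriving the transition probability function from raw transition counts:

```python
from collections import Counter, defaultdict

def estimate_transition_probabilities(sequences):
    """Estimate a discrete-time Markov chain from labelled state sequences.

    Each sequence is a list of state labels bracketed by START and STOP.
    Returns a dict mapping (state, next_state) -> probability, obtained by
    normalising the raw transition counts per source state.
    """
    counts = Counter()
    totals = defaultdict(int)
    for seq in sequences:
        for s, t in zip(seq, seq[1:]):
            counts[(s, t)] += 1
            totals[s] += 1
    return {(s, t): c / totals[s] for (s, t), c in counts.items()}

# Hypothetical toy meeting fragments, not real AMI annotations:
fragments = [
    ["START", "PM-inf-nosentiment-nodecision-noaction",
     "ME-ass-positive-nodecision-noaction", "STOP"],
    ["START", "PM-inf-nosentiment-nodecision-noaction",
     "PM-el.ass-nosentiment-nodecision-yesaction", "STOP"],
]
P = estimate_transition_probabilities(fragments)
# e.g. P[("START", "PM-inf-nosentiment-nodecision-noaction")] == 1.0
```

Normalising counts per source state in this way is simply the maximum-likelihood estimate for a first-order Markov chain.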
--- Probabilistic temporal logic and model checking Probabilistic model checking is a technique for modelling and analysing stochastic systems, usually focused on investigating correctness properties of the real-life system. It requires an abstract, high-level description of the system and specifications of the properties expressed in a suitable temporal logic. In the first step, a probabilistic model checker tool builds a model of the system from its description, typically a Markov model (e.g., discrete-time Markov chain, continuous-time Markov chain, or Markov decision process). In the second step, the tool uses model checking algorithms to verify automatically whether a temporal logic property is satisfied or not, or to compute the probability of a temporal logic formula holding. These model checking algorithms explore the model in a systematic and exhaustive way. Probabilistic Computation Tree Logic (PCTL) [4,11] is a probabilistic branching-time temporal logic that allows one to express a probability measure of the satisfaction of a temporal property by a state of a discrete-time Markov model. The syntax is the following:

State formulae: Φ ::= true | a | ¬Φ | Φ ∧ Φ | P▷◁p [Ψ] | S▷◁p [Φ]
Path formulae: Ψ ::= X Φ | Φ U≤N Φ

where a represents an atomic proposition, ▷◁ ∈ {≤, <, ≥, >}, p ∈ [0, 1], and N ∈ N ∪ {∞}. For a path π starting from a state s, we define the satisfaction relation π |= Ψ as follows: • π |= X Φ is true if and only if Φ is satisfied in the next state following s in the path π; • π |= Φ1 U≤N Φ2 is true if and only if Φ2 is satisfied within N time steps and Φ1 holds at every point before Φ2 is satisfied for the first time. The syntax above includes only a minimal set of operators; the propositional operators false, disjunction ∨, and implication =⇒ can be derived. Two common derived path operators are the eventually operator F, where F≤N Φ ≡ true U≤N Φ, and the always operator G, where G Ψ ≡ ¬(F ¬Ψ). If N = ∞, i.e., the until operator U is not bounded, then the superscript is omitted. For example, how do we check whether the probability of reaching a yesaction state within 50 utterances, while the sentiment being expressed is not a positive one, is greater than 0.75? The corresponding PCTL property is P≥0.75 [¬"positive" U≤50 "yesaction"]. The model checking algorithm computes the probability of reaching a state satisfying the atomic proposition "yesaction" provided that all previously visited states do not satisfy the atomic proposition "positive"; if the resulting probability is greater than 0.75 then the model checking problem returns true; otherwise it returns false. PRISM is a probabilistic model checker [11] used for formal modelling and analysis of systems that exhibit random or probabilistic behaviour. Its high-level state-based modelling language supports a variety of probabilistic models, including discrete-time Markov chains. In PRISM we can replace the bounds ▷◁ p in the properties with =? and thus obtain the numerical value that makes the property true. PRISM also allows models to be augmented with reward structures, which assign positive real values to states and/or transitions for the purpose of reasoning over expected or average values of these rewards. In PRISM we can specify the following reward-based temporal properties: • R rwd=? [ C≤N ] in a state s computes the expected value of the reward named rwd accumulated along all paths starting from s within N time-steps.
• R rwd=? [ F Φ ] in a state s computes the expected value of the reward named rwd accumulated along all paths starting from s until the state formula Φ is satisfied. In PRISM, filters check for properties that hold when starting from sets of states satisfying given propositions. In this paper we use the filter operators state and avg in the following two types of properties: • filter(state, Φ, cond1) evaluates the satisfaction of the state formula Φ in the state uniquely identified by the Boolean proposition cond1; • filter(avg, Φ, cond2) computes the average over all states where cond2 is true. In the following, for convenience, we refer to PCTL properties with or without rewards simply as properties or queries, though strictly they also include PRISM operators. --- EXPERIMENTS AND RESULTS In this section, we first define the behavioural model used, followed by a set of queries, their encoding as probabilistic temporal logic properties, and their results, which demonstrate the flexibility and expressiveness of the method presented in this paper. --- Defining a behavioural model of social group interactions The behavioural model is a Markov rewards model initially inferred as described in Section 4.1, to which we add labels and reward structure definitions as required by the queries. In our case the atomic propositions associated with each state are the state labels and the individual particles composing the state label. The PRISM model encoding the Markov model for the input data set considered for this paper, as well as the PRISM properties analysed later in this paper, are available at http://www.dcs.gla.ac.uk/~oandrei/resources/imsgi_gift18. The PRISM model has a relatively small state space of 196 reachable states (out of 269 states in total) and 4002 transitions, therefore the model checking process for one temporal property is not time-consuming (under 0.1 seconds for all instances of the properties listed in the next section). We defined the following reward structures: • r_Steps assigns a value of 1 to each transition or time-step. We use this reward structure when computing the average number of time-steps (i.e., interactions) from one state to another state. • r_roleLabel and r_roleLabel_decision (used in the queries below) assign a value of 1 to each visit of a state belonging to roleLabel, respectively to each visit of a state where roleLabel is involved in a decision. --- Querying the Markov model We use the command line of the PRISM tool to execute each of the queries presented in this section through the probabilistic model checking engine and export the results. For some of the PRISM properties below we adopt the following notation for the sake of brevity. We use the placeholder roleLabel to be instantiated with any of the roles PM, ME, UI, or ID. The atomic proposition y=j refers to the state variable y in the PRISM model with the identifier j; in this case j takes values from 0 to 268. --- 5.2.1 Queries for validating the model. We start with examples of queries and results that confirm our expectations about meetings generally and the AMI scenario specifically. For example, some of the results reflect the fact that project managers (PM) tend to begin meetings, and - in the AMI scenario, at least - are the most active participants. Some of the results of this first set of queries are merely artifacts of the AMI scenario, and in particular of the fact that participants are assigned clearly-defined roles and have to progress through distinct phases of a role-playing exercise. We then move on to queries and results that generate more insight into meeting interactions.
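Before turning to the queries, it may help to make the semantics of such reward-based reachability queries concrete. The following is a minimal numerical sketch (the three-state chain and its labels are hypothetical, not the 196-state AMI model) of how a query of the form R{"r_Steps"}=?[F Φ] reduces to an expected-hitting-time computation:

```python
import numpy as np

def expected_steps_to_reach(P, target):
    """Expected accumulated r_Steps reward until reaching a `target` state,
    i.e. the quantity behind R{"r_Steps"}=?[F "target"], assuming every
    state reaches the target with probability 1.

    P      : (n, n) row-stochastic transition matrix
    target : boolean array of length n marking the target states
    """
    h = np.zeros(P.shape[0])
    for _ in range(10_000):                    # simple value iteration
        h_new = np.where(target, 0.0, 1.0 + P @ h)
        if np.max(np.abs(h_new - h)) < 1e-12:  # converged
            break
        h = h_new
    return h

# Hypothetical 3-state chain: START, a "PM" state, and an "ME" state.
P = np.array([[0.0, 0.7, 0.3],
              [0.0, 0.5, 0.5],
              [0.0, 0.6, 0.4]])
is_PM = np.array([False, True, False])
print(expected_steps_to_reach(P, is_PM)[0])  # expected steps from START: 1.5
```

PRISM performs an equivalent computation (by solving the corresponding linear equation system) over the full state space of the model.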
Q1: How long does it typically take in a meeting before each type of role has participated? These queries are encoded in PRISM as: R{"r_Steps"}=?[F "PM"] R{"r_Steps"}=?[F "ME"] R{"r_Steps"}=?[F "ID"] R{"r_Steps"}=?[F "UI"] Each of the PRISM queries above computes the average accumulated number of time steps (or interactions) it takes to reach a state corresponding to a particular role. The actual average number of steps is computed using the transition reward r_Steps. The analysis results are 2.13 time steps for PM, 5.26 for ME, 5.99 for ID, and 6.03 for UI. This is an intuitive (and expected) result, showing that the project manager (team leader) tends to begin the meeting discussions, but also that all members participate early on in the discussion. Q2: How long does it typically take in a meeting before each type of non-PM role has participated after a Project Manager? The PRISM properties encoding Q2 are: filter(avg, R{"r_Steps"}=?[F "ME"], "PM") filter(avg, R{"r_Steps"}=?[F "ID"], "PM") filter(avg, R{"r_Steps"}=?[F "UI"], "PM") Q3: How much does each role participate overall? Let χ3(roleLabel) denote the PRISM property that computes the average visit counts to roleLabel states. Then the PRISM property encoding Q3 is: χ3(roleLabel)/(χ3(PM) + χ3(ME) + χ3(UI) + χ3(ID)) Checking this property instantiated with each of the four roles, we obtain that PM participates 32%, ME 24%, while UI and ID participate in equal measure at 22%. These results reflect the fact that project managers tend to be more dominant in the meeting discussions, particularly in regard to decision-making. Q4: On average, how many times is a PM (or some other role) involved in decision-making within 100 time steps? Let χ4(roleLabel) denote the PRISM property that computes the average visit counts to states where roleLabel made a decision within 100 time steps: R{"r_roleLabel_decision"}=?[C<=100] Then the PRISM property encoding Q4 is: χ4(roleLabel)/(χ4(PM) + χ4(ME) + χ4(UI) + χ4(ID)) After checking the four instances of this property, we obtain the following results: 86% for PM, 9% for UI, 3% for ID, and 1% for ME. As expected, project managers make the majority of decisions, and the differences between the other three roles are likely an artifact of the AMI scenario. Q5: Which type of non-PM role is more participatory following a PM within 100 time steps? The PRISM property encoding this query averages over all PM states the visit counts to roleLabel within 100 time steps: filter(avg, R{"r_roleLabel"}=?[C<=100], "PM") and the results of model checking it are: 36% for ME, 33% for ID, and 32% for UI. This shows that the non-PM roles are approximately equally likely to participate after the PM, with the ME being slightly more frequent. Again, this may be an artifact of the AMI scenario. Q6: Which roles with positive sentiment have the highest probability in the long run? The PRISM property encoding this query looks at the long-run probability of being in a particular type of role with a positive sentiment, S=?[ "roleLabel" & "positive" ], and the results are as follows: 34% for PM, 32% for ME, 18% for ID, 16% for UI. These results largely reflect the fact that the PM tends to be the most active person in the AMI meeting discussions. --- 5.2.2 Queries for further exploration of interactions. Many of the preceding queries and results conform to our expectations about meeting behaviour and the AMI scenario. We now turn to a set of queries and results that generate more valuable insight into meeting interactions. Q7: Which non-decision states are most valuable in contributing to decisions being made within 100 time-steps?
The PRISM property encoding this query computes the probability of reaching a decision state within 100 time-steps when starting from a specific non-decision state: filter(state, P=?[F<=100 "decision"], (y=j)&"nodecision") The top ten most valuable non-decision states (i.e., those most likely to lead to a decision within 100 time-steps) were then identified. The most noticeable trend is that states containing sentiment - both positive and negative - are highly associated with decision-making. A second trend is that non-decision states belonging to the PM are highly associated with decisions being made. Both of these findings are intuitive; for example, participants tend to express a variety of opinions before mutually deciding on a solution or course of action. Q8: Which PM states tend to lead to more participation by non-PM participants within 50 time-steps? The corresponding PRISM property for the ME role sets a reward of 1 for each visit of an ME state and hence computes the average visit counts to ME states within 50 time-steps when starting from a specific PM state. These results tell us that the PM is particularly likely to get participation from other members when he or she explicitly seeks input (e.g. the elAss and elInf dialogue act types) and when expressing sentiment. Q9: Which non-sentiment states are highly associated with positive sentiment? The PRISM property encoding this query looks at each state tuple with no sentiment being expressed and then computes the probability of the next state including a positive sentiment: filter(state,P=?[X "positive"], (y=j) & "nosentiment") The top ten non-sentiment states most likely to be followed by positive sentiment were then extracted. These results show that states containing dialogue acts that explicitly elicit information (e.g. elAss, elSug, elUnd, elInf) are likely to be followed by expressions of positive sentiment. In particular, the top state represents the PM explicitly seeking an assessment from one or more of the other group members, and this is very likely to be followed by a positive sentiment state. Q10: Which non-sentiment states are highly associated with negative sentiment? Similar to Q9, the PRISM property encoding Q10 is: filter(state,P=?[X "negative"],(y=j) & "nosentiment") Among the top ten non-sentiment states most highly associated with negative sentiment in the next state, interestingly, states that explicitly elicit information and belong to somebody other than the PM are associated with negative sentiment. This result, coupled with the previous one, suggests that participants may be eager to please the PM through expressions of positive sentiment and agreement, and more willing to express negative sentiment to non-PM participants. Q11: Which non-decision states that occur early in meetings tend to cause decisions to be made quickly? The PRISM property encoding this query is: P=?[F<=50 ((y=j) & "nodecision" & P>=1[X "decision"])] where we considered early in meetings to mean within 50 time steps. This property computes the probability of eventually (i.e., in the Future) reaching, within 50 time steps, the nodecision state identified by j such that in the neXt state a decision is taken (with probability 1). Interestingly, none of the top states involve sentiment, and they belong to a variety of roles. However, the top two results both belong to the PM. This reveals that sentiment and decision-making are less associated with each other early on in the meetings.
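The bounded reachability probabilities behind encodings such as Q7 and Q11 have a standard value-iteration reading. As a minimal sketch (again over a hypothetical three-state chain, not the AMI model), a PCTL bounded until P=?[ !"avoid" U<=N "target" ] can be computed as:

```python
import numpy as np

def bounded_until_probability(P, avoid, target, N):
    """Probability of  !avoid U<=N target  from each state, the quantity
    behind encodings such as P=?[F<=100 "decision"] (with an empty avoid
    set, since F<=N phi is true U<=N phi).

    P      : (n, n) row-stochastic transition matrix
    avoid  : boolean array marking states violating the left operand
    target : boolean array marking states satisfying the right operand
    """
    x = np.where(target, 1.0, 0.0)
    for _ in range(N):
        # target states stay at 1, avoid states at 0, others take one step
        x = np.where(target, 1.0, np.where(avoid, 0.0, P @ x))
    return x

# Hypothetical 3-state chain with an absorbing "decision" state:
P = np.array([[0.0, 0.8, 0.2],
              [0.1, 0.6, 0.3],
              [0.0, 0.0, 1.0]])
target = np.array([False, False, True])   # e.g. "decision" states
avoid = np.array([False, False, False])   # empty avoid set -> F<=N
print(bounded_until_probability(P, avoid, target, N=100))
```

Iterating the one-step recurrence N times is the textbook bounded-until model checking algorithm; PRISM applies the same recurrence over the full model.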
Q12: If one person expresses positive sentiment, does it lead to other people expressing positive sentiment? We compare the average probability of expressing one type of sentiment after another or the same type of sentiment using a set of PRISM properties. For example, the last such property computes for each positive state s the probability of reaching a negative state when starting from s, and then returns the average over all positive states s. These results show that an expression of positive sentiment is very likely to be followed by another expression of positive sentiment, and similarly with negative sentiment following negative sentiment. It is less common for negative to follow positive and vice versa, which partly reflects the fact that negative sentiment is much less common in this corpus. Q13: If a PM person expresses positive sentiment, what is the probability that it leads to positive sentiment expressed by a non-PM person? This query is a form of causality relation between positive sentiments expressed by a PM person and a non-PM person. We formalise query Q13 as a probabilistic constrained response [9] where we instantiate roleLabel by ME, UI, or ID: P>=1 [G (("PM" & "positive") => P>=p [(!("roleLabel" & "negative") & !("PM" & "negative")) U<=N ("roleLabel" & "positive")])] This PRISM property states the following: whenever PM expresses positive sentiment then, with probability greater than p, roleLabel and PM do not express negative sentiment until roleLabel expresses a positive sentiment within N time steps. This property helps us identify the maximum probability p for which the answer to the query is true when instantiating roleLabel with the non-PM roles. For N = 100, the maximum probabilities p for which the answers to Q13 are true are 0.1 for ME, 0.06 for ID, and 0.05 for UI, respectively. For N = 500, the maximum probabilities p for which the answers to Q13 are true are 0.4 for ME, 0.25 for ID, and 0.25 for UI. We conclude that ME is approximately twice as likely as ID and UI to respond positively to a PM positive sentiment. This result is likely to reflect the structure of the AMI scenario. It tells us that the ME has a great deal of responsibility and can perhaps be seen as a secondary leader of the meeting. --- CONCLUSION In this paper we demonstrated the expressiveness of probabilistic temporal logic properties for formalising various probabilistic and reward-based queries about group interactions in meetings, and then analysed them with the probabilistic model checker PRISM and interpreted them for the AMI corpus. Some of the queries analysed above do not need probabilistic temporal logic properties to be asked of the initial data set. However, all queries involving bounded time steps, and in particular the steady-state properties, e.g. Q11 and Q13, cannot be expressed in any other way than as temporal property formulae. The queries Q1-Q6 validate our behavioural model as their results confirm expected interactions, while the queries Q7-Q13 highlight novel insights into the AMI dataset we analysed. In this paper we analysed the Markov model inferred from state transition counts in the data. For future work we will consider admixture models inferred from the data using classical Expectation-Maximisation algorithms, where each component (associated with a latent variable) in the admixture model captures a particular pattern of behaviour, similar to the work of [2,3].
The challenge will be in identifying suitable classes of probabilistic temporal properties for characterising and discriminating between the patterns for the particular type of interaction data contained in the AMI corpus. In future work, we will experiment with alternative state representations, particularly representations that are less specific to the AMI corpus scenario and its roles. For example, we will include demographic characteristics such as gender and the native language of the speaker. We will also apply this representation and methodology to other group interaction datasets such as the ELEA corpus [17].
The study objective was to assess the determinants of sustainability of youth empowerment projects in Machakos County, Kenya, with the specific objectives being to evaluate how project stakeholder engagement, project management skills, project funding, and project scope management determine the sustainability of youth empowerment projects within Machakos County. The study was guided by stakeholder, skills, fund accounting, and control theories. A semi-structured self-administered questionnaire was used to collect primary data from project officers, managers, and other key stakeholders within 73 youth empowerment programmes, complemented with secondary data sources. The data was analysed by qualitative and quantitative means using SPSS, with an OLS regression being done to ascertain the connection among the study variables. The study found that project management skills, stakeholders' engagement, project scope management, and project funding are strong determinants of project sustainability, with the factors showing high correlation coefficients that were both positive and statistically significant. It was concluded that project management skills, stakeholders' engagement, project scope management, and project funding positively influence project sustainability, and that improvements in these four spheres would lead to improvements in project sustainability. The study therefore recommended that future youth empowerment programmes should enhance their project sustainability by observing the four determinants, namely project management skills, stakeholders' engagement, project scope management, and project funding, so as to improve project performance and impact within the target beneficiaries. The study also suggested that organizations implementing youth empowerment programmes should invest more in researching current trends in project management skills, stakeholders' engagement, project scope management, and project funding so as to realize effective and successful implementation of these practices within their organizations. The study suggested further research within more sectors in Kenya, such as education, children's welfare, and wildlife and environmental conservation, among others, and within the region, to confirm these findings in a different and wider context.
INTRODUCTION While youth unemployment is a prevalent problem in the whole world, the state of affairs is even poorer in Kenya: according to UNDP (2017), youth constitute three out of every five unemployed Kenyans. To counter the high level of unemployment among the youth, governments have initiated quite a number of projects at both the county and national level in order to deal with this growing menace (Thairu, 2018). According to Martin (2018), the rate of youth unemployment continues to increase over the years, and this results not only in despair but also in disillusionment among the youth, making them vulnerable to violence and criminal undertakings. Given this situation, it is therefore very crucial for every government to work towards reversing the youth unemployment situation. According to the International Labour Organization (2017), in its Global Employment Trends for Youth report, youth account for over 35% of the unemployed population globally. The unemployment rate marginally increased to 13.2% from 13% in 2017. In 2017, the projected total number of unemployed youth was 71.8 million, but in 2018 the number was expected to increase by 200,000 to reach 72 million. Germany is one of the developed countries that has invested heavily in youth empowerment projects and has reaped substantial returns. In less than a decade, Germany has invested more than $1 billion in its youth initiatives so as to counter the level of unemployment. Key initiatives include football and vocational training centres (VTCs). Germany's vocational training system is work-based and highly productive, with VTC enrollment in 2017 reaching 1.3 million. According to Euler (2018), in 2017 the youth unemployment rate plummeted to 6.4%, compared with 9.5% in the U.S., making Germany's workforce one of the most productive globally. The vocational training system supplies companies with highly skilled employees, provides diversified and promising career options for youth, and fosters culture and society. Due to these youth empowerment initiatives, Germany's GDP has also increased significantly (Euler, 2018). According to Nnadozie (2018), unemployment of youth in Africa is a weightier hurdle than climate change. The total number rose from 28 million in 2016 to 29 million in 2017. According to ILO (2018), the estimated total number of youth in Africa is 226 million, representing 20% of the global youth population, and the number is projected to increase by up to 42% by the year 2030. This implies that the rate of youth unemployment may further increase if this challenge is not resolved. Nnadozie (2018) stresses that African countries lack a permanent solution to this problem. In Kenya, the numerous unemployment rate statistics released by different agencies cause public debate. According to the UNDP annual report (2017), the youth unemployment rate stands at 26.2%. The Basic Labour Force Report (2018) by KNBS asserts that youth unemployment in Kenya is 11.4% for people aged between 15 and 34 years. The report further suggests that 86% of the unemployed population comprises people younger than 35 years and that the 15-34 years' youth cohort represents 56% of the working population in Kenya. The unemployment rate is high due to a lack of employable skills among the youth. The Skills Gap Analysis report by the government of Kenya in 2012 further indicates that the youth represent 75% of the total population, and out of this only 39% get employed while the rest do not find their way into the job market.
The majority of unemployed youth live in rural areas and, due to scarce resources, usually go to towns and cities to look for opportunities. Most of them end up in slum areas and are vulnerable to radicalization and recruitment into gangs (Oketch, 2017). According to KNBS (2019), Machakos County has a total population of 1.4 million people, of whom 34% (476,356) are youth aged 15 to 35. The KNBS report further asserts that only 27% of the residents of Machakos County have a secondary level of schooling and above. This implies that the majority of people in Machakos County lack employable skills and therefore face serious difficulty in securing a job. It is projected that 36% of the youth in Machakos County do not have jobs (KNBS, 2017). The unemployment level is very high in the rural areas due to reasons such as lack of education, lack of appropriate training, poverty, and discriminatory development projects. These factors also affect the sustainability of youth empowerment projects; other factors include mismanagement of funds and insufficient funding. The county government has also initiated various programs focusing on talent and sports development. The youth in Machakos County also engage in other non-agricultural activities such as carpentry, masonry, and working as boda boda operators. Despite the establishment of many youth empowerment projects, a high youth unemployment rate still lingers, due to some of the projects collapsing even before the end of their implementation period and others not being able to meet their set objectives. --- Statement of the Problem Regardless of the fact that the logic behind the establishment of youth empowerment programmes is to endow young people with employment skills, there are still intensified worries regarding the overall youth unemployment rate. Makanga (2016) asserts that even though a high number of youth empowerment programmes are being initiated nationally, youth unemployment still lingers. The main rationale for initiating youth empowerment projects is to instill skills in, and financially empower, the youth who involuntarily drop out of secondary or primary school. However, the majority of the youth do not get jobs (Thairu, 2018). According to UNDP (2017), 26% of the youth in Kenya are unemployed. The high unemployment rate among the youth begs the question as to why the majority of the youth are not securing wage or self-employment despite the presence of youth empowerment programmes across the country. It is evident that both developing and developed countries such as Germany, Malaysia, and Japan place emphasis on youth empowerment programmes. For instance, through youth empowerment, Germany's youth unemployment rate plummeted from 9% to 6.4% (Euler, 2018). Currently in Kenya, youth empowerment projects are overseen by the Ministry of Youth Affairs at the county level. Their main aim is to ensure that youth empowerment programmes are used as vehicles through which the youth attain financial ability and competitive skills for rewarding self- and/or wage employment. This also helps in reducing the number of youth migrating from rural to urban areas to look for jobs (Oketch, 2017). However, despite the important roles youth empowerment plays, and more so on unemployment, there is not much academic inquiry on the topic, nor are there studies seeking empirical evidence of the determinants of programme sustainability.
Nonetheless, for these endeavors to succeed, the youth empowerment projects must be sustainable so as to ensure they meet their cardinal objective, which is to enhance the employability of young people (Thairu, 2018). Therefore, the sustainability determinants of youth empowerment projects need to be looked at, along with how they impact the reduction of the unemployment rate among the youth. This study focuses on Machakos County because 36% of its youth are jobless (KNBS, 2017). The majority of youth in Machakos County also lack employable skills, as only 27% of the residents have a secondary education level. This is a paradoxical state of affairs, as Machakos County has 75 government- and donor-funded youth empowerment projects (County Project Directorate). Despite this high number of youth empowerment projects, 25% of them have stalled and 30% were never implemented (Auditor General Report, 2017). Few studies have been done on the topic; for example, Mugure (2019) looked at the effectiveness of socio-economic projects on youth empowerment, and Wohore (2016) looked at youth empowerment support services. However, these studies did not analyze the determinants of sustainability of youth empowerment projects. It is against this backdrop that the researcher sought to assess the determinants of sustainability for youth empowerment projects in Machakos County, Kenya. --- Stakeholder Theory A stakeholder refers to anyone involved and invested in, or affected by, an organization or project. Stakeholders include employees, customers, local communities, government, suppliers, and many more. This implies that stakeholders can be either inside or outside the project, because they are usually very interested in the project and its progress. Often, stakeholders sponsor a project and are very concerned with its successful completion. Stakeholders usually have the ability to influence everyone in the project, including the senior management, staff, customers, project leaders, and many more. This theory informs the study in establishing the correlation between stakeholder engagement and the sustainability of youth empowerment projects. The major strength of this theory is that it appreciates the advantages of stakeholder involvement in identifying needs and solutions regarding their problems. It is very applicable to this study as it clearly shows how stakeholder engagement results in greater project benefits; however, it does not clearly explain the level of stakeholder engagement that should be done to ensure project sustainability. --- Skills Theory The skills theory was established by Katz in 1955. The skills approach provides a structure for discerning leadership at an inherent level. There is a distinct difference between the skills theory and trait theory. According to Richardson (2018), the trait theory only focuses on the inborn capabilities of a leader to lead, whereas the skills approach concentrates on the skills that a leader can develop over a certain period of time. Another key difference between the two is that in skills theory it is believed that leaders can develop competencies, whereas in trait theory it is believed that leaders were born to lead and were born with the competencies required for effective leadership. The skills theory is very important in project administration because it underscores the necessity for project managers to possess the right leadership skills and the capability to help others execute their roles successfully, thus leading to successful implementation of projects (Richardson, 2018).
This theory informs the study in establishing the correlation between the possession of project management skills and the sustainability of youth empowerment projects. The strength of this theory is that it underscores the benefits that come when the project team is fully equipped with skills, although it is usually very weak in prognostic value as it fails to elucidate how a particular skillset can influence performance. --- Fund Accounting Theory In 1947, the economist William Joseph Vatter established the fund accounting theory. According to Vatter, fund accounting refers to an accounting system that emphasizes accountability rather than profitability (Moonitz, 2016). Coleman (2017) affirms that funding involves the provision of financial resources, such as money or other values such as time or effort, so as to finance a project, and this is usually done by individuals, companies, or organizations. On the other hand, Zeng (2017) asserts that a fund is an accounting entity with a self-balancing set of accounts capable of documenting cash usage, related liabilities, and cash balances for the specific activities being executed, all in accordance with the project or organizational regulations. The fund theory defines assets differently compared with other accounting theories: it regards assets as commodities obtained in order to multiply their service potential. Coleman (2017) asserts that fund accountability is very key because it results in improved relations with funders and also enhances financial security. Fund accountability also leads to improved performance, because all the activities are executed as per the set budget. In Kenya, if youth empowerment projects could execute their activities, purchase commodities, and pay for services in a transparent manner, they would be able to run for the planned implementation period and also achieve their set objectives. This would then enable more youth to gain employable skills, thus leading to lower rates of unemployment among the youth. This theory greatly informed this study in helping to comprehend project funding and how factors such as the source of funds, frequency of funding, and management of funds can influence project sustainability. --- Control Theory In the late 1960s, Walter Reckless and Travis Hirschi came up with the control theory. Control theory puts emphasis on control mechanisms which should be applied at all levels of an organization (Hirschi, 2017). According to Glad and Ljung (2018), there are different control mechanisms which organizations can use so as to ensure that the desired results are achieved. The control mechanisms include performance measurement mechanisms, organizational structure, and behavioral controls such as organizational policies and norms. According to control theory, the results achieved must be in line with the goals and objectives of the overall organization. Jagacinski and Flach (2018) assert that a project or organization can use any type of control system or even a combination of control systems. Selection of the control system can be influenced by the policies, structure, administrative information, or norms of the project or organization. Control theory plays a very crucial role in performance management through output evaluation, which assists in maintaining consistency with established parameters.
This theory greatly informs this study in establishing the correlation between project control systems and the sustainability of projects. This is because, in project management, a control system can be very beneficial in terms of time management, budget control, and scope management. A control system helps to identify any deviations, which can then be managed in time, thus enhancing the sustainability of projects. --- Pressure-State-Response Model The Pressure-State-Response (PSR) model is perhaps the most commonly used indicator framework (MfE, 2002). It was originally developed for environmental statistics in Canada, prior to the wider adoption of the concept of sustainable development (Pinter et al., 2017). PSR was adopted by the OECD in 1991 for use in environmental indicator reports, and has since been modified and developed in various ways to better account for other aspects of sustainability. The PSR model is presented in Figure 1. --- Figure 1: Pressure-State-Response Model of Project Sustainability The PSR model has been simplified to a five-step process (five indicator types) by Pinter et al. (2017) to include: human activity or natural stressor - there could be one of two types of stress: human activity such as economic, population, or industrial stress, or natural events such as earthquakes, floods, or droughts; pressure (or driving force) - there are various pressures that can be imposed on the environment, for example pollution of air, water, and land, release of hazardous wastes, loss of vegetation and biodiversity, and loss of soil (the term driving force allows for non-human stressors as well, such as natural events); state (or quality or condition) - this refers to the state or condition of the environment (economic, social, natural), as measured by indicators; socio-economic impact - this change in environmental quality or state has an impact on the social, cultural, and economic values of humans; and policy response - government agencies and the private sector can respond to changes in environmental quality by implementing policy or taking other actions (Segnestam, 2019). --- Empirical Review Stakeholder engagement is a key factor that can impact the sustainability of youth empowerment projects. A number of studies have pointed out that engaging stakeholders has a positive effect on the implementation of projects. Eric (2016) asserts that stakeholder involvement greatly determines whether a project is sustainable or not. According to Eskerod et al. (2016), failure to manage stakeholders' expectations can cause serious problems such as disagreements and lack of resources, which in turn can lead to project closure. Eskerod further reiterates that stakeholder engagement should be done throughout the entire project life cycle. Bourne (2016) puts emphasis on the need for proper communication between the project leadership and all the stakeholders. The study also highlights that stakeholder communication should be exercised through meetings and regular reporting. Through this, the project management can get to know the stakeholders' personal agendas, perceptions, requirements, concerns, and expectations, which can impact the outcome of the project.
Mok, Shen and Yang (2019) looked at the value of stakeholder engagement and management in the construction industry, where they argued that in construction it is crucial to have a supporting apparatus that not only assists in collaboration between parties but also ensures effective communication. It is also important to ensure that both contractors and project managers work closely together to manage stakeholders. Project management skills also determine whether a project is sustainable or not. According to Kearns et al. (2016), problem-solving skill is the ability not only to anticipate problems but also to provide solutions to those problems and to come up with mitigation strategies. A study done by Manazar (2017) indicates that project managers should have some soft leadership skills in order to properly manage a project, comprising good communication skills, coordination skills, interpersonal skills, and problem-solving skills. Northouse (2018) asserts that it is imperative for project managers to have interpersonal skills, because these enable them to motivate other staff. They also enable them to recognize the strengths and weaknesses of their team members and thus capitalize on them, which in turn enhances the success of the project. Brierre (2015) puts weight on the necessity for project managers to possess coordination skills. Coordination skills refer to the ability of the project manager to deal with issues outside and inside the organization and to develop cordial relationships with fellow team members. This enables the staff to work as a team to achieve set objectives. Numerous studies have revealed that the sustainability of youth empowerment projects is greatly influenced by the level of financial management. Some studies done in India have underscored the need for project managers and leaders to make other members understand financial records. To avoid skirmishes, distrust, and misunderstandings, it is also crucial to explain the financial records to members who are less educated. The managers also need to manage the funds well by accounting for every single coin and not misappropriating funds. If these issues are not well taken care of, the sustainability of projects is at stake (Swilling, 2016). According to Berechman (2018), all stakeholders should be part of every financial decision-making process so as to enhance the success of the project. Coleman (2017) asserts that the determinants of project sustainability are both internal and external factors, and puts emphasis on adequate financing, excellent financial management, and proper project planning as the key aspects that determine project sustainability. According to the European Regional Development Fund report (2015), lack of prudent financial management results in the failure of many projects due to embezzlement of funds. It is therefore crucial to have sound project financial management, which helps prevent friction with the project donor or stakeholders. This can work well by coming up with a well-structured set of rules and transparent reporting, which should be planned for even before the project begins. The financial management should supply the correct information to the donor and stakeholders when needed (Coleman, 2017). According to Kerzner (2017), project scope management plays a key role by ensuring all the planned project work is completed within the specified period of time and budget.
A study done by Richardson (2018) indicated that poor project scope definition and management largely affects the performance of construction projects. The study is in consonance with Heldman's affirmation that poor project scope management causes project delays and rework, thus resulting in poor-quality products (Heldman, 2018). Project scope should be well defined and managed regardless of the project size. Kerzner (2017) asserts that project scope management is very key as it influences other project aspects such as cost, time, and quality, which in turn strongly influence the performance of the project. For projects to avoid delays and cost overruns, it is also vital to determine what scope to outsource. This should only apply if the project management feels they cannot do the job within the specified period of time, or if they lack the knowledge or expertise to do it. Martens and Carvalho (2016) summarised the literature and established that project sustainability is very key as it ensures a project's impact continues far into the future. This implies that ensuring the sustainability of youth empowerment projects may result in more youths gaining new wage or self-employment. Martens and Carvalho (2016) further assert that good leadership needs to be in place in order for sustainability to be achieved. Effective and visionary leaders should plan for project sustainability and work closely with the community and various stakeholders towards achieving it. Mavi and Standing (2018) assert that some of the critical success factors of project sustainability include having partnerships and collaborations with other programs and government agencies. It is important for projects to establish connections with other projects during the early stages and to strengthen them throughout the entire project life cycle. Strong partnerships can be ensured by engaging those affected by the project, those interested in the project's objectives, and those that can contribute crucial resources and support. Swilling (2016) affirms that collaboration and partnerships can greatly assist in sustaining the efforts of a program. Mavi and Standing (2018) also insist that employing marketing skills and efficient communication to notify others about programme successes and goals can help create a base of support that can sustain a programme. --- METHODOLOGY The study employed a descriptive survey design to evaluate the determinants of sustainability of youth empowerment projects. The researcher targeted 75 youth empowerment projects. The study collected primary data by use of a structured questionnaire. Seven youth empowerment projects were subjected to a pilot study, and the results were used to enhance the data collection tools before the main study. After the data collection process, a rigorous check was done to ensure data completeness. The data was then carefully coded and entered using SPSS for analysis. A regression analysis was done to measure each variable's level of significance, and the variables were ranked by how strongly they affect the dependent variable. The multiple regression model for this study was as below: Y = β0 + β1X1 + β2X2 + β3X3 + β4X4 + ε, where: Y = sustainability of youth empowerment projects (dependent variable); β0 = constant coefficient of intercept; X1 = stakeholder engagement; X2 = project management skills; X3 = project funding; X4 = project scope management; β1…β4 = regression coefficients of the four variables; and ε = error term.
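The study estimated this model in SPSS. Purely as an illustrative sketch, the same specification can be reproduced in Python on simulated data (the column names, coefficients, and noise level below are assumptions for demonstration, not the study's data); the pairwise-correlation screen anticipates the 0.80 multicollinearity threshold applied in the findings:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 73  # number of youth empowerment projects surveyed

# Hypothetical composite Likert scores (1-5) per project; in the study
# these would be the averaged questionnaire items for each construct.
df = pd.DataFrame({
    "stakeholder_engagement": rng.uniform(1, 5, n),
    "management_skills":      rng.uniform(1, 5, n),
    "funding":                rng.uniform(1, 5, n),
    "scope_management":       rng.uniform(1, 5, n),
})
df["sustainability"] = (0.17 * df["stakeholder_engagement"]
                        + 0.30 * df["management_skills"]
                        + 0.58 * df["funding"]
                        + 0.81 * df["scope_management"]
                        + rng.normal(0, 0.5, n))  # illustrative noise term

# Collinearity screen: flag any pairwise |r| >= 0.80 among the predictors.
print(df.drop(columns="sustainability").corr().round(2))

# OLS estimation of Y = b0 + b1*X1 + b2*X2 + b3*X3 + b4*X4 + e
X = sm.add_constant(df.drop(columns="sustainability"))
fit = sm.OLS(df["sustainability"], X).fit()
print(fit.summary())  # coefficients, p-values, R-squared, F-test (ANOVA)
```

fit.summary() reports a coefficient table with p-values, R-squared, and the ANOVA F-statistic, analogous to the regression outputs discussed in the findings below.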
--- FINDINGS Descriptive Outcomes The study collected data highlighting the state of the various determinants of project sustainability within youth empowerment projects, as well as the practices adopted in regard to stakeholder engagement, project management skills, project funding, and project scope, all of which could help identify the determinants of project sustainability in youth empowerment programmes in Machakos County. This data was analysed using descriptive statistics, where frequencies, percentages, mean values, and standard deviations helped highlight the state of the assessed constructs. The outcomes of this descriptive analysis are presented in this section. --- Stakeholders Engagement Practices in Youth Empowerment Programmes A look at the state of stakeholder engagement within the studied youth empowerment programmes was undertaken with a view to bringing out the ratings of the various stakeholder engagement factors. The study required respondents to rate their level of application of stakeholder engagement practices on a five-point Likert scale where 1 represents 'very small level', 2 represents 'small level', 3 represents 'moderate level', 4 represents 'great level', and 5 represents 'very great level'. A general look at the youth empowerment projects revealed that they rated their application of various stakeholder engagement practices at a moderate level (mean 3.488), though a slight majority of projects apply the practices at high (29.3%) and very high (21.6%) levels. This shows that the youth empowerment projects do apply stakeholder engagement practices, with 'stakeholders' involvement in formulating annual project sustainability plans' (mean 3.712) being the most applied practice, followed by 'beneficiaries' involvement in needs and solutions identification regarding their problems' (mean 3.507). The other stakeholder engagement practices include: 'all stakeholders fully understand project implementation guidelines and during project commissioning the stakeholders are given all the guiding principles' (mean 3.438); 'stakeholders are involved in project identification, selection, planning and implementation' (mean 3.397); and, with the lowest rating, 'the project applies a participatory approach to ensure cost sharing of project activities' (mean 3.384). All these outcomes showed that although the majority of the youth empowerment projects applied the stakeholder management practices to a great extent, a significant proportion seldom applied these strategies. The outcomes are presented in Table 1.
--- Project Management Skills within Youth Empowerment Programmes The survey found that, in general, project management skills are observed at a moderate level (mean 3.463) among the studied youth empowerment projects (Table 2), with 36.5% of the respondents rating their project management skills at 'great level' (4), 16.0% at 'very great level' (5), 28.3% at 'moderate level' (3), 16.2% at 'small level' (2), and 4.5% at 'very small level' (1). The average ratings for the specific project management skills were observed to be very close to each other, with the most common skill being the one regarding 'project resources being managed appropriately' (mean 3.699), and the next highly rated skills being 'ensuring transparency in project procurement processes' (mean 3.493), 'possession of skills for auditing and budgeting' (mean 3.425), 'project team possesses conceptual, human and technical skills' (mean 3.397), 'the project team possess sufficient project management skills' (mean 3.384), and 'the project stakeholders were contented with management skills of the project staff' (mean 3.384). These ratings indicated that project management skills were neither low nor exceptionally high, except for a small proportion of the youth empowerment programmes. --- Level of Project Funding within Youth Empowerment Programmes A look at the level of project funding activities within the youth empowerment programmes revealed the ratings of the various project funding indicators, as highlighted in the outcomes presented in Table 3. The respondents were required to rate their level of observation of the various project funding indicators on a five-point Likert scale where 1 represents 'very small level', 2 represents 'small level', 3 represents 'moderate level', 4 represents 'great level', and 5 represents 'very great level'. It was observed that the majority of the studied youth empowerment programmes rated their funding status at 'great level' (4; 35.9%), with another significant proportion rating their funding status at 'very great level' (5; 21.1%). It was further observed that only 20.9% of the respondents rated their funding status at either 'very small level' (1) or 'small level' (2), with the rest of the respondents rating their funding at 'moderate level' (3; 24.1%). The average rating of funding within the YEPs in the study area was found to be 3.562, an indication that the overall rating for funding among the studied YEPs lay between the 'moderate' (3) and 'great' (4) levels. A look at the various factors informing the funding status within the studied projects revealed that the highest rated indicator was 'funds are received on a reliable frequency hence high chances for sustainability' (mean 4.00), followed by 'the project has adequate financial mechanisms to control project funds' (mean 3.712), 'stakeholders participate in resource allocation meetings for projects activities' (mean 3.397), 'there are adequate financing mechanisms in your project' (mean 3.370), and 'the beneficiary community commits resources to boost project continuity after closure' (mean 3.329). These ratings indicate that, except for one indicator rated on average at the 'great' level, the other indicators of funding are mostly rated at the 'moderate' level of observation or application among the studied YEPs.
The study further looked at the level of application of various project scope management practices within the studied YEPs, where the ratings of the various scope management practices were assessed, with the outcomes presented in Table 4. The respondents were required to rate their level of application of various scope management practices on a five-point Likert scale where 1 represents 'very small level', 2 represents 'small level', 3 represents 'moderate level', 4 represents 'great level', and 5 represents 'very great level'. The study found that the highest rated scope management practice within the studied YEPs was 'project scope is well defined and the project has a well-defined scope management plan' (mean 3.945), with the rest of the scope indicators observed to have very closely related scores: 'periodic scope changes made are well managed' (mean 3.480); 'there is efficient annual scope validation' (mean 3.438); 'the project deliverables are achieved within the specified period of time' (mean 3.384); and 'there are periodic reports generated and shared on the status of the project scope' (mean 3.945). The overall look at the scope management practices revealed an average rating of 3.512, indicating that the studied YEPs reported their scope management practices to lie between the 'moderate' (3) and 'great' (4) levels, with the proportion of YEPs that rated their scope management practices at 'great level' being the highest at 37.5%, followed by those who rated their scope management at 'moderate level' at 28.2%. To understand and inform the study's dependent variable, the study looked at the state of project sustainability within the studied YEPs, where the ratings of the various sustainability indicators were assessed, with the outcomes presented in Table 5. Respondents were required to rate the state of various sustainability indicators on a five-point Likert scale highlighting their level of agreement with statements representing the indicators, where 1 represents 'strongly disagree', 2 represents 'disagree', 3 represents 'neutral', 4 represents 'agree', and 5 represents 'strongly agree'. An assessment of the status of the various project sustainability indicators revealed that 'continuity of project benefits even after project closure' was the highest rated indicator (mean 3.507), with the majority of the respondents rating it at 'agree' (4; 43.8%) and a significant share at 'neutral' (3; 27.4%). The respondents rated 'project ownership by all the stakeholders' at a mean of 3.452 out of a possible maximum of 5 points, which is a relatively high rating, while the least rated sustainability indicator was having 'a functional project structure', with an average rating of 3.397. At an overall level, project sustainability of the YEPs revealed an average rating of 3.452, with the majority of the respondents rating project sustainability at 4 (31.1%) and 3 (30.1%). The study therefore confirms that the status of project sustainability within the studied YEPs can be said to be moderate, with only 49.8% of the respondents giving a high rating of their project sustainability status. --- Correlation between Study Variables The study assessed the relationship between the study variables, hence the need to undertake a correlation analysis. The correlation coefficients of the study variables are presented below.
The study observed that all the assessed relationships between the study variables were positive, with statistically significant correlation coefficients at the 95% confidence level. The relationships between project sustainability and stakeholder engagement (r = 0.632; p = 0.000), project management skills (r = 0.593; p = 0.000), project funding (r = 0.745; p = 0.000), and project scope management (r = 0.855; p = 0.000) were high, with project scope management showing the strongest relationship with project sustainability, followed by project funding and stakeholder engagement, while the smallest coefficient was recorded for project management skills. This confirmed that the independent variables (stakeholder engagement, project management skills, project funding, and project scope management) have a positive relationship with the dependent variable (project sustainability). The study further examined the correlations between pairs of the independent variables and found high and statistically significant correlations between stakeholder engagement and project scope management (r = 0.751); stakeholder engagement and project funding (r = 0.703); stakeholder engagement and project management skills (r = 0.743); project management skills and project funding (r = 0.725); project management skills and project scope management (r = 0.699); and project scope management and project funding (r = 0.740). Although these correlations are high, none of them signals a multicollinearity problem, since all fall below the 0.80 threshold offered by Saunders et al. (2016) as the correlation coefficient beyond which the problem should be flagged.
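The pairwise screening described above can be sketched as follows. The data frame, its values, and the column names are hypothetical; only the 0.80 flagging threshold comes from the text.

```python
import pandas as pd
from itertools import combinations

# Hypothetical per-respondent composite scores; the variable names mirror the
# study's constructs, but the values are invented for illustration.
df = pd.DataFrame({
    "stakeholder_engagement": [3.2, 4.1, 3.8, 2.9, 4.5, 3.6],
    "project_mgmt_skills":    [3.0, 4.0, 3.5, 3.1, 4.2, 3.4],
    "project_funding":        [3.5, 4.3, 3.9, 2.8, 4.6, 3.7],
    "scope_management":       [3.4, 4.2, 4.0, 3.0, 4.4, 3.5],
    "project_sustainability": [3.3, 4.4, 3.7, 2.7, 4.5, 3.6],
})

corr = df.corr(method="pearson")   # pairwise Pearson r for all variables
print(corr.round(3))

# Flag predictor pairs above the 0.80 threshold attributed to Saunders et al.
# (2016). With these toy values the flag may well fire; the study's actual
# inter-predictor coefficients all stayed below 0.80.
predictors = [c for c in df.columns if c != "project_sustainability"]
for a, b in combinations(predictors, 2):
    if abs(corr.loc[a, b]) > 0.80:
        print(f"Flag multicollinearity: {a} vs {b} (r = {corr.loc[a, b]:.3f})")
```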
--- Regression Analysis Outcomes The regression model summary consisted of the correlation coefficient (R), the coefficient of determination (R²), the adjusted coefficient of determination, and the standard error of the estimate. The correlation coefficient for the regression model was very high at 0.866, revealing a link between the independent study variables and the dependent variable. The coefficient of determination was also high (R² = 0.749), indicating that the four independent variables (stakeholder engagement, project management skills, project funding, and project scope management) explain 74.9% of the variability in project sustainability. A significant proportion of project sustainability is therefore determined by these four determinants. A further output of the OLS regression is the ANOVA, which compares the regression and residual sums of squares and mean squares and tests the statistical significance of the regression model. The ANOVA revealed that the relationship between project sustainability and stakeholder engagement, project management skills, project funding, and project scope management is statistically significant (p < 0.05) at the 95% confidence level, with considerably different regression and residual sums of squares and mean squares, confirming the existence of a relationship between the dependent and independent variables. This confirms that the regression model captures a statistically significant relationship, with the significant ANOVA leading to the rejection of the null hypothesis stated as: stakeholder engagement, project management skills, project funding, and project scope management have no influence on project sustainability (reject H₀ when p < 0.05). Further regression outcomes were offered in the regression model coefficients and their significance levels (Sig.). The output indicates the coefficient for each of the four sustainability determinants. The study found that all four determinants carry positive and statistically significant coefficients, confirming that they are significantly different from zero: 'stakeholder engagement' with a coefficient of 0.168 (p = 0.021), 'project management skills' with a coefficient of 0.295 (p = 0.012), 'project funding' with a coefficient of 0.580 (p = 0.028), and 'project scope management' with a coefficient of 0.807 (p = 0.000). However, the regression constant, -0.165 (p = 0.551), was found to be statistically non-significant, indicating that it is not significantly different from zero and makes a negligible contribution to the model, and hence it ought to be dropped. The regression model can therefore be written as:

Y = 0.168X₁ + 0.295X₂ + 0.580X₃ + 0.807X₄ + ε

PS = 0.168SE + 0.295PMS + 0.580PF + 0.807PSM + ε

(where PS is Project Sustainability, SE/X₁ is Stakeholder Engagement, PMS/X₂ is Project Management Skills, PF/X₃ is Project Funding, and PSM/X₄ is Project Scope Management)
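A minimal sketch of fitting such a four-predictor OLS model with statsmodels is given below. The data are simulated around the reported coefficients purely for illustration, not a reproduction of the study's data set; the output parallels the summary, ANOVA, and coefficient tables discussed above.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated data wired to the reported coefficient structure, for
# illustration only.
rng = np.random.default_rng(42)
n = 200
df = pd.DataFrame({
    "SE":  rng.normal(3.5, 0.8, n),   # stakeholder engagement
    "PMS": rng.normal(3.5, 0.8, n),   # project management skills
    "PF":  rng.normal(3.6, 0.8, n),   # project funding
    "PSM": rng.normal(3.5, 0.8, n),   # project scope management
})
df["PS"] = (0.168 * df["SE"] + 0.295 * df["PMS"]
            + 0.580 * df["PF"] + 0.807 * df["PSM"]
            + rng.normal(0, 0.5, n))  # project sustainability plus noise

model = smf.ols("PS ~ SE + PMS + PF + PSM", data=df).fit()
print(model.summary())  # coefficients with p-values, R-squared, adjusted R2
print(f"ANOVA F = {model.fvalue:.2f}, p = {model.f_pvalue:.4g}")
```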
The regression analysis therefore revealed that the four factors, stakeholder engagement, project management skills, project funding, and project scope management, have a direct and significant effect on the project sustainability of youth empowerment programmes. The study concluded that the youth empowerment programmes employ these determinants of project sustainability in their projects, albeit at a moderate level, which aligned with the level of project sustainability observed among the studied projects, also moderate. The four factors were employed within the YEPs at nearly similar ratings: the highest rated factor was project funding (mean 3.562), followed by project scope management (mean 3.512) and stakeholder engagement (mean 3.488), with project management skills rated lowest (mean 3.463). A close rating was realized for the state of project sustainability within the projects, with a mean of 3.452 on a 5-point Likert scale. The study further found that the four factors have a strong link with project sustainability, all showing high correlation coefficients, which can be ranked from highest to lowest as project scope management, project funding, stakeholder engagement, and project management skills. All four factors were confirmed to have an effect on project sustainability, together explaining a very high proportion (74.9%) of the variability in project sustainability. The study therefore concluded that the four factors are determinants of project sustainability within YEPs. The four factors, stakeholders' engagement, project management skills, project funding, and project scope management, were confirmed to have a positive and statistically significant influence on project sustainability, confirming that improvements in any of the four factors would lead to improvements in project sustainability. The study concluded that stakeholders' engagement, project management skills, project funding, and project scope management influence project sustainability and can therefore be considered among the determinants of project sustainability in YEPs. The study recommended that more applied research be done to assess the extent to which consideration of these factors can be effective in fostering project sustainability, so as to guide future YEPs on ways to apply these factors to maximize programme sustainability and thereby address the low level of project sustainability observed among YEPs in Kenya. The study observed that project stakeholder management, and especially engagement strategies, offer enormous space for creative, interesting, and effective solutions, which appear limited only by the cost, time, and other constraints under which projects operate. Both projects and their stakeholders can benefit immensely from well-chosen stakeholder management and engagement strategies; this is not only ethically desirable but, if pursued by projects systematically, wholeheartedly, and professionally, and sustained over time, can bring about the best possible overall outcome, a 'win-win' situation for both. The study recommended stakeholder engagement as being of tremendous practical significance for projects, which could undoubtedly benefit from improved sustainability levels in future. The role of the project manager in the realization of sustainability requires adequate competencies. The study found that the concept of project management competencies is vital to the project management profession, as well-developed standards for project management competencies are available from two of the world's leading professional organizations. The study concluded that projects rely upon project management competencies in implementing project sustainability measures in organizations. The study recommended that the observed project management competence gap be remedied by providing guidance for the addition of new competencies to the standards of project management competencies needed for implementing sustainability initiatives. Our analyses recommend further development of project management competencies for improved project sustainability. The study found that the level of project funding is a key limiting factor for the project sustainability of the YEPs.
The study therefore recommended that project managers and other high-level stakeholders within the YEPs look at improving the level of project funding and its application, as these have the potential to improve project sustainability as well as project performance. The study recommends that organizations invest more in research on current funding trends while implementing continuous financial-management skill development and infrastructure that supports this key aspect of project implementation, so as to ensure effective and successful implementation of youth empowerment projects in a more sustainable manner. This study revealed a significant impact of project scope management practices on project sustainability, with aspects such as project timeline, deliverables, and tasks emerging as key areas of consideration. The study confirmed that enhancing the application of project scope management practices can significantly improve project sustainability by enhancing continuity of project benefits, a functional project structure, and project ownership. The study recommended that social development organizations therefore make it mandatory for scope management practices to be employed in the implementation of all youth empowerment projects. --- Suggestions for Further Studies This study targeted the youth empowerment programmes within the geographical scope of Machakos County. The study therefore suggests further research testing the relationships captured here within a sample of diverse project management organizations, such as those managing projects in the agricultural sector, education sector, children's welfare, or environmental management, as well as within varying geographical set-ups or contexts, in order to further understand these outcomes in different situations and confirm the findings. The study also noted observations by past researchers indicating the possible existence of further determinants, an issue supported by the model's coefficient of determination, which shows that the model explains 74.9% of the variability in programme sustainability, hinting at the presence of other determinants. The study therefore observed the need to assess moderating and intervening variables in this model, which would not only enhance conceptual rigor but also deepen understanding of the relationship. The study suggested that future researchers create a more expanded model integrating mediating and moderating factors to further guide empirical work in less-studied contexts, so as to establish more determinants of project sustainability and eventually help in improving performance and impact.
The nonuse of family planning methods remains a major public health concern in low- and middle-income countries, especially due to its impact on unwanted pregnancy, the high rate of abortion, and the transmission of sexually transmitted diseases. Various demographic and socioeconomic factors have been reported to be associated with the nonuse of family planning methods. In the present study, we aimed to assess the influence of domestic violence (DV) on contraceptive use among ever married women in Nigeria. Methods: Data on 22,275 women aged between 15 and 49 years were collected from the most recent Nigeria Demographic and Health Survey, conducted in 2013. The outcome variable was contraceptive utilization status, and the main exposure variable was DV, which was assessed by the self-reported experience of physical and psychological abuse. A complex survey method was employed to account for the multistage design of the survey. Data analyses were performed using bivariate and multivariable techniques. The mean age of the participants was 31.33±8.26 years. More than four fifths (84%) of the participants reported that they were not using any contraceptive method at all. The lifetime prevalence of psychological and physical abuse was, respectively, 19.0% (95% CI =18.0-20.1) and 14.1% (95% CI =13.3-14.9). Women who reported physical abuse had 28% higher odds of not using any contraception (adjusted odds ratio [AOR] =1.275; 95% CI =1.030-1.578), and those who reported both physical and psychological abuse had 52% higher odds (AOR =1.520; 95% CI =1.132-2.042). The rate of contraception nonuse was considerably high and was found to be significantly associated with DV. Thus, the high prevalence of DV may compromise the effectiveness of family planning programs in the long run. Evidence-based intervention strategies should be developed to protect the health and reproductive rights of vulnerable women and to reduce DV by giving the issue wider recognition in public policy making.
Introduction Family planning programs constitute a crucial public health component in terms of offering the services and commodities, including contraceptives, that enable communities to achieve their reproductive goals, such as planned pregnancy, birth spacing, and maintaining the desired number of children, to name a few. In the low- and middle-income countries, which account for almost all maternal and infant mortality in the world, 1 optimum utilization of family planning services is regarded as a pivotal strategy for attaining the sustainable development goals. 2 Adoption of family planning/contraceptive methods has proven to be highly beneficial for tackling the burden of unintended and unwanted pregnancies and unsafe abortions and for preventing the transmission of HIV/AIDS and other sexually transmitted diseases (STDs). [3][4][5][6] Apart from decreasing the risk of exposure to high-risk births, unwanted pregnancies, and STDs, contraception is also regarded as a key to broader demographic, socioeconomic, and environmental goals, especially in countries facing sustainability challenges owing to high fertility rates. 7 The demography of Nigeria, currently the most populous country in Africa, is characterized by high total fertility and maternal and child mortality rates. Since the introduction of the first population policy in 1988, with the explicit aim of curbing the fertility rate (from six children per family to an average of four children per family), 8 there has been an increased emphasis on family planning programs throughout the country. However, according to recent statistics, the total fertility rate is still very high (5.5 births per woman as of 2013), with an alarmingly high prevalence of unsafe abortion and maternal mortality in the country. 9 Evidence suggests that over a quarter (25%-35%) of global maternal deaths could be averted through universal access to and adequate use of contraceptives. 10 Despite these persistent challenges and the well-documented benefits of contraceptive methods, the prevalence of contraceptive use remains considerably low in Nigeria, which is to a certain degree reflected in the high rates of unwanted pregnancy and maternal and child mortality. 11,12 There is a growing volume of studies on family planning demonstrating the influence of various individual, familial, geographic, sociocultural, and health care system-induced factors on the nonutilization and suboptimal utilization of contraceptive methods. Appreciable progress has been achieved in addressing the physical barriers to the uptake of contraceptive services. However, in many instances, more complex issues contextual to the native sociocultural structure, which are usually less responsive to intervention strategies, such as gender discrimination 13 and domestic violence 14 (DV; also known as intimate partner violence [IPV] or abuse), lie behind the direct physical factors. While both men and women can be the subject of DV, in most cases it is women who are victimized, and the scope of the present study was accordingly limited to male-to-female violence only. The definition of men-to-women violence can differ substantially depending on the context; however, the most widely accepted definition, by the World Health Organization, refers to "the range of sexually, psychologically and physically coercive acts used against adult and adolescent women by a current or former male partner." 15 Violence against women is a global issue that compromises their human rights, health, well-being, and quality of life.
No doubt the topic has emerged as a major theme in international conferences and attracted many policy and programmatic interventions. In spite of these efforts, DV is still rampant and continues to affect millions of lives worldwide, to which Nigeria is no exception. According to recent estimates, DV is a widespread phenomenon in the country, with its prevalence ranging from 17% to 78.8%. 16 The current medical literature provides ample evidence on the impacts of DV on women's reproductive health outcomes. [17][18][19] However, evidence on its relationship with family planning/contraceptive use is rather scarce in Nigeria. Therefore, the present study was conducted with the objectives of providing updates on the situation of contraceptive use and DV and their correlation among ever married women in Nigeria. Country-representative primary data on social issues are hard to obtain for countries with poor research infrastructure; therefore, we sourced already-published data from MEASURE DHS, which is available free to researchers. Although the data are secondary and cross-sectional in nature, it is expected that the findings of the present study will facilitate informed policy action aimed at addressing DV-related barriers to contraceptive use among Nigerian women. --- Conceptual framework DV is a complex social construct that can result from the interplay of individual- and community-level factors with the sociocultural norms and values and political priorities of the population. Regardless of the exact cause or origin, DV can pose serious obstacles to contraceptive use through several direct and indirect pathways. Direct pathways can include physical mutilation that results in a reduced ability to access the available services. Depression, poor self-efficacy and self-esteem, and impaired care-seeking behavior are among the indirect pathways. DV can also affect the extent to which a woman can exercise her role or decide her health care priorities. A healthy spousal relationship sets the basis for effective communication and understanding of each other's physical and psychological needs, as well as joint decision making on reproductive goals. From this viewpoint, DV can also cause (or result from) power imbalance, with a subsequent reduction in a woman's autonomy to effectively communicate her preferences. 20 To calculate the independent association between DV and contraceptive use, we adjusted the analysis for variables that are conceptually relevant to the variables of interest. For instance, the accessibility and utilization of contraceptive methods can be influenced by various sociodemographic factors such as age, area of residence, and religious affiliation. Contraceptive use, as an indicator of health literacy and behavior, can vary substantially among individuals depending on the degree of educational attainment as well as financial capacity. From the perspective of gender equity, the sex of the household head can play a crucial role in women's experience of DV, as can having decision-making autonomy over access to and utilization of reproductive health care services. --- Methods --- Survey and sampling design The Nigeria Demographic and Health Survey (NDHS) 2013 was the fourth round of the DHS survey in Nigeria, implemented by the National Population Commission with financial and technical assistance from ICF International, provisioned through the United States Agency for International Development-funded MEASURE DHS program.
DHS surveys are nationally representative surveys that collect information on a wide range of public health topics, such as anthropometric, demographic, and socioeconomic factors; family planning; and DV. The survey covered men and women aged between 15 and 49 years and under-5 children residing in noninstitutional settings. For sampling, a three-stage stratified cluster design was employed, based on a list of enumeration areas (EAs) from the 2006 Population Census of the Federal Republic of Nigeria. EAs are systematically selected units from the localities that constitute the local government areas (LGAs). LGAs are subdivisions of each of the 36 administrative states (and the Federal Capital Territory of Abuja) and are classified under six developmental zones in the country. EAs were used to form the survey clusters, called primary sampling units. NDHS 2013 consisted of 904 clusters (372 in urban areas and 532 in rural areas) encompassing a total of 40,320 households, from which 38,948 women were successfully interviewed, a response rate of 98%. Fieldwork lasted from February 15, 2013, to the end of May of the same year and was carried out by interviewing teams in each of the 36 states plus one in the Federal Capital Territory of Abuja. A more detailed description of the survey is published elsewhere. 21 --- Variables The outcome variable was self-reported contraceptive utilization status. Respondents were asked whether they were currently using contraception; the answer options were "yes" or "no," and those who answered "don't know" were also categorized as "no." The explanatory variable of focus was DV, which was assessed by responses to a set of questions on physical and psychological abuse. For psychological abuse, the following four aspects were taken into consideration: 1) ever been humiliated by husband/partner; 2) ever been threatened with harm by husband/partner; 3) ever been insulted or made to feel bad by husband/partner; and 4) experienced any emotional violence. The following three were used as proxies for physical abuse: 1) ever been pushed, shaken, or had something thrown at her by husband/partner; 2) ever been slapped by husband/partner; and 3) ever been punched with a fist or hit by something harmful by husband/partner. A set of confounding variables was also included in the analysis based on their relevance in light of previous studies: age, religious affiliation, type of residency, educational attainment, household wealth status, sex of household head, and having decision-making autonomy over one's own health care. Table 1 describes these variables. --- Analytical procedure As deemed necessary for complex survey designs, the data set was first converted to a plan file by adjusting for the sampling strata, primary sampling unit, and sampling weight. As the initial analysis, the basic sociodemographic characteristics of the participants were presented in terms of frequencies and percentages. Following the descriptive analysis, χ² tests were performed to check for significant associations between the explanatory variables and the use of contraception. Variables found to be significantly associated in the χ² tests (at p<0.25) were selected for the final regression analysis. 22 In the final step, a binary logistic regression model was used to calculate the odds ratios (ORs) of the associations between contraceptive use and the two types of DV. Contraceptive use status was modeled as a function of the two types of DV (physical and psychological), adjusting for the demographic and socioeconomic parameters found (based on a literature review) to be empirically and theoretically pertinent to the outcome and exposure variables. The results of the regression analysis are presented as ORs along with their 95% CIs, as indicators of significance as well as of the precision of the OR values. For all associations, a p-value of <0.05 was considered statistically significant. All analyses were performed with SPSS Version 24 (IBM Corporation, Armonk, NY, USA).
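The two-step procedure described above, χ² screening at p < 0.25 followed by binary logistic regression, can be sketched as follows. The variable names and data are hypothetical, and the sketch deliberately omits the complex-survey adjustment (strata, primary sampling units, and weights) that the actual SPSS analysis applied.

```python
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency
import statsmodels.formula.api as smf

# Illustrative data: a binary outcome (1 = not using contraception) and
# candidate binary covariates; names and values are hypothetical.
rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "nonuse":         rng.integers(0, 2, n),
    "physical_abuse": rng.integers(0, 2, n),
    "psych_abuse":    rng.integers(0, 2, n),
    "urban":          rng.integers(0, 2, n),
})

# Step 1: chi-square screening, retaining variables with p < 0.25.
selected = []
for var in ["physical_abuse", "psych_abuse", "urban"]:
    _, p, _, _ = chi2_contingency(pd.crosstab(df[var], df["nonuse"]))
    if p < 0.25:
        selected.append(var)

# Step 2: binary logistic regression on the retained variables.
formula = "nonuse ~ " + (" + ".join(selected) if selected else "1")
fit = smf.logit(formula, data=df).fit(disp=False)
print(np.exp(fit.params).round(3))  # exponentiated coefficients = odds ratios
```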
--- Ethical approval The protocol of the DHS surveys was approved by the Ethics Committee of ORC Macro (Macro International Inc.). The study was based on the analysis of anonymized secondary data available in the public domain of DHS; therefore, no additional approval was necessary. However, approval for the reuse of the data was obtained by the authors from DHS. --- Results --- Descriptive statistics In total, 22,275 ever married women aged between 15 and 49 years (mean age 31.33 years) were included in this study. Table 1 displays the basic demographic and socioeconomic characteristics of the participants. As Table 1 indicates, over one fifth (21.7%) of the participants were in the 25-29 years age group, about two thirds were rural residents, and about half were followers of the Christian faith (52.7%). The percentage of women with no formal education was 42.3%, and the primary- and secondary-level completion rates were, respectively, 20.9% and 28.5%. The rate of current employment was 71%. About two fifths of the women were living in poorer-poorest households and less than a fifth in the richest. The vast majority of the households were male-headed (86.3%), and only 6.6% of the women had the autonomy to decide on their own health care. --- Bivariate association between contraceptive utilization status and sociodemographic parameters The overall prevalence of contraceptive use among the participants was 16% (95% CI =15.2-17.0). χ² tests of independence were conducted to assess the bivariate relationships between contraceptive use and the sociodemographic factors. Table 2 indicates that the likelihood of using contraceptives was highest among women aged between 30 and 34 years, urban residents, followers of the Christian faith, women with a secondary school qualification, women not employed, women living in richer-richest and male-headed households, and women making health decisions jointly with their husbands/partners. Table 3 presents the prevalence of the individual types of physical and psychological abuse. The combined prevalence of physical abuse was 14.1% (95% CI =13.3-14.9), and that of psychological abuse was 19.0% (95% CI =18.0-20.1). --- Multivariable regression analysis Table 4 summarizes the results of the multivariable regression on the association between DV and contraceptive use. In total, five regression models were run: the first three were univariate (unadjusted), one was partially adjusted, and one was fully adjusted. Contraceptive utilization status was regressed first against psychological abuse, second against physical abuse, and then against both. All three exposures showed a significant association with the nonuse of contraceptives, with the OR being highest for those who reported experiencing physical abuse only (OR =2.086; 95% CI =1.842-2.363). Although psychological abuse showed a significant association in the univariate model, the significance was lost after partially adjusting for the other type of abuse, and also in the full model adjusting for all covariates. Importantly, the odds of nonuse in both the partially (OR =1.971; 95% CI =1.521-2.555) and the fully adjusted (OR =1.520; 95% CI =1.132-2.042) models were highest for those reporting both psychological and physical abuse.
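The reported ORs and their confidence intervals follow from exponentiating the logit coefficients. The sketch below back-solves the standard error from the published interval for physical abuse, so its inputs are illustrative reconstructions rather than values taken from the model output.

```python
import numpy as np

# An OR is the exponentiated logit coefficient; its 95% CI is
# exp(beta +/- 1.96 * SE). The SE here is back-solved from the reported
# AOR = 1.275 (95% CI = 1.030-1.578) for physical abuse, for illustration.
beta = np.log(1.275)
se = (np.log(1.578) - np.log(1.030)) / (2 * 1.96)

aor = np.exp(beta)
lo, hi = np.exp(beta - 1.96 * se), np.exp(beta + 1.96 * se)
print(f"AOR = {aor:.3f}, 95% CI = {lo:.3f}-{hi:.3f}")  # ~1.275, 1.030-1.578
```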
--- Discussion The present study was conducted with data derived from the NDHS. The aims were to provide an update on the pattern of contraceptive use among currently married women and to measure the association between contraceptive use and DV. Several important findings emerged from this analysis that merit special attention. The overall prevalence of contraceptive use among the study participants was strikingly low, with marked disparities across age and geographic and socioeconomic factors. Only 16% (95% CI =15.2-17.0) of the women reported using any contraceptive method, which is far below the global level of 63.3% (60.4-66.0%) and the African country average of 31% (as of 2010). 23 The rate is higher than those in the two previous NDHS surveys: 12.6% in 2003 and 14.6% in 2008. However, comparison with other similar economies, such as Kenya (42.1%) 24 and Ethiopia (29.2%), 25 reveals that Nigeria has a long way to go to catch up with its regional counterparts. This slow progress in the prevalence of contraception is particularly thought-provoking given the rising rate of female literacy and a fair level of knowledge regarding contraception in the population. 26 One plausible explanation is the presence of socioeconomic inequalities in accessing health care. Our results indicated that only 2.2% of the women in households with the poorest wealth status were using a contraceptive method, in contrast to 46.5% in the richest households. Similar findings were observed in countries across South Asia and sub-Saharan Africa, including Bangladesh, 1 India, 27 Ghana, 28 Kenya, 24 Ethiopia, 25 and Zimbabwe, 29 where striking gaps have been reported in the utilization of basic reproductive services between women in the richest and poorest wealth groups. Financial well-being at the individual and familial levels has been shown to be a direct determinant of the utilization of maternal health care and family planning services, even when the services were available free of cost. 30 Next to the direct socioeconomic determinants come the sexual and reproductive autonomy-related issues that usually arise from general power imbalance, marital discord, and spousal abuse. Women reporting lifetime experience of IPV are significantly less likely to be able to use contraception and maintain their fertility goals. 14 The relationship between IPV/DV and the use of family planning services is a hard one to clarify; however, the possible mechanisms appear to be the erosion of women's decision-making autonomy and self-esteem and poorer health care-seeking behavior. According to prior research, women in 8 of 19 countries in sub-Saharan Africa were more likely to use contraception when they had greater decision-making autonomy. 31 Lack of autonomy has also been shown to be associated with poor uptake of contraception and maternal health care services in Bangladesh 32 and India. 33 These findings are well in line with those of the present study.
Surprisingly, we did not find any significant influence of psychological abuse on contraceptive use; however, physical abuse and the combination of both demonstrated a strong association. A good number of studies have been conducted so far at the national and subnational levels, providing varying perspectives on the high fertility and the low use of maternal care and family planning services in Nigeria. The majority of the studies have argued from socioeconomic grounds, with only a few probing into culture- and gender empowerment-related issues such as ethnicity, religion, level of autonomy, and spousal abuse. 34,35 Thus, the current evidence base may be such that addressing the socioeconomic factors emerges as the most compelling priority to improve the coverage of family planning services. Even so, the findings of the present study imply the necessity of preventing and intervening against DV, which may potentially contribute to the achievement of national family planning and demographic targets. DV is a multifaceted problem that cuts across a host of sociocultural factors, which means that any attempt to mitigate it will require a multisectoral policy approach stressing issues including women's empowerment, promoting women's rights, making gender-friendly public policies, and, perhaps most important of all, raising public awareness. To facilitate this, further research needs to be carried out to better understand the contexts of spousal abuse and the pathways through which DV and women's ability to use family planning services interact. This study has several strengths and limitations. First, it modeled contraceptive use as a function of DV, unlike the majority of past studies that focused on socioeconomic and demographic factors. The quality of the survey was high, and the sample was representative of the population aged between 15 and 49 years. Data analysis followed a carefully selected set of procedures and accounted for the complex nature of the sampling strategy. The cross-sectional nature of the data, which precludes making any causal inferences, and the use of self-reported indicators instead of objective measurement, which increases the chance of reporting bias, are among the limitations. Nonetheless, the findings provide important insights for policy makers and researchers and thus invite more in-depth surveys in future. --- Conclusion The findings of the present study indicate a remarkably low prevalence of contraceptive use among Nigerian women. The rate of DV was equally distressing and showed a significant association with the adoption of contraception. In light of these findings, it is recommended that policy makers place special emphasis on developing strategies to protect women from any form of perpetration and to integrate gender issues into matters that concern women's reproductive health. --- Disclosure The authors report no conflicts of interest in this work.
According to a report by the World Economic Forum, the water crisis is the fourth most serious global risk to society. The apparent limitations of the hydraulic paradigm in solving this crisis are leading to a change in water management approaches. Recently, decentralized wastewater treatment systems have re-emerged as a partial solution to this problem. However, to implement these systems successfully, it is necessary not only to design the technology but also to have social support and willingness among citizens to use it. Previous studies have shown that these technologies are often perceived as being too costly, and people often do not consider the need to adopt them. However, it has also been pointed out that framing these technologies as a sustainable endeavor to reduce human impact on the environment can help to overcome the barriers to usage. Thus, we test whether priming environmental concerns before presenting information about decentralized wastewater treatment plants increases acceptance of those technologies, even when disadvantages of the technology are also presented. In order to do so, we designed an experimental study with a sample of 287 people (85.7% women, M age = 20.28). The experimental design was 2 (priming environmental concern vs. no priming) × 2 (type of information: only advantages vs. advantages and disadvantages). The results showed that those in the environmental concern priming condition had more positive attitudes and behavioral intentions toward decentralized wastewater treatment plants than those in the control condition. Participants who received only information about advantages had a more positive perception of the decentralized wastewater systems than those in the condition where disadvantages were also presented, but in the priming condition this difference was not significant. This implies that priming environmental concern helps to overcome the possible disadvantages that act as barriers to acceptance.
INTRODUCTION According to the World Economic Forum (2020), the water crisis is one of the top five global problems. The water crisis relates both to the scarcity of this resource and to its quality, due to pollution and eutrophication (Ganoulis, 2009; World Water Assessment Programme, 2020). Solving this crisis depends partly on changing people's behavior. Various campaigns have tried to reduce water consumption and make the population aware of the limited nature of this resource (Syme et al., 2000; Katz et al., 2016). However, the demand for water continues to increase. For this reason, the United Nations warns that there is an urgent need to address the crucial challenges caused by water stress, since current water management is failing to respond to this problem (Cosgrove and Loucks, 2015; Seemha and Ganesapillai, 2017). An alternative approach to addressing this crisis is to use technologies that facilitate the reuse of water (Fielding et al., 2018) and better use of the nutrients in wastewater, thus preventing untreated waste from causing the deterioration of freshwater resources (Lam et al., 2020). One such technology is decentralized wastewater treatment plants. This technology challenges the current approach of disposing of waste far from home; it involves local treatment of wastewater (in buildings, neighborhoods, or small communities), favoring the local recovery of water and nutrients for new uses and thus promoting the circular economy (Lens et al., 2005; Roefs et al., 2017). Nevertheless, despite the advantages of decentralized plants, they can also entail installation, maintenance, and location-based costs (Mankad and Tapsuwan, 2011). Therefore, people may be reluctant to install this type of technology unless its advantages over the current centralized system are apparent. In other words, the traditional resistance to change (Petty et al., 2003) could be present in this case. Some may have a reactive response to a technology that is unfamiliar, externally imposed, and whose implications may be unclear from their perspective. This is especially prevalent in places where water issues, and environmental sustainability more generally, are not perceived as a problem (Gómez-Román et al., 2020). Given this situation, providing information to citizens can improve acceptance of these technologies (Mankad and Tapsuwan, 2011; Mankad, 2012). However, what kind of information will have the most impact on social acceptance? To answer this question, there are two important things to consider. On the one hand, what is the level of concern about the issue that this technology aims to solve? On the other hand, what kind of information should be offered to the public about the new technology? --- Environmental Concern Decentralized plants serve as an alternative solution to an environmental problem: water stress. Therefore, a necessary, although not sufficient, condition for the acceptance of that technology is the existence of some public awareness or concern about environmental issues. If the public does not feel environmental issues are a problem, strategies to solve the problem of water stress will receive little or no support. In recent years, concern about environmental issues has grown significantly (Liu et al., 2014; Currie and Choma, 2018; Lewis et al., 2019).
Environmental policymaking has become part of the agenda of nearly all political bodies around the world (Krosnick et al., 2006; Fairbrother, 2017), and it is also a subject on which there is broad social consensus (Steg and Vlek, 2009; Eurobarometer, 2019). All of this favors the acceptance of environmental sustainability and circular economy policies. Studies on the perception of environmental risk clearly show that concern for the environment is one of the antecedents of pro-environmental attitudes and behaviors (O'Connor et al., 1999, 2002; Heath and Gifford, 2006; Hidalgo and Pisano, 2010). In accordance with the above, the activation and accessibility of the environmental issue, insofar as it evokes the problems in this area, could translate into attitudes, emotions, and behaviors more favorable to decentralized plants; that is, into greater acceptance of this technology. This leads us to the concept of priming. Studies on priming analyze how exposure to prior information affects a subsequent decision or behavior (Jonas and Sassenberg, 2006; Custers and Aarts, 2010). Accordingly, making environmental concern accessible, or priming it, could make information that already exists in memory, and its associated processes (the environmental problem), more accessible, so that it becomes salient in subsequent decision making (Kay et al., 2004; Scheufele and Tewksbury, 2007), in this case making people more favorable to accepting decentralized plants. Nevertheless, in addition to concern for the environment, which in this case would be activated through priming, there are other possible factors involved in the acceptance of decentralized wastewater treatment plants. Among them, the cost-benefit calculation is important; here, it includes not only economic issues but also elements such as loss of comfort and aspects related to technology maintenance (Mankad and Tapsuwan, 2011). --- Information About Technology: Focusing Only on the Positive? As discussed previously, providing the population with information assists in overcoming barriers to acceptance (Mankad and Tapsuwan, 2011; Fielding et al., 2018), especially in places where public opinion has not yet formed an impression about the technology (Jacoby, 2000). However, when presenting information to the public, one must take into account that several elements may influence, to a greater or lesser extent, the effect that information may have. One of these factors relates to the unilateral or bilateral nature of the arguments presented to the public. The former consists of expressing only the advantages and positive aspects, while the latter also includes weak or negative aspects of a technology. There is mixed evidence on the efficacy of presenting unilateral versus bilateral arguments (Allen, 1991). The effectiveness of including disadvantages in persuasive messaging is not entirely clear, especially when the public does not yet have an elaborated opinion on the subject under study (Rosenberg, 2001). Presenting positive messages while also discussing some disadvantages or less positive elements improves source credibility, and the public may have more confidence in the veracity of such messaging (Crowley and Hoyer, 1994; Schlosser, 2011). Nevertheless, the persuasiveness of messaging will also depend on whether the disadvantages that are presented (and refuted) are relevant to the people receiving the message (O'Keefe, 1990). The effect may also differ depending on the recipient of the message.
One-sided (unilateral) messages appear to be more effective when the audience is initially in favor of the message's content (Petty and Cacioppo, 1986). However, if the recipients are well informed, two-sided (bilateral) messages are more effective. --- Study Aims and Hypothesis This exploratory study analyzes the influence that environmental concern priming and different types of messages (unilateral vs. bilateral) about decentralized plants have on the social acceptance of this sustainable technology. Our hypotheses are as follows: H1: The activation of environmental concern priming will favor the acceptance of decentralized wastewater plants. H2: Public perception of decentralized plants will be more positive when the information presented relates only to the plants' advantages. H3: An interaction between priming and information will occur. When environmental concern priming is not activated, public perception of the plants will be more negative when the disadvantages are discussed (as opposed to the condition where only advantages are discussed). However, there will be no significant difference in technology acceptance between participants when environmental concern priming is activated, irrespective of whether only advantages or both advantages and disadvantages are presented. --- MATERIALS AND METHODS --- Participants and Design Using G*Power software (v 3.1.9.4), a power analysis was conducted to calculate the ideal sample size for this study (Faul et al., 2009). In order to detect an effect size f²(V) = 0.06 with 95% power (alpha = 0.05), G*Power suggests 208 participants would be needed for a MANOVA. The aim was to recruit a group of participants larger than the ideal sample size in anticipation of potential missing responses or deficient data. Those who responded too quickly were automatically screened out. The sampling procedure resulted in a final sample of 287 students from the faculties of Psychology and Education at the University of Santiago de Compostela (Spain; 85.7% women, M age = 20.28, SD = 2.19). The experimental design was 2 (priming environmental concern vs. no priming) × 2 (type of information: only advantages vs. advantages and disadvantages). All data and materials used in this research are publicly accessible at osf.io/97v45. No studies in this manuscript were preregistered.
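The G*Power computation can be roughly approximated in Python, with the caveat that statsmodels provides power only for a fixed-effects one-way ANOVA, not for G*Power's MANOVA global-effects test, so the resulting N is an analogue of, not a substitute for, the reported 208.

```python
import numpy as np
from statsmodels.stats.power import FTestAnovaPower

# Approximate sample-size check for the four cells of the 2 x 2 design.
# G*Power's MANOVA (global effects) module works with f^2(V); here we
# convert to Cohen's f and use one-way ANOVA power as a rough analogue.
f_squared = 0.06
cohens_f = np.sqrt(f_squared)      # f from f-squared, about 0.245

n_total = FTestAnovaPower().solve_power(
    effect_size=cohens_f,
    alpha=0.05,
    power=0.95,
    k_groups=4,                    # 2 (priming) x 2 (information) cells
)
print(f"Approximate total N: {int(np.ceil(n_total))}")
```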
--- Procedure Students were asked to participate in a research project taking place at the University. To participate, they were required to fill out a questionnaire using the Qualtrics platform on their mobile device or laptop. Participants were randomly assigned to the experimental conditions and took an average of 13 min to complete the task. The questionnaire consists of five parts: an introduction, a priming section (with two conditions), an information section (with two conditions), a section with a series of questions about acceptance of the decentralized plant, and a final section relating to sociodemographic information and debriefing. The introduction acknowledges participants' contribution and asks them to be honest in their responses; it also states that the Bioethics Committee of the University of Santiago de Compostela approved the study, guaranteeing anonymity and data protection. Participants could interrupt or abandon their participation at any time if they wished. Before completing the questionnaire, students were required to provide informed consent to participate in the research. Once the students accepted, the program randomly assigned them to the different experimental conditions. First, participants were told that they would be asked a few questions from another ongoing research project, making additional use of their involvement. This opening allowed the priming information to be presented before the information about the decentralized plants. --- Priming Conditions Participants were randomly assigned to one of two conditions: the environmental concern priming group or the control group. In the environmental concern priming group, participants were required to consider environmental problems before being presented with information about the decentralized plant. To do this, they answered two questions. First, they were asked to rank a series of environmental challenges by importance: climate change, water scarcity, air pollution, water pollution, deforestation, soil degradation, energy consumption, and waste. Next, they had to rate the importance of those environmental issues from 1 (none) to 9 (a lot). In the control condition, to keep the participants as cognitively active as those in the experimental group, participants were required to order a series of musical styles by affinity. They were then asked to indicate how much they liked each of these musical styles, from 1 (none) to 9 (a lot). --- Information Conditions In the next section, participants were shown a message that thanked them for their participation in the other investigation and informed them that they would now answer the questions for the second study. They were required to imagine that their faculty was developing a project to install a plant to treat wastewater in the faculty's basement, and they were given an explanation of the plant's functioning. Then, participants received a set of randomized information: half read information presenting the plant's advantages, while the other half read about both the advantages and the possible disadvantages of the plant (see the Annex for the complete information). Next, all participants answered the substantive questions, whose purpose was to determine whether acceptance of a decentralized wastewater treatment plant differed among the participants after random exposure to the priming and the information presented. To finalize the questionnaire, participants answered one block of sociodemographic questions. They were then shown a goodbye message, again thanking them for their participation. Participants read that this was a hypothetical situation for research purposes; their faculty would not install a decentralized wastewater treatment plant. Participants could provide their email to obtain a report with the results of the investigation, and they were also given a contact email in case they wanted to report, solve, or discuss any issue regarding the research project. --- Measures We used several types of measures to determine the level of acceptance of decentralized plants: attitudes, strength of attitudes, emotions, and behavioral intention.
--- Attitudes Toward Decentralized Plants Participants answered on a 9-point semantic differential scale (1 = nothing and 9 = a lot) to what extent they thought that the faculty's decentralized plant project was: "very bad-very good," "I do not like it at all-I like it very much," "very negative-very positive," "very unnecessary-very necessary," "very useless-very useful," "very unacceptable-very acceptable," "very inappropriate-very appropriate," and "extremely harmful-extremely beneficial" (α = 0.91). --- Strength of Attitudes Participants assessed their opinions about the installation of the plant in the faculty. On a 9-point semantic differential scale (1 = nothing and 9 = a lot), they answered additional questions about their previous answers, including how convinced they were about their opinions, how confident they were in their answers, the relevance of their answers, and how easily they would change their opinion in a discussion (α = 0.87). --- Emotions Participants reported to what extent thinking about the installation of the plant in the faculty made them feel a number of emotions (1 = nothing and 9 = a lot): worried, disgusted, angry, fearful, helpless (negative emotions, α = 0.78); relieved, proud, optimistic, enthusiastic, and comfortable (positive emotions, α = 0.84). --- Behavioral Intention Participants indicated their degree of agreement (1 = no agreement and 9 = totally agree) with the following statements: they would support the installation of the plant in the faculty, they would campaign in favor of the installation of the plant in the faculty, they would recommend that these plants be installed in other buildings of the University and the city, and they would install a plant in their own building or house (α = 0.88). --- Priming Control In order to draw valid conclusions, participants should not identify the connection between the priming task and the subsequent task (Bargh, 2006). In this study, participants answered an open question asking what they believed the objective of the research was.
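The internal-consistency values reported for these scales (e.g., α = 0.91 for attitudes) can be computed with the standard Cronbach's alpha formula. The responses below are simulated for illustration; only the scale structure (eight 9-point items) mirrors the attitudes measure.

```python
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """alpha = k/(k-1) * (1 - sum(item variances) / variance of total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Hypothetical responses: a common person-level base plus item noise, so the
# eight items intercorrelate the way a coherent attitude scale would.
rng = np.random.default_rng(1)
base = rng.integers(3, 8, size=(150, 1))
items = pd.DataFrame(
    np.clip(base + rng.integers(-1, 2, size=(150, 8)), 1, 9),
    columns=[f"attitude_item_{i}" for i in range(1, 9)],
)
print(f"alpha = {cronbach_alpha(items):.2f}")
```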
--- RESULTS In the open response question, none of the participants identified the relationship between the two tasks. Participants referred to "assessing/checking the degree of acceptance of the technology presented," "how people perceive a new technology after presenting information about it," and "assessing opinions that can be controversial anonymously." No one referred to the effect that the questionnaire's first task had on the second part, demonstrating that they were not aware of the priming task. After verifying that the participants were not aware of the experiment's manipulation, we analyzed the effect that environmental concern priming had on the acceptance of decentralized plants. Specifically, we considered how the inclusion or exclusion of information about the plant's disadvantages influenced the participants' perceptions. Table 1 shows the MANOVA results for each of the variables under study in each of the conditions. As can be seen in Table 1, the effect of priming is significant: having participants think about environmental issues before being presented with the information about decentralized plants affected their level of acceptance. Thus, the participants who received the environmental concern priming obtained significantly higher scores than the control group on attitudes (F = 8.10, p = 0.005, η² = 0.028), strength of attitudes (F = 9.97, p = 0.002, η² = 0.034), behavioral intention (F = 6.32, p = 0.013, η² = 0.022), and positive emotions (F = 8.14, p = 0.005, η² = 0.028). There were no significant differences between the control group and the experimental group regarding negative emotions (F = 0.73, p = 0.394, η² = 0.003). Regarding the informative content of the message, presenting the advantages and disadvantages of the plant produced attitudes that were significantly more negative than those of participants who only received information about the advantages (F = 7.27, p = 0.007, η² = 0.025). Those who read information about disadvantages also experienced slightly stronger negative emotions than those who read only about advantages (F = 5.33, p = 0.022, η² = 0.019). However, reporting advantages and disadvantages did not create significant differences in strength of attitudes (F = 0.29, p = 0.589, η² = 0.001), behavioral intention (F = 3.53, p = 0.061, η² = 0.013), or positive emotions (F = 0.44, p = 0.501, η² = 0.002). The interaction of the two conditions (i.e., the priming task and the type of information) was not significant for any of the variables under study.
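Statistics of the form F, p, and η² reported above come from a 2 × 2 factorial analysis. As a rough per-outcome analogue of the MANOVA (the study analyzed several outcomes jointly), the sketch below runs a factorial ANOVA with the priming × information interaction on simulated data and derives partial eta squared from the sums of squares.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Illustrative 2 x 2 data: priming (0 = control, 1 = primed) by information
# (0 = advantages only, 1 = advantages + disadvantages); the cell means are
# invented, not the study's.
rng = np.random.default_rng(7)
rows = []
for priming in (0, 1):
    for info in (0, 1):
        mu = 5.5 + 0.4 * priming - 0.3 * info
        rows.append(pd.DataFrame({
            "priming": priming,
            "info": info,
            "attitude": rng.normal(mu, 1.2, 72),   # ~72 participants per cell
        }))
df = pd.concat(rows, ignore_index=True)

# Factorial ANOVA with the priming x information interaction.
model = smf.ols("attitude ~ C(priming) * C(info)", data=df).fit()
table = anova_lm(model, typ=2)                     # Type II sums of squares

# Partial eta squared per effect: SS_effect / (SS_effect + SS_residual).
ss_resid = table.loc["Residual", "sum_sq"]
table["partial_eta_sq"] = table["sum_sq"] / (table["sum_sq"] + ss_resid)
print(table.round(3))
```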
That being said, the difference was not significant compared to those who received information about both the advantages and disadvantages. Even though the priming and information interaction was not significant, acceptance was more favorable even where the disadvantages were presented, so long as participants were primed through questions about environmental concern. As expected, environmental concern priming reduced differences in acceptance between those who received information about advantages only and those who received information about both the advantages and disadvantages. Therefore, activating environmental concern improved participants' perception of information about decentralized wastewater treatment plants, even when the technology's disadvantages were explicitly presented. Because this is an exploratory study, these results need to be interpreted cautiously; they are only an initial approximation. First, the study relied on a sample of university students to test this exploratory hypothesis, so the results need to be replicated in a more general population sample and in different contexts. Second, the effect sizes were relatively small, so future studies should replicate these findings before making stronger statements about the trend identified in this exploratory study. Although the effect sizes found were modest, this research provides encouraging evidence for environmental concern priming as an explanatory mechanism on which to base communication campaigns and to catalyze other research studies with larger samples. The purpose of this research also implies examining public opinion about an issue that is not yet up for debate on the public agenda. It is therefore necessary to consider carefully how the information is presented, even in the control condition. The first impression of a new topic establishes the framework from which one will process the rest of the information on that issue (Wilson et al., 1989, 2000), so caution is needed when launching a broader population study. Moreover, when doing so, researchers must use "debriefing" strategies to avoid unleashing a social debate on a topic that is not yet on the public agenda. The results of this study indicate that, although the unilateral condition pertaining to advantages with environmental priming shows the most promising results, there were no significant differences between the unilateral and bilateral information scores under environmental concern priming. In both cases, results were very positive. One can thus conclude that environmental concern priming is a necessary element in improving social acceptance of decentralized wastewater treatment plants. However, it is less clear whether the type of arguments presented (i.e., only advantages or also the disadvantages of the technology) plays a role. Perhaps the key question is not the type of arguments that are presented, but who provides the arguments (depending on the trust or credibility given to the source). Alternatively, audience characteristics may be critical. Therefore, these questions should be explored (and even combined) to determine which elements are critical when encouraging acceptance of these technologies.
This study is a first step toward demonstrating experimentally that acceptance of decentralized wastewater treatment plants depends not only on reporting the qualities of this technology but also on providing the information within the context of global environmental problems.
--- DATA AVAILABILITY STATEMENT
All data and materials used in this research are publicly accessible at osf.io/97v45.
--- ETHICS STATEMENT
The studies involving human participants were reviewed and approved by the Bioethics Committee of the University of Santiago de Compostela. The participants provided their written informed consent to participate in this study.
--- AUTHOR CONTRIBUTIONS
J-MS: funding. CG-R, J-MS, and MA: conceptualization and methodology. CG-R and J-MS: writing original draft, review, and editing. MA and BM: writing - review and editing. All authors contributed to the article and approved the submitted version.
--- Conflict of Interest: The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Publisher's Note: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
--- ANNEX
--- Information about decentralized wastewater treatment plants (advantages and disadvantages)
Imagine that the Faculty has a project to install a plant to treat the wastewater in the Faculty's basement. Until now, the wastewater from the Faculty and the rest of the buildings and houses in Santiago has been channeled through the sewerage networks to the centralized treatment plant in Silvouta, just over 6 km from the city center. The Faculty is proposing treating the wastewater in situ, in the building. That is, the different types of water generated in the Faculty, gray (from the sinks) and black (from the toilets), would be collected separately and, once treated and purified on the basement floor, reused for purposes such as filling cisterns or watering green areas. That would save a large amount of drinking water. Each time a cistern is flushed, 8-10 liters of drinking water are used. Considering the number of people who work/study at the Faculty, that would mean saving about 19,200 liters of drinking water every day. Another advantage is that this plant would recover the phosphorus in the wastewater and use it as fertilizer. Phosphorus is a rare mineral, which is why it has become a strategic priority for food production. Nevertheless, the plant also has some drawbacks. One of them is that every now and then, due to failure, it can produce unpleasant odors. Its installation would also entail a significant economic cost that the Faculty would have to bear: building a new pipe system to separate the gray water from the black, building the plant itself, and maintaining it.
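As a quick sanity check on the annex's arithmetic (a rough sketch; the per-flush volume is taken as the midpoint of the stated 8-10 liter range, which is an assumption):

```python
# Rough check of the annex's daily water-savings figure.
liters_per_flush = 9        # assumed midpoint of the stated 8-10 L range
daily_savings_l = 19_200    # liters/day claimed in the annex
flushes_per_day = daily_savings_l / liters_per_flush
print(f"Implied flushes per day across the Faculty: ~{flushes_per_day:.0f}")  # ~2133
```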
In the context of Covid-19 government-ordered lockdowns, more individualistic people might be less willing to leave their homes to protect their own health, or they might be more willing to go out to relieve their boredom. Using an Australian sample, a pilot study found that people's lay theories were consistent with the latter possibility, that individualism would be associated with a greater willingness to violate lockdown orders. Using a longitudinal dataset containing location records of about 18 million smartphones across the US, Study 1 found that people in more individualistic states were less likely to comply with social distancing rules following lockdown orders. Additional analyses replicated this finding with reference to counties' residential mobility, which is associated with increased individualism. In a longitudinal dataset containing mobility data across 79 countries and regions, Study 2 found that people in more individualistic countries and regions were also less likely to follow social distancing rules. Pre-registered Study 3 replicated these findings at the individual level: people scoring higher on an individualism scale indicated that they had violated social distancing rules more often during the Covid-19 pandemic. Study 4 found that the effect of individualism on violating social distancing rules was mediated by people's selfishness and boredom. Overall, our findings document a cultural antecedent of individuals' socially responsible behavior during a pandemic, and suggest an additional explanation for why the Covid-19 pandemic has been much harder to contain in some parts of the world than in others.
Cultural Antecedents of Virus Transmission: Individualism Is Associated with Lower Compliance with Social Distancing Rules During the Covid-19 Pandemic
One strategy that governments use to fight pandemics is to institute lockdowns, that is, to order all residents working in non-essential jobs to stay in their homes at all times, allowing only residents working in essential jobs involving food and medication to work onsite. Many governments around the world instituted some form of lockdown to mitigate the Covid-19 pandemic. For example, China instituted a two-month lockdown in Wuhan, the first known city to suffer from the pandemic. The spread of the virus was successfully curbed in this period, probably because a significant proportion of people followed lockdown orders (Kupferschmidt & Cohen, 2020). In contrast, even after a lockdown was instituted in northern Italy for two months, the region continued seeing new cases and deaths (McCann et al., 2020), possibly because a significant proportion of people violated lockdown orders (Tondo, 2020). Similarly, in the New York City metropolitan region, which was heavily affected in the early stages of the pandemic in the US, cases continued increasing despite a lockdown, likely because a significant proportion of people did not follow lockdown orders (Kenton, 2020). In this research, we ask why compliance with lockdowns varies across countries and regions.
--- Antecedents of Compliance with Social Distancing Rules
Research has identified several factors that underlie people's tendency to follow social distancing rules. With reference to personality traits, people higher on conscientiousness, agreeableness, and openness to experience were more likely to follow social distancing rules (Götz, Gvirtz, et al., 2021; Peters et al., 2020). The findings about conscientiousness and agreeableness would be expected based on definitions of these traits. However, the finding about openness is surprising: one could have expected that people higher on openness would be more willing to violate lockdown orders to explore their altered social surroundings. With reference to cognitive traits, people with higher working memory capacity were more likely to comply with social distancing rules because they were better able to do a cost-benefit analysis and to conclude that the benefits of social distancing outweigh its costs (Xie et al., 2020). In addition to these individual-level factors, there are also macro-level factors that influence people's compliance with social distancing rules. For example, with reference to socio-demographic characteristics, residents of counties with higher income and residents of states with higher educational attainment were more likely to follow social distancing rules (Im et al., 2020; Weill et al., 2020). This finding is consistent with the idea that people in richer and more educated regions have more choice in their everyday lives (Snibbe & Markus, 2005; Stephens et al., 2007), and thus are better able to alter their behavior in response to lockdown orders. Moreover, compared to people in states that voted for the Republican presidential candidate in the most recent election, residents of states that voted for the Democratic presidential candidate were more likely to follow social distancing rules (Im et al., 2020), possibly because the Republican President Donald Trump personally downplayed the severity of the pandemic.
--- The Predictive Role of Individualism-Collectivism
In this research, we focus on cultural antecedents of compliance with Covid-19 social distancing rules. This work builds on prior research showing that cultural values can shape the spread of infectious diseases (Borg, 2014; Gaygısız et al., 2017). Most of this research has focused on individualism-collectivism, which is one of the most studied values in cross-cultural psychology (Hofstede, 1980; Triandis, 1972). Specifically, "individualists give priority to personal goals over the goals of collectives; collectivists … subordinate their personal goals to the collective goals" (Triandis, 1989, p. 509). A related distinction focuses on individuals' relationships with specific others rather than with groups: people with a more independent self-construal emphasize expressing themselves and influencing others, whereas those with a more interdependent self-construal emphasize attending to others' preferences and needs (Markus & Kitayama, 1991). In independent contexts, actions are supposed to be "freely chosen contingent on one's own preferences, goals, intentions, motives," whereas, in interdependent contexts, actions are supposed to be "responsive to obligations and expectations of others, roles, situations" (Markus & Kitayama, 2003, p. 7). The differing construals of the self and agency associated with individualism-collectivism are related to a number of psychological and behavioral outcomes. For example, whereas people from more independent or individualistic cultures tend to view themselves from a first-person perspective (they view the world from their own eyes rather than from the eyes of others), those from more interdependent or collectivistic cultures routinely view themselves from a third-person perspective (Cohen et al., 2007). In the context of lockdown orders during a pandemic, if people take others' or the society's perspective, then they might realize that even if they strongly want to go out, it would be in the community's interest not to do so (as the person might catch an infection outside and bring it home, or transmit their infection to others if they have an asymptomatic infection). In contrast, attending to the self and one's personal goals could mean an increased tendency to act on one's desires to leave one's home (e.g., to have a change of location or to improve one's mood), irrespective of the risks that it might pose to others or society. These arguments lead to the prediction that individualism is associated with a lower tendency to comply with social distancing rules during the Covid-19 pandemic. However, it is also possible that more individualistic people might follow social distancing rules more closely. Attending to others' interests could be reflected in a greater desire to meet friends and family members to make sure they are doing fine, relieve them of loneliness, and help them if needed, resulting in lower compliance with social distancing rules. Further, more collectivistic people tend to be more responsive to social pressures (e.g., Cialdini et al., 1999), so they might have a harder time refusing the requests of friends and family members to meet up, or of their work supervisor or colleagues to come to work despite lockdown orders. In contrast, attending to one's own needs and interests could be reflected in an increased tendency to comply with social distancing rules to ensure one's own safety.
Thus, these arguments lead to the prediction that individualism would be associated with a higher tendency to comply with social distancing rules during the Covid-19 pandemic. Indeed, extant research has provided mixed evidence about the effect of individualism on people's tendency to follow social distancing rules. On the one hand, research has found that in counties that spent more years on the US frontier, and thus are more likely to emphasize individualism (Kitayama et al., 2006), people were less likely to follow social distancing guidelines (Bazzi et al., 2021). Further, unpublished research has found that in more individualistic countries, people were more willing to violate social distancing rules (Frey et al., 2020; Im & Chen, 2020). Along related lines, people in more individualistic regions were less likely to wear masks (Lu et al., 2021). However, other unpublished research has found that in more individualistic US states, people were more willing to follow social distancing rules (Im et al., 2020). These inconsistent results about the relationship between region-level individualism and people's tendency to follow social distancing rules may be due to certain limitations of extant research. First, past papers on this topic have each included a single study at one level of analysis (i.e., either county-, state-, or country-level analysis; e.g., Bazzi et al., 2021; Frey et al., 2020; Huynh, 2020; Im et al., 2020). The mixed findings could arise either from differing indicators of individualism across different studies or from idiosyncratic analytic choices (Silberzahn et al., 2018). Relatedly, these mixed findings could result from different levels of analysis. Most research in cultural psychology has taken a macro-level approach to individualism-collectivism by comparing individuals across countries varying on individualism-collectivism or independence-interdependence (Hofstede, 1980; Markus & Kitayama, 1991; Triandis, 1989). Researchers have also taken a micro-level approach by studying individuals varying on the value of individualism-collectivism (e.g., Singelis, 1994; Triandis, 1995). However, researchers rarely examine whether effects obtained at the macro-level generalize to the micro-level and vice versa (for an exception, see Lee et al., 2000). Some research in cultural psychology suggests that macro-level findings might not generalize to the micro-level and vice versa. For example, although various indicators of analytic-holistic cognition vary consistently at the macro-level (i.e., East Asians are more holistic and Westerners are more analytic on a wide range of tasks), at the micro-level, various tasks assessing analytic-holistic tendencies are uncorrelated with each other (Na et al., 2010; see also Kitayama et al., 2009). Thus, it is important to examine whether the individualism effect holds at multiple levels. If micro-level individualism is the key construct, then individuals' personal values would be the key driver of their social distancing behavior: more individualistic people violate social distancing rules more often. Any macro-level findings would then be mere aggregates of individual-level phenomena; the key cause would be individual values, not regionally prevalent values. Alternatively, suppose macro-level individualism is the key causal construct.
In that case, most people might violate (or follow) social distancing rules in more individualistic regions because everyone else is doing so, and because doing so is consistent with the individualistic ethos prevalent in the region. In this case, individual-level values may play little to no role, and therefore region-level findings would not replicate at the individual level. It is also possible that there are both macro-level and micro-level effects, such that everyone, irrespective of their personal values, violates social distancing rules more in more individualistic (or more collectivistic) regions; and within a given region, more individualistic (or more collectivistic) people violate social distancing rules more. To address some of these complexities, we test the effect of individualism at each level of analysis. However, although our studies rule in a micro-level effect and also document the possibility of a macro-level effect, we were unable to independently assess micro- and macro-level effects in the study. Third, past research has not explored the mechanisms underlying the relationship between individualism and people's tendency to follow social distancing rules. Given the competing hypotheses outlined above, it is possible that the different mechanisms underlying individualism may lead to different effects on people's tendency to follow social distancing rules. Thus, exploring the underlying mechanism can help clarify the mixed findings in the literature. In the present research, we focus on four classes of potential mechanisms: concern for self, concern for others, motivation for norm compliance, and optimism about the pandemic. Fourth, past studies suffer from limitations associated with data-analysis choices. For instance, Frey et al.'s (2020) results may be subject to omitted variable bias, as they did not control for region-level characteristics (e.g., economic development, educational attainment, population density). Moreover, the social distancing data in some studies only covered the early stage of the Covid-19 pandemic (e.g., until March 29, 2020, for Huynh, 2020; until April 13, 2020, for Im et al., 2020). These limitations may explain some of the inconsistent findings in the literature.
--- Overview of Studies
To address the above limitations, the present research uses a multi-method investigation to examine the role of individualism in people's tendency to follow Covid-19 social distancing rules. Given the competing hypotheses outlined above, we first investigate people's lay theories about whether individualism would be associated with more or less compliance with social distancing rules. Unlike scientific theories, lay theories are rarely explicitly articulated but "set up an interpretive frame within which information is processed" (Chiu et al., 1997, p. 19). As lay theories provide people with schema-like knowledge structures that help them process information and make decisions (Levy et al., 2006; Molden & Dweck, 2006), understanding people's lay theories about the role of individualism during the pandemic can help make sense of and predict their behavior. We then test whether people's lay theories about the role of individualism during the pandemic hold at multiple levels of analysis, including the individual, county, state, and country levels. Further, we examine several potential mechanisms that can explain the effect of individualism on people's tendency to follow social distancing rules.
Finally, we address limitations of prior research by controlling for a number of region-level factors, such as economic development, educational attainment, and population density, and by using data until December 31, 2020. We also conduct analyses using multiple archival datasets assessing actual behavior. Specifically, we conducted five studies to test our hypotheses using different research designs (experimental, correlational, and longitudinal) and samples from different countries. Using an Australian sample, a pilot study tested people's lay theories about the effect of country-level individualism-collectivism on residents' likelihood of following social distancing rules during a Covid-19 lockdown. In Study 1, we analyzed a longitudinal dataset with records of about 18 million smartphones across the US. We used two different region-level indicators of individualism: a state-level individualism score (Vandello & Cohen, 1999) and county-level residential mobility (Oishi & Kisling, 2009). In Study 2, we analyzed another longitudinal dataset with people's mobility data across 79 countries and regions varying on individualism. Study 3 tested whether Americans who scored higher on individualism reported that they had violated social distancing rules more often during the Covid-19 lockdown in their locality. Finally, Study 4 tested four underlying mechanisms that can explain why more individualistic people are more likely to violate social distancing rules, using data from both the US and the UK. We report all participants, conditions, and measures. Materials used in the pilot study and Studies 3 and 4, which are not already available in previous publications, are reported in the Supplementary Materials. Survey materials, data, and code related to this article are available at https://osf.io/d3sm7/?view_only=222e0b5abc42468d82f5b7900b28f99e.
--- Pilot Study: Lay Theories about Individualism and Compliance with Covid-19 Social Distancing Rules
A pilot study assessed people's lay theories about whether individualism is associated with following or violating social distancing rules during the Covid-19 pandemic. We presented participants with descriptions of an individualistic country and a collectivistic country, and assessed their expectations about the extent to which people in the two countries would follow social distancing rules during a Covid-19 lockdown.
--- Method
We pre-registered the methods and analyses of this study at https://osf.io/c2ymu?view_only=4e44ac0162564b06b4ef82a07e52a6c2.
Participants. In a previous study, we identified a correlation coefficient of r = .27 (equivalent to Cohen's d = .56) between individualism and violating social distancing rules. We assumed a slightly smaller effect size of d = .50. A power analysis with d = .50, α = .05 (one-tailed), and power = 80% indicated that we needed to recruit 102 participants. Rounding this number, we posted a survey seeking 100 Australian residents on Prolific (Peer et al., 2017). In response, 77 participants completed the survey (Mage = 31.03, SDage = 9.19; 28 women, 48 men, and 1 missing) before it expired. All responses came from unique IP addresses. The study scenario was set in the Solomon Islands, a group of islands close to Australia and Indonesia. We decided to sample participants from Australia because Australians likely know that the Solomon Islands actually exist and are not fictitious, and would be interested in reading about the Solomon Islands' culture.
However, we estimated that few Australians have visited the Solomon Islands, so our participants would probably not have any pre-existing assumptions about the culture of the Solomon Islands. More generally, we sought to sample participants from countries other than the US.
Procedure. Participants were presented with a scenario describing the culture of two Pacific island nations close to Australia (i.e., the Solomon Islands and the Marshall Islands). We described one country's culture as individualistic and the other's as collectivistic. Participants were randomly assigned to either the Solomon Islands-Individualistic Marshall Islands-Collectivistic condition or the Solomon Islands-Collectivistic Marshall Islands-Individualistic condition. The content of the manipulation was based on Triandis and Gelfand's (1998) individualism-collectivism scale. Specifically, in the individualistic culture condition, participants were told that residents of the relevant island prefer to be independent, prefer individual activities over group activities, believe that competition is the law of nature, and try to work harder to beat others. In the collectivistic culture condition, participants were told that residents of the relevant island emphasize the well-being of their friends, enjoy spending time with others, feel good when they cooperate with others, and believe that it is important to respect the decisions made by the group as a whole (see Supplementary Materials for the detailed scenarios). After they read the scenario, participants were informed that Covid-19 had spread to both the Solomon Islands and the Marshall Islands, and that the two islands had instituted a lockdown: all residents were asked to stay at home at all times unless they worked in essential industries. We asked participants: "During the lockdown, in which country do you think people will be more likely to (1) follow the lockdown regulations, (2) follow social-distancing guidelines, (3) follow stay-at-home guidelines, and (4) follow the government's orders" (α = .77). Participants responded on an 11-point scale ranging from "-5 = definitely more likely in the Solomon Islands" to "5 = definitely more likely in the Marshall Islands."
--- Results
As per the pre-registered analysis plan, we excluded two participants who provided gibberish or irrelevant responses to the open-ended question asking them to describe the culture of each island (see Supplementary Materials for responses that were judged to be gibberish). An independent-samples t-test revealed that participants in the Solomon Islands-Collectivistic Marshall Islands-Individualistic condition were more likely to expect people in the Solomon Islands to follow social distancing rules during the Covid-19 pandemic (M = -1.14, 95% CI [-1.82, -.40], SD = 2.24) than those in the Solomon Islands-Individualistic Marshall Islands-Collectivistic condition (M = .25, 95% CI [-.54, 1.06], SD = 2.55), t(73) = 2.51, p = .007 (one-tailed, as we pre-registered a directional hypothesis), d = .59, 95% CI [.05, 1.12]. Thus, the pilot study found that although competing hypotheses can be made about the effect of individualism on the extent to which people follow Covid-19 social distancing rules, our participants expected residents of an individualistic culture to be less likely to follow social distancing rules during a Covid-19 lockdown than residents of a collectivistic culture.
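The pilot study's sample-size target and its directional test can both be reproduced with standard tooling; below is a minimal sketch, using simulated condition scores rather than the actual data:

```python
import math
import numpy as np
from scipy import stats
from statsmodels.stats.power import TTestIndPower

# Required n for d = .50, alpha = .05 (one-tailed), power = .80,
# as in the pre-registered power analysis.
n_per_group = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05,
                                          power=0.8, alternative="larger")
print(math.ceil(n_per_group))  # 51 per group, i.e., ~102 in total

def cohens_d(a, b):
    """Cohen's d using the pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled = np.sqrt(((na - 1) * np.var(a, ddof=1) + (nb - 1) * np.var(b, ddof=1))
                     / (na + nb - 2))
    return (np.mean(a) - np.mean(b)) / pooled

# Hypothetical scores on the -5..5 expectation scale for the two conditions
rng = np.random.default_rng(1)
cond_a = rng.normal(-1.14, 2.24, 37)
cond_b = rng.normal(0.25, 2.55, 38)
t, p_two = stats.ttest_ind(cond_a, cond_b)
p_one = p_two / 2  # one-tailed, matching the pre-registered directional hypothesis
print(round(cohens_d(cond_a, cond_b), 2))
```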
The subsequent studies tested whether people's lay theory actually pans out in behavioral data at multiple levels of analysis.
--- Study 1: Region-Level Longitudinal Study Using Mobile-Phone Location Data
The goal of this study was to examine whether people's lay theories about the individualism effect hold at the state and county levels using behavioral data. Using location data from US residents' mobile phones, we assessed the extent to which a government-ordered lockdown increased the proportion of residents in a given county who stayed at home in the daytime. The bigger the increase, the more effective the lockdown. We measured region-level individualism in two different ways. First, we used state-level individualism scores provided by Vandello and Cohen (1999), which were constructed based on socio-structural variables (e.g., the ratio of divorce rate to marriage rate), behaviors (e.g., the proportion of people carpooling), and attitudes (e.g., the proportion of people without a religious affiliation). Second, we used a socio-structural variable, residential mobility, a precursor of individualism (e.g., Oishi, Lun, et al., 2007; Oishi et al., 2012; Oishi & Kisling, 2009). Residential mobility is defined as "the frequency with which individuals change their residence" (Oishi, 2010, p. 6). Individuals who move more frequently place greater importance on their personal selves over their collective selves (Oishi, Lun, et al., 2007). For example, people living in metropolitan cities, where residential mobility is relatively higher, considered their personal self as more important than did those living in regional cities, where residential mobility is relatively lower (Kashima et al., 2004). As people in individualistic cultures place greater importance on their personal self than on their collective self (Triandis et al., 1988), residential mobility serves as an antecedent of individualism (Oishi et al., 2012). Indeed, extensive research has found that in regions with higher residential mobility, people are more individualistic (Oishi, 2010). We thus used county-level residential mobility as another indicator of individualism. In sum, we sought to test our hypothesis using two different region-level indicators of individualism.
--- Method
Independent variables. For the first indicator of individualism, we obtained the state-level collectivism index from Vandello and Cohen (1999) and then multiplied it by -1 to obtain a state-level individualism index. The second indicator was residential mobility. Following Oishi, Rothman, et al. (2007) and McCann (2015), we computed county-level residential mobility by dividing the number of residents who, one year earlier, had lived in a different dwelling in a different micropolitan or metropolitan area by the total population of the county. We obtained this data from the 2016 American Community Survey's 5-year estimate at the census block group level (U.S. Census Bureau, 2016). We aggregated the block-level data to the county level to calculate residential mobility. Higher residential mobility represents higher individualism. The correlation between Vandello and Cohen's (1999) individualism score and residential mobility is 0.099 (p < 0.001). For ease of interpretation, we normalized all independent variables to a mean of 0 and a standard deviation of 1.
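A minimal pandas sketch of this block-to-county aggregation (column names are hypothetical, not the Census Bureau's actual field names):

```python
import pandas as pd

# One row per census block group; 'movers' counts residents who lived in a
# different dwelling in a different micro/metropolitan area one year ago.
blocks = pd.DataFrame({
    "county_fips": ["01001", "01001", "01003"],
    "movers": [120, 80, 300],
    "population": [1500, 900, 4000],
})
county = blocks.groupby("county_fips")[["movers", "population"]].sum()
county["residential_mobility"] = county["movers"] / county["population"]

# Normalize to mean 0, SD 1, as in the paper
county["mobility_z"] = ((county["residential_mobility"]
                         - county["residential_mobility"].mean())
                        / county["residential_mobility"].std())
```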
Dependent variable. To measure the extent to which people followed social distancing rules, we used data provided by SafeGraph Inc. (SafeGraph, 2020). The dataset contains location information for millions of US residents, who are representative of the 77% of US residents who use smartphones (Chen & Rohla, 2018). We analyzed all data from January 1 to December 31, 2020. The dataset contained location records of about 18 million smartphones, with an average of 6,000 smartphones in each county. Participants used one of many smartphone apps and provided their opt-in consent for the app to collect their location data. The data is anonymous and is aggregated at the level of census block groups. Based on a smartphone's geolocation throughout the day, SafeGraph coded the overall traveling pattern for all devices in each census block group on a given date. Our analysis was at the level of dates nested within counties. To measure the extent to which people followed social distancing rules, we constructed several measures. The first measure was the median number of minutes devices were found at home among all devices on a given date in a given county ("HomeDwellTime"). Specifically, for each device, SafeGraph summed the number of minutes the device was found at home across the day to get the total number of at-home minutes. Then SafeGraph calculated the median number of at-home minutes among all devices within a given county. The second measure was the percentage of smartphones that were completely at home on a given date in a given county (i.e., we divided the number of smartphones that spent the whole day at home by the total number of smartphones; "%StayHome"). SafeGraph marks device holders as working (part-time or full-time) when the device is found at a location other than home for more than 3 hours. Therefore, as a robustness check, we also computed our dependent variable by dividing the number of smartphones that were completely at home by the number of smartphones belonging to individuals not working that day ("%StayHome(NonWork)"). The fourth measure was the median percentage of time devices were found at home on a given date in a given county ("PercHome"). Specifically, for each device, SafeGraph divided the number of minutes the device was observed at home by the number of minutes the device was observed at all places to calculate the percentage of time the device was found at home. Then SafeGraph took the median percentage of time devices were found at home across all observed devices within a given county. The correlations between HomeDwellTime and %StayHome, between HomeDwellTime and %StayHome(NonWork), between HomeDwellTime and PercHome, between %StayHome and %StayHome(NonWork), between %StayHome and PercHome, and between %StayHome(NonWork) and PercHome are 0.232 (p < .001), 0.207 (p < .001), 0.636 (p < .001), 0.965 (p < .001), 0.721 (p < .001), and 0.638 (p < .001), respectively.
Other variables. Following Allcott, Boxell, Conway, Gentzkow, et al. (2020) and Alexander and Karger (2021), we integrated county-level stay-at-home orders with state-level policies to form a county-level policy stringency index.
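Before the policy-index details that follow, a minimal sketch of how the four mobility measures described above could be derived from device-level records (hypothetical columns; SafeGraph actually ships these quantities pre-aggregated):

```python
import pandas as pd

# Hypothetical device-level records (SafeGraph provides these pre-aggregated).
devices = pd.DataFrame({
    "county_fips":     ["01001"] * 4,
    "date":            ["2020-04-01"] * 4,
    "minutes_at_home": [1440, 600, 1440, 900],
    "pct_time_home":   [100.0, 45.0, 100.0, 70.0],
    "all_day_home":    [True, False, True, False],
    "worked":          [False, True, False, False],
})

g = devices.groupby(["county_fips", "date"])
home_dwell_time = g["minutes_at_home"].median()          # "HomeDwellTime"
pct_stay_home = g["all_day_home"].mean() * 100           # "%StayHome"
perc_home = g["pct_time_home"].median()                  # "PercHome"
non_work = devices[~devices["worked"]]
pct_stay_home_nonwork = (non_work.groupby(["county_fips", "date"])
                         ["all_day_home"].mean() * 100)  # "%StayHome(NonWork)"
```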
Specifically, we obtained information about county-level stay-at-home orders from the National Association of Counties (NACo). We obtained the composite state-level policy stringency index from the Oxford Covid-19 Government Response Tracker (OxCGRT; Hale et al., 2020), which equaled the sum of the closure and containment policy stringency on eight dimensions (i.e., school closure, workplace closure, public event cancellation, gathering restriction, public transport closure, stay-at-home requirements, internal movement restriction, and international travel controls). For the 148 counties that issued stay-at-home orders earlier than their state did, we coded a stay-at-home order dummy variable as 1 after the county-level policy came into effect but before the state-level policy came into effect. For this period, we created a composite county-level policy stringency index for these 148 counties, which equaled the stay-at-home order dummy plus the seven other policy stringency indices (i.e., school closure, workplace closure, public event cancellation, gathering restriction, public transport closure, internal movement restriction, and international travel controls) coded by OxCGRT. For all other periods for these 148 counties, and for the remaining counties, in which a state-level policy was in effect, the county-level policy stringency index equaled the composite state-level policy stringency index calculated by OxCGRT. For ease of interpretation, we normalized this variable to a mean of 0 and a standard deviation of 1. We controlled for the natural logarithm of one plus the number of new Covid-19 deaths in that county on that date in our analyses. These variables were obtained from data provided by The New York Times (Smith et al., 2020). We included the number of new deaths as a control variable because the greater the number of new deaths in a county, the more people in that county would be expected to stay at their homes (Ding et al., 2020; Ru et al., 2020). Following Allcott, Boxell, Conway, Ferguson, et al. (2020), if the number of new deaths in a given county on a given date was missing, we assumed that there were no confirmed new deaths in the county on that date and therefore replaced the missing value with 0. We also included a number of county-level control variables: median income, percentage of individuals with a Bachelor's degree or higher, percentage of individuals who identify as non-white, population density, percentage of individuals who are over 65 years old, and percentage of residents who voted for Donald Trump in the 2016 US presidential election. We included these socio-demographic variables because they have been found to be correlated with individualism (Kemmelmeier, 2003; Snibbe & Markus, 2005; Vandello & Cohen, 1999). In addition, we controlled for median income because people in higher-income countries and higher-income localities in the US comply more with Covid-19 lockdown orders (Maire, 2020; Weill et al., 2020). We controlled for educational attainment because better-educated people are more likely to follow social distancing rules (Zhao et al., 2020). We controlled for the proportion of people from ethnic minorities because certain minority groups are disproportionately represented in essential jobs, such as healthcare, grocery stores, and public transportation (U.S. Bureau of Labor Statistics, 2019), which might require them to report to work even under a lockdown.
We controlled for total population and population density because the spread of SARS-CoV-2 relies on human-to-human contact, and more people and higher population density lead to higher contact rates (Hu et al., 2013), which might thus reduce people's tendency to violate social distancing rules. We controlled for the percentage of the population over 65 years old because older people are more likely to become severely ill from Covid-19 and thus might be more likely to follow social distancing rules. We controlled for the percentage of voters who voted for Donald Trump because, at the beginning of the Covid-19 pandemic, President Trump downplayed the risks of Covid-19, which would likely reduce Trump voters' compliance with social distancing rules (Allcott, Boxell, Conway, Gentzkow, et al., 2020; Gollwitzer et al., 2020; Painter & Qiu, 2021). We obtained data on median income from the 2016 American Community Survey's 5-year estimate at the county level (U.S. Census Bureau, 2016). We obtained data on educational level (i.e., the number of people with different levels of educational attainment), ethnicity (i.e., the number of people of different races), total population, total land, and age distribution (i.e., the number of people in different age groups) from the 2016 American Community Survey's 5-year estimate at the census group level. Data at the census group level were aggregated to the county level using county FIPS codes. The data on voting patterns in the 2016 US presidential election were obtained from the MIT Election Data and Science Lab (2018). All our measures are summarized in Table 1; an excerpt:
%StayHome(NonWork): Number of devices that were found completely at home divided by the number of devices without working patterns (e.g., part-time or full-time) in a county on a day. Source: SafeGraph Inc.
PercHome: The median percentage of time devices were found at home in a county on a day. Source: SafeGraph Inc.
PolicyStr: County-level stringency index of the pandemic containment policies. Source: OxCGRT.
To test our hypothesis, we analyzed the data using the difference-in-differences approach (Bertrand et al., 2004). As a quasi-experimental design, this approach utilizes the staggered adoption of containment and closure policies across counties. This approach can help to tease out the effects of unobserved but fixed omitted variables (Angrist & Pischke, 2008). Our analyses were conducted at the County × Date level, with the following regression model:

$$HomeDwellTime_{i,t} = \alpha + \beta_1 PolicyStr_{i,t} + \beta_2 Individualism_i \times PolicyStr_{i,t} + \beta_3 Controls_i \times PolicyStr_{i,t} + \ln(1 + NewDeaths)_{i,t} + r_i + d_t + \varepsilon_{i,t}$$

In this formula, i represents each county; t represents each day from January 1 to December 31, 2020; HomeDwellTime is the median number of minutes devices were found at home in county i on date t; and r and d are the county and date fixed effects, respectively. We included county-level fixed effects to account for the dozens of ways in which counties differ from each other that are not captured by our control variables. No matter how many county-level variables we control for, there is always the possibility that some relevant variables are omitted (Imbens & Wooldridge, 2009). Thus, including county-level fixed effects is a conservative strategy that accounts for all other variables that differ across counties (Bertrand & Mullainathan, 2003). We included date-level fixed effects to account for the effects of date-specific events (e.g., national policy announcements, the weather) that varied across dates and thus could have impacted the dependent variables.
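The paper estimated this specification in STATA (via reghdfe, described below); a rough Python analogue with two-way fixed effects and county-clustered errors follows, using linearmodels, hypothetical column names, an assumed pre-built county × date frame named `panel`, and omitting the control × PolicyStr terms for brevity:

```python
import pandas as pd
from linearmodels.panel import PanelOLS

# panel assumed: one row per county × date, with columns home_dwell_time,
# policy_str, individualism (time-invariant), and ln_new_deaths.
df = panel.set_index(["county_fips", "date"])
df["ind_x_policy"] = df["individualism"] * df["policy_str"]

# EntityEffects/TimeEffects absorb the county and date fixed effects; the
# time-invariant main effect of individualism is absorbed by EntityEffects.
res = PanelOLS.from_formula(
    "home_dwell_time ~ policy_str + ind_x_policy + ln_new_deaths"
    " + EntityEffects + TimeEffects",
    data=df,
).fit(cov_type="clustered", cluster_entity=True)
print(res.params["ind_x_policy"])  # the Individualism × PolicyStr coefficient
```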
In our analyses, we clustered standard errors at the county level to account for within-county correlation in the dependent variable. Our model accounts for the main effect of county-level individualism; however, this effect is absorbed in the county-level fixed effects and thus is not represented as a separate coefficient. We used the difference-in-differences analytic method, which was implemented with the STATA command reghdfe developed by Correia (2017). Specifically, we included the county-level fixed effects and date-level fixed effects in the regressions for the county × date panel data in this study. Including the county-level fixed effects is equivalent to including an indicator/dummy variable for each county. Since individualism is a state-level measure and does not change across time, the main effect of individualism is absorbed by the county-level fixed effects. Given that estimating coefficients via regression may suffer from omitted variable bias (Imbens & Wooldridge, 2009), using the fixed-effects model can help mitigate this problem. When testing the fixed-effects model using the difference-in-differences analytic method, STATA automatically drops the main effects due to their collinearity with the fixed effects while retaining the interaction effects. For these reasons, the main effect of individualism is absent from our table.
--- Results
State-level individualism score and following social distancing rules. Table 2 reports the results based on containment policy stringency and Vandello and Cohen's (1999) individualism index. Model 1 reports the results for the median number of minutes devices were found at home for all devices in each county on a given date. The coefficient of PolicyStr in Model 1 is 3.783 (p < 0.001), indicating that people spent more time at home when the containment policies were more stringent. The coefficient of the interaction between Individualism and PolicyStr in Model 1 is -6.586 (p < 0.001). The negative sign indicates that people in counties with higher individualism were less likely to follow social distancing rules and stay home. Models 2 and 3 examine the percentage of devices that were found at home during the entire day. Model 2 reports the results including all residents. The effects are qualitatively the same when we exclude residents who went to work on a given day and thus might be classified as essential workers (Model 3). As a robustness check, Model 4 examines the median percentage of time devices were found at home, and once again, the coefficient of the interaction between Individualism and PolicyStr is negative and significant. In Models 2 and 3, the coefficients of PolicyStr are 0.890 (p < 0.001) and 0.991 (p < 0.001), respectively. These results indicate that more residents spent their whole day at home when the containment policies were more stringent. However, the effect is small: a one standard deviation increase in PolicyStr leads to only a 0.890 percentage point increase in the percentage of residents staying at home for the whole day. This small effect is consistent with findings from recent research (Allcott, Boxell, Conway, Gentzkow, et al., 2020; Chiou & Tucker, 2020; Painter & Qiu, 2021). One explanation is that our conservative approach of including county and date dummy variables extracted a large amount of variance that could potentially have been associated with shelter-in-place orders.
These dummy variables would not have reduced the effect size if shelter-in-place orders had been randomly distributed over counties and dates, but in reality, the orders were relatively smoothly distributed over space and time. We also included interaction terms between county-level socio-demographic characteristics and PolicyStr as control variables. In Model 2, the interaction coefficient between MedianIncome and PolicyStr is 0.496 (p < 0.001), indicating that people in wealthier counties were more likely to follow social distancing rules. The coefficient of the interaction between %HighEduc and PolicyStr is 0.267 (p < 0.001), indicating that people in counties with a higher proportion of residents with a Bachelor's degree or higher were more likely to follow the closure policies. The coefficient of the interaction between %Minority and PolicyStr is -0.365 (p < 0.001), indicating that people in counties with a higher proportion of non-white residents were less likely to follow social distancing rules. The coefficient of the interaction between PopuDensity and PolicyStr is 0.038 (p = 0.34), indicating that people in counties with higher population density were non-significantly more likely to follow social distancing rules. The coefficient of the interaction between %Over65 and PolicyStr is -0.196 (p < 0.001), indicating that people in counties with a higher percentage of people above 65 years old were less likely to follow social distancing rules. The coefficient of the interaction between %Trump and PolicyStr is -0.503 (p < 0.001), indicating that people in counties with a higher proportion of Trump voters were less likely to follow social distancing rules. Our results held even after controlling for the Big Five personality traits (see Panel A of Table A2 in the Supplementary Materials). For brevity, regression results without control variables are included in the Supplementary Materials.
--- Residential mobility.
--- Discussion
Study 1 found that in more individualistic US states, Covid-19 lockdowns led to a smaller increase in the proportion of people staying home the whole day. Similarly, in counties with higher residential mobility, which is associated with greater individualism, lockdowns led to a smaller increase in the proportion of people staying home the whole day. This finding is consistent with that of Salvador et al. (2020), who found that the greater a country's relational mobility, the faster the country's Covid-19 growth rate. Given that more independent countries have higher relational mobility (Schug et al., 2010), our findings converge with Salvador et al. (2020). Our findings also held even after controlling for the county-level severity of Covid-19 (i.e., the number of new Covid-19 deaths in each county on each date). An examination of the control variables indicated that counties with a higher median income, higher educational attainment, fewer people over 65 years of age, and fewer people who voted for President Trump in 2016 exhibited a bigger increase in the proportion of people staying home following a lockdown.
--- Study 2: Country-Level Longitudinal Study Using Cross-Country Google Mobility Data
Study 2 sought to replicate Study 1's findings at the country level by analyzing Google mobility data across 79 countries and regions. We examined whether people in more individualistic countries were less likely to follow social distancing rules.
We measured country/region-level individualism using Hofstede's scores (Hofstede, 1980). We measured people's tendency to violate or follow social distancing rules by calculating the number of times they visited parks (e.g., local parks, national parks, public beaches, marinas, and public gardens), grocery stores (e.g., grocery markets, food warehouses, and pharmacies), retail & recreation locations (e.g., restaurants, cafes, shopping centers, theme parks, museums, libraries, and movie theaters), and workplaces, compared to residential places. The inclusion of multiple dependent measures helps assess the specificity of the effect of individualism. The findings from our Study 1 suggest that people in more independent counties would be less willing to follow social distancing rules, and would thus be less likely to be found at residential places and more likely to visit other places that are open, such as parks and grocery stores. However, during much of the pandemic, workplaces and retail and recreation businesses were either fully closed or open at limited capacity. More generally, the stronger the lockdown policy in place, the less likely it was that people could voluntarily choose to visit these places. Our theorizing states that more individualistic people are more likely to voluntarily go out to places that were actually open during the pandemic, so individualism should be unrelated to people's mobility to workplaces and retail and recreation locations. If individualism did predict people's mobility to workplaces and retail and recreation locations, the findings would suggest that the effect of individualism could be spurious.
--- Method
Independent variable. We obtained country-level individualism scores from Geert Hofstede's website (https://geerthofstede.com/). This data has been widely used in previous studies (e.g., Chui et al., 2010; Han et al., 2010; Hofstede, 1980). For ease of interpretation, we normalized the independent variable to a mean of 0 and a standard deviation of 1.
Dependent variable. To measure the extent to which people followed social distancing rules, we used cross-country mobility data from Google. The Google mobility dataset covers 135 countries and regions around the world. This dataset reports how people's frequency of visits to various places (e.g., grocery stores, pharmacies, parks, restaurants, workplaces, and places of residence) changed compared to a baseline period (i.e., January 3 to February 6, 2020). During the baseline period, very few countries and regions had adopted lockdown or social distancing policies. The Google mobility dataset covers mobility data from February 15 onwards. Similar to Study 1, we used data until December 31, 2020. During this period, most countries had some form of a lockdown, as many countries were severely affected by the Covid-19 pandemic. For each country on each day, Google calculated the number of visits on each day of the week compared to the median number of visits on the same day of the week during the baseline period. For example, the mobility data for May 1 (a Friday) would reflect the number of visits on May 1 minus the median of the number of visits on January 3, January 10, January 17, January 24, and January 31 (all Fridays). Similar to Study 1, we analyzed the data with dates nested within countries. We constructed five dependent variables, described below.
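Before turning to those variables: the weekday-matched baseline adjustment just described can be reproduced with a short pandas sketch (hypothetical column names, not Google's actual schema):

```python
import pandas as pd

# Hypothetical daily visit counts per country; Google publishes only the
# already-baselined changes, so this reconstructs the adjustment they describe.
visits = pd.DataFrame({
    "country": ["AU"] * 60,
    "date": pd.date_range("2020-01-03", periods=60, freq="D"),
    "park_visits": range(60),
})
visits["weekday"] = visits["date"].dt.weekday

base = visits[(visits["date"] >= "2020-01-03") & (visits["date"] <= "2020-02-06")]
baseline = (base.groupby(["country", "weekday"])["park_visits"]
            .median().rename("baseline").reset_index())

visits = visits.merge(baseline, on=["country", "weekday"])
visits["park_mobility"] = visits["park_visits"] - visits["baseline"]
```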
We calculated residents' mobility patterns using the number of visits to "Parks" (e.g., local parks, national parks, public beaches, marinas, and public gardens), "Grocery & Pharmacy" (e.g., grocery markets, food warehouses, and pharmacies), "Retail & Recreation" (e.g., restaurants, cafes, shopping centers, theme parks, museums, libraries, and movie theaters), and "Workplace." The higher the mobility to "Parks," "Grocery & Pharmacy," "Retail & Recreation," and "Workplace," the higher the probability that people were violating social distancing rules. The higher the mobility to "Residential Places," the lower the probability that people were violating social distancing rules.
Other variables. As the definition of stay-at-home orders could vary across countries, we included the stringency of each country's lockdown orders as a key variable in our model. The policy stringency index measured the overall stringency of governments' measures to contain Covid-19. We obtained this data directly from the Oxford Covid-19 Government Response Tracker (OxCGRT; Hale et al., 2020). OxCGRT collected information on common policy measures that governments took to contain the Covid-19 pandemic, such as closing schools, closing non-essential workplaces, closing public transport, canceling public events, putting restrictions on gatherings, instituting stay-at-home requirements, restricting internal movement, and restricting international travel. We also controlled for the natural logarithm of one plus the number of new Covid-19 deaths in the country on each date, as provided by OxCGRT. We included a number of country-level control variables: GDP per capita, median age, total population, population density, and life expectancy. We obtained the data on GDP per capita, total population, and population density from the World Bank (2018). We obtained the data on the median age of each country's population from the United Nations Department of Economic and Social Affairs (2020). We obtained the life expectancy data from Worldometer (2020). We included GDP per capita because in wealthier countries, people might be more responsive to government orders (Giuliano, 2005). We included population density and total population for the same reasons as in Study 1. We included life expectancy as a proxy for the robustness of a country's health system (Evans et al., 2001); people might be more likely to violate social distancing policies if they are confident about their country's health system. We included median age because young people are more likely to violate social distancing rules (Berg et al., 2020). We did not include country-level tightness scores as a covariate because Gelfand et al.'s (2011) scores are only available for 33 countries. All our measures are summarized in Table 4. After merging variables from the above datasets, we had data from OxCGRT, Google mobility, and Hofstede for 79 countries and regions. Therefore, we focus on these 79 countries and regions in the following analyses. An excerpt of Table 4:
Grocery & Pharmacy: Mobility to grocery & pharmacy locations (e.g., grocery markets, food warehouses) on a given day compared to mobility to grocery & pharmacy locations in the baseline period. Source: Google Mobility.
Residential Places: Mobility to residential places on a given day compared to mobility to residential places in the baseline period. Source: Google Mobility.
Retail & Recreation: Mobility to retail & recreation locations (e.g., restaurants, cafes, shopping centers) on a given day compared to mobility to retail & recreation locations in the baseline period. Source: Google Mobility.
Workplace: Mobility to workplaces on a given day compared to mobility to workplaces in the baseline period. Source: Google Mobility.
As in Study 1, we tested whether the effects of the stringency of government containment policies on people's mobility to parks, grocery and pharmacy locations, retail and recreation places, workplaces, and residential places became weaker in countries that are higher in individualism. Our analyses were conducted at the Country × Date level, with the following regression model:

$$ParkMobility_{i,t} = \alpha + \beta_1 PolicyStr_{i,t} + \beta_2 Individualism_i \times PolicyStr_{i,t} + \beta_3 Controls_i \times PolicyStr_{i,t} + \ln(1 + NewDeaths)_{i,t} + r_i + d_t + \varepsilon_{i,t}$$

As in Study 1, we included country-level fixed effects and date-level fixed effects. The main effects of individualism and of the control variables are absorbed by the country-level fixed effects. We clustered standard errors at the country level to account for within-country correlation in the dependent variable.
--- Results
Individualism and Google Mobility.
--- Discussion
Study 2 replicated the key finding of Study 1 at the country level: Covid-19 lockdown orders led to a decrease in the proportion of people visiting parks or grocery and pharmacy stores and an increase in the likelihood of being found at residential places. However, people in more individualistic countries left their homes more frequently, visiting public parks or grocery and pharmacy stores despite social distancing rules. In Study 2, we found no relationship between individualism and people's mobility to workplaces and to retail and recreation places. One explanation is that during times of stringent social distancing policies, these places were likely fully closed or open at limited capacity, and thus people had less discretion over whether they could visit workplaces and retail and recreation locations. It is also possible that by December 2020, some work and retail locations had opened up, and even before then, there was probably a high degree of variability in the extent to which lockdown orders were enforced. Thus, people in more individualistic countries could have visited these places but decided not to do so. Perhaps people in these countries were not necessarily motivated to hurt their fellow citizens by going to high-risk places that could worsen the spread of Covid-19 (e.g., work and retail locations, which are typically indoors), but were motivated to exercise their individual freedoms by going to outdoor places (e.g., parks) where they could meet people even if it meant violating official lockdown orders. For the nonsignificant results of some of our covariates, we have two explanations. First, controlling for multiple predictors in the same model may weaken the effect of a given variable. For example, if we only included GDP per capita in the regression, then GDP per capita significantly predicted mobility to grocery stores and workplaces. However, if we included both GDP per capita and individualism in the regression, then GDP per capita was no longer statistically significant. Also, if we only included population density in the regression, then population density significantly predicted mobility to parks and residential places. However, if we included both population density and individualism in the regression, the effect of population density became weaker. These results seem to suggest that cultural variables have higher explanatory power than demographic variables.
Second, we clustered the standard errors at the country level when calculating the p-values of the coefficients. We did so because there are strong within-country correlations among the mobility variables (Abadie et al., 2017). Clustering the standard errors is a conservative method, which may explain why some of our covariates were nonsignificant.
--- Study 3: Pre-registered Correlational Replication at the Individual Level
Although the findings of Studies 1 and 2 were consistent with our hypotheses, both studies used macro-level, not individual-level, measures of individualism (region-level in Study 1, and country-level in Study 2). Although we controlled for a number of region-level and country-level variables, it is always possible that some key variables correlated with individualism were left out. The goal of Study 3 was to provide a conceptual replication of Study 1's and Study 2's key findings by conducting a correlational study at the individual level. We recruited participants who had lived under a Covid-19 lockdown and measured their personal degree of individualism. We then tested whether more individualistic people reported that they had violated social distancing rules more often during the Covid-19 lockdown in their community.
--- Method
We pre-registered the methods and analyses of this study at https://osf.io/6mjd4?view_only=b40d0787bd8a4364be182538142d3a77.
Participants. In a previous study using a similar design, we found an effect size in the predicted direction with r = .27. A power analysis with r = .27, α = .05 (one-tailed), and power = 80% indicated that we needed to recruit 81 participants. Given that we had an exclusion criterion (see below), we posted a survey seeking 100 US residents on Amazon's Mechanical Turk. Using a prescreen, we allowed only prospective participants who had stayed at least a week under a Covid-19 lockdown and did not have to work onsite during this time (i.e., did not work in essential services) to proceed with the survey. In response, 97 participants completed the survey (M_age = 40.78, SD_age = 14.26; 55 women, 42 men; 72.2% obtained a bachelor's degree or below, 27.8% obtained a master's degree or above; 27.8% were lower-middle class or below, 72.2% were middle class or above; 75.3% European, 10.3% African, 5.2% Latin American, 3.1% Native American, 7.2% East Asian, 3.1% South-east Asian, 3.1% South Asian, 1% Middle Eastern, and 2.1% other). All participants had unique IP addresses.
Procedure. We measured participants' individualism using the 8-item scale developed by Triandis and Gelfand (1998). Participants were asked to respond to sample items such as "I rely on myself most of the time; I rarely rely on others" on a 7-point scale ranging from "strongly disagree" to "strongly agree."
We measured the extent to which participants had violated social distancing rules during the Covid-19 lockdown in their community using a 6-item scale developed for this study. We asked participants to "Think about the time when you were living under a lockdown, that is, when people were prohibited from leaving their home except for essential items (e.g., food and medicine)."
They were then asked to respond to items including: (a) "During the lockdown, how often did you leave your home to relieve your boredom," (b) "During the lockdown, how often did you physically meet your friends or significant other who were not living with you," (c) "During the lockdown, how often did you go out in places where there were many other people around," (d) "During the lockdown, how often did you visit parks, beaches, or other outdoor areas that were closed," (e) "During the lockdown, how often did you loiter around in public places," and (f) "During the lockdown, how often did you go to supermarkets to buy non-essential items" on a 7-point scale ranging from "never" to "multiple times a day." Higher scores on this measure indicated that participants had violated social distancing rules more often during the lockdown in their locality.
We measured people's political orientation using three items ("Please indicate your political orientation"), each measured on a 7-point bipolar scale: "strongly liberal" to "strongly conservative," "strongly left" to "strongly right," and "strongly Democrat" to "strongly Republican." Finally, we asked participants an open-ended question: "Please summarize the main point of the statements that you responded to in the above survey."
--- Results
As per the pre-registered analysis plan, we excluded eight participants who provided gibberish or irrelevant responses to the open-ended question asking them to summarize the main point of the measures that they responded to (see Supplementary Materials for the responses that were judged to be gibberish). As shown in Table 6, we found that more individualistic people reported that they had violated social distancing rules more often, r = .269, 95% CI [.055, .432], p = .005 (one-tailed, as we pre-registered a directional hypothesis). We further conducted regression analyses while controlling for political orientation. As shown in Model 2 of Table 7, the relationship between individualism and violating social distancing rules remained significant, B = .271, 95% CI [.035, .507], p = .013 (one-tailed, as we pre-registered a directional hypothesis), β = .242.
--- Discussion
Study 3 provided support for our key hypothesis at the individual level: more individualistic people reported that they had violated social distancing rules more often when they were living under a Covid-19 lockdown. Individuals' political orientation was not associated with their tendency to follow social distancing rules.
--- Study 4: Examining Underlying Mechanisms
A key question then arises: Why are more individualistic people less likely to follow social distancing rules? In Study 4, we examined a number of potential mechanisms that could explain the relationship between individualism and the extent to which people followed social distancing rules during Covid-19 lockdowns. Specifically, we investigated four potential underlying mechanisms: concern for self, concern for others, compliance with norms, and optimism.
First, individualism is associated with a greater focus on one's own self-interest and a greater concern for oneself (Triandis, 1995). In the context of Covid-19, increased concern for one's own interests means going outside whenever one desires, even if it means violating shelter-in-place guidelines and leaving home just for a change of scenery whenever one feels bored. Thus, more individualistic people might be less likely to follow social distancing rules because they are more concerned about their own interests.
Second, in addition to being more self-interested, more individualistic people care less about others' needs and interests (Triandis, 1988). Although a greater emphasis on self-interest and a reduced emphasis on others' interests often go hand in hand, the two are experimentally dissociable (e.g., De Dreu & Nauta, 2009; van Lange et al., 1997). People from more individualistic cultures are not only more focused on their self-interest but also less concerned about others' interests (Pearson & Stephan, 1998). In the context of Covid-19, reduced concern for others' interests means going outside even if it means putting others at risk (e.g., infecting others, in case one has an asymptomatic infection; or getting infected outside and bringing the infection home, thereby putting others in one's household at risk). Thus, more individualistic people might be less likely to follow social distancing rules because they are less concerned about others' interests.
Third, people high in individualism are more strongly guided by their personal preferences and thus are less likely to conform to social norms (Cialdini et al., 1999; Savani et al., 2008). For example, even when people's personal values were similar across cultures, social norms influenced people's decisions less in an individualistic culture than in a collectivistic culture (Savani et al., 2015). In the context of Covid-19, social norms call for following social distancing rules because that is what a majority of other people are doing. Thus, more individualistic people might be less likely to follow social distancing rules because they do not like to comply with social norms.
Finally, in more individualistic cultures, people are more optimistic (Chang, 1996). For example, Americans think that they are more likely to personally encounter good outcomes than other people, but this difference is smaller among Japanese participants (Rose et al., 2008). In the context of Covid-19, optimism can translate into the beliefs that the risk of catching a Covid-19 infection is low, that the consequences of catching Covid-19 are not as bad, and that the pandemic would be arrested shortly. Thus, more individualistic people might be less likely to follow social distancing rules because they are more optimistic about Covid-19.
To test whether our findings hold across different countries, Study 4 collected data from the US and the UK. Importantly, these two countries have some of the highest numbers of confirmed Covid-19 cases in the world. Moreover, in addition to controlling for individuals' political orientation, we also controlled for people's degree of physical activity before Covid-19, as more physically active people might be more likely to violate social distancing rules.
--- Method
Participants. A power analysis with r = .27 (from Study 3), α = .05 (one-tailed), and power = 80% indicated that we needed to recruit 81 participants (see the sketch of this calculation below). However, as we were testing a number of potential mediators in this study, we decided on a sample size of 200 participants per country. We posted surveys seeking 200 US residents on Amazon's Mechanical Turk and 400 UK residents on Prolific. We sought to recruit more UK residents because Prolific did not allow us to screen out participants who failed to pass the prescreen question. As in Study 3, only prospective participants who had stayed at least a week under a Covid-19 lockdown but did not have to go to work during this time (i.e., did not work in essential services) were allowed to participate in our study.
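To make the sample-size calculation concrete, the following is a minimal, hypothetical sketch of how a power analysis of this kind can be run for a one-tailed test of a Pearson correlation; it is not the authors' code, and the function names are our own. It uses the noncentral-t approximation, with noncentrality δ = r√n / √(1 − r²), as implemented in tools such as G*Power; depending on the exact approximation a given tool uses, the answer lands within a participant or two of the 81 reported in the paper.

```python
# Hypothetical sketch: sample size needed to detect a Pearson correlation of
# r = .27 with a one-tailed test at alpha = .05 and 80% power, using the
# noncentral-t method for the test of H0: rho = 0.
import numpy as np
from scipy import stats

def correlation_power(n: int, r: float, alpha: float = 0.05) -> float:
    """Approximate power of a one-tailed correlation test when the true rho = r."""
    df = n - 2
    t_crit = stats.t.ppf(1 - alpha, df)            # one-tailed critical value
    delta = r * np.sqrt(n) / np.sqrt(1 - r ** 2)   # noncentrality parameter
    return 1 - stats.nct.cdf(t_crit, df, delta)    # P(reject H0 | rho = r)

def required_n(r: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Smallest n whose approximate power reaches the target."""
    n = 5
    while correlation_power(n, r, alpha) < power:
        n += 1
    return n

print(required_n(0.27))  # close to the 81 participants reported in the paper
```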
In response, 199 Americans and 274 British residents completed the survey. All responses came from unique IP addresses. None of the Americans, but 25 of the British participants, provided gibberish or irrelevant responses to an open-ended question asking them to summarize the main point of the measures that they responded to. They were thus excluded (see Supplementary Materials for the responses that were judged to be gibberish or irrelevant). The final sample consisted of 199 Americans (M_age = 43.01, SD_age = 12.48, 1 missing; 103 women, 94 men, 2 other; 81.4% obtained a bachelor's degree or below, 18.6% obtained a master's degree or above; 44.2% were lower-middle class or below, 55.8% were middle class or above; 80.4% European, 9.5% African, 4.5% Latin American, 2.5% Native American, 6.5% East Asian, 0.5% South-east Asian, 1% South Asian, and 0.5% other) and 249 British residents (M_age = 40.63, SD_age = 14.23, 5 missing; 182 women, 67 men; 79.4% obtained a bachelor's degree or below, 20.6% obtained a master's degree or above; 58.2% were lower-middle class or below, 41.8% were middle class or above; 85.1% European, 2.8% African, 0.8% Latin American, 2% East Asian, 1.2% South-east Asian, 4.8% South Asian, 1.2% Middle Eastern, and 2.4% other).
Procedure. We measured participants' individualism and the extent to which they had violated social distancing rules during the Covid-19 lockdown using the same measures used in Study 3. (The only difference between Study 3 and Study 4 regarding the measure of violating social distancing rules is the instructions. In Study 3, we instructed participants, "Think about the time when you were living under a lockdown, that is, when people were prohibited from leaving their home except for essential items (e.g., food and medicine)." In Study 4, we instructed participants, "Think about the time when you were living under a lockdown. We want to learn about how often you left home for reasons other than purchasing essential items (food and medicines) and getting exercise.")
Table 8 displays the list of mediator measures. Specifically, concern for self was operationalized by measures of selfishness, desire for freedom, and boredom during lockdown; concern for others was operationalized by measures of sympathy and prosocial motivation; compliance with norms was operationalized by measures of compliance with social norms and compliance with government orders; and optimism was operationalized by measures of optimism toward Covid-19 and perceived vulnerability to catching Covid-19 (sample item from Table 8: "There is a low likelihood that I will get infected with Covid-19," rated on a 7-point scale from "not at all" to "to an extremely large extent"). All items of all newly created measures are available in the Supplementary Materials.
For political orientation, we used the same 3-item scale as in Study 3 for the US sample. However, we removed the item, "Please indicate your political orientation" (7-point scale: "Strongly Democrat" to "Strongly Republican"), for the UK sample because this item did not make sense in the UK. We measured the extent to which participants were physically active before Covid-19 by asking participants to respond to the question "Overall, how often did you exercise outside of your home before Covid-19" on a 7-point scale ranging from "never" to "multiple times a day."
--- Results
We merged the US and UK samples to conduct analyses. As shown in
--- General Discussion
The current research identified a dark side of individualism: a lower willingness to follow social distancing rules amid a pandemic.
A pilot study identified people's lay theories about the effect of individualism, specifically, that people expect residents of individualistic cultures to follow social distancing rules less. Then, using a combination of longitudinal and correlational study designs, we examined whether people's lay theories about the individualism effect hold at the country, region, and individual levels. Specifically, Study 1 found that in US states that are higher in individualism, residents were less likely to follow social distancing rules, as indicated by the physical location of their cellphones throughout the day. Further, in counties with higher residential mobility, which is associated with individualism, residents were less likely to follow social distancing rules. This finding held even after we controlled for all stable county-level differences using county fixed effects, and all specific date-level events using date fixed effects.
Study 2 conceptually replicated the above findings across 79 countries and regions. We found that in more individualistic countries and regions, people left their home more frequently despite social distancing rules, as indicated by increased mobility to public parks and grocery and pharmacy stores, and a decreased tendency to stay at residential places. However, as expected, there was no relationship between individualism and mobility to workplaces and retail and recreation locations, which were largely closed during stringent Covid-19 restrictions. Our findings held even after controlling for all stable country-level differences using country fixed effects and all specific date-level events using date fixed effects. Study 3 replicated these findings at the individual level: Americans who scored higher on individualism stated that they had violated social distancing rules more often during lockdowns in their community. Study 4 found that the relationship between individualism and violating social distancing rules was explained by selfishness and boredom: more individualistic people were more selfish and experienced more boredom, and therefore were more likely to violate social distancing rules.
--- Theoretical Implications
Our research makes a number of theoretical contributions. First, we contribute to the literature on predictors of people's compliance with social distancing rules by examining individualism-collectivism as an important cultural predictor. Extant research is exclusively based on single studies conducted at the region level and has obtained mixed findings (Bazzi et al., 2021; Frey et al., 2020; Im & Chen, 2020). We enrich this line of research by providing converging evidence for the idea that individualism is associated with lower compliance with social distancing rules at the individual, county, state, and region levels. We further find that people even hold the lay theory that in more individualistic cultures, people would be less likely to follow social distancing rules. Importantly, our findings indicate that individualism has similar effects at both the micro-level and the macro-level. It is possible that the macro-level findings from Studies 1 and 2 are entirely driven by individuals' personal values, not by cultural values. However, as we did not have data on the values of individual mobile phone users in Studies 1 and 2, we cannot assess whether individual and cultural values both played a role.
Nevertheless, our findings are consistent with the general idea that cultural values can play an important role in containing the spread of infectious diseases (Borg, 2014; Gaygısız et al., 2017).
Second, our research contributes to the literature by examining four mechanisms that might explain the relationship between individualism and people's tendency to follow social distancing rules: concern for the self, concern for others, compliance with norms, and optimism. Our results substantiated the self-concern mechanism. Specifically, we found that more individualistic people were more selfish and experienced more boredom, and therefore, were more likely to violate social distancing rules. By identifying selfishness and boredom as potential underlying mechanisms that explained the effect of individualism on people's violation of social distancing rules, our research provides a more nuanced understanding of why individualism impacts people's tendency to follow social distancing rules.
Third, our research contributes to the individualism-collectivism literature by highlighting the utility of the individualism-collectivism construct. Numerous researchers have criticized this construct, arguing that it is often theorized but not empirically documented (Matsumoto, 1999), does not reliably differ across cultures (Oyserman et al., 2002), does not explain cultural differences in behavior (Yamagishi et al., 2008; Zou et al., 2009), does not capture the complexities of culture (Kitayama, 2002), and romanticizes certain cultures (Liu et al., 2019). We find that individualism-collectivism predicts an important behavior in a crisis at both the macro-level and the micro-level, which suggests that the construct is still societally relevant.
Finally, the findings of the present research complement past research documenting that the threat of infectious diseases leads cultures to become more collectivistic (Murray & Schaller, 2012). For example, cultures that faced a greater threat from pathogens in their history score higher on collectivism (Fincher et al., 2008), and people in such cultures are more likely to conform to the majority and prioritize obedience (Murray et al., 2011). The current research suggests that this relationship might be bidirectional, such that people in more collectivistic cultures are more likely to take actions that can slow the spread of novel pathogens.
--- Practical Implications
Our research has important practical implications. We found that people residing in more individualistic countries, states, and counties were more likely to violate social distancing rules, which could accelerate the spread of the virus and thus pose a threat to public health. Policymakers can thus treat a region's individualism score as a risk factor for increased virus transmission, and seek to target such regions with pandemic-containment measures. To motivate residents of individualistic regions, and people high on individualism, to follow social distancing rules, policymakers can frame the rules in terms of the benefits they bring to the individual, not to society as a whole. This framing might be more effective given that individualistic people care more about their own self-interests, as verified by our final study.
--- Limitations and Future Research Directions
Consistent with Ding et al. (2020), our Study 1 found that people in counties with a higher percentage of people above 65 years old were less likely to follow social distancing rules.
This finding is counterintuitive because older adults are more likely to catch Covid-19 (Saadat et al., 2020), and therefore should be more likely to follow social distancing rules. However, neither past research nor our studies examined the actual behaviors of individuals, let alone those of individual older adults. It is possible that individual older adults are less likely to follow social distancing rules, in which case public agencies might seek to address older adults' needs so that they do not need to leave their homes as often. Alternatively, it is possible that in counties with a bigger proportion of people above the age of 65, older adults still follow social distancing rules, but younger people in these counties might need to move around more to serve the older adults (e.g., to take care of their health, food, and other needs). More broadly, counties with a high proportion of older adults (e.g., retirement communities) might include a different composition of middle-aged or younger adults than other counties, which could have produced our counterintuitive finding. Future research can investigate this surprising finding in greater detail.
We employed Vandello and Cohen's (1999) index to measure state-level individualism. Although this index has been widely used in recent research on state-level values (e.g., Harrington & Gelfand, 2014), it was developed two decades ago, so it may not capture the current state of individualism-collectivism across the US states. Additionally, due to the heterogeneity of cultures within states (e.g., the rural versus urban divide), different counties within the same state likely vary on individualism. However, these limitations work against our hypotheses by reducing the likelihood of finding an association between individualism and people's tendency to violate social distancing rules. We hope future research will develop new state-level and county-level measures of individualism, which would allow researchers to assess whether our findings can be replicated using improved and more fine-grained indices.
Our studies tested the effect of individualism on people's tendency to violate social distancing rules at multiple levels of analysis. Although we obtained similar findings at both the macro-level and the micro-level, we cannot rule out the possibility that the macro-level effect of individualism that we found was due to the impact of aggregated micro-level individualism. To test whether macro-level individualism has an incremental effect on individuals' compliance with social distancing rules above and beyond micro-level individualism, future research needs to conduct a multi-level study in which both macro-level and micro-level individualism are measured and tested. For example, in Studies 1 and 2, if we had measures of individuals' personal level of individualism, then we could test whether country-, state-, and county-level individualism predicted compliance with social distancing rules above and beyond people's personal-level individualism.
In addition to examining the effect of individualism, we also tested for any effect of cultural tightness in our supplementary analyses. Given that people in tighter cultures are more likely to follow social norms and orders from authority figures (Gelfand et al., 2011), we expected that people higher in tightness or living in tighter states would be more likely to follow social distancing rules. Nevertheless, we found mixed results in two studies.
In Study 1, people in tighter states were more likely to follow social distancing rules when our dependent variable was the median number of minutes devices were found at home or the median percentage of time devices were found at home. However, the effect reversed when the dependent variable either included or excluded residents who went to work on a given day (see Supplementary Materials). In Study 3, people higher in tightness reported following social distancing rules more when we controlled for their individualism; however, this effect reversed once we removed individualism from the model (see Supplementary Materials). Future research can investigate these inconsistent findings regarding tightness in greater detail. More generally, the findings indicate that tightness is not the only construct that predicts whether people will follow rules and orders. In the present case, individualism seems to be a more consistent predictor. Our research suggests that future research on tightness needs to assess whether tightness predicts people's tendency to follow rules, norms, and orders above and beyond individualism-collectivism.
Although we found that people in more individualistic countries and regions are more likely to violate social distancing rules, we did not specifically focus on the downstream consequences of this violation, such as higher mortality rates. Follow-up analysis showed that in the country-level study (Study 2), there was a positive correlation between individualism and the number of Covid-19 deaths in 2020 (r = .26, p = .018). However, in the region-level study (Study 1), the correlation was negative (r = -.24, p = .096). These inconsistent results might be due to the correlational nature of our data and analyses, as mortality rates are influenced by a large number of other factors (e.g., the proportion of older adults in the population and population density). Other research did not find any relationship between country-level individualism and mortality (Gelfand et al., 2021). We encourage future research to examine the relationship between individualism and mortality rates in greater depth.
Although our pilot study found that participants hold a lay theory that more individualistic people are more likely to violate social distancing rules during the Covid-19 pandemic, we only tested this lay theory in an Australian sample. Given past research on cultural differences in people's lay theories (e.g., Morris et al., 2001; Savani & Job, 2017), future research can examine whether these findings would generalize to other cultures.
The effect sizes observed in some of our studies, particularly the archival Studies 1 and 2, are small. One explanation is that our macro-level measures of individualism in these studies are noisy. For example, in Study 1, we used a measure of state-level individualism collected over 20 years ago, and a measure of county-level residential mobility, which is an indirect measure of individualism. In Study 2, we used a measure of country-level individualism collected over 40 years ago. As cultures change over time (Varnum & Grossmann, 2017), these measures might be somewhat out of date. Additionally, these measures were noisy to begin with. Yet, the small effect sizes are consistent with the findings of previous research examining the effect of cultural factors on people's tendency to violate social distancing rules (Bazzi et al., 2021).
As pointed out by Prentice and Miller (1992), small effect sizes can be practically meaningful if they affect a large number of individuals. Indeed, "some small effects may also have direct real-world consequences" (Götz, Gosling, et al., 2021, p. 2). Given that the Covid-19 pandemic is still raging across the world and may affect it for the foreseeable future, we believe that our study can have important practical implications despite the small (but statistically significant) effect sizes. Additionally, we found stronger effects in the studies in which we directly measured participants' individualism (r = .18 to .27 in Studies 3 and 4), despite the fact that both the independent variable and the dependent variable were measured with error in these studies.
Finally, while the SafeGraph dataset used in Study 1 provides more granular data at the census block level, we conducted our analyses at the county level. This more macro-level analysis may miss out on variation at the census block level. However, about 20% of the census block groups in the SafeGraph dataset have fewer than 40 devices, which may not be representative of the census block and may thus have a high degree of error variance. Therefore, in line with other research using the SafeGraph social distancing data (e.g., Chiou & Tucker, 2020; Ding et al., 2020; Painter & Qiu, 2021), we aggregated the data at the county level. However, future research can conduct more granular analyses at the census block level.
--- Conclusion
Overall, the present research indicates that cultural values have implications for consequential behaviors even during once-in-a-century events, such as a worldwide pandemic. Our findings suggest that, everything else being equal, more individualistic people, more individualistic regions, and more individualistic countries are likely to have a harder time combatting pandemics because fewer people are likely to follow government orders. It is possible that America's greater individualism explains why the US had a much harder time quelling the Covid-19 pandemic than other similarly developed countries in Europe and East Asia. More generally, given patterns of increasing individualism around the world (Greenfield, 2013; Grossmann & Varnum, 2015; Hamamura, 2012; Santos et al., 2017), the current findings suggest that, everything else being equal, the world might have a more difficult time quelling pandemics in the future.
Kakuma refugee camp, one of the biggest refugee camps in the world, lies in a highly marginalized area of northwestern Kenya. People living there are restricted in mobility, access to resources, and work. While Kakuma has become a lively city and a home, the majority of people just want to get out. Resettlement means the chance to start a new life in places like the USA, Canada or Europe; it is everybody's dream. With the use of social media and access to wider transnational networks and information, the perception of resettlement has undergone major transformations. Based on conversations with resettled people, fieldwork and online ethnography, I want to analyse how the journey of resettlement is personally experienced vis-à-vis its presentation on social media. Following this analysis, I will show how resettlement is perceived through pictures and texts, and what is shown and what is hidden of the journey to a new life abroad.
Introduction
Kakuma Refugee Camp is located in northwestern Kenya, with a population of around 200,000 people from various places in Eastern Africa and beyond. During its 30-year existence it has become an "accidental city" (Jansen 2018) with its own social organization, politics, culture and economies. It is a marginalized, restricted place but also a lively hub, a home, a place of hope and dreams for its inhabitants. Resettlement has always been one of the most preferred ways to leave the camp and start a new life in destinations like the USA, Canada, Australia or Europe. However, as participation in resettlement is restricted, inhabitants do not know when, and if, they might be resettled; the chances of getting resettled resemble those of winning a lottery.
Several studies have looked at resettlement from Kenya with different foci, such as the implementation of resettlement programs on a regional or national scale as one of the durable solutions (Mbae 2007, Murithi 2012, Mwalimu 2004, Shutzer 2012) or focusing on specific refugee groups, for example Somali and/or Sudanese refugees (e.g. Balakian 2020, Horst 2006a, b, Ikanda 2018a, b, Marete 2011). The psychological effects on refugees have been well described by Cindy Horst in her research with Somali refugees in Dadaab. The desire to leave the camp via resettlement is described as a form of suffering, named buufis. Buufis can be so all-encompassing and forceful that it can have severe psychological effects on refugees, causing mental health issues and suicide. As Horst argues, these resettlement dreams have to be understood in the frame of Somalis' "culture of migration", in their far-reaching transnational networks and transnational practices, such as flows of remittances, information and imaginations (Horst 2006a, b). Jansen (2008) has described the different effects of resettlement especially for the Kakuma refugee camp community, the rising demand for resettlement, as well as refugees' strategies to achieve it. Sophia Balakian (2020) has looked at the administrative process of resettlement, based on her research with Somali refugees in Nairobi. She describes it as a "patchwork of governance of non-citizenship" of diverse state and non-state actors with different overlapping and contradictory interests and practices through which refugees have to navigate. Therefore, refugees (here Somali) are forced to apply certain strategies, activating their social networks and sharing knowledge and resources to accomplish the resettlement process. Other authors have studied resettlement retrospectively, from the perspective of refugees who have already arrived in the destination countries (Marete 2011, Muftee 2015). Not much is known about the experience of the whole resettlement process from a refugee perspective, from its start in the refugee camps or urban settlements to the new life in the resettlement countries.
Studies that have looked at the relationship between social media and resettlement have mainly focused on its role after arrival in the host country. Jay Marlowe (2000) has examined the role of social media in the integration of refugees in New Zealand, and Andrade and Doolin (2016) its role in their social inclusion. Ahmed, Veronis and Alghazali (2020) and Veronis, Tabler and Ahmed (2018) have focused on the use of social media by resettled Syrian refugees in Canada.
The authors show how social media provide a transcultural virtual "contact zone" (Pratt 1991) for the resettled Syrian refugees after their arrival in Canada, where they can meet people of the host community, exchange information and ideas, and in this way learn from each other. Social media can thus be interpreted as "borderlands" (Anzaldúa 1999) through which refugees can negotiate cultural differences during resettlement (Veronis et al. 2018). All these studies concentrate on the role of social media after arrival in the host country, with a main interest in integration.
The emphasis of this paper is on individual refugees' experiences and their representation of the whole process of resettlement, from being invited to a first interview to arriving and settling in the host country. As I will argue, with the use of social media and the access to wider transnational networks and information about possible future homes, the dream of resettlement has undergone a major transformation. I investigate how resettlement is currently discussed among Kakuma inhabitants in the camp and, in relation to that, how resettlement is presented, discussed or visualized on social media platforms. With these insights into refugees' digital representations of resettlement, I want to contribute to a better understanding of resettlement from a refugee's perspective and show how social media has transformed the idea and imaginations of resettlement.
The article is based on online ethnography and research at a distance since 2020, as well as fieldwork on the ground with Kakuma Refugee Camp inhabitants in August and September 2021. Online ethnography required communicating with refugees via WhatsApp and Facebook Messenger and the collection and analysis of Facebook, Instagram, and WhatsApp status posts. During fieldwork in Kakuma Refugee Camp, I talked to refugees informally and formally in interviews and accompanied refugees during their daily activities. I regularly communicated with refugees who succeeded in their aim of resettlement to Germany, establishing friendly and trusting relationships. In this way, I was able to follow their digital representations of their journey to Germany. Additionally, I was able to visit one family and one young man from Kakuma, who were transferred to places not too far from my hometown.
First, I will introduce Kakuma Refugee Camp as a temporary or permanent home for its inhabitants and as my site of fieldwork. In addition, I will review the history of resettlement in the camp as well as some general facts and figures on resettlement from Kenya. Based on my fieldwork in the camp in August and September 2021, I will present and discuss some initial findings, which provide insights into recent resettlement programs, and I will show how resettlement is received, discussed and practiced, exploring the effects of this organized form of migration on the chosen ones as well as on the ones who stay behind. Since resettlement to Germany was in progress during my stay in Kakuma, a special focus will be on the German implementation of the program. I will especially look at the role of mobile phones and social media for refugees and how these media influence and change the communication about resettlement. In applying a temporal perspective, I want to show how resettlement communication changes over time, depending on whether resettlement is a future dream, a present practice or a past experience.
Using examples of refugees' participation in the German resettlement program and communication with them, I want to give insights into how refugees digitally represent and reflect on their journeys.
--- Kakuma as unintentional home and the dream of resettlement
Kakuma Refugee Camp was established in 1992 to give shelter to the arriving 'Lost Boys of Sudan', young boys and girls who were orphaned and displaced during the Second Sudanese Civil War. Other refugees came from Ethiopia, Somalia and the Great Lakes region due to the political instability in their countries. Over the years, the camp's population from diverse nations has increased tremendously. As of the end of July 2020, the camp had a population of about 196,666 people from the Horn of Africa and Eastern Africa (UNHCR Kenya 2020). The camp consists of four parts - Kakuma 1 to 4 - with different zones within those parts. The oldest of those, Kakuma 1, was built in 1992 and is subdivided into different national or ethnic communities. Kakuma 2/Zone 7 was built in 1997 and is subdivided into parts inhabited by different Somali and Sudanese communities. Kakuma 3/Zone 8, from 1999, consists of a mixed international community (with a majority of Sudanese) and the reception center. Kakuma 4/Zone 9 was added to the camp when Somali Bantu arrived from Dadaab (Jansen 2018: 72-76). The Kalobeyei settlement was designed as an alternative and innovative form of accommodation, in which refugees were meant to live more or less self-sufficiently in three villages (Betts/Omata/Sterck 2020). The camp is under the administration of the United Nations High Commissioner for Refugees (UNHCR) and the jurisdiction of the Kenyan Government and the Department of Refugee Affairs. Furthermore, a wide range of humanitarian organizations is active in the camp (KANERE 2022).
Kakuma Refugee Camp is situated in north-western Kenya on the outskirts of Kakuma town in the Turkana West District of Turkana County. It lies about 120 km from Lodwar, the nearest larger town, and 130 km from the border with South Sudan. The camp is surrounded by a harsh semi-arid desert environment with regularly occurring dust storms, high daily temperatures of 35 to 38 degrees Celsius, and regular outbreaks of malaria and cholera during floods and in the rainy season (UNHCR Kenya, 2020). Around the camp, the majority of the local population is made up of the nomadic pastoralists of Turkana. They are themselves a marginalized group of people who depend on (missionary) aid to access education and health services. As access to water and pastureland is restricted, the area has become a place of regular intergroup and cross-border violence with the neighboring Pokot, Karamojong, and others. Although the local population also profits from the camp, the relationship with the refugees is ambivalent and marked by envy. This is often expressed in sayings such as 'It is better to be a refugee than a Turkana in Kakuma' as well as in violent conflicts between the two groups (Aukot 2003: 74). In recent years, however, the relationship between refugees and hosts has improved due to increasing trade and business between the Turkana and the refugees and by means of development projects that also target the host community (Jansen/de Bruijne 2020).
Like other refugee camps, Kakuma is a place where most people stay for several years, a whole life, or even several generations. Over the decades of its existence, it has become a city-camp (Agier 2002) with its own urban structures and social organizations.
It is a geographically defined, ruled, and restricted place but also a place of hope and individual chances for success in- and outside of the camp. Life in the refugee camp is marked by restrictions on resources like water and food and limitations in movement and social mobility. Refugees are not allowed to leave the district without official permission from camp management, and working possibilities are limited, as organizations in the camp pay only a small salary. Within the camp there is a lack of security and regularly occurring violence, with conflicts between camp inhabitants as well as with the neighboring Turkana. People living there feel as if the refugee label has been stamped on them, hiding their personal identity (Amina 2017). In this negative in-between presentness, dreams and hopes for change and a better future outside the camp are all-encompassing.
Possible ways to leave the camp are education in the camp and a subsequent scholarship at a university in Kenya or abroad, a job opportunity outside the camp in Kakuma and in other Kenyan towns, or living in an urban settlement if one is financially able. Another option is the UNHCR resettlement program, which allows selected refugees to be accommodated in a country abroad (see also Jansen 2018). A further defined durable solution for refugees is repatriation to their countries of origin, which is occasionally promoted by the UNHCR when the situation in those countries seems to be stable. But for many refugees, repatriation is not an option. Firstly, they might never have been to the countries of their parents and therefore do not really feel connected. Secondly, although there might be peace, the difficult living conditions and the lack of job and business opportunities and social networks have even caused refugees to return to the camp after they had been repatriated (see also Jansen 2018: 165-190). The last option, which some refugees also take into consideration in their despair, is the "safari mbaya" (the bad voyage), the illegal onward migration to Europe or other places with the help of a smuggler.
As Jansen has shown, the implementation of resettlement programs has huge effects on the camp population. Resettlement dramatically raises expectations and hopes for leaving the camp and has become a "pervasive wish" and a goal in itself. Many refugees believe resettlement can be achieved by certain strategies, like claiming insecurities and negotiating their vulnerabilities (Jansen 2008: 1-2, 7-16).
Obtaining reliable data on resettlement from Kenya is difficult, as numbers for certain years are missing or vary between sources. Between 1992 and 2006, 84,240 refugees were resettled to third countries from several locations in Kenya (Jansen 2008). According to UNHCR statistics, from 2007 to 2013, approximately 15,320 refugees left Kenya for third countries through a resettlement program (UNHCR Global Report). From 2014 to 2020, numbers were published in a regular manner, totaling 30,273 refugees who were resettled from Kenya to third countries, with an average of 4,325 resettled refugees per year. The numbers range from as low as 443 in 2020, due to the Covid-19 pandemic, to a peak of 7,359 in 2016. For 2021, 3,000 refugees were resettled from Kenya. 5
As Jansen (2008) reports, larger numbers of refugees from Kakuma Refugee Camp were first resettled within the framework of two resettlement programs, which were aimed at the Sudanese inhabitants of the camp.
The first one was the United States Refugee Program (USRP), which by the end of 2000 had resettled 3,800 Sudanese "unaccompanied minors". These refugees were part of a large group of 20,000 young Sudanese arriving in Kakuma in 1992 who had been expelled from Ethiopia and arrived on foot. The next large group were 15,000 Somali Bantus 6 , the largest group ever resettled from Africa. According to Jansen's calculations, it is estimated that between the years 2001 and 2006, about 25,000 refugees were resettled from Kakuma Refugee Camp to third countries (Jansen 2008: 3).
5 See the respective websites of the UNHCR Global Report. For 2019, numbers vary between the data published by UNHCR and the welfare association Caritas Germany, with 3,941 and 2,221 resettled refugees respectively. https://reporting.unhcr.org/node/2537?y=2019#year; https://resettlement.de/kenia/ (last access 16.11.2021).
6 Somali Bantus are a minority of non-Cushitic Somalis, who are victims of discrimination and insecurity in their homeland (Jansen 2008: 3).
The effects that Jansen (2008) describes for the first resettlement programs from Kakuma were manifold, especially regarding the financial situation, mental wellbeing and social life in the camp. The remittances that resettled refugees transferred back to the camp inhabitants brought capital for them, making up an important part of their income. As Jansen reports, the remittances contributed to the economy and lifestyle in the camp as well as to the possibility of informal local integration, as refugees could afford to live and work in Kenyan towns (Jansen 2008: 4, FN 9). Sudanese refugees came back to the camp wearing suits and showing pictures of their houses and cars in the US or Australia, telling their success stories, and were able to select brides, offering dowries of up to 75,000 USD. Many masked or lied about the failures and difficulties they faced, creating an idealized and untrue image of their countries of residence.
Another effect described by Jansen (2008) was the interest and curiosity for resettled refugees' life histories in their new countries of residence. Therefore, many journalists, researchers and artists subsequently came to visit Kakuma Refugee Camp, resulting in quite a number of publications on the Lost Boys' life, migration and resettlement stories (e.g. the novel What is the What by Dave Eggers (2006)) or films like The Lost Boys of Sudan by Megan Mylan and John Schenk (2003). During the visits by church groups or NGOs, promises and attempts were made to invite people to resettle. This added to the creation not only of the dream of resettlement, but also to the hope of seeing it realized, since the ongoing resettlements, visible in the planes taking off sometimes up to a few times a week, proved the possibility of it (Jansen 2008: 4). As Cindy Horst has shown for refugees living in Dadaab, Somali refugees call this constant and all-encompassing longing to go abroad and leave the camp buufis 7 (2006a). Buufis can have severe psychological effects like mental health issues and can even lead to suicide. According to Horst, buufis is fostered by transnational flows of remittances and information (Horst 2006a idem).
--- The multiplicity of examples of others leaving the camp, and the images they bring via the transnational connections that mobile phones and the Internet facilitate, as well as media such as satellite TVs, lead to an active imagining of the 'Western world', adding to the wish for resettlement of many refugees (Jansen 2008: 4).
This active imagining of a possible future life in the Global North through resettlement, as well as the communication with resettled relatives and friends, has reached another dimension through the use of new and social media, as I want to show in the following.
--- The influence of new and social media on resettlement in Kakuma
In the early 1990s, when the first refugees arrived, Kakuma town was just a small Turkana meeting hub, badly connected by road to Lodwar and South Sudan through the semi-desert. Long-term inhabitants told me that it was a horrible place. The first refugees had to live in tents in the unbearable heat without the shade of trees or protection from the wind, with scorpions or snakes, seasonal floods and insufficient provision and care. Moreover, fleeing meant disconnecting from family and friends at home and not knowing about each other's state of being. To call home via landline was nearly impossible and very expensive. Lemy 8 came to Kakuma as a small boy, fleeing the war in South Sudan. When the first refugees were resettled, relatives and friends who were left behind would lose contact with their loved ones. As he remembers:
Over the time, internet and mobile phones became available and widely used in Kenya. Since 2008, social media platforms like Facebook and Viber and later from 2014/15 WhatsApp and messenger and video calls became important. Also, the more affordable internet data made it possible for residents of Kakuma to call their relatives in their home countries and in their new homes in third countries like the US (Lemy 2021).
7 Buufis is a Somali term that means 'to blow into or to inflate' (Zorc, Osman 1993 as cited by Horst 2006a: 143). It refers to air, hawo, which also stands for a longing or desire for something specific, an ambition, or a daydream. Thus, buufis can be understood as a longing or desire blown into someone's mind (Horst 2006a: 143).
8 Names of interlocutors were changed.
In 2016, UNHCR, with the support of Accenture Development Partnerships (ADP), carried out a global assessment of refugees' access to, and use of, the internet and mobile phones. The aim was to develop the new UNHCR Global Strategy for Connectivity for Refugees. The report, entitled "Connecting Refugees", made several key findings, which indicate that refugees' connectivity is still restricted due to their location (urban or rural), affordability, literacy and language knowledge, societal and cultural challenges, as well as gender and technical gaps in coverage. Despite affordability constraints, refugees place significant value on being connected. Access to the internet is crucial for refugees in communicating with friends and family, in both their home and host country, as well as for providing help and assistance. In this way, as the UNHCR states, mobile phones and internet connectivity have become part of the overall aim of increasing refugee well-being and self-reliance in refugee camps (UNHCR 2016: 22). In 2017, the UNHCR initiated an ICT (Information and Communication Technology) boot camp for Kakuma inhabitants. The plan here was to educate refugees in the technical skills of ICT, aimed at enhancing their abilities to find education or work (Otieno 2017). In the vicinity of the camp, internet cafes and mobile phone shops have blossomed, and service providers like Safaricom and mobile money services like M-Pesa offer their services. However, power cuts and financial resources still restrict media usage.
In order to access the internet, camp inhabitants have to have properly functioning and charged smartphones and buy data bundles from the respective provider, which not everybody can afford. Unreliable and time-limited networks are another problem they encounter. Kakuma's inhabitants use their mobile phones to access the internet and social media platforms very actively; WhatsApp, Facebook, FB Messenger, TikTok, Snapchat, Twitter, and Instagram are especially important. Although they sometimes have difficulties with access or money to buy data bundles, social media platforms like Facebook and WhatsApp are most important to them, as they can connect with people outside the camp and present themselves with a personal identity beyond their refugee status (Joyce 2017; Amina 2017; Böhme 2019). Through WhatsApp, Facebook or LinkedIn, refugees have built up large networks of contacts in Kakuma, Kenya, and abroad. Today, internet and social media make up a big part of economic activities as well as the visual landscape in the camp, with painted phone shops, creative charging facilities and their advertisement boards all over the place.
Camp inhabitants use social media for their private and business activities and take a lively part in online communities that not only differ from offline groups in the camp but form global networks that go far beyond the borders of the refugee camp. Moreover, refugees are able to present themselves online with their own and alternative identities. The already existing camp-specific negotiation of (ethnic) identities, the "ethnic chessboard" (Agier 2002: 334), was extended by a virtual dimension. On the internet, refugees can take up multiple and alternative identities and can display or hide themselves more freely (Witteborn 2015). As both off- and online activities influence each other, the negotiation of identities becomes highly dynamic and multidimensional. Kakuma Refugee Camp is presented on many different websites and all major social media platforms through pictures, texts and films. But, more importantly, inhabitants can easily communicate with family and friends abroad. A friend or family member is now "just a phone call away" (Amina 2017). Lemy describes the possibility of virtual communication as another step to being even more connected among the family members:
So when the family wants to talk to my sister, I call her using video call and we could see her, my children whom we have never met, but we are a family. So, we communicate, we talk as if we have met or we have been together. They know that we are their uncles, my mother is their grandmother so something of that sort, and it is through the video calls (Lemy 2021).
The new possibility of seeing each other also changed the modes and relations of trust and mistrust between the people who have left and those who had to stay behind in the refugee camp:
So, it has had a great impact in terms of trusting the other countries, you were not able to trust. Eh you don't believe, somebody is telling you that I am in the US, life is really good, but you are not seeing that. So, through whatever they are posting on Facebook, you are able to see that. Through the video calls, you are calling someone, they are in their house, they show you their house, they are showing you the city, if somebody is walking in the city. So, it matters you to believe certain places are really nice, certain places have developed (Lemy 2021).
The internet and social media have made it possible for the residents of Kakuma not only to connect with their places of origin but also with possible future places to stay. On the internet, information and pictures of possible new futures elsewhere are distributed, received, and appropriated, and they influence favored places of destination as well as practices which aim at achieving resettlement goals. Social media also has a huge influence on the camp inhabitants' social and professional activities, which then might lead to important contacts abroad or enable careers in the camp. Through social media, refugees can publicize camp-related projects and events and connect to larger supporting networks to receive donations or even become famous. Social media in this way also functions as a motor of change (Amina 2017) and adds to the strategies for achieving resettlement.
--- From Kakuma to Germany: Recent Resettlement Practices and Discourses
In 2012, Germany became part of the resettlement program on a pilot basis, and permanently joined the international community of the more than 30 resettlement states in 2014. 10 In the years 2012 to 2014, Germany's admission quota was 300 persons per year. In 2015, this quota was increased to 500 people. In 2016 and 2017, Germany participated in the EU Resettlement Pilot Program with a total of 1,600 refugees, whereupon the national admission quota was taken into account (Baraulina/Bitterwolf 2018).
Against the background of the challenges during the years 2015 to 2017, in which more than a million people sought protection in Germany, nearly no attention was paid in the public debate to resettlement, with its relatively small figures. It has only been since April 2018, on the occasion of Germany's commitment to the EU to provide 10,200 places for the European resettlement program, that the sense and purpose of this admission program has been publicly discussed. Some commentators stress the regulated admission procedure in the context of resettlement, with the hope that the program will become a real "alternative to the German asylum procedure". However, other actors criticize such projects as a "moral fig leaf". In contingent-based admission procedures like resettlement, they see the danger that refugees would be denied individual access to a fair asylum procedure on European territory. The human rights organization Pro Asyl, for example, calls for "the validity of individual asylum rights instead of collective acts of mercy" (Baraulina/Bitterwolf 2018).
10 Germany's participation in resettlement programs is based on the resolution of the Standing Conference of the Ministers and Senators of the Interior of the German Federal States, from December 9, 2011, which "in the interest of further developing and improving refugee protection, 'advocated the permanent participation of the Federal Republic of Germany in the admission and resettlement of refugees from third countries in particular need of protection in cooperation with the UNHCR (resettlement)'" (Bundesministerium des Innern, für Bau und Heimat 2021).
In 2019, the European Commission called on its member states to create new reception places for humanitarian admission and resettlement in the year 2020. The Commission also communicated that at the time, funds from the EU's Asylum, Migration and Integration Fund (AMIF) would be available to financially support 20,000 places across the EU, with actual entries until 30 June 2021.
Due to the Covid-19 pandemic, this period has since been extended to 31 December 2021. Against the background of the coalition agreement of the 18th legislative period, Germany committed to making an appropriate contribution to admission quotas for persons in need of humanitarian protection. Germany assured the European Commission of its support and promised to make a total of 5,500 places available for 2020, of which only around 1,200 entries could be realized due to the pandemic (Bundesministerium des Innern, für Bau und Heimat 2021). Shortly after I arrived in Kakuma in August 2021, camp inhabitants were in the preparation process for resettlement to Germany. The resettlement to Germany came rather as a surprise to camp inhabitants, and it was said that it was the first time that Germans had come to take people from Kakuma to their country. Germany was also not on the list of desired destinations. In first place among desired destinations abroad was, and still is, Canada, followed by the US. The latter's popularity was interrupted by Trump's presidency, as under the Trump government resettlement from Kakuma to the US was halted (Beers 2020). Trump was known for his anti-asylum politics, and many refugees were disappointed and changed their opinion of the US as a desired destination. But when Joe Biden came to power in 2021, much hope was placed in the new government to take up resettlement cases again. After Canada and the US, Australia is also well known and highly appreciated by camp inhabitants, although news circulates regarding the bad treatment of asylum seekers in the country. Europe, with resettling countries like Sweden, Norway or the Netherlands, is not as popular but also desired. While most camp inhabitants had contacts in Canada or the US, Germany was not well known, and almost nobody had any contacts there. In their discourses about desired destinations and resettlement, refugees would actively evaluate and compare the living conditions in the different countries. Canada and the US were said to quickly enable a good and luxurious life: a good place to stay, a house, a job and a car, as well as earning a lot of money to send home. But by taking out loans, many refugees would also fall into a debt trap. A similar picture was painted by people wishing to go to Australia, which was said to offer quick business opportunities. Europe, on the other hand, was said to be difficult. Integration would take a long time due to bureaucratic regulations, the language barrier and limited access to the job market. People would struggle and sometimes regret being there (Benjamin 2021). The examples show how refugees actively acquire and share knowledge on different host countries, and how discourses and myths about certain places create a ranking of popular destinations.
--- The process of being resettled: refugees' knowledge, challenges and fears
The resettlement process is a complex procedure with a mixture of interviews, check-ups, screenings and preparations, which can take several months. The process is shared knowledge, which is passed on between refugees and camp inhabitants (cf. Balakian 2020). I therefore asked Kevin, a young South Sudanese who got resettlement to Germany, to write down what he knows about the process and how he experienced resettlement. As Kevin explained, resettlement is already a topic during the first "eligibility interviews" conducted by the UNHCR, when refugees register in a camp, with the purpose of collecting the person's data and creating a personal file.
The eligibility interviews determine whether asylum seekers are granted refuge and given the mandate or sent back to their countries. Sometimes, the new arrivals spend many months or even a year at the reception camp located in Kakuma 3 until their cases are decided. Information such as contacts, educational level and skills is collected, and sometimes people are also asked if they would consider being resettled to a third country (Kevin 2021). When a foreign country proposes to resettle refugees, UNHCR officials use these files to check the criteria required by the host country and determine who is chosen for the selection process. Then, the person is called for a "profiling interview" with UNHCR staff to decide his/her eligibility, which is currently done at the field post in Kakuma. The questions resemble those asked at registration and are meant to cross-check the information already provided. The officials also check the family background and security status. Among the questions asked are: Why did you leave your country? Why do you think that you can't go back? The questions differ by household member. This is followed by another interview at the UNHCR compound, during which information on eligibility is graphically shared (Kevin 2021). If the refugee passes this, he/she is invited to another round of interviews at the compound of the International Organisation for Migration (IOM), conducted by representatives of the host country. While most refugees at that level feel as if they have already been accepted, their joy can be tempered, as the respective embassy still has to select candidates and might not opt for the person, for unknown reasons. More interviews await the selected "lucky ones", and only upon passing those successfully will they move on to the next level: a medical interview by the IOM medical team with blood screening, X-ray and other medical check-ups such as an eye test, a pregnancy check and vaccinations. After three months, they are informed of the flight schedule, followed by a cultural orientation of about three days. The cultural orientation course is meant to inform refugees about daily life in Germany. The topics include safety during the flight, the educational system, law, work, religion and cultural practices. Prior to this, refugees have to hand over their refugee card to the IOM, and fingerprints are taken at the government-led Refugee Affairs Secretariat (RAS), where one also has to return everything belonging to the government. The fingerprints carry the most weight in making somebody "free" and ready for resettlement. The deactivation of the UNHCR refugee status from the camp is done at the UNHCR office or at the field post. The RAS officials then come to the community with a clearance form to confirm that one has given back the house assigned upon being granted refugee status. After all this, one is officially cleared and ready for transfer to the Nairobi transit location, which is currently the hostel of the Young Men's Christian Association (YMCA). During their one-week stay at the YMCA, the resettled refugees go through further medical procedures to ensure fitness for travel, including malaria tablets and other medication. Due to the Covid-19 protocols, refugees also have to take a PCR test before they are allowed to travel. They then receive further travel information, including luggage weight limits and details of the itinerary and arrival.
At the airport, everyone is given their documents and awaits boarding after clearing the customs desk (Kevin 2021). Kevin's account of resettlement shows the complex and multi-layered process, which involves multiple actors. This "patchwork governance" of resettlement (Balakian 2020) can lead to severe complications and setbacks for those chosen to enter the resettlement screening. While in Kakuma, I heard of many obstacles that could hinder refugees from reaching the final point of actual departure. Persons' data would not appear or would contradict information in the computer system, or fingerprints were lost (see also Jansen 2008: 6). In these cases, the person's resettlement process was put on hold, without them knowing when and if it could continue. I also heard that when the RAS officers came to "take back" camp inhabitants' houses, they would ask for huge sums of compensation money for any damage or changes done to the houses. Refugees interpreted this practice as corruption, because camp officers could blackmail refugees by not clearing their cases for resettlement. These setbacks left many refugees who had already felt sure of resettlement confused and depressed. When I visited one of the Sudanese households in the camp, of three women who had been chosen to be resettled to Germany, they were literally sitting on their ready-packed luggage, waiting for their cases to be processed. But as there was said to be something wrong with one of the women's fingerprints, their case was put on hold. When the travel dates passed without them being processed, they were deeply shocked and disappointed, as they had already imagined and planned a life in Germany. Other refugees chosen for resettlement were afraid that something could happen in the final process to prevent them from travelling. For this reason, most people would not publicly announce that they had been granted resettlement. As several research participants told me, Congolese and refugees from the Great Lakes region in particular feared witchcraft from jealous neighbors, or that acquaintances would hinder their resettlement. As they told me, many stories circulated about people being bewitched just before their resettlement. Other problems were encounters with the police just before travelling, as happened to one of my interlocutors and his friends the night before he was due to fly out of Kakuma, which resulted in paying bribes to the police to let them out of prison. When travel dates to Germany were announced, the news of the people chosen to go to Germany went round the camp. The news of resettlement divided camp inhabitants into those who had obtained resettlement and those who had not. It was said that only (South) Sudanese and people from the Great Lakes region had been offered resettlement to Germany, and refugees understood this as ethnic and/or religious bias. Many times, refugees from Ethiopia and Somalia would ask me why Germany would not take them. Sometimes relatives or friends were separated in the process of resettlement. While visiting the Kakuma Project Virtual Training Centre, I saw two young men in the classroom learning German on their laptops. They were watching a YouTube language training video with simple conversations in German, which they eagerly followed, listened to and repeated. When I introduced myself, they were happy to know somebody from Germany, and we exchanged telephone numbers.
Noah and Robert were in their late teens and going to school when they were called for the German resettlement scheme. Subsequent to our first meeting, both of them regularly greeted me in German and tried some simple conversations via WhatsApp. They both told me that they wanted to become soldiers in the German army and asked me whether that was possible. But while Robert progressed well through the medical and administrative preparation, there was a problem with Noah's case. As he told me later, his family was not allowed to leave because his aunt was pregnant, and the rule was that if one member of a registered group was not eligible, the whole group could not be resettled. He was totally disillusioned when he finally saw his best friend leaving for Germany without him. As the example shows, refugees selected to take part in the resettlement process avidly make use of social media to learn the language and inform themselves about the host country, and in this way already start preparing for integration. In the following, the communication patterns before, during and after resettlement will be illustrated with the example of camp inhabitants who received resettlement to Germany.
--- Before Departure: Imaginations, Worries and Hopes
Resettlement from Kakuma to Germany was originally scheduled for spring 2020, but the program was stopped due to Covid-19. At that time a Somali woman, with whom I had been communicating via WhatsApp and Facebook since 2017, told me via WhatsApp that she had already completed an interview and was nervously awaiting the answer. When resettlement was stopped due to Covid-19, her hopes were shattered. This year, I heard the news from Kevin, who got resettlement to Germany and at that time had already had several interviews and medical check-ups. I got to know Kevin when he posted a photograph of a young boy in Kakuma, and since then we have regularly chatted or called via WhatsApp. Kevin came to Kakuma as a child together with his aunt, when he was 4 years old, fleeing from the war in South Sudan. One of the trainees of Filmaid, a program teaching refugees filmmaking, he soon became very active in film and media production as well as on social media. He was chosen as a so-called Global Shaper and was invited to the World Economic Forum in Davos, Switzerland in 2018. Since then, he has regularly participated in online webinars, meetings and discussions and presented his life story and his work. As his friends told me, it is also since his stay in Switzerland that he changed his way of speaking. He is now known for speaking tweng, the imitation of a US American accent. As he had told me previously, he was hoping for a scholarship to attend a film school in the UK or in Canada. Germany was not on his radar. When talking about his resettlement he was rather cool and emotionless. However, because I am German, he would involve me in conversations and questions about Germany, as well as send me WhatsApp messages during his ongoing cultural preparation class. Due to the country's ambiguous public image, refugees associated Germany with both positive and negative attributes. While Germany was known for the reign of Angela Merkel and her refugee-friendly politics, discourses about the Hitler regime, the Holocaust and recent right-wing movements frightened refugees. One day I was invited to the home of Benjamin, a friend of my field assistant, who was to be resettled to Germany. Benjamin came to Kakuma in 2011 from DR Congo.
Since then, he had lived with his wife and two younger sons in a compound. Benjamin did his BA in Management and Public Administration online and was very active in education projects in the camp. As he told me, he was very worried, as he had only limited, contradictory information about his possible future home. He had read about the history of Hitler and the Nazi regime, and also that Nazis were still in the country; they would attack black people, and there were some places people could not even go. On the other hand, he had heard that there was good social and health care and education. He was worried about all the restrictions and regulations they were told about in the cultural preparation class, like having to learn German before starting to work or study. He asked me about getting a place to stay, finding work and starting a business. Moreover, he had learned that to have children you had to be financially stable. Benjamin had recently got a job opportunity with a rich businessman in Nairobi, so he was weighing up which way was best for the future of his family. As most people had no social networks at all in Germany, he said that all those currently selected would rather live another two years in the camp if they knew that they would get resettlement to another country afterwards. The fact that Germany was the least preferred option was also borne out by the case of a Somali woman I spoke to: she finally rejected resettlement to Germany when she got the chance of a private resettlement sponsorship to Canada. Following our conversation, Benjamin was relieved and told me that now at least he was sure that they could start a new life there without too many worries.
--- Safari ya Ujerumani ("Journey to Germany"): Staying connected and giving testimony via online communication
The first group of refugees resettled to Germany finally left on September 8th and 9th, 2021. They had checked in their bags at the IOM beforehand and went to the Kakuma airstrip the next day. They flew to Nairobi and were brought to a transit residence at the YMCA. There they had to stay in quarantine for another week and take another PCR test before they flew to Germany. Kevin posted a picture on arriving at Wilson Airport in Nairobi, walking from the plane, captioned "#stride on#". With the airport and planes under the cloudy sky behind him, he was in a cool outfit of white sneakers and long socks, military trousers and a jeans jacket. Wearing a respiratory mask, he posed, putting his cap back on his head while walking. With this post, he portrayed himself as a traveler for whom this journey was just a continuation of what he was always doing. The next day he posted another picture from the airport of a small refugee girl he had travelled with, captioned "#myfriend Jojo#". As expressed in the post, the resettled refugees had already become a community of chosen ones, who shared their experiences during resettlement. As pictures were not allowed at the YMCA, Kevin sent daily updates on how he was feeling and what he did. As it was very boring just staying there, Kevin killed time reading or chatting with friends via his smartphone. Before departure, Benjamin worriedly told me that the UNHCR had changed the reception center in Germany from a camp near a big city in mid-western Germany to a very small town in one of the eastern states. Every change of plan was strictly observed by the refugees, and they could only speculate as to why they would be brought to other accommodation.
Via the internet and social media, they tried to get as much information as possible about the new place. After the week in quarantine, the journey finally continued to Germany. The flight was operated by Qatar Airways, and this was a major event for most of the resettled refugees, who had never before boarded such a huge plane or had never travelled by air at all. The luxurious interior of the Qatar plane was a signifier of what they hoped to reach: a better life in Germany. Several refugees posted pictures of the flight on their WhatsApp statuses and on Instagram. Kevin posted a picture of himself sitting in the plane and looking out of the window with the comment "It's been God since day #1", marking the event as a major achievement enabled by God. The picture was liked by 135 people and commented on by 15. The comments ranged from congratulations in words or emojis (clapping hands, an emoji with heart eyes), to a recommendation to visit the #benzmuseum in Stuttgart, to prayers to arrive safely. One commented "Yes it has always been 100" as a direct answer to Kevin's comment. The comment "On the promised land already [emoji of raising both hands in celebration]" points to the often religiously interpreted journey to a kind of holy land. However, not all the refugees posted news of this journey and achievement at the same time. As some of the resettled refugees feared that something could still prevent the journey, they refrained from posting. As Glory told me, he observed a cultural difference in how the events of resettlement were posted. He has two friends, one from South Sudan and the other from Rwanda, who got resettlement at the same time. The one from South Sudan posted his journey as a live event, with pictures of his boarding card, the itinerary map from the plane and a funny map of the airport in Qatar. The Rwandan friend first posted pictures 21 hours later, after they had already arrived. As my contact told me, his Rwandan friend […] was not at ease revealing his information as fast as [his friend] were. This may be due to cultural expectations as people from the Great Lakes prefer keep their resettlement matters secret because they believe that bad people can influence their luck (Glory 2021). The examples show how resettlement is presented online as a success story and major journey in life, embedded in local beliefs and discourses on success and failure, mistrust and envy (see Jansen 2008; Horst 2006a, b).
--- The Arrival: Digital commentaries on their new homes
The first cohort of refugees resettled to Germany arrived at the reception center in a small town in Eastern Germany in mid-September. Soon thereafter, I received WhatsApp messages from my contacts informing me about their arrival. Benjamin sent me a message saying: "Hello, yesterday at 8:00 am we landed in […] state where we will be for 14days quarantine". He also sent two pictures of himself standing in front of the reception center, a big, white, modern 5-storey building. In the pictures, Benjamin is smiling, happy and proud of having made it to Germany. He then told me about another Covid-19 test they had to take before they were allowed to move around the small town. But Benjamin and his family soon had to realize that not everything worked well in Germany's refugee reception. While telling a friend in Kakuma, whom I also know, that the bad food was a big issue, he would tell me that he had no complaints and everything was fine.
Possible explanations are that he either did not trust me enough or did not want to complain to a German. As soon as he knew to which state he had been transferred, he would tell me and ask whether it was a good state. But when they brought the family to a shared facility in another city, he was confused and asked to call me for help. As he told me very anxiously on the telephone, since arriving at the new home in the late evening, they had not been given any food, drinks or money and did not know what to do. When I called the management of the facility, the person told me that he was responsible neither for their board nor for their social money, and that they should turn to their "fellow Africans" for help. As the job center responsible for the payment of social money was already closed for the weekend, Benjamin and his family had to rely on other refugees' assistance. When I visited them some weeks later in the shared facility, I realized how disillusioned they must have been. The home was a run-down site of a former preschool in a quite marginalized and hidden area in one of the suburbs of the city. As a family of four, they had to share one room. As in the refugee camp, they had to be inventive, and used a sheet to divide the room into two spaces. At least now they had electricity, a heater and a fridge. A Nigerian woman in the home and the church community helped them with food and advice. They only received their social money three weeks after their arrival and now had to wait for the language courses as well as to find a flat. Similarly, Kevin had sent me a picture in front of the reception center in the small town in Eastern Germany after his arrival. As his messages conveyed, he soon started to explore his new place to stay. He posted a picture of an acorn and videos from a bicycle trip with his friend through the nearby forests. The pictures and videos gave the impression of him being happy and enjoying his first days and weeks in Germany. After two weeks, Kevin was transferred to a major town in one of the southern states of Germany, but at that time did not know if he would be staying there. He was finally transferred to a very small town in one of the federal states. While en route on the bus, he proudly posted a video about heading to this place. He also commented ironically on a possible future while filming a very posh electric car driving beside the bus, writing "#One day". But soon, his posts and messages conveyed a change of mood. He posted pictures of himself alone in his apartment, doing exercise, cooking or just hanging around. The only social post was a picture taken with a Russian neighbor he had met. When I visited him, he was living in a very small room in a shared facility, while outside it was grey and cold in the winter. German bureaucracy still prevented him from taking part in language courses or from working, and under the residence obligation he was required to reside in the local district for three years. Being used to his big social network, work and leisure activities in Kakuma, he felt bored and isolated in the small German town. His smartphone, laptop and social media platforms were now the only means to carry on his usual social and professional activities within his social networks, as well as to navigate his new life in Germany.
--- Conclusion
Resettlement, besides repatriation and local integration, is regarded by the UNHCR as one of the three durable solutions for refugees.
But as resettlement programs depend on certain countries' willingness and are restricted to particular target groups and limited numbers of people, less than one percent of refugees take part in them (UNHCR 2021). For people living in refugee camps in the Global South, resettlement is the ultimate dream, as it means getting the chance to start a new, good and secure life in the respective host countries of the Global North. Communication and media have always played a crucial role in resettlement. While until the early to mid 2000s people were literally separated by resettlement, and communication with relatives and friends abroad was either impossible or very difficult, the digital revolution with mobile phones and smartphones has hugely transformed resettlement. Not only do resettled refugees and the family and friends left behind stay in contact and regularly communicate with each other; those left behind can also obtain more reliable proof of the new life abroad. Resettled refugees report and digitally reflect on their journeys, arrival and new life in the host country via mobile phone and the internet, as well as on their former life back in the camp. As shown in this paper, for people living in Kakuma Refugee Camp, the dream of resettlement can accompany them their whole lives. Resettlement has a huge influence not only on the resettled people but also on those staying behind. Moreover, it has considerable effects on the economy, sociality and individuals living in the camp. Resettlement is heard, seen and felt, and raises many emotions. And, as shown, the sudden launch of resettlement programs leads to discourses, rumors and envy among camp residents. The examples above tell us about the Kakuma refugees' experience of resettlement from the time they are selected, before and during their journey, as well as after they have arrived in the host countries. They show refugees' knowledge and discourses of the process of resettlement and of receiving countries, through which they make sense of the resettlement process. Refugees vividly communicate and comment on their resettlement via mobile phone and social media. Moreover, as they also post about their past life in Kakuma, they tell us how they remember and miss "home" and keep up with their social networks in the camp. These messages not only reveal how refugees feel and experience resettlement, but also that what is communicated and posted, when and to whom, is a matter of individuality, cultural practices and trust. Texts, pictures and videos are carefully selected, edited and presented in the way resettled people want to be seen.
--- Funding
The research for the article was funded by the German Research Foundation (DFG) as part of the project "Vertrauensbildung und Zukunftskonstruktion über Smartphones und soziale Medien an Zwischenorten transnationaler Migration am Beispiel von Geflüchteten aus Ostafrika" (Trust building and future construction via smartphones and social media at intermediate places of transnational migration, with the example of refugees from Eastern Africa) at Trier University.
--- Declaration of conflicting interests
The author declared no potential conflict of interests with respect to the research, authorship and/or publication of this article.
The dynamics of social networks is a complex process, as many factors contribute to the formation and evolution of social links. While certain real-world properties are captured by the degree-driven preferential attachment model, it still cannot fully explain social network dynamics. Indeed, important properties such as dynamic community formation, link weight evolution, or degree saturation cannot be completely and simultaneously described by state-of-the-art models. In this paper, we explore the distribution of social network parameters and centralities and argue that node degree is not the main attractor of new social links. Consequently, as node betweenness proves to be paramount to attracting new links, as well as to strengthening existing links, we propose the new Weighted Betweenness Preferential Attachment (WBPA) model, which renders quantitatively robust results on realistic network metrics. Moreover, we support our WBPA model with a socio-psychological interpretation that offers a deeper understanding of the mechanics behind social network dynamics. Despite the widespread use of the Gaussian distribution in science and technology, many social, biological, and technological networks are better described by a power-law (Zipf) distribution of node degree (the node degree is the number of links incident to a node). The Barabasi-Albert (BA) model, based on degree-driven preferential attachment, generates such scale-free networks with a power-law distribution of node degree P(k) ∝ k^(-λ). In fact, degree preferential attachment (DPA) is widely considered to be one of the main factors behind complex network evolution (the scale-free topologies generated with the BA model are able to capture other real-world social network properties, such as a low average path length L) 1,2. However, recent research challenges the idea that the scale-free property is prevalent in complex networks 3. Additionally, the degree-driven preferential attachment model has well-known limitations in accurately describing social networks (i.e., complex networks where nodes represent individuals or social agents, and links represent social ties or social relationships), owing to the following considerations:
• People are physically and psychologically limited to a maximum number of real-world friendships; this imposes a saturation limit on node degree 4,5. Conversely, in the BA model no such limit exists.
• People have weighted relationships, i.e., not all ties are equally important: an average person knows roughly 350 persons, can actively befriend no more than 150 people (Dunbar's number) 4, and has only a few very strong social ties (links) 6. The BA model does not account for such link weights 7.
• The structure and dynamics of communities in social networks are not accurately described by DPA 7-11.
To address these issues, recent research has combined the DPA model with properties derived directly from empirical data. For instance, there are proposals which add the small-world property to scale-free models (e.g., the Holme-Kim model 12, evolving scale-free networks 13) or the power-law distribution to small-worlds (e.g., the Watts-Strogatz model with degree distribution 14, multistage random growing small-worlds 15, evolving small-worlds 16, random connectivity small-worlds 17). Other research proposals extend Milgram's experiment 18,
e.g., static-geographic 19 and cellular 20 models. However, all these models are still not accurate enough when compared against real-world social networks. To better understand this accuracy problem, we perform a topological analysis on a variety of real-world network datasets and show that node betweenness (which expresses the quality of a node of being "in between" communities) is power-law distributed and, at the same time, correlated with link weight distributions. Our empirical findings align well with previous research in some particular cases 11,21. Such empirical evidence suggests that, for social networks, node degree is not the main driver of preferential attachment; other centralities may therefore be better attractors of social ties. We conclude that node betweenness, as opposed to node degree or any other centrality metric, is the key attractor for new social ties. Consequently, as the main theoretical contribution, we introduce the new Weighted Betweenness Preferential Attachment (WBPA) model, which is a simple yet fundamental mechanism that replicates real-world social network topologies more accurately than other state-of-the-art models. More precisely, we show that the WBPA model is the first social network model able to replicate community structure while it simultaneously: (i) explains how link weights evolve, and (ii) reproduces the natural saturation of degree in hub nodes. Finally, we further interpret WBPA from a socio-psychological perspective, which may explain why node betweenness is such an important factor behind social network formation and evolution.
--- Results
Centrality statistics. We investigate the distributions of node betweenness on a variety of social network datasets: Facebook users (590 nodes), Google Plus users (638 nodes), weighted co-authorships in network science (1589 nodes), a weighted online social network (1899 nodes), the weighted Bitcoin web of trust (5881 nodes), unweighted Wikipedia votes (7115 nodes), a weighted scientific collaboration network (7343 nodes), unweighted Condensed Matter collaborations (23K nodes), weighted MathOverflow user interactions (25K nodes), unweighted HEP citations (28K nodes), the POK social network (29K nodes), unweighted email interactions (37K nodes), IMDB actors (48K nodes), Brightkite OSN users (58K nodes), Facebook-New Orleans (64K nodes), and the Epinions (76K nodes), Slashdot (82K nodes) and Timik (364K nodes) online platforms. To improve the robustness of our analysis, we ensure data diversity by considering network datasets of different sizes, weighted and unweighted, representing various types of social relationships (see Methods). Our first observation is that, in all datasets, node degree, node betweenness, link betweenness, and link weights (for datasets with weighted links) are power-law distributed. Moreover, the power-law slope of the degree distribution is steeper than that of the node betweenness distribution. More precisely, as presented in Fig. 1a, the average degree slope is γ_deg = 2.097 (standard deviation σ = 0.774) and the average betweenness slope is γ_btw = 1.609 (σ = 0.431), meaning that γ_deg is typically 30.3% steeper than γ_btw across all datasets (details in SI.1, Social network datasets statistics). Also, for all considered datasets there is a significant non-linear (polynomial or exponential) correlation between node betweenness and node degree (see Fig. 1b); this further suggests that node betweenness may be the source of the imbalance in the node degree distribution.
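The slope estimates above can be reproduced with standard tooling. The sketch below is not the authors' pipeline (the Methods section names R's poweRlaw package); it uses the analogous Python powerlaw package on a synthetic stand-in graph, so the printed values will not match the reported γ_deg and γ_btw.

```python
# Sketch: fitting power-law slopes of degree and betweenness distributions.
# Assumptions: the `powerlaw` pip package; a BA graph as a stand-in dataset.
import networkx as nx
import powerlaw

G = nx.barabasi_albert_graph(2000, 3, seed=42)  # placeholder network

degrees = [d for _, d in G.degree()]
btw = nx.betweenness_centrality(G, normalized=False)

deg_fit = powerlaw.Fit(degrees, discrete=True)
btw_fit = powerlaw.Fit([b for b in btw.values() if b > 0])

print(f"gamma_deg ~ {deg_fit.power_law.alpha:.3f}")
print(f"gamma_btw ~ {btw_fit.power_law.alpha:.3f}")
```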
The statistics for the entire dataset collection are presented in SI.1. The second observation is that, unlike node degree, node betweenness is significantly more correlated with the weights of the incident links. After assessing the correlation of both node betweenness and node degree with the weighted sum of all adjacent links, we argue that betweenness acts as an attractor for stronger ties. For example, in the weighted co-authorships network with 1589 nodes 23, the top 5% of links accumulate 27.4% of the total weight in the graph; these top 5% of links are incident to nodes which amass 80.2% of the total node betweenness, but only 14.9% of the total node degree (see Fig. 2; further numerical details in SI.1, Table 2). In all analyzed weighted datasets, node betweenness correlates with incident link weights by ratios that are 2.5-9 times higher than the corresponding node degree-link weight associations (additional details in SI.1, Fig. 2).
Figure 1. (a) Overview of centrality distribution slopes for all empirical datasets; the average slopes are highlighted for node degree (blue) and node betweenness (red). (b) Non-linear correlation of node betweenness and node degree in a representative weighted on-line social network (OSN) 22 with 1899 nodes. These results show that, in social networks, degree and betweenness have a power-law distribution (with a steeper slope for degree), and that there is a non-linear correlation between the two centralities.
The first observation indicates a significant correlation between node degree and node betweenness, but it does not necessarily imply causation. However, the second observation is that betweenness attracts stronger links which, in turn, trigger more imbalance in the degree distribution; this suggests that node betweenness is behind network evolution, while the power-law degree distribution is only a by-product. The importance of node betweenness is further supported by the analysis of centrality dynamics. To this end, we provide the example of an on-line social network, UPT.social, which was intended to facilitate social interaction between students and members of faculty at University Politehnica of Timişoara, Romania 24. Right after its launch in 2016, UPT.social attracted hundreds of users, and the entire dynamical process of new link formation was recorded as snapshots over the first 6 weeks (T_0-T_5). As exemplified in Fig. 3 (and further detailed in SI.3, Fig. 6), the nodes with high betweenness become the principal attractors of new social ties; we also note that the top 3 nodes attracting new edges at time snapshot T_2 are the ones which maximize their betweenness beforehand, and then trigger a subsequent degree increase. As shown, once node degree begins to saturate (T_3-T_5), node betweenness drops, as nodes fulfill their initial bridging potential.
--- Betweenness preferential attachment (BPA). In what follows, we propose the betweenness preferential attachment (BPA) model and conjecture that, for social networks, it is more realistic than the degree preferential attachment (DPA) model. The fundamental difference between degree-driven and betweenness-driven preferential attachment is illustrated in Fig. 4; the upper panel shows that, under the DPA rule, the nodes with high degree (colored in orange) gain an even higher degree.
In contrast, the lower panel in Fig. 4 shows that, under the BPA rule, the nodes with high betweenness (orange) attract more links and increase their degrees; in turn, this decreases their betweenness via a redistribution process, thus limiting the number of new links for high-degree nodes as a second-order effect. This may explain why, in real-world networks, the number of new links is limited for high-degree nodes (i.e., degree saturation).
Figure 4. The mechanisms of degree preferential attachment (DPA) versus betweenness preferential attachment (BPA), depicted in terms of acquiring new links and limiting the (excessive) accumulation of degree over time. In DPA, nodes with high degree attract even more links, and thus node degree increases ad infinitum. Conversely, in BPA, nodes attracting new links because of their high betweenness will eventually lose their betweenness in favor of their neighboring nodes, thus limiting the acquired degree.
WBPA model. Besides validating the BPA mechanism, we also note that all the empirical network data gathered in a real-world context are weighted, even if the information about link weights is not always available. For example, there is no link weight information in our Facebook and Google Plus datasets, yet these networks are clearly part of a weighted social context in which each link has a distinct social strength. Realistic networks evolve according to a mechanism which considers link weights; we therefore develop the weighted BPA (WBPA) algorithm to characterize social network evolution. The WBPA algorithm for link weight assignment according to the fitness-weight correlation is given in Fig. 5 and discussed below. In the case of WBPA, the fitness f is node betweenness. Note that even though the link weights w_ij are not used directly during the growth phase, they have a significant second-order impact: betweenness depends on the shortest paths in the graph, which in turn are highly dependent on link weights. Link weights are updated in step 3 of the WBPA algorithm, and whenever a weight becomes ≤ 0, the corresponding link is removed.
Weighted BPA Algorithm (WBPA).
1) Distribute weights: Begin with an arbitrarily connected graph G with nodes V and bidirectional links E (i.e., for ∀e_ij ∃ e_ji). A weight w_ij is added for each link e_ij in the graph, so that w_ij is proportional to the fitness f_j of the target node v_j. For each node v_i, all incident link weights w_ij are normalized so that the outgoing weighted degree is 1.
2) Growth (BPA): At every step, a new node v_k is introduced; the new node tries to connect to n (1 ≤ n ≤ |V|) existing nodes in G. The probability p_i that v_k becomes connected to an existing node v_i is proportional to fitness f_i. Therefore, we have $p_i = f_i / \sum_{j \in V} f_j$, where the sum is taken over all nodes in the graph.
3) Dynamic weight redistribution: Once a new node v_k becomes connected to an existing node v_i, the weights w_ki and w_ik are initialized with the normalized fitnesses f_i and f_k, respectively. As the weighted outgoing degree of node v_i increases by w_ik, every other weight w_ij is rescaled by -w_ik/n, where n is the previous number of neighbors of node v_i.
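To make the three steps concrete, here is a minimal Python sketch of one possible reading of the WBPA growth loop, not the authors' reference implementation. The smoothing constant eps, the choice to weight each new link by the target's fitness, and the use of unweighted betweenness (a faithful version would convert tie strengths into shortest-path distances, e.g., 1/w) are our simplifying assumptions.

```python
# Sketch of the WBPA growth loop (steps 1-3), under the assumptions stated above.
import random
import networkx as nx

def wbpa_growth(n_final, m=2, seed_size=4, eps=1e-6, rng=None):
    rng = rng or random.Random(42)
    G = nx.complete_graph(seed_size)               # step 1: arbitrarily connected seed
    nx.set_edge_attributes(G, 1.0, "weight")

    for new in range(seed_size, n_final):
        # Fitness f = betweenness; unweighted here for simplicity (see lead-in).
        btw = nx.betweenness_centrality(G)
        nodes = list(G.nodes())
        fitness = [btw[v] + eps for v in nodes]    # eps avoids all-zero fitness
        targets = set()
        while len(targets) < min(m, len(nodes)):   # step 2: p_i proportional to f_i
            targets.add(rng.choices(nodes, weights=fitness)[0])

        G.add_node(new)
        for t in targets:                          # step 3: weight redistribution
            old_nbrs = [u for u in G.neighbors(t) if u != new]
            w_new = btw[t] + eps                   # new link weight ~ target fitness
            G.add_edge(new, t, weight=w_new)
            for u in old_nbrs:                     # rescale existing weights by -w_new/n
                G[t][u]["weight"] -= w_new / len(old_nbrs)
                if G[t][u]["weight"] <= 0:         # weights <= 0 remove the link
                    G.remove_edge(t, u)
    return G

G = wbpa_growth(300)
print(G.number_of_nodes(), G.number_of_edges())
```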
Assessing the realism of WBPA. WBPA defines complex interactions between link weights and node centralities, hence we expect emerging phenomena such as n-th order effects. A mathematical analysis of WBPA would therefore be cumbersome and beyond the scope of our paper. Instead, as a validation strategy, we test WBPA against several preferential attachment (PA) models to explore which one produces the most realistic social network topology. To this end, we quantify preferential attachment according to a fitness function f which expresses the capability of individual nodes to attract new connections (e.g., if f is chosen to be node degree Deg, then we reproduce the classic BA model 2). We consider f as one of the following network centralities: degree Deg (DPA model), betweenness Btw (WBPA model), eigenvector centrality EC (ECPA model), closeness Cls (ClsPA model), and clustering coefficient CC (CCPA model). Each node centrality is defined in the Methods section. The comparison between synthetic and real-world networks is done through topological similarity assessment supported by the statistical fidelity metric 25, alongside standard deviations and p-values. Fidelity takes values ϕ ∈ [0, 1], with 1 representing a network that is identical to the reference network (see the Methods section for more details). We also make use of the following graph metrics to characterize and compare networks: average degree (AD), average path length (APL), average clustering coefficient (ACC), modularity (Mod), graph diameter (Dmt), and graph density (Dns). We start by measuring the distributions of these six metrics on the 18 selected real-world datasets. To assess which centrality is the most appropriate fitness function, we generate networks according to each PA model, of increasing sizes: N = {1K, 2K, 5K, 10K, 50K, 100K} nodes; the full statistical results are presented in SI.2.
Best fitness for preferential attachment. Aggregating the statistical results from SI.2 (Fig. 4 for real-world data and Fig. 5 for PA networks), we provide an intuitive visual comparison in Fig. 6 between the averaged evolution of the six graph metrics on the real-world data (N = 590 to N = 364K nodes) and on the degree-driven and betweenness-driven PA networks. To better illustrate the comparison between the synthetic PA networks and the real-world datasets, we present the trend lines for each graph metric in Fig. 6; for the real-world data the trend line is green-dotted, for Btw fitness networks it is blue, and for Deg fitness networks it is red. On close inspection, we uncover the following:
• AD in real data evolves differently than in PA networks.
• APL evolution in real data resembles Btw networks much better than Deg networks. We measure statistical fidelities of ϕ_Btw = 0.925 and ϕ_Deg = 0.853.
• ACC evolution in real data resembles Btw more than Deg, with statistical fidelities of ϕ_Btw = 0.665 and ϕ_Deg = 0.515.
• Mod evolution in real data resembles both networks very well, with statistical fidelities of ϕ_Btw = 0.814 and ϕ_Deg = 0.812 (a slight advantage for the Btw networks).
• Dmt evolution in real data resembles Deg more than Btw. Even though we see the same type of increase, Deg produces longer diameters, as seen in the majority of real-world data. The measured statistical fidelities are ϕ_Btw = 0.796 and ϕ_Deg = 0.836.
• Dns evolution in real data resembles both networks, with statistical fidelities of ϕ_Btw = 0.634 and ϕ_Deg = 0.634.
For simplicity, Fig. 6 includes only Deg and Btw PA networks in the comparison with real-world data; the full numerical data, covering all PA network models, are detailed in Table 1.
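For reference, all six comparison metrics can be measured with networkx. The sketch below uses networkx's Louvain implementation for community detection (the Methods section cites the Blondel et al. algorithm, of which Louvain is the standard implementation) and restricts path-based metrics to the giant component, an assumption the paper does not spell out.

```python
# Sketch: measuring the six graph metrics used in the fidelity comparison.
import networkx as nx

def graph_metrics(G):
    communities = nx.community.louvain_communities(G, seed=0)
    # APL and Dmt are only defined on connected graphs; use the giant component.
    giant = G.subgraph(max(nx.connected_components(G), key=len))
    return {
        "AD":  2 * G.number_of_edges() / G.number_of_nodes(),
        "APL": nx.average_shortest_path_length(giant),
        "ACC": nx.average_clustering(G),
        "Mod": nx.community.modularity(G, communities),
        "Dmt": nx.diameter(giant),
        "Dns": nx.density(G),
    }

print(graph_metrics(nx.barabasi_albert_graph(1000, 3, seed=1)))
```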
All these results demonstrate the superior realism provided by WBPA in comparison to the classic DPA principle, as well as in comparison to PA driven by other node centralities such as eigenvector, closeness or clustering coefficient. We strengthen our analysis by presenting several direct comparisons between real networks and synthetic PA networks, generated with the same node sizes as the real-world reference networks.
Figure 5, panel (c): Once v_6 and v_1 connect, node v_1 assigns a weight w_16 to the new link that is proportional to fitness f_6. As such, a proportional weight ratio of w_16/4 is subtracted (indicated with a minus sign) from the four already existing links. If any of the newly resulting weights drops below 0, the corresponding link is removed from node v_1. According to the BPA principle, the fitness f is represented by the node betweenness centrality.
The comparisons are made using the fidelity metric ϕ, as well as by comparing individual graph metrics (one by one), to show that WBPA is superior to the other PA networks. To this end, we select the Facebook (FB), Google Plus (GP), Online social network (OSN), and IMDB real-world datasets, and provide the full statistical results in Table 2; each sub-table contains the reference real-world network and its graph metrics on the first row, while the remaining lines contain the averaged graph metrics for 10 synthetic networks generated according to preferential attachment driven by each centrality (Deg, Btw, EC, Cls, CC). Additionally, we provide measurements for a null model (random network) to serve as a baseline. The standard deviation for each synthetic dataset metric is indicated with a ± sign. The mechanism of preferential attachment which we adopt in our paper is a fundamental, yet generic and simple framework. State-of-the-art studies specifically aimed at creating realistic topologies propose algorithms of far greater complexity. Therefore, intuitively, it is expected that state-of-the-art models like Cellular (Cell) 20, Holme-Kim (HK) 12, Toivonen (TV) 26, or Watts-Strogatz with degree distribution (WSDD) 14 will generate more realistic topologies in terms of the six discussed graph metrics. To test this hypothesis, we further generate such synthetic networks of size N = 10,000 and compare them with WBPA and DPA networks and several real-world datasets. The results are provided in Table 3, showing that not only is WBPA superior to DPA and to PA models driven by other centralities but, in most cases (i.e., 10 out of 13), it outperforms the other synthetic models in terms of topological fidelity as well. For readability purposes we did not add information about the standard deviations of each synthetic model here; this information may be found in SI.4, Tables 4 and 5. To offer the diversity required by a robust test of our model, we also include unweighted networks in our collection. A fair comparison between WBPA networks (which are all weighted) and the large, unweighted example networks requires that all weights in our WBPA algorithm output be discarded. In this comparison, we start by generating WBPA networks of 10,000 nodes, then turn all weights w_ij > 0 into 1, thus obtaining unweighted BPA networks. The upper half of Table 3 contains the average fidelities of WBPA, DPA and the two null model networks towards the real-world reference networks. The lower half of Table 3 contains the other state-of-the-art synthetic networks.
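Several of these baselines ship with networkx; the sketch below generates the ones that do (the Barabasi-Albert/DPA, Holme-Kim, Watts-Strogatz and random null models), while the Cellular, Toivonen and WSDD models would require custom implementations not shown here. The parameter choices are illustrative, not the paper's.

```python
# Sketch: generating some of the baseline synthetic networks for comparison.
import networkx as nx

N = 10_000
baselines = {
    "DPA (Barabasi-Albert)": nx.barabasi_albert_graph(N, 3, seed=0),
    "HK (Holme-Kim)":        nx.powerlaw_cluster_graph(N, 3, 0.5, seed=0),
    "Small world (WS)":      nx.watts_strogatz_graph(N, 6, 0.1, seed=0),
    "Null (random)":         nx.gnm_random_graph(N, 3 * N, seed=0),
}
for name, g in baselines.items():
    print(name, g.number_of_nodes(), g.number_of_edges())
```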
Our WBPA obtains the highest fidelity towards most empirical references, e.g., a 13-68% higher ϕ_FB, a 21-81% higher ϕ_OSN, and a 4-47% higher ϕ_TK than all other synthetic models. As such, we prove the increased realism of our model in comparison with some elaborate state-of-the-art models (briefly described in SI.4, and quantified in SI.4, Table 4). Compared to DPA, our model produces networks with higher fidelity values; when averaged over all empirical networks we obtain $\bar{\phi}_{Btw} = 0.831$ and $\bar{\phi}_{Deg} = 0.777$. We note that the WBPA model produces a specific distribution of the betweenness/degree (B/D) ratio. To this end, we measure B/D distributions on all datasets (weighted and unweighted), as well as on our synthetic WBPA-generated networks, using the Gini coefficient to evaluate data dispersion 27 (the Gini coefficient takes values between 0 and 1, with values closer to 0 representing a more uniform dispersion of data). The Gini values obtained on the empirical data are given in Table 4: all empirical datasets, whether weighted or unweighted, have their Gini coefficients within a similar range, i.e., the average real-world Gini is g_real = 0.5193 ± 0.071. Indeed, for WBPA networks with 10,000 nodes, we have an average Gini coefficient of g_WBPA = 0.4962 ± 0.0282, which is very close to the real-world B/D Gini values (-4.5%). Additionally, we generate 10 each of random, small-world, and PA networks of 10,000 nodes. For these synthetic networks we obtain the corresponding Gini values in Table 4. The PA networks (except WBPA) produce an average g_PA = 0.7784 ± 0.0128, whereas the random networks produce an average Gini of g_rand = 0.9374 ± 0.0013. These results point out two key aspects: (i) the B/D dispersion in other PA and other state-of-the-art synthetic models differs significantly from real-world social networks, and (ii) WBPA produces networks with B/D distributions that are closer to the real world. Two specific B/D distributions are exemplified in Fig. 7a,b for the Google Plus and POK users networks, respectively; Fig. 7c,d show the corresponding distributions for DPA and WBPA networks. The WBPA realism is also backed up by the centrality distribution analysis. The power-law slopes for degree and betweenness distributions in WBPA (γ_deg = 1.391 and γ_btw = 1.171) are very similar to the real-world distributions from the Centrality statistics section (see Fig. 1) and SI.1, Table 1, meaning that the degree slope is steeper than the betweenness slope (by 18.8%). Similar to the real-world cases, we obtain a polynomial fit for the node betweenness-degree correlation in WBPA ($y = 0.246x^2 + 329.8x - 3569.4$, with correlation coefficient $R^2 = 0.9977$).
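The B/D dispersion analysis is straightforward to reproduce. Below is a sketch with a textbook Gini implementation (the mean-difference form; the paper does not specify which estimator was used), applied to a DPA-style stand-in network, so the printed value should land near the reported g_PA range rather than the g_WBPA one. The network is smaller than the paper's N = 10,000 to keep the runtime modest.

```python
# Sketch: Gini coefficient of per-node betweenness/degree (B/D) ratios.
import numpy as np
import networkx as nx

def gini(x):
    """Standard Gini estimator: G = 2*sum(i*x_(i)) / (n*sum(x)) - (n+1)/n."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    ranks = np.arange(1, n + 1)          # 1-based ranks of the sorted values
    return 2.0 * np.sum(ranks * x) / (n * np.sum(x)) - (n + 1.0) / n

G = nx.barabasi_albert_graph(2000, 3, seed=7)  # DPA-style stand-in network
btw = nx.betweenness_centrality(G)
bd = [btw[v] / d for v, d in G.degree() if d > 0]
print(f"Gini(B/D) = {gini(bd):.4f}")           # high values: uneven B/D dispersion
```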
--- Discussion and a Socio-Psychological Interpretation
From a computational standpoint, node betweenness is significantly more complex to compute than node degree. However, when individuals assess social attractiveness in real-world situations, which is essential for driving preferential attachment and establishing new social links, they do not rely on executing algorithms or other types of quantitative evaluation. Instead, individuals make decisions based on qualitative perceptions 30. In light of the quality-over-quantity hypothesis proposed by social psychology 31, we argue that node betweenness is a far better indicator of social attractiveness than node degree, because the quality of being "in between" can be easily and quickly perceived: humans are better at observing qualitative aspects (e.g., differences and diversity) than quantitative ones 32. This idea is supported by an experimental study on how people favor investing in fewer, higher-quality social ties rather than in numerous lower-quality ties 32. Our results indicate that WBPA provides a more accurate social network topological model, being able to reproduce real-world community structure as well as to explain degree saturation and link weight evolution. We believe that the WBPA model transcends the mere topological perspective on the evolution of social relationships. In the field of social psychology, individuals are perceived as social creatures who strive for social recognition, validation, approval and fame 7,19,33,34. Indeed, individuals tend to connect to two types of other nodes: individuals who are popular in their communities (i.e., who typically have high degree), and individuals who connect multiple communities (who have high betweenness). While the former type of interconnection is mostly related to the popularity of individuals within local communities, it appears to be an epiphenomenon of the latter. Also, the state of the art has previously identified that social networks exhibit apparent (degree) assortative mixing, while technological and biological networks appear to be disassortative in nature 34,35. The study in 35 explains this by noting that most networks tend to evolve, unless otherwise constrained, towards their maximum entropy state, which is usually disassortative. A similar debate was introduced by Borondo et al. based on the concepts of meritocracy versus topocracy 36. The authors discuss the critical point at which social value changes from being based on personal merit to being based on social position, status, and acquaintances. In the context of social networks, we interpret this issue as follows: in our ego networks, the balance between friends with less influence and ones with more influence than us translates into betweenness assortativity. Indeed, by connecting to persons with high betweenness and increasing our tie strength with them (through, say, a stable social relationship), we ourselves become, in turn, more influential social bridges. This propagation of influence determines other persons, with lower betweenness, to interact with us and direct more tie strength towards us. To this end, we introduce the concept of a social evolution cycle, which revolves around betweenness assortativity rather than degree assortativity 34,35,37. According to our approach, individuals become more influential over time by increasing their own betweenness. Therefore, an individual's drive to increase his/her betweenness is two-fold in effect: it attracts new ties (i.e., an increase in degree), and it creates stronger ties (i.e., an increase in link weight); this process continues for the next generation of individuals who aspire to climb the social ladder. As shown, this conclusion is supported by the evolution of networks generated with WBPA. We envision two ways of improving an individual's social status. The first relies on forcing tie strengths inside the existing neighborhood to increase first, followed by an increase in influence. The second relies on increasing influence first, by broadening the neighborhood to influential agents (the BPA principle), which will in turn trigger an increase in tie strengths. We consider the second choice the more plausible social process, as detailed and explained in Fig. 8.
We conclude that the WBPA model is quantitatively more robust than DPA, as it can reproduce a wide range of real-world social networks more accurately. This conclusion implies that node degree is not the main driver of social network dynamics. Instead, node betweenness is a much better indicator of social attractiveness, because it drives the formation of new social bonds as well as the evolution of the social status of individuals.
Figure 7. Distributions of betweenness/degree (B/D) ratios in empirical and synthetic social networks characterized by Gini coefficients g. (a) Google Plus users network 28 (g_GP = 0.4820). (b) POK users network 29 (g_PK = 0.4879). (c) DPA network 2 (g_DPA = 0.7828 ± 0.0182). (d) WBPA network (g_WBPA = 0.4962 ± 0.0282). The B/D distribution in our WBPA network model, as opposed to the DPA network, is very similar to that found in real-world networks.
From a socio-psychological standpoint, individuals (intuitively) perceive a node's betweenness as its capacity for bridging communities, irrespective of its degree. As shown, WBPA is a subtle mechanism at work that is able to replicate the social network community structure. Also, WBPA explains the dynamic accumulation of degree and link weights, as well as the eventual degree saturation, as a second-order effect. Consequently, we believe our work paves the way for a new and deeper understanding of the mechanisms that lie behind the dynamics of complex social networks.
--- Methods
Real-world datasets. All data used in this study were selected to facilitate a thorough analysis of node betweenness and degree, as well as to measure the realism of synthetic networks. The real-world datasets were chosen for diversity of both context and network size. Prior studies confirm that data mining from sources such as Facebook or Google Plus is reliable for realistic social network research 38,39, and indicate a strong correlation between the real-world and virtual friendships of people 40,41. Table 5 provides the graph metric measurements used for the realism assessment of our WBPA model, as presented in the Results section. Our real-world datasets comprise the following social networks (ordered by network size, from N = 590 to N = 364K nodes): Facebook (FB) users 41, Google Plus (GP) users 28, weighted co-authorships (CoAu) in network science 23, a weighted on-line social network (OSN) 22, a trade network using the Bitcoin OTC platform (BTC) 42, votes for Wikipedia administrators (WkV) 43, a weighted scientific collaboration network in Computational Geometry (Geom) 44, the Condensed Matter collaboration network from arXiv (CM) 45, weighted interactions on the stack exchange web site MathOverflow (MOvr) 46, the High-Energy Physics citation network (HEP) 47, the POK online social network 29, the Enron email (EmE) communication network 48, IMDB adult actors co-appearances, the Brightkite online social network (BK) 49, Facebook-New Orleans (FBNO) 50, and the Epinions (EP) 51, Slashdot (SL) 48, and Timik (TK) 52 online platforms. Information about the nature of nodes and links, as well as direct URLs for each dataset, is provided in SI.5 Datasets availability, Table 6. In the main manuscript, Table 6 presents the natural ranges for the graph metrics provided in Table 5, as measured across the entire range of considered real-world on-line social networks 41.
--- Network centralities.
All graphs are generated and visualized using Gephi [53]; the graph centralities are analyzed using the poweRlaw package distributed with R, according to the methodology described in [54]. Full details for the topological analysis of data are given in SI.1. Furthermore, to quantify the specific distributions of B/D ratios introduced in this paper, we made use of the Gini coefficient, borrowed from the area of economics where it is used to evaluate data dispersion [27]. In SI.2 we present the preferential attachment analysis based on combinations of two and three node centralities.

Given a graph G = (V, E), with nodes v_i ∈ V and links e_ij ∈ E, we define the basic graph centralities and metrics used throughout the paper. We represent the adjacency matrix as W = {w_ij}, which contains either the weight of the link for any link e_ij, or 0 if no link exists. If the network is unweighted, then each w_ij = 1. The degree k_i of a node v_i (also denoted as D) is defined as $k_i = \sum_j w_{ij}$. In the case of directed networks there is a differentiation between in-degree and out-degree, but that is beyond the scope of this subsection. The average degree AD of the graph is calculated over all nodes as [1]:

$$AD = \frac{1}{n} \sum_{v_i \in G} k_i \quad (1)$$

The clustering coefficient CC_i measures the fraction of existing links in the vicinity V_i of a node, and is formally defined as [55]:

$$CC_i = \frac{|\{e_{jk} : v_j, v_k \in V_i\}|}{k_i (k_i - 1)} \quad (2)$$

with k_i being the degree of node v_i, and e_jk the set of links connecting two friends in the vicinity of node v_i, all divided by the maximum number of links in vicinity V_i. Consequently, the average clustering coefficient ACC of the entire graph is the average of all CC_i over all nodes.

Considering d(v_i, v_j) as the shortest path between two nodes in G, the average path length APL is defined as [1]:

$$APL = \frac{1}{n(n-1)} \sum_{i \neq j \in G} d(v_i, v_j) \quad (3)$$

If there is no path between two nodes, then that particular distance is considered 0; n is the total number of nodes |V| in G. The diameter of a graph is defined as the longest geodesic [56], namely the longest shortest distance between any two nodes: Dmt = max(d(v_i, v_j)). Graph density is simply defined as the ratio between the number of links and the maximum possible number of links, if the graph were complete [56]. For undirected graphs, it is defined as:

$$Dns = \frac{2|E|}{n(n-1)} \quad (4)$$

Modularity is a measure for quantifying the strength of division of a graph into modules, or clusters, and is often used in the detection of community structure [57]; its value falls in the range [-1/2, 1). If it is positive, then the number of links within a cluster exceeds the expected number. Also, a high overall modularity means dense connections between the nodes within modules and sparse connections between nodes in different modules. We use the algorithm of Blondel et al. to compute modularity [58]. Betweenness centrality is commonly defined as the fraction of shortest paths between all node pairs that pass through a node of interest [1], and is defined as [59]:

$$Btw(v_i) = \sum_{j \neq i \neq k \in G} \frac{\sigma_{jk}(v_i)}{\sigma_{jk}} \quad (5)$$

where σ_jk(v_i) is the number of shortest paths in G which pass through node v_i, and σ_jk is the total number of shortest paths between all pairs of nodes v_j and v_k from G.
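As a quick illustration of the metrics defined above, the sketch below computes them with networkx and also evaluates the Gini coefficient of the B/D ratio distribution discussed earlier. This is a hedged example: the helper names (summarize, gini) are ours, and the Louvain partition stands in for the Blondel et al. modularity computation mentioned in the text.

```python
# Sketch: the graph metrics defined above, plus the Gini coefficient of the
# betweenness/degree (B/D) ratio distribution. Helper names are illustrative.
import networkx as nx
import networkx.algorithms.community as nx_comm

def summarize(g: nx.Graph) -> dict:
    n = g.number_of_nodes()
    return {
        "AD": sum(d for _, d in g.degree()) / n,       # average degree, eq. (1)
        "ACC": nx.average_clustering(g),               # average of CC_i, eq. (2)
        "APL": nx.average_shortest_path_length(g),     # eq. (3), assumes connected g
        "Dmt": nx.diameter(g),                         # longest shortest path
        "Dns": nx.density(g),                          # 2|E| / (n(n-1)), eq. (4)
        "Mod": nx_comm.modularity(g, nx_comm.louvain_communities(g, seed=1)),
    }

def gini(values: list[float]) -> float:
    """Gini coefficient of a non-negative sample (0 = perfect equality)."""
    xs = sorted(values)
    n = len(xs)
    weighted_sum = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * weighted_sum) / (n * sum(xs)) - (n + 1) / n

if __name__ == "__main__":
    g = nx.barabasi_albert_graph(300, 3, seed=7)   # stand-in synthetic network
    btw = nx.betweenness_centrality(g)             # eq. (5)
    bd_ratios = [btw[v] / d for v, d in g.degree()]
    print(summarize(g))
    print("Gini of B/D ratios:", round(gini(bd_ratios), 4))
```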
Closeness centrality is defined as the inverse of the sum of geodesic distances to all other nodes in G [1,56], and can be considered as a measure of how long it will take to spread information from a given node to all other reachable nodes in the network:

$$Cls(v_i) = \left( \sum_{v_j \in G \setminus v_i} d(v_i, v_j) \right)^{-1} \quad (6)$$

where d(v_i, v_j) is the distance (number of hops) between the two nodes v_i and v_j. The most common centrality based on the random walk process is the eigenvector centrality (EC), which assumes that the influence of a node is not only determined by the number of its neighbors, but also by the influence of each neighbor [23]. The centrality of any node is proportional to the sum of neighboring centralities [1]. Considering a constant λ, the EC is formally defined as:

$$EC(v_i) = \frac{1}{\lambda} \sum_{v_j \in V_i} EC(v_j) \quad (7)$$

Assessing network fidelity. In order to assess the structural realism of the generated social networks, we used the statistical fidelity ϕ, which is proven to offer reliable insights on complex network topologies [25]. The fidelity metric ϕ numerically captures the similarity of any graph topology G* with respect to a reference graph G (i.e., a complex network G = (V, E)). More precisely, by measuring and comparing their common individual graph metrics, a maximum fidelity of 1 represents complete similarity, while a minimum fidelity of 0 represents complete dissimilarity between the two compared topologies. Of note, the fidelity is not dependent on the choice of metrics of interest; however, it is customizable to allow a weighted comparison. Depending on the context of the problem, any numerical value (i.e., metric) that is representative of the model can be used. The definition and proof of statistical fidelity ϕ are detailed in [25].

Definition 1. Given a reference topology G, and any other network G* being compared to G, the arithmetic fidelity ϕ_A*, which expresses the similarity between G* and G, is defined in equation (8), where i is the index of the metric which describes the two networks being compared, and n is the total number of metrics used in the comparison.

In this paper we compute the fidelity between multiple synthetic topologies and the empirical social network references. These reference datasets are chosen because they have typical real-life social network features. The fidelity comparison is made relative to the set of relevant network metrics (indexed by i). In this paper, fidelity is measured by taking into consideration the following topological characteristics: average degree AD, average path length APL, average clustering coefficient ACC, modularity Mod, diameter Dmt, and density Dns.

--- Author Contributions
A.T., M.U., and R.M. designed research, analyzed data and wrote the paper; A.T. and M.U. designed algorithms; A.T. performed simulations.

--- Additional Information
Supplementary information accompanies this paper at https://doi.org/10.1038/s41598-018-29224-w.

Competing Interests: The authors declare no competing interests.
Rape is one of the serious crimes committed every day in Nigeria, with many negative effects on victims, perpetrators, and society. However, the enormity of the crime is not fully represented in Nollywood films, leading to subpar treatment of rape as a subject matter in most Nigerian films. Using a descriptive approach, the study examined the subject of rape in Kunle Afolayan's October 1 and Moses Inwang's Alter Ego. These films were selected using a purposive sampling technique that enabled the researchers to select from several Nollywood films that dwell on the issue of sexual violence. The study adopted Michael Gottfredson and Travis Hirschi's Self Control Theory to interrogate the motivation, perception and attitude toward rape. The study revealed that rape is committed within the very institutions that are responsible for the development and protection of young people, resulting in latent physical, psychological and sociological problems for the victims, perpetrators, and society. The study recommended that Nollywood film practitioners should use their works as weapons against all forms of sexual violence, and concluded that social and cultural institutions have a major role to play in the intervention and survival of victims of rape by intensifying awareness of the dangers of sexual violence in Nigeria.
INTRODUCTION
Nollywood is one of the biggest film industries in the world, and remains at the forefront of the treatment of societal ills in Nigeria, part of which is the problem of sexual and gender-based violence. Hence, some film practitioners have, through the medium of film technology, represented the growing socio-cultural concern about rape and its dangers in society to bring about corrective ideas and interventions that may re-engineer the consciousness of viewers, society, and government to tackle the lingering scourge of rape and other forms of sexual violence in the country. Some of these films include, but are not limited to, Yet Another Day (1999), The Price (1999), Last Girl Standing (2004), Slave to Lust (2007), Tango with Me (2010), Child, not Bride (2014), Code of Silence (2015), Zahra (2017), Azonto Babes (2018), and The WildFlower (2022). However, most of these films treat sexual violence as a subtopic and thus lack the needed performance to represent the enormity of the crime, as such lacking the capacity to create adequate awareness for the fight against the scourge of rape in Nigeria. Worried by the lack of serious treatment of sexual violence by Nollywood film practitioners, Omolayo (2021) asserts that: The consequences and treatment of the matter are extremely subpar. Its treatment trivialises the matter and disregards the importance of consent. Most importantly, it delivers little to no backlash on the aggressor. The lack of legal and even social consequences delivers an underlying message that rape is not an abominable act. (Parr. 16) Although the Nigerian film industry can lead the chase in the fight against rape (Ejiofor, Ojiakor & Nwaozor, 2017), Nwalikpe (2018), in an empirical study on the role of Nollywood Films in Combating Sexual Violence Among Female Adolescence..., reveals that "female adolescents have not fully felt the impact of Nollywood film in combating sexual violence" (1). This is so because the aesthetics of rape and style of representation employed by Nollywood film producers and directors "tends to promote the male over the female and are discriminatory towards the female" (Omoera, Elegbe & Doghudje, 2019, 138), a problem Oladosu (2021) notes is underpinned by the lack of feminist studies on rape. The unintended trivialisation and misrepresentation of the subject of rape and other acts of sexual violence in most Nollywood films show a gap between the diegetic performance and the actual reality of rape in our society. Hence, Olateru (2021) affirms that because Nollywood treats rape with levity, culprits do not do jail time, psychological effects are not given an in-depth portrayal, and most victims are even allowed to marry their aggressors. The perfunctory representation of rape and other forms of sexual violence, and the trivialisation of their effects and consequences on victims, perpetrators, and society in Nollywood films inform the problem of the study.

--- LITERATURE
Sexual violence is any unwanted act aimed at undermining the sexuality of another person. It includes rape, sexual assault, sexual harassment, human trafficking, and all forms of sexual threats. However, the major concern of this paper is rape, which is the most brutal of all these forms. Generally, rape is the act of forcefully having penetrative sex with someone against his/her will. It is one of the many violent crimes against humankind that deprives human beings of the freedom to express feelings of mutual consent.
Globally, the crime of rape is a social enigma that poses a huge threat to gender-based relationships. Schewe (2007) defines rape as "non-consenting sexual behaviours committed against an acquaintance, a dating partner, or a stranger" (224). According to Ben-Nun (2016), it is: A type of sexual assault usually involving sexual intercourse or other forms of sexual penetration perpetrated against a person without that person's consent. The act may be carried out by physical force, coercion, abuse of authority, or against a person who is incapable of giving valid consent, such as one who is unconscious, incapacitated, has an intellectual disability, or below the legal age of consent. (15) While Ben-Nun's definition of rape presents a balanced interpretation, most definitions and arguments on rape present it as a type of crime committed mainly by men against women. This perception stems from the fact that "90% of victims of rape are female" (Chiazor et al., 2016, 7765); hence, Okorie (2019) notes that "rape is a classical example of gender-oriented crime in that it can only be committed by a male upon a female" (6). The argument above is further undergirded by historical and legal perspectives on rape. For example, Peltola (2021) notes that: Rape and sexual violence have been committed against women and girls for as long as man has inhabited the planet. These acts have been cited in historical documents, including the biblical Old Testament and other religious works, depicted in sculptures and art pieces, and found in literary tomes used in Homer's Iliad. (3) Even some legal definitions implicate women as the ultimate victims of rape. For instance, the Criminal Code Act of Nigeria, a legal document that criminalizes rape in Southern Nigerian states, defines rape in section (357) as: ...unlawful knowledge of a woman or girl without her consent, or with her consent, if the consent is obtained by force or by means of threats or intimidation of any kind, or by fear of harm, or by means of false and fraudulent representation as to the nature of the act, or, in the case of a married woman, by personating her husband... Relatedly, in Northern Nigeria, the Sharia Penal Code section 127(1) also defines rape as: Sexual intercourse with a woman... (a) against her will; (b) without her consent; (c) with her consent, when her consent has been obtained by putting her in fear of death or hurt; (d) with her consent, when the man knows that he is not her husband and that her consent is given because she believes that he is another man to whom she is or believes herself to be lawfully married; (e) with or without her consent, when she is under fourteen years of age or of unsound mind. By implication, Alao (2018) notes that rape is widely "seen as a common phenomenon against the female gender as they are the most vulnerable" (1). Implicitly, men are seen as the key perpetrators and women as the victims. However, "there are many rumoured or even reported cases of men who have been raped in contemporary societies, including Nigeria" (Akinwole & Omoera, 2013, 5). Though little material exists on the subject and the numbers remain unclear (Sivakumaran, 2007, 254), Amnesty International (2020) acknowledges that "men and boys are also subjected to rape but to a lesser extent" (1).
But Sivakumaran's (2007) study on Sexual Violence against Men in Armed Conflict reveals that "sexual violence is committed against men more frequently than is often thought, and that victims are raped at home, in the community and in prison; by men and by women; during conflict and in time of peace" (253). Furthermore, Sivakumaran (2007) explains the different forms through which male victims are raped. According to him, "victims may be forced to perform fellatio on their perpetrators or on one another; perpetrators may anally rape victims themselves, using objects, or force victims to rape fellow victims" (265). By and large, rape is used as the most brutal weapon of war to exercise power and dominance against men and women and to undermine the ethical fabric of society (Peltola, 2021, 2). Although "little attention is paid to cases in which the victim is male" (Idisis & Edoute, 2007) owing to issues of "masculine stereotype" (Sivakumaran, 2007, 255) and identity, Lowell (2010) affirms that "rape statutes in their jurisdictions are gender-neutral and apply equally to perpetrators of either sex" (158). Therefore, beyond the polemics on the appropriate victims of this crime, rape is an unlawful and violent sexual act that can be committed across genders, cutting across ages, tribes, races, religions and statuses, as well as posing serious threats to the security of victims and society. The effects of rape on victims are deleterious. These effects are physical, psychological, sociological, economic, and spiritual. To a large extent, rape also affects the perpetrators and society at large. Rape by its very nature is a physical but violent attack that leads to serious pain and bodily injuries. In a worst-case scenario, it leads to the fatality of the victim. Perpetrators of rape most times resort to the use of dangerous weapons to subdue and repel any form of resistance from their victims. Akinwole and Omoera (2013) further highlight some other physical effects of rape on victims as "injuries from beating, or choking, such as bruises, scratches, cuts, and broken bones; swelling around the genital area, bruising around the vagina, injury to the rectal vaginal area, sexually transmitted infection (such as herpes, gonorrhoea, HIV/AIDS, and Syphilis), and possible pregnancy" (10). More often than not, the ordeal victims of rape go through in the hands of perpetrators leaves them traumatized for a long period. A. Alhassan, as quoted in Chiazor et al. (2013), observes that "in the months following a rape, victims often have symptoms of depression or traumatic stress. They are more likely to abuse alcohol or drugs to control their symptoms" (7778). And most times they suffer from "poor self-image, unhealthy sex life, and depressive or post-traumatic stress disorders in their lifetime, long time negative effects on sexuality and inability to form or maintain trusting relationships are common" (7778). Unfortunately, rapists are not aware of the psychological havoc or trauma they have inflicted on their victims (Alao, 2018). The danger of the physical and psychological distress caused by rape is that it makes rape a hurdle to economic development (Chiazor et al., 2016). Generally, "it is difficult to place a monetary value on the harm caused by sexual assault, but it is important to recognise that there are financial costs to the victim/survivor and the wider community" (Boyd, 2020, 6).
Therefore, Jones and Walker (2014) affirm that the impact of rape "is broadening beyond the immediate physical and psychological trauma of the individual victims to include not only the long term impact on their lives but also the broader social and economic impacts on families and communities" (2). Relatedly, rape victims go through moments of spiritual ebb as a result of the violent infiltration of not just the physical body, but the inner sanctuary and sanctity, thus causing a dent in the purity of the human soul. By this, a victim spiritually feels "unappreciated and exploited", consequently affecting his/her whole being and setting his/her soul on fire for "spiritual hunger and emptiness" (Ezenweke & Kanu, 2012, 11). However, no crime ever goes unpunished. Rapists are not spared certain consequences resulting from their actions. Chiazor et al. (2016) posit that: No rapist goes free, even if he is not apprehended by law enforcement agencies. He will always be hounded by the memory of the evil perpetrated on his victims. The offenders should know that rape has severe consequences, ranging from incarceration to poor health, guilt and condemnation, social stigma, bad criminal record, sexually transmitted diseases and several others. (7777) Each rape attack is a tragedy for the victims and their relatives, the witnesses, the community to which they belong, and even the perpetrator (Okorie, 2019). Since society bears the brunt of the enormous dangers of rape and other acts of sexual violence, there is every need to treat the matter with the seriousness it requires. Every social institution has a part to play in the fight against rape and other forms of sexual abuse against men and women. There is no doubt that rape and other forms of sexual violence are a global problem with multifaceted effects staring Nigeria in the face. It is like a cankerworm that seems to be thriving and deepening its roots at an alarming rate in today's Nigerian society (Mofoluwawo, 2017). The failure of fundamental social institutions like the family, school, and church in creating adequate awareness of this problem, coupled with other factors, contributes to the rising cases of rape in the country. Mofoluwawo (2017) identifies the culture of silence and lax Nigerian laws as some of the factors that enable rape to thrive. Jewkes and Naeema (2002) note that the victims' fear of not being believed by society, the lack of supportive institutions (Chiazor et al., 2016) and the stigmatization of victims all contribute to the rising cases of rape, which Jewkes and Naeema (2002) argue reflects a high level of social tolerance of the crime.

--- THEORETICAL FRAMEWORK
This study is hinged on Michael Gottfredson and Travis Hirschi's Self Control Theory, also known as the General Theory of Crime. The core of this theory is that people with strong social bonds have high self-control and tend to disassociate from any form of aggressive behaviour, while people with low social bonds develop low self-control and are prone to deviant social behaviours. Control theorists hold that "individuals who possess weak or broken social bonds to conventional institutions are more likely to engage in deviant behaviour" (Peguero et al., 2011).
Hence, the development of low self-control over time makes such individuals susceptible to committing sexual violence, while forming a strong social bond with social institutions, goals and beliefs helps a potential aggressor to overcome the impulse to commit any crime, including rape. Therefore, Peguero et al. (2011) opine that Self Control Theory "is based on bridging the link between individuals and conventional social institutions in order to explain delinquent behaviour" (260). Furthermore, control theorists believe that self-control is a learned behaviour that forms early in life and remains relatively stable over time. They trace the connection between criminal behaviour and age. For example, Cretacci (2008) agrees that "people who engage in criminality suffer from low self-control that is stable over time" and "formed early in life" (539), and that, consequently, low self-control "predisposes offenders to a life of crime, and is manifested in personality problems" (539). He goes further to state that the masked manifestations of low self-control are evident in "immediate gratification, impulsivity, taking risks, laziness, and assaultive behaviour" (539). Ultimately, the essential element of criminality is the absence of self-control, which Lowell (2010) agrees explains the "impulsivity of man towards aggressive behaviour" (160) and the intrinsic desire to use sex as a means to an end. While control theorists generally agree that the important factor behind any crime is a person's lack of self-control, Schreck and Hirschi (2021) emphasise that crime and delinquency result when the individual's bond to society is weak or broken. Therefore, the first task of the control theorist is to identify the important elements of the social bond (Schreck & Hirschi, 2021) that enable an individual to overcome his/her personality problems. These elements they identify as commitment, attachment, involvement and belief. However, the importance of social institutions is at the heart of control theorists' interpretation of the four elements of the social bond. According to Peguero et al. (2011): An individual's bond to social institutions consists of four elements: emotional attachment to parents, peers, and conventional institutions such as school and work; commitment to long-term educational, occupational, or other conventional goals; involvement in conventional activities such as work, homework, hobbies; and belief in the moral validity of the law. (260) They argue further that while these four elements of social control can independently inhibit delinquency, the combined effect of the four elements of the social bond on delinquency is greater than the sum of their individual effects. Therefore, this theory is pertinent to this study because the study seeks to evaluate the important role of social institutions in human relationships, since sexual violence is often influenced by factors operating both at the individual level and at the level of society. It will further help in understanding the characters' impulsive responses toward sexual violence and their consequent survivalist strategies.

--- METHODOLOGY
The methodology adopted for this study is descriptive, aimed at examining rape and its consequent effects on victims, perpetrators, and society in Kunle Afolayan's October 1 and Moses Inwang's Alter Ego. These films were selected through a purposive sampling technique which enabled the researchers to select two films from a range of Nollywood productions that present the theme of rape and other forms of sexual violence.
More so, these films were selected due to their serious treatment and exploration of the reality of rape for the victims, perpetrators, and society, and consequently, they were discussed using content analysis for critical interpretation and discussion.

--- Rape in October 1 and Alter Ego
October 1 revolves around Aderopo, the prince of Akote, who rapes and kills his victims in the manner of a serial killer. Inspector Danladi Waziri is sent to Akote a few weeks before Nigeria's independence to unravel the mysterious murder of two women there. Examination of the corpses reveals that the deceased were raped before they were strangled and slit with a razor by their assailant. To ease the insecurity and tension caused by the mysterious killings, a dusk-to-dawn curfew is imposed. Regrettably, the security measures put in place do not deter Aderopo from carrying out more attacks. Unfortunately, he is killed while trying to evade the scene of another rape attack on Tawa, his boyhood sweetheart. October 1 examines the causes and the numerous effects of rape on victims, rapists and society. Aderopo and Agbekoya are two of the three brightest students in Akote favoured by Rev. Father Dowling, a British priest, to be taken to Lagos to further their education at Kings College. The decision to take them to Kings College to explore their academic potential is welcomed and appreciated by the people of Akote, who consequently entrust their future and safety to Father Dowling's guardianship. The prospects of attending Kings College are enormous, and expectations are high for these teenage boys to realise the dream of attaining sound education and prominence like Chief Obafemi Awolowo and Samuel Akintola, who are celebrated as precursors of the newly independent Nigeria. But behind the facade of Father Dowling's holy appearance lies a hidden and dark motive, one that eventually changes the boys' perception of care, trust and Western education. Agbekoya, in his confessional statement, reveals the traumatising experience they went through at the hands of Rev. Father Dowling. Agbekoya: I was fourteen; Ropo was twelve when we left for Lagos. During the daytime we attended school but every Thursday night Father would beckon... the man would do unspeakable things in that room, things I couldn't understand, things that destroyed my soul. Afterward, it would be Ropo's turn. That man violated me every Thursday for five months. I couldn't take it anymore. One day I stole some of his money, boarded a bus and I came back to Akote. From Agbekoya's narration of the dehumanising past, we see the role of the environment in the effectuation of rape. Most institutions of learning, especially boarding schools, are fertile ground where such acts of abuse take root. Unfortunately, Father Dowling, a homosexual, uses his position in society and the church to sexually violate the teenage boys entrusted to his care. As a priest, his behaviour counteracts Christian doctrines on sodomy as an abominable sexual relationship, more so in Nigeria where there is zero tolerance for such acts. Father Dowling's sense of ethics and morals depicts him as a person with low self-control, and his immoral act is a serious let-down for the boys. Unfortunately, homosexuality and moral degradation are on the increase within religious institutions today.
The traumatic experience Agbekoya undergoes not only questions his belief in the ideology of colonisation and the subsequent westernisation of all aspects that violate the cosmological existence of the African people, but also takes a toll on his soul and fuels his hunger for redemption. Unable to bear the unspeakable violation of his sexuality by the one who is expected to be a purveyor of moral and spiritual security, he takes the courage to give up his education and return to Akote to become a farmer. Generally, rape and other forms of sexual violence are serious offences against the personality of the victim. To Agbekoya, rape is an unspeakable act. But his inability to reveal the untold experience enshrines the culture of silence, which is one of the factors that enable rape to thrive both in traditional and modern Nigerian societies. Erroneously, he blames his family and society for exposing him to such evil and goes through post-traumatic nightmares, fear and shame due to his rape experience. The intimidation, shame, and agony of being raped constantly for five months by a priest force him to reject Western education and any attempt to suggest sending his son to school, and further sever him from society. However, he channels his commitment to becoming a successful farmer as a means of escaping the trauma of sexual abuse, as well as of economic and cultural survival. Unfortunately, the culture of silence motivates rapists to continue in their acts without recourse to the overall damage they inflict on their victims. Agbekoya's reticence on the issue exposes Aderopo to further sexual abuse at the hands of Father Dowling. In his confessional statement, he tells Inspector Waziri that he "had five months of Father Dowling, Ropo had six years". Unlike him, who takes the easy way out to preserve his sanity, Aderopo is determined to get to the apogee of his educational career, to the envy of his contemporaries like Mr. Olaitan and Miss Tawa. But his personality is masked in pain and deception from the six years of constant rape by Father Dowling. Like Agbekoya, he blames society for his ordeal and thus resorts to raping and killing female virgins in Akote to mirror the torture and abuse he endured. Aderopo's choice of raping and killing his victims without any regard for his position as the prince and heir apparent to the throne, a responsibility that entails the protection of his subjects, is ironic. Rape is condemned in Akote and is a source of great dishonour to a woman, her family and community, and can mar a woman's chances of getting married. Afonja, who is more bewildered by the desecration of a woman's purity than by her death, informs Inspector Waziri that all the young women in Akote are virgins and that it is a dishonour to lose one's virginity before marriage. By this, the film director underscores the role of abnormal eroticism and hyper-sexuality in the effectuation of rape on women. However, we find out that Aderopo's rampage is not merely a product of uncontrollable sexual desire but a patriarchal show of dominance that he learned from Rev. Father Dowling. This can be seen in the crude manner in which he reveals his identity to his victims, the struggles and forceful penetration, as well as the incisions he makes on the mutilated bodies of his victims. Interestingly, rapists are hardly strangers to their victims and environment, and their actions are never spontaneous; they are carefully planned and executed.
Hence, Inspector Waziri warns his officers that the killer knows the layout of the land well and, like the river of blood, moves in and out at will, hence the need to be battle-ready. Aderopo's onslaught in Akote goes beyond the mere rape and killing of female virgins to represent his identity crisis and quest for false retribution. His acts show a man at enmity with his environment. Unfortunately, his approach to justice is barbaric and defies the cosmological existence of Akote. He is filled with hate, pain and fear but fails to seek any form of help. Like Agbekoya, he blames social institutions (community, family, school and church) for his low self-control and inability to form a bond that would enable his healing; hence, he sees rape as a means to an end. Though Aderopo regrets the man he has become, a man far below the expectations of his people, he cannot help the situation he finds himself in. Instead, he reimagines himself as a demon far greater than Rev. Father Dowling, thus raising the issue of the role of society in assuaging the pain, stigma and other latent psychological illnesses associated with rape and in the reintegration of self-belief and worth. Unfortunately, he is shot dead and thus misses the opportunity to get any form of healing and restitution. More so, justice is attained through the death of Father Dowling, whom Agbekoya suffocates in his bed during one of his visits to Akote. The death of both Father Dowling and Aderopo signals the director's absolute rejection of a crime that undermines the cultural and ethical beliefs of a people. Alter Ego tells the story of Ada, a human rights lawyer who pursues justice for helpless victims of rape in urban centres. Driven by her childhood rape experience at the hands of her Physical Education teacher in secondary school, Ada is fuelled by the hunger to defend vulnerable women who are abused every day by men, including her love interest, Mr. Timothy. Unfortunately, she suffers from hyper-arousal, a post-traumatic stress disorder that makes her crave sex indiscriminately with men, especially her male domestic staff and office employees, and this becomes her albatross when her sister's fiancé, Daniel, becomes her victim. Her sister, Ngozi, is infuriated and disappointed with Ada for taking advantage of Daniel. In revenge for the betrayal, she connives with Daniel to stand as a witness against Ada in court during Timothy's trial. Nonetheless, Ada is not deterred by her sister's consequent betrayal in court but pursues her case against Timothy and eventually wins. Alter Ego highlights the vulnerability of rape victims in a society that prioritises sex as a means of gratification. Ada's case is one of the many underage stories of sexual abuse in the metropolis. The film is set in Lagos, a metropolitan society widely known for its hustle and bustle. Her parents are career-oriented, with little or no time to look after her welfare and her need for care and protection. Consequently, her childhood is devoid of adequate parental care and attention due to her parents' pursuit of an economic breakthrough. This exposes her to sexual violence, ranging from sexual assault to rape, with consequent threats to her education if she told anyone. Her rape experience at the hands of Mr. Kolade, her Physical Education teacher, takes an adverse toll on her, as it does on many victims of rape. Consequently, she is mentally broken and seeks a way out of her conundrum.
She narrates her childhood experience to Timothy thus: Ada: I was thirteen, JSS 2. My parents were the busiest in the world. My mother had just started her importation business and my father was a court judge. It was my mother's duty to pick me up from school but she always ran late. Every damn time she would run late, I would be the only one left in school for hours. There was this teacher, Physical Education; he would wait with me for hours after school until my mom would come to pick me up. Sometimes he would even volunteer to drop me at our store. My mom loved him. Oh, why won't she? Then one day he took me to the staff room, then he started to talk to me the way he had never spoken to me before. He started to touch me and he made me touch him, he showed me his private and I just knew something wasn't right and I screamed, I yelled, I scratched. There was no use. He forced himself on me that day. That was just the beginning. He did it frequently. I lost count. I would beg my mom please don't come late, not even one second, but she would come late every single damn time. He did it so much that by the time I turned fourteen, I had already gotten used to it. Her parents' near absence in the early stages of her life created a vacuum for Mr. Kolade to act out his inordinate sexual fantasies on her. Ada blames her mother's insensitivity for her constant defilement and suffering. Worse still, she did not receive any encouragement to speak up and thus bottled up her trauma for many years. She confesses to Timothy further that she could not tell anyone because: Ada: He threatened me. I was so young. What did I know? He said if I ever told anyone he would turn it all on me and that he would make sure I never got into any other school again. I summoned the courage to tell someone, but my parents were always so busy. They were never just there. I was so scared. I just kept it to myself. I became my own secret; my dirty little secret. The failure of Ada's parents is an indictment of the quality of parenting and of the failure of the family as an institution in providing social security for children, especially young girls. More so, society expects people like Mr. Kolade to be role models to the younger generation. Unfortunately, people like Mr. Kolade are wolves in sheep's clothing and will stop at nothing to satisfy their sexual urges. His inappropriate relationship with Ada traumatised her to the point that she got used to the abuse within a year. However, she is not deterred by her ordeal; rather, the anger and resentment she feels drive her desire to survive and become successful in society. As a human rights activist, she dedicates her time and mission to the fight for justice for vulnerable girls and women who are constantly raped and battered. She stops at nothing to ensure that she gets justice for her clients, even if it means publicly exposing and disgracing perpetrators. For example, she films Pastor Duke's sexual abuse of a woman seeking prayers for the fruit of the womb in his office, plays the tape during church service before the full glare of his congregation, and further goes ahead to prosecute him in court for raping a minor. Her combination of legal and unorthodox approaches amazes her soon-to-be in-law, Daniel, who confesses that "this is an operation perfectly executed. I mean, it is different from the way lawyers I know handle cases".
Her new course in life sees her challenge powerful individuals and institutions that encourage sexual violence against poor and vulnerable women and children in society. Hence, she explains to Daniel that when "the poor and the vulnerable ones are taken advantage of. It fills my rage. It is perceived that they have no voice, no identity. I choose to be their voice". Unfortunately, her success in the legal field and attendant fame in society are marred by her addictive and uncontrollable sexual urge, a disorder caused by her constant abuse by Mr. Kolade. This health condition, by and large, creates an inner personality that has an insatiable desire for sex. This personality is an image of her past that refuses to go away regardless of the years and resources she has spent on therapy. In other words, her past experience keeps conflicting with her present reality. Her excessive craving for sexual pleasure sees her sleep with her male domestic and office employees, including Daniel, her sister's fiancé. Hence, Timothy accuses her of being a nymphomaniac and a sex pervert who takes advantage of her position as a successful independent woman to lure young men into having sex with her. While she is not in denial of her debility, she recognises that she is dealing with a force greater than she can control and thus confesses to her sister Ngozi that "there is something in me. I can't control it." She blames her teacher for creating that monster in her. But we observe as events unfold that she is not docile about her situation. Through her conversation with her psychotherapist, Dr. Tochi, we realise the difficulties she faces in maintaining her treatment plan, caught between success and pain, the old experience and the new reality. Dr. Tochi blames her for the slow pace of her recovery. Dr. Tochi: We had treatment plans for you. You didn't follow it. What's important now is to follow your treatment plan. Fight! Fight! Her inability to overcome her sexual disorder underscores the life-threatening effect of rape on a victim and the need to seek professional help. However, she bases her recovery on creating a successful alter ego as a lawyer to escape the trauma of her past experience. Yet it is society that profits more from her gesture than she does, because behind the successful law career is a child that seeks revenge for the wrong done to her. But as Dr. Tochi later advises her, the needed purgation of emotion and healing of the mind comes through timely intervention, adherence to treatment plans and the determination to rise above pain and uncertainty. One of the uncertainties Ada faces is her ability to fall in love with a man. This opportunity presents itself with Mr. Timothy, a globally renowned gentleman who runs a Non-Governmental Organization (NGO) that caters to the needs of internally displaced persons in various camps. His interest in the welfare of women and children in displaced environments endears him to Ada, who later reveals to her sister that "I don't even think I have felt this way for any man before". But far beyond his humanitarian work is a sexual predator that leverages his connections and influence to rape, abuse and molest young female beneficiaries. His masked personality is revealed by Aisha, one of his victims of sexual abuse. Aisha: Mr. Timothy forced himself on me and when I refused, he raped me after beating me mercilessly. He threatened that if I told anyone he would remove my brother and I from school... and he would kick us out too.
Consequently, Aisha warns Ada to be wary of Timothy, as he is not who she and the world think he is. Ada is enraged by Timothy's monstrous actions against vulnerable people like Aisha and decides to bring Timothy to book regardless of the romantic attraction she feels for him. Timothy sneakily uses emotional blackmail to talk Ada out of pursuing any legal battle against him. He pleads thus: Timothy: Baby, listen to me. You shared your story with me, I listened, I understood, I cried with you, I felt your pain. Now, I want you to hear my story and give me the same understanding and empathy... I was sexually abused as a child as well by my nanny between the ages of twelve and eighteen. I was repeatedly sexually abused by that demon. And like most abused children, I am suffering the after effect of repeated sexual abuse. Am sure you are familiar with such conditions. Timothy's personality reveals a man that lacks self-control and, as such, lacks belief, commitment, and confidence in the ability of social institutions to foster protection for rape victims. However, Ada is neither threatened by his sarcasm, nor does she let her emotional feelings cloud her sense of justice. Therefore, she corrects Timothy that "raping and assaulting people are not side effects of sexual abuse". She goes further to ask him: Ada: having suffered the consequences, the humiliation and degradation of sexual abuse, how could you possibly do that to another human being? Ada understands the pain and trauma of being a victim of sexual abuse, and this gives her the courage to prosecute Timothy in court, with the belief that "every time a child molester faces" her in court "he or she must go to jail". Ada's quest for just deserts leads her, against all odds, to triumph over Timothy in court, and consequently to get him a fourteen-year jail sentence. Relatedly, as the film ends, we see that her Physical Education teacher was unrepentant and, by an act of providence, got a deserving retribution for his crime against humanity. He confesses to Ada thus: Mr. Kolade: When you left the school, I tried to rape another girl, but I was caught. I faced the wrath of jungle justice. Even so, I spent fourteen years of my life in jail. When I was released from prison, I was sick, homeless and hopeless. Strangers feed me with their crumbs. Mr. Kolade's confession is a pointer to the fact that nature abhors any form of crime against humanity, more so when such a crime is targeted at the sexuality of an individual. Hence, through his ordeal, we see that society and natural justice have set him on a course of abject penury and abandonment. However, Ada's courage to face her past aggressor is tailored toward her recovery and desire to get justice. The film director uses her vociferousness and personal struggles in life to advocate for an immediate response to the crime of rape and all other forms of sexual violence that hinder the growth and development of proper human relationships in society.

--- DISCUSSION
Rape is undoubtedly a challenge facing both traditional and contemporary societies. The enormity of this crime traverses beyond personal aggression to become an attack on the institutions that foster human relationships. Unfortunately, the primary perpetrators of this crime are individuals who are supposed to be moral agents, whose immoral acts against the younger generation and the vulnerable in society result in a vicious cycle.
From the analyses of these films, we see the characters of Father Dowling and Aderopo in October 1, and Mr. Kolade and Timothy in Alter Ego, as representatives of abusive and decadent institutions, who see sex as a means of personal gratification and thus use their positions in society to act out their sexual fantasies on their victims. Their low self-esteem and lack of control drive their impulsiveness towards rape. Unfortunately, most victims of rape are members of the younger generation entrusted to the care of these social agents. Due to their age, the tendency to model deviant sexual behaviour is high, thus creating a vicious cycle, as well as latent psychological effects that manifest in adulthood. In October 1, the director shows the vicious nature of rape and its latent effects through the ordeal Aderopo goes through, resulting in his adoption of rape and the consequent killing of his victims as means of assuaging his pain. Also in Alter Ego, the character of Mr. Timothy is developed to show the latent, vicious-cycle effect of rape on individuals. The obvious implication is that society mainly suffers. Afolayan and Inwang use the pain and traumatic experiences of these characters to underline the dangers of sexual violence on young minds, which more often than not result in severance from all emotional attachment, commitment, involvement and belief in social institutions, thereby informing the adoption of violence as a means of survival. In contrast, the characters of Agbekoya and Ada are projected to have a strong will to survive their experiences and to channel their anger and pain toward the fight against rape. The directors are able to achieve this through the characters' personal pursuits in life and their quest to speak out and defend the vulnerable around them. In Alter Ego, Ada's combination of unorthodox and legal methods of justice proves potent in instilling the consciousness that society and the law abhor sexual violence. Furthermore, her ability to seek professional help from a psychologist shows that the survival of rape victims is a function of personal willingness and timely intervention from supportive institutions. Though she is a product of an abusive institution, her ability to maintain a strong social bond with social institutions, through attachment and commitment in her law career and involvement and belief in the legal system, enables her to toe the path of self-redemption. Furthermore, the films highlight the role of parenting and the environment as factors that contribute to the vulnerability of young people and their exposure to rape. The films show that perpetrators of rape are products of the environment in which they operate, and therefore the role of parents in ensuring the safety of their children and participating actively in their education is imperative. These films thus become an awareness campaign and a clarion call for the involvement of parents in the security and health education of their children.

--- CONCLUSION
The role of film in the fight against societal ills like sexual violence is not in doubt. It is important for Nollywood film practitioners to use their works as weapons against the culture of rape and all other types of sexual violence in society that hinder human development.
This can be achieved by fully exploring the narrative of sexual violence to help expose the aesthetics of the crime and to help victims seek appropriate help. More importantly, social institutions like the family, school, church, government and non-governmental organisations, despite their failures, still have a major role to play in the intervention and survival of rape victims in Nigeria. The ability of these institutions to intensify awareness of the dangers of sexual violence, as well as take a stand in the prosecution of sexual offenders, will go a long way in reducing the crime in Nigeria.
Researchers know relatively little about the educational attainment of sexual minorities, despite the fact that educational attainment is consistently associated with a range of social, economic, and health outcomes. We examined whether sexual attraction in adolescence and early adulthood was associated with educational attainment in early adulthood among a nationally representative sample of US young adults. We analyzed Waves I and IV restricted data from the National Longitudinal Study of Adolescent Health (n=14,111). Sexual orientation was assessed using self-reports of romantic attraction in Waves I (adolescence) and IV (adulthood). Multinomial regression models were estimated and all analyses were stratified by gender. Women attracted to the same-sex in adulthood only had lower educational attainment compared to women attracted only to the opposite-sex in adolescence and adulthood. Men attracted to the same-sex in adolescence only had lower educational attainment compared to men attracted only to the opposite-sex in adolescence and adulthood. Adolescent experiences and academic performance attenuated educational disparities among men and women. Adjustment for adolescent experiences also revealed a suppression effect; women attracted to the same-sex in adolescence and adulthood had lower predicted probabilities of having a high school diploma or less compared to women attracted only to the opposite-sex in adolescence and adulthood. Our findings challenge previous research documenting higher educational attainment among sexual minorities in the US. Additional population-based studies documenting the educational attainment of sexual minority adults are needed.
research (Institute of Medicine 2011). Lack of information on the educational attainment of lesbian, gay, and bisexual (LGB) adults and a reliance on non-probability samples to describe demographic characteristics of LGB populations were of particular concern. Given strong evidence that educational attainment is consistently and positively associated with a range of social, economic, and health outcomes, including, for example, a sense of personal control (Mirowsky and Ross 2003; Schieman and Plickert 2008), occupational status (Kerckhoff, Raudenbush, and Glennie 2001), income (Elman and O'Rand 2004; Kerckhoff et al. 2001; Murnane, Willett, and Levy 1995), health (Elo 2009; Lynch 2003; Ross and Wu 1995) and longevity (Elo 2009; Kitagawa and Hauser 1973; Miech et al. 2011; Rogers et al. 2010), it is surprising that so few studies have investigated disparities in educational attainment by sexual orientation. To our knowledge, our study is the first to examine the educational attainment of sexual minorities using a nationally representative sample of US young adults. Our study is also novel in that it applies a life course perspective to identify potential mechanisms through which educational disparities by sexual orientation manifest. One study that explicitly examined educational attainment among gays and lesbians found that sexual minorities with same-sex partners had higher educational attainment than married heterosexuals (Black et al. 2000). Using the 1990 Census, Black and colleagues (2000) found that among 25 to 34 year olds, approximately 43% of gay partnered men had at least a college degree compared to 24% of married heterosexual men, whereas 47% of lesbian partnered women had at least a college degree compared to 22% of married heterosexual women. Other studies on wage discrimination based on sexual orientation have also reported higher levels of educational attainment among sexual minority persons in bivariate analyses using a variety of population-based data sources (e.g., the General Social Survey, the Current Population Survey, and the California Health Interview Survey) (Berg and Lien 2002; Black et al. 2003; Black, Sanders, and Taylor 2007; Carpenter 2005; Daneshvary, Waddoups, and Wimmer 2008; Elmslie and Tebaldi 2007). All of these studies, however, had limited external validity, as the samples were restricted to cohabitating partners, full-time workers, or both. Thus, a large segment of the LGB population was excluded from prior estimates of educational attainment, potentially leading to biased conclusions about educational disparities by sexual orientation. Moreover, educational attainment is not merely a marker of human capital, but reflects a dynamic and evolving interaction between individuals and their social environments from childhood through adulthood (Walsemann, Geronimus, and Gee 2008). This follows a life course perspective that posits that childhood and adolescent experiences can result in the accumulation of educational advantages or disadvantages, which over time impact an individual's likelihood of attaining a post-secondary degree (Elman and O'Rand 2007).
For example, childhood SES (Cabrera and La Nasa 2001; Ewert 2010; Goldrick-Rab 2006; Grodsky and Jackson 2009), childhood health (Eide and Showalter 2011; Eide, Showalter, and Goldhaber 2010; Haas and Fosse 2008; Jackson 2009), peer victimization (Haas and Fosse 2008; Nishina, Juvonen, and Witkow 2005) and academic performance (Ewert 2010; Jackson 2009; Messersmith and Schulenberg 2008) can have long-term effects on educational careers. Using data from the National Longitudinal Study of Youth (1997), Jackson (2009) found that adolescents who reported poorer health were less likely to graduate from high school by age 19 or attend a 4-year college compared to those who reported better health. Academic participation and performance were strong mediators of this relationship, accounting for 50% of the difference in 4-year college attendance. Others have found that poor psychological functioning decreases school functioning (Nishina et al. 2005) and increases the risk of dropping out of high school (Breslau et al. 2008; Fletcher 2010). A key driver of the relationship between adolescent health and educational attainment may be experiences of peer victimization during childhood and adolescence. Nishina and colleagues (2005) documented poorer psychological functioning and increased numbers of somatic complaints (e.g., headaches, stomachaches) among middle-schoolers who reported verbal or physical assaults or general harassment. Psychological functioning and somatic complaints were in turn associated with lower school functioning. Haas and Fosse (2008) found that feeling safe in school increased the odds of timely high school graduation and college enrollment, whereas physical altercations decreased the odds. A recent meta-analysis of 33 cross-sectional studies investigating peer victimization and academic functioning demonstrated a significant, negative association; greater peer victimization was associated with poorer academic functioning (Nakamoto and Schwartz 2010). The relationship between peer victimization, adolescent health, and academic achievement is of particular concern with regard to LGB populations, as LGB students are more likely than heterosexual students to miss school because they feel unsafe (Bontempo and D'Augelli 2002; Poteat et al. 2011), be physically threatened (O'Shaughnessy et al. 2004), experience psychological problems (Russell and Joyner 2001), feel marginalized at school, hold lower expectations of attending college, and have lower academic performance (O'Shaughnessy et al. 2004; Pearson, Muller, and Wilkinson 2007; Poteat et al. 2011). From a life course perspective (Elder, Kirkpatrick Johnson, and Crosnoe 2003), such experiences can have life-long consequences for the educational attainment of LGB individuals by decreasing the likelihood of graduating from high school or college (Cabrera, Nora, and Castaneda 1993; Buchmann, DiPrete, and McDaniel 2008; Hearn 1992). As such, adolescents who identify as LGB or are suspected of being LGB may experience a series of events during high school that diminishes academic achievement, resulting in lower educational attainment as compared to heterosexual adolescents. This may also be the case for adolescents who are aware of their same-sex attractions but have not "come out" to their peers, particularly if they perceive that their peers will harass or bully them (Meyer 2003).
Not all LGB adults, however, were aware of their same-sex attractions or exhibited gender atypical behavior (i.e., did not conform to traditional gender roles) as adolescents (Frankowski and The Committee on Adolescence 2004; Jager and Davis-Kean 2011; Saewyc 2011). As a result, these adults may not have experienced harassment or discrimination based on their sexual orientation during high school. By not experiencing these psychosocial stressors during adolescence, the high school academic performance of LGBs who became aware of their same-sex attractions as adults would likely be similar to the academic performance of heterosexuals. Thus, one might expect that their educational attainment would also be similar to that of heterosexual adults.

This expectation is based on the life course concept of timing (Elder et al. 2003), which posits that the impact of a given exposure depends upon when the exposure occurs during the life course. In particular, exposure to events or experiences during high school has the greatest impact on successful, "on-time" attainment of a post-secondary degree (Elman and O'Rand 2007). Although individuals who become aware of same-sex attractions during college may still experience issues with college persistence and completion, these individuals have already attained a base level of educational attainment - a high school diploma - that those who became aware of their same-sex attractions in high school may not have attained. Thus, it is important to consider how the timing of awareness of same-sex attractions in adolescence and/or early adulthood impacts educational attainment, since the timing might have important effects on individuals' social and educational trajectories.

Our study advances current LGB research by exploring whether or not life course sexual attraction is associated with educational attainment among a nationally representative sample of US young adults. We chose to use life course sexual attraction as our measure of sexual orientation for two reasons. First, awareness of sexual attraction occurs, on average, around age 9 for boys and age 10 for girls, whereas the average age of sexual identification as LGB occurs around age 16 to 17 for girls and boys, respectively (D'Augelli 2006; Herdt and Boxer 1993). Our first assessment of sexual orientation occurs when respondents were 11 to 20 years old; thus, a measure of sexual attraction likely provides a more valid assessment of sexual orientation for our sample than sexual identity, given that individuals may not identify as LGB until late adolescence or early adulthood (Savin-Williams 2001). Second, in our study, sexual attraction was measured in adolescence and adulthood, whereas sexual identity was only measured in adulthood. By using measures of sexual attraction at both time points, we meet one of the important criteria for longitudinal analysis: measurement consistency (Singer and Willett 2003).

We hypothesize that individuals with same-sex attractions during adolescence will report lower educational attainment in adulthood compared to individuals with only opposite-sex attractions in adolescence and adulthood, but that individuals with same-sex attractions in adulthood only will report similar levels of educational attainment as individuals with only opposite-sex attractions in adolescence and adulthood. We also expect that educational disparities will be attenuated with adjustment for adolescent health and experiences, as well as high school academic performance.
--- METHODS ---

--- Sample

We analyzed Wave I (1994/5) and Wave IV (2007/8) restricted data from the National Longitudinal Study of Adolescent Health (Add Health), a nationally representative sample of adolescents in grades 7-12 in 1994-1995 (Harris et al. 2009). The Add Health sample is representative of US schools with respect to region of country, urbanicity, school size, school type (private/public), and race/ethnicity. Our analysis used data from in-home interviews of respondents in Waves I and IV, as well as data from in-home interviews of parents in Wave I. We restricted our sample to those assigned probability weights in Wave IV (n=14,800). Approximately 688 respondents were excluded due to item non-response on covariates (352 females and 332 males). Most of these exclusions were due to item non-response on self-reported grades (222 females and 196 males). After exclusions, our final analytic sample consisted of 14,111 respondents (7,516 females and 6,595 males).

We explored potential differences in key demographics between the sample with complete data and the sample that was excluded from our analyses due to item non-response. Among females, older respondents and Hispanics were more likely to have missing data on covariates than younger respondents and whites, whereas females who resided in rural communities at baseline were less likely to have missing data on covariates than females who resided in urban communities at baseline. Among males, older respondents, blacks, and Hispanics were more likely to have missing data on covariates than younger respondents or whites. It is important to note that our overall rate of item non-response (~5%) is quite minimal and is therefore unlikely to result in significant biases in analyses using complete data (Heeringa, West, and Berglund 2010).

--- Measures

Educational Attainment-Respondents reported their highest level of education along with the type of degrees they had received by Wave IV. We coded respondents as 1=high school diploma or less, 2=some college or Associate's degree, and 3=Bachelor's degree or higher. We considered other specifications of this variable (e.g., 8 categories, 4 categories), but our specification yielded substantively similar results and did not suffer from issues of data sparseness.

Life Course Sexual Attraction-In Wave I, respondents were asked, "Have you ever had a romantic attraction to a female? To a male?" In Wave IV, respondents were asked, "Are you romantically attracted to females? To males?" We categorized respondents as 1) attracted only to the opposite sex in youth and adulthood, 2) attracted to the same sex in youth, but not adulthood, 3) attracted to the same sex in adulthood, but not youth, 4) attracted to the same sex in youth and adulthood, and 5) not attracted to either the same or the opposite sex in youth or adulthood. We considered categorizing respondents who did not report a romantic attraction to either sex during adolescence separately from those who reported no romantic attraction to either sex in adulthood, but issues with data sparseness prevented us from doing so. Sensitivity analyses, however, suggested that these groups experienced similar levels of educational attainment. As a result, individuals who reported no attraction to either sex in youth and adulthood are included in the same category as individuals who reported no attraction to either sex in youth only, as well as individuals who reported no attraction to either sex in adulthood only. Further, again due to issues of data sparseness, we were unable to disaggregate individuals attracted to both sexes (i.e., bisexual attractions) from individuals attracted only to the same sex.
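To make this coding concrete, the sketch below shows one way the outcome and exposure could be derived in Stata, the package used for our analyses. It is illustrative only: every variable name (educ_w4, att_opp_w1, att_same_w1, att_opp_w4, att_same_w4) is a hypothetical placeholder rather than an actual Add Health item, and the 8-category education codes are assumed.

    * Illustrative coding sketch; variable names are hypothetical placeholders,
    * not actual Add Health item names.

    * Educational attainment at Wave IV, collapsed from an assumed detailed
    * 8-category degree variable into the three categories used here
    recode educ_w4 (1/3 = 1 "High school diploma or less")            ///
                   (4/5 = 2 "Some college or Associate's degree")     ///
                   (6/8 = 3 "Bachelor's degree or higher"), gen(educ3)

    * Life course sexual attraction from four 0/1 attraction reports
    gen     lcattr = 1 if att_opp_w1==1 & att_same_w1==0 & att_opp_w4==1 & att_same_w4==0
    replace lcattr = 2 if att_same_w1==1 & att_same_w4==0   // same-sex in youth only
    replace lcattr = 3 if att_same_w1==0 & att_same_w4==1   // same-sex in adulthood only
    replace lcattr = 4 if att_same_w1==1 & att_same_w4==1   // same-sex in youth and adulthood
    * No attraction to either sex at one or both waves (merged category);
    * placed last so it takes precedence, a simplification of the grouping above
    replace lcattr = 5 if (att_opp_w1==0 & att_same_w1==0) | (att_opp_w4==0 & att_same_w4==0)
    label define lcattr 1 "Opposite-sex only" 2 "Same-sex, youth only"  ///
        3 "Same-sex, adulthood only" 4 "Same-sex, youth and adulthood" 5 "No attraction"
    label values lcattr lcattr

In practice one would also resolve item missingness before these assignments; the last-write-wins precedence given to the no-attraction category is our simplification of the merging described above.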
Adolescent Health at Wave I-We include a number of indicators assessing the health and health behaviors of respondents at Wave I in order to assess the extent to which adolescent health and health behaviors mediate the relationship between sexual attraction and educational attainment. Self-rated health was assessed using the following question: "In general, how is your health? Would you say excellent, very good, good, fair, or poor?" Higher values reflect better health. We measured depressive symptoms using the 19-item Center for Epidemiologic Studies Depression Scale (CES-D) available in Add Health. Respondents were asked how often in the past week they had experienced any of 19 symptoms. Per convention, positively worded items were reverse coded and the 19 items were summed (Cronbach's α=0.86). Values ranged from 0 to 56. We measured somatic symptoms using 12 indicators of physical symptoms (i.e., headache, feeling hot, stomachache, cold sweats, weakness, feeling sick, waking up tired, dizziness, chest pains, aches or pains, trouble falling asleep, and trouble relaxing). Respondents were asked how often they experienced any of these symptoms in the past 12 months (0=never, 4=every day). Scores on the summated scale ranged from 0 to 41 (Cronbach's α=0.77). We coded respondents as victimized if, in the past 12 months, they had experienced any of the following: (1) someone pulled a knife or gun on them; (2) they were shot or stabbed; or (3) they were jumped.

Adolescent Academic Performance and Expectations-Because LGB adolescents may experience greater harassment and discrimination at school due to their sexual orientation, their academic performance may suffer. Thus, we include indicators of academic performance and expectations measured at Wave I to assess the extent to which adolescent academic performance and expectations mediate the relationship between sexual attraction and educational attainment. We measured difficulties in school using four items. Respondents were asked how often during the 1994-5 school year they had trouble getting along with teachers, paying attention in school, getting homework done, and getting along with students (0=never, 4=every day). Scores on the summated scale ranged from 0 to 16 (Cronbach's α=0.69). Academic expectations were assessed using the following question: "On a scale of 1 to 5, where 1 is low and 5 is high, how likely is it that you will go to college?" We coded respondents as having high expectations if they reported a 4 or 5 on the scale. We calculated respondents' grade point average (GPA) in the most recent grading period by averaging their grades (using a 4-point scale, where 1=D or lower, 2=C, 3=B, 4=A) in English, mathematics, history or social science, and science. Values ranged from 1 to 4.
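As with the coding sketch above, the scale construction just described can be illustrated in Stata. All item names (cesd1-cesd19, som1-som12, sch1-sch4, the subject grades, and the victimization items) are hypothetical placeholders, and which CES-D items are reverse coded depends on the instrument version, so the four items singled out below are arbitrary stand-ins.

    * Illustrative scale construction; item names are hypothetical placeholders.

    * CES-D: reverse code the positively worded items (assumed 0-3 coding), then sum
    foreach v of varlist cesd4 cesd8 cesd12 cesd16 {   // placeholder item numbers
        replace `v' = 3 - `v'
    }
    egen cesd = rowtotal(cesd1-cesd19)        // summed depressive symptoms scale
    alpha cesd1-cesd19                        // internal consistency (0.86 reported)

    * Somatic symptoms: 12 items, each 0 (never) to 4 (every day)
    egen somatic = rowtotal(som1-som12)

    * Difficulties in school: 4 items, each 0-4
    egen schooldiff = rowtotal(sch1-sch4)
    alpha sch1-sch4                           // 0.69 reported

    * High academic expectations: 4 or 5 on the 1-5 likelihood-of-college item
    gen hiexpect = inrange(expect_college, 4, 5) if !missing(expect_college)

    * GPA: mean of four subject grades on a 4-point scale
    egen gpa = rowmean(grade_eng grade_math grade_hist grade_sci)

    * Victimization: any of three events in the past 12 months
    gen victim = (knifegun==1 | shotstab==1 | jumped==1)

Note that egen's rowtotal() treats missing items as zero by default, so in a real analysis item non-response would be handled first, consistent with the exclusions described in the Sample section.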
Socio-demographics-We include a number of covariates that have been associated with educational attainment in prior research (Buchmann et al. 2008; Cabrera and La Nasa 2001; Goldrick-Rab 2006). We categorized self-reported race/ethnicity as non-Hispanic white, non-Hispanic black, Hispanic, or other race/ethnicity. We categorized respondents as immigrants if they reported being born outside of the US to non-US citizens. Age in Wave IV ranged from 24 to 34 years old. Family structure in Wave I was categorized as nuclear (two biological parents), step-family (one biological and one step-parent), female-headed, extended/intergenerational family, and other. We also include region of the country (West, Midwest, South, or Northeast) where the respondent resided in Wave I, as well as urbanicity in Wave I (urban, rural, or suburban).

Finally, we constructed a composite measure of family SES because multivariate indices of SES are more reliable than single-item measures and doing so reduced issues with item-missingness. Family SES was calculated as the mean of standardized (z-score) measures of family poverty, parental education, and parental occupation. The composite score was calculated for all respondents who had information on at least one of the indicators used in the composite measure. Unemployed and stay-at-home parents did not report an occupational status. If the respondent resided with one parent, information for the one parent was used. If the respondent resided with two parents, the average of both parents' information was calculated. Positive values represented higher levels of SES (Cronbach's α=0.66).

--- Analytic Approach

Given that women often show greater fluidity in their sexuality and some women become aware of their romantic attractions significantly later in the life course compared to men (Diamond 1998; Diamond 2000; Diamond 2012; Floyd and Bakeman 2006; Floyd and Stein 2002; Savin-Williams 2001; Savin-Williams and Diamond 2000), all analyses were gender stratified. We began with descriptive statistics to understand the distribution of the data. Next, we examined bivariate associations between selected characteristics and life course sexual attraction. We used multinomial logit regression to examine the association between life course sexual attraction and educational attainment, rather than the more commonly used ordered logit regression, because ordered logit regression assumes that the explanatory variables have the same effect on the outcome across all levels of the outcome (Hardin and Hilbe 2012). This assumption was not met with our data. We report predicted probabilities and marginal effects rather than relative risks in our multinomial logit regression models, as predicted probabilities and marginal effects provide easily understood measures that can be used to compare risk across population groups. We weighted all analyses to adjust for Add Health's sampling design and respondent attrition using the svy command in Stata v12. Predicted probabilities and marginal effects were calculated using the margins command in Stata v12.
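A condensed sketch of this workflow in Stata follows. The design variables (psu, stratum, wt4) and covariate names are hypothetical placeholders for the corresponding Add Health variables, and the model shown is a simplified stand-in for our Model 1 rather than the exact specification.

    * Illustrative analysis sketch; design variables and covariates are placeholders.

    * Composite family SES: mean of the available standardized indicators
    egen z_pov   = std(fam_poverty)
    egen z_pared = std(par_educ)
    egen z_occ   = std(par_occ)
    egen famses  = rowmean(z_pov z_pared z_occ)   // defined if at least one indicator present

    * Declare the complex survey design (weights, clustering, stratification)
    svyset psu [pweight=wt4], strata(stratum)

    * Survey-weighted multinomial logit, run within the female subpopulation
    svy, subpop(if female==1): mlogit educ3 i.lcattr c.age i.race i.famstruct ///
        i.region i.urban c.famses i.immigrant, baseoutcome(1)

    * Predicted probabilities with covariates at their means, by attraction group
    margins i.lcattr, atmeans predict(outcome(1))   // Pr(high school diploma or less)

    * Marginal effects of attraction group on Pr(Bachelor's degree or higher)
    margins, dydx(i.lcattr) atmeans predict(outcome(3))

Stratifying via subpop() rather than by dropping male observations keeps the survey variance estimation correct, which is the standard recommendation for subgroup analysis of complex samples.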
--- Sensitivity Analyses

We ran a set of sensitivity analyses to determine whether results were being driven by model specification. These analyses included baseline measures of suicidal ideation, engagement in risky health behaviors (i.e., smoking, binge drinking, illicit drug use), engagement in delinquent behaviors, feelings of school belonging, and parental support. Results from these analyses did not alter our inferences. Moreover, these covariates were unrelated to educational attainment in multivariate models. Given issues of parsimony and to retain sample size, we chose to exclude these variables from our final models.

--- RESULTS ---

--- Sample Characteristics

Table 1 presents sample characteristics by gender. Among females, 35% attained at least a Bachelor's degree by Wave IV, whereas 28.5% had attained a high school diploma or less. Over 76% reported attraction only to males in youth and adulthood, 3.5% reported attraction to females in youth, but not adulthood, 8% reported attraction to females in adulthood, but not youth, and 1.4% reported attraction to females in youth and adulthood. On average, female students reported good to very good health (M = 3.8), the majority held expectations to attend college (79.9%), and the average GPA in the last academic term was 2.9.

Among males, approximately 28% attained at least a Bachelor's degree by Wave IV, whereas 38.9% attained a high school diploma or less. Over 76% reported attraction only to females in youth and adulthood, 6.1% reported attraction to males in youth, but not adulthood, 3.1% reported attraction to males in adulthood, but not youth, and 1% reported attraction to males in youth and adulthood. On average, male students reported very good health (M = 4.0), the majority held expectations to attend college (71.9%), and the average GPA in the last academic term was 2.7.

--- Bivariate Analysis

Table 2 presents selected bivariate associations between sample characteristics and life course sexual attraction, separately for females and males. Among females, educational attainment varied by life course sexual attraction, with women who had consistent attractions in youth and adulthood (to the opposite sex or to the same sex) experiencing similarly high levels of educational attainment. That is, approximately 38% of women attracted only to the opposite sex in youth and adulthood, as well as 38% of women attracted to the same sex in youth and adulthood, had a college degree, whereas 21.5% of women attracted to the same sex in adulthood only had attained a college degree. Those without romantic attractions in youth or adulthood also had lower rates of attaining a college degree (26.2%) compared to women attracted only to the opposite sex in youth and adulthood. Significant differences by life course sexual attraction were also noted for all key covariates. In general, these findings suggest that women with same-sex attractions as adults had poorer adolescent health and greater difficulties in school than women with only opposite-sex attractions in youth and adulthood.

Among males, we found significant differences by life course sexual attraction across all covariates presented in Table 2 except for race/ethnicity and self-rated health. Men attracted to the same sex only in youth had lower educational attainment compared to men attracted only to the opposite sex in adolescence and adulthood (50.5% vs. 35.9% had a high school diploma or less). Similar rates of low education were found among men without romantic attractions in youth or adulthood (52.9%). Additionally, 36.5% of men who reported attraction to the same sex in youth only had been victimized in the year prior to baseline compared to 28.3% of men who reported attraction only to the opposite sex in youth and adulthood.

--- Multinomial Logit Regression Analyses

We present weighted estimates of predicted probabilities and marginal effects for females in Table 3. Our model building approach allowed us to test our hypotheses. In Model 1, we examined the effects of life course sexual attraction on educational attainment, with adjustment for socio-demographic covariates.
We ran two additional models that adjusted for adolescent health and experiences in Wave I (Model 2) and academic performance and expectations in Wave I (Model 3) to test our hypothesis that educational disparities by life course sexual attraction would be attenuated after adjustment for these covariates. Estimates represent average predicted probabilities, as all covariates were centered at their grand means.

Among females, those who were attracted to the same sex in adulthood only had lower educational attainment than women who were attracted only to the opposite sex in youth and adulthood (Model 1). Specifically, the predicted probability of having a high school diploma or less was significantly greater for women attracted to the same sex in adulthood only compared to women attracted only to the opposite sex in youth and adulthood (PP=0.37 versus PP=0.24, respectively). Women who reported no attraction to either sex in youth or adulthood also had a higher predicted probability of having a high school diploma or less compared to women attracted only to the opposite sex in youth and adulthood (PP=0.34 versus PP=0.24, respectively). Women who were attracted to the same sex in youth only reported similar levels of educational attainment as women attracted only to the opposite sex in youth and adulthood.

Adjustment for adolescent health and experiences at Wave I (Model 2) and academic performance and expectations at Wave I (Model 3) attenuated, but did not eliminate, differences in the predicted probabilities of having a high school diploma or less and having a Bachelor's degree or higher between women attracted to the same sex in adulthood only and women attracted only to the opposite sex in youth and adulthood. For example, the gap in the predicted probabilities of attaining a college degree between these two groups narrowed from -0.14 in Model 1 to -0.10 in Model 3, but the gap was still statistically significant in Model 3. Statistically significant differences in predicted probabilities found between women who reported no attraction in youth or adulthood and women attracted only to the opposite sex in youth and adulthood remained across Models 2 and 3. We also found that adjustment for adolescent health and experiences at Wave I (Model 2) resulted in a statistically significantly lower predicted probability of having a high school diploma or less among women with same-sex attractions in youth and adulthood (PP=0.15) compared to women attracted only to the opposite sex in youth and adulthood (PP=0.24). The gap in predicted probabilities between these two groups remained significant after further adjustment for academic performance and expectations at Wave I (Model 3).

We present weighted estimates of predicted probabilities and marginal effects for males in Table 4. Among males, those who were attracted to the same sex in youth only had a higher predicted probability (PP=0.48) of having a high school diploma or less and a lower predicted probability of having some college or an Associate's degree (PP=0.31) than men who were attracted only to the opposite sex in youth and adulthood (PP=0.36 and PP=0.39, respectively, Model 1). Similar results were found for men who reported no attraction to either sex in youth or adulthood. Additionally, men who reported no attraction to either sex in youth or adulthood also had a lower predicted probability of having a Bachelor's degree or higher (PP=0.17) than men who were attracted only to the opposite sex in youth and adulthood (PP=0.25).
Adjustment for adolescent health and experiences at Wave I (Model 2) attenuated the differences in predicted probabilities of having a high school diploma or less between men who were attracted to the same sex in youth only and men who were attracted only to the opposite sex in youth and adulthood. Specifically, the difference in predicted probabilities between these two groups was 0.12 in Model 1 and 0.09 in Model 2. Adjustment for academic performance and expectations at Wave I (Model 3) resulted in a non-significant difference in predicted probabilities between these groups (PP_youth-only - PP_opposite-sex = 0.08, ns), as well as a non-significant difference in predicted probabilities of attaining a high school diploma or less between men who reported no attraction to either sex in youth or adulthood and men who were attracted only to the opposite sex in youth and adulthood (PP_no-attraction - PP_opposite-sex = 0.13, ns). The lower predicted probabilities of having some college or an Associate's degree or of having a Bachelor's degree or higher found for men who reported no attraction to either sex in youth or adulthood, as compared to men who were attracted only to the opposite sex in youth and adulthood, were not attenuated in Models 2 or 3.

--- DISCUSSION ---

Educational attainment is a key determinant of social, economic, and health conditions across the life course. As such, the lack of valid and reliable estimates of LGB educational attainment has significant implications for the ability of social scientists and demographers to understand the characteristics and experiences of the LGB population. Our study is one of the first to describe the educational attainment of the LGB young adult population and examine the potential mechanisms through which educational disparities by sexual orientation manifest.

We had three hypotheses: 1) individuals with same-sex attractions during adolescence would report lower educational attainment in adulthood compared to individuals with only opposite-sex attractions in adolescence and adulthood; 2) individuals with same-sex attractions in adulthood only would report similar levels of educational attainment as individuals with only opposite-sex attractions in adolescence and adulthood; and 3) educational disparities would be attenuated with adjustment for adolescent health and experiences, as well as high school academic performance. We found support for all three hypotheses among men, but some of our findings ran counter to our hypotheses among women.

Women who were attracted to the same sex in adolescence had similar levels of educational attainment as women who were attracted only to men in adolescence and adulthood. However, women attracted to the same sex in adulthood only had lower educational attainment compared to women attracted only to the opposite sex in adolescence and adulthood; that is, they were more likely to have a high school diploma or less and were less likely to have a Bachelor's degree or higher than women attracted only to the opposite sex in adolescence and adulthood. Adjustment for adolescent experiences and academic performance reduced, but did not fully attenuate, these educational disparities. These findings may be related to gender differences in the timing at which developmental milestones related to individuals' sexuality are reached, including the age when one becomes aware of same-sex attractions, engages in same-sex behaviors, and self-identifies as lesbian or bisexual.
For example, women, unlike men, often experience same-sex attractions and identities in response to a single intimate relationship with another woman during late adolescence or early adulthood (Diamond 2012; Floyd and Stein 2002). Completing these developmental milestones during the transition to adulthood, a time when individuals must choose whether or not to attend post-secondary school, may be associated with less social support (Needham and Austin 2010), fewer role models (Floyd and Bakeman 2006), and higher rates of psychosocial stress (Rankin 2003), all of which may hamper individuals from achieving their educational goals. Indeed, young adults often rely on their parents for financial resources (Valentine, Skelton, and Butler 2003); this may be particularly true for young adults attending college. Disclosing one's sexual orientation to parents while in college may lead to the withdrawal of financial (Valentine et al. 2003) or emotional support from parents (Needham and Austin 2010), resulting in a disruption of the student's educational pursuits. Delayed entry into college and disrupted educational careers reduce the likelihood that one will complete a college degree (Buchmann et al. 2008; Ewert 2010; Goldrick-Rab 2006). Although we were unable to test these potential pathways with our data, future research should consider how the timing and self-disclosure of same-sex attraction impacts the educational attainment of lesbian and bisexual women.

In models adjusting only for socio-demographics, we found that women with same-sex attractions in adolescence and adulthood reported similar levels of educational attainment as women with opposite-sex attractions in adolescence and adulthood. Once we adjusted for adolescent health and experiences, women with same-sex attractions in adolescence and adulthood were less likely than women with opposite-sex attractions in adolescence and adulthood to have attained a high school diploma or less, a finding that held with further adjustment for academic performance. The greater prevalence of adolescent health problems, victimization, and difficulties in school experienced by women with same-sex attractions in adolescence and adulthood, as compared to women with opposite-sex attractions in adolescence and adulthood, likely concealed their lower risk of attaining a high school diploma or less. This corresponds to our hypothesis that adolescent health and academic performance would explain educational disparities by life course sexual attraction.

As expected, among men, we found that same-sex attraction in adolescence only was associated with lower educational attainment, whereas same-sex attraction in adulthood only was not. Because boys, in general, become aware of same-sex attractions, engage in same-sex behaviors, and come out to friends and family at earlier ages than girls, adolescent boys who are attracted to the same sex may be at greater risk of experiencing poor educational outcomes due to the challenges they often face within their schools and families (O'Shaughnessy et al. 2004; Pearson et al. 2007; Poteat et al. 2011). Our results lend support to such a conclusion; after adjustment for adolescent health, victimization, difficulties in school, and academic performance, men attracted to the same sex in adolescence only experienced similar levels of educational attainment as men attracted only to the opposite sex in adolescence and adulthood.
Interestingly, men with same-sex attractions in adolescence and adulthood experienced similar levels of educational attainment as men who maintained opposite-sex attractions, regardless of the covariates included in the model. Perhaps they were more likely to seek and obtain acceptance of their same-sex attractions during adolescence as compared to men attracted to the same sex in adolescence only. Further research on resilience and identity development is required to confirm or challenge this supposition.

Our findings also suggest gender differences in the underlying processes linking life course sexual attraction and educational attainment. Perhaps the psychosocial environments of girls who report same-sex attractions in adolescence differ from those of boys who report same-sex attractions in adolescence. For example, studies have found that adolescent boys are more prejudiced towards sexual minority youth than adolescent girls (Baker and Fishbein 1998; Poteat, Espelage, and Koenig 2009). This may result in adolescent boys with same-sex attractions experiencing more stigma and discrimination than their female counterparts, which may in turn negatively impact their academic performance in high school more so than that of same-sex attracted girls. Conversely, women who become aware of their same-sex attractions as adults may exhibit different characteristics (e.g., lower sense of personal control, lower self-efficacy) or be exposed to less supportive family environments as compared to girls who become aware of their same-sex attractions as adolescents. These differences may also be related to academic performance and educational attainment (Cutrona et al. 1994; Fass and Tubman 2002). Examining these potential underlying processes was beyond the scope of our data, but given our findings, they warrant more intensive consideration in future research.

Prior studies using national data report higher educational attainment among LGB individuals (Berg and Lien 2002; Black et al. 2000; Black et al. 2003; Black et al. 2007; Carpenter 2005; Daneshvary et al. 2008; Elmslie and Tebaldi 2007). Although bivariate results suggest that this might be the case in our sample for men who report same-sex attraction only in adulthood, these findings did not hold in multivariate analyses. Most of these past studies, however, defined a person as LGB if they were cohabitating with a same-sex partner. This group is unlikely to be representative of all LGB populations. For example, LGBs who are cohabitating with same-sex partners may be younger than LGBs who are not, especially in the early 1990s when social mores were more conservative than they are now (Loftus 2001). Given the changing distribution of educational attainment that occurred in the United States during the 20th century (Carlson 2008; US Census Bureau 2012), the higher levels of educational attainment found among LGBs in the studies that used data from the 1990s may, in part, reflect age and cohort effects. Further, many of these studies also restricted analyses to full-time workers. Because educational attainment and employment status are highly correlated, this further restriction (cohabitating plus working) may have biased estimates of educational attainment upward.

Overall, our findings lend support to the importance of taking a life course perspective when examining the relationship between sexual minority status and educational attainment.
First, a life course perspective recognizes that the effect of a given event or exposure may depend on the timing of that event or exposure (Elder et al. 2003). Our results provide evidence that the timing of awareness of same-sex attraction matters for educational attainment, and that it might matter differently for males and females. Second, our findings support the proposition that the accumulation of educational advantages and disadvantages during adolescence impacts educational attainment, and that this process is, in part, a mechanism through which sexual minority adolescent males come to experience lower educational attainment.

Additional research that considers sexual attraction to both sexes and other dimensions of sexual orientation, particularly sexual identity, at various points in the life course is needed to gain a better understanding of the socio-demographic characteristics of the LGB population. Moreover, research is needed on the experiences of individuals reporting no sexual attractions, as they also reported lower educational attainment than individuals with opposite-sex attractions only. Although the number of individuals reporting no sexual attractions or identifying as "asexual" has grown in recent years, asexuality remains a relatively new sexual identity about which a paucity of research exists (Prause and Graham 2007; Scherrer 2008). As a result, legitimization of asexuality as a sexual identity is lacking, as may be social acceptance from family and community members, all of which may negatively impact educational outcomes (Bogaert 2004; Prause and Graham 2007; Scherrer 2008). Investigation into these issues is needed to validate or challenge these suppositions. Lastly, future research should include transgender populations and should explore how issues of gender identity, gender atypicality, and timing of gender transitioning during adolescence and/or adulthood are associated with educational attainment.

--- Limitations

Our sample represents individuals who were attending grades 7-12 in 1994-1995; thus, inferences should only be made to this population. To our knowledge, however, this is the first study to use a nationally representative sample to describe the educational attainment of LGB young adults and to understand the correlates associated with educational disparities between LGB and heterosexual young adults. Given the age of our sample at baseline and the consistency with which measures of sexual orientation were collected, we relied on romantic attraction as our measure of sexual orientation, which represents only one dimension of sexual orientation. Attraction, however, is considered the defining feature of sexual orientation (Diamond 2005; Levine 2003; Leiblum and Rosen 1988), and is likely the most appropriate measure to use when studying adolescents. However, some level of misclassification may have occurred in our study, particularly among respondents who were younger at Wave I and who did not become aware of their same-sex attractions until later in adolescence. Finally, the number of individuals who reported romantic attractions to the same sex ranged in size from 70 to 575 when gender stratified, which likely reduced our ability to detect significant differences. As such, we were unable to distinguish individuals who reported attraction to both sexes from those who reported same-sex attraction only.

--- Conclusions

Our findings challenge results from prior studies documenting higher educational attainment among sexual minorities in the US.
Rather, we found that educational attainment differs by life course sexual attraction: women attracted to the same sex in adulthood only, men attracted to the same sex in youth only, and both men and women reporting no sexual attractions in youth or adulthood had lower educational attainment compared to respondents attracted only to the opposite sex in youth and adulthood. Additional information about the socio-demographics of the LGB population drawn from representative samples, as well as identification of the mechanisms driving the social stratification of this population, is imperative, as it may ultimately lead to the development of effective policies targeted at addressing these key forms of social stratification.
Genetics and genomics research (GGR) is associated with several challenges including, but not limited to, methods and implications of sharing research findings with participants and their family members, issues of confidentiality, and ownership of data obtained from samples. Additionally, GGR holds significant potential risk for social and psychological harms. Considerable research has been conducted globally and has advanced the debate on return of genetic and genomics testing results. However, such investigations are limited in the African setting, including Uganda, where research ethics guidance on return of results is deficient or suboptimal at best. The objective of this study was to assess the perceptions of grassroots communities on whether and how feedback of individual genetics and genomics testing results should occur in Uganda, with a view to improving ethics guidance.

This was a cross-sectional study that employed a qualitative exploratory approach. Five deliberative focus group discussions (FGDs) were conducted with 42 participants from grassroots communities representing three major ethnic groupings. These were rural settings, and the majority of participants were subsistence farmers with limited or no exposure to GGR. Data were analysed through thematic analysis, with both deductive and inductive approaches applied to interrogate predetermined themes and to identify any emerging themes. NVivo software (QSR International 2020) was used to support data analysis, and illustrative quotes were extracted.

All the respondents were willing to participate in GGR and receive feedback of results conditional upon a health benefit. The main motivation was diagnostic and therapeutic benefit.
Introduction

Although the expanding applicability of knowledge generated from genetics and genomics research (GGR) holds great promise for discoveries in the biomedical and socio-behavioural sciences, it also raises challenging ethical and societal issues. Such challenges include, but are not limited to, implications of sharing research findings with participants and their family members, issues of confidentiality, and determining appropriate strategies for providing information to individuals tested [1-3]. Furthermore, GGR has significant potential risk for social and psychological harms. For example, studies that generate information about an individual's health risks can provoke anxiety and confusion, damage familial relationships due to misattributed paternity, and/or compromise the individual's future financial status [4-7]. Results could also possibly be used as a basis for ethnic/racial segregation or discrimination, such as denial of insurance coverage or employment [8].

A significant amount of research has been conducted in the fields of genetics and genomics. Associated findings have contributed to the global debate on return of GGR results [9-14]. Although international policies for return of individual genetic research findings are still evolving, there is general consensus for feedback of results. A number of criteria need to be met, including 1) the ability to assess the evidence base for potentially disease-causing genetic variants in relation to the concerned population(s); 2) assessment of whether the particular finding is beneficial to the individual; and 3) ensuring that patients are appropriately informed of the implications of the findings for their disease or treatment and referred for follow-up care, while seeking the guidance of the Research Ethics Committee (REC) [15]. However, debate addressing similar issues of relevance to the African setting [1, 2, 16-20], and Uganda in particular, is still limited. This situation is exacerbated by the fact that many countries in the African region, including Uganda, lack ethics guidance on return of results [21].

GGR has been conducted for about 20 years in the Ugandan setting and is expected to continue to increase owing to its potential for advancing targeted disease detection and interventions for both communicable and non-communicable diseases in this resource-limited setting [22]. However, there is a paucity of knowledge on the ethical, legal and social challenges that accompany GGR in the country [23-26]. There are few publications on the perspectives of researchers [24, 26] and research participants [25]. This literature highlighted the need for adequate informed consent, community engagement, genetic counselling, feedback of beneficial/actionable GGR findings, and ethics guidance for research. In addition, research participants expressed the need for more effective support during and after participation, as well as feedback of all findings, since what counts as a beneficial or actionable finding is to a great extent individual.

Published literature on the perspectives of grassroots communities is lacking. These communities are based in rural, communal settings with limited interaction with the outside world, are less formally educated, and represent lower socio-economic groups. Adding the views of grassroots communities to those of other stakeholders will provide a more comprehensive context for guideline development and the conduct of GGR in Uganda.
We set out to assess the perceptions of grassroots communities on feedback of individual GGR results in Uganda to inform the development of contextualized research ethics guidelines.

--- Methods ---

--- Study design and setting

This was a cross-sectional study that employed a qualitative exploratory approach. The study was conducted by a multi-disciplinary team of researchers comprising social scientists, bioethicists and medical scientists with experience in qualitative research. JO, a male medical doctor and academic with bioethics training and experience; BK, a female academic with a PhD in sociology and more than 20 years of experience; and JB, a male academic with a PhD in philosophy, led most of the focus group discussions (FGDs). They were assisted by eight research assistants (four women and four men) who were proficient in the respective local languages. Data were collected between January and February 2021.

Participants were recruited from remote grassroots communities in three regions of Uganda, each representing a major ethnic grouping. Two deliberative FGDs were conducted in each region, one involving youth (those 18-35 years, as classified in Uganda) and the other involving individuals aged 36 years and above. However, in one of the regions only one FGD, involving individuals older than 35 years, was conducted. The communities were selected from the eastern, northern and West Nile regions of Uganda to represent the main ethnic groupings. Participants were recruited from predetermined ethnic groups, districts and sub-counties. The specific local communities were selected by the research assistants, who were identified at the respective sub-counties. The research assistants worked with a local mobilizer to identify potential participants and invited them to a meeting with the study team. Individuals who responded to the call and met the inclusion criteria were given information about the study, and those who consented were recruited.

--- Data collection

The FGDs were conducted in open spaces in the compounds of health facilities, schools or churches, a safe distance away from non-participants to ensure privacy. Data were collected in adherence to the COVID-19 prevention measures, which included hand sanitization, face masking, social distancing and utilization of open spaces. Data collection entailed face-to-face deliberative FGDs conducted in the respective local languages of the selected communities. The discussions took about one and a half hours, excluding the education session.

Participants were first asked general questions on awareness and knowledge about genetics and genomics. This was followed by a 30-minute explanatory session on the meaning and role of genetic inheritance, DNA, genes and the genome, and how they affect an individual's inherited traits and susceptibility or resistance to some health conditions. Additionally, genomics and genomics research were explained, as well as the testing and feedback of GGR results. This education session was followed by a discussion moderated using the FGD guide. The discussion covered willingness to participate in GGR, willingness to receive feedback of GGR results, conditions for feedback, and extending feedback to family and community. The discussions were audio recorded and complemented by notes taken by research assistants.

--- Data management and analysis

Recorded information was transcribed verbatim, checked for accuracy and later translated into English. Data were analysed along the main themes of the study.
The analysis was conducted using a comprehensive thematic matrix that included identifying codes to determine common patterns arising from the narratives. Thematic analysis was done both deductively, based on the study's predetermined themes, and inductively, to identify emerging themes. We started deductively with a set of themes, but then worked inductively to come up with any new themes as we iteratively read through the data [27, 28]. Transcripts were further reviewed for emerging themes, which were integrated into the thematic matrix. The researcher JO applied and confirmed the application of codes across all transcripts, and disagreements were resolved by cross-checking against the recorded data. NVivo software (QSR International 2020) was used to support data analysis, and illustrative quotes were extracted.

--- Ethical considerations

Both men and women aged 18 years and above who had provided written informed consent participated in the study. For non-literate participants, the consent process was witnessed by a literate individual of the participant's choice who was independent from the study, and the written consent was documented by a thumb print of the participant followed by the signature of the witness. No participant identifying information was recorded.

--- Findings

Five deliberative FGDs involving 42 participants were conducted across the three regions of the country. Over half of the FGD participants were male, and participants were aged between 18 and 77 years. The majority were small-scale farmers, Christians, married and had children. They all lived in rural communities.

All the participants were willing to participate in GGR and receive feedback of genetic results. The main reasons for wanting genetic results were the need to know one's health status and to seek care, or to plan for one's future and the future of their families. However, the willingness to participate in GGR was conditional on the expectation that feedback of results would occur. Other expectations included adequate informed consent, genetic counselling, explanation of the implications of testing, and privacy and confidentiality.

Thematic analysis identified four themes and a number of sub-themes: 1) the need to know one's health status, with sub-themes of therapeutic and/or diagnostic misconception, as well as concerns, challenges and implications for sharing; 2) paternity information as a benefit and risk; 3) ethical considerations for feedback of findings, with sub-themes of adequate informed consent, genetic counselling, and privacy and confidentiality; and 4) extending feedback of genetics findings to family and community (Table 1). Although the respondents were agreeable to feedback of all GGR results, including aggregate results, individual results, incidental findings and secondary findings, for this paper we focused on perspectives regarding feedback of individual genetics results.

--- The need to know one's health status

Almost all the participants responded in the affirmative when asked about their willingness to receive feedback of hypothetical genetic test results, because they felt that it was useless to take the test if they would not receive results. All respondents stated that genetic testing is acceptable and would contribute to knowledge in the field. Respondents also felt that findings of such testing need to be shared with the individuals tested so that they would be aware of their health status. Knowing one's genetic information was a major motivating factor for participating in GGR or taking a genetic test.
Respondents felt that any test carried out will turn out either positive or negative, and that where an underlying condition exists the result will turn out positive, meaning that feedback helps one to start living a new life. Respondents noted that even if treatment may not be available for the diagnosed condition, feedback would still help them understand their health condition and plan for their future. Thus, if results are not shared, the individuals tested will remain unsettled and anxious, wondering what could be happening to their bodies. Some thought that if treatment is not available at the testing centre, it could as well be sought from other hospitals, provided one knew what their health problem was. A minority of respondents wished to know their test results only if the condition is treatable; otherwise, it would be stressful and cause unnecessary anxiety for one to be told of a disease that has no treatment option.

"Using my body parts, I am not interested. If they tell me my blood group, I understand but if it is something else, I am not interested." FGD 007 Respondent 10

Respondents had various reasons for wanting to know the results of their hypothetical genetics testing, including being able to plan for the future, knowing their health conditions, and being able to resolve some of the community myths, particularly following the death of individuals. For others, since the samples were from their bodies, they felt they had a right to know the outcomes of the tests through provision of feedback. Some felt that knowing the results would be helpful in guiding individuals to seek therapy early enough.

"So that people can clearly know the actual cause of the death of a person, not that they are left to imagine." FGD 009 Respondent 2

"The results of the DNA should be given to me because it was part of my body that was removed, it was nobody's body part, it was mine." FGD 007 Respondent 10

"I think the results should be given to you because by the time you went for the test you wanted to know your health status so the results should be given to you so that you can know about your health status better." FGD 007 Respondent 2

1.1 Therapeutic and/or diagnostic misconception. Respondents highlighted several benefits associated with provision of feedback of genetic testing results, including the fact that individuals are helped to plan their lives and the wellbeing of their families. They felt that the results could guide medical professionals and scientists to search for treatment and institute preventive measures before a disease manifests. Others felt that it would be an added advantage because they would have gained more health information about themselves to help predict the future.

"I think it is basically the knowledge after getting the information that really prepares you to be free. Now like us at least we have heard and we have gotten to know what it is all about, so it gives me the freedom (courage) to participate freely without the fear that I had before." FGD 006 Respondent 4

"It would help me know what the illness is and whether the complication is from my mother or my father, so that I can alert them and see how to protect my children." FGD 009 Respondent 1

"When I receive feedback at the right time and there are no other discouragements and at the same time the person who is giving me feedback first begins by counselling and guiding me, reminding me of what went on and how to live afterwards."
FGD 006 Respondent 1

Others noted that genetic testing and the associated feedback of results are good because respondents get to know their diagnosis and can plan to prevent future illness. Some thought it an added advantage because, in certain circumstances, individuals live with uncertainty and suspicion concerning particular traits that could be eliminated through testing. A desire for ancestry testing was also expressed.

"We know that these diseases could have come from the ancestral line of our parents, so knowing the result is good because you can be able to trace whether it is coming from your mother's line or father's line and inform them to protect the next generation of the family." FGD 009 Respondent 6

"Yes, I want to know the results of this DNA test because it helps me to know the status of my blood and also know my clan too." FGD 008 Respondent 1

1.2 Concerns, challenges and implications for sharing. Respondents noted that although GGR and the associated feedback of results are important, they were worried about the cost of such tests. They felt it might be too costly for them if there were need for such genomics testing after the research period. Hence, respondents appealed for affordable genetics testing within reach of low-income earners. There was also concern about being diagnosed with a condition where the cost of treatment is prohibitive.

"The problem is, it is a bit expensive and then those services are very far, otherwise I would really say that it is a good thing to do." FGD 006 Respondent 1

2. Paternity information as a benefit and risk. About one third of the respondents, including both men and women, linked participation in GGR and receiving feedback of results to establishing the paternity of their children and of other family members. A number of respondents had experience in using DNA-related technologies to solve social issues.

"I want to acknowledge that my father made my sister go through the same. At first, he denied being the father to my sister but when they went for a DNA test, it was confirmed that he was the true father. He no longer has any doubts and he is instead happy now." FGD 006 Respondent 7

"In fact, this has happened to me before; my husband denied my second child saying I cheated and when we went to the hospital to prove, their DNA was the same and he even did not apologize for accusing me of adultery." FGD 009 Respondent 4

"It is a good thing because there have been cases of domestic violence because of a man doubting the paternity of some of his children, such would help solve some of these problems causing violence in the homes." FGD 009 Respondent 6

Some respondents noted that feedback of GGR results has the potential to reveal discordance in paternity if both the child and the male parent are tested, and this has the potential to cause family disruption and associated psychological harm and suffering for the individuals involved.

--- Considerations for feedback of findings

Although respondents expressed willingness to receive feedback of genetics testing results, they highlighted several requirements that need to be put in place before results are shared.

3.1 Adequate informed consent. "If they also tell me very well whereby, I also fully understand this research I can accept to participate so that I can be evidence to the community to let them know that it is not a bad initiative after all so that the research runs smoothly."
FGD 007 Respondent 5

Respondents highlighted the need for research teams to facilitate participants' understanding of genetic information through a process that would include the use of visual aids, in order to facilitate the information delivery process and promote understanding.

"What I think is that there should be a projector to show us photographic images of this genetic science and the procedure of genetic testing. When you see that the other child resembles the parents it makes you to appreciate that you are studying something that exists." FGD 006 Respondent 5

"If the research department could show a video, somebody who does not know how to read will be able to interpret what is going on and gain interest in participating in testing and receipt of results." FGD 006 Respondent 4

Respondents highlighted the need for a clear appreciation of the genetic condition likely to be identified by the test results, so that they are well informed of what is likely to happen to their bodies. They stressed the desire to know the results to the extent possible; if the information turns out to be complicated, then the provision of feedback should involve their parents or close relatives. Hence the need for the informed consent process to employ visual aids to facilitate participants' understanding.

"If it were possible, I would want to see the nature of the disease through an image or explained to me thoroughly." FGD 009 Respondent 3

"I will accept because they would have taught me and I would have understood very well what they want to do. This will allow me to make up my mind and also I will know exactly what to do and also know what exactly is needed for my life." FGD 007 Respondent 3

3.2 Genetic counselling. Adequate genetic counselling by a well-trained professional, preferably a doctor, was considered essential for participants before getting their genetic testing results. Respondents felt that good counselling would help allay the anxiety associated with receiving genetics results. Counselling would also help address any misconceptions or misunderstandings associated with genetics and genomics. Many respondents preferred to receive the feedback of results in person. Respondents felt that this would give the person providing the feedback an opportunity to guide them on health facilities that could provide the requisite medication.

"The counselling that is given before feedback of results from there (research centre) can encourage me and even my clan members to do that test."

Respondents felt that prior to feedback of results and the associated health information, there was a need for education on any available options for managing the resultant health disorder. For example, if there was a possibility of managing the health condition, this should be stated before the condition is disclosed. They also observed that it is important that enough time be spent with participants when disclosing findings.

"The doctor should first counsel me because sometimes if they find a disease and they just give me a paper, it can make me unsettled but if the doctor talks to me, tells me that we found a disease but take care of yourself, take your medication, this will eradicate my fears."

3.3 Privacy and confidentiality. Respondents stressed the need for a quiet and private environment at the time of disclosing the results.
Since genetics testing results are regarded as private information, the need to observe privacy and confidentiality is a growing reality that should be respected at all times. Many respondents proposed that at the time of disclosing findings, only the doctor and the individual who was tested should be present. "Earlier you talked about confidentiality which automatically means that in case they pick my sample for DNA my results will definitely be given to me meaning that whatever it is it will be between me, the doctors and the people carrying out the tests." --- FGD 007 Respondent 10 "The results should be given to me directly. In case they find any medical complications, the person who has brought the results should explain to me their findings and also if possible, bring medicine and prescribe for me how to take the medication. If you give the results to someone else the person will begin telling people behind my back how my condition is very worrying and bad." --- FGD 007 Respondent 7 While most respondents felt that the results should be shared directly with the individual who was tested, some thought that they would need the support and presence of a family member at the time of getting the results. Others proposed that if it is a condition that affects the family, then the doctor can disclose it to the whole family. This would help everyone to know about the condition in the family. "For me I want to be with my parents when I am getting the results from my hereditary testing." --- FGD 010 Respondent 5 "The results should be given to me personally because it is me supposed to tell my parents and also, I would want it in written form because the records can help me in future, let's say my condition becomes worse, I need to show those results at the hospital. If you get your results via message, you can't go and show a message to a doctor so I feel it's better to receive in written form." FGD 007 Respondent 5 --- 4. Extending feedback of genetics findings to family and community 4.1 Extending feedback to family. Many respondents expressed the wish to share their genetic results with some family members, but this can only be done after adequate consent and genetic counselling to evaluate the possible implications of such sharing. Many families in Uganda share health and medical information because, to a great extent, the family members meet the cost of treatment. Sharing genetic results is no exception, particularly if the genetic predisposition has health implications. Additionally, a genetic predisposition can affect other members of the family, and such sharing would act both as a warning and a preventive measure. It would also facilitate future planning for the concerned family. Respondents had varied opinions on extending feedback of results to their relatives, with some stressing that the results belong to the individual who was tested, while others thought they could share findings with close family members. Some highlighted the fact that if one is likely to suffer from a genetic condition, then it was necessary to share the findings of genetic testing with individuals who will take care of them in case they become sick. "The family members need to know because some diseases may need extra attention and care like meals on time, special foods etc, so that the family members can be helpful in looking after you."
--- FGD 010 Respondent 5 "I also feel it is right to tell my parents because it gives my brothers and also my wife the opportunity to also go and test in case, I turn out to be positive of any illness so that other children don't inherit the diseases too." FGD 007 Respondent 8 Other reasons for extending feedback of genetics results to family included the need to inform others and help them know of their predisposition to disease early so as to take appropriate action. To others, genetic information was considered a family health issue which affects all members of the family, and so they have to be told the results. "To me I think it depends on the type of disease because sometimes it might be a non-life-threatening condition or it can be like epilepsy which doesn't go hand in hand with loud noise (which is thought to trigger some epileptic attacks) so the people back home should know how to handle me. So, if the feedback of results is to me, then at least my parents have to be there and also, they should put in the records so that in case another disease comes in, the doctors will have an idea on how to help me." --- FGD 007 Respondent 8 "It helps the family to understand the problem that they are faced with so that it's able to plan together, how to help in case there is any one sick and others are not, the family can understand how to plan and handle such situations." --- FGD 006 Respondent 1 "Because it helps on the side of treatment and health and unity of the family." FGD 006 Respondent 8 Reasons for not extending feedback of genetics testing results to family included the feeling that such information is individual and private, the fear that some members may not understand the meaning of such information, or their inability to handle the associated stress and anxiety. --- "My view is that your test results should only be given to you because they are private and will only affect you." FGD 010 Respondent 1 "These results will remain in my house; I will not share them out anyhow." FGD 008 Respondent 5 4.2 Extending feedback of genetic results to community members. Individuals in Ugandan rural settings live as communities, often sharing health information and assisting in the care of others, including contributing to the treatment costs of those in need. If genetics results point to a particular predisposition that may affect the health of an individual, their family or community, such information may be shared by the individual tested at their discretion with a clear appreciation of the implications. Some respondents felt it was acceptable to share their genetics results with the community, particularly with those they thought would support them in case they became unwell. Others felt that DNA test results do not necessarily mean disease. Hence trusted people, who may not be relatives, can know the results and advise. "For me I would tell all of them, so that they are aware of what the doctor has advised me to do and stand with me in support." --- FGD 010 Respondent 3 "It is good to share results with other people because it safeguards their health. If a person knows that I have a particular genetic disease, it will be up to them to decide whether to produce with me (children) or not. If a person chooses to marry me, that is their risk." FGD 006 Respondent 7 "It is good for the community to know because these days people assume someone's death up to the extent of accusing other community members with whom the person could have had a grudge, to avoid such assumptions, they should know."
FGD 010 Respondent 2 "I will accept that my wife should know, the community should also know, there are other diseases that can be spread, those near me can even help me if I am weak, my neighbours should also know." --- FGD 008 Respondent 1 Respondents who did not favour extending feedback of genetics results to the community thought that this was private health information for the family that should not be shared with non-family members. Others felt that sharing such information exposes the health condition of the family and might end up causing the family to be ridiculed or stigmatised. --- "Yes, my results are important to my family members to know but not outside of family because sometimes that is a secret we have in our house." FGD 008 Respondent 10 "It depends on the type of disease, if it is a disease that I can survive with by taking care of myself I feel it is ok to keep it to myself but if it is a condition that needs people's help like me getting lost, then the community should know about it." FGD 007 Respondent 8 --- "I think it's a bad idea because people who do not like you take advantage of the information to spread bad information about you and you become the talk of the town, so I think it's best to give it to the owner of the results." FGD 010 Respondent 5 --- Strategies for sharing feedback of genetics and genomics research results. Regarding the strategies for sharing feedback with family, several approaches were suggested by the respondents. Some respondents felt that they should have exclusive rights to disclose the information; hence the doctor should provide them with enough information that can be used to inform others. Some respondents felt that the doctor is better placed to provide feedback of results to family members because doctors are assumed to be better informed and equipped with the necessary genetics counselling skills. Others thought that they would pass the information to an elder in the clan or family who would in turn take the responsibility of conveying it to the rest of the family through approaches like family meetings. --- "Alternatively, the testing team can come home pick samples one by one (testing can be carried out at the home of the participants), it will be right to counsel me together with my parents so that they can know what to do, those are the ways our results can come back to us." FGD 007 Respondent 6 --- Discussion We set out to assess the views of grassroots communities in Uganda on whether and how feedback of hypothetical genetics results should occur and whether they were willing to participate in GGR. Our study findings show that such feedback of results was acceptable to all respondents conditional upon receiving health status information. Several reasons for needing feedback of results were identified, especially the need to know one's health status and to plan for the future. Several strategies were proposed if such feedback was to be conducted appropriately. Receiving genetics results that impact on one's health condition can be a benefit to research participants, particularly in an African rural setting where genetic testing is out of reach of almost all individuals and access to general public health services is limited. Feedback to research participants is a growing reality and an ethical obligation that should be incorporated in research processes, as highlighted by several ethics guidelines [15,29,30].
Although the usual call for feedback of research findings has focussed on other fields of research, the need for return of results is emerging in genetics research because of the anticipated implications, such as fear and anxiety related to uncertainty around the impact of findings on future health or on other family members. The fears associated with findings of genetics research can be appropriately addressed via mechanisms like adequate consent processes and genetic counselling. However, qualified genetic counsellors are an extremely limited resource in Uganda and in most African countries [26]. Other mechanisms to address the fears include observance of privacy and confidentiality, as well as only sharing results that are potentially beneficial or actionable. Related work among genomics research participants and genomics researchers in Uganda has also highlighted the need for feedback of GGR results [25,26]. Although researchers in Uganda are supportive of return of individual genetics results, they were hesitant to share results due to the complexity of the science and the lack of staff to accurately communicate the meaning of results, the lack of context-specific guidelines, as well as the challenges of accurate interpretation of the findings [26]. This dilemma faced by researchers is worsened by the desire of grassroots communities to receive feedback of all their GGR findings with clear interpretation, whether beneficial/actionable or not. Participants in this study conflated genetics and genomics results even though it was explained in the information session prior to the FGDs that only significant and actionable genetics results could be returned. The need for feedback of genetics results has been described as a form of solidarity by research participants in Botswana and as a reciprocity obligation of researchers who can make participants feel valued as part of a mutual relationship [19]. Dissemination, beneficence and reciprocity have been considered essential components of a framework for enhancing ethical genetics research with indigenous communities in the USA [31]. It should be noted that the hypothetical research scenario the participants were responding to in this study may not correspond to the reality of feedback, and this should be borne in mind when extrapolating on the basis of these study findings. Participants felt that knowing the outcome of GGR testing would help them seek early treatment or prevention, creating the impression that genetic testing results were reliably associated with causality of disease and that treatment is available for all conditions caused by a genetic predisposition. Likewise, there was an expectation that genomics results would confer individual benefit. Although this was a rural non-research setting, it is important to note that therapeutic and diagnostic misconceptions, where participants perceive research as diagnosis or care rather than experimentation, are very common [32][33][34][35][36][37][38][39][40]. Such misconceptions may lead individuals to participate in research with diagnostic or therapeutic expectations, as has been observed among tuberculosis genetic study participants in Cameroon [32]. Yet most genetic studies may not yield results that can benefit health or predict risk of disease [41]. And even in cases where an accurate diagnosis can be made, many of the diseases identified may not be treatable.
For feedback of genetics results to be conducted appropriately, several strategies were proposed by the respondents, including adequate consent processes, genetic counselling, as well as privacy and confidentiality. Informed consent for research participation is an ethical requirement that should be carried out as a continuous process starting before recruitment, through implementation, to the post-study period. Such consent processes should be suitable for participants and be provided in a language that is easily understandable to the participant. For feedback of genetics findings, consent should be obtained at different levels, including before recruitment of the participant and at return of results for those who choose to receive results. The need for meaningful informed consent has been highlighted by participants of a genomics research study in Uganda [25]. However, the referenced study also revealed participants' recall bias about their participation in the concerned study, which affected their shared experience of the informed consent process. GGR has been challenged by the fact that genetics and genomics terminology and the associated vocabulary may be difficult to translate into many of the local languages in Uganda, making it difficult to achieve adequate and meaningful consent. Recent work that reviewed consent documents for 13 H3Africa genomics projects observed that genetics was mostly explained in terms of inherited characteristics, heredity and health, genes and disease causation, or disease susceptibility, and only one project made provisions for the feedback of individual genetic results [42]. However, it is important to note that not all GGR can return individual results, for example genetic epidemiology of complex disorders or population genetics results, which focus on the population overall. Challenges regarding meaningful informed consent for GGR have been observed particularly when it comes to sharing of human biological samples and data in the context of international collaborative research [43,44]. In order to address the challenges associated with informed consent in GGR, some commentators have proposed tailoring the informed consent process based on a ten-point framework. The framework includes, among others, the study design, data and biological sample sharing, reporting study results to participants, cultural context, language and literacy, and the potential for stigmatization of study populations [45]. However, this proposed framework needs to be clearly interpreted and studied if it is to be meaningfully applied. In addition, for consent to be meaningful it should be coupled with relevant information on the proposed genetics testing and its associated implications. Such genetic counselling is essential and should be provided before testing and during feedback of results. It should be noted that although genetic counselling is a developing field in emerging economies like South Africa [46], there is a relative lack of qualified genetic counsellors and of the associated counselling in many of the low-resource settings in Africa, including Uganda [26,47]. Yet such genetic counselling would go a long way in addressing issues like the implications of GGR and feedback of results for the individual, the family and the community. Other issues that can be addressed by counselling include aspects like therapeutic misconception, privacy and confidentiality, as well as the common belief in the Ugandan setting that genetic testing is associated primarily with paternity testing.
Since the concept of genetic counselling is relatively new in our setting and virtually non-existent in rural communities, respondents felt that the doctor, being the most knowledgeable, should be the one to conduct the counselling. The lack of genetic counsellors can be addressed by capacity building for genetic counselling. Furthermore, consent forms should be explicit on aspects like who would have access to genetic results and whether return of results concerning paternity information would be done. If so, this should be approved by the REC before data collection is initiated. Otherwise, it is always a dilemma when researchers discover sensitive information after running the tests and seek guidance from an equally unprepared REC. For example, during genetic testing for sickle cell disease, which is prevalent in Uganda, it is not uncommon to discover discordant genetic information between the child and the male parent. Similar findings emerged in a study in Kenya that discussed challenges associated with paternity mal-alignment [48]. It would be good if consent documents provided by researchers and approved by RECs clearly stated whether such paternity information, including misaligned paternity, will be provided to both parents. Our study findings highlight a situation where participants stress the need for privacy and confidentiality of their participation in GGR and return of results. However, despite the call for privacy and confidentiality, most of the respondents preferred the presence of a family member during feedback of results, hinting that such feedback could as well be done at participants' homes. Hence the concept of confidentiality in these communities needs to be clarified and could imply keeping information not only to the individual tested but within their close family. The nature of confidentiality proposed by our respondents is quite different from the model practiced in the Western world, where private information usually remains with the individual tested. Additionally, the privacy and confidentiality mandated in the research ethics guidelines and in some laws like the Data Protection and Privacy Act of 2019 in Uganda [29,30,49] employ the Western approach that focuses on individual privacy and not the family as a whole. Hence the need for adaptation to fit the local setting. However, it should be noted that anonymisation of genomics data may not always be possible. Other considerations to facilitate understanding of the GGR concepts include meaningful community engagement (CE). Such engagement would help researchers understand community-based practices, for example the meaning of privacy and confidentiality, and whether it should be handled at the individual level or the family level. Some commentators have proposed the Tygerberg Research Ubuntu-Inspired Community Engagement Model. This model would require RECs/IRBs to play a role in requiring a CE plan for every study that is community based, and scientific journals to require a paragraph on CE in publications of relevant research projects. This would ensure moving CE from a guidance requirement to a regulatory requirement, emphasizing that it is a critical component of a robust consent process in research and that it ought to be embedded within research projects, where applicable [50]. However, for such community engagement to be effective in facilitating research that is responsive to community needs, it has to be meaningful and collaborative.
Such collaboration may extend to levels like community involvement, community empowerment and community participation. The collaboration can facilitate applicable benefit-sharing models, developing capacity for genetic testing and counselling, technology transfer, and translation of vocabulary. Many respondents were agreeable to extending feedback of genetics testing results to family because genetics information was considered to belong to the whole family, since it is inherited. The need for extending feedback to family, and sometimes the wider community, could be explained by the fact that most individuals in the Ugandan rural setting live in communities, often share health information and support each other during times of sickness. It is also important because family and community members play an important role in the provision of health care to patients. However, although most of the respondents were agreeable to extending feedback of GGR results to family, fewer respondents supported such feedback to the wider community, and only for particular health conditions. Sharing of genetics results with family and community was assumed to have the potential to benefit the individual tested in terms of social and financial support in case the individual became unwell, but it would also act as an early warning to others at risk. Extending feedback should depend on the willingness of the individual tested, the nature of the genetic condition, adequate genetic counselling and appreciation of the implications, and whether the tested individual is likely to benefit from such disclosure. The perception regarding extending feedback of genetics research results needs to be studied further, and such extension should be handled on a case-by-case basis because the implications may vary amongst individual patients and from one family to the next. If genetics research results point to a particular predisposition that may affect the health of an individual, their family or community, extending feedback to family and community may be done at the discretion of the individual tested based on a clear appreciation of the implications. Such sharing may involve general rather than specific personal information. This is in line with recommendations from a USA consultative team involving an ELSI working group of national experts which, among other recommendations, suggested that researchers should elicit participants' preferences on such extension of feedback to family but also recommended further research on the subject matter [51]. Other countries like Australia, Britain and France have legislation that allows healthcare professionals to disclose genetic information considered beneficial to family members in case the individual tested is not willing to do so [52][53][54]. Current Belgian legislation, coupled with international precedent, may provide sufficient justification to establish a duty to inform relatives of their genetic risk in some cases [55]. It is imperative that the privacy and confidentiality of the person enrolling in the study be respected. But in cases where there is benefit in sharing results with family members, the original participant should grant permission because, just like feedback to individuals, feedback should not be imposed on family members but should be based on their voluntary consent [14].
A study involving REC chairpersons in the USA showed that 62% of the REC chairs agreed that participants should be informed that their results could be offered to family members and asked to indicate their choice, but such a statement may not be adequate informed consent [56]. Keeping genetic information and associated diseases confidential may be very difficult, particularly in the Ugandan setting, where the costs of medical treatment are to a great extent met by relatives and sometimes the wider community, who may inadvertently learn of the patient's genetic condition. Since individuals in the communities are agreeable to extending feedback of GGR results to family members, it is up to the research regulators with the oversight mandate to devise appropriate frameworks and guidelines to guide the process while respecting the participants' preferences, privacy and confidentiality. Although feedback of results is generally acceptable to grassroots communities, it is necessary to adequately evaluate and address the implications associated with such testing and feedback of results. Implications may include psychological and socio-economic aspects such as family disruption, a feeling of low self-esteem, stigma and discrimination, and denial of insurance or increased premiums. Such effects may affect the individual tested, their family or the wider community. Yet the familial implications of genomics contribute to the need for extending feedback to family. Hence adequate genetic counselling both before testing and at feedback of results, appropriate consent, as well as maintenance of privacy and confidentiality are necessary for GGR. Other challenges associated with return of GGR results include a lack of appreciation of the information due to unfamiliarity with genetics and genomics vocabulary, as well as the affordability of testing services to the community after the research period. Hence the need for appropriate translation of the information into a language understandable by the participant, use of visual aids to support information delivery, as well as use of non-technical terminology. Additionally, the lack of availability or affordability of genetic testing facilities to the community post-study, and the lack of a clear interpretation of the results and the meaning of the findings, may cause confusion. Researchers should therefore build meaningful collaboration with the communities where research is conducted, and share only confirmed results and information with participants. This will avoid the negative implications of such research and enable research communities to benefit from the results of research. Finally, most GGR conducted in Uganda to address ethical, social and legal issues has been carried out in well-established research settings, and the views that have informed debate on the ethical conduct of GGR in the country are mainly those of research participants, researchers and research regulators [23][24][25][26][57]. Our addition of the grassroots communities will contribute a new dimension with an additional group of stakeholders whose views will enrich the literature as well as the targeted ethics guidelines for the conduct of GGR in Uganda. The willingness of grassroots communities to participate in GGR as well as to receive feedback of results, whether beneficial/actionable or not, goes beyond the protective recommendation by researchers and research regulators of sharing only beneficial/actionable findings.
We believe the ethics guidelines for the conduct of GGR informed by our study findings will go a long way in informing regulation and oversight of GGR in the country. --- Limitations of the study The individuals who participated in the study were research-naive and may not have fully appreciated the implications of participation in GGR and feedback of the associated results. To address this aspect, a deliberative approach to FGDs was used and included an educational component to help the respondents appreciate GGR. However, the depth of understanding required for GGR may not have been achieved in 30 minutes, and a longer period of engagement with similar communities should precede GGR studies. In particular, the explanation of the difference between genetics research and genomics research probably required more time and the use of additional educational tools. Since the study was conducted in three different languages, the researchers needed assistance from individuals fluent in the respective languages to conduct the FGDs, and this might have affected the quality of the interviews and the subsequent data. Additionally, the FGDs were conducted in three different local languages and later translated into English, which could affect the quality of the data. These challenges were mitigated by identifying research assistants with good experience in qualitative data collection, training the research assistants on the protocol to understand the study, and using well-translated data collection tools. Given that this was a qualitative study, although the findings provide a deep understanding of the subject matter, they may not be generalizable. However, a wider range of other stakeholders has been involved in related research, which enriches the generated data. Stakeholders like researchers and REC members have been involved in in-depth interviews and quantitative surveys, while genomic research participants and patients have been involved in FGDs. --- Conclusion Participation in hypothetical GGR as well as feedback of genetic testing results is acceptable to individuals in grassroots communities conditional upon receiving health status information, establishing paternity in some cases, and planning for the future. The strong diagnostic and therapeutic misconception linked to GGR is concerning and has significant implications for consent processes, community engagement, genetic counselling and research ethics guidance. Furthermore, the expectation that paternity testing results are embedded in all GGR needs to be managed appropriately. Given the misperceptions and unrealistic expectations expressed, it is an ethical imperative to build meaningful collaboration with research communities for appropriate genomic language/vocabulary translation, benefit sharing, capacity development and knowledge translation. Finally, the willingness of grassroots communities to participate in GGR as well as to receive feedback of genetics results, whether beneficial/actionable or not, goes beyond the protective recommendation by researchers and research regulators of sharing only beneficial/actionable findings and should be further evaluated. While extending feedback of genetic research results to close family members was generally acceptable, concern was expressed about extending feedback to the community.
Objective: To investigate the association between socioeconomic deprivation and the persistence of SARS-CoV-2 clusters. Methods: We analyzed 3,355 SARS-CoV-2 positive test results in the state of Geneva (Switzerland) from February 26 to April 30, 2020. We used a spatiotemporal cluster detection algorithm to monitor SARS-CoV-2 transmission dynamics and defined spatial cluster persistence as the time in days from emergence to disappearance. Using spatial cluster persistence as the measured outcome and a deprivation index based on neighborhood-level census socioeconomic data, stratified survival functions were estimated using the Kaplan-Meier estimator. Population density adjusted Cox proportional hazards (PH) regression models were then used to examine the association between neighborhood socioeconomic deprivation and the persistence of SARS-CoV-2 clusters. Results: SARS-CoV-2 clusters persisted significantly longer in socioeconomically disadvantaged neighborhoods. In the Cox PH model, the standardized deprivation index was associated with an increased spatial cluster persistence (hazard ratio [HR], 1.43 [95% CI, 1.28-1.59]). The adjusted tercile-specific deprivation index HR was 1.82 [95% CI, 1.56-2.17]. Conclusion: The increased risk of infection of disadvantaged individuals may also be due to the persistence of community transmission. These findings further highlight the need for interventions mitigating inequalities in the risk of SARS-CoV-2 infection and thus, of serious illness and mortality.
INTRODUCTION There has been an active debate regarding the socioeconomic determinants contributing to the pandemic, with several studies highlighting that the SARS-CoV-2 virus disproportionately affects socioeconomically disadvantaged individuals (1)(2)(3). Recent evidence has suggested that neighborhood environmental and socioeconomic factors including poor housing quality, overcrowding and the inability to work from home may influence SARS-CoV-2 transmission (4). However, the association between neighborhood socioeconomic deprivation and SARS-CoV-2 transmission dynamics remains to be examined. SARS-CoV-2 spreads via close contact during daily activities, which results in geographic clustering of cases (5). The location and duration of persistence of these clusters-monitored using spatiotemporal cluster detection techniques-can provide unique insights into the determinants of transmission (6). We hypothesized that the increased risk of infection within disadvantaged communities is the result of conditions favoring sustained and persistent community transmission. Hence, socioeconomically deprived neighborhoods would have longer-lasting SARS-CoV-2 transmission clusters than less deprived neighborhoods. We investigated this hypothesis by combining spatiotemporal cluster detection with cluster survival analysis, an approach similar to the one applied to cancer data by Huang et al. (7). --- MATERIALS AND METHODS We analyzed data from 3,355 SARS-CoV-2 RT-PCR positive test results among 17,698 individuals tested in the state of Geneva, Switzerland, covering the first phase of the pandemic (February 26 to April 30, 2020). All included patients were confirmed to be infected with SARS-CoV-2 by RT-PCR assays. The Virology Laboratory at Geneva University Hospitals performed the tests and provided anonymized data, including residential addresses. Only individuals who provided a valid residential address and who resided in the state of Geneva were included. Residential addresses were geocoded by address matching on the reference dataset of Swiss addresses (www.housing-stat.ch). SARS-CoV-2 transmission patterns were monitored through space and time using the modified space-time density-based spatial clustering of applications with noise (MST-DBSCAN) algorithm (6) available in the Python package pysda. The MST-DBSCAN algorithm was run with a maximum spatial distance of 600 m, a minimum time-distance value of 1 day, and a maximum time-distance value of 14 days. We then replicated the analysis using different spatial windows (200, 400, 800, and 1,000 m) and observed similar results. The MST-DBSCAN algorithm is one among various density-based clustering methods to detect disease clusters. This modified version of the spatiotemporal DBSCAN has the advantage of incorporating the effect of the incubation period. MST-DBSCAN detects clusters of cases but also identifies the daily evolution type of each cluster (6). We projected the daily evolution type of space-time clusters onto the 2,830 Swiss Areas (SA) neighborhoods (www.microgis.ch) of the state of Geneva. This approach allowed us to record the kind of cluster evolution each SA neighborhood underwent each day (i.e., increase, keep, decrease, no cluster). The SA neighborhood was used as it constitutes the smallest spatial unit characterized by aggregated census socioeconomic data in Switzerland.
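The actual MST-DBSCAN implementation lives in the pysda package referenced above; the following is only a minimal, simplified sketch of the underlying spatio-temporal density-clustering idea. It omits the incubation-period handling and the daily cluster evolution types, and all function and variable names here are illustrative assumptions, not the pysda API.

```python
# Minimal sketch of the spatio-temporal density-based clustering idea
# behind MST-DBSCAN (not the pysda implementation). Two cases are
# neighbours when they lie within eps_space metres and their test
# dates differ by at most t_max days, mirroring the 600 m / 14 day
# parameters reported above.
import numpy as np
from scipy.spatial.distance import cdist
from sklearn.cluster import DBSCAN

def st_dbscan(xy, day, eps_space=600.0, t_max=14, min_samples=3):
    """xy: (n, 2) projected coordinates in metres; day: (n,) day index.
    Returns one cluster label per case (-1 = noise), as in DBSCAN."""
    xy = np.asarray(xy, dtype=float)
    day = np.asarray(day, dtype=float)
    d_space = cdist(xy, xy)                        # pairwise metres
    d_time = np.abs(day[:, None] - day[None, :])   # pairwise days
    # Encode the joint neighbourhood as a pseudo-distance: 0 where both
    # constraints hold, 2 otherwise; eps=1 then separates the two cases.
    joint = np.where((d_space <= eps_space) & (d_time <= t_max), 0.0, 2.0)
    model = DBSCAN(eps=1.0, min_samples=min_samples, metric="precomputed")
    return model.fit_predict(joint)
```

Replicating the sensitivity analysis amounts to re-running this with eps_space set to 200, 400, 800, and 1,000 m.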
An index of socioeconomic deprivation was calculated at the SA neighborhood level using a previously developed method. Principal component analysis was used to synthesize the information from the neighborhood-level census socioeconomic data. To obtain a single index for all neighborhoods, the inertia of the first component was maximized by discarding variables only weakly correlated with the first component and variables contributing less than the average (8). We defined spatial cluster persistence as the time in days from emergence to disappearance, and censored clusters remaining on the last day of the study period. Using spatial cluster persistence as the measured outcome, we estimated survival functions stratified by terciles of neighborhood-level socioeconomic deprivation with the Kaplan-Meier estimator. The contribution of the neighborhood-level socioeconomic deprivation to spatial cluster persistence was estimated in a Cox proportional hazards (PH) regression model (7) with robust standard errors. We then estimated the contribution of each individual component of the socioeconomic deprivation index [i.e., median income, foreigners (%), median rent, unemployment (%), occupation and education] as continuous independent variables in a separate Cox PH model (Table 1). Models were adjusted for neighborhood-level population density, and covariates were standardized. --- RESULTS We identified 1,079 spatial clusters over the 65-day study period, which, once projected, covered 1,931 neighborhoods of the state of Geneva (Figure 1). The median neighborhood-level SARS-CoV-2 incidence rate ranged from 0 cases per 100,000 (interquartile range, IQR = 650) in the least deprived tercile to 465 (IQR = 866) in the most deprived tercile. Clusters emerged on average 4 days earlier in the most deprived tercile compared to the moderately deprived tercile and 6 days earlier compared to the least deprived tercile (Supplementary Table 1). The persistence of clusters varied substantially across terciles of the neighborhood-level deprivation index (Figure 2). Two months after the emergence of SARS-CoV-2 clusters, almost 85% of the spatial clusters remained in the most deprived areas, compared to around 70% in the moderately deprived areas and around 30% in the least deprived areas. This trend was confirmed by the Cox PH model adjusted for neighborhood-level population density, in which the standardized deprivation index was associated with an increased spatial cluster persistence (hazard ratio, HR = 1.43 [95% confidence interval, CI = 1.28-1.59], P < 0.005) and the adjusted tercile-specific deprivation index HR was 1.82 [95% CI, 1.56-2.17]. Hazard ratios and confidence intervals of the Cox PH model including the individual components of the socioeconomic deprivation index are presented in Table 1. Low median income, low median rent, a high percentage of foreigners and high unemployment were associated with spatial cluster persistence to varying, but statistically significant, degrees (Table 1). There was no statistically significant association between spatial cluster persistence and tertiary education or the percentage of actives in the primary sector (Table 1).
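As a rough illustration of this survival set-up (not the authors' code), the sketch below uses the lifelines library and assumes a pandas DataFrame with one hypothetical row per detected cluster; all column names are assumptions chosen to mirror the variables described above.

```python
# Illustrative sketch of the survival analysis described above.
# Assumed columns: 'persistence_days' (emergence to disappearance),
# 'observed' (0 = cluster still present at study end, i.e. censored),
# 'deprivation' (standardized index), 'density' (standardized
# neighborhood-level population density).
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

def survival_analysis(clusters: pd.DataFrame) -> None:
    # Kaplan-Meier survival functions stratified by deprivation tercile
    clusters["tercile"] = pd.qcut(clusters["deprivation"], 3,
                                  labels=["least", "middle", "most"])
    for label, grp in clusters.groupby("tercile", observed=True):
        km = KaplanMeierFitter(label=str(label))
        km.fit(grp["persistence_days"], event_observed=grp["observed"])
        print(label, km.median_survival_time_)

    # Cox PH model with robust standard errors, adjusted for density
    cph = CoxPHFitter()
    cph.fit(clusters[["persistence_days", "observed",
                      "deprivation", "density"]],
            duration_col="persistence_days", event_col="observed",
            robust=True)
    cph.print_summary()  # reports hazard ratios and 95% CIs
```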
--- DISCUSSION We combined spatiotemporal cluster detection with survival analysis and found that neighborhood-level socioeconomic deprivation was associated with persistent spatial clustering of SARS-CoV-2. This result supports our hypothesis, suggesting that the increased risk of infection of disadvantaged communities may also be due to the persistence of community transmission. This suggestion is of importance, considering that socioeconomically disadvantaged individuals are also at risk of worse COVID-19 outcomes due to a greater burden of obesity and other chronic diseases (9). Moreover, digital COVID-19 public health tools, such as contact tracing apps, have been developed and deployed since the first phase of the pandemic. While data remain scarce, evidence suggests that socioeconomic status is a determinant of attitudes toward these technologies (10). Public health attention and locally tailored interventions are required in socioeconomically disadvantaged communities to prevent the intersectionality of these multiple aspects of disadvantage further compounding the risk of infection, the risk of serious illness and thus, of mortality (3,11,12). --- Limitations The place of infection being unknown, we were not able to differentiate between persistence of spatial clusters driven by increased local transmission, by increased importation of cases into the community (e.g., from the workplace), or by a coalescence of both. We cannot exclude that the contact tracing strategy-which consisted of tracing and testing close contacts of positive cases-and socioeconomic variations in the testing rate influenced spatial cluster persistence. --- CONCLUSIONS These findings bring unique insights into the determinants of transmission and suggest that the increased risk of infection of disadvantaged individuals may also be due to the persistence of community transmission. The persistence of transmission in disadvantaged populations further highlights the pressing need for public health interventions preventing an exacerbation of inequalities in the risk of SARS-CoV-2 infection, of serious illness and thus, of mortality. --- DATA AVAILABILITY STATEMENT The dataset analyzed during the current study is available from the corresponding author upon reasonable request. The dataset could not be made publicly available due to the sensitivity of individual georeferenced SARS-CoV-2 testing data. Requests to access these datasets should be directed to Idris Guessous, [email protected]. --- AUTHOR CONTRIBUTIONS DD performed the data analyses and drafted the manuscript. DD, IG, and SJ conceived the study and completed the manuscript. JS, NV, AA, and SS participated in the design of the study and helped to draft the manuscript. All authors read and approved the final manuscript. --- SUPPLEMENTARY MATERIAL The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fpubh.2020.626090/full#supplementary-material --- Conflict of Interest: The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Developing cooperative systems and social media requires making complex decisions concerning the social interaction to be supported as well as the technical foundation. In this paper we build on the long and successful tradition of design patterns and on the social framework of Erving Goffman. We present design patterns that address both challenges of social interaction and technical foundation: they provide input for software developers with respect to structuring software and to providing adequate support for the interaction of users with the environment and with each other.
Introduction In social computing, systems aim at facilitating communication and cooperation among users who are either at the same location or at different locations. Social media summarises concepts and systems that aim at an active participation of users during an interaction, easy exchange of information, and sophisticated self-presentation [11]. Developing concepts for those systems is a challenging task and has been researched for more than two decades [12, 7]. Such systems often have a strong influence on the structure and flow of the interaction in the group, as Schmidt [18, p. vii] explains: 'the development of computing technologies have from the very beginning been tightly interwoven with the development of cooperative work'. And he [18, p. vii] continues: 'our understanding of the coordinative practices, for which these coordination technologies are being developed, is quite deficient, leaving systems designers and software engineers to base their system designs on rudimentary technologies. The result is that these vitally important systems, though technically sound, typically are experienced as cumbersome, inefficient, rigid, crude'. Patterns have a long and successful tradition for drafting, for documenting, and for reusing the underlying concepts. Most prominently, Christopher Alexander suggested and provided design patterns in architecture [2]. He introduced a pattern language to describe solutions that were repeatedly applied to reoccurring challenges in the design of buildings. In software engineering, software design patterns have been successfully used for documenting and reusing knowledge and provided a 'way of supporting object-oriented design' [20, p. 422]. With respect to social computing, design patterns can document the knowledge of and experience with developing cooperative technology. All these different types of design patterns provide valuable input for cooperative systems and social media. However, there are also limiting factors: software design patterns primarily help structuring software, and cooperative design patterns are primarily based on the analysis of existing cooperative systems or on specific ethnographic studies. Therefore, the gap is that the complex task of making both types of patterns compatible is in the hands of software designers and developers. In this paper we build on the history of patterns and present overarching design patterns for social computing systems. For this purpose we leverage the works of Erving Goffman, who studied social interaction among humans and their use of their technical environment for several decades and derived a framework for social interaction. He uses the metaphor of a performance where everybody is an actor that presents her- or himself and acts with others. In the next section we provide a background on patterns. We then introduce the framework of social interaction of Erving Goffman. We discuss how this framework informs the design of cooperative systems and we derive design patterns for cooperative systems that are modelled in a unified modelling language format for software designers and developers. Finally, we summarise our contribution. --- Background of Design Patterns Christopher Alexander et al. [3] were the first to systematically distil patterns from reoccurring solutions to reoccurring problems. In the domain of architecture they identified a language of connected patterns for designing buildings.
In this section we introduce patterns related to the design of software in general as well as of cooperative systems and social media in particular that build on Alexander et al. --- Software Design Patterns Software designers and developers widely use software design patterns. Gamma, Helm, Johnson and Vlissides [5] suggested the most notable pattern language for object-oriented software development. They characterise a pattern as a composition of a problem that frequently occurs during development, a principal solution to the problem, and the consequences of applying the solution. Their pattern language includes 23 patterns for classes (i.e., static relationships during compile-time) and objects (i.e., dynamic relationships during run-time) in three categories: creational, structural, and behavioural patterns [21]. Cooperative systems and social media use network-based and distributed software architectures in the background. POSA2 offers a rather technical pattern language addressing the challenges of distributed software architectures, especially in the context of object-oriented middleware such as CORBA, COM+, or Jini [17]. It has four categories representing the main challenges of object-oriented middleware: 'Service Access and Configuration', 'Event Handling', 'Concurrency', and 'Synchronisation'. The description of patterns is extensive and contains precise design implications for the named middleware along with verbose source code examples. While software design patterns are substantial for sustainable software development, they still leave the burden of the complexity of social interactions to software designers and developers of cooperative systems. --- Design Patterns for Cooperative Systems and Social Media Design patterns for cooperative systems and social media typically focus on human behaviour and interaction. We describe patterns that support designers and developers of cooperative systems and social media. A pattern language for computer-mediated interaction condenses features and properties of existing cooperative systems [19]. It has three categories: 'community support', 'group support', and 'base technology'. The variety of patterns reaches from simple ones (e.g., the 'login pattern' allows users to interact within a system as individuals with associated user accounts) to complex ones (e.g., the 'remote field of vision pattern' allows users who work remotely on shared artefacts to be aware of which parts others are currently looking at). The description of patterns is very detailed and considers caveats as well as implications for security. Specific patterns for privacy and sharing provide solutions to problems concerning the quality of use of cooperative systems [4]. They result from field studies, notes, and design sketches that were translated into three patterns: the 'workspace with privacy gradient', the 'combination of personal and shared devices', and the 'drop connector'. Descriptive patterns have been suggested to facilitate communication in interdisciplinary design teams during the development process [13,14]. They are comprehensive and express 'generally recurrent phenomena' extracted from ethnographic studies at workplaces. The resulting descriptive pattern language consists of six patterns: 'multiple representations of information', 'artefact as an audit trail', 'use of a public artefact', 'accounting for an unseen artefact', 'working with interruptions', and 'forms of co-located teamwork'.
Their patterns are extracted from fieldwork results using two types of properties: 'spatially-oriented features' and 'work-oriented features'. Their patterns can be extended with a 'vignette', which describes real examples as special use cases and provides further design implications. Although patterns for cooperative systems and social media provide detailed insights into the practices and requirements of users working together, they mostly lack the dynamic notion of such systems, where users can take advantage of a thorough personalisation of their environment. --- Goffman's Framework of Social Interaction We introduce the background and major concepts of Goffman's framework of social interaction that are relevant for designers and developers of cooperative systems. --- Fig. 1. Major Concepts in Goffman's framework of social interaction Goffman [6] studied social interaction among humans for several decades and developed a conceptual framework of social interaction in face-to-face situations. It is based on his own observations, on observations of other researchers, and on informal sources. In the following, we describe Goffman's framework in three categories: participants, regions, and performance (cf. Fig. 1). --- Participants Participants act according to their social status (i.e., socio-economic standing in the society). They present a routine (i.e., a 'pre-established pattern of action which is unfolded during a performance' [6, p. 16]). For Goffman humans follow two types of ideals when interacting with each other: the optimistic ideal of full harmony, which according to Goffman is hard to achieve; and the pragmatic ideal as a projection that should be in accordance with reality and that others can accept-at least temporarily-without showing deep and inner feelings of the self. In a performance a performer and an audience are involved. A performer defines a situation through a projection of reality as expressions of a character bound to a certain social role in front of an audience. Performers anticipate their audience and continuously adapt their performance according to its responses. Goffman distinguished three audience types. The present audience attends the performance, receives expressions, verifies them according to the projected situation, and responds. The unseen audience is imaginary and used to anticipate a performance. The weak audience is real, but not present. It consists of other performers giving similar performances. In preparation of a performance, a performer can exchange experiences and responses with the weak audience to improve her own ability to be convincing. Goffman describes the collaboration of performers as a 'performance team'. Its members ideally fit together as a whole in presenting similar individual performances to amplify a desired projection, or in presenting dissimilar performances that complement each other in a joint projection. --- Regions Regions are spatial arrangements used for performances and include specific media for communication as well as boundaries for perception. Goffman names three types of regions: stage, backstage, and outside. A stage provides a setting for the actual performance and is embroidered with decorative properties (i.e., decorum). It supports performers in fostering a situation. Both the performers and the audience can access the stage, with different perspectives. The backstage is a region that performers can access to prepare and evaluate their performance.
Team members also retreat backstage. The audience cannot access the backstage. The outside region describes the third type, which is neither stage nor backstage. Although it is excluded from a performance, performers will prepare and use a dedicated front for the outside (e.g., the façade of a company's buildings). --- Performance For Goffman a performance means social interaction as a finite cycle of expressions to define a situation and of responses as feedback of validity. A performance takes place in a region of type stage. For a performance, each performer prepares a set of fronts, which represents her towards the audience. A front unites material and immaterial parts. Sign equipment is a front's material part and denotes all properties required to give a convincing performance. The personal front is a front's immaterial part and denotes certain types of behaviour of a performer (e.g., speech patterns). It combines 'appearance' (i.e., presenting a performer's social status) and 'manner' (i.e., presenting a performer's interactional role). Characters constitute the appearance of performers on the stage. A character-as a figure-is composed of a 'front', which is specifically adapted to the audience and performance. In a performance team, the team as a whole has a united front (e.g., according to a professional status) and each member has a character with an associated front to invoke during staging. During a performance a character plays routines to convey acceptable and to conceal unacceptable expressions. In a performance team multiple characters will follow this behaviour. Expressions are information that is communicated by a character using 'sign-vehicles' (i.e., information carriers). There are wanted expressions that are acceptable and foster a situation as a valid projection of reality, and unwanted expressions that are unacceptable and inappropriate for a given performance in front of a particular audience. In order to manifest a performance that is coherent, a performer strives to communicate expressions consistently through their characters towards an audience. Thus a performer's character endeavours to conceal unwanted expressions. Responses are all kinds of feedback. An audience continuously verifies the performance according to the defined situation and the overall reality as well as to the front of the character. It reports the result back to the performer. In order to manifest a valid performance, performer and audience agree on three principal constructs that prevent a false or doubtful projection of reality based on contradictory expressions or discrediting actions: The 'Working Consensus' is an agreement on the definition of the situation and describes a temporal value system among all participants. The 'Reciprocity' means that performers guise their characters to act according to the situation (i.e., they neither intentionally nor factually provoke misunderstandings) and that the audience responds to the performance according to the situation (i.e., it neither consciously nor unconsciously alleges false behaviour). The 'Interactional Modus Vivendi' describes that an individual in the audience only responds to expressions that are important for that individual; the individual remains silent on things that are only important to others. Goffman describes additional participants.
For instance, the team support is one of the following: colleagues that constitute the weak audience, training specialists that build up a desirable performance, service specialists that maintain a performance, confidants that listen to a performer's sins, or renegades that preserve an idealistic moral stand that a performer or team failed to keep. Goffman also defines outsiders as being neither performers nor audience, having little or no knowledge of the performance. They can access the outside type of region. --- Informing the Design of Social Computing In this section we transform Goffman's framework into design patterns. We used three steps. We first identified key statements of Goffman's framework concerning structural aspects (i.e., social entities involved in interactions) and dynamic aspects (i.e., actions of and interactions between social entities). In a second step, we augmented these aspects with literature reviews and lessons learned from conceptualising and developing cooperative systems-especially concerning the transition from physically co-present humans to virtually co-present humans (e.g., [8,9]). In a third step, we iteratively derived four design patterns for cooperative systems and modelled them in the unified modelling language (UML version 2.4 [16]). --- Structure of Social Computing The structure of social computing systems refers to entities and their relations as essential ingredients. Our UML class diagrams emphasise the entities involved, their compositions, and their dependencies. We use interfaces for modelling general entity behaviour that can be applied to a variety of instances. We use abstract classes for modelling entities that share implementations, and we use standard classes for modelling specific entities. The first structural pattern we introduce is the Social Entity Pattern (cf. Fig. 2). It describes the general setting of people involved in an interaction and their roles. The interface SocialEntity refers to humans that are explicitly included in an interaction. A social entity has general knowledge of the world and specific knowledge of particular domains. It relies on Routines as a 'pre-established pattern of action […] which may be presented or played…' [6, p. 16]. It conveys information it likes to share with others, and conceals information it likes to hide from others. There are four classes implementing the interface SocialEntity: ActiveIndividuals, ActiveTeams, PassiveIndividuals, and PassiveTeams. --- Fig. 2. Social Entity Pattern as UML class diagram ActiveIndividuals refer to Goffman's performers and are instances of classes with a repertoire of Faces. They anticipate the behaviours of others and select as well as fit their faces towards them. An ActiveTeam consists of at least two ActiveIndividuals; it refers to 'any set of individuals who cooperate in staging a single routine' and '…an emergent team impression arises which can conveniently be treated as a fact in its own right…' [6, p. 79]. Teams have an overall goal. As noted above, members of a team can have an individual activity or a shared activity. Since the delegation of an ActiveTeam's members can vary from team to team, it is the responsibility of extended classes to implement that behaviour. An ActiveIndividual and an ActiveTeam can rely on their Support (i.e., social entities that provide services or feedback). PassiveIndividuals, as an abstract class, implement the interface SocialEntity with the ability to observe an action.
A further implementation of such passive individuals is the PassiveTeam, which refers to the audience that participates in the interaction. About the relationship of active individuals and passive individuals Goffman states: '…the part one individual plays is tailored to the parts played by the others present, and yet these others also constitute the audience' [6, p. xi]. In parallel to the team above, a PassiveTeam is an aggregation of PassiveIndividuals; Goffman writes: 'There will be a team of persons whose activity … in conjunction with available props will constitute the scene from which the performed character's self will emerge, and another team, the audience.' [6, p. 253]. In the pattern a Face class lays out the foundation for a distinct configuration of an active individual or team as a prototype to be applied in an interaction. Our notion of a face refers to Goffman's front; it is the 'part of the individual's performance which regularly functions in a general and fixed fashion to define the situation for those who observe the performance' [6, p. 22]. An ActiveIndividual can have multiple faces as a repository of communication methods and properties towards passive individuals. Since simultaneous interactions are likely in cooperative systems, it is important to note that an active individual may have multiple active faces at a time (i.e., a system is required to provide means for the preparation of an interaction as well as means for easy access to the repository of faces to choose from). A Character is a specific configuration of a face. When instantiated in an interaction, an ActiveIndividual selects and transforms a face into a Character containing information and dissemination methods: 'When a participant conveys something during interaction, we expect him to communicate only through the lips of the character he has chosen to project' [6, p. 176]. In our pattern the interface Artefact refers to work-related (e.g., documents) and leisure-related objects (e.g., movies). In contrast, Goffman narrows the performance down to interacting individuals or teams; for Goffman external objects contribute to the overall expression of a situation as a setting: 'there is the setting, involving furniture, decor, physical layout, and other background items which supply the scenery and stage props for the spate of human action played out before, within, or upon it.' [6, p. 22]. However, in social computing an Artefact is often an essential part of an interaction. It relates to virtual or physical objects that can be created, edited, and deleted in the course of an interaction. In a routine, a composition of artefacts can be involved; in social computing systems this is typically represented as collaborative editing or sharing. The Interaction refers to Goffman's performance. It is a composition of characters of one or more active and one or more passive individuals. It has three phases: in the preparation an active individual sets her role; in the execution a character acts towards passive individuals or a passive team; and in the finalisation an active individual collects responses from its interaction and uses the outcome for further refinements of its faces. A history as a set of interactions is important in social computing systems for verifying information and deducing information (i.e., drawing conclusions). The second structural pattern is the Region Pattern (cf. Fig. 3). It maps Goffman's regions into a combination of Visibility and Locality that can be applied in Interactions.
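Before the Region Pattern is elaborated, the remaining structural entities can be added to the sketch. Again this is a minimal illustration: Face, Character, Artefact, Interaction, and the three phases follow the pattern, while the concrete fields and the method names instantiate, act, record, and refine are assumptions made for illustration.

```java
import java.util.ArrayList;
import java.util.List;

// A face: a fixed, reusable configuration that defines the situation for
// those who observe the performance; faces are selected and adapted
// rather than created anew for every interaction.
class Face {
    final String appearance; // cues presenting social status
    final String manner;     // cues presenting the interactional role
    Face(String appearance, String manner) {
        this.appearance = appearance;
        this.manner = manner;
    }
    Character instantiate() { return new Character(this); }
    void refine(List<String> responses) { /* fold feedback into the face */ }
}

// A character: one specific configuration of a face; during an interaction
// a participant communicates only through its character.
class Character {
    final Face face;
    Character(Face face) { this.face = face; }
    void act(String expression) { /* convey towards the audience */ }
}

// Work- or leisure-related objects that can be created, edited,
// and deleted in the course of an interaction.
interface Artefact {
    String name();
}

// An interaction in three phases: preparation, execution, finalisation.
class Interaction {
    private final List<String> responses = new ArrayList<>();
    Character prepare(Face face) {              // the active individual sets her role
        return face.instantiate();
    }
    void execute(Character character, String expression) {
        character.act(expression);               // act towards the passive team
    }
    void record(String response) { responses.add(response); }
    void finalise(Face face) { face.refine(responses); } // reuse the outcome
}
```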
Goffman writes: 'A region may be defined as any place that is bounded to some degree by barriers to perception.' [6, p. 106]. As described above, Goffman distinguishes the regions stage, backstage, and outside. However, in our opinion, social computing systems require a more flexible representation that should allow for and contribute to in-between regions. --- Fig. 3. Region Pattern as UML class diagram The interface Visibility represents filters for types of information and dissemination methods to be applied to interaction with social entities. While active individuals and active teams can access a large amount of information, passive individuals and passive teams can only access designated information. The interface Locality also refers to filters, but these provide methods as a boundary of real locations (e.g., a display in a shared office space) or virtual locations (e.g., a user's timeline). Combining Visibility and Locality provides means for more sophisticated configurations than the region types proposed by Goffman could cover. The combination reflects individual sharing preferences that also apply during a system's automatic inference of information (i.e., map and reduce). For instance, an interaction can span real and virtual locations at once while communication is still filtered. The filtering can be achieved by matching properties of CommunicationEntities (e.g., an Artefact as a shared object) against the properties of passive entities involved in the interaction. Subsequently, we introduce the interfaces DirectSocialCommunication and MediatedSocialCommunication along with their patterns. --- Dynamics of Social Computing The dynamics of social computing systems refers to the general communication behaviour of humans within the system. The two patterns focus on the interaction between an ActiveIndividual and a PassiveTeam as the execution of an interaction. We show the two patterns as use cases in UML sequence diagrams. Each diagram shows the entities involved in the execution, and the sequences of synchronous and asynchronous calls used in it (please note, we explain an interaction of an individual; for team performances the steps are similar). --- Fig. 4. Direct Social Interaction Pattern as UML sequence diagram The first dynamic pattern is the Direct Social Interaction Pattern (cf. Fig. 4). It starts with the path an ActiveIndividual executes to set up its Face and Character and to activate a PassiveIndividual, summarised as the anticipate call in the diagram. After that, the ActiveIndividual instantiates a Character object for direct social interaction. According to Goffman, faces are selected and adapted, rather than created; he writes: 'different routines may employ the same front, it is to be noted that a given social front tends to become institutionalised in terms of the abstract stereotyped expectations to which it gives rise, and tends to take on a meaning and stability apart from the specific tasks which happen at the time' [6, p. 27]. This manner of stereotypical selection and adaptation allows PassiveIndividuals to recognise familiarity between the Characters of different ActiveIndividuals and thus simplifies the validation process. The Character object creates the DirectSocialCommunication object for delivering information. In the loop of direct social communication, a Character calls its associated Face to obtain valid and appropriate information. It then delegates this information to the DirectSocialCommunication object for further distribution in an Interaction.
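A minimal sketch of how the Region Pattern's filters could be combined with the loop of direct social communication just described is given below; the filter methods mayReceive and inBoundary are assumptions of the sketch rather than part of the pattern.

```java
import java.util.List;

// Repeated from the first sketch so that this fragment stands alone.
abstract class PassiveIndividual {
    abstract void observe(String expression);
}

// Region Pattern as filters: Visibility decides what a receiver may see,
// Locality decides whether the receiver is within a real or virtual boundary.
interface Visibility {
    boolean mayReceive(PassiveIndividual receiver, String information);
}
interface Locality {
    boolean inBoundary(PassiveIndividual receiver);
}

// The 'given' type of communication: explicit verbal or written symbols.
class DirectSocialCommunication {
    private final Visibility visibility;
    private final Locality locality;

    DirectSocialCommunication(Visibility visibility, Locality locality) {
        this.visibility = visibility;
        this.locality = locality;
    }

    // Called repeatedly in the loop of direct social communication: the
    // character obtains valid information from its face and delegates it
    // here; the combined region filters decide who actually receives it.
    void distribute(String information, List<PassiveIndividual> audience) {
        for (PassiveIndividual receiver : audience) {
            if (locality.inBoundary(receiver)
                    && visibility.mayReceive(receiver, information)) {
                receiver.observe(information); // the receiver validates and responds
            }
        }
    }
}
```

In a concrete system, implementations of Visibility and Locality would encode the users' sharing preferences, so that an interaction can span real and virtual locations while communication remains filtered.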
Goffman describes direct social interaction as a communication of 'sign-activity', the transmission of expressions towards the audience relying on 'sign-vehicles'. He distinguishes two 'radically different' types of communication: the given and the given-off [6, p. 2]. In this pattern DirectSocialCommunication refers to the type 'given'. It stands for communication in a narrow sense, as it consists of verbal or written symbols (i.e., speech and text). All social entities involved in an interaction are familiar with encoding and decoding these symbols. The process of delivering DirectSocialCommunication occurs frequently in a loop and simultaneously during an interaction; the resulting calls are asynchronous. As described previously, a PassiveIndividual receives the information and checks its consistency. A PassiveIndividual responds accordingly concerning the information's inner validity (e.g., authorisation of sender and contents) as well as regarding previously received information (e.g., the history of the interaction). An ActiveIndividual can emphasise information while sending DirectSocialCommunication, as it can adapt a Face using the responses received; Goffman speaks of governable aspects [6, p. 7]. --- Fig. 5. Mediated Social Interaction Pattern as UML sequence diagram The second dynamic pattern we introduce is the Mediated Social Interaction Pattern (cf. Fig. 5). It reflects the process of accessing an artefact and distributing the information arising from that access towards the passive individuals. In cooperative systems and social media applications this type of information is typically used for providing awareness information to the users [10]. MediatedSocialCommunication refers to communication in a broader sense and is related to Goffman's 'given-off'. It consists of a range of behaviours that can hardly be controlled or manipulated; Goffman writes of ungovernable aspects that are of a 'more theatrical and contextual kind, the non-verbal, presumably unintentional kind, whether this communication be purposely engineered or not.' [6, p. 4]. When accessing an Artefact, at least one separate MediatedSocialCommunication object is created automatically. The PassiveIndividual receives the information, matches it with previous objects of the types DirectSocialCommunication and MediatedSocialCommunication, and responds accordingly. --- Discussion and Conclusions In this paper, we have argued that designers and developers of social computing systems face complex design decisions. To support them, we identified key concepts of Goffman's framework and derived structural and dynamic UML patterns. Our study of Goffman's framework and the derived patterns relate to some findings of previous work on patterns, corroborating these findings. Our patterns bridge between the artefact-specific patterns of Martin et al. [13,14] and the collaboration-specific patterns of [19]. The Social Entity Pattern represents the typical behaviour of users frequently switching their hats between the two roles of an active individual and a passive individual. The faces they rely on during their performances are diverse in terms of contained information, actions, and reactions. Systems should address this need for diversity by providing a repository of faces the users can choose from and evolve their characters upon. Yet, our pattern reaches beyond the existing ones, as it allows multiple, persistent, temporal and spatial active characters.
The Region Pattern addresses the requirement of diverse spaces for preparing, sharing, and acting. Social computing systems should provide these spaces, as users need them for their performance. On the one hand, users prepare the interaction using more 'technical standards' in a 'backstage' region where 'the suppressed facts make an appearance.' [6, p. 112]. On the other hand, users interact in 'stage' type regions using more 'expressive standards'. Providing stability of locality and visibility in this pattern is important for protecting users from unmeant disclosures, which Goffman counts among 'some major forms of performance disruption-unmeant gestures, inopportune intrusions, faux pas, and scenes.', noting that 'When these flusterings or symptoms of embarrassment become perceived, the reality that is supported by the performance is likely to be further jeopardised and weakened' [6, p. 212]. The Direct Social Interaction Pattern and the Mediated Social Interaction Pattern cope with the performance itself and provide a model for Goffman's two communication types of 'given' and 'given-off'. Users require means of dramaturgical discipline, for instance the anticipation of the passive individuals, to manage their impression validly. The patterns explicitly inform designers and software engineers of social computing systems to apply the Region Pattern in order to account for the hardly governable type of communication (e.g., when accessing resources within the system or generating metadata). For future work, the structural and the dynamic patterns should be applied in the design of social computing systems, so that their actual benefit for designers and developers in conceptualising and implementing such systems can be measured in empirical studies. Furthermore, Goffman offers detailed descriptions of more social processes (e.g., make-work) and best practices (e.g., team collusion) that may supply further patterns towards an extensive language of patterns for social computing systems.
Illegitimate bodies? Turner syndrome and the silent interplay of age, gender, and generational positions.
Introduction Anthropology has long explored how "the body is used to think time, and how it also, in turn, is used to set the clock of the temporal thresholds which society invents" (Julien and Voléry, 2019: p. 7). Since the beginning of the 20th century, ethnologists have demonstrated how societies carefully organize the coupling of bodies and times in defining age and social transitions. More specifically, these passages (births, deaths, the end of childhood, the beginning of adulthood, the start and the end of procreative or sexual periods) all constitute opportunities to re-establish individuals and groups in time: in an age, in a stage of life, and in a generational position. Thus, despite the academic and social discourse about the dynamic character of the life course (Elder et al., 2003), and despite the reconfiguration of the categories of age (Blatterer, 2007a,b) and of gender performativity (Butler, 2004), the social norms that establish a conformity between bodily transformations and stages in life have not disappeared: they mutate and reemerge in different forms. Expert knowledge is still home to norms that codify the link between body and time. The "surgery of age" (Moulinié, 1998) aligns the human body with standards of development or shields it from the ravages of time (Vincent, 2006; Dalgalarrondo and Hauray, 2015). Through hormonal corrections, a biological materiality is adjusted to social ambitions of self-optimization, to gender social identities, and to age status (Oudshoorn, 1994; Conrad and Potter, 2004; Toër-Fabre and Levy, 2007; Fishman et al., 2010). This paper aims to show how age categories, gender norms and generational position intertwine in the definition of a "legitimate body" (Boni-Legoff, 2016). According to Isabelle Boni-Legoff, this legitimacy is based on the opposition which hierarchizes bodies between masculine (legitimate) and feminine (illegitimate). She also underlines the importance of the triad "gender, class, race" in defining the contours of conformity to hegemonic norms. This approach, however, underestimates two aspects. The first is the place of bodily materiality itself, which is inseparable from its discursive setting and foundational to the experience that the actors have of this legitimacy. The second is the importance of the age position in the social structure and in the process of constructing personhood. The under-estimation of age categories, compared to the value given to gender, ethnicity, race and class, has been stressed by other authors, particularly those working in fields where age-related asymmetries are more apparent, such as childhood (James and Prout, 1997) or aging (Rennes, 2020). Finally, the position in the generational order as a central dimension of the "legitimate body" is relatively absent from the scientific debate. Gender distinction (whether it is assigned, negotiated or reformulated over time), age and generation constitute the very nexus between the body, the short time spans of the individual life, and the longer times which are perpetuated through the lines of descent. "Whether it is seen as the biographical time that brings an individual from conception to birth and then to death, or the historical time that distinguishes the past, the present and the future of a society, or the socio-cosmic time spanning cycles of metamorphosis and the regeneration of beings, the masculine/feminine distinction is continually deployed and changed, constructed and altered" (Théry, 2008: p. 31).
The starting point of this article is a research study on the experience of women affected by Turner syndrome, a genetic condition causing slowed growth and variations in physical and sexual development. The syndrome upends the apparent concordance between the time of growth, the social age, the expressions of conventional femininity that emerge over a number of successive stages (the forming of breasts, the first menses, the beginning of sexual activity, ...), and the longer time of generational renewal, marked by the possibility of motherhood (Maciejewska-Mroczek et al., 2019). Even though Marcel Mauss proposed to consider the body as a physio-psycho-sociological entity, for a long time anthropology and sociology, especially in France, have emphasized the discursive dimension of corporeality (Warnier). However, we do not underestimate these categories and their intersection in the making of body norms. For example, female adolescent bodies and their eroticization have been an historical, political and social construction in which gender, class and ethnicity assignments are intertwined (Walkerdine; Dorlin; Liotard and Jamain-Samson). We here consider generation as much a category that orders and structures society on the model of other social positions (Bühler-Niederberger) as a relational process through which family members are produced, legitimized or transformed (Alanen). For women who suffer from this genetic condition, the gap between physical development in time and the social status associated with age, gender and generation may engender a persistent feeling of "being out of place." The exceptions due to this chromosome disorder allow us to demonstrate the importance of the "right" time for the fabrication of a "right" body, as well as the power of the norms that connect bodily transformations to the ages and thresholds fixed by society. As the term "age" is polysemous, we will distinguish throughout the article between chronological age, age status, the stage in life, and age linked to a generational position. These different meanings intertwine and overlap, but as they interact, they refer to various ways of finding one's place in a time disrupted by illness. After presenting the research and the fieldwork, the article explores how measuring the body has become central in the social construction of age, and in the formation of the concept of "age-appropriateness" (Kelle, 2010). Then, we analyze how, among women suffering from Turner syndrome, the impossibility of conforming to such standards can produce a lack of legitimacy associated with various forms of desynchronization: between physical appearance, chronological age and age status; between the physical developments induced by hormone therapy and a particular stage in life; between chronological and reproductive age and generational position. Finally, the paper stresses how the relationship between body and time results from a complex interplay of social markers of age and gender that represent different ways of coping with the social and biographical situation. --- Fieldwork and methods The present article is the result of an ongoing research project on the bodily experience and life trajectories of people suffering from Turner syndrome. This is a rare genetic condition, due to the complete or partial absence of the second X chromosome, that affects about 1 in 2,500 newborn females in France every year.
The consequences are small stature, ovarian insufficiency resulting, for the most part, in delayed or absent pubertal development, and infertility. Morphological specificities and associated disabilities (e.g., hearing deficiencies), as well as an increased risk of diseases contracted later on, can also appear. Hormone treatments, both growth and sexual hormones, can be part of the care. Such pharmaceutical corrections are part of a molecular fabrication of age and gender (Gaudillière, 2003; Murano, 2019) that calls into question how the medical profession, as well as women and their entourage, deal with the risks of normalizing the body by its "enhancement" (William et al., 2011; Rajtar, 2019). The complex history of Turner syndrome also gives an insight into the scientific debate on sex/gender variations, which is "good to think with," as Löwy recalls quoting Lévi-Strauss (Löwy, 2019: p. 31). The fieldwork that gave rise to this article was conducted in France and carried out through ethnographic and participative methods. We met 20 girls and women between the ages of 10 and 60, all heterosexual, from different social classes and with varying educational backgrounds. Skin color, migrational background or the experience of being racialized or ethnicized are not significant in the population interviewed. One person had recently migrated from Central Africa to join her sister in France and to receive medical follow-up. The first interviewees were approached through a message that circulated thanks to a not-for-profit organization, explaining the objectives and methods of the research. After the initial contacts, people were recruited through a snowball effect. We met the respondents in their own homes on several occasions, some of them twice, and sometimes in the context of the activities of a patients' association. This approach allowed us to gather narratives in a variety of speaking situations: formal interviews, informal exchanges, for example at lunch or over a coffee, and conversations between women or girls affected by the syndrome. The formal interview guide was organized around six topics: the discovery of TS and the care trajectory; the experience of growing up and aging; the age transitions and the expected physical transformations; the evolution of the relationship with one's body, with oneself and with others; the competences engendered by Turner's syndrome; and the perspective on both medical and associative care. As is common during ethnographic fieldwork, other issues arose during the discussions. In the case of the younger participants, we also met their parents and sometimes their siblings. For two years, we participated in the activities of a patients' organization. We also held two workshops on what it means to grow up with Turner syndrome. We then carried out a content analysis, paying particular attention to contextualizing the narratives and pointing out the discrepancies and convergences between the different issues with respect to the modes and contexts of the data collection (e.g., formal interviews or observation). The narratives and experiences are extremely diverse. Working with a group suffering from a rare syndrome poses the problem of constituting a population that has homogeneous social characteristics, while the women we worked with have different ages, family and professional situations and largely different histories with the disease. This diversity makes it difficult to interpret the influence of class and "race" on women's experiences.
On the other hand, it has enabled us to highlight the importance of age, gender and generational positions, which emerge as transversal elements in the construction of a legitimate body. In France, the term race is increasingly used among sociologists. In anthropology, however, there is an ongoing debate about the pertinence of this term, which fixes and essentializes a difference that is produced within asymmetrical social relations. The history of the discipline, the heavy heritage of physical anthropology, and the fact that race does not constitute a category of public policy as it can be, for instance, in the USA, lead us to be cautious in using this term. I will therefore add inverted commas to this term or prefer the word "racization," which defines the way in which color, itself constructed, is part of a process of hierarchization of social groups. We cannot give more details about the places and organizations that were part of the fieldwork for reasons of confidentiality. The COVID pandemic slowed down the research. All data that could identify a person have been modified: first name, age, place of residence, profession. First and last names are replaced by pseudonyms, and the place of residence and care by another city close in character to the original one. The occupation is changed to another belonging to the same socio-professional category. These precautions are adopted for all interviewees (parents, health professionals, etc.). The variability is also linked to the specificity of syndromes, which are not diseases and manifest themselves rather as the expression of an anomaly (Canguilhem, 1943). A syndrome can vary greatly in shape from one person to another and its manifestations will unfold along widely diverging timelines. Finally, there is the history of Turner syndrome itself, starting with its first description dating back to 1938. Its chromosomic origins were identified in 1965, and growth and sexual hormone therapies were widely adopted only from the 1990s onwards. Today the diffusion of prenatal screening has contributed to parents being informed at an increasingly early stage, which allows them the possibility to interrupt a pregnancy or to anticipate the hormone treatments for their little girl. Treatments have evolved too, having become more individualized, with a more refined adjustment of dosages. Because of this great diversity, we have decided to present our findings through four narratives. They echo the testimonies of several of the women we met, while allowing for a better contextualization of their statements and experiences. These four cases were selected for several reasons. First, each of them is particularly representative of a form of desynchronization that we wish to highlight. Second, the chosen narratives are particularly dense with information that makes it possible to situate the experience of Turner syndrome as a disorder in time and in status within the overall life course. Third, the four individuals are very different with respect to social class, family history, level of education, and place of residence. The narrative of the first interviewee is characterized by the durable experience of poverty and social marginalization in a small provincial town. The second interlocutor belongs to the urban wealthy bourgeoisie, and has been educated in some of the best schools in the country.
The third person has experienced a history of upward social mobility: the daughter of farmers, she ended up in a recognized and well-paid profession. The fourth subject belongs to the middle class, characterized however by a significant intellectual "capital" in Bourdieusian terms. The choice of these narratives is not meant to associate social profiles with experiences of the syndrome. However, we considered it important not to eliminate the diversity of social conditions and to situate the narratives in specific contexts. --- The normative power of physical development The need to organize bodily changes and to establish fixed thresholds for different ages took on a particular dimension between the 19th and 20th centuries. Scientific measuring of the body accompanied child policies, as well as policies concerning adolescence and old age. The techniques of surveillance medicine such as screening, population studies, statistical enquiries, and public health campaigns have turned the variations of these changing bodies into measurable, understandable, objective, and predictable phenomena (Armstrong, 1995). Developmental thinking has introduced the idea of a life cycle divided into a regular and universal succession of stages (Turmel, 2008; Diasio, 2019b), relating bodily transformations to a specific vision of time, seen as linear, irreversible, progressive and teleological. This epistemology and the apparatus it deploys aim to stabilize the mutability of the body. The purpose is to distinguish between changes that happen over the course of a life and are associated with a social age, and modifications that stem from a changed state of health. Science has taken up the challenge of making these variations measurable, understandable, objectifiable and predictable, particularly in fields like pediatrics, psychiatry and geriatrics. This process has contributed to a conceptualization of the "healthy" body as a "stable" body (Armstrong, 1983; De Swaan, 1990), and of adulthood as the age of stability. The variations also define which dispositions and behaviors are possible, or even recommended, at which times in life: psycho-cognitive development tests for children or graphs measuring the autonomy of older people are examples of this. This means that age-appropriateness, now more than ever, constitutes a central component of the management of people's existence: the beginning of schooling, leisure and sports activities, relationships and sex lives, and the ages at which to become a parent (or not) and to leave one's "active" life. Physical, cognitive or psychological measures intertwine with political decisions in order to find the best regulation of people's life courses. The association of age categories with the social distribution of competences is not a prerogative of so-called Western modernity, and several societies institute "a normative relation between a certain age and an activity" (Widmer, 1983: p. 346). However, in technology-based contemporary societies, this combination is founded on scientific measures and its legitimacy is reinforced by a linear, chronological and mathematical definition of age. The contemporary fascination for quantification (Voléry, 2020) also combines with the standardization, the bureaucratization and the individualization of age as "a neutral and universal criterion for public action" (Rennes, 2019: p. 266).
These quantified measures are appropriated by social actors as a way of precisely establishing the stages of their development: children, who are the first target of this process, do not hesitate to declare, for instance, that they are "8 years and 3 months" old. Bodily changes and the measure of age are thus at the very core of their social identities (James, 1993). The importance of measuring has two consequences. Firstly, it reinforces the effects of the naturalization of age and the difficulty of thinking of this dimension as a social category with its own effects on the people being categorized (Hacking, 1999). The government of time and of the body's instability is a central dimension of a biopolitical order, but it is a "soft" biopolitics (Diasio, 2019a), all the more efficient because it is so obvious. Secondly, the concern with defining more and more precisely what is "age-appropriate" opens up areas of uncertainty (Kelle, 2010). The more we try to grasp universal criteria of development, the more we encounter variations, nuances and idiosyncrasies. That is how diagnoses of non-conformity with age-appropriate development have become so widespread in the field of formal school learning, as in the alarm about precocious puberty in spite of controversial medical data (Cozzi and Vinel, 2015; Piccand, 2015). The medicalization of ages that are judged to be "critical" (e.g., puberty, menopause) entails both an eagerness to define what is proper to a given age, and the difficulty of distinguishing between "normal" and "pathological" changes. "The body's aging process, whether in childhood or in later life, has become, in itself, problematic during the course of the 20th century in Western societies (...) the instability of the aging body, coupled with a decline in childhood mortality and increasing life expectancy, has worked to blur biomedicine's normal division between natural and pathological bodily change. And, in so doing, it has produced a range of new uncertainties about the life course as lived" (James and Hockey, 2007: p. 143). Finally, the measuring of ages and bodies is now confronted with a medical world that has become more and more individualized and where the molecularization and singularization of treatments call into question instituted categories (Rabinow and Rose, 2006; Raman and Tutton, 2010). An illness or a genetic condition will deeply affect this normalization of the link between body and time. A serious childhood disease may place the child in a position "outside of childhood" (Bluebond-Langner, 1996), while older people faced with degenerative processes are described as returning to childhood (Hareven, 1995). In other circumstances, as in the case of myotonic dystrophy, adults who are considered at the height of their strength have to take the same precautions as older people (Perrot, 2021). Such imbalances do not necessarily lead in the direction of increased fragility: children suffering from type 1 diabetes can outperform adults when it comes to self-administering their care (setting up the catheter, injecting themselves), subverting attitudes that are usually associated with their age and, sometimes, their gender (Williams, 2000; Renedo et al., 2020).
With Turner syndrome, the experience of stunted growth, small stature and late puberty highlights the normative power of the "right" kind of development when it comes to defining who one is and what one's place is: "A friend of the family described me as an old child, and I proudly compared myself to Peter Pan. [...] I might not object to womanhood, but I could not imagine myself as a woman. Being a kid defined who I was" (Beit-Aharon, 2013). This temporal disorientation can also translate into a confusion about one's status: am I a child? An adult? A woman? We will now focus on four "cases" that exemplify, each in its own way, the diversity of misalignments between body, time and gender norms. These stories also present women of different ages, social conditions and family histories. As such, they require more in-depth exploration to set their experiences in perspective and in a specific context. --- Nadine: "As a child treated worse than dirt" Nadine is 55 and was born and raised in the East of France. She lives alone in social housing, in the suburb of a medium-sized town. Her apartment carries the traces of a medicalized life: her bed is equipped to facilitate her weekly injections, there are piles of drugs in the bathroom and living room, and she has a regulator for the sound on her TV to help with her hearing problems. As she currently works as a home care assistant, she could be described as the help who "does the dead," as Verdier (1979) put it. That is also how she talks about her work: "I close their eyes," she says, and proudly recounts how she was able to accompany several people all the way to the end of their lives. The eighth of eleven siblings, born to a family of farmers, she was raised by her grandmother until she was 11. When she was a little girl, she was "always" getting sick. Her frail health and her difficulties at school led her father to decide to withdraw her from school. That was her first struggle, because she enjoyed studying, even though she was slow and had to suffer the mockeries of her classmates, who called her "the dwarf." When she turned 14, she left school and entered an institution. Fourteen was also the age at which her parents, worried about the fact that she still had not had her period, took her to a doctor: "They took me to Paris, I underwent some tests and all that and that's where they discovered I had Turner syndrome. But nobody told me anything. When I was at school [...] it was the nurse who would give me the drugs that made me have my period like everyone else." The nurse came to the refectory every day at 12 to distribute the boarders' medication. Nadine would remain in the dark about her illness until she turned 24: "By dint of... my parents ended up telling me. And then after a while I also asked the doctors." She received very vague explanations about some genetic problem, until the day she obtained more precise information through a patients' organization. This struggle for knowledge was coupled with a permanent struggle for recognition of herself and her abilities, in spite of her small stature. "I was treated like dirt," she often repeats. It was a struggle against her parents, who considered her "inferior" to other children, a struggle for the right to get her driver's license and her first responder's certificate, or her diploma of home care assistant. As she says: "OK, we're not tall, but I do the work just as well as a person of natural height."
In her account, her small stature appears to be the source of a double process of inferiorization and invisibilization. It assigns her the status of "a child," and "a sick child" at that. "To make people see that I'm here, that I'm fighting, that is very hard [...] when I go [to the doctor] with someone, he will be talking to the tall person who accompanies me, instead of talking to me." The struggle to be seen and heard influences the affirmation of her femininity too. Being a woman with Turner syndrome means "fighting, thinking, and making oneself heard." Nadine is thus clearing a path for herself where she can be seen and heard. She loves music, and at one moment during the interview she sings in a confident, well-placed, and resounding voice. She sings at family gatherings or with friends, dressed as a boy, and impersonates male stars of French pop music from the seventies. Several of their portraits adorn the walls of her apartment. Squeezed into the position of a child, which contrasts with her chronological age, Nadine mobilizes her body to "show herself" and "be heard," and her entire account is structured around these different sensory matrixes. --- Corinne: Going through life with a persistent feeling of illegitimacy Corinne is a 60-year-old biologist who works in the agro-food industry, is married and has an adopted child who is now 15 years old. The family lives in an individual house with a small garden, in a suburban town in the Southeast of France. She comes from an upper-class family, with a father working in the industrial sector, a mother who was a homemaker and a younger brother, a family marked by the silence surrounding her "illness." After she had been "stretched out in every direction" to trigger her growth, Corinne received the Turner syndrome diagnosis when she was 14, from a renowned Parisian doctor who did not deem it necessary to inform the young patient about her condition. However, her mother would immediately tell her, since, Corinne being an intellectually precocious child, she was about to pass her baccalaureate at age 15. This institutional threshold therefore influenced her illness trajectory (Corbin and Strauss, 1988). At the time, all the information she received centered on the absence of fertility: "no growth, no puberty, you won't have children." Corinne discusses the violence of the diagnosis, and how her mother's view of her would from then on transform her into an "a-sexual being" trapped in the generational position of the "girl" who had no right to a sex life, love, or motherhood. The hormone treatments, which were still in their early days in the 1970s, changed her body: she gained weight, and the presence of male hormones preoccupied her: "what am I going to turn into?" The stages in the treatment and her own transformations are both discussed in terms of the chemical products that acted on "[her] fabricated body": the nandrolone phase, the trophobolene phase, and the estradiol-progesterone phase. Her "accelerated puberty" was overwhelming and depressing. Biological and social stages succeeded one another (puberty, college, first job, very late first amorous relationships, adoptive motherhood), but she felt out of sync and often saw "an overcast horizon." Being out of step with time, and having her intellectual qualities dissociated from her childlike appearance, amplified the feeling of not "being in her place," which constitutes the common thread of her narrative.
"Considering that I was intellectually precocious and at the same time I was physically not developing, I cannot tell you how hard it was to deal with that gap. I was at university with people who were 19-20, had no breasts, and I was 1.35 meters tall. How do you find your place in that? How do you find your place among them? Some of them were already in a couple, anyway (knocks on the table). You ask yourself: what am I doing here?" In her testimony, her small stature is constantly tied to a femininity that is perceived to be defective (often expressed through references to the lack of breasts) and a sexuality that does not follow the same stages as the others: her very late first kiss, and her stormy sentimental relationships. The fact that physical development, belonging to an age and gender, and being in the "right" stage of life are all out of sync, manifests itself in a variety of areas. When she gained a lot of weight due to a bad dosage of her treatment, her mother would dress her in pregnant women's clothes. At work, she is not taken seriously. When she decided to adopt, the employee turned to her husband to place the baby in his arms, which strengthened her feeling of illegitimacy: "I felt illegitimate, completely (Pause). Because you are always asking yourself the question, whether you are legitimate. [. . . ] You never really feel completely legitimate, as a woman. I never feel completely legitimate as a woman." The illegitimate body is, in Corinne's experience, entangled in age and gender non-appropriateness. The gap between body and time is here described as a difficulty to settle in the conventional stages of life (Settersten, 2002). As in the case for Nadine, it is possible to find a place at the margin or to occupy a void. That way, with friends, "we have a specific place, one that does not belong to anyone else. [. . . ] We are not a threat, neither for the boys nor for the girls. We're the good friends, we make everyone laugh, we accommodate everyone, we smile all the time and accept everything." On the other side of reproductive life: Emilie and the menopausal turn-o These cases can be associated with a time of late diagnoses and limited treatments. However, the feeling of illegitimacy caused by a body that does not show the "right" markers of gender and age is still just as pervasive among our younger respondents. Moreover, being assigned to childhood is then doubled by another positioning "outside of any age, " namely menopause. During one of the first meetings we attended between Turner syndrome women and girls and gynecologists, one of the criticisms patients voiced was that the hormone substitution treatment mentions "women who could be 51, " and this is "very problematic." This discussion, though it seemed banal at the time, would subsequently become quite meaningful. Emilie is a 36-year-old manager. She lives in a big city in the West of France and is now going through some great existential changes: she has new professional responsibilities, and she has entered a relationship in which she lives with her partner as a couple, after a deferred love-life and a "period of great mistrust toward men." Her parents were farmers, and they both suffered from other chronic illnesses themselves. She was diagnosed at age 14 because of slow growth and a lack of any signs of puberty: "I never had let's say the physical passage . . . you progress in age, in life, but your body itself does not evolve, it is at a standstill." 
Between 1999 and 2019, the treatment of the syndrome proceeded haltingly, with many stops and starts, and periods of waiting. In 2019, Emilie says she felt "ready without really being ready" to follow a treatment, and it was only gradually that "in my head it had matured, well I'm soon going to be 35, I will have to do something. So I began the treatment in January 2020." This interior time was a long, personal and rugged road that clashed with the rapid physical changes induced by the first growth hormone cycle, which had made her grow over 20 cm in a very short time. The fragmented way in which the treatment was then followed up was compounded by the discovery that she suffers from epilepsy, which for a while relegated the syndrome to the background and, with it, her relation to "Emilie the woman." When she discusses this, she talks in the third person: "that part of me that is the syndrome, that part of Emilie, but Emilie as a woman, not as a person, was beginning to fall asleep... and also the relation with men, and with her own body." The absence of a menstrual cycle, combined with a fragile bone structure and a disposition of the sexual organs that does not facilitate sexual relations, gave her the feeling that she had "the status of a woman in her menopause." Emilie finds this idea "embarrassing" and "psychologically complicated" and associates it with the absence of a "woman's life": "I did not have a woman's life as such... I had pushed it away somewhat, see? I was Emilie, a person, but I was not... I was not a woman (she stops, tears in her eyes); this is sort of the hardest part." (Translator's note: in colloquial French, the expression avoir une vie de femme means to have a love life, and more specifically a sex life.) These words revive widespread social ideas that associate the end of the menstrual cycle with a passage (Skultans, (1970) 2007) and the decline of femininity (Lock, 1993). The lack of a woman's life is here translated too into a difficulty in approaching sexuality, the grief over the loss of maternity, and a feeling of being "mismatched" and "out of place." Family gatherings or reunions with friends strengthen the feeling of being "the weak link, the ugly little duckling, the odd number." Evoking menopause thus also stirs up another question, which is the one about filiation and one's genealogical position. The menopause of mothers and the first period of girls constitute a focal transmission point in contemporary France (Vinel, 2008). In many societies, the end of fertility means handing over one's legitimacy to procreate, even if that sometimes means, at least formally, stopping sexual relations (Beyene, 1986; Delanoë, 2006). These rules, which can be transgressed through more or less explicit practices, aim to dissipate any possible confusion or ambiguity in the succession of generations. Mobilizing the image of the menopausal woman thus highlights another question: that of one's "place" in the order of generations, and of the contribution of women to the process of succession. --- Françoise, or the trouble in kinship The inability to have children has serious consequences for the family group, because the transfer of reproductive power constitutes a structuring element of generational succession. The infertility linked to the total absence of the second X chromosome is not an individual question: it also puts the continuity of the lineage at risk. As Radkowska-Walkowicz and Maciejewska-Mroczek (2023: p.
6) pointed out with regard to Poland, mothers whose children have Turner syndrome see their daughter's infertility "as an interruption of the intergenerational transfer of norms and values [that] jeopardizes women's hope for future grand-motherhood." This makes it one of the consequences of Turner syndrome that is most difficult to deal with within the family. The silence around what is often an open secret goes beyond differences linked to age, different diagnoses, or therapeutic and biographical trajectories. Indeed, the silence surrounding infertility feeds into and is fed by a concern about the ability to procreate that also applies to other members of the family, among whom it induces a desire to investigate their own chromosome types, or a fear of maternity-linked events like pregnancy or late periods. Relationships among siblings seem to be particularly troubled by sterility. The infertility of one is seen as a potential threat to the fertility of the others. Moreover, because of the confusion between genetic and hereditary illnesses, the children of a brother or a sister can also worry that their aunt's sterility could spill over onto them. Lastly, the pregnancy of a sister, or the new fatherhood of a brother, can bring about a lack of equality of place, and an asymmetry in generational ranking, which the women we interviewed experience with much apprehension. Françoise is 50; she has an intellectually oriented profession and lives in the south of France. She is very active in the local branch of a not-for-profit; she welcomes us, puts us into contact with people likely to be of value to the enquiry, and encourages us to pursue the research. As opposed to the many silences that have punctuated our fieldwork, she underlines the importance of "talking, talking, talking" about it. Psychoanalysis helped her to become aware of the questions of sexuality, to which she returns often during our meetings and which just as often are avoided in collective discussions. "The question of infertility is a major worry for us; as is the question of sexuality, but people don't talk about it. When you have learned to dissociate womanhood from maternity, and maternity from sexuality, you are OK. [...] But, you don't have tits you don't get a guy; and to have words to explain all this, that helps." Her experience of "being small" is told with a mixture of tenderness and concern. She is tender when she evokes her relationship with her brother when they were children: "He was very protective of me and his friends too, I was small, they never heckled me or pushed me around, they were very brotherly and adorable." However, she was impatient, "worrying about a body that wouldn't grow," and this becomes clear from a dialogue with her youngest sister during a workshop of the patients' organization: Françoise: Marie-Laure had her period, and I was expecting to be next, my sister had gotten ahead of me. Marie-Laure: we called her the "munchkin" (la puce). Françoise: yes, I had a grandma who was small. Marie-Laure: Françoise didn't see anything coming, and I thought, lucky her. Françoise: and I would cry [...] At the time of the diagnosis, we knew something was not right with me, but until then I had my place among the siblings. Marie-Laure: and then I became a mom, in 1980 we did not know that it was genetic. Moreover, in all this I was worried about her, about my big sister.
"Finding one's place" by comparing the changes of one's body to those that are happening to other family members, especially those of the same sex, constitutes one of the ways in which people take up their place in a family (Diasio, 2014). Turner syndrome, however, turns these relations upside down. The youngest sister's first period had an impact on Françoise's status as the oldest daughter. The birth order and its connection to gender are fundamental in the relationship between siblings, even in European societies where the prevailing social norm is to consider siblings as equal (Segalen and Ravis-Giordani, 1994;Fine, 2011). Françoise's infertility subsequently comes to blur her place in the family. Nevertheless, the birth of nieces and nephews is told with much humor as a way of settling back into a genealogical order and compensating for the lack of motherhood without suffering its burdens. --- The complex interplay of bodily markers In the experience of Nadine, the gap between chronological and age status, her small size and the childlike body discredit her and make her invisible as an adult woman. Corinne discusses how important it is to settle at the right moment of one's life course, in order to avoid a lack of legitimacy as a female adult, as a professional, and as a mother. Emilie's experience shows a tension between her life as a young woman and her menopausal status. The infertility and chaotic development of Françoise bring about disorder among the siblings and question her place in the generational rank. The four narratives presented here show how the relationship between body and time is not a well-oiled mechanism of biological data and social roles. They also reveal how the significant markers for the adjustment or misadjustment between body and time vary according to the moment of the life course, the social interactions and the particular temporality that is at stake. The choice of age and gender markers is far from arbitrary or coincidental. As women with Turner syndrome deal with the numerous ways in which the social making of age and gender is expressed: in their accounts, the height, the breast, the infertility, and the absence of menses are the most relevant phenomena. These, however, may be differently stressed according to the biographical and social context of the women's experience, and to the tactics (de Certeau, 1980) they deploy to "make do" with this genetic condition. Height is certainly the body feature that comes up most frequently. It is mentioned in relation to the altered rhythm of individual growth, it reveals the presence of the syndrome, and it underlines the difficulty of being part of a life stage, particularly in relation to peers. The misalignment between height and age thus refers to different temporalities: that of growing up, of the illness trajectory, and of the succession of age stages. However, stature materializes social, family, emotional and sexual relationships too. Height constitutes one of the first indicators brought to bear in measuring growth (Tersigni, 2015), but it is also a primordial expression of sexual dimorphism. It is the result of several genetic variants and social practices, such as unequal access to food resources (Guillaumin, 1992) or matrimonial choices, which have had an impact on genomes by selecting tall men and small women (Touraille, 2008). Height therefore constitutes a double operator in classifications of both gender and age. It indicates childhood, but also the fact of belonging to the female gender. 
In the case of women and girls with Turner syndrome, the question of height is ambivalent. Many of our respondents told us how their own mothers or other female members of the family are "small" and that this resemblance delayed the diagnosis. The problem of height seems to become more acute when the child leaves the family circle, for instance at middle school, in professional life, or in public spaces. In these situations, height is an element that materializes the disconnection between age and the fact of being hemmed into the children's category. As the word "small" signifies this double assignation, both to a physical dimension and to a state of social and psychological immaturity, short stature embodies the asymmetrical and hierarchical relationship between adults and children. However, when these women evoke their heterosexual love relationships or interactions with male members of the family (such as brothers or cousins), the fact of "being small" is less likely to be a source of discrimination. The "male taller norm," which dictates that men should be taller than women (Bozon, (1991) 2006), is still important in matrimonial arrangements and constitutes a materialization of the gender hierarchy. That is why, in a group interview, we saw smiles and laughter of agreement when one of our respondents said, "We Turner girls always go out with tall men!" A gender perspective then subverts the stigma of an "age-non-appropriate" height. Therefore, in some narratives, small stature is considered part of the "normality" of gender-based dimorphisms, which is in turn reinforced by the "normality" of heterosexual relationships. This legitimization of small stature through heterosexual relationships may explain why the question of height comes up more painfully in the narrative of Nadine. She refers half-heartedly to her long condition of loneliness, repeatedly dodging the question of sexuality and love relationships, and justifying her single state by referring to infertility. Françoise's attitude, on the other hand, is different: in her account, short stature is the trigger for the diagnosis, the proof that "something is going wrong," but it also gives rise to a protective attitude on the part of her brother, which is recalled with tenderness. In her account, especially when she talks about her heterosexual relationships, infertility plays the main role. Infertility also gave rise to her psychoanalytical treatment, in order to learn to dissociate femininity from motherhood. Should we see this difference as an effect of social class and level of education between two persons with very different social positions? Maybe, even if our data do not allow us to make definitive interpretations. The absence of breasts and infertility are particularly emphasized in sexual and love affairs, in interactions with friends, especially in youth transitions that put gender identity at stake (Fingerson, 2006), or in family relationships that involve intergenerational transmission. Nevertheless, while menstrual blood is often socially considered a gendered matrix of experience, which permits "a shared subjectivity" (Pandolfi, 1991: p. 155) and a bodily mapping of gender difference (Prendergast, 2000), the girls and women we met play down the importance of this fuzzy web. In fact, their attitude toward menstruation is closer than one might imagine to that of French adolescents encountered in other research (Mardon, 2009).
The menarche is part of the definition of growing up and indicates a good state of health, nevertheless the presence of first menses is not a sufficient condition to become "women" (Diasio, 2014). However, the importance given to periods depends, more than other factors, on the moment in which these women and girls are interviewed. In adolescence, the onset of menarche is considered rather as a way of aligning with the experience of other girls and finding one's place among peers or in the family, whereas over time the presence of monthly menstrual blood may be considered as an unnecessary bother. However, menstruation may regain relevance in the context of a relationship with a male partner who may regard it "as a sign of womanhood", as a 40 years old woman says. The approach to menstruation also depends on the generation to which the girls and women interviewed belong. For Corinne, who was treated in the 1980s with hormone therapies that were still in their trial and error stages, the onset of her first period came late and followed periods of "self-manipulation, " as she calls them, which were particularly painful. For Nadine, the menarche was experienced in misunderstanding and the passivity of a pill silently swallowed in the school canteen. These experiences invite us to situate the construction of a legitimate body in another temporality, which is the history of the discovery of the syndrome, its care and the evolution of treatments. A whole history of the patient and his or her participation in care is also intertwined with these transformations. Thus, the youngest girls in our population live in an age in which a new vision of children as present beings (Lee, 2001) leads doctors and parents to encourage their participation in therapeutic choices. Children and teens can then be consulted to know if and when to initiate treatment with sex hormones and to dissociate, for example, the growth of the breasts from the onset of menstruation. Physical transformations that happen over time are thus a part of a continuous and multidimensional process of biological and social facts which is open to interpretation, appropriation, and even conflict: "Over time, the combination can at times be harmonious, and at other times dissonant, and the individual is confronted with contradictory injunctions and "moral tensions" (Peatrick, 2003: p. 16). In the biological continuum, some markers will be socially selected (or not), physical qualities will be encouraged (or not), and certain practices valued, while others will be left by the wayside. We can therefore adopt, for age, the same statements that feminist biologist Fausto-Sterling (2000) dedicates to gender: "Our bodies are too complex to provide clear-cut answers about sexual difference. The more we look for a simple, physical basis for 'sex' , the more it becomes clear that 'sex' is not a pure, physical category. What bodily signals and functions we define as male or female are already entangled in our ideas about gender." Out of the obvious: The body against "nature" While the syndrome causes distress and a sense of lack of legitimacy, it also gives rise to another form of "presence in the world" (De Martino, 1948) that leads to a critical re-examination of hegemonic models of womanhood and their intersection with life stages. Radkowska-Walkowicz writes of an "emancipatory model of femininity" (Radkowska-Walkowicz, 2019: p. 138). 
Instead, we noted a reflexive stance: being aware of the normative power that the measurement of bodies obtained in contemporary society, the women and girls we met assiduously exert a distanced and critical gaze on their life path and the intertwining of bodily manifestations, age and gender positions. From one account to another, we find recurrent statements: "Turner girls think a lot, " "We have to think twice, " "We think more than those who have no disability." Thus, Corinne claims that the syndrome brings "a different view of femininity, and of men. [. . . ] We think a lot about sexuality, about. . . well, about many things, about the couple, about the family, we think a lot. And young girls who have nothing [i.e., who have no syndrome], who haven't thought about it at all, who throw themselves into their life, their love life without having thought about it for a moment, well they don't ask themselves: "can something else exist, can we get there in another way? So they put on make-up, they flirt, they appeal and that's it." In these words, it is less a matter of distancing themselves from forms of coquetry or so-called "feminine" aesthetic practices than of questioning their obviousness. Sometimes, the medical doctor invites the girl to observe at what time her classmates are menstruating in order to 'be ready' and to match pharmacological interventions to a form of social conformity (Laiacona, ). This concern with the normativity of body development also a ects some parents of children with Turner syndrome, who question, for example, whether growth or sex hormones are necessary or not. Frontiers in Sociology frontiersin.org Diasio . /fsoc. . Even if the experiences and illness trajectories are heterogeneous, we can observe the practice of a "bioreflexivity" (Memmi, 2003), which encourages women, at different moments of their existence, to question the influence of the syndrome on bodily states intertwined with age status and gender role. The stages of existence, which seem to occur "naturally" for those not affected by the syndrome, are submitted to a relentless evaluation to understand "how one stage succeeds another." The body, its changes over time, and in particular its gendered expressions, such as the presence of menstruation or the development of breasts, are often described as "artificial, " "manipulated, " "weird, " "fake." This lack of "naturality" flushes out the apparent correspondence between biological and social age and raises questions to understand "where we stand." As Maëlle (21, assistant nurse) says, "growing up, becoming an adult, means growing in awareness of what Turner is, of what the relation to the syndrome is." It is interesting to note that in the discourse the term "phase" is frequently used in place of the "age." The phase eludes naturalization. It is constructed at the intersection of a type of treatment, such as the "nandrolone phase" mentioned by Corinne, a step in the care trajectory, an experience of the body, such as a slowed or accelerated stature growth, and an existential bifurcation: leaving one's parents' home, a break-up in love, entering or quitting a professional activity. Ages and their thresholds are then thought otherwise than in reference to the apparent concordance of biological transformations and social positions. 
Thus, if being assimilated to a menopausal woman when one is young is rather uncomfortable, as we have seen with Emilie's narrative, the onset of aging and the cessation of hormone treatments are interpreted as an alignment with the experience of other women in their menopause. Far from challenging the sense of womanhood the moment of entering an artificial menopause thus constitutes a sort of return to the circle of socalled "normality." This process of demystification of the obvious leads our interviewees to unravel some dimensions of being a woman, which, from their point of view, are wrongly considered interdependent. "The first thing people ask you when you talk about not being able to have children is if you have your period. It is weird, like it is a sign of fertility. That's why it's problematic [. . . ] you can have your period and be infertile [. . . ] for me it's decorrelated" (Ariel, 40 years old, employee). As we have seen in Françoise's narrative, a cascade of decorrelations challenges common associations that women with Turner syndrome affirm enduring in everyday life: the association between chromosomal sex and femininity, between femininity and seduction, between femininity and maternity, between maternity and sexuality, or between the menstrual cycle Our first observations reveal that the approach to aesthetic codes also varies between generations. For example, a not-for-profit has recently set up workshops to encourage young girls not to neglect their appearance and to choose clothes that enhance their bodies despite their short stature. According to one of our eldest interviewees, there is a "cultural change" related to the earlier start of medical treatments. --- As demonstrated for instance by Delanoë ( ) in the case of France. and fertility. Furthermore, women with Turner syndrome deal with the numerous ways of conceptualization of sexual polymorphism (chromosomic, gonadic, hormonal, morphological. . . ) and split up the variable expressions of gender. Throughout the interviews, the researcher witnesses the attempts to unfold, as much as possible, these gender attributions and overlaps that are perceived as problematic. The questioning of the social and cultural evidence of "femininity" is often expressed through fighting metaphors. Becoming an adult woman is often associated with learning to fight, to struggle, to not be crushed, and above all to start talking, to no longer be suffocated by the blanket of silence in which one was enveloped (and cloaked) during childhood (Laiacona, 2019). Nadine's narrative is an example of this warrior attitude: "I'm a fighter and I'm going to do everything I can to. . . to succeed if I fail. I am really a fighter. It is long, but I fight, I fight. That is what is good [in Turner syndrome], I try to show that I am here (she pounds) that I can." The words are often punctuated by gestures, such as the pounding of the fist on the table, which materialize these claims of fighting spirit. Becoming a woman means learning to "be seen" and "be heard, " where Turner's syndrome amplifies the invisibilization and minorization of women in relation to men, namely in the public space. Thus, one interviewee recalls how, faced with her repeated requests for a document to the administrative services, she would have had to ask her husband to intervene (which she did not) with "his big size and his big voice [. . . ] Not only are we women, we are also small and childlike!" 
This reflexive and fighting womanhood may constitute an example for other female relatives. The older women in our population point out how they establish special relationships with some nieces who turn to them for counsel or advice. They may be girls who have not yet had children, who are late in their first period, or who are complaining about family difficulties. These elective bonds reinstate our interviewees in a generational position. Thus, the body, in its multiple expressions, may be a resource for resistance to these classifications, and to naturalization. For instance, while Nadine's height inferiorizes her, her voice, whether through singing or through her rants at physicians ("I'm sure they heard me!"), comes to her rescue to allow her to assert herself with family, friends, and medical or professional milieu. In Françoise's case, her small stature and gender asymmetry generate a protective attitude on the part of her brother, her brother's friends and in the close ties which she entertains with her nephews and nieces. This means that the experiences of women who have Turner syndrome demonstrate how "the body is not only shaped by social relations, but also enters into their construction as both a resource and a constraint" (Prout, 2000: p. 5). Taking this approach avoids the tendency to fall into the double reductionism of naturalization or radical constructivism, both in age (ibidem) and in gender (Touraille, 2011;Raz, 2019). It thus enables us to reconsider the "materialization of sex and the sexuation of matter" (Kraus, 2000: p. 190). While society acts upon the "unfinished body" (Schilling, 1993;Remotti, 2003) through cultural practices and a range of technologies of the self (Foucault, 1988), the body acts on society as a matrix of possibilities and limitations: its changes, its troubles, the play of its matters and contingencies, solicit choices and practices, and provoke social responses, power positions, and resistance. --- Conclusion To be an adult with a small stature and a childlike morphology, to become a "complete" woman at an inappropriate age, to go from late growth into an accelerated onset of puberty caused by hormone treatments: suffering from Turner syndrome brings about desynchronizations between several times. The temporality of syndrome, treatments, growth and aging, and filiation disrupted by partial or total infertility, are not coincidental, and the gaps between them may give rise to experiences of liminality (Turner, 1969: p. 95) and stigma. The social norms that govern the "right" development of a body in time do not only establish appropriate age status. They are also at the heart of the ways in which gender is processed: it is not only a question of being at the right time, but of arriving there by negotiating the codes of a socially defined femininity. Lastly, the point is also to establish oneself in the long time of generations, whose renewal is threatened by infertility. The troubled body is thus defined by a disorder in time, which is also a disorder in status that produces a lack of legitimacy in several areas: family, work, relations with medical staff, friends, or lovers. This discloses the link between body and time as one of the last bastions where social and individual existences are naturalized and essentialized. 
The experience of women having Turner syndrome also reveals the obvious, elusive, undiscussed character of adulthood, and partially of womanhood, in contemporary French society, and the silent force of the entanglement of age, gender, and generation in the course of the life. Nevertheless, these narratives also reveal that in the thickness of the body and in its multiple manifestations, there are resources that go through the social evidence and put it into question. The body's materiality, with its complexity and singularity, plays against "nature". The different markers used to denote age or gender only become meaningful and effective if they are situated in social situations and relations, including generational ones. For the girls and women we met, age and gender are not stable categories, they are rather forms of action that are expected in the context of a given relationship (Alès and Barraud, 2001): for example as daughter, sister, partner, aunt and so on. Thus, day by day, with inventiveness, reflexivity, and humor, these women measure themselves to the established categories, find a place in their interstices, and continue to struggle, brave and pugnacious, for their own legitimacy. However, we feel it is important not to idealize this combative stance, which we found in other experiences of chronic illness. The ideal of reflexivity and self-improvement may trap the patient in a model that he or she cannot achieve and bring new forms of determinism and disempowerment (Diasio, a). --- Data availability statement The datasets presented in this article are not readily available in order to protect participant privacy. Requests to access the datasets should be directed to [email protected]. --- Ethics statement The research is registered with the CNIL data protection service of the University of Strasbourg at the following address: https:// cil.unistra.fr/registre. --- Author contributions The author confirms being the sole contributor of this work and has approved it for publication. --- Conflict of interest The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. --- Publisher's note All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
Background Current cultural competence training needs were assessed as baseline measurement in Dutch physician assistant (PA) students and PA alumni that were not specifically trained in cultural competence. In particular, differences in cultural competency between PA students and PA alumni were assessed.In this cross-sectional, observational cohort study knowledge, attitude, and skills and self-perceived overall cultural competence were assessed in Dutch PA students and alumni. Demographics, education and learning needs were collected. Total cultural competence domain scores as well as percentage of maximum scores were calculated. Results A total of 40 PA students and 96 alumni (female:75%; Dutch origin:97%) consented to participate. Cultural competence behavior was moderate in both groups. In contrast, general knowledge and exploration of patients' social context were insufficient, i.e., 53% and 34%, respectively. Self-perceived cultural competence was significantly higher in PA alumni (6.5 ± 1.3, mean ± SD) than in students (6.0 ± 1.3; P < 0.05). Low heterogeneity among PA students and educator exists. Seventy percent of the respondents considers cultural competence important and the majority expressed a need for cultural competence training. Conclusions Dutch PA students and alumni have moderate overall cultural competence, but insufficient knowledge and exploring social context. Based on these outcomes the curriculum of the master of science program for physician assistant will be adapted.Emphasis should be made to increase the diversity of PA students to stimulate cross-cultural learning and developing a diverse PA workforce.
Background Patient ethnicity predicts the quality of care one receives, independent of access to health care or socioeconomic status [1]. With the increasing migrant populations in the Netherlands -from 20% in 2009 to 24% in 2020 [2] -medical healthcare needs to be adapted to the requirements of this culturally and/or ethnically diverse patient population [3]. Barriers for equity in healthcare lay within poorly matched care to the minority needs, language, cultural familiarity, or health systems [1]. Health professionals and professional organizations subscribe to this view and should take a leadership role in advocating for interventions to reduce these disparities, although it is not yet clear yet what works best [4,5]. Doctor-patient communication is directly linked to patient satisfaction, adherence to treatment and, subsequently, health outcomes [6]. In a context where health professionals increasingly engage diverse patients with different perspectives of health the patient communication is challenging and requires culturally competent health professionals [7]. Cultural competency has been defined as "the ability of healthcare professionals to communicate with and effectively provide highquality care to patients from diverse sociocultural backgrounds" [7] is essential to professionalism and quality of care [8]. Culturally competent health professionals improved public health and patient satisfaction and may have a positive effect on the outcome of medical treatment [9]. Training of cultural competency for health professionals has been suggested to improve their cultural behavior [10] and increasing awareness of provider bias and discrimination in medical decision-making has been observed [7,11]. Cultural competence has been gradually incorporated in the Physician Assistant (PA) programs since about two decades in the US [12] and since 2017 in the Netherlands [13]. Cultural competency education for the PA is similar to that used in medical education and mainly focuses on knowledge, attitudes and skills [14]. Although cultural competence and cross-cultural training in PA programs showed increased multicultural awareness, knowledge and skills [15,16], study results were heterogeneous as various instruments or constructs of cultural competence had been used. Nevertheless, it was uniformly stated that exposure to diversity and cultural issues is essential to develop cultural desire and awareness. Recently, data in US PA students showed that they acknowledge the importance of cultural competency in their profession, but also acknowledge their own lack of knowledge and skills on this topic [17]. Cultural competence of Dutch PAs is not known, although data on Dutch medical students and physicians identified gaps in knowledge and culturally competent behavior [18]. Therefore, it was suggested that cultural competence training and creating awareness of students' incompetence should be part of the medical training program [18]. With regard to the urgent need of cultural competence training of PA students, the curriculum of the master physician assistant will be adapted and monitored for efficacy. To determine educational needs, current cultural competence was assessed as baseline measurement in Dutch PA students and PA alumni that were not specifically trained in cultural competence. In particular, differences in cultural competence between PA students and PA alumni were determined. 
--- Methods --- Study design This was a cross-sectional, observational study to quantitatively assess baseline cultural competence in a cohort of PA students and alumni of the master of science program at the HAN University of Applied Sciences, Nijmegen, The Netherlands. --- Participants PA students and alumni who were not formal trained on cultural competence during their PA training were recruited in August 2020 among students from cohort 2019 and 2020 (n = 149) and the alumni database of the HAN University of Applied Sciences (n = 412). Participants were invited by e-mail to participate in the study. Participation was voluntary and without any restrictions. Responses were collected anonymously, and no personal information was to be collected. This study was deemed exempt from scientific medical research involving human subjects according to the Dutch law ('WMO') and medical ethical approval for this study was therefore not obligatory. The study protocol was reviewed by the Ethical Research Committee of the HAN University of Applied Science (Reference: ECO 189.06/20) for local approval. Consent of the participants was obtained online prior to the start of the cultural competence questionnaire. --- General data of the cohort Demographic data included gender, PA status, working experience, country of origin of participants and their parents, professional experience with minority patients, cultural competence courses and (current) working location. --- Assessment of cultural competence Cultural competence was assessed using a questionnaire based upon the conceptual framework for teaching and learning of cultural competence including knowledge, attitude, and skills [18]. Cultural competence was assessed by three domains: i) General knowledge of ethnic minority care provision and interpretation services, ii) Reflection ability (attitude) for insight into one's own understanding of prejudice and cultural frames of reference determined by the Groningen Reflection Ability Scale (GRAS) [19], and iii) Cultural competent consultation behavior (skills) during medical consultations with ethnic minority patients. The original items on knowledge were updated to the current state of the art and the short case scenarios were adapted to match with the PA profession. Finally, selfperceived overall cultural competence was assessed using a 1-10 scale. --- Education and learning needs of the PA PAs experiences during university education on cultural competency, data on cultural diversity of students and teachers, and experiences on the role of PA education in culturally competency were collected. Usefulness and learning needs on cultural competence in relation to relevant competencies of the PA curriculum were explored. --- Data collection and storage Data was collected using a web-based questionnaire developed in Qualtrics © XM software (Qualtrics, Provo, Utah, USA; version August 2020). The data will be stored digitally for a maximum duration of 10 years at the HAN University on a password-protected research drive. Access to the research data is limited to researchers of the study. --- Statistical analyses Response rates were determined by calculating the ratio of responders to the number of invited PA students and alumni. Cultural competence scores were summed per dimension for each domain. Relative scores were calculated as percentage of maximum scores for each dimension and interpreted as insufficient cultural competence when < 60%, moderate when 60-80%, and sufficient when > 80% [18]. 
Descriptive statistics were performed and presented where applicable. Chi-square tests were performed to analyse demographic characteristics between PA students and alumni. Differences between mean or median group scores per dimension were analyzed using either parametric or non-parametric statistical tests for differences in cultural competence between PA students and alumni. P-values < 0.05 were considered statistically significant. --- Results --- Demographics From August to September 2020 a total of which 136 consented to complete the online questionnaire (response rate 27%), the majority being PA alumnus (n = 96), female (n = 98) and of Dutch origin (n = 132) (Table 1). Fifteen percent of the responders had an experience of living abroad for half a year or more. The majority (65%) had a working experience in health care of 15 years or more being statistically significant higher in alumni and 54% had a PA working experience up to 5 years. Nineteen percent had previously worked in one of the largest cities within the Netherlands. The number of minority patients was estimated to be 25% or less in the working area of 60% of the respondents (Table 1). --- Cultural competence The questions on cultural competence domains were completed by at least 80% of the respondents. Although moderate scores were obtained for both PA students and alumni, insufficient general knowledge on care provision of ethnic minorities was observed in both groups (Table 2). Reflection ability was moderate for all PAs. Also, moderate scores for consultation behavior were detected, although exploring of patients' social context was insufficient (Table 2). In particular, only 20 to 40% of the PAs were aware of the family composition or country of origin of more than 75% of the minority patients in their practice (Fig. 1). Moreover, the majority of the PAs (56 to 78%) knows of less than 25% of the minority patients in their practice details about the use of health care in the migrants' country of origin, the number of school years, or their reasons for migration. Finally, the self-perceived cultural competence of all PAs was rated as moderate, although alumni rated themselves significantly higher (6.5 ± 1.3, mean ± SD) compared to students (6.0 ± 1.2, P < 0.04) (Table 2). --- Language barriers and communication Thirty-nine percent of all PAs often to regularly encountered language difficulties when consulting minority patients in the year before the study and 78% had experienced using a professional interpreter. The PAs had sufficient interpreter behavior during consultation (82% of maximum score, Table 1). Respondents' desirability for using a professional interpreter is presented in Fig. 2. According to the majority of the PAs the use of a professional interpreter is regularly to often preferable, and 62% indicated the use of a child younger than 16 years as never desirable. --- Educational needs for cultural competence and the PA curriculum A total of 111 respondents completed the questions on the PA curriculum and educational needs. According to twelve percent of all PAs the PA curriculum has added value to their cultural competence behavior. Only 29% of the PAs indicated to feel confident in health care consultation of minority patients, whereas only eight percent felt they had received sufficient cultural competence training during the PA education. Sixty percent indicated that having a different cultural background than their patients did not cause any problems during consultations. 
Nevertheless, 48% of the PAs acknowledged that there is a need for healthcare professionals with various cultural backgrounds within the Netherlands to provide the best possible care. Moreover, 70% of the PAs considered cultural competence important for their work as PA. Sixty-seven and 78% of the PA student and alumni, respectively, indicated to have had few intercultural diversity among fellow students during their health care bachelor and PA master education. Even so, PA educators were little culturally heterogeneous according to 78% of the PA students and alumni. Ninety-two percent of those having experienced extracurricular cultural competence training indicated that this added to their culturally competent behaviour. Forty-three percent of all PA respondents indicated a need for training to increase their knowledge of culturally competent medical care and treatment of patients from diverse cultural backgrounds. The majority of the respondents indicated to have a moderate to high educational need regarding the competencies of the PA curriculum on medical treatment and social approach that are related to a culturally diverse patient population (Table 3). --- Discussion This study showed that culturally competent behavior of a cohort of Dutch PA students and alumni from the HAN University of Applied Sciences was moderate, although general knowledge on ethnic minority care provision and exploring social context during consultation was insufficient. Only self-perceived cultural competence was higher in PA alumni compared to students. PAs considered cultural competence essential for providing quality of healthcare and expressed a need for education in this field. In particular, cultural heterogeneity among peer students and teachers was considered low. To our best knowledge, this is the first study on cultural competency of PAs in the Netherlands. Cultural Fig. 2 Desirability of using different types of interpreters according to Dutch physician assistants competency of Dutch PAs was comparable with those in the US as well as Dutch medical students and physicians showing cultural competence being considered important, but students being deficient in cultural knowledge, skills and behavior during encounters [17,18]. The low scores on cultural knowledge of Dutch PA students and alumni may be explained by the fact that cultural competence has only recently been added to the PA program in the Netherlands [13]. The current study showed PAs' selfperceived cultural competence to be moderate, with values being significantly higher in alumni, which could be explained by the higher number of working years in the alumni group. In-practice exposure to a culturally diverse population is essential for development of cultural awareness and culturally competent behavior [21]. Culturally competent consulting behavior was moderate in this sample of Dutch PA students and alumni and health literacy was moderately explored. However, exploration of patients' social context was very low in both groups. Not knowing this information of patients is of concern as low literacy is associated with several adverse health outcomes [22]. In the model of shared-decision making in the intercultural context, professionals should develop skills to recognize possible differences in language, values about health and illness, expectations and prejudices [23]. 
So, although PAs value themselves for being moderate cultural competent, the gap in exploration of the social context of minority patients and recognizing communication limitations may limit patient-centered care [23]. Also, good patient-centered communication requires overcoming language difficulties and PAs should be taught that high-quality care to patients with language difficulties is possible when effort in using interpreters is made [24]. Cultural diversity in the PA curriculum should be taught with appropriate theoretical foundation and context, and show that cultural competence can be a vehicle for improving health care in general [25].The respondents in our study were culturally aware and indicated to be highly interested in cultural competence training, which has previously been reported for PAs [17]. Although physicians may value cross-cultural care as important, behavior in practice shows otherwise: little time to address cultural issues for training, formal Table 3 Educational needs of Dutch PA students and alumni for cultural competencerelated to core competences of the PA master curriculum [20] evaluation or role modeling [26] or existence of subtle biases based on ethnicity [27]. So, the importance of educating physicians to treat patient with equity, not equally, is clear. Intercultural communication should be part of the intercultural training as well and include language differences, differences in perception of illness and disease, social components of health communication, and doctors' and patients' prejudices and assumptions [28]. More specific, the three core communication skills, i.e. listening, exploring and checking, should be part of the medical curriculum [29]. Consulting with a professional interpreter should be practiced, as previously addressed, as well as teaching knowledge on mechanisms relevant to various ethnic groups [29]. Observed consultations of Dutch physicians with non-Dutch patients showed that physicians practice only generic communication skills or some relevant intercultural communication skills and focus mainly on the biomedical aspects [30]. In our study, reflections skills as measured by the GRAS [19] were rated high by all physician assistants and were similar to those in Dutch medical students and residents [18]. The scores reflect well-developed general reflection skills in these professionals, however, no insight in actual reflection on their own prejudices or cultural values is given [18]. In addition, there may be a difference between residents' self-perception and their actual performance, as was previously observed [31]. Paternotte et.al. [32] suggested to train specific skills such as asking about the language proficiency of patients or checking if the proposed treatment plan fits into the cultural habits of the patient. This is in line with the results of our study, as exploring these elements of the social context of minority patients was often omitted. The PA cohort in the present study was homogeneous, of Dutch origin and also had experienced very little cultural diversity during their education regarding peers and educators. Comparable figures have been previously reported in the US [33,34] as well as in the Netherlands [35] and is a topic of concern [36]. For the improvement of cultural competence organizational and structural interventions are necessary in addition to the clinical education initiatives that have been discussed above [7]. 
Increasing diversity in the healthcare profession including the PA workforce will increase the possibility to eliminate health disparities [1] as it is (more) representing the general population [33,37]. An ethnically and culturally diverse student population will improve cross-cultural learning and bring diversity of thought into the classroom [33,38]. Educators should stimulate awareness of personal biases and an open attitude [29]. --- Study limitations The response-rate among PAs was quite low and the questionnaire was completed by 74% (n = 100) of the respondents. Particularly, respondents tended to dropout during completing the GRAS questionnaire. Therefore, the results of the study may be biased representing mainly motivated PAs. To obtain a more representative and larger sample the data should preferably be collected concomitant with education hours. In addition, regression analysis would add significantly to the impact of this paper to control for various demographic variables, particularly with a larger sample, but was beyond the scope of the study. A challenge for future research is standardization of assessing cultural competence. Many different cultural competency assessment tools exist but non being validated in PAs or even in other professions, hampering good comparison between the outcomes of studies performed. As the construct of cultural competence in the other regions such as the US is different compared to the Netherlands, and used for different health professions with different meanings [39] it is difficult to comparing cultural competence of PAs between the Netherlands and the US. Nevertheless, the results of this study can be well compared with those in physicians studies in the Netherlands, as the questionnaire used in this study was originally developed for use in Dutch medical students [18] and slightly adapted for the purpose of this study. Self-perceived cultural competence assessments, used in the majority of studies, are subject to reporting bias and not suitable for rigorous cultural competence measurement and education on a uniform cultural competency construct [12]. So more emphasis should be made on observation methods for cultural competent behavior [40] or objective structured clinical exam [16,41]. In addition, a curriculum-scan such as the Tool for Assessing Cultural Competence Training (TACCT) could provide insight in the content of the curriculum [42] and provide suggestions for improvement. Finally, the term cultural competence is subject to discussion as it includes more than knowledge only. Knowledge is an essential, but not the exclusive, aspect for health care professionals to becoming aware of the patients' culture and of one's own, facilitating patient-centered care. Thus, future studies should focus on cultural humility as well. --- Conclusions This study shows that cultural competence in a cohort of Dutch PA students and alumni was moderate but have insufficient general knowledge on ethnic minority care provision and exploring social context. PAs acknowledge the importance of cultural competence essential for providing quality of healthcare and expressed a need for education in this field. Based on these outcomes the curriculum of the master of science program for physician assistant will be adapted and monitored for efficacy. As cultural competence is variously defined, educated, and measured, uniformity between PA curricula should be encouraged and objective measures developed. 
Simultaneously, emphasis should be made to increase the diversity of PA students to stimulate cross-cultural learning as well as develop a diverse PA workforce. • fast, convenient online submission • thorough peer review by experienced researchers in your field • rapid publication on acceptance • support for research data, including large and complex data types • gold Open Access which fosters wider collaboration and increased citations maximum visibility for your research: over 100M website views per year --- • At BMC, research is always in progress. --- Learn more biomedcentral.com/submissions Ready to submit your research Ready to submit your research ? Choose BMC and benefit from: ? Choose BMC and benefit from: --- Abbreviations --- GRAS Groningen --- Availability of data and materials The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request. --- Declarations Ethics approval and consent to participate All study methods were performed in accordance with the Declaration of Helsinki, the Dutch Code of Conduct of Scientific integrity ('Nederlandse Gedragscode wetenschappelijke integriteit' , 2018), and the local guidelines and regulations. The study was granted an exemption from requiring ethics approval according to the Dutch law ('Wet medisch-wetenschappelijk onderzoek met mensen Wetenschappelijk' (WMO)) by the Ethical Research Committee ('Ethische Commissie Onderzoek' (ECO)) of the HAN University of Applied Science, reference 189.06/20. Study information was provided to the participants in the recruitment e-mail. Participation was voluntary and without any restrictions. It was emphasized that participation and outcomes would not have any influence on their study progress. Responses were collected anonymously, and no personal information was to be collected. Informed Consent of the participants their parents/ legal guardians in case of minors was obtained online prior to the start of the cultural competence questionnaire. --- Consent for publication Not applicable. --- Competing interests The authors declare that they have no competing interests. --- Publisher's Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
The emotional stability and physical health of workers on board aircraft are faced with the factors and conditions that enable professionals to carry out their activities and develop normally, despite the fact that these conditions may present themselves to professionals in adverse conditions [1]. The modern history of aviation with its great technological complexity has pilots as redundant components that integrate embedded controls in modern aircraft. This leads us to say that the value of the worker as a permanent social group in society does not receive, currently, the proper priority. In research on the health of the pilot, there are three major perspectives that have been investigated that influence his stability, as well as the mental and emotional development of the modern airline pilot [2]: The previous life of the individual directly tied to experience, age, genetic and physiological vectors, The social environment, cultural environment and formal education leading to the final result, manifested by the ability, personality, strength and character and The verifiable standards of quality and quantity of life desired, ambition and achievements and its effects.
Introduction The Digital technology advances, has changed the shape and size of instruments used for navigation and communication. This has changed the actions of pilots, especially in relation to emergency procedures. There are few studies that correlate the reduction of accidents with the cognitive and technological changes. The increased cognitive load relates to these changes and requires assessment. The benefits presented by new technologies do not erase the mental models built, with hard work, during times of initial training of the aircraft career pilots in flying schools. The public must be heeded when an aircraft incident or accident becomes part of the news. In search of who or what to blame, the pilot is guilty and immediately appointed as the underlying factors that involve real evidence of the fact they are neglected.The reading of the Black-Boxes notes that 70 % to 80 % of accidents happen due to human error, or to a string of failures that were related to the human factor [3]. We can mention stress and the failure to fully understand the new procedures related to technological innovations linked to automation. Complex automation interfaces always promote a wide difference in philosophy and procedures for implementation of these types of aircraft, including aircraft that are different even manufactured by the same manufacturer. In this case, we frequently can identify inadequate training that contributes to the difficulty in understanding procedures by the crews. Accident investigations concluded that the ideal would be to include, in the pilot training, a psychological stage, giving to him the opportunity of self-knowledge, identifying possible "psychological breakdowns" that his biological machine can present that endangers the safety of flight. Would be given, thus, more humane and scientific support to the crew and to everyone else involved with the aerial activity, minimizing factors that cause incidents and accidents. Accident investigators concluded that the ideal situation for pilot training should include a psychological phase [4], giving him or her, the opportunity of self-knowledge, identifying possible "psychological breakdowns" that biological features can present and can endanger the safety of flight. It should be given, thus, more humane and scientific support to the crew and everyone else involved with the aerial activity, reducing factors that can cause incidents and accidents. Accidents do not just happen. They have complex causes that can take days, weeks or even years to develop [5]. However, when lack of attention and/or neglect take place resulting in a crash, we can be most certain there was a series of interactions between the user and the system that created the conditions for that to happen [6]. We understand that human variability and system failures are an integral part of the main sources of human error, causing incidents and accidents. The great human effort required managing and performing actions with the interface as the task of monitoring, the precision in the application of command and maintaining a permanent mental model consistent with the innovations in automation make it vulnerable to many human situations where errors can occur. The human variability in aviation is a possible component of human error and we can see the consequences of these errors leading to serious damage to aircraft and people. It is not easy, in new aviation, to convey the ability to read the instruments displays. 
This can conduct to the deficiency and the misunderstanding in monitoring and performing control tasks: lack of motivation, the fact that it is stressful and tiring, and generate failures in control (scope, format and activation), poor training and instructions that are wrong or ambiguous. The mind of the pilot is influenced by cognition and communication components during flight, especially if we observe all information processed and are very critical considering that one is constantly getting this information through their instruments. There is information about altitude, speed and position of one's aircraft and the operation of its hydraulic power systems. If any problem occurs, several lights will light up and warning sounds emerge increasing the volume and type of man-machine communication which can diminish the perception of detail in information that must be processed and administered by the pilot. All this information must be processed by one's brain at the same time as it decides the necessary action in a context of very limited time. There is a limit of information that the brain can deal with which is part of natural human limitation. It can lead to the unusual situation in which, although the mind is operating normally, the volume of data makes it operate in overload, which may lead to failures and mistakes if we consider this man as a biological machine [6,7]. Today there are only the pilot and co-pilot in the cockpit and modern automated. Only two men just to control a Boeing 777. This is a large modern aircraft carrying hundreds of passengers and so much faster. Now a days, the tasks of the pilots were multiplied and increased the weights of aircraft, and the number of passengers, speeds takeoffs and landings were more significant, decreasing the number of men in the cockpit. However, the biological machine called human being is not structurally changed in the last thousands of years to support the increased cognitive and emotional overload. How to know your limits? The professional called Mechanics of Flight (the third man in the cockpit), was extinguished when computers arrived. Until the 70 s there was a work station flight engineer. In a modern station with only the pilot and co-pilot, two men just to control a Boeing 777, a huge and modern aircraft carries hundreds of passengers much more quickly. Several procedures were loaded to the pilots that were executed by the Flight Engineer (O terceiro piloto no cockpit -extinto). Several procedures were loaded to the pilots that were executed by the extinguished Flight Engineer (Op.cit). --- Fundamentation The following factors are an integral part of cognitive activity in the pilot: fatigue, body rhythm and rest, sleep and its disorders, the circadian cycle and its changes, the G-force and acceleration of gravity, the physiological demands in high-altitude, night-time takeoffs and the problem of false illusion of climbing. But, other physiological demands are placed by the aviators. It is suggested that specific studies must be made for each type of aircraft and workplace, with the aim of contributing to the reduction of incidents arising from causes so predictable, yet so little studied. We must also give priority to airmen scientists that have produced these studies in physiology and occupational medicine, since the literature is scarce about indicating the need for further work in this direction. Human cognition refers to mental processes involved in thinking and their use. 
It is a multidisciplinary area of interest includes cognitive psychology, psychobiology, philosophy, anthropology, linguistics and artificial intelligence as a means to better understand how people perceive, learn, remember and how people think, because will lead to a much broader understanding of human behavior. Cognition is not presented as an isolated entity, being composed of a number of other components, such as mental imagery, attention, consciousness, perception, memory, language, problem solving, creativity, decision making, reasoning, cognitive changes during development throughout life, human intelligence, artificial intelligence and various other aspects of human thought [8]. The procedures of flying an aircraft involve observation and reaction to events that take place inside the cabin of flight and the environment outside the aircraft [4]. The pilot is required to use information that is perceived in order to take decisions and actions to ensure the safe path of the aircraft all the time. Thus, full use of the cognitive processes becomes dominant so that a pilot can achieve full success with the task of flying the "heavier than air". With the advent of automated inclusion of artifacts in the cabin of flight that assist the pilot in charge of controlling the aircraft, provide a great load of information that must be processed in a very short space of time, when we consider the rapidity with which changes occur, an approach that cover the human being as an individual is strongly need. Rather, the approach should include their cognition in relation to all these artifacts and other workers who share that workspace [9]. The deployment of the accidents are usually generated by bad-planned-tasks. A strong component that creates stress and fatigue of pilots, referred to the design of protection, detection and effective handling of fire coming from electrical short circuit on board, is sometimes encountered as tragically happened on the Swissair Airlines flight 111, near Nova Scotia on September 2, 1998. The staff of the Federal Aviation Administration (FAA), responsible for human factors research and modern automated interfaces, reports a situation exacerbated by the widespread use an electrical product and a potentially dangerous wire on aircrafts, called "Kapton" [4]. If a person has to deal with an outbreak of fire, coming from an electrical source at home, the first thing he would do is disconnect the electrical power switch for the fuses. But this option is not available on aircraft like the Boeing B777 and new Airbus. The aviation industry is not adequately addressing the problem of electrical fire in flight and is trying to deal recklessly [10] The high rate of procedural error associated with cognitive errors, in the automation age, suggests that the projects in aviation have ergonomic flaws. In addiction, is has been related that the current generation of jet transport aircraft, used on airlines, like the Airbus A320, A330, A340, Boeing B777, MD11 and the new A380, that are virtually "not flyable" without electricity. We can mention an older generation, such as the Douglas DC9 and the Boeing 737. Another factor in pushing the pilots that causes emotional fatigue and stress is the reduction of the cockpit crew to just two. The next generation of large transport planes four engines (600 passengers) shows a relatively complex operation and has only two humans in the cockpit. 
The flight operation is performed by these two pilots, including emergency procedures, which should be monitored or re-checked. This is only possible in a three-crew cockpit or cockpit of a very simple operation. According to the FAA, the only cockpit with two pilots that meets these criteria is the cabin of the old DC9-30 and the MD11 series. The current generation of aircraft from Boeing and Airbus do not fit these criteria, particularly with respect to engine fire during the flight and in-flight electrical fire. The science of combining humans with machines requires close attention to the interfaces that will put these components (human-machine) working properly. The deep study of humans shows their ability to instinctively assess and treat a situation in a dynamic scenario. A good ergonomic design project recognizes that humans are fallible and not very suitable for monitoring tasks. A properly designed machine (such as a computer) can be excellent in monitoring tasks. This work of monitoring and the increasing the amount of information invariably creates a cognitive and emotional overload and can result in fatigue and stress. According to a group of ergonomic studies from FAA [11] in the United States this scenario is hardly considered by the management of aviation companies and, more seriously the manufacturers, gradually, introduce further informations on the displays of Glass cockpits. These new projects always determine some physiological, emotional and cognitive impact on the pilots. The accident records of official institutes such as the NTSB (National Transportation Safety Bureau, USA) and CENIPA (Central Research and Prevention of Accidents, Brazil) show that some difficulties in the operation, maintenance or training aircraft, which could affect flight safety are not being rapidly and systematically passed on to crews worldwide. These professionals of aviation may also not be unaware of the particular circumstances involved in relevant accidents and incidents, which makes the dissemination of experiences very precarious. One of the myths about the impact of automation on human performance: "while investment in automation increases, less investment is needed in human skill". In fact, many experiments showed that the progressive automation creates new demands for knowledge, and greater, skills in humans. Investigations of the FAA [11], announced that aviation companies have reported institutional problems existing in the nature and the complexity of automated flight platforms. This results in additional knowledge requirements for pilots on how to work subsystems and automated methods differently. Studies showed the industry of aviation introduced the complexities of automated platforms flight inducing pilots to develop mental models about overly simplified or erroneous system operation. This applies, particularly, on the logic of the transition from manual operation mode to operation in automatic mode. The process of performing normal training teaches only how to control the automated systems in normal but do not teach entirely how to manage different situations that the pilots will eventually be able to find. This is a very serious situation that can proved through many aviation investigation reports that registered the pilots not knowing what to do, after some computers decisions taken, in emergences situations [10]. 
VARIG (Brazilian Air lines), for example, until recently, had no Boeing 777 simulators where pilots could simulate the emergence loss of automated systems what should be done, at list, twice a month, following the example of Singapore Airlines. According to FAA [11], investigations showed incidents where pilots have had trouble to perform, successfully, a particular level of automation. The pilots, in some of these situations, took long delays in trying to accomplish the task through automation, rather than trying to, alternatively, find other means to accomplish their flight management objectives. Under these circumstances, that the new system is more vulnerable to sustaining the performance and the confidence. This is shaking the binomial Human-Automation compounded with a progression of confusion and misunderstanding. The qualification program presumes it is important for crews to be prepared to deal with normal situations, to deal with success and with the probable. The history of aviation shows and teaches that a specific emergency situation, if it has not happen, will certainly happen. The future work makes an assessment in systemic performance on pilots. Evaluating performance errors, and crew training qualifications, procedures, operations, and regulations, allows them to understand the components that contribute to errors. At first sight, the errors of the pilots can easily be identified, and it can be postulated that many of these errors are predictable and are induced by one or more factors related to the project, training, procedures, policies, or the job. The most difficult task is centered on these errors and promoting a corrective action before the occurrence of a potentially dangerous situation. The FAA team, which deals with human factors [12], believes it is necessary to improve the ability of aircraft manufacturers and aviation companies in detecting and eliminating the features of a project, that create predictable errors. The regulations and criteria for approval today do not include the detailed project evaluation from a flight deck in order to contribute in reducing pilot errors and performance problems that lead to human errors and accidents. Neither the appropriate criteria nor the methods or tools exist for designers or for those responsible for regulations to use them to conduct such assessments. Changes must be made in the criteria, standards, methods, processes and tools used in the design and certification. Accidents like the crash of the Airbus A320 of the AirInter (a France aviation company) near Strasbourg provide evidence of deficiencies in the project. This accident highlights the weaknesses in several areas, particularly when the potential for seemingly minor features has a significant role in an accident. In this example, inadvertently setting an improper vertical speed may have been an important factor in the accident because of the similarities in the flight path angle and the vertical speed in the way as are registered in the FCU (Flight Control Unit). This issue was raised during the approval process of certification and it was believed that the warnings of the flight mode and the PFD (Primary Flight Display-display basic flight information) would compensate for any confusion caused by exposure of the FCU, and that pilots would use appropriate procedures to monitor the path of the vertical plane, away from land, and energy state. This assessment was incorrect. 
Under current standards, the cognitive load imposed on pilots, the potential errors it may generate, and their consequences are not evaluated. The FAA therefore seeks to analyze pilot errors as a means of preventively identifying and removing future design errors that lead to problems and their consequences. This posture is essential for future evaluations of aircraft crew workstations. Identifying designs that could lead to pilot error early, during the manufacturing and certification stages, will allow corrective actions at stages where correction or modification is still cost-viable and has a lower impact on the production schedule. Additionally, looking at the human side, this reduces unnecessary loss of life. --- Contextualization On April 26, 1994, an Airbus A300-600 operated by China Airlines crashed at Nagoya, Japan, killing 264 passengers and flightcrew members. Contributing to the accident were conflicting actions taken by the flightcrew and the airplane's autopilot. The crash provided a stark example of how a breakdown in the flightcrew/automation interface can affect flight safety. Although this particular accident involved an A300-600, other accidents, incidents, and safety indicators demonstrate that this problem is not confined to any one airplane type, airplane manufacturer, operator, or geographical region. This point was tragically demonstrated by the crash of a Boeing 757 operated by American Airlines near Cali, Colombia on December 20, 1995, and by a November 12, 1995 incident (very nearly a fatal accident) in which an American Airlines Douglas MD-80 descended below the minimum descent altitude on approach to Bradley International Airport, CT, clipped the tops of trees, and landed short of the runway. As a result of the Nagoya accident, as well as other incidents and accidents that appear to highlight difficulties in flightcrews' interaction with increasing flight deck automation, the Federal Aviation Administration's (FAA) Transport Airplane Directorate, under the approval of the Director, Aircraft Certification Service, launched a study to evaluate the flightcrew/flight deck automation interfaces of current generation transport category airplanes. The following airplane types were included in the evaluation: Boeing: Models 737/757/767/747-400/777; Airbus: Models A300-600/A310/A320/A330/A340; McDonnell Douglas: Models MD-80/MD-90/MD-11; Fokker: Model F28-0100/-0070 [5]. The FAA chartered a Human Factors Team to address these human factors issues, with representatives from the FAA Aircraft Certification and Flight Standards Services, the National Aeronautics and Space Administration, and the Joint Aviation Authorities (JAA), assisted by technical advisors from the Ohio State University, the University of Illinois, and the University of Texas. The Human Factors Team [11] was asked to identify specific or generic problems in design, training, flightcrew qualifications, and operations, and to recommend appropriate means to address these problems. In addition, the Team was specifically directed to identify those concerns that should be the subject of new or revised Federal Aviation Regulations (FAR), Advisory Circulars (AC), or policies. The Team relied on readily available information sources, including accident/incident reports, Aviation Safety Reporting System reports, research reports, and trade and scientific journals.
In addition, meetings were held with operators, manufacturers, pilots' associations, researchers, and industry organizations to solicit their input. Additional inputs were received from various individuals and organizations interested in the Team's efforts [11]. When examining the evidence, the Human Factors Team found that traditional methods of assessing safety are often insufficient to pinpoint vulnerabilities that may lead to an accident. Consequently, the Team examined accident precursors, such as incidents, errors, and difficulties encountered in operations and training. The Team also examined research studies that were intended to identify issues and improve understanding of difficulties with flightcrew/automation interaction. In examining flightcrew error, the Team recognized that it was necessary to look beyond the label of flightcrew error to understand why the errors occurred [10]. We looked for contributing factors from design, training and flightcrew qualification, operations, and regulatory processes. While the Team was chartered primarily to examine the flightcrew interface to the flight deck systems, we quickly recognized that considering only the interface would be insufficient to address all of the relevant safety concerns. Therefore, we considered issues more broadly, including issues concerning the functionality of the underlying systems. From the evidence, the Team identified issues that show vulnerabilities in flightcrew management of automation and situation awareness, including concerns about: • Pilot understanding of the automation's capabilities, limitations, modes, and operating principles and techniques. The Team frequently heard about automation "surprises," where the automation behaved in ways the flightcrew did not expect. "Why did it do that?" "What is it doing now?" and "What will it do next?" were common questions expressed by flightcrews from operational experience. • Differing pilot decisions about the appropriate automation level to use, or about whether to turn the automation on or off, when they get into unusual or non-normal situations (e.g., the attempted engagement of the autopilot in the moments preceding the A310 crash at Bucharest). This may also lead to potential mismatches with the manufacturers' assumptions about how the flightcrew will use the automation. Flightcrew situation awareness issues included vulnerabilities in, for example: • Automation/mode awareness. This was an area where we heard a universal message of concern about each of the aircraft in our charter. • Flight path awareness, including insufficient terrain awareness (sometimes involving loss of control or controlled flight into terrain) and energy awareness (especially low energy states). These vulnerabilities appear to exist to varying degrees across the current fleet of transport category airplanes in our study, regardless of the manufacturer, the operator, or whether accidents have occurred in a particular airplane type. Although the Team found specific issues associated with particular design, operating, and training philosophies, we consider the generic issues and vulnerabilities to be a larger threat to safety, and the most important and most difficult to address. It is this larger pattern that serves as a barrier to needed improvements to the current level of safety, or that could threaten the current safety record in the future aviation environment.
It is this larger pattern that needs to be characterized, understood, and addressed. In trying to understand this larger pattern, the Team considered it important to examine why these vulnerabilities exist [4]. The Team concluded that the vulnerabilities exist because of a number of interrelated deficiencies in the current aviation system: • Insufficient communication and coordination. Examples include lack of communication about in-service experience within and between organizations; incompatibilities between the air traffic system and airplane capabilities; poor interfaces between organizations; and lack of coordination of research needs and results between the research community, designers, regulators, and operators. • Processes used for design, training, and regulatory functions inadequately address human performance issues. As a result, users can be surprised by subtle behavior or overwhelmed by the complexity embedded in current systems operated within the current operating environment. Process improvements are needed to provide the framework for consistent application of principles and methods for eliminating vulnerabilities in design, training, and operations. • Insufficient criteria, methods, and tools for design, training, and evaluation. Existing methods, data, and tools are inadequate to evaluate and resolve many of the important human performance issues. It is relatively easy to get agreement that automation should be human-centered, or that potentially hazardous situations should be avoided; it is much more difficult to get agreement on how to achieve these objectives. • Insufficient knowledge and skills. Designers, pilots, operators, regulators, and researchers do not always possess adequate knowledge and skills in certain areas related to human performance. It is of great concern to this Team that investments in necessary levels of human expertise are being reduced in response to economic pressures when two-thirds to three-quarters of all accidents have flightcrew error cited as a major factor. • Insufficient understanding and consideration of cultural differences in design, training, operations, and evaluation. The aviation community has an inadequate understanding of the influence of culture and language on flightcrew/automation interaction. Cultural differences may reflect differences in the country of origin, philosophy of regulators, organizational philosophy, or other factors. There is a need to improve the aviation community's understanding and consideration of the implications of cultural influences on human performance. --- Conclusion A few decades ago, in my early life entering the airlines, we were taught to fly the automatic control in the throttle quadrant (TQ) course, with SOPs (Standard Operating Procedures) attached. The line operations were refined during line training. The initial emphasis was on knowing how the new automated system worked and how to fly it. Line training refined these skills and expanded how to operate it within the airways system and a multitude of busy airports and small visual airfields. Understanding the complexities of the systems came with our 'apprenticeship', which had just started. When automation became readily available, we used it to reduce workload when we felt like it. We did not really trust it, but we used it knowing we could easily disconnect it when it did not do what we wanted. Now some airlines want everything done on autopilot because it can fly better than any pilot.
Airlines hire young pilots with little experience, and they are shown that hand flying is no longer needed because of automation. Labor is cheap. While preparing our thesis, we developed a study focusing on the culpability of pilots in accidents. In fact, the official records of aircraft accidents cite the participation of pilots as a large contributing factor in these events. Modifying this scenario is very difficult in the short term, but the results of our study, which identify the root causes of human participation, show the possibility of changing this situation. The cognitive factor figures prominently in the origins of the problems (42% of all accidents found in our search). If we consider other factors, such as lack of usability applied to the ergonomics of products, the choice of inappropriate materials, and poor design, for example, this percentage is even higher. Time is a factor to consider. This generates a substantial change in the statistical findings on contributing factors and culpability in accidents. The last consideration in this process is that, however relevant and true the findings, they must sooner or later lead to visible solutions. In aviation, such processes advance very slowly, because everything is widely tested and involves many people and institutions. The criteria adopted by the official organizations responsible for investigating aviation accidents do not provide alternatives that allow a clearer view of the problems that are a consequence of cognitive issues or of other problems originating in ergonomic factors. We must also consider that some of these criteria obscure the possibility that the pilot was powerless to act in certain circumstances. The immediate result is a streamlining of culpability in the accident, which invariably falls on the human factor as a single cause or a contributing factor. Many errors are classified only as "pilot incapacitation" or "navigational error". Our research shows that there is a misunderstanding and a need to distinguish between disability and pilot incapacitation (due to inadequate training), or even navigational error. Our thesis produced a comprehensive list of accidents and a database that allows extracting the ergonomic, systemic, and emotional factors that contribute to aircraft accidents. These records are not pre-correlated and do not fall into fixed stereotypes or patterns; the patterns are structured by the system itself as the accident records are entered. We developed a computer system to manage this database, called the Aviation Accident Database. The data collected for implementing the database came from the main international entities for the registration and prevention of aircraft accidents, such as the NTSB (USA), CAA (Canada), ZAA (New Zealand), and CENIPA (Brazil). The system analyzes each accident and determines the direction and convergence of its focal group, assigning it instantly to an existing pattern if the matching conditions already exist prior to grouping. Otherwise, the system starts formatting a new accident profile. This feature allows the system to determine a second type of group, reporting details of the accident that could help point to evidence of the origin of the errors, especially for those accidents related to a cognitive vector. Our study showed different scenarios when accidents are correlated with multiple variables. This possibility is, of course, due to the capability of the Aviation DataBase System [6,7], which allows this type of analysis.
It is necessary to identify accurately the problems or errors that make it impossible for pilots to act properly. These problems could eventually point to a temporary incapacity of the pilot due to limited capability or to training inadequate for the aircraft's automation. We must also consider many other reasons that can mitigate the effective participation or culpability of the pilot. Addressing these problems from a systemic view expands the frontiers of research on, and prevention of, aircraft accidents. This system has the purpose of correlating a large number of variables. In this case, the data collected converge on the causal factors of accidents involving aircraft and so can greatly aid scientific cognitive studies or training applications in aviation schools or even in aviation companies. This large database could be used in the prevention of aircraft accidents, allowing other conclusions to be reached that would result in equally important ways to improve air safety and save lives.
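To make the grouping logic described above concrete, the following is a minimal sketch, assuming each accident is encoded as a set of contributing-factor tags and that assignment to a group uses a simple similarity threshold. The tag names, the threshold value, and the choice of Jaccard similarity are all our illustrative assumptions, not the thesis's actual implementation.

```python
# Hypothetical sketch of the profile-assignment logic described above:
# each accident is encoded as a set of contributing-factor tags, matched
# against existing profiles, and a new profile is opened when no match
# exists. All names and values here are illustrative assumptions.

def jaccard(a: set, b: set) -> float:
    """Similarity between two tag sets (1.0 = identical, 0.0 = disjoint)."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

class AccidentDatabase:
    def __init__(self, match_threshold: float = 0.6):
        self.profiles: list[set] = []      # representative tag set per group
        self.groups: list[list[set]] = []  # accidents assigned to each group
        self.match_threshold = match_threshold

    def add_accident(self, tags: set) -> int:
        """Assign an accident to the closest existing profile, or start a
        new profile when no existing one matches well enough."""
        best_idx, best_sim = None, 0.0
        for idx, profile in enumerate(self.profiles):
            sim = jaccard(tags, profile)
            if sim > best_sim:
                best_idx, best_sim = idx, sim
        if best_idx is not None and best_sim >= self.match_threshold:
            self.groups[best_idx].append(tags)
            return best_idx
        self.profiles.append(set(tags))    # format a new accident profile
        self.groups.append([tags])
        return len(self.profiles) - 1

db = AccidentDatabase()
db.add_accident({"cognitive", "mode_confusion", "training"})
db.add_accident({"cognitive", "mode_confusion"})        # joins the first group
db.add_accident({"structural_failure", "maintenance"})  # opens a new profile
```

The key design point, as the text describes it, is that patterns are not predefined: a profile exists only because earlier accident records created it, and accidents with a cognitive vector can later be retrieved by querying the groups whose profiles contain such tags.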
The article analyzes the possibility of transitioning a general education school to an online learning format. The indicators are categories such as educational activity, quality of education, intensity of education, and learning motives. To achieve this goal, a sociological survey of high school students in Rostov-on-Don (Russia), as well as of their parents and school teachers, was conducted. The last two groups of respondents act in the status of experts. Based on the analysis of the empirical data, the following conclusions were made. More than half of all high school students surveyed (66.6%) expressed their intention to continue the online learning experience they received during the response to the COVID-19 epidemic. However, cross-tabulations of willingness to study online with the categories "learning motives", "quality of education", and "intensity of education" showed that the high motivation for the learning process declared by high school students does not correspond to their real behavior in distance lessons. The main motive for choosing online education among high school students is the convenience of this format of education. The survey showed a low degree of significance for other reasons for choosing online education. Preferences for online convenience and the desire to learn asynchronously reflect the unmanifested goal of getting out of the teacher's control in order to reduce one's educational activity. It can be assumed that this is due to the social immaturity of high school students and the failure of most of them to understand the value of secondary education. Based on the analyzed data, three approximately equal groups of respondents were identified. In the first group, high school students are focused on the standard class-lesson system with elements of e-learning (40% of respondents). The second group articulates the advantages of online learning, which are associated with convenience and greater resource potential compared to classical learning (35% of respondents). The third group represents the interests of high school students who are interested not so much in the format of education as in the opportunity to get out of the teacher's control and find themselves in a convenient educational environment in which to simulate learning activity (25% of respondents). This means that the online learning format, whose usefulness is obvious only if students have stable cognitive activity, is unacceptable for most high school students.
Introduction Online learning has attracted close interest from scientists for the last two decades. This interest is driven by the new informational and technical possibilities for organizing educational work in a remote format. Modern technologies provide instantaneous transmission of educational information over a distance and maintain synchronous audiovisual contact between teacher and student. Thanks to these new technological opportunities, a teacher in the classroom is no longer considered by a certain part of society as something mandatory for receiving a quality education. This part of society presents online learning as a modern educational model that fully meets the requirements and demands of the time. This position is reflected in scientific discourse (Golovanova, 2019; Grechushkina, 2021; Smirnova, 2019). However, many scientists support an alternative point of view, according to which online learning is seen as a threat to a high-quality, intensive educational process (Ivanova and Murugova, 2020; Kuznetsov, 2020; Kovalev and Latsveeva, 2021). It would be wrong to reduce the problem of the relevance of online learning in high school only to the technological aspect. The consciousness of children is not a computer program that can be filled with all the information necessary for life through technical communication channels. The main question is whether Russian schoolchildren have cognitive activity sufficient for the emergence of intrinsic motivation for learning activities. Depending on the answer to it, scientists have determined the prospects for the development of e-learning in high school. In essence, this means that it was through the fact of acknowledging or denying motivation that the very possibility of transferring schoolchildren to the digital learning format was assessed. When answering this question, experts divide into two large groups: the first unites scientists who support the assertion that schoolchildren are intellectually and socially ready to switch to online lessons; in their opinion, the majority of students will not lose motivation (Said, 2018; Solovieva and Semenova, 2020; Razmacheva, 2021; Kozitsyna, 2021). A different position is held by those who do not see high school students as having sufficient cognitive activity to switch to remote mode (Markeeva, 2020; Bakaeva, 2016; Nishanbaeva, 2021). The second point of view currently prevails in the scientific community. The study of the established scientific discourse, however, showed that Russian sociologists had not conducted empirical sociological research on the motivation, intensity, and quality of education of high school students. The present work aims to close this gap. --- Materials and Methods The theoretical and methodological foundations of the study are based on the approach developed in the course of the joint scientific work of V.I. Chuprov and Yu.A. Zubok (Zubok and Chuprov, 2020; Chuprov and Zubok, 2008). The authors call it a polyparadigm approach, uniting the most interesting results achieved in the previous scientific tradition of youth studies. In the framework of the scientific activities of these Russian sociologists, the most significant features forming the sociological definition of youth as an age group were integrated.
"The variety of these features determines the complex internal structure of youth, its differentiation and differences, in which its essential properties are revealed. These are the transition of social status, lability, extremeness, transgressiveness of consciousness, increasing globalization and new forms of standardization" (Zubok and Chuprov, 2017). These characteristics have been detailed in the theoretical works of researchers. In them, young people are characterized by social instability, change of interests, mobile shift of value accentuations in the hierarchy of their own needs. The authors argue that the personal properties of this age group are extremely variable and this inevitably affects the properties of social interactions of young people. It is difficult for young people to fix their interest on one concrete thing, they need a change of impressions, a constant feeling of novelty. It is necessary to say a few words about the author's categorical apparatus. It was developed in such a way that questions of the questionnaire together constituted a variable image of cognitive activity of high school students. The basic concepts differentiated in the questionnaire include: motives, needs, personality traits of a high school student, motivations for action, educational goals. The empirical base of the study was formed on the basis of a mass sociological survey conducted by the authors in January-February 2022. Respondents who received personal experience of learning activities in online learning were interviewed. The study involved 860 high school students, 1246 parents and 636 teachers living in Rostov-on-Don. To increase the representativeness of the method used, the sample included the parents of the surveyed high school students and those teachers who had experience of teaching high school students online. Data processing was carried out in the SPSS-22 program. Dyatlov, V. A., Kovalev, V. V, & Chilingarova, N. D. (2023) --- Results and Discussions First, we evaluate individual feelings from the learning experience (work, control of high school students) during the period of self-isolation and social distancing. We will be interested in attitudes towards compulsory online learning. Table 1 Distribution of answers to the question: "What are your individual feelings from the experience of distance learning (working at school)?", % The results were partly expected, but quite revealing. More than half of high school students would not mind going back to remote mode. But, importantly, the vast majority of respondents who made this choice pointed to the convenience of online learning, rather than its ability to provide quality education. Meanwhile, for this variable, the respondents could choose two options. However, convenience was chosen by 50% of high school students, and only 16.6% expressed confidence that online learning gives a quality result. We cannot say that high school students are not interested in quality issues, because 33.8% chose the option "I am against online -quality of education is declining." Even more significant results were shown by teachers (64.3%) and parents (49.3%), denying the ability of online learning to ensure the quality of education. The largest scale of rejection of online learning is shown by parents. Among them, only 7.4% chose the option "I would like to continue -it is convenient for me" (teachers -22.9%). Supporters of online learning prioritize the convenience of study (work) when choosing this form of education. 
For the time being, we cautiously assume that this is so even to the detriment of quality; so far, however, this thesis has no valid evidence. In addition to quality, the ability to maintain a high intensity of learning activity in online learning should be recognized as significant. Intensity is understood as the ability to perform a certain volume of educational tasks per unit of time, or the ability to engage in educational activity for a certain period of time, or both. --- Table 2 Distribution of answers to the question: "Did your intensity of learning activity decrease in conditions of online learning?", % There is one important aspect of this variable that needs attention. From Table 1 it follows that more than half of the high school students are set on continuing their studies online. Meanwhile, only 32.2% of them are convinced that the intensity of their educational activity increased. It can be assumed with some degree of confidence that this group contains a large stratum of respondents who perceive online learning as a way to reduce the intensity of learning activity. Cross-tabulating with the question on evaluating one's attitude to online learning will help us understand the potential size of the group of high school students for whom the rejection of classroom-contact learning can be interpreted as cost minimization. --- Table 3 Conjugation of answers to the questions: "What are your individual feelings from the experience of distance learning (working at school)?" and "Did your intensity of learning activity decrease in conditions of online learning?", % Some explanation of Table 3: the rows contain three groups in which the respondents assess the intensity of their learning activity in the online learning mode; frequency distributions are given in brackets. The columns indicate the characteristics of attitudes towards one's own experience of distance learning. The measured variables are recorded in the columns, because we want to find out how many learning tasks are performed by high school students in each of the four groups of schoolchildren who assessed their learning experience in online learning. The cross-tabulation results show the reliability of the data obtained. Those who negatively assessed their experience of online learning are, for the most part, confident of its low intensity. For example, in the group "I am against online - the quality of education is declining", 56.0% indicated a decrease in the intensity of study (36.1% found it difficult to draw conclusions). There is also a noticeably strong change relative to the average distributions: in the group of high school students who oppose online because of the deterioration in the quality of education, the option "I solved more educational tasks" was chosen by 4.9% (32.2% for the total sample); and in the group of high school students who do not like to study online, by 10.9% (32.2% for the total sample). The distributions are different among respondents who rated the online experience positively because of its convenience. In this group, the attitude to the intensity of study in the conditions of online learning does not differ much from the average values for the entire sample. Here, 39.3% of respondents solved more educational tasks (32.2% in the total sample). The increase in the number of high school students who successfully solved a larger volume of educational tasks is, as we see, insignificant.
But this is not even the most significant point. As can easily be seen at the intersection of the first column and the second row, 26% of the high school students in this group (online is convenient) chose the answer option "I solved a smaller volume of educational tasks due to overwork, reduced control by the teacher, and the loss of the opportunity to receive the teacher's explanation in time". Moreover, the downward difference from the total sample is not significant (26% < 30.9%). It is in this segment that the bulk of high school students should be sought for whom online is just a way to reduce the intensity of learning. Equally indicative are the results in the second column group, "I would like to continue - it improves the quality of education" (16.6%). Here, deviations from the average frequency indicators are very significant. In this group, 59.3% of respondents are convinced that they solve a larger volume of educational tasks (32.2% in the total sample). The decrease in intensity is no longer as noticeable as in the group where online learning was chosen for convenience: only 15% indicated solving a smaller volume of tasks (30.9% in the total sample). These 15% of high school students also potentially add to the cohort of those who aim to reduce the intensity of their studies. Having examined the data obtained and established their reliability, we then calculate the approximate number of high school students for whom the rejection of classical education can be interpreted as a conscious decrease in the intensity of educational work. This is quite simple to do; the action is performed in two stages, separately for the groups "online is convenient" and "online is qualitative" (the arithmetic is also illustrated in the code sketch following this analysis). The first group ("online is convenient") makes up 50% of the entire sample (Table 1). In it, 26% of respondents indicated a decrease in the volume of tasks solved, and 34.7% found it difficult to answer. Both of these figures should be halved to express them as shares of the total sample. Consequently, 13% of all high school students surveyed chose online only for the sake of convenience, and 17.3% presumably did so as well, at least in some part of their learning activity. The second group ("online is qualitative") makes up 16.6% of the entire sample (Table 1). Here, 15% of respondents indicated a decrease in the volume of tasks solved and 25.7% found it difficult to answer; relative to the general sample, 2.5% and 4.7%. Summing the results for both groups, it turns out that 15.5% (13% + 2.5%) of respondents chose online learning even in the face of a reflexive decrease in the volume of tasks solved, and 22% (17.3% + 4.7%) under conditions of a non-reflexive decrease. Thus, their choice of online education is not associated with success in solving educational tasks. That is, convenience is a value in itself, taking priority over intensity. These figures are very symptomatic and indicate a high percentage of respondents (more than a third of those surveyed) for whom intensity in studying is not considered significant in their choice and requires external motivation. Returning to the data in Table 2, we note that more than a third of the surveyed high school students (36.9%) found it difficult to assess the intensity of their educational activity.
The reasons for this may vary, but it is important for us to understand that more than a third of the respondents demonstrate that they lack proper indicators for evaluating the effectiveness of online learning. This matters from the point of view of learning motivation, because in this group motives lose their connection with the results of education. This group can be defined as not indifferent to learning results, but lacking the ability to objectively assess the intensity of its own learning activity. The teachers were extremely critical. Only 7.1% supported the position that the intensity of learning activity grows in the context of online learning (a decrease was noted by 41.9%). It is hardly possible to question their expertise on this issue. Let us allow that some teachers themselves work more effectively in the online format, and that this is inevitably reflected in greater intensity of schoolchildren's involvement in the educational process. At the same time, more than half of the teachers (50.9%) felt that the intensity of online learning "depends on the individual characteristics of children". Such a high percentage of answers for this position suggests that the effectiveness of learning activity does increase for some students in online learning. We did not ascertain the parents' point of view, because we considered that they lacked expert competence on this issue. Next, we connect the issues of quality and intensity of educational activity with motivation itself. To begin with, we present the results of the simplest (frequency) measurement of the state of motivation in online learning. --- Table 4 Distribution of answers to the question: "Have you noticed in yourself (in high school students) a decrease in motivation for learning when switching to distance learning?", % It is easy to see that the positions of high school students and teachers diverge quite strongly. The most important difference stems from the assessment of the relationship between convenience and motivation. 46.4% of the surveyed high school students are convinced that their motivation grew or remained at its previous level, because "homeschooling is convenient". Only 5.6% of the surveyed teachers agreed with this judgment. This range of opinions can partly be explained by the fact that the students spoke about themselves, while the teachers gave an expert assessment, to whom the answer "some definitely pretend to study" looked much more plausible. Understanding the risks of underrepresentation, we allowed teachers to select two answer options for this question. However, this did not increase the popularity of the first two options: teachers did not see a significant relationship between convenience and increased motivation. The positions of the two groups of respondents are equally divergent at the opposite pole, where a decrease in motivation in the conditions of online learning is claimed (fourth row of Table 4). Only 10.1% of the surveyed high school students considered that "learning at home is almost impossible". Among teachers, such a categorical position was supported by 39.3% of respondents, who chose the option "learning at home leads to partial imitation of learning due to a sharp decrease in quality".
Aggregated across the two groups, the least chosen options were "online is very productive" (10.8%) and "high school students in any form of education understand well why they need knowledge" (11.4%). High school students and teachers were given markedly different formulations: pupils were asked about the quality of online learning, teachers about the presence in high school students of certain personality traits with which society associates the onset of social maturity. In effect, teachers spoke about the lack of social maturity among high school students, and schoolchildren dismissed the category of "learning productivity" as significant for themselves. From this position, let us turn to a meaningful description of the relationship between motivation and the assessment of the intensity of learning activity in online learning. We have already measured the linear distribution of the intensity of online learning (Table 2). Recall that among high school students, three groups of approximately equal size were distinguished: one part of the respondents (32.2%) showed confidence in the growth of intensity in online learning, the second indicated its decrease (30.9%), and the third found it difficult to choose an answer (36.9%). Motivation was measured separately from intensity, and slightly more than half of the total number of students surveyed noted that it did not decrease online (Table 4). Now we need to use the "intensity" criterion to characterize the degree of involvement in educational activity of both motivated and demotivated high school students. This will help test the validity of judgments about their high motivation in online learning environments. This must be done for two reasons. First, any judgments of schoolchildren should be subjected to critical evaluation. Secondly, the opinions of high school students and teachers are diametrically opposed. As a test criterion, the questionnaire included the question "How often did you do extraneous activities during distance lessons?". The proposed answer options for this criterion act as the given values, while the answers to the question about motivation have the status of the measured variables. --- Table 5 Conjugation of answers to the questions: "Have you noticed a decrease in motivation for learning when switching to distance learning?" and "How often did you do extraneous activities during distance lessons?", % In Table 5 we have four groups of respondents. The first consists of those who find it convenient to study online; the second unites those who chose the option "learning online is very productive"; the third is formed from respondents who considered that "studying at home is difficult"; the fourth was formed through the choice of the judgment "learning at home is almost impossible". The first two groups can be considered motivated with respect to online learning, the third partially motivated, and the fourth demotivated. The data are contained in four columns. The given values for them are judgments that characterize the intensity of educational activity and the degree of readiness of schoolchildren to be included in the educational process online. The purpose of this conjugation model is to check the reliability of the answers of high school students in Table 4, which shows that 57.2% (46.4% + 10.8%) of respondents (the first two, motivated, groups) stated that their motivation in online learning does not decrease.
As a hypothesis, it would be reasonable to assume that the intensity of education in the two motivated groups will be noticeably higher than in the partially motivated or demotivated groups. In addition, the two motivated groups should be expected to choose the given value "I was not distracted at all, I was constantly included in the educational process", if not 100% of the time, then at least within the limits of 60-70%. However, this hypothesis was only partially confirmed. Indeed, in the two (self-presented) motivated groups, a significantly smaller share of respondents was distracted from the educational process (first row of Table 5). But this is not even half of the membership of the first and second groups. In the first group, 33.1% chose the option "It is hard to keep attention, but I tried", and in the second group even more, 43.5%. 18.4% in the first group and 16.3% in the second chose "About half the time I switched to my own affairs", and 9.1% and 7.6%, respectively, even chose "Most of the time minding my own business". Using the same scheme by which the data in Table 3 were calculated, let us calculate the percentage of high school students in the total sample who spent half or more of their time on extraneous activities during distance lessons. They amount to 15% across the two motivated groups. To these can be added 20.2% of those who had difficulty concentrating in online classes. We stress that these percentages are not within the group of motivated high school students but refer to the general sample; the intragroup distributions can be seen in Table 5. These figures show that the high level of motivation declared by high school students who stated they have no problems with motivation in the online learning format does not correspond to the real state of affairs. Obviously, this means that in the personality structure of high school students there are social qualities that do not contribute to maintaining motivation at a high level in the process of online learning. We use one more criterion to test the subjectively presented ability to sustain intrinsic motivation in the conditions of online learning: "What online learning format do you prefer to study in?". Respondents were asked to choose among three given values: asynchronous, synchronous, and regular school. Objectively, only highly motivated people can learn effectively in asynchronous mode, because in it there are a priori no external impulses to motivation. For synchronous online and standard school education, it is unreasonable to draw unambiguous conclusions about motivation on the proposed grounds. Therefore, unconditional willingness to go fully remote can be seen only among motivated supporters of online learning. --- Table 6 Conjugation of answers to the questions: "Have you noticed a decrease in motivation for learning when switching to distance learning?" and "What online learning format do you prefer to study in?", % The data presented in Table 6 give us the opportunity to clarify two points. First, to find out whether all high school students who find the online format convenient or productive want to abandon standard school education. Secondly, whether awareness of a decrease in motivation in the conditions of online learning leads to a negative attitude towards the synchronous or asynchronous format of educational work. We answer the first question first.
Recall that the group of high school students whose motivation does not decrease in the conditions of online learning is divided into two subgroups: in one, motivation does not decrease because online is recognized as convenient; in the second, because of its productivity. It should be recognized that in the first subgroup (online is convenient), according to the cross-tabulation results, 23.1% chose the answer option "It is better to study in a regular class at school", and in the second subgroup (online is productive), 43.5%, that is, almost half. Such distributions show, at the least, that the intention to continue the online learning experience, stated by 66.6% of the surveyed high school students (Table 1), does not at all mean abandonment of the classroom system. This aspect of the analysis gives us additional grounds to conclude that the first motivated group is extremely heterogeneous. This was already noticeable from the data in Tables 4 and 5. In this group, there is a fairly high percentage of those who are indifferent to both the intensity and the quality of training. Some details on these aspects come from pairing the question on motivation with the question on preferences for the format of training. Thus, an indicative downward trend is visible within the "motivated for online learning" group: of those who associate motivation with convenience, only 23.1% choose to study at a regular school; where staying motivated online is associated with the category of "productivity", 43.5% of high school students would prefer to study in a regular school. We also consider it no coincidence that asynchronous learning is most often chosen by those schoolchildren who are primarily focused on the convenience of the online mode (37.2%). Those who have not lost motivation in the online format but who associate it not with convenience but with the growth of their own learning productivity choose asynchronous learning much less often (19.5%). It can be assumed that such pronounced preferences for online convenience and the desire to learn asynchronously in fact reflect a goal that is not openly manifested: to get out of the teacher's control in order to reduce one's educational activity. With regard to the group where motivation in the conditions of online learning is lost or partially lost (the third and fourth columns in Table 6), everything is very clear. Those who have partially or completely lost motivation are set on studying in a regular class (76.0% in the subgroup "studying at home is difficult" and 87.7% in the subgroup "learning at home is almost impossible"). However, in these two subgroups there is a small share of high school students who, even having lost motivation online, choose synchronous or asynchronous learning. Our calculations showed that they constitute 9.1% of the total sample. With a high degree of probability, we can assume that these are schoolchildren who are interested in neither the quality nor the intensity of learning activity.
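As referenced above, the within-group percentages reported in Tables 3, 5, and 6 are repeatedly converted into shares of the total sample by weighting them by group size. The following is a minimal sketch of that arithmetic; the helper function and its name are our own illustration, not the authors' SPSS procedure, and the figures are those quoted in the text.

```python
# Illustrative re-computation of the share-of-total-sample figures used in
# the analysis above. Group sizes and within-group percentages come from
# the text; the helper function is a hypothetical name of our own.

def share_of_total(group_share_pct: float, within_group_pct: float) -> float:
    """Convert a within-group percentage into a percentage of the whole sample."""
    return group_share_pct / 100.0 * within_group_pct

# "online is convenient" group: 50% of the sample (Table 1)
convenient_fewer_tasks = share_of_total(50.0, 26.0)   # 13.0% of the total sample

# "online is qualitative" group: 16.6% of the sample (Table 1)
quality_fewer_tasks = share_of_total(16.6, 15.0)      # about 2.5% of the total sample

# Reflexive reduction in solved tasks across both pro-online groups:
print(round(convenient_fewer_tasks + quality_fewer_tasks, 1))  # -> 15.5, as in the text
```

The same weighting yields the other aggregate figures in the analysis, such as the 9.1% of the total sample who lost motivation online yet still prefer a distance format.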
A significant result of the survey was the identification of three groups of high school students: 1) those whose performance declines in online learning and who oppose this format; 2) those whose performance declines in online learning and who nevertheless advocate this format; 3) supporters of this format whose performance in online learning increases. In terms of size, the groups are approximately equal, with some preponderance towards the first of the three. But these are the schoolchildren's own representations. Let us check them against the expert judgments of teachers and parents. --- Table 7 Distribution of answers to the question: "Are high school students divided into those who are better off studying remotely and those who are better off studying in the classroom?", % Teachers, in general, confirmed the presence of the three groups listed above, but the quantitative distributions turned out to be quite different. Thus, only 5.9% of the surveyed teachers indicated that "all children show the best results in online learning". At the same time, 43.1% of teachers confirmed that among high school students there are children (the disabled, homebodies, the shy, the highly motivated) who are better off learning in electronic format. And finally, the majority (51%) agreed with the opinion that the "transition to online had a negative impact on the quality of education for all high school students". The teachers' opinions are also confirmed by the consolidated point of view of the parents. --- Table 8 Distribution of answers to the question: "When the school worked remotely, did you have conflicts with your child because of study issues?", % Almost half of the parents indicated that they had conflicts with their children caused by an insufficient level of motivation to study online. In fact, there could be more of them, since we must take into account that not all parents have the opportunity to monitor their children, owing to professional employment. The percentage obtained is only a little higher than what the high school students themselves reported about the decrease in their level of motivation during distance learning. And finally, the last point. In the emerging discourse, many scientists, teachers, and parents share the opinion that online learning is a modern model of education; some suggest transferring school education to electronic format right now. Such projects were actively discussed during the pandemic in the context of the initiatives of the head of Sberbank, Herman Gref (School, 2019). After the end of the pandemic, the Internet was filled with advertising from various kinds of online schools that operate as an alternative to the regular high school. The main focus of the advertising content is a description of the advantages of online education compared to studying in a regular school. This advertising, and the very functioning of the institution of alternative online learning, became possible due to the presence in Russia's Law on Education of a legal norm that gives parents the right to transfer their child to family education (clause 3, part 1, article 17). This construction is used by entrepreneurs in the educational services market to advertise their activities without bearing any responsibility for the quality of education, because responsibility shifts entirely to the child's parents.
In this regard, we decided to make the problem concrete and personal by formulating the variable in such a way that the respondents (teachers and parents) answered a question about their readiness to transfer not some abstract (other people's) children, but their own children, to online learning. --- Table 9 Distribution of answers to the question: "Would you choose online education for your child instead of a common school?", % The results obtained show that our experts reject online learning uncompromisingly. The percentage of teachers and parents who are ready to choose online education for their high school students instead of a general education school turned out to be negligible. The experts considered that in the context of distance learning, schoolchildren do not develop intrinsic motivation at the required level, and the intensity of education and the results of educational activity themselves do not meet the established standards. More or less significant interest in online learning is associated only with its implementation in the status of additional education. The data obtained could have been even worse for the supporters of online education had some of the answers not been drawn to the option "I would choose it as additional education", since respondents were given the opportunity to make only two choices among the given values. --- Conclusions The results obtained can be structured according to three main positions: the attitude to online learning, the perception of one's own motivation in the conditions of online learning, and the expert evaluation of high school students' online learning work by teachers and parents. Attitude towards online learning. The survey showed that the majority of schoolchildren (66.6%) are prepared to repeat the experience of online learning. However, of this number, 50.0% of respondents perceive online as a form of education that is convenient for them, and only 16.6% associate it with an opportunity to improve the quality of education. As for the ability of the online format to support the intensity of learning activity, in the process of analyzing the empirical data three approximately equivalent groups were identified: 32.2% solved more learning tasks, 30.9% of respondents managed to complete a smaller volume, and 36.9% found it difficult to answer. The large number of respondents who found it difficult to choose indicates the inability of this part of high school students to recognize the intensity of their own educational work in online learning, which casts doubt on the educational value of their choice of the distance format. This is especially true given that in this group (the 36.9% who found it difficult to answer), 46.6% expressed an intention to continue the online learning experience.
That the desire of high school students to study online is not worth taking literally, without critical reassessment, is also evidenced by the fact that about a third of the students in the groups who positively assessed the experience of distance learning (26% in "online is convenient"; 15% in "online provides quality") solved a smaller volume of educational tasks, which indicates an orientation towards imitation of education. Perception of one's own motivation in the context of online learning. A direct question to high school students about the state of their motivation made it possible to single out four groups of respondents: motivation does not decrease, because learning online is convenient (46.4%); motivation does not decrease, because studying online is productive (10.8%); motivation is partially reduced, because studying at home is difficult (32.7%); motivation is reduced, because studying at home is almost impossible (10.1%). As can be seen, 57.2% (46.4% + 10.8%) of high school students did not notice a decrease in motivation when switching to online learning. However, their chosen answers once again showed convenience prevailing over productivity. Convenience, of course, is a significant condition for increasing motivation in the educational process, but it can hardly be called decisive. In this case, it can even act as a factor that reduces the readiness of high school students for intensive and high-quality education. This hypothesis was tested with two questions. The first question concerned the frequency of extraneous activities during distance lessons. In the group motivated by the convenience of online learning, only 35.8% of respondents were not distracted by extraneous matters, and in the group motivated by the ability of online learning to provide productive learning, 31.5%. The cross-tabulation showed a low degree of reliability in the high school students' claims that their motivation does not decrease in the conditions of online learning. The second question concerned preferences among learning formats: asynchronous, synchronous, and regular class. This criterion serves to identify the relationship between the presence of motivation in an online environment and the willingness to abandon the traditional class-lesson system. It was found that in the group motivated by online convenience, 23.1% want to study in a regular school, and in the group motivated by online productivity, 43.5%. This means that even the presence of motivation in online lessons is not associated with greater productivity of digital education compared to the traditional school. Conversely, some high school students from the two groups of the partially motivated and the demotivated choose precisely this format of learning. According to our calculations, they make up 9% of the total sample. Their aim is obvious: to get out of the teacher's control. All this confirms the hypothesis that the high motivation for the online learning process declared by high school students is in fact more an intention that is difficult to implement than a reality. It can be assumed that such pronounced preferences for online convenience and the desire to learn asynchronously in fact reflect a goal that is not openly manifested: to get out of the teacher's control in order to reduce one's educational activity.
Obviously, this means that in the personality structure of high school students there are social qualities that do not contribute to maintaining motivation at a high level in the process of online learning. Expert evaluation of the educational work of high school students in the online format by teachers and parents. Teachers (74.7%) and parents (81.8%) reacted negatively to the high school's experience of online learning, explaining this by the decline in the quality of education and the inconvenience of this format. Only 7.1% of teachers are convinced that high school students solved more learning tasks in online lessons. 43.6% of teachers encountered attempts by students to simulate the learning process, and 39.3% believe that online learning is a complete imitation of the educational process. 51.0% of teachers believe that the transition to online had a negative impact on the quality of education for all high school students, though 43.1% specify that it is more convenient for some categories of children (homebodies, the disabled, the shy, the highly motivated) to study remotely. 47.2% of parents came into conflict with their children because the children were engaged in extraneous activities during lessons. And finally, only 5.6% of parents and 8.8% of teachers would like to transfer their children to online education. The latter figure is especially telling for teachers: 25.3% of them would like to repeat the experience of online learning at school, but only 8.8% would choose it for their own children. This confirms the point of view of some parents that a number of teachers advocate online learning only for their own convenience. Based on the analyzed data, three approximately equal groups of respondents can be distinguished. In the first group, high school students are focused on the standard class-lesson system with elements of e-learning; according to our calculations, it comprises 40% of respondents. The second group articulates the advantages of online learning, which are associated with convenience and greater resource potential compared to classical learning; it contains about 35% of the respondents. The third group represents the interests of high school students for whom, from the point of view of solving educational tasks, it is not so much the format of education that matters as the opportunity to get out of the teacher's control and find themselves in an educational environment convenient for themselves; this group comprises 25% of the respondents. The error in calculating the size of the groups is no more than 3-5%. The main conclusion: faced with a dichotomy between quality and convenience, high school students choose convenience non-reflexively. This means that the online learning format, whose usefulness is obvious only if students have stable cognitive activity, is unacceptable for most high school students. --- Conflict of interests The authors declare no conflict of interest.
The definition of society is tightly bound to human group-level behavior. Group faultlines, defined as hypothetical lines splitting groups into homogeneous subgroups based on members' attributes, have been proposed as a theoretical device for identifying conflicts within groups. For instance, crusades and women's rights protests are consequences of strong faultlines in societies with diverse cultures. Measuring the presence and strength of faultlines remains an important challenge. The existing literature resorts to questionnaires as the traditional tool for finding group-level behavioral attributes and thus identifying faultlines. However, questionnaire data usually come with limitations and biases, especially for large-scale research on human group-level behavior. Moreover, questionnaires limit faultline research because of the possibility of dishonest answers, unconscientious responses, and differences in understanding and interpretation. In this paper, we propose a new methodology for measuring faultlines in large-scale groups that leverages data readily available from online social networks' marketing platforms. Our methodology overcomes the limitations of traditional methods for measuring group-level attributes and group faultlines at scale. To prove the applicability of our methodology, we analyzed the faultlines between people living in Spain, grouped by geographical regions. We collected data on 67,270 interest topics from Facebook users living in Spain,
Introduction "Conflict is the beginning of consciousness" -M. Esther Harding. A short period after settling the flames of World War II, many social scientists started thinking about how to explain the psychological forces that culminated in the Holocaust, among other horrors. During the post-war period leading into the 1970s, a branch of social scientists focused on group and group-formation procedures to find an interpretation for conflicts related to collective human behavior. In this context, 'group' was a label for aggregated interpersonal processes. Measurement techniques for the group-level behavior lack consistent findings when considering single group members' attribute such as race [1][2][3][4][5][6]. Therefore, researchers were motivated to investigate the impact of multiple group member attributes alignment (e.g. race and gender) on team members conflicts. Faultlines are hypothetical dividing lines splitting a team into one or more relatively homogeneous subgroups [7,8]. Studies on the effects of faultline dynamics to explain theoretical underpinnings and effects of faultlines appear in sociology literature [9,10]. Like many other aspects of human behavior, the implementation of measurement tools has been challenging. Still, reliable measurement techniques associated with group-level attributes have been introduced by the literature [11,12]. Group faultlines usually have a detrimental effect on team-level outcomes [8,10]. Lau and Murnighan (1998) introduced the initial faultline theoretical model [7,13]. They based the theoretical reasoning on social categorization and social identity approaches [14]. Despite a well-developed theoretical framework, limited measurement techniques currently exist to create a strong link between these theories and the real world. Managers and politicians have considered faultlines measurements an essential tool for managing performance and leadership. Thanks to technological developments during the past few decades, many aspects of human social behavior are now more apparent to scientists. One of the main contributions of technology to human life is the onset and spread of social network platforms. These platforms offer free services to users in exchange of access to users' data; they enrich their databases by the behavioral attributes of their users and manipulate them for marketing purposes. For example, Facebook provides a marketing platform for advertisers to target their audiences based on demographic, behavioral characteristics and location. These platforms' new social behavior measurement instruments have more valuable benefits than traditional ones (surveys and questionnaires). The traditional approach to measure faultlines was the application of questionnaires by asking team members about their behavioral attributes and calculating the metrics. This approach exposes the results to biases such as dishonest or unconscientious responses. Besides, scaling the research to larger groups using this approach is costly and timeconsuming. In this research, we employ data from social networks' marketing platforms and introduce a new approach to overcome these limitations. This new approach aims to increase the scalability and accuracy of faultlines measurement while making it less expensive. We introduce a reliable methodology based on data from billions of social network users to measure the faultline separating populations in different geographic regions. 
To prove the applicability of this tool, we analyzed the faultlines between people living in Spain, grouped by geographical regions. Spain has experienced identity-related regionalism, independence movements, and conflicts. If our methodology performs well, it should be able to capture these conflicts. --- Theoretical discussion The salience of regional/national identity in geographic regions produces the conflicts mentioned above. Political leaders tend to emphasize differences such as culture and national identity to win votes, drawing a clear line (faultline) between them and us (e.g., Catalans vs. non-Catalans in Spain). The social science literature is rich in theories and measurement techniques for analyzing faultlines. We extend the available measurement techniques to better understand the status of faultlines in large-scale groups. This research demonstrates how our proposed technique measures faultlines between groups living in different Spanish Autonomous Communities1 (referred to as CCAAs by their abbreviation in Spanish). We first apply self-categorization and social identity theories to identify the places where we expect to find strong faultlines. Then, we use one of the most popular online social network platforms (Facebook) to measure the faultlines' distance and strength. --- Faultline theories The term faultline originates from geology and refers to the boundary between two tectonic plates; faultlines therefore mark locations that are more prone to split. Lau and Murnighan (1998) adopted this definition for research on group conflicts by defining faultlines as "hypothetical dividing lines that may split a group into subgroups based on one or more attributes" [7]. The purpose of measuring faultlines is to quantify how prone a team is to split into subgroups [15]. According to faultline theories, groups divided into two homogeneous subgroups with distant intra-group attributes are more likely to experience conflict between members. Three main categories of faultlines have been the focus of this literature: (1) separation-based faultlines (e.g., followers of different football teams); (2) information-based faultlines (e.g., engineers vs. psychologists); (3) resource-based faultlines (group members' access to "finite resources, e.g., power, materials, authority, and status") [16]. Social identity and self-categorization are two of the most prevailing theories in this field. They are building blocks for faultline research, as they explain: (1) subgroup formation, (2) the relationship between group identity and trust, and (3) the nature of ingroup-outgroup biases [17]. --- Self-categorization and social identity Social categorization theory accounts for faultlines in human groups, and comparative fit is one of several factors affecting social categorization processes. Comparative fit explains how observed similarities and differences, such as languages or accents, are perceived as social categories [14]. A strong faultline makes the differences within groups more salient. The human brain's ability to process information is limited. For example, if we see an object that flies and sings, we unconsciously assume it is a bird and assign all the attributes of the bird category, such as laying eggs and having wings, to that object. Abstraction is therefore the key faculty by which the human brain makes sense of the surrounding world, and models of the brain treat the highest levels of abstraction as central to demonstrating cognitive mechanisms [18].
The human capacity to recognize different levels of abstraction is limited. Cognitive procedures such as abstraction, thinking, and learning structure the information we retrieve from the outside world. When individuals confront disorganized and unlabelled data, they abstract the complex data into basic concepts with specific goals [18]. If the flying object in the bird example has one wing instead of two, the human mind still puts it in the bird category. The same happens when we see someone speaking a language (Italian, Chinese, Catalan, etc.): we unconsciously assign attributes to that person (e.g., that their country of origin is Italy, China, Catalonia, etc.). The social identity approach describes the state of people thinking of themselves and others as a group. The theory identifies three psychological steps in perceiving a social group: (1) social categorization: organizing social information by categorizing people into groups such as Catalan, Castilian, South American, and Japanese; (2) social comparison: giving meaning to those categories in order to understand the group's role in a specific situation (e.g., Catalans speak Catalan, Japanese are hardworking); (3) social identification: the process by which people relate themselves to one of those categories (e.g., "I am Catalan!", "I am Spanish!"). The lowest level of abstraction in this process is the personal self, where perceivers categorize themselves as "I". A higher level of abstraction corresponds to a social self, where perceivers categorize themselves as part of a "we" in contrast to a salient out-group (them) [19]. Social identity theory explains some behavioral attributes of group members. According to this theory, people maintain their self-esteem through a cognitive bias that assigns positive attributes to their own group, nationality, category, etc. Individuals are assumed to be intrinsically motivated to achieve positive distinctiveness and to "strive for a positive self-concept" [20]. This cognitive bias may also result in uneven distribution of resources and discrimination within groups: members endorse resource distributions that maximize the positive distinctiveness of the in-group relative to an out-group, even at the expense of personal self-interest [14]. Social identity theory also explains that an in-group seeks to increase self-esteem through direct competition with the out-group. At high levels of social competition this effect polarizes the group, producing two salient subgroups. According to the similarity-attraction paradigm [21], members of one subgroup experience psychological distance from members of the other subgroup and are less likely to cooperate with them [22]. Therefore, people living in the same country feel they are in the same group and thus have less distant behavioral attributes than people in other countries (other groups) (Proposition 1). Self-categorization theory argues that a category's prototype is contingent on the context in which the category is encountered. This is consistent with leader categorization theory, in which stereotypical leaders are more effective than non-stereotypical ones [23]. --- Insular effects Islands have developed isolated living communities, whether plant, animal, or human, separated from, and differing to varying degrees from, mainland communities of the same kind. Means of physical communication, such as transport, were crucial for the past interaction of island and continental populations.
They also depended largely on distance from the mainland, the climate, and technology. Contacts are influential in determining the degree and nature of cultural factors [24]. This is especially true of islands, which have been less affected by cultural and ethnic change, hostile invasion, mass immigration, or political interference, while at the same time being more exposed, if not open, to cultural stimuli from a wider variety of sources [25]. The distance and insularity of islands result in more distinctive cultural attributes in the population. These distinctive cultural attributes may have fostered a strong regional identity that prevails over the country's national identity. According to faultline theories, the inhabitants of islands should consider themselves a distinct group, which will lead to a strong faultline. Therefore, the faultlines in islands are expected to be relatively strong (Proposition 2). --- Conflict There is consensus in the faultline literature that a strong, activated faultline in a group of people can explain certain social conflicts (task conflict, relationship conflict, process conflict) [26]. Faultline activation is defined as the process by which members of a group come to be perceived as members of one or more subgroups [27]. A vast body of literature is devoted to developing theories and techniques for measuring and managing conflicts (e.g., international joint ventures [28,29], children of bi-cultural families [30]). The existing literature considers a strong faultline an important predictor of group conflict. Thatcher and Patel (2011) argued that if a group perceives other subgroups as threatening, individuals maintain their self-esteem through positive distinctiveness, resulting in conflict between subgroups [8]. Conversely, group diversity decreases conflict and group faultline strength [31]. Therefore, based on these findings, our methodology should observe higher faultline values in regions that have experienced regional conflicts in their history (Proposition 3). --- Measurement The literature on human group-level measurement relies mainly on questionnaire surveys. Questionnaires come with limitations, such as the restricted number of questions and non-scalability. Throughout human history, large-scale surveys and the collection of empirical data on populations have been costly, time-consuming, and in many instances impossible [32]. In contrast, advertisers can elicit many behavioral dimensions by tracking internet users' online behavior [33]. Such platforms continuously track users' interests, beliefs, preferences, behaviors, locations, and interactions. The majority of faultline research has been conducted through questionnaire-based experiments on relatively small groups. This paper is the first attempt to use large-scale field data provided by online social platforms in faultline research. We use data from one of the most prominent social networking platforms (Facebook), with more than 2.9B monthly active users, to measure faultlines. Facebook places particular importance on classifying the interests of its users for marketing purposes [34] and measures each individual user's preferences. --- Interests in Facebook Facebook infers user preferences from self-reported interests, clicking behavior on Facebook posts, software downloads, GPS location, and the processing of communications with other users across multiple platforms (e.g., Facebook, Instagram, or WhatsApp).
Facebook anonymizes this information and makes it accessible to marketers through an application programming interface (API). Facebook identifies users' interests by tracking their activities on its own platforms (Facebook and Instagram) and on third-party websites, apps, and online services. More specifically, in addition to the information collected from its own social networks and applications (Facebook, Instagram, and WhatsApp), Facebook collects data from more than 30% of the most popular websites [35]. Facebook may also track users' locations through their mobile devices, inferring the amount of time each user spends in locations such as football fields, universities, theaters, restaurants, and churches. Facebook users' interests are shaped by multiple facets of their activity (e.g., if someone attends the football stadium for every Real Madrid match after checking the Real Madrid website, Facebook will most probably add "Real Madrid football team" to the user's interests). Thus, countless interests shape human preferences on Facebook. Facebook organizes interests in a multi-level, hierarchical structure with 14 root categories: business and industry, education, family and relationships, fitness and wellness, food and drink, hobbies and activities, lifestyle and culture, news and entertainment, people, shopping and fashion, sports and outdoors, technology, travel places, and events. Facebook also assigns a unique, language-independent ID to each interest. Facebook derives user interests through multiple information channels, including page likes, self-declared interests, downloaded apps, and location, forming the most comprehensive dictionary of preferences for billions of people. Previous studies found the following paths by which preferences are assigned to a user. The user has a preference because: (i) "This is a preference the user added"; (ii) "what the user does on Facebook, such as pages the user has liked or ads the user clicked"; (iii) "the user clicked on an ad related to. . ."; (iv) "the user installed the app. . ."; (v) "the user liked a page related to. . ."; (vi) "the user comments, posts, shares or reactions the user made related to. . ." [36]. The goal here is to measure faultlines (strength and distance) using features extracted from the popularity of different topics among groups of people living in specific geographic regions. The following section explains how we extracted these features from Facebook data. In previous work, we presented the first large-scale analysis of measuring culture using tens of thousands of interests to define human group culture, and examined the validity of this approach using the World Values Survey (WVS), among other sources. Our findings showed that the Facebook measurement encompasses a broader range of cultural explanatory dimensions than the WVS [37]. --- Faultline distance According to distance theory, team members in one subgroup feel psychological distance from team members in other subgroups, making them less likely to cooperate [22]. Thus, measuring the distance between the behavioral attributes of the subgroups sheds light on the status of the faultline. Faultline distance reflects the extent to which the formed subgroups differ from one another in behavioral characteristics [38]. The distance between the group-level attributes of two subgroups is used to calculate faultline distance. Consider a group $G$ consisting of $n$ members $A_j$ ($j = 1, \ldots, n$): $G = \{A_1, A_2, \ldots, A_n\}$.
Each member of the group may be interested in topic $i$ ($a^i = 1$) or not ($a^i = 0$). We can then assign a vector of $p$ dimensions (attributes) to each member $j$:

$$\vec{A}_j = (a_j^1, a_j^2, \ldots, a_j^p).$$

We compute group-level attributes $\vec{V}_g$ using mean vectors (the average value over group members for each attribute). The $i$th group-level attribute $\bar{a}^i$ is calculated by averaging the $i$th attribute $a_j^i$ across all $n$ group members:

$$\bar{a}^i = \frac{1}{n}\sum_{j=1}^{n} a_j^i, \qquad \vec{V}_g = (\bar{a}^1, \bar{a}^2, \ldots, \bar{a}^p).$$

Faultlines are by definition hypothetical lines splitting the group into subgroups. We assign a vector of $p$ dimensions to each subgroup:

$$\vec{v}_1 = (\bar{a}_1^1, \bar{a}_1^2, \ldots, \bar{a}_1^p), \qquad \vec{v}_2 = (\bar{a}_2^1, \bar{a}_2^2, \ldots, \bar{a}_2^p).$$

The faultline distance $D_g$ is the Euclidean distance between the two subgroup attribute vectors:

$$D_g = |\vec{v}_1 - \vec{v}_2| = \sqrt{\sum_{i=1}^{p} (\bar{a}_1^i - \bar{a}_2^i)^2}.$$

--- Faultline strength Thatcher and Patel (2003) described faultlines as potential splits that yield "relatively homogeneous subgroups based on the attributes of the team members" [39]. Faultlines, as the definition implies, are imaginary lines that separate homogeneous subgroups, and faultline strength measures how homogeneous these subgroups are. To calculate faultline strength, referred to as Fau, we therefore compute the variation within each subgroup. This measurement is based on self-categorization theory, which distinguishes between in-group and out-group, explaining why the measure can detect only two subgroups. In theory, polarization is one outcome of group conflict, making within-group differences more salient [40]; faultline strength is therefore a valid measurement for groups with strong faultlines. Thatcher and Patel illustrated the difference between the strength and distance measurements using a comparison table. Table 1 shows two groups of four people with different demographics. In the first group, there are two distinct subgroups whose demographic characteristics are homogeneous within subgroups; members of the second group, on the other hand, have a wide range of demographic characteristics. The two groups have the same faultline distance, but because of the alignment of the subgroup members' demographic attributes, the faultline strength of the first group is higher. Thatcher formulated faultline strength based on the $p$ attributes of each group member as follows:

$$\mathrm{Fau}_g = \frac{\sum_{i=1}^{p}\sum_{j=1}^{2} n_j^g\,(\bar{a}_j^i - \bar{a}^i)^2}{\sum_{i=1}^{p}\sum_{j=1}^{2}\sum_{k=1}^{n_j^g} (a_{jk}^i - \bar{a}^i)^2}, \qquad g = 1, 2, \ldots$$

Here $p$ denotes the number of attributes in the data, $n_j^g$ is the number of members in subgroup $j$ under split $g$ (we assume each faultline splits the group into two subgroups), $\bar{a}_j^i$ is the mean value of attribute $i$ in subgroup $j$, $\bar{a}^i$ is the mean value of attribute $i$ in the whole group, and $a_{jk}^i$ is the $i$th attribute of the $k$th member of subgroup $j$. Fau takes values between 0 and 1, corresponding to faultline strength [39]; groups that split into two relatively homogeneous subgroups have larger Fau values. Bezrukova et al. (2009) introduced a new faultline measurement, multiplying faultline distance by faultline strength (Fau), which is more explanatory than the previous measurements [38]:

$$S_{Be} = \mathrm{Fau}_g \times D_g.$$
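To make the two measures concrete, the following is a minimal Python sketch of the computations above, assuming a binary member-by-attribute matrix and a known two-way split; the function names and toy data are our own illustration, not the authors' code.

```python
import numpy as np

def faultline_distance(v1, v2):
    """Euclidean distance D_g between two subgroup attribute vectors."""
    return np.linalg.norm(np.asarray(v1, dtype=float) - np.asarray(v2, dtype=float))

def fau_strength(X, labels):
    """Thatcher's Fau: between-subgroup variation divided by total variation.

    X      : (n_members, p_attributes) array of member attributes.
    labels : length-n array of subgroup ids for one candidate split.
    """
    X = np.asarray(X, dtype=float)
    overall_mean = X.mean(axis=0)                 # \bar{a}^i for each attribute i
    between = total = 0.0
    for g in np.unique(labels):
        Xg = X[labels == g]
        between += len(Xg) * ((Xg.mean(axis=0) - overall_mean) ** 2).sum()
        total += ((Xg - overall_mean) ** 2).sum()
    return between / total                        # in [0, 1]; 1 = perfectly homogeneous split

# Toy example: four members, three binary attributes, two homogeneous subgroups
X = np.array([[1, 1, 0],
              [1, 1, 0],
              [0, 0, 1],
              [0, 0, 1]])
labels = np.array([0, 0, 1, 1])
v1, v2 = X[labels == 0].mean(axis=0), X[labels == 1].mean(axis=0)

fau, d = fau_strength(X, labels), faultline_distance(v1, v2)
print(fau, d, fau * d)   # Fau = 1.0, D_g ~ 1.73, Bezrukova score S_Be = Fau x D_g
```

With the two perfectly homogeneous subgroups in the toy data, Fau reaches its maximum of 1, so the Bezrukova score reduces to the plain Euclidean distance between the subgroup means.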
--- Case study (Spain) With 505,990 square km, Spain is the second-largest country in the European Union and includes territories in Africa and several islands in the Atlantic Ocean and the Mediterranean Sea. Spain is divided into 17 autonomous communities (CCAA) and has six official or co-official languages. Spain's continental European territory is located on the Iberian Peninsula; its insular territory comprises the Balearic Islands in the Mediterranean Sea and the Canary Islands in the Atlantic Ocean off the coast of Morocco. The minimum Euclidean distance between continental Spain and the Balearic and Canary Islands is 87 and 1,701 kilometers, respectively. People in Spain live under the same flag and share the same resources; therefore, they consider themselves to belong to the same group, and people in other countries to belong to different groups. According to the theoretical discussion in Sect. 2.2, which led to Proposition 1, people living in Spain should have less distant behavioral attributes among themselves than with people in other countries (H1). --- Traces of recent conflicts in Spain Cultural differences, together with increased interaction among Spanish people as transportation improved, have resulted in regional conflicts and, consequently, in radical independence movements in recent history. --- Basque region conflict The leaders of this region's independence movements began political activities in 1958 with the formation of the Euskadi Ta Askatasuna (ETA) group. The activities of ETA were organized around four pillars: political, cultural, military, and economic. Throughout its history, ETA had numerous conflicts with the national government. The previous regime (General Franco's dictatorship) imprisoned many ETA members as the strategy of action-repression-action took hold. The assassination of Franco's prime minister, Luis Carrero Blanco, took place in December 1973. Two years later, Koordinadora Abertzale Sozialista (KAS) established itself as a coordinating body of the movement after the execution of two members of the political-military branch prompted widespread condemnation within the Basque region. In 1976, KAS introduced a platform of minimum conditions for participation in the MLNV (Basque national liberation movement), an umbrella term encompassing all social, political, and armed organizations based on ETA's ideas. ETA's illegal military activities claimed the lives of 92 people in 1980. ETA's terrorist attacks continued until 2011, and radical movements until 2014 [41]. --- Catalonia region conflict In 2003 and 2004, the political situation in the Catalonia region changed quite dramatically. For the first time in the constitutional period, the national and Catalan governments were led by the same party (the Socialist party). In 2013, openly secessionist parties won a majority of the seats in the regional parliament (74 of 135) and more than 49 percent of the popular vote. Moreover, according to most polls in 2013, 55 percent of Catalans wanted an independence referendum, and 43 percent would have voted for complete secession from Spain [42]. In October 2017, the Catalan regional government unilaterally declared independence from Spain. This event triggered: (i) a political reaction from the elected Spanish government, which dismissed the Catalan government, took over its responsibilities, and called new regional elections for December 2017;
(ii) a legal reaction that led to the imprisonment of a significant part of the Catalan government under the accusation (among others) of breaking the Spanish Constitution; and (iii) the exile, as of October 2017, of the Catalan president and part of his government to different European countries. This dispute activated a strong faultline dividing Spanish society into two parts: (i) those supporting the right of the Catalan people to become an independent country, and (ii) those supporting the national government's argument that all Spanish citizens have the right to decide the destiny of Catalonia. --- Insular effects in Spain Spain is a vast country with islands distant from the other CCAAs on the Iberian Peninsula. The Canary Islands are the southernmost Spanish CCAA and sit on the African tectonic plate; the closest part of these islands is only 100 kilometers west of Morocco. The results of a survey on territorial self-identification conducted between 1990 and 1995 show that regional identity, driven by cultural specificity, is strongest in the Basque Autonomous Region (49.7%), the Canary Islands (47.9%), and Catalonia (31.8%); other CCAAs, such as Galicia (24.2%) and Andalucía (20%), show a less prevalent regional identity. The surveys also reveal that nearly half the population of these regions feels more Basque, Canarian, or Catalan than Spanish. Insularity and a distinctive lifestyle are among the cultural specificities behind this attitude [43,44]. As a result, these islands went through deadly independence movements, e.g., MPAIAC. 2 Therefore, we expect the faultline strength in this region to be relatively high. Based on the theoretical discussion in Sect. 2.3, we should find the strongest faultlines in the Spanish islands (Canary and Balearic) (Proposition 2) and in the regions that experienced secessionist conflicts in their history (Catalonia and the Basque Country) (Proposition 3) (H2). --- Methodology --- Spain as a supergroup We consider Spain a supergroup for all populations grouped by CCAAs, with a single national identity, national sports teams, currency, and history. Therefore, we expect more alignment of interests and behavioral attributes between Spanish people than between Spanish and non-Spanish people. By implication of the theory and measurement technique introduced in the previous section, the distance between the behavioral attributes of people inside Spain should be smaller than the distance between Spanish and non-Spanish people. Polzer et al. (2006), in a study of 45 teams consisting of members from 10 different countries, theorized that geographically dispersed teams are likely to activate faultlines [29]. They also found that these faultlines were stronger when they divided a group into two equally sized subgroups of homogeneous nationality. In Spain, the regional identities of the populations of the geographically separated CCAAs are strong. Populations in geographically distant regions such as islands develop stronger regional identities and faultlines due to physical separation and insularity. Moreover, the history of independence movements and political-leadership activism in the Catalonia and Basque regions enhances the self-categorization process, leverages regional identity, and strengthens faultlines between regional and non-regional people.
Therefore, we hypothesize that the strongest faultlines in Spain separate the populations of the islands (due to geographical distance) and of the Catalonia and Basque regions (due to their conflict history). --- Application of marketing data To measure faultline distance using the Facebook marketing platform, we constructed a vector consisting of the popularity of interests in each region. We calculated the interest penetration (IP) of interest $i$ in geographic region $b$ as

$$IP_b^i = \frac{MAU_b^i}{MAU_b^{\mathrm{total}}},$$

where $MAU_b^i$ is the number of monthly active Facebook users interested in topic $i$ in region $b$, and $MAU_b^{\mathrm{total}}$ is the total number of monthly active Facebook users in region $b$. We constructed the interest penetration vector $IPV_b^m$ using $m$ interests for geographic region $b$:

$$IPV_b^m = (IP_b^{\mathrm{topic}_1}, IP_b^{\mathrm{topic}_2}, \ldots, IP_b^{\mathrm{topic}_m}).$$

We consider $IPV_b^m$ a proxy for the group-level attributes of people living in geographic region $b$. To compare regions in the first place, we calculated the cosine distance between the behavioral attributes of people living in geographic regions $k$ and $l$:

$$\mathrm{Distance} = 1 - \frac{IPV_k \cdot IPV_l}{\|IPV_k\|\,\|IPV_l\|}.$$

The cosine distance takes values between 0 and 1, where 0 corresponds to the least distance between two regions. For the Bezrukova faultline strength measurement [38], we used the Euclidean distance between two IPVs:

$$D_g = |\vec{IPV}_k - \vec{IPV}_l| = \sqrt{\sum_{i=1}^{p} \left(IP_k^{\mathrm{topic}_i} - IP_l^{\mathrm{topic}_i}\right)^2}.$$

(A code sketch of these computations appears at the end of this section.) --- Dataset We aim to create a dataset that allows us to construct vectors $\vec{V}$ with a large number of dimensions for each of the Spanish regions (and other countries) considered in our work. To this end, we rely on Facebook's marketing API, 3 which allows retrieving the number of users interested in a particular element in a given geographical area (e.g., country, region, etc.). Marketers use the configurations available in the Facebook platform to target the relevant audiences for their campaigns. Facebook Ads Manager is a public interface, available to any Facebook user, that allows marketers to define the group they want to target. The group specification can include geographic location (e.g., country, region, city, and zip code), demographics (e.g., gender, age, language, and education), behaviors (e.g., mobile device, operating system, and browser), and interests (e.g., sports, food, cars, and art). The Facebook marketing API provides monthly (MAU) and daily (DAU) active users and the relevant advertising costs for a given set of audience specifications. For our research, we listed a set of interests and, for each (interest × region) pair, sent programmatic queries to Facebook to retrieve monthly and average daily users. Facebook provides three location types for targeting specific people (recent location, home location, and travel in). Of these options, we chose the users' home location because it is a more reliable and permanent way to identify and locate users who use both mobile and desktop computers. To correctly identify a user's home location, Facebook employs a number of techniques, including information based on IP address, the current city in the user's profile, and friends' declared profile locations.
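As promised above, here is a minimal Python sketch that builds interest penetration vectors and compares two regions with the cosine and Euclidean distances just defined; the interest names and MAU counts are hypothetical, and the sketch does not query the real API.

```python
import numpy as np

def interest_penetration_vector(mau_by_interest, mau_total):
    """Build an IPV: each interest's MAU divided by the region's total MAU.
    Interests are sorted by name so vectors from different regions align."""
    return np.array([mau_by_interest[k] for k in sorted(mau_by_interest)]) / mau_total

def cosine_distance(ipv_k, ipv_l):
    """1 - cosine similarity; 0 means identical interest profiles."""
    return 1 - np.dot(ipv_k, ipv_l) / (np.linalg.norm(ipv_k) * np.linalg.norm(ipv_l))

# Hypothetical MAU counts for three interests in two regions
catalonia = interest_penetration_vector(
    {"fc_barcelona": 1.2e6, "paella": 0.4e6, "skiing": 0.2e6}, mau_total=5.0e6)
madrid = interest_penetration_vector(
    {"fc_barcelona": 0.5e6, "paella": 0.5e6, "skiing": 0.3e6}, mau_total=4.5e6)

print(cosine_distance(catalonia, madrid))   # cosine distance between the two IPVs
print(np.linalg.norm(catalonia - madrid))   # Euclidean D_g used in the strength score
```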
The data collection task was performed between February 10 and February 12, 2021. We evaluated another sample, taken between February 25 and February 27, 2021, to confirm that the results given in the paper are stable across both data samples. Our goal is to define a very long vector, including tens of thousands of elements, with all sorts of information that captures the culture and interests of a region (from food or sports preferences to religion or political issues). Unfortunately, Facebook does not provide a comprehensive list of all available interests, so we had to define our own agnostic mechanisms to create the aforementioned interest vector. We defined two methods to create vectors including tens of thousands of interests. Comparing the results obtained with these two methods, we can confirm that the results of our work remain the same as long as the interest vector includes a large enough sample of interests. We describe both interest-definition methods below: • DBPedia: We rely on the interest vector used in our previous work [37]. We downloaded 12,301,672 entries from DBPedia (http://es.dbpedia.org/), mapped them into 399,182 Facebook interests, and selected only those linked to audiences with more than 500K FB users worldwide. This process led to a vector of 77,523 interests. • FDVT: To obtain the second vector, we retrieved all the interests assigned to Spanish users who downloaded our online app, the Facebook data valuation tool (FDVT) [45]. The FDVT informs FB users of an estimate of the revenue they generate for FB based on the targeted ads they see and click on. Using the FDVT, we collect FB interests from users who have installed the tool. Overall, we collected 67,270 interests from 2,101 Spanish FDVT users. While we acknowledge that those users are not a representative sample of Spanish users, they suffice for our goal of creating a very long vector; in addition, with this method many interests are likely related to specific regional elements. The difference in the cosine distance between any pair of regions considered in the paper was less than 0.001 when comparing the DBPedia and FDVT methods. Because the FDVT interest vector was obtained directly from Spanish users, we conduct our analysis and measurements with that dataset. Using the FDVT interest vector, we create one interest vector per CCAA (Spain has 17 CCAAs). This collection process yielded a vector of 67,270 interests for each CCAA, including the monthly active users (MAU) interested in each of these 67,270 interests. We also counted the total number of MAU in each CCAA. The MAU of each interest was then divided by the MAU of all Facebook users in each CCAA, yielding a value ranging from 0 to 1, which we used as a proxy for the behavioral characteristics of people living in each CCAA. Note that we obtained the same data for six European countries (France, Germany, Greece, Italy, Portugal, and the United Kingdom) and two neighboring regions of Catalonia in France (Midi-Pyrénées and Languedoc-Roussillon). Facebook imposes a lower bound of 1,000 users on reported MAU for privacy reasons, reporting 1,000 in place of any lower value. To address this issue, we replace MAU with DAU (daily active users) whenever MAU equals 1,000; note that Facebook's minimum reported DAU is 20. In addition, to avoid the lower-bound limitation, we targeted geographic regions with high user counts.
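A minimal sketch of the substitution rule just described, assuming per-query MAU and DAU values have already been retrieved; the function name is our own:

```python
def effective_audience(mau: int, dau: int, mau_floor: int = 1000) -> int:
    """Facebook reports at least 1000 for MAU for privacy reasons; when that
    floor is hit, fall back to DAU, whose reporting floor is only 20."""
    return dau if mau == mau_floor else mau

print(effective_audience(mau=1000, dau=37))     # floor hit -> use DAU: 37
print(effective_audience(mau=52000, dau=8100))  # normal case -> keep MAU: 52000
```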
We collected the data for our analysis by programmatically querying the Facebook marketing API with 67,271 queries for each geographical region (17 Spanish CCAAs, six European countries, and two French regions). --- Representativeness of the data To determine the representativeness of the Facebook data, we examined the data in relation to Facebook coverage in the studied regions. We obtained the Facebook audience counts by querying the Facebook marketing API, and the population of each Spanish CCAA from the Spanish National Statistics Institute (www.ine.es). The penetration of Facebook in each region was then calculated by dividing the audience count by the population of the region. The distribution of Facebook penetration in Spanish CCAAs is depicted in Fig. 1. According to the findings, Facebook penetration in all Spanish CCAAs exceeds 50%, with Catalonia and the Community of Madrid having higher penetration than the rest of Spain. We performed a similar analysis for some European countries using population data from www.statista.com; the results are shown in Fig. 2. --- Results and discussion We assumed that the population is divided into 17 subgroups and measured the faultlines separating each CCAA from the rest of Spain. This way, we can identify and measure strong faultlines. --- Distance analysis Figure 3 depicts the cosine distances between the IPVs of the Spanish CCAAs and those of several European countries and the considered French regions. Each box plot depicts the distribution of cosine distances between a CCAA's IPV and those of the remaining Spanish CCAAs; the boxes for the European countries present the cosine distances between the country's or region's IPV and all Spanish CCAAs. The primary observation in Fig. 3 is that the cosine distance between the IPVs of the Spanish CCAAs is roughly one order of magnitude lower than the cosine distance between European countries/regions and Spanish CCAAs. Table 2 provides the average distance between the IPVs of the geographic regions and the Spanish CCAAs. The average distance between the IPVs of Spanish CCAAs ranges from 0.01 to 0.03, while the distance to other countries ranges from 0.14 to 0.19. People living in Spanish CCAAs (the in-group, because Spain is a supergroup for all CCAAs) are an order of magnitude closer to one another than to people living in other European countries (the out-group for people living in Spain). This finding is consistent with our theoretical discussion: members of a group (Spanish CCAAs) feel less distant from one another than from members of other groups (countries). Moreover, according to Table 2, the Spanish islands (regions with the insularity effect) and Catalonia (strong regional identity) present the highest average distances among the Spanish geographic regions. To test these initial observations regarding hypothesis 2 (H2), we used Tukey's honestly significant difference (HSD) test [46] to compare the distance distributions pairwise. According to the results of this test, we found enough evidence to reject, with 95% certainty, the null hypothesis of similarity between the Canary Islands and all 16 remaining Spanish CCAAs (Table 7). This confirms the distinctiveness of the Canary Islands, the furthest insular Spanish territory. Similarly, we found enough evidence to reject, with 95% certainty, the null hypothesis of Catalonia's similarity to 11 of the 16 remaining CCAAs (Table 8). Moreover, we computed the HSD test for the 17 Spanish CCAAs against the six European countries and the two neighboring French regions considered.
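For readers who want to reproduce this kind of pairwise comparison, a minimal sketch using statsmodels' Tukey HSD implementation follows; the distance values are synthetic stand-ins for the per-region cosine-distance distributions, not the paper's data.

```python
import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Synthetic stand-ins: for each of two regions, the 16 cosine distances
# from that region's IPV to the other Spanish CCAAs' IPVs.
rng = np.random.default_rng(0)
distances = np.concatenate([
    rng.normal(0.015, 0.004, 16),   # a "typical" peninsular CCAA
    rng.normal(0.030, 0.004, 16),   # an insular CCAA such as the Canary Islands
])
groups = np.array(["Peninsular"] * 16 + ["Insular"] * 16)

# Tukey's HSD compares all group pairs while controlling the family-wise error rate
result = pairwise_tukeyhsd(endog=distances, groups=groups, alpha=0.05)
print(result)   # reject=True -> the two distance distributions differ at 95% confidence
```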
The HSD results allow us to reject the null hypothesis of similarity in all cases. Tables 7-16 in the Appendix show the results of this analysis for the various regions and countries. Comparing the distances in these tables, we can see that the mean difference between the Canary Islands and the rest of the Spanish CCAAs (Table 7) is one order of magnitude smaller than the mean difference with the foreign countries/regions. This finding aligns well with the first hypothesis (H1). According to the faultline analysis for Spain, the strongest faultlines separate Catalonia, the Basque Country, and the Canary Islands from the rest of the regions. Table 3 provides the cosine distances from the Spanish islands, Catalonia, and the Basque Country to the rest of Spain. We observe that the Canary Islands are far from all other CCAAs compared with the distances observed for the other three regions evaluated; indeed, the Canary Islands are the region furthest away from any other Spanish region. Interestingly, the closest region to them is the Balearic Islands (0.0256), which share the Canary Islands' insularity. The three closest regions to the Balearic Islands are Valencia, Catalonia, and the Community of Madrid, in that order. This is reasonable: Valencia and Catalonia are the closest geographic regions and share a common local language with the Balearic Islands, all three having a language similar to Catalan. Because Madrid is the country's capital and has strong communication links with all regions, it is reasonable that Madrid never appears among the most distant CCAAs for the other regions. Analyzing Catalonia, we find six regions with distances below 0.02: Madrid, the Community of Valencia, the Basque Country, the Balearic Islands, Navarre, and Aragon. Madrid and Catalonia are Spain's most developed and international regions, with good transport links, allowing the people who live in these areas to communicate and exchange cultural values. Table 3 also shows reciprocity with the Balearic Islands and Valencia, due to the common local language and because Valencia is a neighboring region. People who live in border regions develop common cultural values, as is the case for Aragon. Finally, as we argue in Sect. 3, the Basque Country and Catalonia have strong cultural ties because both had secessionist political movements. The Basque secessionist movements have such a strong influence in Navarre that nationalist Basque parties argue that Navarre should be part of the Basque Country. Furthermore, there are many areas of Navarre where some people speak Euskera (the local language of the Basque Country). As a result, Navarre is significantly closer to the Basque Country than any other region. The insular regions being farthest from the Basque Country is a consequence of the previously described insular effects.
The Canary Islands are the region furthest from Catalonia, while the Community of Madrid is the closest. This finding is interesting because the two sides of the dispute in this region were the national government, based in Madrid, and the regional government in Catalonia. Managing conflicts between the regional and national governments therefore seems insufficient, by itself, to reduce the distance of the Catalonia faultline. --- Faultline strength We considered the faultlines that split Spain into two subgroups, i.e., each faultline separates one CCAA from the rest of Spain. Table 4 shows the calculated Fau strength, distance, and distance × strength values for each faultline in our dataset. According to Table 4, the strongest faultlines belong to the Spanish islands, Catalonia, and the Basque Country, which confirms the hypothesis (H2). The existing literature on cultural diversity [43] and the evidence of historical conflicts in two CCAAs (Catalonia and the Basque Autonomous Region) show that the faultlines in these regions are relatively strong; the measurement results thus confirm the validity of the methodology introduced in this paper. We analyzed the robustness of our methodology using a clustering analysis of the IPVs [11]. Clustering relaxes the assumption, made by the Fau measurement, that only two subgroups exist in the group. This clustering analysis aims to find all the subgroups using the IPVs of the Spanish CCAAs and to report the strongest faultline and its calculated strength. The results confirm the findings of the former measurements, revealing that the strongest faultline separates the population of the Canary Islands from all the remaining CCAAs, with a faultline strength of 0.29. Figures 4 and 5 present the distribution of faultline strength across the Spanish geographic regions; looking at Fig. 4, we can observe relatively strong faultlines in the Spanish CCAAs. The methodology introduced in this paper provides politicians with a tool to better allocate resources for managing regional conflicts and reducing similar regional identity-based disputes. --- Singularity analysis We analyzed interest penetrations in the CCAAs to find unique interests in each region. We define singularities as interest categories that satisfy two conditions: (1) the interest penetration in one region is ten times (10×) larger than in the rest of the regions; (2) the interest penetration in that region is more than 5%. We repeated the analysis with the first condition's threshold lowered from 10× to 5×, and report the results in Table 5. With the 10× threshold, we found "Time Out Barcelona" and "Catalunya Experience" for Catalonia; the first seems to correspond to the political interests of the population of Catalonia. In Rioja, we found "Rioja (wine)" to be the sole singularity at the 10× threshold; Rioja is a tourist region famous for its wine [44]. Hence, these results suggest that this methodology can identify identitarian interest categories. Applying social identity theory to this analysis, we can explain the group-level behavior of people in the Spanish regions. For example, social identity theory holds that people in Rioja maintain their self-esteem through a cognitive bias that assigns positive attributes, such as "the best wine quality," to their regional identity [20]. If the same people belonged to another Spanish region, such as Ribera del Duero, they would probably assume that Ribera wine has the best quality.
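The two singularity conditions translate directly into a filter over the interest-penetration matrix. Below is a minimal Python sketch with hypothetical penetration values; the function name, table layout, and numbers are our own illustration.

```python
import pandas as pd

def find_singularities(ip: pd.DataFrame, ratio: float = 10.0, min_pen: float = 0.05):
    """ip: rows = interests, columns = regions, values = interest penetration.
    Returns (interest, region) pairs where one region's penetration is at least
    `ratio` times every other region's and also exceeds `min_pen` (5%)."""
    singularities = []
    for interest, row in ip.iterrows():
        region = row.idxmax()
        others = row.drop(region)
        if row[region] > min_pen and (row[region] >= ratio * others).all():
            singularities.append((interest, region))
    return singularities

# Hypothetical penetration values for two interests in two regions
ip = pd.DataFrame(
    {"Rioja": [0.08, 0.005], "Catalonia": [0.004, 0.06]},
    index=["Rioja (wine)", "Catalunya Experience"])
print(find_singularities(ip))
# -> [('Rioja (wine)', 'Rioja'), ('Catalunya Experience', 'Catalonia')]
```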
--- Variables that may affect the faultline measurement We performed a multiple linear regression to control for economic factors, population, and Facebook penetration in the different CCAAs. We normalized the independent variables (population, GDP, and penetration) and report the coefficients of the multiple linear regression model in Table 6, with faultline strength as the dependent variable. None of the independent variables is statistically significant at the 5% threshold, so we fail to reject the null hypothesis that the dependent variable is uncorrelated with the independent variables; we therefore retain the null hypothesis of no relationship between the independent variables and faultline strength. --- Conclusion The extensive coverage and tracking capability of online social networks provide new tools for addressing conflict and faultline research at the scale of large geographical regions, overcoming some limitations of traditional approaches. We contributed to the literature on faultline measurement by introducing a new methodology for analyzing and measuring faultlines between people living in different geographic regions using data from social network platforms. We first divided people into geographic regions and extracted behavioral attributes from Facebook data; we then used the well-known faultline distance and strength measurement techniques to measure faultlines at an unprecedented scale. The social science literature has introduced management tools for dealing with group faultlines, such as increased communication and cultural diversity. The methodology presented in this paper can help politicians assess the effectiveness of different policies in managing these faultlines. Our methodology enables us to: (1) monitor the formation of faultlines by identifying the appearance of strong ones (i.e., an increase in faultline distance or strength in specific regions); (2) evaluate the evolution of existing faultlines (e.g., see whether the strength or distance of the Catalonia faultline increases or decreases); and (3) directly measure the effect of political/social action (e.g., if the Spanish government implements some active policy on the Catalonia conflict, the impact on the strength/distance of the faultline can be monitored). We tested our technique's ability to accurately measure faultlines on a large scale using the administrative regions of a large European country, Spain. The results obtained are consistent with expectations: the administrative regions with the strongest faultlines are (i) the insular CCAAs (the Canary Islands and the Balearic Islands), which are physically separated from the continental Spanish regions, and (ii) the CCAAs with active political conflicts in recent Spanish history (Catalonia and the Basque Country), which developed stronger regional identities. The social science literature emphasizes the importance of identifying faultlines as a source of conflict in order to take corrective measures. In this regard, our methodology contributes to the field by bridging the gap between faultline theory and its practical application in large-scale contexts. While in this paper we studied faultlines that separate groups of people divided by geographic region, faultlines between generations or genders can also be analyzed with the proposed methodology.
Finally, we would like to expand our work to other countries, or to collaborate with multidisciplinary researchers willing to do so in the future. We decided to focus our efforts on a single country due to the technical complexity, the time required to collect the data, and the need to explain the context of the country under analysis so that the reader can assess the faultline strength results (see Sect. 3). --- Limitations On the one hand, this research is based on data collected from online social networking platforms. The underlying assumption in our analysis is that social network users, specifically Facebook users, properly represent the entire population of the considered geographic regions. Although social network users represent a significant portion of the population in each considered geographic region, certain socio-demographic groups may be underrepresented in the Facebook user database. In our study, we consider the entire population of a region as our study group, and Facebook's coverage exceeds 50% of the population in every region. Hence, we can assert that each of the 67,270 behavioral interest topics is measured across at least 50% of the considered population. We believe that, despite its imperfections, our methodology offers better representativeness than traditional methods in the large-scale use case considered. On the other hand, this paper does not study how each of the considered interests contributes to the distance or strength of faultlines. This requires a very involved and deep analysis that we leave for future research; to this end, we plan to use machine learning techniques (e.g., regression models) as well as more traditional techniques such as principal component analysis (PCA). --- Availability of data and materials The data and the source code we developed for this research are publicly available in the following online repository. 4 --- Appendix This Appendix presents the results of the Tukey HSD test comparing each of the six European countries and the two French regions considered. The results are presented in Tables 7-16. --- Declarations --- Competing interests The authors declare that they have no competing interests. --- Authors' contributions RC contributed to the design of the paper, the development of the measurement methodology, and the paper writing. ARM contributed to the design of the paper, the execution of the experiments, and the paper writing. AC contributed to the design of the paper, the development of the measurement methodology, and the paper writing. All authors read and approved the final manuscript. Author details 1 IMDEA Networks Institute, Leganes, Spain. 2 Telematics Engineering Department, Universidad Carlos III of Madrid (UC3M), Madrid, Spain. 3 Big Data Institute, UC3M-Santander, Getafe, Spain. --- Publisher's Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. Received: 25 November 2021 Accepted: 13 June 2022
Background: Racial disparities have been reported for breast cancer and cardiovascular disease (CVD) outcomes. The determinants of racial disparities in CVD outcomes are not yet fully understood. We aimed to examine the impact of individual and neighborhood-level social determinants of health (SDOH) on the racial disparities in major adverse cardiovascular events (MACE; consisting of heart failure, acute coronary syndrome, atrial fibrillation, and ischemic stroke) among female patients with breast cancer. Methods: This 10-year longitudinal retrospective study was based on a cancer informatics platform with electronic medical record supplementation. We included women aged ≥18 years diagnosed with breast cancer. SDOH were obtained from LexisNexis, and consisted of the domains of social and community context, neighborhood and built environment, education access and quality, and economic stability. Race-agnostic (overall data with race as a feature) and race-specific machine learning models were developed to account for and rank the SDOH impact on 2-year MACE. Results: We included 4,309 patients (765 non-Hispanic Black [NHB]; 3,321 non-Hispanic white). In the race-agnostic model (C-index, 0.79; 95% CI, 0.78-0.80), the 5 most important adverse SDOH variables were neighborhood median household income (SHapley Additive exPlanations [SHAP] score [SS], 0.07), neighborhood crime index (SS = 0.06), number of transportation properties in the household (SS = 0.05), neighborhood burglary index (SS = 0.04), and neighborhood median home values (SS = 0.03). Race was not significantly associated with MACE when adverse SDOH were included as covariates (adjusted subdistribution hazard ratio, 1.22; 95% CI, 0.91-1.64). NHB patients were more likely to have unfavorable SDOH conditions for 8 of the 10 most important SDOH variables for MACE prediction. Conclusions: Neighborhood and built environment variables are the most important SDOH predictors of 2-year MACE, and NHB patients were more likely to have unfavorable SDOH conditions. This finding reinforces that race is a social construct.
Background Breast cancer is the most frequently diagnosed malignancy and the leading cause of cancer death among females globally. 1,2 In the United States, an estimated 300,590 cases will be diagnosed with breast cancer (15.3% of all new cancers) and 43,700 will die of the disease in 2023. 3 The 5-year relative survival for female breast cancer is 90.3%, and it is estimated that by 2030 there will be 5 million breast cancer survivors in the United States. 4,5 High survival expectancy and treatment improvements have increased concern about the concomitant increase in cardiovascular diseases (CVDs) associated with cancer treatment. In postmenopausal female breast cancer survivors, the risk of mortality attributable to CVD is higher than in those without a history of breast cancer, whereas CVD is also the leading cause of death in patients aged >50 years with active breast cancer. 4,6,7 CVD and breast cancer share common risk factors (eg, obesity), and cancer treatments (eg, chemotherapy, hormone therapy, immunotherapy, and radiotherapy) are associated with an increased risk of cardiovascular toxicity, leading to increased CVD morbidity and mortality. 4,8 Racial disparities have been reported for both breast cancer and CVD. Given the estimate that minority populations will exceed 50% of the US population by 2044, there is concern about these disparate outcomes across racial and/or ethnic groups. 4,9,10 Overall, the prevalence of CVD is higher in non-Hispanic Black (NHB) individuals relative to other racial subpopulations, and this disparity also exists for the primary risk factors for CVD, including hypertension and type 2 diabetes mellitus. 11,12 For patients with breast cancer, mortality is higher among Black women compared with White women, with reports showing that Black women diagnosed with breast cancer are approximately 40% more likely to die than their White counterparts. 13,14 The determinants of racial disparities in CVD outcomes have been categorized as a healthcare gap by the American Heart Association. 4 Social determinants of health (SDOH) are known to be an important aspect of peoples' living environments that affects health outcomes, and are referenced in the Cancer Moonshot program. 15,16 A large, recently published United Kingdom cancer registry-based analysis has shown that patients living in socioeconomically deprived areas have a higher rate of CVD. 17 However, the individual-level impact of SDOH on racial disparities in cardiac events among female patients with breast cancer remains unknown. 15,16 Thus, the primary objective of this study was to identify and quantify the role of SDOH in racial disparities in major adverse cardiovascular events (MACE). --- Methods --- Data Source The study setting was the University Hospitals (UH) Seidman Cancer Center (Northeast Ohio), which is a tertiary care center that serves urban, suburban, and rural areas and is composed of an integrated network of 23 hospitals, >50 health centers and outpatient facilities, and >200 physician offices in 16 counties throughout the region. 18 Due to the inner-city population served, the UH patient population includes a higher percentage of Black individuals and a lower percentage of Hispanic and Asian individuals than the overall US population.
All patient data were obtained from the UH data repository based on the Caisis platform, which consists of an open-source, web-based, cancer data management system that integrates disparate sources of patient data (eg, Soarian, next-generation sequencing laboratories, Sunrise Clinical Manager, Tumor Registry, Via Oncology, OnCore, MOSAIQ, patient-reported outcome tools, and others). [19][20][21][22][23] The information obtained was subsequently complemented with information from electronic medical records (EMRs) captured via EMERSE (Electronic Medical Record Search Engine) to obtain the most accurate and complete information for each patient, thus avoiding high missingness. 24 All patient records were deidentified, and the study was approved by the University Hospitals of Cleveland Institutional Review Board. The cohort (supplemental eFigure 1, available with this article at JNCCN.org) included females aged >=18 years diagnosed with breast cancer (all stages), determined using tumor registry data or ICD-9 and ICD-10 codes obtained from EMRs (ie, C50.xx, C79.81, 174.x, 175.0, 175.9, 198.81, and 217) between January 1, 2010, and December 31, 2019, providing a minimum follow-up of 2 years by 2022, which was the year that data were collected. 25,26 Patients were excluded from the analysis if they were of male sex and/or had carcinoma in situ. Hispanic individuals were excluded due to the low number of patients with this ethnicity. We included all patients who had SDOH information available; patients without SDOH information were excluded. Demographic information from the catchment area, primarily based on US Census and American Community Survey data, was included to demonstrate the representativeness of the study cohort. 27 --- Exposure Individual SDOH features were obtained from LexisNexis, the world's largest electronic database for legal and public records-related information, consisting of groups of variables in 4 domains (supplemental eTable 1): social and community context (marital status, number of household members, distance to closest relatives), neighborhood and built environment (crime index, burglary index, car theft index, murder index, neighborhood median household income, neighborhood median home value), education access and quality (education institution rating, college attendance), and economic stability (address stability, residence status, property status, annual income, properties owned, wealth index, household income, total number of transportation properties owned). 28,29 Access to healthcare is the fifth SDOH domain; however, because all patients were able to seek care at our institution, that metric was not analyzed. The LexisNexis dataset includes a combination of all adult patients discharged from a UH facility over a 2.5-year time frame and all adult patients who are members of an Accountable Care Organization. 30 This dataset is composed of a combination of multiple public and private records that are updated at different frequencies. The data obtained referred to the most current records available. --- Outcomes The co-primary endpoints were diagnosis and time-to-event for 2-year MACE following the cancer diagnosis. The MACE included in this study were heart failure (HF), acute coronary syndrome (ACS), atrial fibrillation (A-fib), and ischemic stroke (IS), determined using ICD-9 and ICD-10 codes obtained from the entire history present in each patient's EMR (supplemental eTable 1). 31
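As a concrete illustration of this outcome definition, the following is a minimal sketch, not the study's Caisis/EMERSE pipeline, of flagging 2-year MACE from an ICD-coded diagnosis table. The table layout (patient_id, icd_code, dx_date) and the four ICD-10 prefixes shown are illustrative stand-ins for the full code lists in supplemental eTable 1.

```python
# Minimal sketch: flag 2-year MACE after the cancer diagnosis date.
# File names, column names, and the code prefixes below are hypothetical.
import pandas as pd

MACE_PREFIXES = ("I50", "I21", "I48", "I63")  # HF, acute MI (ACS), A-fib, ischemic stroke

dx = pd.read_csv("diagnoses.csv", parse_dates=["dx_date"])          # patient_id, icd_code, dx_date
cohort = pd.read_csv("cohort.csv", parse_dates=["cancer_dx_date"])  # patient_id, cancer_dx_date

# Keep only diagnoses whose ICD-10 code matches a MACE prefix.
events = dx[dx["icd_code"].str.startswith(MACE_PREFIXES)]

# Restrict to events occurring within 2 years after the cancer diagnosis.
merged = events.merge(cohort, on="patient_id")
within_2y = merged[
    (merged["dx_date"] > merged["cancer_dx_date"])
    & (merged["dx_date"] <= merged["cancer_dx_date"] + pd.DateOffset(years=2))
]

# Time-to-event would be the first qualifying event per patient.
first_event = within_2y.groupby("patient_id")["dx_date"].min()
cohort["mace_2y"] = cohort["patient_id"].isin(first_event.index)
print(f"crude 2-year MACE proportion: {cohort['mace_2y'].mean():.3f}")
```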
A-fib was included in our MACE definition because it is a commonly unaccounted event in patients with cancer. 32 --- Covariates Demographics, risk factors, tumor characteristics, and treatment data were obtained for all eligible patients. Demographic characteristics included data from the patient's EMR, such as age at diagnosis, race (white, Black, other), ethnicity (Hispanic, non-Hispanic), and payer (Medicaid, Medicare, private insurance, self-pay, other). Risk factors were extracted from the comorbidities list based on ICD codes that were present in the chart before the MACE diagnosis. These risk factors included EMR-based information, such as self-reported smoking status (yes, no, former, unknown), Charlson comorbidity index, and cardiovascular history/risk factors (yes, no). 33,34 Cardiovascular history/risk factors were considered positive if one or more of the following diagnoses were present in the patient's EMR: hyperlipidemia, cardiomyopathy, known coronary artery disease, previous myocardial infarction, carotid artery disease, previous transient ischemic attack/stroke, and/or chronic kidney disease (supplemental eTable 1). These factors were combined into a single variable due to the strong correlation between them, with the objective of generating a variable that characterized patients with high cardiovascular risk. 35 The number of cardiovascular history/risk factors was defined as the sum of diagnoses of each component of this variable for each patient. Tumor characteristics included EMR-based information regarding date of cancer diagnosis, hormone receptor status (estrogen receptor [ER], progesterone receptor [PR], and HER2), histologic type (ductal or lobular, not otherwise specified, other, or unknown), and TNM staging group (stage 0-IV). Treatment characteristics included appointment completion rates and use of single or combination therapy during a lifetime, such as radiation of the breast (right, left), chemotherapy, endocrine therapy, and immunotherapy. Specific medication groups included anthracyclines, PIK3CA/mTOR inhibitors, HER2-targeted agents, ER antagonists, luteinizing hormone-releasing hormone agonists, aromatase inhibitors, and newer therapies (supplemental eTable 1). --- Descriptive Analysis Data were stratified according to race/ethnicity (NHB, non-Hispanic white [NHW]) and were presented as absolute values and percentages for categorical variables and as medians and quartiles for continuous variables. The category "other" race was considered/included for general demographics but not for the racial comparison analyses. Pearson's chi-square test was used to compare categorical variables by race/ethnicity. Data distribution assumptions for continuous variables were confirmed using histograms and the Kolmogorov-Smirnov test. We then performed Student t tests for normally distributed factors, and nonparametric Kruskal-Wallis tests for nonnormal factors. --- Machine Learning Modeling The impact/weight of SDOH in 2-year MACE was determined via machine learning (ML) models (overall data with race as a variable [race-agnostic model] and race-specific data in NHW and NHB patients separately; supplemental eFigure 2). The ML approach was applied because of its superior performance compared with traditional regression models and its capacity to learn from data and to deal with multiple data structures. [36][37][38] Specifically, we used the tree-based Extreme Gradient Boosting (XGBoost) method from mlr3proba, an R package, for ML in survival analysis. 39
This method is widely used with clinical data and was selected because of its explainability, offering crucial perspectives into clinical decision-making. 38,40 In addition, XGBoost is notable for being 10 times quicker than other widely used solutions and for its capacity to handle sparse datasets and process hundreds of millions of instances/observations. 41 On tabular-style datasets with characteristics that are individually meaningful, tree-based models outperform deep learning, which is another ML option. 40 The data were chronologically partitioned as 60% for training, 20% for testing, and 20% for validation. 42 Feature selection was performed comparing the variables according to MACE (yes vs no) in the training set, selecting those with P<.30 (supplemental eTable 2). 43 The testing set was used for hyperparameter tuning (supplemental eTable 3) applying a 10 times 10-fold cross-validation with 100 iterations, prioritizing the concordance index (C-index). This approach avoids overfitting (when a model is too adapted to the peculiarities of a dataset) and allows the model to learn and improve based on multiple iterations, consequently increasing external validity. 44 Subsequently, the tuned model was applied in the validation set with 10 times 10-fold cross-validation. The ML performance was measured via the mean C-index (the most precise and appropriate technique for calculating prediction error) and its 95% confidence interval. 45 Variable importance scores for the predictors were obtained using SHapley Additive exPlanations (SHAP). 40,46 The SHAP score (SS) for each feature shows how the model prediction changes when that feature is taken into consideration, illustrating how that factor contributes to explaining the discrepancy between the average model prediction and the instance's actual prediction. 40,46 An ML prediction, f(x), is represented as a fixed base value plus the sum of SHAP values: f(x) = base value + sum(SHAP values). 46 Finally, the SDOH were ranked according to the SS from each model. --- MACE Risk The Fine-Gray method for Cox proportional hazards models was used to examine racial disparities in the risk of 2-year MACE accounting for the competing risk of all-cause mortality. The variables selected for the multivariable models were among those that received the top 15 feature importance scores from the ML model. Sensitivity analysis was performed using only the Fine-Gray modeling approach, selecting variables that achieved P<.15 in bivariate analyses and those deemed to have clinical importance by study investigators. Both approaches were presented side-by-side to ensure the robustness of the approach and clarity for the reader regarding conclusions drawn from the data. To account for the healthcare access domain, a subgroup sensitivity analysis was performed in patients with private insurance. Results were presented as subdistribution hazard ratios (SHRs) and 95% confidence intervals. --- Adversity/Unfavorable SDOH Conditions To determine and quantify the level of adverse SDOH (according to the ML model's ranking), adversity markers (the point at which the variable becomes associated with a positive prediction of MACE) were defined via partial dependence plots (PDPs). 47 PDPs show the change in the average prediction for the outcome as a specified feature varies and also demonstrate what the relationship between the target and the feature is. 47,48 We presented the racial stratification of SDOH using the PDP approach.
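To make the modeling-plus-explanation step concrete, below is a minimal sketch of fitting a gradient-boosted Cox model and ranking features by mean absolute SHAP value. It is not the authors' R/mlr3proba pipeline: it substitutes Python's xgboost and shap packages, omits the feature selection and the 10 times 10-fold tuning described above, and its file and column names are hypothetical.

```python
# Sketch only: gradient-boosted Cox model plus SHAP-based feature ranking.
import numpy as np
import pandas as pd
import xgboost as xgb
import shap

df = pd.read_csv("cohort.csv")  # hypothetical deidentified extract
features = ["age_at_dx", "n_cv_risk_factors", "median_household_income",
            "crime_index", "burglary_index", "n_transport_properties"]
X = df[features]

# XGBoost's survival:cox objective encodes right-censoring in the label's
# sign: positive = event time (MACE observed), negative = censoring time.
y = np.where(df["mace_2y"] == 1, df["days_to_event"], -df["days_to_event"])

model = xgb.XGBRegressor(objective="survival:cox", n_estimators=300,
                         max_depth=3, learning_rate=0.05)
model.fit(X, y)

# SHAP decomposes each prediction as base value + sum of per-feature
# contributions; averaging |SHAP| over patients gives an importance ranking
# analogous to the per-model SS ordering reported in the results.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
ranking = pd.Series(np.abs(shap_values).mean(axis=0), index=features)
print(ranking.sort_values(ascending=False))
```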
--- Statistical Considerations Independent variable correlations were checked by correlation plots, and the variables found to be statistically significantly correlated were not included simultaneously. P<.05 was considered significant in the final models, and missing values were not included in the analysis. All analyses were performed using RStudio software. We used the STROBE cohort checklist to assess and report outcomes. 49 --- Results --- Population We included 4,309 patients with breast cancer, of whom 765 were NHB females. The cohort's median age at diagnosis was 63 years (IQR, 53-72 years); 49.2% of the diagnoses were ductal carcinoma, 5.7% were stage III, and 1.9% were stage IV. ER positivity was present in 44.9% of the cases, PR positivity in 40.2%, and HER2 positivity in 6.8%. Most of the patients were never-smokers (50.6%) and had cardiovascular history/risk factors (74.6%), with a median Charlson comorbidity score of 4 (IQR, 2-7). Surgery was performed in 60% of the cohort, whereas 28.2% received chemotherapy, 46% received endocrine therapy, 4.7% received immunotherapy, and 39.4% received radiotherapy. The catchment area comprises a 17.4% Black/African American population and a 94.4% non-Hispanic population, and 91.5% of adults have health insurance. The prevalence of comorbidities in this region is 32.4% for high cholesterol, 35% for hypertension, and 12.5% for diabetes. Median household income is $62,780; there is an average of 2.3 persons per household; the crime rate is 586.1 crimes per 100,000 persons; 31.4% of people aged >65 years live alone; 29.8% of individuals live in a single-parent household; and 9.5% of households do not have a vehicle. Two-year MACE was diagnosed in 11.4% of patients, with a median time-to-event of 177 days (IQR, 45-414 days). HF was the most frequently diagnosed event (6.9%), followed by A-fib (3.7%), IS (2.4%), and ACS (2.3%). NHB patients, when compared with NHW patients, had higher rates of MACE (19.2% vs 9.9%), HF (13.1% vs 5.5%), and ACS (4.8% vs 1.7%) (all P<.001). No racial differences were noted in time-to-event. Race-stratified descriptions are summarized in Table 1 and supplemental eTables 4 and 5. --- Predictors of MACE and the Impact of SDOH With an excellent performance (C-index, 0.79; 95% CI, 0.78-0.80; supplemental eTable 3), the race-agnostic model classified the number of cardiovascular history/risk factors (SS = 0.59), age at diagnosis (SS = 0.26), previous cardiomyopathy (SS = 0.17), time to surgery (SS = 0.11), and neighborhood median household income (SS = 0.07) as the top 5 important variables for predicting 2-year MACE (supplemental eFigure 3, supplemental eTable 6); Black race ranked as the eighth most important variable (SS = 0.05). Among the SDOH variables, however, the 5 most important variables for predicting 2-year MACE were neighborhood median household income (SS = 0.07), neighborhood crime index (SS = 0.06), number of transportation properties in the household (SS = 0.05), neighborhood burglary index (SS = 0.04), and neighborhood median home values (SS = 0.03) (supplemental eTable 7). The NHB-specific model achieved a fair performance (C-index, 0.66; 95% CI, 0.63-0.69; supplemental eTable 3), and classified the total number of cardiovascular history/risk factors (SS = 1.10), neighborhood median household income (SS = 0.62), age at diagnosis (SS = 0.54), time to surgery (SS = 0.53), and time to chemotherapy (SS = 0.53) as the top 5 important variables (supplemental eTable 6).
The model developed for NHW patients achieved an excellent performance (C-index, 0.78; 95% CI, 0.76-0.79; supplemental eTable 3). Among the important variables, the top 5 were total number of cardiovascular history/risk factors (SS = 0.30), age at diagnosis (SS = 0.15), previous cardiomyopathy (SS = 0.07), neighborhood crime index (SS = 0.06), and time to surgery (SS = 0.05) (supplemental eTable 6). --- MACE Risk Including SDOH as adjustments and ethnicity/race as a covariate, Fine-Gray competing risk models showed no racial differences in the risk of 2-year MACE (adjusted SHR [aSHR] for NHB, 1.22; 95% CI, 0.91-1.64), HF (1.45; 95% CI, 0.96-2.17), ACS (1.75; 95% CI, 0.91-3.38), IS (0.98; 95% CI, 0.50-1.90), and A-fib (0.72; 95% CI, 0.39-1.30). The traditional modeling approach and subgroup analysis in patients with private insurance showed similar results (supplemental eTable 8). --- Adversity Using the PDP methodology, neighborhood median household income <$60,000, neighborhood crime index >160, <2 transportation properties in the household, neighborhood burglary index >160, and neighborhood median home value <$400,000 were considered adversity cutoffs (higher association with the prediction of 2-year MACE) for the top 5 SDOH variables from the ML model (Table 2, supplemental eFigures 4 and 5). Applying the adversity cutoffs, NHB patients were more likely to be in adversity (P<.05) in 80% of the top 10 SDOH variables from the ML predictive model (Table 2). --- Discussion This is the first study to demonstrate and rank the impact of individual-level SDOH factors from different domains in the development of MACE among patients with breast cancer using race-agnostic and race-specific ML models. We demonstrated that neighborhood and built environment variables were the most important SDOH for predicting 2-year MACE. Thus, racial differences noted in CVD outcomes in women with breast cancer may be explained by adverse SDOH, as shown by this single-institution cohort. Policies and efforts focused on increasing equity may be able to reduce the burden of CVD in women with breast cancer. In addition to our main findings, number of cardiovascular history/risk factors, age at diagnosis, time to treatment, previous cardiomyopathy, and appointment completion rates were important predictors of MACE. Examining racial disparities, NHB females with breast cancer, when compared with their NHW counterparts, were diagnosed at later stages, received higher rates of treatment, had lower rates of appointment completion, lived in poorer SDOH conditions, and had higher rates of 2-year MACE, without racial differences in time-to-event. Moreover, NHB females with breast cancer were shown to be in adversity conditions in 8 of the 10 most important SDOH variables for predicting MACE. NHB females have a higher incidence of triple-negative breast cancer, known as the most aggressive subtype, and are diagnosed at later stages. 50,51 It is also known that risk factors and modifiable social behaviors (eg, smoking, alcoholism, sedentarism) are higher in NHB females when compared with NHW females, and this may be explained by lifestyle, income differences, and environmental factors. [52][53][54] Moreover, NHB females have a lower probability of receiving the most suitable treatment and care approach compared with their NHW counterparts. 22,50,55,56 Our results reinforce these reports, with the higher treatment rates in NHB females likely a result of diagnosis occurring at more aggressive stages in this population.
A combination of factors contributes to the racial disparity in the incidence of MACE. Black patients are estimated to have one of the highest rates of hypertension in the world, with hyperaldosteronism having a significant correlation with cardiovascular risk factors. 53,57,58 This population is more likely to have symptoms of and functional impairment from ACS, which can lead to a bias in diagnosis. 59,60 Later breast cancer stages at diagnosis also play a role, with recent studies showing a significant correlation with A-fib. 61 Different rates of breast cancer treatment are another important factor, because chemotherapy, immunotherapy, endocrine therapy agents, and radiotherapy are known to cause a variety of MACE. 4,62 Age at diagnosis is also a key factor corroborated by our study, as younger age at cancer diagnosis is linked with higher CVD risk. 63 This association may be due to an early exposure to risk factors for cancer and CVD, both of which are strongly correlated with social conditions. [64][65][66] Racism is a central topic when analyzing racial disparities. 67 It is defined as "an organized system premised on the categorization and ranking of racial/ethnic groups into social hierarchies, assigned differential values and access to power, opportunities, and resources, resulting in disadvantage." [68][69][70][71][72] The existence of racism is due to a historical factor, determined by slavery, which began in the United States in the 17th century, and the attempt to classify Black individuals as an inferior race subject to inferior rights and opportunities. 73 There are different forms of racism, and many directly and indirectly affect health, with studies already showing direct links between the self-reported experience of personally mediated racism and negative physical and mental health outcomes. 68,74 Examples of this are the reports of inequities in factors such as income, education, employment, and living standards, in concordance with our findings, which have an impact on living environments and exposure to risk and protective factors. [68][69][70]75,76 SDOH play a central role in elucidating some of the mechanisms underlying racial disparities. 77 Factors such as poverty, cultural and social injustice, overall lower income, and education level, mediated by structural racism, influence conditions such as lifestyle and healthcare access. 50,78,79 These poor conditions that lead to limited access to healthcare are demonstrated by our findings of lower rates of appointment completion in NHB patients, despite the higher treatment rates in this population. Regarding cardiovascular health, a recent study showed that the addition of SDOH parameters improved the prognostic utility of prediction models in Black patients with HF. 36 SDOH and structural factors were reported to be significant drivers of the racial/ethnic disparities seen in coronary artery disease and stroke. 80 In dilated cardiomyopathy, the interplay of social and economic factors was identified as a driver of the poor outcomes in Black patients. 81 Moreover, a recent review called for the need to identify the biological markers associated with SDOH that predict CVD and to develop personalized interventions for patients at highest risk. 65 In agreement, the results from our study clearly show the social construct of race, because the addition of SDOH as covariates equalized the previously reported racial difference in MACE risk.
11,12 Thus, the biological concept of race has a small role in the higher risk of MACE in NHB patients. 82 Taking this all into consideration, it is clear that interventions and programs focused on improving healthcare quality and safety should address the drivers of disparities in health outcomes both within and outside healthcare systems to make these programs more effective. 83 Our results demonstrate an important role of neighborhood and built environment in the prediction of MACE, especially in NHB females. Historically, places with a large Black population were segregated as a result of social divestment in local infrastructure, perpetuating a disadvantage for this population. 65,84,85 This segregation has been associated with higher levels of neighborhood violence, crime, and poverty as well as reduced work possibilities, economic stability, and access to a high-quality education. 65,86 Reports demonstrate that Black individuals living in areas with increased segregation have a 12% higher risk of CVD. 87 Annual income, related to economic stability, also played an important role in our model, supporting findings that show an association between low socioeconomic status and atherogenesis and a proinflammatory state. 72,88,89 This study has several limitations. Our institutional database is EMR-based, and some of the information in the EMRs may be incomplete. Additionally, due to the retrospective nature of this study, some variables were not available (eg, use of cardiovascular medications prior to the breast cancer diagnosis). Because this was a single-institution study, some patients may have been lost to follow-up or sought emergency care at other institutions, but because the institution is an oncology center, it maintains close follow-up with patients. The criteria for data availability in LexisNexis may have generated a selection bias. The results presented may be reflective of the catchment area and its characteristics, and the sample may be representative of individuals with greater healthcare-seeking behavior. The inclusion of patients with both curable and incurable breast cancer may have impacted the reported rates of MACE. The definition of MACE did not include a wider range of conditions (eg, arrhythmias other than A-fib, valvular heart disease, cerebrovascular conditions). On the other hand, the integration of disparate sources, including individual-level SDOH, allowed access to detailed information on the patients' longitudinal trajectory not commonly available in other datasets. The inclusion of census data for the catchment area allowed interpretation of the representativeness of the study results, considering that no dataset is fully representative. These facts, added to an ML approach improved through multiple iterations and validations, with the ability to deal with missingness, provided robust reliability for our results. At the patient level, the key clinical and practical implications of this study are the need for proactive screening and management of cardiovascular risk factors and CVD in patients at high risk of MACE who are being considered for anticancer treatments. For the screening phase, allostatic load, a measure of chronic stress that accounts for SDOH factors, has been shown to be an effective marker of CVD risk in patients with cancer. 90 Moreover, at a population level, our results reinforce the increased need for cardio-oncology services, especially in specific (underserved) populations.
--- Conclusions Our findings showed that neighborhood and built environment played an important role in the development of 2-year MACE in patients with breast cancer and showed that NHB females in our study live in unfavorable SDOH conditions. Race increasingly needs to be understood as a social construct, and public health policies must focus on equity to mitigate the effects of racial disparities in health outcomes, including cardiovascular outcomes. Future studies should focus on increasing and diversifying the covariates analyzed, using multicenter designs, using national cohorts of data, developing specific models for each CVD/MACE, and examining the geographical variation of SDOH within different countries and regions in order to provide personalized care according to specific local needs.
[Flattened table: candidate model variables (treatment combinations, receptor status, and household/neighborhood SDOH indices) with Yes/No selection flags; the row-by-row structure is not recoverable here. See supplemental eTable 2.]
--- Data availability statement: The University Hospitals (UH) Seidman Cancer Center database is available at UH Cleveland Medical Center, and access is restricted to researchers who have approval from the Institutional Review Board. Disclosures: A. Pandey has disclosed receiving grant/research support from Gilead Sciences, Applied Therapeutics, and HeartSciences; serving as a principal investigator for Applied Therapeutics, Gilead Sciences, and SC Pharmaceuticals; serving on an advisory board for Roche Diagnostics, Lilly USA, Bayer, and Cytokinetics; and serving as a consultant for Tricog Health Inc., Rivus, Emmi Solutions, Axon Therapies, Sarfez Pharmaceuticals, Science 37, Alleviant Medical, Palomarin Inc., and Pieces Technologies. The remaining authors have disclosed that they have not received any financial considerations from any person or organization to support the preparation, analysis, results, or discussion of this article. Disclaimer: The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. None of the funders had any role in the conduct of the study; in the collection, management, analysis, or interpretation of the data; or in the preparation, review, or approval of the manuscript. Correspondence: Nickolas Stabellini, BS, University Hospitals Seidman Cancer Center, Department of Hematology-Oncology, Breen Pavilion, 11100 Euclid Avenue, Cleveland, OH 44106. Email: [email protected]
In order to further improve the effectiveness of environmental pollution control and the quality of the atmospheric ecological environment, this article discusses regional environmental pollution control from the perspective of sociological theories and methods. Therefore, the article starts with the characteristics of environmental air pollution, applies linear regression analysis and principal component analysis under the PSR model, focuses on the impact factors of environmental pollution, and concludes that the weights of the pressure layer, state layer, and response layer for the impact on the environmental state are 0.4824, 0.261, and 0.1207, respectively. On this basis, from the perspectives of society, collaborative governance, and public management, this article focuses on policy measures for environmental pollution control.
This article has been retracted by Hindawi following an investigation undertaken by the publisher [1]. This investigation has uncovered evidence of one or more of the following indicators of systematic manipulation of the publication process: (1) Discrepancies in scope (2) Discrepancies in the description of the research reported (3) Discrepancies between the availability of data and the research described (4) Inappropriate citations (5) Incoherent, meaningless and/or irrelevant content included in the article (6) Peer-review manipulation The presence of these indicators undermines our confidence in the integrity of the article's content and we cannot, therefore, vouch for its reliability. Please note that this notice is intended solely to alert readers that the content of this article is unreliable. We have not investigated whether authors were aware of or involved in the systematic manipulation of the publication process. Wiley and Hindawi regret that the usual quality checks did not identify these issues before publication and have since put additional measures in place to safeguard research integrity. We wish to credit our own Research Integrity and Research Publishing teams and anonymous and named external researchers and research integrity experts for contributing to this investigation. The corresponding author, as the representative of all authors, has been given the opportunity to register their agreement or disagreement to this retraction. We have kept a record of any response received. --- Introduction At this stage, environmental pollution is an important topic of global research, and it is also a hot topic of common concern all over the world. The development of the world economy has brought about increasingly serious air pollution problems (as shown in Figure 1). Climate change and other hazards induced by environmental pollution have seriously affected public health and have gradually become an important obstacle to building a new face of modern social governance. Environmental pollution control work has also become the focus of the construction of modern governance systems of governments at all levels. Some local governments have carried out a series of air pollution control measures in combination with national laws and regulations and the decisions of the central government, have taken a series of measures in line with the actual situation of regional environmental pollution, and have achieved certain results. However, on the whole, the long-term social project of environmental pollution control requires sustained commitment. This study starts with the characteristics and impact factors of environmental pollution to explore long-term pollution control measures [1]. --- Literature Review Facing the difficult situation of air pollution, the government has also increased research on countermeasures for air pollution control. The government should base itself on the law, improve enforcement, adopt governance methods independent of the government, establish professional institutions, and punish violations in air pollution control and emission reduction. After recognizing that the harm of air pollution is dispersive, that government legislation lags behind, and that the means of implementation are limited, it is difficult to effectively control air pollution only through legislative and other imperative policy tools. Howse et al.
proposed reducing pollution sources in legal and administrative ways, encouraging emission reduction with market-oriented economic tools, and promoting the development of atmospheric governance and the environmental protection industry. They also proposed focusing on promoting the use of environmental protection and low-consumption facilities and equipment, making full use of participatory policy tools and multichannel expert and citizen participation, increasing the number of governance participants, and announcing and supervising governance effects, among 10 air pollution governance issues. It can be seen that foreign scholars have studied participation in air pollution control through policy tools since the middle of the twentieth century, especially the full use of three types of policy tools (command control, social participation, and economic incentive) to carry out pollution control research [2]. Liu et al. and others believed that "the cross-regional joint governance of atmospheric environmental pollution should be implemented by adopting auxiliary green policies and measures during legislation," reflecting the combined use of policy tools [3]. Luo et al., on the evaluation of government governance, believed that "the evaluation process of government governance policies is not limited to static discussion of existing government governance measures, but is also combined with the improvement direction of government governance measures" [4]. Rodríguez et al. believed that in terms of air pollution control, while improving and integrating the implementation system of governance tools under the mode of government regulation in different regions, the particularity of air pollution control in different regions should be effectively enhanced, and the implementation quality of air governance in different regions should be improved based on economic assessment means [5]. Sun and Wang, while focusing on the main policy tools of western countries to control fixed point-source air pollution, focused on the analysis of China's policies to control fixed point-source air pollution in the middle and lower reaches of the Yangtze River and their effects. On the basis of comparing the characteristics and applicable conditions of various types of policy tools, this article puts forward adjustment and optimization methods for our government's future policy choices [6]. Anh et al. proposed that "we should make full use of the mandatory role of government public power, give play to the mix of market economic means, stimulate the voluntary participation of different social organizations and the public, and solve the problem of the failure and deviation of government regulation in air pollution control as a public policy problem" [7]. --- Analysis of Environmental Pollution Influencing Factors --- Impact Analysis of Meteorological Factors on the Air Quality Index. Taking a province as the research object, we plan to use meteorological factors such as average temperature (°C), relative humidity (%), precipitation (mm), and wind speed (m/s) to analyze the correlation of the monthly average AQI data of cities and prefectures in a province. In this section, the min-max normalization method, suitable for small data scenarios, is selected for processing. Data standardization refers to scaling factors of different orders of magnitude and units according to a certain proportion so that their change interval is transformed into [0, 1].
The standardized data value eliminates the problem of unit and value size and is conducive to the analysis of different indicators. The standardization formula is as follows: X' = (X - X_min)/(X_max - X_min). (1) In the above formula, X' is a normalized vector with an interval of [0, 1], in dimensionless pure numerical form; X is the original data; and X_max, X_min are the maximum and minimum values of the original data, respectively. As shown in Figure 2, by analyzing the change trend between the standardized monthly average of AQI and the monthly average of temperature in cities and prefectures of a province, it can be seen that during periods of high average temperature, the atmospheric environment is prone to strong convective weather, which accelerates the dilution and transmission of pollutants by the atmosphere. Therefore, as the average temperature rises, the AQI value shows a certain downward trend, and there is a strong negative correlation between the two. As shown in Figure 3, by analyzing the change trend between the standardized monthly average of AQI and the monthly average of precipitation in cities and prefectures of a province, it can be seen that in seasons of high rainfall, pollutants in the atmosphere are purified and diluted, and the greater the rainfall, the more significant the purification and dilution effect [8,9]. Therefore, when the average precipitation rises, the corresponding AQI value shows a certain downward trend, indicating a strong negative correlation between the two. As shown in Figure 4, the change trend between the standardized monthly mean value of AQI and the monthly mean value of relative humidity in cities and prefectures of a province shows that in seasons with high relative humidity, pollutants do not diffuse easily, and a large number of pollutants remain suspended in the air. Therefore, when the relative humidity rises, the corresponding AQI value also shows a certain upward trend, indicating a certain positive correlation between the two [10]. As shown in Figure 5, the change trend between the standardized monthly mean value of AQI and the monthly mean value of the average wind speed in cities and prefectures of a province shows that a certain wind speed is conducive to the diffusion of pollutant concentrations. When the average wind speed rises, the corresponding AQI value shows a certain downward trend, indicating a certain negative correlation between the two. After comparing the change trend charts of the AQI monthly mean with conventional meteorological factors (average temperature, precipitation, relative humidity, and average wind speed), it is found that the level of the AQI monthly mean is related to changes in conventional meteorological factors to a certain extent. Based on this logical relationship, a preliminary inference is made: the AQI monthly mean is correlated with conventional meteorological factors. Using the bivariate analysis function in SPSS software, the correlation between the AQI monthly mean and each conventional meteorological factor is tested (see Table 1 for the results) so as to fit the optimal curve equation [11,12]. The main calculation results and conclusions are as follows: (1) AQI monthly mean and average temperature show a significant negative correlation at the 0.01 level (2-tailed), and the correlation coefficient is -0.755.
(2) AQI monthly mean is significantly negatively correlated with precipitation at the 0.01 level (2-tailed), and the correlation coefficient is -0.780. (3) AQI monthly mean is significantly positively correlated with relative humidity at the 0.01 level (2-tailed), and the correlation coefficient is 0.726. (4) The monthly average of AQI is significantly negatively correlated with the average wind speed at the 0.01 level (2-tailed), and the correlation coefficient is -0.791. (5) The correlation coefficients between the AQI monthly mean and conventional meteorological factors are ordered as follows: average wind speed > precipitation > average temperature > relative humidity. --- Linear Regression Analysis --- Linear Regression Prediction. If the scatter plot of the data roughly shows a linear distribution, the equation of the prediction model is as follows: Y = a0 + a1X. (2) --- Curve Regression Prediction. If the scatter plot of the data shows a certain curve change law, the equation types of the prediction model are mainly as follows: Logarithmic model: Y = a0 + a1 ln X. (3) Quadratic model: Y = a0 + a1X + a2X^2. (4) Cubic model: Y = a0 + a1X + a2X^2 + a3X^3. (5) Logistic model: Y = 1/(1/u + a0·a1^X). (6) Exponential (index) model: Y = a0·e^(a1X). (7) --- Multiple Linear Regression Prediction. If there is more than one factor affecting the prediction index, the multiple regression equation should be used for data analysis. The equation of the prediction model is as follows: Y = a0 + a1X1 + a2X2 + a3X3 + ... + anXn. (8) --- Data Processing and Result Analysis. This section evaluates the correlation between the overall air environmental quality (AQI) of a province and the standardized meteorological data and fits the corresponding linear regression equation using Pearson's correlation analysis, combined with the monthly routine meteorological factor data (average temperature, precipitation, relative humidity, and average wind speed) of cities and prefectures in a province from 2015 to 2017. Usually, the Pearson correlation coefficient between two continuous variables (X, Y) is defined as the covariance of the variables divided by the product of their standard deviations. The specific form is as follows: ρ_xy = cov(x, y)/(σ_x·σ_y) = E[(x - μ_x)(y - μ_y)]/(σ_x·σ_y), (9) where ρ_xy represents the correlation coefficient of the variables and cov(x, y) represents their covariance. The sample correlation coefficient is defined by replacing the population covariance and standard deviations with their sample counterparts, in the following form: r = Σ_{i=1}^{n}(x_i - x̄)(y_i - ȳ) / sqrt(Σ_{i=1}^{n}(x_i - x̄)^2 · Σ_{i=1}^{n}(y_i - ȳ)^2), (10) where r is Pearson's correlation coefficient; x, y are the data variables; x̄, ȳ are the mean values of x and y; and x_i, y_i are the i-th observations of x and y. When r > 0, the two continuous variables are positively correlated; when r < 0, they are negatively correlated. The larger the absolute value of the correlation coefficient, the stronger the correlation between the variables. Since the Pearson correlation coefficient takes values in [-1, 1], r = 1 indicates perfect positive correlation, r = -1 indicates perfect negative correlation, and r = 0 indicates no correlation between the variables.
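As a small, self-contained illustration of equations (1) and (8)-(10), the sketch below min-max normalizes twelve monthly values, reports the Pearson correlation of AQI with each meteorological factor, and fits a four-factor linear model. The data are invented stand-ins, since the province's 2015-2017 monitoring data are not reproduced here.

```python
# Illustrative only: twelve invented monthly values standing in for the
# provincial monitoring data used in the paper.
import pandas as pd
from scipy.stats import pearsonr
from sklearn.linear_model import LinearRegression

df = pd.DataFrame({
    "aqi":      [95, 88, 72, 60, 55, 58, 49, 52, 66, 80, 92, 101],
    "temp":     [3, 5, 11, 16, 21, 24, 27, 26, 21, 15, 9, 4],            # °C
    "precip":   [20, 25, 60, 90, 120, 180, 210, 190, 110, 60, 30, 18],   # mm
    "humidity": [70, 68, 65, 63, 66, 74, 78, 77, 72, 70, 69, 71],        # %
    "wind":     [2.1, 2.4, 2.8, 3.0, 2.9, 2.6, 2.5, 2.4, 2.3, 2.2, 2.0, 1.9],  # m/s
})

norm = (df - df.min()) / (df.max() - df.min())  # equation (1)

for col in ["temp", "precip", "humidity", "wind"]:
    r, p = pearsonr(norm["aqi"], norm[col])     # equations (9)-(10)
    print(f"{col:8s} r = {r:+.3f}  p = {p:.4f}")

# Four-factor linear model of the form in equation (8).
reg = LinearRegression().fit(norm[["temp", "precip", "humidity", "wind"]], df["aqi"])
print("intercept:", round(reg.intercept_, 3))
print("coefficients:", dict(zip(["temp", "precip", "humidity", "wind"], reg.coef_.round(3))))
```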
There is a significant correlation between the AQI monthly mean and average temperature, precipitation, relative humidity, and average wind speed. In order to explore the overall relationship between the AQI value and conventional meteorological factors, we can analyze it by establishing a multiple linear regression equation, assuming that the multiple equation is as follows: C = z0 + z1T + z2W + z3P + z4V, (11) where C represents the AQI value, z0 is the constant term, z1, z2, z3, z4 are the partial regression coefficients, and T, W, P, V represent the values of average temperature, precipitation, relative humidity, and average wind speed, respectively. Table 2 shows the overall fit of the established Model 4, with a complex correlation coefficient (R) of 0.861, a goodness of fit (R^2) of 0.741, and a Durbin-Watson test statistic of 0.741, indicating that the residuals are independent [13][14][15]. As shown in Table 3, the observed value of the F statistic in the model is 13.594, and the probability sig (P value) is <0.01. At a significance level of 0.05, it can be considered that C (AQI monthly mean) is linearly correlated with T (average temperature), W (precipitation), P (relative humidity), and V (average wind speed). Table 4 shows the parameters of the model, such as the partial regression coefficients (B), standard errors (std. error), constant, standardized partial regression coefficients (beta), t-statistic observations for the regression coefficient tests, and the corresponding probability P values (sig). Based on this multiple model, the multiple linear regression equation between meteorological factors and the AQI monthly mean can be obtained as follows: C = 103.406 - 8.537T - 13.381W + 7.614P - 32.463V. (12) Through the established linear regression equation (12), it can be seen that the monthly mean change of AQI in cities and prefectures of a province is significantly affected by meteorological factors, and the influence of each meteorological factor on AQI is ordered as follows: average wind speed > precipitation > average temperature > relative humidity. --- PSR Model Construction Analysis. The index system under the PSR model is shown in Table 5. In order to minimize the influence of subjective factors, this article uses principal component analysis to determine the weights. In this section, principal components are extracted using a cumulative variance contribution rate of 85% as the standard, which ensures that the amount of information contained in the principal components accounts for more than 85% of the original data, so that the amount of information lost is less than 15% [16]. In this article, the original data and the standardized data are denoted by P1-P10 and ZP1-ZP10, respectively, where ZP1-ZP3 are the pressure layer index data, ZP4-ZP8 are the state layer index data, and ZP9-ZP10 are the response layer index data. Before using SPSS software for correlation analysis, the KMO test and Bartlett's sphericity test were performed on the data. The test results are shown in Table 6. The size of the KMO test value represents the strength of the correlation between the principal components, and its value ranges between 0 and 1, while the size of the Bartlett's sphericity test value represents the strength of the independence between the variables.
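A minimal sketch of how these checks and the subsequent 85% component extraction might be run is shown below (the discussion of the test thresholds continues after the sketch). It assumes the factor_analyzer and scikit-learn Python packages and a hypothetical file psr_indicators.csv holding the P1-P10 indicator values for each city, since the paper's data are not available.

```python
# Sketch: KMO and Bartlett's sphericity checks, then PCA keeping enough
# components for >= 85% cumulative variance. The input file is hypothetical.
import pandas as pd
from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity, calculate_kmo
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

indicators = pd.read_csv("psr_indicators.csv")  # columns P1..P10, one row per city

kmo_per_variable, kmo_total = calculate_kmo(indicators)
chi2, bartlett_p = calculate_bartlett_sphericity(indicators)
print(f"KMO = {kmo_total:.3f} (suitable if > 0.5), "
      f"Bartlett p = {bartlett_p:.4f} (suitable if < 0.05)")

# Standardize the indicators (the ZP1-ZP10 step), then extract components.
z = StandardScaler().fit_transform(indicators)
pca = PCA(n_components=0.85)  # keep components until >= 85% variance is explained
scores = pca.fit_transform(z)
print("components kept:", pca.n_components_)
print("cumulative variance explained:", pca.explained_variance_ratio_.sum().round(4))
```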
When the test results show that the KMO test coefficient is >0.5 and the significance probability (P value) of Bartlett's test is <0.05, it indicates that the correlation between the principal components is strong and the selected dataset is suitable for principal component analysis, so the constructed PSR model has complete structural validity for the next step of factor evaluation and analysis. The KMO test statistic is as follows: KMO = ΣΣ r_ij^2 / (ΣΣ r_ij^2 + ΣΣ p_ij^2). (13) As shown in Table 7, taking the critical value 1 of the initial eigenvalue as the standard, the initial values of the required principal components are all greater than 1. Through SPSS software calculation, a total of 3 principal components can be selected, identified as Z1, Z2, and Z3, respectively, with a cumulative variance contribution of 86.416%. According to the numerical relationship between each index and the three principal components in the component score matrix (Table 8), the expressions of the three principal components can be obtained (14). The corresponding index data of the cities in a province are substituted into the expressions to calculate the ranking of cities under each index benchmark and the ranking of the 21 cities in a province in the pressure layer, state layer, response layer, and comprehensive index layer of the model. The higher the ranking of the pressure layer, the greater the environmental pressure faced; the higher the ranking of the state layer, the worse the atmospheric environmental quality; the higher the ranking of the response layer, the more sufficient the response measures taken by government departments; the comprehensive index layer represents the comprehensive evaluation of overall performance [17]. --- Environmental Pollution Remediation Measures from the Perspective of Social Collaborative Governance A total of 300 copies of the questionnaire were distributed. The questionnaire was conducted anonymously, and the effective response rates were 92%, 87%, and 93%, respectively. --- Insufficient Horizontal Coordination between Government Departments. Although L City has issued the work responsibilities of the various municipal departments in the field of environmental protection, the specific contents of the responsibilities are not detailed enough. More importantly, they have been expanded and extended in the field of environmental protection from the perspective of the departments' three fixed plans. However, actual work tasks are intertwined with many problems and aspects; work often involves multiple departments and requires their coordinated efforts. In this micro-level practical operation, departmental responsibilities are not clear. According to the questionnaire for government staff, 46% of the respondents believed that the clarity of responsibilities was "average," 16% believed that the clarity of responsibilities was "unclear," and 38% believed that the clarity of responsibilities was "clear" or "relatively clear" (Figure 6). --- The Synergy between the Government and the Public Is Poor. At present, public participation in air pollution control in L City is limited to letter-and-visit participation through telephone, network, and other channels. L City has not yet established other diversified public participation systems, and public participation remains at an initial stage.
The questionnaire to the public showed that 91% of the respondents believed that the government's dominance in current air pollution control was "very high," 18% of the respondents believed that the public's participation in air pollution control was "poor," and 74% of the respondents were "unclear" about the public's participation in air pollution control, as shown in Figures 7 and 8. The effectiveness of public participation is insufficient. Air pollution control is a systematic project that requires the active cooperation of various subjects. In fact, although the public has a strong intention to participate and abhors polluting behaviors, they often fall into a situation where they are willing but unable [18]. Public participation is mostly individual, generally acting alone and lacking organization; in practice, participation cannot effectively form a joint force, which weakens participation capacity and reduces participation efficiency. Moreover, public participation in air pollution control comes at the end stage: only when pollution occurs and produces obvious sensory discomfort, such as in vision and smell, does the public intervene, and such participation is too passive and lagging. --- Low Enthusiasm of Enterprises to Participate in Governance. The main source of air pollutants is the production and operation activities of enterprises. Enterprises are important producers of pollution and must bear the main responsibility in air pollution control. On the one hand, there is an obvious lack of awareness of the social responsibility of enterprises to control pollution, and the lack of a governance concept is relatively serious, although the state has legislated and issued policies to strengthen the main responsibility of enterprises and has taken a series of measures to increase the cost of illegal behavior for enterprises and effectively pursue responsibility for illegal emissions. However, enterprise production inherently prioritizes reducing costs and improving the return on capital, and the phenomenon of unwillingness to consciously perform, or even to evade, their own environmental and social responsibilities is still widespread. On the other hand, as profit-making organizations, enterprises pursue the maximization of economic interests and give priority to economic interests over social benefits. Participating in air pollution control inevitably increases enterprise operating costs and reduces profits, and may even cause losses. At the same time, due to the public and nonexclusive nature of the atmospheric environment, some enterprises have a strong "free rider" mentality. Out of consideration for reducing treatment costs and increasing profits, enterprises do not have strong initiative or enthusiasm to carry out air pollution control. --- Countermeasures to Improve the Effectiveness of Environmental Pollution and Social Collaborative Governance --- Clarify the Relevant Responsibilities of Various Departments within the Government.
On the basis of establishing a highly authoritative air pollution control organization headed by the main responsible comrades of the Party committee and the government, we should strengthen field research; fully study the specific operability, scientificity, and feasibility of control plans; further clarify the work content and scope of authority of the relevant government departments; promote the meticulous and refined implementation of responsibilities; and try to avoid unclear rights and responsibilities [19]. We should establish whole-process awareness, respond in time to different problems at different work levels, coordinate and communicate as soon as possible, enhance the predictability of work, change work methods, turn passivity into initiative, and build a new, efficient comprehensive management system. We will further promote the clarification of the rights and responsibilities of horizontal government departments and clearly determine the responsibilities and obligations of departments in air pollution control according to the different air pollution sources each department is responsible for. For example, the development and reform department is responsible for industrial access and resource consumption; the industry and information technology department is responsible for the elimination of backward production capacity; the housing and construction department is responsible for dust control at construction sites; the commerce department is responsible for the supervision of motor vehicles (ships) and oil products; the ecological environment department is responsible for pollution control of industrial enterprises; and the urban management and law enforcement department is responsible for road cleaning and catering-fume control, so as to form a list of departmental rights and responsibilities and reduce buck-passing between departments. At this moment of government institutional reform, combined with the current situation of unbalanced law enforcement power and uneven departmental capacity, it is necessary to break down the barriers between departmental law enforcement, form a comprehensive, cross-departmental law enforcement agency, realize the integration of law enforcement, and make the flow of information smoother. The joint force of law enforcement can make up for the defects of the current mode in which departments enforce the law alone, and exchanges and cooperation between departments should be expanded to promote collaborative governance through the unification and innovation of the law enforcement supervision mode. --- Effectively Use the Fault Tolerance and Error Correction Mechanism to Break the Dishwashing Effect. In air pollution control, we need to effectively use the fault-tolerance and error-correction mechanism, break the negative impact of the dishwashing effect on government officials, remove the ideological burden that hinders government officials from innovating their working methods and shouldering heavy burdens, encourage officials to be brave in taking on responsibilities, bold in innovation, and proactive, and to actively participate in air pollution control based on their own job responsibilities. Both result orientation and process orientation should be established.
Officials who work steadily and conscientiously in accordance with the established work deployment and responsibility objectives should be exempted from accountability even if mistakes occur, so as to form a good orientation and maintain officials' enthusiasm. However, attention must be paid to the scope of application of the fault-tolerance mechanism so that it truly plays a positive guiding role: it must not be used to "loosen discipline" or "grant mercy outside the law," nor treated as a "protective umbrella." Those who fail to act, or who act recklessly, should be resolutely held accountable so as to set clear and correct guidance. --- Reform the Evaluation Methods and Increase the Proportion of Pollution Control. At present, although economic performance is no longer the sole measure of achievement in the comprehensive assessment of economic and social development, economic development still occupies the leading position, and the proportion given to environmental governance remains too small. Therefore, it is necessary to apply the new development concepts of innovation, coordination, greenness, openness, and sharing; optimize and adjust the content of the current performance appraisal; establish a more coordinated and specific performance appraisal method; and use the optimized appraisal content to drive changes in officials' concepts of and thinking about governance. We should further improve the assessment method for air pollution control, increase the content of process assessment, and pay attention to the phased improvement of assessment indicators beyond the statutory ones. At the same time, we should implement the environmental protection responsibility system of "the Party and the government share the same responsibility; one post, two responsibilities," and press the responsibility for air pollution control onto Party committees, governments, and departments at all levels so as to compel them to perform their duties. We should build a performance appraisal and accountability model focused on major government officials and attend to the construction of a responsibility system, such as implementing the departure audit of leading cadres' natural resource assets, establishing lifelong accountability for damage to the ecological environment, and exploring a government natural resource balance sheet system. We should truly link the effectiveness of air pollution control with the selection and appointment of leading cadres, so that the capable are promoted, the mediocre are spurred on, and the inferior are weeded out, thereby improving the effectiveness of local government environmental governance. --- Strengthen the Participation of Other Subjects. First, make both government regulatory information and unit emission information public. Supervision by society, the media, and the public is the best form of supervision and an effective way to push the government and pollutant-discharging units to perform their duties in strict accordance with the relevant requirements. The government's disclosure of relevant information is also a key measure to protect the public's right to know. The government needs to increase detailed information disclosure on air pollution control so that the environmental situation is known to the public as comprehensively as possible.
We should continue to unswervingly promote the disclosure of government information and strive for the goal of "openness as the norm and non-openness as the exception," in accordance with the requirements of the State Council, so that the government works in the open, the whole process of work deployment, implementation, operation, and evaluation is transparent, and supervision by society, the media, and the public is consciously accepted. Give play to the guiding and publicity role of government information disclosure, build new and effective channels of communication, and reduce the public's cost of accessing atmospheric environment information at every step. Constantly enrich the content of public information; in particular, information related to sensitive matters such as examination, approval, and planning must be made public as far as possible. Continue to expand the channels of information disclosure, using not only the regular channels but also mobile platforms with a high frequency of public use, such as Weibo and apps, so as to improve the uptake of disclosed information and fully realize the effectiveness of government information disclosure. At the same time, because air pollution control is highly technical, the government also needs to establish a platform for popularizing the relevant science and improving the public's technical literacy. On the other hand, pollutant-discharging units also need to strengthen their awareness of information disclosure, avoid refusing disclosure on the grounds of trade secrets, and disclose accurate pollutant discharge information to the public, subject to the supervision of the relevant government departments and all sectors of society [20]. Second, use policy, finance, and other means to encourage enterprises to control pollution. Firstly, strictly implement the principle of "whoever pollutes, governs" and bring enterprises into the framework of the ecological compensation system. Clarify the obligations of polluters, stakeholders, and other parties; build a mechanism that imposes cost pressure on enterprise emissions; control the total emission of air pollutants; and reward enterprises that meet high discharge standards, with polluters bearing their own responsibilities and compensating stakeholders. Secondly, implement emissions trading. Here we can learn from advanced foreign experience and use market means to make enterprises take measures of their own accord to limit the emission of air pollutants. Emissions trading can also reduce enterprises' abatement costs and the possible adverse economic impact of air pollution control. At the same time, an emissions trading system can promote the unification of enterprises' pollutant emission standards and contribute to the effective advancement of air pollution prevention and control. --- Conclusion This study first analyzes changes in the pollution indicators of the cities in a province as a whole, across regions, and at different time nodes, from the two dimensions of time and space.
Secondly, correlation analysis, regression analysis, and superposition analysis of the overall pollution situation in the province are carried out using ground meteorological data (average temperature, precipitation, relative humidity, and average wind speed) and digital elevation model (DEM) data for the corresponding period. Finally, building on this preliminary understanding of the spatial and temporal characteristics of air pollution in the province's cities, and in order to further clarify the causes and mechanisms by which human activities produce air pollution, an air pollution evaluation system for the province's cities is constructed with a PSR (pressure-state-response) model. The weights of the pressure layer, state layer, and response layer on the environmental state are 0.4824, 0.261, and 0.1207, respectively (a computational sketch of this weighted aggregation follows the declarations below). Then, using the theory of collaborative governance, this article analyzes the problems in both intra-government collaboration and government-society collaboration in the air pollution governance of L City, and finally puts forward countermeasures and suggestions: clarify the internal responsibilities of the government; deepen supervision and promote the implementation of departmental responsibilities; effectively apply fault tolerance and error correction; reform assessment and evaluation; promote research on and deployment of pollution control; make both government regulatory information and unit emission information public; use policy and financial means to stimulate enterprises; mobilize public participation through multiple channels; and work together internally and externally, thereby providing a reference for L City's air pollution control. --- Data Availability The labeled dataset used to support the findings of this study is available from the corresponding author upon request. --- Conflicts of Interest The author declares that there are no conflicts of interest.
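As a reading aid for the PSR result reported in the conclusion above: the layer weights imply a simple weighted sum of layer scores. The sketch below is illustrative only; the layer scores, the assumption that they are already normalized to [0, 1], and the function names are assumptions, not the paper's actual computation (note that the three reported weights do not sum to 1, so the original model presumably distributes additional weight elsewhere).

import math

# Layer weights as reported in the conclusion above.
LAYER_WEIGHTS = {"pressure": 0.4824, "state": 0.261, "response": 0.1207}

def psr_index(layer_scores):
    """Weighted sum of per-layer scores (each assumed to lie in [0, 1])."""
    return sum(LAYER_WEIGHTS[layer] * s for layer, s in layer_scores.items())

# Hypothetical normalized layer scores for one city-year:
print(round(psr_index({"pressure": 0.62, "state": 0.48, "response": 0.71}), 4))
# -> 0.5101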
Generally, men in sub-Saharan Africa make reproductive decisions that affect their partners. We examined the predictors of fertility desires among married men across three age cohorts: 20-35 years, 36-50 years, and 51-59 years. Using the 2014 Ghana Demographic and Health Survey dataset, we conducted ANOVA and multivariate binary logistic regressions on 1431 monogamous married men aged 20-59 years. Two indicators of fertility desire were constructed: (i) the comparison of men's ideal versus women's ideal family size, and (ii) the desire for more children. The results indicate that the fertility desire of men is stronger than that of women. The predictors of fertility desire are age, parity, religion, contraceptive use, wealth quintile, couples' age difference and couples' difference in education. At ages 20-35 years, men using modern contraceptives were more likely to desire more children compared with those not using any modern contraceptives. However, at ages 36-50 years, men using modern contraceptives were less likely to desire more children. This finding suggests that men change their fertility desires as they age.
Introduction Globally, there is evidence of a significant decline in total fertility rates (TFR), although some sub-Saharan African countries continue to have high fertility rates (Matovu et al. 2017;Mbizvo et al. 2019;Novignon et al. 2019). Ghana is among the sub-Saharan African countries performing remarkably well in reducing high fertility (Ahinkorah et al. 2021b). This is evident in the reduction in TFR from 6.4 in 1988 to 4.2 in 2014 (Ghana Statistical Service et al. 2015). Notwithstanding, the desire for more children continues to be a major demographic issue in sub-Saharan African countries, including Ghana (Bongaarts and Casterline 2013;Van Lith et al. 2013). While men in most sub-Saharan African countries prefer to have a higher number of children than their partners, those in developed countries prefer to have fewer children than their partners (Matovu et al. 2017). This situation is referred to as fertility desire discordance (FDD), whereby partners have different fertility goals and expectations (Gibbs and Moreau 2017). In sub-Saharan Africa, men play a dominant role in the reproductive decisions of women (Dodoo 1994;Vouking et al. 2014). Furthermore, FDD can arise from factors that may include age, relationship duration, and the presence of children from previous relationships (Gibbs and Moreau 2017). Unintended and mistimed pregnancies can happen within unions and marriages (Tsegaye et al. 2018); therefore, understanding the factors associated with FDD presents a great opportunity for developing countries like Ghana to reduce the likelihood of unintended and mistimed pregnancies in marital unions. In Ghana, although the TFR declined from 6.4 in 1988 to 4.2 in 2014, women and men desire large families: 4.3 children for all women and 4.5 children for all men (Ghana Statistical Service et al. 2015). The preference among married women and men is for 4.7 and 5.1 children, respectively (Ghana Statistical Service et al. 2015). Extant studies on fertility desires have focused mainly on women (Ahinkorah et al. 2021b;Kebede et al. 2021;Keesara et al. 2018;Yeboah et al. 2021b) and persons living with HIV (Kimani et al. 2015), with few studies having been conducted on men (Akinyemi and Odimegwu 2021;Wawire et al. 2013). Sarnak and Becker (2022) examined the accuracy of wives' proxy reports of husbands' fertility preferences in SSA and found that wives across a number of countries either inaccurately perceive or are uncertain of their husbands' fertility preferences. Nevertheless, most studies measuring the concordance of couples' fertility preferences rely on the reports of women as proxies for the fertility desires of their partners (Diro and Afework 2013;Gebreselassie and Mishra 2011;Uddin et al. 2017;Yeboah et al. 2021b;Matovu et al. 2017). As such, little is known about the factors associated with fertility desire discordance between husband and wife from the men's perspective. This paucity of empirical evidence limits a comprehensive understanding of men's fertility desires. The number of children an individual wants to have during his or her reproductive life is not fixed but changes with circumstances at the individual level (Trinitapoli and Yeatman 2018;Yeboah et al. 2021a). Hence, factors influencing fertility preferences may not be the same across different age cohorts.
Gaps in age, education and economic circumstances among couples predispose their marital relationship to unequal sexual behaviour (Longfield et al. 2004;Luke 2005). Men usually have greater control over decisions regarding family size and contraceptive use. This study examines the association of the age difference and education difference between married partners with men's fertility behaviour. Specifically, it seeks to determine whether age difference, education difference and other covariates between marital partners are negatively associated with men's fertility desire. To the best of our knowledge, few studies have undertaken a quantitative assessment, using nationally representative data from Ghana, of the relationship between men's characteristics and fertility desire and of whether this relationship is similar across different age cohorts. Using nationally representative data aids our understanding of Ghanaian men's fertility desires. We assess three synthetic cohorts of men (20-35-year-olds, 36-50-year-olds and 51-59-year-olds) to examine differences in individual characteristics according to fertility desire across the three age groups. Drawing on husband-and-wife reports of fertility desires, this study examined the characteristics associated with men who desire a higher number of children than their partners. This paper also seeks to ascertain the contribution of the fertility behaviour, socio-economic and cultural characteristics of married men, as well as couple characteristics, to their fertility desires in Ghana using the 2014 Ghana Demographic and Health Survey (GDHS). --- Methods --- Data Source Data for this study come from the 2014 Ghana Demographic and Health Survey (GDHS). The GDHS is a cross-sectional, comparable and nationally representative survey that collects data on key population and health indicators from women aged 15-49 years and men aged 15-59 years. The DHS uses a three-stage sampling technique based on a stratified cluster design, with three different questionnaires: household, women's and men's. We relied on the couple dataset file, which was derived by linking eligible interviewed men and women from the same households who are in a union. The sample was restricted to couples in monogamous marriages because only in such marriages could the husband's responses be matched to a specific wife or partner; the results of this analysis therefore generalize only to men in monogamous marriages. Among married men aged 15-59 (n = 1828), 83.6% were in a monogamous marriage, while 16.4% were in a polygamous marriage. The number of couples in monogamous marriages was 1528. Of this sample, we further restricted the data to men and women who provided a numerical response to questions on ideal family size; couples with non-numerical responses (n = 39) were excluded from the study. Finally, sterilized and infecund men (n = 90, 5.7%) were excluded since they do not have any fertility desire. We applied weighting to the dataset to obtain unbiased estimates according to the DHS guidelines, and the survey command in Stata was used to adjust for the complex sampling structure of the data in all of the analyses. A weighted sample size of 1431 was used for the analysis.
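The weighting step described above was carried out with Stata's svy commands; a roughly equivalent sketch in Python is shown below for illustration. The variable name mv005 (the men's sample weight, stored multiplied by 1,000,000) follows standard DHS recode conventions; the file name and the derived desire_more column are hypothetical.

import pandas as pd

# Hypothetical men's recode extract; mv005 is the DHS sample weight * 1e6.
men = pd.read_stata("GHMR_extract.dta", convert_categoricals=False)
men["wt"] = men["mv005"] / 1_000_000  # rescale to the analysis weight

# Weighted share of men desiring more children (desire_more is a
# hypothetical 0/1 indicator, derived as in the Measurement section below):
share = (men["desire_more"] * men["wt"]).sum() / men["wt"].sum()
print(f"Weighted share desiring more children: {share:.3f}")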
--- Measurement of Variables Dependent variable: Fertility desire was the dependent variable, and we constructed two indicators of it. The first indicator is based on the ideal family size question: "If you could go back to the time you did not have any children and could choose exactly the number of children to have in your whole life, how many would that be?" This question yielded both numerical and non-numerical responses; the data were restricted to men and women who provided numeric responses. This variable was used to compare men's ideal family size with women's ideal family size. The difference yielded three outcomes: 'wives desire more', 'same couple desire' and 'husbands desire more'. Since the study's focus was on husbands' desire for more children, we excluded 'wives desire more' and reduced the multivariate analysis to two categories (equal desire = 0; husbands desire more = 1). The second indicator was a prospective measure of fertility desire derived from the question "Would you like to have a (another) child with your partner, or would you prefer not to have any more children with her?" Men who responded that they want another child were considered as having a desire for more children and coded as 1. Those who responded that they want no more were considered as not having a desire for more children and coded as 0. For our multivariate analysis, we relied on the second indicator, since research has confirmed it as more reliable and valid than the first (Casterline and Han 2017;Fayehun et al. 2020). Nevertheless, the first indicator was also used for comparison purposes (see Supplementary Material Table S1). Independent variables: The study used nine independent variables, grouped into individual- and couple-level factors. The individual characteristics were age, current contraceptive use, parity (number of living children), place of residence, religion, wealth quintile and age at first marriage/cohabitation. The couple characteristics were age difference and difference in education. Details of how each of these variables was coded can be found in Table 1. --- Statistical Analysis Analyses were carried out at three levels (univariate, bivariate and multivariate) using Stata version 16. Frequencies and percentages were used at the univariate level to describe the socio-demographic and other characteristics of respondents, while one-way ANOVA was carried out at the bivariate level to examine differences in the mean ideal family size (fertility desire) among couples across the three age cohorts (see Table 2). Multivariate binary logistic regression models were used to assess the relationships between men's characteristics and fertility desire. Models were run for all men aged 20-59 years, as well as across the three age cohorts, using the two different measures of the outcome variable. For all models, an adjusted odds ratio (AOR) with its respective 95% confidence interval (95%CI) was computed and reported. --- Ethical Approval The DHS survey obtained ethical clearance from the Ethics Committee of ORC Macro Inc. as well as the Ethics Boards of partner organizations such as the Ministry of Health and the Ghana Health Service. During the survey, written or verbal consent was provided by the participants. In this study, we sought permission from the MEASURE DHS website for access to the data. Hence, the data analysed are available in the public domain "www.measuredhs.com" (accessed on 3 April 2023) after obtaining the necessary approval. The data do not contain any identifying information.
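To make the two codings above concrete, a minimal sketch follows (the paper's analyses were run in Stata; this Python sketch only mirrors the logic). All column names are hypothetical stand-ins for the couple-recode variables, and the formula lists only a subset of the covariates.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("couples.csv")  # hypothetical couple-level extract

# Indicator 1: husband's minus wife's ideal family size, in three classes.
diff = df["ideal_husband"] - df["ideal_wife"]
df["indicator1"] = np.select([diff < 0, diff == 0],
                             ["wife_more", "equal"], default="husband_more")
# As in the text: drop 'wives desire more'; code equal = 0, husband more = 1.
sub = df[df["indicator1"] != "wife_more"].copy()
sub["husband_more"] = (sub["indicator1"] == "husband_more").astype(int)

# Indicator 2: prospective desire for a(nother) child, 1 = wants more.
df["wants_more"] = (df["desire"] == "wants another").astype(int)

# Binary logistic regression; exponentiated coefficients are the AORs.
fit = smf.logit("wants_more ~ C(age_group) + C(religion) + C(wealth) + "
                "C(parity_group) + C(contraceptive_use) + C(educ_diff)",
                data=df).fit(disp=False)
print(np.exp(fit.params).round(2))  # adjusted odds ratios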
--- Results The distributions of the two indicators are displayed in Figures 1 and 2. Both figures show differences in fertility desire, and age is statistically significant. Comparing the two distributions, the first indicator shows that 41.1% of men desire more children (Figure 1), while the second indicator shows 39.9% (Figure 2). This indicates that the first indicator is upwardly biased and the second downwardly biased. Both indicators show that the desire for more children increases with age.
The individual and couple characteristics of all married men across the age groups in our sample are depicted in Table 1. The majority of the married men were aged 36-50 years (50.7%), were not using any contraceptive methods (73.9%), lived in urban areas (51.3%), were adherents of the Christian religion (71.5%) and were first married at 20 years and over (87.8%). A little over 90% of the men in this study's sample were older than their spouses; specifically, half of men (50.2%) were 6 or more years older than their partners. Half of the men (50.9%) were more educated than their spouses. Table 2 displays the mean of the ideal family size (fertility desire) of couples. We categorized the ages into three cohorts for both men and women. For men, we were focused on young men (ages 20-35 years), young male adults (36-50 years) and older male adults (51-59 years). For women, we were focused on young women (15-35 years), young female adults (36-40 years) and women ending their reproductive life (41-49 years). Generally, the mean ideal number of children for men and women was 5.4 and 4.7, respectively. The mean fertility desire of both men and women increases with age. Table 3 shows the multivariate level of analysis, where age, religion, wealth quintile, contraceptive use, parity and differences in education were the significant predictors of a desire for more children (using the second indicator). The model revealed the adjusted odds ratios of the selected independent variables on fertility desire. Men belonging to the Islam religion (OR: 5.14; 95%CI: 3.07-8.60) were more likely to desire more children compared with those adhering to Christianity. The likelihood of desire for more children was lower among those aged 36-50 years (OR: 0.45; 95%CI: 0.29-0.71), those aged 51-59 years (OR: 0.09; 95%CI: 0.05-0.19), those using any contraceptive methods (OR: 0.85; 95%CI: 0.62-1.18), those with a parity of 2-3 children (OR: 0.06; 95%CI: 0.03-0.15), those with a parity of 4+ (OR: 0.01; 95%CI: 0.00-0.03), and those who were more educated than their spouse (OR: 0.47; 95%CI: 0.32-0.69).
In addition, we found that as the wealth quintile increased, the desire for more children significantly decreased. Across the three age groups, we observed similar patterns in the relationships between religion and fertility desire, wealth quintile and fertility desire, and parity and fertility desire, as well as differences in education and fertility desire. We observed similar patterns in the relationship between contraceptive use and fertility desire, except for the 20-to-35-year-olds, as those using any contraceptive methods were more likely to desire more children compared with those not using contraceptive methods. To complement the multivariate findings, using the first indicator, we found that correlates of fertility desire included religion, wealth, parity and age difference. There was consistency with the first indicator of fertility desire regarding the direction of the coefficients of religion and wealth (See Supplementary Material Table S1). However, men with 4 or more living children were more likely to desire more children compared with those with a 0-1 parity. We observed the same pattern across the age cohorts of 20-35 years and 36-50 years. In addition, men older than their spouse by 1-5 years were more likely to desire more children compared with couples of the same age, as were men older than their spouses by 6 or more years. This pattern was similar across those aged 20-35 years and 36-50 years (See Supplementary Material Table S1). --- Discussion This study investigated the discordance in fertility desires among monogamous married men in Ghana. Generally, men are observed to have higher fertility desires than their partners/wives. This was true in all the age categories of married men. This finding means that married couples have to deal with perceived differences in fertility preferences. The higher fertility desire of husbands compared with that of wives reflects the pronatalist tendency of men in sub-Saharan Africa, including Ghana. It is worth noting that men may wish for more children than their spouses because they do not primarily go through the physical and psychological demands of pregnancy. Women, who primarily undergo the stress associated with pregnancy and childbirth, appear more modest in their fertility desires than their partners. The gendered discordance in fertility desires has been described in other studies carried out in Africa (Ibisomi and Odimegwu 2011;Wawire et al. 2013). Within the West African context, women are often coerced to meet the higher fertility desires of men; however, persuasion is often used when the women have higher fertility desires (Ibisomi and Odimegwu 2011). This differential resolution of conflicts arising from discordance in fertility desires is underpinned by cultural values that assert male dominance in fertility decision making (Ibisomi and Odimegwu 2011;Wawire et al. 2013). Men in sub-Saharan Africa are the heads of most families, are usually older than their partners and are expected to make decisions affecting their wives. This study found the following factors to be significant independent predictors of discordance in fertility desires between husbands and wives in monogamous marital unions: age, religion, wealth quintile, contraceptive use, parity, and differences in education. This study showed an inverse relationship between age and the desire for more children.
As men age, they become less likely to desire more children. This is similar to the results of a previous study carried out in China (Eklund 2016). The study found that men who had attained a higher education than their partners were less likely to have a desire for more children compared with couples with the same level of education. This observation was similar in the age-stratified analyses carried out for married men aged 20-35 years and those aged 51-59 years. It appears that after receiving some level of formal education, men are able to form more informed expectations, bringing their fertility desires into line with those of their female partners. There is a mixed relationship between educational attainment and fertility desires in men (Berrington and Pattaro 2014). However, higher educational attainment and aspirations may reduce fertility desires in men, as they would not want childbirth to interfere with their educational attainment, especially when career growth and fertility desires are difficult to attain at the same time (Berrington and Pattaro 2014). As seen in this study, this was especially the case for married men aged 20-35 years, who are more likely to be focused on career advancement than on higher fertility desires. Our finding is in agreement with those of a recent study in Nigeria, which reported that higher education was associated with decreased fertility desires among men (Ahinkorah et al. 2021a). The relationship between wealth and fertility is generally reported to be positive among men but negative among women (Stulp and Barrett 2016). This study found that wealthier men were less likely to have higher fertility desires than their female partners; household wealth therefore appears to be protective against higher fertility desires. This finding is in contrast with the tenets of microeconomic theory on fertility, which assumes a positive association between income and fertility desires (Robinson 1997). This study found that the number of living children is negatively associated with having higher fertility desires. This was not surprising, given that one would expect men's fertility desires to decrease as the number of living children increases (Ahinkorah et al. 2021a). In addition, men with a large family in Ghana may not desire additional children because of economic and social pressures. This finding contradicts the results of a study in Europe which found that the fertility expectations of parents increase after the birth of additional children (Heiland et al. 2008). Our finding may be attributable to a sense of satisfaction with family life when there are few living children, which reinforces men's desire for more children. Nevertheless, using the first indicator of fertility desire, we found that high parity increases men's desire for more children. This suggests that the measurement of fertility desire influences the direction of the independent variables in a model. Interestingly, we found that, compared with men not using contraceptive methods, men using any contraceptive method were less likely to have higher fertility desires. We found similar patterns across the age groups 36-50 years and 51-59 years. The only exception was among those aged 20-35 years: young men (<36 years) using any contraceptive method were more likely to desire more children.
At a younger age, men may use contraceptives with the intention of spacing childbirths rather than limiting them. This finding is similar to those of Yeboah et al. (2021b), who explained that women used modern contraceptives to space births at high parity rather than to limit childbearing. However, at older ages, men might use contraceptive methods to prevent having additional children. Further studies, including qualitative work, are recommended to explore and better understand this finding. The odds of desiring a large family were higher among Muslims than among Christians. This finding corroborates studies conducted in Nigeria (Odusina et al. 2020). The desire of Muslim men for more children could be due to the practice of polygamy, which is mostly affiliated with the Muslim religion. From a policy standpoint, our findings highlight the discordance in fertility desires between Ghanaian men and their female partners and uncover some socio-economic factors that underpin it. Public health professionals can use our findings to guide their practice by taking into account the socio-economic complexities of the gendered discordance in fertility desires. --- Limitations of the Study Despite its merits, this study used secondary data from a cross-sectional survey; hence, the associations observed do not imply causal relationships. Our study was also restricted to the variables available in the dataset. --- Conclusions We investigated the discordance in fertility desires among couples, with reference to high fertility desire in men and its associated factors among married men. The independent predictors of discordance in fertility desires between husbands and wives in marital unions included education, wealth status, religion and parity. The findings reflect the pervasive role of education, wealth and socio-cultural norms in the complexities of the gendered discordance regarding fertility desires. --- Supplementary Materials: The following are available online at https://www.mdpi.com/article/10.3390/genealogy7030048/s1, Table S1: Multivariate analysis of factors associated with married men's desire for more children (0 = equal desire and 1 = husband desires more). Institutional Review Board Statement: We did not seek ethical clearance for this study because the dataset used is freely available in the public domain. However, we sought permission from MEASURE DHS, and approval was given before using the data. We ensured that all ethical guidelines concerning the use of secondary datasets in publications were strictly adhered to. Informed Consent Statement: Detailed information about DHS data usage and ethical standards is available at http://goo.gl/ny8T6X, accessed on 3 April 2023. --- Data Availability Statement: The dataset is freely accessible at www.measuredhs.com, accessed on 3 April 2023. --- Conflicts of Interest: The authors declare no conflict of interest.
Studies on intergenerational social mobility usually examine the extent to which the social positions of one generation determine the social positions of the next. This study investigates whether the persistence of inequality can be expected to stretch over more than two generations. Using a multigenerational version of GENLIAS, a large-scale database containing information from digitized Dutch marriage certificates during 1812-1922, this study describes and explains the influence of grandfathers and great-grandfathers on the occupational status attainment of 119,662 men in the Netherlands during industrialization. Multilevel regression models show that both grandfather's and great-grandfather's status influence the status attainment of men, after fathers and uncles are taken into account. Whereas the influence of the father and uncles decreases over time, that of the grandfather and great-grandfather remains stable. The results further suggest that grandfathers influence their grandsons through contact but also without being in contact with them. Although the gain in terms of explained variance from using a multigenerational model is moderate, leaving out the influence of the extended family considerably misrepresents the influence of the family on status attainment.
Introduction In a fair and efficient society, individuals are matched to occupations and their accompanying privileges (such as status and wealth) based arguably on their talent rather than the family into which they were born. Thus, many stratification scholars have studied the extent to which occupational attainment is determined by family background. The vast majority of these studies have looked at how the social position of one generation is influenced by the social position of their parents (Breen and Jonsson 2005;Ganzeboom et al. 1991). However, more recently, some have argued that in order to fully understand the social reproduction of families, it may be important for certain contexts to look beyond parents and to take the extended family into account (Mare 2011). A growing body of research has examined whether the dominant Markovian parent-offspring approach is adequate, or whether it is necessary to adopt a multigenerational perspective to understand intergenerational social mobility. Nevertheless, the number of studies is limited and restricted mostly to grandfathers (but see Campbell and Lee 2003, 2008, 2011). For occupational social mobility, some studies have found direct net effects of grandparents (Allingham 1967;Beck 1983;Chan and Boliver 2013;Goyder and Curtis 1977;Pohl and Soleilhavoup 1982), but others have reported that grandparents play no part after the role of parents has been accounted for (Erola and Moisio 2006;Warren and Hauser 1997). Because the results are both limited and mixed, the pervasiveness of the influence of generations more remote than parents is unclear. Partly, this is a descriptive empirical problem: more studies need to be conducted to get a reliable picture. However, this is also an explanatory empirical problem: we need to test the mechanisms thought to underlie multigenerational effects to understand the contexts in which we can expect such effects to be prominent. This article confronts both problems by studying the influence of grandfathers and great-grandfathers on the occupational status attainment of men in the Netherlands in the second half of the nineteenth century and the beginning of the twentieth century. Two mechanisms have been proposed for the influence of grandparents and great-grandparents (over and above that of parents) on the social positions of their grandchildren and great-grandchildren. One mechanism involves the transfer of resources through socialization, requiring contact between the generations (Bengtson 2001). The other mechanism does not presuppose contact: it involves the transfer of durable resources, which are likely to still exist for subsequent generations to benefit from even if the original holder has passed away (Mare 2011). However plausible these mechanisms may be, they have hardly been systematically tested (for a notable exception, see Zeng and Xie 2014). Testing the mechanisms is not easy because of several complicating factors. Probably the most important deterrent is that few large-scale data sets cover more than two generations; even fewer also contain detailed information on, for example, contact between grandparents and grandchildren, or the level of durable resources in family lineages. Because the data that I use largely overcome these problems, this study makes substantial headway in testing what I refer to as the "contact mechanism" and the "durable resource mechanism."
I analyze a large-scale database, GENLIAS, which contains digitized information from Dutch marriage certificates for the period 1812-1922, a period in which just a small percentage of the population did not marry. These marriage records contain information on the occupations of those who married and of their parents. Where possible, the marriage certificates have been linked to the marriage certificates of parents for 5 of 11 provinces. I study only men because the status attainment of women in this time frame was quite different (Bras 2002;Schulz 2013) and deserves a separate study. I also exclude the families-in-law, given that it is unlikely in the context studied that they were willing to invest resources in the groom before marriage (and thus before the measurement of occupational status). Altogether, I am able to apply multilevel sibling models to 43,242 paternal grandfathers, 64,062 of their sons, and 119,662 of their grandsons. For 25,443 men, I can even study the influence of 9,116 great-grandfathers. An advantage of using multilevel models is that they allow studying both conventional measures of family influence (father-son and grandfather-grandson correlations) and what are often regarded as more comprehensive measures of family influence (brother and cousin correlations) (Hällsten 2014;Jencks et al. 1972;Knigge et al. 2014a). The Netherlands during industrialization forms a fruitful context in which to study multigenerational influence. First, although the Netherlands had its own peculiarities (such as an early developed service sector), it can be considered exemplary of other Western modernizing societies in many respects, including the modernization processes that took place. The present study is the first to provide empirical evidence on whether the conventional two-generation view is adequate for understanding intergenerational mobility in the context of a modernizing Western society, or whether a multigenerational view is warranted. Furthermore, the effects of the two aforementioned mechanisms can be separated to some extent because of the specific characteristics of this period. Durable resources are thought to have been especially relevant for attaining status in the nineteenth century, albeit decreasingly so because of modernization processes. This claim can be tested because, for great-grandfathers, contact was virtually impossible given the prevailing life expectancy. Thus, if great-grandfathers had an influence, it must have been through durable resources. The contact mechanism, on the other hand, may have become more important in this period because increasing life expectancy resulted in a longer period of shared lives between grandfathers and grandsons. Although I do not have a direct measure of contact, I can measure the likelihood of contact by looking at whether grandfathers lived near (in time and space) their grandsons. One final contribution of this study is that it tests to what extent the influence of grandfathers is actually that of uncles. Because this article tests mechanisms for direct effects of grandfathers, teasing out any indirect effects via uncles is important. Some historical studies have suggested that Dutch uncles played a role in the lives of their nephews (Kok and Mandemakers 2010;Kok et al. 2011). On one hand, uncles may have helped their nephews by providing work or resources. On the other hand, the presence of uncles may have meant competitive claims to grandparental resources. By testing whether uncles have an effect, this study will show whether research should start developing and testing theories on the role of uncles (and other extended family members) in the future as well.
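A minimal sketch of a multilevel (sibling/cousin) specification of this kind follows, assuming a long-format data set with one row per groom and HISCAM scores for his father and grandfather. The column names are hypothetical, and this is a sketch of the general technique, not necessarily the exact specification estimated in this article.

import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("grooms.csv")  # hypothetical extract of the linked data

# Random intercepts for lineages (grandfather_id), with a nested variance
# component for sibling groups (father_id); individual grooms are level 1.
model = smf.mixedlm(
    "status_groom ~ status_father + status_grandfather + marriage_year",
    data=df,
    groups="grandfather_id",
    re_formula="1",
    vc_formula={"father": "0 + C(father_id)"},
).fit()
print(model.summary())

The lineage- and sibling-level variance components estimated by such a model are what allow brother and cousin correlations to be derived alongside the conventional father-son and grandfather-grandson coefficients.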
--- Theoretical Background and Hypotheses --- Influence of Grandfathers and Great-grandfathers on Status Attainment --- Influence Through Contact Parents influence the status attainment of their children through the transfer of resources, such as financial, cultural, human, and social capital (Blau and Duncan 1967;Bourdieu and Passeron 1977/1990). Grandparents and great-grandparents can influence the status attainment of their grandchildren/great-grandchildren in the same way by taking over or complementing the parents' role (Bengtson 2001;Zeng and Xie 2014). For example, grandparents can look after their grandchildren while parents work, or grandparents/great-grandparents can make financial contributions to the cost of educating their grandchildren/great-grandchildren. In the Netherlands in the nineteenth century, it was almost impossible for great-grandfathers to help raise their great-grandsons because the low life expectancy made contact between them unfeasible. One could argue that grandfathers did not play a central role in the lives of their grandchildren, either. Nuclear families were the standard, with an average household size of approximately 4.8 (Kok and Mandemakers 2010). Most families consisted of a married couple with or without children, and extended-family households were not common. [Footnote 1] Moreover, life expectancy was much lower than it is nowadays. Men born in 1820 who reached the age of 30 (the age at which they were likely to have their first son; Van Poppel 2012) were expected to die at age 63. [Footnote 2] Thus, many children never knew their grandfathers because grandfathers, on average, would die before or soon after the birth of grandchildren. Because of the limited frequency of extended-family households and the low life expectancy, only about 9 % of children were born into a household with at least one grandparent present; and by age 15, hardly any children lived with their grandparents. However, it is not unlikely that grandfathers had an impact on their grandsons' status attainment through direct contact. First, although coresidency was generally not common, most grandparents lived close to their grandchildren (Van Poppel 2012). Furthermore, both life duration and the age at which people had their first child varied greatly: for example, 40 % of men born in 1820 who reached the age of 30 lived to at least the age of 70, and 15 % to at least the age of 80 (Van Poppel 2012). Therefore, many grandchildren's lives did overlap with that of at least one grandparent. Post et al. (1997) estimated that approximately 75 % of children who were aged 0-20 in the period 1850-1900 had at least one grandparent still alive (but fewer than 5 % had all four grandparents still alive). Because of this possibility for contact, I expect to find the following: [Footnote 1: Stem families were more common, mostly among farming families, in the southeastern Netherlands than in the rest of the country. In the southeastern provinces (Drenthe, Overijssel, Gelderland, and Limburg), approximately 22 % of children were born into extended-family households, compared with 12 % in the northwestern provinces (South Holland, North Holland, Friesland, and Groningen) and 17 % in an intermediate region (Utrecht, Brabant, and Zeeland) (Kok et al. 2011).]
[Footnote 2: These data are calculated from generation life tables (generatie-sterftetafels) from Statistics Netherlands (CBS). The calculations, made by Frans Van Poppel (Netherlands Interdisciplinary Demographic Institute), were received through personal communication (e-mail) on September 13, 2013.] Hypothesis 1 (H1). Grandfathers' occupational status positively influenced their grandsons' occupational status in the Netherlands during modernization. Transferring resources through contact was difficult if grandparents died soon after their grandchildren were born, and obviously impossible if they died before. Therefore, I expect the opportunities for grandparents to influence their grandchildren through direct contact to be fewer when the overlap of the lives of grandparents and grandchildren is smaller: Hypothesis 2a (H2a). The positive influence of grandfathers' occupational status on their grandsons' occupational status is lower, the less their lives overlap. If grandfathers live far from their grandsons, it is also more difficult for them to have an influence through direct contact. Geographical distance formed a serious obstacle in the nineteenth century, with the development of mass transportation and mass communication only just starting (Knippenberg and De Pater 2002). This leads to the following hypothesis: Hypothesis 2b (H2b). The positive influence of grandfathers' occupational status on their grandsons' occupational status is lower, the greater the geographical distance between them. --- Influence Without Contact: Durable Resources Mare (2011) proposed several arguments regarding how grandparents could influence their grandchildren's status attainment without being in contact with them. I classify these modes of influence under the heading "durable resource mechanism." To start, Mare argued that many resources relevant for attaining status are quite durable. Some resources, such as human and cultural capital, which are relatively important for educational attainment, can typically be transferred only as long as the holder is alive. However, other resources, such as financial and physical wealth (e.g., land and property), are much less perishable: that is, such resources may still exist for future generations to benefit from, even if the intermediate generation did not benefit. Such durable resources are expected to have been relatively important in the Netherlands in the nineteenth century because a large share of the population (40.3 % in 1849) was employed in agriculture (Smits et al. 2000), and educational opportunities were still limited (Mandemakers 1996). Further, Mare (2011) argued that social institutions, too, outlive individuals and may therefore be seen as potential durable resources. Especially at the top and bottom of the hierarchy, social institutions could lead to extreme advantages and disadvantages. As an example of institutionalized advantage, Mare mentioned the university legacy admission systems in the United States, by which grandsons can enter a top university more easily if their grandfather graduated there. This system did not exist in the Netherlands, but the nobility system and the student fraternities (studenten corpora) are examples of institutionalized advantage in the Dutch case. Moreover, it is highly possible that informal reputation mechanisms produced similar effects ("I knew your grandfather, he was a great man, and therefore I will help you").
In the absence of diplomas to signal productivity, employers may rely more on the reputation of family lineages. Also, the reputation of successful grandfathers may serve as an inspiration for their grandchildren. In conclusion, it is highly possible that grandfathers influenced their grandsons through durable resources, providing a second mechanism for H1. Similarly, great-grandfathers can be expected to have had an influence on their great-grandsons through durable resources, but for them, this would have been the only mechanism. Hypothesis 3 (H3). Great-grandfathers' occupational status positively influenced their great-grandsons' occupational status attainment in the Netherlands during modernization. --- Changes in the Influence of Grandfathers and Great-grandfathers Over Time Many scholars have claimed that in Western societies in the past, family background was much more important for attaining status than it is in contemporary societies. The argument is that modernization processes (such as industrialization, educational expansion, and mass communication) rendered ascriptive characteristics (roughly, family background) less decisive and achieved characteristics (roughly, educational attainment) more decisive in the status-attainment process (Blau and Duncan 1967;Kerr et al. 1960;Treiman 1970). On the other hand, status maintenance theory argues that in modernized societies, elites found alternative strategies to transmit status to the next generation: for example, by ensuring that their children received a good education (Bourdieu and Passeron 1977/1990;Collins 1971). In the Netherlands, the modernization processes discussed by Treiman (1970) occurred in the second half of the nineteenth century. An initial wave of industrialization in the form of mechanization of labor occurred around 1865, and a second, more significant wave occurred in the period 1895-1914 (De Jonge 1968;Van Zanden and Van Riel 2004). Industrialization caused shifts in the proportions of the labor force employed in agriculture, industry, and the service sector. In 1807, 43.1 % of the total labor force was employed in agriculture; 26.2 %, in industry; and 30.8 %, in services. By 1909, these figures were 30.4 %, 34.4 %, and 35.4 %, respectively (Smits et al. 2000). Knigge et al. (2014b) found, in line with modernization theory but not with status maintenance theory, that the influence of family background on the status attainment of Dutch men declined in the second half of the nineteenth century, and was lower where communities were more modernized. If Dutch society did indeed become more open because of modernization, one would expect not only fathers to have had less influence but also grandfathers and great-grandfathers (and uncles), because a change from ascription to achievement meant that the extended family, too, would have been less of a help or a hindrance in attaining status. However, we must take into account another development before formulating hypotheses. Evidence suggests that the lives of grandfathers and grandsons overlapped more over time. Figure 1 shows that life expectancy at age 30 rose steadily, from 33 years for men born in 1820 to 37 years for men born in 1850. [Footnote 3] Also, the percentage of 30-year-old men living at least another 40 years increased in the same period, from 40 % to 50 %. Although this evidence is far from conclusive, it suggests that the opportunities grandfathers had to influence their grandsons through contact increased over time, which would have counteracted the trend resulting from the lessened importance of (durable) family resources attributable to modernization. Because there is no convincing argument regarding which of the opposing developments had the most impact on the influence of the grandfather, it seems appropriate to expect no change in grandfather influence over time. Because great-grandfathers were unable to influence through contact but only through durable resources, the influence of great-grandfathers is expected to have declined over time.
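A back-of-the-envelope sketch of the overlap argument above follows, interpolating the conditional survival figures quoted for men born in 1820 who reached age 30. The piecewise-linear curve is an illustrative simplification, not the CBS generation life table itself.

# Quoted points: P(alive at age a | alive at 30) for the 1820 birth cohort.
SURVIVAL = {30: 1.00, 70: 0.40, 80: 0.15}

def p_alive(age):
    """Linear interpolation between the quoted survival points."""
    ages = sorted(SURVIVAL)
    for lo, hi in zip(ages, ages[1:]):
        if lo <= age <= hi:
            t = (age - lo) / (hi - lo)
            return SURVIVAL[lo] + t * (SURVIVAL[hi] - SURVIVAL[lo])
    return 0.0

# With first sons born around age 30, a grandson arrives when the
# grandfather is roughly 60:
print(f"alive at grandson's birth:  ~{p_alive(60):.2f}")  # ~0.55
print(f"alive at grandson's age 10: ~{p_alive(70):.2f}")  # ~0.40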
Hypothesis 4 (H4). The positive influence of grandfathers' occupational status on their grandsons' occupational status remained stable during modernization in the Netherlands. Hypothesis 5 (H5). The positive influence of great-grandfathers' occupational status on their great-grandsons' occupational status declined during modernization in the Netherlands. --- Data I analyze GENLIAS, which contains digitized information from Dutch marriage certificates for the period 1812-1922. A marriage certificate typically states the date and place of marriage; the names, places of birth, ages, and occupations of the bridegroom and bride; and the names and occupations of the couple's parents. For the provinces of Groningen, Overijssel, Gelderland, Limburg, and Zeeland, the marriage certificates have been linked to the marriage certificates of the parents. A computer algorithm matched the first and last names of the parents as stated on both certificates. To avoid incorrect links, the algorithm used additional information, such as the ages of the bride and groom, to ensure plausibility in terms of chronology (for more details, see Oosten 2008). From this database, I created a three- and four-generation version by matching the entries in which an individual is a groom in one certificate and the father of the groom in another. Further filtering, as well as deleting cases with missing data (see the following section), yields 43,242 grandfathers married between 1812 and 1881 on whose married sons and grandsons I have data. Put otherwise, I can identify the father, paternal grandfather, uncles (father's married brothers), brothers, and cousins of 119,662 men married between 1854 and 1922. For 25,443 grooms, I can perform analyses that include 9,116 great-grandfathers. --- Selections and Missing Data As discussed in the Introduction, I study neither women nor the families-in-law. Also, I include only men marrying for the first time, because I want to ensure that each person appears in the database only once and because family influence might work differently when an individual marries for the second time. This results in a database of 952,587 grooms married between 1812 and 1922. The marriage certificates of 526,119 of these grooms could be linked to the marriage certificate of the father. In turn, in 248,777 of these cases, the marriage certificate of the father could be linked to that of his father (the grandfather). In 67,964 of these cases, we also know the great-grandfather. A significant proportion of the marriage certificates cannot be linked, for several reasons. First, the fathers of grooms who married shortly after 1812 will certainly have married before 1812 and will not be part of the database. None of the grooms who married before 1831 can be linked to their father.
The same issue occurs in linking fathers' certificates to grandfathers' certificates, and linking grandfathers to great-grandfathers. The earliest date for which I can link a groom (via the father) to his grandfather is when the groom married in 1854; and the earliest date for which I can link a groom to his great-grandfather is when the groom married in 1871. Figure 2 exhibits the number of grooms that married in each year, as well as the proportion of these grooms that could be linked to their father, grandfather, and great-grandfather, respectively.
[Fig. 2 Number of grooms and proportion linked per year]
The proportion linked to their grandfather remains less than .01 until 1862 but then increases to reach .60 in 1922 (the average for the period 1854 to 1922 is .38). The proportion linked to their great-grandfather remains less than .01 until 1887 and reaches .36 in 1922 (the average for the period 1871 to 1922 is .13). Keep in mind when interpreting the results that the proportion of successful links is thus limited, especially in the first few years after 1854 and 1871. Second, individuals could be linked only within and between 5 of 11 provinces, so grooms could not be linked where the father or grandfather/great-grandfather had married outside these five provinces. However, I do not expect the proportion that could not be linked for this reason to be very large, as I explain in the next section. Third, variation in the spelling of names may result in failure to establish a link. The computer algorithm was designed to allow for minor variations in the spelling of names. However, a conservative approach was taken in this respect to minimize the number of incorrect links at the expense of not maximizing the number of total links. Finally, nonlinkage may result from errors in digitizing the certificates. Cases cannot be analyzed when the certificates lack occupational data sufficient to assign a status score to grooms (1.44 % of the cases in the three-generation data set; 1.14 % of the cases in the four-generation data set), to fathers (19.4 %; 15.9 %), grandfathers (23.6 %; 21.3 %), uncles (28.2 %; 30.2 %), and great-grandfathers (N.A.; 25.6 %). Listwise deletion of these cases (51.9 %; 62.6 %) results in the 119,662 and 25,443 grooms mentioned earlier.
--- Reflection on Possible Selection Bias in the Data
These data provide a rare opportunity to study multigenerational processes over an extensive period while covering a broad geographical area. Nevertheless, like most historical data, they have certain drawbacks. An obvious limitation of using marriage certificates is the exclusion of people who never married. This exclusion is less problematic than might be expected because marriage was common in the Netherlands in the nineteenth and early twentieth centuries: approximately 87 % of all men born in 1800 and 91 % of all men born in 1900 married at some point (Ekamper et al. 2003). Furthermore, Engelen and Kok (2003) did not find many significant differences (in terms of family background, religion, region, and birth cohort) in the likelihood of men born between 1890 and 1909 remaining unmarried. Schulz (2013) found no significant difference in status between married and unmarried Dutch men during the period that she studied (1865 to 1930).
Because records were linked within and between 5 of 11 provinces, I lose grooms if they, their fathers, or paternal grandfathers migrated from the region; and I lose family members if grooms, their fathers, or paternal grandfathers migrated to the region. Migrants are not a random selection given that they tend to have a higher status, but I do not believe this will influence the results substantially, for two reasons. First, the number of people I miss because of migration is not very large. Census data show that in 1849, just 8 % of people lived in a province other than the one in which they were born; the corresponding figures for 1899 and 1930 were 13 % and 15 %, respectively (Knippenberg and De Pater 2002). Furthermore, the data set does include those who migrated between the five provinces in the data, or who moved away after marrying. Second, Knigge et al. (2014b) performed several checks on the same data and showed that the effect of family influence on status attainment changes little when including fewer or more migrants. Finally, marriage certificates frequently lack information on the father's occupation. Linking the data alleviates this problem because the marriage certificates of siblings can be used as sources of information on the father's occupation (for fathers, the problem is reduced from 32.7 % to 19.4 % of cases). Still, because the problem affects grandfathers, great-grandfathers, and uncles as well, the combined number of missing cases is considerable. If a father's occupation is missing on his child's marriage certificate, the most likely reason is that the father was deceased; other explanations include migration or unemployment. Fortunately, in line with other studies (Maas et al. 2011; Zijdeman 2010), I find little difference between those with and those without information on the father's occupation. For example, occupational status differs by less than 1 point on an 88-point scale (47.28 and 48.25, respectively), and the status correlation between brothers is also rather similar (0.51 and 0.54, respectively). Moreover, the father-son correlation is not substantially different for those with and those without information on the grandfather's occupation (0.52 and 0.54, respectively).
--- Measures
--- Dependent and Independent Variables
Occupations have been coded using the Historical International Standard Classification of Occupations (Van Leeuwen et al. 2002), which is the historical equivalent of the International Labour Organization's International Standard Classification of Occupations (ISCO68). These occupational codes were subsequently mapped onto the HISCAM status scale (Lambert et al. 2013), which uses the same technique as the contemporary CAMSIS status scales (Stewart et al. 1980). In theory, the HISCAM scale runs from 1 to 99; in practice, however, it runs from 10.6 (servant) to 99 (judge, for example). The occupational status of the youngest generation-the dependent variable-is based on the occupations stated on the marriage certificate. Table 1 provides descriptive information on all variables (separately for the three- and four-generation data sets). The histogram in Fig. 3 gives more detail on the distribution of grooms' occupational status, showing that it approximates the normal distribution but with a few spikes for frequent occupations, such as worker (32.5) and farmer (50.7). Father's occupational status is the average status of the occupations that he reported on his children's marriage certificates.
The reliability of this group-averaged score can be calculated using the Spearman-Brown prediction formula (Winer et al. 1991: appendix E) and is estimated by Stata's loneway command to be .875 for the average-sized family. Thus, the intergenerational correlations will be slightly underestimated. The occupational status of great-grandfathers, grandfathers, and uncles is similarly derived from their children's marriage certificates. Because a groom may have more than one uncle, I take the mean of all married uncles. Moreover, to prevent losing cases, I substitute the father's occupational status for those who do not have an uncle (and adjust for this in the analyses; see the Control Variables subsection). Time is operationalized as the marriage year of the grandson/great-grandson. I rescale by subtracting the first year (1854 for analyses without the great-grandfather; 1871 for analyses with the great-grandfather) and then dividing by 10. To approximate whether a grandfather influenced a grandson directly through contact, I use two indicators for the likelihood that they were in contact. Temporal distance is given by the age difference between grandfather and grandson. I assume that the smaller the age difference, the greater the chance that grandfather and grandson had overlapping lives. Geographical distance is given by the distance in kilometers between the grandfather's place of marriage and the grandson's place of marriage. Because this measure is strongly right-skewed, I take the natural log (after adding 1). I assume that the smaller the geographical distance between grandfather and grandson, the greater the chance that they were in contact.
--- Control Variables
I include several control variables that might be confounding factors (e.g., Bras et al. 2010). At the individual level, these are the age at marriage of the groom as found on his marriage certificate, and birth order, the birth rank of a groom among his married siblings. At the family level, this is sibship size, which is approximated by the number of married full brothers and sisters; and a dummy variable representing whether the father was a farmer (1) or not (0) (cf. Erikson and Goldthorpe 1992): a father is labeled a farmer if more than one-half of his children providing information about their father's occupation state that he is a farmer (HISCO codes 61110 to 61290). At the extended-family level, this is the number of married uncles and aunts (as they and their children may be competitors for grandparental resources) and whether the grandfather/great-grandfather was a farmer (constructed in the same way as for the father). Finally, to correct for substituting uncles' status with father's status for grooms without any uncles, I include a dummy variable representing whether a groom has at least one uncle (0) or no uncles (1). More importantly, I include an interaction of this dummy variable with the variable status of uncles to ensure that the coefficient of status of uncles reflects only the effect for those who have an uncle (one would expect the effect for those without uncles to be insignificant). For the same reason, a three-way interaction with the dummy variable is included if the status of uncles is interacted with time in the analysis. (As a robustness check, I also performed the analyses without those who have no uncles; the results were not substantially different.)
--- Analytical Strategy
I perform multilevel linear regression with four hierarchical levels (individuals, fathers, grandfathers, and communities; a fifth level is added when analyzing great-grandfathers) using the package that runs MLwiN from within Stata (Leckie and Charlton 2013; Rasbash et al. 2013).
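Before turning to the models, the reliability calculation and the distance transformation described in the Measures section can be illustrated with a short sketch. The inputs here are illustrative assumptions, not quantities taken from the data: a single-certificate reliability of .70 and a family size of 3 are chosen only because they happen to reproduce the reported .875.

```python
import numpy as np

def spearman_brown(r_single: float, k: float) -> float:
    """Reliability of a score averaged over k indicators (here, a father's
    occupation as reported on k of his children's marriage certificates),
    given the reliability r_single of a single indicator."""
    return k * r_single / (1 + (k - 1) * r_single)

# Illustrative inputs: with r_single = .70 and k = 3 certificates, the
# group-averaged score reaches the .875 reported for the average-sized family.
print(spearman_brown(r_single=0.70, k=3))  # 0.875

def log_distance(km):
    """Geographical distance is strongly right-skewed, so the natural log
    is taken after adding 1, as described in the text."""
    return np.log(np.asarray(km, dtype=float) + 1.0)
```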
I include communities as a fourth level because individuals growing up in the same time period and the same geographical area tend to resemble one another. In reality, brothers and especially cousins might grow up in different communities, but the cross-classified models that would do justice to this structure are too complex to estimate. Therefore, I simplify and keep the multilevel structure hierarchical by defining the community as the marriage year and marriage place of the grandfather. To describe how large the influence of the family and extended family is on occupational status attainment, I start by estimating the intercept-only model:

$Y_{ijkl} = \beta_{0000} + c_{0l} + g_{0kl} + f_{0jkl} + s_{0ijkl}$, (M1)

where $Y_{ijkl}$ is the occupational status of individual i with father j and grandfather k from community l; $\beta_{0000}$ is the population mean status; and $c_{0l} \sim N(0, \sigma^2_c)$, $g_{0kl} \sim N(0, \sigma^2_g)$, $f_{0jkl} \sim N(0, \sigma^2_f)$, and $s_{0ijkl} \sim N(0, \sigma^2_s)$ are the error terms at the community, grandfather, father, and individual levels, respectively (Snijders and Bosker 1999). (Multilevel models assume that the error terms are normally distributed. The residual errors deviate somewhat from normality. However, given the findings of Maas and Hox (2004), who showed that the estimates of fixed and random effects as well as the standard errors of the fixed effects are robust against violations of the normality assumption, I do not expect serious problems.) The proportion of variance at the father, grandfather, and community levels is given by

$\rho_{c+g+f} = \dfrac{\sigma^2_c + \sigma^2_g + \sigma^2_f}{\sigma^2_c + \sigma^2_g + \sigma^2_f + \sigma^2_s}$, (1)

which is the expected correlation between two randomly selected brothers. This brother correlation is often considered a comprehensive measure of family impact because it captures all the aspects of family background that siblings share (Björklund et al. 2009), including not only all (measurable and nonmeasurable) shared family resources but also, for example, shared neighborhood characteristics and brothers' influence on one another (Jencks et al. 1972). Given that cousins have the same grandfather (and the same community because of the modeling simplification mentioned earlier) but not the same father, the expected correlation between two randomly selected cousins is given by

$\rho_{c+g} = \dfrac{\sigma^2_c + \sigma^2_g}{\sigma^2_c + \sigma^2_g + \sigma^2_f + \sigma^2_s}$. (2)

The observed values for these measures can be compared with what would be expected if intergenerational status transmission followed a Markovian pattern (i.e., one generation was directly influenced only by the previous generation and not by more remote generations). Another way to assess whether a two-generation model adequately represents family influence is to add status measures of the (extended) family. In Model 2,

$Y_{ijkl} = \beta_{0000} + \beta_{0100}\mathrm{FSTAT}_j + c_{0l} + g_{0kl} + f_{0jkl} + s_{0ijkl}$, (M2)

the regression coefficient $\beta_{0100}$ shows the extent to which the father's occupational status contributes to attaining status. I subsequently add the status of the grandfather ($+\beta_{0010}\mathrm{GSTAT}_k$) in Model 3 and the average status of uncles ($+\beta_{0200}\mathrm{USTAT}_j$) in Model 4 to see whether they have an effect over and above that of the father (H1). I add controls in Model 5.
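The models above are estimated with MLwiN from within Stata. For readers who want to experiment, a hedged sketch of the intercept-only model (M1) in Python's statsmodels is shown below; the data file and column names are hypothetical, nested grandfather and father effects are expressed as variance components within communities, the ordering of the fitted components follows the vc_formula dictionary, and fitting data of this size may be slow. The snippet also computes the Markovian benchmark discussed in the text.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical input: one row per groom with columns 'status' (HISCAM),
# 'community', 'grandfather', and 'father' identifiers.
df = pd.read_csv("grooms.csv")

# Intercept-only model (M1): community as the grouping level, with
# grandfather and father as variance components nested within it.
m1 = sm.MixedLM.from_formula(
    "status ~ 1",
    groups="community",
    re_formula="1",
    vc_formula={"grandfather": "0 + C(grandfather)",
                "father": "0 + C(father)"},
    data=df,
)
fit = m1.fit()

var_c = fit.cov_re.iloc[0, 0]  # community-level variance
var_g, var_f = fit.vcomp       # grandfather and father variance components
var_s = fit.scale              # individual-level residual variance

total = var_c + var_g + var_f + var_s
rho_brothers = (var_c + var_g + var_f) / total  # Eq. (1)
rho_cousins = (var_c + var_g) / total           # Eq. (2)

# Markovian benchmark: with father-son correlation r, brothers are expected
# to correlate r*r and cousins (r*r)**2; compare these with the values above.
r = 0.7
print(rho_brothers, rho_cousins, r * r, (r * r) ** 2)
```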
Model 6 shows how the three family effects change over time (H4) by including the following interactions with time:

$+\beta_{1100}\mathrm{FSTAT}_{jkl}\mathrm{TIME}_{ijkl} + \beta_{1010}\mathrm{GSTAT}_{kl}\mathrm{TIME}_{ijkl} + \beta_{1200}\mathrm{USTAT}_{jkl}\mathrm{TIME}_{ijkl}$.

To test the contact mechanism, I add the following interactions of temporal distance (H2a) and geographical distance (H2b) with the grandfather's status in Model 7:

$+\beta_{2010}\mathrm{GSTAT}_{kl}\mathrm{TDIS}_{ijkl} + \beta_{3010}\mathrm{GSTAT}_{kl}\mathrm{GDIS}_{ijkl}$.

Finally, to test the durable resource mechanism, I analyze the subset of cases that can be linked to their great-grandfather. I start in Model 8 by estimating an intercept-only model similar to Model 1, except that there is now an additional great-grandfather level, with error term $h_{0lm} \sim N(0, \sigma^2_h)$, and, to keep a hierarchical structure, the community level is defined as the marriage year and the place of the great-grandfather instead of the grandfather. Model 9 includes controls and the status measures of father, grandfather, and uncles. Model 10 includes the status of the great-grandfather to see whether he has an additional influence (H3), and Model 11 tests whether this influence declines over time, as expected (H5).
--- Results
--- Influence of Father, Grandfather, and Uncles on Occupational Status Attainment
--- Family Influence: Status Resemblance of Brothers and Cousins
Model 1 in Table 2 shows that for the Netherlands, in the second half of the nineteenth and the early twentieth century, the status resemblance of brothers-the comprehensive measure for family impact-is $\rho_{c+g+f}$ = .502 (Eq. (1)). Also, male cousins are rather similar in status ($\rho_{c+g}$ = .321; Eq. (2)), even though they are much more "remote" family than brothers. These results are not congruent with the Markovian model, in which individuals are influenced only by their parents. A correlation between the status of father and son of .7 would produce the observed fraternal resemblance of about .49 (.7 × .7). In a Markovian world, the expected correlation between the statuses of grandfather and grandson would then also be .49, and that of cousins would be .24 (.49 × .49). The latter is much lower than the observed correlation between cousins (.321), perhaps because the process of status attainment is influenced not only by the parents but also by the grandparents.
--- Family Influence: Status Measures of the (Extended) Family
Model 2 shows that men profit greatly from having a father with a high status: if father A has 10 status points more than father B, the son of father A is expected to have about 6.4 status points more than the son of father B ($b_{0100}$ = 0.640; p < .001). By including the father's occupational status, we can understand much of the impact that the family has. The variance that brothers share ($\sigma^2_c + \sigma^2_g + \sigma^2_f$) is reduced from 76.6 in Model 1 to 32.5 in Model 2-a reduction of 57.6 %. The largest proportions of explained variance are at the grandfather (76.8 %) and community levels (64.6 %), indicating compositional effects: communities and grandfathers tend to produce fathers with similar status. Based on Model 3, I conclude that grandfathers have an influence on the status attainment of men over and above that of fathers ($b_{0010}$ = 0.177; p < .001). By including the grandfather's occupational status, the effect of the father is reduced from 0.640 in Model 2 to 0.564 in Model 3. In other words, part of the effect attributed to the father is actually an effect of the grandfather.
The net benefits of having a grandfather with a high status are about one-third of the benefits of having a father with a high status. Although the effect of the grandfather is substantial, it does not do much to better explain the variation in status attainment: in Model 3, 60 % of the variance shared by brothers is explained, only 2.4 % more than in Model 2. One reason, as shown earlier, is that if the occupational status of the grandfather is omitted, the father assumes part of the effect of the grandfather. In the next step, I include the average status of the father's brothers to examine whether grandfathers still have a net effect after the inclusion of uncles. Model 4 shows that the effect of the grandfather's occupational status declines from 0.177 to 0.143 but remains significant (p < .001). This finding means that (1) grandfathers have a direct influence on the status attainment of their grandsons, in line with H1; and (2) 19.2 % of the grandfather effect found in Model 3 is an indirect effect: grandfathers influence their own sons (i.e., sons other than the father), who in turn influence their nephews. The average status of the uncles has a significant positive effect ($b_{0200}$ = 0.105, p < .001) that is about one-fifth of the father's effect. Again, the increase in explained shared variance is slight, at only 0.8 %. The effects of the extended family remain after adding controls in Model 5 (see Table 3: if anything, the effects increase). In conclusion, leaving out the grandfather's and uncles' occupational status would overestimate the effect of the father by 23.1 % (0.640 instead of 0.520), and if I were to base statements about the influence of the family solely on father's occupational status, as is often done, I would substantially underestimate the family influence compared with statements based also on the occupational status of the extended family. One additional status point for everybody in the extended family would have a combined effect of (0.520 + 0.143 + 0.105) = 0.768, which is 20 % higher than the family effect in the parent-offspring model (0.640). Thus, although men benefit most from having a father with a high status, the status of their grandfather and uncles substantially helps (or hinders) their own social position, too.
--- Influence of the (Extended) Family Over Time
In line with the modernization thesis and previous findings, the effect of the father decreased during the nineteenth and early twentieth centuries ($b_{1100}$ = -0.014 per 10 years; p < .001; see Model 6 in Table 3). A new finding, again consistent with modernization theory, is that the effect of uncles, too, decreased during modernization ($b_{1200}$ = -0.012 per 10 years; p < .01). I expected that the expanding role of grandfathers in the lives of their grandsons compensated for the effect of modernization (see H4). Indeed, the effect of the grandfather did not decrease but remained constant ($b_{1010}$ = -0.000, n.s.). Figure 4 graphs the changes in the (extended) family effects. The influence of the father's occupational status is approximately 0.6 for men who married in 1854 and approximately 0.5 for men who married in 1922, which shows a decrease of 16.7 % in 67 years. The influence of the uncles was reduced to about one-half (0.09) of what it was (0.17).
When the effects of the father, grandfather, and uncles were summed, the family influence decreased 18.3 %, from 0.93 in 1854 to 0.76 in 1922.
--- Multigenerational Influence Through Direct Contact
How did grandfathers influence the status attainment of their grandsons? The most obvious mechanism is through direct contact, by which resources can be passed on directly. I predicted that if this mechanism was at work, the grandfather effect would decline with the lower likelihood of direct contact between the grandfather and his grandson(s), that is, when the temporal (H2a) and geographical distance (H2b) between them increased. Model 7 supports both these predictions: the grandfather effect becomes smaller with increasing temporal distance ($b_{2010}$ = -0.001; p < .05) and geographical distance ($b_{3010}$ = -0.011; p < .001). Figure 5 plots the grandfather effect against geographical distance (for those married in 1904, the mean marriage year) for five values of temporal distance: (1) the minimum value (grandson born 36 years after his grandfather), (2) two standard deviations below average (born about 47 years later), (3) average (67 years), (4) two standard deviations above average (87 years), and (5) the maximum value (125 years). The graph shows that if the temporal distance increases, the predicted grandfather effect starts to move toward 0 but never reaches 0. With respect to geographical distance, the graph shows that the grandfather effect is about 0.04 (26.7 %) higher for those grandfathers and grandsons who married in the same municipality (value 0 in the graph) than for those who married about 50 km apart (approximately value 4 in the graph; 95 % of the cases married within 50 km of each other). Taken together, the grandfather effect is predicted to be 0.20 for those most likely to be in contact (temporal distance = 36 years; geographical distance = 0 km) and approximately 0.06 for those for whom it was practically impossible to be in contact (temporal distance = 125 years; geographical distance = e^6 ≈ 400 km). This large difference is evidence that in the nineteenth century, Dutch grandfathers influenced their grandsons' status attainment through direct contact. That the effect never becomes 0 may indicate that grandfathers can also have an influence without necessarily being in direct contact with their grandsons.
--- Multigenerational Influence Without Contact: The Influence of Great-grandfathers
To further examine the idea that one generation can influence another without direct contact, I test for a subset of the data whether great-grandfathers have an influence (given that it was more or less impossible for them to have been in contact with their great-grandchildren). Because those who can be linked to their great-grandfather may form a special selection, I first check whether results for the subset differ in any way from the results presented for all cases. Model 8 in Table 4 shows that the brother correlation is virtually the same ($\rho_{c+h+g+f}$ = .503) as that found in Model 1 ($\rho_{c+g+f}$ = .502). Also, the (extended) family effects are of the same order (Model 9 versus Model 5), lending confidence that the results presented next are not biased by the selection of those who could be linked to great-grandfathers. Model 10 shows that in line with H3, great-grandfathers have a significant positive effect (b = 0.092; p < .001) on status attainment, independent of fathers, grandfathers, and uncles.
This finding supports the idea that a certain generation may influence subsequent generations "well beyond the grave" because durable resources and certain institutions do not cease to exist after a generation passes away. If great-grandfathers are able to influence their great-grandchildren without being in contact, grandfathers must also be able to influence their grandsons without contact. Furthermore, Model 8 shows that the status resemblance of second cousins (i.e., those sharing the same great-grandfather but a different grandfather) is $\rho_{c+h}$ = .196. This result is 66.1 % higher than the expected correlation between second cousins if status transmission were to follow a two-generation Markovian process: .7^3 × .7^3 = .118 (.7^3 is the expected correlation between great-grandson and great-grandfather given a father-son correlation of .7, which is deduced from the observed correlation between brothers: .7 × .7 ≈ .5). I expected that the importance of durable resources and institutions that promote multigenerational influences would have declined with modernization. Therefore, I predicted that the possibility for great-grandfathers to influence their great-grandchildren also decreased as modernization proceeded (H5). Although I find that the great-grandfather effect diminished, this change is not significant (b = -0.008, n.s.; see Model 11). This finding could mean that influence without contact did not lose importance in the period studied, but alternatively that the period of observation is too short (see Fig. 2). Indeed, the literature suggests that fairly long periods are necessary in order to detect trends in social mobility (Breen and Luijkx 2004; Ganzeboom et al. 1989).
--- Conclusion and Discussion
Studies in the field of intergenerational social mobility usually take a two-generation approach: the influence of the family on status attainment is equated with the influence of the parents. The first aim of this article was to study whether this assumption is justified in the context of a modernizing Western society. Specifically, I studied whether taking a multigenerational perspective by including grandfathers and great-grandfathers leads to a more accurate understanding of the occupational status attainment process of Dutch men who married between 1854 and 1922. I conclude that a parent-offspring perspective is too narrow and misrepresents the impact of family background on the Dutch status attainment process during modernization. I base this conclusion on the finding that grandfather's and great-grandfather's occupational status have a substantial influence on their grandsons' status (independent of fathers and uncles), and on the finding that the status correlation between (second) cousins is higher than would be expected had family influence been limited to that of parents. The association between the status of father and son-sometimes referred to as the intergenerational status correlation/elasticity-is often used to compare societies in terms of their openness (see, e.g., Björklund and Jäntti 2000; Ganzeboom et al. 1991; Yaish and Andersen 2012). The multigenerational model shows that this two-generational measure underestimates the influence of (extended) family background in the Netherlands during modernization. However, in terms of predicting an individual's status (explained variance), the gain from a multigenerational model is moderate. The second aim of this article was to gain more insight into the operation of multigenerational influence.
Two important mechanisms have been proposed in the literature: influence through contact and influence without contact through durable resources and institutions. I found evidence suggesting that both mechanisms are at work. On one hand, the grandfather influence was stronger the greater the likelihood of contact between grandfather and grandson. On the other hand, a grandfather effect remained even if it was highly unlikely for a grandfather to have been in contact with his grandson. Moreover, because contact was virtually impossible for great-grandfathers, I see their effect as further support for Mare's (2011) claim that multigenerational influence does not necessarily require contact: some privileges may endure even after the original holder has passed away. The Netherlands modernized rapidly after 1850. Treiman (1970) and other modernization theorists have claimed that societies became more open because of these modernization processes. Therefore, durable resources were expected to have lost importance over time as a mechanism for multigenerational influence: in a more meritocratic society, status-maintaining institutions are likely to break down, and durable resources (physical capital, for instance) are likely to lose ground to more perishable resources (human capital, for example). In line with this predicted change from ascription to achievement, I found that the influence of fathers, uncles, and great-grandfathers on status attainment decreased over time (although the latter was not significant). In the same period, life expectancy increased in the Netherlands. Therefore, in the case of grandfathers, the contact mechanism was expected to have gained importance over time because contact between grandparents and grandchildren was more likely. In other words, while grandfathers were expected to lose influence because of modernization processes, they were also expected to gain influence because of the greater overlap in lives with their grandsons. The results suggest that these opposing developments cancelled each other out: grandfathers were able to retain their influence. Because the Netherlands is a prototypical case in the sense that these developments (modernization and increasing life expectancy) occurred in many Western countries, one would expect similar findings for other Western societies. Only empirical evidence can prove whether this is true, and an exciting development in this respect is the ongoing digitization of vital registers across the world (Van Leeuwen and Maas 2010). Hopefully, it will be just a matter of time before the generations within these data are linked so that this study can be replicated. Although the historical data used are rich in terms of allowing one to study the influence of fathers, uncles, grandfathers, and even great-grandfathers over a long period and for a large geographical area, these data have limitations. As mentioned in the Method section, a difficult issue for studies on grandfather effects is to rule out the possibility that an observed grandfather effect is partly or wholly a statistical artifact resulting from the inability to measure perfectly all the relevant resources of the intermediate generation (father, mother, uncles, and aunts) (Clark 2014). For example, the grandfather effect might (partly) reflect mother's influence because mother's level of resources is not directly measured and is correlated with her father-in-law's occupational status through assortative mating (Zijdeman and Maas 2010).
Whereas most studies control only for the father's status, an advantage of this study is that it also controls for the status of uncles. Still, these measures may not be detailed enough to filter out all effects of the intermediate generation (such as those of the mother). Chan and Boliver (2013) showed for contemporary Britain that a grandfather effect remained even after they added additional measures for parental resources (parental education, income, and homeownership). This result may offer some comfort but only to the extent that their results are generalizable to the Dutch historical context. Unfortunately, the possibilities of adding more measures of parental resources are limited when using historical data, and so the results of this study should be interpreted with some caution. Currently, studies tend to establish whether a grandfather effect exists in a certain context. With evidence growing that such effects are indeed present in many contexts (Allingham 1967; Beck 1983; Campbell and Lee 2003, 2008, 2011; Chan and Boliver 2013, 2014; Goyder and Curtis 1977; Pohl and Soleilhavoup 1982), researchers also need to start explaining these effects. Showing that observed grandparent effects are truly the result of the mechanisms proposed in the literature helps to build confidence that grandparent effects are not just unobserved parent effects. I have taken an initial step in testing the mechanisms, although the indicators used are certainly not perfect. For example, less overlap in grandparents' and grandsons' lives might indicate fewer possibilities for contact but may also reflect that differences between the cohorts of the grandfather and the grandson are greater, which could inhibit the grandfather's influence even if there was contact. Future research should advance efforts to test the mechanisms by using data with more direct measures of durable resources and of contact between grandparents and grandchildren. Contemporary studies can be designed specifically to include more direct measures. Historical studies could benefit from further digitization and linkage of historical records, which may provide information on, for example, coresidence and timing of death of family members. Finally, the realization that a parent-offspring approach may be too limited in scope to allow an understanding of social stratification in certain contexts has prompted studies mainly of grandparents. However, this study found that the influence of uncles was almost as large as that of grandfathers and that even great-grandfathers had an impact. Without a clear idea why and under which conditions uncles or great-grandfathers can be expected to have an influence, it is difficult to claim that observed effects reflect anything more than unobserved parental or community characteristics. Therefore, we need to widen our view and develop and test theory not just on grandparents but on other extended family members as well.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Preconception and prenatal stress impact fetal and infant development, and women of color are disproportionately exposed to sociocultural stressors like discrimination and acculturative stress. However, few studies examine links between mothers' exposure to these stressors and offspring mental health, or possible mitigating factors. Using linear regression, we tested associations of prenatally assessed maternal acculturative stress and discrimination with infant negative emotionality among 113 Latinx/Hispanic, Asian American, Black, and Multiethnic mothers and their children. Additionally, we tested interactions between stressors and potential pre- and postnatal resilience-promoting factors: community cohesion, social support, communalism, and parenting self-efficacy. Discrimination and acculturative stress were related to more infant negative emotionality at approximately 12 months old (M = 12.6, SD = .75). In contrast, maternal report of parenting self-efficacy when infants were 6 months old was related to lower levels of infant negative emotionality. Further, higher levels of parenting self-efficacy mitigated the relation between acculturative stress and negative emotionality. Preconception and prenatal exposure to sociocultural stress may be a risk factor for poor offspring mental health. Maternal and child health researchers, policymakers, and practitioners should prioritize further understanding these relations, reducing exposure to sociocultural stressors, and promoting resilience.
The Developmental Origins of Health and Disease (DOHaD) model links early life exposures to health across the lifespan (Barker, 2007; Gluckman et al., 2016; Hentges et al., 2019). This model includes the idea of fetal programming, wherein fetal development is impacted by changes to the intrauterine environment as a result of maternal context, including biological and psychological adversity. DOHaD and fetal programming hypotheses are supported by research linking prenatal maternal stress, mood, and adversity to the fetal environment and to adverse birth outcomes, disrupted motor and cognitive development, long-term health risks such as obesity and cardiometabolic disorders, and mental health challenges including depression, anxiety, attention deficit hyperactivity disorder, conduct disorder, and more general internalizing and externalizing symptoms (Davis & Sandman, 2010, 2012; Essau et al., 2018; Glynn et al., 2018; Graignic-Philippe et al., 2014; Hicks et al., 2019; Irwin et al., 2020; Lupien et al., 2009; Park et al., 2014; Racine, Plamondon, et al., 2018; Sandman et al., 2015; Van den Bergh et al., 2017). Importantly, studies attempting to understand psychiatric risk earlier in life have identified negative emotionality in infants, which includes frustration, fear, discomfort, sadness, and low soothability (Rothbart, 2007), as an indicator of future mental health challenges (Bush et al., 2017; Crawford et al., 2011; Luecken et al., 2015).
--- Sociocultural stressors
Although there has been substantial growth in evidence supporting DOHaD and links between maternal stress and child development, there has been relatively little focus in this area on sociocultural stressors that disproportionately impact communities of color (Conradt et al., 2020; D'Anna-Hernandez et al., 2015; Liu & Glynn, 2021). In fact, there is a dearth of research that examines preconception or prenatal influences on offspring behavioral development among diverse and/or low-income populations, despite the greater risk of exposure to stress within these communities (Bush et al., 2017; Conradt et al., 2020; Demers et al., 2021). Therefore, guided by both DOHaD and the Integrative Model, we examine two prevalent stressors among populations of color in the current study, discrimination and acculturative stress, as potential prenatal stressors related to offspring mental health. Discrimination refers to unjust, unequal, or biased attitudes or behavior towards an individual because of their race, sex, class, or other characteristics. Importantly, women of color often hold multiple marginalized and minoritized identities (including gender and race/ethnicity) and are therefore at higher risk of experiencing multiple forms of discrimination (Earnshaw et al., 2013; Watson et al., 2016). Acculturative stress describes the stress of adapting to new cultures, including new dominant behaviors, customs, schools of thought, and values (Berry, 1997; D'Anna-Hernandez et al., 2015; Sam & Berry, 2010). Acculturative stress is often discussed in the context of immigrant populations; however, it has also been described as a phenomenon facing all members of historically nondominant cultural groups within the US (Walker, 2007). Both discrimination and acculturative stress have been linked to maternal mental health (Canady et al., 2008; D'Anna-Hernandez et al., 2015; Ertel et al., 2012). Discrimination has also been associated with physiological change during pregnancy and adverse birth outcomes such as preterm birth and low birthweight (Alhusen et al., 2016; Chaney et al., 2019; Dominguez et al., 2008; Giurgescu et al., 2011; Hilmert et al., 2014).
Although there is very little existing research examining the impact of preconception or prenatal maternal exposure to discrimination or acculturative stress on offspring health, one recent study did find an association between prenatal discrimination and negative emotionality and inhibition/separation problems among infant offspring (Rosenthal et al., 2018). These findings support existing concerns regarding the intergenerational impact of sociocultural stressors and highlight the urgent need for further study in this area.
--- Resilience-promoting factors
In addition to calling for recognition of unique cultural contexts and oppressive systems facing populations of color, García Coll's Integrative Model emphasizes the importance of identifying factors that contribute to positive development (García Coll et al., 1996). In the current study, we are particularly interested in factors contributing to resilience, or adaptation and wellness in the presence of adversity and risk (Masten & Coatsworth, 1998; Rutter, 1987). Resilience processes can be (a) compensatory/promotive, when a resource exerts a main, positive effect on an adaptive outcome in the presence of a risk factor; or (b) protective, when a resource reduces the relation between a risk factor and a maladaptive outcome, as reflected in an interaction effect of the risk factor and resource on the outcome (Fergus & Zimmerman, 2005; Zimmerman et al., 2013; Zolkoski & Bullock, 2012). Few studies have specifically addressed the pre- or postnatal resilience-promoting factors that contribute to infant mental health. Therefore, for this exploratory investigation, we informed our selection of resilience-promoting factors with both theory and evidence. Following the Integrative Model's emphasis on the different ecologies that surround a developing child (e.g., family and community), as well as culturally specific influences (i.e., communalism) (Cabrera, 2013; García Coll et al., 2000; Perez-Brena et al., 2018; Umaña-Taylor et al., 2015), we chose to investigate three domains of resilience-promoting factors with existing empirical support outside of infant mental health: social capital, communalism, and parenting self-efficacy. Community cohesion and social support encompass two facets of social capital that have been linked to reduced levels of stress and other mental health problems (Hong et al., 2014; National Academies of Sciences Engineering and Medicine, 2019; Saleem et al., 2018; Svensson & Elntib, 2021; Yamada et al., 2021). Social support refers to one's network of social connections that provide both emotional and tangible forms of support, while community cohesion is defined as the presence of mutual trust and solidarity within one's local community (Sampson et al., 1997). Although both community cohesion and social support have been related to better birth outcomes, albeit inconsistently (Feldman et al., 2000; Hetherington et al., 2015; National Academies of Sciences Engineering and Medicine, 2019; Schetter, 2011), less research has examined the relation between prenatal maternal social capital and child outcomes. Communalism is a cultural orientation towards interdependence (thought to stand in contrast to the Eurocentric value of independence) that emphasizes social bonds, social duties, and the importance of collective well-being, both outside and within one's family (i.e., familism) (Abdou et al., 2010; Schwartz et al., 2010).
Communalism has been highlighted by researchers as a potential culturally specific resilience-promoting asset for racially/ethnically diverse populations, including Black, Latinx/Hispanic, and Asian American groups (Moemeka, 1998; Schwartz et al., 2010; Woods-Jaeger et al., 2021). However, evidence has been mixed in its level of support for this hypothesis (Abdou et al., 2010; Gaylord-Harden & Cunningham, 2009; Harris & Molock, 2000), and some scholars have suggested that higher levels of communalism may sensitize one to the presence of stressors such as discrimination (Goldston et al., 2008; Perez-Brena et al., 2018). Finally, the concept of parenting self-efficacy is grounded in social cognitive theory (Bandura, 1986, 1997) and describes a caregiver's confidence in their ability to parent successfully. Self-efficacy is informed both by one's individual beliefs and by one's observations and experiences in the environment (Bloomfield & Kendall, 2012; Raikes & Thompson, 2005). Parenting self-efficacy has been linked to better psychological health and adjustment among both children and parents (Albanese et al., 2019; Wittkowski et al., 2017), but there is a lack of research on parenting self-efficacy in the context of sociocultural stressors.
--- Current study
Drawing on the Integrative Model's framework for studying unique determinants of risk, as well as resilience, among youth of color (García Coll et al., 1996), and DOHaD's emphasis on early life antecedents of health and development (Barker, 2007; Gluckman et al., 2016), the current study prospectively examines intergenerational risk and resilience pathways to infant mental health. Specifically, we examine risk pathways by testing relations between sociocultural stressors assessed prenatally (discrimination and acculturative stress) and infant offspring negative emotionality. We test for both compensatory/promotive and protective resilience pathways by assessing main and interactive associations (with sociocultural stressors) of community cohesion, social support, communalism, and parenting self-efficacy with negative emotionality. Given the dearth of existing research on maternal prenatal sociocultural stress, infant temperament, and resilience-promoting factors, this was largely an exploratory study. We did anticipate that sociocultural stress would be associated with infant negative emotionality and that resilience-promoting factors might buffer these relations.
--- Method
--- Participants
Study participants were a subsample of 113 mothers and their children (53% male) who identified as a race or ethnicity other than White, drawn from a larger longitudinal study beginning in pregnancy. In our sample, 69% of mothers self-identified as Latinx/Hispanic, 15% as Asian American, 11.5% as Multiethnic, and 4.4% as Black (see the note on race/ethnicity terminology below). Participants were recruited from Southern California medical clinics during their first trimester of pregnancy. Inclusion criteria were singleton intrauterine pregnancy, being 18 years of age or older and English-speaking, and absence of tobacco, alcohol, or drug use during pregnancy as well as of medical conditions impacting endocrine, cardiovascular, hepatic, or renal functioning. Participant characteristics are reported in Table 1. On average, mothers were 28 years old (SD = 5.65, range = 18.05-41.57) with a median household income of $52,158. Approximately a third of participants (31.9%) were born outside the U.S. Similarly, about a third of the sample spoke a language other than English in their household (35.4%).
There was also a wide distribution of maternal education level: 30.1% completed high school or less, and 49.6% had some college, an associate degree, or a vocational or certificate program degree. Lastly, 20.3% of the sample had completed college or graduate school.
--- Procedure
All study procedures were approved by the responsible Human Subjects Review Board. Study subjects participated in a series of pre- and postnatal study visits that included questionnaires and structured interviews to collect information on maternal and infant demographics, mood, health, risk, resilience, and infant development. The current study sample includes all participants who completed the 12-month postpartum study visit. Supplementary Table 1 provides an overview of data collection.
--- Measures
Gestational age at birth (GAB) was determined using the last menstrual period and an ultrasound prior to 20 weeks gestational age, in line with guidelines of the American College of Obstetricians and Gynecologists (American College of Obstetricians and Gynecologists Committee on Obstetric Practice, 2017).
Note on race/ethnicity terminology: How one identifies one's own race/ethnicity is not straightforward, and we acknowledge the ways in which imposed categorizations/terminology can, either advertently or inadvertently, feel inaccurate and/or cause harm. Therefore, we aim to define which terms we use to denote our participants' race/ethnicity, while acknowledging they are not universally standard or identified with by various cultural groups within the U.S. In our study, Black is used to refer to people of African ancestry, Asian American to refer to people of Asian descent, Latinx/Hispanic to refer to participants from Spanish-speaking countries and/or of Latin American descent, and Multiethnic to refer to participants belonging to multiple racial/ethnic groups (American Psychological Association, 2019; Noe-Bustamante et al., 2020; U of SC Aiken, n.d.). We also report participants' self-reported verbatim race/ethnicity: three participants stated they were African American, six Asian, one Asian American, one Asian Japanese, one Black, one Black and White, one Black Latin, one Black Mixed, one Caucasian and Pakistani, one Chinese, one East Indian, two Filipino, one Half Chinese/Half Caucasian, one Half Mexican, Half White, one Hispanic, one Hispanic and African American, one Hispanic and Black, one Hispanic and Native American, one Hispanic/Latina, one Hispanic/Mexican, one Hispanic/Pilipino/Hawaiian, one Indian, one Latina, one Latina/Hispanic, one Mexican, one Mexican/Hispanic/Latina, one Mexican/White, one Mixed, one Mostly White, Hispanic, Native American, one Filipino, one South Asian, one Taiwanese, one Vietnamese, one White and African American.
--- Sociocultural stressors
Acculturative stress was measured at a 25-week prenatal visit with the Societal, Attitudinal, Environmental, and Familial Acculturative Stress Scale short version (SAFE; Mena et al., 1987). Subjects were asked to rate the stressfulness of 24 items on a 5-point Likert scale (e.g., "It bothers me that family members I am close to do not understand my new values" and "It bothers me when people pressure me to assimilate (or blend in)"), with higher scores indicating greater acculturative stress. The SAFE has demonstrated strong reliability and has been used with different immigrant and later generation populations (Ahmed et al., 2011; D'Anna-Hernandez et al., 2015; Mena et al., 1987; Shattell et al., 2008). Internal reliability in the current sample was 0.88.
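Scale scores in this study are simple item sums, and internal consistency is reported as Cronbach's alpha; a minimal sketch of both computations is shown below. The data file and item column names (safe_1 ... safe_24) are hypothetical, introduced only for illustration.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a DataFrame with one column per scale item:
    alpha = k/(k-1) * (1 - sum of item variances / variance of the total)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical columns holding the 24 SAFE items (5-point Likert responses).
df = pd.read_csv("prenatal_25wk.csv")
safe_items = df[[f"safe_{i}" for i in range(1, 25)]]

df["acculturative_stress"] = safe_items.sum(axis=1)  # higher = more stress
print(cronbach_alpha(safe_items))  # reported as 0.88 in the current sample
```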
Discrimination was assessed at the 25-week prenatal visit with Williams' Major and Everyday Discrimination Scales (Williams et al., 2007). The Major Experiences of Discrimination scale asks participants if they have experienced unfair treatment as it pertains to nine different situations (e.g., ever been unfairly fired, stopped by the police, and so forth). The number of situations participants endorse is summed for a total score. The Everyday Discrimination Scale asks participants how often in their day-to-day life they have experienced discriminatory treatment in 10 contexts, such as being treated with less courtesy than others or followed around in stores (never, once, two or three times, four or more times). The number of times reported is summed for a total score. Both scales have demonstrated construct validity (Taylor et al., 2004).
--- Resilience-promoting factors
Community cohesion was assessed at the 25-week prenatal visit with 12 statements from the Social Ties Scale (Cutrona et al., 2000) referring to different types of community support, which participants indicated as either true or false (e.g., neighbors get together to deal with community problems, neighbors help and look out for one another). Endorsed items were summed for a total score. This scale has demonstrated adequate reliability in previous research (Cutrona et al., 2000) and had an alpha value of 0.86 in the current study. Social support was also measured at the 25-week visit with the Medical Outcomes Study Social Support Survey (MOS-SS; Sherbourne & Stewart, 1991). The MOS-SS consists of 19 items asking about tangible support, positive social interaction, affection, and emotional/informational support. This scale has been used extensively and demonstrates strong reliability (Racine, Madigan, et al., 2018; Sherbourne & Stewart, 1991). The current study uses a standardized total score, and internal reliability was .98. Communalism was assessed at the 35-week prenatal visit with a 28-item scale developed by Abdou et al. (2010) from two well-established scales assessing familism and communalism. Participants responded to items such as "I owe it to my parents to do well in life" or "I would take time off from work to visit a sick friend" on a 4-point scale ranging from "strongly disagree" to "strongly agree." Items are summed for a total score. The scale has demonstrated good reliability with pregnant women (Abdou et al., 2010) and had an alpha value of 0.84 in the current study. Parenting self-efficacy was collected at 6 months postpartum with the Maternal Self-Efficacy in the Nurturing Role Questionnaire (Pedersen et al., 1989). The questionnaire consists of 16 items on a 7-point Likert scale asking mothers how representative they feel different statements are of their parenting experience. Example items include, "I am concerned that my patience with my baby is limited" and "I trust my feelings and intuitions about taking care of my baby." Items are summed for a total score. This scale has previously demonstrated adequate test-retest reliability and internal consistency (Pedersen et al., 1989; Porter & Hsu, 2003) and had internal reliability of 0.84 in the current study.
--- Infant temperament
Infant negative emotionality at 12 months was assessed with the Infant Behavior Questionnaire (IBQ; Gartstein & Rothbart, 2003), a 191-item measure that has been used extensively in developmental research and demonstrates good reliability and validity (Goldsmith & Campos, 1990; Worobey & Blajda, 1989).
In order to reduce maternal reporting bias, questions assess the infant's concrete behaviors in clearly defined situations, for example, "During a peek-a-boo game, how often did the baby smile?" and "How often during the last week did the baby startle to a sudden or loud noise?" Individual item responses can range from 1 "never" to 7 "always." The questionnaire items map onto three primary temperament dimensions: Negative Emotionality, Surgency/Extraversion, and Orienting/Regulation. The Negative Emotionality dimension, the focus of this investigation, comprises four subscales: Sadness, Fear, Falling Reactivity, and Distress to Limitations. Internal reliability of this dimension in the current study was 0.92.
--- Data analyses
Descriptive analyses included examination of sample demographics, data distributions, bivariate associations, and levels of sociocultural stressors and resilience-promoting factors by race/ethnicity. To select model covariates, we examined demographic variables with prior theoretical or empirical support for association with infant temperament with bivariate correlations, including biological sex at birth, GAB, household income, parental cohabitation, birth order, maternal nativity status, and maternal education. All variables significantly associated (p < .05) with temperament were included in subsequent models. The amount of missing data averaged less than 4% across all variables included in analyses, and Little's MCAR test indicated there was no systematic pattern of missing values (χ²(52) = 55.36, p = .35). Hierarchical linear regression was utilized to first assess the impact of sociocultural stressors on infant negative emotionality above and beyond covariates. Only sociocultural stressors that showed bivariate associations with negative emotionality were included in regression models. Promotive (main) and protective (interaction) effects of the resilience-promoting factors (community cohesion, social support, communalism, and parenting self-efficacy) were subsequently tested in a third step. The same four models were repeated with each sociocultural stressor. Lastly, because a number of DOHaD findings have suggested that vulnerabilities to prenatal stress are sex-differentiated (Braithwaite et al., 2017; Clayborne et al., 2021; Glynn & Sandman, 2012; Hicks et al., 2019; McLaughlin et al., 2021; Rosa et al., 2019; Sandman et al., 2013; Sandman et al., 2015; Sharp et al., 2015), we tested the interaction of stressor × sex in a fourth step. All continuous variables contributing to interaction terms were centered prior to analysis. Hayes' (2018) PROCESS macro for SPSS was used to plot and probe significant interactions at the 16th and 84th percentiles of the predictor variable (indicating low and high levels of the predictor; Hayes, 2018), and the 16th, 50th, and 84th percentiles of the moderator (indicating low, moderate, and high levels of the moderator). The Johnson-Neyman technique was utilized to define regions of significance (Hayes, 2018).
--- Results
--- Descriptive information
Table 2 displays means and bivariate associations of study variables. All sociocultural stressors were positively associated with one another. Among resilience-promoting factors, social support was associated with both community cohesion and communalism. Everyday discrimination and acculturative stress were related to more negative emotionality, while parenting self-efficacy and GAB were related to less negative emotionality.
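The moderation analyses described above were run in SPSS with the PROCESS macro. As a rough translation only, the following sketch shows a centered interaction model with simple-slope probing in Python; the data file and variable names are hypothetical, and only GAB is included as an example covariate.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("study_data.csv")  # hypothetical file and column names

# Center the variables that form the interaction term.
for col in ("acc_stress", "parenting_se"):
    df[col + "_c"] = df[col] - df[col].mean()
df["stress_x_pse"] = df["acc_stress_c"] * df["parenting_se_c"]

X = sm.add_constant(df[["gab", "acc_stress_c", "parenting_se_c", "stress_x_pse"]])
fit = sm.OLS(df["neg_emotionality"], X, missing="drop").fit()

# Simple slope of acculturative stress at moderator value w is
# b_stress + b_interaction * w; probe at the 16th/50th/84th percentiles.
b, cov = fit.params, fit.cov_params()
for q in (0.16, 0.50, 0.84):
    w = df["parenting_se_c"].quantile(q)
    slope = b["acc_stress_c"] + b["stress_x_pse"] * w
    se = np.sqrt(cov.loc["acc_stress_c", "acc_stress_c"]
                 + 2 * w * cov.loc["acc_stress_c", "stress_x_pse"]
                 + w ** 2 * cov.loc["stress_x_pse", "stress_x_pse"])
    print(f"PSE percentile {q:.0%}: slope = {slope:.3f}, t = {slope / se:.2f}")

# Scanning w over its observed range and finding where |t| crosses the
# critical value yields the Johnson-Neyman region of significance.
```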
Participants' average levels of sociocultural stressors and resilience-promoting factors are presented by race/ethnicity in Table 3. Although differences across race/ethnicity were not tested statistically, descriptively Black and Multiethnic participants experienced the greatest amount of discrimination on average, while Asian American participants had the highest levels of acculturative stress. Black participants reported the highest average levels of all resilience-promoting factors. --- Risk and resilience analyses Both everyday discrimination and acculturative stress (assessed prenatally) significantly predicted infant negative emotionality after adjusting for GAB (see Step 1 in Tables 4 and 5 and Figure 1). When entered into the linear regression model with everyday discrimination, parenting self-efficacy (PSE) had a promotive/main effect on negative emotionality such that higher levels of parenting self-efficacy were associated with less infant negative emotionality (see Table 4, Figure 1a). When entered into the model with acculturative stress, parenting self-efficacy had a significant protective/interaction effect, with moderate and higher levels of parenting self-efficacy buffering the relation between acculturative stress and infant negative emotionality (see Table 5, Figure 1b; Conditional Effects: Low PSE: t = 3.40, p < .001; Moderate PSE: t = 1.21, p = .23; High PSE: t = -0.57, p = .57). The Johnson-Neyman Regions of Significance test indicated the relation between acculturative stress and negative emotionality was no longer statistically significant when level of parenting self-efficacy was above 97.02 (approximately the 43rd percentile in the current sample). As parenting self-efficacy levels increased beyond 97, the relation between acculturative stress and negative emotionality continued to decrease in strength. No other resilience factors (community cohesion, social support, or communalism) were found to have significant main or interaction effects on negative emotionality in models with acculturative stress or everyday discrimination (six models in total). These results are provided in the Supplementary Material (see Supplementary Tables 4-7). Results testing the interaction of sex and sociocultural stressors are reported in the Supplementary Material. The interaction of sex and acculturative stress during pregnancy reached statistical significance in some (see Supplementary Tables 5, 7, 9), but not all models. Plotting this interaction suggests the association between acculturative stress and negative emotionality may be stronger for girls than boys (see Figure 2). --- Discussion The far-reaching effects of mental health disorders in childhood are disproportionately felt by youth of color (Alegria et al., 2010; Marrast et al., 2016), emphasizing the importance of understanding and ultimately intervening with early antecedents and precipitants of childhood mental health challenges. Although it is established that preconception and prenatal stress impacts offspring mental health (Graignic-Philippe et al., 2014; Park et al., 2014; Van den Bergh et al., 2017), to date stressors emphasized by the Integrative Model as critical for understanding development in populations of color, including discrimination and acculturative stress (García Coll et al., 1996), have been understudied in this context. Further, questions remain unanswered about which maternal factors promote infant resilience to preconception and prenatal adversity exposure (Liu & Glynn, 2021).
Results of the current study add empirical evidence to these existing research gaps. Specifically, we found that maternal experiences of acculturative stress and everyday discrimination, assessed prenatally, predicted infants' greater negative emotionality at 12 months of age, but that mothers' parenting self-efficacy at 6 months of age counteracted these effects. Our findings have implications for future study as well as prevention and intervention. Accumulating research has shown that acculturative stress and discrimination are harmful to one's health (Bekteshi & van Hook, 2015; D'Anna-Hernandez et al., 2015; Paradies et al., 2015; Revollo et al., 2011; Williams et al., 2019). Here, we see evidence that they may also be risk factors for the mental health of the next generation. Notably, we did not find an association between major events of discrimination and infant negative emotionality, consistent with previous research findings that more chronic, everyday discrimination may be more harmful to health and development (Ayalon & Gum, 2011; Bennett et al., 2010; Wheaton et al., 2018). Therefore, the development of, and research on, policies and programs to reduce maternal stress exposure must consider acculturative stress and chronic discrimination. Because we found associations between parenting self-efficacy and lower levels of infant negative emotionality, and did not find links of the same magnitude for communalism, social support, or community cohesion, our results also indicate the potential of intervention and prevention programs that enhance parenting self-efficacy to promote child emotional health and reduce later mental health challenges. One method for achieving this goal could involve brief parenting support and parenting skill sessions embedded into pediatric well-child visits (Weisleder et al., 2016) or home visiting programs (Granado-Villar et al., 2009) throughout a child's first year of life. There are a number of brief parenting interventions, such as Triple P, that have already been shown to improve parenting self-efficacy (Gilkerson et al., 2020; Tully & Hunt, 2016). The current study also highlights some priority areas for future research. More studies are needed to replicate and expand the links between prenatal sociocultural stress and infant negative emotionality, including through the study of additional indicators of emotional and cognitive development, and later mental health outcomes. Notably, analyses found some evidence to suggest that the association between acculturative stress and negative emotionality differed by sex. Specifically, in some models, the effect appeared stronger for girls than boys. Future research examining this moderation by sex with larger sample sizes could improve understanding of this potential interaction, but these results do align with those of previous researchers (Braithwaite et al., 2017; Hicks et al., 2019; Sandman et al., 2013). Evidence suggests mechanisms behind these results may include differential HPA axis and placental responses to stress based on fetal sex (Hicks et al., 2019). Another important observation from this study was the presence of statistically significant associations among all of the sociocultural stressors examined, suggesting that women of color face multiple co-occurring stressors related to their race/ethnicity and social position in a society built on structural racism (Bailey et al., 2017).
For this reason, we did not examine specific types of discrimination, such as sexism and racism, but rather assessed the impact of any form of chronic discrimination on birth outcomes. However, future research that considers distinctive sources of discrimination and other stressors, including structural racism, is necessary. Next steps for this work should also focus on identifying the psychobiological mechanisms of intergenerational transmission of sociocultural stress from mother to child during pregnancy. [Table fragment: Acculturative stress × Parenting self-efficacy, 0.00, 0.00, -0.25, .011. Note: a model run with the inclusion of household income, parental cohabitation, infant sex, and parity did not substantively change results; the measure was assessed prenatally.] One possible mediating factor is earlier GAB, which was related to infant negative emotionality in the current study, and has previously been linked to maternal experiences of prenatal discrimination (although we did not see associations with acculturative stress or everyday discrimination; Alhusen et al., 2016; Christian, 2020). Maternal distress stemming from sociocultural stress also could be an intermediary factor; this hypothesis is supported by research findings that discrimination and acculturative stress are linked to depressive symptoms (Canady et al., 2008; D'Anna-Hernandez et al., 2015) and infant temperament (see Supplemental Table S10). In this study, community cohesion, communalism, and social support did not show statistically significant resilience-promoting effects. However, continuing to test these factors is important because their effectiveness may be dependent on certain contextual factors, such as cultural background or acculturation level. It is important that future research statistically test these differences in larger samples, as it may guide understanding of what forms of resilience and protective factors are more likely accessible within communities of color. There are also limitations to the current research that are important to note. First, the sample was limited in terms of its racial/ethnic diversity, with the majority of participants (69%) being Latinx/Hispanic. Although discrimination is a universal experience for all populations of color, these experiences are not distributed equally or uniformly across populations (Lee et al., 2019). Research on larger samples that allows for the statistical testing of these differences, as well as the examination of relations between sociocultural stressors and infant outcomes separately for mothers and infants of various race/ethnicities, could help to elucidate if these stressors are more salient for some populations versus others, and why. Second, while our conclusion that parenting self-efficacy may have a positive impact on subsequent infant negative emotionality is strengthened by the distal and sequential separation of the measures (6 and 12 months), it is important to acknowledge that these findings are based on maternal report and are correlational. As such, there is a possibility that parenting self-efficacy may be enhanced when parenting an infant with an easier temperament. However, prior research findings suggest that parenting self-efficacy prospectively predicts infant negative temperament, but not the other way around (Verhage et al., 2013).
It is also conceivable that these relations may be influenced by maternal bias, although a recent study of 935 mothers found configural, metric, and scalar invariance between mothers with and without a lifetime history of depression across dimensions of maternal-rated child temperament, leading the authors to conclude that maternal report of youth temperament is not biased by maternal mental illness (Olino et al., 2020). The likelihood of bias is also reduced by the IBQ's design, which aims to avoid maternal reporting bias by asking questions about infants' concrete behavior in specific situations. Still, future research that confirms the relation between sociocultural stress, parenting self-efficacy, and infant temperament as measured with independent observers or with behavioral measures would strengthen the validity of the current findings. Lastly, this study did not include postnatal measures of discrimination and acculturative stress, thus limiting our understanding of whether preconception or prenatal sociocultural stress has a distinctive impact on infant temperament over and above postnatal sociocultural stress. Future research that includes these measures at both pre- and postnatal timepoints can assist in understanding how timing matters for stress transmission. Despite this study's limitations, the implications of our findings are clear. In the field of maternal and child health, several truths must be acknowledged. First, women of color face more stressors compared to their White counterparts because of societal systems that marginalize people of color, and second, this stress exposure translates to health risk for offspring, potentially perpetuating health disparities across generations. This recognition is particularly important in the current moment, as we are presently experiencing a unique historical time of societal upheaval and threat against communities of color. The years leading up to 2022 have been marked by rising national rates of White nationalism and White supremacy, racially motivated hate crimes, anti-immigrant rhetoric and policies, and racist police brutality (Boyd, Krieger, et al., 2020; Seaton et al., 2018). The year 2020 was defined by the COVID-19 pandemic, which illuminated and exacerbated economic and health inequities across racial lines and exposed the deeply embedded nature of racism in the U.S. (Boyd, Krieger, et al., 2020; Liu & Modir, 2020; Seaton et al., 2018). The consequences of COVID-19 for communities of color, in conjunction with continued police killings of unarmed Black people and the massive protests that followed, have led many cities and organizations across the country to make a long overdue declaration: that racism is a public health crisis (Krieger, 2020). Findings of the current study and others suggest that heightening racism across the country will have a long-lasting impact on maternal and infant health, yet COVID-19 is ironically likely to shift attention away from these areas in the short term, due to urgent competing health priorities (Jacob et al., 2020). Therefore, there is a need for research, policy, and practice that pushes for further understanding of relations between prenatal sociocultural stress and infant outcomes, that reduces stress exposure, and that promotes resilience in this context.
Having a comprehensive understanding of the role of structural racism in health disparities is critical for anyone engaging in this work, and advocates have called for researcher and practitioner training in topics such as structural competency, cultural humility, structural determinants of disease, defining race as a social and power construct, and institutional inequities as a root cause of injustice (Bailey et al., 2017; Barkley et al., 2013; Cerdeña et al., 2020; Metzl et al., 2018; Metzl & Hansen, 2014). Lastly, while interventions focused on prenatal health and child and family well-being are needed and important, true primary prevention to achieve health equity necessitates the creation of policies and systems that no longer systematically undermine the health of people of color (National Scientific Council on the Developing Child, 2020). Supplementary material. The supplementary material for this article can be found at https://doi.org/10.1017/S0954579422000141. --- Conflicts of interest. None.
The idea of resource scarcity permeates health ethics and health policy analysis in various contexts. However, health ethics inquiry seldom asks, as it should, why some settings are 'resource-scarce' and others not. In this article I describe interrogating scarcity as a strategy for inquiry into questions of resource allocation within a single political jurisdiction and, in particular, as an approach to the issue of global health justice in an interconnected world. I demonstrate its relevance to the situation of low- and middle-income countries (LMICs) with brief descriptions of four elements of contemporary globalization: trade agreements; the worldwide financial marketplace and capital flight; structural adjustment; imperial geopolitics and foreign policy. This demonstration involves not only health care, but also social determinants of health. Finally, I argue that interrogating scarcity provides the basis for a new, critical approach to health policy at the interface of ethics and the social sciences, with specific reference to market fundamentalism as the value system underlying contemporary globalization.
Introduction The idea of resource scarcity permeates health ethics 1 and health policy analysis, whether the context is the micro-level of selecting interventions in a clinical setting, the meso-level of allocating resources within a regional organization, or the macro-level of choosing among options for reducing the global burden of disease. Consider three real-life situations: (1) Researchers select the most cost-effective package of interventions to reduce maternal mortality in 'resource-scarce settings' based on per capita budgets as low as US$0.50 per year for maternal health (Prata et al. 2010). The need for such interventions is acute: approximately 350 000 women die every year in pregnancy and childbirth, almost exclusively in low- and middle-income countries (LMICs) (Abou Zahr et al. 2010; Hogan et al. 2010). (2) A questionnaire distributed by ethics researchers asks participants at a Canadian government conference on public health ethics to respond to this hypothetical: 'You are the Medical Officer of Health 2 of a large health unit that must make dramatic budget cuts. You need to decide how to cut services and programs' (Pakes and Upshur 2007). (3) Critics of the US$8-10 billion per year spent worldwide on AIDS prevention and treatment argue that the amount is excessive because so much less is spent on such health-related objectives as providing clean water in developing countries (Cheng 2008) and that lives are being lost because spending on AIDS programmes 'takes resources away from other diseases' (Easterly 2009). The first two exercises may be operationally valuable to health service managers who have little control over the resources available to them, and as a result face troubling decisions. However, operational value in such settings is not the only objective of ethical inquiry, and such exercises and similar ones aimed at setting priorities for treating other conditions including breast cancer (Eniu et al. 2006) and multidrug-resistant tuberculosis (Nathanson et al. 2006) in 'resource-scarce settings' rarely ask, in a formulation patterned after the title of a standard text in population health (Evans et al. 1994), why some settings are resource-scarce and others not. 3 In the third situation, the zero-sum assumption that the quantum of financial resources available for improving the health of the poor through development assistance is somehow fixed and immutable, in a world where (for instance) the US Department of Defense spends US$1.5 billion daily, is not questioned. A leading global health researcher has perceptively described the failure to ask such questions as 'public health machismo', the idea that 'someone has to make the decision who lives and dies' (J Y Kim, quoted in Petryna and Kleinman 2006: 6). I describe asking where scarcities come from and who makes the decisions that create and maintain scarcities of resources for health as interrogating scarcity. Interrogating scarcity, relentlessly and when necessary impolitely, is a central task and a professional obligation for health ethics and health policy analysis in all settings that are characterized by major, socioeconomically patterned disparities in health. The contemporary preoccupation with priority-setting is disturbing in its failure to recognize this imperative. In the second section of the article I explain the rationale for interrogating scarcity and briefly explore its application within the limits of a single political jurisdiction.
However, I am mainly concerned to demonstrate the relevance of the strategy to issues of justice across national borders, as 'global health has come to occupy a new and different kind of political space that demands the study of population health in the context of power relations in a world system' (Janes and Corbett 2009: 168). This demonstration, which comprises the third section of the article, involves not only health care, but also social determinants of health: the conditions of life and work that make it easy for some individuals to lead long and healthy lives, and all but impossible for others. I take as given the adequacy of the evidence base assembled by the World Health Organization Commission on Social Determinants of Health (2008) and other authors (Yong Kim et al. 2000; Birn et al. 2009; Labonté and Schrecker 2011). Those who doubt the adequacy of this evidence base, despite the near ubiquity of socio-economic gradients in health, will simply need to hold their doubts in abeyance as they read on. (The central ethical issue here relates to the choice of a standard of proof, a topic that merits an article on its own.) In the final section, I argue that interrogating scarcity provides the basis for a new, critical approach to health policy at the interface of ethics and social sciences, with specific reference to the neoliberalism or market fundamentalism that is the value system underlying contemporary globalization. --- Scepticism about scarcity Resource scarcities that confound efforts to reduce health disparities by providing health care or eliminating causes of illness are rarely natural or absolute, in the sense exemplified by shortages of compatible donor organs for transplantation or (in a hypothetical example) of a geologically rare mineral that cannot be synthesized and has no substitute in the manufacture of a life-saving medical device. Far more common, in the words of Calabresi and Bobbitt's Tragic Choices, are situations in which 'scarcity is not the result of any absolute lack of a resource but rather of the decision by society that it is not prepared to forgo other goods and benefits in a number sufficient to remove the scarcity' (Calabresi and Bobbitt 1978: 22). Their remarkable book focused on the various mechanisms that societies adopt to make life-and-death choices and to rationalize, sometimes to camouflage, the underlying ethical presumptions. In the context of this article, as suggested by the three examples that introduced it, 'resources' in the first instance are usually financial or budgetary. The budgets in question may be public budgets for health care provision; they may also be the straitened budgets of households impoverished by structural economic change, for which prerequisites of healthy living are unaffordable. And my aim is not to provide a genealogy of the concept of scarcity that links its current form to the work of early economic theorists like Adam Smith and Thomas Malthus (e.g. Xenos 1987; Boal and Martinez 2007; Samuel and Robert 2010) by way of twentieth-century microeconomics (Fine 2010; Samuel and Robert 2010). Neither do I offer a critique of the unreflective use of the concept that is routine in environmental politics (Enzensberger 1974; Hartmann 2001; Hartmann 2010), although I refer to some such critiques in the final section of the article.
My aim is more modest: demonstrating the indispensability of Calabresi and Bobbitt's injunction that: 'We must determine where - if at all - in the history of a society's approach to the particular scarce resource a decision substantially within the control of that society was made as a result of which the resource was permitted to remain scarce. . . . Scarcity cannot simply be assumed as a given' (Calabresi and Bobbitt 1978: 150-1; emphasis added). Examples and potential applications are abundant. I completed the penultimate version of this article in a jurisdiction that hosts the largest treatment and research complex in the United States and possibly the world: the towering Texas Medical Center (Figures 1 and 2), offering and advertising world-class treatment for those with enough private wealth or private insurance. At the same time, one in four Texas residents, the highest percentage in the country, had no health insurance in 2009 (US Census Bureau 2011). Political leaders in the United States have chosen to leave provision of health insurance to the market, with a residual publicly financed (but often for-profit) sector, and to accept both the high overall costs of health care that result and the corollary inadequacy of provision for the un- and under-insured, who experience delayed or denied treatment, easily avoidable complications and often premature death (Reynolds 2010). The distinctive US approach, and the political arrangements sustaining it, underscore the connection between resource scarcity in health care settings and political choice. Texas, and the United States, could easily afford to provide health insurance coverage for all their residents. On one estimate, providing coverage for all uninsured US residents would have cost US$100 billion a year before the financial crisis hit: just half the annual direct cost of the country's military adventure in Iraq (Leonhardt 2007) and a small fraction of the sums that the US government was able to place at risk, in short order, to bail out financial institutions (Barofsky 2009). Most other high-income countries provide health insurance to all, or nearly all, of their population, often with superior results in terms both of crude outcome measures like life expectancy and of the steepness of socio-economic gradients in health (see e.g. Murray et al. 2006; Hertzman and Siddiqi 2008). Calabresi and Bobbitt's injunction directs our attention to such variables (an oversimplified list) as a long history of opposition to so-called socialized medicine on the part of the medical profession, the private insurance industry and large segments of the business community; and a regime of election financing that magnifies the influence of such interests (Center for Public Integrity 1995a; Center for Public Integrity 1995b; Center for Public Integrity 1996; Quadagno 2004). It also directs our attention to the revenue side of the equation. Texas is one of a few states that collect no state income tax, and federal income tax reductions during the first decade of the 21st century reduced national government revenues by more than US$2 trillion, with half the resulting increase in after-tax incomes accruing to the richest 1% of taxpayers (Citizens for Tax Justice 2009). Claims that providing access to health care would be unaffordable cannot be isolated from political choices about the level and incidence of taxation. These insights do not apply only to rich countries.
In 2001, the member states of the African Union (AU) committed themselves, without setting a target date, to increasing public spending on health to 15% of their general government budgets. Ten years later, only 6 of 53 AU member states had achieved this target, with important consequences in terms (for instance) of continued high rates of maternal and newborn mortality (Committee of Experts of the 4th Joint Annual Meetings of the AU Conference of Ministers of Economy and Finance and ECA Conference of African Ministers of Finance Planning and Economic Development 2011). AU finance ministers had the previous year actually urged abandonment of the health spending commitment (Njora 2010). In contrast to the situation in high-income countries, no one would seriously suggest that most African governments, even were they to live up to the Abuja commitment, are able on their own to finance even minimally adequate health care for their populations (Sachs 2007). However, this is not the end of the story. Just as in far richer countries, using available resources and fiscal capacity to protect health, especially the health of the poor, is often not high on the agenda of the elites that dominate choices about public budgets even under conditions of formal democracy. In an interconnected world, Calabresi and Bobbitt's focus on the origins of scarcity in decisions 'substantially within the control' of a given society does not go far enough. Over the past few decades globalization, '[a] pattern of transnational economic integration animated by the ideal of creating self-regulating global markets for goods, services, capital, technology, and skills' (Eyoh and Sandbrook 2003: 252), has introduced new influences on scarcity as it is invoked and experienced within national borders. Critical choices may now be made by corporate managers, portfolio investors or bureaucrats in multilateral financial institutions half a world away; their priorities, in turn, create new incentive structures for domestic actors. The section of the article that follows expands on these points, in a way that is necessarily stylized and selective. 4 --- Globalization and scarcity in an interconnected world Uruguayan-born essayist Eduardo Galeano (2000: 166) describes globalization as 'a magic galleon that spirits factories away to poor countries'. Reorganization of production and many forms of service provision across multiple national borders over the past few decades (Dicken 2007) has placed jurisdictions into intense competition to attract foreign investment and contract production. A senior official of the US Department of the Treasury during the Reagan-Bush era described the competition more graphically than is usual in the academic literature: 'The countries that do not make themselves more attractive will not get investors' attention. This is like a girl trying to get a boyfriend. She has to go out, have her hair done up, wear makeup . . . .' (David Mulford, quoted by Henwood 1993). Combined with a doubling in the size of the global workforce as India, China and the transition economies opened to foreign investment, the effect has been to generate strong downward pressure on wages and working conditions. In particular, the threat of 'exit' (to a lower-cost jurisdiction) has shifted the balance of power decisively in favour of corporate managements.
Distributional conflicts are no longer contained within national borders, and governments in many LMICs find it expedient to attract investment by way of 'the discipline of labour' (Amsden 1990). A number of additional processes can be identified as contributing to scarcities of resources for health in LMICs. Only some are described here, since my intention is not to offer a comprehensive critique of globalization based on its effects on health, but to show the value of a particular way of studying it. Trade agreements provide essential legal infrastructure for global reorganization of production, and may effectively 'constitutionalize' it by creating formidable economic and legal obstacles to reversing trade liberalization and other elements of market-oriented economic policy (Grinspun and Kreklewich 1994; Schneiderman 2000). 5 In 1995, the world entered a new era of trade policy with the creation of the World Trade Organization (WTO) regime and its binding dispute resolution procedures; since then, bilateral and regional trade and investment treaties that often go beyond the provisions of the WTO framework have proliferated. The content of these agreements routinely reflects the unequal bargaining power of the parties, arising in the first instance from differences in market size: access to the US market (for instance) is more significant for a small economy like Ecuador or Guatemala than its domestic markets will ever be to the US or European Union. These disparities affect not only the negotiation of trade agreements but the conditions under which parties make use of dispute resolution procedures (Stiglitz and Charlton 2004). Major losses of livelihood can sometimes be traced directly to competition from low-cost, perhaps highly subsidized imports newly permitted into an LMIC market (Jeter 2002; Atarah 2005; Buechler 2006; de Ita 2008); workers and agricultural producers are, if not impoverished, driven into precarious employment or the informal economy. Tariffs are among the easiest forms of revenue for governments to collect, which is why at least until recently they were a major element in LMIC revenue streams, and still are for some countries. Tariff reductions undertaken as part of trade liberalization slashed these revenues, arguably compounding the effects of competition for investment. The treasuries of some low-income countries, in particular, still have not recovered (Baunsgaard and Keen 2005; Glenday 2006; Baunsgaard and Keen 2010), leading to reduced fiscal capacity for public spending on areas such as education and health, although detailed country-specific assessments are hard to find. More visible and familiar are effects on access to essential medicines associated with requirements for harmonizing intellectual property (IP) protection under the Agreement on Trade-Related Aspects of Intellectual Property (TRIPS) (Correa 2009). As originally drafted, TRIPS would have enabled pharmaceutical manufacturers to charge whatever price the traffic would bear by eliminating existing legal options to issue compulsory licenses, produce generic versions, or import these from elsewhere. Several years of negotiation post-1995 led to official reinterpretations that restored some of these options, but cumbersome and complicated procedures impede their use (Haakonsson and Richey 2007; Kerry and Lee 2007; Muzaka 2009).
Of equal concern is the tendency of the United States, in particular, to negotiate IP provisions that go beyond TRIPS in bilateral and regional agreements, undermining flexibilities previously negotiated and creating new barriers to producing or importing essential medicines at affordable prices (Roffe et al. 2008; Shaffer and Brenner 2009; Muzaka 2011). For a cash-strapped LMIC public sector health system, and for the majority of the population in countries where most medicines are still paid for out-of-pocket, the link between globalization, scarcity and health could not be clearer. Trade agreements often incorporate provisions facilitating the flow of investment across borders, and limiting the regulation of such flows. Such provisions along with competitive financial deregulation, especially in the United States and the United Kingdom, have led to the emergence of a worldwide financial marketplace in which considerable power has shifted from national polities to a global capital market that 'now has the power to discipline national governments . . . . These markets can now exercise the accountability functions associated with citizenship: they can vote governments' economic policies in or out, they can force governments to take certain measures and not others' (Sassen 2003: 70; see generally Schrecker 2009). In the aftermath of Mexico's 1994-95 financial crisis, a former head of the International Monetary Fund (IMF) described the consequences for governments that fail to manage their economies in accordance with the priorities of this 'global, cross-border economic electorate' (Sassen 2003: 70) as 'swift, brutal and destabilizing' (Camdessus 1995). Along with the growth of private banking (Anon 1990) and the multiplication of opportunities to manipulate prices charged in trade between firms that are part of the same corporate organization, the global financial marketplace facilitates capital flight: a process in which domestic elites shift their wealth out of a jurisdiction, sometimes but not always illegally, in search of higher returns and lower risks. Capital flight is of special importance for understanding scarcity in LMICs because it deprives nations of desperately needed resources that could be used for investment in development or health (Helleiner 2001). To indicate the magnitudes involved, Ndikumana and Boyce (2011) estimate the value of capital flight from 33 sub-Saharan countries plus imputed interest between 1970 and 2008 at US$944 billion (in 2008 dollars), much of this figure related to straightforward looting through misappropriation of loans and trade misinvoicing. They estimate that on average 60 cents of every dollar received from external lenders left those countries as flight capital in the same year, and that the resulting reduction in public spending on health was responsible for 77 000 infant deaths per year in 2005-07 (Ndikumana and Boyce 2011: 82). Further, capital flight has often magnified the sovereign debt crises that ushered in an era in which many countries lost control of their domestic policies to the World Bank and the IMF. Structural adjustment entered the development policy lexicon in the early 1980s, when the World Bank and IMF, institutions dominated by the G7 countries, began large-scale loan programmes to ensure that indebted LMICs could repay their external creditors.
The urgency of such lending grew after 1982, when the possibility of Mexican default on loans made by US banks threatened the stability of financial systems in the industrialized world. Loans were conditional on a relatively standard package of policies emphasizing deregulation, privatization of state-owned firms, reduction of domestic government spending, trade liberalization with the aim of prioritizing production for export and elimination of controls on foreign investment. The ostensible aim was to create conditions for sustained economic growth in countries where they were applied. By the mid-1980s, informed observers were critical of this expectation (see e.g. Lever and Huhne 1985: 64); in retrospect, it is clear that the measures were designed to protect creditor interests, and also to advance a larger project of refashioning the world economy on investor-friendly lines (Przeworski et al. 1995: 5; Babb 2002: 1). Resulting economic dislocations and domestic austerity measures often had destructive effects on livelihoods and other social determinants of health, which were demonstrated as early as 1987 by a ten-country UNICEF study (Cornia et al. 1987). Subsequent reviews of the evidence have found a preponderance of negative effects on health (Breman and Shelton 2007; Stuckler and Basu 2009) and probably understate these effects because, except in the most drastic cases, it is hard to capture the long-term health consequences of deteriorating socio-economic conditions using epidemiological standards of proof (Pfeiffer and Chapman 2010). Opportunities for capital flight often meant that the costs of adjustment were borne primarily by those who did not have the option of shifting their assets out of the country; publicly financed rescues of collapsing domestic banks (Halac and Schmukler 2004; Mannsberger and McBride 2007) are a case in point. Thus, the adjustment process imperiled the livelihoods (and opportunities to lead healthy lives) of many while wealth and economic opportunity were shifted upward to the few. At least before 2008 the IMF had become less important as a source of last-resort lending, but remained powerful as a gatekeeper for development assistance and debt relief (Gore 2004). IMF approval is also valued as assurance to private investors that a country's macroeconomic policies are sound (Sachs 1998). Considerable evidence suggests that the era of structural adjustment is not over. IMF policy apprehensions about 'fiscal expansion' (Working Group on IMF Programs and Health Spending 2007), based on textbook microeconomics and public finance, have continued to limit countries' ability to spend on health and education (Ooms and Schrecker 2005; Centre for Economic Governance and AIDS in Africa and RESULTS Educational Fund 2009). For example, IMF insistence on public expenditure ceilings led to a situation in which 'thousands of trained nurses and other health workers remain[ed] unemployed' in Kenya circa 2006, and thousands more had left the country in search of work elsewhere, 'despite a health worker shortage across all health programs' (Korir and Kioko 2009: 2). The history of structural adjustment shows that economic policies and institutions cannot be understood in isolation from imperial geopolitics and policy.
The hegemonic role of the United States was captured in a 1990 codification of emerging, market-oriented wisdom as the Washington consensus, responding to a political climate that 'was essentially contemptuous of equity concerns' (Williamson 1993: 1329). By the early years of this century, the aggressive unilateralism of the Bush II administration had moved the concept of US imperialism into the academic mainstream (Falk 2004), and it is useful to view many aspects of globalization's recent history, in addition to the politics of World Bank and IMF-driven economic restructuring, from this vantage point. Consider for example US support for coups d'état in countries like Iran and Guatemala dating back to the 1950s and subsequent assistance to homicidal but market-friendly regimes, like Pinochet's in Chile and various governments and counterinsurgency movements in Central America. President Reagan's Central American policies led to the deaths of some 200 000 people and drove several times that number into exile, many into subaltern positions as undocumented workers in the United States (see generally Robinson 2003), creating a landscape of social and economic desolation from which many countries in the region are only starting to heal. Reagan administration policies included financing political formations like the right-wing Salvadoran think tank Fundación Salvadoreña para el Desarrollo Económico y Social (Salvadoran Foundation for Economic and Social Development, FUSADES), which in 1990 ran advertisements urging foreign investors in the garment industry to hire 'Rosa' at 57 cents an hour. In 1991, Rosa's advertised price dropped to 33 cents an hour (Kernaghan 1997). Thus, we are brought back to Galeano's magic galleon and Mulford's beauty contest, and to the fundamental point that resource scarcities in the context of health policy must always be understood with reference to their origins in political choices and macro-scale social and economic processes. --- Market fundamentalism and the construction of scarcity Interrogating scarcity advances that understanding, but is not a set of substantive principles of justice. Methodologically, the strategy presupposes only Calabresi and Bobbitt's generic scepticism about scarcity. That presupposition distinguishes it from the mainstream approach exemplified by Daniels and Sabin's effort to find procedural solutions to problems of scarcity associated with the operation of private, for-profit managed care organizations in the United States, while not questioning the justice of the basic organization of health care provision and the health care industry (see Figures 1 and 2) (Daniels and Sabin 1997). Such efforts often degenerate into calls for 'practices that can be sustained and that connect well with the goals of various stakeholders in the many institutional settings where these decisions are made' (Daniels 2000: 1300), eschewing questions about the origins of scarcity. Such procedural solutions are worthwhile in a broad range of situations in which the goals of 'stakeholders' are ethically defensible and structural inequalities of power and resources not extreme, 6 but that defensibility cannot be presumed; no procedural algorithm will humanize Sophie's choice.
In the international frame of reference, interrogating scarcity normatively implies only a weak, generic cosmopolitanism that regards drivers of scarcity that originate outside the jurisdiction's borders as prima facie appropriate for ethical analysis. In other words, the proposition that we (whoever we are) have obligations related to the health of non-compatriots is not rejected out of hand, but the content and limits of those obligations are not specified. Interrogating scarcity is thus congruent with (indeed exemplified by) Pogge's powerful argument that global responsibility is inescapable given the nature of historical and contemporary interconnections, as embodied in economic institutions as well as discrete policy choices. His central point is that ethical responsibility for health disparities follows causal responsibility across national borders, in particular with respect to the health damage that is associated with extreme poverty (Pogge 2002; Pogge 2004; Pogge 2005; Pogge 2007b). 'By avoidably producing severe poverty, economic institutions substantially contribute to the incidence of many medical conditions. Persons materially involved in upholding such economic institutions are then materially involved in the causation of such medical conditions' (Pogge 2004: 137). Pogge's attribution of responsibility depends on the existence of plausible alternative sets of institutions that would be more conducive to reducing or eliminating poverty. As shown in the preceding section of the article, this test is not difficult to meet. One can readily imagine alternative policies of 'adjustment with a human face', in the words of the UNICEF study of structural adjustment impacts cited earlier; a regime of international law in which health-related obligations under human rights treaties would 'trump' demands for macroeconomic policies that exacerbate shortages of health workers and restrict access to essential medicines (Pogge 2007a); or, leaving aside for the moment the formidable political obstacles (Stiglitz and Charlton 2005), an international trade policy regime 'in which trade rules are determined so as to maximize development potential, particularly of the poorest nations in the world' (Rodrik 2001). Pogge notes the pernicious consequences of the 'resource privilege', which permits rulers to dispose of natural resources within their borders even when they remain unaccountable for the use of the revenues (think of how little revenue from exploitation of oil resources reaches the majority of Nigerians or Angolans), and the 'borrowing privilege', which permits rulers to incur external debts on behalf of subjects who may have no meaningful opportunity to accept or reject these obligations. This latter characteristic of the international order, in particular, could be changed by national policies or multilateral agreements that defined such debts as 'odious' under international law (King et al. 2003; Mandel 2006; Ndikumana and Boyce 2011: 84-95). Interrogating scarcity can therefore provide factual foundations for prescriptive statements about global justice that apply to local situations. It is also a promising basis for research at the interface of ethics and the social sciences that connects global-scale power relations and domestic political choices with the ways in which health-related scarcities are experienced differently, and the options for addressing them framed differently, by various protagonists on the ground.
Exemplary work in this vein has been done on water, access to which is a key social determinant of health. In a case study of a particular district in India, Mehta (2007) has shown that scarcities of water must be understood with reference to local histories of human activity, and that the range of remedies considered feasible (in this instance, a contentious major dam project being actively promoted by the World Bank) may be defined by alliances of powerful domestic and external actors. Both Mehta and Mirosa Canal (2004) and Goldman (2007) have connected local constructions of scarcity with the projects of powerful supranational actors, including transnational water utility corporations, as they promote private investment in water service provision. Mehta and Mirosa Canal (2004: 4-7) are also explicit in identifying IMF/World Bank conditionalities as having created the conditions in which private provision of water as a marketed commodity appeared as the only viable solution. A useful parallel can be drawn with the Bank's aggressive advocacy of market-oriented health sector 'reform' on the basis that private purchase of care or insurance was the norm from which all departures required justification (Laurell and Arellano 1996; Lee and Goodman 2002; Lister and Labonté 2009). Srivastava (2010) makes a similar point about the World Bank's preference for market-based strategies in its role as a major supplier of development assistance for education, emphasizing that 'while developing countries have constrained public budgets, the persistence of scarce resources for education, particularly for basic education, is not a fixed variable. It exists because we let it' (p. 525). Further comparative research on scarcity in the context of social determinants of health (including water and education, but also such factors as food security, adequate income and access to health care itself) will clearly be useful. The examples just cited indicate that contemporary constructions of scarcity must be situated with reference to what Somers (2008) has called market fundamentalism (in preference to neoliberalism, the more familiar terminology but confusing to North American audiences), the institutions that promote it and its local particularities. Market fundamentalism presumes that markets are the normal and natural basis for organizing almost all areas of human activity; assigns a heavy burden of proof to those who would organize human interactions on any other basis; and tends to define citizenship in terms of participation in markets, as a producer and (informed) consumer. Market fundamentalism is the value system at the core of contemporary globalization (Harvey 2005; Ward and England 2007), and infuses the construction of scarcity in many public policy contexts. In addition to the illustrations already provided, Lurie et al. (2008) observe, without evident appreciation of the irony, that health care organizations in the United States often insist that a 'business case' needs to be made for interventions to reduce health disparities, based on their anticipated return on investment. A 2008 think tank report characterized the US President's Emergency Plan for AIDS Relief, which has financed antiretroviral therapy for a million people, as a 'state supported international welfare program' that was 'hard to justify on investment grounds' (Over 2008).
And Ruiters (2006; 2009) interprets policies that provide free, but seriously inadequate minimal increments of water and electricity to the poor in South Africa, thereafter charging users on a cost-recovery basis with disconnection automated through installation of prepaid meters, as a strategy of social control concerned with inculcating a 'payment morality' (in the words of the Department of Finance), while implicitly conceding that domestic poverty can only be managed rather than substantially reduced. This discussion may appear to have wandered far from issues of health, but that is not the case if the frame of reference includes social determinants of health, as it should. Rather, inquiry into how scarcities are constructed and maintained returns health policy to the insights of an earlier era, notably Virchow's insight about the importance of political as well as pathological causes of disease. Against today's background of financial markets with global reach and widespread invocations of the need for austerity, in which governments are seldom challenged as they ritualistically turn their pockets out and complain that the cupboard is bare, neither disease causation nor health ethics can sensibly be separated from politics and economics. Redefining the scope of health ethics and health policy analysis will inevitably encounter objections based on the impracticality of interrogating scarcity, or at least its irrelevance to daily operational contexts. The appropriate reply comes from feminist scholar Catharine MacKinnon (1987: 70), addressing the limits of incremental approaches to eliminating sex discrimination: 'You may think that I'm not being very practical. I have learned that practical means something that can be done while keeping everything else the same'. --- Conflict of interest None declared. --- Endnotes 1 An admittedly ambiguous term, which I take to include prescriptive or normative analysis of how decisions that affect health should be made both in clinical settings and in the broader universe of settings that are relevant to public or population health. 2 In Canada, a Medical Officer of Health is a physician and the senior public servant in a municipal or regional public health organization that provides a range of preventive and protective interventions, including assuming responsibility for communicable disease control in the event of outbreaks; such units do not usually provide clinical services. 3 Wellington (2000, Chapter 1) makes this point with reference to the dilemma in moral reasoning presented by Lawrence Kohlberg, in which a poor man is faced with the choice between stealing a drug he cannot afford or watching his wife die for want of the drug. Discussing Carol Gilligan's restatement of the dilemma, Wellington points out that neither Kohlberg nor Gilligan asks a rather obvious question: why does the drug cost so much? The answer takes us into the realm of the international political economy of intellectual property rights, scientific research and the political power of the pharmaceutical industry. 4 For more extensive treatments see, for example, Yong Kim et al. (2000); Labonté et al. (2009); Gill and Bakker (2011). 5 The investor-state dispute resolution provisions (Chapter 11) of the North American Free Trade Agreement are a case in point. 6 In particular, no 'stakeholder' must be able to define the permissible limits of discussion or to terminate the deliberation altogether, as for instance when corporate managers threaten to relocate production to another jurisdiction in response to demands for adequate livelihoods and elimination of exposure to workplace hazards, or when the propertied can use the prospect of capital flight to limit redistributive policies.
This article discusses the resurgence of the term 'patriarchy' in digital culture and reflects on the everyday online meanings of the term in distinction to academic theorisations. In the 1960s-1980s, feminists theorised patriarchy as the systematic oppression of women, with differing approaches to how it worked. Criticisms that the concept was unable to account for intersectional experiences of oppression, alongside the 'turn to culture', resulted in a fall from academic grace. However, 'patriarchy' has found new life through Internet memes (humorous, mutational images that circulate widely on social media). This paper aims to investigate the resurgence of the term 'patriarchy' in digital culture. Based on an analysis of memes featuring the phrases 'patriarchy' and 'smash the patriarchy', we identify how patriarchy memes are used by two different online communities (feminists and anti-feminists) and consider what this means for the ongoing usefulness of the concept of patriarchy. We argue that, whilst performing important community-forming work, using the term is a risky strategy for feminists for two reasons: first, because memes are by their nature brief, there is little opportunity to address intersections of oppression; secondly, the underlying logic of feminism is omitted in favour of brevity, leaving it exposed to being undermined by the more mainstream logic of masculinism.
--- Introduction The word 'patriarchy' is having something of a resurgence after some years in the backwaters of out-of-fashion structural feminism. This revival of the term is apparent in the mainstream media (e.g. Higgins 2018) and popular feminist publications (e.g. Ms Magazine, 2018). It appears emblazoned on placards at global Women's Marches, and on T-shirts. Our online lives are punctuated by references to patriarchy, particularly in the form of memes (images with superimposed text that are shared widely on social media, their provenance usually unknown). The concept of patriarchy has not typically been viewed as a useful theoretical lens for understanding the multifaceted oppression of women since the 1980s, at least in academic circles (Hunnicutt, 2009). Theorised heterogeneously by feminists in the 1960s, 1970s and early 1980s to articulate the systematic, structural oppression of women, it provided frameworks through which to make links between seemingly distinct areas of women's experiences. However, criticisms (not always accurate) that the concept was ahistorical, homogenising and unable to account for gendered experiences that intersected with other structural oppressions, alongside the 'turn to culture' (Barrett 1990) in the 1990s, meant that the term fell from academic grace. Whilst it has recently resurfaced in academic texts (Enloe, 2017; Gilligan and Snider, 2018; Clisby and Holdsworth, 2016), suggesting a reclamation of the concept as a valuable analytical tool, questions remain over whether its theoretical problems have been sufficiently addressed. Why, then, has a concept critiqued for its blindness to race, class and other intersections become once again so visible? How do we make sense of the renewed currency of 'patriarchy', particularly within online spaces? This paper aims to investigate the resurgence of the term 'patriarchy' in digital culture specifically, as a site where the term is especially visible. Based on an analysis of memes with the phrases 'patriarchy' and 'smash the patriarchy', we identify how patriarchy memes are used by two different online communities (feminists and anti-feminists) and consider what this means for the ongoing usefulness of the concept. We argue that, whilst performing important community-forming work, using the term is a risky strategy for feminists for two reasons: first, because memes are by their nature brief, there is little opportunity to address intersections of oppression; secondly, the underlying logic of feminism is omitted in favour of brevity, leaving it exposed to being undermined by the more mainstream logic of masculinism (Brittan, 1989; Nicholas and Agius, 2018) and anti-feminism. First, we situate our intervention in relation to literature on online feminism and networked misogyny, and to theoretical debates on the concept of patriarchy.
We then outline our methodology, before turning our analytical attention to patriarchy memes. We address how feminist memes mobilise the concept of patriarchy (and more precisely 'the patriarchy') to provide a sense of feminist collectivity, and consider the risk this poses for intersectional feminism. We then examine anti-feminist memes and detail how the concept is re-appropriated to undermine feminism.

--- Digital feminisms and networked misogyny

It is instructive to consider the resurgence of patriarchy online through feminist scholarship identifying new visibilities of feminism in contemporary media and digital culture. Unlike the period of the late 1990s and 2000s, when the cultural landscape was characterised by a post-feminist repudiation or disavowal of feminist vocabularies and identities (McRobbie, 2009), in recent years feminism appears to have become acceptable and even popular (Banet-Weiser 2018): from celebrity feminism, to the #metoo movement, to an array of feminist merchandise, often sporting the phrase 'smash the patriarchy'. This 'new cultural life of feminism' (Gill, 2016) has been variously described and extensively debated; however, it is widely accepted that digital culture has been a particularly significant site for this resurgent feminist activity. Indeed, scholars have identified digital culture as an important space for feminist community-formation and consciousness-raising, and for critiquing sexism and anti-feminism (Mendes et al., 2019; Lawrence and Ringrose, 2018). However, as Banet-Weiser warns, whilst feminist 'discourses have an accessibility that is no [longer] confined to academic enclaves' (2018: 1), feminism is most likely to achieve visibility when it is 'palatable' and 'media friendly': 'happy' (rather than angry), and conducive to the logics of consumer culture and neoliberalism. She argues that these expressions not only eclipse feminist structural critiques of systems of class inequality or racism, but privilege white, middle-class cis women. Scholarship on 'post-' (Gill, 2016), 'popular-' (Banet-Weiser, 2018) and 'neoliberal-' (Rottenberg, 2018) feminism is valuable for thinking critically about how the feminist concept of patriarchy has gained visibility, and the ideological work that it might do. However, there is a risk of collapsing all expressions of feminism together, ignoring the multiple and diverse iterations of feminism that exist across media and digital culture. For this reason, we focus on one particular area of digital culture that has specific conventions and meanings for its audiences: memes. In everyday usage, a meme is often an image or gif, circulating online, shared by friends, with comic or ironic text across it, e.g. a picture of a cat appearing next to the humorously mis-spelled and grammatically flawed text 'I can has cheezburger'. Memes may seem like amusing diversions with little power to hold our attention or affect our thinking. However, the expansion of social media has meant that memes are now a prevalent part of our digital lives and a key way in which we communicate online (Miltner, 2014). This makes them an important site for critical investigation. Knobel and Lankshear (2006) argue that memes are worth studying because they tell us something about 'mindsets, new forms of power and social processes, new forms of social participation and activism, and new distributed networks of communication and relationship' (2006: 201).
Furthermore, Gal, Shifman and Kampf (2016) state that memes are important for how norms are formed and/or subverted: they argue that memes do performative work and are 'performative acts' (Gal, Shifman and Kampf, 2016: 1700). Examining memes is therefore a valuable way to examine which ideas become prominent and in what forms. For example, Lawrence and Ringrose (2018) argue that feminist memes operationalise humour as a mechanism for expressing rage, forming communities, and calling out sexism and anti-feminism - what Rentschler and Thrift (2015) call 'digital feminist warfare'. Lawrence and Ringrose also highlight the limitations to these practices, detailing how some feminist memes (such as misandry memes) endorse violence, reify essentialist notions of biological difference, and exclude intersectional perspectives. Notwithstanding the different interpretations of this resurgent feminist visibility online, there is consensus that the luminosity given to feminism exists 'in tandem with intensified misogyny' (Gill 2016: 610). Just as digital platforms have created opportunities for feminist activity, they have also amplified forms of misogyny and anti-feminism; from men's rights activism, to rape threats, to more generalised hostility to women (and feminists) online (Ging and Siapera, 2018; Mendes et al., 2019). Online spaces exist as sites of struggle and confrontation between different groups. As we show, the term 'patriarchy' finds life not only in feminist digital culture but also across networked popular misogyny (Banet-Weiser, 2018), where the concept performs very different kinds of ideological and community-forming work.

--- The concept of patriarchy

The concept of patriarchy was of central unifying importance in the Women's Liberation Movement (Beechey, 1979). In seeking a reason for women's subordination across a range of cultural and historical sites, discussions of 'patriarchy' examined roles in the family (Delphy, 1977; Millett, 1971), the incest taboo and exchange of women (Mitchell, 1975) and the political differentiation of biology (Eisenstein, 1979), amongst other approaches. The concept was important: it provided a way to theorise 'feelings of oppression' (Beechey, 1979: 66) and offered a unifying theory both inside and outside academia. Having said that, patriarchy was not theorised monolithically or ahistorically, as has sometimes been claimed (e.g. Acker 1989). Feminists laboured to theorise the workings of patriarchy in specific contexts, e.g. French farming families (Delphy, 1977), to define its varying manifestations, e.g. in sub-Saharan Africa, the Middle East and East and South Asia (Kandiyoti, 1988), and to account for its shifting interactions with racism and capitalism (Walby 1990). However, in her critical overview of how the term has been used, Fox (1988) argues that a theory of patriarchy must consider both superstructure and subjectivity. Moreover, she claims that the term is in urgent need of nuanced reconceptualisation, with specificity as to how we understand patriarchy working at both the structural and individual level. By the end of the 1970s and early 1980s, criticisms emerged with regard to the universal quality of the theory and its failure to address how women's experiences differ across race, class and sexuality (Combahee River Collective, 1997[1977]; Lorde, 1994; hooks, 1984; Crenshaw, 1989), although it did later form one axis of oppression in Crenshaw's definition of intersectionality (Crenshaw, 2011).
Yet, in using 'patriarchy', Bhopal raises concerns that 'racial divisions are relegated to secondary importance as the notion of 'race' and ethnicity have been 'added on'' (sic) (1997). Butler (1990) argues that in aiming to theorise a universal concept of patriarchy, Western feminists have sought examples from non-Western cultural contexts. In doing so they co-opt those cultures in a neo-colonial way, causing damage through the subtle construction of these as barbaric, reading a Western version of oppression onto them. She calls this a 'colonizing epistemological strategy' (1990: 48) which inhibits the ability to understand 'different configurations of domination' (1990: 48). Furthermore, in theorising patriarchy, no historical cause of women's oppression could be agreed on (Beechey, 1979; Jackson, 1998). Meanwhile, Pollert (1996) argues that the concept of patriarchy, particularly as theorised by radical feminists, relies upon notions of women and men as different groups, with something essential linking women. But what might that essential quality be? Without an understanding of historical causes of male dominance, the concept of patriarchy is implicitly reliant on dimorphic biological reasoning relating to role(s) in reproduction, she claims. This biological essentialism raises the question of 'what is a woman?', since bearing children is not the single defining characteristic of those designated as women (Beechey, 1979). But this criticism is unfair, as Brickell (2006) outlines: ethnomethodologists (e.g. Kessler and McKenna, 1978) and materialist feminists (e.g. Wittig, 1992) all argued that bodies are socially constructed. Meanwhile, Marxist feminists left the underlying social theory of capitalism unexamined and therefore retained some of the problems of Marxism, problems that would have benefited from feminist analysis (Jackson 1998). A fundamental problem was a lack of agreement on how patriarchy worked, how it had arisen, and what its relationship with capitalism may be (Jackson, 1998). The theory was not completely abandoned, however. Walby's (1990) theorisation argues for a more flexible conceptualisation of patriarchy. Engaging with criticisms of the term, she argues that there are six main structures that together constitute the system of patriarchy, and which may have different emphases and levels of importance in different developed countries. Yet in Walby's theorisation there is minimal discussion of those who do not fit into the social category 'women', or of lesbians. This is primarily a theory about women who live cis, heterosexual lives. Nor are the intersections of race and class particularly well addressed. The concept has found renewed life in feminist research on domestic and sexual violence, particularly in work on the global South (e.g. Mahadeen, 2015). Other attempts are being made to reformulate the theory. For example, Hunnicutt (2009) revises the concept as 'varieties of patriarchy', arguing that discussion of violence against women needs a theory which can show the gendered nature of violence and how men are caught up in hierarchies in which they are disempowered. Thus Hunnicutt argues for a theory of patriarchy that acknowledges men's position in relation to other men and pays attention to race and class hierarchies. Such a theory must enable analysis where structure and ideology may be divergent (e.g.
patriarchal ideology may remain where gender equality is making gains), whilst also considering '"terrains of power" in which both men and women wield varying types and amounts of power' (2009: 555). Cynthia Enloe (2017) calls for feminist attention to the minutiae of patriarchy's workings, defining patriarchy as 'a system - a dynamic web - of particular ideas and relationships' which is 'stunningly adaptable' (2017: 16). These re-theorisations move away from critiques of 'patriarchy' as monolithic in favour of considering it flexible, an argument that finds common ground with theorisations by Walby (1990) and Kandiyoti (1988). Enloe indicates that patriarchy operates hegemonically, responding to challenges from feminists and shifting the territory to maintain male dominance. Whilst rethinking 'patriarchy' as a more flexible system strikes us as necessary, questions about how patriarchy is enmeshed with racism and classism remain. In spite of the myriad criticisms that have been made of the concept and its resulting apparent toxicity (to the extent that Walby (2011) chose to use 'gender regimes' as more palatable to policy makers), the term still holds worth for some academics. As Clisby and Holdsworth say, it is valuable to use 'patriarchy' because it makes visible that which is 'unacceptable' (2016: 22). With this renewed theoretical interest in patriarchy, it is valuable to consider how feminists - and others - are utilising the concept. Within the brief space of the meme, can meme-makers (and their sharers) articulate these reformulated ideas? And what work does 'patriarchy' do in these spaces?

--- Methods

To investigate these questions, we employ textual analysis of memes. Memes are designed to convey a potent (if only for the purpose of amusement) message in seconds. Therefore, unpicking the multiple discourses at work across a spectrum of patriarchy-related memes enables us to identify the meanings of 'patriarchy' in digital culture and the work the concept does. This method precludes us from commenting on the circulation and reception of memes by internet users, beyond our own experiences, and we identify this as a valuable area for further research. We collated patriarchy-related memes and examined them using discourse analysis. Our sample comes from a 7th November 2018 Google.co.uk image search (with the computer's search history cleared to limit the personalisation algorithm) of the terms 'patriarchy' and, given the prevalence of the phrase in our social media timelines, 'smash the patriarchy'. Additionally, we searched for '"smash the patriarchy" meme' and 'patriarchy meme', which enabled us to capture different ways in which the phrase and term are being used online. Our sample comprises images in the top four rows (c. thirty images) of search results from each of these search terms, with duplicates excluded (n=122). Using Google Image Search provides a snapshot of the highest page-ranked images across Google's indexed web, and a view of what kinds of images are frequently seen by those using the world's most widely used search engine. However, Google's algorithm can be 'gamed' to position some pages nearer the top of the list (Marres, 2017: 71), and algorithms are far from neutral. Indeed, as the most dominant search engine, Google represents a site of cultural struggle.
As Safiya Umoja Noble's (2013; 2018) work on the search engine's representation of black women powerfully demonstrates, far from being neutral and depoliticised, 'search engine results perpetuate particular narratives that reflect historically uneven distributions of power in society' (2018: 71). Consequently, creating a snapshot of its search results is a valuable means of capturing a sample of these broader struggles (Noble, 2013). Google image search provides details of the websites on which the memes are located, but typically gives little to no information about who made the images and for what purpose. Nor do we know how the images are engaged with (an area for further research). Image research online is notoriously tricky, as it is difficult to put together a sample due to the web's 'enormous size and mutability' (Shifman and Lemish, 2010: 876). Tech companies do not provide access to all their data, and social media privacy settings mean that not all the images circulating at one time will be available. It is also difficult to track the provenance and circulation of images. Whilst tools are available to help overcome some of these issues, these require significant resources (boyd and Crawford, 2012). We used critical discourse analysis (CDA) to analyse the textual and visual discourses in the memes. We began from the position that language and visuals are inherently political and both make use of and are constructive of ideological messages (Griffin, 2007). Paying close attention to visual and textual recurrences, we coded the memes for imagery (e.g. images or text referring to hammers, conspiracy, flowers), political position (feminist/anti-feminist/neutral/ambiguous) and types of people in the images (famous/not famous), alongside noting metaphors and joke style (where relevant). We assessed the implied addressees and authorial positions. The '"smash the patriarchy"' results show numerous images with the phrase 'smash the patriarchy' or that used the phrase for comic effect (e.g. 'the patriarchy isn't going to smash itself') (thirty-three), sometimes accompanied by images of hammers (four) or flowers (five), sometimes as part of cartoons which feature fictional superheroines (e.g. Wonder Woman). They also show how intensely commodified the phrase is, appearing on t-shirts, mugs and other merchandise. This speaks to the wider commodification of feminist language and imagery within mainstream media and consumer culture (Banet-Weiser, 2018). We coded all these images as 'feminist'. We coded twenty of the "patriarchy" memes as 'feminist', one as 'anti-feminist', and eight as ambiguous or neutral. The results included feminist cartoons (ten); the phrase 'if I had a hammer I'd smash patriarchy' accompanied by a woman with a hammer smashing a wall (one); educational images offering graphic explanations of key terms in gender theory (two); images with the slogan 'smash the patriarchy' (four) or 'fuck the patriarchy' (one); and two book covers of feminist books. Notably, the datasets produced with the word 'meme' appended gave starkly different results to those without. These images were nearly all memes in the image + white capitalised text format, à la LOLcats. Here there were many images that we categorised as 'anti-feminist', where the humour of the memes is at women's and feminists' expense. Twenty-five of the thirty-one "patriarchy meme" results we coded as 'anti-feminist' and only four as 'feminist'.
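For readers interested in how tallies of this kind can be derived once manual coding is complete, the following is a minimal sketch in R (not the authors' code); the file name and column names ('coded_memes.csv', 'search_term', 'position') are hypothetical stand-ins for a spreadsheet with one row per coded meme.

```r
# A minimal sketch, assuming manual CDA codes were recorded in a
# hypothetical spreadsheet with one row per meme.
library(dplyr)

memes <- read.csv("coded_memes.csv")  # hypothetical columns: search_term, position

# Tally political-position codes within each search term and compute shares,
# e.g. 25 of 31 "patriarchy meme" results coded 'anti-feminist'.
memes %>%
  count(search_term, position) %>%
  group_by(search_term) %>%
  mutate(share = round(n / sum(n), 2)) %>%
  ungroup()
```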
These counts tell us something about how meme websites enable the creation and spread of anti-feminist memes. On the other hand, the majority of the "smash the patriarchy meme" results we coded as 'feminist' (nineteen of thirty-one images), which indicates that the phrase itself does important work for a feminist identity, as we discuss below.

--- Community-formation: collectivity and humour for feminists

Memes operate as a shared communicative language. They express and assume a common identity and 'insider' status (Miltner, 2014; Massanari and Chess, 2018). Memes are inter-textual, building on other memes and cultural texts to transmit their message. The joke or meaning of the meme depends on this language being understood by the reader (Kanai 2015). Thus, memes play a role in the 'border work' of feminist collective identity construction as they hail us to 'get the joke' and share their perspective. In this way memes can have an important function for individual and collective identity formation (Miltner, 2014; Milner, 2016; Knobel and Lankshear, 2006). In this section we examine how feminist patriarchy memes do this community-forming work. The phrase 'smash the patriarchy' appears widely across our dataset, often accompanied by visuals that complement the violence of 'smash'. The hammer in particular also references a feminist joke that starts with the Peter, Paul and Mary song 'If I Had a Hammer'. The joke appears in our dataset and goes: 'if I had a hammer… I'd smash patriarchy. I found it!' (see Figure 1). It is usually accompanied by an image of a woman with a hammer (the origins of this cartoon are possibly Rebekah Putnam and Carri Bennett in Habitual Freak zine, 1994, but many new memes have been created based on this).

Figure 1

In the song, the singer wishes for a hammer so that they can hammer all the time, everywhere, in order to bring about peace (or perhaps remove love - the lyrics are ambiguous). Part of the meme's joke is that indiscriminate hammering is what little boys do when they get their first toy hammer; but the more important joke is that hammering indiscriminately is not good enough. Hammering requires an object if it is to effect change. The joke is further amusing because it breaks the rhythm of the song with an angry declaration. The hammer imagery of the memes therefore taps into this shared language and existing joke. Flowers, hearts and other symbols of romance, childhood (e.g. Figure 2) and femininity which are not associated with violent destruction also appear in the feminist memes, often alongside the phrase 'smash the patriarchy'. Whilst the hammer imagery also hints at a second-wave empowering of women to be self-reliant and embrace traditionally male roles (such as DIY), this other imagery makes reference to a different set of feminist ideas relating to embracing the 'subversive' power of the feminine - an argument more akin to third-wave feminism (Nicholas, 2013).

Figure 2

The use of the term 'patriarchy' alongside a destructive verb ('smash' is sometimes replaced by 'burn', for example) is an important indicator of feminist alignment. It articulates a collective politics through the widespread use of the phrase 'smash the patriarchy', which, according to Google Trends, has increased in worldwide usage over the last decade, with specific peaks in November 2016 (perhaps due to the election of Donald Trump to the US presidency) and January 2017 (possibly reflecting the increased feminist activism around the Women's March).
The direct instruction 'Smash the Patriarchy' can be seen as a call to arms, instructing other feminists to join the fight. Some memes do this more explicitly, featuring several women together inviting others to join them, such as Figure 3 below, which features the protagonists from the teen film Mean Girls (2004) in a car with the words 'Get in Loser. We're going to smash the patriarchy'.

Figure 3

Whether or not one is aware of the symbolism in these memes and their associations with different strands of feminism, the words 'patriarchy' and 'smash' are enough of a shared language. We argue that these memes can be theorised as having performative and community-forming functions within feminist communities online, not just through assuming a shared digital literacy (Kanai 2015) but also by mobilising a shared feminist vocabulary. We suggest that typically people viewing the memes and recognising themselves in them are likely to be already (to varying degrees) familiar with, and supportive of, the ideas presented. This challenges the idea that memes are always necessarily consciousness-raising, because those looking at and sharing them are already 'bought in'. Nevertheless, we argue that memes do important feminist work: they impress an urgency and activity on the viewer, and an assertion that feminism matters. The importance of this should not be understated in a wider context in which feminism as a political project is being vehemently undermined. This is not only occurring online (as we demonstrate below), but structurally, embedded within institutionalised processes and practices (Banet-Weiser, 2018), including the rollback of women's reproductive rights and intensifying attacks on Gender Studies.

--- '(The) Patriarchy' reformulated as a smashable 'thing'

'Smash the patriarchy' utilises the concept of 'patriarchy' in a distinctive way, subtly different from older theorisations of patriarchy, in which the concept was used grammatically without a definite article, e.g. article titles such as Beechey's 'On Patriarchy' (1979) and Walby's 'Theorising Patriarchy' (1989). In its online life, 'patriarchy' has gained a definite article: 'smash the patriarchy'. This produces a vision of something that can be done in one go - like knocking down a garden wall - and implies a recognisable 'thing', a target for feminist anger and action. 'The patriarchy' is universal, a singular entity. We suggest this visualisation of patriarchy as a 'thing', rather than a system of diffuse power working through individuals and institutions, is a powerful feminist collectivising technology. When 'patriarchy' becomes 'the patriarchy' it becomes a monolithic thing, and the meme works as a call to action. Yet it is not without its contradictions or limitations, not least the lack of a sense of what 'patriarchy' actually is. We return to these issues shortly. Eight memes made reference to the days of the week (e.g. 'On Tuesdays we smash the patriarchy'). These references work alongside this 'thinginess' of 'patriarchy', suggesting that smashing the patriarchy might be part of the mundanity of our lives, such as a day's 'to-do list' filled only with 'smash the patriarchy'. We argue that the humour of this meme lies in the knowing juxtaposition between the mundane and everyday connotations of the to-do list or diary, and the bombastic act of destroying a global system of inequality. The scheduling of such an unruly act plays on the gendered norms through which women are expected to be diligent, compliant and organised.
These memes present a sense of urgency. Patriarchy is not to be smashed in some distant future, but today (or at least scheduled). The connotation of the need for smashing the patriarchy to be timetabled into the week signals that feminism is 'work'. Moreover, it connotes that the transformative work of feminism is ongoing and requires us to think strategically in order to bring about change (Ahmed, 2017: 93): it is every Wednesday that the patriarchy needs smashing. Thus whilst we may still be unclear what patriarchy is, we know that we must keep at smashing it. If memes do important community-building work for those who are already feminists, we can theorise that they also do motivational work through recognising feminist struggle and legitimating rage that was previously 'illegible' (McRobbie 2009). In their analysis of feminist memes, Lawrence and Ringrose (2018: 229) contend that using 'humour and sarcasm to articulate female rage is a critical component for feminism'. For feminist memes, 'patriarchy' provides a point around which to organise, where patriarchy is unequivocally the enemy. What 'patriarchy' is doing online now, then, is the same as what Beechey argued it was doing for the women's movement in 1979: 'patriarchy' is useful as a way to theorise and explain 'feelings of oppression' (1979: 66). Its resurgence online may be because it provides a way for feminists to register the presence of injustices, to render these unacceptable, and to challenge them. To quote Ahmed in her discussion of the value of 'sexism' as a concept for feminism: 'When we put a name to a problem, we are doing something... Making sexism and racism tangible is also a way of making them appear outside of oneself, as something that can be spoken of and addressed by and with others. It can be a relief to have something to point to, or a word to allow us to point to something that otherwise can make you feel alone or lost' (2015: 8-9). However, for all the positive work that feminist patriarchy memes do online, the return to a universalised concept is not necessarily a happy one. Pollert (1996) argues that, in academic theorising, patriarchy is used as a 'short-hand' (1996: 639) in ways that slip between 'description and explanation' (emphasis in original). The effect of this slippage is to lose sight of the micro levels of social relations in the perpetuation of oppressive structures, obscuring 'the tension between agency and structure necessary to understand social processes' (1996: 640). This problem remains within feminist memes, where the use of '(the) patriarchy' as a shortcut would seem to only relate to the structure - the smashable thing - thus obscuring the complexity of the microsocial relations of living in patriarchal societies.

--- The risk of losing intersectional perspectives

The question of who is being hailed and what kind of collective identity is being formed by feminist patriarchy memes raises difficulties for contemporary, intersectional feminism. Using '(the) patriarchy' in a meme context is a risky strategy. It is risky because using the concept without any kind of reformulation means that the theory cannot be free from the criticisms levelled at it by black feminists in particular. Specifically, in gaining a definite article ('the'), it conjures a singular and monolithic patriarchy towards which our work as feminists must be oriented, isolating this from other injustices and mechanisms of power. A meme has to be brief to be memorable and sharable - its success depends on it.
'Smash the patriarchy' is much catchier than the longer formulation 'if I had a hammer, I'd smash patriarchy', but this brevity does not provide space for discussion of the problems inherent in the theory. Nor is there room to articulate more complex reformulations of the concept which attempt to address the critiques and to take intersections of oppression into account (e.g. Walby, Hunnicutt). Furthermore, the 'thinginess' of 'the patriarchy', with its newly acquired definite article and implications of monolithicism, expressly denies newer understandings of patriarchies as flexibly hegemonic (Enloe, 2017), context-dependent, and working with and through other forms of discrimination and oppression. Thus the use of 'patriarchy' online maintains its bias towards middle-class, white women's concerns, i.e. prioritising gender, to the neglect of black, minority ethnic, working-class, lesbian, bisexual and trans women's particular experiences of oppression. Using 'patriarchy' in memes is therefore a risky strategy since it can exclude many women from the collective feminist sociality that it generates. Indeed, if memes build feminist communities through recourse to a shared vocabulary and assumed object of concern (in this case 'the patriarchy'), it is vital that we consider how wider relations (of class, race, sexuality and so on) organise feminist socialities online and shape their terms of participation (Khoja-Moolji 2015). So far we have argued that patriarchy memes operate as a shared visual language through which feminist sociality and - to some degree - resistance can be generated online. However, the problem of foregrounding gender rather than addressing the intersections of multiple oppressions remains. We now move our focus to anti-feminist patriarchy memes, which offer a new definition of patriarchy altogether and seize upon the potential reductionism of the term to undermine feminism.

--- Anti-feminist collectivity and identification of a target

In the previous section, we highlighted how humour operates in spaces of online feminist sociality, as memes function as a lingua franca among those who see the value in, and necessity for, feminism. The community-forming potential of patriarchy memes, however, extends beyond pro-feminist communities. Anti-feminist memes also create a collectivity through humour, but by inviting the reader to share in jokes at the expense of feminists and/or feminism. A key point of distinction from the feminist memes is the use of recognisable targets: individual women who are mocked directly. Two notable figures that emerged in our dataset are Anita Sarkeesian (who created the website Feminist Frequency and who came under fire from men in the gaming community) and Canadian LGBT rights campaigner Chanty Binx. In one meme from the "patriarchy meme" results, Sarkeesian's image is used with the phrase 'Criticism? More like harassment' (Figure 5) in a screengrab of a tweet by Feminist Frequency. The Sarkeesian image is used to 'correct' Feminist Frequency's use of the term 'online harassment', which the tweeter @FullMcintosh argues should actually be 'criticism'. This makes use of the frequently seen argument about feminists being 'snowflakes' who cannot take criticism. It works alongside the anti-feminist argument that what feminists call 'trolling' is actually 'free speech'.

Figure 5

Here, the concept of 'masculinism' (Brittan, 1989) is useful, particularly as employed by Nicholas and Agius (2018) in their discussion of men's rights activism online.
Brittan (1989) defines masculinism as the acceptance of dimorphic biological gender, the associated 'naturalness' of heterosexuality, and differing 'natural' gendered roles in labour, including the dominance of men in public and private life. He describes it as 'the ideology of patriarchy' (Brittan 1989: 4) and posits its basis in the ancient Greek philosophical approach to logic and reason, a position which has been well critiqued by Lloyd (1993) and others as justifying ideas of men as superior to women. Nicholas and Agius (2018) build on these ideas to understand how masculinist ideas manifest and are mobilised online, arguing that masculinism also involves a logic of individual choice and a rejection of the idea that individual agency is shaped or limited by society. In the case of the memes discussed above, the repositioning of trolling as 'free speech' is underpinned by that logic, and, indeed, according to this logic the idea of patriarchy is nonsense, a myth made up by feminists. This logic runs counter to radical or Marxist feminisms that understand women's oppression as structural and systematic. We return to these points shortly. Memes featuring Chanty Binx (Figure 6), whose exchange with anti-LGBTQ campaigners was filmed and widely disseminated online (Don & Y F, 2018), also exhibit the logic of masculinism. Binx has become a figure of numerous anti-feminist memes, known as 'Big Red'. She appears three times in our dataset, but many more times in the longer search results. In these she appears angry, with text that replicates the masculinist idea that feminists are illogical, ignorant and doctrine-driven (one Big Red meme includes the text 'Shut the fuck up | memes are patriarchy'). In depicting feminists as illogical, those reading the meme and agreeing can position themselves as bearers of reason, seeming to neutralise rationality, rather than it being a priori linked to the Western philosophical tradition that privileges masculinist views of the world over women's perspectives (Nicholas and Agius, 2018).

Figure 6

That feminists are 'unattractive' is another part of the joke that this set of memes in our sample mobilises, and this works to bolster claims of irrationality. Echoing historic caricatures of feminists as sexually undesirable, the memes depict Binx and other feminists as outside of conventional norms of feminine attractiveness: rejecting mainstream beauty choices by dyeing hair bright red, having dreadlocks, or celebrating 'fat' bodies. In their analysis of the Social Justice Warrior caricature, Massanari and Chess (2018) suggest that depictions of feminists as excessive (corporeally and emotionally) work to discredit feminism, shoring up ideas of feminists as 'intellectually damaged and (therefore) morally corrupt' (2018: 530). Emotion, in its alignment with 'the feminine', becomes antithetical to 'reason' and 'logic', and thus plays a role in discrediting feminism. We see this elsewhere: whilst men rarely featured in these memes, one meme depicts a crying man, with the text 'I tried to help her smash the patriarchy. She still won't touch my peepee' (Figure 7). This not only depicts feminist men as strategic and inauthentic (performing feminism as a means to get sex), but the tears and baby-like speech ('peepee') place them outside the realms of hegemonic masculinity (Connell, 1995): too feminine and too childlike.
This hostile and often violent targeting of feminists (and feminist allies) is not mirrored in the feminist memes, whose power comes from a call to action, but whose foe is not clearly defined. Banet-Weiser (2018) argues that whilst feminism is characterised by ambivalence and contradiction, popular misogyny is the opposite - it is a zero-sum game. It is feminism's complexity in its critical questioning of gendered norms that anti-feminists call out, misidentifying this as contradiction so as to discredit feminism.

--- The risk of 'patriarchy's' co-option for masculinism: patriarchy as conspiracy theory

As we have discussed, one way that anti-feminist memes discredit feminism is by constructing feminists as delusional, irrational and hypocritical. Our analysis also reveals a very distinct deployment and reappropriation of the feminist concept 'patriarchy' as a means to undermine feminist critiques of power. Using humour to 'belittle the problem that feminism names' (Banet-Weiser, 2018: 58), the notion of patriarchy as a system of oppression is itself the subject of the joke: figuring as, at best, incorrect, and at worst a lie spread by women to oppress men (Marwick and Caplan, 2018). One of the key themes in anti-feminist memes is the notion of patriarchy as a conspiracy theory. This is done through image association, for example Giorgio Tsoukalos (Figure 8) from the television programme Ancient Aliens, which discusses theories of ancient links between humans and extraterrestrials, a popular topic for conspiracy theorists. The image of a suited green alien in front of a US flag (Figure 9) similarly draws on conspiracy theory imagery, as does the meme (Figure 10) which suggests that feminists are replacing theism with a belief in another non-existent omnipotent imaginary thing. Similarly, the actor Keanu Reeves's image (Figure 11) links to the Reddit community The Red Pill, which uses the idea from the film The Matrix that taking the red pill will open one's eyes to the reality of the world. The connotation here is that one should wake up from the feminist dream world. Like the memes portraying feminists as irrational, Figure 12 shows the explicit linking of 'patriarchy' to the notion of feminists as illogical thinkers. This would seem to depict a woman making two contradictory statements at once: that using an app to expose how a woman 'really' looks (without makeup or filters) and manipulating a woman's image are both manifestations of 'misogynistic patriarchy'. As feminists, we (the authors) see the logic of how these two statements make sense together - they both critique the idea that women are only valuable for their appearance - and the complexities of power they speak to. However, such discussions are too long to fit on a meme and so, superficially and without the support of a feminist framework, the juxtaposition of the two statements can appear to express a flawed logic.

Figure 12

The ideological work of anti-feminist memes, therefore, is to redefine patriarchy as fantastical thinking. The denial of the tenets of feminism is writ large across networked popular misogyny, for example in claims that feminists are 'imagining' or making up sexism (Marwick and Caplan, 2018). This discourse that patriarchy is 'nothing more than a conspiracy' - created in order to victimise men - has important implications for feminist political claim-making.
It not only encourages and justifies the harassment of feminists, but provides the ideological undergirding for wider attempts to discredit feminism (Garcia-Favaro and Gill 2016).

--- Conclusion

So what does our analysis of memes tell us about the concept of patriarchy? Alongside Ahmed (2017) and Beechey (1979), we assert the value of naming oppressive forces for identifying how we might effect change, even when those names are theorised in multiple and sometimes contradictory ways. As Delphy has argued, 'we can't stop concepts from traveling' (Delphy in Calvini-Lefebvre 2018: 4) and taking on new meanings as they move through popular culture. What is needed, she posits, is for concepts to be attached to their definitions when we use them, something which is time-consuming and wordy and not at all in harmony with meme culture, where memes must be brief. Examining both feminist and anti-feminist memes tells us about the continued problematics of the concept in its popular usage. In feminist memes, patriarchy appears as a foe to fight and unite feminists, and something that requires action and collectivity. For anti-feminists, it stands for a conspiracy theory, and marks feminists out as illogical. We argue that the use of 'patriarchy' (and in particular 'the patriarchy') in feminist memes as a way to identify inequalities is a risky strategy. In reclaiming the term as a shorthand - symbolic of the political identity 'feminist', rather than a fleshed-out theory - this brevity exposes it to ridicule by anti-feminists and an undermining of feminist claims. Without the underpinning feminist logic, and approached instead from a viewpoint that steadfastly maintains the 'common sense' and reasonableness of the dominant perspective (Nicholas and Agius, 2018), the concept of patriarchy is a shortcut which can be used to pull the rug out from underneath feminism. The anti-feminist, masculinist, individualised logic denies any structural effect on our lives (Nicholas and Agius, 2018), thereby enabling the denial that patriarchal societies exist. Furthermore, the use of '(the) patriarchy' in feminist digital culture glosses over intersectional injustices that affect women's lives in different ways, and that mediate connections to the very resurgent feminist communities that feminist memes organise online. With this in mind, it is worth examining our own positions as researchers and querying the questions we have asked, the sample we have created and the analysis we have undertaken. In our analysis we did not always understand the memes in our sample, or even know what we were overlooking. Whilst this is a common problem with online research into disparate communities that cross national, political and other kinds of boundaries, of which the researchers are not part, it also reflects our position as white, middle-class, Western feminists who are attuned to some arguments and logics, and out of step with others. We chose to search for 'patriarchy' - not 'intersectional patriarchy' or 'white supremacist patriarchy' or 'kyriarchy'. Our sample was thus already skewed by our choice of search terms, in a way that was very likely to preclude memes addressing more intersectional forms of oppression. Our sample was further skewed by our choice of search engine: Google's algorithms are written with the racist and sexist biases of their creators (Noble 2018). In effect, any search results we returned have been returned by a racist search process, more so if you count our own blindness to other relevant search terms and memes.
Strikingly, the image below by Odile Bree (Figure 13) came up in one of our less scientific searches, with the terms 'smash the patriarchy race'.

Figure 13 by Odile Bree (https://odilebree.com/)

Bree's illustration poignantly satirises consumer culture's appropriation of feminism, particularly with respect to the ubiquity of 'smash the patriarchy' t-shirts available for sale on many online platforms. In articulating how the buying of feminist t-shirts relies on the exploitation of garment workers in the Global South, it serves as a stark reminder of (White) Western feminists' ignorance of what Haraway calls 'women in the integrated circuit' (1991: 149): how we are all linked together in networks of oppression and privilege. Anti-feminists' denial of the existence of systematic oppression on grounds of gender and race (Nicholas and Agius, 2018) suggests to us that, as a counter, some articulation of the structural nature of inequalities remains vital. Insofar as the concept of patriarchy can do some of this work it may remain useful, but it is a risky strategy for feminists.
The COVID-19 pandemic highlighted adverse outcomes in Asian, Black, and ethnic minority groups. More research is required to explore underlying ethnic health inequalities. In this study, we aim to examine pre-COVID ethnic inequalities more generally through healthcare utilisation to contextualise underlying inequalities that were present before the pandemic. Design This was an ecological study exploring all admissions to NHS hospitals in England from 2017 to 2020. Methods The primary outcomes were admission rates within ethnic groups. Secondary outcomes included age-specific and age-standardised admission rates. Sub-analysis of admission rates across index of multiple deprivation (IMD) deciles was also performed to contextualise the impact of socioeconomic differences amongst ethnic categories. Results were presented as relative ratios (RR) with 95% confidence intervals. Results Age-standardised admission rates were higher in Asian (RR 1.40 [1.38-1.41] in 2019) and Black (RR 1.37 [1.37-1.38]) groups and lower in Mixed groups (RR 0.91 [0.90-0.91]) relative to White. There was significant missingness or misassignment of ethnicity in NHS admissions, with 11.7% of admissions having an unknown/not-stated ethnicity assignment and 'other' ethnicity being significantly over-represented. Admission rates did not mirror the degree of deprivation across all ethnic categories. Conclusions This study shows Black and Asian ethnic groups have higher admission rates compared to White across all age groups and when standardised for age. There is evidence of incomplete and misidentified ethnicity assignment in NHS admission records, which may introduce bias to work on these datasets. Differences in admission rates across individual ethnic categories cannot solely be explained by socioeconomic status. Further work is needed to identify ethnicity-specific factors of these inequalities to allow targeted interventions at the local level.
Introduction

Despite increasing overall health and income status, health inequalities in England have been worsening over the last twenty years [1], with 1 in 3 premature deaths thought to be attributable to socioeconomic inequality [2]. However, compared to measures of social deprivation, the determinants of ethnic health inequalities remain under-investigated and poorly understood [3,4]. During the COVID-19 pandemic, patients from Asian, Black, and minority ethnic groups experienced higher rates of hospital admission [3,5] and worse outcomes [6] compared to White groups. A number of explanatory factors may account for these differences, including pre-existing conditions and socioeconomic, environmental, and structural determinants of health [3,7-10]. However, the relationship between ethnicity and health is complex, in part due to the inter-relationships between these numerous factors [3]. It is important to understand how pre-existing inequalities contributed to outcome discrepancies in COVID-19. Pre-pandemic studies showed Black, Asian, and other minority ethnic groups reported worse baseline health [7,11,12], lower healthcare access and less satisfaction with the services provided [13-15]. However, many of these studies focused on specific chronic diseases [3,5,13,16-18], on regional populations [3,19,20] or on national populations prior to 2014 [13]. It was only in 2021 that the UK Office for National Statistics (ONS) first released mortality statistics by ethnic group [21], which is a welcome but overdue step forward in the evaluation of health inequalities more generally. However, there is a lack of evidence on hospital admission rates across ethnic groups immediately prior to the pandemic to allow comparison with the COVID-19 admission trends. In this study, we aimed to explore pre-COVID-19 disparities in secondary healthcare use across ethnic groups more generally to help identify national inequalities. We envisaged this might contribute to targeted policy interventions to improve the overall health status of minority ethnic groups. We evaluated the rate of hospital admissions nationally across ethnic groups in England from publicly available NHS England Hospital Episode Statistics (HES) data from 2017 to 2020. We also carried out a prespecified sub-analysis of admission rates across deciles of the index of multiple deprivation (IMD: a measure of relative deprivation for small geographical areas based on seven key dimensions such as income, employment, and health [22]) to contextualise the impact of socioeconomic differences amongst ethnic groups. We hypothesised that pre-COVID-19 hospital admission rates would be significantly higher for ethnic minority groups compared to the White group. Furthermore, we hypothesised that similar patterns of admission rate discrepancies between minority ethnic groups in COVID-19 were also present more generally in the pre-COVID-19 population.

--- Methods

--- Study Design

This was an ecological study exploring the hospital admission rates and patterns across ethnic groups in England each year, from 2017 to 2020. Sub-analysis was performed for admission rates across IMD deciles. IMD distributions within ethnic groups were also analysed to contextualise our findings.
--- Data Sources

We used publicly available, open-access data from NHS Digital between April 1, 2017, and March 31, 2020, comprising aggregated national summary data on admitted patient care (APC) taken from Hospital Episode Statistics (HES) for both ethnicity [23-25] and IMD [26-28]. Data within these were divided by academic/financial year; therefore, we used the starting year as the assigned year for that data, e.g. April 2017 to March 2018 is represented as 2017 in our results. Office for National Statistics (ONS) data were used for population estimates, with the 2011 census population [29] and the 2018 IMD population [30] for ethnicity and IMD deciles, respectively. The European Standard Population (ESP) was used to age-standardise our datasets [31].

--- Variables

The Office for National Statistics (ONS) has highlighted that there is no true consensus on what defines an ethnic group [32]. A variety of elements such as ancestry, culture, identity, religion, language, and physical appearance may contribute. However, it is self-defined, and the concepts it includes are subjective to what is meaningful to an individual [33]. Ethnic categories and their groups in this study were defined by the sixteen (plus 'not stated') categories used in the NHS Digital HES datasets, which mirror the grouping of the ONS 2001 census [34,35]. Analysis was performed both on individual ethnic categories and on aggregations into the five higher ethnic groups, namely: Asian ('Indian', 'Pakistani', 'Bangladeshi', and 'any other Asian background'); Black ('Caribbean', 'African', and 'any other Black background'); mixed ('White and Black Caribbean', 'White and Black African', 'White and Asian', and 'any other mixed background'); other ('Chinese' and 'any other ethnic group'); White ('British', 'Irish', and 'any other White background'). Although the ONS has updated these groupings in the 2011 and 2021 censuses, the NHS and NHS Digital have not. The grouping of ethnic categories in our study reflects the NHS ethnicity data collection groupings [33]. The index of multiple deprivation is derived from seven key dimensions: income, employment, health, education, barriers to housing and services, crime, and living environment [22]. An aggregate score is calculated for each lower layer super output area (LSOA), comprised of roughly 1,500 people. These are then ranked across England from the most deprived to the least deprived and segregated into deciles, with IMD decile 1 representing the most deprived and IMD decile 10 the least [22]. Admissions are defined as any inpatient episode of care with at least one overnight stay in the hospital, and included those admitted via emergency, waiting list, planned or another admission method route. Our primary outcomes were the admission rates per 100,000 population annually within each ethnic group. Secondary outcomes included mean age of admission, 'age-specific' admission rates, and 'age-standardised' admission rates within each ethnic group. Age-specific admission rates were defined as the admissions per population of each ethnic group within six defined age categories, namely: 0-24; 25-49; 50-65; 65-75; 75-85; 85+ years. Age-standardised admission rates were calculated using the European Standard Population, a theoretical population adding up to a total of 100,000 that is widely used to produce such rates [31]. Prespecified sub-analysis was performed on all outcomes across IMD deciles to help evaluate differences between ethnicity and IMD effects on admission rates.
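To make the age-standardisation and the relative ratios reported below concrete, the following is a minimal sketch in R (the language the authors report using for their analysis). It is an illustration, not the authors' code: the admission counts are invented, and the six ESP weights are ESP 2013 figures aggregated to bands approximating the age categories above.

```r
# Direct age-standardisation and a relative ratio with a 95% CI.
# Illustrative only: counts are invented; weights are ESP 2013 values
# aggregated to six bands (they sum to 100,000).
esp <- c(27500, 33500, 19500, 10500, 6500, 2500)

standardised_rate <- function(d, n, w) {
  rate <- d / n                 # crude rate per person in each age band
  asr  <- sum(rate * w)         # per 100,000, since the weights sum to 100,000
  v    <- sum(w^2 * d / n^2)    # variance of the ASR, treating counts as Poisson
  c(asr = asr, var = v)
}

# Invented admissions (d) and populations (n) for one ethnic group and White
grp   <- standardised_rate(d = c(9000, 14000, 9500, 7000, 6000, 2500),
                           n = c(60000, 70000, 35000, 15000, 8000, 2500), esp)
white <- standardised_rate(d = c(70000, 120000, 110000, 90000, 80000, 40000),
                           n = c(700000, 800000, 500000, 250000, 120000, 40000), esp)

# Relative ratio of the two ASRs with a log-normal 95% confidence interval
rr     <- grp[["asr"]] / white[["asr"]]
se_log <- sqrt(grp[["var"]] / grp[["asr"]]^2 + white[["var"]] / white[["asr"]]^2)
ci     <- exp(log(rr) + c(-1.96, 1.96) * se_log)
sprintf("RR %.2f (95%% CI %.2f-%.2f)", rr, ci[1], ci[2])
```

The log-normal interval is one standard choice for ratios of directly standardised rates; the paper does not specify which CI method was used.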
--- Data Processing

Admission rates at the population level were calculated per 100,000 population within each ethnic group or IMD decile. Age-specific admission rates were calculated as rates per 100,000 population within the defined age group. The ESP was used to calculate age-standardised rates to remove age as a confounding factor. Age-specific and standardised admission rates for each group are presented as a relative ratio (RR) to the defined baseline (White or IMD 10) to allow clear comparisons between the groups. As there were no overall or age-grouped populations for the 'unknown' categories, they were removed from population-dependent results. This may introduce bias into our results, which we explore further in our limitations.

--- Statistics

The use of the European Standard Population [31] and relative ratios produces estimates, for which confidence intervals (CI) have been calculated. The programming language R, version 4.0.2 (R Core Team 2020), was used for all data and graphical analysis.

--- Results

--- Population Distributions

The total population assigned to an ethnicity in the 2011 census was n = 53,012,456. The total population assigned to an IMD decile in the 2018 dataset was n = 55,977,178. The majority of the population was White (85.4%, Table 1), with Asian the second-most populous (7.1%). All minority ethnic groups had a larger proportion of young people compared to White (Fig. S1).

--- Population Admission Patterns

Admission rates were similar across Asian, Black, and White groups (range 24,044 to 28,978 per 100,000, Table 1). Much higher admission rates were seen in the 'other' group (range 36,830 to 40,459, Table 1). Lower admission rates were observed in the mixed ethnicity group (range 16,559 to 18,035, Table 1). The mean age of hospital admission was significantly lower in the Asian (41.8 years in 2019), Black (42.9), mixed (26.9), and 'other' (42.1) groups when compared to White (56.5) across all years. For IMD, the least affluent decile had the lowest mean age of hospital admission. Admission rates between IMD deciles displayed a consistent stepwise change, with the highest rates of admission being seen in the least affluent decile (IMD decile 1, range 32,638 to 33,730 per 100,000, Table 1) and the lowest rates in the most affluent (IMD decile 10, range 26,556 to 27,765 per 100,000).

--- Age-Specific Admission Rates

Across all age groups, Black and Asian populations showed higher rates of admission when compared to White, seen most prominently in age groups above 75 years old (Fig. 1a). Overall, the mixed ethnic population had a lower admission rate across nearly all age groups. The 'other' population had a significantly higher rate of admission than all other ethnicities across all age groups. Admission patterns within age groups for IMD (Fig. 2a) showed a similar pattern to that seen for the total population, with increased admission rates associated with higher deciles of deprivation.

Fig. 1 a Admission rates per population within ethnic groups across six different age groups, expressed as a relative ratio to a defined baseline (White population). b Age-standardised admission rates within ethnic groups, expressed as a relative ratio to a defined baseline (White population).

Fig. 2 a Admission rates per population within IMD deciles across six different age groups, expressed as a relative ratio to a defined baseline (IMD 10). b Age-standardised admission rates within IMD deciles, expressed as a relative ratio to a defined baseline (IMD 10).
However, in contrast to ethnicity, these differences were seen most prominently in the younger and middle age groups.

--- Age-Standardised Admission Rates

Asian and Black populations had higher age-standardised admission rates (RR 1.40 [CI 1.38-1.41] and RR 1.37 [CI 1.37-1.38], respectively, in 2019) compared to White (Fig. 1b). The mixed population had a lower age-standardised admission rate (RR 0.91 [CI 0.90-0.91] in 2019). These admission rate discrepancies were only apparent for age-specific or age-standardised rates (Table 1, Fig. 1). The 'other' group showed the highest age-standardised admission rates, with over a two-fold relative rate compared to White across all years. Notably, the admission rates of all four ethnic minority groups increased over time relative to the White population from 2017 to 2020 (Fig. 1b). In addition to the main results for the large ethnic groups, age-standardised admission rates were analysed for individual ethnic categories (Fig. S3). Some ethnic categories with the highest degree of deprivation, including Pakistani, Bangladeshi, Black African, and other Black, had some of the highest age-standardised admission rates. However, certain ethnic categories such as Black Caribbean, Mixed White and African, and Mixed White and Caribbean had comparable or lower age-standardised admission rates compared to White British (Fig. S3). This was seen despite these minority ethnic groups having a comparably higher degree of deprivation (Fig. 3) compared to White British. Notably, within most ethnic groups, categories labelled as 'other' (i.e. other Black, other mixed) had the highest admission rates. Age-standardised admission rates for IMD deciles again showed a very consistent pattern of increasing rates for more deprived deciles in a stepwise manner (Fig. 2b).

Fig. 3 IMD distribution of ethnic populations

--- Discussion

--- Findings

The principal finding of this ecological study is that Black and Asian ethnic groups show higher admission rates compared to White across all age groups and when standardised for age. There is evidence of incomplete and misidentified ethnicity assignment in NHS admission records, as seen in the large unknown/not-stated and 'other' groups, respectively. This is likely to introduce significant bias to our results and to all other studies using similar datasets. However, the results show that differences in admission rates across individual ethnic categories cannot solely be explained by socioeconomic status.

--- Findings in Context

The cause of higher admission rates in Black and Asian groups is likely linked to a number of explanatory factors. Baseline health, healthcare access and prevention, discrimination, genetics, migration status, and socioeconomic, environmental and structural determinants of health may all have interrelated roles contributing to these discrepancies [3,7-10]. It is well documented that certain chronic illnesses have a higher prevalence and manifest at a younger age in certain ethnic minority groups; for example, diabetes across all ethnic minorities, heart disease in South Asian groups (Bangladeshi and Pakistani), and hypertension and stroke in Black Caribbean and African groups [7]. This leads to poorer baseline health [11], higher degrees of comorbidity [12], higher disease burden [7], higher admission rates for particular conditions [3], and overall discrepancies in outcomes between ethnic groups [3,9].
Health outcome discrepancies emerge in early adulthood and increase with age [14], with the older populations of Asian and Black groups reporting the greatest discrepancies in overall health and health-related limitations compared to White groups [36]. This is reinforced by our findings of greater admission rate discrepancies between Black and Asian groups compared to White in the older age groups. Alongside an overall increased burden of disease, evidence suggests ethnic minorities may experience increased barriers to healthcare access and less effective healthcare provision [13-15]. Ethnic minority groups may receive less disease monitoring and slower intensification of therapy for certain chronic conditions [15]. Generally, they were less satisfied with the care they received and reported a lower quality of care compared to White groups [14]. Worryingly, ethnic minority groups are also shown to wait longer for a medical appointment and longer to be referred to a specialist for certain conditions, including cancer [13]. Language barriers, less knowledge of available services, and discrimination may all be contributory factors [37]. Discrepancies in the timely provision of needed healthcare may contribute to worsening morbidity in ethnic minority groups. This, in turn, may exacerbate co-morbidity and complications, leading to an increased risk of acute hospital admission in the future [3,12,15]. Furthermore, disproportionate acute hospital presentation may reflect differences in access to healthcare in the community [3]. We did not have community data to complement our hospital admission findings, and further work is needed to compare these. Focusing policy and interventions to help remove any discriminatory or system barriers should form part of the goal to reduce these health inequalities [37]. In contrast to this, recent work on populations in Scotland [19,20] and London [3] suggests that Asian, Black, and mixed ethnic groups have better all-cause survival rates after hospital admission than White. This suggests that the higher admission rates of ethnic minorities may be driven by illness less associated with a high risk of death [3], despite the higher disease burden. Even so, studies of the Scottish population show that minority ethnic groups still have higher rates of avoidable hospital admissions, unplanned readmissions, and avoidable deaths [19,20]. Together, these findings highlight that all-cause mortality by ethnicity does not reflect the full picture. Further disease-specific analysis is crucial to identifying avoidable morbidity and mortality within ethnic groups. The high incidence of 'unknown' and 'not-stated' ethnicity is a major issue for reliable analysis of this type. Guidance for the NHS and ONS states that the gold standard for ethnicity recording is self-assignment by the patient rather than ascription by someone else [38]. However, it is unclear to what extent NHS organisations are following and encouraging these principles. Previous evidence showed that, in practice, only 57% of healthcare professionals use the self-assignment method, with 21% assigning ethnicity by their own observer assessment [39]. Some studies showed only 70% of healthcare professionals routinely collected ethnicity data at all [39]. Barriers to comprehensive ethnicity data collection include lack of staff knowledge about its importance, logistical time pressures, and lack of confidence in asking for what could be perceived as sensitive information [38].
Patients themselves may also be unsure or apprehensive about how the data is used [38]. This highlights the need for hospital training courses and protocols that empower healthcare providers to consistently ask for and record ethnicity data. Such courses should also cover ways of reassuring patients about its use and importance [33]. Evidence shows that the excessive and growing numbers of unassigned ethnicity codes in NHS admissions disproportionately affect ethnic minority groups [38]. Similarly, the over-represented, large 'other' group reflects misidentification, which is again likely to cause the underrepresentation of ethnic minorities [38]. Work by the Nuffield Trust assessed the quality of ethnicity coding in English NHS datasets and showed that one-third of patients with multiple admissions had inconsistent ethnicity codes [38]. A total of 40% of those assigned 'any other ethnic group' also had an alternative ethnic group recorded, with minority ethnic groups comprising two-thirds of the patients impacted [38]. 'Other' categories within individual ethnic groups are often not accurately assigned; for example, 10% of Black Caribbean patients also had a code of 'other Black' in these studies [38]. Misassignment of ethnic categories within the same group hinders important conclusions about distinct populations within that group, e.g. Bangladeshi, Indian, and other Asian. Importantly, guidance for the collection of ethnicities has not been updated in the NHS since 2001 and is no longer in line with the census categories for 2011 and 2021 [38,40]. This presents challenges in comparing health data across populations, as patients are not presented with the same survey response options as those used in population estimates. The outdated NHS classification may also drive the selection of 'other' or 'any other' where the narrowly defined categories do not represent a patient's ethnicity. For example, Arab, Gypsy, or Irish Traveller were not defined in the NHS categorisation but are in the more recent ONS census [40]. Linkage of hospital admission data to census-assigned ethnicity has been done by Public Health England (PHE) and other groups who have access to individual patient data [19,20]. This is one method of reducing ethnicity missingness or misassignment. However, the point remains that improving accuracy in ethnicity coding would allow a more robust and reliable analysis of freely accessible datasets. The differences in admission rates for Black and Asian groups were found in our study despite these factors causing underrepresentation, suggesting the differences would likely be greater if coding were more accurate. Deducing how other factors such as genetics, migration effects, and reason for admission may impact admission rate discrepancies is not possible from the population-level data we have. However, there is widespread consensus that genetic factors contribute only marginally to ethnic inequalities, and socially constructed ethnic groups are poor markers for genetic traits, aside from specific examples such as sickle-cell anaemia [37]. A history of migration and ongoing transnational mobility can increase exposure to particular health risks [37]. Further work on more granular datasets would be useful to help identify particular disease causes that may be contributing to differential admission rates. The strong influence of IMD on health status and outcomes is well documented [2,41] and supported by our results of increased admission rates in more deprived deciles.
It has been shown that social and economic inequalities make a substantial contribution to ethnic inequalities in health [14]. This is supported by our results showing that several of the ethnic categories with the highest degree of deprivation, such as Bangladeshi, Pakistani, Black African, and other Black, had comparably higher age-standardised admission rates. However, the different facets of socioeconomic status, such as employment, education, housing, and deprivation, all likely play different roles and exert different influences on overall health within individual ethnic categories [42]. Our results support this in the finding that specific ethnic categories (Black Caribbean, mixed White and African, and mixed White and Caribbean) have comparable or lower admission rates than White, despite higher degrees of deprivation. One explanation for this may be 'health resilience', in which robust social communities within ethnic groups can shield individuals from the poor health outcomes that may be associated with their degree of deprivation [43]. However, such a phenomenon would reflect partial compensation for inequalities rather than suggesting inequalities do not exist. The finding of higher proportions of young people in the lower IMD deciles is also difficult to interpret due to the aggregate nature of the IMD marker. It may be that this is largely due to income, whereby younger people who have moved out of their family home have lower salaries compared to older generations [2]. Wealth accumulation through a person's lifetime may contribute to these differences. However, it is possible that these findings are partially driven by those in the lowest IMD deciles living shorter lives, which has been shown in several studies [2,14,41]. Other studies have shown that even when admission rates are adjusted for age and deprivation, discrepancies in admission rates across ethnic groups remain [13]. This supports the view that ethnic inequalities in health are driven by factors other than deprivation, including overall health and health-seeking behaviours alongside discrimination and marginalisation [13-15]. Addressing socioeconomic determinants of health is necessary but not sufficient to eliminate ethnic inequalities [42]. Further work to identify the specific and unique socioeconomic pressures on different ethnic categories is required to facilitate targeted action on the health inequalities each group faces [37]. The establishment of the NHS Race and Health Observatory [44] represents a significant step forward in establishing health inequalities as a national priority. However, it relies on better access to high-quality data, with more accurate categorisation and protocols to improve ethnicity data imputation within healthcare systems. Effective solutions to address health inequalities require an understanding of the complexity of ethnic inequality [45]. Identifying the specific needs of different ethnic groups is imperative. Initiatives such as the introduction of integrated care systems (ICSs) represent initial steps in addressing health inequalities at the local level. The ICSs divide England into forty-two areas, allowing partnerships between NHS organisations and local authorities to facilitate the delivery of services for specific populations' needs [46]. Presenting national discrepancies in hospital admissions and inequalities helps frame overall health delivery, track national progress over time, and enable international comparison [13,47].
These can help frame policy at the national level, which should empower local partnerships to address region-specific differences. This could be done, for example, by allocating the necessary resources to the ICSs to enact meaningful targeted interventions to reduce preventable hospital admissions and improve outcomes in ethnic minority groups [37,44]. However, the British Medical Association's recent analysis [45] of the UK government's Commission on Race and Ethnic Disparities report [48] stated that more needs to be done to 'implement models of proportionate universalism to put proportionately more resource towards tackling the causes of worse health outcomes linked to ethnicity'. They concluded that initiatives so far have not gone far enough and that 'the structural factors that cause unlawful disparities between racial groups should not and cannot be ignored if we are to make progress.' This highlights that there is more work to be done by the established cross-governmental committees, in partnership with local authorities, to help reduce these ongoing ethnic health inequalities.
--- Strengths/Limitations
Key strengths of this work include the large and comprehensive datasets used at the population level over three consecutive years. Dividing the populations into six distinct age groups allowed reliable comparison of admission rates despite different age distributions within ethnic groups. Analysing age-standardised admission rates also removed the effects of age, which is the strongest confounding factor. However, these large datasets contain aggregated data, which can fail to elucidate more granular differences at the individual level. Without linkage to individual patient records, no association can be made between admissions and individual causes or outcomes for ethnic groups, nor can coding missingness or misassignment be ameliorated. Headline ethnic groups were also heterogeneous, representing crude conglomerations of disparate groups, which may misrepresent actual backgrounds. The large 'other' and 'unknown' groups make definitive conclusions difficult. For admissions assigned 'unknown' ethnicity, there is no population distribution, and it was not possible to reassign these to the correct ethnicity at the data granularity available in our study. Therefore, we removed these admissions from the analysis of admission rates, which may introduce bias into our results. However, the size of these populations in HES data is an important finding in its own right, as previously discussed. Although widely used, IMD is an aggregate indicator of seven dimensions, and its use limits our ability to make interpretations about individual factors, such as income or employment. The use of the 2011 census ethnicity population was necessary as the census is only conducted once every 10 years; however, changes in the population over this time were not captured. This also explains the discrepancy in total population numbers between the IMD-defined population (2018 dataset) and the ethnicity-defined population (2011 dataset). We do not have community data to complement our hospital admission findings, and further work is needed to compare these.
--- Conclusions
This study shows that Black and Asian ethnic groups have higher admission rates compared to White across all age groups and when standardised for age. There is evidence of incomplete recording and misidentification of ethnicity in NHS admission records, which may introduce bias to work on these datasets.
Differences in admission rates across individual ethnic categories cannot solely be explained by socioeconomic status. Further work is needed to identify the ethnicity-specific factors behind these inequalities to allow targeted interventions at the local level.
--- Data Availability
The data that support the findings of this study are openly available in NHS Digital and ONS datasets. Please see references 21, 23, 24, 25, 26, 27, 28, 29, 30, 31 for full details on access.
--- Supplementary Information
The online version contains supplementary material available at https://doi.org/10.1007/s40615-022-01464-7.
--- Author Contribution
--- Declarations
Ethics Approval: Ethical approval was not required for this study.
--- Conflict of Interest
The authors declare no competing interests.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Publisher's Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Purpose of the Study: Older homeless adults living in shelters have high rates of geriatric conditions, which may increase their risk for acute care use and nursing home placement. However, a minority of homeless adults stay in shelters and the prevalence of geriatric conditions among homeless adults living in other environments is unknown. We determined the prevalence of common geriatric conditions in a cohort of older homeless adults, and whether the prevalence of these conditions differs across living environments. Design and Methods: We interviewed 350 homeless adults, aged 50 and older, recruited via population-based sampling in Oakland, CA. We evaluated participants for common geriatric conditions. We assessed living environment using a 6-month follow-back residential calendar, and used cluster analysis to identify participants' primary living environment over the prior 6 months. Results: Participants stayed in 4 primary environments: unsheltered locations (n = 162), multiple locations including shelters and hotels (n = 88), intermittently with family/friends (n = 57), and, in a recently homeless group, rental housing (n = 43). Overall, 38.9% of participants reported difficulty performing 1 or more activities of daily living, 33.7% reported any falls in the past 6 months, 25.8% had cognitive impairment, 45.1% had vision impairment, and 48.0% screened positive for urinary incontinence. The prevalence of geriatric conditions did not differ significantly across living environments. Implications: Geriatric conditions were common among older homeless adults living in diverse environments, and the prevalence of these conditions was higher than that seen in housed adults 20 years older. Services that address geriatric conditions are needed for older homeless adults living across varied environments.
Introduction The median age of the U.S. homeless population is increasing (Hahn, Kushel, Bangsberg, Riley, & Moss, 2006). Currently, half of single homeless adults are aged 50 and older (Culhane, Metraux, Byrne, Stino, & Bainbridge, 2013), compared to 11% in 1990 (Hahn et al., 2006). Homeless people are thought to experience "accelerated aging" relative to the general population (Cohen, 1999;Gelberg, Linn, & Mayer-Oakes, 1990). Homeless adults have disproportionately high rates of chronic illnesses and poor health status (Garibaldi, Conde-Martel, & O'Toole, 2005;Gelberg, Linn, & Mayer-Oakes, 1990;Kimbler, DeWees, & Harris, 2015), premature age-adjusted mortality rates (Baggett et al., 2013;Hwang, Orav, O'Connell, Lebow, & Brennan, 1997), and high rates of geriatric conditions in individuals in their 50s and early 60s (Brown, Kiely, Bharel, & Mitchell, 2012). Geriatric conditions, such as functional impairment, falls, and urinary incontinence, typically first occur in housed adults aged 75 and older (Inouye, Studenski, Tinetti, & Kuchel, 2007) and are strongly associated with adverse health outcomes including acute care use, institutionalization, and death (Inouye et al., 1998;Inouye, Studenski, Tinetti, & Kuchel, 2007;Tschanz et al., 2004). Environmental factors play a central role in older adults' ability to adapt to these conditions. Older adults who live in stable housing may be able to modify their environment to adapt to geriatric impairments (Szanton et al., 2011;Wahl, Fange, Oswald, Gitlin, & Iwarsson, 2009). In contrast, older homeless adults may have great difficulty changing their environment, leading to a mismatch between their abilities and environment. As suggested by Lawton and Nahemow's environmental press model, this mismatch may make it more difficult to function independently, and may be most severe in older homeless adults living in more demanding environments, such as individuals staying in unsheltered places or moving frequently between different locations (Kushel, 2012;Lawton & Nahemow, 1973). In previous work, we found that geriatric conditions were common among older homeless adults recruited from homeless shelters (Brown et al., 2012). However, this study did not sample unsheltered individuals or those living temporarily with family or friends. These individuals make up the majority of homeless people nationally (Opening Doors: Federal Strategic Plan to Prevent and End Homelessness Update 2013, 2014) and may be at high risk for poor outcomes associated with geriatric conditions (Bamberger & Dobbins, 2014;Nyamathi, Leake, & Gelberg, 2000). Understanding how the prevalence of geriatric conditions varies among homeless persons living in differing environments is critical for targeting limited resources and planning appropriate services and programs for older homeless adults. Therefore, we examined the prevalence of common geriatric conditions in a population-based sample of older homeless adults, and determined whether the prevalence of geriatric conditions differed by living environment. We hypothesized that the prevalence of geriatric conditions would be higher among homeless individuals living in more demanding environments, such as unsheltered places, as these individuals may experience a larger mismatch between their abilities and environment. --- Design and Methods --- Design Overview We interviewed homeless adults, aged 50 and older, recruited via population-based sampling in Oakland, CA. 
These interviews were part of a cohort study, Health Outcomes in People Experiencing Homelessness in Older Middle agE (HOPE HOME). We developed the study methods in consultation with a community advisory board. The institutional review board of the University of California, San Francisco, approved the study.
--- Sample and Recruitment
Similar to our prior research with homeless adults living in San Francisco (Weiser et al., 2013), we sampled homeless individuals from low-cost meal programs and shelters. We extended the sampling frame to include recycling centers and places where unsheltered people stayed. Sampling sites included all overnight homeless shelters in Oakland that served single adults over age 25 (n = 5), all low-cost meal programs that served homeless individuals at least 3 meals per week (n = 5), a recycling center, and places where unsheltered homeless adults stayed. For the latter, we randomly selected days to accompany an outreach team that served unsheltered homeless people. We set total sampling goals for each sampling frame based on best estimates of the number of unique individuals who visited that site, or were unsheltered, annually. The study team randomly selected individuals at each site to meet these sampling goals. Individuals who met a brief eligibility screen were invited to participate in an enrollment interview. The study team conducted enrollment and baseline interviews from July 2013 to June 2014 at St. Mary's Center, a non-profit community-based center in Oakland that serves low-income older adults. Individuals were eligible to participate if they were aged 50 or older, able to communicate in English, and currently homeless as defined in the federal Homeless Emergency Assistance and Rapid Transition to Housing (HEARTH) Act (Homeless Emergency Assistance and Rapid Transition to Housing Act of 2009). Individuals who were unable to communicate due to severe hearing impairment were excluded. After determining eligibility, study staff used a teach-back method to obtain informed consent (Dunn & Jeste, 2001) and excluded individuals unable to provide consent. Study staff conducted in-depth structured baseline interviews with eligible participants. Individuals received a $25 gift card for completing the eligibility and baseline interviews. Of 1,412 people approached for eligibility screening, 536 met preliminary eligibility criteria and were scheduled for an enrollment interview (Figure 1). Another 505 were ineligible, and 335 declined to participate before we assessed eligibility. Of 536 people scheduled for an enrollment interview, 350 attended and were enrolled, 4 were ineligible, 7 declined, and 175 did not attend. People who declined to participate or did not attend the interview were similar to enrolled participants by sex, but were more likely to be African-American by observed race/ethnicity (82.3 vs. 79.7%, p = .04) and more likely to be recruited from meal programs (55.3 vs. 49.1%) and from unsheltered areas or recycling centers (20.1 vs. 15.7%, overall p = .003).
--- Measures
--- Geriatric Conditions
Participants reported if they had difficulty performing five activities of daily living (ADLs: bathing, dressing, eating, transferring, toileting) (Katz, 1983), and six instrumental activities of daily living (IADLs: taking transportation, managing medications, managing money, applying for benefits, setting up a job interview, finding a lawyer) (Sullivan, Dumenci, Burnam, & Koegel, 2001).
We assessed IADLs using the Brief Instrumental Functioning Scale, a validated instrument developed for use in homeless persons (Sullivan, Dumenci, Burnam, & Koegel, 2001). We defined ADL impairment as difficulty performing 1 or more ADLs; we defined IADL impairment similarly. We defined mobility impairment as self-reported difficulty walking across a room (Katz, 1983). Participants reported how many times they had fallen over the past 6 months and whether they had required medical treatment (Health and Retirement Survey (HRS), 2012). We assessed cognition using the Modified Mini-Mental State Examination (Bland & Newman, 2001). A licensed neuropsychologist trained research staff to administer this instrument and observed random interviews to ensure adherence to the protocol. We defined cognitive impairment as a score below the 7th percentile (i.e., 1.5 standard deviations below a reference cohort mean) or inability to complete the assessment (Bland & Newman, 2001;Bravo & Hebert, 1997). We defined visual impairment as a corrected visual acuity worse than 20/40 on a Snellen chart ("Screening for impaired visual acuity in older adults: U.S. Preventive Services Task Force recommendation statement," 2009). We defined hearing impairment as self-reported difficulty hearing (Moyer, 2012). Participants reported if they used a hearing aid. We assessed urinary incontinence using the three Incontinence Questions adapted for a 6-month period (incontinence defined as reporting having leaked urine during the prior 6 months) (Brown et al., 2006). We assessed depressive symptoms using the Center for Epidemiologic Studies Depression Scale (range 0-60; symptoms of major depression defined as a score >16) (Radloff, 1977). --- Living Environment We assessed living environment using a follow-back residential calendar (Tsemberis, McHugo, Williams, Hanrahan, & Stefancic, 2007). Each participant reported where he or she had stayed over the previous 6 months and the number of days spent in each location, including homeless shelters, unsheltered places, housing belonging to family/friends, transitional housing, hotels or single room occupancy units, rented rooms or apartments, homes they owned, medical facilities, drug treatment facilities, and jail or prison. We identified each participant's primary living environment using cluster analysis. Participants also reported where they had stayed each night during the 2 weeks before the interview and the date when they last had stable housing, defined as living in noninstitutional housing for at least 12 months. --- Participant Characteristics --- Sociodemographic Variables Sociodemographic characteristics included age, gender, race/ethnicity (African-American, white, Latino, multiracial/other), marital/partner status, and highest level of education. Participants reported the age at which they first experienced homelessness as an adult. --- Health Status We assessed self-rated general health (fair or poor versus good, very good, or excellent) (Ware, Kosinski, & Keller, 1996). Participants reported if a health care provider had ever told them that they had hypertension, coronary artery disease or myocardial infarction, congestive heart failure, stroke, diabetes, chronic obstructive pulmonary disease or asthma, arthritis, or HIV/AIDS (National Health and Nutrition Examination Survey (NHANES), 2009). 
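As an aside, the binary condition definitions above lend themselves to a short sketch. The one below uses hypothetical field names for one participant's raw responses and follows the stated cut-offs (any ADL/IADL difficulty, the 3MS threshold of 1.5 SD below a reference-cohort mean or inability to complete, acuity worse than 20/40, CES-D > 16); it is an illustration, not the study's actual code (analyses were run in SAS and Stata).

    def flag_conditions(p, ref_mean, ref_sd):
        """p: dict of raw responses for one participant (illustrative
        names). Returns the binary geriatric-condition flags defined
        in the measures above."""
        return {
            "adl_impairment": p["adl_difficulties"] >= 1,
            "iadl_impairment": p["iadl_difficulties"] >= 1,
            "mobility_impairment": p["difficulty_walking_across_room"],
            "fall_past_6mo": p["falls_6mo"] >= 1,
            # 3MS: unable to complete, or >1.5 SD below the reference mean
            "cognitive_impairment": (p["three_ms"] is None
                                     or p["three_ms"] < ref_mean - 1.5 * ref_sd),
            "visual_impairment": p["snellen_denominator"] > 40,  # worse than 20/40
            "urinary_incontinence": p["leaked_urine_6mo"],
            "depressive_symptoms": p["cesd"] > 16,
        }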
We assessed history of mental health problems using measures adapted from the National Survey of Homeless Assistance Providers and Clients (Burt et al., 1999) and the Addiction Severity Index (McLellan et al., 1992). Participants reported if they had ever experienced serious anxiety, depression, difficulty controlling violent behavior, or hallucinations that were not a result of substance use; had attempted suicide; or had been prescribed medication by a doctor for psychiatric problems. We defined a history of mental health problems as having experienced any of these issues (Burt et al., 1999). Participants reported if they had ever been hospitalized for a psychiatric problem.
[Figure 1 note: participants who declined after being approached (n = 335) declined before being assessed for eligibility; the number of ineligible individuals may therefore be higher than presented.]
--- Health-Related Behaviors
Participants reported their history of cigarette smoking using questions from the California Tobacco Survey (never smoker, former, current) (Al-Delaimy, Edland, Pierce, Mills, & White, 2011). We defined a history of alcohol use problems as reporting drinking to get drunk three or more times a week, and a history of drug use problems as reporting using drugs three or more times a week (Burt et al., 1999). We assessed alcohol use disorders in the past 6 months using the Alcohol Use Disorders Identification Test adapted for a 6-month period (range 0-20; alcohol problem defined as a score ≥8) (Babor, Higgins-Biddle, Saunders, & Monteiro, 2001). We assessed illicit drug use in the past 6 months using the World Health Organization Alcohol, Smoking, and Substance Involvement Screening Test adapted for a 6-month period (range 0-39; drug problem defined as a score ≥4 for use of either cocaine, amphetamines, or opioids) (Humeniuk, Henry-Edwards, Ali, Poznyak, & Monteiro, 2010).
--- Health Care Access
Participants reported if they had a regular location to obtain health care other than the emergency department (National Health Interview Survey (NHIS), Adult Access to Health Care and Utilization, 2012).
--- Statistical Analyses
We described geriatric conditions and participant characteristics using descriptive statistics. To identify the primary environment where each participant stayed, we used cluster analysis, which identifies existing patterns within data to generate similar groups of participants (Everitt, Landau, Leese, & Stahl, 2011;Kohn et al., 2010;Lee et al., 2016). Participants were assigned to a housing group based on the total number of days they reported staying in each location over the previous 6 months. For those with recent homelessness, these locations could include places where they had been housed. We chose to use cluster analysis rather than other methods of categorizing the data for several reasons. In an effort to best approximate a sample of older adults experiencing homelessness in Oakland, our study sampled homeless individuals from homeless shelters, unsheltered places, meal lines, and recycling centers. Similarly, we used a follow-back residential calendar to capture variability in living environment over a 6-month period, rather than assessing living environment cross-sectionally based on an individual's location at the time of recruitment.
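The grouping step this describes, detailed further in the next paragraph, can be sketched as follows, using hypothetical day-count rows and SciPy's implementation of Ward's linkage; the study itself ran its analyses in SAS and Stata, so this is only an illustration.

    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster

    # Rows = participants; columns = days spent in each location over
    # 6 months, e.g. [unsheltered, shelter, family/friends, hotel,
    # rental, jail]. Values below are invented for illustration.
    days = np.array([
        [160, 10, 5, 0, 0, 5],     # mostly unsheltered
        [30, 80, 10, 40, 0, 20],   # multiple locations
        [5, 5, 165, 0, 5, 0],      # staying with family/friends
        [0, 10, 0, 0, 170, 0],     # recently homeless (was renting)
    ], dtype=float)

    Z = linkage(days, method="ward")                  # hierarchical tree
    groups = fcluster(Z, t=4, criterion="maxclust")   # cut into 4 clusters
    print(groups)
    # In the study, the number of clusters was chosen by inspecting the
    # dendrogram and the stopping rules, not fixed in advance.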
Rather than categorizing this complex data using a priori living environment categories determined based on studies with narrower sampling frames, we used cluster analysis to identify naturally occurring groups within the data that we might not have otherwise predicted. We used two cluster methods to identify living environment groups. For our primary analysis, we used Ward's linkage to minimize the sum of squares difference within groups (Ward, 1963). We performed visual analysis of a dendrogram representing the data structure to select an optimal number of clusters, and used bivariable matrices to confirm that we could identify natural groupings. We then verified these cluster classifications using k-medians cluster analysis for a set number of 3-8 clusters (Calinski & Harabasz, 1974;Hair, Black, Babin, & Anderson, 1987). To measure the distinctness of the groups generated by these two cluster methods, we used the pseudo-t² and pseudo-F stopping rules (Calinski & Harabasz, 1974). To confirm that there were significant distinctions between groups, we performed one-way ANOVA. To test for differences in geriatric conditions and participant characteristics across housing groups, we used the Kruskal-Wallis test of medians and chi-square tests for categorical variables. We used multivariable logistic regression models to determine how the association of living environment with each geriatric condition changed after adjusting for key factors including age, sex, and alcohol and drug use problems. Where differences in association were found, we wished to assess whether they reflected underlying vulnerabilities in the population or whether they persisted even after adjustment. We used separate models for each condition and treated living environment as an indicator variable in which one group was the referent. We considered sex as a potential effect modifier of the association between environment and each geriatric condition, as men are more likely to live in unsheltered environments than are women (North & Smith, 1993). As living environment may reflect the length of time an individual has been homeless, we conducted separate unadjusted logistic regression analyses substituting time since last stable housing (modeled as a linear variable) in place of environment. Analyses were conducted using SAS version 9.2 (SAS Institute, Cary, North Carolina) and Stata version 11 (StataCorp).
--- Results
--- Participant Characteristics
The median age of the cohort was 58 years (IQR, 54, 61), 77.1% were male, and 79.7% were African American (Table 1). Nearly half (43.6%) experienced their first episode of adult homelessness at age 50 or older. The majority of participants (55.7%) reported poor or fair health status; chronic medical conditions were common. Nearly three-quarters (71.3%) had a history of mental health problems. Most participants smoked tobacco (65.4%) and more than half had a lifetime alcohol and/or drug use problem. Participant characteristics including health status and health-related behaviors did not differ significantly across housing groups, with the exception of sex, having a first episode of adult homelessness at age 50 or older, and having a regular location to obtain health care (Table 1).
--- Living Environment Based on Cluster Analysis
Cluster analysis of the locations where participants stayed over the previous 6 months yielded 4 groupings, as previously reported (Lee et al., 2016). The first group of participants spent most of their time unsheltered ("unsheltered," n = 162).
The second moved between multiple locations including homeless shelters, unsheltered places, hotels, and jails ("multiple location users," n = 88). The third spent most of their time staying with family and/or friends ("cohabiters," n = 57). The fourth group had only recently become homeless, and prior to becoming homeless had spent most of their time in rental housing ("recently homeless," n = 43). Unsheltered participants spent on average 85.6% of nights unsheltered; multiple location users spent 39.4% of their nights in shelters, 15.8% unsheltered, 13.2% in hotels, and 8.1% in jail/prison; cohabiters spent 71.2% of nights staying with family/friends; and recently homeless individuals spent 80.2% of nights in rental housing. Additional group characteristics are reported elsewhere (Lee et al., 2016). At the time of the interview, 46.9% of participants reported that they had stayed exclusively in an unsheltered location over the previous 2 weeks, 33.1% had stayed exclusively in a homeless shelter, 8.0% had stayed in both a homeless shelter and an unsheltered location, 2.9% had stayed in transitional housing, 2.6% had stayed with family/friends, 1.0% had stayed in a hotel, and, among recently homeless individuals, 6.0% had stayed in their own apartment or house. The median time since participants had stable housing was 2.1 years (IQR, 0.6, 5.8).
--- Geriatric Conditions Overall and by Living Environment
Over a third of participants (38.9%) reported difficulty performing 1 or more ADLs and 49.4% reported difficulty performing 1 or more IADLs (Table 2). Nearly one-fifth (17.1%) had difficulty performing three or more ADLs. More than one-quarter of participants (26.9%) reported difficulty walking, and 33.7% reported one or more falls in the past 6 months; 14.3% fell three or more times. Of participants who reported falling, one-third required medical treatment. One-quarter of participants (25.8%) screened positive for cognitive impairment. Visual impairment was present among 45.1% of participants and hearing impairment was reported by 35.6%, yet only three participants had a hearing aid. Nearly half of participants (48.0%) screened positive for urinary incontinence and 38.3% reported symptoms of major depression. The prevalence of each geriatric condition did not differ significantly across housing groups, with the exception of vision impairment, which was more prevalent in unsheltered participants than in other groups (p = .04, Table 2; standardized residual for unsheltered group, 2.80). In analyses to determine if age, sex, or substance use problems confounded the relationship between environment and each geriatric condition, the odds of each condition changed less than 10% after adding these variables to the model (data not shown). In analyses to determine if sex modified the association between environment and geriatric conditions, the interaction term for sex and environment was significant only in the model for ADL impairment (unadjusted p for interaction = .04; p adjusted for age, sex, and substance use = .01). Based on the adjusted model including the interaction term, we estimated odds ratios for ADL impairment in women (versus men) in each environment. Women renters, cohabiters, and multiple location users had higher odds of ADL impairment than men, although the confidence intervals for women renters and cohabiters crossed 1. Unsheltered women had lower odds of ADL impairment than men, though the confidence interval crossed 1 (data not shown).
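For illustration, the adjusted interaction model described above might look like the following in Python's statsmodels (the study itself used SAS and Stata); the DataFrame, its column names, and the synthetic data standing in for the cohort are all hypothetical.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Synthetic stand-in data; one row per participant.
    rng = np.random.default_rng(0)
    n = 400
    df = pd.DataFrame({
        "adl_impairment": rng.integers(0, 2, n),
        "environment": rng.choice(
            ["unsheltered", "multiple", "cohabiter", "renter"], n),
        "sex": rng.choice(["male", "female"], n),
        "age": rng.integers(50, 80, n),
        "substance_problem": rng.integers(0, 2, n),
    })

    # Living environment as an indicator variable (one referent group),
    # a sex-by-environment interaction, adjusted for age and substance use.
    model = smf.logit(
        "adl_impairment ~ C(environment, Treatment('unsheltered')) * sex"
        " + age + substance_problem",
        data=df,
    ).fit()
    print(model.summary())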
Duration of time since last stable housing was not significantly associated with the presence of each geriatric condition (data not shown).
--- Implications
We found that the prevalence of geriatric conditions was high in a population-based sample of older homeless adults. Despite a median age of 58 years, participants had rates of geriatric conditions similar to or higher than adults in the general population with a median age of nearly 80 years (Kelsey et al., 2010;Leveille et al., 2008). Our findings are consistent with earlier research showing that geriatric conditions are common in older homeless people recruited from homeless shelters (Brown et al., 2012), but extend this earlier work through population-based sampling that includes people who meet the federal HEARTH definition of homelessness (Homeless Emergency Assistance and Rapid Transition to Housing Act of 2009). We did not find differences in the prevalence of geriatric conditions across different environments, contrary to our hypothesis (Nyamathi, Leake, & Gelberg, 2000). Our findings suggest that services to address geriatric conditions are needed for older homeless adults living in a range of environments. Consistent with previous work (Brown et al., 2012), we found that despite this cohort's relatively younger age, the prevalence of most geriatric conditions was higher compared to both the general older population and the older population living in poverty. Compared to a population-based cohort of adults with a median age of 79 years, rates of several conditions were higher in the older homeless cohort, including ADL impairment (38.9% older homeless vs. 22.6% general older population), IADL impairment (49.4% vs. 40.4%), cognitive impairment (25.8% vs. 12.0%), visual impairment (45.1% vs. 13.8%), and urinary incontinence (48.0% vs. 41.1%) (Brown et al., 2012;Kelsey et al., 2010;Leveille et al., 2008). Although few data are available on the prevalence of geriatric conditions in older adults living in poverty, results from a cohort of community-dwelling adults aged 65 and older (mean age 71.7 years) with income less than 200% of the federal poverty level are similar. Older homeless adults had a higher prevalence of falls (33.7% older homeless adults vs. 21.9% older adults living in poverty), visual impairment (45.1% vs. 12.0%), urinary incontinence (48.0% vs. 29.5%), and depression (38.3% vs. 11.3%) (Counsell et al., 2007). While the overall prevalence of geriatric conditions in the cohort was disproportionately high compared to older individuals in the general population, the prevalence of geriatric conditions did not differ across living environments. The similar prevalence of geriatric conditions in each environment may reflect several factors. First, it is possible that we lacked power to detect a difference in prevalence due to the relatively small size of the environment subgroups. However, relatively small differences in prevalence are unlikely to be important for clinical practice or policy, and geriatric conditions were prevalent in all subgroups. Second, older homeless people who develop geriatric conditions that are influenced by the person-environment interaction may seek the environment that best fits their abilities, resulting in a "leveling" of the prevalence of geriatric conditions across environments. Survival bias may contribute to this leveling, as older people who are unsheltered and have geriatric conditions may be more likely to be admitted to nursing homes or to die.
Finally, the prevalence of key risk factors for geriatric conditions was similar across environments; the similar distribution of risk factors may contribute to the similar prevalence of geriatric conditions. Different homeless environments pose different challenges in managing geriatric conditions. Adaptive equipment such as glasses or walkers may be lost, damaged, or stolen in any environment, but this risk may be highest in unsheltered environments. These challenges may have contributed to the significantly higher prevalence of vision impairment in unsheltered people; differing access to regular medical care may have also played a role. Our finding that sex modified the association of living environment and ADL impairment may reflect a greater tendency for women with ADL impairment than men to seek out sheltered environments. The high prevalence of geriatric conditions in homeless people living in diverse environments has implications for planning services and care. In the general population, approaches to managing geriatric conditions include rehabilitation, environmental modification, and addressing polypharmacy; such interventions reduce adverse outcomes associated with geriatric conditions, including acute care use and institutionalization (Counsell et al., 2007;Gill et al., 2002;Tinetti et al., 1994). However, these interventions are difficult to implement in the environments in which homeless individuals live. This difficulty points to the need for broader solutions that address both geriatric conditions and homelessness. Permanent supportive housing, defined as subsidized housing with closely linked or on-site supportive services, helps maintain housing and may reduce acute care utilization among homeless adults (Sadowski, Kee, VanderWeele, & Buchanan, 2009;Stergiopoulos & Herrmann, 2003;Stergiopoulos et al., 2015). Currently, many older homeless adults who have functional impairment and other geriatric conditions may be placed in nursing homes due to a lack of other appropriate options (Bamberger & Dobbins, 2014). However, permanent supportive housing may be able to meet the needs of the aging homeless population, with modifications including personal care attendants and environmental adaptations. Further study is needed to determine if such adapted housing programs could allow formerly homeless individuals to age in place, delaying or preventing the need for nursing home care. This study has several limitations. We excluded individuals with severe hearing impairment (n = 4) and those unable to provide informed consent (n = 5), potentially leading to an underestimation of the prevalence of hearing and cognitive impairment. Standard measures of function may not measure function appropriately in vulnerable groups (Tennant et al., 2004). However, we used an IADL assessment tool specifically developed for use in homeless populations (Sullivan et al., 2001). We assessed living environment over the prior 6 months using self-reports, which may be less accurate among persons with cognitive impairment. However, we employed a follow-back residential calendar technique validated for use in homeless populations (Tsemberis et al., 2007). Because the study was conducted in one city, our findings may not be generalizable to other areas. However, participant characteristics were similar to those in nationally representative data (Opening Doors: Federal Strategic Plan to Prevent and End Homelessness Update 2013, 2014).
As the population of older homeless adults continues to grow, developing appropriate services for this group is increasingly important. These services must address the high prevalence of geriatric conditions in older homeless adults living across a range of environments. Housing programs that incorporate interventions to address geriatric conditions provide a promising model of care for this vulnerable and growing population.
An orientation model for implementing and sustaining integrated health and social care hubs for early childhood development.
Michael Hodgins 1, Katarina Ostojic 1, Si Wang 1, Kim Lyle 2, Kenny Lawson 4, Tania Rimes 3, Sue Woolfenden 1,5
1: The Population Child Health Research Group, UNSW, Randwick, NSW, Australia
Evidence is emerging on the efficacy of Integrated Health and Social Care (IHSC) hubs to improve the early detection of, and intervention in, developmental vulnerability in children from culturally and linguistically diverse and/or socioeconomically disadvantaged backgrounds. IHSC hub models typically involve co-located child and family health services and non-government organisations, which deliver a range of psychosocial services, for example playgroups, domestic violence support, mental health support, and early childhood education. However, there remains a dearth of evidence on how to successfully implement and sustain the integration of health and social care. Our project aimed to evaluate the impact and implementation of IHSC hubs in three sites in New South Wales, Australia, with high proportions of migrant and refugee communities. To help implement and sustain IHSC hubs, we developed an orientation model detailing the operational principles required for the successful integration of IHSC hubs. This presentation is based on a qualitative exploration of the barriers and enablers to implementing IHSC hubs in the three sites, guided by the Consolidated Framework for Implementation Research. Development of the orientation model involved semi-structured interviews and five workshops with 25 participants, including clinicians, providers, and managers from child and family health services and non-government social services, to understand their perspectives on the barriers and facilitators to implementing an IHSC hub. An important finding from this work was the need for tangible guidelines that detail the activities that best enable the successful integration of services within a hub model. Our orientation model details the operational principles of integrating health and social services for early childhood health. These include the setting-up phase activities of buy-in, which details approaches for developing a common agenda, and partnership development, which outlines mechanisms for fostering collaboration between health and social services. Following this, our orientation model articulates the need to establish connecting support, including the infrastructure, governance, and resources that support integration between services; ongoing integration activities, such as the feedback mechanisms and ongoing communication channels necessary for successful integration; and activities that enhance a hub's relevance for the community it serves. This model establishes key components for implementing IHSC hubs, which are garnering increasing attention in early childhood contexts globally. Future work will involve disseminating the orientation model broadly across child and family health and social services and evaluating the uptake of the model in broader contexts.
In this study, we explored the level of awareness and practice of HIV prevention among married couples from selected communities in Malawi. We carried out the study from October to December 2008 in four communities, two each from Chiradzulu and Chikhwawa districts of Malawi. We conducted face-to-face in-depth interviews with 30 couples in each district using a semi-structured interview guide. The interviews lasted approximately 60-90 minutes. The husbands and wives were interviewed separately. The interviews were audio-taped using a digital recorder. We wrote field notes during data collection and later reviewed them to provide insights into the data collection process. We computed descriptive statistics from the demographic data using SPSS version 16.0. We analyzed qualitative data using Atlas.ti 5.0 computer software. Coding of the data generated themes, which we present in qualitative narration. The couples' ages ranged from 20 to 53 years, the majority (52%) being in the 20-31 year age group. Most of the couples (67%) had attained only primary school education and 84% had been married only to the current partner. Most couples (83%) depended upon subsistence farming and 47% had been married for 3 to 9 years. The number of children per couple ranged from 1 to 10, most couples (83%) having between 1 and 5 children. All couples were aware of HIV prevention methods and talked about them in their marriages. Both wives and husbands initiated the discussions. Mutual fidelity and HIV testing were regarded as the HIV prevention methods appropriate for couples to follow. For most couples (54) there was mutual trust between husbands and wives, and members of only a few couples (6) doubted their partners' ability to maintain mutual fidelity. Actual situations of marital infidelity were, however, detected among 25 couples and often involved the husbands. A few couples (5) had been tested for HIV. No couples favored the use of condoms with a marriage partner as an HIV prevention method. The level of HIV prevention awareness among couples in Malawi is high and almost universal. However, there is low adoption of HIV prevention methods among the couples because the methods are perceived to be couple-unfriendly, being incompatible with the socio-cultural beliefs of the people. There is a need to target couples as units of intervention in the adoption of HIV prevention methods by rural communities.
--- HIV prevention awareness and practices among married couples in Malawi
Ellene Chirwa 1, Address Malata 2, Kathleen Norr 3
1. Kamuzu College of Nursing, University of Malawi, Blantyre Campus, P.O. Box 415, Blantyre. 2. Kamuzu College of Nursing, University of Malawi, Lilongwe Campus, P/Bag 1, Lilongwe. 3. University of Illinois at Chicago, College of Nursing, USA.
--- Introduction
The majority of sexually active adults in Malawi are married. According to the 2004 Malawi Demographic and Health Survey, 67% of women of reproductive age and 63% of men are married 1. Marriage would be protective if both partners were HIV negative at the time of marriage and maintained a monogamous relationship 2. However, this is not the situation in many marriages. Married couples face a substantial risk of contracting HIV from their partners, presumably through premarital and extramarital sexual behavior 3. The major setback to HIV prevention among married couples is that HIV prevention methods emphasize mutual fidelity, abstinence, and condom use, which are not readily accepted by most married couples 4. Abstinence, for example, is culturally not readily accepted in a marriage setting, and mutual fidelity poses a problem for many married couples because of gender inequality, cultural norms, lack of trust, and communication barriers 4. Maintaining fidelity is a challenge especially in communities where polygamy is traditionally accepted 3. The acceptability of polygamy has resulted in husbands being more likely than wives to report extramarital sexual partners, and wives being more likely than husbands to suspect that their spouse has been unfaithful 3. The use of condoms as an HIV prevention method is culturally viewed as inappropriate for married couples; consequently, condom use among married couples in Malawi is as low as 4% 1. The current HIV prevention methods have therefore not fully addressed the needs of married couples 5. The major reason has been the approach used in the implementation of the HIV prevention methods. Men do not actively participate in reproductive health services because these services are combined with antenatal clinics whose main clients are women. Women have been the focus of these services due to the fear that if men are included the women will lose their voices 6. Focusing on women only has reinforced the belief that women are responsible for safer sex 7. The problem with this approach is that it ignores the dynamic nature of sexual behavior, which means that HIV risk reduction is not fully controlled by either partner 8. In a marriage setting, both the partners and the community have a long-term commitment to preserving the relationship 9. The ideal situation is therefore to target both partners in a marriage in order to consider the structural and environmental forces, as well as the socio-cultural context, that shape HIV vulnerability for men and women 10,13. Couple-based approaches have increased adherence of couples to HIV prevention methods in the USA 14, and in Kenya, Tanzania, and Trinidad 15.
--- Objective
The aim of this study was to explore the level of awareness and practice among couples in Malawi on HIV prevention methods.
--- Methods
--- Design
We used a cross-sectional design employing qualitative data analysis tools to gain an in-depth understanding of husbands' and wives' levels of awareness on HIV prevention and their actual practices.
--- Study Population and Size
We conducted the study in southern Malawi, in Chiradzulu and Chikhwawa districts, from October to December 2008. Two communities were selected from each district such that one was near the district headquarters, representing a town setting, and the other was in an area remote from the town, representing a rural setting. We used purposeful sampling to enrol participants for the study. Initially, the planned sample size was 30 couples per district based on the number of couples in each community, but data saturation was reached when the sample size reached 15 couples per district. The final sample size was therefore 60 participants (30 couples).
--- Inclusion and exclusion criteria
To be recruited for the study, participants had to be: (a) traditionally or legally married for 3 or more years; (b) living together with the spouse; (c) in a monogamous married relationship; (d) at the home district of either husband or wife; (e) at least 18 years old; (f) a wife of childbearing age (i.e., less than 45 years old); (g) with at least one child; (h) able to speak Chichewa; and (i) willing (both spouses) to participate in the study. Spouses who were not traditionally or legally married, or who were separated or divorced, were excluded from the study. In addition, couples in which a partner was below 18 years, the wife was above 45 years, there were no children, or consent was not given were also excluded from the study.
--- Data Collection and Analysis
We used a semi-structured interview guide to collect data. The first part of the interview contained closed-ended questions that collected the demographic variables of age, length of marriage, tribe or ethnic group, education level, socioeconomic status, marriage lineage tradition, and number of children. The second part of the interview guide had open-ended questions that collected qualitative data. The questions were translated into the vernacular language and focused on the level of HIV awareness, communication, and prevention methods among couples. The instrument was pilot tested in different communities in the two districts to assess the clarity of the questions and the feasibility of data collection. We collected data through face-to-face in-depth interviews which lasted approximately 60-90 minutes. The husbands and wives were interviewed separately to facilitate discussion of sensitive topics. In addition, the female principal investigator interviewed the wives, while the male research assistant interviewed the husbands. The husband and wife were not able to see or hear each other during the individual interviews. The interviews were conducted at an agreed public and private place. The discussions were audio-taped using a digital recorder. Field notes were written during data collection and later reviewed to provide insights into the data collection process. The participants were assured of confidentiality and all the data were locked in the researcher's office. Descriptive statistics were computed for the demographic data using SPSS version 16.0. Qualitative data were analyzed using Atlas.ti 5.0 computer software. The qualitative data management and analysis followed these steps: (1) organizing and preparing the data for analysis; (2) reading through all the data; (3) detailed analysis with a coding process; (4) using the coding process to generate themes for analysis; (5) deciding how the themes would be presented in a qualitative narration; and (6) interpretation of the data 16.
--- Ethical Consideration
The study was approved by the institutional review boards of the University of Illinois at Chicago in the USA and the Research and Ethics Committee of the University of Malawi's College of Medicine. Permission to access the communities at district level was sought from the District Commissioners. The District Health Officers of both districts were informed of the study, and permission was sought to use community health workers from each district hospital to assist the investigator with the identification of potential participants in the communities. The chiefs granted permission for the researchers to access the homes of potential participants.
--- Results
--- Participant Characteristics
In Chiradzulu district, eight couples came from Mwanje, a community close to the district township, and seven couples from Njagaja, a rural area. Similarly, in Chikhwawa, eight couples came from Mbenderana 1 and seven from Moses, representing township and rural communities respectively. All participants had lived in their village for at least two years. The age composition of the participants is shown in Table 1. The majority of the participants were in the younger age groups, with most between 20 and 40 years of age (Table 1). The participants who were over 45 years were husbands. The length of stay in the village ranged from 2 to 53 years, with equal proportions (33%) of participants in the length-of-stay categories of 2-20, 21-40 and 41-53 years. The education levels of the participants are shown in Table 2. Most of the participants had attended primary school, and the combined proportion of participants who were literate was 91% (Table 2). The proportion of participants with adequate food throughout the year was 50%. The majority (83%) were of low socioeconomic status and depended on subsistence farming and small businesses for their income. For the remaining 17%, the husbands held salaried jobs as primary school teachers, bricklayers, messengers, office assistants, watchmen or grocery assistants. Their income enabled 80% of the participants to provide support to their relatives, 90% to own radios, 53% to own bicycles, and 30% to have iron-sheet-roofed houses. The predominant tribe in Chiradzulu was Lomwe (84%); the Yaos comprised 12%, and the others (Ngoni and Chewa) made up 4% of the participants. In Chikhwawa, 66% of the participants were Man'ganja; the Senas comprised 16%, and the remaining 18% came from other tribes. The majority of the participants (82%) from both districts had been married only once, that is, only to the spouse they were living with at the time of the study. The matrilineal system was predominant in the two districts and was practiced by 60% of the participants. The majority of the participants (83%) had 1-5 children, and the remaining 17% had larger families of between 6 and 10 children. Most of the couples (47%) had been married for 3 to 9 years, with equal proportions married for 10 to 15 years (23%) and 16 to 20 years (23%). The proportion of couples married for 21-23 years was low (7%).
--- HIV prevention
The participants described HIV prevention strategies in their marriage relationships. Their descriptions were placed into the categories of HIV prevention awareness, HIV communication, and HIV prevention methods.
--- HIV prevention awareness
All couples knew that, in a married relationship, HIV is contracted mainly through extramarital sexual relations. In addition, the participants were aware that either the husband or the wife could be involved in extramarital sexual relations. However, both the women and the men reported that men were more likely to indulge in extramarital relationships than women. One of the men narrated as follows: "This is common to men and not women, you go out and sleep with other women and you come back and sleep with your wife. If you contract the infection you can easily transmit it to your wife." Apart from extramarital sexual relationships, all participants mentioned that HIV could also be transmitted through the sharing of razor blades, needles, safety pins, and toothbrushes. One woman narrated how HIV could be transmitted through the use of safety pins as follows: "Supposing you use an HIV infected needle to remove a thorn and the infected blood on the needle touches yours, you will contract AIDS." The sources of information for all the participants were the radio, the hospital, and village meetings held by different organizations. At the village meetings, some non-governmental organizations, the church, civil society and drama groups visited the villages and held rallies where messages about the spread of HIV were delivered. The three participants who were teachers also mentioned books and newspapers as sources of information. The church was an important source of information regarding the spread of HIV for four of the participants.
--- Communicating about HIV Prevention
The majority (27 couples) discussed HIV prevention in their marriages. Of those who did not, two couples gave no reasons, but one couple explained that there was no need to discuss the issue since they got information from the radio. A total of 28 husbands and 27 wives explained that wives initiated communication on HIV prevention. The women (19 wives) explained that they found an opportune time to talk about HIV when they were chatting with their husbands. The women frequently initiated the communication by asking their husbands how they should protect each other from HIV. Others suggested to their husbands that they go for an HIV test, after learning about the test at the hospital during under-five or antenatal clinic visits. Other women spoke about HIV and initiated the communication when advising their husbands against extramarital sexual relations. Some women asked their husbands whether they were having extramarital sexual affairs. A wife from Chikhwawa shared: "There was a time when I became very sick and I asked my husband if he had been indulging in extramarital sexual relationship with other women. He denied ever having sex with other women. I however doubted him and asked him again but he denied. I thought that possibly my illness was due to HIV and that he was the one who infected me. After he denied any extramarital sexual relationships, we went to a government hospital where we were tested. We were told that we were not HIV-positive." Fourteen participants (six husbands and eight wives) described situations in which communication was initiated by the husband. The husbands initiated the communication when they were chatting with their wives in the evenings before going to bed.
Some men were prompted by HIV messages on the radio, some had received education on HIV prevention, and some were prompted simply by seeing people suffering from HIV or engaging in sexual behavior carrying a high risk of contracting HIV. The communication took the form of cautioning the wife about risky sexual behaviors and the dangers of HIV. The husbands and wives described their reasons for discussing HIV prevention in their marriages. Fourteen participants (six husbands and eight wives) explained that the love, respect, and trust that existed in the family helped them to listen to and understand one another, as narrated by one of the husbands: "I take my wife as my mother and because of love, I listen to what she says and she too listens to me, hence we listen to each other." Concern about death and leaving children as orphans prompted some of the couples to discuss HIV prevention. One of the men had this to say: "We need to take care so that we should look after our children; they should grow healthy because if one of us dies then our children will also be in problems." The couples that communicated about HIV prevention reported that they openly discussed the issue with their spouses. However, a few wives could not communicate openly with their husbands, as narrated by two women (one from each district). One of the women explained that her husband showed no interest in discussing HIV prevention. The other explained that her husband accused her of having extramarital affairs when she initiated communication on HIV prevention. Under these circumstances the wives failed to influence their husbands to discuss HIV prevention.
--- HIV Prevention Methods
--- Maintaining Fidelity
Fifty-four participants (28 husbands and 26 wives) mentioned that they maintained mutual fidelity in their marriages. The husbands and wives explained that they encouraged and advised each other to be faithful to one another. Some mentioned that they advised each other to "be responsible for one another," "trust one another," "act like one body," or "protect themselves." Based on these descriptions, the husbands and wives discussed this topic and agreed with one another to maintain fidelity. This was the method they felt was best for married couples. Eight participants (five husbands and three wives) reported that they had never had extramarital sexual relations since marriage and were certain that their partners were also maintaining fidelity. Here is what one husband shared: "I believe women are the same and I do not see a reason for going out with other women. My wife satisfies my sexual desire and vice versa." These husbands and wives explained that there was nothing that made them suspect that their partners were having extramarital relations. Forty participants (22 husbands and 18 wives) reported that they would terminate the marriage if their partners had extramarital relationships, indicating that fidelity was strongly expected in the marriage relationships. Despite the reports that husbands and wives depended on maintaining fidelity in their marriages, seven wives mentioned that they doubted the fidelity of their husbands. The wives explained that "a man is never satisfied with one woman." Therefore, they felt husbands were likely to have extramarital affairs. One wife shared: "Yes, I cannot know what he does whenever he is out and comes home at 1:00 am. He tells me that he was watching soccer on TV, but I cannot be sure whether he was alone or with other women.
I just tell him that whatever he does will one day be revealed." Despite the norm of marital fidelity, 25 participants (15 wives and 10 husbands) discussed situations in which their partners had extramarital sexual relations. Husbands and wives differed in the way they reported these situations. The husbands reported in a way that avoided giving the impression that they themselves had extramarital affairs. For example, four husbands reported that their wives had heard rumors that they were having extramarital affairs, yet this was not true. Two husbands reported that they had actually been involved in extramarital sexual relations which their wives later discovered. One of these husbands mentioned that he usually got involved in extramarital affairs when he was away from home, but that he used a condom. The issues were resolved by advising the husbands to change and warning them that their behavior would lead to contracting AIDS and leaving their children as orphans. The husbands apologized, or close relatives apologized on their behalf. Concern about the children's welfare was the main reason the wives accepted the apology and forgave their husbands, as narrated by one woman: "(laughter) because I have children. He apologized and his relatives also apologized and advised that I should forgive him. So I forgave him." Three wives from Chikhwawa reported a traditional way of discovering that the husband was having extramarital sexual relations. This was related to the norm of abstaining from sexual relations from the birth of a child until the child was 6-12 months old. If the husband was having extramarital sexual relations during this period, the child or the wife would become ill. Two wives reported that their children became ill with diarrhea, fevers, or edema, and the elders advised them that this was because the husband was having extramarital sexual affairs. One wife reported that she herself started feeling general body weakness and that her child became ill. In all of these situations, the parents of the husband or wife took the husband, wife and child to the traditional healers, where the husband confessed infidelity, traditional medicine was prepared, and the child or mother was healed. Another wife did not report any illness, but her husband confessed that he had been having extramarital sexual relations when they wanted to resume sexual relations, and traditional medication was given to the husband. In all situations, the wives did not react negatively because they were concerned about the child's well-being. There was only one situation in which a husband reported that his wife was involved in extramarital sexual relations. The husband was tipped off by friends and later caught her red-handed. He reported the issue to his close relatives and the chief. They told him that they could not resolve the issue and that he needed to go to court. The husband chose to forgive his wife and defended her by saying that she did not know what she was doing. The issue was resolved and the marriage was sustained. Regarding maintaining fidelity, all couples agreed with the words of one of the women: "it is normal for men to go out with other women" but "it is not appropriate that a woman should have multiple partners."
--- HIV Testing
Fifty-one participants (27 husbands and 24 wives) mentioned that testing is a way of knowing one's HIV status and finding ways of prevention. However, actual HIV testing was reported by only five couples (three from Chiradzulu and two from Chikhwawa). These couples reported that both partners had been tested for HIV.
The factors that motivated them to go for testing were: illness of a spouse; the wife being told at the antenatal clinic; and simply wanting to know their HIV status. In all situations except one, the husband and wife agreed amicably to go for HIV testing, as shared by one of the women: "Both of us initiated this. It was as if we were thinking along the same lines…both of us have had the test four times. Now we just encourage each other because we are not infected by HIV." Three of the five couples reported that they were HIV-negative. In one couple, however, the wife was HIV-positive, and in another both partners were positive, although the husband reported only his wife's HIV-positive status. Both couples mentioned that they used condoms, but not consistently, and that they were planning to have more children in the future. One couple was taking ARVs and attended an ART clinic at the district hospital. Five couples from Chiradzulu reported situations where only one partner had been tested. They explained that the untested partners were either planning to go for testing later or were not planning to test at all, because they assumed that if one partner was HIV-negative then their own status was the same. In four of these five situations, the wives tried to persuade their husbands to go for HIV testing, but they were not successful. Eight couples (four from each district) reported that they felt no need to go for testing. Some did not think they needed an HIV test because they believed they were not at risk of getting HIV.
--- Condom Use
Fifty-seven participants (29 husbands and 28 wives) said they did not use condoms in their marriages. Only two wives from Chiradzulu and one husband from Chikhwawa mentioned that they used condoms as a means of HIV prevention, because one or both partners were HIV-positive and they had been advised at the hospital to use condoms. However, the partners of these HIV-positive wives or husbands did not mention that they used condoms. Almost all participants expressed that condom use was not appropriate in a married relationship because it implied a lack of trust; they said that condoms are meant for extramarital sexual relationships. Other husbands and wives expressed that they did not like condoms because they are not reliable; they are mostly expired and would burst while in use. Despite these negative feelings about condom use in married relationships, four husbands and eight wives mentioned that they would demand condom use if they suspected that their partners were having extramarital sexual relations. A total of five wives expressed a desire for condom use but had not been successful in persuading their husbands.
--- Discussion
The results show that all couples were aware of how HIV is contracted and how it can be prevented. The level of HIV awareness among the couples in this study was almost universal, owing to the efforts of the government, churches, the media and non-governmental organizations in sensitizing communities about the dangers and prevention of HIV. All couples were aware of maintaining fidelity as an HIV prevention method. However, there were challenges regarding the couples' actual practice. The results show that, despite couples advising and encouraging each other to be faithful, some couples did not maintain fidelity. The finding that men were more likely than women to indulge in extramarital relationships agrees with the findings of other studies in Malawi.
It has been reported that husbands were more likely than wives to report that they had extramarital sexual partners, and that wives were more likely than husbands to suspect that their spouses had been unfaithful 3 . The finding that some husbands were actually involved in infidelity is also consistent with other researchers' reports that husbands are likely to have extramarital affairs 17 . Hence, married couples are at risk of HIV primarily because of the current behaviors of their partners 2 . The results on wife infidelity in this study agree with the findings of other studies that only a few women admit to infidelity 3 . The reason for the low reporting of women's infidelity is the belief that it is inappropriate for women to have multiple sexual partners. However, in a society that strongly discourages wife infidelity, it was surprising that a husband reported having caught his wife red-handed and also having forgiven her. These results may reflect the way couples perceive their vulnerability to HIV, in which a woman's own infidelity is not associated with her own or her spouse's assessment of risk 3 . There was open communication about HIV prevention among the couples in this study, initiated by both partners. The communication was initiated because of concerns for a healthy family life, the desire to know their HIV status, and family planning. The fear of death and of leaving children as orphans was a major concern among the couples. The results show that shared decision-making was used when husbands and wives discussed how to protect each other from HIV. This finding is supported by a study conducted with married women in Malawi 18 . The women identified a range of contextually appropriate ways to resist exposure to HIV, which included starting discussions with their husbands about the dangers of extramarital partners and convincing them of the risks. Couples did not favor condom use because of the traditional belief that husbands and wives cannot use this method in a married relationship. This is consistent with results from other studies in Malawi, which reported a lack of reference to condom use during spousal discussions on strategies to deal with the mutual risk of contracting HIV within marriage 1,4 . In addition, the desire for children among the HIV-positive couples also discourages the use of condoms. Consequently, condom use among married couples in Malawi is very low 1 . There is a need for more discussions with couples to raise awareness of the importance of protected sex, especially among discordant couples. In this study, some participants said they would demand condom use if they were convinced that their spouses were HIV-positive or engaging in behavior carrying a high risk of contracting HIV. Very few couples in this study had been tested for HIV, despite the fact that all the participants mentioned HIV testing and knowing one's status as useful for preventing HIV transmission. These results may be attributed to the fact that the study targeted married couples: testing is more common among urban residents, single sexually active women and men, and men and women who are no longer married 1 . Some married couples may not see the need for HIV testing because they trust their partners. The lack of testing services in rural areas may also have influenced the results of this study.
The results of this study show that, despite a high level of awareness among couples in some districts of Malawi, and despite possible reporting bias leading to overestimation of one's own and one's spouse's HIV risk 3 , actual practice of HIV prevention is very limited. There is still some level of infidelity affecting both patrilineal and matrilineal systems of marriage, low condom use and low uptake of HIV testing. There is therefore a need to target couples as a unit of HIV prevention in Malawi, in order to break the socio-cultural barriers that prevent the adoption of HIV prevention methods.
--- Conclusion
Most couples are aware of HIV prevention methods. There is communication among the couples regarding HIV prevention, initiated by both husbands and wives. Despite this knowledge, most couples have not adopted the HIV prevention methods: actual practice shows prevalence of infidelity, low condom usage and low HIV testing among the couples. There is a need to reach out to couples in rural areas with couple-based HIV prevention messages aimed at removing the socio-cultural barriers that prevent couples from adopting HIV prevention methods.
Background: Prior to the COVID-19 pandemic, parents of infants in the Neonatal Intensive Care Unit (NICU) frequently reported high levels of stress, uncertainty, and decreased parenting confidence. Early research has demonstrated that parents have had less access to their infants in the hospital due to restrictions on parental presence secondary to the pandemic. It is unknown how parents have perceived their experiences in the NICU since the beginning of the COVID-19 pandemic. The purpose of this study was to describe the lived experience of parents who had an infant in the NICU in the context of the COVID-19 pandemic, to inform healthcare providers and policy makers in the future development of policies and care planning. Methods: The study design was a qualitative description of the impact of the COVID-19 pandemic on parents' experiences of having an infant in the NICU. Free-text responses to open-ended questions were collected as part of a multi-method study of parents' experiences of the NICU during the first six months of the pandemic. Participants from the United States were recruited using social media platforms between May and July of 2020. Data were analyzed using a reflexive thematic approach. Results: Free-text responses came from 169 parents in 38 different states in the United States. Three broad themes emerged from the analysis: (1) parents' NICU experiences during the COVID-19 pandemic were emotionally isolating and overwhelming, (2) policy changes restricting parental presence created disruptions to the family unit and limited family-centered care, and (3) interactions with NICU providers intensified or alleviated emotional distress felt by parents. A unifying theme of emotional distress attributed to COVID-19 circumstances ran through all three themes. Conclusions: Parents of infants in the NICU during the first six months of the COVID-19 pandemic experienced emotional struggles, feelings of isolation, lack of family-centered care, and deep disappointment with system-level decisions. Moving forward, parents need to be considered essential partners in the development of policies concerning care of and access to their infants.
Background
The COVID-19 pandemic created unprecedented conditions for administrators and clinicians working in Neonatal Intensive Care Units (NICUs) and greatly affected parents of infants requiring hospitalization. Prior to the COVID-19 pandemic, parents of infants admitted to a NICU reported high levels of stress, anxiety, uncertainty, and decreased parenting confidence when compared to parents of healthy full-term infants [1][2][3][4][5][6]. Approximately 28-40% of mothers of infants admitted to a NICU were diagnosed with a new mental illness, such as depression or perinatal post-traumatic stress disorder [7]. Fathers of infants requiring NICU hospitalization also reported significant stress and a need for reassurance and support [8,9]. Adverse parental mental health associated with NICU admissions affects parent-infant bonding, parental physical health, and infant cognitive development outcomes [10][11][12][13][14]. Studies have shown that when an infant requires NICU hospitalization, the normative transition to parenthood can be altered, resulting in worsened parental mental health and confidence [15,16]. For this reason, many hospitals have implemented family-centered care practices to help mitigate the disruption of the transition to parenthood [17][18][19] and to provide unrestricted access to the hospitalized infant in order to optimize neurodevelopmental outcomes and parental mental health [20][21][22]. Like many aspects of pre-pandemic life, parenting and family life were exceptionally susceptible to unanticipated changes during the COVID-19 pandemic, potentially resulting in elevated levels of stress and uncertainty [23,24]. Consequently, when families experienced an infant's admission to the NICU, this stress was likely further exacerbated. While there is sufficient evidence demonstrating the negative parental outcomes secondary to having an infant hospitalized in the NICU prior to the COVID-19 pandemic, there are little qualitative data on how parents have experienced infant hospitalization during the pandemic. Recent reports document a 32% decrease in parental presence and a 30% decrease in participation in rounds in the United States [25]. Globally, 52% of parents from 56 different countries reported restricted access to their infants while hospitalized in the NICU, with more restrictive policies being associated with reported worry by parents [26]. Moreover, these restrictions on parental presence are associated with decreased bonding and negatively impact breastfeeding [27]. Several commentaries have called attention to visitor practices and the downstream effects of new COVID-19 policies, such as the risk of moral distress, fear for safety, and injury to providers [28,29], as well as increased risk to infant and family well-being [30]. Yet, to date, there are few studies describing parents' experience of COVID-19-related policies and of navigating infant hospitalization during the pandemic. Accordingly, we sought to understand the NICU experience from parents' perspectives in order to provide a more comprehensive and detailed understanding of the experiences and needs of families. Our aim was to describe the lived experience of parents who had an infant hospitalized in a Neonatal Intensive Care Unit in the context of the COVID-19 pandemic during the first wave of infection in the United States.
This research was conducted to serve all NICU healthcare providers and policy makers, in order to inform future decisions regarding supporting families in the NICU.
--- Methods
We used a qualitative descriptive design to analyze open-ended, free-text data collected as part of a larger multi-method study to describe parents' experiences of NICU hospitalization during the COVID-19 pandemic [31]. Subjects were recruited between May and July 2020 using social media platforms such as NICU parent support groups, Facebook, Twitter, and Instagram. Parents were eligible for participation if they had an infant requiring NICU hospitalization between February 1 and July 31, 2020. These dates were chosen as eligibility criteria to ensure the data captured would reflect the parenting experience during the first six months of the COVID-19 pandemic [32]. An anonymous online survey was developed using the Research Electronic Data Capture (REDCap) research database [33]. The survey included questions about parent demographics, infant health, and hospital-related characteristics; family, social, and NICU environments; several validated measures related to parent experience; and five open-ended questions. The study was deemed exempt by the Institutional Review Board at the University of Michigan and is reported in accordance with the Journal Article Reporting Standards for Qualitative Research [34]. Participants did not receive compensation for participation. Participants were asked to respond to five open-ended questions concerning the impact of the COVID-19 pandemic on the experience of having a baby in the NICU, the birth experience, the transition home, interactions with healthcare providers, and the experience of parental presence (see Supplementary Materials). The free-text responses were exported from REDCap to a document that was imported into NVivo 11 [35]. An organic, descriptive, thematic analysis was employed to identify shared patterns of meaning-making in parents' experiences in the NICU during COVID-19 [36]. Our approach was constructivist in that we centered parents' experiences while acknowledging the investigators' role in interpreting and synthesizing those experiences in the present thematic description [37]. A sociologist with qualitative expertise (JM) led the coding and analysis, with regular consultation and discussion with the study team, which was composed of nurse scientists with expertise in parenting, stress, and the NICU. Analysis followed the steps of reflexive thematic analysis [36], beginning with immersion in the data through rereading. During a first round of open coding, topic and category codes were developed inductively, though informed by the investigators' prior NICU research (for example, we anticipated that the important topics would likely include emotional experience, staff interactions, and uncertainty). The team then generated tentative themes by reorganizing, consolidating, prioritizing, and mapping codes and categories, followed by a second round of coding focused on these thematic areas (for the codebook, see Supplementary Material). Next, we checked the initial themes against the overall dataset, considering alternative explanations and outliers. Finally, we named, defined, and described the themes through discussion and analytic memo writing [38].
--- Results
Of a total of 178 online survey respondents, 169 answered at least one of the five open-ended questions (94.9%). Respondents lived in 38 states in the United States, and 97% identified as the mother (n = 164).
Parental, infant, and hospital characteristics are provided in Table 1. Answers to the questions ranged in length from a phrase or a sentence to a full page. Some parents reported facts impassively, while others elaborated on how they felt about them; some volunteered information on topics not specifically solicited, such as work and finances. We began by examining the dataset for continuities and discontinuities between parents' NICU experiences during the COVID-19 pandemic and those of non-pandemic times. Through this lens, we developed three broad themes: (1) parents' NICU experiences during the COVID-19 pandemic were emotionally isolating and overwhelming, (2) policy changes restricting parental presence created disruptions to the family unit and limited family-centered care, and (3) interactions with NICU providers intensified or alleviated emotional distress felt by parents. Exemplar quotes representing these three themes are displayed in Table 2. A unifying theme running through all three was the experience of emotional distress attributed to COVID-19-related circumstances (see Fig. 1).
--- Theme 1: parents' NICU experiences during the COVID-19 pandemic were emotionally isolating and overwhelming
Over half of the parents wrote about the emotional and mental impacts of having an infant in the NICU during the COVID-19 pandemic. This theme was exemplified by three subthemes: (1) isolation and disconnection, (2) distress and trauma, and (3) intense emotional expressions.
Isolation and disconnection. One of the most prevalent emotional and mental experiences described was that of isolation and disconnection: "I was alone. I had absolutely no family beside my new baby. It was one of the worst experiences I've ever had." (Mother of 2 from Michigan, race/ethnicity unknown). Isolation and disconnection were attributed to various impacts of COVID-19 such as visitor restrictions; requirements to wear masks and gloves; the inability to kiss, hug, and touch the infant; frequent staff turnover; and reduced or remote-only interactions with staff. Although related in cause and circumstance, isolation, loneliness, and separation were described in emotionally painful terms, whereas feelings of disconnection were more often described as alienating, strange, and cold (see Table 2).
Distress and trauma. Another primary emotional experience was the sheer stress, difficulty, and overwhelming nature of the situation. Parents used expressions such as "extremely difficult," "awful," "impossible," "traumatizing," and "brutal." One parent wrote, "this experience has been the worst of my life" (White mother of 2 from Tennessee); another said, "it was a horrible experience, and I would never wish it on anyone" (Asian mother of 1 from Texas); and a third wrote, "this is another level" (Mixed-race mother of 2 from Georgia). Some parents reflected consciously on how much of their experience was due to the added stressors of the pandemic. While having an infant in the NICU is already stressful, many parents felt that the COVID-19 experience compounded the NICU stress to a high degree: "COVID made difficult situations even more difficult as we had restrictions accessing NICU. These restrictions made my relationship with my wife (baby mother) also difficult." (White father of 3 from Maryland).
Intense emotional expressions.
There were many emotions reported by parents, ranging from grief, sadness, and anxiety to uncertainty, heartbreak, and fear. Parents' descriptions of fear ranged from worry and anxiety to panic. Almost half the mentions of fear related to COVID-19 in the NICU. Other fears concerned losing visitor access, the infant's well-being, worries about breastfeeding, and general anxiety arising from the stressors described above. Uncertainty due to changing pandemic-related policies also contributed to fear: "We felt very much out of control and in constant fear of not knowing." (Black mother of 1 from Louisiana). Grief over losses such as infant death and debility is not unusual in the NICU, but more unique to the pandemic were expressions of sadness and heartbreak due to family separations, limited visits, or lost experiences with the infant resulting from COVID-19 policies. "Visitations were always bittersweet because one parent or the other wasn't able to be there." (Mixed-race mother of 1 from Utah). Although the present analysis focuses on parents' experiences while their infant was in the NICU, it is important to consider the holistic experience of isolation parents described during and after their time in the NICU. COVID-19 restrictions often required mothers to be in the hospital with few or no visitors before, during, and after the birth, sometimes for weeks at a time. Numerous mothers emphasized that their birth experiences during COVID-19 were "scary" and "lonely." Several mothers discussed the physical limitations of not being able to drive or move easily, which compounded their struggles with pumping, breastfeeding, or getting to the NICU. After the infant's discharge from the NICU, parents described having limited in-person contact with family and friends due to concerns about spreading COVID-19 infection. This broader context of isolation likely colored parents' experiences and memories of their time in the NICU. The following quote poignantly expresses the strange alienation of one mother's experience of giving birth without the anticipated connection and community. "I feel like I missed out on a lot of the joys of pregnancy and birth because of the pandemic. Somehow don't have any pictures of me pregnant, had no baby shower, no friends come meet the baby, no hospital visitors. Didn't get to share the joys of it with family and friends. It's almost like it all didn't happen except for the fact there is now a baby hanging around." (White mother of 1 from New Jersey).
--- Theme 2: disruption of the family and family-centered care
The second major theme was that NICU policies regarding parental presence disrupted the family unit instead of prioritizing infant- and family-centered developmental care. Overall, parents expressed the strongest emotions about the direct and indirect impacts of policies restricting parental presence during the COVID-19 pandemic. Some parents reported confusion because of frequently changing policies. Among the more restrictive policies were those that barred all parental presence in the NICU, allowed only one designated parent/caregiver for the entire NICU stay, or required weekly switch-offs between parents/caregivers.
The most common visitation policy, reported by 77 parents (46%), was that parental presence was limited to one parent/caregiver at a time.
Table 2 (continued)
Subtheme: Professionalism and consistency
"It was interesting to see the changing protocols, almost daily, as the hospital navigated the safest screening parameters for both the maternity levels of the hospital and the NICU. I feel like the care we received was not impacted, everyone was incredibly professional." (White mother of twins from California)
"Some nurses wouldn't wear a mask and face shield together (which was required) every time they came in the room which bothered me. Many touched their masks and touched my infant. Many pulled down their masks when they needed to catch their breath instead of walking out of the room, but I didn't feel like I could speak up about it." (White mother of 2 from Colorado)
"Lack of information, changing protocols, general paranoia amongst nurses did not support confidence in the team to the point we did not dare to leave NICU fearing not being allowed to enter again." (Hispanic mother of 1 from Ohio)
"It was unnerving to be in the same hallway as covid babies. I did not like when I would see my nurse have to gown for one patient and then come into our room. It was also difficult to hear staff talk about going out on weekends while we were quarantined as much as possible to keep our baby safe." (White mother of 1 from Florida)
Fig. 1 Themes and Subthemes
Within this family theme, there are two subthemes. First, restrictive policies undermined parents' role as essential members of the infant's caregiving team. Second, parents felt the resultant conflicts and losses as egregious, and much of their grief, isolation, overwhelm, confusion, and anger centered on these policies, which they perceived as limiting family-centered care.
Parents' essential caregiver role. Many parents implied, and seven explicitly stated, that visitation restrictions denied them the ability to serve as essential caregivers for their infant. "I believe we should be seen as members of the care team and not visitors." (Mixed-race mother of 2 from Georgia). While some parents accepted the policies as a regrettable necessity, others believed they were contradictory and even "nonsensical" (Mother of 1 from Michigan, unknown race/ethnicity). A few expressed their objections in the strongest terms, as a violation of their parental rights. "My husband didn't see his child for over a month. Which I feel is incredibly wrong. How can someone deny a parent access to their own child?" (White mother of 1 from North Carolina). "My husband and I felt that our rights as parents were violated. My husband and I should have both been able to see our daughter. Instead, I was alone in the NICU. It caused us a great deal of trauma and pain. Not to mention that our daughter lost out on essential bonding time with her father." (Mixed-race mother of 3 from Arizona). The new policies regarding parental presence affected the direct care parents were able to provide. Parents reported constraints on breastfeeding and skin-to-skin care, which, they noted, healthcare providers cannot provide in their place. "As the mother I had to continuously pump around the clock as well as do her cares (diapering, feeding) and then trying to find time to hold her to do skin-to-skin but that wasn't possible because I didn't have an extra set of hands that could help while I pumped.
The nurses would take care of 3 babies at a time so they weren't able to help out as much either, or they would prioritize other babies." (Asian mother of 1 from Texas). Restricted access also interfered with parents' ability to advocate for their infant and with both parents' ability to communicate with specialists. When policies restricted parental presence to one designated caregiver only, parents prioritized birth mothers remaining in the NICU, especially if they were breastfeeding or if the other parent needed to work, given concerns about employment security during the pandemic. This situation not only excluded partners from caregiving, advocacy, and learning opportunities; it also placed extra burdens on mothers. "The hospital policy changed so only one parent was allowed in the NICU at a time. This resulted in me receiving difficult news with no spouse or support person; making decisions about baby's care without baby's father present; myself physically navigating the NICU and interacting with baby without physical support even while I was recovering from delivery and had a broken rib." (White mother of 3 from Texas). Sometimes physically and emotionally fragile after birth, mothers provided care and advocacy alone, without in-person practical or emotional support from family. Mothers would sometimes not receive any wellness breaks and were burdened with the added strain of needing to relay critical information to the absent partner.
Egregious loss. In addition to interfering with parents' functional role as essential caregivers, parents described restrictive parental presence policies as resulting in experiences of egregious loss. Descriptions of loss are summarized in three main ways: (1) loss of bonding, (2) loss of experiences, and (3) loss of time (see Table 2). Due to masking requirements, opportunities to hold, touch, or kiss infants were reduced. Parents worried that the lack of visible smiles and other facial expressions would impact infant development. Masks and visitation restrictions also interfered with photos for celebrating the family unit and introducing infants to family members. Parents also felt the loss of opportunities for bonding between the infant and the second parent or a sibling. This loss was attributed not only to the reduced time each family member could spend with the infant, but also to the loss of experiences that parents and siblings could have shared together as a family unit. For example, parents grieved not being able to share an infant's first holding or bathing. Three parents reported that the separations, differential burdens, and loss of shared experiences strained their marital relationship. The most restrictive policies regarding parental presence forced parents to make painful and difficult choices about care and being with their infant. In describing lengths of time separated from their infants, parents often used modifiers like only, just, still, over, never until, and whole to express egregiousness, or used extremely precise counts of days or hours. Parents used such expressions for both long and relatively brief periods, suggesting that any length of separation could be experienced painfully.
--- Theme 3: interactions with NICU providers intensified or alleviated emotional distress
The last theme identified was that NICU staff could either exacerbate or mitigate parents' emotional strain.
The significant role of healthcare providers in this theme is exemplified in three main subthemes: (1) support and validation, (2) alienation and inclusion, and (3) professionalism and consistency.
Support and validation. Parents voiced a deep need and desire for sympathy, acknowledgement, and support from NICU providers to validate the reality of their difficult experience. There was a striking contrast between the reports of parents who felt staff acknowledged the extreme difficulty of the NICU in the COVID-19 context and those who did not perceive that acknowledgement. Parents who received sympathetic recognition found it validating and supportive, while those who did not found this discordance with staff added to their burden. "The nurses were AMAZING. I felt empowered, comforted, respected, and cared for. There was a general understanding that NICU is hard enough but then adding COVID made it even worse. That felt validating." (White mother of 1 from North Carolina). "Really great team and very caring. They all empathized with us knowing this is unprecedented times." (White mother of 1 from Texas). "They didn't seem to acknowledge that it's a very difficult time to have a baby in NICU never mind during the pandemic. Some nurses were mean and nowhere near as supportive as they should have been. A couple of nurses were AMAZING. Some doctors were also harsh and only seemed to see us as another number and not humans needing individual care." (Mixed-race mother of 3 from California).
Alienation and inclusion. Parents described feeling alienated or isolated from the caregivers responsible for the care of their infants. This was attributed to COVID-19 changes including the exclusion of parents from rounds, remote consults, perceptions of staff shortages, high staff turnover, reduced parent services (e.g., lactation support, mental health support), social distancing, and masking. "It was hard to recognize faces of hospital staff (doctors, nurses, techs, ect) with masks which made things feel more distant." (White mother of 1 from North Carolina). Some parents were concerned that turnover compromised infant care by reducing staff familiarity with the infant as well as increasing the number of contacts and potential COVID-19 exposures. Others reported that, due to restricted access to staff, their ability to advocate for their infant was significantly hindered. Several parents wanted to feel more included, either in care or in giving feedback to the hospital system. Parents wanted an opportunity to express their feelings concerning policies restricting parental presence; inconsistencies between recommendations, policies, and practices; and their desire to participate more in their infants' care. "We really wish the hospital involved parents in such decision making, it would have avoided a lot of unnecessary trauma." (Asian mother of 1 from Michigan).
Professionalism and consistency. Parents reported intense discomfort with inconsistent adherence to COVID-19 precautions on the part of some NICU staff, as well as with a lack of professionalism in staff conversations about infant care or changing policies. "[Nurses were] talking loudly about each baby's care and their parents, discussing what they did and didn't agree with about Covid changes. Sometimes touching hair, face, ect with gloves on and not changing them." (Black mother of 1 from New Mexico).
--- Discussion
The purpose of this study was to critically examine the experience of parents of infants admitted to a NICU during the first six months of the COVID-19 pandemic. We acknowledge the unprecedented circumstances of the COVID-19 pandemic and the rapid responses required to maintain safety for the most vulnerable patients in times of extreme uncertainty. We also acknowledge the lack of evidence-based solutions for policies regarding parental presence in the NICU during the early months of the pandemic. Even so, the descriptions provided by parents of their infant's hospitalization during this time provide valuable insight into this challenging situation and offer strategies to improve care. Our results add to the growing body of knowledge surrounding the detrimental impact of restrictions on parental presence in the NICU [39,40]. Overwhelmingly, parents described their experience of neonatal hospitalization through painful expressions of separation, disconnection, and isolation. These descriptions are similar to other accounts of experiences under pandemic restrictions in other populations [41,42]. Importantly, the qualitative descriptions provided by parents in our study demonstrate that vulnerable populations, such as parents of infants requiring hospitalization, call for unique consideration when providing healthcare in the face of uncertainty and rapid change. Previous research has established that parents who experience feelings of isolation, stress, and uncertainty are more likely to suffer from poor mental health after their time in the hospital [43,44]. These outcomes have long-term consequences for the health of parents as well as that of infants [10]. These findings are striking given that providing infant- and family-centered developmental care and supporting healthy transitions into parenthood are fundamental to the care provided in the NICU [45]. Consistency in care is especially important during periods of transition. Maternal-infant healthcare providers are ideally positioned to recognize the unique and emerging needs of the families they serve. When families experience frequent turnover of care providers while in the NICU, there is a loss of trust and increased feelings of abandonment [18]. Parent health must be prioritized when allocating clinical resources in the NICU during times of rapid change. During times of great uncertainty, purposeful assessment of parents' needs is required so that resources, such as psychology and nursing support, can be tailored accordingly. Shared decision-making should be emphasized, and parents should be represented in the development of policies and procedures whenever possible. Additionally, parents themselves offer a strategy for mitigating feelings of loss of control and uncertainty: valuing them as "essential care" [46]. If parental presence in the NICU must be limited secondary to legitimate concerns for safety, more resources such as psychology services, strong family advisory boards, and personalized care planning must also be prioritized. Our findings indicate that parents of hospitalized infants during the COVID-19 pandemic reported overwhelming emotional strain, splitting of the family unit, and a deep need for more support from NICU staff.
As the effects of the COVID-19 pandemic continue to reverberate throughout our society, it is important to consider the unintended injury that families of hospitalized infants experienced during this time. Healthcare providers must recognize the unique needs of these families during exceptional circumstances and adjust the support that is provided. Furthermore, there are lessons to be learned from these parents' reports. First, providing space and creating systems that allow families to tell their stories can help heal the trauma of having a hospitalized neonate. Researchers and healthcare providers are charged with supporting parents through their processing as well as facilitating parenting skills and adaptation [47]. Second, providing consistency in care, through both staffing and messaging, is a low-effort and high-impact method of supporting families. The remarkable fact, found both in our study and in previous research, is that parents of hospitalized infants do not need healthcare providers to have all the answers; rather, they need to be heard and supported [48,49]. Parental perceptions of illness are crucial to the health of the family [5]. Having a consistent model of care to support families as they make sense of their situation and plan for the future is imperative. This is even more important during times of great stress, like those experienced during the COVID-19 pandemic. Finally, incorporating families in the creation of policies should continue to be the standard of care. Even, and especially, in times of rapid change and great uncertainty, we need mechanisms for real-time feedback and input from families, and opportunities for shared decision-making models of care. Figure 2 illustrates these potential interventions.
--- Limitations
Our findings are limited by the homogeneity of the sample. The majority of the respondent parents were white mothers. This may reflect the parents who frequent and use NICU parent support groups on the internet. A quarter of respondents did not report race or ethnicity. We explored potential demographic relationships to the themes but observed no strong trends. It will be important to engage parents from minority communities when designing future research and policies to ensure their experiences are understood. Another limitation is the anonymous online survey format, which prevented follow-up questions. While our study was available to a geographically diverse sample, we may have excluded parents who express themselves more comfortably orally than in writing. These methods also prevented follow-up contact with participants. Finally, the limited number of questions asked of the respondents creates the possibility that other important themes may not have been developed from the answers we received.
--- Conclusion
This qualitative study included parents of infants requiring specialized care in the NICU during the COVID-19 pandemic. The descriptions of parents' experiences document the emotional struggle of being separated from support systems, feelings of isolation, lack of family-centered care, and exacerbation of the emotional distress already known to be common to the NICU journey. Parents' experiences included intense and frequent disappointment with system-level decisions that restricted their right to be present with their infant, and a desire for more empathy, validation, and inclusion in decision making.
It is important to remember that the restrictions placed on parental presence and access to infants during the beginning of the COVID-19 pandemic were not made maliciously by hospital administrators. Rather, policy makers and care providers were forced to make important decisions with little information and a great deal of responsibility for safety. Now that we know more, we must do better as we move forward. We have begun to understand the lived experiences of parents with infants in the NICU, and we have an opportunity to shape future policy decision-making processes, especially in times of crisis. Parents and families need to be considered fundamental to this process. The COVID-19 pandemic uprooted the lives of nearly everyone. For parents of infants requiring hospitalization in the NICU during this time, the pandemic exacerbated an already challenging experience. Moving forward, healthcare providers and researchers can use our results to focus assessments, offer supportive services and emotional support, and remain steadfast in valuing the essential role of parents when families encounter an infant hospitalization.
--- Funding
Not applicable.
--- Availability of data and materials
The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.
--- Abbreviations
NICU: Neonatal Intensive Care Unit.
--- Supplementary Information
The online version contains supplementary material available at https://doi.org/10.1186/s12887-021-03028-w.
--- Additional file 1: Supplemental
--- Declarations
Ethics approval and consent to participate
Given the anonymous nature of the study, the Institutional Review Board (ethics board) at the University of Michigan deemed the study exempt. Neither written nor verbal consent was required for participation in the study; however, an opportunity to decline participation was offered prior to starting the survey questions.
--- Consent for publication
Not applicable.
--- Competing interests
The authors declare that they have no competing interests.
--- Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
We aimed to investigate the impact of socio-economic inequalities in cancer survival in England on the Number of Life-Years Lost (NLYL) due to cancer. METHODS: We analysed 1.2 million patients diagnosed with one of the 23 most common cancers (92.3% of all incident cancers in England) between 2010 and 2014. Socio-economic deprivation of patients was based on the income domain of the English Index of Deprivation. We estimated the NLYL due to cancer within 3 years since diagnosis for each cancer, stratified by sex, age and deprivation, using a non-parametric approach. The relative survival framework enabled us to disentangle death from cancer and death from other causes without information on the cause of death. RESULTS: The largest socio-economic inequalities were seen mostly in adults <45 years with poor-prognosis cancers. In this age group, the most deprived patients with lung, pancreatic and oesophageal cancer lost up to 6 more months within 3 years since diagnosis than the least deprived. For most moderate/good-prognosis cancers, the socio-economic inequalities widened with age. CONCLUSIONS: More deprived patients, particularly the young with more lethal cancers, systematically lose more life-years than the less deprived. To reduce these inequalities, cancer policies should systematically encompass the inequities component.
BACKGROUND Patients living in more socioeconomically deprived areas (referred to hereafter as 'more deprived' patients) tend to have worse cancer outcomes than those living in less deprived areas ('less deprived' patients), in the UK and other countries [1][2][3][4]. In England, in order to improve cancer survival and reduce the inequalities, the first-ever NHS Cancer Plan was implemented in 2000, followed by several successive policy initiatives, mainly focusing on promoting early diagnosis, optimising treatment pathways and maximising available resources to bring better treatment options, care and infrastructure [5][6][7][8][9]. However, the indisputable overall increase in cancer survival over the last 25 years has been accompanied by minimal or no improvement in socio-economic inequalities, reflected in the persistently poorer cancer prognosis of the more deprived patients [10]. Similar patterns have been repeatedly reported regarding cancer screening uptake [11,12] and vaccine coverage [13][14][15][16][17]. Such inequalities pose a challenge for the National Health Service (NHS), which is committed to equity of access in healthcare, i.e. equal access for equal need for the whole population. Research has shown that cancer awareness, clinical (comorbidities) and tumour-related (tumour stage) factors can only explain part of the inequalities in England [18][19][20] and that more emphasis should be given to the observed variation in cancer screening uptake [21][22][23] and management of patients [24][25][26]. However, communication of these epidemiological findings to political forces and stakeholders has been suboptimal, evidenced by the lack of initiative to target inequalities in a more methodical fashion. Socio-economic inequalities in England have been described previously through survival or mortality probabilities [4,10,27]. Although these measures are necessary for evaluating the patients' prognosis, they do not fully reflect the burden on society, unlike alternative measures such as the crude probability of death from cancer (CPr) [28] or the Number of Life-Years Lost (NLYL) due to cancer [29,30]. The NLYL measures how many years patients diagnosed with cancer can lose due to their cancer. The measure, easy to communicate to a large audience [29], can also be translated into societal or economic cost. This study aims to quantify the population burden of socio-economic inequalities (measured with the income deprivation domain of the Index of Multiple Deprivation for a given area) in cancer survival using the CPr and NLYL due to cancer, to identify specific components for improvement, and to consider how this can be integrated with public health policy and resource allocation. --- METHODS --- England National Cancer Registry data Cancer sites were defined according to the 10th revision of the International Classification of Diseases (ICD-10) [31], while the second edition of the International Classification of Diseases for Oncology (ICD-O-2) was used for morphology and behaviour [32]. We included 23 of the most common cancers in males and females. Socio-economic deprivation of patients was based on the income domain of the English Index of Multiple Deprivation (IMD 2004) [33], an ecological measure of relative deprivation. The income domain score measures the proportion of the population with low income living in a given Lower layer Super Output Area (LSOA) [34].
LSOAs are census-based administrative spatial areas developed by the Office for National Statistics (ONS) and designed for reporting small-area statistics in England and Wales. Cancer patients were assigned to their LSOA of residence at diagnosis (32,482 LSOAs in England, mean population 1500). They were allocated to a deprivation category (from 1, 'least deprived', to 5, 'most deprived') based on the quintiles of the national distribution of all LSOA-level income domain scores of the IMD 2004. Among the seven domains of the IMD, we used the income domain first because of its overall high degree of agreement with the overall composite IMD measure [35]. Also, using the overall IMD can lead to misinterpretation because it contains components about access to public services, and therefore access to optimal care, which is strongly linked to inequalities in cancer survival. --- Cancer survival measures The estimation of cancer survival measures requires competing risks methods to account for the fact that cancer patients may die from causes other than the cancer under study [29,36,37]. However, as the cause of death is often unavailable or unreliable in population-based data, survival measures are estimated using methods from the relative survival framework. Assuming that the overall mortality hazard can be expressed as the sum of the cancer-related hazard ('excess hazard') and the hazard of death from other causes ('expected hazard'), the basic principle in the relative survival framework is that the expected hazard is derived from the mortality hazard in the general population where patients come from, i.e. lifetables. The England lifetables are here defined by sex, age (0-99 by 1-year age groups), deprivation (1-5 using IMD) and calendar period 2010-2015 (by calendar year for 2010 and 2011, and assuming a plateau afterwards), and were extracted from a dedicated website [38]. The NLYL can be estimated directly from the CPr, which is the probability of dying from cancer before or at time t in the presence of competing causes of death [39]. By integrating the CPr function from 0 to time t we can derive the NLYL, which can be interpreted as the mean time patients would lose due to cancer death within a specific time period [0, t] [40,41]. Although we provide a brief explanation in the Appendix, methods to estimate the CPr from a given cause in the relative survival framework have been fully described elsewhere [39][40][41][42][43]. NLYL is estimated in a pre-specified follow-up time window to account for the inability to estimate the entire survival function due to right-censoring. We estimated the CPr and the NLYL due to cancer within 1 and 3 years after cancer diagnosis according to deprivation, age and sex. We present here the comparison of Life-Years Lost (LYL) within 3 years since diagnosis between the least and the most deprived patients. More detailed results (in particular for 1 year since diagnosis and all deprivation levels) are presented in the Supplementary file and the web-tool ('CPr of death and NLYL due to cancer by deprivation'). Calculations were performed with R software version 4.0.4 and the package 'relsurv' version 2.2-3 [40]. To estimate 95% confidence intervals for the NLYL, we used the R-package 'boot' [44] version 1.3-28, for non-parametric bootstrap (1000 bootstrap replicates).
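In symbols, the quantities just described can be written as follows (a sketch of the standard relative-survival relations, not the authors' exact notation): the overall mortality hazard decomposes as

\lambda_{obs}(t) = \lambda_{exc}(t) + \lambda_{pop}(t),

the crude probability of death from cancer accumulates the excess hazard weighted by overall survival,

\mathrm{CPr}(t) = \int_0^t S_{obs}(u)\,\lambda_{exc}(u)\,du,

and the life-years lost within a window [0, t*] follow by a second integration,

\mathrm{NLYL}(t^{*}) = \int_0^{t^{*}} \mathrm{CPr}(u)\,du,

where \lambda_{obs} is the patients' overall hazard, \lambda_{exc} the excess (cancer-related) hazard, \lambda_{pop} the expected hazard taken from the lifetables, and S_{obs} the overall survival function. The final integration step is simple to carry out numerically. The R sketch below assumes a CPr curve has already been estimated on a time grid (in the study this comes from 'relsurv' and the national lifetables); the curve and all object names here are hypothetical, for illustration only.

trapz <- function(x, y) sum(diff(x) * (head(y, -1) + tail(y, -1)) / 2)  # trapezoidal rule

times <- seq(0, 3, by = 0.05)             # years since diagnosis
cpr   <- 0.80 * (1 - exp(-1.2 * times))   # toy crude probability of death from cancer

nlyl_3y <- trapz(times, cpr)              # NLYL within 3 years, in years

# For the 95% CIs, the study re-estimated the curve on 1000 non-parametric
# bootstrap resamples of patients (R package 'boot') and took the interval
# of the resulting NLYL values.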
To describe the (cancer j)-specific burden among all cancers combined in each group of patients defined by the combination of sex, age group and deprivation, we also present the proportion of NLYL due to each cancer over the total NLYL due to all cancers under study (k = 1,…, 23) for this group of patients. This quantity is weighted by the cancer-specific proportion of patients with each cancer over the total number of cancer patients in that group. So, within a combination of sex/age/deprivation, this proportion can be expressed mathematically as follows:

P_j = \frac{\mathrm{NLYL}_j}{\sum_k \mathrm{NLYL}_k} \cdot \frac{n_j}{\sum_k n_k}

where j = 1,…, 23 defines the cancer and n_j the number of cases observed for that cancer in the specific subgroup studied. --- RESULTS During 2010-2014, more than 1.2 million patients were diagnosed with one of the 23 cancer sites in England, representing 92.3% of all incident cancers in England. Based on the area of residence at diagnosis, 20-21% of the patients were in each of the deprivation levels 1 (least deprived) to 4, contrasting with 17% in the most deprived group (level 5). Among the most frequent cancers, colon, prostate and breast (female) cancers were more common in the less deprived, whilst lung cancer largely predominated in the more deprived patients (Table 1). Cervical, stomach, liver and oesophageal cancers were more frequent in the more deprived than the less deprived patients. In contrast, pancreatic cancer was equally common in all deprivation groups. --- Number of Life-Years Lost due to the cancer The estimates of CPr and NLYL within 3 years naturally divide the cancer sites into 'good' (CPr: 0-0.25), 'moderate' (CPr: 0.25-0.75) and 'poor' (CPr: 0.75-1) prognosis groups (Fig. 1; Supplementary Fig. 1). The cancer sites with the highest probability of death due to cancer within 3 years since diagnosis were brain, lung and all the upper-digestive organ cancers (pancreatic, liver, oesophagus and stomach) (Supplementary Fig. 1). For these cancers, the CPr within 3 years was between 0.75 and 1 and the NLYL within 3 years was between 1.75 and 2.3 years (Fig. 1). Cancer sites with relatively low CPr within 3 years (<0.25) and NLYL of less than 0.5 years within 3 years were Hodgkin lymphoma, thyroid, skin melanoma, female breast cancer and cancers of the reproductive organs, such as prostate and testicular cancer in males, and cervical and uterine cancers in females. The remaining cancers presented an intermediate CPr within 3 years (0.25-0.50), with 0.5-1.2 LYL within 3 years, and included the cancers of colon, rectum, kidney, bladder, larynx (men), ovary and leukaemia, myeloma and Non-Hodgkin lymphoma (NHL) (Fig. 1). --- Number of Life-Years Lost in different deprivation groups The NLYL within 3 years was consistently higher in the older age groups in both sexes (Figs. 2, 3), reflecting an overall worsening cancer prognosis with increasing age. Also, the most deprived patients had more LYL due to cancer than the least deprived for most of the cancer sites considered. However, the magnitude of the inequalities in the NLYL varied by sex and age group. For the group of poor-prognosis cancers, the largest socio-economic inequalities were seen mostly in younger adults less than 45 years old. In particular, the most deprived male patients with pancreatic cancer lost 1.81 years within 3 years (95% CI: 1.56, 2.07), in contrast to the least deprived who lost 1.38 years (95% CI: 1.05, 1.71).
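The weighted proportion P_j defined in the Methods above can be computed directly; a minimal R sketch with hypothetical numbers for three of the 23 cancers in one sex/age/deprivation group:

prop_lyl <- function(nlyl, n) (nlyl / sum(nlyl)) * (n / sum(n))

nlyl <- c(lung = 2.10, colorectal = 0.90, breast = 0.40)   # NLYL within 3 years (years)
n    <- c(lung = 5000, colorectal = 4000, breast = 9000)   # case counts in the subgroup

round(prop_lyl(nlyl, n), 3)   # cancer-specific share of the total burden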
Similarly, the most deprived female patients aged less than 45 years with lung cancer lost 1.49 years (95% CI: 1.34, 1.63), 0.54 years more than the least deprived (0.95; 95% CI: 0.77, 1.16) (Fig. 2; Supplementary Table 1). In contrast, an almost nonexistent deprivation 'gap' was seen for brain cancer in patients (particularly males) more than 65 years old, with the NLYL within 3 years reaching nearly 2.5 years. For the majority of the moderate- and good-prognosis cancers (colon, rectum, kidney, leukaemia (female), myeloma (male), Non-Hodgkin lymphoma, testis, female breast, ovary, uterus), the difference in the NLYL between the most and least deprived mostly widened with age. For thyroid cancer, the deprivation difference peaked in the 65-plus age group, with no clear pattern in the other age groups. In contrast, the deprivation gap narrowed with age for bladder cancer in females and laryngeal cancer in males (Fig. 3; Supplementary Table 2; Supplementary Table 3). One of the most striking socio-economic inequalities among all cancer-sex-age combinations for the moderate/good-prognosis cancers was observed for Hodgkin lymphoma, particularly in patients aged 55-64. In this age group, the most deprived patients lost almost 0.4 additional years (within 3 years) compared to the least deprived in both male and female patients (Fig. 3; Supplementary Table 3), while no such wide inequalities were seen in the younger or the older age groups. In females, the largest difference was seen for bladder cancer in young women less than 45 years old, although smaller deprivation differences were observed in most age groups. The NLYL in the most deprived women less than 45 years with bladder cancer was 1.26 years within three years (95% CI: 0.89, 1.65), 0.63 years more than the least deprived (NLYL = 0.63; 95% CI: 0.16, 1.15). In males, in addition to Hodgkin lymphoma, the deprivation difference was also particularly high for laryngeal cancer in adults less than 45 years and for thyroid and testicular cancer in those over 65 (Fig. 3; Supplementary Table 2; Supplementary Table 3). In contrast, the deprivation gap in the NLYL was small for skin melanoma in both male and female patients. Also, small variations between age groups and relatively small deprivation inequalities were seen for prostate cancer and for cervical and thyroid cancer in women. A reversal of the difference was observed for ovarian cancer in patients less than 45 years and Hodgkin lymphoma in female patients more than 65 years old. --- The proportion of Life-Years Lost More life-years were lost due to cancer among the most deprived patients, compared to the least deprived, although the age pattern of these inequalities varies according to cancer prognosis. The observations differed slightly when focussing on the proportion of the total LYL instead of their number. Poor-prognosis cancers still accounted for the largest proportion of the total LYL for all cancers regardless of age and deprivation. However, figures can vary widely by deprivation. For example, in the most deprived, the lung cancer contribution ranges from 13% (young female) to over 40% in age group 65+ (both sexes) (Fig. 4), while lung cancer represents only 21% of all incident cancers included in this deprivation group (Table 1). In the least deprived, the highest lung cancer contribution remains below 30% of LYL (65+ male) (Fig. 4), while 10% of cancers are from lung in this group (Table 1).
Lung cancer remains the largest contributor of NLYL in all age groups, with the exception of female patients aged 15-44 years, for whom the largest contributors of NLYL were breast cancer (least and most deprived) and cervical cancer (most deprived) (Fig. 4). A few cancer sites, such as brain, bowel, leukaemia, ovary and breast, are larger contributors of LYL in the least deprived than in the most deprived groups. --- DISCUSSION Our study, including the additional online infographic, clearly shows that more deprived patients systematically lose more lifetime due to cancer, and that the most deprived patients tend to stand out from the other deprivation categories with generally much higher NLYL. Those living in the most socioeconomically deprived neighbourhoods in England, accounting for around 17% of the incident cancers included in this study, lost about 1.5 times as many life-years as the least deprived (0.98 vs. 0.67 years within 3 years; results not shown). To obtain these results, we used a relative survival approach, which allows the competing risks of death from other causes to be controlled without any information on the cause of death. Overall, the burden of poor-prognosis cancers is the highest, both regarding the NLYL and their proportions. The largest socio-economic inequalities in NLYL were seen mostly in younger adults less than 45 years diagnosed with poor-prognosis cancers, whilst for the moderate/good-prognosis cancers the socio-economic inequalities varied substantially but showed an overall, counterintuitive, widening trend with increasing age. The disproportionate socio-economic inequalities in younger adults were more specifically seen for the cancers related to tobacco smoking, such as pancreatic, lung and oesophageal cancers, which presented the largest gaps in this age group. The prognosis of these cancers is so poor in older patients that survival differences can no longer be observed. In contrast, the narrow socio-economic inequalities for the good-prognosis cancers, particularly among young patients, may be due to the 'ceiling effect', when survival in the less deprived is so high that it cannot improve further [4,10]. Pancreatic cancer illustrates this age-related pattern well. In the age group less than 45 years, the most deprived male patients lost about 5 months more than the least deprived within 3 years, while in the age group 65 plus, this difference is only about 1 month. This is more likely due to very low survival probabilities, rather than reduced inequalities, in the oldest age group. Five-year net survival from pancreatic cancer in England ranges between 36% in patients less than 45 years and 3% in those over 75 [45], which makes it almost impossible to detect any differences in this age group. The lack of early symptoms and advanced stage at diagnosis dramatically affect the probability of receiving surgical resection, which is the only curative treatment for pancreatic cancer [46]. A similar phenomenon, combined with lower use of potentially curative treatment particularly in young deprived patients, could explain the larger deprivation inequalities observed for lung cancer in younger patients. Surgical resection remains the major potentially curative treatment of lung cancer (particularly non-small-cell carcinoma). The receipt of surgical treatment decreases dramatically with age and deprivation, even after accounting for comorbidity [24], which is less of a concern among younger patients because of low comorbidity prevalence [47].
With the exception of the youngest females and the youngest least deprived males, lung cancer is also the largest contributor to LYL (Fig. 4). In the most deprived group, lung cancer represents a fifth of the incident cases (Table 1) and accounts for around 13-42% of all NLYL from all cancers combined, depending on sex and age. This highlights that targeted lung cancer screening is justifiable given the large number of LYL that could be avoided [48,49]. In addition to the aforementioned cancers, the largest socio-economic inequalities in NLYL overall were also seen for bladder cancer in young female patients and for laryngeal cancer in young male patients, both cancers related to tobacco smoking. Bladder [50] and colon [51] cancer cases illustrate the persisting gender inequalities in diagnosis, with early symptoms such as haematuria and pelvic pain less likely to prompt further diagnostic investigation among women [51]. These inequalities are probably exacerbated among more deprived patients, who may not get access to a specialised healthcare facility for urologic surgery, either because of greater travel distance or lack of social support [52]. Regarding laryngeal cancer, the large deprivation gap in LYL seen in young men is unlikely to be explained by late diagnosis (i.e., advanced stage) [53], and more likely by suboptimal care, such as delayed treatment [54], or by the poorer ability of deprived patients to navigate the complex laryngeal cancer pathway [55]. Cervical cancer is another important contributor, especially in women younger than 45 years, where it accounts for 15% and 7% of all LYL in the most and least deprived patients, respectively, illustrating the need to increase cervical cancer screening uptake and HPV vaccine coverage among young women, particularly in more deprived populations. The study findings highlight the fact that reducing inequalities in younger adults is as important as tackling inequalities in the older population, as it would result in many life-years gained. From a societal aspect, the LYL due to cancer in adults of working age can have a significant societal and economic impact. Studies in the US and Europe have consistently shown that premature loss of life attributed to cancer results in reduced productive capacity and therefore loss in labour force earnings [56][57][58][59]. In the UK, it was estimated that in a single year over 50,000 people of working age lose their lives from cancer, and in 2014 these people could have contributed £585 million to the UK economy [60]; such losses are concentrated among those with short-survival cancers or other co-morbid chronic diseases [61]. It is estimated that among cancer survivors only around 63.5% will return to employment, with the majority reducing their working hours and limiting voluntary activities and caregiving [62]. Literature on the societal and economic impact of socio-economic inequalities in cancer remains scarce [63,64]. Moreover, similar studies on this topic have mostly used the loss in life expectancy, which requires extrapolation of cancer survival of the cohort individuals up to the end of their expected life [65]. Our metric of LYL does not rely on such extrapolation as it is time-bound to the point where all patients have been followed up. We acknowledge that the social and economic costs of a patient death go far beyond 3 years. However, our estimates bounded at 3 years make the costs easier to estimate by health economists and more usable politically and for health policy planning.
From a public health policy perspective, it is vital to address these inequalities, as this will reduce the overall impact of cancer on society. The wider inequalities among young patients potentially emphasise the structural components that may play a key role and pose a serious challenge to the healthcare system and society. Moreover, the range of these across-cancer inequalities raises the question of their causes. Mechanisms underlying such inequalities within a universal health coverage setting are still not well understood [66]. In the context of an increasing shortage of resources in both primary and secondary care sectors [67], the COVID-19 pandemic has exacerbated the inequalities [68,69]. It also emphasised that the suboptimal distribution of resources between areas according to their deprivation level [70,71] is likely to play an important role in the inequalities in accessing optimal healthcare [72] and, ultimately, in cancer outcomes [73]. The inequities component should be systematically and carefully considered in any policies aiming to improve cancer outcomes (including earlier detection or new treatments) before their implementation, in order to reduce these inequalities or at least avoid widening them further. --- DATA AVAILABILITY The data used for this study are the English National Cancer Registry data 1971-2014. Cancer registration data consist of patient information and as such are protected under the Data Protection Act 1998 and GDPR 2018 and cannot be made available as open data. Formal requests for release of cancer registration data can be made to the data custodian Public Health England (PHE), Office for Data Release (ODR) at [email protected]. Researchers must beforehand have obtained all the ethical and statutory approvals required for accessing sensitive data. Detailed information on the application process can be found at https://www.gov.uk/government/publications/accessing-public-health-england-data/about-the-phe-odr-and-accessing-data. --- AUTHOR CONTRIBUTIONS BR, DKK, AB and AE designed the study. AE and DKK conducted data analysis and created figures and tables. DKK designed the web-tool. AE and BR drafted the manuscript. All authors contributed to the interpretation of the results, revised and critically reviewed the manuscript and approved the submitted version. --- COMPETING INTERESTS The authors declare no competing interests. --- ETHICAL APPROVAL The authors have obtained the ethical and statutory approvals required for this research (PIAG 1-05(c)/2007); ethical approval updated 6 April 2017 (REC 13/LO/0610). --- ADDITIONAL INFORMATION Supplementary information The online version contains supplementary material available at https://doi.org/10.1038/s41416-022-01720-x. Correspondence and requests for materials should be addressed to Aimilia Exarchakou. Reprints and permission information is available at http://www.nature.com/reprints. Publisher's note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Background-Alcohol research focused on underage drinkers has not comprehensively assessed the landscape of brand-level drinking behaviors among youth. This information is needed to profile youth alcohol use accurately, explore its antecedents, and develop appropriate interventions. Methods-We collected national data on the alcohol brand-level consumption of underage drinkers in the United States and then examined the association between those preferences and several factors including youth exposure to brand-specific alcohol advertising, corporate sponsorships, popular music lyrics, and social networking sites, and alcohol pricing. This paper summarizes our findings, plus the results of other published studies on alcohol branding and youth drinking. Results-Our findings revealed several interesting facts regarding youth drinking. For example, we found that: 1) youth are not drinking the cheapest alcohol brands; 2) youth brand preferences differ from those of adult drinkers; 3) underage drinkers are not opportunistic in their alcohol consumption, but instead consume a very specific set of brands; 4) the brands that youth are heavily exposed to in magazines and television advertising correspond to the brands they most often report consuming; and 5) youth consume more of the alcohol brands to whose advertising they are most heavily exposed.
Introduction Epidemiological studies have consistently demonstrated that youth drinking is an important predictor of negative social, developmental, and behavioral health effects (Swahn, Simon, Hammig, & Guerrero, 2004; Hingson, Heeren, & Winter, 2006; Gil & Molina, 2007; The Lancet, 2008; Committee on Substance Abuse, 2010; Kim, Asrani, Shah, Kim, & Schneekloth, 2012; Rehm et al., 2014). Recent survey data indicate that despite declining trends in overall past-month drinking prevalence, roughly 70% of U.S. high school seniors have ever consumed alcohol, while about 25% of American youth ages 12-20 have done so in the past 30 days (Johnston, O'Malley, Bachman, & Schulenberg, 2011; Chen, Yi, & Faden, 2013; Substance Abuse and Mental Health Services Administration, 2013; Center on Alcohol Marketing and Youth, 2014). Substantial research has been conducted to identify the causes of youth drinking, with numerous studies focused on the potential influence of exposure to alcohol advertising on youth drinking intentions and behaviors. The findings have been mixed. While some studies have shown a strong association between advertising exposure and subsequent alcohol consumption (Ellickson, Collins, Hambarsoomians, & McCaffrey, 2005; Anderson, de Bruijn, Angus, Gordon, & Hastings, 2009; Smith & Foxcroft, 2009), other research has found only weak, partial evidence or no significant relationship at all (Franke & Wilcox, 1987; Smart, 1988; B. Lee & Tremblay, 1992; Edward, Moran, & Nelson, 2001; Bonnie et al., 2004; Nelson, 2011). One potential flaw of these studies is that they typically describe the relationship between youth exposure to alcohol marketing and alcohol consumption at either the aggregate level (e.g., using data on per capita alcohol consumption or alcohol sales) or the beverage category level (e.g., using self-reported beer, wine, and spirits consumption). Since alcohol advertising, pricing, and consumption occur at the brand level, assessing the impact of alcohol marketing on adolescents using global rather than brand-specific variables could be masking any true effect of advertising on youth alcohol behavior. Presently, to the best of our knowledge, there are no comprehensive data on youth drinking available at the level of specific brands. It is important to eliminate this information gap first and foremost because considerable evidence indicates that alcohol brands are marketed to youth and that this marketing is engineered to build brand capital (that is, the positive, compelling characteristics that consumers associate with a particular brand) by way of carefully designed advertising content (Saffer, 2002; Collins, Ellickson, McCaffrey, & Hambarsoomians, 2005; Hastings, Anderson, Cooke, & Gordon, 2005; Kessler, 2005; Saffer & Dave, 2006; Henriksen, Feighery, Schleicher, & Fortmann, 2008; Ross, Ostroff, & Jernigan, 2014). Second, research on adolescents' cigarette brand preferences and their exposure to youth-oriented marketing was instrumental to the development of stricter advertising regulations intended to protect youth. This line of work suggests that brand-level alcohol research is not only relevant but may indeed be one of the keys to reducing alcohol use among youth drinkers (Pucci & Siegel, 1999; King & Siegel, 2001; Cummings, Morley, Horan, Steger, & Leavell, 2002; R. Lee, Taylor, & McGetrick, 2004; Hafez, 2005; Krugman, Quinn, Sung, & Morrison, 2005; Ibrahim, 2010). Consider the case of Camel brand cigarettes, manufactured by R. J. Reynolds.
Introduced in the late 1980s, the brand's Joe Camel ("Smooth Character") marketing campaign was heavily criticized for using cartoon illustrations that public health advocates argued were appealing to youth (Cohen, 2000; Fischer, Schwartz, Richards, Goldstein, & Rojas, 1991; Pierce et al., 1991; Pierce, Gilpin, & Choi, 1999). Due to concerns about Camel's marketing strategy, several research studies examined the relationship between tobacco brand advertising and youth cigarette use. These studies demonstrated that the Joe Camel logo was highly recognizable, even to very young children (Fischer et al., 1991; Pierce et al., 1991); that the Camel brand was one of the most popular cigarette brands among youth (Centers for Disease Control and Prevention (CDC), 1994; Pierce et al., 1991; Pucci & Siegel, 1999); and that youth initiation of Camel cigarette smoking directly mirrored the timing of the Joe Camel campaign's ten-year run from 1988 to 1998 (Pierce et al., 1999). This body of evidence played a role in convincing the United States Department of Justice, in a civil litigation process that began in 1999 and lasted until the trial decision was issued in 2006, that tobacco companies had violated the Racketeer Influenced and Corrupt Organizations Act (RICO) (Judge Gladys Kessler, 2006; United States Department of Justice, 2014). In conjunction with more recent legislation, notably the 2009 Family Smoking Prevention and Tobacco Control Act (Rep. Henry A. Waxman, 2009), these successful lawsuits led to the U.S. Government being granted greater control over the regulation of tobacco and other nicotine-containing products, including their marketing. Were it not for the brand-level research conducted on youth smoking, these regulations might not exist. While the field of tobacco control can draw upon brand-specific youth smoking data going back to 1989, little comparable information exists on brand-specific alcohol use among youth. To address this gap, our study team launched the Alcohol Brand Research Among Underage Drinkers (ABRAND) project to collect national data on the alcohol brand-level consumption of underage drinkers in the U.S. and then examine the association between those preferences and several factors, including youth exposure to brand-specific alcohol advertising, corporate sponsorships, popular music lyrics, and social networking sites, and alcohol pricing. This paper summarizes our major findings, plus the results of other published studies on alcohol branding and youth drinking. Please note that the legal drinking age in the U.S. is 21, and while the minimum legal drinking age varies internationally, we believe the findings discussed in this article are relevant to any public health professional interested in reducing the harms related to adolescent alcohol use. --- Methods To date, this four-year project has generated a total of 23 manuscripts that are published or currently in press. Summaries of each paper may be viewed at www.youthalcoholbrands.com. Our team's research (the ABRAND project) can be categorized into four major topic areas: 1) surveillance and epidemiology; 2) pricing and purchasing expenditures; 3) social and popular media; and 4) advertising and marketing. Although categories three and four both relate to media depictions of alcohol brands, here the term "social and popular media" refers specifically to our team's analysis of U.S. music lyrics and U.S. and international Facebook page content.
In contrast, category four (advertising and marketing) refers to paid, brand-sponsored television and magazine advertising and marketing. --- Surveillance and Epidemiology From December 2011 through May 2013, we surveyed a national sample of 1032 underage drinkers in the United States between the ages of 13 and 20, using a pre-recruited Internet panel maintained by Knowledge Networks, Inc. Each respondent had consumed at least one alcoholic drink in the past 30 days. The online, self-administered survey assessed respondents' overall and brand-specific alcohol consumption during that time period, based on a comprehensive list of 898 brands compiled by our team. Additionally, respondents answered questions regarding risky alcohol-related behavior (such as heavy episodic drinking, fights, and injuries), the source of their most recently consumed alcohol (parents, an underage friend, a liquor store, etc.), and their role in selecting the brand of their most recent drink. --- Pricing and Purchasing Expenditures Our team assembled data on alcohol brand prices and purchasing expenditures in both control and license states through a number of strategies, but primarily by reviewing brand-specific prices and other beverage characteristics that U.S.-based alcohol vendors posted online in 2011. --- Social and Popular Media We collected social and popular media data from a number of different sources. For our analysis of alcohol brand mentions in music lyrics, we searched Billboard Magazine's year-end charts from 2009 to 2011 to identify the most popular songs in four genres: Urban, Pop, Country, and Rock. Our research on Facebook involved systematically reviewing brand-specific, company-sponsored pages for content potentially viewable by underage drinkers. --- Advertising and Marketing We used data obtained from Nielsen Monitor-Plus (New York, NY) and Kantar Media (New York, NY) to identify the brand-specific advertising that appeared in the full-run, national editions of 124 magazines published in the U.S. in 2011. Next, through a licensing agreement with GfK MRI (Growth from Knowledge, Mediamark Research & Intelligence, New York, NY), we acquired data on the demographics of each magazine's readership. With this information, we could determine the extent to which underage youth were exposed to magazine advertising for each alcohol brand, which we then related to the brand-specific consumption levels reported by our youth survey respondents. To examine youth exposure to alcohol advertising on television, our survey asked respondents to indicate which of 20 television programs they had viewed in the past 30 days. For each survey respondent, we calculated a standard measure of cumulative exposure to each brand's advertising that aired on those shows during the preceding 12 months, based on Nielsen estimates of the youth audience for each show's telecasts. Our primary analysis related this exposure data to each survey respondent's reported consumption of the advertised brands. --- Literature Search To identify additional published studies on alcohol brand-related behaviors among underage drinkers, we conducted a literature search via Web of Science using the following keywords and word stems: alcohol, brand*, underage, youth*, and adolescent*. We included studies conducted domestically and internationally as long as the authors described a comprehensive and systematic approach to studying underage alcohol use in relation to specific alcohol brands.
Studies were excluded if they examined only a small number of selected alcohol brands (such as the top 10 most advertised brands in their country) or if they asked participants brand-related questions but did not name the brands when presenting their results. In this paper, we present the findings from our team's research, augmented with additional evidence from the literature (Gentile, Walsh, Bloomgren, Atti, & Norman, 2001; Kearns, James, & Smyth, 2011; Tanski, McClure, Jernigan, & Sargent, 2011; Primack, Nuzzo, Rice, & Sargent, 2012). --- Results Please see Table 1 for a summary of our overall findings and corresponding references. --- Surveillance and Epidemiology We used two methods to estimate the survey respondents' total alcohol consumption during the past 30 days. The first was a frequency-quantity measure, which asked respondents to report how many days they drank during a certain time period and then how many drinks they typically had on a day when they drank. While this traditional measure is easy to administer, past research has suggested that it may underestimate respondents' actual alcohol consumption (Rehm, 1998; Dawson, 2003; Bloomfield, Hope, & Kraus, 2013). In addition to this measure, our survey asked respondents to report how many drinks of each alcohol brand they had drunk in the past 30 days. This second method allowed us to determine the total number of drinks a respondent reported having across his or her list of consumed brands. Compared to the traditional measure, respondents reported an average of 11 additional drinks per month (a 62% increase) when asked to report their brand-specific alcohol consumption. Status as a recent heavy episodic ("binge") drinker (defined in our study for both males and females as consuming five or more drinks in a row) and consuming a greater number of alcohol brands significantly predicted the disparity between the two measures (Roberts, Siegel, DeJong, & Jernigan, 2014). In the first published study on underage drinkers' brand-level consumption patterns, Gentile and colleagues found that the beer brands most heavily advertised in 1998 and 1999 (including Budweiser and Bud Light, Miller Genuine Draft and Miller Lite, Coors and Coors Light, Corona and Corona Extra, and Heineken) were also the brands that youth in the U.S. reported preferring and drinking most often (Gentile et al., 2001). More recently, a nationwide telephone survey of U.S. youth ages 16-20 found that two-thirds of the underage drinkers surveyed had a preferred alcohol brand, with Smirnoff vodka and Budweiser beer ranked as their favorites (Tanski et al., 2011). Data from a 2011 pilot study conducted among Irish youth ages 14-18 also identified Smirnoff vodka and Budweiser beer as favorite brands, despite the availability of less expensive brands (Kearns et al., 2011). In line with this small pool of past studies, our research team found that the alcohol brands with the highest past-30-day consumption prevalence were Bud Light beer (consumed by 27.9% of underage drinkers), Smirnoff malt beverages (17.0%) and Budweiser beer (14.6%) (Siegel, DeJong, et al., 2013). We discovered that the top 25 brands preferred by underage drinkers accounted for nearly half of the total market share (48.9%), defined as the proportion of total drinks consumed by the entire sample attributable to a specific brand (Siegel, DeJong, et al., 2013). We also examined demographic differences in underage drinkers' alcohol brand preferences (Siegel, Ayers, DeJong, Naimi, & Jernigan, 2014).
Two brands of beer, Bud Light and Budweiser, were popular among underage youth regardless of their demographic characteristics. A preference for liquor brands (e.g., Smirnoff vodka, Jack Daniel's whiskey) appeared to increase with age. We also found that some flavored alcoholic beverages (e.g., Smirnoff malt beverages) and wine coolers (e.g., Bartles & Jaymes) were substantially more popular among females than males. In contrast, liquor brands tended to be more popular among males than females, although some were popular among both sexes (e.g., Absolut vodka, Smirnoff vodka, Bacardi rum). Finally, we found the most variability in alcohol brand preferences when we stratified our analyses by race/ethnicity. Nearly half of the top 25 alcohol brands popular among Black youth did not appear on the list of top 25 preferred brands among non-Hispanic White youth. The 12 alcohol brands found to be uniquely popular among Black respondents were Hennessy cognac, Ciroc vodka, 1800 tequila, Seagram's gin, E & J Gallo brandy, 1800 margaritas and cocktails, Bud Ice beer, Andre champagne, Gallo wines, Miller High Life beer, Christian Brothers brandy, and Colt 45 malt liquor. Further investigation revealed that underage youth also exhibit strong brand preferences when engaging in binge drinking. We found that two-thirds of the total alcohol consumed by our sample was drunk during binge drinking episodes (Naimi, Siegel, DeJong, O'Doherty, & Jernigan, 2014). We identified 25 brands that accounted for almost half (46.2%) of all reported heavy episodic drinking, with Bud Light beer, Jack Daniel's whiskey, and Smirnoff malt beverages topping the list. In a separate analysis, we also found that respondents who reported engaging in heavy episodic drinking had significantly higher odds of experiencing alcohol-related fights and injuries (Roberts, Siegel, DeJong, Naimi, & Jernigan, 2015). Furthermore, we identified eight brands that were significantly more popular among youth who had experienced fights and injuries: Jack Daniel's whiskeys, Absolut vodkas, Heineken, Bacardi rums, Bacardi malt beverages, Hennessy cognacs, Jack Daniel's Cocktails, and Everclear 190 (grain alcohol). In another investigation, we analyzed our survey data on where and from whom underage drinkers obtain alcohol and whether they themselves select the brands they consume (Roberts, Siegel, DeJong, Naimi, & Jernigan, 2014). We found that most underage drinkers typically obtain alcohol from passive sources such as an underage peer or an adult of legal drinking age. When we stratified the data according to respondents' cited source of alcohol and their role in brand choice, the lists of consumed brands were extremely similar. To assess whether the youth alcohol brand preferences identified in our research simply reflect the preferences of adult drinkers, we compared our national survey data on adolescent brand preferences to data on the brand preferences of adults aged 21 and older (Siegel, Chen, et al., 2014), as measured via the GfK MRI Survey of the Adult Consumer. We found that while most brands of alcohol popular among adolescents were also top brands among adult drinkers, a total of 15 brands were found to have a disproportionately high prevalence and market share ratio among youth, with Smirnoff malt beverages, Jack Daniel's whiskey, Mike's malt beverages, and Absolut vodka topping the list.
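The two summary measures used throughout these analyses, past-30-day brand prevalence and brand market share, follow directly from the brand-level survey responses. A toy R sketch with hypothetical data (one row per respondent-brand pair):

drinks <- data.frame(
  resp  = c(1, 1, 2, 3, 3, 3),
  brand = c("BudLight", "SmirnoffMalt", "BudLight", "BudLight", "JackDaniels", "SmirnoffMalt"),
  n     = c(6, 2, 3, 4, 5, 1)    # drinks of that brand in the past 30 days
)
n_resp <- length(unique(drinks$resp))

# Prevalence: share of drinkers reporting any past-30-day use of the brand.
prevalence <- tapply(drinks$resp, drinks$brand, function(r) length(unique(r))) / n_resp

# Market share: proportion of all reported drinks attributable to the brand.
share <- tapply(drinks$n, drinks$brand, sum) / sum(drinks$n)

# Summing each respondent's rows also yields the brand-based total-consumption
# measure that was compared with the frequency-quantity measure above.
total_by_resp <- tapply(drinks$n, drinks$resp, sum)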
Of note, we found that over half of underage drinkers in our sample reported using caffeinated alcoholic beverages (CABs) in the past 30 days (Kponee, Siegel, & Jernigan, 2014). We categorized CABs into "traditional" (alcohol mixed with coffee, tea, or soda) and "non-traditional" types (pre-packaged alcoholic energy drinks or alcoholic beverages mixed with caffeine pills, energy drinks, or energy shots). Older respondents (ages 19-20) were significantly more likely to drink CABs of both types compared to younger respondents (ages 13-17). Respondents who reported CAB use, especially non-traditional CAB use, were more likely to report drinking greater volumes of alcohol and drinking more days per month, while also being more likely to report heavy episodic drinking in the past 30 days. Replicating data from our pilot study (Giga, Binakonsky, Ross, & Siegel, 2011), our national survey of underage drinkers revealed that flavored alcoholic beverages (FABs) are very popular among youth, with nearly half of our respondents having drunk FABs in the past 30 days (Fortunato et al., 2014). Smirnoff malt beverages, Mike's, Bacardi malt beverages, and Four Loko/Four MaXed were the most frequently consumed FABs (Fortunato et al., 2014). The five most popular FAB brands accounted for nearly half of the respondents' total FAB consumption. Finally, we found that approximately 1 in 5 underage drinkers ages 16-20 reported consuming Jello shots in the past 30 days. Jello shots are typically self-made (rather than purchased pre-packaged) from sweetened, flavored gelatin mixed with alcohol, typically spirits. The mix is cooled in a refrigerator and served in small containers as a "shot." For these drinkers, Jello shots comprised an average of nearly 20% of their overall monthly alcohol consumption, with this figure rising to 95% for some respondents (Binakonsky, Giga, Ross, & Siegel, 2011; Siegel, Galloway, Ross, Binakonsky, & Jernigan, 2014). Compared to the other respondents, youth who reported consuming Jello shots were more likely to report heavy episodic drinking (1.5 times more), higher alcohol consumption overall (1.6 times greater), and experiencing alcohol-related fights/injuries (1.7 times more) (Siegel, Galloway, et al., 2014). --- Pricing and Purchasing Expenditures We found greater price variation between brands within beverage categories than between the overall beverage categories. Moreover, we found that percent alcohol by volume varied greatly between brands within each beverage category (DiLoreto et al., 2012). Because of these variations in alcohol content and price, 21 of the 25 least expensive alcohol brands were priced at less than $1.00 per standard drink (Albers et al., 2013). We found a general relationship between lower brand prices and drinking preference among youth, but the brands our survey respondents reported consuming most frequently were not the cheapest available (Albers, DeJong, Naimi, Siegel, & Jernigan, 2014). Among the 951 brands for which we obtained both price and youth consumption data, the three most popular brands among underage drinkers were Bud Light beer ($1.60/ounce of alcohol), Smirnoff malt beverages ($2.38/ounce), and Budweiser beer ($1.29/ounce). In terms of the relative cost of these top brands (where 1 = least expensive per ounce of alcohol and 951 = most expensive), Bud Light beer was ranked 253rd cheapest, Smirnoff malt beverages 455th cheapest, and Budweiser beer 186th cheapest.
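These price comparisons reduce to simple arithmetic on a brand's container price, volume, and percent alcohol by volume (ABV); a U.S. standard drink contains about 0.6 fl oz of pure ethanol. A minimal R sketch with hypothetical numbers:

price_per_oz_alcohol <- function(price, volume_oz, abv) price / (volume_oz * abv)
price_per_std_drink  <- function(price, volume_oz, abv) 0.6 * price_per_oz_alcohol(price, volume_oz, abv)

# e.g., a hypothetical 12-pack of 12-oz beers at 4.2% ABV sold for $8.99:
price_per_oz_alcohol(8.99, 12 * 12, 0.042)   # about $1.49 per ounce of alcohol
price_per_std_drink(8.99, 12 * 12, 0.042)    # about $0.89 per standard drink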
--- Social and Popular Media Primack and colleagues found that music popular among youth in the U.S. between 2005 and 2007 contained lyrics with frequent alcohol references, with about one-quarter of alcohol mentions citing a specific brand (Primack et al., 2012). The top three most frequently mentioned alcohol brand names were Patron tequila, Grey Goose vodka, and Hennessy cognac. Our review of 720 popular songs listed in Billboard Magazine's U.S.-based year-end charts for 2009 to 2011 revealed that nearly one-quarter of the song lyrics include an alcohol reference, with 6.4% of all songs specifying a particular alcohol brand (Siegel, Johnson, et al., 2013). The contexts for these references were overwhelmingly positive, with few popular songs associating alcohol with any undesirable consequences. Patron tequila, Grey Goose vodka, Hennessy cognac, and Jack Daniel's whiskey were among the most-mentioned alcohol brands. Alcohol companies have a substantial presence on social networking sites, but until recently no study had systematically assessed the frequency of company-sponsored alcohol brand sites on Facebook, a popular site among youth (Nhean et al., 2014). Our research team found over 1,000 company-sponsored alcohol brand pages, excluding user-generated content such as fan pages or individual users' posts about an alcohol brand. Spirits led (554 sites), followed by beer (230 sites), wine (212 sites) and alcopops (flavored alcoholic beverages; 21 sites); alcohol companies thus appear to make frequent use of social networking platforms as a strategy to reach consumers. --- Advertising and Marketing Substantial research has examined the association between youth exposure to alcohol marketing and drinking behavior (Ellickson et al., 2005; Snyder, Milici, Slater, Sun, & Strizhakova, 2006; Anderson et al., 2009; McClure, Stoolmiller, Tanski, Worth, & Sargent, 2009; Smith & Foxcroft, 2009; Grenard, Dent, & Stacy, 2013). Using data from our survey, our research team analyzed this relationship at the brand level. We found that any exposure to brand-specific alcohol advertising on television programs popular among adolescents was significantly associated with brand-specific alcohol use among youth (Ross, Maple, et al., 2014). This relationship was significant even after controlling for individual- and brand-level variables such as demographic characteristics, reported media consumption patterns, status as a recent binge drinker, brand prices, and overall national brand market share. Importantly, we also discovered a significant association between exposure to brand advertising and the number of drinks of the corresponding brand youth consumed in the past 30 days (Ross, Maple, et al., 2014). In a similar study, we assessed the relationship between underage drinkers' reported alcohol brand preferences (Siegel, DeJong, et al., 2013) and their brand-specific marketing exposure in magazines. We examined the alcohol advertising content of 124 nationally distributed magazines and readership data for male and female youth audiences aged 12-20 (Ross, Ostroff, Siegel, et al., 2014). Male youth ages 18-20 were the demographic group most heavily exposed to advertisements for 11 of the top 25 brands preferred by underage male drinkers, plus another six alcohol brands. Strikingly, females ages 18-20 were the group most heavily exposed to advertisements for 16 of the 25 most popular brands among female underage drinkers, plus two additional brands.
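One way to picture the cumulative exposure measure used in the television analyses: with a respondent-by-show viewing matrix from the survey and a show-by-brand matrix of brand ad units aired over the preceding 12 months, per-respondent brand exposure is a matrix product. The R sketch below uses hypothetical data and object names, and omits the Nielsen audience weighting applied in the actual measure:

set.seed(1)
viewed <- matrix(rbinom(6 * 4, 1, 0.4), nrow = 6,
                 dimnames = list(paste0("resp", 1:6), paste0("show", 1:4)))   # shows watched (0/1)
ads <- matrix(rpois(4 * 3, 10), nrow = 4,
              dimnames = list(paste0("show", 1:4), c("brandA", "brandB", "brandC")))  # ad units per show

# Respondent-by-brand cumulative exposure: ad units for each brand aired on
# the shows each respondent reported watching.
exposure <- viewed %*% ads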
In addition to studying mainstream advertising channels, our team identified 945 brand-specific corporate sponsorships between 2010 and 2013 for the 75 most popular alcohol brands among underage drinkers (Belt et al., 2014). The five brands with the most sponsorships were Miller beer (including Miller Lite, Miller Genuine Draft, and Miller High Life), Twisted Tea hard iced teas, Jim Beam bourbon, Jack Daniel's (including Jack Daniel's whiskey and Jack Daniel's cocktails), and Pabst Blue Ribbon beer. --- Discussion This paper presents the first review of research on underage drinkers' brand-specific alcohol preferences and consumption patterns, with a primary focus on findings from the Alcohol Brand Research Among Underage Drinkers (ABRAND) project. It should be noted that the ABRAND studies were cross-sectional in nature, and therefore we cannot infer a causal relationship between any of the variables of interest and youth alcohol consumption. While further research is necessary to determine whether a causal relationship exists, the ABRAND project has several important implications for research on youth alcohol consumption, prevention programming, and policy. --- Research Implications Not only did we determine that it is feasible to comprehensively assess brand-specific adolescent alcohol use, but this method may actually generate more complete self-reports of alcohol use. We believe this could have important implications for conducting survey research on youth alcohol consumption, as a more nuanced measure of alcohol use may provide additional insight into what influences drinking behavior. Collecting and monitoring brand-level data seems particularly important in light of our findings that 1) there are alcohol brands disproportionately preferred by adolescents compared to adults; and 2) underage drinkers are viewing a substantial amount of brand-specific advertisements placed in both traditional and non-traditional media channels. Additionally, it is worth noting that we found Jello shot consumption to be prevalent among underage youth and associated with negative alcohol-related health consequences, which suggests that this form of alcohol use may be an important addition to alcohol survey measures. --- Program Implications Those involved in public health programming and practice could utilize these findings to tailor alcohol education in school and community settings. For example, in 2014 author Dr. Michael Siegel designed and conducted a media literacy learning module with high school students in a suburban U.S. city. The module incorporated brand-specific data on alcohol use among youth, but focused on empowering adolescents to deconstruct and analyze alcohol branding and its implications for health behavior. Inspired by the American Legacy Foundation's U.S.-based "truth®" campaign to discourage cigarette smoking among youth (Farrelly, Nonnemaker, Davis, & Hussin, 2009), a similar tactic could be replicated for alcohol counter-advertising campaigns. --- Policy Implications We found that alcohol advertising in magazines reached underage youth more effectively than it did adults, and that the alcohol brand advertisements youth are heavily exposed to correspond to the brands that underage drinkers prefer. Additionally, we found that exposure to brand-specific alcohol advertising on television is associated with brand-specific alcohol consumption among youth, and that higher amounts of brand advertising exposure on television are related to increased levels of consumption of those brands.
These findings underscore the emphasis that alcohol companies place on brand-level rather than beverage-category marketing. In light of this, one policy recommendation relevant to the U.S. policy landscape would be for the Federal Trade Commission to take steps to explore the nature of brand-specific alcohol advertising placement and content. Such data could also be used to develop recommendations for government action (both in the U.S. and internationally) to ensure that youth health behavior surveillance systems capture data on brand-level alcohol consumption. --- Conclusion We found that brand-level alcohol use is an important aspect of youth drinking behavior that deserves further study. We hope to see future research comprehensively assess the relationship between underage drinking and brand-specific factors. We are particularly interested in further exploration of adolescent drinkers' brand-level alcohol use patterns and related risk behaviors among internationally representative samples. Our globalized economy is driven by the advertising, marketing, and personalization of an enormous array of goods and services. Brands affect our lives, and our children's lives. Why wouldn't they affect our health? --- Table 1. Overall findings and corresponding references.
Finding: Youth tend to prefer a fairly narrow set of alcohol brands.
References: • Gentile, D. A., et al. (2001). Frogs Sell Beer: The Effects of Beer Advertisements on Adolescent Drinking Knowledge, Attitudes, and Behavior. • Tanski, S. E., et al. (2011). Alcohol brand preference and binge drinking among adolescents. Archives of Pediatrics & Adolescent Medicine. • Siegel, M. B., et al. (2013).
Finding: Youth may be heavily exposed to brand-specific alcohol references in music and social media, through television and magazine advertisements, and via corporate sponsorships.
References: • Primack, B. A., et al. (2012). Alcohol brand appearances in US popular music. Addiction. • Siegel, M. B., et al. (2013). Alcohol brand references in U.S. popular music, 2009-2011. Substance Use & Misuse. • Nhean, S., et al. (2014). The frequency of company-sponsored alcohol brand-related sites on Facebook™ -2012. Substance Use & Misuse.
Finding: There is an association between the specific alcohol brands underage drinkers are exposed to in magazine and television advertisements, the alcohol brands they prefer, and the number of drinks they consume of those brands.
References: • Ross, C. S., et al. (2014). The relationship between brand-specific alcohol advertising on television and brand-specific consumption among underage youth. Alcoholism, Clinical and Experimental Research.
DCT 'APOLOGY STRATEGIES BY IRAQI MALES AND FEMALES':
1. You promised your son to bring him a gift for some occasion, but you did not bring it. You apologize saying:
2. You asked your friend for a device, and it became disabled. You apologize saying:
3. You borrowed a book from your teacher, but it was torn. You apologize saying:
Phatic communion, which is related to the maintenance of social relations among humans, is one of the main functions of language. In interaction, the participants' assumptions and expectations about people, events, places, etc. play an important role in the performance and interpretation of linguistic expressions. The choice of such expressions to convey a certain communicative purpose is controlled by social conventions and the individual's assessment of the situation (Nureddeen, 2008). According to Van Dijk (1977: 155), language users employ various speech acts to achieve their communicative aims. A speech act is an utterance that serves a communicative function such as greeting, apologizing, warning and the like (see Hatch, 1992: 22). In the current paper, the speech act of apology in Iraqi Arabic will be investigated. Apology is defined as a speech act which is intended to provide support for the hearer who was actually or potentially malaffected by a violation (Olshtain, 1989: 156). Generally, apologies fall under expressive speech acts, where the speaker tries to indicate his own state or attitude. In order for an apology to have an effect, it should reflect true feelings. Gooder and Jacob (2000: 272) point out that the proper apology acknowledges the fact of the wrong deed, accepts ultimate responsibility, expresses sincere regret and sorrow, and promises not to repeat the offense. Moreover, the effectiveness of apology involves certain principles: familiarity with the victim, since intimacy determines the style of apology; intensity of the act warranting the apology, since the more trivial it is, the less of an apology it needs, and vice versa; the relative authority of the offender and the victim, since the styles of apology reflect how superior or inferior the victim is to the apologizer; the relative ages of the two participants; the sex of both participants, since females tend to apologize more than males; and the place of exchange, since it influences the formality and strategy of apology (Soliman, 2003). The main characteristics of apology are summarized by Edmondson (1981: 144) as follows (key: S=speaker; H=hearer): S wishes H to believe that S is not in favour of an act (A) performed by S as against the interests of H. In 'apologize', S may be held to regret that he did A, and to discredit himself socially for having done so. Apology can be used strategically in the speech act set of other acts such as 'complain' and so on. In addition, apologies are clearly H-supportive, such that their tokens (and strategies) are highly conventionalized. --- Studies on apology Sociolinguistic studies of apology have been limited. However, most studies aim to indicate evidence of pragmatic transfer in the order, frequency, and content of the semantic formula (or strategy) used in apologies. Thus, the goal is somewhat pedagogical. The paper will chronologically sketch some of these investigations.
Fraser (1981: 263) conveys that in order for an apology to be convincing, the offender has to use a combination of two or more of the following strategies: (1) announcing that apology is achieved, by clauses such as I (hereby) apologize…; (2) stating the offender's obligation to apologize, with words like I must apologize; (3) offering to apologize to show the sincerity of the act, with sentences like Do you want me to apologize?; (4) requesting the acceptance of the given apology, with clauses like Please accept my apology; (5) expressing regret for the offense through the use of intensifiers like very, so, and extremely; (6) requesting forgiveness for the offense; (7) acknowledging responsibility for the wrong act; (8) promising not to repeat the action; and (9) showing readiness for compensation. Olshtain and Cohen (1983: 20-24) indicate that the speech act set of 'apologize' contains the following acts: 1. An expression of apologizing (a. expressing regret; b. an offer of apology; c. a request for forgiveness); 2. An explanation or account of the situation; 3. An acknowledgement (a. accepting the blame; b. expressing self-deficiency; c. recognizing the other person as deserving 'apologize'; d. expressing lack of intent); 4. An offer of repair; and 5. A promise of forbearance (see Gass & Neu, 2006: 193). Similarly, Trosborg (1987: 150) supposes that the offender determines which apology strategy to use from the following: (1) minimizing the degree of offense (through discussing the preconditions of the offense, and blaming another person for the offense); (2) acknowledging responsibility, for which six types are listed depending on the degree to which the offender accepts the blame (implicit acknowledgement, explicit acknowledgement, expression of lack of intent, expression of self-deficiency, expression of embarrassment, and explicit acceptance of the blame); (3) explicit or implicit explanation as a kind of mitigation; (4) offer of repair (through a literal offer, in which the offender states s/he will pay for the damage, or a compensation which might balance the offense); (5) promise not to repeat the act; and (6) expression of concern for the offended person to calm him. Barr (1989) shows another notion of apology concerning the Japanese culture. Apologies are important and should be sincere; they are even rituals that should be adhered to. Even criminals must apologize for their mistakes. Rizk (1997) examines apology strategies used among Arab non-native speakers of English, studying the answers of 110 Egyptian, Saudi, Jordanian, Palestinian, Moroccan, Lebanese, Syrian, Tunisian, Yemeni, and Libyan speakers of English to a questionnaire he designed. His results prove the conformity of apology strategies between native and non-native speakers of English in all situations that warrant an apology except for one: unlike the natives, Arabs do not apologize to children; instead they try to make the child forgive them through sentences like Do not feel sad, baby. Hussein and Hammouri (1998) have investigated the use of apology by American and Jordanian speakers of English. They conclude that Jordanians use more strategies to apologize than Americans; while both groups resort to the expression of apology, the offer of repair, the acknowledgement of responsibility, and the promise of forbearance, only Jordanians use strategies like praising Allah (God) for what happened, attacking the victim, minimizing the degree of offense, and interjection.
Another study on apology is Lev's (2001), in which he shows that apologies in China are less ritualistic and more goal-oriented. In the Chinese culture, apology is used to solve problems. If a person acts wrongly, s/he should first apologize, and then talk with the victim about what is to be done next. Apologies in China do not necessarily come with the risk of losing face or feeling humiliated. Unlike Americans, the Chinese are not afraid of litigation and, thus, are ready to apologize to wipe off a multitude of sins. Soliman (2003), in his contrastive study of apology in Egyptian and American culture, discovered the following similarities and differences: (1) intensifiers are used in both cultures to show sincerity; (2) interjections like oh are important to convey the offender's care about what happened; (3) people in both cultures tend to express embarrassment for the wrong act; (4) Egyptians tend to attack the victim when the offender thinks the victim cannot justify his position, as in the incident where the headmaster blames the janitor he bumped into for the incident instead of apologizing to him; (5) Egyptians praise Allah (God) for everything that happens, whether good or bad. --- Problem and Aims of the study This study is concerned with the speech act of apology. It should be mentioned that one should not lump all Arabic-speaking countries together. Arabic in Iraq, like Arabic all over the Arab world, is of a diglossic nature. There are two varieties used: a 'formal variety' (Fusha) which is similar to classical Arabic, and a 'colloquial variety' (Ammiyya) which is used in everyday communication. The various dialects of Arabic are distinct in that they reflect the social norms that are specific to their speech communities. Thus, looking at the speech act of apology in Iraqi Arabic can reveal fundamental cultural values that may be specific to the Iraqi speech community. Whereas all previous studies have looked at the interaction between non-native speakers and native speakers of English in the form of comparative studies discussing the differences in the performance of speech acts, there is no single study on the performance of Arabic native speakers, and more specifically Iraqis, as far as the speech act of apology is concerned. Moreover, the study will look at the strategies used in a dialect, i.e., Iraqi Arabic. There will also be a sociolinguistic comparison concerning gender differences in the use of apology. The research questions are: 1. What are the strategies frequently used by Iraqis when apologizing? 2. How do Iraqis realize the speech act of apology in terms of the three dimensions of semantic formulas: the order, frequency, and content in each of the three situations? 3. How do Iraqi males and females realize the speech act of apology when the offender is lower, equal, or higher in status to the victim? --- Subjects Fifteen Iraqi males and fifteen females from al-Najaf city, drawn from various sectors, participated in the study. Participants were native speakers of Arabic pooled from one community in Iraq. All participants were natives of Iraq and shared the same regional dialect. Their ages ranged from 25 to 40 years. --- Data Elicitation The primary data collection tool for this study was a Discourse Completion Test (DCT). The DCT consists of three different situations designed to elicit the speech act of apology. The situations were set to be familiar to Iraqi life and culture.
Each situation aims to capture the distinction between the relationships of the participants, i.e., when the speaker is of lower, equal or higher status. Since the study aimed to collect responses that are as close to naturally occurring conversation as possible, it seemed more realistic and valid to ask informants to produce responses in the everyday language they speak, although it is not common to use that variety in writing. Thus, subjects were encouraged to write in the low variety and, to put the informant in the required mood, the situations themselves were written in colloquial Arabic. Respondents mostly responded using the Najafi dialect (Iraqi Arabic). --- Data analysis Data were classified into semantic formulas in terms of the order (sequences), frequency, and content of the semantic formulas. The occurrences of each semantic formula were counted to identify the most frequently used semantic formulas in each item. --- Classification of apology strategies In this research, Sugimoto's (1997) strategies will be used as a model for the data analysis since they are the most comprehensive, although it will be kept in mind that the other strategies surveyed above may be useful for some examples, depending on how the data drive the analysis. These strategies are as follows: 1. Primary strategies are those strategies frequently used by offenders when attempting to apologize, which include: a. statement of remorse (regret), in which the wrongdoer shows that s/he has done something wrong; b. accounts, in which the wrongdoer tells what has happened (taking into consideration the fact that this is highly subjective, depending on the way one tells the story and the role s/he played in it); c. description of damage, in which the wrongdoer describes what changes have been incurred on the object in discussion or the repercussions of a certain deed on others; and d. reparation, in which the wrongdoer tries to repair the damage s/he has incurred on others by offering words that may cause the harm done to be forgotten. 2. Secondary strategies, which include: a. compensation, which differs from reparation in that the wrongdoer offers to replace the damaged object or pay for it; and b. the promise not to repeat the offense, in which the wrongdoer does his/her utmost to assure the injured party that what has taken place will not occur in the future. 3. Seldom used strategies, which include: a. explicit assessment of responsibility, in which the wrongdoer tries to describe his/her role in what has happened and whether or not s/he was responsible; b. contextualization, in which the wrongdoer gives the whole context of the injury and what has happened in order to make the injured party see the whole picture; c. self-castigation, in which the wrongdoer claims his/her responsibility for what has happened and is harsh in his/her rendition of his/her character; and d. gratitude, in which the wrongdoer shows how grateful s/he is that the injured person is even giving him/her the time to speak and finding it in his/her heart to forgive. --- Results and discussion This section presents the results and discussion for the three apology situations. The realization of the speech act of apology by males and females in terms of the three dimensions of semantic formulas (the order, frequency, and content in each of the three situations) will be analyzed.
In addition to that, the realization of the speech act of apology when the offender is lower, equal, or higher in status to the victim will also be examined. --- Semantic formulas Table (1) shows the counts of the main semantic formulas employed by the subjects (males and females) in the three situations. The most distinctive semantic formula used by the females is "compensation" (45). Another distinctive feature is that females placed "statement of remorse" (regret) (30) in the first position of their apologies. On the other hand, males adopt the strategies "explicit assessment of responsibility" (50) and "reparation" (33) more than other strategies. In addition, subjects employed a number of direct and indirect strategies in achieving their apologies. --- Primary strategies a. Statement of remorse (regret): It seems that Iraqis tend to use direct apology, using words translated as 'sorry' and 'apology', to express their regret and remorse. These strategies refer to verbal messages that embody and invoke the speaker's true intentions in terms of their wants, needs and discourse process. This corresponds to Brown and Levinson's (1987) on-record politeness strategy with respect to the precision and clarity of the communicative intention. Besides, one can note that females use friendly vocatives more than males to be more tactful and intimate. b. Accounts: This strategy is used less than the other ones. Some illustrations from the data (transliterated from the Arabic responses) are given as follows: Ilkahrabaa chaanat mu shee fakharbat eljihaaz (The power was bad, so it disabled the device.) Melgeat hadiyya zeana (I didn't find a good present.) Elkitaab chan beedi w-matrat (It rained while the book was in my hand.) This strategy is called 'explanation', 'justification' or 'motivation' by some pragmaticians. The resort to motivations or justifications for issuing speech acts is usually regarded as a sign of politeness (see Brown and Levinson, 1978: 194; Van Dijk, 1977: 215). Similarly, Ferrara (1980: 240) argues that justifications have an essential extra-conditional role, where the subordinate speech act must relate to a state of affairs which counts as an adequate, plausible reason for the performance of the main (component) speech act. In other words, a speech act functions as a condition for appropriately or effectively carrying out a next speech act. This is the main function of 'accounts'. c. Description of damage: This strategy is also used by both males and females less than some other ones. Here, the offender describes the nature of the damage or the wrong deed in general. Here are some instances: Elkitaab shwayya it'ethar (The book was slightly affected.) Eljihaaz bas gaam yit'ekhar bilshughul (The device only became somewhat slow in operation.) It can be seen that the function of this strategy is 'explication'. Explicative speech acts are those acts performed by the speaker to indicate more explicitly the particular speech act s/he is making (Van Dijk, 1980: 61-62). In sum, it can be concluded that speakers may redefine the pragmatic context by becoming more specific or more general with a next speech act. d. Reparation: The data show that males used this strategy to reinforce the degree of apology. Males tend to amend things rather than offer compensation.
Some examples are below: Inshaa'allah asallehlak eljihaaz (God willing, I will mend your device.) Awe'dak aani raah ejallid elkitaab (I (hereby) promise that I'll bind the book.) Indi adawaat etfeed eljihaaz (I have tools which may be useful for the device.) It can be said that many examples of this strategy indicate the use of the direct speech act of 'promise'. This act can be considered a pragmatic realization of 'reparation'. --- Secondary strategies These are two strategies. The analysis shows that 'compensation' (45) is the strategy most used by females. The following examples provide illustration. a. Compensation: Insha'Allah a'awthak bkitaab jideed (God willing, I will compensate you with a new book.) Raah ashtereelak jihaaz gheara (I will buy you another device.) Aani awe'dak bhadiyya kullish hilwa (I promise to bring you a very nice present.) Raah ajeeb jihaaz naw'eeta afdhal (I will bring you a device of better quality.) In this strategy, it is found that the speech act of 'promise' is also one of the realizations. In addition, the use of 'Insha'Allah' reflects the cultural context of Iraqi conventions as a mirror of the Islamic community. b. The promise not to repeat the offense: This strategy conveys that apology can be achieved by 'promise'. It is used by females more than males, which may mean that women consider it prestige-preserving or a sign of politeness. The following examples illustrate the situation: Halshi ba'ad mayitkarrar (This will not happen again.) Elsaar ba'ad maiseer (What happened will not be repeated.) A'akkidlak haathi aakhir marra (I assure you that this will not happen again.) --- Seldom used strategies These are the least used strategies by both males and females. a. Explicit assessment of responsibility: This strategy can also be said to function as an 'explanation' or 'justification'. Tab'an maa chaan qasdi (Of course, I didn't do it on purpose.) Issabab mu minni (It is not my fault.) Mu aani illi attal eljihaaz (It was not me who disabled the device.) b. Contextualization: This is another type of 'justification'. It gives the physical context of the offense in order to mitigate the situation and support the apology. We notice that the speaker uses the speech act of 'telling', which has the following features (key: S=speaker, H=hearer): 1. S wishes H to gain information about him/herself and thus create or cement a social bond between self and H. 2. In performing a 'telling', S may be held to assume that H may be interested to gain the acquaintance of or further familiarity with his/her person. 3. S believes that H cannot be expected to know whether the information is true or not. When H thinks that what is told is false, s/he will regard S as a 'liar', not 'ignorant' or 'misinformed'. 'Telling' can be sub-categorized into other speech acts such as 'identification' (Edmondson, 1981: 144-45). Akhuya shaghal eljihaaz w-sakat (My brother switched the device on and it went dead.) Ilmahallaat chaanet m'ezla (The shops were closed.)
Ilkitaab chaan bjanitti illi niset-ha bissayyaara (The book was in my bag, which I forgot in the car.) c. Self-castigation: Here, the speech act of 'admitting' is used as a reflection of self-responsibility for the offense. Issuch minni (It is my fault.) Aanissabab bil'atal (I was the cause of the malfunction.) d. Gratitude: It is found from the analysis that speakers use the speech act of 'thanking' as a sign of gratitude. The speech act of 'thanking' has the following characteristics (key: S=speaker, H=hearer): 1. S wishes H to believe that S is in favour of an act A, performed by H as in the interests of S. 2. In 'thank', S may be held to believe that H did A knowingly, and that benefits to S consequent to A were known by H to be involved at the time of his/her doing A. 3. Thanks are clearly H-supportive, and the verbal strategies of performing this illocutionary speech act are explicit (ibid.: 144). Ashkurak li'an qbalit I'itithaari (Thank you for accepting my apology.) Shukran li'an sma'itni (Thanks for listening.) --- Semantic formulas according to apologizer's status Apologies are made up of different selections from these formulas in accordance with the status and power relationship between speaker and hearer. In apologizing to someone with lower status, Iraqi males who are of higher status do not use direct apology or remorse (regret); on the contrary, females do use this strategy with lower-status victims. Besides, in apologizing to persons with higher status, Iraqis, both males and females, use more mitigation strategies than in addressing persons with lower status. The role of status in the realization of the speech act is addressed in the third research question: 'How do Iraqi males and females realize the speech act of apology when the offender is lower, equal, or higher in status to the victim?'. According to Table (2), it can be seen that when the apologizer is higher in status than the victim, males tend to use reparation (8), statement of remorse (regret) (4), accounts (2), and description of damage (2). There is no resort to explicit assessment of responsibility, self-castigation, or gratitude. The case of the females is similar to the males', but the former use statement of remorse (regret) (10) more than males. In addition, females use compensation (15) instead of the males' reparation. This reflects the fact that females take more care of their children's emotional side. When the apologizer is of equal status with the victim, the analysis shows that males use reparation (12), statement of remorse (regret) (6), description of damage (2), compensation (2), the promise not to repeat the offense (2) and gratitude (2). Females tend to use statement of remorse (regret) (10), compensation (10), reparation (5), the promise not to repeat the offense (3), and accounts (2). It seems that females use compensation more than males when the victim is of equal status. In addition, the two groups are similar in the absence of explicit assessment of responsibility and self-castigation in their apologies. Moreover, the extensive use of statement of remorse (regret) (10) by females reflects their attention to this polite sign more than males. Finally, when the apologizer is lower in status than the victim, males are found to use reparation (13), statement of remorse (regret) (10), compensation (7), accounts (3), description of damage (3), the promise not to repeat the offense (3) and self-castigation (0). On the other side, females use compensation (20), statement of remorse (regret) (10), accounts (4), reparation (4), the promise not to repeat the offense (4), explicit assessment of responsibility (3), self-castigation (2), gratitude (2) and contextualization (1). These results show that Iraqi females use similar apology strategies with victims of all statuses. By contrast, males use more strategies with victims of higher status, such as reparation, compensation and direct statement of remorse. --- Sequence of the Semantic formulas Throughout the analysis, it is found that the most repeated sequence in apology is as follows: statement of remorse (regret) + reparation + compensation. This means that this speech act set, or semantic formula, is the one most used among both males and females. --- Conclusions It can be concluded that both Iraqi males and females have been tactful with the victim in apology situations, but females try to be more tactful by insisting on the strategy of compensation rather than reparation. Besides, one can note that females use friendly vocatives more than males to be more tactful and intimate. In addition, females tend to use the same strategy level or type even though victims belong to various social statuses. On the contrary, males have been more prestige-conscious and rank-conscious; they rely on different strategies according to each status of the victims. Therefore, males can be said to be selective according to the tenor of the situation. Concerning the semantic formula, which can be called a 'pragmatic collocation' since certain speech acts tend to be used together in certain situations, it seems to be generalized to both males and females. This formula reflects a great deal of care for the explicit use of regret and the additional use of supportive strategies of justification and explanation, such as compensation and reparation.
Social distancing is considered one of the most effective preventive techniques for combating a pandemic like Covid-19. Yet in many places these norms and conditions have been violated by much of the public, even though the corresponding notifications have been issued by local governments. To date, there has been no proper structure for monitoring individuals' compliance with social-distancing norms. This research proposes an optimized deep learning-based model for predicting social distancing at public places. The research implements a customized model using detectron2 and intersection over union (IOU) on input video objects and predicts whether proper social-distancing norms are maintained by individuals. Extensive trials were conducted with popular state-of-the-art object detection models: regions with convolutional neural networks (RCNN) with detectron2 and fast RCNN, RCNN with the TWILIO communication platform, YOLOv3 with TL, fast RCNN with YOLOv4, and fast RCNN with YOLOv2. Among all of these, the proposed model (RCNN with detectron2 and fast RCNN) delivers the most efficient performance in terms of precision, mean average precision (mAP), total loss (TL) and training time (TT). The proposed model uses faster R-CNN for the social-distancing norms and detectron2 for identifying the human 'person' class towards estimating and evaluating the violation-threat criteria, where a threshold of 0.75 is applied. The model attained a precision of approximately 98% (97.9%) with an 87% recall score, where the intersection over union (IOU) threshold was 0.5.
Introduction With the first and second waves of Covid-19 hitting the globe and affecting nearly 180 countries, governments and health fraternities have realized that social distancing is the only way to prevent the spread of the disease and break the chain of infections. However, there are cases in countries such as India, France, Russia, and Italy where people either are heavily concentrated or do not adhere to preventive techniques like social distancing at crowded places [1]. Social distancing is a healthy practice or preventive technique which evidently provides protection from transmission of the Corona (Covid-19) virus when a minimum distance of 6 feet is maintained between two or more people [2]. Social distancing does not always have to be a personal preventive technique; rather, it can also be a practice adhered to in order to reduce physical contact and thus the transmission of disease from a virus-affected person [3,4]. Deep learning, as in every other field, has been found to be a significant technology in addressing this problem [5]. Manually monitoring, managing and maintaining distances between people in crowded environments such as colleges, schools, shopping marts and malls, universities, airports, hospitals and healthcare centers, parks, restaurants and other places would evidently be impractical, and hence adopting machine learning, AI and deep learning techniques for automatic detection of social distancing is essential. Object detection with RCNN in machine learning-based models has been adopted by researchers since it offers faster detection options, where faster RCNN is found to be more advantageous, supports faster convergence and also provides higher performance [6], even under low-light conditions. In regions of America, Europe and South-East Asia, due to poor social distancing and improper measures against Covid-19, it has been confirmed that in 2020 many cases were legally recorded where people died following violations of social-distancing thresholds during Covid-19 [4]. Social distancing has been adopted as a concept by researchers so far to examine varied factors, namely with/without face mask, face recognition, object (human) identification, people monitoring and disease monitoring during Covid-19 [7]. Deep learning and machine learning-based AI models overcome constraints such as time consumption and labor intensity, where human intervention is otherwise a must in research, and also obtain results/outcomes that have minimal errors and loss, shortening the period of investigation from years and months into several days [8]. Majorly, R-CNN (region-based CNN) and faster R-CNN are utilized for accurate and faster predictions. To measure the predicted outcomes, examiners generally utilize metric-evaluation approaches/techniques such as regressors (Random-Forest-Regressor, Linear Regressor), IOU (intersection over union) and more. For object detection, IOU is commonly adopted. This research has developed a customized deep learning model with detectron2 and intersection over union that predicts whether social distance is maintained by individuals, especially in crowded and public places. The studies by the authors Vinitha and Velantina [9] focused on the prediction of social distancing post-Covid-19, where deep learning and machine learning were adopted for model development.
The studies utilized Python for designing the model algorithm and networking. Pandian [10] utilized TWILIO, and the authors Vinitha and Velantina [9] utilized YOLOv3 for their architecture. The studies used faster R-CNN, found the architecture to be flexible and faster than other approaches, and concluded that efficient and effective object detection is attained with the YOLOv3 and faster R-CNN model. Rezaei and Azarmi [11] developed a YOLOv4-based ML model with COCO datasets, where a DNN (deep neural network) is utilized for the architecture. The model is widely recognized, having attained 99.8% accuracy with a speed of 24.1 fps in real-time analysis. The model was developed to examine the social distancing of people post-Covid-19 and for infection assessment. The study is still identified as a popular model-based investigation, with accuracy higher than existing models. Saponara et al. [12] developed an AI-based social distancing and people-detecting model post-Covid-19 which measures the distances between two or more people. The authors adopted YOLOv2 with fast R-CNN for detecting the objects, which here were humans. The study's algorithm and architecture were developed solely to identify social distances; the outcome was an accuracy of 95.6% with precision of 95% and a recall rate of 96%. Arya et al. [13] examined different measures for monitoring social distancing via computer vision, as an extensive analysis of the existing literature. The study focused on security-threat identification and facial expression-based analyses and models that adopted deep learning and computer vision using real-time datasets with video streams. The authors found YOLO to be effective in detection models among other AI-based models and concluded that two-stage detectors are far better and more efficient than single-stage detectors, providing effective outcomes and reliable results that remain valid and constant in similar surroundings. Hence, adopting two-stage object detectors is wiser, more efficient, more effective and more accurate. Yang et al. [14] investigated the social-distancing concept post-Covid-19 by developing a vision-based, critical-density-based detection system. They developed the model based on two major criteria: first, to identify violations of social distancing via real-time vision-based monitoring and communicate them to the deep learning model; second, to offer precautionary measures as audio-visual cues through the model, to minimize the violation threshold to 0.0 without manual supervision, thus reducing threats and increasing social distancing. The study adopted YOLOv4 and faster R-CNN, where the average precision (mAP) was highest with the BB-bottom method at 95.36%, with accuracy of 92.80% and a recall score of 95.94%. Though the study provided better outcomes, the targets were initially small and occlusions occurred due to heavy-density accumulation. Finally, the authors were able to train and develop the model on a large crowd with a 2% error in critical density, stating that it could be refined in future research with a better understanding of the targeted people and the density-accumulation algorithm. The conclusion revealed that maintaining social-distancing practices within family groups in crowded areas is very essential; this caused the major issue in the research and could be avoided in future work. Ahmed et al.
[3] and Ahmed et al. [4] developed deep learning-based architectures that use the social-distancing concept as the basis for their evaluation and people monitoring/management post-Covid-19. The authors Ahmed et al. [3] utilized YOLOv3 for identifying humans and faster R-CNN as the social-distancing algorithm, achieving 92% tracking accuracy without transfer learning and 98% with transfer learning; similarly, the model obtained 95% tracking accuracy, indicating that social-distancing detection with YOLO and the tracking transfer learning technique is effective. Punn et al. [2] proposed a study on monitoring Covid-19 social distance with person detection and tracking through fine-tuned YOLOv3 and Deep SORT techniques. The deep learning model is used for task automation of supervising social distance using surveillance video. The proposed structure uses the YOLOv3 object detection model to segregate people from the background and uses Deep SORT methods to track the identified people using assigned identities and bounding-boxes. The Deep SORT tracking method with the YOLOv3 scheme shows good outcomes, with balanced FPS and mAP scores, for supervising real-time social distance among people. The main aim of the Rahim et al. [6] study is to offer an efficient monitoring solution for social distance in low-light surroundings in pandemic circumstances. The emerging Covid-19 disease caused by the SARS-CoV-2 virus has produced a worldwide crisis with its deadly spread all over the globe. People find ways to come out of their homes with their families during nighttime to take fresh air. In such circumstances, it is essential to take efficient steps to supervise safety-distance criteria in order to avoid positive cases and manage the death toll. In that research, a deep learning-based method is proposed which uses the YOLOv4 structure for measuring social distance and real-time object detection with a single motionless ToF (time of flight) camera. Through these reviews, it is evident that YOLO with CNN, R-CNN and faster R-CNN have been the approaches most utilized in recent years for social distancing-based object detection models. The objectives of this proposed research work are: • To implement "detectron2" and IOU in a machine learning-based model with faster RCNN towards examining and detecting the social distancing between people, especially in crowded areas. • To develop a customized model for social-distance prediction through video object detection. The overall organization of the paper is as follows: Section 1 describes the introduction and research objectives, Sect. 2 explains the research methodology and algorithm implementation in the proposed work, the experimental evaluation and analysis are presented in Sect. 3, followed by the conclusion and future recommendations in Sect. 4. --- Research Methodology Following the input image from the COCO dataset, the next step is to apply pre-processing and train the dataset by filtering and annotating specifically for the person category only. Then, the model is validated and mAP scores are evaluated to check performance. For testing the dataset using the intersection over union (IOU) method, social distancing between people is detected using the overlapping (intersection) area. If the value of IOU is non-zero, it can be said that the people are not at proper social distance from each other. Hence, social distancing among people can be detected.
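To make this decision rule concrete, the following minimal Python sketch (our own illustration, not code from the paper; the function names are ours) computes the IOU of two axis-aligned boxes in (x1, y1, x2, y2) format and flags a violation whenever the two person boxes overlap at all:

```python
def iou(box_a, box_b):
    """Intersection over union of two (x1, y1, x2, y2) boxes."""
    # Corners of the intersection rectangle.
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def violates_distancing(box_a, box_b):
    # The paper's rule: any non-zero overlap between two detected
    # person boxes is treated as a social-distancing violation.
    return iou(box_a, box_b) > 0
```

For example, iou((0, 0, 10, 10), (5, 5, 15, 15)) returns 25/175, roughly 0.143, so this pair would be flagged as a violation.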
Figure 1 represents the graphical form of the proposed research steps. 95 categories from the MS-COCO dataset library have been created, and only one class, "person", is assigned as the "head" class to identify people and categorize them with bounding-boxes through people detection. Although detectron and detectron2 show no huge gaps in their function and ability in evaluating, processing and training datasets, the more advanced and modular detectron2 is identified to be extensible, highly flexible, and also provides rapid training on both single- and multi-GPU servers. Hence, for object detection and training datasets, researchers have largely depended on the faster R-CNN network with YOLO and, similarly, on MS-COCO along with PASCAL-VOC datasets rather than other applications [2]; however, Wen et al. [15] argued that, compared to YOLOv4 models, detectron models and datasets are not satisfying and not the best configuration for huge datasets, but may assist researchers with custom datasets and object detection. The mAP score used for validation is computed per class as

mAP = (1 / |classes|) * Σ_{c ∈ classes} |TP_c| / (|FP_c| + |TP_c|)    (1)

The existing and most commonly utilized datasets and applications were thoroughly studied and examined, and thus in this research "detectron2" with the MS-COCO dataset was adopted, since the dataset is customizable and the model is layered with a 'Faster R-CNN ResNet-50 and ResNet-101' based CNN which provides higher accuracy, precision and speed with minimal loss. Detectron2 is an advanced machine learning-based software library adopted by researchers for detecting objects, with more than 90 labels in its library and pre-defined utilities that can be installed; it currently works with GPU only. Once detectron2 has been installed and the dataset registered in the "dataset catalogue", users can train their model and modify code and inference based on the dataset configurations to evaluate scripts in order to integrate them with the final end-product. Currently, PyTorch allows users to implement detectron2, which originated as a ground-up rewrite of detectron initially identified with the "maskrcnn-benchmark" codebase. The researcher intends to adopt the IOU metric-evaluation to evaluate the precision rate together with the model's recall rate. In addition, the model is developed with the detectron2 algorithm, which makes customization of data more flexible. In this research, the 95 COCO categories under the detectron2 library that were adopted are given in Table 1 [16], where the different classes and associated objects/items for training the model are presented. Though the existing models are YOLO, RCNN, RFCN and SSD, the lack of detectron2-based algorithms motivated the researcher to develop the model with faster RCNN, where the precision and recall rate can be predicted and evaluated for the accuracy of object detection. Once the data are satisfactory, the approach is applied on the remaining datasets at runtime towards predicting valid social distancing (green box) and invalid social distancing (red box) between two or more persons. --- Proposed Model Architecture Among R-CNN, fast R-CNN and faster R-CNN, the CNN of faster R-CNN is viewed and identified as the fastest; however, all three are similar approaches which identify, detect, compute and classify.
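The paper does not reproduce its implementation code. As a rough sketch of how a COCO-pre-trained faster R-CNN is typically loaded from the detectron2 model zoo and restricted to the 'person' class (index 0 in COCO), under the assumption of detectron2's public API and a placeholder file name frame.jpg, one might write:

```python
import cv2
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor

cfg = get_cfg()
# Faster R-CNN with a ResNet-50 FPN backbone, pre-trained on COCO.
cfg.merge_from_file(model_zoo.get_config_file("COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml")
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.75  # the paper's detection threshold

predictor = DefaultPredictor(cfg)
frame = cv2.imread("frame.jpg")                    # one frame from the input video
instances = predictor(frame)["instances"]
persons = instances[instances.pred_classes == 0]   # keep only the 'person' class
boxes = persons.pred_boxes.tensor.cpu().numpy()    # (x1, y1, x2, y2) per person
```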
Though there are numerous methods and approaches to object detection using machine learning-based deep learning, faster R-CNN with a detectron-based algorithm has proved more effective than previous models [2]. The flow of the developed model is represented in Fig. 1, where the layers of faster R-CNN are explained in detail. --- Algorithm Used for Social Distancing In this research, the focus is on social distancing through image segmentation and object detection, and thus the researcher was motivated to adopt "detectron2" as the object detection technique, unlike the common approach in existing research where YOLO is usually adopted and compared with RCNN and FCN path-based models. Hence, faster R-CNN with detectron2 as the base for the model is implemented through the following algorithms: --- Faster R-CNN Algorithm The faster R-CNN for social distancing and object detection is carried out through the following algorithm: Step 1. Initially, the repository for the faster R-CNN implementation is cloned; Step 2. The folders (training datasets and testing datasets) along with the training file (.csv) are loaded into the cloned repository; Step 3. Next, the .csv file is converted into a .txt file with a new data-frame, and the model is then trained using the train_frcnn.py file in Python Keras; Step 4. Finally, the outcomes are the predicted images with detected objects as per the norms in the code, and the results are saved in a separate folder as test images with bounding-boxes. The architecture in this model (Fig. 2) is developed with faster R-CNN as the model for transfer learning. The reasons for adopting faster R-CNN as the backbone of the developed model are its accuracy, precision and speed. In the developed model, the architecture includes five convolutional layers of 128, 256, 512, 1024, and 2048 channels, eight layers with ReLU activation, and a 3×3 convolution kernel size. --- Detectron Algorithm for Object Detection and Social Distancing The algorithm for the developed social-distancing evaluation through object detection with detectron2 and faster R-CNN-50 and 101 is designed as follows: Step 1. First, the images from the COCO dataset are loaded and accessed as inputs for the developed model; Step 2. Initially, the images are passed through the ConvNets of faster R-CNN-50 and 101; Step 3. The model-zoo files from faster R-CNN-50 are merged, the sample dataset is trained and training is initialized; Step 4. An LR schedule with a good outcome rate is chosen; here 300 iterations are predicted to be enough for the sample dataset; Step 5. Next, the focus is on maintaining the learning rate by preventing premature learning-rate decay; Step 6. The ROI-head size is set to 512 for the training datasets, only the 'person class' label is selected, and the metric is obtained; Step 7. Once the datasets are trained, the model is tested with the remaining images, with bounding-boxes identifying a 'valid person' under the class; Step 8. The same sample evaluation process is repeated in testing: if the identified person class satisfies the norms for the social-distancing criteria, a 'green bounding-box' is applied to the image; otherwise, a 'red bounding-box' is applied, and the result is obtained and stored in a file under the person class.
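As a sketch of how the training choices above (ROI-head size 512, a single 'person' class, SGD with the staged learning-rate schedule detailed in the next subsection) might map onto detectron2's configuration, assuming a custom dataset registered under the hypothetical name "sd_train":

```python
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.engine import DefaultTrainer

cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file("COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml"))
cfg.DATASETS.TRAIN = ("sd_train",)  # hypothetical registered dataset name
cfg.DATASETS.TEST = ()
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml")
cfg.MODEL.ROI_HEADS.BATCH_SIZE_PER_IMAGE = 512  # ROI-head size used in the paper
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 1             # 'person' only
cfg.SOLVER.BASE_LR = 0.001                      # initial learning rate
# detectron2's default SGD solver multiplies the LR by GAMMA at each step
# below, approximating the paper's 0.001 -> 0.0001 -> 0.00001 -> 0.000001
# schedule (30 k + 70 k + 40 k + 20 k = 160 k iterations in total).
cfg.SOLVER.GAMMA = 0.1
cfg.SOLVER.STEPS = (30_000, 100_000, 140_000)
cfg.SOLVER.MAX_ITER = 160_000

trainer = DefaultTrainer(cfg)
trainer.resume_or_load(resume=False)
trainer.train()
```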
--- Training the Model with Accumulated Datasets The detectron2 model training is done by adhering to the threshold value and norms in the following steps: • Initially, the LR (learning rate) is set at 0.001 for the first 30 k iterations, and then the LR is decreased to 0.0001 for the following 70 k iterations; • Next, for about 40 k iterations, the LR is set at 0.00001, and then for the last round of 20 k iterations, the LR is set at 0.000001; • The optimizer utilized here is SGD (stochastic gradient descent). --- Procedure for Testing the Accumulated Datasets for Social Distancing Initially, the bounding-boxes predicted by the model post-training are examined against the set threshold (0.75), and boxes with lower scores are disregarded. Each pair of bounding-boxes is then evaluated with the IOU metric at 0.5. According to the IOU scores, if IOU > 0, there is no social distancing between the two people and they are bounded with a red box; if IOU = 0, the people are bounded with a green box. Thus, the algorithm, model and bounding-box norms for social distancing are estimated/predicted and compared with the outcomes of the developed model. --- Results and Performance Evaluation --- Random Sample Dataset A random sample dataset has been considered for implementation. The datasets for training the model are selected randomly (refer to Fig. 3a, b) to assess reliability and accuracy and to compare the obtained outcomes with the estimated outcomes. --- Ground Truth Versus Predicted by Model The following outcomes (predictions) are the results obtained from the tested model, where the ground truth is compared and weighed against the model's predictions based on the developed algorithm and architecture. The results from the trained detectron2 with faster R-CNN model are: --- Classification of Individual Person The classification of an individual person is explained as follows. Figure 4a represents the ground truth of the identified person in the picture, versus Fig. 4b representing the prediction made by the model by identifying the person class only. It can be inferred that accuracy and precision are similar across the outcomes. However, the study is focused on social distancing, with a green bounding-box for correct distance and a red bounding-box for incorrect distancing between people. Since there is no other person involved in the picture, the algorithm identified the individual with a green bounding-box, indicating that the norms of social distancing are satisfied by the 'person'. --- Classification of Group of People Versus Objects The picture is of mass people, where the ground truth identified 8 individual people in Fig. 5a with the person class, and the predicted model provided outcomes with the same head-count of 8 people; it also identified that there is no social distancing between the people, and thus the result in Fig. 5b is obtained with red bounding-boxes. --- Classification of People and Animals Figure 6a represents the ground truth of the identified persons in the picture, versus Fig. 6b representing the prediction made by the model by identifying the person class only. The class 'person' is identified from the rest of the categories and evaluated for social distancing. According to the developed algorithm, only people are identified for social-distancing evaluation, and the predicted outcome is found to be negative, where there is no social distancing between the people identified in the input image.
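The red/green rule illustrated in Figs. 4-6 can be sketched by combining the person boxes from the detector with the iou helper defined earlier; the drawing below uses standard OpenCV calls (our own illustration, not the paper's code):

```python
import cv2
from itertools import combinations

def annotate_frame(frame, boxes):
    """Draw red boxes on overlapping person pairs, green boxes on the rest."""
    violators = set()
    for (i, a), (j, b) in combinations(enumerate(boxes), 2):
        if iou(a, b) > 0:  # the paper's rule: any overlap is a violation
            violators.update((i, j))
    for i, (x1, y1, x2, y2) in enumerate(boxes):
        color = (0, 0, 255) if i in violators else (0, 255, 0)  # BGR red/green
        cv2.rectangle(frame, (int(x1), int(y1)), (int(x2), int(y2)), color, 2)
    return frame
```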
--- Classification of People from Other Categories (See Fig. 7.) --- Graphical Outcomes Based on the trained and tested model and the estimated and obtained outcomes, the time, data time, total loss and eta-seconds of the developed model have been evaluated in Python and represented graphically in Fig. 8. The losses have been calculated by comparing the expected outcomes of the proposed model with the actual outcomes during the testing phase. Considering the number of iterations and the percentage of losses, the graph has been plotted. It is observed that the percentage of losses decreased from 0.75 to 0.3 with variation in the number of iterations, which indicates the effectiveness of the proposed model. It is inferred from Fig. 9a-c that no huge variations are identified between the results obtained from the model; similarly, the obtained total loss is low at 0.15, close to the estimated outcome. Figure 9a represents the loss_rpn_loc (region proposal network localization loss) response. There are no huge variations in loss for the class in pooling; at the 1.8 kth iteration, the loss decreased from 0.2 to 0.05, indicating that the model is accurate and precise in detecting objects and effectively measures social distancing. Similarly, in loss_rpn_cls (Fig. 9b), the loss parameters decay exponentially from approximately 0.03 to 0.01 as the number of iterations increases, which signifies the good outcomes of the proposed model. Again, Fig. 9c represents the loss from the regression of bounding-box outcomes. The results evidently indicate that regions are overlapped, and thus NMS (non-maximum suppression) is used to minimize the number of proposals. Therefore, the total loss of the developed model is minimized from 0.34 to 0.15. The outcomes from Fig. 10a-c denote that the faster R-CNN accuracy is achieved at 98%, where the foreground accuracy is achieved at 10% and false negatives at 0.1%, concluding that the model is accurate and precise in detecting objects and measuring social distancing, with an outcome higher than the estimated/predicted value of 75% (threshold value). Figure 10a shows that there are no remarkable variations between the exact iteration value and the moving average value in the actual outcomes; from the 600th to the 1.8 kth iteration, the value remains steady at a loss of 0.2, which is a good indication of the system's performance. With the help of the fast RCNN model, the foreground (FG) class accuracy has been computed and the response is presented in Fig. 10b. The conclusion from the response is that as the number of iterations increases, the FG class accuracy also increases and maintains a steady response from the 1.6 kth iteration. The foreground class accuracy (Fig. 10c) was calculated in the fast RCNN model: initially, the class accuracy dropped from 0.4 to 0 at the 200th iteration, and then the accuracy increased again with increasing iterations. A steady response is spotted from the 1.4 kth iteration, which is approximately equal to 0.8, or 80%. --- Scheduling LR The learning rate is plotted at 4 k intervals; the developed model attained a successful learning rate at 1000 k and remained the same until 23 k iterations, indicating that there is no sudden drop in the learning rate; rather, up to 70 k iterations the LR steadily decreased and remained constant from 1000 k.
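The loss and learning-rate curves discussed above correspond to quantities that detectron2 logs during training (total_loss, loss_rpn_cls, loss_rpn_loc, lr, data_time and eta_seconds, among others). A minimal sketch of how such curves can be reproduced from the metrics.json file that DefaultTrainer writes to its output directory (the path output/metrics.json assumes detectron2's default output folder):

```python
import json
import matplotlib.pyplot as plt

# detectron2 appends one JSON object per logging step to metrics.json.
with open("output/metrics.json") as f:
    records = [json.loads(line) for line in f if "total_loss" in line]

iters = [r["iteration"] for r in records]
for key in ("total_loss", "loss_rpn_cls", "loss_rpn_loc"):
    plt.plot(iters, [r[key] for r in records], label=key)
plt.xlabel("iteration")
plt.ylabel("loss")
plt.legend()
plt.show()
```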
--- Scores Post-testing and Training the Model The scores for the developed model after testing and training on the processed dataset have been obtained; through the outcome values, the study concludes that the detectron2 model with the faster RCNN architecture, evaluated with the IOU metric, performs well. The outcome of the trained model confirms a mAP of 84.5%, precision of 97.9%, recall of 87%, and a total loss of 0.1. --- Performance Metrics The object (human) detection in the developed model acquired a recall rate of 87% and precision of 97.9%, which is more than the 75% average, indicating that the model is a success with effective precision, a minimal total loss of 0.1 and mAP at 84.5%, whereas existing models lack precision in human detection for social-distancing threshold-violation measures (refer to Fig. 11). Thus, the researcher examined and evaluated the datasets with the developed detectron2 model, and it is evidently concluded that the developed model is a success and a good fit for object detection-based analysis models and for violation threshold-based applications in object detection and monitoring. In particular, for evaluating the social-distancing criterion, the model is reliable, accurate and precise, and also has a fine recall score (87%) with a better mAP (84.5%) that exceeds the average score of existing models. Figure 11 shows that among the existing models, the developed model with the detectron2 with faster R-CNN architecture acquired a higher precision rate of 97.90% (approximately 98%) than the other models, where: • R-CNN with the TWILIO communication platform architecture attained 96.30%; • Fast R-CNN with YOLOv2 architecture attained 95.60%; • Faster R-CNN with YOLOv4 architecture attained 95.36%; • TL with YOLOv3 architecture attained 86.0%. The investigation particularly aimed at analyzing and developing a better model with an object detection-based machine learning architecture that could attain a higher precision rate than the average metric scores attained by existing models. The study adopted the IOU metric-evaluation, which uses mAP (mean average precision) to examine and evaluate the developed object detection model. Table 2 summarizes the performance of both the proposed model and other existing models in terms of training time (TT), number of iterations (NI), mAP and total loss (TL) during the simulation of the training phase. It is observed that the developed model (RCNN with detectron2 and fast RCNN) architecture acquired a higher precision rate of 97.90% (approximately 98%) than the other models. --- Conclusion and Future Recommendation The study mainly aimed to develop an optimized deep learning approach for the prediction of social distance among individuals in public places in real time, focusing on an object detection-based machine learning architecture that could attain a higher precision rate than the average metric scores attained by existing models. Extensive trials were conducted with popular state-of-the-art object detection models: RCNN with detectron2 and fast RCNN, RCNN with the TWILIO communication platform, YOLOv3 with TL, fast RCNN with YOLOv4, and fast RCNN with YOLOv2. Among all of these, the proposed model (RCNN with detectron2 and fast RCNN) delivers the most efficient performance in terms of precision, mAP, TL and TT.
The outcomes of the proposed model focused on faster R-CNN for the social-distancing norms and detectron2 for identifying the human 'person' class towards estimating and evaluating the violation-threat criteria, where a threshold of 0.75 is applied. The model attained a precision of approximately 98% (97.9%) with an 87% recall score, where the IOU threshold was 0.5. In future, the study may be extended to a similar context using a two-stage detectron instead of a single stage, with fast R-CNN or a DNN as the architecture, to find variations and relationships between the same datasets under different techniques. --- Authors' Contributions The authors contributed equally to this work. --- Funding This work is funded by Al-Mustaqbal University College in Iraq. --- Availability of Data and Materials Simulation software. --- Declarations Conflict of Interest: No competing interests.
Qur'an literacy is limited to the terminology of teaching hijaiyah letters and the rules of reading al-Qur'an with the goal of improving reading (tahsin). Children are taught al-Qur'an from an early age, beginning with the introduction of hijaiyah letters, reading pronunciation, and the law of recitation (rules for reading al-Qur'an). 2 This is based on al-Qur'an's function as a guide for human life, making it
of teaching al-Qur'an to children at an early age. 13 There are children who memorize al-Qur'an (khatam 30 juz). 14 In addition, a number of academic, police, and military scholarships create distinct pathways for memorizers of al-Qur'an. 15 This encouraging phenomenon stands in contrast to the discovery of elderly people who are illiterate in al-Qur'an. 16 The appropriate response is to provide opportunities for the elderly to learn al-Qur'an, beginning with letter recognition and pronunciation, and ultimately resulting in proficient reading. 17 This clearly shows that age is not a barrier to learning, because Islam teaches the principle of lifelong education. Attention to the elderly was specially programmed by Ma'had Abu Ubaidah bin al-Jarrah Medan, one of the non-formal educational institutions that has a vision of spreading Islamic values based on Arabic language education, da'wah and the teaching of al-Qur'an. 18 The activity of reciting al-Qur'an is one of Ma'had Abu Ubaidah's programs in teaching al-Qur'an to the public, from children and adolescents to adults and the elderly. 19 According to Bahtiyar et al., tahsin is a solution for eradicating illiteracy in al-Qur'an. 20 Furthermore, Zuliana et al. mentioned that tahsin activities can be given to individuals of every age because the learning is repetitive and rapid. 21 Through tahsin activities, students are introduced to the letters, taught the pronunciation of the letters of al-Qur'an, and instructed in the rules of tajwid (the law of reading in al-Qur'an). 22 Based on a preliminary study, the leader of Ma'had Abu Ubaidah provided information that the tahsin program offered at all age levels is a form of implementation of the Prophet Muhammad's hadith about "the best man," namely the one who learns and practices al-Qur'an, and of the obligation for every Muslim to recite al-Qur'an even if he stammers (at first). Reciting al-Qur'an without conforming to the vowel marks (harakat) or to the correct pronunciation of letters such as hamzah and 'ain will change the meaning of the reading. Furthermore, reading al-Qur'an also adheres to the reciting rules (tajwid science). 23 This is to enhance one's ability to recite al-Qur'an, such as the rule for reciting nun sukun (or tanwin) meeting hijaiyah letters, where the rule can be izhar, ikhfa, idgham, or iqlab. 24 In tahsin activities, students gain knowledge about the rules for improving the recitation of al-Qur'an. According to Yandi and Harianja, proper tahsin guidance is also provided to the elderly, as teachers (ustaz/ustazah) have control over the reading. 25 Furthermore, Ma'had Abu Ubaidah bin al-Jarrah Medan provides flexibility for the elderly in learning, namely the implementation of online and offline tahsin. The elderly are also provided with audio media assistance and online deposits (recitation submissions) to encourage them to keep reading on a regular basis. In response, Nurmalasari et al. explain that integrating offline and online learning makes it more convenient for students to receive lessons. 26 This certainly gives positive value for the elderly who are eager to learn even though they have entered "old age." Commonly, children are taught al-Qur'an from a young age in an effort to eradicate illiteracy. 27 However, this essential requirement is frequently overlooked for the elderly. In order to prevent Qur'anic illiteracy, non-formal educational institutions are accountable for assisting the elderly in learning al-Qur'an.
28 Non-formal educational institutions also include study groups, taklim gatherings, centers for teaching and learning activities, and recitations for parents. 29 Indeed, academic studies on the recitation of al-Qur'an as a means of eradicating illiteracy have been conducted from various scientific points of view, including the planting of religious values (character) through tahfiz and tahsin al-Qur'an programs for children, with good indicators of success in reading al-Qur'an and memorizing at least three sections (juz) of al-Qur'an, and the strengthening of al-Qur'an literacy and the living Qur'an in society (children, adolescents, adults, and the elderly). This study uses a qualitative approach with a descriptive study method. Data were collected through in-depth interviews with key informants (leaders, staff, tahsin teachers, and the elderly at Ma'had Abu Ubaidah bin Al-Jarrah Medan), participant observation, and study of relevant documents, refined by data triangulation. The research site is located at Jl. Kutilang Number 22, Sei Sekambing B, Medan Sunggal. The basic reason for choosing this site is that the ma'had facilitates learning tahsin al-Qur'an at all age levels, especially for the elderly, who are the focus of the research theme. The task and role of the researchers as the key instrument was to observe, carefully and routinely, the learning activities of tahsin recitations of al-Qur'an in the ma'had. The researchers then summarized all descriptions of tahsin activities, both online and offline, and reduced them according to the needs of the research data, presented systematically (according to the conventions of scientific writing). 42 Finally, the researchers ensured the data's validity by assessing its credibility, transferability, dependability, and confirmability, beginning with extending the duration of observation of tahsin activities, persistent continuous observation, and data triangulation. --- B. Learning System for Tahsin Recitation of Al-Qur'an at Ma'had Abu Ubaidah bin Al-Jarrah Medan Tahsin recitation of al-Qur'an is one of the flagship programs at Ma'had Abu Ubaidah bin Al-Jarrah Medan. In practice, tahsin activities are implemented in two distinct ways: face-to-face guidance and e-learning tahsin. Face-to-face learning is conducted exclusively, and to a limited extent, in classrooms, where students receive direct instruction and guidance from teachers who hold a qiro'ah transmission chain. Face-to-face guidance runs for four months, or sixteen meetings (one meeting per week). This activity is distinctive in that its age range extends to the elderly. According to Izzah and Hidayatulloh, face-to-face tahsin learning allows the elderly to focus and to hear the teachers' pronunciation firsthand. 43 In line with this, Pangestu explains that face-to-face tahsin guidance makes it easier for teachers to correct reading errors among the elderly, so that an explanation of pronunciation errors in the reading of al-Qur'an can be given immediately. 44 In another context, Hanafi explains that the ideal and effective tahsin activity is done face-to-face, because internet network constraints may interfere with hearing the elderly's reading pronunciation if it is done online via a smartphone.
45 In the context of the elderly, learning tahsin focuses on improving the reading of the hijaiyah letters in accordance with makharijul huruf and tajwid, so that the elderly students at Ma'had Abu Ubaidah can read al-Qur'an correctly. In accordance with this viewpoint, Ustazah Masyitoh Oktaviani, Lc., stated: "... that's right, ma'am, we carry out tahsin activities face-to-face and online. The face-to-face sessions are intended to introduce the letters correctly to the elderly. In addition, the elderly also directly practice reading the hijaiyah letters one by one, so that reading errors can be corrected from the basic level; in this way we hope to minimize reading errors and illiteracy of the Qur'an among the elderly." 46 In addition to face-to-face instruction, the tahsin e-learning application enables online learning. The platform provided by the ma'had offers videos explaining tahsin material, videos using the talaqqi learning method, tahsin insight quizzes, e-books, and other features that help the elderly access lessons anywhere and anytime. This serves as a form of acceleration, providing convenience for the elderly in the process of perfecting their reading of al-Qur'an. According to Putra, the spread of Islamic knowledge in Indonesia began with the exchange of information via trade, followed by learning interactions at the surau or langgar, with the introduction of the hijaiyah letters as the starting point for studying al-Qur'an. 47 Supporting this, Akhiruddin explains that the introduction of the hijaiyah letters is considered the main capital for Muslims studying al-Qur'an, hadith, and turats books in Arabic. 48 Furqan reinforces this view: the 'tool sciences' of Arabic, such as nahwu and sharaf, likewise use the hijaiyah letters. 49 On this basis, it is understood that the elderly students at Ma'had Abu Ubaidah focus on correctly recognizing the hijaiyah letters, correctly pronouncing them, and understanding the rules of recitation as the foundation for learning al-Qur'an. As quoted from an interview with Ustazah Arifatul Makkiyah: "...The ease of the learning facilities provided by the teachers of Ma'had Abu Ubaidah bin Al-Jarrah Medan plays its own role in fostering learning motivation among the elderly. Why is this so? The elderly are taught about their obligation to learn al-Qur'an, and they are taught that learning al-Qur'an is not difficult; of course there is a solution. Furthermore, we know that the elderly's literacy level in recognizing the hijaiyah letters is still relatively low." 50 In response to the opinion and interview excerpt above, Nurdianto and bin Ismail report survey results demonstrating the elderly's poor literacy in recognizing and correctly pronouncing the hijaiyah letters, particularly letters that are close in pronunciation, such as Jim, Dzal, and Za. 51 Furthermore, Nazih states that the tahsin e-learning activity is a new innovation implemented by Ma'had Abu Ubaidah bin Al-Jarrah Medan to assist the elderly in learning al-Qur'an. 52 According to Musafak, technological sophistication should be used as an interactive learning medium for all people, including the elderly. 53 This gives the elderly "starting capital": the understanding that learning al-Qur'an is not difficult if the will and sincerity are present. Additionally, the ma'had seeks to regulate both face-to-face and online learning.
This is described in several rules relating to the study schedule, applications for permission (reasons for absence), the cleanliness of the learning location, the clothes worn by the elderly during the learning process, and the procedure for questions and answers (discussion) during the learning process. The main goal is to regulate the tahsin learning process, specifically improving the reading of al-Qur'an for the elderly, beginning with recognizing the hijaiyah letters, pronouncing them correctly, and reading al-Qur'an according to the rules of recitation. Thus, it is clear that the ma'had's online and offline learning systems are implemented well and effectively. --- C. Description of the Implementation of Tahsin Recitations of Al-Qur'an --- Preliminary Activities Preliminary activities in tahsin recitations of al-Qur'an at the ma'had are divided into 2 (two), namely seating management (students sit in a semi-circle) and prayer. According to the researchers' analysis, the students (the elderly) take their sitting positions before performing the core activities. This is done so that all students are directly in front of the ustazah (tahsin teachers) and their readings can be clearly heard by the other students. The seating arrangement reflects the neatness and order of the learning system. This is in line with the method used by the ma'had, namely the talaqqi method, in which teachers and students face each other in the learning process. As stated by Ustazah Arifatul Makkiyah (teacher of tahsin): "...the ma'had set the talaqqi method for learning tahsin. We deliberately use this method because it helps with the placement of the elderly's sitting positions, making it easier for students to listen to each other's readings. The talaqqi method is also expected to create a more conducive atmosphere." Furthermore, another tahsin teacher, Ustazah Elfi Zahra Pane, Lc., M.A., said: "...Actually, ma'am, these elderly students are mature; there is no need to tell them to sit in a conducive and orderly manner. However, I intentionally seat them in a semi-circle so that I can easily monitor all the students and watch for things I don't want, such as chatting while studying, falling asleep, or not concentrating. If they don't sit like that, I worry that I will focus on only one student while the others go unmonitored." 55 In response to Figure 1 and the interview excerpts above, Salim explains that the semi-circular sitting position in the tahsin learning process allows for harmonization among students. 56 Furthermore, Shaleh adds that this sitting position is an attempt to foster an intense relationship between students and teachers. 57 It is also a form of classroom management that aims to make the learning environment effective, conducive, and efficient. In addition to taking their sitting positions, students must pray before beginning the main activity. The prayer readings begin with the basmalah (the opening utterance), the hamdalah (praise to Allah swt.), salawat upon the Prophet Muhammad, and a prayer for study. In line with this, Abd Rahman explains that praying is a form of hope for blessings from Allah swt. upon all humanity. 58 In this regard, Vachruddin explains that praying is the most beautiful human request to Allah swt., expressed through words, actions, appreciation, and tears. 59 Furthermore, Akmal explains that every Muslim must pray before beginning any activity, particularly learning to read al-Qur'an.
60 Based on the analysis of the seating management activities, seating is arranged so that when participants (the elderly) read al-Qur'an during tahsin activities they directly face the ustazah (tahsin teacher), while the other students can listen to their friends' readings. Sitting in a semi-circle encourages students to independently correct their own readings, while the ustazah can keep the atmosphere of tahsin learning conducive and running smoothly. Learning al-Qur'an then begins with prayer, in the hope of receiving blessings from Allah swt. --- Core Activities The core activities are divided into 3 (three): the elderly read al-Qur'an face to face with the ustazah, the ustazah directly corrects the students' readings, and reading errors are marked in each elderly student's mutaba'ah book. To begin, the recitation of al-Qur'an takes place with the tahsin teacher and the elderly student facing each other. The ustazah (teacher of tahsin) initiates this activity by allowing each elderly student to read al-Qur'an for a maximum of 8 minutes (adjusted to the total number of elderly present, so that 2 hours is sufficient for all of them). In this process, the ustazah can easily correct the reading of each elderly student. According to Ustazah Arifatul Makkiyah: "...Sitting in a semi-circle places each student directly face to face with me. Sitting opposite each other is done so that the voice of the elderly in reciting the Qur'an can be heard clearly by me. In addition, I can also directly check the lip movements of the students (mothers), to see whether they are in accordance with the rules of makharijul huruf." 61 In line with the interview excerpt above, Ustazah Masyitoh Oktaviani, Lc. said: "...Sitting opposite each other like this makes it easier for me to hear clearly the sounds that come out of the mouth of each reciting mother, and to see firsthand whether the movement of the lips follows the rules of makharijul huruf. If I did tahsin by telephone, it would definitely interfere with my hearing of the pronunciation sounds produced by the students; it could be a network problem or something else." 62 Based on Figure 2, the interview excerpts above, and the explanation of the interview results, it is clear that the positioning established in the preliminary activities allows tahsin teachers to listen to, and at the same time correct, the elderly's reading errors. Suriansyah notes that this method requires teachers and students to face each other in order to identify errors in the makharijul huruf, the nature of the letters, and the character of the letters. 63 Furthermore, Achmad et al. state that this convenience rests on the proximity of the tahsin teacher, who can observe the lip movements of the elderly in pronouncing letters and reading al-Qur'an. 64 In fact, the teacher can invite the elderly one by one to imitate the reading that the tahsin teacher has demonstrated. 65 In more detail, Ustazah Elfi Zahra Pane, Lc., M.A. describes the advantages of the talaqqi method: "...The rules of reading al-Qur'an have existed since the time of the Prophet Muhammad. Nothing may be lacking in the letters, nothing lacking in the vowel marks, nothing lacking in the lengths, so when a mother reads incorrectly, the ustazah immediately corrects her: 'not like that, ma'am, but like this.' As for the rhythm, whether it is
read fast, slow, or medium; and the mad (lengthening), if it is read with two beats, must be uniformly two beats from the beginning to the end." 66 In response to the interview excerpt above, Ezani and Zulkarnain explain that the talaqqi method is rational for the process of eliminating illiteracy in al-Qur'an among the elderly. 67 This is because there is a repeated check and re-check process between tahsin teachers and the elderly in reciting the hijaiyah letters and reading al-Qur'an according to the nature of the letters and the rules (laws) of reading. Thus, the application of the talaqqi method in tahsin activities facilitates reading corrections from teachers to the elderly, because the short distance makes it practical for teachers to point out and correct the readings of the elderly. This convenience is felt not only by tahsin teachers; it also provides comfort for the elderly while reciting al-Qur'an. Furthermore, the correction of lip movements is an important aspect of the core activities (tahsin recitations of al-Qur'an). According to the researchers' observations, there were errors in the elderly's reading that were corrected by the tahsin teacher, for example in reading رَبِّ اغْفِرْ لِي (rabbi ighfir li). Here, the tahsin teacher explained that the letter ب (ba), which carries a kasrah (the vowel mark below the line), must be pronounced with the jaw or mouth lowered so that the vowel sound does not become slanted. The teacher then demonstrated the proper reading technique, which the students imitated. In another case, the ustazah corrected the elderly's reading errors in the pronunciation of يَضْرِبُ (yadribu): the letter ض (dad) carries sukun (no vowel) and is therefore read with restraint, not bounced. The ustazah also explained that the base of the edge of the tongue should be held against the upper molars. Then the ustazah demonstrated the proper reading technique, and the tahsin participants (the elderly) followed. In addition to reading corrections, the ustazah's core activity also includes marking the participants' reading errors in the mutaba'ah book. The mutaba'ah book is a book for evaluating errors that occur when participants read al-Qur'an, whether in the makharijul huruf, the nature of the letters, or the tajwid rules found in the verses of al-Qur'an. The book has a note column in which the ustazah writes down errors that occur when reading al-Qur'an, covering the verses in chapters 28, 29, and 30. In fact, the mutaba'ah book serves as a barometer of students' reading progress as they follow the process of tahsin recitations of al-Qur'an at Ma'had Abu Ubaidah bin al-Jarrah Medan. According to Daniapus et al., the mutaba'ah book serves as a control on the learning activities that students must complete. 68 The ustazah can monitor the reading progress of the elderly on a regular basis as a form of routine evaluation of students' tahsin abilities. Indeed, using the indicators in the mutaba'ah book, the tahsin teacher can determine each student's stage of improvement and success. This is done to keep tahsin participants motivated and able to evaluate their own progress and changes.
69 Based on the description above, it is understood that the core activities of tahsin recitation of al-Qur'an improve students' reading through recitation explanations and examples of mouth movements adapted to the points of articulation of the hijaiyah letters (makharijul huruf) in the classroom learning process. Learning tahsin with the talaqqi method thus makes it easier for teachers to correct the readings of tahsin participants; in turn, the tahsin participants (the elderly) benefit because they can identify reading errors and directly imitate the correct reading from the tahsin teacher. --- Closing Activity The closing activities in the implementation of tahsin recitations of al-Qur'an at Ma'had Abu Ubaidah bin Al-Jarrah Medan are divided into two categories: reinforcement in the form of recitation theory, and a prayer to close the tahsin activities. Reinforcement is given after all tahsin participants (the elderly) have had their turn to read al-Qur'an in front of the ustazah, in an effort to strengthen the theory of recitation. Reading errors, the corrections given, and the tracking of the elderly's reading progress culminate in the tahsin teacher reinforcing tajwid theory. The teacher then teaches how to pronounce the letters correctly in general. As Ustazah Masyitoh Oktaviani, Lc. stated in the interview below: "...I intend to strengthen this recitation theory so that the reading foundation of the elderly mothers is also strong: which reading must be clear in sound, called izhar; which must be hummed; which must be reversed (iqlab). All of it becomes important when practiced directly by the elderly mothers. In addition, some elderly mothers forget the letters of ikhfa or izhar, so the tajwid theory must be strengthened again. It is also necessary to study the nature of the letters, so that the elderly mothers are able to recite fluently according to the rules of reading the Qur'an." 70 Based on the interview excerpt above, it is understood that the strengthening of tajwid theory is intended to ensure that tahsin participants understand the rules of reading and the nature of the letters in pronunciation. As Mursyid explains, the sound of the Arabic dialect must match the movement of the mouth in producing the hijaiyah letters of al-Qur'an. 71 Suriansyah describes this demonstration of how to read as musyafahah: in applying musyafahah, students imitate the mouth or lip movements practiced by the tahsin teacher. 72 According to Priyano, the musyafahah method helps students imitate the reading correctly, because pronunciation is practiced directly in front of the tahsin teacher, who pronounces and sounds the letters correctly. 73 Furthermore, Annuri recommends the use of musyafahah because tahsin participants immediately imitate the lip movements of the teacher. 74 On this basis, readings with the properties of jahr, syiddah, isti'la, ithbaq, and qalqalah can gradually be distinguished by the students (the elderly). Knowledge of recitation theory is therefore very important for the elderly (tahsin participants).
The correct way to read al-Qur'an is preserved in the science of recitation, which began with its revelation to the Prophet Muhammad through the Angel Jibril. Al-Qur'an and its reading rules remain relevant and enduring until the end of time; this is one of the proofs of the miracles of al-Qur'an, whose reading rules do not change with the times. Based on the description above, it is clear that the tahsin teaching team (ustazah) delivers the closing activity in the form of strengthening the theory of recitation after all students (the elderly) have had the opportunity to read in turn. Before this, corrections, improvements, and marks of reading progress were given to the elderly. This reinforcement is thus an attempt to give tahsin participants an understanding of the nature of reciting the verses of al-Qur'an and the rules for reading them. Finally, the tahsin activity is closed with the reading of alhamdulillahirabbil'alamin, the kafaratul majelis prayer, and salawat upon the Prophet Muhammad. --- D. Discussion The introduction of the hijaiyah letters to children/students is the initial method used by al-Qur'an teachers. 75 It is intended to enable students to pronounce the letters correctly according to the rules of makharijul huruf and to read al-Qur'an correctly according to the rules of tajwid. Similarly, Ma'had Abu Ubaidah Medan begins the teaching of al-Qur'an with the introduction of the hijaiyah letters and reading pronunciation, and then improves the al-Qur'an reading of the elderly. The special grouping of the elderly is based on the request of several elderly people to learn tahsin al-Qur'an in order to strengthen their reading of al-Qur'an, especially Surah Al-Fatihah, which is always read in the fardu prayers. 76 According to Gumilar, the elderly's knowledge of the hijaiyah letters is limited to knowing which letters are ba, ta, and so on; 77 their pronunciation of the hijaiyah letters, however, does not follow the rules of makharijul huruf. Furthermore, Santoso refers to this phenomenon of limited hijaiyah pronunciation as evidence of the low level of hijaiyah literacy among the elderly. In addition, educational factors have a complex influence on the level of al-Qur'an literacy, spanning educators, methods, strategies, teaching materials, and the learning curriculum. On this basis, Fadillah explains that innovation in learning al-Qur'an continues, resulting in the birth of various Qur'anic learning methods: the dirosah method, the tilawati method, the iqro' method, the baghdadi method, and the bashohi method, including the tahsin method. 82 Hilmy adds that this also underlies the spread of various books on practical methods of studying al-Qur'an, on recognizing and correctly pronouncing the hijaiyah letters, and on strategies for learning al-Qur'an for the elderly; indeed, various components of learning al-Qur'an are now assisted by internet technology. 83 Alongside educational and economic factors, Nurdin argues that social factors have a major influence on the low level of al-Qur'an literacy among the elderly. This is because elderly women who spent their earlier years as housewives often had little desire to study al-Qur'an intensively at the start of family life, preferring to focus on taking care of the family.
84 In addition, Fitrianingsih suggests that another social factor affecting the level of al-Qur'an literacy among the elderly is the sense of shame felt at having to learn al-Qur'an as a teenager or adult, after not having seriously studied the rules of reading al-Qur'an in childhood. 85 These problems are often experienced by the elderly, so personal awareness and encouragement from the surrounding environment are needed to socialize the importance of lifelong education, for education is integral to the sustainability of human life. 86 Through education, various methods of studying al-Qur'an were born, including the dirosah method, the tilawati method, the iqro' method, the baghdadi method, and the bashohi method, as well as the tahsin method. The advantage of the tahsin method for the elderly lies in its focus on improving the quality of reading, because the elderly are familiar with the hijaiyah letters but are not yet precise in reciting al-Qur'an. 87 --- E. Conclusion This research concludes that the tahsin method shares many similarities with other methods; the difference is that its focus in eradicating al-Qur'an illiteracy among the elderly is on improving reading, which makes the method appropriate here. The application of the tahsin method is coherent, beginning with the introduction of the letters according to makharijul huruf and proceeding to reading al-Qur'an according to the rules of tajwid. Furthermore, economic limitations, low levels of education, and social influences are three aspects of life that are frequently the primary reasons for al-Qur'an illiteracy among the elderly. The various factors that hinder learning for the elderly can thus be reduced with a special tahsin program for the elderly, such as that at Ma'had Abu Ubaidah Medan. The increase in elderly participation in the tahsin program suggests growing awareness among Muslims of the need to learn the foundational knowledge of the religion. ---
Based on the stress process model of family caregiving, this study examined subjective stress appraisals and perceived schedule control among men employed in the long-term care industry (workplace-only caregivers) who concurrently occupy unpaid family caregiving roles for children (double-duty child caregivers), older adults (double-duty elder caregivers), and both children and older adults (triple-duty caregivers). Survey responses from 123 men working in nursing home facilities in the U.S. were analyzed using multiple linear regression models. Results indicated that double- and triple-duty caregivers appraised primary stress similarly to workplace-only caregivers. However, several differences emerged with respect to secondary role strains, specifically work-family conflict, emotional exhaustion, and turnover intentions. Schedule control also constituted a stress buffer for double- and triple-duty caregivers, particularly among double-duty elder caregivers. These findings contribute to the scarce literature on double- and triple-duty caregiving men.
Introduction Men constitute a minority in caregiving professions in the U.S., representing only 11% of certified nursing assistants (CNAs), 10% of registered nurses (RNs), and 8% of licensed practical nurses (LPNs) in 2011 (Landivar, 2013; Paraprofessional Healthcare Institute (PHI), 2013). Societal views mirror this demographic profile, as caregiving professions are typically equated with women (O'Connor, 2015). Collectively, these trends depict men as a major untapped resource for prospective healthcare talent (Sherrod, Sherrod, & Rasch, 2005; Rajacich, Kane, Williston, & Cameron, 2013). With a growing workforce shortage and a rising demand for long-term care services underway, the healthcare industry has increased recruitment and retention efforts targeting this resource (American Association of Colleges of Nursing, 2014; Andrews, Stewart, Morgan, & D'Arcy, 2012; Hart, 2005; Landivar, 2013). Consequently, more men are entering caregiving professions (Andrews et al., 2012; Landivar, 2013). However, gender diversification has been a slow process, gender-related barriers have yet to be successfully addressed, and workplace, recruitment, and retention processes require further modification to effectively target men (Rajacich et al., 2013; Sherrod et al., 2005). For instance, some researchers argue that the homogeneous gender composition of caregiving professions preserves outdated and sexist notions, obstructs a contemporary portrayal of such professions, and marginalizes men, all of which may undermine recruitment efforts (Christensen & Knight, 2014; Hart, 2005; Jordal & Heggen, 2015). Similarly, differential treatment from colleagues (e.g., expectations to perform more physically strenuous tasks) and patients (e.g., treatment refusal), suspicion regarding intimate touch and the capacity for caring, experiences of isolation or loneliness, felt difficulty in enacting masculine behavior within a female-dominated profession, and a lack of male mentors may impede the effectiveness of retention strategies (MacWilliams, Schmidt, & Bleich, 2013; O'Connor, 2015; Rajacich et al., 2013). Amidst the present workforce shortage and call for gender diversity, a better understanding of the unique challenges experienced by professional caregiving men is essential for facilitating targeted recruitment and retention strategies. As older adults' proliferating health and long-term care needs strain an under-resourced system, they are concurrently driving an unprecedented need for family caregivers (Reinhard, Feinberg, Choula, & Houser, 2015). As with caregiving professions, women have dominated unpaid family caregiving roles (Reinhard, Houser, & Choula, 2011). Men are increasingly occupying family caregiving roles, though, and currently represent 40% of adults informally caring for dependent family members in the U.S. (National Alliance for Caregiving (NAC) & the American Association of Retired Persons (AARP) Public Policy Institute, 2015). Recent evidence also indicates that men are investing greater time and becoming more involved in their children's lives (Gregory & Milner, 2011; Humberd, Ladge, & Harrington, 2014). Further, men's presence as caregivers of aging relatives is projected to become more prevalent and long-term than ever before (Thompson, 2002). Indeed, prior research suggests that working husbands invest a comparable amount of time in elder care as their employed wives and take on significant elder care responsibilities (Hammer & Neal, 2008).
In response to these trends, researchers have begun to highlight the need for organizations to acknowledge and respect men's work-family interface (Gregory & Milner, 2011; Humberd et al., 2014). An important, but neglected, aspect of men's growing presence in both professional and family caregiving is that, compared to men from earlier cohorts, they may have an increased likelihood of partaking in each type of care simultaneously (combined caregiving). Researchers have traditionally studied the paid, public and unpaid, private caregiving domains separately, thereby producing limited knowledge regarding combined caregiving (Ward-Griffin et al., 2015). Within this literature, double-duty caregiving refers to professional caregivers who informally care for children (double-duty child caregiving) or older adults (double-duty elder caregiving). Triple-duty caregiving pertains to professional caregivers who informally provide sandwiched care, or care for both children and older adults. The few studies considering the convergence of caregiving domains have consistently shown that double- and triple-duty caregivers report various decrements in well-being relative to professional caregivers without family caregiving obligations (referred to as workplace-only caregivers hereafter), including more stress, psychological distress, work-family conflict, physical and mental fatigue, and sleep deprivation (Boumans & Dorant, 2014; DePasquale, Davis, Zarit, Moen, Hammer, & Almeida, 2014; Scott, Hwang, & Rogers, 2006). Nearly all of this research, however, is based solely or predominantly on women. To our knowledge, a foundational qualitative examination by Anjos, Ward-Griffin, and Leipert (2012) of double-duty elder caregiving men's caregiving experiences and personal health is the only study that focuses exclusively on men with combined caregiving roles. Thus, additional information regarding double- and triple-duty caregiving men's well-being is needed. This information, in turn, will illuminate the potential work-family pressures experienced by double- and triple-duty caregiving men, which can then be integrated into the healthcare industry's recruitment and retention strategies. Thus, the objective of the present study was to examine subjective stress and perceived schedule control among men employed as CNAs, RNs, and LPNs in nursing homes in the U.S., half of whom occupy family caregiving roles. This study also partially replicates a previous investigation of double- and triple-duty caregiving women's psychosocial stress from the same population described herein (DePasquale et al., 2014) to descriptively compare the stress of double- and triple-duty caregiving for men and women. --- Conceptual Framework Our investigation is guided by an adaptation of the stress process model of family caregiving (SPM; Pearlin, Mullan, Semple, & Skaff, 1990). The SPM defines stress as the conditions, experiences, and activities that are problematic for family caregivers and distinguishes between primary and secondary stress. Primary stress is directly rooted in caregiving hardships and can be objective (i.e., based on care recipient conditions) or subjective (i.e., based on caregiver experiences). Secondary stress, specifically subjective role strains, originates from caregiving demands but spreads to multiple life domains (e.g., work). In this paper, we focus on men's double- and triple-duty caregiving role occupancy as a predictor of subjective primary and secondary stress, or subjective stress appraisals.
--- Primary Stress We consider one indicator of primary stress, perceived stress. Although workplace-only and double- and triple-duty caregivers are all exposed to professional caregiving stress, the Anjos et al. (2012) investigation highlighted stress specific to family caregiving. For example, double-duty elder caregiving men described familial pressure to have "the right answers" and provide support for a range of health problems, regardless of their expertise, because they were deemed the "health go-to person in the family" (pp. 113, 117). One double-duty elder caregiver likened his experience to "stepping in a minefield" in which he dealt with "a lot more emotional hooks" than at work (pp. 117). Others discussed their felt obligation and familial expectations to continue providing family care despite the stress they experienced. Therefore, double- and triple-duty caregiving may increase men's stress exposure beyond that encountered at work. Alternatively, workplace-only and double- and triple-duty caregiving men's primary stress appraisals may not differ. Dissimilar to caregiving women, men often employ a managerial approach to family caregiving (Thompson, 2002). This caregiving style blends masculine, traditional workplace values such as task-orientation, leadership, authority, control, and self-efficacy with emotional, nurturing care provision (e.g., Calasanti & King, 2007; Russell, 2001; Thomas, 2002). Men emulating a managerial caregiving style typically compartmentalize their family caregiver identity and do not allow it to permeate other roles, thereby reducing caregiving burden (Thompson, 2002). Given that professional caregiving men also use this approach (Cottingham, 2015), it is plausible that this caregiving style is more prevalent among double- and triple-duty caregiving men. Indeed, the Anjos et al. (2012) study found that men typically assumed a managerial position within their familial care network. Double- and triple-duty caregiving men, then, may use caregiving styles that shield them from primary stress. --- Secondary Stress Researchers have only recently begun to consider the work-family interface for men with combined caregiving roles (Anjos et al., 2012). The designated secondary stress indicators of the present study therefore focus on role strains within the major institutions of work and family. Specifically, we examine work-family conflict, work-to-family positive spillover (WFPS), turnover intentions, emotional exhaustion, and job satisfaction. Work-family conflict reflects a bidirectional process in which any role characteristic that affects time, involvement, strain, or behavior within the work domain is capable of producing conflict with the family domain (work-to-family conflict or WFC) and vice versa (family-to-work conflict or FWC; Greenhaus & Beutell, 1985; ten Brummelhuis & Bakker, 2012). Conversely, WFPS occurs when experiences in the work domain improve role performance in the family domain (Hanson, Hammer, & Colton, 2004). Consistent with previous research (DePasquale et al., 2014), we consider WFPS and job satisfaction indicators of strain with respect to the degree to which satisfaction is lacking. In their qualitative study, Anjos et al. (2012) highlighted how men's combined caregiving created secondary role strains.
Family members often expected men to capitalize on their professional status to access workplace resources to benefit family care (e.g., timely appointments), which sometimes led men to act inappropriately at work (e.g., overstepping boundaries). Other unique cross-pressures, dilemmas, and role strains included tension between professional caregiving (e.g., deciding when to provide professional versus emotional support) and family member (e.g., husband) roles, discomfort providing family care, changes in family relationship dynamics, difficulty managing competing demands, and compromised emotional health. Secondary stress appraisals could therefore reflect double- and triple-duty caregiving men's subjective responses to role strains as well as their dissatisfaction with how their workplace addresses their unique work-family needs and alleviates role strains. However, double-duty elder caregiving men also noted that, although family caregiving is sometimes "a frustrating experience, as it can be at work," it is also a "rewarding" endeavor (pp. 117). Rewarding caregiving experiences across work and family domains, then, could offset strains produced from their convergence and generate WFPS and job satisfaction, as hypothesized in the role enhancement literature (Marks, 1977). --- A Potential Moderating Resource According to the SPM (Pearlin et al., 1990), the negative consequences of stress are conditional, in part, on access to resources that modify the effects of stress. In this study, we consider the potential moderating effects of perceived schedule control (referred to as schedule control henceforth). Schedule control is a psychological, time-based work resource that reflects employees' felt ability to determine when they work (Kelly & Moen, 2007) to accommodate their personal needs and capacities (Krausz, Sagie, & Bidermann, 2000). The construct of schedule control has conceptual ties to the job demands-control model, which proposes that work strain and dissatisfaction are more likely in the context of high demands and low control; increasing employees' autonomy and discretion over the work environment is thus considered key for work performance, health and well-being, and coping with job demands (Karasek, 1979). Schedule control constitutes a complementary extension of this model by focusing on when, rather than how, work is done (Kelly & Moen, 2007). Researchers have theorized that schedule control may counteract time pressures as well as enhance health, well-being, and productivity (Kelly & Moen, 2007) through time-regulation and recovery-regulation processes (Nijp, Beckers, Geurts, Tucker, & Kompier, 2012). The time-regulation mechanism implies that schedule control permits employees to manage conflicting work and family time demands, thereby reducing work-family conflict. The recovery-regulation mechanism views schedule control as a key factor in preventing work overload, preserving a favorable effort-recovery balance, and stimulating work performance by allowing employees to modify work time to facilitate recovery opportunities. Among professional caregivers, schedule control is positively associated with organizational commitment and job satisfaction, and negatively related to exhaustion, turnover intentions, and risk of psychological distress (Choi, Jameson, Brekke, Anderson, & Podratz, 1989; Hurtado, Glymour, Berkman, Hashimoto, Reme, & Sorensen, 2015; Krausz et al., 2000).
Schedule control may be particularly relevant for men working in nursing homes given that these facilities offer 24-hour care that is dependent on shift work, meaning that they may work outside traditional morning-to-afternoon hours. Although there are benefits and some employees prefer shift work, this non-standard work schedule can adversely affect physical, mental, and social well-being and presents challenges such as constant changes in lifestyle habits (Blachowiz & Letizia, 2006; Vogel, Braungardt, Meyer, & Schneider, 2012). Shift work also creates additional challenges for double-duty child and triple-duty caregivers, like negotiating and allocating work and family time on a tight schedule and managing conflicts with children's schedules or needs (Maher, Lindsay, & Bardoel, 2010). Prior research suggests, however, that schedule control is a critical factor in determining whether shift work is disruptive or harmful for employees (Fenwick & Tausig, 2001). Further, researchers have found that schedule control matters a great deal for employees' family and health outcomes, regardless of schedule type, and is more salient for employees' psychological responses to work than objective work conditions, such as actual work schedule and workload (Fenwick & Tausig, 2001; Krausz et al., 2000; Seashore & Taber, 1975). Additionally, previous qualitative findings imply that schedule control would be beneficial for professional caregivers with family caregiving obligations (Maher et al., 2010). Therefore, schedule control may constitute a valuable resource in double- and triple-duty caregiving men's stress process. --- Research Questions Research on double- and triple-duty caregiving men remains largely uncharted territory. By examining subjective stress and schedule control exclusively among professional caregiving men, we aim to address a critical gap in existing research and advance understanding of the stress experienced by double- and triple-duty caregiving men relative to their workplace-only caregiving counterparts. Specifically, we pose the following research questions: RQ1) How do double- and triple-duty caregivers differ from workplace-only caregivers in their subjective stress appraisals? RQ2) Does schedule control constitute a workplace resource for double- and triple-duty caregivers? --- Methods This study is based on data from the Work, Family and Health Study (WFHS). The WFHS is part of a large research network effort to understand how workplace practices and policies affect work, family, and health outcomes among employees working in the long-term care industry. The WFHS was approved by several institutional review boards, and a detailed description of its protocol and design can be found in Bray et al. (2013). --- Sample Employees were recruited from 30 nursing home facilities throughout New England that were owned by the same long-term health and specialized care company. Eligible employees worked at least 22.5 hours per week in direct care on day or evening shifts. Of 1,783 eligible employees, 1,524 (85%) participated, 125 of whom were men and comprise the focus of the present study. The gender distribution of WFHS participants (8% male) is consistent with national data on the gender distribution of nursing occupations in 2011 (9% male; Landivar, 2013).
However, WFHS participants reported a lower median annual household income (WFHS: $45,000-49,999; U.S.: $53,482), lower level of educational attainment (WFHS: 24% of persons age 25 or over have a Bachelor's degree or higher; U.S.: 29%), and more racial diversity (WFHS: 51% White, including persons reporting more than one race; U.S.: 64%) when compared to U.S. census data from 2010 to 2014 (U.S. Census Bureau, 2014). --- Procedures Trained field interviewers administered computer-assisted personal interviews at a private location in the workplace on a rolling basis from September of 2009 to July of 2011. Employees provided information about sociodemographics, work experiences, and well-being. Interviews averaged 60 minutes and employees received $20 for their time. --- Concepts and Their Measurement Double- and triple-duty caregiving role occupancy-Consistent with prior research (DePasquale et al., 2014; DePasquale, Bangerter, Williams, & Almeida, 2015; Scott et al., 2006), we categorized employees into mutually exclusive workplace-only and double- and triple-duty caregiving groups. Double-duty child caregivers had children 18 years of age or younger living with them for at least four days per week. Double-duty elder caregivers provided care (i.e., assistance with shopping, medical care, or financial/budget planning) at least three hours per week in the past six months to an adult relative, regardless of residential proximity. Triple-duty caregivers fulfilled both child and elder care criteria. The remaining men were classified as workplace-only caregivers. SPM-Our analysis is based on an adaptation of the SPM (Pearlin et al., 1990), as shown in Figure 1. We incorporate the following three domains of this model: 1) background characteristics and situational context, 2) subjective primary and secondary stress, and 3) moderating resources. --- Background characteristics and situational context: According to the SPM, caregivers' background characteristics and situational context can potentially affect the extent to which they are exposed to stress. In particular, caregivers' ascribed statuses, including age (in years) and race (1=White, 0=other), as well as educational (1=Bachelor's degree or higher, 0=less than Bachelor's degree), occupational (1=CNA, 0=RN or LPN), and economic attainments (annual household income of $39,999 or less, $40,000-54,999, or $55,000 or more per year), are embedded throughout the stress process. We therefore examine these attributes as potential covariates. Additionally, we assess several work context features accounted for in previous double- and triple-duty caregiving studies (Boumans & Dorant, 2014; DePasquale et al., 2014), including average number of hours worked per week, company tenure (in years), and work-related injuries in the past six months (1=yes, 0=no). Given its positive associations with perceived stress and WFC among long-term care workers (DePasquale et al., 2014), we also consider psychological job demands with a three-item measure (e.g., job requires very hard work) from Karasek, Brisson, Kawakami, Houtman, Bongers, & Amick (1998); higher scores reflect more demands (α=.72). Moreover, we examine family context features such as marital status (1=cohabiting or married, 0=single) because partners may provide support at home. We also assess men's dual-earner couple status and the average number of hours partners work per week; unemployed partners may substantially contribute to family caregiving (Hertz, 1997), thereby lessening men's family caregiving duties.
Further, we account for the presence of residential children with a range of health conditions and disabilities (e.g., developmental disabilities; 1=yes, 0=no), as fathers with disabled children report increased stress (Darling, Senatore, & Strachan, 2012). We also examine whether men have non-residential children as a proxy for care or support to these children (DePasquale, Polenick, Davis, Moen, Hammer, & Almeida, 2015b). Subjective stress: Unless stated otherwise, men indicated the extent to which they disagreed or agreed with statements using a five-point response scale ranging from 1 (strongly disagree) to 5 (strongly agree) for all subjective stress measures. Scale scores were computed by calculating the mean of the items, with higher values signifying more of the construct. We measured primary stress with a global, four-item measure of perceived stress (e.g., confident about ability to handle personal problems) pertaining to the last 30 days (Cohen, Kamarck, & Mermelstein, 1983). Responses ranged from very often (1) to never (5). We reverse-coded two items (α=.68). We examined secondary role strains with six measures. We used the WFC and FWC scales from Netemeyer, Boles, and McMurrian (1996). Five items pertained to WFC (e.g., work demands interfere with family/personal time, α=.91) and five items assessed FWC (e.g., family-related strain interferes with job-related duties, α=.84) in the past six months. WFPS was assessed with the four-item affective spillover subscale (e.g., being happy at work facilitates happiness at home, α=.85) from Hanson et al. (2004). Turnover intentions were measured with a two-item scale (e.g., seriously considering quitting company for an alternative employer, α=.80) from Boroff and Lewin (1997). Emotional exhaustion was examined with the three-item (e.g., feel emotionally drained from your work, α=.84) emotional exhaustion subscale from the Maslach Burnout Inventory (Maslach & Jackson, 1986); responses ranged from never (1) to every day (7). Job satisfaction was measured with a three-item (e.g., like working at your job; α=.78) scale reflecting global job satisfaction (Cammann, Fichman, Jenkins, & Klesh, 1983). --- Moderating resource: We examined the potential moderating resource of schedule control with a modified measure from Thomas and Ganster (1995). Employees rated the extent to which eight statements (e.g., control over when vacation or days off are taken) accurately depicted their perceived control over their work hours using a response scale ranging from very little (1) to very much (5). The mean score was 2.71 (SD=.77, range=1-5; α=.61), with higher scores reflecting higher mean schedule control. Analytic Strategy-The analyses presented here focus on a reduced analytic sample of 123 men. Reasons for exclusion included holding an administrative position (n=1) and missing schedule control data (n=1). We first examined background and context characteristics by conducting ANOVAs to identify mean differences between men with and without combined caregiving roles. We used Games-Howell post-hoc tests to account for unequal and small group sizes. We then examined any variables on which the groups differed, as well as child disability, in correlational analyses to detect potential multicollinearity issues and finalize covariate selection.
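To make the measurement and screening steps above concrete, the sketch below shows how scale construction (reverse-coding and item averaging), internal consistency, and the group comparisons could be implemented. This is a minimal editorial illustration in Python, not the study's code; the file name, item names, and the caregiver grouping column are hypothetical placeholders, and which two perceived-stress items are reverse-keyed is assumed for illustration.

```python
# A minimal sketch (not the study's code) of scale construction and
# covariate screening; file, item, and column names are hypothetical.
import pandas as pd
from scipy import stats

df = pd.read_csv("wfhs_men.csv")  # hypothetical analytic file, one row per man

# Perceived stress (Cohen et al., 1983): four items on a 1-5 scale; the two
# positively worded items are reverse-coded before averaging.
stress_items = ["ps1", "ps2", "ps3", "ps4"]
for item in ["ps2", "ps4"]:          # assumed reverse-keyed items
    df[item] = 6 - df[item]          # reverse-code on a 1-5 scale
df["perceived_stress"] = df[stress_items].mean(axis=1)

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / total variance)."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

print(f"alpha = {cronbach_alpha(df[stress_items]):.2f}")

# One-way ANOVA comparing the four caregiving groups on a context variable
# (e.g., psychological job demands); Games-Howell post-hoc tests for the
# unequal, small groups are available in, e.g., the pingouin package.
groups = [g["job_demands"].dropna() for _, g in df.groupby("caregiver_group")]
f_stat, p_val = stats.f_oneway(*groups)
print(f"F = {f_stat:.2f}, p = {p_val:.3f}")

# Correlations among candidate covariates to screen for multicollinearity.
print(df[["job_demands", "married", "dual_earner", "child_disability"]].corr())
```

The alpha function mirrors the standard formula used for the reliabilities reported above; the correlation step corresponds to the screening that led to retaining dual-earner couple status and dropping marital status and child disability.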
Next, given that men were nested within facilities, we calculated an intraclass correlation (ICC) for each dependent variable by fitting empty models that decomposed variance into individual-level (men) and facility-level components. WFC (.11), emotional exhaustion (.21), and turnover intentions (.07) had ICCs above 5%, whereas the remaining dependent variables had ICCs below 3%. We subsequently performed separate multiple linear regression models to predict subjective stress appraisals. We accounted for shared variance by obtaining robust standard errors (Huber-White correction) for the WFC, emotional exhaustion, and turnover intentions models. We did not modify the remaining models, based on the reasonable assumption of statistical independence across facilities. Model 1 included binary indicators for each combined caregiving role (with workplace-only caregivers as the reference group), schedule control, and covariates. In Model 2, we added interaction terms for each combined caregiving role with schedule control to examine the extent to which schedule control conditioned double- and triple-duty caregivers' stress appraisals. When a combined caregiving role by schedule control interaction was significant, estimate commands were used to calculate the simple slope for each role.
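As a rough illustration of this modeling sequence, the sketch below fits an intercept-only multilevel model to obtain the ICC, then estimates Model 1 and Model 2 with facility-clustered (Huber-White) standard errors and computes a simple slope. It is written in Python with statsmodels under assumed variable names (dd_child, dd_elder, and triple as 0/1 role indicators); the reference to "estimate commands" suggests the authors worked in different statistical software, so this is an editorial sketch rather than the study's procedure.

```python
# An illustrative sketch (assumed variable names; not the authors' code) of the
# ICC screening and the Model 1 / Model 2 regressions described above.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("wfhs_men.csv")  # hypothetical analytic file

# 1) Empty (intercept-only) multilevel model decomposing variance into
#    individual- and facility-level components; the ICC is the share of
#    variance at the facility level (the paper reports .11 for WFC).
empty = smf.mixedlm("wfc ~ 1", data=df, groups=df["facility"]).fit()
between = empty.cov_re.iloc[0, 0]  # facility-level variance
within = empty.scale               # residual (individual-level) variance
print(f"ICC = {between / (between + within):.2f}")

# 2) Model 1: role dummies (workplace-only as reference), schedule control,
#    and covariates, with facility-clustered (Huber-White) standard errors
#    for the outcomes whose ICCs exceeded 5%.
model1 = smf.ols(
    "wfc ~ dd_child + dd_elder + triple + sched_control"
    " + job_demands + dual_earner",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["facility"]})

# 3) Model 2: add role-by-schedule-control interaction terms.
model2 = smf.ols(
    "wfc ~ (dd_child + dd_elder + triple) * sched_control"
    " + job_demands + dual_earner",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["facility"]})

# 4) Simple slope of schedule control for double-duty elder caregivers:
#    the main effect plus that role's interaction coefficient.
slope = model2.params["sched_control"] + model2.params["dd_elder:sched_control"]
print(f"simple slope (double-duty elder) = {slope:.2f}")
```

The standard error for such a simple slope would come from the model's coefficient covariance matrix, for instance via a linear-combination test such as model2.t_test("sched_control + dd_elder:sched_control = 0"), which is how the follow-up tests reported in the Results could be reproduced.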
--- Results --- Background Characteristics and Situational Context Table 1 presents men's background characteristics and situational context. Overall, 50% of men occupied double- and triple-duty caregiving roles. There were 62 (50%) workplace-only, 27 (22%) double-duty child, 22 (18%) double-duty elder, and 12 (10%) triple-duty caregivers. ANOVA analyses indicated that the workplace-only and double- and triple-duty caregiving groups differed on psychological job demands, marital and dual-earner couple statuses, and child disability. Specifically, triple-duty caregivers reported more psychological job demands and had a higher proportion of dual-earner couples than workplace-only caregivers. Double-duty child and triple-duty caregivers had higher proportions of cohabiting or married men. Based on the ANOVA results, we examined correlations among psychological job demands, marital and dual-earner couple statuses, and child disability. Marital and dual-earner couple statuses were highly correlated (r=.78, p<.001) and could not be considered in the same model; however, only dual-earner couple status was correlated with stress appraisals and was therefore retained. Additionally, child disability was not correlated with stress appraisals and was subsequently excluded from model testing in favor of parsimony. Final models included psychological job demands and dual-earner couple status as covariates. --- RQ1: Subjective Stress Appraisals Table 2 displays the multiple regression results. Workplace-only and double- and triple-duty caregivers' primary stress appraisals did not differ. As for secondary role strains, triple-duty caregiving was positively associated with WFC, and all three combined caregiving roles predicted greater FWC. Additionally, triple-duty caregiving was associated with greater emotional exhaustion, whereas double-duty child caregiving was related to lower turnover intentions. Workplace-only and double- and triple-duty caregivers' WFPS and job satisfaction appraisals did not differ. --- RQ2: The Potential Moderating Resource of Schedule Control In Model 2, evidence for the moderating effects of schedule control emerged only for double-duty elder caregivers. Specifically, schedule control moderated double-duty elder caregivers' appraisals of perceived stress (B=-2.09, SE=.89, p<.05), WFPS (B=.45, SE=.21, p<.05), turnover intentions (B=-.54, SE=.26, p<.05), and job satisfaction (B=.40, SE=.20, p<.05). We conducted follow-up analyses using simple slopes tests to determine for which group (i.e., workplace-only versus double-duty elder caregivers) schedule control was significantly associated with each outcome. These analyses indicated that, for every one-unit increase in schedule control, double-duty elder caregivers reported less perceived stress (B=-2.17, SE=.72, p<.01) and lower turnover intentions (B=-.60, SE=.17, p<.01) as well as more WFPS (B=.52, SE=.17, p<.01) and job satisfaction (B=.46, SE=.16, p<.01). Figures 2-5 present visual representations of these effects by displaying model-estimated means for each outcome at low (one standard deviation below the mean) and high (one standard deviation above the mean) values of schedule control. In the context of low schedule control, double-duty elder caregivers indicated greater perceived stress (Figure 2) and turnover intentions (Figure 3) as well as less WFPS (Figure 4) and job satisfaction (Figure 5) relative to mean scores on the same variables in the presence of high schedule control. These same patterns were also evident among double-duty child and triple-duty caregivers, but not workplace-only caregivers. --- Discussion This investigation partially replicates a recent study based on women from the same sample described in this paper (DePasquale et al., 2014). Guided by the SPM, the current and earlier investigations examine double- and triple-duty caregivers' perceived stress, work-family conflict, and WFPS relative to workplace-only caregivers. Whereas the earlier investigation included partner relationship role strains, this study emphasizes additional work role strains and considers the moderating effects of schedule control. When applicable, findings from RQ1 are descriptively compared with the previous investigation to further contextualize how double- and triple-duty caregiving affects the stress subjectively experienced by men. Results suggest that workplace-only and double- and triple-duty caregiving men appraise primary stress (conceptualized as perceived stress) similarly. These findings contrast with DePasquale et al. (2014), in which double-duty elder and triple-duty caregiving women reported more perceived stress. There are several potential explanations for the lack of effects in the current study. First, the male subsample drawn on here is substantially smaller than the female subsample from DePasquale et al. (n=123 versus n=1,399, respectively). Therefore, this study may lack the statistical power to detect smaller differences between workplace-only and double- and triple-duty caregivers relative to the earlier investigation. Second, double- and triple-duty caregiving men may emulate a managerial caregiving style. The protective nature of this caregiving approach could enable men to occupy multiple caregiving roles with minimal primary stress (Anjos et al., 2012; Cottingham, 2015; Thompson, 2002). Third, this finding is based on a single indicator of subjective primary stress. Other indicators (e.g., overload) may produce different results or be more applicable to double- and triple-duty caregiving men. Fourth, both applications of the SPM focus on caregivers' subjective experiences rather than care recipient conditions.
Future applications of the SPM should integrate objective primary stress indicators that focus on care recipients' health, behavior, and functional capabilities, as well as the surveillance, work, and time required of family caregivers, as these may be more relevant for double- and triple-duty caregiving men. Several differences emerged, however, with respect to secondary stress appraisals. Consistent with DePasquale et al. (2014), triple-duty caregivers reported more WFC, double- and triple-duty caregivers indicated greater FWC, and there were no differences in WFPS appraisals. Comparable to prior research linking dependent children to nurses' lower turnover intentions (Stewart et al., 2011), double-duty child caregivers also reported lower turnover intentions. Although adult relatives are also linked to lower turnover intentions, workplace-only caregivers' and double-duty elder and triple-duty caregivers' turnover intentions were similar. Additionally, triple-duty caregivers indicated more emotional exhaustion. This finding complements previous evidence suggesting that professional caregiving men informally caring for older adults are at risk of emotional burnout (Anjos et al., 2012). Given that triple-duty caregivers also perceived more work-family conflict, this particular group may be struggling to maintain professional and family caregiving role boundaries (Ward-Griffin, 2004). Emotion regulation, or the strategic management and experience of feelings to create desired, observable facial expressions in accordance with contextual expectations and norms (Ekman, 1992; Wharton & Erickson, 1993), represents one mechanism that may facilitate the erosion of such boundaries. In the SPM, emotion regulation constitutes a secondary role strain, as emotion regulation performance in one role may affect emotion regulation and outcomes in other roles (Wharton & Erickson, 1993). Both professional and family caregiving entail emotion regulation, likely constituting a substantial portion of caregiving responsibilities in both domains and pitting triple-duty caregivers' three caregiving roles against one another for scarce energy (Goode, 1960). That is, the expenditure of energy for managing emotions in both family caregiving roles may limit triple-duty caregivers' emotional resources or energy for professional caregiving and vice versa, thus facilitating emotional exhaustion. Overall, a descriptive comparison of findings from the present study and the DePasquale et al. (2014) investigation suggests that subjective stress appraisals among double- and triple-duty caregiving men and women do not vastly differ. Findings from the present study, however, warrant additional research examining how double- and triple-duty caregiving men negotiate workplace and family caregiving role boundaries, utilize the managerial caregiving approach or employ other caregiving styles at work and at home, and regulate emotions when transitioning in and out of workplace and family caregiving roles. From a practice standpoint, the stress experienced by double- and triple-duty caregiving men will only become a greater concern for the healthcare industry as it strives to recruit and retain men with an increased likelihood of family caregiving.
Given the gendered barriers, discrimination, and stigma experienced by professional caregiving men (MacWilliams et al., 2013; O'Connor, 2015; Rajacich et al., 2013), the inclusion of family caregiving men in work-family programs, practices, and policies is imperative and may signify a pivotal step toward discarding the healthcare industry's gendered image. Indeed, a lack of understanding for, or oversight of, double- and triple-duty caregiving men's work-family challenges may exacerbate or reinforce preexisting notions about the homogeneous gender of caregiving professions and subsequently deter potential talent or increase turnover. --- Perceived Schedule Control Schedule control emerged as a resource for double- and triple-duty caregiving men's stress process (RQ2). Specifically, moderation results revealed that double-duty elder caregivers reported less primary stress and lower turnover intentions, as well as more WFPS and job satisfaction, with increased schedule control. Model-estimated means for the conditional effects of double-duty child and triple-duty caregiving also mirrored these findings but may not have achieved statistical significance due to insufficient power. Descriptively, differences calculated between primary and secondary stress appraisal scores in the context of lower and higher schedule control were greater for all combined caregiving configurations compared to workplace-only caregivers, thereby illustrating the significance of schedule control for double- and triple-duty caregiving men. At a time in which the healthcare industry is actively targeting men in recruitment and retention efforts, these findings are particularly noteworthy. According to a recent report on employer strategies to attract, retain, and engage workers amidst a workforce shortage, organizations that offer or provide benefits that employees find useful or valuable will retain talent (AARP, 2015). Applying this logic to the present study, double- and triple-duty caregiving men's lower turnover intentions in the presence of greater schedule control reinforce the notion that they benefit from and/or value schedule control. Further, our findings suggest that schedule control will not only help recruit and retain family caregiving men but may also yield a positive return on investment. Namely, turnover in the healthcare sector has serious, wide-ranging implications, from system costs to resident outcomes (Hayes et al., 2012; Trinkoff, Han, Storr, Lerner, Johantgen, & Gartrell, 2013). If the lower turnover intentions associated with increased schedule control translate to actual behavior, the healthcare industry may experience a reduction in turnover-related costs, more stability and continuity of care in its workforce, and better health outcomes among residents and employees. Schedule control, then, may constitute a resource beyond the employee level. Moreover, these findings reflect and extend prior research regarding professional caregiving men's satisfaction with their work role and traditional constructions of masculinity. Previous studies suggest that professional caregiving men express concerns about and experience stress because of the gendered organizational climate engulfing caregiving professions; nonetheless, men still convey passion, enthusiasm, and optimism for their work role (e.g., Hart, 2005; Sherrod et al., 2005).
It is plausible that the challenges associated with family caregiving make double- and triple-duty caregiving men more susceptible or reactive to workplace stress, ultimately detracting from their job satisfaction. In that case, potential benefits derived from schedule control (e.g., addressing work-family needs) may help these men reconnect with the desires, preferences, or selling points of the profession that initially attracted them to their work role and increase WFPS and job satisfaction. Additionally, schedule control may be a particularly appealing work resource for double- and triple-duty caregiving men given that control is characteristic of traditional masculinity ideology (Fournier & Smith, 2006). Double- and triple-duty caregiving men may be exposed to more gender-based discrimination, barriers, or stigma, as well as conflicting masculinity norms, encountered at both work (e.g., lack of support for work-family balance) and home (e.g., engaging in care traditionally provided by women) because of their family caregiving roles (Anjos et al., 2012). Therefore, schedule control may enable double- and triple-duty caregiving men to maintain their masculine identity by exercising more control over when they work and partake in the family domain, thus attenuating stress appraisals. Conversely, low schedule control may exacerbate men's perceived loss of masculinity. Given the lack of prior research, we can only speculate as to how or why schedule control favorably conditions double- and triple-duty caregiving men's perceived stress, WFPS, turnover intentions, and job satisfaction. These findings suggest, though, that a key factor in recruiting and retaining double- and triple-duty caregiving men is accommodating their work-family interface. Thus, the availability, utilization patterns, relevance, and surrounding organizational climate of workplace practices (schedule control among them), programs, and policies for double- and triple-duty caregiving men represent pivotal future research directions that will yield pertinent information for the development of appropriate and targeted work-life initiatives. Further, these timely findings are novel and provide initial evidence regarding the potential benefits of schedule control for double- and triple-duty caregiving men as well as the healthcare sector. We believe they provide essential baseline information for family caregiving men considering or currently in caregiving professions, long-term care employees, and healthcare providers who counsel professional caregiving men. --- Limitations and Strengths The present study has several limitations. First, the cross-sectional design precludes causal ordering, a common limitation of previous studies on caregiving men (Bookwala, Newman, & Schulz, 2002). Second, although heterogeneity in men's working conditions is inherently controlled for in the WFHS sample, non-probability sampling of nursing home facilities from a company in one region (New England) of one country (the U.S.) limits generalizability of our study findings to the population of men working in the long-term care industry. A third limitation of this study is its sample size. Sensitivity power analyses revealed that the data used in the current study were powered to detect approximately medium effect sizes. A much larger sample may be required to detect smaller differences between workplace-only and double- and triple-duty caregiving men.
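To make that sensitivity claim concrete, the following sketch (an assumed reconstruction, not the authors' analysis) solves for the smallest Cohen's f² detectable with 80% power given n=123; the number of model predictors is a hypothetical choice.

```python
# Illustrative sensitivity power analysis; predictor count is assumed.
from scipy.stats import f as f_dist, ncf
from scipy.optimize import brentq

n, alpha, power = 123, .05, .80
u = 1                  # numerator df: one tested coefficient (assumed)
p = 9                  # total predictors in the model (assumed)
v = n - p - 1          # denominator df

def achieved_power(f2):
    crit = f_dist.ppf(1 - alpha, u, v)           # critical F under the null
    return ncf.sf(crit, u, v, f2 * (u + v + 1))  # noncentral F beyond it

# Smallest effect size at which the design reaches the target power;
# compare against Cohen's benchmarks (.02 small, .15 medium, .35 large).
f2_min = brentq(lambda f2: achieved_power(f2) - power, 1e-6, 1.0)
print(round(f2_min, 3))
```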
Therefore, future research in this area should intentionally oversample men, when possible, to ensure sufficient sample size, increase statistical power, and enable a precise evaluation of double- and triple-duty caregiving men. It should be noted, though, that professional caregiving men are considered a "difficult-to-obtain" workforce segment, and the size of our analytic sample is consistent with previous studies (e.g., Rochlen, Good, & Carver, 2009, p. 53; Wallen, Mor, & Devine, 2014; Zamanzadeh, Valizadeh, Negarandeh, Monadi, & Azadi, 2013). Further, much smaller significant differences between workplace-only and double- and triple-duty caregiving men may not be practically meaningful. Finally, we conducted a secondary analysis of existing data not specifically designed to study caregiving. The data lacked ideal information regarding caregiving intensity, but they enabled us to construct combined caregiving role occupancy measures consistent with prior research (e.g., DePasquale et al., 2014, 2015a; Scott et al., 2006). Still, it should be acknowledged that this approach operationalizes child and elder care differently. Specifically, the child care measure does not assess care provision. Instead, dependency is implied by age and cohabitation. The average age of residential children (double-duty child caregivers: M=5.67, SD=4.57; triple-duty caregivers: M=3.83, SD=3.95), however, affirms dependency. Conversely, the elder care measure specifies criteria for care provision and includes a more stringent time commitment than required in prior double-duty care research (e.g., Ward-Griffin, 2004). It should also be noted that this measure may encompass care for adult relatives other than aging parents (e.g., spouses). Nonetheless, one advantage of a caregiving role occupancy approach is that, given the diversity of family caregiving situations yielded by our measures, our sample may be more representative of double- and triple-duty caregivers than a sample selected for a certain threshold of care or care recipient diagnosis (DePasquale et al., 2014). To be sure, our findings are suggestive; it is important that they are viewed as an initial step toward developing a more complete understanding of double- and triple-duty caregiving men. We encourage other researchers to replicate and extend our study using larger, representative samples; longitudinal research designs; more sensitive family caregiving measures; and previously described expansions of the SPM (Pearlin et al., 1990). The aforementioned limitations, however, should not outweigh the contributions and knowledge gained from the present study. Previous double- and triple-duty caregiving studies comprise a small, limited body of work primarily based on qualitative evidence, RNs, health professionals working outside of the U.S., informal elder care, and women (Boumans & Dorant, 2014; DePasquale et al., 2014; Giles & Hall, 2014; Scott et al., 2006; Ward-Griffin, 2004; Ward-Griffin, Brown, Vandervoort, McNair, & Dashnay, 2005; Ward-Griffin et al., 2015). We address these gaps and contribute to existing literature by exclusively focusing on men working in nursing homes in the U.S., the majority of whom are CNAs; considering different workplace and family caregiving configurations; and providing new evidence regarding double- and triple-duty caregiving men's subjective stress and schedule control.
Additionally, the inclusion of workplace-only caregiving men as a reference group, rather than women, is beneficial in that it enables an assessment of within-group variables and provides a more accurate context for understanding the stress of family caregiving on men (Bookwala et al., 2002; Rochlen et al., 2009). Finally, our preliminary study lays the groundwork for future research on double- and triple-duty caregiving men. It is our hope that the issues discussed here will motivate other researchers to further investigate and expand this important line of empirical inquiry. Figure captions: Concepts and measures for the analysis of double- and triple-duty caregiving men's subjective stress appraisals. Model-estimated means for the conditional effects of double- and triple-duty caregiving on perceived stress, an indicator of primary stress. *p<.05, **p<.01, ***p<.001
This study sought to evaluate the effectiveness of Project ACTS: About Choices in Transplantation and Sharing, which was developed to increase readiness for organ and tissue donation among African American adults. Nine churches (N = 425 participants) were randomly assigned to receive donation education materials currently available to consumers (control group) or Project ACTS educational materials (intervention group). The primary outcomes assessed at 1-year follow-up were readiness to express donation intentions via one's driver's license, donor card, and discussion with family. Results indicate a significant interaction between condition and time on readiness to talk to family, such that participants in the intervention group were 1.64 times more likely to be in action or maintenance at follow-up than were participants in the control group (p = .04). There were no significant effects of condition or condition by time on readiness to be identified as a donor on one's driver's license and by carrying a donor card. Project ACTS may be an effective tool for stimulating family discussion of donation intentions among African Americans, although additional research is needed to explore how to more effectively affect written intentions.
toward donation among African Americans as compared to people of other racial and ethnic backgrounds (McNamara et al., 1999). Thus, in the United States, there are expanded educational efforts to increase African Americans' deceased and living donation intentions; this is accomplished through exposure in the national and local media, community interventions, and the dissemination of best practices (National Institute of Diabetes and Digestive Kidney Diseases, 2003). The church represents a potentially effective mechanism for developing and implementing a community intervention to shape African Americans' views on donation. Although religious objections to donation are often cited (Boulware et al., 2002; Callender, 1987; Durand et al., 2002; Gillman, 1999), almost all major religious organizations support donation; many even have supportive policy statements about organ donation (Gallagher, 1996). Thus, delivering an intervention in a church setting that conveys religious support for donation while addressing nonreligious concerns, such as inequalities in the organ allocation system, has the potential to increase donation intentions among African Americans. With demonstrated effectiveness, such an intervention could affect African American donation rates when taken with other coordinated intervention efforts. Realizing the critical impact that religious views have on donation intentions, the authors developed an intervention to address many religious objections to donation. Project ACTS: About Choices in Transplantation and Sharing is a culturally sensitive organ donation education intervention that targets church-going African American adults. This intervention was designed to address the specific donation concerns of African Americans and encourage individuals to make their donation intentions known by designating their wishes on their driver's licenses, signing donor cards, and talking with their families. Most donation-related interventions encourage the written expression of donation intentions, but increasing emphasis is being placed on verbally sharing one's wishes with family (Schutte & Kappel, 1997). In the case of deceased donation, family members' awareness of one's donation intentions is one of the most important steps in the process of becoming an organ donor (Callender, 1987). In most states, the family will be asked to consent to the donation of a deceased family member's organs and tissues even in the presence of a signed donor card; therefore, fostering family discussions about donation is critical to closing the gap between the supply of and demand for transplant organs (DeJong et al., 1998; Schutte & Kappel, 1997). Family communication and acceptability are key to donation decisions among African American families because these families are oftentimes characterized by strong extended relationships, shared decision making, and strong religious orientation (Kane, 2000). Although fewer than 50% of families have had discussions about organ donation, it is likely that such discussions can serve to increase both positive attitudes and donation intentions (DeJong et al., 1998; Morgan & Miller, 2002). The purpose of this study is to test the effectiveness of a culturally sensitive, family-focused intervention, Project ACTS.
Because the act of serving as an organ donor is a rare event, effectiveness was measured by applying the transtheoretical model and stages of change (Prochaska & DiClemente, 1983) to the expression of donation intentions via one's driver's license, donor card, and family discussion. This model has been applied to donation intentions in previous research (Hall et al., 2007; Robbins et al., 2001) and proposes a continuum of behavior change that consists of precontemplation (no intention of becoming a donor), contemplation (thoughts of becoming an organ donor), preparation (seeking out information about organ donation), action (expressing donation intentions either verbally or in writing), and maintenance (having expressed donation intentions more than 6 months ago). During each stage, specific processes and techniques are theorized to help individuals advance along the continuum of behavior change. We hypothesize that, from baseline to follow-up, individuals receiving the Project ACTS intervention materials will demonstrate a significantly greater increase in their readiness to express written (via one's driver's license and donor card) and verbal (via talking to one's family) donation intentions as compared to those who receive educational materials that are currently available to consumers. --- MATERIALS AND METHODS --- Design The primary aim of this longitudinal, randomized effectiveness trial was to assess whether stage of readiness to express donation intentions among participants who received the Project ACTS intervention was significantly different than among participants who did not receive the intervention. Nine churches were randomly assigned to one of two conditions: (a) control (received donation education materials in the form of pamphlets and videotapes that are currently available to consumers) or (b) intervention (received the Project ACTS video and written materials). Church size ranged from 100 to 5,000 members, with most churches in the range of 500 to 1,000 members. The religious denominations represented are African Methodist Episcopal (two churches), Baptist (five churches), Christian Methodist Episcopal (one church), and Lutheran (one church). Five churches were assigned to the control group and four churches to the intervention group. Data were collected at two points in time, at baseline and 1-year follow-up, during after-church luncheons conducted at each participating church. At baseline, participants in both groups were given self-education materials to take home and review during the 1-year follow-up period. This study was conducted with the approval of the Emory University Institutional Review Board. --- Formative Research To identify the specific donation-related concerns of this population, we conducted focus groups with African American clergy and parishioners. There were 4 focus groups with clergy (n = 26) and 10 focus groups with parishioners (n = 42; Arriola, Perryman, & Doldren, 2005; Arriola, Perryman, Doldren, Warren, & Robinson, 2007). Results suggested the need to address concerns that stem from religious beliefs and inequalities in the transplantation system and to provide donation-related statistics that highlight the need among African Americans. Additionally, to maximize the accuracy and currency of our messages as well as the appropriateness of evaluation instruments and analytic strategies, an advisory council (AC) and community advisory board (CAB) were created.
The AC included individuals with expertise in donation, transplantation, and mass communication, and the CAB included pastors and administrators from local churches. Both entities were formed to help provide ideas for conveying health messages using religious themes and to review project-related materials. Following the synthesis of the initial focus groups, a draft video was developed and reviewed by members of the AC/CAB and additional experts in the donation and transplantation field. A rough cut of the video was also shown to focus group participants and their families. Feedback from these sources guided the final editing of the Project ACTS video. --- Intervention and Control Group Materials The Project ACTS intervention package consisted of the video described above (in the form of a DVD or VHS), an educational pamphlet, a donor card, a National Donor Sabbath pendant, and several additional items embossed with the project name and logo (e.g., pen, notepad, refrigerator magnet, and bookmark). The DVD/VHS was hosted by the gospel singing group Trin-i-tee 5:7 and featured excerpts from individual and family conversations about beliefs, attitudes, myths, misconceptions, and fears about the organ donation/transplantation process. Interspersed throughout the video were biblical and spiritual themes to encourage organ donation (e.g., an excerpt from the biblical book of Acts 20:35, "It is more blessed to give than to receive"). Additionally, the DVD/VHS sought to motivate viewers by including heartfelt, personal stories from individuals who are organ recipients, donor family members, on the waiting list to receive an organ, or living donors. Complementing the video, the Project ACTS educational booklet contained statistical information on the overrepresentation of African Americans on the waiting list, information on how the allocation system works, resources for additional information, and a donor card. After examining all of the existing donation education materials that were available at the time of this study, we selected control materials that clearly targeted African Americans. In doing so, we hoped to provide the most rigorous test of the effectiveness of the newly developed Project ACTS intervention materials. Thus, control participants received materials that were currently available to all consumers (in other words, standard of care): the African American Health Passport developed by the Department of Health and Human Services, a donor card, and several items from the Donate Life America "Zero Lives Will Be Saved if You Do Nothing" campaign (e.g., pen, bookmark). Additionally, participants were notified that the Minority Organ Tissue Transplant Education Program (MOTTEP) video, "How Do I Say Thank You?", was available to be checked out from their church library. --- Data Collection Procedures Through a process of networking with clergy (via telephone and face-to-face meetings) and colleagues, we identified nine churches to participate in this data collection effort. All of the pastors who agreed to participate in this project were either members of the project's CAB or nominated a liaison to the board. The authors worked with the pastor of each church to identify a suitable date for data collection, and a liaison was appointed to handle the actual data collection logistics. Data were collected during project-sponsored luncheons conducted after worship services.
Project staff explained what participation in the study entailed and distributed a packet containing the consent form and questionnaire to each interested and eligible participant. Participants were considered eligible if they self-identified as African American, were 18 years of age or older, and did not reside in the same household as another participant. Prospective participants read and signed the consent form and completed the questionnaire independently, except in several cases in which participants requested assistance. The questionnaires took approximately 15 minutes to complete. Participants returned completed surveys to project staff and received their monetary incentive, which was either $10 in cash or a $10 donation to the church on their behalf. (The method of payment was a church-level decision, made by the pastors prior to data collection, so all participants at the same church were given the same incentive.) During the 1-year study period, participants received postcards, holiday cards, and birthday cards to remind them to review intervention materials and attend the follow-up data collection. At post-test, participants were asked to complete the same 15-minute questionnaire. Those who completed the questionnaire at a scheduled after-church luncheon received a $15 monetary incentive. Those who did not attend a luncheon were mailed a questionnaire directly, asked to return the questionnaire using a prestamped, self-addressed envelope, and offered a $25 incentive to reflect the additional effort required of them. --- Measures The primary independent variables of interest are condition (intervention or control) and time (baseline or follow-up). The primary outcome of interest was readiness to engage in deceased donation. Readiness to donate was measured via three items developed by the authors that represent each stage on the continuum of behavior change theorized by the transtheoretical model and stages of change (Prochaska & DiClemente, 1983): one each for readiness to be designated as a donor on one's license, carry a donor card, and talk to family about one's wishes. Each item asked the respondent to select the statement that best described his or her readiness to be designated as a potential organ donor by means of one of the three mechanisms. For each of the three items (license, card, and discussion with family), there were five response options, one corresponding to each of the five stages of change. For example, the precontemplation response option for the family discussion item reads: (a) "I have not talked to my family about organ donation, and I don't plan to do so any time soon." The last section of the questionnaire included demographic items (e.g., age, gender, ethnicity, education, income, and marital status). --- Statistical Analysis Preliminary Analyses: First, we computed χ² statistics to determine whether there were any differences in age, gender, marital status, income, highest level of education, monthly church attendance, or prior written intentions to serve as a donor between participants in the two conditions (intervention and control). The purpose of this analysis was to determine whether any potentially confounding variables differed by condition. Using logistic regression, we then regressed condition upon all variables for which there was a significant difference on the χ² test to assess which variables remained related to condition.
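A minimal sketch of this screening step follows; the study itself used SPSS 16.0, so this Python version (with hypothetical file and column names) is only an assumed reconstruction.

```python
# Illustrative sketch only; file and column names are hypothetical.
import pandas as pd
from scipy.stats import chi2_contingency
import statsmodels.formula.api as smf

df = pd.read_csv("baseline.csv")  # one row per participant; condition is 0/1

candidates = ["age_group", "gender", "marital", "income", "education",
              "attendance", "prior_written_intent"]
flagged = []
for var in candidates:
    table = pd.crosstab(df[var], df["condition"])
    chi2, p, dof, _ = chi2_contingency(table)
    if p < .05:
        flagged.append(var)   # differs by condition on the chi-square test

assert flagged, "no variables differed by condition"
# Regress condition on all flagged variables at once; covariates that stay
# significant here are retained (income was the only one in this study).
screen = smf.logit("condition ~ " + " + ".join(flagged), data=df).fit()
print(screen.summary())
```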
Income was the only variable that remained significantly associated with condition in the logistic regression; thus, it was included in the outcome analysis as a covariate. In addition, we computed χ² statistics to determine whether there were any differences in use of the materials between the intervention and control groups. Main Outcome Analysis: The main outcome analysis used generalized estimating equations, specifically logistic models because of the binary dependent variables. This approach accommodated repeated measurements (data were collected at two points in time) and nested terms (participants were nested within churches). By including in the model a subject effect that was a church-by-participant interaction, we were able to control for within-church variability in participant responses. The model effects that were tested were condition (intervention vs. control), time (baseline vs. follow-up), their interaction, and income (less or more than $30,000). Three binary outcome variables were created to measure whether participants were in an early (i.e., precontemplation, contemplation, or preparation) or late (i.e., action or maintenance) stage of readiness to be identified as an organ donor on their driver's license, carry a donor card, or talk to their family about their donation intentions at follow-up. All analyses were conducted using SPSS 16.0. An α level of .05 was used to determine statistical significance.
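For one outcome, the GEE specification described above might look as follows in Python's statsmodels (the study used SPSS 16.0; clustering here is simplified to churches, and all file and column names are hypothetical).

```python
# Illustrative sketch only; the original analysis was run in SPSS 16.0.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

long = pd.read_csv("stages_long.csv")  # one row per participant per wave

# Binarize the five stages into early (0) vs. late (1) readiness.
long["late_stage"] = long["family_stage"].isin(
    {"action", "maintenance"}).astype(int)

# Logistic GEE with condition, time, their interaction, and income,
# clustering repeated observations within churches.
model = smf.gee(
    "late_stage ~ condition * time + income",
    groups="church",
    data=long,
    family=sm.families.Binomial(),
    cov_struct=sm.cov_struct.Exchangeable(),
)
result = model.fit()
print(result.summary())
# exp(coef) of the condition:time term corresponds to the odds ratio
# reported in the text (about 1.64 for readiness to talk to family).
```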
--- Results --- Sample At baseline, a total of 425 participants were recruited into the study from the nine participating churches. The number of participants per church ranged from 19 to 70 (M = 47.2, SD = 2.9). Of the 425 participants, 337 (79.3%) completed the 1-year follow-up survey. There was no significant difference in the rates of follow-up between intervention and control participants (78.5% vs. 80.2%; χ²[1] = 0.19, p > .05). Participants tended to be female, married, and relatively well-educated (see Table 1). With regard to differences in characteristics of participants in the two conditions, intervention participants were slightly younger (t[314], p < .05) and reported a lower household income (χ²[2] = 16.05, p < .01) and a lower level of educational attainment (χ²[3] = 9.10, p = .05) than control participants did. However, no significant differences were seen with respect to donation intentions. When condition was regressed upon age, education, and income together, only income remained significant; thus, income was used as a factor in the generalized estimating equation. --- Use of Materials In the analysis of use of the intervention materials, intervention participants were more likely than control participants to review a donation-related video (56.6% vs. 23.6%; χ² = 37.74, p < .001) and written materials (69.1% vs. 50.9%; χ² = 11.63, p < .001). --- Main Outcome Analysis: Donation Intentions For the first logistic model, the dependent variable was early (i.e., precontemplation, contemplation, or preparation) or late (i.e., action or maintenance) stage of readiness to be identified as a donor on one's driver's license. Results indicate no significant effects of condition, time, condition by time, or income. For the second logistic model, the dependent variable was early or late stage of readiness to be identified as a donor by carrying a donor card. Results indicate no significant effects of condition, condition by time, or income; however, there was a main effect for time such that at follow-up participants were 1.53 times more likely to be in the action or maintenance stage for readiness to carry a donor card than at baseline (p = .01). For the third logistic model, the dependent variable was early or late stage of readiness to talk to one's family about one's donation intentions. Results indicate no significant effects of condition, time, or income; however, there was a condition-by-time interaction such that participants in the intervention group were 1.64 times more likely to be in action or maintenance at follow-up than participants in the control group (p = .04). --- Discussion We conducted a randomized effectiveness trial of a culturally sensitive intervention designed to increase organ and tissue donation intentions among African American adults. Not surprisingly, intervention participants were more likely than control group participants to report watching a donation-related video during the 1-year follow-up period because they were given personal copies of the video to take home. However, both groups were given written materials to review and, indeed, intervention participants were more likely to report reviewing these materials than control participants were. Regarding the main outcome analyses, results indicate that condition was not significantly associated with having an increased readiness to express donation intentions on a driver's license or by carrying an organ donor card. Additionally, all respondents, regardless of condition, were more likely at follow-up to be in the action or maintenance stage in their readiness to carry a donor card. Finally, intervention participants were significantly more likely than control participants to be in the action or maintenance stage in their readiness to talk to family about their donation intentions at follow-up. The effect sizes are small (OR < 2) but significant. It is unclear why the intervention yielded such small effect sizes; multiple possibilities exist. It might be that the relatively low use of materials attenuated the effect sizes. In the intervention group alone, just over half of participants (57%) reported reviewing the video, and 69% reported that they reviewed the written materials. Thus, the small effect sizes may reflect the fact that between one third and one half of intervention participants did not review the intervention materials. We are currently exploring the factors that motivate individuals to review the Project ACTS intervention materials so that revisions can be made to the intervention and/or its delivery to maximize uptake. Another reason for the small effects may be that the MOTTEP control group materials are effective as well. One of the few interventions that has addressed donation education with racial and ethnic minority adult populations is MOTTEP, and there is evidence that it was effective at increasing positive attitudes and donor consent rates among racial and ethnic minorities (Callender, Hall, & Branch, 2001). The Project ACTS intervention is similar to MOTTEP in that it is culturally sensitive; however, unlike MOTTEP, it was developed with a focus on addressing the religious barriers to donation and encouraging family discussion.
The importance of discussing organ donation with family members is underscored by research finding that donation rates are higher when individual wishes are known within the family (DeJong et al., 1998; Smith, Kopfman, Lindsey, Yoo, & Morrison, 2004). Perhaps this is why Project ACTS was most effective for this particular dependent variable (encouraging family discussion). Both control and intervention group materials included a donor card among other educational information. This may account for the main effect of time from baseline to follow-up and the lack of significance between groups, indicating that the Project ACTS intervention performed no better than MOTTEP at increasing readiness to provide written documentation of one's donation intentions. Given that many states are moving toward enacting legislation that would strengthen the ability of the organ procurement agency to recover organs strictly based on the written documentation of donation wishes, the need for family consent may diminish over time (although family consent will always be desirable in the case of deceased donation). Thus, more research is needed to explore how to effectively encourage the written documentation of donation intentions among African Americans. --- Limitations Limitations of this study relate to the use of a convenience sample of Christian, African American parishioners within the southeastern United States. Moreover, by virtue of their willingness to volunteer, it might be that participants were generally more supportive of donation than were those who did not agree to participate. However, the great variability in donation intentions suggests that this probably was not the case (i.e., the data did not indicate overwhelming support for donation). Additionally, the food and monetary incentives may have helped us recruit individuals with a range of motivations for participating. Moreover, as the first investigation of the effectiveness of Project ACTS, this study was designed to place a greater emphasis on internal than external validity. Thus, this study was not designed to generalize findings to African American parishioners in other locales, to those holding non-Christian religious beliefs, or to the non-church-attending population. Additionally, the overrepresentation of women among our sample of parishioners may have affected the findings, although it is notable that this gender disparity is also seen in the churchgoing population more generally (Park, 1998). Thus, the gender distribution reflected in our sample mirrors what exists naturally. Future research is intended to explore the effectiveness of the intervention in a more heterogeneous population of African American adults. Finally, it was not optimal that control group participants were not given their own personal copies of the MOTTEP video. Logistical and financial constraints prevented the project from supplying all 202 control group participants with this video to take home with them. Moreover, doing so would have undermined our goal of distributing materials that are normally available to consumers. Much as accessing the video outside the study would have required a highly motivated consumer willing to pay $10 to $15 for it, the highly motivated control group participant in our research study would have had to check the video out from the church library.
Nevertheless, the control group written materials could be accessed free of charge at the time the study was conducted, so both groups were given written materials, and significant differences in self-reported review of these materials were still found. --- Conclusion A considerable amount of research has been conducted over the past two decades to understand the motivators, attitudes, and barriers related to organ donation among ethnic minorities. Specifically regarding African Americans, numerous studies have explored knowledge, beliefs, attitudes, and cultural reasons for low donation rates, such as a lack of awareness of the need for transplantable organs, mistrust of the health care system, fear of premature death, racism, and religious misconceptions (Callender, Miles, & Hall, 2002; Davis et al., 2005; Siminoff, Burant, & Ibrahim, 2006). Despite all of this research, educational campaigns and interventions incorporating these results have been slow to materialize, and very few have been systematically evaluated for their effectiveness. Project ACTS is a culturally sensitive intervention that was developed out of a desire to address the donation-related concerns of African American adults residing in the southeast region of the United States. The findings of this study can move the field toward a better understanding of successful methods to encourage family discussion of donation intentions among African American adults. Project ACTS can be modified and transferred to other populations, contingent on additional research on effectiveness. Given that family discussion is still such a critical mechanism for expressing donation intentions, this study offers new direction for effective donation education efforts targeting African Americans. --- PRACTICE IMPLICATIONS This intervention study demonstrates the effectiveness of an organ and tissue donation self-education intervention package in encouraging family discussion of donation intentions among African American adults. With continued evidence of its effectiveness, organ procurement organizations, civic organizations, churches, and public health departments that are targeting African Americans may distribute this intervention to members of their target populations to improve consent rates. Additionally, intervention materials could be adapted to fit other racial and ethnic groups in the United States to improve knowledge, attitudes, and beliefs relative to organ and tissue donation. Smith, S. W., Kopfman, J. E., Lindsey, L. L. M., Yoo, J., & Morrison, K. (2004). Encouraging family discussion on the decision to donate organs: The role of the willingness to communicate scale. Health Communication, 16, 333-346.
External shocks have an unexpected and disruptive impact on people's everyday lives. This was the case during the COVID-19 outbreak, which rapidly led to changes in typical mobility patterns in urban areas. In response, people reorganised their daily errands throughout space. However, these changes might not have been the same across socioeconomic classes, possibly adding to the detrimental effects of the pandemic on inequality. In this paper we study the reorganisation of mobility segregation networks due to external shocks and show that the diversity of visited places, in terms of both locations and socioeconomic status, was affected by the enforcement of mobility restrictions during the pandemic. We use the case of COVID-19 as a natural experiment in several cities to observe not only the effect of external shocks but also their mid-term consequences and residual effects. We build on anonymised and privacy-preserving mobility data in four cities: Bogota, Jakarta, London, and New York. We couple mobility data with socioeconomic information to capture inequalities in mobility among different socioeconomic groups and to see how they change dynamically before, during, and after different lockdown periods. We find that the first lockdowns induced considerable increases in mobility segregation in each city, while loosening mobility restrictions did not necessarily diminish isolation between different socioeconomic groups, as mobility mixing had not recovered fully to its pre-pandemic level even weeks after the interruption of interventions. Our results suggest that a one-size-fits-all policy does not equally affect the way people adjust their mobility, which calls for socioeconomically informed intervention policies in the future.
Introduction Inequality is a prominent feature of today's society. Unequal distribution of, and access to, resources sets the stage. Unequal paths to income [1], education [2], and employment [3] seed inequality, which is further moulded into behavioural preferences in daily life, mostly reflecting proximity to one's own socioeconomic and demographic background. Eventually, these unequal configurations can lead to segregation that potentially limits social dynamics. Socioeconomic segregation is not the only factor linked to inequality. There are a number of dimensions, such as residence [4], employment [3], income [1], or race, along which people are segregated, to mention a few. Residential segregation is manifested as the separation of different groups of people into different neighbourhoods within a city. It is fuelled by the quality of neighbourhoods drifting farther apart, resulting in highly segmented residential profiles between low- and high-income neighbourhoods [5,6]. Therefore, housing plays an intermediary role in reproducing inequality through the coupling effects between income inequality and residential segregation [6]. It has also been shown that a growing high-income segment of the workforce increases demand for residential units located in inner-city neighbourhoods, due to the centrality and accessibility of urban living [4,7]. Mobility patterns follow from these residential and employment constraints and preferences as people carry out their daily errands. An interplay between inequality and the way people organise their mobility in urban space is inevitable. In line with Urry [8], Olvera et al. [9] define inequality in mobility as behavioural differences in the level of transport use due to differences in the distribution of monetary resources such as income or wealth. Furthermore, they find that car ownership is a strong determinant of mobility patterns and residential locations and diminishes potential interaction with people of heterogeneous backgrounds (compared with shared space in public transportation). As a result, segregation patterns emerge from the entanglement between inequality and mobility. In urban mobility networks, social stratification in conjunction with unequal access to transport infrastructure brings about social exclusion [10,11] and social segregation [12,13]. Such inequalities may change due to external shocks, such as the COVID-19 outbreak, natural catastrophes (earthquakes and floods), or political riots (like war and conflicts). The consequences of such events can dramatically change existing socioeconomic configurations and individual mobility patterns, which in themselves are already constrained by socioeconomic stratification [14][15][16]. People's capacities to adjust their preferences and way of living in response to disruptions are limited by their socioeconomic status, by limited financial resources, or by jobs that demand physical presence. As the existing literature suggests, people with higher income may have the capacity for larger mobility reductions, while mobility inflexibility and less social distancing are observable among low-income groups, raising disparities in mobility [17][18][19]. In the literature, it is argued that the social fabric and inequality shape mobility patterns [8,20]. The spatial distribution of commercial areas, residential units, workplaces, and schools, among others, encourages people to move across the urban landscape.
Building on the notion of unequal distribution at the individual level, mobility is also engendered and reinforced by inequality [21]. The presence of individual preferences over the socioeconomic characteristics of places can be further quantified at the socioeconomic (SE) level by taking the ratio of visits of people from a particular SE class to places distributed across the various classes [22,23]. We build our approach on this finding by using mobility as an operational concept to analyse the socioeconomic stratification and spatial isolation brought about by external shocks. This research investigates the impact of the COVID-19 outbreak and the non-pharmaceutical interventions (NPI) that followed in the urban areas of Bogota, Jakarta, London, and New York. Our ultimate goal is to study the changing dynamics of isolation and segregation patterns in mobility due to external shocks. We also observe whether such phenomena are temporary, caused by time-limited restrictions such as lockdowns, or whether they induce long-term residual effects. To test this, we first capture the changing segregation pattern by quantifying mobility stratification in each pandemic period. Second, we empirically point out the behavioural effects of spatial and socioeconomic exploration in mobility by computing entropy measures derived from the spatial and socioeconomic properties of visited places. Moreover, we identify the types of interventions contributing to the aforementioned behavioural effects and their impacts on mobility segregation. Interestingly, these procedures reveal the persistence of residual effects of shocks even after the removal of interventions. --- Results In this study we focus on aggregated mobility data provided by Cuebiq [24], a location intelligence and measurement platform (for more details on the data see Materials and Methods). The dataset contains geolocations of places, up-levelled to census block, which were visited by anonymous smartphone users, along with timestamps. The time period starts on 1 January 2020, with the last day of observation varying between cities. Despite these differences in observation length, each city's time window adequately covers an extensive period before lockdown, during lockdown, and after reopening, as presented in Supplementary Material (SM) Section A. From this dataset, we acquire individual trajectories of 995,000 people, with different sample sizes between cities. To detect home locations, we use a home inference algorithm [25][26][27] in which home location is defined as the location most frequently visited by each individual during night time (between 9 PM and 6 AM). Using this method, we obtain 597,000 home-located people. Consequently, places other than home locations found in the trajectories are classified as places of interest (POI). Details of the dataset coverage and the home inference algorithm are specified in Materials and Methods (Section 4.1 and Section 4.2, respectively). At the same time, we use income-related features at spatial resolutions comparable to the census tract, released by the respective bureaus of statistics: the multidimensional poverty index in Bogota [28], the poverty rate in Jakarta [29], total annual income in London [30], and per capita income in New York [31]. We combine these mobility data with socioeconomic maps using geospatial information to infer a socioeconomic indicator for both people and places.
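A minimal sketch of the night-time home inference rule, under the assumption of a simple ping table, is given below; this is not the authors' pipeline, and the column names are hypothetical.

```python
# Illustrative sketch of the night-time home inference rule described above:
# a user's home census block is the block most frequently visited 9 PM-6 AM.
import pandas as pd

pings = pd.read_csv("visits.csv", parse_dates=["timestamp"])
# expected (hypothetical) columns: user_id, census_block, timestamp

hour = pings["timestamp"].dt.hour
night = pings[(hour >= 21) | (hour < 6)]

home = (
    night.groupby(["user_id", "census_block"]).size()
    .rename("n_night_pings").reset_index()
    .sort_values("n_night_pings", ascending=False)
    .drop_duplicates("user_id")          # keep the modal night-time block
    .set_index("user_id")["census_block"]
)

# Everything that is not the inferred home block counts as a POI visit.
pings = pings.merge(home.rename("home_block"), on="user_id")
poi_visits = pings[pings["census_block"] != pings["home_block"]]
```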
The algorithm pipeline and inference of this study are provided in Materials and Methods (Section 4.2) and SM Section A. In addition, to quantify policy responses, we use the stringency index released in the Oxford COVID-19 Government Response Tracker (OxCGRT) dataset [32]. Using these data we identify different intervention periods with more or less homogeneous policy restrictions: before lockdown, lockdown, and reopening. --- Mobility stratification To quantify socioeconomic stratification in mobility, we follow the strategy proposed earlier [23,33] by constructing a stratification matrix from the mobility network that codes the frequency of visits of people to places. It is defined from their mobility trajectories and indicates the existence of socioeconomic assortativity in visiting patterns. A stratified mobility network is formally constructed as a bipartite structure G = (U, P, E), where each individual u is an element of the node set U and each place p belongs to the node set P. A visit to p by u defines an edge e_{u,p} ∈ E, weighted by the frequency of visit occurrence w_{u,p}. In addition, the SES of people is defined in terms of the socioeconomic status c_u = i ∈ C_U of their home location. Following a similar method, places are assigned a class c_p = j ∈ C_P associated with the socioeconomic status of the census tract of their location. --- Baseline mobility segregation Segregation in the socioeconomic network appears as a pattern of assortativity whereby people of different socioeconomic character meet less often than similar others in the same socioeconomic level. We take the first step to capture this stratification tendency by transforming the mobility network into a mobility stratification matrix M_{i,j}, denoting the probability of people from a given socioeconomic class visiting places of a given socioeconomic class. As a result, mobility stratification in each period is summarised in a single matrix. To standardise the assortativity measure for the sake of comparability and reproducibility, we compute the mobility assortativity index r, defined as a correlation coefficient of M_{i,j} [22,34,35]. Assortativity index values closer to one signal a higher concentration of visits to venues close to one's own socioeconomic range (assortative mobility), while 0 pinpoints dispersion of the visiting pattern throughout classes (non-assortative mobility). Negative values indicate a tendency to visit places opposite one's own socioeconomic class (disassortative mobility). The complete technical note on the transformation technique and assortativity computation is given in Materials and Methods (Section 4.3). To demonstrate these metrics and to follow the dynamical changes of segregation during different phases of crisis interventions, we take the example of London. Fig. 1a provides snapshots of mobility stratification patterns in London, starting from before lockdown and followed by alternating periods of lockdown and reopening. In Fig. 1a, the x-axis represents the socioeconomic class i of people, while the y-axis denotes the socioeconomic class j of the places they visited. As people move, we calculate the frequency of visits for each people-place pair of classes, proportional to the total visits made by everyone who belongs to c_u = i (column-wise normalisation). Colour shades encode the visit magnitude, becoming lighter as the visit proportion grows.
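The construction of M_{i,j} and one common way to compute r as a correlation coefficient of the matrix are sketched below; this is an assumed implementation with hypothetical input columns, not the authors' exact code.

```python
# Illustrative sketch: build the column-normalised stratification matrix and
# compute r as a class-index correlation weighted by the matrix entries.
import numpy as np
import pandas as pd

# hypothetical columns: user_class (i) and place_class (j), both in 0..9
visits = pd.read_csv("class_visits.csv")

K = 10  # number of socioeconomic classes
M = np.zeros((K, K))
for i, j in zip(visits["user_class"], visits["place_class"]):
    M[j, i] += 1.0
M /= M.sum(axis=0, keepdims=True)         # column-wise normalisation

def assortativity(M):
    """Correlation coefficient between visitor class i and place class j."""
    P = M / M.sum()                       # treat M as a joint distribution
    j_idx, i_idx = np.indices(P.shape)    # rows are j (places), cols are i
    mi, mj = (P * i_idx).sum(), (P * j_idx).sum()
    cov = (P * (i_idx - mi) * (j_idx - mj)).sum()
    si = np.sqrt((P * (i_idx - mi) ** 2).sum())
    sj = np.sqrt((P * (j_idx - mj) ** 2).sum())
    return cov / (si * sj)

r = assortativity(M)
print(round(r, 3))
```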
Fig. 1a shows, in matrix form, the visit probabilities of individuals in each class to places located in the various classes: the largest visit proportions appear as lighter bins along the diagonal elements across periods, Before Lockdown (BL), Lockdown (L1/L2), and Reopening (R). The strength of assortative mixing is quantified by a correlation coefficient between i and j denoted as r. We find a stronger diagonal concentration during lockdown, denoting considerable visits to locations within one's own SES. Enforcing lockdown therefore raises assortative mixing, which we consider a change in mobility preference due to NPI. Fig. 1b is constructed with a sliding-window algorithm: for each window, advanced with a 1-day slide interval, a mobility matrix is generated and its r computed; increases in r overlap with the lockdown periods, and colour shades of lines and blocks denote cities. Note that, to refine the observation, we isolate home-location effects on the visiting pattern, whether visited places lie in home or non-home areas, by removing each individual's own home location from their mobility trajectory. The computational result of this sanity check shows a weaker but consistent segregation pattern (see SM Section B.2). Assortative mixing is consistently pronounced regardless of the type of policy imposed on mobility, for instance lockdown or reopening. Moreover, this validates the finding, as the recurring pattern persists even after we exclude each individual's own home location from their mobility trajectory. We consider next the persistence of the segregation patterns during the baseline period. Here we use the baseline segregation level, shown by the mobility assortativity r value Before Lockdown (BL), as the reference point to which the changing patterns in segregation can be adequately compared. Looking at the first matrix in Fig. 1a, we obtain an assortativity index r = 0.416, indicating that baseline segregation in mobility is fairly large, due to visits concentrated in areas of SE status similar to the visitors' own, even when far from their home locations. Subsequently, we continuously observe how segregation changed daily over an extensive period before the COVID-19 pandemic. In Fig. 1b, we look at a more granular temporal scale by using sliding windows to construct a sequence of daily mobility stratification matrices: for every 2-week window with a 1-day slide interval, we create a matrix and measure its assortativity index r. Colours denote the cities: Bogota (green), Jakarta (orange), London (light blue), and New York (purple). Looking at the baseline assortativity index values, New York stands out with r around 0.571, while Bogota reaches an r value around 0.317. The degree of assortativity in daily individual mobility in Jakarta is about 0.366, and London records an r value of approximately 0.416. Apart from that, we see that the assortativity level in mobility during the baseline period tends to be constant, without remarkable jumps or drops between days. --- Segregation dynamics due to external shocks As we can see in Fig. 1b, the assortativity index r sensitively reflects changes in mobility segregation during different intervention periods. Most prominently, the implementation of lockdowns (L1 and L2) constrained mobility at large and encouraged people to visit POI within their own socioeconomic spectrum.
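The sliding-window series behind Fig. 1b can be sketched as follows, reusing the matrix construction and the assortativity helper from the previous sketch (again an assumed implementation, not the authors' code).

```python
# Illustrative sketch of the 2-week sliding window with a 1-day step;
# reuses assortativity() defined in the previous sketch.
import numpy as np
import pandas as pd

visits = pd.read_csv("class_visits.csv", parse_dates=["timestamp"])

def window_matrix(chunk, K=10):
    M = np.zeros((K, K))
    for i, j in zip(chunk["user_class"], chunk["place_class"]):
        M[j, i] += 1.0
    cols = M.sum(axis=0, keepdims=True)
    return M / np.where(cols == 0, 1, cols)   # column-wise normalisation

starts = pd.date_range(visits["timestamp"].min(),
                       visits["timestamp"].max() - pd.Timedelta(days=14))
r_t = pd.Series({
    start: assortativity(window_matrix(
        visits[(visits["timestamp"] >= start)
               & (visits["timestamp"] < start + pd.Timedelta(days=14))]))
    for start in starts
})  # one such daily r curve per city in practice
```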
--- Segregation dynamics due to external shocks
As we can see in Fig. 1b, the assortativity index r sensitively reflects changes in mobility segregation during the different intervention periods. Most prominently, the implementation of lockdown (L1 and L2) harnessed mobility at large and encouraged people to visit POIs within their own socioeconomic spectrum. This leads the coefficient r to reach its peak at 0.608 during the first lockdown (L1) in London, a 46% increase from its baseline level of 0.416. In this city, mobility was reintroduced during reopening (R1), and visiting more places became possible again. The opportunity for higher socioeconomic mixing in mobility reopened, resulting in a lower r of 0.474. However, it did not return to the original pre-lockdown level, remaining 14% higher than the baseline. A weaker impact of lockdown was found during the second phase (L2), still resulting in an r of 0.461, 11% higher than the baseline level. We refer to this phenomenon as induced assortativity. Similar matrices computed for the other urban areas are presented in SM Section A.1.

The general overview of assortativity dynamics in Fig. 1b indicates that an increase in mobility assortativity is found in all investigated cities except New York. From the implementation of the lockdown policy onward, an increase in the r value in Bogota was visible, with the highest value of 0.613 recorded during the first phase of lockdown, suggesting a large spike of visits to places within one's own socioeconomic status. In the following periods, the r value tended to stabilise around 0.5, still higher than the baseline level. In Jakarta, once lockdown was introduced, the r value hovered around 0.6 in the subsequent periods. The intermittent reopening phase only decreased the r value temporarily, and it surged again once the second phase of lockdown took effect. In the end, the r value was still twice as large as its original magnitude before lockdown. Mobility assortativity in New York remained relatively stable across time, without any significant temporal cycle. This invariant pattern in New York could be attributed to the imbalanced and asymmetric mobility between the five boroughs within its territory: Manhattan, Brooklyn, Queens, Bronx, and Staten Island. In a related study, Rajput et al. [36] state that stay-at-home orders implemented in the midst of the COVID-19 outbreak disrupted 80% of typical daily movement within New York City from as early as the second half of March 2020. Recalling that Manhattan is the epicentre of the city's human dynamics, where various mobility motifs and activities occur, we examine the case of Manhattan separately, along with mobility within and between the other boroughs of New York. The results are summarised in SM Section E to clarify the upsurge in assortativity during lockdown already found in the other cities.

--- Residual isolation
To further refine the observation of changing segregation patterns, we measure the presence of residual isolation. Ultimate recovery is expected when the mobility pattern and assortative mixing during the reopening stage are at the same level as before lockdown. If such conditions hold, sudden changes triggered by an external shock, namely the COVID-19 outbreak, would carry only a short-term effect, without inducing any barrier for people to return to the normal pre-pandemic configuration. To quantify such effects we define the mobility adjustment matrix S_{i,j} = M^{t1}_{i,j} − M^{t2}_{i,j}, taking the difference between the mobility stratification matrices M_{i,j} of two consecutive periods, for instance the baseline period M^{BL}_{i,j} and the first lockdown M^{L1}_{i,j}. Each element of S_{i,j} therefore entails the difference in the proportion of visit frequencies between a pair of consecutive periods, as seen in Fig. 2.
Fig. 2a reveals the difference between a pair of intervention periods, before lockdown and the first lockdown, showing that the first lockdown is the most stringent among them. It tells us that the induced assortativity develops into isolation. In the case of London, the upper diagonal elements of S_{i,j} are dominated by negative values, indicating far fewer visits to places located in higher socioeconomic classes during the first lockdown as compared to the baseline level. The arrival of the second lockdown period pushes the proportion of visits to higher-SES places to a lower level again, although not as strongly as during the first lockdown. (In Fig. 2a, the periods during the pandemic, namely Lockdown (L1/L2) and Reopening (R), are compared to Before Lockdown (BL): green shades indicate more visits made before the enforcement of lockdown, white blocks constitute equal visits, and brown blocks appear otherwise. Fig. 2b reports the residual isolation effect across cities, measured by the average value of the main diagonal of each matrix, µ_re; the purple blocks show the difference between the pre-lockdown baseline and the reopening stage.) Accordingly, we observe contrasting proportions in the upper diagonal elements in London, as visits to these places touch their lowest level in L1 relative to BL, rebound in R1, and drop again in L2 (Fig. 2a). The relaxation of mobility restrictions during the reopening period increases the visits to these places to an extent, although negative values are still found in some cells.

A quantitative measure of residual isolation, µ_re = tr[S_{i,j}] / |C_P|, is obtained by summing the main diagonal elements of the adjustment matrix and dividing by the number of socioeconomic classes, which is ten; this yields the average of the diagonal elements shown in Fig. 2b. In each city, even in New York, individuals during lockdown restrict their presence to areas within their own socioeconomic boundary to an extreme degree, more than they used to. As reopening is imposed after the first lockdown, the pattern is reversed. The difference between reopening and the second lockdown is very subtle. Interestingly, reopening is not necessarily able to restore the typical pre-lockdown configuration. We still see negative values along the main diagonal, in some cells beyond −0.2, revealing the existence of a residual isolation effect. In Jakarta, people concentrate roughly 30% more of their activity in the class they belong to; the average residual effect is around 20% in Bogota and nearly 10% in New York. However, reopening (compared to before lockdown, BL-R) does not directly bring µ_re to zero in any of the cities we observe, indicating prevalent residual isolation. Weaker average residual isolation is found after removing local visits (see SM Section B.2), pushing µ_re to be closely distributed around zero.
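Both quantities are a few lines of linear algebra; the sketch below (ours, with the sign convention inferred from the text: negative diagonal values of the BL−R difference indicate lingering in-class concentration) makes them explicit.

```python
import numpy as np

def adjustment_matrix(M_t1, M_t2):
    """Mobility adjustment matrix S = M_t1 - M_t2 between two consecutive
    periods: positive cells mark class pairs visited relatively more in the
    earlier period, negative cells those visited more in the later period."""
    return np.asarray(M_t1) - np.asarray(M_t2)

def residual_isolation(S):
    """mu_re: the average of the main diagonal of the adjustment matrix. With
    S = M_BL - M_R, negative diagonal values mean in-class visits remain
    elevated during reopening, i.e. a residual isolation effect."""
    return np.trace(S) / S.shape[0]

# Usage: mu_re = residual_isolation(adjustment_matrix(M_BL, M_R))
```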
--- Restriction and behavioural effects
The pandemic brings another layer of complexity to how people move from one location to numerous others across space. During the COVID-19 outbreak, mobility is driven not merely by established personal preference but also by the necessity to align with prescribed mobility restrictions. Spatial mobility entropy H_m(X) (Fig. 3a) captures the heterogeneity of places in an individual trajectory, with values ranging from 0 (visiting the same locations) to 1 (visiting various locations). SES mobility entropy H_s(X) (Fig. 3b) applies the same computation after replacing the set of locations with the socioeconomic status of the areas where the places are located, indicating visit variation between socioeconomic isolation (0) and socioeconomic diversity (1). In London, we observe less heterogeneity in both the locations and the socioeconomic status of places visited by individuals during lockdown. Even after some relaxations are allowed, people do not experience mobility at the pre-pandemic level. Similar observations also become evident in the other cities (Fig. 3c). With this in mind, we look at the heterogeneity of where-to-go decisions from two different aspects: spatial and socioeconomic composition. We use an entropy-based measure, built on Shannon's formula, to quantify the heterogeneity of mobility traces. Here we define the spatial mobility entropy H_m(X) = −Σ_{x∈X} p(x) log₂ p(x), where x ∈ X ranges over geolocations, and the SES mobility entropy H_s(X) = −Σ_{x∈X} p(x) log₂ p(x), where x ∈ X ranges over socioeconomic classes. In the formalisation of spatial mobility entropy H_m(X), we compute a scalar for each individual trajectory containing the geographic locations of places visited by a single person. For SES mobility entropy, we replace the geographic location information with the socioeconomic classes to which the visited places belong. For both types of entropy, lower values correspond to a stronger domination of particular locations (or location SES) in the visiting pattern, signalling extensive locational or socioeconomic isolation. Given that the measure is normalised per period, the upper cut-off is 1 (absolute heterogeneity) and the lower cut-off is 0 (absolute homogeneity). The formal formulation of entropy is available in Materials and Methods (Section 4.4). As shown in Fig. 3a and b, in London we deal with four phases of the pandemic: Before Lockdown (BL), Lockdown I (L1), Reopening (R1), and Lockdown II (L2). Fig. 3a reveals the distribution of the degree of locational mixing in individual trajectories, while Fig. 3b does the same with emphasis on the socioeconomic setting of the listed locations. In both figures, the curves skew to the left (towards zero) in the first lockdown (light green), as they do in the second lockdown (dark green), pointing to a tendency towards more homogeneous visiting patterns. With respect to spatial scale, urban explorability drops once policies limiting mobility flows are implemented. Consequently, the set of visited places becomes narrower (centred on a smaller set of places) and more localised (closer to where home is located). A similar pattern also holds with regard to socioeconomic range: as the set of locations shrinks with distance, it becomes highly concentrated in the particular socioeconomic level that reflects one's own well-being. We check the magnitude of the shift by computing the average (µ) and standard deviation (σ) of the two entropy distributions for the different cities. In Fig. 3c, the initial phase of lockdown (L1) renders mobility patterns locationally more homogeneous, since spatial mobility entropy H_m(X) is lower than in the period before lockdown (BL). Spatial concentration was largest in Bogota during L1, with the average value reaching 0.35. Jakarta recorded an average spatial diversity of 0.37, while the average values in New York and London were around 0.4 and 0.5, respectively.
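A compact way to compute both entropies for one trajectory is sketched below (our construction; normalising by the log of the number of distinct observed labels is one plausible reading of "normalised by period", not a detail the paper specifies).

```python
import numpy as np
from collections import Counter

def normalised_entropy(labels):
    """Normalised Shannon entropy of a sequence of visit labels: geolocation
    ids give the spatial mobility entropy H_m, SES classes give the SES
    mobility entropy H_s. Returns a value in [0, 1], where 0 means all visits
    share one label and 1 means a uniform spread over the observed labels."""
    counts = np.array(list(Counter(labels).values()), dtype=float)
    p = counts / counts.sum()
    H = -(p * np.log2(p)).sum()
    return H / np.log2(len(counts)) if len(counts) > 1 else 0.0

trajectory_locations = ["poi_3", "poi_3", "poi_7", "poi_9"]  # one person's visits
trajectory_ses = [2, 2, 2, 3]                                # SES of those places
print(normalised_entropy(trajectory_locations))  # spatial mobility entropy H_m
print(normalised_entropy(trajectory_ses))        # SES mobility entropy H_s
```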
The reopening phase that follows (R1) does not bounce the variability of locational and socioeconomic preference back to the original pre-lockdown level, even though it moves in the direction of recovery. Compared to spatial mobility entropy, SES mobility entropy H_s(X) in Fig. 3d suffers even graver repercussions from the outbreak, as µ ranges from about 0.5 downwards. During L1, people in Bogota and Jakarta experience deeper socioeconomic isolation as H_s(X) falls below 0.2; London is close to 0.35, while New York is around 0.4. (Fig. 4 presents the impact of individual restrictions on spatial exploration H_m(X) (Fig. 4a) and socioeconomic exploration H_s(X) (Fig. 4b) as regression covariates β. In all cities except New York, the public information campaign (H1, light purple) is the most influential instrument, strongly affecting both spatial and socioeconomic exploration. The R² of the respective regression models for H_m(X) and H_s(X) differs across cities: the nine types of restrictions explain around 59% to 76% of the variance in spatial exploration, and a lower 36% to 47% in socioeconomic exploration.)

--- Mobility interventions
To this point, we have revealed residual isolation effects of the shock even after mobility restrictions were gradually lifted. However, which kind of restriction significantly contributes to this configuration is still unknown. Data on NPIs [32] contain the strictness level of each of k = 9 restriction categories over time, including the closing of main venues such as schools and workplaces. For a complete list see Table 1 in SM Section A. We weight the impact of the restrictions listed as NPIs by running a multivariate linear regression in which the dependent variable is an entropy (H_m(X) or H_s(X)) and the independent variables are the stringency levels s_k ∈ S_K of each restriction. The methodological definition of this approach is further explained in Materials and Methods (Section 4.5). Individual exploration occurs not only over the socioeconomic dimension but also across physical space; the enforcement of NPI mobility restrictions therefore also reduces the socioeconomic diversity of visited places. Indeed, the results in Fig. 4 show that the public information campaign (H1, light purple) is the most influential restriction in each city, simultaneously affecting mobility in terms of the spatial and socioeconomic diversity of visited places. However, the magnitude of the effect that the public information campaign brings to mobility is not uniform between physical and socioeconomic space. The covariate ratio, defined as β_{m,s} = β_k^{H_m(X)} / β_k^{H_s(X)}, indicates the relative impact of a type of restriction on these two aspects of exploration. Once this restriction is imposed in London, for instance, its impact on shrinking the spatial diversity of individual trajectories is 3.33 times higher; this number is 3.08 in Bogota and 3.47 in Jakarta. Meanwhile, in New York, the cancellation of public events (C3) diminishes spatial exploration 1.33 times more than socioeconomic exploration. Looking at the R², we find that the overall values are lower for the model with SES mobility entropy H_s(X) as the dependent variable compared to the one fitted on spatial mobility entropy H_m(X). We compute the ratio of the two, formally expressed as R²_{m,s} = R²_{H_m(X)} / R²_{H_s(X)}. In Bogota, the same set of NPIs explains a much higher share of the variance of H_m(X), 1.76 times more than of H_s(X). A similar range of R²_{m,s} values is obtained in London (2.10), Jakarta (1.93), and New York (1.25).
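The regression and the two ratios can be reproduced along the following lines (a synthetic-data sketch using statsmodels; the coefficient magnitudes are invented purely to make the example run, and H1 is assumed to sit at index 8 of the hypothetical design matrix).

```python
import numpy as np
import statsmodels.api as sm

# T daily observations of 9 NPI stringency levels (C1..C8, H1) and the city
# averages of the two entropies; all numbers here are synthetic.
rng = np.random.default_rng(0)
T = 200
S = rng.uniform(0, 100, size=(T, 9))                  # stringency design matrix
h_m = 0.8 - 0.003 * S[:, 8] + rng.normal(0, 0.02, T)  # H_m(X), driven by H1
h_s = 0.5 - 0.001 * S[:, 8] + rng.normal(0, 0.02, T)  # H_s(X), weaker response

fit_m = sm.OLS(h_m, sm.add_constant(S)).fit()
fit_s = sm.OLS(h_s, sm.add_constant(S)).fit()

k = 8                                             # H1 column in the design matrix
beta_ratio = fit_m.params[k + 1] / fit_s.params[k + 1]  # +1 skips the intercept
r2_ratio = fit_m.rsquared / fit_s.rsquared
print(beta_ratio, r2_ratio)                       # ~3 and >1 by construction
```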
As the results show, the composition of socioeconomic preference over places in individual visiting patterns is still largely shaped by unobserved factors other than mobility restrictions, which could indicate that socioeconomic exploration incorporates more complex dimensions than the delineation of spatial boundaries alone.

--- Discussion and conclusions
In this study, we analysed the impact of the COVID-19 outbreak on the structural preferences reflected in mobility patterns by looking at mobility dynamics in Bogota, Jakarta, London, and New York. We found that in-class visits dominate mobility patterns in every temporal snapshot, from before lockdown, through lockdown, to reopening. Patterns of assortative behaviour were also detected, as the assortativity coefficient r remained highest during lockdown. Subsequently, the arrival of reopening did not directly bring the typical mobility mixing pattern back to the original level observed before the enforcement of lockdown, indicating the existence of a residual isolation effect. We further measured the degree of residual isolation by comparing the stratification in mobility patterns between two consecutive periods (see Fig. 2a). This validated the presence of residual isolation effects, whereby within-class visits during reopening were still higher than the usual rate. Another feature of isolation in mobility presented in this study is the decreasing heterogeneity of where-to-go decisions along two distinctive aspects: spatial and socioeconomic composition (see Fig. 3). Entropy measures revealed that visits became highly concentrated in particular locations and socioeconomic classes. To understand which types of NPIs constrain mobility across time windows, we proposed a multivariate regression model comprising all mobility restrictions to examine the magnitude with which they intervene in the diversity configuration of visiting patterns. In all cities except New York, we observed that the impact of the public information campaign (H1) had the highest importance among all types of restrictions. The observed variability in magnitude could be related to the structure of the urban fabric in the respective cities as well as the level of socioeconomic well-being.

Beyond the computations demonstrated to this point, we acknowledge that stronger evidence for longer-term residual isolation could be presented if access to more recent data were available. Our latest data cover only the initial period of reopening, when NPIs and COVID-19 protocols were still at the forefront of controlling the outbreak, relying solely on behavioural conformity and attitudes towards mask wearing and social distancing, without any intervention from vaccination policy. Another limitation that we would like to underline concerns direct comparison between cities. This issue arises from the different metrics and levels of spatial resolution we use to define SES indicators, which depend strongly on data availability. This study contributes to the scientific effort to refine our understanding of the pandemic's impact on the reorganisation of mobility segregation. It allowed us to comprehensively understand the potential occurrence of residual isolation during pandemic interventions at high spatial and temporal resolution.
It also taps into a pivotal aspect of societal impact, as the additional detrimental effects induced by residual isolation might not be equally distributed across socioeconomic classes, indicating a higher vulnerability of the lower socioeconomic classes that should be better mitigated by adaptive policy design in the future. Therefore, as a future goal, we consider it important to conduct class-wise analyses to study how different classes are impacted differently.

--- Materials and methods
--- Data description
Mobility data are provided by Cuebiq, a location intelligence and measurement platform. Data were shared under a strict contract with Cuebiq through their Data for Good COVID-19 Collaborative program, in which they provide access to de-identified and privacy-enhanced mobility data for academic research and humanitarian initiatives only. Mobility data are derived from anonymous users who opted to share their data anonymously through a General Data Protection Regulation (GDPR) and California Consumer Privacy Act (CCPA) compliant framework. All final outputs provided to partners are aggregated in order to preserve privacy. The aggregation procedure is specified as data upleveling, in which some proportion of real locations is deterministically shuffled within a Census Block Group (CBG) in the US or a level-6 geohash in other countries. This protocol aims to mitigate the risk of re-identification without affecting the analysis in this study, since we infer socioeconomic status at a broader spatial delineation, namely the census tract level, as discussed further in the following section.

In the analysed dataset, the starting point for all observed cities is January 2020. Bogota retains the longest temporal observation, until May 2021, followed by London (February 2021), Jakarta (December 2020), and New York (July 2020). Each individual in every city has a set of trajectories constituting timestamps (start and end) whenever they are detected at a certain location (latitude and longitude). We focus on the mobility traces of people whose home locations are successfully identified at the census tract level, as discussed in detail in Materials and Methods (Section 4.2). In Bogota, there are approximately 55,000 people generating 25 million trajectories. The number of people varies among cities, as do the total trajectories: Jakarta (around 65,000 people / 26 million trajectories), London (almost 200,000 people / 115 million trajectories), and New York (about 277,000 people / 30 million trajectories). To check the general reproducibility of the mobility patterns in New York, we also use the SafeGraph dataset [37], which is available at a coarser resolution (census tract level) and with longer temporal coverage (until May 2021); this analysis is presented in SM Section F.

We overlay a socioeconomic layer on top of the existing mobility layer. Income-related features are suited for this purpose. In Bogota, the multidimensional poverty index [28] at the urban-section level, developed by the Colombian Bureau of Statistics (DANE), is the basis for the socioeconomic status computation. It captures a comprehensive set of dimensions of individual well-being: health, education, utilities and housing, as well as employment. A simpler poverty index, the poverty rate [29], is used in Jakarta at village-level resolution, taking the proportion of people living below a particular average monthly income.
Meanwhile, the socioeconomic configurations of London and New York are based, respectively, on total annual income recorded by the Office for National Statistics (ONS) [30] in 2015 at the middle layer super output area (MSOA) level, and on per capita income in 2018 at the census tract level taken from the American Community Survey (ACS) [31]. In each city, we group people by the income distribution in the dataset into 10 equally populated groups, from the lowest SES/poorest (1) to the highest SES/richest (10). It should be taken into account that direct comparisons between cities cannot be fully established because of the diverse characterisations given by non-identical SES indicators and the different spatial resolutions at which they are provided. Nevertheless, comparisons across periods within the same city are possible in this context. To synchronise movements along mobility points and to derive observable structural breaks in mobility patterns induced by the epidemiological outbreak and the policies that followed, we refer to the stringency index of the Oxford COVID-19 Government Response Tracker (OxCGRT) dataset [32]. We validate this against the actual implementation at the city level to ensure policy alignment between national and local governments.

--- Algorithm pipeline and inference
We construct an algorithm to detect home and POI (non-home) locations. Our methodology combines spatial and temporal attributes such as the frequency of visits, the time window of visits, and the duration of stay at given locations. We take a further step to infer the socioeconomic status of each person (based on home location) and each POI by performing a spatial projection and merging it with demographic data (average income) from bureaus of statistics. Mobility data consist of geographic locations and timestamps (trajectories), while demographic data cover the average income of a given spatial unit (e.g., census tract). We build an algorithm to separate home locations u and POI locations p and to identify the inferred income based on their spatial delineation. Discretisation of the inferred income distribution results in two separate SES labels: SES People i and SES POI j.

Home Location: Detecting home locations is a primary step in dealing with mobility data, because this spatial identifier serves as intermediary information that allows heterogeneous sources of data, including census data, to be coupled. Various decision rules have been developed to identify where people reside. In the mobility literature, single-rule home detection algorithms are widely applied to both continuous (e.g., global positioning system/GPS) and non-continuous location traces (e.g., call detail record/CDR data) [25][26][27]. Home is defined as the location where the highest proportion of activity occurs during night hours, with variations regarding the time window. To compensate for the unavailability of ground truth to be used as a validation set, we design a more conservative algorithm for determining home locations by combining these criteria: a point where an individual is mostly located between 9 PM and 6 AM for an uninterrupted duration of at least 6 hours. This results in home locations being successfully identified for 50% of the people in our dataset.
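One way to operationalise this home-detection rule is sketched below (our rendering; the column names and the treatment of stays spanning midnight are assumptions, and real pipelines would likely handle the 6 AM boundary more carefully).

```python
import pandas as pd

def detect_home(pings, min_hours=6):
    """Home = the location where a user is most often observed between 9 PM
    and 6 AM for an uninterrupted stay of at least `min_hours` hours.
    `pings` needs columns: location_id, start, end (pandas Timestamps).
    Returns None when no stay qualifies."""
    def night(ts):
        return ts.hour >= 21 or ts.hour < 6

    long_enough = (pings.end - pings.start) >= pd.Timedelta(hours=min_hours)
    at_night = pings.apply(lambda r: night(r.start) and night(r.end), axis=1)
    candidates = pings[long_enough & at_night]
    if candidates.empty:
        return None
    return candidates.groupby("location_id").size().idxmax()
```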
POI Location: Apart from home, individual human activity revolves around other areas for various reasons, including work. Trips between home and work locations dominate daily mobility, while visits to other locations are broadly distributed with short inter-event times [38]. We set the criterion for work POI locations as places other than home where people with identified home locations are present during weekdays from 9 AM until 3 PM. The remaining locations, which fall into neither the home nor the work category, are labelled as others.

--- Socioeconomic Status (SES): We assign an SES label to every individual and POI based on socioeconomic data. The first step for SES People is to identify the socioeconomic features of the area where they live (home location). Similarly, SES POI is inferred by mapping the areas where the points (work and other locations) are spatially positioned. We sort the values in ascending order and split them into equally populated bins of 10 SES labels, making SES 1 the poorest and SES 10 the richest.

--- Mobility matrix
In Section 2, we rely on the basic formulation of stratification extracted from the mobility stratification matrix M_{i,j}, which is defined on the mobility network G = (U, P, E). The network G is a bipartite graph that connects a person u from the node set u ∈ U and a POI p from the node set p ∈ P if u visited p, represented as a link e_{u,p} ∈ E. The frequency of visits is counted as the edge weight w_{u,p}. Stratification is introduced in the network by assigning a class membership c_u = i ∈ C_U to every person and c_p = j ∈ C_P to every POI based on their inferred income. As defined earlier in [23], we have:

\[ M_{i,j} = \frac{\sum_{u \in U,\, c_u = i}\; \sum_{p \in P,\, c_p = j} w_{u,p}}{\sum_{j' \in C_P}\; \sum_{u \in U,\, c_u = i}\; \sum_{p \in P,\, c_p = j'} w_{u,p}}, \tag{1} \]

where the probability of visit frequency (the matrix elements a_{ij}) is generated by column-wise normalisation (over SES People i) of the frequency matrix. For an example of a mobility stratification matrix, see Fig. 1. Given a pair of mobility stratification matrices M_{i,j} in two consecutive periods, we define the mobility adjustment matrix S_{i,j}, whose elements b_{ij} entail the difference in the proportion of visit frequencies. More formally:

\[ S_{i,j} = M^{t_1}_{i,j} - M^{t_2}_{i,j}, \tag{2} \]

where t_1 denotes the initial period and t_2 the succeeding one. For instance, if we have three periods, namely Before Lockdown (BL), Lockdown (L1), and Reopening (R1), we can generate three matrices S_{i,j}:

\[ S^{BL-L1}_{i,j} = M^{BL}_{i,j} - M^{L1}_{i,j}, \tag{3} \]
\[ S^{L1-R1}_{i,j} = M^{L1}_{i,j} - M^{R1}_{i,j}, \tag{4} \]
\[ S^{BL-R1}_{i,j} = M^{BL}_{i,j} - M^{R1}_{i,j}, \tag{5} \]

where S^{BL-R1}_{i,j} shows the difference between the period before the enforcement of lockdown and reopening (the removal of some mobility restrictions post-lockdown). The result of this computation is provided in Fig. 2. The degree of socioeconomic isolation is computed via the assortativity of the mobility stratification matrix. This mobility assortativity coefficient r [22,34,35] is computed as the Pearson correlation between the row index i ∈ c_u and the column index j ∈ c_p of the normalised visit matrix N_{i,j}:

\[ r = \frac{\sum_{i,j} ij\, N_{i,j} - \sum_{i,j} i\, N_{i,j} \sum_{i,j} j\, N_{i,j}}{\sqrt{\sum_{i,j} i^2 N_{i,j} - \big(\sum_{i,j} i\, N_{i,j}\big)^2}\; \sqrt{\sum_{i,j} j^2 N_{i,j} - \big(\sum_{i,j} j\, N_{i,j}\big)^2}}. \tag{6} \]

Values closer to 1 indicate a higher concentration of visits to venues within one's own socioeconomic range, while the lower cut-off value of −1 reveals a tendency to visit places outside one's own class. If the value is equal to 0, the measure indicates a dispersed visiting pattern across classes, without any structural preference regarding the socioeconomic status of places.
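To make Eq. (6) concrete, a worked toy example with two SES classes (our own illustration, not data from the study) reads:

```latex
% Worked toy example of Eq. (6); the normalised visit matrix is
%   N = [ 0.4  0.1 ]
%       [ 0.1  0.4 ],  with classes i, j \in \{1, 2\}.
\[
\textstyle\sum_{i,j} i\, N_{i,j} = \sum_{i,j} j\, N_{i,j} = 1.5, \qquad
\sum_{i,j} ij\, N_{i,j} = 0.4 + 0.2 + 0.2 + 1.6 = 2.4,
\]
\[
\sum_{i,j} i^2 N_{i,j} = \sum_{i,j} j^2 N_{i,j} = 2.5,
\qquad
r = \frac{2.4 - 1.5 \times 1.5}{\sqrt{2.5 - 2.25}\,\sqrt{2.5 - 2.25}}
  = \frac{0.15}{0.25} = 0.6,
\]
% i.e. a strongly diagonal matrix yields markedly assortative mixing.
```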
--- Mobility entropy
Mobility entropy is measured on the basis of the generic Shannon formula [39]. In the context of mobility, entropy can be employed to quantify the predictability of a visiting pattern. Generally, higher entropy corresponds to lower predictability, eliciting a more heterogeneous preference of places to visit in an individual trajectory. First, we define the spatial mobility entropy H_m(X), where m denotes spatial mobility at the individual level (Fig. 5a), as:

\[ H_m(X) = -\sum_{x \in X} p(x) \log_2 p(x) = \mathbb{E}[-\log_2 p(X)], \tag{7} \]

where x is a discrete random variable representing a geographic location from the set X of all possible POI locations visited by a person. We replicate the above formulation to measure the SES mobility entropy H_s(X) (Fig. 5b):

\[ H_s(X) = -\sum_{x \in X} p(x) \log_2 p(x) = \mathbb{E}[-\log_2 p(X)], \tag{8} \]

where x is replaced by a discrete random variable representing the SES of the POIs that a user visited. The value is normalised for each period; therefore the maximum value of 1 and the minimum value of 0 are comparable across temporal snapshots. The upper-bound value H_m(X) = 1 implies sporadic visits to heterogeneous POI locations, while the lower-bound value H_m(X) = 0 indicates a homogeneous visiting pattern over a rather limited set of POI locations. In parallel, H_s(X) = 1 (heterogeneous SES POI) indicates visits to places located in various socioeconomic classes, and H_s(X) = 0 signifies a visiting pattern characterised by a strictly preferred socioeconomic class (homogeneous SES POI).

--- Restriction impact
We aim to identify the kinds of restrictions that contribute significantly to changes in the diversity of visiting patterns and to quantify the magnitude of those interventions. To tease out the effect of each type of restriction, we fit a multivariate linear regression model. There are k = 1, ..., 9 restrictions listed as NPIs, respectively: closings of schools and universities (C1), closings of workplaces (C2), cancelling public events (C3), limits on gatherings (C4), closing of public transport (C5), orders to stay at home (C6), restrictions on movement between cities/regions (C7), restrictions on international travel (C8), and presence of public information campaigns (H1). The stringency value S for every restriction in each temporal snapshot is obtained from the OxCGRT dataset and used as an independent variable. The dependent variable is one of the two types of mobility entropy, computed separately: geographic space-based H_m(X) and socioeconomic space-based H_s(X). To quantify the impact magnitude of a single restriction k ∈ K at timestamp t ∈ T, we fit the data to the forms:

\[ H_m(X)^t \sim \{S^t_k\} \tag{9} \]

and

\[ H_s(X)^t \sim \{S^t_k\}. \tag{10} \]

In the equations above, {S^t_k} denotes the set of variables representing each type of mobility restriction in the NPIs. The regression covariates indicate the magnitude of the restrictions' impact on segregation. Specifically, negative covariate values imply a reduction in the degree of individual spatial and socioeconomic exploration due to the respective mobility restriction. The ratio between a pair of restriction coefficients therefore allows us to compare impact sizes.

--- Supplementary Materials
--- A Data and Pipeline
Human mobility captures multi-layered information at high spatiotemporal resolution. Beyond physical movement from one point to numerous others, it reflects individual behavioural dynamics in exploring spatial boundaries. In order to make meaningful observations about individual mobility patterns within the urban landscape, we map out the socioeconomic conditions of people and the places they visit by inferring income-based metadata gathered from the statistics bureaus of the respective locations.
This method allows us to comprehensively analyse two aspects of individual trajectories over places: the spatial and the socioeconomic status (SES) distribution. We construct a pipeline comprising data collection (mobility, demographic, and epidemiological data), data processing, and data analysis, as depicted in Fig. 6.

--- B.1 All visits
In the mobility stratification matrices, normalisation is performed by own SES (column-wise). Fig. 7 reveals the generic pattern in which assortative mixing increases during lockdown, as an increasing r is found across cities. It reflects the extent to which individuals respond to the pandemic by reorganising their typical mobility configuration. Where more than one period of lockdown appears (L1 and L2), the first seems to be stronger in inducing the isolation effect. As the reopening (R1) phase starts, assortative visits remain higher than the level before lockdown (BL).

--- B.2 Without home area visit
We repeat the procedure used to generate Fig. 7 after excluding local visits to one's own neighbourhood, generating the mobility stratification matrix for visits outside the home area, M^c_{ij}. This step serves as a robustness control for the persistent assortative mixing. In Fig. 8 we see that the first lockdown is still the most stringent, because it alters preferences towards visiting more places within one's own socioeconomic class. Compared to Fig. 7, the assortativity coefficient r is in general considerably lower, indicating that short-distance visits in the surrounding neighbourhood account for a considerable proportion of the mobility pattern.

--- C Mobility adjustment matrix
The mobility adjustment matrix S_{ij} is constructed to detect indications of a residual isolation effect. We operationalise the computation in Section 5.2, in which the difference in the proportion of visit frequencies between two consecutive periods is made visible in Fig. 9. None of the cities in this study exhibits full recovery after the occurrence of reopening, as the bin colours remain in brown shades, indicating a larger visit ratio to places in one's own socioeconomic class compared with the period before lockdown. This leads to the notion of residual isolation induced by the COVID outbreak. Interestingly, BL-R shows a segregated visiting pattern in which, before lockdown, people tended to explore more places in higher socioeconomic ranks (top rows, green shades), while during reopening places in lower classes contributed more to the visit proportion (brown shades) in every city. Beyond that, Bogota exhibits bimodal segregation, where the dominant visits before lockdown happened not only in the upper class but also in the lower class.

--- D Mobility entropy
--- D.1 Spatial mobility entropy
The heterogeneity of places visited by individuals is quantified by the spatial mobility entropy H_m(X) proposed in Section 5.3. The value may disperse towards 0, signifying a strict preference for particular locations over the rest and making the trajectory more spatially homogeneous; in contrast, as the value gets closer to 1, no strict preference is presumed and visits are widely distributed across locational space. (Figure 10: Mobility adjustment matrix for visits outside the home area S^c_{ij}. Every mobility stratification matrix for visits outside the home area M^c_{ij} is paired with the one in the following period. There are three patterns to detect: no difference between the two periods (white), dominant visits in the first period (green), and dominant visits in the second period (brown).) We find that people become
more restricted in deciding which locations to visit, as the average value of H_m(X) hits its lowest point in all cities. The introduction of the reopening phase does not directly bounce the value back to the normal pre-lockdown level, in line with the conditions suggested in Fig. 7 and Fig. 9.

--- D.2 Socioeconomic mobility entropy
In this section, we redo the computation of trajectory heterogeneity in terms of the socioeconomic factor, based on the entropy formulation in Section 5.3. To measure the socioeconomic mobility entropy H_s(X), we substitute the geolocation feature with the SES of places. The result in Fig. 12 confirms the previous finding that people have stricter preferences over places during lockdown. This goes beyond spatial boundaries, since the socioeconomic profile of those places is now also heavily skewed, making the average value of H_s(X) touch its lowest record in comparison to other periods. It thereby reaffirms the conditions stipulated in Fig. 7, Fig. 9, and Fig. 11. (Figure 12: Socioeconomic mobility entropy H_s(X). After replacing the geolocation of places in individual trajectories with SES information, we recompute the entropy. As the value skews towards 0, the visiting pattern tends to be concentrated on a particular SES; otherwise it is somewhere close to 1.)

--- E Robustness of mobility adjustment
We perform a robustness check of the isolation effect by applying the Kruskal-Wallis H test (non-parametric one-way ANOVA) to the mobility stratification matrices both before (M_{i,j}) and after removing visits to one's own home area (M^w_{i,j}). The null hypothesis (H0) is defined as an equal median between the period before lockdown and a period that comes after. If the p-value is smaller than the significance level α = 0.05, H0 is rejected in favour of the alternative hypothesis (H_a); otherwise H0 is retained. Tables 3 and 4 provide justification for the presence of different degrees of the isolation effect, due to the variability of mobility in response to the dynamics of mobility restrictions. New York stands out with a strikingly opposite pattern, as a statistically significant difference is seen only after removing local visits to the area where home is located, while the other cities exhibit such a pattern for visits to any locations.

--- E.1 All visits
--- E.2 Without home area visit
(Tables 3 and 4 report the test results. When removing visits to the home area across pairs of policy periods (M^w_{i,j}), mobility patterns differ significantly between before and during the first lockdown (BL & L1) in Jakarta (for all elements), with p-values well below the significance level α = 0.05, while this is not apparent in the other urban areas. A similar pattern is visible between before lockdown and the first reopening (BL & R1). After removing home-area visits, strict isolation along the diagonal elements is not found anywhere, which elevates the contribution of local visits around home locations to the isolation effect.)
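The test itself is standard; a minimal sketch with scipy (ours; the Dirichlet toy matrices merely stand in for real stratification matrices) reads:

```python
import numpy as np
from scipy.stats import kruskal

def isolation_shift_test(M_before, M_after):
    """Kruskal-Wallis H test (non-parametric one-way ANOVA) on the elements of
    two mobility stratification matrices. Under H0 the two samples share the
    same median; p < 0.05 rejects H0."""
    H, p = kruskal(np.ravel(M_before), np.ravel(M_after))
    return H, p

rng = np.random.default_rng(1)
M_bl = rng.dirichlet(np.ones(10), size=10)        # toy baseline matrix
M_l1 = rng.dirichlet(np.ones(10) * 0.3, size=10)  # toy, more concentrated matrix
print(isolation_shift_test(M_bl, M_l1))
```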
--- F Manhattan Effect
New York is made up of five boroughs: Manhattan, Brooklyn, Queens, Bronx, and Staten Island. Among them, Manhattan is the centre of human activity agglomeration. As the borough with the highest economic pull factors in New York, Manhattan is massively affected, because the mobility disruption hit not only the movement of people inside the borough but also the inter-borough movement usually found in commuting patterns to workplaces. People who reside in Brooklyn and Queens, for example, stopped commuting to Manhattan as many of them switched to working from home. This is also reflected in the lower use of public transportation and the lower level of road traffic. Segregation patterns changed as a response to the mobility restrictions imposed due to the pandemic. In Section 2.2, we saw that the mobility assortativity r in New York is relatively flat compared to other cities such as Bogota, Jakarta, and London, but a more substantial mechanism at work shaping urban human dynamics might contribute as well. In this section we take two strategies to disentangle the spatial scale. First, we focus on the area of Manhattan, where activities and mobility are heavily concentrated. Then, we analyse mobility within each of the boroughs that together constitute New York (intra-mobility), followed by mobility between pairs of boroughs (inter-mobility).

Mobility stratification in Manhattan is visualised as a matrix in Fig. 13a. Homophilic mobility, defined as movement within one's own socioeconomic class, is 26% higher during lockdown than before lockdown. The emergence of the reopening phase does not directly bring back the normal condition, since it still exceeds the original level by 20%. Even after removing local visits (Fig. 13d), the pattern persists. This finding is consistent with the global pattern previously captured for the other cities in this study, such as Bogota, Jakarta, and London (see Section B.1). Taking a pair of matrices from two consecutive periods, we obtain another matrix showing the mobility adjustment, as seen in Fig. 13b. Measures taken during lockdown affect individual preferences regarding mobility. (Figure 13: Mobility stratification matrix M_{i,j}, mobility adjustment matrix S_{i,j}, and mobility assortativity r. We impose an additional layer of filtering in New York by only looking at locations within the Manhattan boundary. On the left (Fig. 13a-c), we take into account all visits, while on the right (Fig. 13d-f), we remove local visits to the home area. Assortative mixing touches its highest level during lockdown (r = 0.656). After reopening, the average residual isolation effect µ_re is still 12.8% higher compared to the period before lockdown.) There is an increase of at least 15% in visits to places within one's own socioeconomic range (see left matrix). Reopening happens at some point; however, no full recovery occurs. We still find that the average value of the diagonal elements is 12% higher than before lockdown (see BL-R1). When disregarding dominant local visits to one's own neighbourhood (Fig. 13e), the average residual isolation effect µ_re during reopening still surpasses the pre-lockdown baseline by 19%. Overall, the residual isolation effect remains prominent in Manhattan.

A sliding window algorithm is implemented to generate Fig. 13c and Fig. 13f: for every one-week window with a one-day slide interval, a mobility matrix is generated and its mobility assortativity r computed. For both all visits (Fig. 13c) and visits to places other than one's own neighbourhood (Fig. 13f), the increase in r overlaps with the lockdown period. Computations for mobility in New York based on the Cuebiq dataset (Fig. 14a) are reproduced for the SafeGraph dataset (Fig. 14b). The two agree in terms of the proportions of mobility categories, in which individual flows within a single borough (intra-mobility) surpass the fluxes across different territories (inter-mobility). The former is presented in Fig. 14c (Cuebiq) and Fig. 14d (SafeGraph). A strikingly mirrored degree of mobility assortativity r within Manhattan is seen, ranging from 0.6 before the implementation of lockdown to 0.8 in its aftermath.
While the value of r is slightly different in the Bronx (light green), Brooklyn (orange), Queens (purple), and Staten Island (pink), the pattern stays the same: increasing segregation since the lockdown period. One reason is that once people stay in residential areas, they are bounded not only by spatial scale but also by the socioeconomic homogeneity of the surrounding neighbourhoods. On the contrary, individual flows across boroughs (inter-mobility) exhibit decreasing segregation, as shown in Fig. 14e. (Figure 14: Mobility in New York in the Cuebiq dataset (Fig. 14a) and the SafeGraph dataset (Fig. 14b). Mobility assortativity is computed at the census tract level based on the OD matrix, showing a similar pattern for intra-mobility mixing, namely increasing segregation, in the two datasets (Fig. 14c-d). Interestingly, segregation in inter-mobility (mobility between boroughs) tends to be lower instead, for instance in the mobility flow between Manhattan and the Bronx (Fig. 14e).) In the undirected mobility network, mobility recorded in the Cuebiq dataset (dark green) and the SafeGraph dataset (dark blue) indicates the emergence of disassortative mixing, with values lower than 0, implying that people visit places that differ from their own socioeconomic status whenever they need to step out of the territory/borough where they reside, for multiple mobility reasons (e.g., work or school).

--- Author contributions: RMH, VS, MGH and MK conceived the study. RMH and MK designed the methodology and analysed the data. RMH, VS, MGH and MK wrote the manuscript with input from all co-authors.
People may assume that the counseling profession functions with a shared set of values that promote well-being and mental health for individuals, families, and communities across the globe. Common values, such as those described in training programs, ethical codes, and other areas, reflect the approach and direction for providing professional counseling services among counseling professionals throughout the world. The researchers designed this qualitative study using a phenomenological approach to explore how counseling values are experienced and implemented across various cultures. The 16 participants of the study include counseling professionals from different countries to increase representation from eight regions of the world. The researchers recognize valued approaches commonly identified among the participants implementing counseling services, including marital and family counseling, child and school counseling, faith integration, indigenous practices, and person-centered safe spaces. While each of these valued approaches is described in detail, final applications of the data offer proposed steps to improve the advancement of counseling on a global scale, including strategies for transcultural counseling training, resource adaptability, and bilateral development in the profession.
creativity when engaging diverse populations around the world (American Counseling Association [ACA], 2014; Ratts et al., 2016). The effort to develop culturally competent counselors remains a priority within the profession, and counselors in training are encouraged to enhance characteristics of flexibility, creativity, tenacity, vision, cultural humility, a desire to learn, and a willingness to eschew traditional ethnocentric views of counseling as often demonstrated in Western counseling strategies (AEgisdóttir & Gerstein, 2005; Consoli et al., 2006; Forrest, 2010; Heppner, 2006; Leung, 2003; Tang et al., 2012). Scholars have identified the lack of international standards for counseling to be an ongoing challenge for the profession (Forrest, 2010; Schofield, 2013; Stoll, 2005; Szilagyi & Paredes, 2010), along with the need for more adequate training and evaluation of counselor competency globally (Forrest, 2010; Jimerson et al., 2008).

The COVID-19 pandemic impelled a keen awareness of the interconnectivity between global cultures. While the difficulties of this physical virus expanded globally, the decline of mental health also became pronounced in the wake of this worldwide medical crisis, with as much as a 30% increase in symptoms of anxiety and depression (Panchal et al., 2022). The World Health Organization (WHO, 2022) reported about the pandemic, "Depression is one of the leading causes of disability. Suicide is the fourth leading cause of death among 15-29-year-olds" (para. 1). The WHO went further to emphasize the urgency of addressing mental health to avoid inhibiting necessary global supports. People with mental health issues die earlier and experience more human rights violations, and mental health services are not available in many places where they are desperately needed (WHO, 2022). To reinforce applications of mental health support from other countries, ongoing research may help inform practice in settings that have not encountered what professional counseling has to offer.

For the purposes of this study, counseling values are defined as the common principles, standards, and policies that guide ethical practices and assumptions within the counseling profession. While counseling professionals are defined differently in various cultural contexts, a working definition is articulated later in the recruitment criteria for the study and may generally be understood as professionals recognized in various countries to offer counseling services. Such counseling professionals may be represented under different professional identities or labels depending on the cultural context. These values frequently appear as codes of ethics, such as those promoted by the American Counseling Association (ACA, 2014), the British Association for Counselling and Psychotherapy (BACP, 2018), Persatuan Kaunseling Malaysia (PERKAMA; Ishak et al., 2012), and the Australian Counselling Association (AUCA, 2022). Further exploration within the literature provides greater definition to how counseling values may appear across cultures.

--- International Counseling Values in the Literature
This review of the literature considered professional multicultural counseling practices to establish a foundation for exploring the common values among counseling professionals throughout the world. Scholarly and authoritative content on international counseling continues to expand, with cultural competence remaining an important focus of attention for ethical practice around the world (ACA, 2014; AMHCA, 2020; BACP, 2018; Ishak et al., 2012).
Within this standard, the application of cultural competence in counseling is often focused on supporting clients domestically who have international backgrounds. Researchers in the literature identified these applications as highly valuable, yet, based on the findings of this study, the applications retained a limited scope when considering how to apply principles across other cultural contexts.

--- Contextual Awareness
Hook and Vera (2020) described current global themes in mental health through a study researching leaders in international counseling psychology, which included attention to holistic health, cultural relevance, partnerships, collaboration, and sustainability. They observed that counseling professionals offered greater benefit when responding to expressed community needs rather than the needs outside professionals may assume. They highlighted how their findings suggested research and design methods also must be relevant to the local contexts counselors may be investigating (Hook & Vera, 2020). Reinforcing this point, Koç and Kafa (2019) explained how three forms of psychotherapy appeared evident across cultures: (1) imported Western-origin psychotherapy with some spontaneous alterations to accommodate culture, (2) systematic adaptation of psychotherapy methods according to the needs of people in a specific culture, and (3) models that are products of the cultures themselves. They further described how the practice of psychotherapy based on needs in the culture has limited research that takes culture into account. These examples suggested counselors will benefit from applying contextual awareness and different counseling approaches internationally (Hook & Vera, 2020; Koç & Kafa, 2019).

--- Adaptive Concepts
Counseling concepts that hold value across cultural expressions may appear differently than expected. Chen and Hsiung (2021) found that many student therapists in Taiwan were challenged to articulate and operationalize the essence of the self (i.e., self-concept, self-identity, self-actualization), which is a concept that predominates in many Western therapy models and classroom settings. They recommended preserving the general concepts but adapting them with terms like "self in relation to others" and "self in context," which were more culturally familiar. Several contextual factors influenced the Chinese counseling students' engagement in self-reflection in a study conducted in Taiwan, including conforming to collectivist values, valuing academic success and filial piety, saving face in relationships, and observing myths about the helping professions (Chen & Hsiung, 2021). With awareness of how these tendencies influenced these Chinese counseling students, Chen and Hsiung (2021) discovered that a course on counselor self-awareness and self-care improved the Chinese counseling students' engagement with these concepts. In a similar way, Matthews et al. (2018) described a study in which higher racial identity positively correlated with multiculturally competent skills. A strong, culturally appropriate self-awareness offered clear benefit. Other ways to adapt to different cultural settings included the development of the therapeutic alliance in the counseling relationship. Lee et al. (2019) described how counselors need to prioritize culturally sensitive practices in the therapeutic alliance for cross-cultural dyads, which may include negotiating language and various understandings.
Cross-cultural training may help counselors adapt to various cultural settings, yielding adaptive traits such as cultural humility, necessary self-analysis, collaboration with service recipients, perseverance, communication skills, and supervision (Hook & Vera, 2020). Other ways that professionals may develop adaptive concepts for counseling included, but were not limited to, increasing cultural awareness (Hays et al., 2010), increasing awareness of varying cultural attitudes toward the counseling profession (Al-Krenawi et al., 2009; Young et al., 2003), emphasizing the importance of therapeutic presence or attunement (Srichannil & Prior, 2014), and cultivating emotional intelligence in counselor education (Miville et al., 2006). Leung (2003) further promoted counselor development with increased opportunities for international travel experiences, increased funding for programs with an international or cross-cultural focus, and a shift in admission criteria to place greater focus on internationalism, including bilingualism, living abroad, international travel, and other life experiences.

--- Diversity Within Groups
Counselors demonstrated an easier time offering counseling skills across multiple cultures or identities when they developed an awareness of cultural best practices. Koç and Kafa (2019) highlighted how incorporating indigenous practices can be difficult in any counseling setting, especially because there may be a lack of integration of counseling methods between various subcultures or ethnic groups even within the same country (i.e., between Aboriginal and non-Aboriginal populations). Many countries have called for greater synergy between modern and traditional methods as a result, so that there is not just one focus to the exclusion of the others (Koç & Kafa, 2019). One example of this multiple focus for diverse groups included the "Clubhouse model" described by Agner et al. (2020), which promoted the use of day programs that fostered social support and activity for people living with severe mental illness. Such programs occurred in Hawaii, and Agner et al. (2020) described the diverse themes important to wellness involving connection to place, connection to community, connection to a better self, and connection to past and future. These connections to wellness from this study reflected themes consistent with indigenous cultural values. Multiple studies addressed ways for counselors to thoughtfully approach cross-cultural populations, and researchers described important concepts, like emotional intelligence, to communicate safety and reinforce the therapeutic alliance despite cultural differences (Duff & Bedi, 2010; Miville et al., 2006; Srichannil & Prior, 2014; Young et al., 2003). Duff and Bedi (2010) emphasized the importance of the relationship when engaging diverse groups, and of carrying a posture of attention to detail, honesty, and physical calmness as significant ways for the counselor to communicate care to their client in cross-cultural settings. Researchers also noted that participants identified personal qualities as the paramount tool of the counseling process, even over technical skills or theoretical orientation (Bojuwoye, 2001; Srichannil & Prior, 2014). The purpose of the following study is to explore the common values of counseling that exist among counseling professionals in a variety of international cultural contexts.
The study employed a phenomenological approach to understand how the unique experiences of these counseling professionals can inform awareness, practice, and training for multicultural and international counseling practice.

--- Methodology
This qualitative study reflects a phenomenological theoretical approach to explore the experiences of 16 different counseling professionals serving in a variety of international settings. Researchers applied concepts of Interpretative Phenomenological Analysis (IPA) more specifically, because the approach helps guide the exploration of both participant thoughts and experiences (Cook et al., 2016; Smith & Osborn, 2004). As Palmer et al. (2010) described, "the aim of IPA is to understand and make sense of another person's sense-making activities, with regard to a given phenomenon, in a given context" (p. 99). Though neglecting the social context to focus too heavily on individual experiences has been a criticized risk of IPA (Smith, 2011), the researchers worked to reduce this risk by focusing the research questions directly on the unique cultural experiences and perspectives of the participants. The qualitative method offers benefit to this effort, as Gergen et al. (2015) described: "Added to the goal of prediction are investments in increasing cultural understanding, challenging cultural conventions, and directly fostering social change" (p. 1). A phenomenological theoretical framework allowed the research team to gain further understanding of the unique experiences and perspectives of each participant and their setting, while observing common themes that emerged within the data. The most common method of acquiring qualitative data through IPA is thorough interviews (Smith, 2011), which were conducted through private and secure online video software while participants remained in the comfort of their natural settings throughout the recruitment and interview process. IPA studies are found throughout the health psychology and mental health literature (Smith, 2011), with similar models used to identify cultural needs within mental health training programs (Thomas & Brossoie, 2019), understand international counseling doctoral student preparation (Li & Liu, 2020), and explore social justice and multicultural competency in counseling school developmental models (Cook et al., 2016).

The primary research question for this study examined, "How do counseling professionals connected through international counseling associations experience counseling professional values within different cultural contexts?" From this primary focus, participants were asked the following sub-questions:

1. How do counseling values relate to the participant's culture?
2. What needs of the participant's culture would benefit from greater attention from the counseling profession?
3. How can the counseling profession best meet those needs?
4. What are key ways the participant's culture can enhance or inform counseling values?
5. What are natural venues from which counseling practice is accepted and valued most readily in the participant's culture?
6. What else can the participant share about experiences with integrating counseling values in the participant's culture?

--- Researcher Roles
The research team included investigators from the Counseling Department of The Family Institute at Northwestern University.
The primary researcher also served as a core faculty member and research sponsor, who organized the team of graduate counseling students to process and evaluate the data for the study. All researchers received proper social research ethics training through the required sources at Northwestern University to ensure compliance with confidentiality and privacy standards, and the study was approved by the Institutional Review Board (IRB) at Northwestern University (#STU00207061). Each researcher provided equal insight and reflection on the development of this study. These contributions included researching the literature, transcribing and analyzing data, and identifying themes for further discussion and application from the counseling professional participants of the study. Because of the nature of the project and the inclusion of participants connected through international counseling associations, some of the participants had previous professional association interactions with the primary researcher. The second and third authors contributed to the study through researching the literature, providing input to the writing, and offering consultation throughout the data analysis process described.
--- Procedures
The primary researcher conducted 60-minute interviews with 16 counseling professionals around the world. Recommended sample sizes for a qualitative study with a phenomenological framework can range from as few as six participants (Schreiber & Asner-Self, 2011) to as many as 10-15 participants (Johnson & Christensen, 2012). Smith (2011) emphasized how the intensity of the analysis process with IPA results in the sufficiency of smaller sample sizes. A total of 16 participants were chosen for this study in observation of the eight regions identified by IAC, which employs a model of representation on the Executive Council (EC) by preserving a voice of leadership for representatives from Africa, Asia, the Caribbean, Europe, Latin America, the Middle East, North America, and Oceania (IAC, 2022b). Selecting two participants from each region provided a purposive sample with at least two perspectives per region. Participants originated from the countries where they currently lived and worked in professional counseling, and two different countries were represented within each region to enhance the voice of those areas. The goal of this study was to have a diverse population of counseling professionals who could speak knowledgeably about counseling values in relation to the profession globally. To ensure equal representation of participants throughout the eight regions of IAC, researchers used a strategy of purposeful snowball sampling to acquire willing participants who could inform the study (Bogdan & Biklen, 2007). Researchers asked each participant for referrals to additional counseling professionals who might be willing to participate in this study. The interviews were conducted through private and secure online video software, and participants remained in the comfort of their natural settings throughout the recruitment and interview process. The interviews lasted 60 minutes, were recorded with participant consent, and explored the specified research question and sub-questions identified. All associated data were stored on a private and secure data platform only accessible to the research team, who transcribed each interview, reviewed the content, and analyzed the data. All the interviews were conducted in English.
For two participants, another counseling professional proficient in both English and the language spoken by the participant assisted in conducting the interview. These interpreters also assisted the participants with completing the study's consent form and demographics form.
--- Participant Criteria
Because the definitions of counseling vary across cultures and countries, the study focused on what constitutes a professional counselor within the country where participants live and practice, as observed in their local culture. The National Board for Certified Counselors-International (NBCC-I, 2012) described this approach to determine eligibility of credentialing candidates internationally based on five universal criteria: (1) formalized counselor education, (2) supervised counselor experience, (3) assessment-based credentialing, (4) standards of professional practice and conduct, and (5) continuing education requirements. Individuals excluded from the study sample included anyone who did not meet the criteria for all five of these categories defined by the NBCC-I, which may be specified and interpreted differently between countries. Most of the interviews proceeded with participants who have a working knowledge of English; as noted above, two interviews proceeded through the use of an interpreter, who also assisted with the consent and demographics forms. The demographics form explored details of the participants, verifying the NBCC-I (2012) criteria as defined in the context of their countries. The specific details of each participant are not disclosed to ensure confidentiality, but the demographics offer greater detail to the professional pool of participants. Each participant reported descriptions that provide context for the following: highest degree; current title or position; licenses, certifications, or credentials in counseling or a related field; years of experience in professional counseling; country of origin; country of current professional service and practice; counseling domains included with professional experience; primary counseling domain; and identified gender (see Table I). Participants were guaranteed that their privacy would remain secure and that only general findings would be shared in relation to their data.
--- Recruitment
To preserve an equal diversification of voices among the counseling professionals, stratified purposeful sampling was initiated when the primary researcher sent an e-mail to the EC members of IAC (2022b). The EC members were asked to consider participation and recommend any other professional counselors from their region they believed fit the criteria of the study. The email addresses of the EC members throughout the world were obtained through IAC interactions. Once identified through this method, potential participants received a direct e-mail invitation with a description of the study. Those who responded and agreed to participate received a follow-up e-mail to set up a time for the interview. When an agreed time was established, a final electronic calendar invitation was sent confirming the time of the interview, along with the links for their consent, their demographic information, and the video meeting link for the interview.
The consent form included details about the study, how participant information would be used, the risks and benefits of participation (no financial compensation was included), contact information to reach the primary researcher or the university IRB department, and verification that participants could withdraw their information at any time. Once the interviews were completed, the researchers sent a follow-up email to thank the participants for the interview and verify their ability to contact the research team at any time. This five-stage process reflected the following steps:
1. Request referrals from the IAC EC.
2. Send direct participation requests to potential participants.
3. Send invitation to schedule the interview.
4. Send calendar invitation email to conduct the interview (with links to complete consent forms and demographic forms electronically).
5. Send follow-up email expressing thanks and offering contact information.
--- Data Analysis
The researchers collected, reviewed, and analyzed the interview data through a process of coding each interview transcript. Interpretation of the data included an inductive method using both first and second levels of analysis to highlight common experiences of the participants interviewed (Bogdan & Biklen, 2007). Phenomenological approaches used to code the data in this study include descriptive coding (first level), open coding (second level), and theming (third level) (Flynn & Korcuska, 2018). The first level involved coding the content of the interview based on basic descriptions of the responses throughout each transcript. The second level involved the researchers going back over the data to code categories that had emerged from the data. These coded concepts were compared across all interviews to identify core themes. The third level resulted in condensing and grouping the categories that emerged into themes that may be reported and discussed further. At each step in the process, each researcher reviewed each transcript independently and in great detail before coming together to compare findings in a collective researcher discussion. Each of the three levels of analysis was infused with mutual consultation in researcher discussions to distill the essence of the resulting themes, as reflected in Figure I. The demographic data completed for each interview provided additional layers of detail that helped describe the participants of the study and verify their qualifications for discussing the field.
--- Reliability, Validity, Generalizability, and Trustworthiness
The researchers employed a peer review system of reviewing the data to ensure accurate and reliable interpretation of the transcripts and content (see Figure I). The researchers engaged in the first stage of coding independently, with content categorized inductively for each transcript. After the initial level of coding was completed, the research team deliberated on the content to compare similarities and differences. The vast majority of the first-level coding was consistent across the research team, with only minor adjustments needed. At the second level of coding, the research team independently identified themes that emerged from the data. The team met and compared the themes, again finding significant consistency. The researchers discussed some of the minor differences and agreed upon common language that captured the essence of the data into clear themes.
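To make the three-level analysis concrete, the sketch below illustrates, in Python and with entirely hypothetical codes and participant identifiers (not the study's actual data), how descriptive codes can be rolled up through team-agreed categories into themes while tracking which participants contributed to each; this is the bookkeeping behind counts such as "n = 11" reported later.

```python
from collections import defaultdict

# First level: descriptive codes assigned per transcript (hypothetical examples).
first_level = {
    "participant_01": ["family support", "church-based counseling", "listening"],
    "participant_02": ["school counselors", "family support"],
    "participant_03": ["traditional healers", "listening"],
}

# Second level: open codes grouped into categories agreed in researcher discussion.
categories = {
    "family support": "Marital and family counseling",
    "school counselors": "Child and school counseling",
    "church-based counseling": "Faith integration",
    "traditional healers": "Indigenous practices",
    "listening": "Person-centered safe spaces",
}

# Third level: condense categories into themes, tracking contributing participants.
themes = defaultdict(set)
for participant, codes in first_level.items():
    for code in codes:
        themes[categories[code]].add(participant)

for theme, contributors in sorted(themes.items()):
    print(f"{theme}: n = {len(contributors)}")
```

In the study itself, this roll-up occurred through the independent coding and researcher discussions described in this section, not through any stated software.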
This peer-review process allowed for multiple levels of blind review and comparison to enhance the reliability of the data. The same research questions and sub-questions were used for each of the participant interviews to ensure continuity and validate the commonality of responses. The research questions were discussed by the research team and formulated to ensure they expressed important values of the counseling profession, multicultural best practices in the field, and the phenomena being explored in the study. While the interview responses were diversified across many cultures, countries, and ethnicities, the data also must be interpreted with some caution regarding generalizability. Two representatives were interviewed from each global region, which offered some level of variety in understanding diverse perspectives on counseling values. However, there are many other counseling professionals who could answer any of the questions differently. Because of the global focus of the study, a higher number of participants was utilized to enhance proper representation (Johnson & Christensen, 2012;Schreiber & Asner-Self, 2011). Although participant criteria defined counseling professionals in their context, the participants in each region represent only two opinions out of possibly many others. Still, the commonalities across cultures offer a helpful starting point for understanding how the phenomenon of counseling values is understood worldwide. While recognizing this caution, researchers have emphasized that qualitative phenomenological studies do present opportunities for some general applications, particularly eidetic generalizability, which focuses on the phenomenon rather than on a participant count assumed to create saturation (Englander, 2019;van Wijngaarden et al., 2017). To reinforce stronger trustworthiness regarding the phenomenon of international counseling values explored in the study, the research team employed several techniques, including memoing, researcher positionality, and participant checking (Bogdan & Biklen, 2007). Memos were taken with each interview to provide context when reviewing transcripts throughout the data analysis process. Figure I reflects the researcher discussions in which efforts to recognize researcher positionality, as cultural outsiders to the participants yet professional semi-insiders, remained an important part of the data analysis (De Cruz & Jones, 2004;Gair, 2011). The researchers also provided each participant with all their referenced material and the context of interpretation so they could verify the accuracy of the data interpretation through member checking. This approach seemed especially important because English was a second language for the majority of the participants, although they largely demonstrated language fluency. The only exception was with the two participants who required translation, and the interpreters used for those interviews were included on the email with the participants to view the interview content and offer feedback. There were no concerns raised by the participants regarding the content and the application of their words to the findings in the study, so the member checking provides greater confidence that the data show a trustworthy representation of the interviews.
--- Results
The volume of data provided rich and relevant insights from the professional counseling participants.
The overarching themes identified by the research team included: (1) Recognizing valued approaches, (2) Adapting to community settings, (3) Understanding common professional issues, and (4) Maximizing cross-cultural practices. Because the content touched on so many important areas, the researchers decided to include the voices of the participants as much as possible. This article will focus on the content for (1) Recognizing valued approaches, with future articles sharing data from the remaining three sections, including sub-themes for each category. The following demographic information offers a description of the participants that provided the content for the phenomena explored in the study.
--- Demographic Details
Demographic information provides greater definition to the credentials, experiences, and identities of the participants, without compromising the privacy outlined in the participation agreement. Table I offers a list of features that contextualize participant contributions in the data.
--- Recognizing Valued Approaches
Participants consistently described several prominent approaches to counseling that occur or are needed within their context. The areas of commonality described in this article became apparent in the data with a variety of sub-themes. These notable themes included marital and family counseling, child and school counseling, faith integration, indigenous practices, and person-centered safe spaces. Table II offers data on which discussions were addressed by which country participant. To preserve anonymity, each participant is identified by the name of the country they represent. Table II details the eight regions of IAC, along with each country participant and the topical areas they addressed.
--- Marital and Family Counseling
The importance of providing marital and family counseling was voiced by 69% (n = 11) of the 16 participants. The role of family and the importance of the marital relationship are recognized as high priorities, emphasizing in a variety of ways that counseling professionals must be ready to provide marital and family counseling in many contexts. Samoa highlights, "Most definitely. The family systems here would definitely benefit from more counseling in terms of how family systems operate within the changing world." Russia also concurs with the importance of family work when saying, "I know that family therapy [and] spouse therapy is very widespread here at this time." Counselors come with skills that can address the conflicts that emerge among families, and Canada describes some of the ways professional counseling is needed for families in the country by sharing how counselors are "targeting all levels of the family and often running some parenting programs and some couples programs. . and some family counseling." Preserving marital relationships was viewed as central to family support among counseling professionals. Zimbabwe reports some of the greatest needs exist "between spouses, between partners who could be cohabitating. . . It is also true with the starting a relationship or starting a marriage." Afghanistan also describes how more people are encouraged to seek out pre-marital counseling or divorce counseling in the country. They highlight a need to focus on prevention rather than intervention, and reported others ". . would refer the families that need premarital or divorce counseling and this stuff.
If this is expanding in the country, there would be a lot of cases now [in the] family area and for family counseling." Afghanistan goes on to explain, "this can prevent a lot of the divorce and. . prevent it [from] having a lot of children of families that were divorced."

Table II. Topical areas addressed by each country participant, by IAC region. MF = marital and family counseling; CS = child and school counseling; FI = faith integration; IP = indigenous practices; PC = person-centered safe spaces.

Region          Participant          MF   CS   FI   IP   PC
Africa          Malawi               -    -    -    -    X
Africa          Zimbabwe             X    X    X    X    -
Asia            India                X    -    -    X    X
Asia            Malaysia             X    X    X    -    -
Caribbean       Trinidad & Tobago    -    X    X    -    X
Caribbean       Jamaica              X    X    X    -    -
Europe          Russia               X    -    -    -    -
Europe          Wales                X    X    -    -    X
Latin America   Argentina            X    -    -    -    -
Latin America   Uruguay              -    -    -    -    X
Middle East     Afghanistan          X    -    -    -    -
Middle East     Iran                 X    X    -    X    -
North America   Canada               X    X    X    X    -
North America   Mexico               -    -    -    -    X
Oceania         Australia            -    -    -    X    X
Oceania         Samoa                X    X    X    X    -
Note: More content from this study will be represented in upcoming publications of the data.

Iran also emphasizes the importance of marital support and described how this form of counseling is much more accepted in the culture. Both Iran and Afghanistan emphasize that the acceptability of counseling for couples and families is high among many of those who are highly educated. Iran describes, "Recently, it's much, much better. In premarital counseling for example, a lot of people would go, a lot of new people, and the new bride and groom to be. Before that they would go and see a premarital counselor." The popularity of counseling among married couples offers hope that ongoing support for the family and individuals may occur. Argentina also describes some of the positive ways that counseling can shift the perspectives of people in relationship by offering a metaphorical monologue: "Well, I have a day with my partner. We are angry. I don't talk to anybody and I can see the difference where I stay in this way." Argentina goes on to describe how this relationship can affect other relationships but coming to counseling to talk with someone "is a huge support." The commitment to continue supporting family units remains a common cultural emphasis among the participants represented. When describing how important family is within the culture overall, Jamaica explains: The family unit is critical, so, while it may look fragile and sometimes fractious, as my grandmother would say: Family is family. You know, that's part of it. We have an old proverb that says: '[Your] finger stinks? You don't cut it off.' In other words, not because you have a rotten apple in the family, you disown them. You still try until it's absolutely impossible to do anymore; then you may have to cut the finger off, but that's not your first course of action. And that includes people overseas, family overseas supporting family here. And I think that happens for many migratory populations, not just Jamaicans. Others elaborated on the challenges of lacking resources and economic hardship and their negative impact on the family. Samoa explains that ensuring support for families who are struggling is a primary focus of counseling: "The spiritual leaders, the ministers and pastors, they've worked quite [hard] to support families." The problems have become evident, as Samoa describes, "In terms of family systems, [this] will be really for counseling to really take off, because now we're starting to see a lot of domestic violence here." Malaysia describes similar concerns evident within the culture as a result of the need for family counseling: To be honest with you, family is one of the systems that we really need to work with, and due to that, there are a lot of connected issues, such as the drugs and substance abuse, domestic violence, of course mental health counts as one of the issues. And for your information, throughout my state, if I'm not mistaken, we are the highest.
We have the highest rate of divorce and then a lot of broken family involved, and a lot of, what do you call that, orphans, as well. Family counseling interventions may differ according to the context. Participants describe specific approaches for how to consider family involvement in the professional counseling process. This is evident for India, who offers detailed perspectives on how to support families in their culture: "We started out with the Western model of foster family care, you know, selecting strangers to care for the children and then counsel there, you know, the foster parents and the children." This soon proved to be a poor fit for the culture in India, so they adjusted their approach and, "We realized that those models are not working completely and so we decided. . we had to take on a more familial and community-oriented perspective." Adapting the model of counseling and "kinship care" to support the needs of children and the foster families that take them increased the success of their efforts. India reports this new model of support was adopted because "what works well in our culture is family-centered counseling and community-centered counseling,. . you can't keep families and communities out from the context of that individual." Both India and Samoa explain how even concepts of confidentiality are handled more openly when working with families in counseling within their cultures. As India describes, "Confidentiality, yes, it definitely has a place, but we understand it a little differently, because we have areas where we need to share it." In a similar way, Wales discusses ways of seeing counseling integrated into the culture through a focus on learning how to value listening, not only in families but also in the communities in which they live: "So, I think you've got these informal networks of value within communities, and some of that is within families, but some of it is wider than that." In this way, Wales describes the way counselors can offer greater benefit to the people they serve.
--- Child and School Counseling
Counseling is frequently identified as highly valuable for children, particularly within school settings. Attention to child and school counseling occurred among 50% (n = 8) of the participant responses. Wales describes with pride, "Here in Wales, for example, we were one of the first places in the UK [United Kingdom] to have a counselor attached to every primary school." This approach leads the way for the rest of the UK to follow the example of Wales so that "counselors can advise schools and their schools' management on aspects of the way they do things which might actually be contributing to the problem." Malaysia also reports, "Most of the schools in Malaysia, they do have school counselors," which is identified as a primary venue from which counseling services are offered. Samoa describes a heightened interest in counseling children, providing an example of a recent training program offering courses in child counseling, where they "just finished one of child abuse, and now it's child counseling skills, and the interest for to take on these courses are quite difficult to get in." This reality convinces Samoa that "a lot of more people are taking counseling seriously in Samoa." Counseling also receives more favorable recognition from the community in Iran. When discussing the values of counseling that exist within the community, Iran shares: The other thing is any kind of counseling for children.
So, children, you know parents, doing anything for children. So, before they wouldn't go to see a counselor for, you know, for a children's problem, but at the moment counselors who are working in the area of children, any kind of issue like behavior, school counseling and these things, these are the ones that have a lot of clients. Canada describes the system of support children receive among Canadian mental health professions: "Children and adolescents do have access to school counseling and they would access through that way." This approach provides an entry point for many young people, as they can receive further services if needed through a referral to the local health services department. Anything beyond "how you fit or don't fit within the school system and problems you're having there" would result in a referral to public health services for more in-depth counseling support. When describing how professional counseling can best support needs of the culture, Jamaica expresses, I think our school counselors have a role to play here, but they are overwhelmed. There are few in a school and what they're being asked to do is just too much. You're going through the motions, but are you having an impact? Yes, you may be impacting a few students, but how can you use your position in a role of advocacy as opposed to being just a part of the system? Other countries, such as Jamaica, reflect a broader concern about how counseling is applied to young people, often centering on behavioral issues. Zimbabwe describes how counseling has often focused on behavioral issues with "teachers in schools and also in communities. . these are the contexts we do this. It's about the discipline, trying to ensure that, you know, pupils to students behave in school settings." Similarly, Trinidad explains, "Trinidad and Tobago are in a time where young people are not valued, especially young people who come from poverty-stricken environment, young people who are really struggling." Trinidad explains how the students who most need services are not receiving them: "Our education system only focuses on, let's say the 20% who are able to maneuver their way into the kind of education system that we have; it's very academic-oriented." Trinidad calls for increased reform in how the country looks at supporting young people in schools and elsewhere with greater mental health support: I say that if we look at young people as potential criminals, we will do that [have armed guards around schools]. But if we look at them as potentially productive citizens of this country, we will put more guidance officers, more psychologists, more social workers. . .. We need to really look at our young people as productive citizens, potentially productive citizens, and provide the environment, provide professionals, that will be able to help them.
--- Faith Integration
Among the valued approaches identified by participants, the importance of faith integration became apparent across a variety of cultural, faith, and religious backgrounds. More than a third of participants (38%, n = 6) report the significance of faith-based counseling services, describing its integral nature in the culture. Canada reports, "there's a lot of local counseling and some of them are more Christian-based while others are more open to whoever." Jamaica likewise reports, "A key place is church-based counseling."
The importance of faith, religious, and spiritual integration in counseling support is regarded highly among the participants who addressed the phenomenon. Samoa describes the value of spiritual needs and the expectations of the people in the culture: "Yeah, I mean the spiritual side of it's always going to be there, because in each village is at least one, two, three different denominations of the spiritual leaders-the pastors and ministers." The churches in Samoa are working to increase mental health awareness among their pastors and leaders of churches. Samoa works as a counselor with the church, but also in a pastoral role. Zimbabwe also describes their own dual roles in both counseling and church ministry: "I am a registered licensed practitioner for counseling, and I am a minister of religion. . . I both assist practicing ministry clergy and also the congregants themselves." The integration of faith is clear with the experiences of both Samoa and Zimbabwe, and in the multiple professional roles they serve in their communities. Participants also report that counseling frequently occurs within religious or spiritual locations. Malaysia describes ways counseling becomes integrated with religious beliefs through the religious settings: "My country is Islamic country, so. . I would say if people can use the mosques. . the churches. . this kind of, what do you call that, medium. . . it should be like that." Malaysia provides reflections on targeting the faith experience of the client, yet they warn, "We need to be very careful as well, when it comes to this churches and mosques and temples. Yeah, because definitely we really need to understand the spiritual, what is it, the religious matters." Even if people do not have a religious affiliation, Jamaica explains how "They will make use of the church-based counseling, because there is an inherent trust in religious leaders and of religious leaders." Integration goes even further with counseling in the church in Jamaica: "Some churches have professional counselors. They offer a space where a professional counselor can operate from at a very discounted rate." Because of the significant reliance on the church in Jamaica, they share further initiatives to encourage clergy in their efforts to provide mental health support: "And then, of course, there is the pastoral counseling, pastoral work done by pastors. One of the things we have tried to do is to help pastors understand where their limitations are so they don't cross over into [other] areas." While many of the participants share the openness of religious or spiritual institutions offering counseling, Trinidad describes ways in which some faith settings may be restricted to only those ascribing to that particular faith, which can pose some challenges: There are some spaces that, you know, are Muslim, and they will not accept other faiths. And there are some spaces that are Hindu and they will not accept [others]. Some schools will not. It's just for Hindu children, [or] it's just for Muslim children. And if you are not of the faith but you get into the school, you have to follow their faith. . .. You have to do things the way they do it. .. which. .. causes a lot of confusion in the young people.
--- Indigenous Practices
Participants demonstrate that counselors can regularly recognize the importance of indigenous and traditional practices by observing the values represented within the culture. Indigenous practices were described in detail by 38% (n = 6) of the participants.
Australia depicts multiple levels and values integrated into the helping roles of those with indigenous heritage, and how traditional practices have begun emerging as central counseling values in the country. According to Australia, ". . healing practices, particularly from Aboriginal and Torres Strait Islander peoples are being brought into mainstream counseling. For example, in my professional association, we have like a college of indigenous healing practitioners and those long held historical values." Australia goes on to describe some of these practices, including dance therapy, storytelling, and going outside on "bush walks." Canada underscores some of the ongoing activities related to integrating indigenous practices in counseling: "I'm helping with. . the development of competency and have been on two committees for that. . . I've written, and am now working on, indigenous competencies." Canada offers further illustration of the kinds of inclusivity and welcoming behaviors promoted in their culture: .. . in my culture there's something called the 'wampum belt.' It's a belt that shows inclusivity, that we're two different people but we are walking together in the same, um, time period, and the same life. And the idea is that you do good on both sides. You do good to each other, and we have many wampum belts. From the time of the fur traders and beyond and our interactions with government. So, it's. .. a way [of] showing mutual respect. These examples illustrate how indigenous and traditional practices can be integrated into counseling approaches. India reports the value of adapting counseling to the culture and expresses discomfort with direct applications of Western models. India explains how counseling models did not always answer questions people in the culture carry: "When we were working with people, we needed to indigenize, we needed to look at it, contextualize it, adapt models to contextualize it, and that is what I kept doing all my, through my years of my teaching." The work of contextualizing appropriately still eludes the counseling profession at times. Canada concurs, "I would say that [contextualizing] kind of goes with the first nations and indigenous communities as well. You know, it is hard to get culturally appropriate responses from. . the counseling profession." Professional counselors also identify indigenous leadership structures as important in many traditional models of helping. Zimbabwe describes ways in which counseling services benefit from engaging leadership structures, because "It has also been provided by community leaders here, I mean the chiefs, the headman, we have those structure in villages here. And also respected members of communities. . [have] traditionally been providing counseling." Similar to this experience, Iran describes, "before professional counseling coming to Iran, a lot of people would consult with wise man in their family. . still, they are very strong in that area. . [and one] would find some wise people in your area in our area." Samoa describes how counseling values relate to Samoan culture in the area of listening, because the experiences and assumptions of the traditional culture place a strong emphasis on the leaders simply giving advice while people listen. This can be very different from a counseling approach, and having the counselor in the role of listening may present a different experience altogether. According to Samoa: It's [counseling] moved away from advice giving.
It's kind of always been about advice from someone to another person and often from fathers, chiefs, 'matai,' who lead families. It's more about the instructions towards everyone else. The way things are moving in Samoa people are starting to want to be more part of decisions that affect their lives.
--- Person-Centered Safe Spaces
Several participants (44%, n = 7) discuss person-centered safe spaces as central to valued counseling approaches. The participants describe the key elements of this theoretical approach in terms of the benefits provided to the culture. Australia explains, "The values around person-centered and relational and just letting people, the talking therapy of letting people, tell their story and being encouraged to wonder about this or expand on that." Australia also reflects on more formalized descriptions of person-centered approaches: I guess counseling across Australia taps into the amazing work of, say, Carl Rogers. So the values coming forward around unconditional positive regard, empathy, congruence, they're very solid ways of working with people, the relational ways of working with people in our country. Malawi describes some of the ways counselors can meet the needs of clients, including economic, religious, and sexual orientation differences, in a person-centered manner by exercising acceptance regardless of identity. Malawi explains the value of accepting others unconditionally in whatever need they present: "I think it implies very acceptances is the issue, that is a key word here. Because, like I say, the area shapes somebody and determines their world view, or world view determines who they are." Mexico also describes some of the ways person-centered approaches help people feel accepted and meet their needs by creating this safe atmosphere in counseling: I think that counselors can really meet those [counseling] needs, just by giving a space for people that is not available anywhere else. . .. It's the only place I feel safe, is the only place where I can express these. It's the only place where I feel I'm understood and that I actually am heard. . .. So that people can find that that relief and that safety that they're looking for. And that, I think that's what Mexicans need the most from counseling, just feeling safe, feeling heard, and finding alternatives. The principles of person-centered theory seem to be applied to counseling among the participants in ways that expand beyond original Western applications. India describes ways of counseling individuals with person-centered approaches, but emphasized that applications of these approaches must go beyond just the individual. India clarifies, "Of course we use person-centered approaches, but we use more family-centered approaches. And that has been found much more useful." To further illustrate the point, India explains, "I was counseling women in marital conflict at a shelter for women. And I did that for two years and we used not only individual person-centered counseling, but we also used group-centered counseling." Uruguay describes how the closeness of the counseling relationship in a shared space offers an important way to conceptualize counseling in the country, where the notion of contact carries a nuance that gives counseling further meaning. Uruguay illuminates how the culture carries a "very particular characteristic of being proximal, being close and intimate.
It's something that Uruguay gives to counseling but it's something that counseling brings to Uruguay-bringing these concrete space[s] where we can be close and intimate." Uruguay describes practical applications in the school setting, where there are programs designed to increase contact: .. . but not only the dialogue and the chance of generating a deep meaning contact, not only speaking or rationally understanding that we are connected, but through other means of contact and doing things more experiential than only rational or academic, where the living experience of meeting with oneself-students, counselors-all that from the theorical framework that the focus of person-centered approach and the Gestalt offers. Creating safe spaces remains a highly important endeavor among the participants. Trinidad details the importance of establishing a person-centered approach in the spaces they create for clients: "I set up a safe, empathic, non-judgmental space for young people because they were being branded as no good." In organizations and agencies that treat mental health, these principles are reinforced among those providing direct care. The ability to teach empathy skills was extended beyond counselors and mental health workers to reinforce safety and awareness of child development among parents, family, and the overall population of Trinidad and Tobago. Trinidad further clarifies, "I opened this space where young people can come and, you know, talk about their dreams and their hopes and their fears, and what have you." The importance of this application becomes a key area of support. Trinidad concludes, "Counseling is so necessary. We need safe spaces for young people to come in." Wales presents similar thoughts with supporting general awareness of person-centered counseling applications, emphasizing how the general public can benefit greatly by recognizing and practicing some of the core principles of active listening. The contributions of counseling values within the culture provided benefit, as Wales emphasizes: One of the things that, that really has grown exponentially is this whole idea of listening in our culture. That you know, not everybody needs to see a trained counselor, but to have somebody who's really good at listening, and there's a modern listening skills courses now for people who aren't counselors. But they just go on these courses to learn how to listen more and realizing that just being a listening ear can be very healing for some people, you know.
--- Discussion
The research team uncovered clear themes expressed regarding counseling values by this diverse group of participants located all over the world. These participants from 16 countries, scattered throughout the globe, offer depth to the conversation of counseling values, and the data in this article focus on ways counselors can recognize valued approaches throughout the world. The results reflect prominent themes of the study, which included marital and family counseling, child and school counseling, faith integration, indigenous practices, and person-centered safe spaces. The demographic data deliver helpful insights to the knowledge and expertise represented. Two qualified experts in the field of counseling for each of the eight global regions of IAC demonstrated skilled knowledge as counselors, leaders, professors, researchers, and supervisors that provide important insights to the profession internationally.
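As a quick arithmetic check on the prevalence figures reported in the Results, the short sketch below tallies each theme's participant count against the total of 16. The matrix is taken from the Results narrative and Table II rather than from any raw data, and the country names stand in for participants, as in the article.

```python
# Participant-theme matrix as reported in the Results and Table II.
theme_participants = {
    "Marital and family counseling": [
        "Zimbabwe", "India", "Malaysia", "Jamaica", "Russia", "Wales",
        "Argentina", "Afghanistan", "Iran", "Canada", "Samoa"],
    "Child and school counseling": [
        "Zimbabwe", "Malaysia", "Trinidad & Tobago", "Jamaica",
        "Wales", "Iran", "Canada", "Samoa"],
    "Faith integration": [
        "Zimbabwe", "Malaysia", "Trinidad & Tobago", "Jamaica",
        "Canada", "Samoa"],
    "Indigenous practices": [
        "Zimbabwe", "India", "Iran", "Canada", "Australia", "Samoa"],
    "Person-centered safe spaces": [
        "Malawi", "India", "Trinidad & Tobago", "Wales",
        "Uruguay", "Mexico", "Australia"],
}

TOTAL_PARTICIPANTS = 16

# Reproduces the reported figures: 69%, 50%, 38%, 38%, and 44%.
for theme, names in theme_participants.items():
    pct = round(100 * len(names) / TOTAL_PARTICIPANTS)
    print(f"{theme}: n = {len(names)} ({pct}%)")
```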
The demographic data further reflect advanced degrees, mostly ten or more years of professional experience, national licenses and certifications, and a variety of leadership roles.
--- Direct Applications
Applying the knowledge of this study offers a promising direction for global mental health counseling. The identified content areas described in the study provide helpful pillars of counseling values on which to build intentional leadership and strategies that may enhance global mental health support. Conceptualizing applications of these data yields three main areas we propose for meaningful and practical application by those wanting to make a difference in the mental health needs around the world. These applications are summarized as transcultural counseling training, resource adaptability, and bilateral development.
--- Transcultural Counseling Training
IAC (2022c) launched a transcultural counseling course that offered helpful insights to multiple layers of counseling intervention from countries all over the world. In a similar way that this study integrates input from professionals, this model of exploring transcultural principles in counseling offers a helpful direction for counseling professionals to enhance their awareness of applications across cultures. Counselors can utilize this training to develop greater awareness of transcultural principles that transcend culture and identity across individuals, groups, and identities. The results of this study offer a clear path of training that will enhance counselor awareness of transcultural issues. Professional counselors or counseling programs that wish to advance their knowledge and impact on international counseling can benefit from recognizing the valued approaches described in this study. The development of greater transcultural counseling skills regarding the valued approaches may require specific training in marital and family counseling, child and school counseling, faith integration, indigenous practices, and person-centered safe spaces. Counselor education programs can work to include these approaches, and the Council for Accreditation of Counseling and Related Educational Programs (CACREP, 2016) may consider including the items in future curriculum development. These areas can be understood and applied in a variety of ways depending on the context, such as India's expanded use of 'person-centered skills' with broader group or family interactions. Many opportunities for engaging international counseling training and experience exist in a variety of venues for clinical mental health counselors. IAC (2022c) continues to advance global efforts through leadership, education, collaboration, and advocacy, and there are frequent opportunities to volunteer for the many initiatives, including research, conferences, and collaboration, taking place throughout the world. The National Board for Certified Counselors (NBCC, 2012) provided similar experiences to enhance international capacity building for the counseling profession. They currently detail professional activities that develop international knowledge through partnerships, education, and service-learning experiences. Other opportunities to enhance transcultural counseling knowledge are available through collaborative initiatives for mental health with WHO (2022) and the United Nations (UN, 2022). These resources offer practical ways for professional counselors to increase knowledge and competency with transcultural counseling development.
--- Resource Adaptability
Counselors have the opportunity to increase knowledge in these valued areas identified throughout the world, but there remains a need for ongoing adaptability among the approaches. Even after identifying these important themes, the data reflect how each must be understood differently across cultures. Counseling professionals can be ready to adapt skills, theories, and techniques to appropriately meet the needs of the individuals, groups, and communities they serve. The research shows a strong focus on person-centered approaches that create safe spaces for people in the counseling room. This aligns with many comments from the literature about how counselor traits and characteristics are far more important than the skills they present (Bojuwoye, 2001;Perron et al., 2016;Srichannil & Prior, 2014). Counselors have a myriad of opportunities to creatively adapt counseling concepts and practices to the appropriate context in which they are offering services. This study proposes that an adaptive approach to utilizing counseling knowledge is key for adjusting to needs with marriage and family, child expectations, faith integration, indigenous practices, and person-centered safe spaces. The literature highlighted ways in which this process of adapting well can begin with efforts to enhance counselor self-awareness and development (Chen & Hsiung, 2021;Hook & Vera, 2020).
--- Bilateral Development
The research questions of the study were designed to invite cultural insights that may contribute to global professional counseling development. Many insights from the participants reflected ways in which the counseling profession offers benefit to the culture discussed. In a similar manner, the cultures represented in this study contribute to understanding mental health support across cultures, and counselors can benefit from this bilateral perspective on professional counseling development. Counselors can work to advocate for the benefits international counseling offers in different contexts, but counselors can also continue developing the profession based on input from the practice of these international counseling professionals. Counselors can explore concepts foreign to their experiences that may enhance the impact of their counseling further. In a similar way, Ægisdóttir and Gerstein (2005) advocated for the need to incorporate indigenous philosophies into counseling practices to increase flexibility and adaptability. This was evident in the way the participants of this study highlighted nuances with the roles of family, marriage, parenting, and children. Encouraging counselors to learn foreign concepts can enhance development within any cultural context, and these concepts can enhance awareness, knowledge, and skill for serving a variety of people and communities in every country (Duff & Bedi, 2010;Hook & Vera, 2020).
--- Limitations
Part of the design for this study was that all the participants were connected through international associations, and all of them were identified through IAC. Associations attract counseling professionals who are drawn to similar interests, so the team acknowledges some responses may have been different with international counseling professionals who are not connected through the same association. Expanding the research beyond this particular group may help further inform the nuances presented in the data, and such expansion may help reduce biased impressions from a homogenous group.
Though the research design provided participants stratified throughout the world, having only two perspectives per region, or one per country, offers only a limited view of that region or country. Each country and region no doubt carries many cultures, ethnicities, and people groups that consider mental health and wellbeing differently, and future studies may expand the search for counseling values within each region or country for greater precision of information. Language is a vital component of communication, and the review and interpretation of the content may have carried nuances that remained unrecognized by the research team. Because English was a second language for most of the participants (and two interviews included translation), some points of emphasis may have been lost. The research team acknowledges the potential for this bias in the research, coding, and conceptualization process. Efforts were made to minimize researcher bias by including a blind review process of coding and verifying the use of each member's material through participant checking, but even these processes include the potential for miscommunication or misinterpretation.
--- Conclusion
This study explored the common values of professional counseling that exist among counseling professionals in a wide variety of international cultural contexts. The research questions offer a broad lens from which participants could identify and define the values they recognize within their cultural contexts. An organized system of diversifying findings offered great depth to the conversation about enhancing cultural awareness and practice for counseling professionals wanting to see international counseling develop. The results provide valuable insights toward understanding four categories of knowledge for counseling professionals, including: (1) Recognizing valued approaches, (2) Adapting to community settings, (3) Understanding common professional issues, and (4) Maximizing cross-cultural practices. Due to the depth and detail of content, we desired to enhance the voice of the participants and focus attention on the first concept: recognizing valued approaches to counseling. This content relays the participants' insights to consider approaches to marital and family counseling, child and school counseling, faith integration, indigenous practices, and person-centered safe spaces. Understanding these concepts provides opportunities for counselors to enhance development, both individually and professionally. Counseling professionals throughout the world are encouraged to apply these concepts by engaging further transcultural or international counseling training, creating resource adaptability, and committing to bilateral development of the profession across all cultures and contexts. The remaining findings from this study will be shared in future articles. We believe the results of this study spotlight many areas of future research that can advance the discussion of international counseling issues further. The study provides a starting point to increase the conversation around international needs and issues as they relate to the mental health and counseling professions. Future research may replicate the qualitative method described for this study and explore the insights of professionals across other associations.
Additional studies are encouraged to explore the nature of counseling or mental health professionals in different countries and recognize what areas of education best advance counselor effectiveness in each setting.
--- Declarations
--- Conflict of Interest
We have no known conflict of interest to disclose.
Health science is a sector of the health care industry that spans multiple discipline areas. It uses fields such as science, technology, engineering, and communication to support the health and well-being of humans and animals. Due to their diverse makeup, health science jobs can span public, administrative, and clinical settings. Health science jobs typically require degrees that include laboratory science classes and courses in health-related social science fields such as epidemiology, sociology, and psychology. Students can also take classes in health policy or the business of health care.
Introduction
Health Sciences is a field of study linked to the growing development of various disciplines, such as Biology, Chemistry, Physics, and Medicine, which assume a leading role by providing concepts, methods, and techniques for understanding the processes that preserve the health of individuals and society. The objectives of Health Sciences include maintaining, restoring, and improving health, as well as preventing, treating, and eradicating diseases. Health Sciences are essential to better understand the vital processes of both human and animal organisms. In general, they seek to improve and maintain quality of life. Health science is a sector of the health care industry that spans multiple discipline areas. It uses fields such as science, technology, engineering, and communication to support the health and well-being of humans and animals. Due to their diverse makeup, health science jobs can span public, administrative, and clinical settings. Health science jobs typically require degrees that include laboratory science classes and courses in health-related social science fields such as epidemiology, sociology, and psychology. Students can also take classes in health policy or the business of health care.
--- What is the role of Health Sciences?
The objective of Health Sciences is to maintain, restore, and improve health, as well as prevent, treat, and eradicate diseases. In addition, they are essential to better understand the vital processes of both human and animal organisms. In general, they seek a better quality of life. In more detail, the role of Health Sciences can be divided into four stages:
--- Research and study
As it is a science, the main function of Health Sciences is the study of health itself. Its goal is to better understand how animal organisms, including the human body, work; new discoveries are essential for the advancement of disciplines such as Medicine or Pharmacology.
--- Disease prevention
Of course, to prevent diseases it is essential to understand how they work and what their causes are. With greater knowledge about vital processes, the Health Sciences allow the promotion of appropriate measures to avoid the appearance of diseases and other pathologies.
--- Health improvement
Thanks to all of the above, this stage responds to one of the main objectives of Health Sciences: improving general health. It is not only about preventing and curing diseases, but also about maintaining a good state of health and extending life expectancy.
--- Development of treatments
On the other hand, research is important for advancing the field of health, leading to the development of new treatments. Advances in Chemistry, for example, allow the creation of medicines; discoveries in Biology and Anatomy allow the devising of new medical procedures; among others.
--- Health science jobs
Health science professionals may work in various settings. The most common include:
• Medical laboratories.
• Federal government agencies.
• Private agencies.
• Insurance companies.
• Pharmaceutical companies.
• Consulting companies.
• Manufacturing companies.
• Hospitals.
• Non-profit organizations.
• Outpatient care centers.
• Medical consultants.
• Clinics.
The following list reviews five diverse jobs found in health sciences and their responsibilities.
--- Figure 1. Main professions in health sciences
--- Doctor
Primary Duties: A physician may specialize in a specific area or gain a generalized area of expertise.
Physicians work in medical offices or in their own practice. They diagnose and treat patients for various injuries, illnesses or health conditions. They are responsible for prescribing medications to patients and coordinating with nurses to administer the remedies on site. This career requires completing a bachelor's degree in a human sciences area before entering medical school. After medical school, physicians may spend three to five years in residency before earning their full credentials. --- Dentist Primary Duties: Dentists are responsible for diagnosing problems with a patient's teeth and gums. They also perform routine procedures such as filling cavities, extracting teeth, and performing root canals to improve patients' dental health. A dentist must have a specialty in dentistry, dental surgery, or dental medicine. --- Nurse Primary Duties: A nurse assists doctors and other medical professionals in treating patients. They are responsible for communication between patients, their families, and doctors, as well as administering intravenous lines and medications prescribed by doctors. To enter this profession, you need at a minimum an associate's degree in nursing or a diploma from an accredited nursing program, along with a nursing license. --- Health educator Primary Duties: A health educator may work as part of a hospital, school, or nursing facility, teaching the public about the importance of hygiene and how to prevent the spread of disease. A health educator must have a minimum of a bachelor's degree in health education or public health. To advance in their field, some health educators earn a master's or doctorate in a specialized area of health education. --- Nutritionist Primary Duties: A nutritionist may work in many different settings, including hospitals, care facilities, clinics, schools, and organizations. They assess a patient's dietary needs or restrictions and help create a meal plan that provides the nutrients they need. Nutritionists may also be responsible for creating educational presentations for schools and other facilities. A nutritionist must have at least a bachelor's degree in nutrition; some advanced positions may require a master's degree in nutrition or a related area. --- Conclusions Health science is a sector of the health care industry that spans multiple discipline areas, drawing on science, technology, engineering, and communication to support the health and well-being of humans and animals. Because of this diverse makeup, health science jobs span public, administrative, and clinical settings and typically require degrees that combine laboratory science with health-related social science fields such as epidemiology, sociology, and psychology, along with coursework in health policy or the business of health care.
Background: Pacific Island countries are vulnerable to disasters, including cyclones and earthquakes. Disaster preparedness is key to a well-coordinated response to preventing sexual violence and assisting survivors, reducing the transmission of HIV and other STIs, and preventing excess maternal and neonatal mortality and morbidity. This study aimed to identify the capacity development activities undertaken as part of the SPRINT program in Fiji and Tonga and how these enabled the sexual and reproductive health (SRH) response to Tropical Cyclones Winston and Gita. Methods: This descriptive qualitative study was informed by a framework designed to assess public health emergency response capacity across various levels (systems, organisational, and individual) and two phases of the disaster management cycle (preparedness and response). Eight key informants were recruited purposively to include diverse individuals from relevant organisations and interviewed by telephone, Zoom, Skype and email. Template analysis was used to examine the data. Findings: Differences in the country contexts were highlighted. The existing program of training in Tonga, investment from the International Planned Parenthood Federation (IPPF) Humanitarian Hub, and the status of the Tonga Family Health Association as the key player in the delivery of SRH, together with its long experience of delivering contract work in short time-frames and strong relationship with the Ministry of Health (MoH), facilitated a relatively smooth and rapid response. In contrast, there had been limited capacity development work in Fiji prior to Winston, requiring training to be rapidly delivered during the immediate response to the cyclone with the support of surge staff from IPPF. In Fiji, the response was initially hampered by a lack of clarity concerning stakeholder roles and coordination, but linkages were quickly built to enable a response. Participants highlighted the importance of personal relationships, individuals' and organisations' motivation to respond, and strong rapport with the community to deliver SRH. Discussion: This study highlights the need for comprehensive activities at multiple levels within a country and across the Pacific region to build capacity for an SRH response. While the SPRINT initiative has been implemented across several regions to improve organisational and national capacity and preparedness, training for communities can be strengthened. This research outlines the importance of formalising partnerships and regular meetings and training to ensure the currency of coordination efforts in readiness for activation. However, work is needed to further institutionalise SRH in emergencies in national policy and accountability mechanisms.
Background Pacific Island countries (PICs) and territories are some of the most vulnerable to natural hazards, the effects of which are exacerbated by poor development and climate change [1]. Many PICs are situated within or close to the Typhoon Belt and the boundary between the Australian and the Pacific tectonic plates, increasing the risk of cyclones, hurricanes, flooding, earthquakes, tsunamis, and volcanic eruptions [2]. The sexual and reproductive health and rights (SRHR) of women, girls, men, boys and gender-diverse individuals are significant health concerns in all humanitarian contexts, including those caused by natural hazards. The risk of sexual violence increases in insecure and unstable settings and in contexts where protection from legal, social and community support systems has been undermined by displacement or disruption [3]. Humanitarian contexts may increase risk factors for sexually transmitted infections (STIs), including HIV, and disrupt access to treatment and prevention services [4]. Maternal mortality is reportedly 10 to 30 percent higher in humanitarian contexts compared with non-crisis settings [5]. In these contexts, women and girls will often give birth without skilled birth assistance or necessary resources, increasing the risk of preventable mortality and morbidity. A lack of access to newborn care can also jeopardise infant survival [6]. In response to these critical health needs, the Interagency Working Group on Reproductive Health in Crises (IAWG) has developed a set of objectives, activities, information, and resources focused on: preventing sexual violence and assisting survivors, reducing the transmission of HIV and managing STIs, preventing excess maternal and neonatal mortality, preventing unintended pregnancies, and moving to comprehensive SRH services as soon as possible [7]. These aims are encompassed in the Minimum Initial Service Package for Sexual and Reproductive Health in Crisis Situations (MISP), a coordinated set of priority activities to be delivered in response to SRH needs. The Sphere Humanitarian Charter and Minimum Standards in Disaster Response have incorporated the MISP for SRH as a minimum standard of care in humanitarian response [8]. The MISP for SRH was initially proposed in the mid-1990s, and updated objectives and activities were included in the Interagency Field Manual on Reproductive Health in Humanitarian Settings of 2010 and most recently in 2018 (see Fig. 1). In 2007, IPPF, with support from the Australian Government, launched the Sexual and Reproductive Health in Crisis and Post-Crisis Situations (SPRINT) initiative. SPRINT was established to improve the health outcomes of crisis-affected populations, focusing on reducing SRH-related morbidity and mortality. This initiative is led by IPPF in collaboration with its Member Associations and other national and international partners and is dedicated to building country capacity to implement global standards, including the MISP, in crisis contexts [9]. IPPF is the world's largest federated reproductive health non-government organisation, providing SRH services in more than 160 countries with a strong presence in nine Pacific Island countries [10]. The current phase of the SPRINT initiative is implemented with 13 locally-owned Member Associations, including the Reproductive and Family Health Association of Fiji (RFHAF) and the Tonga Family Health Association (TFHA) in the Pacific [11]. --- Plain Language Summary Pacific Island countries experience many disasters, including cyclones and earthquakes.
The International Planned Parenthood Federation (IPPF) has been working in the Pacific to help build skills to improve the response to sexual and reproductive health (SRH) and rights during disasters. This paper describes research to identify capacity building activities that helped prepare organisations in Fiji and Tonga and how these affected the delivery of SRH during Tropical Cyclone Winston in 2016 and Tropical Cyclone Gita in 2018. Key informants in senior positions from relevant organisations were recruited and interviewed by telephone, Zoom, Skype and email. We used a framework that described different levels of capacity across phases of the disaster management cycle to analyse the data. Eight key informants described differences in Fiji and Tonga's preparedness activities before Tropical Cyclones Winston and Gita that affected the way in which services were delivered. The Tonga Family Health Association was well established as a key player in SRH service delivery before Gita and had built relationships and delivered training for disaster response to staff across a number of organisations including the Ministry of Health (MoH). These preparedness efforts facilitated a smooth and rapid response. In Fiji, the response was initially affected by a lack of training, role clarity and coordination, but linkages were quickly built to deliver care and services. Participants highlighted the importance of personal relationships, individuals' and organisations' motivation to respond, and strong links with the community to deliver SRH care. This study highlights the need for inclusive activities at individual, organisational and national levels within countries and across the Pacific region to build capacity for an SRH response. --- Keywords: Sexual and reproductive health, Pacific Islands, Humanitarian crisis, Preparedness, Capacity building, Disaster response --- Since its launch, there have been three phases of SPRINT (2007-2021). Activities under the initiative have included advocacy for the MISP, MISP coordination, capacity building, institutional strengthening, and SRH in Emergency (SRHiE) service delivery/response. The program aims to raise the profile of the MISP and promote a comprehensive approach to reproductive health that considers pre-, during, and post-crisis phases. The establishment and on-going support of national SRH coordination mechanisms and the provision of capacity building and tools to prepare and respond in the acute phases of crises are central to these aims. SPRINT also works to support the integration of the MISP into country emergency response and disaster risk reduction policies. In 2017, IPPF established its Global Humanitarian Hub in Bangkok and the Pacific Humanitarian (sub-) Hub in Suva. These coordinate humanitarian work across the East and Southeast Asia and Oceania Region (ESEAOR) and the South Asia Region (SAR) in collaboration with Member Associations and other national and international partners. In 2018, IPPF launched its global Humanitarian Strategy (2018-2022), demonstrating a commitment to an integrated and comprehensive approach to SRHR in emergencies and linking this work to its long-term development mandate [12].
A critical component of this is the "colocation of the Sub-Regional Office for the Pacific (SROP) and Pacific Humanitarian Hub in Suva to coordinate and share lessons between humanitarian and development programming in the Pacific" [12]. The United Nations Population Fund (UNFPA) plays a critical role in supporting the work of IPPF at the regional and country level by prepositioning reproductive health kits containing essential drugs, basic equipment and supplies needed to provide SRH care in crises [13]. The Pacific Island countries of Fiji and Tonga regularly experience cyclones, and in the period 2016-2018 had introduced preparedness measures under the SPRINT program. On the 20th of February 2016, Severe Tropical Cyclone Winston, the most intense tropical cyclone (category 5) in the Southern Hemisphere on record, reached maximum intensity near Fiji, causing extensive damage and 44 deaths. On the 12th of February 2018, Category 4 Tropical Cyclone Gita, the worst cyclone Tonga had experienced in 60 years, peaked, severely impacting the country. This paper reports on research that examined capacity development activities undertaken as part of the SPRINT program in Fiji and Tonga, and how these enabled the SRH response to Cyclone Winston and Cyclone Gita. This study identifies the different approaches to capacity building and response in the two settings and delivers recommendations for future efforts and investment in line with the objectives of the MISP. --- Methods This descriptive qualitative study involving eight key informant interviews sought to identify activities that were carried out by SPRINT partners, including IPPF Member Associations (RFHAF and TFHA), other non-government organisations (NGOs) and community-based organisations, and the Ministries of Health in Fiji and Tonga to foster workforce, organisational and community capacity development before Cyclone Winston (2016) and Cyclone Gita (2018). In addition, interview questions explored how these activities influenced the type, scope, and timeline of the SRH response to these cyclones and mitigated challenges to delivering the MISP. We used the reporting guide outlined by O'Brien et al. [14] to present our findings. In this paper, we define capacity development as efforts to improve the knowledge and skills of those providing SRH care, information, and services, building support and infrastructure for organisations, and developing partnerships with communities [15]. The research was informed by a framework designed to assess public health emergency response capacity [16] across various levels (systems, organisational, and individual) and the phases of the disaster management cycle (preparedness, response, recovery, mitigation) (see Figs. 1 and 2). This study was, however, only concerned with the preparedness and response phases. --- Study setting Fiji and Tonga were selected as case studies to explore preparedness and response to SRH needs in crises. Both countries have a shared experience of tropical cyclones but have different cultural and demographic contexts (Fig. 2 presents a framework of capacity building in SRH in emergencies, adapted from [16]). Fiji is a Melanesian country with a population of 897,295 (across approximately 100 inhabited islands), while Tonga is a Polynesian country with a population of 105,845 (across 36 inhabited islands) [17,18]. While both are upper-middle-income countries [19] and have youthful populations, Tonga is more densely populated (approximately 147/km² compared with 49/km² in Fiji).
SRH indicators also differ across the nations. Fiji has a contraceptive prevalence rate (CPR) of 30%, while Tonga's CPR is 17%. Adolescent fertility rates in Fiji and Tonga are 49 and 30 births per 1000 women aged 15-19 years, respectively [17], while the percentage of women subjected to physical and sexual interpersonal violence in their lifetime (2000-2015) also differs (64% vs. 40%, respectively) [20]. The Fiji National Disaster Management Office (NDMO) is the Fiji government's coordinating body for natural disasters. While the Ministry of Health and Medical Services has identified maternal, newborn and adolescent care and gender-based violence amongst the top health priorities in a reproductive health response [21], SRH in emergencies (SRHiE) is absent from the Fiji Ministry of Health Reproductive Health Policy [22] and there are SRH-related gaps in the Fiji National Disaster Management Plan [23] that was current at the time of Cyclone Winston. The emergency management and response structure in Tonga is led by the National Disaster Council (NDC) and directed by a national plan that does not include SRH [24]. Disaster management is noted in a generic manner in the National Health Strategic Plan [25], while SRHiE is identified in a government SRHR needs assessment [26] published before Cyclone Gita. Both countries have adopted a National Cluster System based on the UN model. The key clusters involved in any SRHiE response include the Health and Nutrition/Health, Nutrition and Water, Sanitation and Hygiene cluster (led by the national Ministry of Health and co-led by WHO and UNICEF) and the Safety and Protection cluster (led by the national Ministry of Women and co-led by UN Women). At the time of cyclones Winston and Gita, the 2010 version of the MISP for SRH (see Fig. 1) was the standard applied in both responses. --- Recruitment Study informants were recruited purposively to engage individuals from key organisations, and included staff who were directly involved in the preparedness and response efforts to cyclones Gita and Winston. We sought a diversity of perspectives, including government and NGO workers, across both countries' health and disaster response sectors. Information about the study was sent to key individuals with an invitation to participate in an interview. During the recruitment and data-gathering processes, several communication challenges were experienced due to the interviewers' remoteness, which made it difficult to establish contact with respondents and develop rapport. These were overcome by multiple contacts and discussions with individuals over a six-month period. --- Data gathering The findings of a desk review informed the development of questions for the interviews and helped identify possible participants. A stakeholder reference group was invited to provide input into the interview questions, and these were piloted in January 2020. Due to the COVID-19 pandemic, Australia closed its international borders in March 2020, prohibiting travel to Tonga and Fiji. As a result, interview data were collected via telephone, Zoom, Skype, and email. Multiple contacts with key informants enabled thick descriptions to be built and saturation to be reached through concurrent analysis that identified no new patterns emerging. Rigor was also sought by inviting some informants to check the data for credibility. KB and AD met regularly to discuss the data and ensure a detailed audit trail was collected.
Due to the small number of informants and unique context, respondents have been de-identified as much as possible to ensure confidentiality. To maintain anonymity, direct quotes included in this report are not attributed to individuals. --- Analysis Data were analysed using a template as described by King [27]. Coding was directed according to categories that aligned with the aims of the study, and the process was managed using the qualitative research software QSR NVivo 12. An initial template was developed based on the list of codes to identify themes in the textual data, and these were modified as the analysis continued. The Framework of Factors Influencing SRHiE Response, together with a broad understanding of capacity and capacity development programming (see Fig. 1), informed the template used for data analysis. This allowed for the consideration of a wide range of factors that may influence the effectiveness of SPRINT-supported training, other capacity development efforts, and the response. --- Ethical approval This study was granted ethical approval by the Human Research Ethics Committee of the University of Technology Sydney, the Fiji National Health Research Ethics Committee, and the Tonga National Health Ethics and Research Committee. --- Findings Eight key informants were interviewed for this study. We outline the findings according to the preparedness and response phases. --- Preparedness: before cyclones Winston and Gita --- Fiji Before Tropical Cyclone Winston in Fiji, key informants reported that few capacity development activities had been implemented to support the delivery of the MISP. Staff from the IPPF Sub-Regional Office for the Pacific (SROP), the Member Association RFHAF and partners were involved in the response to Winston, and of these, only one responder had received training on the MISP. This training had been provided during the second phase of the SPRINT Initiative, and a significant amount of time had passed since its completion, with no follow-up refresher training or opportunity for the individual to apply their new knowledge or skills. Staff from the SROP were familiar with the MISP due to their involvement in reporting and supporting regional humanitarian work. However, at that time they had not received a formal orientation to the package. At the onset of the cyclone, two surge capacity staff members from IPPF were deployed to Fiji to conduct a 'crash course' for responders on the basics of the MISP and the coordination skills needed to support the response in Fiji. These staff members had been involved in implementing SRH services during crises in different contexts. A key informant stated: …the crash course in Fiji it was really focused on coordination. And how to handle yourself and your staff in crisis situations. How to be more tolerant, more strategic, and how to react quickly, to how fast things are the way things change. So it was really preparing them psychologically and emotionally on what would happen. Because family planning, HIV, maternal health, SGBV, they've been doing this for how many years… They know this stuff. Participants appreciated the practical nature of this training, with one explaining that "when we did the crash course, they focused on what we would be doing" (Respondent).
Further capacity development strategies were deployed to ensure those involved in the response, including nursing staff and volunteers, were familiar with where their tasks fit within the MISP implementation, to clarify roles, and to explain each medical mission's processes and procedures. In addition to these formal training sessions, these two experienced IPPF staff members remained with the in-country and SROP-supported response teams for ten days to advise, guide, debrief and build daily on lessons learnt. --- Tonga In Tonga, key informants reported that training had been implemented well before the onset of Cyclone Gita. This training had been conducted alongside other preparedness activities, including a national stakeholder meeting on the MISP, training on long-acting reversible contraceptives (LARC), orientation to Sexual and Gender-Based Violence in Emergencies (SGBViE), and attendance at cluster meetings and interagency coordination with stakeholders. In 2017, IPPF Humanitarian Pacific team members and the TFHA hosted a national stakeholder meeting to orient participants on SRHiE and the MISP. Training also continued during the response when gaps in the provision of psychosocial support for SGBV survivors were identified, especially on the island of 'Eua. A half-day orientation on SGBV in emergencies was conducted in 2018 for field responders, facilitated by UNFPA and supported by SPRINT response funding in collaboration with TFHA, the IPPF Pacific Humanitarian Hub, and the MoH. A total of 42 Tongatapu-based clinical staff nurses and midwives were trained in basic concepts and fundamental guiding principles in dealing with a range of SGBV issues. Gita, therefore, provided the opportunity to upskill clinical staff, building competence, networks, and relationships. Key informants also noted that TFHA staff had attended several cluster meetings as a key stakeholder. These included meetings with the Health, Nutrition, Water, Sanitation and Hygiene (HNWASH) cluster and the Safety and Protection cluster involving the MoH, UN agencies and NGOs. --- Responding to sexual and reproductive health needs after cyclones Winston and Gita An SRH response was launched in the aftermath of both Tropical Cyclones Winston and Gita. The scope of these responses differed, and Table 1 summarises these against the objectives of the MISP (2010). Key differences are seen in preventing and responding to sexual violence and in planning for comprehensive SRH services integrated into primary care. Safe and rational blood transfusion was not reported to be in place in either setting. --- Fiji The training at the onset of the cyclone response led trainees, with the support of surge staff, to initiate a Family Health Sub-cluster to facilitate a collaborative SRH response with the MoH, medical services teams and partners. According to one key informant, this was essential "otherwise reproductive health would have been lost in the health cluster because they had so many other concerns". Before the guidance that was provided during this training, staff had found this a challenging time: we had to learn which cluster meetings to go to. We had to see where we fit into the security one and the health clusters. Even in the health clusters, we had to fight even to have a reproductive health cluster within the health cluster which wasn't there before… That's why we were so disadvantaged, there was a lot to handle. Links with the MoH also required strengthening.
One informant said: there was collaboration, there was an existing memorandum [of understanding]… It was therefore reported that "we had to make extra efforts to be brought in". These 'extra efforts', in the form of advocacy by motivated IPPF SROP and MA representatives guided by surge capacity staff, led to the establishment of the sub-cluster in collaboration with the Ministry of Health and Medical Services, and the delegation of responsibility to RFHAF, achievements regarded as impressive by several respondents. They also strengthened the relationship with government, an outcome explained by one respondent as: crucial because these are the things that will really hinder you, will make it very difficult for one humanitarian team to operate if you do not have the support from your own leadership and if the government doesn't trust you. While informants noted initial uncertainty regarding which cluster meetings to attend, they were also aware of general confusion at the time of the response: "at that time… there were so many organisations that came in with different agendas and they wanted to be the first in." Despite this, all agreed that coordination had improved post-Winston and support had increased since the establishment of the IPPF Humanitarian Pacific Hub, with one respondent stating that the situation is: [better] coordinated, not like before when we were looking and finding ways with the existing system of the government, but now we know after the MISP, after the set-up of the humanitarian arm here, it's more coordinated and it's quicker. --- Medical missions were launched the day after the brief training in Fiji. The RFHAF/IPPF SPRINT team delivered family planning counselling and referred pregnant women in their third trimester to birthing units. They distributed clean delivery kits, contraceptives, and dignity kits (containing sarongs, undergarments, thongs, whistles, soap, and sanitary pads). The team also provided safe spaces for displaced women and girls and community awareness on GBV, though skill weaknesses in this area were noted: we were just at that point strengthening the objective two components of MISP and so I think at that time we couldn't even consider ourselves a player at that point because we were not involved in the GBV or the protection work in Fiji. Lessons were learnt and applied as the SRH response progressed, with one responder reflecting that: the first intervention… was really disorganised, but after that when we came to the second one we were able to take a lot of lessons and even recommendations from the community about how we could do it best and we even incorporated that intervention when we went to the west. The collation of supplies and logistics also delayed the medical response, as no action had been taken to secure these during the preparedness phase. One informant said: "what delayed our trip was we had to buy the stuff and get our dignity kits." Another stated: at that time we were trying to rent vehicles and they were all out… And that was a drawback because we were a bit late in our response… There was no coordination and we should have booked the car but we had all these competing agendas. Roles were not always clear to responders, who reported taking on many functions: --- So, I was everywhere. I don't really understand what was my role at that time because I seemed to be doing everything! I coordinated, I went to the village headmen, I went to the Ministry of Health for meetings, then I wore my nursing cap when I gave the injection and I was also the driver.
In addition to MISP work, staff were engaged in activities that were not related to SRH: The Chief of the village we visited was sick. And because it was so far away up the mountains and there was no transport, we had to get the Chief man, because he had something that needed medical attention and because we were there, we had to drive him down to the main hospital. But we had to do it. And after a hurricane it's not that easy to drive the Fiji roads where you have bridges washed away and big potholes. So that was something besides the MISP that we did during our response. However, informants stated that such activities were necessary to build rapport, and the willingness of staff to accommodate these additional needs was well-regarded by recipient communities. Some challenged the importance of the SRH response, believing that the focus should be on shelter and food. This required a strategic and respectful approach: --- It's actually about convincing the masses why it is important. It was not an easy job but we were able to tell them, during a disaster and after… women won't stop having babies during a disaster…The communities came to appreciate that and that was quite a good feeling. IPPF surge staff remained with the in-country and SROP-supported response teams in Fiji for 10 days to advise, guide, and debrief. One informant recalled: The good thing about it is after every village we went to, no matter how late it was we would sit together as a team… and go through the day… We built on our lessons learnt every day and we had [the two support persons] there and they were really observers when we provided the service except the doctors and counselling. But they would attend the information sessions and go in and see how we would demarcate the areas and the signs and they would help explain properly and they would feedback to us in the evening. More broadly, it was reported that knowledge, skills and relationships developed during this response have been utilised and built upon in subsequent preparedness efforts and humanitarian action. Further advocacy for the integration of SRHiE in emergency preparedness plans; collaboration with government at various levels for capacity development; training of clinical, program and volunteer staff in-country; and coordination with other key NGOs have been undertaken by RFHAF and supported by the IPPF Humanitarian Pacific Hub, established since Winston, "all geared towards being MISP ready and having strong systems in place" (Respondent). --- Tonga Key informants were optimistic about the response to Gita, explaining that "overall, the response was good and the TFHA team felt they were in control". Staff were described as highly motivated, with one informant declaring: "it was new for us and became very exciting for us to provide the MISP, and we were able to get DFAT, who is the donor, to join us on one of our visits and they were happy with what we showed". Comparisons were made with the response in Fiji, and one key informant stated: --- Tonga [the population] is much smaller [than Fiji] and the [TFHA] members as well have a very strong relationship with the Ministry of Health. I think those two things, there were a few things to their advantage. For example, one of the National Disaster Management Office coordinators actually sits on the Tonga Family Health board and also a Ministry of Health officer.
In addition, relationships and networks developed with the MoH, NGOs and communities during preparedness activities were easily activated in response to Gita. When the MoH made an official request to the TFHA to facilitate SRH services and education to communities affected by Tropical Cyclone Gita on 19th February 2018, the TFHA formed a "core team" with the MoH and NGOs to undertake these activities in coordination with the HNWASH and Safety and Protection Clusters. One individual stated that the TFHA had "a very good relationship with the Australian DFAT (Department of Foreign Affairs and Trade) post in Tonga, maybe because they're just down the road. There's that active engagement even during normal times". However, there was still "a rapid learning curve" when it came to moving from the training room to disaster implementation. The assistance provided by the IPPF Humanitarian Hub, including the training and the development of a response plan and proposal for funding, was regarded as a "big advantage" by a key informant. Staff roles were expanded when the TFHA team agreed with the MoH to include cervical cancer, diabetes and high blood pressure screening in the response, "given the high burden of non-communicable disease in the Tongan community". As in Fiji, it was identified that staff lacked capacity to address objective 2 of the MISP, responding to sexual violence. They instituted a brief training intervention to increase the capability of nurses to counsel and refer identified cases. Despite informants expressing satisfaction with the response, some pointed to necessary improvements, including the need to better plan transport to outer islands, as staff had to rely on fishing boats, and to tailor dignity kits to suit the local context. Plans to improve preparedness were in train, including undertaking a MISP readiness assessment, integrating the MISP into the national reproductive health policy, and lobbying to include the MISP in the Tongan Government's goal to respond within the first 72 hours of an emergency. --- Shared insights Key informants agreed on several issues, including that preparation is key for any response and that this must include hands-on skill development and building and maintaining strategic relationships and community links. As explained by one participant: 80% of your response lies in how prepared you are. And being prepared doesn't just mean that you have clinicians trained, or the resources prepositioned, it's about being part of a national support network… we need to have those linkages to national level. We need to have those policies in place, we need to have the buy in from the key ministries…and I think we need to have partnerships-these play a great deal in the preparedness needs. And definitely capacity building at the MA level not just for the clinical or program staff but for youths engaged, at the board level for governance and so people are clear about what their role is and how that contributes to the bigger, broader picture of meeting people's SRH needs. The engagement and motivation of SPRINT-supported individuals and teams was regarded as an important driver of the response in both settings. This was seen in the many efforts to overcome obstacles in Fiji and Tonga and the commitment to dedicate long hours and "heavy work" (Respondent) to meeting the needs of affected communities. This, combined with technical knowledge developed through capacity development, was described as key: You need passion and technique.
For humanitarian response, you can teach technique, but you can't teach motivation and passion…That's why I was confident with any response, as long as I'm working with the right people. And these were the right people on the ground... But they need the knowledge and that knowledge, that technique, can be provided through training and support. Respondents from both Tonga and Fiji noted a lack of systematic data collection on the status of vulnerable and marginalised groups during the response. This lack of data was seen as a barrier to mobilising an effective SRHiE response and planning future responses. In Tonga, this need for reliable data was reported to extend beyond particular groups to a general shortage of demographic and health-related data at a country level. One informant called for "standards for reporting and country appropriate indicators to allow the comparison of responses." While UNFPA provided commodities for distribution, they did not assume an implementing function during the cyclone responses. It was suggested, however, that UNFPA involvement in monitoring and evaluation would have benefited the response in both countries. --- Discussion This study found that differences in Fiji and Tonga's preparedness at the individual, organisational and systems levels prior to Tropical Cyclones Winston and Gita influenced the type, scope, and timeliness of the sexual and reproductive health response. In Fiji, activities were concentrated on IPPF support to provide training to rapidly scale up the capacity of responders at the onset of the disaster, and to strengthen relationships and access to platforms for coordination. In Tonga, individual and organisational capacity had already been established alongside inter-organisational networks across the sector and at the national level. Respondents in Tonga reported feeling prepared and confident. This is likely to be linked to the investment in preparedness activities and capacity building before Gita that was not present in Fiji before Winston. Despite an existing memorandum of understanding with the Fiji MoH, regular communication appears to have lapsed. In contrast, considerable work had been undertaken in Tonga to build and maintain relationships with the government, NGOs and communities for SRH response. These capacity-building and preparedness activities in Tonga allowed the response team to take clear and directed action, engage with established coordination partners and platforms, and implement a relatively harmonised response. The gaps in preparedness in Fiji meant that there was a lack of clarity in initial efforts, and time was lost at the onset of the response. Despite these early challenges, however, adaptations were made to capitalise on the motivation, existing capabilities, position, and relationships of those involved in the response to Winston. This study found that a range of approaches to staff capacity building, such as regular in-service workshops in Tonga and rapid training at the onset of and during the disasters in both countries, followed by mentoring and support, motivated and engaged staff in the provision of SRH and broader health services to affected communities. This emphasises the need for regular, on-going training and supportive strategies that are relevant and contextualised.
Training is often the focus of capacity development [28], and a review of organisational change in the sector [29] concluded that training is only weakly linked to actual practice in humanitarian agencies and therefore needs to be supported by other capacity development initiatives. The limited effectiveness of training programs highlights the need for training to be situated within a set of buttressing strategies so that staff can apply knowledge and skills in the field. Pearson states that the design of training interventions "should be informed by an in-depth understanding of the context and the identification of opportunities and constraints, and appropriately aligned to broader [capacity development] initiatives" (2011, p. 9). A systematic review of studies examining the transfer of training into practice for SRH in humanitarian settings found that individual, training, organisational, socio-cultural, political and health system factors all contribute to the ability of trainees to apply newly acquired knowledge and skills in their work settings [30]. This highlights the need for comprehensive activities at multiple levels within a country and across the Pacific region to build capacity for an SRH response. The training and subsequent mentoring and technical support provided by IPPF surge staff were reported to be indispensable in Fiji, highlighting the importance of these buttressing strategies to support capacity development efforts and optimise the application of knowledge and skills to action. Our study found that informants highlighted the importance of learning by doing, of feedback and support, and of building capacity through the process of implementation across both country contexts. Role flexibility was noted, along with the need to be adaptable in incorporating non-SRH response activities as relevant to the local context. Factors at an organisational level also influenced the SRH response in both contexts. The support of management and program staff and the availability of surge capacity and technical guidance were widely appreciated. While this, together with the formalisation of partnerships and regular meetings and training, is important to ensure the currency of coordination efforts in readiness for activation, so is the institutionalisation of SRHiE in national policy and accountability mechanisms [31]. National policies that highlight SRHiE as a priority and embed the MISP into disaster risk reduction (DRR) planning, with attached investment and key performance indicators, support the delivery of essential services in emergencies. The latest Fijian National Disaster Risk Reduction Policy 2018-2030, published after Cyclone Winston, notes the challenges of gender-based violence and that reproductive health services are likely to be disrupted during disasters. While it includes strategies to support specific groups such as pregnant women and LGBTQI people, the policy stops short of noting the MISP [32]. It has been noted that while the SPRINT initiative has been implemented across several regions to improve organisational and national capacity, preparedness training for communities across the sector more broadly has been largely neglected [33]. At the same time, our research notes that both RFHAF and TFHA have established relationships with communities, and that they could be further supported to prepare for and better respond to disasters.
One approach to building relationships could be through participatory training activities with communities using available curricula in reproductive health and gender [34]. While preparing for anticipated disaster scenarios through training is important, so is the ability of individuals, organisations and communities to adapt and apply skills flexibly to new situations and problems. The COVID-19 pandemic has provided an opportunity to examine more localised ways to address the provision of SRHR and build national and regional capacity to improve disaster risk reduction strategies and plans. A Western Pacific Regional Action Plan for Response to Large-Scale Community Outbreaks of COVID-19 has been developed [35]; however, SRHR is notably absent in this document. Despite this, Pacific Island nations, including Fiji and Tonga, have implemented various strategies to ensure an SRHR response, demonstrating their resilience and innovation [36]. Much work remains to be done to better build and connect capacity strengthening activities from the individual to national levels, not just for preparedness and response but for recovery and mitigation efforts. --- Limitations This study is limited by the small number of participants; however, those interviewed were key informants involved in decision-making during the preparedness and response phases of cyclones Winston and Gita. Insights from a greater diversity of participants in Fiji and Tonga may have provided further detail regarding the activities that were undertaken. These interviews were conducted some time after the cyclones, and the memories of some informants may have been compromised; however, this study was focused on high-level activities, and many participants had prepared for the interviews by consulting internal documents. Multiple contacts with participants enabled the researchers to follow up on details and check information with informants. We were mindful of possible social desirability bias; interview data were assessed for both positive and negative responses, and imbalances were not noted. --- Conclusion This research has outlined the need for comprehensive activities at multiple levels within a country and across the Pacific region to build capacity for an SRH response in crisis situations. While the SPRINT initiative has been implemented across several regions to improve organisational and national capacity and preparedness, training for communities can be strengthened. The study highlights the importance of formal partnerships, regular communication, institutionalising SRH in policy and accountability mechanisms, and training to ensure coordination efforts are up-to-date in disaster readiness. --- Availability of data and materials De-identified data are available upon request.
--- Abbreviations GBV: Gender-based violence; IPPF: International Planned Parenthood Federation; LARC: Long-acting reversible contraceptives; LGBTQI: Lesbian, gay, bisexual, transgender, queer, intersex; MISP: Minimum Initial Service Package for sexual and reproductive health in crisis situations; MoH: Ministry of Health; NGO: Non-government organisation; OHCHR: Office of the United Nations High Commissioner for Human Rights; RFHAF: Reproductive and Family Health Association of Fiji; SROP: Sub-Regional Office for the Pacific; SRHiE: Sexual and reproductive health in emergencies; SRHR: Sexual and reproductive health and rights; SPRINT: Sexual and reproductive health programme in humanitarian settings; STIs: Sexually transmitted infections; TFHA: Tonga Family Health Association; UN: United Nations; WHO: World Health Organization. --- Authors' contributions RD and MK conceived the study. KB and AD designed the study and analysed the data. AD drafted the manuscript, and KB, MK and RD edited and approved the manuscript. KB coordinated data collection and conducted interviews. All authors read and approved the final manuscript. --- Declarations Ethics approval and consent to participate This research has received ethical clearance from the University of Technology Sydney Human Research and Ethics Committee (approval number: ETH19-4172), the Fiji National Health Research Ethics Committee (approval number: 31/1/2020), and the Tonga National Health Ethics and Research Committee (approval number: 201921107). --- Consent for publication All participants consented to the publication of de-identified data. --- Competing interests AD and KB do not have any competing interests to declare. RD is employed by IPPF and MK was employed by IPPF at the time of the study. However, RD and MK had no role in the study's design or the collection and analysis of the data and interpretation of the findings. --- Publisher's Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
To examine the main and interactive effects of the amount of daily television exposure and frequency of parent conversation during shared television viewing on parent ratings of curiosity at kindergarten, and to test for moderation by socioeconomic status (SES). Sample included 5100 children from the Early Childhood Longitudinal Study, Birth Cohort. Hours of daily television exposure and frequency of parent screen-time conversation were assessed from a parent interview at preschool, and the outcome of early childhood curiosity was derived from a child behavior questionnaire at kindergarten. Multivariate linear regression examined the main and interactive effects of television exposure and parent screen-time conversation on kindergarten curiosity and tested for moderation by SES. In adjusted models, a greater number of hours of daily television viewing at preschool was associated with lower curiosity at kindergarten (B = -0.14, p = .008). More frequent parent conversation during shared screen-time was associated with higher parent-reported curiosity at kindergarten, with evidence of moderation by SES. The magnitude of association between frequency of parent conversation during television viewing and curiosity was
Introduction Curiosity, an important foundation for scientific innovation [1], is characterized by the drive to seek out new information [2], desire to explore [3], and joy in learning [4,5]. Higher curiosity has been associated with numerous adaptive outcomes in childhood, including more robust word acquisition [6], enhanced learning and exploration [7], and higher academic achievement [8,9], highlighting the potential importance of fostering curiosity from an early age. Our previous work found a positive association between higher curiosity and higher academic achievement, with a greater magnitude of benefit for children with socioeconomic disadvantage [10], raising the possibility that promoting curiosity in young children may be one way to mitigate the achievement gap associated with poverty [11]. To foster curiosity in early childhood, it is necessary to consider the modifiable contexts that may promote or inhibit its expression. One potential modifiable factor associated with differences in early childhood outcomes is the amount of daily television exposure. While there is increasing interest in the role of digital media exposure in child development, televisions are in 98% of all homes, and television viewing remains the dominant screen activity of young children, accounting for 72% of all screen time [12], making television exposure a relevant developmental context in young children. Children are exposed to an average of 1-4 hours of television per day [13,14], with higher exposure in children who are economically disadvantaged [15,16]. In previous screen-time research with infants, toddlers and preschoolers, more television exposure has been associated with impaired self-regulation [17,18], lower language outcomes [19,20], and lower cognitive development [21,22]; however, the association with curiosity has not been examined and is a gap in the literature. Screen media exposure, including television, can displace exploratory activities such as play and parent-child interactions [23] that are thought to be necessary for the cultivation of curiosity [24]. We therefore sought to test the hypothesis that higher daily television exposure would be associated with lower curiosity (Hypothesis 1). We also considered that the association between the amount of television exposure and early childhood curiosity may be attenuated in children with higher SES, who may have other resources to foster curiosity, compared with low-SES children. Therefore, we sought to test whether the association between higher daily television exposure and early childhood curiosity was moderated by SES, with a greater magnitude of effect seen in low-SES/under-resourced families (Hypothesis 2). In addition, because development unfolds through reciprocal interactions between children and their parents, the quality of early dyadic experiences may also play a role in fostering curiosity. Previous work has demonstrated the benefits of parent conversation during shared television viewing on language development, with more frequent conversation moderating the adverse impact of heavy television exposure [25]. --- Data availability The ECLS-B is a nationally representative, population-based longitudinal study sponsored by the Institute of Educational Statistics, from the US Department of Education's National Center for Education Statistics (NCES). While the ECLS-B is a publicly available dataset, the data used for this analysis come from the restricted ECLS-B dataset, which requires special access and permission from NCES prior to accessing the data.
The PI (senior author (PES)) had to enter into a data-use agreement with the Institute for Educational Statistics / NCES prior to receiving access to the restricted-use data. Per the requirements of the NCES, the data cannot be freely shared with other investigators, and interested investigators must enter into a data-use agreement with the NCES prior to accessing the restricted-use data from the Institute for Educational Statistics. Due to NCES's confidentiality legislation, ECLS-B case-level data are available only to qualified researchers who are granted a restricted-use data license from NCES. Information regarding how to obtain a restricted-use data license for the ECLS-B can be found at https://nces.ed.gov/pubsearch/licenses.asp. --- Previous research has also demonstrated that parent-child conversation facilitates children's thinking, learning and exploration (i.e., behavioral indicators of curiosity) through pedagogical exchanges [26]. As such, we hypothesized that more frequent parent conversation during shared television viewing may be associated with higher curiosity (Hypothesis 3a) and may moderate the association between higher television exposure and curiosity (Hypothesis 3b). Furthermore, because the amount and quality of language that young children hear also varies by socioeconomic status [27] (e.g., the 30-million-word gap) [28,29], we theorized that there may be a similar "curiosity gap" among low-income children who are exposed to less conversation. We hypothesized that, consistent with a cumulative risk model [30], socioeconomic disadvantage in combination with less frequent parental conversation may confer an added risk for lower curiosity, with the greatest effects seen in children from low-SES families (Hypothesis 4). The overarching aim of this work was to identify modifiable factors in the early caregiving environment (e.g., amount of early television viewing, frequency of parent conversation) which may be important for the promotion of early childhood curiosity, and to examine whether these factors were associated with differential effects in children from under-resourced families. Results from this work will help inform anticipatory guidance to promote early childhood curiosity in at-risk populations. --- Materials and methods --- Study design and sample Data were drawn from the restricted data of the Early Childhood Longitudinal Study, Birth Cohort (ECLS-B), a nationally representative, population-based longitudinal study sponsored by the US Department of Education's National Center for Education Statistics (NCES). The ECLS-B is based on a nationally representative probability sample of children born in the United States in 2001. Data were collected from children and their parents at the 9-month, 24-month, preschool and kindergarten timepoints, and included parent interviews and direct and indirect child assessments across multiple settings [31]. Our sample excluded children with congenital and chromosomal abnormalities, and included children born at 22-41 weeks gestation who had kindergarten behavioral data from which we could derive a measure of curiosity. Our study utilized data from the birth, 24-month, preschool and kindergarten timepoints, with a final sample of 5100 children.
This study was considered exempt by the Institutional Review Board because it involved the use of a publicly available dataset with de-identified participants who could not be linked to the data. --- Measures Outcomes. Curiosity. Because the ECLS-B did not have a measure to examine curiosity, we derived a measure of curiosity from an existing assessment of child behavior available in the dataset, which included questions from the Preschool and Kindergarten Behavioral Scales Second Edition (PKBS-2) and Social Skills Rating System (SSRS). While we were limited by the questions that were available in the parent PKBS-2 questionnaire at the kindergarten timepoint, we drew from previous theoretical work and behavioral descriptions of curiosity in young children [32][33][34][35][36][37][38] to select question items that most closely aligned with characteristics of curiosity. While there is no single definition of curiosity [33], there are certain behavioral characteristics of curiosity that are widely accepted, including (1) the thirst for knowledge, and the drive to understand what one does not know [34]; (2) an exploratory drive to seek novelty [35]; (3) an openness to new experiences [36]; and, in young children, (4) innovation in exploratory play [37,38]. Four question items from the PKBS-2 which aligned with these characteristics of curiosity were chosen for our "curiosity factor." The individual question items included (1) shows eagerness to learn new things (i.e., thirst for knowledge); (2) likes to try new things (i.e., drive for novelty); (3) easily adjusts to a new situation (i.e., openness to new experiences); and (4) shows imagination in work and play (i.e., innovation in exploratory play). At the kindergarten timepoint, parents were asked to report the frequency of behaviors observed in the previous 3 months on a 5-point Likert scale (1, never to 5, very often). Items were reverse coded as appropriate such that higher scores indicated more positive behaviors. A confirmatory factor analysis (CFA) was conducted to assess reliability and to calculate the appropriate loading values for deriving our curiosity factor. Standardized scoring of the curiosity factor was conducted, and good internal consistency was demonstrated (α = 0.70, M = 0.07, SD = 1.2) [10]. Individual question items, loading coefficients, and model fit indices for our curiosity factor are shown in S3 Appendix. Predictors. Hours of television viewing. Hours of television viewing at preschool were determined from a parent questionnaire at the preschool timepoint. Parents were asked "…about how many hours of television does [your child] watch at home per day," with responses ranging from 0-24 hours. Respondents who answered "N/A" to this question were not included in the analysis. Because most children (96%) reportedly watched 6 hours or fewer of television per day, the hours of daily television exposure were capped at 6+ hours, reducing the influence of a few statistical outliers. Parent conversation during shared television viewing. Parent conversation during television viewing was determined from a parent questionnaire at the preschool timepoint. Parents were asked, "In a typical week, when your family watches TV together, how often do you or another family member talk with [your child] about the TV programs?" Responses were coded categorically as 1 = never, 2 = hardly ever, 3 = sometimes, or 4 = often.
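To make the scoring procedure concrete, the sketch below illustrates how a curiosity factor of this kind could be derived from four Likert items: reverse coding, a Cronbach's alpha check, and a standardized loading-weighted composite. This is a minimal illustration, not the authors' code; the column names, the synthetic data, and the loading values are placeholders, and the actual loadings come from the CFA reported in S3 Appendix.

```python
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of the total)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Synthetic stand-in for the four 1-5 Likert items (5 = "very often").
rng = np.random.default_rng(0)
df = pd.DataFrame(
    rng.integers(1, 6, size=(500, 4)),
    columns=["eager_to_learn", "tries_new_things", "adjusts_new_situation", "imagination"],
)

# Reverse code any negatively worded item so that higher = more positive behavior;
# none of these four items needs it, so the step is shown only as a pattern:
# df["some_item"] = 6 - df["some_item"]

print(f"alpha = {cronbach_alpha(df):.2f}")  # the study reported alpha = 0.70

# Loading-weighted composite, standardized to mean 0 / SD 1; the loading values
# here are placeholders (the actual CFA loadings are reported in S3 Appendix).
loadings = np.array([0.7, 0.7, 0.5, 0.6])
raw = df.to_numpy() @ loadings
df["curiosity"] = (raw - raw.mean()) / raw.std(ddof=1)
```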
Parents were not asked to report on the amount of time a child watched television without adults, thus we were unable to control for the amount of time children watched television without adult co-viewing. Relatedly, there was also no measure of overall (non-television) parental language for the entire sample. As such, we were unable to control for non-screen-time parental language. To address this limitation, using a subsample of 500 parent-child dyads with available data on a structured reading task, we examined the association between parent conversation during TV viewing and parent conversation during the reading task. We found a positive association between the frequency of television-related parent conversation and elaborative parent language during the reading task, characterized by use of open-ended questions (p = .008) and relating the book to the child's experience (p = .03). Based on this subsample analysis, we considered that television-related parent conversation may also reflect the quality of the language environment in the home. For the purpose of this study, we considered the amount of parent conversation during shared television viewing to serve as a proxy for the amount of language in the caregiving environment. Covariates. In our primary analyses, we included sociodemographic variables that might be associated with the amount of television viewing and curiosity. Specifically, we controlled for maternal age, race/ethnicity, marital status (married/unmarried), maternal education (< high school; high school graduate; > high school), and poverty (< 185% federal poverty line; ≥ 185% federal poverty line). The latter two variables were integrated into a composite measure of household socioeconomic status (SES) at kindergarten [31]. We also controlled for child sex, child age, the type of childcare/preschool experience (no non-parental care; relative/nonrelative home-based care; center-based care), and average number of hours of childcare/center-based care per day. Because lower developmental skills [39] and inability to delay gratification [18] have been associated with increased television exposure, additional covariates included a measure of infant development at 24-months from the Bayley Short-Form Research Edition, and parent report of delay of gratification at 24-months ("My child is able to wait," dichotomized as "no/yes"). Of note, there was no measure of the content of television programming available in the dataset (i.e., educational programming vs. entertainment), so we were not able to control for television content in our analyses. --- Statistical analyses All analyses were conducted using SAS 9.4 [40] (SAS Institute Inc., Cary, NC). Maternal and child characteristics were examined using descriptive statistics. Multivariate linear regression using the SURVEYREG (SAS) procedure allowed for tests of associations between hours of daily television viewing, frequency of parent conversation during shared television viewing, and kindergarten curiosity in linear and non-linear (quadratic) models, with minimal differences between the linear and quadratic models. We included covariates related to television viewing, parent conversation, and curiosity to adjust for theoretically justified confounds. For our primary analyses, in adjusted models, we tested the association between the hours of television viewing and curiosity at kindergarten (Hypothesis 1), and whether the association between hours of television viewing and curiosity was moderated by SES (Hypothesis 2).
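To make the shape of these adjusted models concrete, the fragment below fits a sample-weighted linear regression for the Hypothesis 1 association. It is a schematic Python stand-in for the SAS SURVEYREG procedure actually used, run on synthetic data: the variable names and values are placeholders, only a few of the covariates listed above are included, and the ECLS-B jackknife replicate weights required for design-correct standard errors are omitted.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "curiosity": rng.normal(0, 1.2, n),          # standardized outcome
    "tv_hours": rng.integers(0, 7, n),           # daily hours, top-coded at 6+ in the study
    "ses": rng.normal(0, 1, n),                  # composite SES measure
    "child_age": rng.normal(68, 4, n),           # age in months, illustrative
    "sample_weight": rng.uniform(0.5, 2.0, n),   # stand-in for ECLS-B sampling weights
})

# Weighted least squares as a stand-in for PROC SURVEYREG; the real model also
# adjusted for the full covariate set described above (maternal age, race/ethnicity,
# marital status, childcare type and hours, 24-month development, delay of gratification).
fit = smf.wls("curiosity ~ tv_hours + ses + child_age",
              data=df, weights=df["sample_weight"]).fit()
print(fit.params["tv_hours"])  # the study's adjusted estimate was B = -0.14
```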
We examined whether the amount of parent conversation during shared television viewing at preschool was associated with early childhood curiosity (Hypothesis 3a), and whether the amount of parent conversation moderated the association between the amount of television viewing and early childhood curiosity (Hypothesis 3b). Finally, we examined whether the association between parent conversation during television viewing and curiosity at kindergarten was moderated by SES (i.e., our test of a cumulative risk hypothesis) (Hypothesis 4). In all our moderation analyses, we included the interaction term in the final step of the multivariate regression models. When the interaction was statistically significant (p < .05), we performed a stratified analysis of the association between the predictor and curiosity, adjusting for covariates. Because of the complex sample design, sample weights and the jackknife method [41] were used to account for stratification, clustering, and unit non-response, thereby allowing the weighted results to be generalized to the population of U.S. children born in 2001. In accord with the NCES requirements for ECLS-B data use, reported numbers were rounded to the nearest 50. --- Results --- Sample characteristics Of the 6350 children who had behavioral (curiosity) data at kindergarten, 5100 children had television-viewing data at preschool and all covariates; these children served as our analytic sample. The 5100 children in our final sample did not differ from the 1250 children who were excluded (due to missing data) on most demographic characteristics. However, children who were excluded were more likely to be non-White, to have lower SES, to have higher 24-month development scores, to have watched fewer hours of television per day, and to have attended childcare/preschool for more hours per day. At the preschool timepoint, parents reported that children watched an average of 2.5 hours of television per day, and almost half of parents (49.8%) reported talking with their children "often" when viewing television together. After applying sample weights, the maternal and child characteristics were generalizable to the US population in 2001. The sample characteristics for the weighted sample are shown in Table 1. Descriptive characteristics of the amount of television viewing and parent conversation during shared television viewing are shown in Table 2. --- Tests of association between hours of daily television viewing and child curiosity at kindergarten (Hypothesis 1), moderation by SES (Hypothesis 2), and main and moderating effects of parent conversation during shared television viewing (Hypotheses 3a and 3b) In adjusted models, higher daily television viewing at preschool was associated with lower curiosity at kindergarten (B = -0.14, p = .008) (Hypothesis 1, S1 Appendix). The association between the hours of daily television viewing at preschool and kindergarten curiosity was not moderated by SES (p = .22) (Hypothesis 2). In adjusted models, we also found that more frequent parent screen-time conversation was associated with higher curiosity at kindergarten (p < .001) (Hypothesis 3a, S1 Appendix), but that more frequent parent conversation did not moderate the association between the amount of television exposure and early childhood curiosity (p = .23) (Hypothesis 3b).
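The interaction-then-stratify workflow described above can be sketched as follows, continuing from the synthetic data frame and hypothetical column names in the previous fragment. The study itself fitted these models in SAS with jackknife replicate weights, which this simplified sketch does not reproduce.

```python
# Dichotomize the SES composite at the median to form the moderator used in
# the stratified follow-up (low SES = 1, high SES = 0).
df["ses_low"] = (df["ses"] <= df["ses"].median()).astype(int)

# Step 1: enter the interaction term in the final step of the adjusted model.
inter = smf.wls("curiosity ~ tv_hours * ses_low + child_age",
                data=df, weights=df["sample_weight"]).fit()

# Step 2: stratify only when the interaction is statistically significant (p < .05).
if inter.pvalues["tv_hours:ses_low"] < 0.05:
    for label, sub in df.groupby("ses_low"):
        strat = smf.wls("curiosity ~ tv_hours + child_age",
                        data=sub, weights=sub["sample_weight"]).fit()
        print(f"ses_low={label}: B = {strat.params['tv_hours']:.2f}")

# Per NCES requirements, any reported n is rounded to the nearest 50.
print(round(len(df) / 50) * 50)
```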
--- Tests of association between parent conversation during shared television viewing and child curiosity at kindergarten, and moderation by socioeconomic status (Hypothesis 4) We then examined whether the association between the frequency of parent screen-time conversation at preschool and kindergarten curiosity was moderated by socioeconomic status (SES) (Hypothesis 4). We found evidence of moderation by SES (S2 Appendix), and proceeded to examine this association further by stratifying by lower SES (≤ median) and higher SES (> median), adjusting for the a priori covariates. We found differences in parent-reported curiosity between families from high and low levels of SES for each category of parent conversation (never, hardly ever, sometimes, often), with a stronger association among families from under-resourced environments (i.e., low SES) (Table 3). To test and confirm the linear trend between parent conversation and curiosity, in this model only, we then tested the association with parent conversation coded as a continuous variable (1-4). The linear trend demonstrated that the effect of more frequent parent conversation on curiosity was stronger among low SES families (B = 0.29, p < .001) compared with high SES families (B = 0.11, p < .001) (Fig 1). --- Frequency of parent conversation during shared television viewing and associations with characteristics of childhood curiosity To further examine the psychometrics of our measure of curiosity and consider the value of each question item, we conducted a post hoc analysis to determine if there were specific features of childhood curiosity that were susceptible to the effects of parent conversation. We ran four models, examining the association between the frequency of parent conversation (as a continuous variable) and each curiosity question item as our outcome, adjusting for a priori covariates. In these models, more frequent parent conversation was positively associated with each curiosity question item, with the greatest magnitude of association demonstrated by "shows imagination in work and play" (B = 0.14, p < .001) (Table 4). The relatively similar findings across items suggest that our curiosity measure tends to act as a unified construct. --- Discussion This is the first study examining associations among the amount of daily television exposure, frequency of parent conversation during shared television viewing at preschool, socioeconomic status, and parent-report of curiosity at kindergarten using a nationally representative sample. In adjusted analyses, we found that higher daily television viewing at preschool had a small but significant association with lower curiosity at kindergarten (Hypothesis 1), but that this association was not moderated by socioeconomic status (SES) (Hypothesis 2). We found that more frequent parent conversation during shared television viewing was associated with higher curiosity at kindergarten (Hypothesis 3a), but that more frequent parent conversation during shared television viewing did not moderate the association between the amount of television exposure and early childhood curiosity (Hypothesis 3b). While we found an association between higher television viewing at preschool and lower parent-reported curiosity at kindergarten, we were not able to include measures of the content of the television programming.
Because the opportunities for conversation and scaffolding may differ if dyads are watching educational TV versus other types of programming, our inability to include the content of the television programming in our analyses (due to the constraints of the dataset) limits the interpretability of the association between the amount of television viewing and kindergarten curiosity. We found an association between the amount of parent conversation during shared television watching at preschool and early childhood curiosity (Hypothesis 3a), with evidence of moderation by SES (Hypothesis 4). In both high and low SES families, parents who reported higher amounts of conversation also rated their children as being more curious, with a greater magnitude of association in children from under-resourced families. We have several possible explanations to account for these findings. One interpretation is that parents who report engaging in more conversation may also be more attuned to children's expression of curiosity (e.g., children's asking of questions, and engagement in pedagogical exchanges in conversation), and thus they also report their children as having higher curiosity. However, while greater parental conversational exchanges have been associated with more question-asking from their children [42], this explanation does not explain why the magnitude of association between the frequency of parent conversation and curiosity would be greater in low SES children. One possible explanation is that some parents may engage in frequent conversation with their children in settings other than television, but allow their children to watch television alone, which may explain why there is an attenuated association between conversations during shared television watching and curiosity for higher SES parents. An alternate explanation is that while the "cumulative risks" of socioeconomic disadvantage and less frequent parental conversation may confer an added risk for lower curiosity [30], the same children who are more vulnerable to suboptimal development (e.g., "lower curiosity") may also be more susceptible to the effects of more stimulating caregiving environments (e.g., more frequent parent conversation) [43]. This suggests a potential "differential susceptibility" to the quality of the caregiving environment, whereby low-SES children may reap added benefits from language-promotive environments. Prior research has demonstrated how the quality of the linguistic environment in the home (e.g., quality and quantity of language stimulation) can mitigate the effects of socioeconomic disparities (i.e., poverty) on brain structure and later language and literacy outcomes [44,45]. Our results similarly suggest that the quality of the early linguistic environment (characterized by more frequent parent conversation during shared TV viewing), while promotive of higher curiosity in all children, may be especially beneficial for fostering curiosity in children with socioeconomic disadvantage. These findings have implications for the anticipatory guidance provided to parents. There is some evidence suggesting that children with low curiosity fail to engage with their environments in ways that foster motivation, achievement, and more specifically, academic development [46].
Building on our previous work, which suggested that higher curiosity can help narrow the achievement gap associated with poverty [10], our results suggest that one potential way to foster curiosity is through facilitating conversational exchanges between children and their parents around moments of shared activity, especially for children from low socioeconomic environments. This aligns with previous language-related research which demonstrates that socioeconomically disadvantaged children preferentially benefit from greater child-directed speech and conversational exchanges [27,45,47,48]. Our findings also highlight the importance of parental scaffolding for child engagement and learning. In the same way that parental engagement with children around shared play with toys facilitates children's learning and exploration [49], we found that parent conversation (as measured around shared television viewing) could provide similar scaffolding, being associated with higher expressions of child curiosity. Prior research has demonstrated that children learn best in environments that are interactive, encouraging turn-taking, dialogic exchanges, and intrinsically motivated questions [47,50]. Our results similarly attest to this, but with an important consideration for children with socioeconomic disadvantage. While incremental increases in the frequency of parent conversation were associated with higher curiosity for all children, for children from under-resourced (i.e., low SES) environments, only parents who often engaged in conversation around shared television viewing had children whose curiosity scores were above the mean. Conversely, children from more-resourced environments (i.e., high SES) had curiosity scores above the mean even if parents hardly ever conversed when viewing television together. The "curiosity gap" between higher and lower SES children was greatest when parents "never" or "hardly ever" engaged in television-related conversation but was not observed when conversational exchanges occurred "often." This suggests that for children from under-resourced environments, more frequent parent conversation may help enable the expression of curiosity. One implication is that parents from low SES environments might benefit from anticipatory guidance regarding the importance of dialogic (back and forth) conversation to promote inquisitiveness and learning. Such guidance may include interventions similar to "parent coaching" to facilitate conversational exchanges to promote early language development [51]. At present, because the dominant screen activity of low-income children involves television viewing [52], and because television viewing is essentially non-conversational and non-interactive [53], fostering opportunities for conversational exchanges around television viewing (in addition to other shared activities) may be one potential naturalistic intervention [47]. Our results also indicate that more frequent parent conversation was associated with parent reports of higher imagination at kindergarten. The topics eliciting a child's curiosity are often related to a child's idiosyncratic interests [54], and are revealed in the context of responsive, interactive exchanges [55]. Because we hypothesize that conversation around shared television viewing likely included pedagogical exchanges (e.g., "What do you think is going on?
Why do you think that happened?"), our results suggest the possibility that more frequent conversation (in all contexts, not just television viewing) can promote imaginative expression (one of the underpinnings of curiosity [56]) at kindergarten. Interventions to promote dialogic exchanges and language-rich caregiver-child interactions have been shown to be beneficial for early imagination and learning, and may be similarly promotive for early childhood curiosity [57,58]. Our study had several strengths and limitations. Strengths include the use of a nationally representative sample, which included a child behavior questionnaire from which we could derive a measure of curiosity and which allows our results to be generalized to the population. One limitation is that our study used parent self-reports to measure the amount of television viewing and parent conversation, and our curiosity factor was derived from a single parent-report behavioral measure at the kindergarten timepoint. As such, we acknowledge the potential bias and shared method variance associated with parent-report measures. In addition, although a subsample analysis indicated that parents who engaged in more frequent television-related conversation were more likely to use elaborative language, there was no independent measure of non-television parent-child conversation for the entire sample, so we were unable to control for non-screen-time language. Although there was a teacher report of child behavior at kindergarten, it did not include all the "curiosity" items, so we could not examine curiosity across reporters. In addition, the dataset did not contain information regarding the content of the television programs watched, which is a potential confounder which we were not able to include in our analyses. We also acknowledge that while we found significant associations between the hours of television viewing, frequency of parent conversation, and parent reports of curiosity, our effect sizes were small. Finally, while the ECLS-B is a rich dataset and one of the only such longitudinal cohorts from the United States, the data are older, and did not include measures of smartphones and other portable technologies on which television programming may be watched, along with more conversational media such as video-chatting, which is an additional limitation. Future research should consider examining these associations in relation to use of conversational and non-conversational digital media across screen platforms. Future research should also examine other features of curiosity that might help mitigate the poverty achievement gap [59], and consider other adaptive outcomes associated with early childhood curiosity [60]. Despite these limitations, we believe that our results have some important implications for caregivers and pediatricians. --- Conclusions Our results suggest that more frequent parent-child conversations around television viewing (which may be a proxy for other conversational exchanges) are associated with higher curiosity, especially in children with socioeconomic disadvantage. This highlights the importance of parents engaging in reciprocal conversations around topics and experiences of mutual interest [47], and suggests the importance of finding opportunities to foster conversational exchanges in the context of daily routines (e.g., even when watching television).
Aligning with the American Academy of Pediatrics' recommendations on media [61,62], parents can be counseled on the value of parental instructive dialogue during television viewing (e.g., "active mediation") [63], as an opportunity to promote inquiry [64]. Parent-child conversations that are guided by active mediation have been associated with more adaptive social-emotional development in young children, with a greater magnitude of effect in children from low-income families [65]. Our work extends this line of research and highlights the benefits of active mediation on early childhood curiosity. Because parent conversation around television viewing is likely related to parent conversation in the home, our results also suggest the importance of fostering opportunities for dialogic exchanges around all topics (not just television), especially for children from environments of socioeconomic disadvantage [27]. --- Data cannot be shared publicly because the data were obtained from a restricted-use dataset. The ECLS-B is sponsored by the Institute of Education Sciences (IES), US Department of Education, through the National Center for Education Statistics (NCES). While the ECLS-B is a publicly available dataset, the data used for this analysis come from the restricted ECLS-B dataset, which requires special access and permission from NCES. The senior author (PES) entered into a data-use agreement with IES/NCES prior to receiving access to the restricted-use data. Per the requirements of the NCES, the data cannot be freely shared with other investigators; due to NCES's confidentiality legislation, ECLS-B case-level data are available only to qualified researchers who are granted a restricted-use data license from NCES, and interested investigators must enter into a data-use agreement with NCES prior to accessing the data. Information regarding how to obtain a restricted-use data license for the ECLS-B can be found at https://nces.ed.gov/pubsearch/licenses.asp.
The rare case of the patient unwilling to disclose genetic data to his or her family provides an opportunity to expand the atomistic conception of the autonomous individual in medical decision-making. Medical practitioners naturally avoid violating patient autonomy and privacy. However, a patient's unwillingness to disclose can damage the health of people other than the patient. In this situation, professionals must weigh the principle of autonomy against the nature of relationships, duties, and confidentialities between patient, professional, and family. The paradigm case studied is that of a patient with a potentially dangerous heart condition, Long QT Syndrome 3. Patients with Long QT 3 are at high risk for dying of ventricular tachycardia during rest, especially from ages 40-60. Once familial genetic testing was completed, the proband's mother, who was positive for the mutation, chose not to inform her estranged sister of the diagnosis. This paper examines the ethical duties of the physician to inform a patient's extended family of a serious genetic diagnosis, with a focus on the emotional and psychological effects of genetic testing. The need to adapt the process of violating confidentiality around considerations for the patient's emotional state and narrative will be addressed. This approach considers the patient's narrative, standpoint, and relationships as a way to develop a support plan and will present a guideline for cases where the probability of significant harm to others supersedes the patient's preference for non-disclosure as well as the physician's respect for confidentiality. The paper seeks to expand the conversation on genetic testing and autonomy beyond principles by considering all parties involved, and emphasizes the use of the varied resources available to medical practitioners, especially to provide the best help possible without overburdening physicians with duties. The proband, a 20-year-old male soccer player asymptomatic for cardiac illness, presents with an abnormal EKG at a routine physical. A second EKG confirms a prolonged QT interval, which is associated with an increased risk of sudden cardiac death. However, given that the proband played soccer for many years with no symptoms, his doctor reassures him that a positive diagnosis is unlikely. Genetic testing results in a positive diagnosis of inherited long QT syndrome type III (LQT3). An electrical disorder of the heart caused by a mutation of cardiac ion channels, LQT3 can cause a ventricular arrhythmia called torsades de pointes (TdP). TdP produces palpitations and fainting, and can potentially cause sudden cardiac death due to ventricular fibrillation.
Different types of Long QT syndrome (LQTS) produce TdP at different times. For example, patients with LQT3 are most at risk during sleep, especially between ages 40-60. LQT3 is among the most lethal forms of the disorder, but it is not particularly common, striking around 10% of sufferers (NIH 2011). Other forms of LQTS can cause arrhythmia during exercise or during high-stress moments. The proband's family was referred for genetic counseling. Familial testing determined that the proband's mother was the carrier. The proband's sister and father were negative for the mutation. The mother was then called in by her physician to determine how best to inform family members. The mother told the physician that she had only a sister with two children living in Florida. Both of the mother's parents had passed away, one due to stroke at age 64 and the other due to pneumonia at age 81. When the doctor urged the mother to inform her sister, she replied, "Absolutely not. I'm not calling her." The physician offered a number of other modes of communication, including mailing forms himself, but the mother stated that she did not want to open any line of dialogue with the estranged sister. She said, "She would try to talk to me. I don't want to talk to her." When the physician asked what might be causing these negative feelings, the patient was unreceptive. Given the risk of death for the estranged sister, the physician felt his hands were tied. On one hand, the patient did not have a definitive, life-threatening illness. Rather, the odds of dying from undiagnosed LQT3 are roughly 50% (NIH 2011). The physician did not feel the duty to warn would apply from a legal or an ethical standpoint, but also felt a strong desire to make sure this unknown woman knew about the potential life changes necessary to minimize her risk of death. However, due to the legal concerns surrounding the case, the physician erred on the side of safety and followed the wishes of the proband's mother. He turned to the American Medical Association (AMA 2004) guidelines and found he had a minimal duty to inform the patient of her risk. He left her with the advice, "You should inform your entire family." --- Background The risk of death from some undetected forms of LQTS can be serious. All forms of LQTS have a death rate of approximately 9%, and LQT3 itself is the most lethal type of the disease (NIH 2011). For LQT3, the largest risk factors outside of general periods of rest are being an endurance athlete (resting heart rate under 60 bpm) and taking drugs which induce QT prolongation (ibid). The list of QT-prolonging drugs is vast, including most antihistamines and decongestants, diuretics, statins, antidepressants, and many common antibiotics (NIH). Generally, LQTS is treated through the use of beta blockers. However, where other forms of LQTS worsen when the heart rate increases, LQT3 becomes worse when the heartbeat slows. Therefore, LQT3 does not respond well to beta blockers. Patients with LQT3 can regulate cardiac ion concentrations by supplementing their diet with sodium and potassium, but the only medical intervention available is the use of an implantable cardiac defibrillator (ICD) at the onset of symptoms. Case law sets precedent to allow clinicians to warn parties outside the physician-patient relationship if a patient intends harm to himself or others. The duty to warn was established by the California Supreme Court's ruling in Tarasoff v. University of California.
The case was brought to the court by the Tarasoff family, whose daughter Tatiana had been stalked and murdered by a psychiatric patient at the University of California, Berkeley, Prosenjit Poddar. The court ruled that a clinician has a duty not only to her patients, but to society at large. If the patient intends to harm himself or others, confidentiality should be breached in order to protect the patient and anyone he might harm (17 Cal. 3d 425). That is, the harm incurred by breaching physician-patient confidentiality is less than the harm potentially incurred by the patient to himself or to society. Later case law is divided on other cases of duty to warn. Some states have statutes that require a physician to warn her patients' next of kin if the patient has HIV and intends to engage in unprotected sex or share needles (Worth et al. 2008). In Pate v. Threlkel, Heidi Pate sued her mother's physician because he did not warn Ms. Pate of her mother's hereditary medullary thyroid cancer (MEN). Like LQTS, MEN is inherited in an autosomal dominant pattern. Early diagnosis of MEN can allow for life-saving intervention, but in Ms. Pate's case, she was found to have advanced thyroid cancer three years after her mother's diagnosis. She filed suit, alleging that if the physician had warned her of her mother's genetic diagnosis early enough, Ms. Pate's disease progress could have been halted (Offit et al. 2004). Essentially, the court decided that the standard of care sometimes is written to the benefit of third parties (662 Fla.). However, the court also declared the duty of the physician fulfilled by warning the patient, not the family. While the duty to warn extends to third parties, the physician is not required to inform them directly. Two cases in New Jersey and New York, Safer v. Pack and Tenuto v. Lederle Laboratories, extend the duty to warn immediate family members of risky hereditary conditions and of services like vaccinations which may incur harm to unimmunized family members (90 NY2d 606). The duty to rescue is a concept developed in tort law. Duty to rescue exists in two situations: first, when one party creates a situation that is dangerous for another party; second, when a party has a "special relationship" to another, such as a parent to a child or spouses to each other (545 US 748). While case law does not extend the duty to rescue to siblings, it provides an interesting legal concept for the case. First, the mother may have created a situation that is dangerous for her sister: not informing the sister of a diagnosis that kills around one in ten of its sufferers. Second, her blood-relatedness seems to imply a special relationship at least similar to that of the fiduciary relationship owed by a physician to her patient. As a legal concept, duty to rescue has limited application, but when considered in the sense that one can owe a duty of rescue to a person of close blood-relatedness, it may have value in an ethical analysis. While some state courts extend a vague duty to warn immediate family members with potential risk, physicians and professional societies remain more cautious.
The AMA (2007), American Society of Human Genetics, and American Society of Clinical Oncology agree on some variation of the AMA policy: physicians should inform patients of circumstances under which confidentiality would be breached and "make themselves available to assist patients in communicating with relatives to discuss opportunities for counseling and testing, as appropriate," but duties are fulfilled by disclosing genetic results to the patient alone. Physicians tend to follow this rule: a survey of 800 practicing or formerly practicing geneticists showed that only 23% of the sample would be willing to disclose information to a patient's family even if the risk to the family member was high (Falk et al. 2003). LQTS inheritance probability is generally 50% (NIH 2011). In the case of violent psychiatric patients, terminal genetic conditions like Tay-Sachs or Huntington's, or even some HIV disclosure cases, doctors are often more aware of impending and largely definite harm than in genetic predispositions like LQTS. In a purely practical sense, a physician would need to parse out a number of different rules in order to determine whether a disclosure is necessary. The duty of care in this case would depend on the estimated risk to a given family member. Physicians naturally exercise caution when the patient is unwilling to disclose to at-risk relatives. The dichotomy between the law and actual practice indicates the difficulties inherent to decision-making about the disclosure of genetic dispositions. While physicians may not be required to actively lie or withhold information, as might be the case with exercising therapeutic privilege for a cancer diagnosis to a very optimistic but fragile patient, they can be inclined by the standard of care not to inform at-risk people. The physician can violate the basic respect for the autonomy of his patient to potentially save another life, or fulfill the minimal responsibility to his patient but allow the other life to hang in the balance. While the traditional normative ethical theories, such as principlism or utilitarianism, might be useful in less complex situations, there are clearly multiple conflicts of basic principles and parties here. Beauchamp and Childress (2001) write that autonomy serves as a "right, not a duty of patients." That is, the patient's right to make decisions about a treatment course must be protected by all parties, but autonomous decision-making is not demanded in all cases. Moreover, the two authors argue the professional's obligation to the patient is "respectful treatment in disclosing information and fostering autonomous decision-making" (ibid). This conception of autonomous action empowers the individual to make decisions and avoids medical parentalism. Balancing principles can be difficult without a view into the reasoning of both parties in a conflict. The mother's autonomy and trust in the physician-patient relationship is significantly diminished by disclosure, but a life is potentially saved. Physicians also possess a duty to warn in order to avoid deliberately allowing a harm otherwise preventable with genetic testing information. Yet probability remains a factor that renders principles difficult to use: the sister may not have the disease, may never present symptoms, or may die. Feminist and narrative methods remind the ethicist to find a degree of understanding with the perspective of the patient.
Both theories tend to focus on humanizing the parties involved in ethical dilemmas, rather than applying rigid rules or principles to a situation. On the other hand, narrative and feminist ethics tend to provide very vague or even no solutions to ethical dilemmas. Questions of a person's life story, standpoint, or autonomy in relation to others are still arguably best served by these methods. By obtaining a picture of the whole person, narrative ethics can shift the focus of disclosure dilemmas from violating rules or duties to changing perceptions. Alternative methods to traditional normative means supplement the principlist conception of autonomy with relational aspects, balancing interconnectedness rather than simple individual action. --- Emotional and relational issues The nature of the patient's emotions and relationships can determine the extent to which the patient is willing to disclose. At the outset of a genetic diagnosis, emotions tend to run high but wane with time. Aatre and Day (2011) document a number of emotional issues arising from inherited cardiovascular diseases, ranging from reassurance to outright fear. Interview studies have shown that genetic testing can leave a patient feeling devastated. One patient said, "I was thinking, what other genes are also defective? [. . .] I also wanted to take on a new identity" (Porz 2009). Genetic testing results can produce identity crises because they make people feel as if they are no longer self-governing. Patients report feeling "powerless, disoriented, confused" (Porz 2009). The body can now be seen to harbor dangerous genetic flaws or defects, and the patient is reminded strongly of their own mortality. The mother in the case study may have felt similarly adrift, and her quality of life may be adversely affected, further adding to questions about her future. Looking at the standpoint of the mother in the case study, she likely feels some degree of guilt over passing this disease on to her child. As a 20-year-old soccer player, her son might be denied the chance to continue playing his sport, or any contraindicated competitive sport, should he inform anyone of his diagnosis. Even without symptoms, a positive LQTS diagnosis is typically enough for a physician to recommend against competitive sports (Pelliccia et al. 2005). The mother may be transferring the role of the denier to herself, allowing herself to feel as if "her" disease has guaranteed the son's loss of autonomy. Since the proband was asymptomatic, the mother might feel as if there is no risk to her sister, that nothing is truly wrong with her son, and may even mistrust the diagnosis. She could therefore justify nondisclosure by the increasingly present reality of probabilities in her life, thinking, "My son had only a 50% chance of getting this from me, my daughter did not get it, and I only had a 50% chance of getting it from my mother." Taken together, the mother may feel that disclosure is unnecessary because her sister simply does not seem to be under any realistic risk. Another valuable consideration in the case of such significantly diminished autonomy is the idea of control (Aatre and Day 2011). Inherited diseases wrest the power over one's body from the individual and place it in the hands of chance. The establishment of control could be expressed through a number of outlets, including "self-education, maintaining privacy, and active participation in treatment decisions" (ibid).
Maintenance of privacy speaks to the case study, where the mother may be seeking to keep secrecy surrounding her condition. Secrecy can be a method of control, as it allows the patient to determine with whom she discusses the diagnosis. The emotional nature of disease can be a difficult subject to broach with people with whom one is uncomfortable, especially when the diagnosis is potentially life-threatening. While legal precedent argues for a duty to warn and the right to know, patients have their own perceptions of that right to know. A patient may feel that a genetic diagnosis is his or hers alone, not focusing on the importance of extending that diagnosis to his immediate family. The desire to obtain control in a situation of diminished autonomy can also be tied to the establishment of relational dynamics. Feminist ethics can be useful in examining how relational dynamics affect the situation. Noddings's care ethics provides a relatively simple definition of what variations on relationships exist in the case. For example, the physician-patient relationship would be described as a "caring-for" relationship, where the face-to-face encounters between the one-caring (physician) and cared-for (patient) create a direct relationship (2001). The indirect relationship between the physician and the patient's sister seems closer to "caring-about," which Noddings identifies as having a "benign neglect" (2002). However, caring-about is somewhat foundational, establishing the basics for caring-for and a general sense of social justice. The justice-as-fairness derived from caring-about can help explain why the physician feels conflicted over the disclosure case. His sense of justice tells him it is fair for the sister to know information that can affect her future, but directly violating the patient's autonomy seems a greater offense than indirectly and only potentially harming an unfamiliar outsider. With the knowledge granted by the mother's genetic test, the physician can prevent a potential harm. It is this sort of more egalitarian sense of weighed duties that causes problems for a physician. Either perspective, respect for the mother's autonomy or nonmaleficence towards her sister, places the burden on deciding in favor of a single party. The desire to create a fair, just reality for both people establishes the conflict and is arguably impossible to satisfy with an ethic that focuses purely on individuals and not on relational communities. Arguably, the mother is marginalizing her sister by denying her sister access to the reality of her genetic illness. This ties into the concept of causal relational autonomy, where an outside factor (the mother) reduces the autonomy of a moral agent (the sister). One formulation of this version of autonomy involves a theory of "significant options" available to an autonomous agent at the time of a decision (Brison 2000). That is, an agent must have a proper grasp of all external factors in order to have the options necessary for a decision. The sister may be acting in an entirely autonomous manner, but her decisions could be altered by the knowledge that she has a genetic illness. On one hand, an LQTS diagnosis might constrain her actions in a more significant way than not having a diagnosis would. However, the proband's mother has knowledge that prevents her sister from making a fully informed decision. The sister lacks all available options: for example, she could choose not to run a marathon because doing so might put her at risk of arrhythmia.
By knowingly withholding key information, the mother reduces her sister's ability to make choices about her lifestyle and the lifestyles of her children. Feminist theorist Annette Baier argues, "persons are essentially successors, heirs to other persons who formed and cared for them" (Baier 1985). That is, the patient's caring-for her sister influences how her sister can exercise autonomy. The sister may be an autonomous agent without knowledge of her genetic illness, but the mother has tools to allow her sister a deeper knowledge of the risks involved in her day-to-day activities. In essence, the broken social relationship between the proband's mother and her sister has reduced the sister's ability to make informed choices. Obviously, these considerations place strain on the principle of individual autonomy. However, it is arguably the focus on autonomy in modern medical ethics which creates the conflict in this case. Feminist theorists recognize that autonomy develops from a confluence of external influences, from personal relationships to the social framework a person inhabits. This context for autonomy reminds the ethicist that free decisions come from a personal narrative, influenced by the encounters and experiences of life. In essence, the autonomous individual cannot separate himself from the outside community in any way. While the case study may not merit a violation of autonomy through disclosure, it does remind the ethicist that medicine cannot always concern itself solely with the individual patient and professional. Other parties are almost invariably involved, be it the impoverished person who might be harmed by improper resource allocation or the sibling whose well-being is threatened by a nondisclosure. Moreover, human reason can be fallible, especially with regard to the future (Levy 2011). Perhaps a trajectory towards a more communitarian ethic, based in the relatedness of people through their interactions and social development, is needed. Medical ethics adapted libertarian concepts of individuality to protect patients from professionals who held complete power. In doing so, however, medicine tipped the balance too far by ignoring the possibility of constraining patient autonomy. The dynamic fostered by a communitarian ethic would ideally be one of support and understanding, where the medical team is responsible for providing care to a patient in interaction with that team. The current dynamic fosters much advice-giving on the part of the medical team and much decision-making for the patient, but rarely do the two meet as equals. Suggestions for care could be made by both parties and considered with medical expertise and patient values in mind, striking a balance between the patient's and the physician's empowerment. Rouven Porz attempts to adapt Monica Konrad's "kinship ethics" to situations similar to this one, arguing that the principles valued by medical ethics are insufficient for family members struggling with genetic data. One important principle Porz emphasizes is the idea of loyalty to family members and the relatedness of the human species. Genetic testing unites a person with a larger web of the "new genetic family," the sort of extended family network developed through awareness of a genetic disease (Porz 2009). "Genetic constitution," not blood relatedness, determines the interrelatedness of the genetic family (ibid).
Therefore, genetic disclosures can become an issue of loyalty between members of a community composed of more than private individuals or separate units of blood relatives. Altruism can be one way a family member fulfills this loyalty: the outright giving away of genetic information. However, for more distant genetic relatives, that sense of altruism may not be present. Kinship offers a second alternative: reciprocity. The reciprocal sharing relationship between two people provides a secondary outlet for genetic information. The narrative of genetic interconnection expands responsibilities of sharing to a larger community through a transformation of the personal narrative. Kinship theory removes the feeling of a patient's ownership of their genetic information by establishing the idea that a given mutation is not unique. Rather, the patient is part of a continuing family narrative, reaching into the past and potentially extending into the future. The debt owed to this family means that while the mother in the case study can withhold information about her treatment for Long QT, she cannot withhold information about the family having a genetic history which predisposes its members to LQTS. This conception of the narrative can provide the physician with a way to frame the situation for the mother. Consider, for example, the idea of the duty to rescue for someone with an understanding of kinship ethics. As one's genetic network expands, the duty to rescue can be owed to a number of genetic relations: family members at risk for inheriting a disease. Combined with the obligation of rescue or the strength of altruistic intentions, kinship ethics can become a valuable determining factor for disclosure practices. --- Suggestions for practice Knowledge of the different ways in which one's life story can be interpreted is useless without a way to inform the patient of these new concepts. Patient education is one of the most commonly used approaches for dealing with difficult issues in the doctor's office. Yet a study of patients with hereditary colorectal cancer, who used educational materials like letters and booklets to understand their disease, did not show significant differences in knowledge compared with a control group without the additional material (Gaff et al. 2005). Education after the fact may not be useful. Additionally, legal precedent and professional attitudes conflict. The principle of doctor-patient confidentiality presents an ethical reason not to disclose, and geneticists often feel bound by the limited code of the American Society of Human Genetics, but many are unaware of the professional code (Falk et al. 2003). Even those physicians with knowledge of professional codes report their duty is typically to inform the patient, and no other duty is required to next of kin (ibid). The general tendency to avoid disclosure points to a significant valuation of privacy and consent. Therefore, a novel approach for reopening communication might be the establishment of an alternative narrative. The concept of the kinship narrative has already shown such an alternative conception is possible. However, the new narrative is only valuable if it can be used to facilitate communication between at-risk relatives. If providing educational material fails the patient, then perhaps perceptual or behavioral changes succeed. Providing this sort of perceptual change can seem difficult for one physician to accomplish.
The increasing awareness of fragmented care in the medical field has led to much pressure on the physician to improve her practice by becoming a sort of paragon of morality, competence, and altruism, a Renaissance man operating with ever-limited time for each patient. The duty to contact and warn an estranged relative, as in the case study, might be seen as an example of this. Steel (2009) presents a similar example: the patient does not want to disclose a life-threatening diagnosis to his estranged cousin in Australia. The physician obviously cannot spend the time finding contact information for this remote cousin without cooperation from the patient. Even in less drastic situations, the physician needs another party to carry some of the burden. A practical suggestion would be to place responsibility in the hands of other health care services. A patient struggling with the emotions of a genetic diagnosis or disclosure can be referred to therapeutic counseling. There, a professional can provide the necessary tools to explore the issues the patient may have with both the disease and the relationship with her sister. Obviously, therapy sessions require that a patient accept the idea of going to therapy as a net good. However, counseling has been offered for an increasing number of conditions which might require significant lifestyle adaptation for a patient. There is no reason to avoid working through the emotions of someone with a difficult diagnosis, no matter what form that diagnosis takes. If counseling should succeed in making the patient more comfortable with disclosure, then the geneticist can go forward with referring at-risk family members. A second route for a physician might be an outright breach of confidentiality. Concerns for autonomy and the physician-patient relationship mean this should be a last resort reserved for extreme situations, but a threshold of likelihoods across which disclosure would be permissible can be useful. In any case, respect for autonomy means that a significant burden of proof is placed upon a geneticist breaching confidentiality and that the patient must be informed of a last-resort policy before any testing occurs, in accordance with the AMA's policy recommendations. This decision-making process does not come without its risks. Since genes often convey only probabilities, decision-making cannot be reduced to seeing whether a gene is present or not. For LQT3 patients, the presence of the gene only conveys a probability of passing it along to descendants and an additional probability of cardiac issues. The developing theory of epigenetics offers hope that the field will steer away from reductionism by emphasizing the effects of the environment and other non-gene directors of genetic expression. However, in making decisions to weigh potential inheritance and potential harm, physicians need to be careful that they rely on realistic risk and not simply knowledge that the gene is present in a patient. Not all genes in a given patient will penetrate to all family members, and even penetrant genes may not be expressed or produce symptoms. To force the mother to disclose simply because the LQT3 gene was present in her body would border on discrimination. Adequate weighing of risk and benefit must come before a disclosure decision.
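As an illustration of such weighing (using only the rough figures already cited in this paper, not clinical data), suppose the sister's chance of carrying the familial mutation under autosomal dominant inheritance is about 50%, and the overall LQTS death rate is approximately 9%. The unconditional probability of the harm the physician hopes to prevent is then on the order of 0.5 × 0.09 ≈ 0.045, or roughly a 4.5% risk, before accounting for incomplete penetrance and the limited interventions available. A figure of this magnitude sits well below the near-certain harms contemplated in cases like Tarasoff, which is precisely why a threshold-based criterion, rather than the mere presence of the gene, is needed.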
Disclosures might be best when the following criteria are met: (1) the at-risk family member can be contacted; (2) the illness has a high chance of inheritance (greater than or equal to 50%); (3) the illness will eventually be serious and life-threatening; (4) interventions (treatment or lifestyle changes) can cure or significantly reduce the effects of the disease; and (5) the patient is adamant with regard to non-disclosure. These considerations outweigh the potential harm to the proband's autonomy because a significant preventable harm can be overcome by the violation of confidentiality, similar to the legal duty to warn enshrined after Tarasoff. On one hand, the proband might be denied the sense of control he desires. However, the uninformed at-risk party would be denied knowledge which could prevent greater losses, such as preventable debilitation or death. If applied to the case under analysis, such criteria would likely not be enough to justify a disclosure because the likelihood of death is relatively low. Moreover, no intervention short of an ICD could affect the disease course. Given that ICDs are generally only implanted at the onset of symptoms, the most that could be done for the sister would be reducing participation in strenuous competitive sports and perhaps changing her diet. There exists little reason to breach confidentiality here, but other cases would certainly merit this saving grace. What, then, can be done for the mother? Certainly counseling sessions can be offered, as her life has changed drastically and she seems to be struggling with family connectedness. Time may be the only way to reach a reasonable disclosure; however, unilateral decision-making on the part of the physician would represent both a gross violation of patient rights and a discriminatory act. The sister in the case might act differently if she knew she suffered from LQTS, and the focus on autonomy in medical practice neglects that potentiality by valuing only the decision-making of the person in front of the doctor. Yet narrative and relational methods rely on just this sort of connection by recognizing the value of interpersonal effects on decision-making. While a libertarian ethic of medicine protects the individual patient from the individual practitioner, it fails to reflect on the full scope of decisions and harms caused to outside parties by overvaluing singular moral agents. Principlism has succeeded in providing medical ethics with a basis around which to develop patient rights but has arguably failed to adequately ensure a humanistic dialogue between physician and patient by overemphasizing autonomy. However, principlism need not be discarded, as it offers helpful points on which case decisions can be made. Rather, it needs to be supplemented with a sense of humanity and a respect for the life narrative of the patient and physician, reminding both of their indebtedness to society and to those who helped them weave those narratives. Autonomy should not be considered "first" among equals, but rather one of many goals towards which good medical practice strives. People are not merely gaseous molecules, sometimes brushing past each other but otherwise on their own. Every person interacts frequently with the outside world, and the decisions made by a given person can have implications for many others. This kinship indicates that while autonomy might be a valuable political concept, it is neither a psychological nor a social one.
People, whether they are professionals or patients, form a network of supports and constraints. Decisions made "autonomously" often echo through this network, changing the circumstances for other people. --- Abbreviations LQT3: Long QT Syndrome type 3; TdP: Torsades de Pointes; LQTS: Long QT Syndrome; AMA: American Medical Association; NIH: National Institutes of Health; ICD: Implantable Cardiac Defibrillator; MEN: Multiple Endocrine Neoplasia/Medullary Thyroid Cancer. --- Competing interests The author declares that there are no competing interests.
Objectives People aged 16-24 are more likely than other age groups to acquire sexually transmitted infections (STI). Safetxt was a randomised controlled trial of a theory-based digital health intervention to reduce STIs among 16-24 year-old people in the UK. We report results of qualitative research regarding participants' perceptions and experiences of the intervention and trial participation. Design Qualitative thematic analysis, following a critical realist paradigm, of written open feedback comments provided in the 12-month follow-up questionnaire and of semistructured interviews. Setting Safetxt trial participants were recruited from UK sexual health clinics. Participants Trial inclusion criteria: people aged 16-24 diagnosed with or treated for chlamydia, gonorrhoea or non-specific urethritis. Optional open feedback provided by 3526 of 6248 safetxt participants at 12 months and interviews with a purposive sample of 18 participants after the trial. Results We summarise and report results in seven broad themes. According to recipients, the safetxt intervention increased awareness of the importance of avoiding STIs and ways to prevent them. Participants reported improved confidence, agency, sexual well-being and communication about sexual health with partners, friends and family. Recipients attributed increased condom use, increased STI testing after (rather than before) sex with new partners, and more confident partner notification to the intervention. Recipients described a reduced sense of isolation and stigma in having an STI. Control group participants reported that having had an STI and receiving control texts asking them to report any changes in contact details acted as reminders to use condoms and get tested. We also summarise participant recommendations for future interventions and studies. Conclusions While control group participants reported precautionary behaviours were 'triggered' by trial participation, intervention recipients reported additional benefits of the intervention in increasing precautionary behaviours and in broader aspects of sexual health such as confidence, communication, emotional well-being and agency. Trial registration ISRCTN registry ISRCTN64390461.
INTRODUCTION Sexual and reproductive health is defined by WHO as 'a state of physical, emotional, mental and social well-being in relation to all aspects of sexuality and reproduction, not merely absence of disease, dysfunction or infirmity'. 1 In terms of sexually transmitted infections (STIs), younger people aged 16-24 bear the heaviest burden of chlamydia and gonorrhoea, with long-term adverse health effects including ectopic pregnancy and subfertility. [2][3][4] Inequalities in sexual health persist; STIs are positively associated with lower educational levels and living in more deprived areas. 2 5-7 High STI rates among young people also reflect broader aspects of poor sexual health, such as lack of knowledge, skills or confidence in how to carry out safer sex behaviours and how to communicate with partners about sex and desired precautions. 8

--- STRENGTHS AND LIMITATIONS OF THIS STUDY ⇒ Qualitative research has an important role in gaining greater in-depth insight and complementing data of randomised controlled trials, especially if there are unanticipated results, as in the case of the safetxt trial. ⇒ Two sexual and reproductive health researchers not involved in the design and implementation of the safetxt trial independently analysed 3526 open feedback comments from trial participants and conducted 18 semistructured interviews. ⇒ Obtaining results from different sources, including qualitative data from the open feedback comments and interviews reported here, in addition to the quantitative trial data, allowed for triangulation of results. ⇒ Limitations are that many of these optional open feedback comments were only brief, and that we had to end the interview study slightly earlier due to the COVID-19 pandemic.

We developed the safetxt intervention delivered by text message to reduce STI infection by increasing condom use, partner notification and STI testing before sex with new partners. 9 The intervention development was informed by behaviour change theory, 9 including the 'capability, opportunity and motivation model of behaviour' (COM). 10 This model is incorporated into the comprehensive 'behaviour change wheel' model, which aims to capture the full range of intervention functions involved in behaviour change; these include education, persuasion, environmental restructuring (encouraging people to change their environment to support the behaviour), training and enablement. Each intervention function can be implemented by a wide range of evidence-based behaviour change techniques. 11 In the case of sexual behaviour, knowledge, skills, beliefs, self-efficacy, and social and interpersonal influences have important effects on COM. 8 12 Our intervention aimed to influence these factors to reduce sexual risk behaviour and encourage STI preventive behaviour. The intervention text messages were developed based on the content of effective face-to-face safer sex interventions targeting condom use, [13][14][15] the factors known to influence safer sex behaviours 16 and the views of over 200 people aged 16-24 collected in focus groups, a questionnaire and qualitative interviews. 9 The latter included telephone interviews conducted with 16 young people 2-3 weeks after enrolling in a feasibility trial in 2013. 17 The findings were used to adapt the intervention. Intervention text messages were sent with decreasing frequency over the period of 12 months (online supplemental file 1).
18 Our randomised controlled trial to establish the effects of the intervention on STI, condom use, partner notification and STI testing before sex with new partners was conducted among 6250 people aged 16-24 diagnosed at UK sexual health clinics with chlamydia, gonorrhoea or non-specific urethritis. 19 20 Control group participants received a monthly untailored text message asking for information about changes in postal or email addresses. The safetxt intervention did not reduce STIs; there were slightly more infections in the intervention group, with 22.2% (693/3123) versus 20.3% (633/3125) in the control group (OR 1.13, 95% CI 0.98 to 1.31). 19 20 There were some increases in self-reported precautionary behaviours, such as condom use at last sex (OR 1.14, 95% CI 1.01 to 1.28). 19 20 Although our intervention did not target sexual partnerships, we assessed at 1-year follow-up the proportion of people who had two or more partners since joining the trial and found that it was slightly higher in the intervention versus the control group (56.9% vs 54.8%, OR 1.11, 95% CI 1.00 to 1.24). This result, however, was not statistically significant (p=0.06) but could have contributed to the unexpected trial outcome. Other quantitative results, including on intermediate outcomes, did not clarify why a statistically significant effect was shown for the condom use outcome but not for the biological trial outcome. 19 20 To shed further light on this, we analysed and triangulated qualitative data from two sources: open feedback from the last follow-up questionnaires at the end of the trial (at 12 months) 20 and semistructured interviews conducted after 12 months. In this paper, we present and discuss qualitative data on participant perceptions of the safetxt intervention and of 12 months of trial participation, with a view to exploring for whom, how and why the intervention worked or not and what improvements could be made in the future. --- METHODS We conducted qualitative research (as part of a mixed-methods approach integrated through an advanced intervention framework with embedded methods and narrative staged reporting 21 ), including the analysis of open feedback comments collected in the 12-month questionnaire 20 of the safetxt trial and semistructured qualitative interviews with participants after they had completed their involvement in the trial. The research team members are mixed-methods researchers within the areas of sexual and reproductive health. During the research, we followed a 'critical realist' (CR) paradigm, 22 23 as in terms of ontology, epistemology and methodology we position ourselves in the middle of a continuum between positivism, naïve realism and objectivism [23][24][25] on the one hand and interpretivism, relativism and constructionism 23 24 on the other. According to CR, there is a reality that exists independent of our thoughts about it, and while we can become more confident about what exists by observing, existence itself is not dependent on observation. CR also sees the social world as layered, complex, an open system and characterised by change. Critical realists often try to answer the question 'what works for whom, when and why?' and are typically pragmatic in their approach to methodology and methods. 23 26 Below, we provide details on the two data sources used for our qualitative analysis.
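As a brief aside before turning to the data sources: the crude (unadjusted) odds ratio behind the headline STI figures above can be recomputed directly from the reported counts. A minimal sketch in base R follows; note this is a rough Wald check only, and the published OR 1.13 (95% CI 0.98 to 1.31) comes from the trial's own adjusted analysis, so the numbers differ slightly.

# Crude odds ratio and Wald 95% CI from the reported STI counts
# (a rough check only; the trial's published, adjusted estimate differs slightly)
sti <- matrix(c(693, 3123 - 693,   # intervention: infected, not infected
                633, 3125 - 633),  # control: infected, not infected
              nrow = 2, byrow = TRUE)
or <- (sti[1, 1] / sti[1, 2]) / (sti[2, 1] / sti[2, 2])
se <- sqrt(sum(1 / sti))                              # standard error of log(OR)
ci <- exp(log(or) + c(-1.96, 1.96) * se)
round(c(OR = or, lower = ci[1], upper = ci[2]), 2)    # roughly 1.12 (0.99 to 1.27)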
--- Data source 1: free-text comments The final page of the 12-month questionnaire given to all trial participants (who had provided written informed consent on enrolment) included an open-ended question: 'Did anything good or bad happen as a result of being involved in the study or receiving the text messages? Please describe'. This question was followed by a blank space that participants had the option of completing themselves. Two researchers (AG and SB), who had not been involved in the design, implementation and quantitative evaluation of the intervention, independently coded the free-text comments and categorised data by theme, using Excel 2019 and NVivo v.12 respectively. AG and SB initially took a purposive sample of 12% (n=390) of free-text comments. They ensured that participants from different gender and sexuality groups were represented (by adding random samples of comments from each group) and included all comments from participants reporting that someone else had read their messages or reporting partner violence. This was to ensure that the feedback from participants who might have experienced unforeseen intervention side effects was coded in detail. AG and SB then independently coded these comments inductively line-by-line, considering all content (almost all of it was relevant to our research question). They then collated codes into potential themes and compared these to check for consistency of analysis and to reduce the risk of imposing own assumptions and predefined theories onto participants' narratives. Subsequently, AG and SB independently analysed all remaining free-text comments, thereby adding newly generated themes, reviewing and naming themes. AG and SB then compared their findings again (which were consistent) and discussed them within the team. The findings were compared with data from the semi-structured interviews (data source 2). --- Data source 2: semistructured interviews We purposively recruited from safetxt participants based on trial allocation and sociodemographic characteristics (age, sexuality, ethnicity, index of multiple deprivation) to encompass a variety of experiences. Eligible were participants who indicated during trial follow-up that they agreed to be contacted for further research. We sent text messages about the interview study to those who had recently (<6 months) completed trial follow-up. AG and SB then approached and provided verbal and written information to those who were interested in the study. After receiving written informed consent, SB and AG conducted interviews by video conferencing (including Teams, Zoom or WhatsApp) or telephone. We initially focused on the recruitment of intervention participants and found that after 14 interviews data saturation for key themes relevant to our research question regarding the intervention had been reached (based on reflective notes, concurrent data analysis, triangulation with results of open feedback analysis and team discussions). After completing four interviews with control group participants, we had to stop study activities due to the COVID-19 pandemic and related personal circumstances and were unable to resume the work at a later stage as funding could not be extended. Interviews lasted between 30 and 90 min (average about 60 min). The interviewers (AG and SB, both female) introduced themselves as public health researchers who had not been involved in the design of the safetxt trial.
Both kept reflective journals throughout the research process and engaged in self-reflexivity not only during interviews, but also during analysis, to recognise and avoid imposing own assumptions and predefined theories onto participants' narratives. The interviews followed a semistructured topic guide, which aimed to explore participants' experiences regarding trial participation, whether or not they had been able to carry out the behaviours targeted by the intervention, and (for those from the intervention group) the intervention and how and why the messages did or did not help. We first explored which intervention messages participants recalled without being prompted. We then showed, sent or read to participants some of the messages and asked which, if any, they found particularly helpful or not. We also asked participants to make suggestions for improvements of the interventions. (Topic guides and example intervention messages in online supplemental file 2). New topics not included in the guide were further explored during subsequent interviews. These topics and summaries of reflective field notes were also discussed with RF and/or CF during team meetings. After completing the interviews, participants were offered a £20 voucher as a thank you for their time. --- Analysis Interviews were audiorecorded, transcribed verbatim by a professional transcription service (bound to a confidentiality agreement), and reviewed for anonymity and accuracy of transcription by SB and AG while listening to the audiorecordings. This was also part of the first step of the thematic analysis approach that we used, including (1) familiarising ourselves with the data, (2) generating initial codes, (3) searching for themes, (4) defining and naming themes and (5) producing the report. 27 This process was iterative, as analyses were conducted alongside data collection. During the early stages, SB and AG first independently developed thematic codes from the same four interview transcripts, two of which were also coded by RF, to ensure consistency of coding. Thereafter, SB and AG independently coded their interview transcripts and categorised data by theme using NVivo v.12 and Microsoft Word 2019, respectively. At the later stages of thematic analysis, Microsoft Word 2019 was used to integrate and triangulate themes developed by both researchers from both data sources, based on comparisons and team discussions. During analysis meetings with the research team, results from open feedback comments (source 1) and interviews (source 2) were triangulated with quantitative trial data (including primary, secondary and intermediate outcomes) 19 20 and data from telephone interviews conducted as part of the 2013 feasibility trial 2-3 weeks after starting messages, 17 looking for consistencies and inconsistencies across the different data sources and searching for deviant cases. --- Patient and public involvement Patients and members of the public were involved in all phases of the safetxt intervention development and trial, including part of the qualitative components of the safetxt evaluation reported here. Prior to development of the safetxt intervention, possible safer sex interventions were discussed with young people in five discussion groups (25 participants). Subsequently, patients who participated in formal focus group discussions helped to design the content of the intervention, 9 and a patient representative was included in the trial steering committee.
In addition, 14 patient representatives from the King's College Hospital Sexual and Reproductive Health user group helped design the patient information, consent and follow-up procedures and all trial questionnaires, including the open feedback question. Due to time restrictions, we did not seek help from patients for the design and pilot-testing of the interview topic guides, but instead gained input from four young colleagues. After the interviews, most participants indicated that they would be happy to help with the dissemination of results once published. --- RESULTS Fifty-six per cent (n=3526/6248; intervention: n=1745, control: n=1781) of participants provided comments in the open feedback section of the 12-month questionnaire, 72% of those who completed a 12-month questionnaire (table 1). Participants across all sociodemographic backgrounds provided open feedback comments, and the characteristics of respondents were similar to the characteristics of safetxt trial participants. 19 About 27% (intervention: 24%, control: 29%) of those who provided open feedback on whether anything good or bad had happened (see the Methods section for the exact question) merely stated 'no', 'n/a', 'don't know', 'nothing', 'neutral', 'no difference' or a brief statement saying either that they were unsure or that they did not notice any change as a result of participating in the study, for example, 'I carried on as usual, nothing good or bad happened'. A further 3% of comments from control group participants merely stated that they were in the control group, did not receive any intervention messages, or similar. The remaining comments (intervention: 76%, control: 70%) provided a free-text response beyond the aforementioned statements that was generally only a few sentences long, with some participants providing longer feedback (8% of intervention and 5% of control group comments were >50 words long). We completed 18 interviews between February and May 2020. Respondent characteristics are in table 2. Open feedback was overwhelmingly positive both about the intervention text messages and about being in the trial. Many intervention and control group participants commented on the usefulness and convenience of having an STI test kit sent to their home for primary outcome assessment. Intervention group participants commented positively on the tone of the intervention text messages, finding them friendly, reassuring, helpful and written in a nonjudgmental manner. Participants also found that mobile phone delivery was a trusted, appropriate and convenient way to access information. Conversely, a few people in open feedback had concerns about keeping their messages private or reported that messages were annoying, and many in both the intervention and control arms indicated that there was no change and nothing good or bad had happened as a result of being in the study. Findings from open feedback and the interviews were consistent, but interviews allowed us to gain greater insight into themes that had been generated during analysis of open feedback comments. Results from both sources are summarised by major theme below, with example quotes provided in box 1 (intervention group) and box 2 (control group). --- Knowledge and awareness of safer sex Intervention group participants reported the messages were 'clear', 'concise' and 'informative'.
Participants reported impact on their general knowledge of practising safer sex, including new ways to protect themselves, how STIs are contracted, the risks and consequences of unprotected sex and the need to go for regular testing. Some participants appreciated intervention messages as a 'proper' source of information with links to trustworthy internet sites that clarified which information from other less reliable sources was correct. A few participants in the open feedback reported messages only said things they already knew. Many intervention participants, but also some control group participants, reported increased awareness of the importance of safer sex behaviours. Control group participants were 'indirectly' reminded of the importance of safer sex, because the regular texts reminded them of their previous STI and/or because trial participation raised their awareness and motivation. This greater awareness reportedly influenced some intervention and control group participants in being more 'careful' in their choice of sexual partners and/or having less casual sex. --- Confidence, agency, well-being and communication Intervention group participants reported increased confidence and agency in asserting their needs, for example, greater agency in only having sex when they wanted to. Some participants reported benefits in their sexual well-being such as 'feeling positive' about their sex lives, respecting their body more or greater sexual pleasure through feeling more in control of their sex lives. In both the intervention and control group, sexual health was reported to be a 'difficult' and 'taboo' subject to talk about. Sharing intervention text messages with partners, friends, housemates and siblings was a catalyst for facilitating open and honest dialogues about sexual health and helped many participants feel less embarrassed raising the topic. Showing partners messages was also used to reinforce requests to use condoms. One person reported the intervention gave them the confidence to start a new relationship after their STI. --- Changes in condom use Many intervention, but also some control group, respondents reported having been 'more cautious' after receiving messages and that texts were good reminders to use condoms. Several participants explicitly reported increased condom use, especially with casual or new partners. Intervention group participants attributed this to increases in their confidence and knowledge of how to stay protected from STIs as well as greater confidence in being able to bring up the topic of condom use. Practical tips, including how to prevent condoms from breaking or slipping off, had been particularly helpful. One participant, however, said it would be helpful to have more advice on what to do if a partner refuses to use a condom. Those who used condoms did not necessarily use them on every occasion. Reportedly, the messages also led some to encourage their peers to use protection. --- Effects on partner notification Participants in the intervention group commonly reported that the text messages enabled them to speak more confidently (calmly and sooner) to their partners about their infection, impacting on how they told partners. Intervention content explaining that chlamydia was common and easy to treat helped facilitate conversations with partners about infection. This content also reduced concerns about getting chlamydia. There were reports that the messages motivated some participants to tell partners.
Some stated that the text message examples they received arrived after they had notified partners, and regretted that they had not received them earlier. Some reported only learning from the messages that the clinic could have informed their partner. Two comments referred to unknown partner contact details. --- Increased STI testing Participants from both groups reported they sought further STI testing as a result of being in the study. Messages made some participants feel it was 'Ok to get tested' and directly or indirectly reminded intervention and control group participants to test or test more frequently than they normally would have. Participants reported going for testing after having sex with a new partner (none mentioned testing before first sex). A few intervention participants reported frequent STI testing rather than condom use as a way of managing STI risk and to 'keep track of partners'.

--- Box 1 (continued): intervention group extracts 'I now have regular texts and have only not had sexual intercourse without a condom once, which was as a result of me and my partner both having alcohol.' (21, WSM, OF) 'No, not much has changed in regards to me because I like to, I consider myself quite a safe person so I do wear protection where I can.' (26, MSM, I) 'I have been better at using a condom-but this may be just because of getting chlamydia last year, not because of the texts.' (18, WSM, intervention, OF) 'I look back now and I realise that it definitely was a form of definitely like some sort of self-harming, of like I was just, the only way, you know, I'd have (unprotected) sex with so many people, to make myself feel bad about myself almost.(…)I just didn't really care, I had no self-respect, I didn't really care about myself, my body really, … so I think the study definitely made me realise that' (23, WSM, I) Effects on partner notification 'I think I would have gone a very different way about doing it (notifying partner), I think I would have sort of hid it away and taken, it would have taken me a lot longer to do it because I would have been embarrassed, but the text messages, like I said, they really do make you realise that you're not the only one in this situation, so…' (18, WSM, I) 'I think there was one ….that made me realise that actually it's normal to not want to tell someone, and it's normal to feel really uncomfortable about it, but actually I need to tell them, and(…)the texts inspired me to reach out to my friends, and then my friends help me create a message that I then sent to people, so yeah.' (23, WSM, I) 'The text study was really helpful and insightful it helped me to be able to tell my sexual partner that I had been given a positive result for chlamydia and it helped me understand how to speak to him and tell him.' (23, WSM, intervention, OF) 'I think where it gives examples of how to tell, I think that helps, because … you don't really know how to put it, or how to start it, … a lot of people are actually quite embarrassed or they're scared of what the other person might say or they just don't know what to say so some people actually leave it, which is how other people get infected' (20, WSM, I) 'I remember thinking like 'oh this is so annoying that I got it now and not like on the day when I actually had to like tell them'.(…)Because I was thinking 'oh I've really like gone through all that like internal stress of being like how do I tell him?'
and all that stuff like before and like telling him and then getting the text after.' (25, WSM, I) Increased STI testing 'I'd say the text messages made me get checked more often but I would have got checked anyway, but probably not as much as I did without the text messages.' (21, WSMW, I) 'I got tested sooner after having had unprotected sex than I probably would have done had I not received a safer sex message text.' (21, WSM, OF) '…the texts definitely were probably part of it, but I think just sort of the maturity side of it, and sort of getting in a better frame of mind where I could ask somebody, after I had sex with them, when were you last tested, because I really didn't want to get it again.' (21, WSM, I) Reduction of isolation and stigma '…very helpful to feel less like you were the only one.' (21, MSW, OF) 'it was just reassuring to know that it wasn't just me getting them…' (21, WSM, I) 'I think having regular texts written in the way that they were, it's really sort of like reassuring that you're not alone.(…)I'm not ashamed of my sexual health anymore, I don't think, I think before I was, I sort of thought that STIs were something to be ashamed of, but now definitely I know that they are more common than I thought they were, and they can be treated, easier than I thought they could be as well.' (18, WSM, I) 'Good for reminding you to keep getting tested and removes the stigma.' (24, WSM, OF) 'Thanks to studies like these, there is less shame relating to STI testing so I received the help I needed to get right away.' (23 years, WSM, OF) '…when you have that sort of thought at the back of your mind that it could go wrong, what if it does go wrong, I'm scared, it's, you feel sort of alone, but then with the text messages it really did help me sort of come out of that corner… I think it's … the way they were worded, it wasn't sort of, they weren't ordering me to do anything, they weren't demanding us to do anything, they were just suggesting, they were just informing, and I think that's a lot better than being sort of too firm with things.' (18, WSM, I) '…the stigma is still very much there so it's so easy to feel like 'oh I'm the only one, I can't tell anyone, I don't want people to think… because it could be one time but people assume just you're very promiscuous to get an STI… So I think it's really good… it's not just the physical treatment of it in regards to your body but like the mental treatment of it. It's like it's a common bacterial infection, just saying the word common makes people feel less alone so it could help their emotional wellbeing as well.' (26

For some participants, the STI home testing kit was perceived as a central positive aspect of the study, and knowing that another 'screening' test would be done made one control group participant 'more inclined to use condoms'. --- Reduction of isolation and stigma Many intervention participants said that taking part in the study reassured them and reduced their feeling of being 'the only one', a common feeling after being diagnosed with an STI. Participants frequently commented on the reduction of 'stigma', 'shame' and feeling 'less embarrassed' about having had an STI, which was perceived to be reassuring and to have benefits for emotional and mental well-being. In addition, learning that STIs could be easily treated reportedly reassured participants.
Some control group participants also noted feeling 'less alone' as they 'belonged to a group of people that have had chlamydia or gonorrhoea', and one reported that being in the study reduced their embarrassment about having an STI. Another control group participant, however, emphasised that the study made her 'feel less alone', but not 'feel less ashamed', and she would have liked to be in the group that received texts with support and information. --- STI diagnosis and trial participation effects Some participants from both groups reported that changes in their behaviour were a consequence of having an STI rather than receiving intervention messages. Additionally, in open feedback many in the control group commented that participating in the study enabled them to make a commitment to changing their behaviour, and a few said that it prompted them to seek help, for example, about abusive relationships. As mentioned in the relevant sections above, the control group texts, simply about trial participation, had reminded many to adopt precautions such as using condoms, STI testing and asking partners about their last test for STIs. A few participants mentioned that they joined safetxt when they had been at a 'turning point', and would have changed their behaviour anyway, but appreciated the safetxt support during this time of change. One control group participant, who had reportedly meanwhile changed due to the STI and becoming more mature, thought that safetxt support, if targeted at younger people, could help them avoid having to go through the same 'quite big stressful event' of having an STI.

--- Box 2: Control group extracts illustrating perceived impact of having a sexually transmitted infection and trial participation 'I'm very happy to have participated and hope that you get some conclusive results.' (24, MSM, OF) 'I have been a lot more insistent of using condoms during sex. This could have been due to contracting chlamydia last summer which was treated and not wanting to get it again. I was part of the placebo group in the study but still got a text every month or so to keep my details updated. This made me thought of the study so could have reminded me anyway.' (21, WSM, OF) 'I guess I've been more inclined to use condoms and have less unprotected sex as a screening was always in the back of my mind.' (24, MSM, OF) 'Made me more aware of my sexual health by receiving the texts, it was almost like a reminder as sometimes sexual health can be at the back of your mind whereas when receiving the texts it was like a reminder and kept it at the forefront of your mind' (18, WSM, OF) '…receiving these texts made me feel good about taking steps towards being more aware and a part of something bigger that helped me be a better adult' (18, MSM, OF) 'I didn't receive many messages. However, I became more conscious of my sexual health. I take precautions when I remember although, I haven't always used anything. I have been more conscious of sleeping with new people I don't know that well and have avoided this.' (19, WSM, OF) 'I was sort of more wary about who I slept with, it's like I didn't sleep with as many people that I was before, I don't know if that was just because of my age or if… I don't know.(…) Like I went through a bit of a rough patch when I was younger and I feel like that sort of did include sleeping around a bit more and then I came out of it(…) and I was more like, I didn't want to just sleep with anyone, I was sort of more picky.(…) I feel like it did play a little role [joining the study], like agreeing to be part of the safetxt I think was like a turning point as well in its own right.' (21, WSM, I) 'Through the whole process of being diagnosed with an STI has made me consider my life choices. … I am reluctant to have a 'one-night stand' as I have previously experienced the consequences of unprotected sex with unfamiliar people. Overall, I have thought more about my actions, not so much as a result of the texts I receive, but instead because of what has happened with my health.' (19, MSW, OF) 'I was made far more aware of how unsafe I was being, when in the past I would make more decisions in the moment which were unsafe and unthoughtful about the consequences. Having regular texts made me far more conscious about safe sex-it was a great reminder; as it is easy to forget.' (19, WSM, OF) '… I was in the group that didn't receive texts about safe sex, however just being involved in the study and completing the questionnaires gave me a greater awareness of the benefits of practicing safe sex even after the shock from my initial diagnosis wore off…' (18, WSM, OF) 'Made me more cautious of who to sleep with. Due to constant reminders.' (19, MSW, OF) 'The only kind of messages I was receiving were the ones about confirming my address and contact information. In spite of that, I was still more aware to be cautious and ask people if they were getting tested etc.' (

--- Recommendations for future interventions Recipients felt the intervention was especially helpful for younger people, such as those in late secondary school or the first year after school (online supplemental file 3). Many interview participants and free-text comments reported that not enough was taught in schools and that the texts were much more useful than what they were taught at school. Participants mentioned additional topics that would be helpful to include, such as peer pressure to have sex, further content on dealing with people who do not want to use protection, and pleasurable aspects of condom use; a few women who have sex with men and women requested more information on safer sex between women, and two men who have sex with men (MSM) wanted the intervention to cover 'chem sex' (stimulant-enhanced and prolonged 'no-strings' sexual sessions between MSM connecting through apps 28 ). A few participants suggested further personalisation of safetxt messages and an option to choose from a wider range of topics from the outset (in addition to the 'text 2 to hear more' option). Some requested better mental health support to explore why people have unprotected sex. Suggestions from participants for changes in the timing and frequency of messages often focused on having some form of control over message frequency, with some wanting fewer messages (especially at the beginning) and others more (especially towards the end). Although many participants said that certain intervention message content would 'stick' with them, some would have liked to continue receiving texts, as they served as reminders. --- DISCUSSION According to recipients, the safetxt intervention increased awareness of the importance of avoiding STIs and related knowledge about ways to prevent them. Participants reported improved confidence, agency, sexual well-being and communication about sexual health with partners, friends and family members. They attributed increases in condom use, STI testing, more confident partner notification and (for a few) disclosure of diagnoses to these improvements.
There was a reduced sense of isolation, stigma, shame and embarrassment about having an STI, which reportedly reassured some participants and improved their emotional well-being. Participants from both the intervention and control group reported that having an STI influenced their safer sex behaviours. Control group participants reported that taking part in the study had influenced their commitment to safer sex behaviours. The control group text message about trial participation reminded many about the importance of safer sex and acted as a trigger for STI testing and condom use. Our qualitative analyses of interviews and open feedback are mainly consistent with the trial results. However, recipients' reports suggest larger differences in behaviour than were demonstrated in the trial results. Possible reasons include social desirability bias, an incorrect attribution of changes in behaviour to the intervention rather than the experience of STI, and a strong Hawthorne effect, 29 including the trial participation messages sent to the control group reportedly acting as a prompt for safer sex behaviours. Our findings suggest that young people felt positive impacts of the safetxt intervention on their sexual and reproductive well-being. These benefits include increases in confidence, agency, communication and precautionary behaviours. 1 The perceived value of safetxt from recipients' accounts accords with the trial results showing higher condom use at 12 months. The 'spill-over' effect resulting from participants reportedly encouraging their peers to use condoms was not quantitatively assessed during the trial. Recipients' accounts that the main perceived benefit of the intervention was in 'how' to tell partners rather than 'whether' to tell them about their STI accord with the only slightly higher levels of partner notification in the intervention group. These perceived benefits are not in line with the public health impact of safetxt, as the trial found that STIs were not reduced, with slightly more infections in the intervention group. The trial results suggested (although not statistically significant) that there were slightly more participants with two or more partners and a new partner in the intervention group compared with the control group during the course of the trial (altering partnerships was not an intervention aim). The findings from this study, involving interviews and feedback obtained after the 12-month intervention of the safetxt trial, were in keeping with the findings from telephone interviews conducted in 2013 during the intervention development, 2-3 weeks after receiving the first messages, but included longer-term impacts. 17 A strength of the interviews and the open feedback analysis was that they were conducted by two researchers not previously involved in the intervention development or trial. We analysed all of the open feedback comments left by over 3500 trial participants (56% of participants who enrolled into the trial and 72% of those who completed the 12-month questionnaire). The experience of those not leaving a free-text comment may be different from those who did. However, the characteristics of respondents were similar to the characteristics of trial participants, including those from diverse sociodemographic and ethnic groups. It is not possible to blind participants receiving a behavioural intervention, which could introduce bias when obtaining feedback.
Open feedback comments were brief, optional and completed at the end of participants' involvement in the trial, so it was not possible to explore participant views in depth or follow up on feedback. During interviews, however (and despite having to stop the interview study slightly earlier due to the COVID-19 pandemic), we were able to gain greater insight into themes that had been generated during analysis of open feedback comments. Our qualitative analyses provide little direct evidence to explain the unanticipated quantitative trial findings, but raise some plausible explanations. Both qualitative analyses and quantitative analyses of intermediate and secondary trial outcomes showed increased correct condom use self-efficacy and increased condom use. This effect did not seem big enough to translate into reduced reinfection rates in the intervention group, given that those who reported increasingly using condoms did not necessarily use them on every occasion. In addition, a few intervention participants seemed to prefer a secondary prevention approach with frequent STI testing over a primary prevention approach with consistent condom use. In both intervention and control groups, there were large reductions between baseline and follow-up in the number of partners in the preceding year, as would be expected if high-risk people were reverting to the norm. However, there was a marginally smaller reduction in the intervention group. Previous trials of group interventions targeting those at high risk for STI have had unanticipated effects in normalising risk behaviours and increasing STI. 30 The 'shock' of having had an STI and receiving control group messages reminding them of their STI might have deterred control group participants for a longer period from engaging in new relationships than intervention group participants. Some intervention recipients reported feeling less ashamed about their STI, generally more confident in discussing sexual health and/or reassured that their infection could be easily treated. Lower stigma about having an STI carried benefits in emotional well-being and reportedly gave a few the confidence to start a new relationship following their STI. While this was a positive outcome from recipients' perspectives, starting a new relationship confers some additional STI risk. Whether that risk is worth taking depends on what people are getting out of new relationships. Our analyses suggest intervention recipients were better equipped to get the sex they want (and to avoid the sex they do not want). The trial suggests that sex was no less a risk for STIs, but it may have had more value to them. Our qualitative analysis also suggested that testing after sex with new partners was increased, but not before first sex. The safetxt trial indicator assessed STI testing 'prior' to first sex with new partners (showing no difference between groups), whereas the few previous mHealth trials we identified in a systematic review that showed an effect on STI/HIV testing only enquired about whether participants had an STI test within a specified time period. 31 Secondary analysis of the safetxt trial data looking at overall testing data in clinics (rather than self-reported tests 'prior' to first sex) is consistent with this, with slightly higher clinic testing for STI in the intervention group (1549/3123, 50%) vs the control group (1477/3125, 47%).
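As a rough check on the clinic-testing comparison just quoted, the two arm-level proportions can be compared with a standard two-sample test of proportions; a minimal sketch in base R (a crude comparison for illustration, not the trial's prespecified analysis):

# Crude comparison of clinic STI testing between arms
# intervention: 1549/3123 tested; control: 1477/3125 tested
prop.test(x = c(1549, 1477), n = c(3123, 3125))
# reports the two sample proportions (about 0.50 vs 0.47), a chi-squared
# statistic and a 95% CI for the difference in proportions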
--- Conclusion This research has described the perceived impacts of receiving the intervention and control group messages on participants. A randomised controlled trial was needed to identify the slightly higher STI diagnoses in the intervention group. The qualitative findings and trial results both show that the components of the safetxt intervention promoting condom use were effective. Since this is a unique finding not seen in any previous similar mHealth interventions, 31 service providers could consider delivering this content. Further research could consider recipients' recommendations for future interventions and explore how to achieve and measure positive impacts of reduced stigma about having an STI and increased sexual well-being, as well as reduced subsequent STI. Twitter Sima Berendes @BerendesSima --- Competing interests None declared. --- Patient and public involvement Patients and/or the public were involved in the design, or conduct, or reporting, or dissemination plans of this research. Refer to the Methods section for further details. --- Patient consent for publication Not applicable. --- Provenance and peer review Not commissioned; externally peer reviewed. --- Ethics approval --- Data availability statement Data are available on reasonable request. Deidentified data collected as part of the randomised controlled trial will be made available via the data sharing portal FreeBIRD after publication of the primary and secondary analyses, as outlined in the Data sharing statement of the trial publication Free et al (2022). Part of the study materials and anonymised extracts of the interview study conducted after trial completion have been included in online supplemental files of this article. Sharing of further anonymised qualitative data extracts on reasonable request would have to be in line with data protection laws and subject to appropriate ethics committee approval.
Objectives To understand the barriers and enablers for UK healthcare workers who are considering going to work in the current Ebola outbreak in West Africa, but have not yet volunteered. Design After focus group discussions and a pilot questionnaire, an anonymous survey was conducted using SurveyMonkey to determine whether people had considered going to West Africa, what factors might make them more or less likely to volunteer, and whether any of these were modifiable factors.
Participants The survey was publicised among doctors, nurses, laboratory staff and allied health professionals. 3109 people answered the survey, of whom 472 (15%) were considering going to work in the epidemic but had not yet volunteered. 1791 (57.6%) had not considered going, 704 (22.6%) had considered going but decided not to, 53 (1.7%) had volunteered to go and 14 (0.45%) had already been and worked in the epidemic. --- Results For those considering going to West Africa, the most important factor preventing them from volunteering was a lack of information to help them decide; fear of getting Ebola and partners' concerns came next. Uncertainty about their potential role, current work commitments and inability to get agreement from their employer were also important barriers, whereas clarity over training would be an important enabler. In contrast, for those who were not considering going, or who had decided against going, family considerations and partner concerns were the most important factors. --- Introduction On 21st March 2014, the World Health Organisation was officially notified of an outbreak of Ebola virus disease due to Zaire ebolavirus in Guinea, Liberia and Sierra Leone. The outbreak was declared a "public health emergency of international concern" on 8th August 2014 [1]. As of January 9th 2015, a total of 21086 cases (13376 laboratory confirmed) and 8289 deaths have been reported [2]. The epidemic is currently doubling approximately every 4 weeks and the case fatality rate, when based on the most accurate available information, is around 70 percent [1]. Small numbers of cases have occurred in Nigeria, Senegal and Mali. In late September the outbreak became transcontinental with the importation of a previously subclinical case to Texas, USA from Liberia [3]. Onward transmission occurred in the healthcare facility in Texas [3], and also occurred in Spain, following the repatriation of an infected healthcare worker [4]; more countries in West Africa may also be at risk of Ebola [5]. This is an exceptional situation with the potential for spread to almost any country in the world [6]. The global response to the outbreak has been slow. As early as April 2014 Médecins Sans Frontières (MSF) warned that this outbreak was "unprecedented" [7]. MSF has criticised the speed of response on several occasions [8,9] and on 5th September 2014, the number of deaths reported to WHO in this outbreak surpassed those in all other known outbreaks combined [1,[10][11][12][13][14][15]. In October 2014 Oxfam suggested that the world had only two months to get the epidemic under control [16]. Tackling the current Ebola virus outbreak requires a global response in terms of money, infrastructure and people. On 21st October 2014 MSF had only 270 international staff and 3018 local staff working in Guinea, Liberia and Sierra Leone [17]. The World Bank has called for at least 5000 more medical and support staff [18]. In addition to the World Bank, organisations such as MSF, WHO and UNICEF have called for more qualified staff to help [19]. In the UK, approximately 1000 healthcare workers have so far volunteered to go to West Africa to help in the response [20]. Many more have considered going, but not yet volunteered. There are likely to be many factors that influence a person's decision regarding whether or not to volunteer in a situation like this. We wanted to understand what these factors are, and in particular whether any of them might be amenable to intervention or influence.
Knowledge of the relative contributions of the different enablers and barriers might guide UK policymakers as to what is needed to ensure more healthcare workers volunteer to help control the Ebola outbreak. Therefore, we conducted a survey of UK health professionals to understand their attitudes towards going to work in the Ebola epidemic in West Africa. --- Methods To understand what some of the potential barriers and enablers might be that influence the decision of healthcare workers over volunteering to go to West Africa, we examined social media, blogs and online comments (see S1 File for information sources). We also conducted small focus group discussions of healthcare workers. Based on these we produced a draft questionnaire which we piloted on a small number of different healthcare workers before modifying into the final version (S3 File). Briefly, the questionnaire asked whether respondents had considered going to work in the Ebola outbreak and what decision they had come to. Two questions investigated what the barriers and enabling factors were according to a 5-point Likert scale from "strongly agree" to "strongly disagree." The fourth question concerned where respondents got their information on Ebola from, and subsequent questions gathered demographic information such as profession, age, sex and level of experience. Free text boxes were included to pick up any other concepts not initially identified in the questionnaire and to enable participants to elaborate on their responses. We used the web-based SurveyMonkey to create and distribute the questionnaire. --- Ethics statement The questionnaire and study protocol were approved by the University of Liverpool research ethics committee (RETH 000774). The survey went live on Wednesday 15th October 2014 and was disseminated using multiple means including various professional colleges, societies, training bodies, letters to the BMJ [21] and the nursing press. A list of the organisations that disseminated the questionnaire is shown in the S3 File. It was also advertised informally by word of mouth and using social media. The survey, which can be found at www.surveymonkey/s/HPRUebola, was entirely anonymous. Initial data were reviewed after one week, and the free text comments were analysed and recurrent concepts identified (1450 respondents). These initial responses were used to modify the questionnaire through the inclusion of 2 additional barriers and 4 enabling factors (S4 File). The revised questionnaire went live on Wednesday 22nd October 2014. Responses were downloaded as comma-separated values at 9:20pm on 4th November 2014 (S1 Dataset). Analysis was conducted using R software version 2.15.3 (R Core Team 2013). Responses were analysed descriptively and proportions of respondents giving certain answers were calculated. In order to rank the relative importance of barriers and enablers, values were assigned to responses on the Likert scale ("Strongly agree", +2; "Agree", +1; "Neither disagree nor agree", 0; "Disagree", -1; "Strongly disagree", -2) and the total score for each barrier or enabler was calculated. We wished to explore whether demographic or other factors accounted for any responses observed. We were also interested in identifying whether particular barriers may cluster together. This is important because these barriers would all need to be addressed to affect an individual's decision; barriers that did not cluster could be dealt with in isolation.
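To make the scoring rule above concrete, here is a minimal sketch in R of how one barrier item could be scored; the response vector is hypothetical, purely for illustration:

# Map the 5-point Likert responses onto the scores described above
likert <- c("Strongly agree" = 2, "Agree" = 1,
            "Neither disagree nor agree" = 0,
            "Disagree" = -1, "Strongly disagree" = -2)
# hypothetical responses to one barrier item
responses <- c("Agree", "Strongly agree", "Disagree",
               "Agree", "Neither disagree nor agree")
barrier_score <- sum(likert[responses])  # total score used to rank this barrier
barrier_score                            # 3 for this toy example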
In order to address these questions simultaneously, we used redundancy analysis, a form of multivariate analysis that combines a principal component analysis, to identify clusters, with regression, to identify significant explanatory variables; this analysis used the R package "vegan" [22], according to the methods described by Borcard [23]. A matrix of explanatory variables was constructed for the redundancy analysis, on which the response to each barrier was regressed. A forward selection process was used to select significant variables which explained the greatest proportion of variance in the response data, and permutation tests were used to test the significance of RDA axes. Triplots were produced according to correlations between variables (scaling 2 in the vegan package). --- Results A total of 3109 people completed the survey between 15th October and 4th November 2014. Two thousand and ninety-eight (68%) respondents were doctors, 674 (22%) were nurses and the remainder were a mixture of armed forces health professionals, paramedics, pharmacists and a wide range of other allied health professionals (3% did not give their profession). The largest group of respondents, 943 (31%), worked in acute specialties such as acute medicine, emergency medicine and intensive care (Table 1). Medical specialties came next with 728 (24%) respondents, followed by others, including primary care, infection specialties, paediatrics, surgery and obstetrics & gynaecology. Respondents were generally experienced, 77% having more than 5 years of experience since their primary health care qualification and 55% with more than 10 years. Fifty-one percent of those answering had children or other dependents at home. Four hundred and seventy-two (15%) respondents were considering going to West Africa to help with the Ebola virus epidemic, but had not yet volunteered ("Considering"); 1791 (58%) had not considered going ("Not Considered"); 704 (23%) had considered it and decided not to go ("Decided Against"); 53 (1.7%) had made definite plans to go and 14 (0.4%) had already been to West Africa to help in the outbreak. Our analysis focussed on the 472 people in the Considering group, because this is the group who may be willing to go to West Africa. For people in this group, the most important barrier identified for not yet having volunteered was insufficient information to reach a decision (Fig. 1). Some of the areas where information is required can be summed up in this quote: "Lack of information is my main barrier. I have no idea whether my skill-set would be useful there, if I am needed there, how to go about joining the efforts, or how to negotiate the time off with my trust. Any information would be greatly appreciated." The main areas where information could be targeted are shown in Table 2. In particular, although there is a dedicated website where NHS employees can express their interest (http://ukmed.humanities.manchester.ac.uk/), it is clear from these responses that this website is not widely known by many people (including those who would like to go and help). Additionally, the information people are seeking is not available, either on that website or elsewhere. People would appear to welcome a more direct appeal for help. Amongst doctors in training in particular there was a need for clarity on how it would affect their training programmes. There were also many comments regarding the lack of information concerning exactly what skills or experience would be useful.
Finally, respondents wanted timely responses from the organisations sending people out to help. Areas of lesser concern that nevertheless prompted comments were the risk of contracting Ebola and medical evacuation. The next two barriers were the fear of getting Ebola and a partner's concerns about them going. Further important issues for the Considering group included uncertainty about what their role would be in the epidemic, work commitments at home, and not being able to get sufficient time off from their employer. The free text comments exemplified many of these issues: "Lack of information is my main barrier. I have no idea whether my skill-set would be useful there, if I am needed there, how to go about joining the efforts, or how to negotiate the time off with my trust." (Male, gastroenterology registrar.) "As a junior doctor I want to help but don't know if I would be eligible or useful. It would be really helpful if UKIEMR/other organisations published a list of necessary/desirable qualities." (Male, F2, acute medicine.) "My host trust refuses to let me go due to significant staff shortages of middle grade doctors of appropriate training to fill my place, and also currently they do not have the funds to replace me for 3 months. I offered to take unpaid leave and for my salary to be used by the Trust to fund a locum." (Female, emergency medicine registrar.)
Table 1. Demographic information for all the respondents who completed the survey, divided according to whether they have considered going to West Africa to help in the current Ebola virus epidemic.
The barriers for people in the Considering group were very different to those in the Not Considered and Decided Against groups; for these latter two groups, many of the issues were similar. Thus family commitments and partners' concerns were the two most important issues, and insufficient information was much less important (Fig. 1). Of note, nurses in the Decided Against group were more likely than other groups of respondents to answer that their employer prevented them from volunteering. The demographics of the groups also differed: 43% of those considering going were in the 26-35 age group, compared with 36% of those not considering going and 39% of those who had decided against going (Table 1; χ² test p = 0.002). Those considering going were also less likely to have children (χ² test p < 0.0001). They also less frequently "Agreed" or "Strongly Agreed" with the questions that emphasised the barriers to going; for example, they were significantly less afraid of contracting Ebola (χ² test p < 0.0001). Interestingly, in the group considering going but who had not yet volunteered, fear of Ebola was associated with getting information predominantly from the media. Conversely, across all respondents to the survey, getting information predominantly from the medical literature (irrespective of how much information was from the media) was associated with a roughly fourfold decrease in fear of contracting Ebola (χ² test p < 0.0001).
Fig. 1. Barriers and enablers to going to West Africa to help with the Ebola outbreak, by group of respondents: those who were considering going but had not yet decided ("Considering"); those who had not considered going ("Not Considered"); those who had considered it and decided not to go ("Decided Against"); those who had volunteered and were waiting to go ("Volunteered"); and those who had already been ("Already Been"). The importance of each issue is indicated on a 5-point Likert scale from "strongly disagree" to "strongly agree." Issues marked * were introduced in the second version of the questionnaire from 22nd October onwards (1450 responses). Data are the percentage of respondents giving the answers indicated, and the rank shows how important each issue was for that group. The values from which the figure is derived are given in S1 and S2 Tables.
Table 2. Areas of information required, with example responses.
Exactly how people can volunteer: "I am frustrated. I don't know how to get out there. I want to go" / "I really want to go, but I don't know how and with which organisation." / "I would love to go and help, but am unsure of how to go about doing it" / "I would be keen to go but I don't know how I would get involved" / "Don't know how or to whom I can talk to to make this happen" / "I would love to go and help out in West Africa but I have no idea how I would go about volunteering" / "I haven't come across a 'one-stop' site for info re potential NHS volunteers... is there one I've missed?" / "Need a simple and well publicised sign up procedure" / "I would have liked the opportunity to attend an event for potential volunteers to get an idea of whether I could do something useful there without any commitment at this stage"
Need for information to be more directly disseminated to front line healthcare workers: "If you need NHS staff to volunteer to help with the Ebola outbreak, you need to approach staff more directly. I cannot remember having any email requesting that I consider it." / "Until I read the letters in the BMJ I wasn't aware that volunteers were being proactively recruited" / "I have not been made aware of any official drive for UK doctors to travel to the outbreak, but would certainly be interested in doing so. Do get in touch." / "Bring information to us, i.e. a recruitment drive within the NHS to give us all the info" / "I personally haven't been aware of any campaigns/information given to health care workers asking for volunteers. Only aware of it due to media coverage"
For those considering going to West Africa who had not yet volunteered, the most important "enabling" factor, which would make them more likely to go, was more information and training (Fig. 1). The availability of effective treatments and/or a vaccine against Ebola also featured highly as factors that would reassure the survey respondents. Interestingly, reassurance about repatriation in the event of contracting Ebola did not feature highly as a barrier, except in the small group of respondents who had actually been to work in the epidemic, where it ranked as the number one concern. To determine how demographic and other factors influenced the barriers, and whether any particular factors clustered together, we performed redundancy analyses, both on the whole dataset and for the three main groups: Considering, Not Considered, and Decided Against. For this analysis, the responses to the question regarding barriers to going were the response variables, and the demographic (or other) data were the explanatory variables (e.g. age, previous experience, profession, specialty).
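Constructing the matrix of explanatory variables for such an analysis typically involves dummy-coding the categorical demographics; a small hypothetical example (column names invented for illustration) that would feed the `rda()` sketch shown earlier:

```python
import pandas as pd

# Hypothetical demographics: one row per respondent.
demo = pd.DataFrame({
    "profession": ["doctor", "nurse", "doctor"],
    "age_group": ["26-35", "36-45", "26-35"],
    "children": [0, 1, 0],
})

# One-hot encode categorical variables so each level becomes an
# indicator column; drop_first avoids perfect collinearity.
X = pd.get_dummies(demo, columns=["profession", "age_group"],
                   drop_first=True)
print(X)
```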
All variables that showed a significant association with responses regarding barriers are shown on the redundancy analysis plots (Fig. 2 and S1 Fig.). The demographic characteristics of the Considering group were very diverse, but in general, those variables that might be expected to cluster together did so. For example, having children was closely associated with reporting family commitments and a partner's concerns as barriers. In this analysis, lack of sufficient information, which was the most important barrier, was not strongly associated with any other variable, but was loosely associated with two other clusters of barriers: one comprising fears of unrest and worries about not being repatriated if unwell with Ebola, and another comprising concerns about what the role would be, insufficient experience, and fear of catching Ebola. These factors were less of a concern for the subgroup of people who were considering going and already had experience of working in sub-Saharan Africa; such healthcare workers were more likely to be male and to have longer professional experience. Across all respondents to the survey, concerns about leaving families or partners were more frequently reported by people with children, particularly in the 36-45 age group. People with previous experience in sub-Saharan Africa were less concerned that they did not know what they would actually do, or that they did not have the right experience; they were also less concerned about civil unrest or not being repatriated. People who reported that they obtained most of their information about the Ebola crisis through the media were more likely to feel they did not have enough information to inform their decision, and had less idea of what would be expected of them. Doctors with increasing professional experience were most concerned about the impact on their colleagues and families. The younger age groups, and those in allied health professions (pharmacists, biomedical scientists, paramedics and nurses), had more concerns that their career would be adversely affected, or reported that their employer would not allow them to go. These categories of respondents also tended to get more of their information from the media. Those working in infection specialties were the least likely to report having insufficient information.
Table 2 (continued). Risk of infection: "Although I am potentially willing to help there is no clear information about current infection rates amongst staff volunteering, and about methods of keeping safe... I need this information to weigh the risks to myself and family against the urge to assist. I do not know where this information could be found." Need for information regarding evacuation: "I contacted NGOs working in the West Africa and I once agreed to work in the field with one of these NGOs. However, I could not obtain appropriate information on risk management such as emergency evacuation; thus my employer did not agree for me to work in the west Africa." / "... there is no clear guidance about what would happen if one of us contracted the virus." / "The main concern is the uncertainty regarding medical evacuation" / "Once... the rules for medical evacuation are defined I would most likely be willing to go..." / "I would find it easier to decide if I had more information re options for participating, such as job descriptions, location, set-up, training, medical insurance, medical evacuation if needed." (doi:10.1371/journal.pone.0120013.t002)
--- Discussion In this study, 15% of the more than 3000 healthcare workers who responded said they were considering going to West Africa to help with the Ebola outbreak; the primary reason they had not yet volunteered was a lack of information to help them decide. In addition, concerns about what their role would be and the attitudes of their employer also contributed. All of these factors could potentially be modified. Fear of contracting Ebola also featured highly among the reasons for not yet volunteering, as did the concerns of a partner. Our redundancy analyses indicated that, as a barrier, lack of information did not cluster tightly with any of the other barriers, though it did associate loosely with respondents' concerns about what their role would be, having insufficient experience, and fears of civil unrest, of catching Ebola, and of not being repatriated if unwell with the disease. Many of these concerns would likely be allayed with appropriate information, underscoring the importance of this one factor. The absence of adequate information may leave health workers getting more of their information from the media, which we have shown is associated with greater fear of contracting Ebola compared with obtaining information from more definitive sources such as the medical literature. Nearly 85% of respondents had either not considered going, or had decided against going, to help in the epidemic. The overwhelming reasons for this were a partner's concerns and family commitments, especially having children or other dependents. Interestingly, the responses of those two groups were very similar, suggesting that even those who reported they had not considered going may have considered it at some level and decided not to pursue it. Compared with these two groups, those considering going to West Africa were less likely to have children or dependents, and generally perceived all of the barriers as less of a hindrance, except for lack of access to information. Because lack of information seemed to be a key factor, we examined the websites of the main organisations sending volunteers to work in the Ebola epidemic (the British Red Cross, International Medical Corps, Médecins Sans Frontières, Save the Children, and UK-Med) to see what information was available (Table 3). In general, these organisations are calling for doctors and nurses who have a reasonable amount of experience (over 3 years post registration), preferably including experience in low-resource settings. The desirable specialties were emergency medicine, infectious diseases, critical care and paediatrics. Another issue highlighted by those who were considering going was the need for training specific to the tasks that would be carried out. Many potential volunteers seemed unaware that rigorous training is included in the typical 4-6 week deployment. This again highlights the provision of reliable information as a key missing component of the current response. The redundancy analyses for all the groups of respondents showed a strong relationship between reporting there was insufficient information and obtaining most of their information from the media. Some respondents commented that if they were asked directly to help out (e.g. via an e-mail), they would be more likely to consider it, rather than just hearing about the need for volunteers via the media.
Additional barriers identified by our respondents in free text comments were uncertainty about whether pay would continue as normal and a lack of clarity about whether employers would release NHS staff members. A limitation of our study is that it used a convenience sample and is unlikely to be completely representative of all UK healthcare workers. Given the fast-moving nature of the Ebola epidemic, the time necessary to fully assess response bias in the UK health worker population would have precluded the study results from being sufficiently timely to inform policy. A total of 3109 people completed the survey, equivalent to 0.33% of the 937,000 registered doctors, nurses and midwives in the UK [29,30]. Thirty-one percent of respondents worked in acute care specialties, 68% were doctors and 22% were nurses. These figures do not reflect the UK health care worker population, in which nurses outnumber doctors by more than two to one. This is perhaps the most significant limitation of our study, as there is a greater need for skilled nursing care than for medical expertise in the epidemic. It is possible that this imbalance in respondents has led us to wrongly identify significant barriers. For example, nurses who had decided against going to work in the epidemic were more likely than doctors to cite their employer as a reason not to go. However, it is of note that, in the Considering group, there were no significant differences between the nurses responding and any other group. Two percent of our respondents had either made definite plans to go to West Africa or had actually been and worked in the epidemic. Generalised across the whole health worker population, this would equate to nearly 19,000 volunteers. In reality, the number is nearer 1000. This confirms the expected bias in our responding population in favour of people who are more interested in helping with the epidemic. However, given that this is primarily the group of interest, particularly regarding the barriers and enablers to them going to West Africa, we do not think this is a major limitation. A further limitation is that we changed the questionnaire during the study. Nevertheless, the central conclusion of our study, that lack of information is hindering potential volunteers, is based on questions that were present in all versions of the questionnaire and was not influenced by the alteration a week into the survey period. A final limitation is that we did not specifically ask whether the destination country would influence willingness to volunteer. The majority of British health workers volunteering in the Ebola epidemic are being deployed to Sierra Leone because of Britain's historic links with that country. We detected no evidence from free-text comments that the country of destination would influence the decision to volunteer. In summary, our study has shown that many more people are considering going to West Africa than have actually signed up, and one of the major factors holding them back is lack of information. Policies aimed specifically at addressing this, such as a well-publicised, high-quality portal of reliable information, would likely result in more UK healthcare workers volunteering to help tackle Ebola in West Africa. --- Data are available as a supplementary file. This file has been modified very slightly to remove any potentially personal identifiable information.
In this paper, we explore the role that attribution plays in shaping user reactions to content reuse, or remixing, in a large user-generated content community. We present two studies using data from the Scratch online community, a social media platform where hundreds of thousands of young people share and remix animations and video games. First, we present a quantitative analysis that examines the effects of a technological design intervention, the introduction of automated attribution of remixes, on users' reactions to being remixed. We compare this analysis to a parallel examination of "manual" credit-giving. Second, we present a qualitative analysis of twelve in-depth, semi-structured interviews with Scratch participants on the subject of remixing and attribution. Results from both studies suggest that automatic attribution by technological systems (i.e., the listing of names of contributors) plays a role that is distinct from, and less valuable than, credit, which may superficially involve identical information but takes on new meaning when it is given by a human remixer. We discuss the implications of these findings for the designers of online communities and social media platforms.
INTRODUCTION Networked information technologies have changed the way people use and reuse creative, and frequently copyrighted, materials. This change has generated excitement, and heated debate, among content-creators, technologists, legal academics, and media scholars. Media theorist Lev Manovich argues that remixing is an ancient cultural tradition (e.g., he has suggested that ancient Rome was a "remix" of ancient Greece) but that information technologies have accelerated these processes and made remixing more salient [14]. Sinnreich et al. argue that "configurable culture" has been significantly transformed by networked technologies, which introduce perfect copying and allow people not only to be inspired by extant creations but to remix the original works themselves [25]. Legal scholars have stressed the importance of remixing in cultural creation broadly and warned that current copyright and intellectual property laws may hinder creativity and innovation [11,1]. Several of the most influential scholarly explorations of remixing as a cultural phenomenon have focused on youths' remixing practices. For example, work on remixing by Jenkins [10] and Ito [9] has focused on young people's use and re-use of media. Palfrey and Gasser have suggested that the cultural practices of "digital native" youth have had a significant transformative effect on our culture [17]. Throughout his book "Remix," Lessig uses youths' reuse practices to support an argument against what he considers excessive copyright protection [12]. Yet, despite a wide interest in remixing and authorship, researchers have only recently engaged in empirical research on the subject [4]. Several recent treatments have presented studies of video remixing communities [5,24], music remixing communities [4], collaborative video game communities [13] and social network sites [18]. There is also another quantitative study of our empirical setting [8] focused on characterizing the variety of responses to remixing. These studies have tended to be general and largely descriptive examinations of remixing practice. This work has pointed to the existence of norms [5] and the territoriality of digital creators [26] and has considered issues of motivation [4]. However, empirical work has yet to unpack in detail the key social mechanisms that scholars have suggested drive behavior, norms, and motivation in remixing communities. Perhaps no mechanism has been more frequently cited as critical for remixing activity than attribution and the related phenomena of plagiarism, reputation, and status. For example, recent survey-based work has suggested that the "authenticity and legitimacy" of creative work "are premised on the explicit acknowledgment of the source materials or 'original creator'" and that such acknowledgment is a key component of how adults assess the fairness or ethical nature of content reuse [25]. Attribution, in this sense, can be seen as an important way that people distinguish remixing from "theft." Judge and law professor Richard Posner stresses the importance of attribution and explains that it matters even when there is no monetary benefit to being attributed. For example, he explains that European copyright law is based on a doctrine of "moral rights" that "entitles a writer or other artist to be credited for his original work and this 'attribution right', as it is called, would give him a legal claim against a plagiarist."
Posner also explains that "acknowledgment" of another's contributions to a derivative negates any charge of plagiarism, although it may not establish originality [20]. Attribution plays such an important role in remix culture that Creative Commons made attribution a required component of all their licenses after more than 97% of licensors opted to require attribution when it was offered as a choice [2]. Young people's perceptions of attribution, and the complications around copying, have also been examined. An article by Friedman reports that adolescents who accepted "computer pirating", the unauthorized copying of computer programs, did so because technological affordances made it difficult for them to identify "harmful or unjust consequences of computer-mediated actions" [7]. In a second study, psychologists Olson and Shaw found that by five years old, "children understand that others have ideas and dislike the copying of these ideas" [16]. Yet, despite the fact that researchers in human-computer interaction have begun to explore the complexity of attribution and cited its importance to remixing [13], many designers of online communities pay little attention to issues of attribution in their designs, a fact that is reflected in user behavior. For example, research on the use of photos from the photo-sharing site Flickr [22], as well as on a number of other user-generated content communities [23], suggests that most reusers fail to attribute re-used content in the ways that public-use licenses require. Although theory and survey-based work point to a need to design for attribution in user-generated content communities, we still know very little about how attribution works or how designers might go about supporting it. Indeed, our study suggests that the most obvious efforts to design for attribution are likely to be ineffective. In this paper, we employ a mixed-methods approach that combines qualitative and quantitative analyses to explore users' reactions to attribution and its absence in a large remixing community. First, we introduce our empirical setting; using qualitative data from user forums and comments, we present a rich description of remixing and evidence to support our core proposition that credit plays a central role in remixing in our environment. Second, we contextualize and describe a technological intervention in our setting, responding directly to several user suggestions, that automated the attribution of creators of antecedent projects when content was remixed. Third, we present a tentative quantitative analysis of the effect of this intervention along with a parallel analysis of the practice of manual credit-giving. We find that credit-giving, done manually, is associated with more positive reactions, but that automatic attribution by the system is not associated with a similar effect. Fourth, we present an analysis of a set of in-depth interviews with twelve users which helps confirm, and add nuance and depth to, our quantitative findings. Our results suggest that young users see an important, if currently under-appreciated and under-theorized, difference between credit and attribution. Credit represents more than a public reference to an "upstream" user's contributions. Coming from another human, credit can involve an explicit acknowledgment, an expression of gratitude, and an expression of deference, in a way that simple attribution cannot.
Our results suggest that identical attribution information means something very different to users when it comes from a computer and when it comes from a human, and that users often feel that acknowledgment is worth much less when it comes from a system. We conclude that designers should create affordances that make it easier for users to credit each other, rather than merely pursuing automated means of acknowledgment. Our study offers two distinct contributions for social scientists and for technology designers. The first is an improved understanding of the way that attribution and credit work in user-generated content communities. The second is a broader contribution to the literature on design that suggests an important limitation on technologists' ability to support community norms, and a suggestion for how designers might create affordances. Functionality that allows users to express information that a system might otherwise show automatically may play an important role in successful design for social media environments. --- SCRATCH: A COMMUNITY OF YOUNG REMIXERS The Scratch online community is a free and publicly available website where young people share their own video games, animated stories, interactive art, and simulations [15]. Participants use the Scratch programming environment [21], a desktop application, to create these interactive projects by putting together images, music and sounds with programming command blocks (see Figure 1). The Scratch website was officially announced in 2007 and, as of September 2010, had more than 600,000 user accounts and 1.3 million shared projects. At the time of writing, Scratch users share on average one new project per minute. Examples of projects range from an interactive virtual cake maker, to a simulation of an operating system, to a Pokemon-inspired video game, to an animation about climate change, to tutorials on how to draw cartoons. As on other user-generated content websites, such as YouTube or Flickr, Scratch projects are displayed on a webpage (see Figure 2) where people can interact with them, read metadata and give feedback. Visitors can use their mouse and/or keyboard to control a video game or other type of interactive project, or simply observe an animation play out in a web browser. Metadata displayed next to projects includes a text-based description of the project, the creator's name, the number of views, downloads, "love its," remixes, and the galleries (i.e., sets of projects) that the project belongs to. Users can interact with projects by giving feedback in the form of tags, comments, or clicks on the "love it" button, and can flag a project as "inappropriate" for review by site administrators. Participants' self-reported ages range primarily from 8 to 17 years old, with 12 being the median. Thirty-six percent of users self-report as female. A large minority of users are from the United States (41%), while other countries prominently represented include the United Kingdom, Thailand, Australia, Canada, Brazil, South Korea, Taiwan, Colombia and Mexico. About 28% of all users (more than 170,000) have uploaded at least one project. --- Remixing in Scratch Scratch users can download any project shared on the website, open it in the Scratch authoring environment, learn how it was made, and "remix" it. In Scratch, the term "remixing" refers to the creation of any new version of a Scratch program by adding, removing or changing the programming blocks, images or sounds.
In this section we use qualitative data from the Scratch website to provide social context for remixing and to suggest that credit plays an important role in how users conceive of appropriate remixing practice. Remixing in Scratch is not only technically possible; it is something that the administrators of the website encourage and try to foster as a way for people to learn from others and collaborate. On every project page, the Scratch website displays a hyperlink with the text "Some rights reserved" that points to a child-friendly interpretation of the Creative Commons Attribution-ShareAlike license under which all Scratch projects are licensed. Even the name Scratch is a reference to hip-hop DJs' practice of mixing records. A large portion of all projects shared on the Scratch website (28%) are remixes of other projects. That said, remixing is not universally unproblematic in Scratch. Previous quantitative analysis of the Scratch community showed that Scratch participants react both positively and negatively to the remixing of their projects, and found that of those users who viewed a remix of their project, about one-fifth left positive comments while the same proportion accused the remixer of plagiarism [8]. This ambivalent reaction to remixing is echoed, and given additional texture, in the comments and complaints left by users on the Scratch website and sent to Scratch administrators. For example, even before the Scratch website was publicly announced, a number of early adopters became upset when they found remixes of their projects on the website. Indeed, one of the very first complaints about Scratch occurred on the discussion forums, where a 13-year-old boy asked: Is it allowed if someone uses your game, changes the theme, then calls it 'their creation'? Because I created a game called "Paddling 1.5" and a few days later, a user called "julie" redid the background, and called it 'her creation' and I am really annoyed with her for taking credit for MY project!! A similar complaint was sent to the website administrators by a 14-year-old boy: I think there should be a way to report plagiarized projects I've been seeing a lot of people's projects taken and renamed. This member, named kings651, has 44 projects, and most of them are made by other people. He even has one that I saw my friend make so I know he actually made it. In other cases, the disagreements over remixing were more public and involved communication via projects and comments. For example, user koolkid15 wrote the following message in a comment left in response to a remix that shows a cat frowning: Hi i'm koolkid15 the original creator of luigi disco jay-man41 copied me!! and didn't even aknowladge me he didn't change anything !! I wrote or drew!! and jayman...if your reading this think about other people!!!! Despite the fact that Scratch was conceived, designed, and launched as a platform for remixing, these users expressed their displeasure at remixing. That said, none of these users complained about the reuse of their projects per se; rather, they complained about unfair "taking credit", plagiarism, and a lack of acknowledgment. Remixing was seen as problematic for koolkid15, for example, because of the non-transformative nature of the reuse, the lack of acknowledgment of antecedent contributors, and the confusion about credit that would result. Of course, other, more positive, scenarios around remixing also played out in Scratch.
For example, jellogaliboo created a remix of Catham's project and wrote the following in the project notes: "i kinda copied Catham's "jetpackcat" game. i used the kitty, the blocks (i added and changed some), and the fuel thingy." Catham later posted his approval of the remix saying, "I like what you changed about my project!" Like this example, many of these positive experiences involved explicit credit-giving by a remixer to the creator of the antecedent project. --- DESIGN INTERVENTION: AUTOMATING ATTRIBUTION Several user complaints about remixing and plagiarism also included suggestions for how Scratch's designers might address them. For example, in response to the forum thread mentioned in the previous section, a 16-year-old proposed two potential design-based solutions: Make it so you can only download a view of how your game/story/animation works. Or make it so downloadable Scratch files have read only protection. Maybe downloaded Scratch files, after being uploaded, are marked with the creators name at the bottom, and then any DIFFERENT people who edit it after are put on the list. Influenced by these comments, Scratch administrators came to believe that negative responses towards remixing were often due to the fact that Scratch users did not acknowledge the sources of their remixes. As a result, these administrators implemented an architectural design change to the Scratch community along the lines suggested in the second half of the quotation above. The design change involved the introduction of a new technological facility that automatically identified and labeled remixes and inserted, under each remix, hyperlink pointers to the remix's antecedent and the antecedent's author (see Figure 3). Two days after the introduction of this feature, functionality was added to link to a comprehensive list of derivative works from the pages of antecedent projects (see Figure 4). The new feature was announced in the discussion forums by an administrator of the website, and user responses were positive. User terminator99 suggested that the change was "Awesome." Another user, marsUp, posted a comment saying, "That's a very useful feature! I like that we can do pingpong like modding in Scratch." Users who did not visit the discussion forums also responded well to the new feature. For example, user greekPlus posted a comment on a remix he created saying, "i remixed it for you but i do not know how to ad credit to you for thinking of it in the first place." A few minutes later he realized that the remix automatically displayed the attribution and posted a comment saying, "never mind it did it for me. cool!" --- STUDY 1: HUMAN AND MACHINE ATTRIBUTION Although initial user feedback on the automatic attribution feature was positive, users continued to complain about remixing. In Study 1a, we present a quantitative analysis to more fully evaluate the effect of the technological design change described in the previous section. In Study 1b, we offer a parallel analysis of the relationship between manual credit-giving by users and users' reactions to being remixed. Both studies build on a dataset used in previous work by Hill, Monroy-Hernández, and Olson [8]. This dataset includes remix pairs identified by an algorithm using detailed project metadata tracked by the Scratch online community. The dataset is limited in that it does not include projects whose concepts were copied by a user who had seen another's work but did not actually copy code, graphics or sound.
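To make the intervention concrete, here is a hypothetical sketch of what such an automatic attribution facility might look like; the field names and URL scheme are invented for illustration, since the paper does not describe Scratch's internal implementation.

```python
def attribution_line(remix: dict) -> str:
    """Render the automatic 'based on' notice shown under a remix.

    `remix["parent"]` is assumed to hold the antecedent project's id,
    title, and author; all names here are hypothetical.
    """
    parent = remix.get("parent")
    if parent is None:
        return ""  # not a remix: show nothing
    return (
        f'Based on <a href="/projects/{parent["id"]}">{parent["title"]}</a> '
        f'by <a href="/users/{parent["author"]}">{parent["author"]}</a>'
    )

# Example:
# attribution_line({"parent": {"id": 42, "title": "jetpackcat",
#                              "author": "Catham"}})
```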
Similarly, the dataset contains no measure of the "originality" of projects, nor an indicator for ideas taken from a source outside Scratch (e.g., a user may have created a Pacman clone, which would not be considered a remix in our analysis). The data presented here include the coded reactions of the authors of antecedent projects (i.e., originators) to remixes of their projects shared by other users on the site during a twelve-week period after Scratch's launch, from May 15 through October 28, 2007. Although 2,543 remixes were shared in this period, we limit our analysis to the 932 projects (37% of the total) that had been viewed at the time of data collection by the project originator, a necessary prerequisite to any response. Of these 932 remixes viewed by a project originator, 388 originators (42%) left comments on the remixes in question. The remainder were coded as "silence." Comments left by originators were coded by two coders, blind to the hypotheses of the study and found to be reliable [8], as positive, neutral, or negative. They were also coded as containing accusations of plagiarism (projects in which the originator directly accused the remixer of copying, e.g., "Hello mr plagiarist", "Copycat!") or hinting at plagiarism (projects in which the originator implied that the remixer had copied but did not state this explicitly, e.g., "I mostly pretty much made this whole entire game"). Unless it also contained an explicitly negative reaction, an accusation of plagiarism was not coded as "negative." However, because plagiarism tends to be viewed as negative within Scratch (as suggested by the quotations in the previous section) and more broadly in society [20], we re-coded accusations of plagiarism (both direct and hinting) as "negative" except when, as was the case in several comments coded as "hinting plagiarism," these accusations appeared in comments that were also coded as positive. Previously published work using this dataset, and subsequent robustness checks, show that our results are substantively unchanged if we exclude these explicit charges of plagiarism from the "negative" category or exclude only the weaker "hinting plagiarism" accusations. --- Study 1a: Automatic Attribution To test the effectiveness of automatic attribution, we consider the effect of the design intervention described in the previous section. The design change took place six weeks after the public launch of the Scratch community, at the precise midpoint of our data collection window. The intervention affected all projects hosted on the Scratch online community, including projects shared before the automatic attribution functionality was activated. As a result, we classify originators' reactions as occurring outside a technological regime of automatic attribution when a project was both uploaded and viewed by the project's originator before the automatic attribution functionality was activated. A comparison of the distribution of coded comments across positive, neutral, negative, and silent in the periods before and after the intervention suggests that the introduction of automatic attribution had little effect on the distribution of reaction types (see Figure 5).
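Before turning to the detailed results, the recoding rule described above can be summarised in a few lines; the function and argument names below are illustrative rather than the authors' codebook labels.

```python
def final_code(reaction: str, accuses_plagiarism: bool,
               is_positive: bool) -> str:
    """Recode a comment following the scheme described above.

    Accusations of plagiarism (direct or hinting) are treated as
    negative unless the same comment was also coded positive.
    """
    if accuses_plagiarism and not is_positive:
        return "negative"
    return reaction  # "positive", "neutral", "negative", or "silence"
```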
Although the period after the intervention saw a larger proportion of users remaining silent and smaller proportions of both positive and negative comments, χ² tests suggest that there is no statistically significant difference in originator reactions between remixes viewed before and after the introduction of automatic attribution (χ² = 3.94; df = 3; p = 0.27). As a result, we cannot conclude that there is any relationship between the presence, or absence, of an automatic attribution system in Scratch and the distribution of different types of reactions. These results suggest that automatic attribution systems may have limited effectiveness in communities like Scratch. Of course, our analysis is not without important limitations. For example, the existence of an automatic attribution regime may also affect the behavior of users preparing remixes. A remixer might avoid making perfect copies of projects if they know that their copies will be attributed and are more likely to be discovered. --- Study 1b: Manual Crediting While the introduction of an automatic attribution feature to Scratch appears to have had a limited effect on originators' responses to remixes of their projects, the presence or absence of credit was a recurring theme in discussions on the Scratch online forums, as shown in the quotes in the previous section, and in many of the coded reactions from the periods both before and after the introduction of automatic attribution. Indeed, in project descriptions or notes from both periods, remixers frequently "manually" gave credit to the originators of their work. Even after remixes were automatically attributed to originators, remixers who did not also give credit manually, essentially producing information redundant with what was already being displayed by the system, were criticized. For example, after the introduction of the automatic attribution functionality, a user left the following comment on a remix of their project: Bryan, you need to give me Pumaboy credit for this wonderful game that I mostly pretty much kinda totally made this whole entire game ... and that you need to give me some credit for it For this user, automatic attribution by the system did not represent a sufficient or valid form of credit-giving. In the following study, we test for this effect of "manual" credit-giving by remixers on coded response types using a method that parallels the analysis in Study 1a and uses the same dataset. Manual crediting can happen in multiple ways. Exploratory coding of 133 randomly selected remix pairs showed that in 35 (26%) the remixer gave credit. Of these 35 projects, 34 gave credit in the project description field, while one project gave credit only in a "credits" screen inside the game. As a result, the authors of this study split the sample of projects used in Study 1a and coded each of the user-created descriptions for the presence or absence of explicit, manual credit-giving. To establish that we are examining distinct behaviors, we first checked whether automatic and manual attribution act as substitutes for each other. As suggested by our qualitative findings and our results in Study 1a, we found little difference in the rate of explicit credit-giving between projects created in the presence or absence of automatic attribution. Overall, 276 (about 30%) of the 932 projects in our sample offered explicit credit in the description field of the project.
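For readers who want to reproduce this kind of comparison, the sketch below shows how such a test could be run with SciPy; the counts are hypothetical placeholders, since the cell-level values live in the study dataset rather than in the text.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical counts of originator reactions (positive, neutral,
# negative, silent) before and after the automatic-attribution change.
table = np.array([
    [80, 40, 75, 270],   # before intervention
    [70, 45, 65, 287],   # after intervention
])

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, df = {dof}, p = {p:.2f}")
# The paper reports chi2 = 3.94, df = 3, p = 0.27: no significant
# difference in the distribution of reactions across the two periods.
```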
Manual credit-giving was a widespread practice both before automatic attribution, when 31% of projects in our sample offered explicit credit, and after, when 27% did so. The difference between these two periods was not statistically significant (χ² = 1.41; df = 1; p = 0.24). Previous work studying Jumpcut, a video remixing website, supports the idea that automatic and manual credit-giving are not interchangeable phenomena. One Jumpcut user with permission to create derivative works commented that they "still feel a moral obligation to people as creators who have a moral right to be attributed (and notified) despite the physical design which accomplishes this automatically" [5]. We measured the effectiveness of manual credit-giving using an analysis parallel to Study 1a. As in Study 1a, we compared the distribution of originator reactions in the presence, and absence, of manual credit-giving by remixers. We found that negative reactions are less common in the presence of manual credit, but this difference is very small (from 16% without manual credit to 14% with it). However, the proportion of users who react positively almost doubles in the presence of credit-giving (from 16% with no crediting to 31% in its presence). A graph of these results is shown in Figure 6. Tests show that we can confidently reject the null hypothesis that these differences in the distribution of reactions are due to random variation (χ² = 27.60; df = 3; p < 0.001). Also important to note is a difference in the number of users who are silent after viewing a project (62% in the absence of manual credit versus 49% in its presence). This larger proportion of commenting in general may have an important substantive effect on the discourse and behavior on the site, because silent originators may, for obvious reasons, have a more limited effect on attitudes toward remixing and user experience than vocal users do. As a robustness check, we considered the reactions of only those originators who left comments (n = 388) and found that, even with this smaller sample, our results were stronger. In the restricted sample, 41% reacted negatively when they were not given credit, but only 27% did so when they were credited. Similarly, 42% of users who left comments on projects that did not give credit manually left positive messages, while nearly two thirds of comments (61%) were positive when credit was given. These differences, in the reduced sample that includes only explicit reactions, were also statistically significant (χ² = 14.09; df = 2; p < 0.001). We include the large number of silent participants because we believe that nonresponse is an important type of reaction with real effects on the community. Understanding the reasons behind nonresponse, and the effect of silence in response to different types of credit-giving, remains an opportunity for further research. Although not presented here due to limited space, we followed the general model of previous work using this dataset [8] and tested logistic regression models on dichotomous variables indicating the presence of negative and positive reactions; the basic relationships described above were robust to the introduction of a control for the intervention, to an interaction between these two variables, and to controls for the gender and age of originators and the antecedent project's complexity. Both before and after the intervention, manual crediting resulted in more positive comments from the originators of remixed projects.
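As a hedged sketch of the kind of logistic regression specification described above, using the statsmodels formula API; the data are synthetic stand-ins and the column names are illustrative, not the authors' actual variables.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in data: one row per viewed remix.
rng = np.random.default_rng(0)
n = 932
df = pd.DataFrame({
    "manual_credit": rng.integers(0, 2, n),
    "post_intervention": rng.integers(0, 2, n),
    "originator_age": rng.integers(8, 18, n),
})
# Simulate a lower chance of a negative reaction when credit is given.
p_neg = 0.25 - 0.10 * df["manual_credit"]
df["negative"] = (rng.random(n) < p_neg).astype(int)

# P(negative reaction) as a function of credit, the intervention,
# their interaction, and an age control.
model = smf.logit(
    "negative ~ manual_credit * post_intervention + originator_age",
    data=df,
).fit()
print(model.summary())
```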
Of course, the results presented here are uncontrolled, bivariate relationships, and we caution that these results, while provocative, should still be viewed as largely tentative. As we show in the subsequent qualitative analysis, attribution and credit-giving are complex social processes, and we do not claim that the preceding analyses capture them fully. --- STUDY 2: INTERVIEWS WITH PARTICIPANTS In order to explore the reasoning behind young people's remixing behavior and the attitudes toward attribution that we observed in Study 1, we conducted a second, qualitative study and directly asked kids what role attribution and credit play in their moral evaluations of remixing. --- Methodology We conducted twelve one-hour semi-structured interviews with kids aged 8 to 17 years old. All of the interviewees had experience using computers and had access to the Internet at home. All the interviewees live in the United States except for one, who lives in New Zealand. The participants were recruited via the Scratch website and during meet-ups with educators, teachers and young Scratch users. Eight of the interviews were conducted in person, in the Boston area, and the rest over the phone or voice over IP. The interviews were audio-recorded and transcribed before being analyzed. Nine of the interviewees were members of the Scratch community. The remaining three did not use Scratch but were included as a way to check whether people who do not use Scratch have similar views about remixing, attribution, and credit. We found no substantive difference between the Scratch users and non-users in their answers to questions related to the hypothetical automatic and manual mechanisms for attribution. Before each interview, subjects completed a survey that elicited demographic information and posed questions about their familiarity with other technologies, primarily designed to give a sense of the interviewees' social and technical background. Interviews were structured around a protocol that included a set of nine fictional remixing cases intended to elicit conversations about remixing. (Our interview protocol, including example cases, is available at http://www.media.mit.edu/~andresmh/chi2011/interview.html.) The cases were inspired by Sinnreich et al.'s theoretical work and by three years of experience moderating the Scratch community. They were designed to present situations where remixing could be controversial but where there is no clear "correct" answer. The goal of the cases was to offer a concrete, and common, set of dilemmas to stimulate broad conversations about attitudes toward remixing. The cases were presented in the form of printed screenshots of different project pages from the Scratch website, anonymized to avoid referring to real cases that users might have seen. The printouts were shown to the interviewees (or described over the phone) while explaining each case. All the cases included a remix and its corresponding antecedent project. The cases varied in the presence of automatic attribution, manual credit, and the degree of similarity between the remix and its antecedent. For example, the first three cases were: 1. A remix and its antecedent are identical. The project notes only describe how to play the video game. The remix shows the automatic attribution but no manual credit in the notes. 2. A remix and its antecedent are different (as seen visually and in project metadata) but one can clearly see the influence of its antecedent project.
The project notes of the remix show manual credit but no automatic attribution. The interviewee was told to imagine the site had a glitch that prevented it from connecting the remix to its antecedent. 3. The same set of remix and antecedent projects as in (2), but this time automatic attribution is displayed while manual credit is not. Each of the interview logs was coded using inductive codes and grounded theory [3]. The coded responses were analyzed based on categories related to how interviewees answered specific questions about the distinction between automatic attribution and manual credit. --- Results Confirming the results of Study 1, for users of Scratch, automatic attribution was generally seen as insincere and insufficient. Throughout the interviews, we found that for most of the kids, getting explicit credit from another person was preferred over attribution given automatically by the system. When asked why, kids often responded that knowing that another person had cared enough to give credit was valued more than anything the computer system would do on its own. The fact that it takes some work, albeit minimal, to write an acknowledgment statement sends a signal of empathy, authenticity and good intentions [6]. Amy articulated this when explaining why she preferred getting credit from another person: I would like it even more if the person did it [gave credit] on their own accord, because it would mean that [...] they weren't trying to copy it, pirate it. Similarly, Jon explained, "No [the "Based on" is not enough], because he [the remixer] didn't put that, it always says that." For Jon, automatic attribution is not authentic because it is always there and, as a result, it is clearly not coming from the person doing the remix. Most of the interviewees seemed to have a clear notion of what they think a moral remix should be. For some, it is all about making something different. Jake, for example, defines a "good" remix as, "if it has a bunch of differences then it's a good remix. If it has like two, then it's bad." In addition to the differences between the remix and its antecedent project, for some, manual credit is part of what makes a remix moral. Charles said, "[remixing] is taking somebody else's project and then changing a lot of it and sharing it and giving credit." Continuing, Charles explained: If Green had actually said in the project notes, "This is a remix of Red's project, full credit goes to him," then I would consider it a remix. But this [pointing at a remix without manual credit] is definitely a copy. Likewise, Ryan mentions that a fictional remix was "perfectly fine because they gave credit in the project notes." Interviewees suggested that manual credit also allows users to be more expressive. For example, Susie explained that expressiveness is the reason she prefers manual credit through the project notes: "I think the manual one is better because you can say 'thank you' and things like that. The automatic one just says 'it's based on.'" Susie also notes that, for her, the project notes are a space where a creator can express her wishes with regard to her intellectual property, independent of, and even in contradiction to, the license of the projects: If I do a project that has music that I really like, I often download the project, take the music. Unless it says in the project notes, "Do not take the music."
For Susie and other users of Scratch, the project notes are a space for more than just instructions on how to interact with one's project; they are an expressive space where one can communicate with an audience without having to encumber the creative work itself. Others point to the fact that people do not pay as much attention to the automatic attribution statement as they do to the manual credit left in project descriptions. Jake, for example, explains that, while he agrees there is some usefulness to having both, the project notes are still more important, "because, you know, sometimes people just like skim through a project and you don't see it 'til the end." Jake went on to say that creators who do not provide both should get a "warning." Even though interviewees value manual credit, they still see the usefulness of the automatic mechanism as a sort of community-building prosthetic device, which helps explain the positive reactions to the feature's initial introduction. For example, Nicole argues that while manual credit in the notes has more value for her, automatic attribution is useful as a backup and because it provides a link: Well, I think that they should probably write in the notes that -then it should also say "Based on blank's project," just in case they forget, and also because it gives a link to the original project and it gives a link to the user so you don't have to search for it. A similar explanation was articulated in a comment exchange in one of the website's galleries. A teenage girl who actively participates in Scratch explained the pragmatic value of automatic attribution, saying, "the 'based on' thingy, it gives a link, and we all luv links, less typing," before reiterating that manual credit is more valuable: at the beginning i thought that you don't have to give credit when the "based on" thingy is in there, but i realized a lot of people don't look at that, and i noticed people confused the remix with the original. Creating a Scratch project is a complicated task. A project's sources can be diverse, and the creator can easily forget to acknowledge some of them, as Paul explains when asked to choose between a system of manual credit or automatic attribution: The thing is, it would be a lot better if they had both. Because, sometimes people probably just forget to do that. And then people would not know. There are also situations where interviewees recognize what Posner calls the "awkwardness of acknowledgment," that is, situations where credit is not really needed and can be an unnecessary burden or go against the aesthetics of the work [20]. For example, Paul mentioned that some projects in Scratch are remixed so much (like the sample projects that come with Scratch or some "remix chains") that credit is not necessary: There's this one called "perfect platformer base" which a lot of people remix. So I don't think that needs any credit. It's not actually a real game. It's all the levels and stuff are just demonstrations. Since manual crediting has a higher emotional value, some kids suggested that conflicts over remixing could be addressed by the administrators of the site editing the notes of the remix in question, as a way to enforce credit without transforming it into attribution. Doing so would make it appear that a remixer had credited an antecedent when they had not.
Susie offers a suggestion along these lines when asked how the administrators of the website should deal with a complaint over a remix that is a parody of someone else's project. Susie suggested: "I might remove the project but I might not, you know, maybe I would edit the notes to give credit." Although not designed to be a random sample, these interviews support the proposition that both Scratch participants and other young people share a set of norms about the characteristics that determine what a "good" or moral remix is. Among these norms, acknowledging one's sources seems to play a central role. However, participants also seem to share the opinion that this norm is not satisfied through an automated process. They clearly understand the pragmatic value of automating acknowledgment-giving, but they do not see it as a substitute for adherence to the social norm of credit-giving. They also see it as devoid of emotion and expressiveness. For Scratch users, normative constraints are separate from architectural constraints, and one cannot replace the other. These findings support and enrich the results from our first study and help us better understand how Scratch participants, and perhaps kids in general, experience authorship norms and automation in online spaces. --- CONCLUSIONS Our results from Study 1a called into question the effectiveness of automatic attribution functionality in encouraging more positive user reactions in Scratch. We built on these results in Study 1b to suggest that manual crediting may do the work that Scratch's designers had hoped automatic attribution would. Results from the analysis of user interviews presented in Study 2 help to answer the question of "why?" and suggest that users find manual credit more authentic and more meaningful because it takes more time and effort. Usually, UI improvements are designed to help reduce the time and effort involved in using a system. But in trying to help users by attributing automatically, Scratch's designers misunderstood the way that attribution worked as a social mechanism for Scratch's users. Our fundamental insight is that while both attribution and credit may be important, they are distinct concepts, and credit is, socially, worth more. A system can attribute the work of a user, but credit, which is seen as much more important by users and which has a greater effect on user behavior, cannot be given automatically. Computers can attribute. Crediting, however, takes a human. As we suggested in our introduction, this fundamental result leads to two distinct contributions. First, and more specifically, our analysis offers an improved understanding of the way that attribution and credit work in user-generated content communities, beyond what has been available in previous work. Our two studies suggest that scholars are correct to argue that credit plays an important role in social media communities, and they offer empirical confirmation of the important role that authenticity plays in how users conceptualize credit. In our in-depth interviews, we explain some of the reasons why this may be the case. Second, through our evaluation of an unsuccessful technological design, our work offers a broader, if more preliminary, contribution in suggesting an important limit to designers' ability to support community norms in social media systems.
As the literature on design and social media grows, the importance of good support for communities with healthy norms promoting positive interactions is likely to increase. In attempting to design for these norms, we suspect that researchers will increasingly encounter similar challenges. We argue that designers should approach interventions iteratively. This design approach can be understood through the theoretical lens of the social construction of technology [19]: designers cannot control technological outcomes, which must be built through a close relationship between designers and users. Designers must move away from seeing their profession as providing solutions. They must channel users, work closely with them, and iterate together to negotiate and achieve a set of shared goals.

The prevalence of user-generated content sites raises the stakes for how online social spaces deal with issues of attribution, and our results are likely to be immediately relevant to designers. For example, the Semantic Clipboard is a tool built as a system of automatic attribution for content reuse [22]. Developed by researchers who found a high degree of Creative Commons license violations around the re-use of Flickr images, the tool is a Firefox plugin that provides "license awareness of Web media" and enables people to automatically "copy [media] along with the appropriate license metadata." Our results suggest one way that this approach may fall short.

However, automatic attribution is not the only way that technologists can design to acknowledge others' contributions. Indeed, our results suggest that there may be gains from design changes that encourage credit-giving without simply automating attribution. For example, Scratch's designers might present users with a metadata field that prompts them to credit others and suggests antecedent authors whose work the system has determined may have played a role. This affordance might remind users to credit others, and might increase the amount of crediting, while maintaining a human role in the process and the extra effort that, our research has suggested, imbues manual credit-giving with its value. We suggest that in other social media communities, similar affordances that help prompt or remind users to do things that a system might do automatically represent a class of increasingly important design patterns and a template for successful design interventions in support of community norms.
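To make the proposed affordance concrete, the sketch below illustrates what a credit-prompting flow might look like. It is a minimal illustration only: the function names, the provenance lookup, and the project data are all hypothetical and are not drawn from Scratch's actual codebase.

```python
# Hypothetical sketch of a credit-prompting affordance of the kind proposed
# above. Nothing here comes from Scratch's real codebase; find_antecedents()
# stands in for whatever remix-lineage lookup a remixing system already keeps.

def find_antecedents(project_id):
    """Stand-in provenance lookup: return (id, title) pairs of antecedents."""
    return [("1234", "perfect platformer base")]  # placeholder data

def suggest_credit(project_id, notes):
    """Prefill a credit line for the author to confirm, edit, or reject.

    The system only suggests; a human still does the crediting, which is
    the design point argued for above.
    """
    antecedents = find_antecedents(project_id)
    if not antecedents:
        return notes
    names = ", ".join(f"{title} (project {pid})" for pid, title in antecedents)
    # A real UI would show this in an editable field rather than appending it.
    return f"{notes}\nCredits (edit me): thanks to {names}"

print(suggest_credit("5678", "Use the arrow keys to move."))
```

The design choice worth noting is that the system never writes the credit itself; it lowers the cost of remembering while leaving the expressive act, and its social value, to the author.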
Africa is home to 54 United Nations member states, each possessing a wealth of ethno-cultural, physiographic, and economic diversity. While Africa is credited with having the youngest population in the world, it also exhibits a unique set of "unfortunate realities" ranging from famine and poverty to volatile politics, conflicts, and diseases. These unfortunate realities all converge around social inequalities in health that are compounded by fragile healthcare systems and a lack of political will by the continent's leaders to improve smart investment and infrastructure planning for the benefit of its people. Noteworthy are the disparities in responsive approaches to crises and emergencies that exist across African governments and institutions. In this context, the present article draws attention to 3 distinct public health emergencies (PHEs) that have occurred in Africa since 2010. We focus on the 2013-2016 Ebola outbreak in Western Africa, the ongoing COVID-19 pandemic, which continues to spread throughout the continent, and the destructive locust swarms that ravaged crops across East Africa in 2020. Our aim is to provide an integrated perspective on how governments and institutions handled these PHEs and how scientific and technological innovation, along with educational responses, played a role in the decision-making process. We conclude by touching on public health policies and strategies to address the development of sustainable healthcare systems with the potential to improve the health and well-being of the African people.
INTRODUCTION

The evidence is clear that public health emergencies (PHEs) can dramatically undermine the substantial gains made in primary health care initiatives (1), with estimates suggesting that each year one out of five World Health Organization (WHO) member states experiences a PHE (2). According to the Model State Emergency Health Powers Act, a PHE can be defined as: "an occurrence or imminent threat of an illness or health condition, caused by bioterrorism, epidemic or pandemic disease, or novel and highly fatal infectious agent or biological toxins, that poses a substantial risk of a significant number of human fatalities or incidents of permanent or long-term disability" (3). The effects of PHEs are further exacerbated within continents comprising fragile states, such as Africa, with inadequate health care systems (4).

The African continent exhibits a unique set of characteristics, including great ethnic diversity (5), distinct physiographic patterns (6), vast mineral wealth (7) and a burgeoning youth population (almost 60% of the continent is aged below 25) (8). Despite these strengths, several countries across the continent continue to experience complex emergencies and health crises ranging from civil conflict and infectious diseases (e.g., HIV/AIDS and malaria) to severe drought and malnutrition. These humanitarian crises place significant strain on personal lives, leading to socioeconomic instability, forced migration, and long-term refugee problems (9,10), which in turn have an adverse effect on attainment of the United Nations (UN) Sustainable Development Goals. Given the inadequate funding for PHE and disaster preparedness in many African countries (11), there remains some degree of uncertainty regarding the ability of the continent's countries to adequately respond to these concerns (12), although the last decade has seen a significant increase in funding for research capacity in Africa (13).

Due to the unpredictable nature of PHEs, it is perhaps unsurprising that PHE preparedness (PHEP) is an inherently complex process that involves a range of prevention, mitigation, and recovery activities that extend beyond merely enabling a response to emergencies (14). Notably, PHEP is conceptualized as comprising 3 broad elements: (1) pre-planned and coordinated rapid-response capability; (2) strengthening expertise and building a fully staffed workforce; and (3) ensuring accountability and quality improvement (14). Here we draw attention to African governments and institutions in relation to the handling of 3 distinct PHEs, namely Ebola, COVID-19, and locust swarms, and how scientific and sustainable technology implementation, along with other effective and innovative responses, played a role in the management of these humanitarian crises.

--- EBOLA OUTBREAK IN WEST-AFRICA

Of the 34 documented Ebola outbreaks that have occurred since the first description of the virus in the Democratic Republic of Congo (DRC) in 1976 (15), the 2013-2016 West Africa epidemic was the largest and most widespread in history, culminating in more than 28,000 cases and over 11,000 deaths (16). Epidemiologists identified the index case of the outbreak in Meliandou, Guinea; fruit bats are believed to have served as a reservoir of the virus and to have been involved in the zoonotic spillover that led to the cascade of contagion behind the high number of Ebola cases and the case fatality rate (CFR) in West Africa (15,17).
An estimated US$ 2.2 billion in gross domestic product (GDP) was lost in 2015 by the 3 most affected countries (Liberia, Sierra Leone, and Guinea) (18). The epidemic also resulted in lower investment and a substantial loss in private sector growth, declining agricultural production that led to concerns about food security, and a decrease in cross-border trade as restrictions on the movement of goods and services increased (19). Another consequence of the virus relates to rural-urban gradients of transmission and population-level beliefs and practices (i.e., shifts in where care was sought) (20)(21)(22), with a pervasive stigmatization of survivors playing a significant role (23). The stigma attached to Ebola has been reported to have led to social inequalities and mental health problems, with a large portion of individuals afflicted by the disease suffering hostility and economic hardship (24,25).

The Ebola outbreak in West Africa also highlighted various barriers to coordinated rapid-response capacity and the need for more robust global health security, particularly in settings with limited public health capacity (16,26). Indeed, the 3 countries most affected by the outbreak exhibited similar characteristics: inadequate financial resources and health care systems (as reflected in low numbers of nurses and doctors), in addition to a scarcity of medicines and personal protective equipment (PPE), each of which represents a unique threat to containing the spread of infectious disease and, in turn, a hurdle to implementation of the International Health Regulations (IHR) (27). Notably, porous borders meant that the Ebola outbreak was not restricted to Guinea, Liberia and Sierra Leone, as cases were also reported in Nigeria, Senegal, and Mali (15,28). The slow recognition of and delayed response to the Ebola outbreak by West African governments exposed defective containment strategies and poor crisis management in the countries worst affected by the virus. Inadequate contact tracing and detection of suspected cases, coupled with poor surveillance and gaps in the community's knowledge about the Ebola virus, contributed to a rampant spread of the disease (29). Unfortunately, during the outbreak, governments in Liberia, Guinea and Sierra Leone failed to communicate effectively with citizens. In Liberia, this resulted in frustrations and riots in the capital city, Monrovia.

--- Improving Pre-planned and Coordinated Rapid-Response Capacity

As the effects of the Ebola virus continued to unfold in West Africa, a key strategy shown to moderate the crippling effects of the epidemic involved an integrated and calibrated response that included: (1) bolstering standardized supportive care of survivors via treatment for the symptoms and complications of Ebola (e.g., mental health and psychosocial support); (2) leveraging and deploying aid through international organizations such as the WHO and Médecins Sans Frontières; (3) funding for emergency Ebola treatment; (4) rapid and accurate Ebola diagnostic testing through platforms such as real-time polymerase chain reaction; (5) scaling-up of national disease surveillance activities (e.g., digital health/apps via mobile devices); (6) a licensed Ebola vaccine (Merck's VSV-ZEBOV vaccine); and (7) focusing on social science and community engagement through aspects of risk perception, tackling vaccine hesitancy, and education as a means to minimize confusion and to empower individuals to adopt preventative behavior (30).
This response strategy is estimated to have taken over a year to implement (31,32), with follow-up visits provided to each survivor every month for a period of 6 months and then every 3 months for a year (30). Six years on from the beginning of the West African Ebola epidemic, the DRC has been grappling with its 12th Ebola outbreak. Active conflict, a severe measles outbreak and insecurity make this epidemic one of the most complex ever encountered (33). However, the Ministry of Health of the DRC has mounted an impressive response strategy, with international support from the United States Centers for Disease Control and Prevention (CDC), the WHO, and Gavi, the Vaccine Alliance. Beyond placing an emphasis on developing local health care systems in the most affected areas (i.e., Kivu and Ituri), a strong aspect of the DRC Ebola response has been applying lessons learned from the outbreak in West Africa (30). These lessons include community engagement, better support of survivors, use of mobile phone data to inform the dynamics of Ebola transmission (via travel patterns and contact tracing) (34), and licensed approval of two vaccines (Merck's single-dose VSV-ZEBOV vaccine and Janssen's two-dose vaccine regimen of Zabdeno and Mvabea), with recent estimates suggesting that more than 300,000 people have been immunized against Ebola through vaccination in the DRC (35). Despite the significance of the current vaccines against Ebola, there remains a need to develop an effective strategy for optimal impact of vaccination (36). In this respect, Coltart et al. emphasize that prophylactic vaccination of health care workers (HCWs) could have a substantial epidemic-reducing effect on the spread of Ebola (37). Other evidence from mathematical and statistical models suggests that engaging HCWs to deliver vaccinations represents both a feasible and effective strategy that may be implemented in a future Ebola outbreak (38,39).

--- Building Expertise and a Fully Staffed Workforce

Several notable initiatives have been developed to enhance capacity (viz., expertise) and build leadership within the local workforce at national and local levels in various countries, as a means of strengthening sustainable PHEP. For instance, the CDC's Surveillance Training for Ebola Preparedness (STEP) initiative was shown to be a successful mentorship and competence-based initiative that collaborated with various local training institutes and organizations to rapidly build the surveillance capacity of district surveillance officers in Mali, Guinea-Bissau, Senegal and Ivory Coast during the Ebola outbreak in West Africa (40). Along with the STEP initiative, the implementation of laboratory capacity-building programs to strengthen bio-risk and quality management systems, diagnostics and facility engineering, and bio-surveillance capacity bolstered emergency preparedness and response (41). Perhaps most notably, infection prevention and control capacity-building programs for HCWs registered positive benefits on the knowledge and practices of HCWs in the fight against the Ebola outbreak in the DRC (42) and West Africa (43). In addition, the deployment of foreign medical workers from the African Union Support to the Ebola Outbreak in West Africa mission and of medical personnel from Cuba played a central role in filling the gap in skilled HCWs, as well as in co-learning for skills development (44).
--- Accountability and Quality Improvement

In West Africa, community monitoring, which involves providing patients with information and enabling a public forum to monitor frontline workers, was beneficial in generating some form of social accountability and trust. An example of an effective community monitoring program was the Liberian government's door-to-door canvassing campaign during the Ebola epidemic (45). Through the Financial Tracking Service (FTS) of the United Nations Office for the Coordination of Humanitarian Affairs (UNOCHA), curated financial data (e.g., funding needs, commitments, pledges and projected funding) on the Ebola virus outbreak were continuously updated and accessible in downloadable format on the UNOCHA website to facilitate accountability and transparency (46).

--- COVID-19 OUTBREAK IN AFRICA

The first case of coronavirus disease 2019 (COVID-19), the disease caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), was reported on the African continent on February 14, 2020 in Egypt, with Sub-Saharan Africa (SSA) detecting its first case in Nigeria on February 27, 2020 (47). In most countries, the initial response to the COVID-19 pandemic was strong and proactive. Despite these measures, many public health experts predicted that the pandemic would severely overwhelm Africa's largely fragile and underfunded health systems. Of the 34 African countries surveyed in the WHO COVID-19 readiness status report, only 10 reported adequate capacity to respond to the epidemic, including adequate PPE for the population (48). The UN Economic Commission for Africa estimated that, in the worst-case scenario, 3.3 million Africans would die from the disease (49). Concerns over the combination of overstretched, underfunded health systems and the existing load of infectious and non-infectious diseases often led to scenarios being discussed in apocalyptic terms.

More than a year into the pandemic, the continent has, however, defied most predictions regarding the spread of the virus. The health and social measures implemented by most countries to contain the COVID-19 epidemic are likely to have slowed the spread of the virus, and the number of confirmed cases and deaths in Africa remained lower than initially forecast. As of October 18, 2021, confirmed cases of COVID-19 from 55 African countries had reached 8.4 million, with a CFR of 2.6% (i.e., 215,784 deaths) (50). By early August 2021, it was estimated that only 3.5% of global COVID-19 cases and 4.1% of global COVID-19 related deaths were from Africa (50,51), a continent that accounts for 17% of the global population (52). Nevertheless, the magnitude of the challenge and the continent's underlying vulnerabilities should never be underestimated. The weak PHE management systems in most countries have made it difficult to discern accurate transmission, hospitalization and mortality rates (53). For example, the continent's testing rate is currently one of the lowest in the world; therefore, the full scope of the pandemic remains uncertain. In addition, several countries are experiencing a second wave of the pandemic and some, such as Kenya, Egypt and Tunisia, have seen a third wave (54). This new wave of infections is thought to be associated with the emergence of variants that are more transmissible. Unfortunately, only a few countries have the capacity to carry out the specialized genomic sequencing required to detect coronavirus variants.
Further, the health and economic shocks occasioned by the pandemic threaten to wipe out decades of economic progress and development gains in Africa. These risks put some countries on an unsustainable debt path (55). The pandemic has also laid bare structural shortcomings such as inadequate health, educational and technological infrastructure, limited social protection, gender inequality, large informal economies, lack of access to basic services, and constrained fiscal policy space (56). For example, the contraction in per capita GDP growth caused by the pandemic may have pushed an additional 26.2 million to 40 million (i.e., 2-3%) people into extreme poverty in SSA by the end of 2020 (57). Fighting a pandemic and its economic aftershocks requires enormous amounts of money. In higher-income countries, governments have stepped forward with trillions in economic stimulus packages, but most developing countries do not have the money to cover the full costs of this pandemic.

--- Improving Pre-planned and Coordinated Rapid-Response Capacity

To forestall the COVID-19 health and economic crisis, most African countries developed response plans. Specifically, most African governments rapidly implemented public health and social measures to contain the pandemic, including closing borders, mandatory general lockdowns, physical distancing measures, and establishing centers for quarantining of cases (58). Response plans have also centered on four main areas simultaneously: (1) saving lives; (2) protecting poor and vulnerable citizens and responding to the impact on their livelihoods; (3) protecting and creating jobs through support to the private sector; and (4) building back better systems (59). For example, to save lives, several African countries focused on intensive surveillance and case-finding, leveraging the Integrated Disease Surveillance and Response framework (IDSR) (60). The Partnership to Accelerate COVID-19 Testing (PACT) Initiative, for instance, was launched by the African Union Commission and the Africa CDC to boost and coordinate procurement and supply chains for medical supplies and to support protracted testing for COVID-19 within the African setting (61). The continent has also been able to draw on previous experience in dealing with PHEs, such as the Ebola crisis, to make better decisions on public health and social measures. For example, several countries focused their response efforts on community engagement, risk communication, and locally adapted innovations in tracing, treatment and isolation (56).

With the vaccine roll-out underway in many African countries, ensuring an adequate supply of vaccines is a priority for the region. Countries have mainly accessed vaccines through the COVAX Facility, bilateral deals, and donations. Nonetheless, concerns regarding disparities in vaccine access and distribution remain widespread (62). Many developed countries have displayed a very high degree of "vaccine nationalism," locking up most supplies and prioritizing the vaccination of their entire populations before releasing surpluses to protect even the most vulnerable populations in low- and middle-income countries (LAMICs) (63). As of October 16, 2021, Africa had administered 12.5 doses of COVID-19 vaccines per 100 people, a vaccination rate far slower than the world average of 84.5 doses per 100 people on the same date (64).
Further, while concerted global efforts are working to accelerate equitable access, vaccine hesitancy, driven in part by a trust deficit between communities and the actors leading vaccine rollout, risks prolonging the pandemic and its secondary waves of conflict and economic devastation.

--- Building Expertise and a Fully Staffed Workforce

Across the African continent, HCWs are boosting their emergency response skills in tackling COVID-19, for example through virtual and in-person trainings organized by Ministries of Health and health organizations or research institutions (65). The PACT initiative, for example, also supports the training and deployment of one million community HCWs to assist contact tracing within the African setting (61). Government Ministries of Health have learned to harmonize research activities through leveraging the research laboratory capacity (both personnel and equipment) of academic research institutions and other in-country laboratories for community COVID-19 testing (66), as well as building foreign/international research partnerships to improve testing or medical product development capacity (67). It follows that efforts to build research capacity to conduct good-quality collaborative international COVID-19 vaccine trials in Africa will allow for better protection against this devastating infectious disease (68,69).

--- Accountability and Quality Improvement

With regard to the economic response, Africa's fiscal realities limit what most countries can do to alleviate pressures on citizens (56). Several countries have undertaken measures to address the economic fallout of the pandemic. For example, some countries announced remedial fiscal and monetary measures, as well as food distribution and financial support to the most vulnerable groups. However, less has been done across countries to cushion against lost income and export earnings, dwindling remittances, and decreased government revenue. In addition, relatively few countries have articulated initiatives to mitigate the socio-economic impacts of COVID-19 in the long term. Therefore, the road to recovery will be long and will vary significantly across countries (48).

Most African countries continue to rely on foreign aid in response to the impact of the pandemic. Since the start of the pandemic in March 2020, the World Bank has made available nearly US$ 24.7 billion to respond to the COVID-19 crisis through a combination of new operations in health, social protection, economic stimulus and other sectors, as well as redeployment of existing resources (70). Several African countries have also received foreign assistance from bilateral partners to help prevent, detect, and respond to the COVID-19 pandemic and strengthen their public health preparedness. For instance, France mobilized €1.2 billion to fight the spread of COVID-19 in the most vulnerable countries, most of which are in Africa (71). As with Ebola, the FTS of the UNOCHA has curated financial data on COVID-19 emergency funds as part of the Global Humanitarian Response Plan, continuously updated and accessible through the UNOCHA website (72). Initiatives, for example by the African Union, have enabled increased dialogue and opportunities for learning and sharing among government officials, audit institutions, procurement oversight bodies, and civil society organizations on the African continent in relation to innovative accountability mechanisms and crisis budget support operations (73).
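Because FTS data are published programmatically as well as on the website, funding flows for a given emergency can be summarized with a short script. The sketch below is illustrative only: the endpoint path, query parameter, and response fields follow FTS's public API conventions but should be treated as assumptions, and the plan id is a placeholder rather than a real reference.

```python
# Illustrative sketch of summarizing humanitarian funding flows from UNOCHA's
# FTS. The endpoint, parameters, and response fields below are assumptions for
# illustration; consult the FTS API documentation for the real interface.
import requests

FTS_FLOWS_URL = "https://api.fts.unocha.org/v1/public/fts/flow"  # assumed URL

def total_reported_funding_usd(plan_id):
    """Sum reported funding flows (in US$) for one response plan."""
    resp = requests.get(FTS_FLOWS_URL, params={"planId": plan_id}, timeout=30)
    resp.raise_for_status()
    flows = resp.json().get("data", {}).get("flows", [])
    return sum(flow.get("amountUSD", 0) for flow in flows)

if __name__ == "__main__":
    PLAN_ID = 952  # placeholder plan id, not a real reference
    print(f"Reported funding: US$ {total_reported_funding_usd(PLAN_ID):,}")
```

The point of such scripts in an accountability context is reproducibility: anyone, including civil society monitors, can recompute headline funding totals from the same curated data the agencies publish.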
--- LOCUST SWARMS IN EAST-AFRICA

The impacts of the PHEs presented above are devastating enough without the additional social and economic dislocation caused by non-disease outbreaks. In 2020, East Africa faced just such a situation when a surge of desert locusts invaded the Horn of Africa. In order to fast-track an effective response to the attack, on January 17, 2020, the UN's highest level of emergency (L3 protocols) was activated by the Director-General of the Food and Agriculture Organization (FAO) (74). Beyond widespread hatching and band and swarm formation in northeast Ethiopia (57,450 hectares had been treated), immature swarms prevailed in Somalia (17,477 hectares) and, to a lesser extent, in northwest Kenya (2,100 hectares) (75). Desert locust infestation was also reported in 24 districts in Uganda around a similar timeframe (76). In Kenya, the 2020 desert locust invasion is considered the worst in 70 years (77).

The combination of the ongoing COVID-19 pandemic and a desert locust outbreak has exerted an enormous economic toll and an even greater burden on the health systems in East Africa. The meager financial resources of East African governments, which would otherwise have been fully vested in COVID-19 programmes, had to be rationed so that some could be used to combat desert locusts, and this called for more borrowing. For instance, more than US$ 160 million was loaned to the East African countries of Kenya, Uganda, Djibouti and Ethiopia by the World Bank to combat desert locusts (78), and yet additional loans for COVID-19 had also been secured by some of these governments [for instance, US$ 1 billion for Kenya (79) and more than US$ 15 million for Uganda (80)], which further deepens their debt crisis.

--- Improving Pre-planned and Coordinated Rapid-Response Capacity

Through support from the World Bank, national response programs, i.e., the Uganda Emergency Desert Locust Response Project (81), the Kenya emergency locust response program (82) and the Ethiopia Emergency Locust Response Project (83), were set up in early 2020. As part of the commitment plans, actions were stipulated through which desert locust control programs would be implemented in accordance with social and environmental standards, for example: environmental and social assessment of risks arising from the projects, occupational health and safety measures, and pollution prevention and management strategies (81)(82)(83). These programs were also set up with in-country coordination plans. For instance, in Kenya, a multi-institutional technical team on desert locusts was established to coordinate policy and technical advisory work on desert locust management; it was tasked with activities such as providing advisory support to county administrations and other stakeholders, planning the collection and collation of technical information, and building capacity among stakeholders on integrated desert locust management (77). Surveys of terrain, state of habitat and locust populations were performed to inform policy and decision-making (83). Strengthening of existing systems to combat future outbreaks, for example the Locust Control Unit within the Plant Protection Service Division of Kenya's Ministry of Agriculture, was among the strategic aims of the funding from the World Bank (77).
FAO encouraged country-level partners to record and transmit desert locust related surveillance data to ministerial organizations (such as the Ministries of Agriculture) so that this essential information is included and utilized in FAO's Desert Locust Information Service (DLIS). In each country, a Locust Information Officer is responsible for collating, analyzing and transmitting these data to DLIS (84). In turn, the DLIS analyses the data and keeps countries informed of the current situation and expected developments by providing a forecast up to 6 weeks in advance (84). Data sharing for improved monitoring of desert locusts is also being boosted through mobile-phone based surveillance technology such as eLocust3m and other platforms like the centralized Desert Hub platform (74).

According to FAO, the primary method for controlling the 2020 desert locust swarms and hopper bands is organophosphate chemicals, delivered by vehicle-mounted and aerial sprayers and by knapsack or hand-held sprayers (85). The main strategy involves targeting breeding grounds and controlling the hopper bands while still at the nymph stage, that is, before they can fly (77). More recently, test drones equipped with mapping sensors and atomizers have been deployed to spray pesticides to tackle the desert locust swarms in East Africa. Governments and donor agencies (e.g., FAO and the World Bank) have ensured that disaster recovery relief is provided, including inputs such as seed, fertilizer and pesticides to selected farmers faced with hardship, as well as fodder seed to affected communities to restore lost pastures, emergency food security mechanisms and direct cash transfers (76,83,86). In Uganda, for example, as part of the World Bank's US$ 48 million loan, funds were set aside to boost existing savings and investment platforms/groups at village level through a Village Revolving Fund and seasonal income transfers to the vulnerable (76).

--- Building Expertise and a Fully Staffed Workforce

Capacity building of in-country human resources was conducted. For example, the FAO facilitated training of National Youth Service trainees as part of boosting the Government of Kenya's ground surveillance for desert locusts (85). Governments also mobilized and trained communities to establish locust surveillance systems at community, district and national levels so as to ensure the sustainability of mapping, monitoring and surveillance (76). By the start of the desert locust disaster, FAO was already coordinating with over 100 NGOs in Ethiopia in using and building capacity for the use of eLocust3m (86).

--- Accountability and Quality Improvement

The FTS of the UNOCHA has played an integral role in enabling timely access to financial data on humanitarian funding flows for the desert locust response across East Africa and the Horn of Africa (87). As part of quality improvement, in terms of the potential unintended negative consequences of pesticide use, initiatives were put in place to monitor and assess the environmental and human health risks attributable to their use (76,77).

--- IMPLICATIONS FOR POLICY AND RESEARCH

Evidence-informed policy and decision-making is crucial for an ethical and sustainable response to PHEs. The generation and translation of evidence to inform policy and decision-making is often seen as a race against time, with the quality, depth and conciseness of available evidence directly affecting policy decision-making processes (88).
Notable in this regard are rapid assessment tools, developed as a public health approach to speed up, and bring together, the processes of evidence-based decision-making during crisis management of PHEs (89,90). The present paper emphasizes the necessity of utilizing the rapid assessment approach in a more collaborative and engaging manner as a means of facilitating dialogue between decision-makers and other stakeholders (including scientists and communities) in relation to programme planning and interventions, which, in turn, enables PHEP and responses "in-the-now" (91). Indeed, rapid assessment tools have been applied successfully around the world to generate evidence for decision-making in the management of a variety of PHEs, including HIV/AIDS (92), forced displacement due to conflict (93), natural disasters (90) and, more recently, COVID-19 (94). Such tools are vital for identifying and addressing context-specific issues, acting as a guide for resource allocation, and providing key information in relation to response planning and implementation, as evidenced during the 2013-2016 Ebola epidemic in West Africa (95).

Each of the PHEs described in our paper offers important lessons. Notably, comprehensive and reliable data generated through well-designed and well-executed research (e.g., real-time epidemic forecasting and disease surveillance through administrative data systems) will prove important in resolving key research questions and addressing existing knowledge gaps. It follows that any research during PHEs should only be conducted if it has high social value (i.e., it provides information to support the immediate response, either through evidence to assist the decision-making process or through targeted interventions aimed at minimizing the magnitude of the harm suffered by a population) (96). Importantly, the knowledge generated through research in anticipation of, during, and after a PHE is vital to building future capacity to better achieve the goals of preparedness and response: preventing illness, injury, disability, and death and supporting recovery (97). However, conducting research in PHE settings often presents a number of challenges, including an inability to access affected people, insecure settings and a lack of research infrastructure (e.g., underdeveloped oversight and regulatory bodies in host countries) (98)(99)(100).

In recent years, digital health technologies have been harnessed as a means of data collection in a variety of research settings, including PHEs (101). Specifically, these technologies improve the quality and efficiency of research studies via automated data capture and improved data traceability, reliability and provenance (102). Additionally, digital technologies can improve study transparency, security, informed consent, handling of confidential patient information and data sharing (102). That said, challenges exist in relation to data sharing mechanisms, as well as the technical and legal ability to protect intellectual property (e.g., inventions/innovations and research publications), particularly in the context of LAMICs.
Some solutions for improving research in the PHE context include appointing a coordinator for scientific research, a role that involves coordinating the research process, identifying mechanisms and rapid funding schemes to support research, enlisting existing research networks in order to coordinate and accelerate research efforts (e.g., for data collection), and establishing a centralized institutional review board to provide timely reviews of multiagency studies (97). In line with the lessons learned from our paper, Khan and colleagues (103) identified a number of important considerations for enhancing research, policy and practice related to preparedness and response to PHEs, which include governance and leadership, community engagement, risk analysis, surveillance and monitoring, resources, investment in systems strengthening and capacity building, communication, research and learning, and evaluation (103). Each of these factors is described in further detail, in relation to its significance in policy decision-making processes, in Table 1.

--- CONCLUSIONS

Public health emergencies in the African region continue to exert an enormous toll on people's livelihoods, with some PHEs characterized by excessive mortality and morbidity rates, often testing collective resilience. Unfortunately, significant challenges surround coordinated rapid-response capacity, staffing and capacity building, and quality improvement with respect to most PHE response efforts on the African continent, which points to systemic fragility. Compounding matters further, PHEs and some of their secondary effects are bi-directional, and these secondary effects, such as distrust of health authorities and disruption to health services, make it harder to combat these humanitarian crises (104,105). Notwithstanding, vital lessons have been learned from previous PHEs, and African governments, institutions and partners have devised various initiatives toward more holistic PHEP.

In light of the PHEs discussed in this paper, namely Ebola, COVID-19 and locust swarms, initiatives to strengthen pre-planned and coordinated response have included: containment measures (e.g., social distancing and border restrictions); building local and international collaborations to leverage expertise, international aid, and other resources; scaling-up surveillance and monitoring activities; leveraging initiatives like COVAX to ensure vaccine roll-out and supply; management and treatment of survivors; social protection programs against shocks to livelihoods; and community engagement. To develop expertise and a well-staffed workforce, various training programs as part of capacity building (e.g., in surveillance, laboratory work, infection prevention and control, and data management), coupled with mentorship and leadership training, have been found to be beneficial. Foreign worker deployment has also been critical, especially in relation to the Ebola virus outbreak. Lastly, as a means of improving accountability and quality improvement in PHEP, African governments and institutions have utilized the Financial Tracking Service of the UNOCHA to monitor humanitarian financial assistance and commitments. Dialogue between governments and institutions on innovative accountability mechanisms and crisis budget support operations has strengthened cross-learning on best practices for accountability. Some countries have also devised community monitoring approaches to improve trust and monitor the quality of services during PHEs.
Overall, the adoption of system-wide approaches, matched by the scaling-up of innovations to achieve impact, may prove effective in better dealing with the negative outcomes of complex PHEs on the African continent. There will also be a need to refine policies on leadership relating to PHEP and response, in conjunction with policies that focus on strengthening national and technical capacities that align with the IHR, as a means of accelerating progress toward universal health coverage.

--- DATA AVAILABILITY STATEMENT

The original contributions presented in the study are included in the article/supplementary material; further inquiries can be directed to the corresponding authors.

--- AUTHOR CONTRIBUTIONS

All of the authors substantially contributed to the conception and drafting of this manuscript.

--- Conflict of Interest: The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher's Note: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
The purpose of this study was to examine ethnic disparities in the utilization of digital healthcare services (DHS) in Israel and to explore the characteristics and factors influencing DHS use among the Arab minority and Jewish majority populations. Methods: A cross-sectional correlational design was employed to collect data from 606 Israeli participants: 445 Jews and 161 Arabs. Participants completed a digital questionnaire that assessed DHS utilization, digital health literacy, attitudes towards DHS, and demographic variables. The findings reveal significant disparities in DHS utilization and attitudes between these ethnic groups, with Jewish participants demonstrating higher rates of utilization and more positive attitudes toward DHS. The study also explores the predictive role of digital health literacy and attitudes in DHS use while considering ethnicity as a potential moderator. Significant predictors of DHS utilization among Jews include positive attitudes and high digital health literacy; among Arabs, only attitudes towards DHS significantly predict the extent of DHS use. Digital health literacy affects the extent of use through attitudes significantly in both groups of the moderator, but the effect is stronger in the Arab group. Conclusion: To improve healthcare outcomes and reduce disparities, efforts should focus on ensuring equitable access to DHS for the Arab minority population. Targeted interventions, including digital literacy education, removing technology access barriers, offering services in Arabic, and collaborating with community organizations, can help bridge the gap and promote equal utilization of DHS.
Introduction

Digital healthcare services (DHS) have become an integral part of the health services provided by healthcare organizations in Israel and around the world, in what has been called a "virtual service revolution". 1 They are part of the technological revolution unfolding in many fields and offer solutions to the growing burdens on health systems. In addition, DHS aim to narrow the gap between the demand for available, accessible services and the shortage of resources. 2 DHS can support health systems in delivering more health care, promoting health, and preventing disease. 3,4 Such services have also been effective in hospitals, where they can reduce demand for (in-house) consultations, medical procedures, and unnecessary hospitalizations and improve postoperative monitoring of patients. 5,6 DHS can also be beneficial for individuals and patients with chronic diseases, supporting self-management and preventive behaviors related to chronic disease. 3,7 In recent years, the tendency to rely on technology in the field of healthcare services has been expanding. 8,9 DHS include a wide range of services, such as mobile applications of digital information technologies and more. In Israel, the four health maintenance organizations (HMOs) began developing such services many years ago, and these became essential after the outbreak of the coronavirus. They include websites, consultations with various specialist physicians, maternity care, services for receiving prescriptions and information about pharmacies, administrative services, and more.

The utilization of DHS has been lower among certain groups based on various factors, including advanced age, male gender, lower levels of education and income, and a disadvantaged socioeconomic background. [10][11][12][13] Obstacles to DHS utilization can also arise from the breakdown or interference of established resources or systems. 14 Huxley et al 15 in their review refer to barriers among marginalized groups (itinerant populations such as refugees, homeless people, and unemployed people) compared to the general population. The review revealed that marginalized groups reported access difficulties and stigmatizing reactions from health professionals and other patients. A previous review of qualitative, quantitative, and mixed-methods studies 16 showed that eHealth can widen the gap between those at risk of social health inequalities and the rest of the population. Ethnicity and low income were the characteristics most commonly used to identify people at risk of social health inequality.

Norman & Skinner 17 found that high levels of health literacy in general, and digital health literacy in particular, are needed for the utilization of DHS. Health literacy is defined as "the degree to which individuals can obtain, process, understand, and communicate about health-related information needed to make informed health decisions" (p. 16). 18 Digital health literacy, in addition, requires further skills to obtain online health information. 17,19 Previous research indicated that higher levels of digital health literacy are related to better health, healthy behaviors, and increased knowledge regarding the management of chronic diseases. 20 Levels of digital health literacy were low among disadvantaged population groups with low socioeconomic status. A literature review and meta-analysis 21 found that limited access to infrastructure and low levels of education were the main factors behind this.
Digital health literacy is now a crucial means of access: its absence goes beyond restricting access to information and can amount to the denial of actual healthcare services. It is imperative to recognize that in the third millennium, digital literacy has evolved beyond mere technological expertise and has become a tool that empowers individuals to access various services, including healthcare, on an equal footing. Other studies also report that the most powerful predictors of not using information technology among older adults are cognitive decline associated with aging processes and attitudes such as anxiety about computer use and the perception that the technology is not useful for them. [22][23][24]

Numerous models have been created to explore and comprehend the factors that influence the acceptance of computer technology. The technology acceptance model (TAM) proposed by Davis 25 is one such theory. It proposes that the effectiveness of a system can be determined by user acceptance, which is affected by three elements: perceived usefulness, perceived ease of use, and attitudes towards usage of the system. These theoretical frameworks are utilized to examine user acceptance, adoption, and usage behavior.

The Ministry of Health in Israel declared in 2017 a policy to encourage the use of digital services in order to improve the quality of care, 2 initiating a "National program for digital health". Even so, surveys show significant disparities between groups in utilizing DHS, with low rates found among the Arab minority in Israel. 2 Previous research from other countries has also found low rates of DHS utilization among minority populations. 1,26

This research focused on the Arab minority living in Israel, who constitute about 21% of the population. 27 Almost 50% live in the northern region, 10% in the central region, 20% in Haifa, and 20% in the southern part of the country. The Arab community is characterized by low socioeconomic status and higher health disparities. 28,29 Little is known about DHS utilization in the Arab community. Recently published data (in Hebrew) from Laron et al 30 found that more than 90% of Arabs use the internet and have smartphones; 60% of them reported that they use telehealth services only to schedule a doctor's appointment. Two-thirds did use the health plan's application. The main barrier to using such services was a lack of awareness about DHS, while previous acquaintance with the doctor and services in Arabic were facilitating factors. There was a significant correlation between education level and the utilization of telehealth for written communication with a known healthcare professional. The authors concluded that even though a high percentage of Arabs have access to the internet, usage of DHS is still limited.

Thus, this research aims to deepen knowledge about the characteristics of and barriers to DHS use in the Arab community compared to the Jewish community in Israel, to examine the general model of literacy, attitudes, and usage, and to investigate the impact of ethnicity on individuals' patterns of use. The research hypotheses are:

2. There will be a positive relationship between digital health literacy and the extent of use of DHS in the two ethnic groups.
3. There will be a positive relationship between attitudes towards DHS and the extent of use of DHS in the two ethnic groups.
4. Attitudes towards DHS will mediate the relationship between digital health literacy and the extent of use of DHS.
--- Materials and Methods

--- Study Design

This study used a cross-sectional correlational design. An online survey was conducted between 13 September and 1 October 2022 using a closed digital questionnaire. The questionnaire was administered via a well-known survey institute to a panel of volunteer respondents, yielding a sample of 609 Israeli citizens. The survey was conducted in Hebrew and Arabic. Participation in the survey was voluntary, and participants were not offered any compensation.

--- Participants and Data Collection

A representative sample of 609 subjects participated in the study: 165 from the central region, 123 from Tel Aviv, 88 from the northern region, 83 from Haifa, 80 from the southern region, 45 from Jerusalem, and 23 from the Judea and Samaria region. The inclusion criteria were Israeli adults, both Arabs and Jews. It should be noted that the Arab population in Israel is overrepresented in this study, and the sample represents the distribution of the insured in the various health maintenance organizations. Participants were first informed about the purpose of the study, and participant confidentiality was maintained. They were informed about the option to refuse to complete the questionnaire or to stop filling it out at any time without any consequence to themselves. They then gave their informed consent.

--- Variables and Measurements

--- Demographic Characteristics

The following demographic data were collected: gender, year of birth, place of residence, marital status, number of children, religion, level of religiousness, occupation, education, financial status, health status, and membership in a health fund.

--- The Extent of Use of Digital Healthcare Services (DHS) Questionnaire

The extent-of-use questionnaire was based on a questionnaire developed by Even-Zohar et al 31 and included eight digital healthcare services, such as scheduling appointments and viewing test results. To validate the questionnaire and adapt it to the research purpose, it was forwarded to three experts who were asked about the degree of relevance of each item. In light of the experts' comments, one item was omitted from the questionnaire and 3 new items were added. The final questionnaire included 10 items. The participants were asked to mark the frequency of use of each service on a 6-level scale: 0 - not familiar, 1 - familiar but never used, 2 - seldom, 3 - sometimes, 4 - in most cases, and 5 - whenever necessary. For data processing, one average was calculated for the extent-of-use scale, with a high score indicating a greater extent of use of DHS. The questionnaire's internal reliability (Cronbach's alpha) was α=0.87 (see the computational sketch below).

--- Attitudes Towards the Use of DHS Questionnaire

The questionnaire was designed for the present study. It consists of 6 positive attitude items towards DHS, for example, "The digital services allow me to perform actions quickly", and 4 negative attitude items, for example, "It is difficult for me to use the digital services". The degree of agreement with each item is measured on a Likert scale from 1 - do not agree at all to 5 - agree to a very large extent. To validate the questionnaire, it was passed to three experts who were asked about the degree of relevance of each item. Considering the experts' comments, the wording of three of the items was corrected and 2 new (inverted) items were added. The final questionnaire included 12 items.
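The internal-reliability coefficients quoted for these scales (α = 0.87 here, and α = 0.86 and α = 0.91 below) can be computed directly from an item-response matrix. The following is a minimal sketch using simulated responses, not the study's data:

```python
# Cronbach's alpha from an items matrix (rows = respondents, cols = items).
# The responses below are simulated for illustration, not the study's data.
import numpy as np

def cronbach_alpha(items):
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of items
    item_vars = items.var(axis=0, ddof=1)       # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the scale total
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
trait = rng.normal(size=(200, 1))                      # shared latent trait
items = trait + rng.normal(scale=1.0, size=(200, 10))  # 10 noisy Likert-like items
print(f"alpha = {cronbach_alpha(items):.2f}")
```

By convention, values around 0.8 and above are read as good internal consistency, in line with the coefficients reported for the three scales here.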
For data processing, one average was calculated for the attitudes scale (after reversing the negative items), with a high score indicating more positive attitudes toward DHS. The internal reliability (Cronbach's alpha) of the questionnaire was α=0.86.

--- Digital Health Literacy Questionnaire

The questionnaire is based on Norman and Skinner's 17 research questionnaire. It included eight items measuring knowledge and skill in locating, evaluating, and applying health information from digital sources, for example: "I know where to find effective health information on the Internet". For each item, there are five answer options, from 1 - do not agree at all to 5 - agree to a large extent. The Arabic version of the questionnaire was translated and validated by Wångdahl et al 32 and high internal reliability was found (0.92). For data processing, one average was calculated for the digital health literacy scale, with a high score indicating higher digital health literacy. In the present study, the internal reliability (Cronbach's alpha) was α=0.91.

--- Data Analysis

Analyses were conducted using IBM SPSS Statistics 25.0. The analysis was calculated on 606 responses. The missing values amounted to less than 0.02% and were not replaced. Cronbach's α coefficient was measured to verify the reliability of the measurement tools used in the study. Group comparisons were performed using the t-test for continuous variables and the χ2 test for categorical variables. To compare the means of the research variables between the ethnic groups and genders, we used 2×2 ANOVAs. Correlations between the study variables were analyzed using Pearson correlations. We used the Fisher r-to-z transformation to compare the correlations between the two ethnic groups (a worked example appears after the correlation results below). We conducted a hierarchical regression analysis to test the contribution of all relationship variables to predicting the extent of use of DHS. Finally, we conducted an analysis using the PROCESS macro for SPSS (model 7) to examine the moderated mediation model for predicting the extent of use of DHS. 33 A 95% confidence interval (CI) was calculated for each regression coefficient included in the model. The moderated mediation approach utilizes a bootstrap test, for which we generated 5000 samples, to produce 95% confidence intervals, which indicate a significant indirect effect if they do not include 0. 34

--- Results

--- Participant Demographic Characteristics

The study included 606 participants from two ethnic groups: 445 Jews and 161 Arabs. Table 1 shows significant differences in gender, religiousness, income level, number of children and health condition between the two ethnic groups. Among the Jews, 55.3% were women and 60.9% were married; their ages ranged from 20 to 84 years (M = 43.9, SD = 16.4), and they were parents to an average of 1.9 children (SD=1.8). Most of the Jews reported that they were secular or traditional (83.1%). Moreover, more than half of them have an academic education (55.5%), most are salaried employees (70.9%), and almost half have a lower-than-average level of income (41.6%). Most of the Jewish participants reported being in good or very good health (90.3%), while only 28.8% have a chronic disease. Among the Arabs, 38.5% were women and 72.0% were married; their ages ranged from 21 to 69 years (M = 41.7, SD = 11.9), and they were parents to an average of 2.5 children (SD=2.1). Most of the Arabs reported that they are secular or traditional (62.1%).
Additionally, half of them hold an academic education (50.3%), most are salaried employees (71.9%), and most have lower-than-average levels of income (82.6%). Finally, most of the Arabs reported being in good or very good health (74.5%), with only 28.0% having a chronic disease.

--- Differences in the Research Variables Between the Ethnic Groups and Gender

A 2×2 ANOVA was performed in order to compare the means of the research variables between Jews and Arabs and between women and men. Significant differences were found in the extent of use of DHS, attitudes towards DHS, and digital health literacy between Jews and Arabs. In addition, there was no significant main effect of gender for any of the research variables. Finally, there was no significant interaction effect between gender and ethnicity in any of the measures. Table 3 shows χ2 tests examining the differences between Jews and Arabs in each item of the extent of DHS utilization. The table shows that, except for using an online pharmacy (item 10), Jews use all operations more frequently than Arabs do.

--- Relationship Among the Research Variables in the Two Ethnic Groups

Correlations between the research variables (extent of use, attitudes, and digital literacy) were explored and are reported in Table 4. As can be seen, in both groups there was a significant positive correlation between the extent of use of DHS and attitudes towards DHS (r=0.39 for Jews and r=0.34 for Arabs). A Fisher Z test found no significant difference between these correlations (z=0.62, p>0.05). In addition, a significant positive correlation was found between the extent of use of DHS and digital health literacy in both groups; although the correlation is stronger among the Jews (r=0.41) than among the Arabs (r=0.28), a Fisher Z test found no significant difference between these correlations (z=1.60, p>0.05). In contrast, the correlation between digital health literacy and attitudes towards DHS is significantly stronger among Arabs (r=0.53) than Jews (r=0.34); here the Fisher Z test found a significant difference (z = -2.55, p < 0.01). (These comparisons are reproduced in the sketch below.)

--- Hierarchical Linear Regression Analysis for Predicting Extent of Use of DHS

Hierarchical linear regression was used to predict the extent of use of DHS in the two ethnic groups. Step one included demographic variables: gender (1 - woman, 0 - man), religiousness (1 - religious/very religious, 0 - secular/traditional), income (1 - average or above, 0 - below average), health condition (1 - good, 0 - not good), and number of children. Step two included the following variables: attitudes toward DHS and digital health literacy. The results of these analyses are presented in Table 5. In step one, income was found to be a significant predictor of the extent of use of DHS among the Jews, whereby Jewish subjects with an average income and above use digital health services to a greater extent. Among the Arabs, the demographic variables did not significantly predict the use of DHS. Combined, the demographic variables explained 5% of the variance in the extent of use among the Jews and 3% among the Arabs. In step two, attitudes towards DHS and digital health literacy were significant predictors of the extent of use among the Jews: the more positive the attitudes and the higher the literacy, the greater the extent of use of DHS. These research variables added a further 20% to the explained variance. Among the Arabs, only attitudes towards DHS were found to be a significant predictor of the extent of use of DHS. Step 2 added a further 14% to the explained variance.
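The Fisher r-to-z comparisons reported above are easy to verify outside SPSS. The sketch below is a self-contained check, not the study's analysis code; it plugs in the correlations from Table 4 and the two group sizes (445 Jews, 161 Arabs) and reproduces the reported z values:

```python
# Fisher r-to-z test for comparing two independent correlations.
# A minimal check of the comparisons reported above, not the study's SPSS code.
import math

def fisher_z(r):
    # Fisher transformation: z = 0.5 * ln((1+r)/(1-r)) = arctanh(r)
    return 0.5 * math.log((1 + r) / (1 - r))

def compare_correlations(r1, n1, r2, n2):
    # z statistic for H0: rho1 == rho2 (two independent samples)
    se = math.sqrt(1 / (n1 - 3) + 1 / (n2 - 3))
    return (fisher_z(r1) - fisher_z(r2)) / se

N_JEWS, N_ARABS = 445, 161  # group sizes reported in the Results

# (r among Jews, r among Arabs) for each pair of variables in Table 4
comparisons = {
    "use vs. attitudes":        (0.39, 0.34),
    "use vs. digital literacy": (0.41, 0.28),
    "literacy vs. attitudes":   (0.34, 0.53),
}
for label, (r_jews, r_arabs) in comparisons.items():
    z = compare_correlations(r_jews, N_JEWS, r_arabs, N_ARABS)
    print(f"{label}: z = {z:.2f}")
# Prints z = 0.62, z = 1.60, and z = -2.55, matching the values reported above.
```

Only the third comparison exceeds the conventional two-tailed critical value of about 1.96, which is why only the literacy-attitudes correlation is reported as differing significantly between the groups.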
To test the moderated mediation model for predicting the extent of use of DHS, linear regression was used with Model 7 of the PROCESS macro. 33 The independent variable was digital health literacy, and the mediating variable was attitudes toward DHS. Ethnicity was chosen as the moderator of the relationship between digital health literacy and attitudes towards DHS, since a significant difference was found between Jews and Arabs in the correlation between these two variables. The results of this analysis are presented in Figure 1. The bootstrap analysis (5000 samples) 34 was used in PROCESS 3.0 Model 7, 33 and the results are presented in Table 6. As can be seen in Table 6, the interaction between digital health literacy (X) and ethnicity (W) is significant: the effect of digital literacy (X) on attitudes (M) was stronger for Arabs (b=0.46) than for Jews (b=0.29). In addition, the index of moderated mediation (IMM) indicated that ethnicity (W) moderated the indirect effect of digital health literacy (X) on the extent of use (Y) through attitudes (M). The indirect effect of digital health literacy on the extent of use through attitudes was significant in both groups of the moderator, but stronger in the Arab group. This finding suggests that the effect of digital health literacy on the extent of use via attitudes was moderated by ethnicity.
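PROCESS model 7 itself is Hayes' SPSS/SAS macro. As a minimal sketch of the logic behind it (first-stage moderated mediation with a bootstrapped index of moderated mediation), continuing the hypothetical DataFrame from the previous sketch:

# Minimal sketch of the PROCESS model 7 logic, not Hayes' macro itself.
# Continues the synthetic `df` above; the ethnicity column is invented.
import numpy as np
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
df["ethnicity"] = rng.integers(0, 2, len(df))  # hypothetical: 1 = Arab, 0 = Jew

def indirect_effects(data):
    # Mediator model (X, W, X*W -> M) and outcome model (X, M -> Y).
    m = smf.ols("attitudes ~ digital_literacy * ethnicity", data).fit()
    y = smf.ols("use_extent ~ digital_literacy + attitudes", data).fit()
    a1 = m.params["digital_literacy"]
    a3 = m.params["digital_literacy:ethnicity"]
    b = y.params["attitudes"]
    # Conditional indirect effects at W = 0 and W = 1, plus the
    # index of moderated mediation (IMM = a3 * b).
    return a1 * b, (a1 + a3) * b, a3 * b

# 5000 bootstrap resamples, as in the reported analysis (slow but simple).
boot = np.array([indirect_effects(df.sample(frac=1, replace=True))
                 for _ in range(5000)])
lo, hi = np.percentile(boot[:, 2], [2.5, 97.5])  # 95% CI for the IMM
print(f"IMM 95% CI: [{lo:.3f}, {hi:.3f}] (moderation if the CI excludes 0)")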
--- Discussion

DHS has the potential to improve healthcare access and outcomes for all populations, including minorities. However, minority communities may face barriers in utilizing digital health tools due to a lack of technological access, lower income, language barriers, and limited digital health literacy. The current research investigated the characteristics of DHS utilization and the barriers related to it, comparing the communities living in Israel: Jews as a majority and Arabs as a minority.

The results show that, on average, the Jewish participants use more digital services and have more positive attitudes toward them than the Arabs do. Examining the differences in the extent of use of DHS between Jews and Arabs in each of the utilization items revealed that Jews use all operations more frequently than Arabs, except for the online pharmacy (both groups have low usage rates). Significant factors predicting DHS utilization among Jews include positive attitudes and high digital health literacy; among the Arabs, only attitudes towards DHS were a significant predictor of the extent of DHS use. Moreover, digital health literacy significantly affects the extent of use through attitudes in both groups, but the effect is stronger in the Arab group. This finding suggests that the effect of digital health literacy via attitudes on the extent of use was moderated by ethnicity.

The Arab minority does not have limited access to technology: the latest unpublished data from 2021 show that more than 90% of Arabs use the internet and have smartphones. 30 Another study from 2019 also reported small differences between the Jewish and Arab communities in the daily rate of internet use (82% compared to 77%, respectively), alongside a considerable gap in the rate of computer use among the same groups (77% among Jews compared to 46% among Arabs). 35 Despite these facts, the results of the current study still indicate disparities in utilizing DHS between Arabs and Jews.

One of the factors related to these disparities is cultural barriers. There may be a distrust that leads to discomfort in using this technology and utilizing DHS among the Arab community, which may prevent them from using it. Lack of trust can emerge from concerns about privacy, data security, and the accuracy of health information obtained through digital platforms. Previous research indicates trust as a necessary aspect of successfully using electronic health records and other electronically stored health information, 36 and it is particularly important for low-income communities. 37 Mistrust of the health system among the Arab community has risen in recent years, during and after the COVID-19 pandemic: research conducted in 2021 and 2022 38,39 found moderate to low levels of trust in the health system among the Arab community. A related safety concern is the fact that private information is being recorded and stored. 40

Moreover, language can be another cultural barrier to using DHS, as it affects a person's ability to understand and navigate the technology. If DHS is not offered in a person's native language, or if the language used is not accessible or understandable, it can limit their ability to use these services and benefit from them effectively. This can also lead to mistrust, confusion, and, ultimately, decreased usage of DHS among certain cultural groups. Language barriers can also affect the accuracy of information exchanged and the quality of care received, leading to further disparities in healthcare access and outcomes. [41][42][43] The participants reported language translation and cultural adaptation as crucial factors enabling them to use DHS. Therefore, digital health providers need to consider the linguistic diversity of their target population and offer services in multiple languages to ensure equitable access and utilization of these resources.

The study found that digital literacy and attitudes toward using DHS are factors that explain the differences in the use of DHS among different groups. According to the Technology Acceptance Model (TAM), 25 a person will use digital services if they have both the perceived ability and the skills to do so and perceive its usefulness. The study found that Jews had more positive attitudes towards using DHS and higher digital literacy levels, which led to higher usage of digital health services. Among Arabs, by contrast, low levels of general literacy depress digital literacy and foster negative attitudes towards using digital services, resulting in lower usage of DHS. It has also been found that people with low digital literacy tend to use technology only for simple tasks, such as playing games or browsing websites. 44

To conclude, the study showed that, compared to Arabs, Jews have more positive attitudes toward the use of DHS and higher digital health literacy, which perhaps leads to higher utilization rates. The use of DHS has already been shown to contribute to improving and maintaining health in most studies, and it can also contribute to closing the health gaps between population groups of different socioeconomic status, such as the Jewish and Arab populations. Addressing these barriers and ensuring equal access to DHS is essential to reducing healthcare disparities among the Arab communities in Israel.
This can be done through targeted digital literacy education, removing barriers to technology access, offering services in Arabic, designing and implementing reward strategies to motivate the use of DHS, and collaborating with community organizations to reach underserved populations.

--- Limitations

Although the sample in the study was representative, the questionnaire was distributed online. On the one hand, this method is convenient, as researchers can reach a large number of participants; on the other hand, it is a limitation, since online surveys may reach only a limited sample, as not all people have access to, or are comfortable using, the internet or electronic devices. This can lead to the underrepresentation of certain groups and might limit the generalizability of the findings. Participants in online surveys self-select to participate, which can introduce bias into the sample: people who choose to participate may differ from those who do not in important ways, which can affect the accuracy of the results. Finally, while the research, conducted as a cross-sectional survey testing a moderated mediation model, indicates a possible causal link between variables, it is essential to emphasize that the analysis serves as a supplementary tool and cannot definitively establish causal relationships. Thus, the model in this research provides evidence for a possible explanation of the relationship between variables, but it does not prove that the relationship is causal.

--- Ethics Considerations

This study complies with the Declaration of Helsinki. Ethical approval was obtained from the Yezreel Valley College Ethics Committee before data collection (Approval No. YVC EMEK 2022-61).

--- Disclosure

The authors report no conflicts of interest in this work.
In Indonesia, the legal system heavily favors state ownership of land, leading to the marginalization of Indigenous peoples and their traditional land management practices. The prioritization of economic development over environmental and cultural conservation has resulted in a limited understanding of the value of the peatland ecosystem for Indigenous Dayak communities, leading to inappropriate and ineffective peatland management policies. To address these challenges, this research adopts a descriptive qualitative approach, utilizing a cross-sectional research design that includes in-depth interviews and literature study to gather and analyze data from Indigenous Dayak Ngaju communities in Tumbang Nusa and Pilang villages, Pulang Pisau regency, Central Kalimantan province. The study uncovers that the Indigenous Dayak Ngaju community has established a zonation system for peatland use, comprising separate areas for settlement, farming, and forest protection (Pukung Pahewan). The creation of specific policies for managing sacred areas is crucial to preserving Indigenous values and practices. Moreover, the absence of free, prior, and informed consent in certain policies and programs, such as the Mega Rice project, Food Estate program, and Zero-burning policy, has caused social conflicts within the Indigenous Dayak community, leading to the destruction of their livelihoods. Despite existing laws in Indonesia that acknowledge the rights of Indigenous peoples and safeguard their customary lands, the implementation and enforcement of these laws have proven weak and inconsistent.
INTRODUCTION

The legal system in Indonesia ensures the protection of the rights of Indigenous people, including the Indigenous Dayak Ngaju community. 1 They are guaranteed full involvement in decision-making and sustainable management of the ecosystem in their territory. However, the lack of empowerment and the ineffective implementation of these standards hinder the Indigenous communities' ability to fully leverage their knowledge, values, and wisdom in peatland management. 2

It is crucial to highlight that the Dayak people are Indigenous people of Kalimantan island, where Indonesia's new capital is planned to be relocated. 3 With over 400 subtribes, each having its unique traditional system of land and natural resource management, the Dayak community possesses valuable insights and practices. 4 The Dayak community is categorized or grouped based on their dwellings in watersheds, such as the Kapuas, Katingan, Seruyan, and Barito rivers. Dayak people who live along the upper river in Central Kalimantan Province are called Dayak Ngaju. 5 The Dayak Ngaju rely heavily on the peatland ecosystem in which they live. 6 Traditionally, they have utilized the peat swamp forests for various small-scale activities such as timber harvesting, gathering food and medicinal plants, and obtaining clean water. As forest-dwelling people, the environment has played a significant role in shaping their culture and way of life, with the peatlands deeply intertwined with the customs and traditions of the Indigenous Dayak people. 7

Although Law No. 32 of 2009 concerning Environmental Protection and Management promotes participatory principles in environmental conservation and management, with an emphasis on the value of local knowledge, there is still a need for further action to actively involve Indigenous peoples, particularly the Dayak community, in the management of the peatland ecosystem. 8 This necessity arises from the historical context prior to Constitutional Court Decision No. 35/PUU-X/2012, when the legal framework in Indonesia did not recognize community ownership of forested land, as previously stated in Law No. 41 of 1999 concerning Forestry (Forestry Law). 9 According to the Forestry Law, the country's forests are categorized as state forest and private forest.

1 See Mohammad Jamin et al, "Legal Protection of Indigenous Community in Protected Forest Areas Based Forest City," Bestuur 10, (2022): 198-212; and Yanarita et al, "Development of the Dayak Ngaju Community Forest in the Forest and Peatland Area, Central Kalimantan, Indonesia," Environmental Science, Toxicology and Food Technology 8, no. 3 (2014): 40-47. Yanarita characterizes Dayak Ngaju as indigenous communities residing in the Central Kalimantan province. In both literature and everyday language, Dayak Ngaju are often referred to as an "indigenous community," "tribe," or "subtribe." Although these terms may hold distinct legal and anthropological connotations, this paper employs them interchangeably in a general sense to represent a community known as the Dayak Ngaju that inhabits an upper stream in Central Kalimantan Province, united by collective ancestral connections to the land and natural resources, and steadfastly maintaining the traditions passed down through generations.

2 See Gusniarjo Mokodompit et al, "Ensuring the Rights of Indigenous Peoples: International Legal Standards and National Implementation," The East Journal Law and Human Rights 1, no. 3 (2023)
The law declares that any forestlands in Indonesia without private entitlements are considered state forestlands. Unfortunately, the implementation of forest-protection actions in the past often excluded local communities, resulting in no tangible benefits for them. 10 This lack of recognition resulted in conflicts between government-designated zones and areas acknowledged under Customary Law. Recognizing the rights of Indigenous peoples, including the Dayak community, to govern their natural resources has long been a demand. 11 The recognition of customary forests as distinct from state forests, as stated in the Constitutional Court ruling, marked a significant change. 12

However, several studies examining the limitations and challenges associated with the implementation of Constitutional Court Decision No. 35/PUU-X/2012 have identified several important points. Firstly, little power has been transferred in favor of Indigenous people, as the state retains full authority in determining the procedure for customary forest recognition. Secondly, the recognition of Indigenous communities has been hindered by concerns about maintaining national integration and the complex articulation of Indigenous identity due to historical and post-colonial dynamics. Additionally, the absence of a law on Indigenous peoples' rights has been a common reason for many local governments to avoid recognizing Indigenous territories. 13

The Mega Rice Project, initiated in 1995 by President Soeharto in Central Kalimantan, aimed to enhance food security but neglected the active involvement of Indigenous Dayak communities. 14 The project involved extensive drainage canal construction, deforestation, and the introduction of Javanese transmigrants using unfamiliar farming methods, causing irreparable damage to the peatland and local communities. Similarly, the Food Estate project, characterized by intensive modern farming, fails to consider Indigenous Dayak Ngaju wisdom in agricultural practices and community land management. 15 This constitutes a violation of the Indigenous Dayak people's rights as clearly recognized in Article 18B(2) of the 1945 Constitution of the Republic of Indonesia and Law No. 39 of 1999 concerning Human Rights. 16

Peatland land-use change in Kalimantan, particularly for agricultural purposes such as monoculture production of palm oil and rice, has often disregarded the valuable knowledge and wisdom of Indigenous communities. 17 This disregard has led to significant consequences, including the degradation of ecosystems, socio-economic conflicts, and land disputes. The shift towards monoculture plantations, particularly palm oil and rice, has disrupted the delicate balance of peatland ecosystems, resulting in the loss of biodiversity and the destruction of natural habitats. 18 The traditional knowledge held by Indigenous peoples, which encompasses sustainable land management practices and a deep understanding of the ecological intricacies of the region, has been marginalized and ignored. 19 As a result, the degradation of peatlands has not only had ecological implications but has also caused socio-economic conflicts, as Indigenous communities, whose livelihoods and cultural heritage are closely tied to the land, are marginalized and deprived of their rights. Additionally, the neglect of Indigenous land rights has fueled land conflicts, further exacerbating tensions and instability in the region.
20 Recognizing and incorporating the knowledge and rights of Indigenous peoples is crucial for addressing these issues, promoting sustainable land use practices, and fostering a harmonious coexistence between agriculture and the environment. 21 Understanding the significance of the peatland ecosystem to develop effective management strategies requires acknowledging people's values, perceptions, and traditional knowledge systems. 22 The Indigenous Dayak people possess valuable traditional knowledge that provides a holistic understanding of the peatland ecosystem. The profound connectedness of indigenous people with nature and their wisdom have been highlighted as a vital aspect of the sustainable management of socio-ecological systems. By involving and valuing the knowledge and perspectives of Indigenous communities, a more inclusive and participatory approach to peatland conservation can be developed. 23 The present paper builds upon previous studies exploring the topic of indigenous land management and environmental justice. Boag (2016) conducted a comparative study on Australia and Indonesia, providing insights into the benefits and limitations of different policy strategies for Indigenous peoples in the Asian-Pacific region. 24 Jamin et al (2022) analyzed the protection of customary law communities in urban forest-based protected forest areas designated as the National Capital, recommending the inclusion of legal protection provisions for Indigenous Peoples in the Law on the State Capital. 25 Furthermore, Belliera and Preaud (2011) examined the transformative effects of recognizing Indigenous peoples globally, exploring various local contexts and strategies and uncovering transnational links and differences. 26 These studies contribute valuable perspectives to the understanding of Indigenous land management and the pursuit of environmental justice for Indigenous communities worldwide. This research aims to explore the values, cultural significance, and traditional practices held by the Indigenous Dayak communities regarding the peatland ecosystem. By understanding their perspectives, this study seeks to develop a comprehensive legal framework that actively involves and appreciates the values of local communities in ecosystem management, fostering ownership, responsibility, and stewardship. Ultimately, the goal is to establish effective and sustainable conservation strategies aligned with the aspirations and priorities of the Indigenous Dayak people, ensuring the long-term protection and well-being of both the peatlands and the Indigenous communities. The research was conducted in Tumbang Nusa village and Pilang village, located in Pulang Pisau Regency, Central Kalimantan Province. The selection of these villages as study areas was based on the following factors: First, the majority of the residents in both Tumbang Nusa and Pilang villages, specifically over 90%, belong to the Indigenous Dayak Ngaju community. These communities actively preserve and practice the traditional customs and culture of the Indigenous Dayak Ngaju. Second, both villages possess peatland ecosystems, which are currently undergoing changes in land use patterns. Third, the proximity of the villages to the researchers and their ease of accessibility were additional factors considered during the selection process. To collect and collate the data for this study, a cross-sectional survey was used. 
In-depth interviews were conducted over one month, from mid-September 2021 to mid-October 2021, in Tumbang Nusa village and Pilang village, using purposive sampling with a total of eighty-eight respondents. The in-depth interviews were conducted with village officials, Dayak elders, Mantir (Indigenous leaders), a Non-Governmental Organization (NGO) representative, researchers, and the provincial forestry department.

--- RESULT AND ANALYSIS

2.1. Indigenous Dayak Ngaju Peatland Management

There are no rules specifically governing peat forests in the Dayak Ngaju Indigenous community. Peat forest management is based on the zonation of the land: peat forests can be used as agricultural areas, settlements, or protected forests. Peatland zonation is based not on the depth of the peat soil but on the vegetation that grows in the area, which serves as a guideline for the community in determining the use of the area. It covers three zonations, for the purposes of settlement, farming, and secondary forest.

Regarding the settlement zonation within the Dayak Ngaju Indigenous community, the organization of villages follows traditional rules that have been passed down through generations. In the villages of Tumbang Nusa and Pilang, most of the stilt houses are constructed on flat land near the river. Consequently, peat areas adjacent to rivers are commonly utilized as settlement sites. This strategic choice is motivated by the convenience of accessing water, fish, and transportation routes, which are readily available in close proximity to riverside settlements. 27

For the farming zonation, the selection of land for farming, known as "ladang," is guided by several considerations: the presence of a nearby river or creek, the abundance of fresh and green leaves on plants within the primary forest, and the presence of specific grasses and trees, such as taro, suna, the bungur tree (Lagerstroemia), and the jajangkit tree. These criteria do not apply to areas that were previously cleared for agriculture; in such cases, the land can be reused if there are trees with a trunk diameter exceeding 15 cm. 28

If a member of the Dayak Ngaju community clears land (A) within the primary forest for cultivation and subsequently re-cultivates it after a few years, the fertile land is referred to as "balik uwak." The individual who cleared the land is rewarded with rights to the land, acknowledging their hard work. According to Dayak Customary Law, the responsibility for managing forests converted into agricultural land rests with the initial cultivator. This principle is enshrined in Article 39 of the Customary Law of the Dayak, known as "Singer nalinjam bahu himba balikuwak." If another individual (B) wishes to work on the land previously cultivated by (A), they are obligated to compensate the previous cultivator (A) with voluntary offerings such as rice, a white chicken, a whetstone, machete iron, a pickaxe, and manas lilis. The rights to the former field, after being cultivated by individual (B) for one or two years, revert back to the ownership of individual (A). 29

The Dayak Ngaju people have developed a deep understanding of the peatland ecosystem through keen observation of the vegetation present in the area. By closely observing the growth of specific plants, they can discern the ecological characteristics of the land. This Indigenous knowledge guides their decision-making in choosing the most suitable areas for farming.
They avoid utilizing deep peat for agricultural purposes, as they are well aware that the soil in such areas tends to be acidic and is better conserved as forest. For the secondary forest zonation, when a piece of land previously used for farming, or "ladang," displays a thriving growth of valuable timber and fruit-bearing trees, it is conserved as a source forest for wood, fruit, medicine, vegetables, fish, and purun (a grass species used for weaving). This designation ensures the availability of wood and fruits for the community's needs, allowing for the sustainable utilization of these resources.

--- Dayak Ngaju Farming Practice

The farming practices of the Dayak people integrate ecosystem management and Dayak traditions. The stages of farming identified in this study are as follows: inspecting the land, determining the land area, cleaning farming tools, slashing, cutting trees, burning the land, planting, weeding, harvesting, and performing a thanksgiving ceremony. These ten stages are universally followed by the Dayak people and must be completed. 31 Dayak farming is typically initiated in May, during the most favorable season, according to Indigenous elders from Pilang and Tumbang Nusa villages.

For the Dayak people, farming holds a deeper meaning beyond occupation; it is a spiritual connection with all beings, especially the Almighty, the Creator of the universe. Before opening and clearing the forest, the Dayak people perform a ritual called "Mangariau" to offer prayers to the spirits of the forest guards, requesting them to relocate. Mangariau is performed on small arable land, while for larger fertile lands a ritual called "Manyanggar" is conducted, involving the offering of pigs or cows. 32

Slashing and burning practices, although controversial due to their association with forest fires, play a crucial role in shifting cultivation by clearing land and enhancing soil fertility. 33 The Dayak people, however, have been practicing this tradition responsibly for centuries. They conduct controlled burns simultaneously, equipped with water and fire extinguishers, solely for agricultural purposes, and ensure that no socioeconomic issues arise from these fires. 34 Cooperation is evident in Dayak farming, as men create holes in the soil through dibbling (Manugal), while women sow seeds in these holes. This collaborative planting process is accompanied by joyful interactions, jokes, and displays of various arts and cultures. The harvest marks the final stage of farming and brings great joy to the Dayak people. They express gratitude by performing the "Pakanan Batu Ceremony," or "feeding the rock ritual," acknowledging the farming tools they used. These rituals exemplify the Dayak people's respect for nature and all of creation, maintaining a harmonious relationship with their environment.

The Forestry Law and Law No. 18 of 2004 concerning Plantations prohibit land clearing through burning, except for specific exemptions introduced after the 1997-1998 forest fires. 36 Article 69(2) of the Environmental Protection and Management Law allows the continuation of fire use in traditional agricultural techniques while considering regional customs. 37 This exemption recognizes ancient slash-and-burn practices as local knowledge protected under the law. However, there is ongoing debate and uncertainty about the definition and application of local knowledge. 38 During the drafting of Indonesia's new Job Creation Law, discussions were held regarding the repeal of this exemption.
However, the implementation of the exception remains complex, and instances have been reported where traditional farmers practicing their local knowledge were detained for using fire to clear land. The enforcement of the zero-burning policy has caused fear among Indigenous communities, leading to disconnection from their land and traditions. 39 Additionally, the Food Estate program, which promotes zero-burning farming, has not effectively integrated Dayak Ngaju traditional values and knowledge in peatland management. The program's introduction of chemical fertilizers and non-local rice seeds conflicts with the regenerative farming system of the Dayak people, which relies on local seeds and avoids chemical inputs. The implementation of a similar program, the Mega Rice Project in the 1990s, led to a significant change in the traditional Dayak farming system, resulting in the abandonment of land in Tumbang Nusa and Pilang villages. The 2015 forest fire incidents prompted the government to introduce new regulations regarding land clearing without burning, posing challenges for the Dayak community, who relied on farming as their livelihood. As a result, many community members shifted their occupations to rubber farming and fishing, leading to the abandonment of land in the villages of Tumbang Nusa and Pilang. 41

--- Pukung Pahewan as Conservation Area

The Indigenous Dayak Ngaju have designated a primary forest in Pukung Pahewan as a reserve or protected forest, ensuring the tribe's future. This forest also serves as a sacred home for the "forest spirits" believed by the Dayak Ngaju people to coexist with the community. According to Article 87 of Dayak Customary Law, known as "Singer Karusak pahewan, Karamat, rutas dan Tajahan," anyone who mocks, burns, slashes, cuts down trees, or steals from the sacred area will face penalties. The punishment includes a demand for an inheritance penalty or compensation for the nearest village, ranging from 15 to 30 kati ramu. The offender must also conduct a small ceremony at the location, offering a pig sacrifice and covering the expenses of a mediator who communicates with the forest spirits as an act of apology. 42

The Indigenous Dayak Ngaju firmly believe in their responsibility to protect and preserve Pukung Pahewan, as it holds sacred and mystical messages within its traditions and rituals. Any disturbance, destruction, or hunting of animals or plants in the area, whether intentional or unintentional, is met with sanctions. The community fears that not only the violators but the entire village may be subjected to punishment by their ancestors and the forest spirits. 43 Pukung Pahewan represents a restricted space with specific constraints, where trees, stones, and other sacred elements must not be disturbed or harmed, including the surrounding area. It serves as a conservation methodology for the Dayak people to protect nature and symbolizes their willingness to coexist harmoniously with all organisms in nature, including animals, plants, and forest spirits. 44

Despite the various policies in Indonesia regarding the conservation of peat forests, there is currently no policy that specifically addresses the management of sacred areas such as Pukung Pahewan.

… "and Tumbang Nusa village," interviewed by Sumarni, Central Kalimantan, 14-20 September 2021.

41 Indigenous Dayak community and village official, loc.cit.
--- Leveraging Indigenous Dayak Participation in Peatland Management through Customary Forest Practices

The Indigenous Dayak Ngaju people have a deep understanding of the interconnectedness between their lives and the ecosystems they inhabit. Their social, economic, and cultural aspects are intricately linked to the natural environment, and there exists a reciprocal relationship between the people and the land, encompassing a concept known as the "duty of care". 45 The duty of care implies a responsibility to care for and protect the land, which is closely tied to cultural norms and values. While the community benefits from the ecosystem, they recognize their duty to ensure its well-being. This perspective acknowledges that any benefits derived from the environment should be balanced with the preservation of cultural heritage and ecological integrity. 46

In the Dayak Ngaju community, the economy is not viewed as separate from the ecosystems but rather as an integral part of them. The well-being of the community's economy is closely intertwined with the health of the surrounding ecosystems. As a result, there is a mutual exchange between the two, with the community relying on the resources and services provided by the ecosystem, while also recognizing the need to sustainably manage and conserve those resources.

However, throughout the colonial and New Order eras, the Indigenous Dayak Ngaju people faced the unfortunate reality of their customary rights not being recognized, which deprived them of the authority to manage their natural resources and apply their local wisdom. 47 This lack of recognition became evident during the Soeharto era, when the Indonesian government initiated the ill-fated Mega Rice Project in the peatlands of Central Kalimantan. Tragically, the project's improper irrigation methods and degradation of the peatlands resulted in catastrophic forest fires in 1997, engulfing extensive areas that included Pilang and Tumbang Nusa. 48

Without Free, Prior, and Informed Consent (FPIC) from the community, the government proceeded with the construction of thousands of canals, resulting in the cutting and destruction of many villagers' farm areas. This lack of consultation and consent has had significant negative impacts on the affected communities, disrupting their livelihoods and causing environmental damage. The consequences were two-fold for the communities: not only did they suffer the loss of their lands without compensation, but they also witnessed the severe environmental damage caused by the ill-conceived project. 49 The implementation of FPIC is essential to ensure that decisions made by the government respect the rights, interests, and well-being of local communities and enable more sustainable and inclusive development. 50

The experience of facing discrimination in agrarian conflicts within Pilang village heightened the Indigenous peoples' awareness of the significance of obtaining formal recognition for their ancestral territories. In response, they have embarked on a determined struggle to secure official acknowledgment of their customary forest, which would grant them the autonomy to independently manage their ecosystem. In 2019, their long and arduous struggle began to yield positive outcomes with the issuance of a decree by the Pulang Pisau Regent recognizing Indigenous peoples.

45 See Bulkani, Ilham and Darlan, loc.cit.; and Indigenous Dayak mantir, loc.cit.

46 See Sara A.
Additionally, the Ministry of Environment and Forestry issued a decree acknowledging their customary forest, as specified in Minister of Environment and Forestry Decree No. 5447/MENLHK-PSKL/PKTHA/KUM.1/6/2019. 51 Notably, the Barasak Island Customary Forest, covering 102 hectares and designated for protection, stands as Central Kalimantan's only customary forest established through a social forestry scheme. The swift issuance of the decree for Barasak Island, located within an area with a different designated use, sets it apart from customary forests in forested areas, which typically require recognition through regional regulations (peraturan daerah) as mandated by the law. 52

--- Customary Forest: Recognising Indigenous Dayak Ngaju Land Management

The recognition of customary forests in Pilang Village provides greater space for the Indigenous Dayak peoples to use their traditional knowledge and wisdom in managing forests for the greatest prosperity of their people. It also promotes the utilization of the time and space perspectives of the Indigenous Dayak people, while emphasizing handep collaboration and equal partnership among stakeholders. 49

This framework centers on the concepts of time and space as understood by the Indigenous Dayak people. Time refers to the understanding of, and respect for, the temporal aspects of ecological processes and the intergenerational perspective. It recognizes that sustainable management of peatlands requires long-term thinking and planning, considering the needs and well-being of future generations. The Indigenous Dayak people's knowledge of the land, passed down through generations, holds insights into the temporal dynamics of the ecosystem. Space refers to the Indigenous Dayak people's intimate connection with the physical and cultural landscapes of the peatlands. It recognizes the significance of their traditional practices, cultural values, and customary land management systems. The framework promotes the preservation and revitalization of Indigenous practices and institutions related to peatland management. By valuing their knowledge and expertise, the framework seeks to incorporate Indigenous perspectives into decision-making processes. 53

The framework also emphasizes handep collaboration and equal partnership among stakeholders. It recognizes that effective peatland management requires the active involvement and meaningful participation of Indigenous communities, government agencies, non-governmental organizations, and other relevant stakeholders. Handep refers to a custom practiced by the Dayak people, where they come together to collectively clear agricultural land. 54 When one villager is clearing land, others join in to provide assistance, with relatives also contributing their labor as repayment for previously received services while working on their own fields. Those who are unable to participate may feel a psychological and customary burden, as reciprocity is valued within the Dayak community. This sense of obligation to help one another fosters a strong sense of community among the Dayak people. 55

By fostering handep collaboration, the framework aims to create a more inclusive and equitable approach to peatland management, where the voices and rights of Indigenous Dayak communities are valued and integrated into decision-making processes. This will also allow the community to run their own initiatives, such as establishing tree nurseries for native peatland species that provide economic benefits, or beekeeping.
To make this movement viable, it must be supported financially and technically through the provision of mentoring and tools. The Indigenous Dayak Ngaju people in Pilang village have faced a challenging struggle to obtain recognition for their customary land, primarily due to the extensive documentation required and the reluctance of some local governments to acknowledge Indigenous territories. The community struggled to navigate the intricate legal processes required for recognition, including meeting various administrative requirements and complying with governmental regulations. Limited access to legal support and information further compounded the difficulties faced by the Pilang community.

The procedure for obtaining legal recognition of customary forests still follows the procedure required by the Forestry Law. To be able to manage the forest, Indigenous communities must be recognized by district or provincial governments, as stated in Article 67 of the Forestry Law. 56 If their territories fall within the administrative jurisdiction of a single district, recognition should come from the district government; for territories spanning multiple districts, recognition must be obtained from the provincial government. However, the practical implementation of this provision often hampers the recognition of customary forests, since many local governments are unwilling to acknowledge Indigenous territories. This reluctance from local authorities constrains Indigenous communities from gaining the legal recognition they need to govern their natural resources effectively. 57

However, the efforts of the Pilang community have been bolstered by the invaluable support and assistance of a third party, USAID-Lestari. This external entity has played a crucial role in providing the necessary backing and resources to navigate the complex process of formal recognition. With the aid of USAID-Lestari, the Indigenous community in Pilang village has been able to overcome barriers and advance their cause, paving the way for the recognition and preservation of their ancestral lands. 58

--- Barriers in Community-led Peatland Management

Indigenous communities may face various challenges in governing their natural resources. The limited availability of resources to maintain and manage their natural assets might be the biggest barrier. For instance, in the case of Pilang village, even after obtaining legal recognition of their customary forest, the community encountered difficulties in securing funding to support their initiatives. They faced challenges in finding financial resources for essential infrastructure development, such as customary buildings, necessary for the development of ecotourism and for generating economic benefits for the community. The lack of adequate resources can hinder communities from implementing sustainable practices and maximizing the potential of their natural resources. Additionally, Indigenous communities may also encounter knowledge and managerial challenges, which pose significant barriers to effective governance.

The progress of government-led peatland restoration has been constrained primarily by socio-economic challenges faced by communities. 59 To achieve successful intervention, it is essential to comprehend community concerns and develop optimal short- and medium-term income solutions that facilitate the transition to sustainable income generation. 60
By addressing these socio-economic aspects, peatland restoration efforts can become more effective and inclusive, benefiting both the environment and local communities. However, historical marginalization and limited access to education and training opportunities have left some Indigenous communities lacking the necessary knowledge and expertise. This knowledge gap inhibits their ability to manage their customary forests efficiently and to benefit from them fully.

Addressing the barriers to community sovereignty in governing natural resources and obtaining legal recognition of customary forests necessitates collaborative efforts involving government agencies, civil society organizations, and the Indigenous communities themselves. These efforts should focus on overcoming the challenges related to recognition by local governments, securing sufficient resources, bridging knowledge gaps, and simplifying the legal processes. By empowering Indigenous communities and supporting their rights and stewardship over their ancestral lands and resources, a more inclusive and sustainable approach to natural resource governance can be achieved.

--- CONCLUSION

Peatland serves as a valuable ecosystem for the Indigenous Dayak Ngaju community. Their adoption of a zonation system for peatland use, along with the integration of ecosystem management and cultural rituals into their farming practices, showcases their deep connection and harmonious relationship with nature. However, despite the laws in Indonesia that recognize the rights of Indigenous peoples and aim to safeguard their customary lands, the implementation and enforcement of these laws have demonstrated weaknesses and inconsistencies. Furthermore, several programs and policies have failed to prioritize seeking the consent and opinions of the community, despite their evident impact on the community's traditional way of life.

To address these issues, it is essential to urgently implement the principle of FPIC and enhance the implementation and enforcement of legal protections. FPIC ensures that decisions affecting Indigenous communities are made in consultation with them, respecting their rights and interests. Strengthening legal protections will further safeguard the rights and well-being of Indigenous communities, providing them with the necessary legal mechanisms to protect their customary lands and maintain their traditional way of life. To bridge the gap between government policies and Indigenous knowledge, it is important to foster a legal framework that recognizes and integrates the utilization of the "time and space" perspectives of the Indigenous Dayak people, while placing emphasis on the collaborative practice of handep among all stakeholders involved.
Indonesia is a multi-cultural country characterised by hereditary traditions passed down by ancestors. Strands of this traditional culture are often specific to particular communities; for example, the Pantauan Bunting tradition is expressed and passed down in the customs of the Besemah community of Lahat Regency, South Sumatra. The purpose of this study is to analyze the social construction of the process through which the Pantauan Bunting tradition was formed, the distribution of the Pantauan Bunting tradition across different regions, and the existence of the Pantauan Bunting tradition in the Besemah Tribe community in Lahat Regency. The method used in this study was qualitative, with ethnographic, historiographic, and spatial approaches. This research was conducted in three locations, namely Kota Agung Village, Pulau Pinang Village, and Selawi Village, where the research subjects consisted of traditional leaders, religious leaders, community leaders, and the Besemah community. The results of this study showed that (1) the Besemah community has constructed the Pantauan Bunting tradition since the time of its earliest ancestors, and this process of transmission still continues. This tradition is characterised by a public invitation to prospective brides to come to their prospective bridegrooms' homes. (2) The Pantauan Bunting tradition has spread to various areas in Lahat Regency, such as Kota Agung Village, Pulau Pinang Village, and Selawi Village. (3) In the modern era, the existence of the Pantauan Bunting tradition is maintained by the Besemah community, and it can still be found in various areas in Lahat Regency. The Pantauan Bunting tradition, practised in various parts of Lahat Regency since ancient times and firmly ingrained in the Besemah community, highlights the community's resiliency and commitment to preserving its cultural legacy.
Introduction

Culture is an important element in human life because it provides the implied meaning of various aspects of society. The significance of culture is strongly related to the values, beliefs, ways of thinking, ways of living, and world views adopted by community members at certain times (Eko & Putranto, 2019). Culture is not something owned only by a particular group of people; it is owned by everyone and can be a unifier of the nation. Human beings and culture are never separable: in daily life, the human being is never detached from culture. As social beings, people interact with each other and follow habits that can become a culture (Mahdayeni et al., 2019).

Pesurnay (2018) defines culture as an expression of the will of man in recognizable structures shared by those who inhabit the same world; therefore, the relationship between man and his cultural world is dynamic and dialectical. This concept informs a theory of social construction, comprising externalization, objectivation, and internalization. Pujiati (2017) argues that these three concepts outline a process of forming a tradition that goes hand in hand with the tradition itself and evolves continuously. The individual human being becomes an instrument involved in creating an objective social reality through a process of externalization, as the individual interprets influences through a process of internalization. Ngangi (2011) explains that this social construction can be dialectically illustrated as in Figure 1. Each process in the dialectical scheme of social construction is presented in Table 1.

The human being is also seen as the creator of culture. Culture is closely related to tradition, which can be seen as formed through the community's continuous transmission of a culture. Tradition is a habit, behavior, or attitude of a society passed down from generation to generation and preserved by the local community as a reflection of that society with a distinctive culture. Tradition is a spirit of culture that strengthens a cultural system. Culture and all its products are the results of the process of human life (Suarmika, 2022). Local wisdom is a cultural product that includes philosophy, values, norms, ethics, rituals, beliefs, habits, customs, and so on (Uge et al., 2019). Local wisdom usually comes from ancestors and is followed by community members from generation to generation (Gadeng et al., 2018; Atahau et al., 2020; Raj et al., 2022). This local wisdom accumulates the good habits of generations.

Table 1. Theoretical Dialectical Scheme of Social Construction

1. Externalization: The outpouring of the human being into the world, in both mental and physical activities. It is sometimes seen as the essence of man himself, and it is an anthropological imperative that man always devotes himself to the world in which he exists. Humans cannot understand themselves as detached, self-enclosed beings separated from the outside world.

2. Objectivation: The results achieved, both mentally and physically, through human externalization activities. These results confront the producer himself because they are outside of, and different from, the humans who produce them. Through this process, the community becomes a sui generis reality. Objectivation can manifest as the sharing of opinions concerning a social product that emerges within a community through public discourse, even without direct, in-person interaction between individuals and the creators of said social product.
3. Internalization: The human person is an instrument in the process of creating objective social reality through a process of externalization, and is in turn influenced by that reality through a process of internalization, which reflects subjective reality. Individuals become members of society through this process of internalization or socialization.

Local wisdom passed down by tradition becomes the basis for someone in a particular tribe to communicate with other tribes. Habits and the local wisdom they form give rise to a tradition with its customs, norms, and other cultural forms (Pratamawaty, 2017). Local wisdom is a form of community culture in the form of knowledge, products, and activities used for survival, adapted to where people live and passed on from generation to generation. Local wisdom has philosophical values believed to be guidelines (thoughts, attitudes, and behavior) in life activities to maintain personal and group survival (Suarmika, 2022).

Each region has a different tradition of commemorating or celebrating important events such as births, weddings, and deaths. One area of Indonesia with a unique tradition is the Lahat Regency in South Sumatra Province. One of the national tribes that inhabits this region is the Besemah Tribe, which differs from the peoples inhabiting other areas in how it celebrates weddings. The Besemah Tribe community still upholds the tradition of pantauan bunting inherited from its ancestral beginnings, especially when wedding celebrations are held. This is evident from the researchers' direct observations; many still carry on this tradition.

Some research shows that the Pantauan Bunting tradition involves a bridal couple accompanied by a man and a woman who, in the Besemah language, are called 'bujang ngantat' and 'gadis ngantat'. A 'bujang ngantat' or 'gadis ngantat' must be unmarried. The task of the 'gadis ngantat' and 'bujang ngantat' is to join the bridal couple in going around to the houses of the residents who have invited the couple. They also accompany the bride and groom from the time of the marriage proposal until the wedding reception (Arios, 2019). The research results of Sari et al. (2021) reveal several new things: the Pantauan Bunting tradition is still practiced by the Pasemah community, especially the Sukarami village community, in a series of marriage ceremony activities in the form of an invitation to eat extended by the local community to a newly married couple. The tradition of Pantauan Bunting is carried out for Muji Jurai, or honoring the descendants, as an act of gratitude and respect for the descendants because they are married; it is also said to be a gift from the community to the bride. Zaman (2017) also reports on the uniqueness of marriage events in the community, as exemplified in the performance of rituals such as the Rokat Tek-tek kemanten tradition, which is imbued with institutionalized community values. These practices involve symbols that have sacred meanings in the Rokat Tek-tek kemanten tradition, as it is an ancestral heritage of appreciating "bujuk nia," which determines social reality in the community as an institutional belief system. The process of social community formation occurs through simultaneous awareness and solidarity.

--- Research Methods

This study utilizes the qualitative method of ethnography to provide a detailed description of the social construction process involved in the Pantauan Bunting tradition within the Besemah Tribe community.
This research aims to shed light on how social constructs are formed by examining the stages of externalization, objectivation, and internalization. Additionally, the study offers insights into the distribution, uniqueness, and prevalence of the Pantauan Bunting tradition specifically within Lahat Regency. Through this comprehensive analysis, a deeper understanding of the cultural significance and dynamics of the Pantauan Bunting tradition can be attained.

--- Research Location

This research was conducted in three zones selected by the researchers based on the strength of the influence and distribution of the construction of the Pantauan Bunting tradition: in one zone the tradition is still strong, in the second it is fading away, and in the third it no longer exists. Kota Agung Village is included in the zone that still practices the tradition. Pulau Pinang Village is included in the transition zone, where the tradition is rarely practiced. Selawi Village is included in the zone where the Pantauan tradition is barely practiced anymore. The locations of the research are presented in Figure 2.

--- Research Subject

In qualitative research, the research subject, commonly referred to as 'the informant', is someone who provides information about the data to be studied. In this study, the subject of research is the Besemah tribe community in Lahat Regency. The Pasemah tribe, commonly called the Besemah Tribe, is one of the ethnic groups residing in the Province of South Sumatra, Indonesia. The majority of the people live in and around Mount Dempo, Pagaralam City, Lahat Regency, Empat Lawang Regency, and Muara Enim Regency; a small part of the tribe is spread across other districts. The subjects of this study were mainly people in Kota Agung Village, Pulau Pinang Village, and Selawi Village (Table 4). The persons selected by the researcher as informants in this study include (1) traditional figures in Kota Agung Village, Pulau Pinang Village, and Selawi Village; and (2) native people belonging to the Besemah tribe in Kota Agung Village, Pulau Pinang Village, and Selawi Village. This study used the snowball sampling model to gather information. The method is called 'snowball' sampling because the researcher determines a person to be a sample based on the recommendations of people who have been sampled before (Vincent et al., 2022). The specifics of this research study's informants are presented in Table 2.

--- Research Instrument

The instrument used in this study is the researcher himself (Human Instrument), because this study uses a qualitative approach that requires direct interaction with the surrounding community, supported by interview guidelines, observations, recording tools, and documentation tools (in the form of photographs taken at the time of the enactment of the Pantauan Bunting tradition).

--- Data Analysis

Data analysis is a crucial stage in the process of conducting scientific research because it allows the researcher to arrive at answers to the research problems. Qualitative data analysis involves the sorting, coding, and thematizing of data derived from the data collection process, which involves interviewing study participants, recording the interviews and taking notes, and reviewing the literature.
--- Results and Discussion --- Social Construction: the Process of Forming the Pantauan Bunting Tradition in the Besemah Tribe Community Regions in Indonesia have many local forms of wisdom, cultures, traditions, customs, languages, and rituals or ceremonies that differ from one area to another (Hilman et al., 2020). The cultural diversity of Indonesia consists of customs or traditions that develop in society into a distinctive view held by a particular community and embodied in the way it behaves (Hasmika & Suhendro, 2021). Research has highlighted several such traditions in particular locations in Indonesia. The Pararem custom in the mass marriage tradition found in Pengotan Village, Bangli Regency, is underpinned by the community's economic situation, cultural preservation, hereditary sustainability, and culturally specific views of happiness (Gede et al., 2021). The tradition of Javanese customary marriage that prevails in Kalidadi Village is the wetonan custom, involving taboo understood as a form of caution, a way for parents to choose prospective partners for their children and protect their children's households from all the possible adverse effects that could befall them in the future (Ruslan et al., 2021). Susantin & Rijal (2021) highlight that the marriage tradition in Madura differs from that in Java. In Madura, the majority of the population adheres to the matrilineal tradition in which, after marriage, husbands and wives are required to live in the wife's house. Before the wedding, the future husband carries BhenGibhen (cabinets, chairs, beds, and other household furniture) to the wife's house, so that the wife has a furnished house to occupy. This contrasts with the ampa sabae tradition (request for marriage by women), which has developed over a long period in the Ambalawi community. The Ambalawi community understands this ancestral tradition as a solution to the problem which arises when women suffer detriment due to men's actions. It is a way for women to hold men accountable (Elpipit & Safitri, 2021). A unique case is found in the bakar batu (stone burning) tradition, a traditional ceremony enacted by the Dani tribe. It involves cooking a dish made from several pigs on stones that have been heated by fire, which serve as the cooking medium; this dish is served as the main part of the meal. The Dani tribe continues this stone-baking tradition as a form of gratitude to God and as an expression of joy or of sorrow. It is a regular part of big events such as celebrations of marriages or births, the final tribute to God on the occasion of someone's death, or thanksgiving for the blessings of the harvest (Nipur et al., 2022). The tradition of marriage in the Bugis-Makassar tribal community has several long stages, including determining the amount of panai money that the groom's family will hand over to the bride's family (Mustafa & Syahriani, 2020). In this tradition, it is not surprising to find marriage occurring at an early age, as documented by Yodi et al. (2020). These researchers' findings shed light on the meanings attributed to early marriage customs in Nagari Tapan, Basa Ampek Balai Tapan District. Parents see early marriage as: 1) a way of avoiding shame, 2) an economic matter, 3) a rescue effort. A similar diversity of marriage traditions occurs in the Besemah tribe community. The marriage contract is usually signed the day before the wedding celebration. However, some people do this after the wedding.
This depends on the agreement between the community and the bride and groom as to when the Pantauan wedding festivities will be held. Several processes are involved in keeping the tradition. They are presented in Table 3. The tradition of Pantauan Bunting continues to be carried out by the community, whether by the Besemah tribe or even by people outside the tribe. There are no traditional sanctions, nor prescribed alternative customs that must be followed, if people do not carry out the Pantauan Bunting tradition. However, the Pantauan Bunting Tradition rests on a system of reciprocity. If we hold a Pantauan Bunting celebration when relatives are to be married, then those relatives will reciprocate by holding a pantauan for us when we one day marry. Likewise, if we do not hold a Pantauan Bunting celebration, other people will do the same thing and not hold the celebration for us. This principle of reciprocity is also important in customary law in the Besemah community itself. Wardani (2021) argues that customary law is a living law that manifests itself as a community habit. This marital customary law can thus be considered an inseparable part of community life. --- Besuare Anyone going to get married informs the community beforehand by inviting the residents of the surrounding areas through visits to their houses (in the Besemah language, 'besuare'). This is usually done two to three weeks before the wedding. The bride and groom who are to be married usually first inform their families or close relatives. Thus, the community knows there will be a wedding and can prepare everything needed for the Pantauan, including obtaining food and other necessities from far away. At this time, the inviting family will also bring 'lemang' (a Lahat specialty food of glutinous rice cooked in bamboo) to people in the community. The people who receive the 'lemang' understand that it signifies they are obliged to hold a Pantauan Bunting celebration. --- Bemasak After being notified by the bride's family of the date of the wedding, the community will prepare everything necessary for the pantauan. The most prominent feature of the Pantauan Bunting Tradition is the presence of various dishes, ranging from snacks such as cakes and fruits to more substantial food such as rice and side dishes. For this reason, the community will usually make or cook food to be served when the Pantauan Bunting celebration is held. Usually, by the day before the pantauan, the community will have prepared the dishes to be served; some people even start gradually making food a week before the pantauan. There are no stipulations regarding what food should be available and served during the pantauan. However, some dishes are almost always part of the Pantauan Bunting tradition: lemang, dodol, and pepes ikan, typical foods of the Besemah tribe. In addition to these foods, there is also always meat, but it is served only by close family members. This symbolizes that the family conducting the Pantauan Bunting celebration still has a blood relationship with the bride. --- Mantau Bunting The day before the wedding celebration, Mantau Bunting, or calling the bride, is performed. After the marriage contract is signed, the community usually calls on the bride and invites her to visit their homes. The Pantauan Bunting feast can also be held after the wedding celebration is over, because there is not enough time for the bride and groom to visit the homes of the residents of the surrounding areas in a single day.
For this reason, before the pantauan begins, the community will agree on when they will hold the traditional Pantauan Bunting celebration. Source: Field Research Results (2022) In Social Construction Theory, something can be formed through the dialectic of externalization (the adjustment of individuals to their environment), objectivation (individuals becoming aware that they are part of society), and internalization (individuals making social reality part of their everyday lives) (Susanto et al., 2020). The following is an explanation of the three social processes associated with this Pantauan Bunting Tradition. Based on this explanation of the dialectical process of Pantauan Bunting, the researcher sees it as having component parts as shown in Figure 3. --- Distribution of Pantauan Bunting Traditions in the Besemah Tribe Community The Pasemah Tribe is commonly called the Besemah Tribe. The origins of the name 'Besemah' for this community are believed to derive from the name of a fish formerly found in the Pagaralam area of South Sumatra Province. The Semah fish is a type of goldfish that lives in murky streams among rocks that are overgrown with moss and shaded by trees. However, many people still call the Besemah Tribe the Pasemah Tribe. The name 'Pasemah' fell out of use because Dutch colonists found it difficult to pronounce the syllable "pa" and pronounced it "be" instead, so "Pasemah" became "Besemah" (Asrin et al., 2016; Refisrul, 2019). For the purposes of government authority and administration, the Besemah cultural area includes Pagaralam City, Lahat Regency, Empat Lawang Regency, Muara Enim Regency, and South Ogan Komering Ulu Regency in South Sumatra Province. Lahat Regency, specifically, includes the area of Jarai District, Tanjung Sakti District, and the area around Kota Agung District. The Besemah cultural area in Bengkulu Province includes Kaur Regency, Seluma Regency, and South Bengkulu Regency. Specifically, Kaur Regency includes Padang Guci Hulu District and Padang Guci Hilir District. Besemah culture is also found in Lampung Province, namely in South Lampung Regency. The spread of Besemah culture to various areas outside Pagaralam City was followed by changes and the formation of new cultural identities in these different regions, but they all still recognize that their origin lies in Pagaralam (Arios, 2019; Asrin et al., 2016). The Besemah Tribe is one of the tribes that inhabit Lahat Regency and its surroundings. The Besemah people are scattered in all areas both inside and outside the Province of South Sumatra. Thus, the Pantauan Bunting Tradition is not only to be found in one place in South Sumatra, but elsewhere too. In this study, the researchers selected three different villages in Lahat Regency as research locations: Kota Agung Village, Pulau Pinang Village, and Selawi Village. Based on the results of the study, the researchers can say that the Pantauan Bunting Tradition is still alive in these three areas. However, the numbers of members of the Besemah tribe differ across these areas. In Kota Agung Village, the Besemah tribe community is still widespread, and it can even be said that the majority of people in this village are from the Besemah tribe community. In Pulau Pinang Village, by contrast, it is fairly uncommon to find people from the Besemah tribe because the majority of people in this village are from the Gumai Lembak tribe.
Finally, in Selawi Village it is very difficult to find Besemah people because Selawi Village is located very close to the city center, where there has been significant mixing of tribes and cultures. The results of this research study also show that several factors cause the distribution of the Besemah tribe in Lahat Regency to vary from area to area. The first is physical factors, including location and distance. Kota Agung Village is quite far from the city center, which makes it difficult for people to move to other inhabited places. Therefore, the people of the Besemah Tribe tend to remain in the village of Kota Agung. On the other hand, in Pulau Pinang Village, members of the Besemah community are rare. This is because Pulau Pinang Village is a transition area between the village and the city center, so it is quite easy for people to move from one place to another. The concentration of the Besemah tribe in Selawi Village is almost the same as that in Pulau Pinang Village; it is quite difficult to find people from the Besemah tribe because Selawi Village is located very close to the city center. Social factors also affect differences in the distribution of the Besemah tribe in Lahat Regency. Social factors come from society itself. As explained above, in Kota Agung Village the majority of the people are from the Besemah tribe. The people in Kota Agung Village are still classified as a traditional community, and people still adhere to the existing traditions. The closed attitude of the community makes it difficult for this village to accept anything new that comes from outside, and it tends to maintain its original culture. Therefore, if we visited Kota Agung Village, the Pantauan Bunting Tradition would still be very easy to find. The next research area is Pulau Pinang Village. The distribution of members of the Besemah tribe in Pulau Pinang Village is sparse because Pulau Pinang Village is a transition area between the countryside and the city center, so there has been a lot of cultural mixing. The indigenous people of Pulau Pinang are not the Besemah tribe but the Gumai Lembak tribe; the ancestors who lived in this village came from the Gumai Lembak tribe, and most people in the village today are still from the Gumai Lembak tribe. The researchers therefore found it difficult to interview informants about the Pantauan Bunting tradition, since the Besemah tribe is only a minority here and its members are migrants. However, we can still see the Pantauan Bunting tradition in Pulau Pinang Village. The Besemah people in this village still continue the tradition even though they are few in number and are not the village's original inhabitants. In Selawi Village, members of the Besemah community are few and difficult to find. This is because this village has seen significant cultural mixing. This phenomenon cannot be avoided because Selawi Village is located very close to the city center, so many people from different ethnic groups migrate to this village on account of work, education, or marriage. In Selawi Village many people have moved with the times and hold weddings with a modern feel. This also makes the Pantauan Bunting tradition difficult to find in this village. The community has left behind many traditions considered old-fashioned. Social factors also affect the distribution of the Pantauan Bunting Tradition. One is amalgamation, the marriage of members of different tribes. Amalgamation can lead to assimilation and acculturation.
Assimilation is a meeting between two cultures that brings about a new culture, replacing and erasing the old one. Acculturation, conversely, is the meeting of two different cultures that creates a new culture but does not eliminate or abandon the old one. This phenomenon does not occur only in the Besemah tribe. Several studies show the same process in different locations and involving different populations, such as the amalgamation of Chinese and Madura ethnic groups in Bangkalan Madura Regency (Rahmatina & Hidayat, 2021). Another example is the marriage amalgamation of Batak and Malay ethnic groups in Pangkalpinang City (Siagian et al., 2021). Ethnic differences between Flores and Chinese in Trubus Village, Central Bangka Regency, have likewise been bridged through marriage amalgamation (Aprilia, 2021). A final example is that of Chinese amalgamated through marriage with members of indigenous peoples in Java (Winarni, 2017). Whether a community assimilates or acculturates within a particular cultural environment depends on the community itself. This is also the case with the Pantauan Bunting tradition: society chooses to maintain or abandon this tradition in daily life. Amalgamated communities can be found in almost every region, and the three villages that the researchers chose, Kota Agung Village, Pulau Pinang Village, and Selawi Village, are no exception. Based on the results of field observations, the researchers can say that although there have been many amalgamations in Kota Agung and Pulau Pinang Villages, the communities still maintain the traditional culture. Even where there is amalgamation, members of other tribes follow the customs and traditions of the Besemah tribe. On the other hand, in Selawi Village, where there has been much assimilation and outside influence due to the village's location very close to the city center, most people have slowly abandoned existing traditions in order to keep up with the times. The distribution of the Pantauan Bunting Tradition among the three villages thus varies, with some villages still having easy access to it, while in others it is more challenging or even extremely rare to come across. These disparities can be attributed to the factors mentioned earlier. Despite these differences, the Besemah people strive to uphold and sustain this tradition. They view the Pantauan Bunting as an integral part of their lives that holds significant cultural value, hence the community's commitment to its preservation and continuation. --- The Existence of the Pantauan Bunting Tradition in the Besemah Tribal Community in Lahat Regency The Pantauan Bunting tradition is one of the customs distinctive to the Besemah tribe. People invite the bride to come to their houses, and the community entertains the bride and groom, offering them various dishes ranging from snacks to more substantial food. Pantauan Bunting is a tradition constructed by the Besemah people in ancient times that still exists today, even though it is not as strong as it once was. The tradition is a particular form of human response to, and interaction with, the living environment; the formation of an environment is determined by several factors, one of which is the local community's culture (Astuti et al., 2021; Fitriana, 2018; Musafiri et al., 2016). The tradition of Pantauan Bunting is still found throughout all areas of Lahat District. Although in the modern era people compete to hold wedding celebrations with a modern feel and leave behind traditions considered old-fashioned, the Besemah community still continues the Pantauan Bunting tradition.
Elsewhere, forms of kinship solidarity have been found to include the role of parents, routine social activities, a feeling of communal vitality, and traditional Javanese customs and rules of thumb that still apply to everyday life. The existence of the Pantauan Bunting Tradition has many benefits for the community. In this study, the researchers selected three villages: Kota Agung Village, Pulau Pinang Village, and Selawi Village. The three villages were selected based on their differing distances from the city: one village is far from the city center, namely Kota Agung Village; one is in the transition area between the village and the city center, namely Pulau Pinang Village; and one is close to the city center, Selawi Village. With these differences in distance from the city center, the researchers can compare the villages to see whether villages located in close proximity to the city center and villages located at a greater distance from it differ in the extent to which, and the ways in which, they maintain the existence of the Pantauan Bunting tradition. The numbers of people living in each area of Lahat District (2022) are shown in Table 4. The Besemah tribe still maintains the Pantauan Bunting tradition. In Kota Agung, a majority of the people are from the Besemah tribe, and they still maintain the practices of the Pantauan Bunting tradition. By contrast, in Pulau Pinang Village the majority of the people are from the Gumai Lembak tribe, while people from the Besemah tribe form a minority. Nevertheless, the Besemah community in this village still carries on the Pantauan Bunting tradition. In Selawi Village, located very close to the city center, significant cultural mixing has caused people to abandon their culture and traditions. The community has become diverse, and it is very rare to find people from the Besemah tribe who still maintain the Pantauan Bunting tradition. We therefore see that differences in the three villages' locations and their distances from the city center affect, but never quite eliminate, the Pantauan Bunting tradition. Many still hold to their traditions firmly and ensure that this tradition does not fade or disappear even in an advanced modern era. --- Conclusion The Pantauan Bunting tradition originates in the Besemah tribal community. It takes place on occasions when a couple in the community is to be married. The bride-to-be is invited to come to the community members' houses in the village, where the community provides various dishes ranging from snacks to more substantial food. This tradition is usually enacted the day before the wedding celebration is held. In the Pantauan Bunting tradition, the bride-to-be is accompanied by an unmarried man and woman (bujang ngantat and gadis ngantat). The Pantauan Bunting Tradition is widespread in various areas across Lahat Regency. Several factors cause the Pantauan Bunting Tradition in Lahat Regency to differ in each area where it is kept. Physical factors include a village's location and its distance from the city center. Social factors come from the community itself. In this study, we find that Kota Agung Village is a village where the Pantauan Bunting tradition can easily be seen. This is because the majority of Kota Agung village community members are still classified as traditional and closed to outside influences. In addition, the people in this village have not intermingled much with other ethnic groups, so the character of the indigenous Besemah tribe in this village is still maintained.
In Pulau Pinang Village and Selawi Village, by contrast, the tradition has become difficult to find because these areas have become transition zones between village and city, where there have been many cultural confluences caused by amalgamation. Despite the advance of modern times, the existence of the Pantauan Bunting tradition continues to be preserved by the Besemah people in Lahat Regency. --- Author Contributions
International migration shows an increasing trend around the world. The majority of labor migrants, particularly low/semi-skilled migrants from low- and middle-income countries, migrate to destination countries leaving their family members behind, leading to an increasing number of transnational families. While non-migrating spouses often receive financial support in the form of remittances, their husbands' migration also creates numerous social and personal problems. This general qualitative study aimed to explore non-migrating spouses' experiences of sexual harassment/abuse and its impact on their mental health. Fourteen in-depth interviews were conducted to collect data. Participants reported experiencing harassment by men they knew, including their teachers and colleagues, who knew their husbands were abroad. None of the women reported taking any action against the perpetrators. Policy-level changes to spread awareness of sexual harassment, encourage victims to report such acts, and establish and implement appropriate laws are essential to mitigate this serious problem.
Introduction Migration is a global phenomenon, with 272 million international migrants in 2019; more than half are migrant workers and most are from low- and middle-income countries (LMICs) (International Organization for Migration [IOM], 2020). With over half of households having a current or returnee migrant, migration is common in Nepal (International Organization for Migration [IOM], 2018), and the trend is increasing (Government of Nepal [GoN], 2020). Labor migrants from Nepal are predominantly men (GoN, 2020; Sharma et al., 2014), with most migrating to work in India, Malaysia, and the six countries of the Gulf Co-operation Council (GCC) (GoN, 2020). Due to lack of money (Telve, 2019), stringent migration laws in host countries (Acedera & Yeoh, 2020), and the need to leave someone in charge of assets at home (Lu, 2012), most labor migrants from LMICs emigrate to destination countries alone, leaving spouses/family behind. This has resulted in an increasing number of transnational families globally (Démurger, 2015; Lu, 2012) and in Nepal (Lokshin & Glinskaya, 2009). There is a growing literature on the sexual harassment of female migrants, for example in China (Zong et al., 2017), but left-behind family members of labor migrants (Acedera & Yeoh, 2020; Shattuck et al., 2019), particularly non-migrating spouses, remain relatively understudied (Archambault, 2020; Fernández-Sánchez et al., 2020), especially in Nepal (Maharjan et al., 2012). The prolonged absence of a migrant partner affects spouses in several ways. While left-behind men whose migrant wives are the family breadwinners are also affected (Acedera & Yeoh, 2019; Elizabeth et al., 2020; Hoang & Yeoh, 2011), in patriarchal settings left-behind women are more affected, since women generally have a lower status than men and are considered dependent on them (Fernández-Sánchez et al., 2020). While non-migrating women receive financial support in the form of remittances, their husbands' absence often creates both social and personal problems (Kunwar, 2015). The societal perception is that women, in the absence of their partners, are vulnerable to sexual violence/abuse/harassment (Krug et al., 2002). The World Health Organization (WHO) identifies sexual violence as a serious problem with short- and long-term consequences for women's physical, mental, and sexual and reproductive health (WHO, 2021). The WHO defines sexual violence as: "any sexual act, attempt to obtain a sexual act, unwanted sexual comments or advances, or acts to traffic, or otherwise directed, against a person's sexuality using coercion, by any person regardless of their relationship to the victim, in any setting, including but not limited to home and work" (Krug et al., 2002). Studies have looked at sexual harassment experienced by female migrant workers in the carpet industry in Kathmandu Valley (Puri & Cleland, 2007) and by internal migrants in China (Hu et al., 2022). Other studies have reported that many women experience sexual abuse/harassment in the absence of their migrant husbands (Ahmed, 2020; Kamal, 2019). This study aimed to explore the experiences of non-migrating female spouses of Nepali international migrant workers regarding sexual harassment/abuse and its impact on their mental health. --- Methods and Materials This study employed a general qualitative approach using in-depth interviews. Women were recruited using snowball sampling (Green & Thorogood, 2018). The initial participant, whose husband was working abroad, was chosen for an interview.
She then introduced additional participants, who in turn recommended others. Owing to the ongoing coronavirus pandemic, 14 in-depth interviews (Van Teijlingen & Forrest, 2004) were conducted online, via Facebook Messenger and Viber, by a female interviewer with wives of international migrants residing within the Kathmandu Valley. A semi-structured interview guide was developed; participants were informed about the study objectives, confidentiality, anonymity, and the voluntary nature of participation, and verbal consent was taken from all prior to the interviews. Interviews were conducted in the Nepali language. Since data were collected virtually, participants requested not to be recorded. Therefore, the first author, who conducted all the interviews, took notes and after each interview checked these with the participants. All interviews were then transcribed and translated into English for analysis. After translation, the first author manually coded the transcripts using thematic analysis (Green & Thorogood, 2018). Participants' verbatim quotes form a major part of the findings. The names of participants have been changed to maintain anonymity. Ethical approval was sought from the Nepal Health Research Council (Ref: 163/2019) and informed consent was obtained from all participants prior to the interviews. --- Results --- Participants' Characteristics Most women (86%) identified themselves as Hindus. With 64% having higher secondary education or above, most participants were better educated than the average woman in Nepal. Most were engaged in some form of employment, such as government or private-sector jobs (57%) or business (14%), while 21% identified themselves as housewives. Thirty-six percent of participants were under 30 years old, 43% were between 30 and 40 years old, and 21% were over 40 years old (Table 1). --- Participants' Circumstantial Characteristics Most (57%) reported that their husbands were working in Gulf Co-operation Council (GCC) countries, and 57% resided in joint families. Most lived in rented accommodation (Annex 1). Most participants living in joint families shared that the relationship with their in-laws was unpleasant, which caused them stress, for example: My relatives told negative things about my character to my mother-in-law. Things like I am very beautiful, and I might have extramarital affairs with my colleagues and asked her to take care of me. After that my mother-in-law started to doubt my character and threatened and scolded me. My job is to collect money from the market so sometimes I am late to come home… [Pooja (pseudonym), 27 years] Another woman explained why she had to stop living with her husband's family: I lost my baby girl three days after birth. After that, my father-in-law and mother-in-law started to behave very badly with me. They used to put cold water and excess salt in my dal (lentil soup). They started to scold me and tortured me so much… I used to get to college in tears… They took my husband's income; we always provided them money... It's been a few months I started living separately, otherwise I would have died. [Radha, 44 years] Thus, many migrant wives lived in challenging circumstances in their husbands' absence. Participants elaborated that their husbands had to work abroad because they were unable to get good jobs in Nepal.
However, participants reported that their husbands loved them, sent them remittances, and communicated with them regularly. Owing to increased mobile phone ownership and internet availability in Nepal, women communicated with their husbands regularly using Viber, Messenger, and phone calls. --- Experiences of Harassment Most of the participants reported experiencing some form of harassment at some point in their lives; five had experienced sexual harassment by men who knew their husbands were abroad. They reported physical harassment in the form of touching/pinching, verbal abuse wherein the perpetrator used vulgar words, and repeated propositions for dating. Participants shared that, as women, they commonly faced similar but less severe forms of harassment in their lives. Two participants were sexually harassed by their teachers. …while writing my master's thesis, I was sexually harassed by a teacher… He was the head of that department and he asked me to date him and have sexual relations with him. He told me to visit him at a hotel in Nagarkot where he would check my thesis. Before this, he used to touch my body parts as if unknowingly and he told me that if I needed anything, he would fulfill my demands. I used to ignore all this… then he created problems in my thesis correction and viva process and stopped everything (stopped the natural progression of thesis correction and finalization). After some time, his head of department post was terminated, only then I could submit my thesis for correction and viva… through the new head of department. [Bindu, 27 years] Another woman reported similar abuse from her academic tutor: Touching/pinching, using vulgar words… these are very common in my life which I have been facing in the streets and public buses. But I had such bitter experience of sexual harassment from my teacher, I cannot forget. I was a science student. I passed my master's degree with a first division. However, I was sexually harassed severely and abused by my thesis guide (supervisor). He asked me to date him and meet him in different locations and used to touch my private parts. Because of his behavior, I felt scared, and depressed as well. I had decided to give up my master's degree. But, due to my sister's encouragement, I could continue my study. I changed my thesis guide and finally I completed my thesis. [Sushila, 27 years] Two women reported being harassed by work colleagues. Nepali workplaces follow a strict hierarchical structure wherein women facing harassment may find it difficult to challenge their seniors, for example: In our organization, there is a hard and strong chain of command and generally, people with a lower profile cannot oppose their seniors. In this case, most harassment cases might be overlooked or kept secret. In real working life, I have also faced some bitter experiences of sexual harassment in my career from our seniors and male friends like touching, body brushing, use of vulgar words, sexual messages via social media, and propositions for dating also. Some lady friends have shared to me their bitter experiences about sexual harassment from seniors like the request for sexual relations too. [Suman, 28 years] One participant, 34-year-old Sita (pseudonym), was sexually harassed by her brother-in-law: My husband has been in Dubai for the past 10 years… My sister-in-law and brother-in-law also live with us.
I feel unsafe from my brother-in-law… I am experiencing domestic violence from my mother-in-law and sister-in-law, and sexual harassment like verbal harassment… and touching and brushing body parts (from brother-in-law)… These accounts highlight the disturbing reality faced by women living alone while their husbands work abroad. These women encounter various forms of sexual harassment from multiple sources, such as their teachers, office colleagues, and even their own family members. Many migrant wives, particularly those who had experienced harassment, shared two main reasons why they thought migrant wives faced harassment. The first is the male mindset that migrant wives, in the absence of their husbands, are easy targets because they are alone and may need sexual partners. The second is patriarchal societal norms that blame and shame women as immoral instead of accusing the men who make the advances. It is very common in our life, people (men) think that married women need sexual pleasure, therefore, wives of migrant men who are alone in Nepal might need a sexual partner. With such a wrong ideology, men are motivated to sexually harass wives of migrant men. Such sexual harassment might be less severe type (touching, teasing) to extremely severe type (force dating and abuse)… because of my beauty, I have faced different types of sexual harassment like propositions for dating and sexual relationship too. Once my senior in the office proposed me to marry him. [Pooja, 27 years] Many participants mentioned that weak laws on sexual harassment and the poor implementation of existing laws, under which offenders do not get punished, encourage perpetrators of harassment and discourage victims from speaking up. Thus, our participants who had experienced harassment did not, or could not, report what was happening to them, despite being relatively empowered in terms of education and employment. --- Impact of Harassment Participants reported facing several negative effects of sexual harassment, including irritation, fear, frustration, humiliation, and depression. They reported that, in addition to their mental health, their academic/professional lives were also affected, as they could not perform their jobs well. I felt mental and professional stress due to sexual harassment. I was afraid, stressed, and depressed while my thesis supervisor was torturing and harassing me sexually. I was not able to complete my thesis. At that point, I was thinking of leaving my studies due to the mental pressure…. At that time, I was very stressed, depressed, and humiliated, because of my thesis guide… [Bindu, 27 years] It also negatively affected how they saw their employing organization: Yes, I feel humiliated, tortured, flustered, due to the sexual harassment at work from friends and seniors. Simple types of sexually harassing behaviors are common and we also take it lightly but some cases of sexual harassment are hard to forget and makes us nervous always (for fear it will happen again). It makes me lose respect for my organization and causes mental stress. [Suman, 28 years] Depending on their available support networks and life circumstances, sexual harassment may add to an already complicated life for some migrant wives. Such was the case for 31-year-old Dhanmati: After the death of my husband, people falsely blamed me for it.
Family members as well as the society, and colleagues tried to harass me sexually… that made me depressed and scared… I thought of committing suicide, but at the same time, I thought about my child and my responsibilities toward him… to protect me from such a bad environment, I took support from a kind male friend (her current boyfriend). Now he is helping me to solve my problem as a good friend and I am feeling quite safe now. [Dhanmati, 31 years] Even in such challenging circumstances, Dhanmati still felt the need to justify entering another relationship. This reflects the strict moral values that women are subjected to in patriarchal societies: My family members and my late husband used to blame me as characterless woman, I ignored it because I was not like what they thought of me. After my husband, I have to depend on some other person for my safety and needs. He is just a good friend. But in society sometimes I tell other people that he is my husband to protect me from other sexually deprived persons (sexual predators). (Dhanmati, 31 years) The issue of sexual harassment has thus created challenging circumstances for the wives of migrant men, prompting them to employ various strategies and approaches to shield themselves from such negative behaviors. --- Suggestions to Mitigate Sexual Harassment When asked what could be done to prevent such incidents, most migrant wives offered the following suggestions. First, society should take responsibility and create a safe space for migrant wives in the absence of their husbands. Instead of blaming and shaming migrant wives for being harassed and subjecting them to doubts about immoral behavior and affairs, a culture of respect should be developed. Second, the government and judiciary should formulate and implement strong laws against sexual harassment, and the judiciary should ensure that legal procedures are victim-friendly. Finally, awareness should be created regarding sexual harassment prevention and reporting in order to create a safe environment for women in society. --- Discussion In this study, we explored the experiences of sexual harassment among non-migrating wives of international labor migrants from Nepal and found that participants suffered several types of harassment, such as touching/pinching, verbal abuse, and repeated requests for dating. Sexual harassment is particularly common in patriarchal societies (Berman et al., 2000; Foulis & McCabe, 1997). Participants also noted that they were vulnerable to sexual abuse/harassment from men because of the patriarchal society in Nepal, which places women in a subordinate position. Less severe forms of sexual harassment such as name-calling, wolf-whistling, and other forms of public sexual harassment are extremely common in Nepal (Kunwar et al., 2014). All five women suffered sexual abuse/harassment at the hands of people they knew and who knew their husbands were away. Other studies have also found migrant wives being sexually harassed by people they knew and who knew their husbands were away, including male relatives and friends of their husbands (Ahmed, 2020; Kamal, 2019). Thus, we argue that the societal, and particularly the perpetrators', mindset is that non-migrating wives can be harassed because their husbands are not there to protect them, and also because they may be assumed willing to engage in an affair.
Sexual harassment in academic institutions (Dunne et al., 2004; United Nations Children's Fund [UNICEF], 2016) and workplaces (Kunwar et al., 2014) is common in Nepal, and in many cases the perpetrators are people the women know (Puri & Cleland, 2007; Thapalia et al., 2020). Although most of our participants were highly educated and employed, they did not mention reporting or taking an active approach to dealing with harassment. They reported being very disturbed by it, so much so that their studies/work were suffering. This is consistent with the existing literature, in which women tend not to report sexual violence (Krug et al., 2002; Puri et al., 2011) to the authorities because they feel ashamed and afraid of being blamed or mistreated (Krug et al., 2002). Many participants mentioned that their in-laws, neighbors, and people in their community generally doubted their loyalty towards their husbands and in some cases even accused them of infidelity. Similar to our finding, a qualitative study among migrant wives in rural Bangladesh (Kamal, 2020) found that some participants experienced sexual harassment from men in the family or community but did not report it or take any action, for fear that if they revealed being harassed, society would blame them for not being 'respectable' (Kamal, 2020). Participants reported facing several negative effects of sexual harassment, including irritation, fear, frustration, humiliation, shame, and depression. They reported that, in addition to their mental health, their academic/professional lives were also affected, as they could not perform their jobs well. According to McLaughlin et al. (2017), women suffering from sexual harassment are more likely to quit their jobs, leading to financial stress. These women were already vulnerable, as they were struggling with finances, missing their husbands, and adjusting to their husbands' families and to life without the intimacy and support of their husbands. The additional stress of being sexually harassed further complicated their personal and even academic/professional lives. Further, except for a few women who sought help from close family members/friends, the women were unable to seek help. Berman et al. (2000) observed that girls and women who experienced sexual harassment reported feeling afraid, intimidated, and belittled, and reported a decreased sense of confidence. With no studies to date focusing on the sexual abuse/harassment experiences of non-migrating wives, our study fills an important gap in the literature. It highlights the urgency of focusing on the protection and the needs of non-migrating wives. However, our study's qualitative nature is a potential weakness: because we only interviewed women residing in the Kathmandu Valley, recruited using snowball sampling, the generalizability of our findings is limited. We suggest further qualitative exploratory work on non-migrating wives in different parts of Nepal, including rural/urban and mountain/hill/terai (plain) areas. We also recommend quantitative surveys to produce more generalizable findings. --- Conclusion The findings of this study show that sexual abuse/harassment is common among non-migrating wives and that the perpetrators are usually men they know and men who know their husbands are away. None of the women experiencing harassment reported or took any action against the perpetrators, owing to fear of being blamed and hopelessness about obtaining justice in society.
Patriarchal social norms, in which family and community view non-migrating wives dubiously and question their loyalty to their husbands, and in which women are perceived as weak in the absence of their husbands, encourage perpetrators to commit such acts and discourage victims from reporting them. Our recommendations are as follows. First, we need to develop a culture of respect. Second, the government should initiate local support groups for migrant wives where they can discuss their problems and support each other. Third, the government should encourage women suffering from harassment to report the act(s), and the judiciary should formulate and implement strong laws against sexual harassment and ensure that legal procedures are victim-friendly. Fourth, psychosocial counselling should be made available at the local level, where women seeking help could go. Finally, awareness should be created in society regarding sexual harassment prevention and reporting, in order to create a safe environment for women. --- Role of Authors PS and EvT: Designed the study. KG: Conducted the interviews, collected and analyzed the data, and wrote the first draft. KG and SM: Wrote the second draft with advice from PS, EvT, and RCS. All authors reviewed the manuscript and provided critical feedback and suggestions. --- Conflict of Interest The authors declare that they have no competing interests.
Online health communities (OHCs) represent a popular and valuable resource for those seeking health information, support, or advice. They have the potential to reduce dependency on traditional health information channels, increase health literacy, and empower a broader range of individuals in their health management decisions. Successful communities are characterized by high levels of trust in user-generated contributions, which is reflected in increased engagement and expressed through knowledge adoption and knowledge contribution. However, research shows that the majority of OHCs are composed of passive participants who do not contribute via posts, thereby threatening the sustainability of many communities and their potential for empowerment. Despite this, the relationship between trust and engagement, and specifically the trust antecedents that influence engagement in the OHC context, has not been adequately explained in past research. In this study, we leverage social capital and social exchange theory frameworks in order to provide a more granular trust-based elucidation of the factors that influence individuals' engagement in OHCs. We collected data from 410 Brazilian participants in Facebook OHCs and tested the research model using partial least squares. The results confirm two new constructs, online community responsiveness and community support, as trust antecedents that influence engagement in OHCs, resulting in knowledge adoption and knowledge contribution responses. These findings contribute to the trust and engagement literatures and to social media research. From a practitioner perspective, the study findings can serve as an important guide for moderators and managers seeking to develop trusted and impactful OHCs.
Introduction As internet penetration becomes more extensive, the range of purposes for which it has been employed has equally increased. Some of these purposes contain the potential to educate and improve citizen well-being in ways that were previously not possible. This is particularly evident in the area of health. For example, online health communities (OHCs) enable individuals to interact with others who share similar health concerns in order to learn from their experiences and gain useful advice (Eysenbach et al., 2004; Hajli et al., 2014) and to reciprocate by sharing health information that is frequently based on personal experience (Ziebland & Wyke, 2012). Patients and those who support their care can use these networks to expand their understanding of diseases, treatments, or recommended healthy practices (Goonawardene & Tan, 2013; Ram et al., 2008; Rupert et al., 2016). They can source information about many aspects of medical conditions or concerns, making the issue seem less complex and more manageable. These networks also enable them to receive much-needed psychological support (Yan & Tan, 2014). This is particularly salient since a lack of informational and psychological support is consistently highlighted by those with serious illness and their caregivers (Luszczynska et al., 2013). OHCs can also increase inclusion by providing a supportive environment for those who may not be able to access health information easily due to location or socially stigmatized conditions and associated privacy sensitivities (Still, 2008), enabling them to overcome spatial or temporal limitations (Fan et al., 2014). For these reasons, OHCs are a valuable resource for expanding the understanding of medical conditions, treatments, or recommended healthy practices (Goonawardene & Tan, 2013; Ram et al., 2008), empowering patients to become more informed about how to self-manage their conditions and take an active role in their treatment, thereby improving clinical outcomes. In this way, these communities also have the potential to contribute to preventive healthcare (Goh et al., 2016), something that in a context of changing demographics and strained healthcare systems (England & Azzopardi-Muscat, 2017) has assumed greater social and economic significance. Notwithstanding the potential value of OHCs, research has shown that engagement in online health communities is highly variable: in some cases, as few as 1% of members contribute up to 75% of information (Carron-Arthur, 2014; Van Mierlo, 2014). The underpinning reasons for this appear to be trust related. For example, a recent survey found that only 4% of those surveyed said that they trust the health and medical information available on social media, 5% reported believing what they read on discussion forums, and only 15% stated that they trust information available on health websites (IPSOS MRBI and MSD, 2019). That deficit of trust is critical, as it limits individuals' engagement with OHCs and the positive potential contained therein. While research has begun to identify the factors that may increase an individual's trust in an OHC (Fan & Lederman, 2018; Fan et al., 2014; Fan et al., 2010), research on how those same factors influence OHC engagement remains limited. This gap in understanding has important implications as it limits the potential of these communities to support health self-management and improved health outcomes. This research addresses this deficiency in a number of distinctive ways.
First, it advances our contextual understanding of trust generation in OHCs, illustrating how trust antecedents influence engagement in OHCs and, through engagement, influence knowledge contribution and knowledge adoption. We theorize that three trust-related antecedents influence member engagement and behavioral trust (knowledge adoption and knowledge contribution). By examining this relationship and its formation pathways, our findings yield important implications for research and practice, providing insight into how more engaged membership of these communities and their associated positive outcomes can be supported and maintained. The fact that the majority of online community users are lurkers who do not participate by contributing or adopting knowledge (Amichai-Hamburger et al., 2016;Rafaeli et al., 2004;Sun et al., 2014) has amplified the need to understand how the trustworthiness of online health environments can be more effectively developed in order to increase the active participation of their members and accelerate the realization of these communities' empowering benefits. This study advances this understanding and is therefore not just interesting but important (Tihanyi, 2020). Second, our focus on OHCs complements the existing literature. For example, much attention has been paid to trust in online transactional contexts (e.g., Connolly & Bannister, 2007;Fang et al., 2014;Gefen et al., 2003;Lee et al., 2011;Pavlou & Gefen, 2004) and, to a lesser extent, to trust in general online social networks (e.g., Grabner-Kräuter & Bitter, 2015;Matook et al., 2015). While this is valuable, the findings of these studies are bounded to those contexts and research focusing on trust in an OHC context remains limited. Moreover, what does exist varies considerably in focus, ranging from examinations of cognitive and affective trust development mechanisms (Fan & Lederman, 2018;Tacco et al., 2018) to trust stage progression (Fan et al., 2014), language, and similarity cues that indicate member trustworthiness (Sillence, 2013) and the consideration of trust dimensions in tandem with several other constructs in the context of value co-creation (Zhao et al., 2013b;Zhao et al., 2015). Furthermore, no empirical research has examined the relationship between trust antecedents and engagement, or the consequents of that relationship, in an OHC context, despite the fact that the empowerment of OHC members has been shown to relate directly to their level of engagement with the community (Oh & Lee, 2012), manifested through information disclosure (Petrič & Petrovčič, 2014) and knowledge adoption (Johnston et al., 2013), both of which are trust behaviors. However, this research does just that, answering repeated calls to address this absence of research on engagement in OHCs-answering, in particular, the call to investigate whether the outcomes of such research are similar to those obtained in other contexts (Hur et al., 2019), as well as the call (Demiris, 2006) to clarify how engagement in OHCs might empower members to make healthcare decisions. Our research yields insights that contribute to the small but growing body of knowledge on engagement in the OHC context, empirically illustrating the role of engagement as a mediator between trust antecedents and behavioral trust responses in the unique context of OHCs. 
Additionally, this research answers calls from the IS field for trust research that focuses on trust targets other than technology (Söllner et al., 2016) by focusing on OHC members and their responses to nontechnical trust antecedents. Finally, our findings advance an understanding useful to community hosts. Engagement is critical in determining the sustainability of social networks (Thielst, 2011), and researchers (Wang et al., 2017) have shown that those who contribute informational support in an OHC context remain members of those communities for longer periods of time than those who simply seek and receive informational support. The findings of our study yield important insights into how trust can be more effectively generated in an OHC context to support increased member engagement, thereby providing valuable guidance for those seeking to promote the sustainability of these platforms. This study is structured as follows. First, we outline the theoretical background of this examination of trust in the OHC context. This includes a review of the relevant literature and the study hypotheses. Then, we describe the methodology employed to test the research model. Finally, we discuss the study findings and their implications for theory and practice. The paper concludes with an outline of study limitations and potential directions for future studies in this area. --- Theoretical Background The objective of this paper is to examine trust formation in OHCs. We examine the relationship between trust and engagement, both in terms of the specific trust antecedents that predict engagement and in terms of the trust responses related to engagement. To that end, we draw on social capital theory and social exchange theory. Social capital is a term used to describe the "norms and networks that facilitate collective actions for mutual benefits" (Woolcock, 1998, p. 155). It has been described (Beaudoin & Tao, 2007) as the actual or potential resources that result from social connections and a sense of reciprocity and trust, which can bring about outcomes at the individual and collective levels. It has been argued (Nahapiet & Ghoshal, 1998) that social capital encompasses distinct structural, relational, and cognitive dimensions. In the OHC context, the structural dimension is represented by social interaction links and ties between members of the community, as manifested in network density, interaction frequency, duration, and depth. These structural links are conduits for resources, such as credible information and experiential knowledge. The relational dimension encompasses relationship connections between community members and trust and identification with other members, as evidenced in the perceived support and perceived responsiveness of the online community. The cognitive dimension is represented by the shared understanding, values, and normative expectations of the community, all of which bind a community together and facilitate the achievement of its objectives. In the context of this study, it is proposed that the presence of these three dimensions is likely to influence a trust response and the intent to engage with the community. However, the unique nature of OHCs means that they are characterized by particular vulnerabilities: specifically, the adoption of incorrect health advice may result in significant consequences for the individual; similarly, the disclosure of personal health information represents privacy loss.
As a consequence, both of the trust responses examined in this study involve a risk-benefit calculus with more impactful outcomes than would be the case in many other online contexts. Knowledge contribution, in particular, contains the elements of a social exchange, one that takes place under a condition of risk, which in this case is loss of privacy. We thus employ social exchange theory (SET) as one of the theoretical frameworks guiding this study because the transfer of personal information is an exchange between social actors that involves awareness of the risks associated with the disclosure of this information (Youn & Hall, 2008). SET bridges disciplines, including anthropology, social psychology, and sociology, and conceptualizes social behavior as an exchange process in which individuals evaluate relationships in terms of their benefits and risks. It therefore emphasizes behavior as a process of resource exchange (Emerson, 1976) where one person evaluates the cost associated with exchanging a resource (such as health information) with someone else in order to receive a specific benefit (such as advice). The explanatory power of this theory has been applied to examine issues as diverse as psychological contracts (Rousseau, 1995), employee responses (Jones, 2010), trust generation, and privacy concerns (Luo, 2002). In an online context, it has been used to examine reciprocal intention in knowledge seeking (Tsai & Kang, 2019), online repurchase intentions (Chou & Hsu, 2016), and knowledge sharing in OHCs (Yan et al., 2016). The literature has made it clear that trust is only required in conditions of uncertainty and risk and is necessary for exchange relationships to succeed. This applies to disclosure relationships, as without some form of trust among online community members, most individuals would be reluctant to disclose personal information, particularly to online community members with whom they are unfamiliar. SET is also relevant to knowledge contribution from a benefit-evaluation perspective, as it emphasizes the intrinsic rewards that accrue from information sharing, which include feelings of belonging, network ties, trust, and community commitment, all of which are rewards that strengthen the further development of social capital. The unit of exchange (in this case, personal health information) may also contain intrinsic socioemotional value for the recipient of that information, motivating their desire to reciprocate (Cropanzano & Mitchell, 2005). That value relates to the fact that disclosure of such information demonstrates trust, respect, and appreciation of the recipient's expertise. For example, researchers (Foa & Foa, 1974, 1980) have long contended that units of exchange (including information) may provide symbolic benefit to the recipient, a benefit that conveys a meaning that transcends objective worth to the individual and enables these units of value to be exchanged in a more open-ended manner. This is particularly true in the context of an online health community, where the disclosure of personal health information and the request for guidance regarding the management of one's health conveys a message that the recipient's expertise is trusted, respected, and needed (Shore, Tetrick, & Barksdale, 2001).
In this way, the provision of personal health information may be evaluated by the recipient as having intrinsic social value (Redmond, 2015), which facilitates their participation in an altruistically motivated interpersonal exchange, motivating their contribution to the development of an online community that they value. The SET framework therefore provides an empirically tested scaffolding for exploring the normative aspects of exchange that affect online information-sharing choices, specifically trust in the online community. In the context of the current study, it indicates that when OHC members evaluate the informational and socioemotional supports that the community provides as trustworthy and aligned with their needs, they are more likely to actively engage through applying that information, contributing health advice, and disclosing their own experiences. Our decision to integrate social exchange with social capital is consistent with an increasing body of work that has recognized the value of this integrated approach in examining trust or trust-related factors in the online community context (Ho & Lin, 2016; Jin et al., 2015; Munzel & Kunz, 2014; Wang & Liu, 2019).

--- Trust

Trust is a construct of enduring interest whose value and contribution to interpersonal, interorganizational, and transactional relationships is widely acknowledged by researchers and practitioners. The former seek to understand the antecedents of trust, whereas the latter seek to use those insights to reduce risk and improve interaction outcomes in situations of uncertainty. Golembiewski and McConkie (1976, p. 131) remark that there is "no single variable which so thoroughly influences interpersonal and group behavior as does trust." Notwithstanding significant interest in the construct by the academic community, there are numerous conceptualizations of trust. The multiplicity of definitions and the conceptual diversity that surrounds the construct results from the different disciplines of researchers and their different research foci and emphases (McKnight et al., 1998). Nonetheless, some points of commonality are evident in the literature, with trust frequently defined in terms of optimistic expectations or confidence. For example, McAllister (1995) perceives trust in terms of positive expectations regarding consequent behavior, while Jarvenpaa and Leidner (1999) define trust as the optimistic expectation that the trusted person will act ethically and morally, even without being monitored (Moorman et al., 1992). While Hosmer (1995) describes trust as a positive expectation that the other party will not exploit or take advantage of a situation through opportunistic behavior, a slightly more nuanced approach is provided by Golembiewski and McConkie (1976), who view trust in terms of confidence in an event, person, or process based upon personal perceptions and experiences. Interestingly, they also view trust as a dynamic phenomenon, one that can evolve over time and can be influenced by positive experiences. Trust definitions frequently reference issues such as the potential for exploitation or perceived risk, thereby pointing to the fact that trust is critical for the success of all social interactions that involve uncertainty and dependency. In fact, Mayer et al. (1995, p. 711) note that the need for trust arises only in a situation of risk.
It has also been asserted that "willingness to take risks may be one of the few characteristics common to all trust situations" (Johnson-George & Swap, 1982, p. 1306). Engaging with an OHC, either through disclosing personal information or acting on health advice, places participants in a position of vulnerability and risk. Because it involves a significant dependency on that community (for the provision of trustworthy health advice) in order to ensure a positive outcome, the potential vulnerability and risk from opportunistic behaviors are correspondingly greater. Since this study focuses on the OHC context, it incorporates these perspectives and draws on Corritore et al. (2003) to define trust as "an attitude of confident expectation in an online health community context that one's vulnerabilities will not be exploited."

Trust research typically examines the relationship between trust antecedents, cognitive and affective trust (e.g., Fan & Lederman, 2018; Johnson & Grayson, 2005; Kanawattanachai & Yoo, 2002), and outcomes such as the intention to adopt technology. In our study, we advance knowledge in two ways. First, in our choice of outcomes: we focus on knowledge adoption and knowledge contribution, which aligns with our OHC context, and our outcome variables capture behaviors rather than intentions. Second, we focus on engagement as the mediating mechanism between trust antecedents and outcomes. There are two main reasons for this. First, the trust antecedents employed in this study overlap with cognitive and affective trust because they are all measured perceptually. For example, the literature has repeatedly confirmed that evaluations of information credibility reflect a cognitive trust judgment, and evaluations of community support and community responsiveness influence both cognitive and affective trust perceptions. Second, we believe that engagement is a more important mediating mechanism to examine than cognitive and affective trust because its strong association with behavior makes it particularly salient to the stability and continuation of online communities (Algesheimer & Dholakia, 2005). Thus, our model comprises an examination of cognitive and affective trust antecedents, engagement, and behavioral trusting responses, providing new insight into which type of trust antecedent and which pathway is most effective in influencing engagement and behavioral trust responses in OHCs. We turn now to an examination of engagement in the literature.

--- Engagement

Engagement, conceptualized in this study as a state of involvement and connection between the individual and community that creates value for the individual, as manifested by behavioral outcomes, has its roots in the marketing literature. Engagement has been found to be related to consumption and purchase behaviors (van Doorn et al., 2010), online brand community engagement (Wirtz et al., 2013), and online engagement and advertising effectiveness (Calder et al., 2009). Researchers have criticized the definitional confusion associated with engagement (e.g., Ray et al., 2014; Suh et al., 2017), and Cheung et al. (2011) observed that the definition, dimensionality, and consequent operationalization of customer engagement in many marketing studies is inconsistent and mixed.
Our definition of engagement builds on existing research (Webster & Ahuja, 2006; Webster & Ho, 1997) that defines engagement in a system as something that "holds [users'] attention and they are attracted to it for intrinsic rewards" (Jacques et al., 1995, p. 58). This definition is also consistent with Higgins's (2006, p. 442) description of engagement as being involved, occupied, and interested in something, and Calder and Malthouse's (2008) view of engagement as a state of involvement and connectedness between the user and the object of engagement that can motivate behavioral outcomes. This involvement and holding of attention is not temporally bounded to a specific instance of information exchange (Eldor, 2021; Eldor & Harpaz, 2015; Brodie et al., 2011) because a responsive and supportive community can provide an intrinsic socioemotional reward that equally interests and captures the attention of community members.

A small but steadily increasing body of work has started to examine engagement in more diverse and non-product-specific online community contexts where the focus is on interaction and value co-creation (Hollebeek et al., 2017). These include online magazine communities (Heinonen, 2018), social media platforms (Di Gangi & Wasko, 2016), online travel communities (Fang et al., 2018), online learning communities (Ryle & Cumming, 2007), online gaming communities (Chuang, 2020), and OHCs (Hur et al., 2019; Litchman et al., 2018), among others. Within these examinations, the locus of attention varies considerably, ranging from usage metrics, antecedents of online engagement, and the consequents of that engagement to motivations and valence, social identity, and telepresence; these variations have, at times, bounded the dimensionality and generalizability of these examinations. However, this focal diversity is accompanied by valuable conceptual work, including literature reviews that have provided much-needed structure and guidance regarding the construct (Suh & Cheung, 2019; Unal et al., 2017). One point on which most researchers agree is that engagement is context dependent (Brodie & Hollebeek, 2011; Brodie et al., 2011; Brodie et al., 2013; de Oliveira et al., 2016); as a consequence, there is a need for further research within more diverse social and cultural contexts in order to progress our understanding of the predictors and consequents of the construct and to increase its conceptual clarity (Cheung et al., 2011; Dessart et al., 2015; Suh et al., 2017).

Engagement and trust are related in that they share cognitive and affective elements; nonetheless, they remain distinctive constructs, as is evident in their conceptual composition and expression. For example, trust is frequently defined in terms of beliefs and an attitude of confident expectation that vulnerabilities will not be exploited, in contrast to engagement, which is typically conceptualized as a state of involvement and connection that creates value for the individual. Engagement has a stronger association with behavior and has been found to influence trust responses in the online community context (Islam & Rahman, 2016; Kang et al., 2016). For example, Ray et al. (2014) demonstrated the relationship between online engagement and the trust-related outcomes of satisfaction and knowledge contribution, while Rich et al. (2010) found that engagement mediates behavior.
However, the majority of studies examining the relationship between engagement and trust focus on online brand communities or discussion communities; as a consequence, whether that relationship extends to the specific OHC context remains undetermined. Both cognitive (rational evaluation) and emotional (indicating an affective perception) factors enable the expression of engagement (Kahn, 1990). By examining information credibility, community support, and community responsiveness as trust-related determinants of engagement, this study explores the cognitive and emotional components of engagement that trigger the behavioral activation component, demonstrating that engagement can effectively be measured through specific interactions (and that trust-related components can influence this). In doing so, it advances understanding of the nature of the trust-engagement relationship in an online health community context, answering calls (e.g., Ray et al., 2014) to better understand engagement through expanded frameworks that incorporate related constructs.

--- Model Development

The research model for this study is shown in Figure 1. It proposes that OHC engagement is influenced by information credibility, community support, community responsiveness, and the propensity to trust. Community support is conceptualized as a reflective second-order construct, with four dimensions corresponding to the four facets of community support (Chiu et al., 2015). The model also shows that engagement influences knowledge adoption and knowledge contribution behaviors, both of which are also influenced by the propensity to trust.

--- Information Credibility

In an OHC, participants seek credible information to help them cope with the uncertainty associated with the illness they are trying to overcome. This is a significant challenge, as much of the communication in online groups is subjective, discursive, experiential, and frequently anonymous (Fan et al., 2010). Moreover, it is a challenge with potentially serious consequences (Hilligoss & Rieh, 2008), as acting on incorrect information regarding aspects of a disease or its management could negatively impact health outcomes (Hajli, 2014; Hajli et al., 2014; Lober & Flowers, 2011; Maloney-Krichmar & Preece, 2005). We argue that information credibility influences engagement, knowledge adoption, and knowledge contribution in the online health community context. Because online communities are characterized by a lack of face-to-face interaction and an inability to verify expertise, perceived and behavioral uncertainty regarding the credibility of information provided by other members is amplified. As a result, members of these communities place greater reliance on signals of information trustworthiness, such as member feedback (Pavlou & Dimoka, 2006) expressed in comments or posts, treating these as important indicators of information approval (Fan et al., 2014; Flanagin & Metzger, 2013). The presence of such signals of information credibility has been shown to stimulate members' participation in general online communities (Benlian & Hess, 2011) and is likely to be equally relevant to OHC contexts. Based on the above discussion, we propose:

H1a: In the OHC context, information credibility is positively related to engagement.

Research by Fan and Lederman (2018) on patient OHCs found that perceived information credibility influences knowledge adoption.
Similarly, a reexamination of trust antecedents in internet-based health information (Sillence et al., 2019) confirms the predictive importance of information credibility on the intention to act on that information. Although the focus in this case was health websites rather than online communities, the same outcome is likely to extend to OHCs. We therefore propose:

H1b: In the OHC context, information credibility is positively related to knowledge adoption.

Moreover, because credible information benefits the recipient by enhancing their knowledge, it increases social capital and the recipient's desire to reciprocate through information contribution. Empirical support for this is found in prior work (Benlian & Hess, 2011) showing that quality-assured content shapes the trust perceptions of online community users, thereby increasing their participation behavior. Researchers such as Chan and Li (2010) have demonstrated that interactivity or engagement in a virtual context can be developed via structural or experiential routes, both of which influence reciprocity. The structural route comprises community features that provide credible information resources to users (with the experiential route comprising social bonds and enjoyment that provide socioemotional resources to users). These authors have shown that both routes to interactivity influence the norm of reciprocity and voluntary co-creation behaviors, which in the case of this study is expressed through the contribution of knowledge. The current study is therefore consistent with extant research in proposing that the provision of information which is perceived as credible strengthens the structural bonds that stimulate community involvement and connection, motivating reciprocal engagement as expressed through knowledge contribution. Based on the above discussion, we propose:

H1c: In the OHC context, information credibility is positively related to knowledge contribution.

--- Online Community Support

Supportive interactions among individuals in a traditional healthcare environment can play a protective role in countering the health-related effects and life-stressing consequences of a disease situation, thus contributing to participants' well-being (Cobb, 1976; Schaefer et al., 1981). OHCs can also perform this protective role by promoting social interaction. Further, participants benefit from learning from the experience of others, resulting in improved health outcomes and greater engagement in the self-management of disease (Yan & Tan, 2014). Community support is a multidimensional construct comprising facets such as emotional support, informational support, tangible support, network support, and esteem support (Mattson & Hall, 2011; Schaefer et al., 1981). While tangible support does not apply in the context of online communities, the other support categories do apply and serve as manifestations of social support within online communities. Such support provides an intrinsic socioemotional reward that equally interests and captures the attention of community members. We therefore propose:

H2a: OHC support is positively related to engagement.

Once an individual has been diagnosed with a disease, it is understandable that they would search for health information and advice regarding how best to proceed in treating their illness (Yan & Tan, 2014; Schaefer et al., 1981).
When members of an OHC perceive that they are receiving informational support, through salient information, valuable advice, and informed guidance on specific issues, this is likely to engender beliefs regarding the competency of other community members. In this way, informational support aligns with the ability dimension of trust, contributing to the decision to engage in trusting behavior. Emotional support, reflecting the demonstration of concern and care, fills the affective needs of the individual. Such concern and care has been described as empathy and sympathy (Yoo et al., 2014), encouragement and security, and care and affection. It helps to engender a sense that the community is positively intentioned and genuinely supportive of the individual and their wellbeing (Schueller, 2009) and thus aligns with the concept of benevolence (Mayer et al., 1995). Esteem support can be expressed through online interactions that reinforce the individual's self-esteem and their belief in their capacity to cope with the situation by moving through the stages of their health condition (Mattson & Hall, 2011). Because of their positive intention, such interactions are also analogous to the trust concept of benevolence. Finally, network support demonstrates that the individual is a member of a support network that is available to assist others, thereby providing the participant with a sense of belonging to the community and the ability to share their experiences (Yan & Tan, 2014; Schaefer et al., 1981; van Uden-Kraan et al., 2008). Research (Tsai & Hung, 2019) has shown that a sense of belonging or identification with an online community influences both cognitive and affective trust formation, which, in turn, predict continuous use intentions.

In the literature, explicit support is provided for the predictive influence of social support on engagement and trust-related behavioral outcomes. For example, recent work by Mirsaei and Esmaeilzadeh (2021) in the U.S. found that perceived social support (as an indicator of channel richness) influences engagement in OHCs, as well as patient participation in care management. The work of Wang et al. (2021) demonstrated that social support is a key predictor of a new user's continued engagement in an OHC. Similarly, Yang et al. (2017) revealed the relationship between perceived social support, trust in health information, and engagement in health information-seeking actions, while an earlier study by Jin et al. (2016) confirmed the influence of emotional support on healthcare knowledge adoption behavior within an online community context. We therefore propose:

H2b: OHC support is positively related to knowledge adoption.

As previously noted, Chan and Li (2010) confirmed that interactivity or engagement in a virtual context can be developed via experiential routes that include the provision of socioemotional resources to users and that interactivity developed in this way influences the norm of reciprocity and voluntary co-creation behaviors, which in the case of this study is expressed through the contribution of knowledge. More recently, Abidin et al. (2020) demonstrated the relationship between social support and trust formation within an OHC, showing its influence on knowledge sharing and community promotion. Based on this discussion, we propose:

H2c: OHC support is positively related to knowledge contribution.
--- Online Community Responsiveness

Many individuals join an OHC to increase their knowledge regarding a specific health concern and prefer to receive answers to their questions from others who have either experienced or are familiar with their health issue and can therefore provide informed insights. Consequently, an OHC that is perceived as being responsive to information requests by providing timely responses to posts is likely to result in more satisfied members and higher levels of member participation and is more likely to be evaluated as trustworthy (Zhao et al., 2013a). In this study, we argue that community responsiveness positively influences engagement, knowledge adoption, and knowledge contribution in the online health community context. If other community members respond speedily to member requests, this indicates that they have the competence to provide informed guidance, are willing to do so, and are interested in the needs of the community, demonstrating their ability, integrity, and benevolence. It also builds confidence in the community as a valuable source of socioemotional support for guiding decisions. As a result, individuals are more likely to increase their participation in the community over time by reciprocating through responding to other members' posts. Lin and Lee's (2006) examination of the determinants of success for online communities confirmed the importance of perceived responsiveness to behavioral intentions, which in turn increases member loyalty to the community, as indicated by participation in the community. Later work by Singh (2012) also showed that responsiveness can strongly influence the participation of new members of communities. Similarly, Casaló et al. (2013) found that response speed, value, and frequency influence online community members' satisfaction and their participation intentions, while Sheng (2019) empirically demonstrated that perceived responsiveness is a motivational driver of customer engagement. Although the context of these studies was general, technical, travel, and review online communities, it is a reasonable expectation that these relationships would equally extend to the OHC context. Based on this discussion, we propose:

H3a: OHC responsiveness is positively related to engagement.

Because a responsive online community provides a range of informed and supportive perspectives, this increases trust in the perceived competence, integrity, and benevolence of the community (Zhao et al., 2013a) and correspondingly reduces the perception of risk associated with acting on information provided by community members. This is consistent with Bagozzi and Lee's (2002) view that social processes are important determinants of decision-making. We therefore propose:

H3b: OHC responsiveness is positively related to knowledge adoption.

Support for a relationship between responsiveness and knowledge contribution is provided by the work of Rodgers and Chen (2005) in the context of an online breast cancer discussion board, which demonstrated that the orientation of members who make frequent posts tends to change over time from an emphasis on seeking information to one of supporting other members through the provision of information. Thus, we propose:

H3c: OHC responsiveness is positively related to knowledge contribution.

--- Engagement and its Relationship to Knowledge Adoption and Contribution

Our final hypotheses focus on the relationship between engagement and the outcomes of knowledge adoption and knowledge contribution.
We reason that the more a person actively participates in an OHC (for example, by posting questions or requesting advice), the broader the range of information they will accumulate from other members, which they can then evaluate and use to guide their behavior. In addition, the psychosocial and relational benefits that result from participation will increase their confidence in member benevolence and reassure them that they are making informed and correct decisions. Support for this position is provided by Jin et al. (2016), who found that the level of involvement positively affects online community members' adoption of healthcare information. Similarly, Zhou (2020) found that informational support and emotional support, through their effect on social capital, influence Chinese online community users' participation, as expressed through health knowledge acquisition and contribution. This is also consistent with the work of Liao and Chou (2012, 2017), which showed that prior positive exchanges with an online health community engender the trust necessary for leveraging a contributor's social capital for the purpose of information adoption. As a result, we propose:

H4a: OHC engagement positively influences knowledge adoption.

Because members of OHCs seek to protect their privacy (and avoid negative repercussions such as being trolled if dealing with a stigmatized health condition), a significant perceived risk is associated with the self-disclosure of personal health information. We contend that active participation in an OHC generates the trust necessary to overcome that perception of risk, reasoning that observing and learning from the posts of others and their responses generates user confidence in the expertise, integrity, and benevolence of other members. It also reinforces knowledge efficacy. Over time, we predict that the social capital this generates increases the desire to reciprocate and contribute to the community through the provision of information. Support for this is provided by Kuem et al. (2020), who found that Instagram community engagement positively influences active contribution behaviors. Cheung et al. (2015) found that the posts and member recommendations in online social shopping communities influence subsequent customer information contribution behavior, with the latter exerting the stronger effect. This confirms that positive feedback and the advice of other online community members reinforce learning and drive information contribution behaviors. Similarly, Chan and Li (2010) showed that interactivity in a virtual community stimulates the norm of reciprocity and voluntary behaviors. This is consistent with work (Rodgers & Chen, 2005) showing that OHC member orientation tends to progress over time from an information-seeking orientation to one that supports other members through providing information. We thus propose:

H4b: OHC engagement positively influences knowledge contribution.

--- The Propensity to Trust

Researchers such as Rotter (1967) have conceptualized trust as a personality characteristic that influences an individual's likelihood of trusting others. This has alternately been described as a trust propensity (Mayer et al., 1995) or as dispositional trust (Kramer, 1999) and indicates a general willingness to trust others across a broad range of trust situations and trust targets (McKnight et al., 1998).
The propensity to trust influences the amount and level of trust that a person has for another party in the absence of available or experiential information on which to base a judgment (Rotter, 1971, 1980). Because of this, the propensity to trust is particularly important in the early stages of relationships involving interpersonal interactions with unfamiliar actors, when there are insufficient situational cues or information about the trustee available (Bigley & Pearce, 1998; Colquitt et al., 2014; McKnight et al., 1998). Moreover, the propensity to trust retains its impact and can continue to influence trusting beliefs even after information about the trustee becomes available because it serves as a filter or lens through which the behavior of others is then viewed (Colquitt et al., 2007). Research has shown that dispositional trust influences trust beliefs in relation to web vendors (e.g., Chen et al., 2015; Gefen, 2000; Kim et al., 2009). Similar outcomes are evident in nontransactional contexts as well. For example, Tait and Jeske (2015) found that the propensity to trust predicts the disclosure of potentially sensitive and identifying information in an online information-sharing context. The propensity to trust also exerts a significant influence on risk-related beliefs and the intention to adopt health information from online health infomediaries (Song & Zahedi, 2007). Similarly, Heldman and Enste (2018) found that dispositional trust determines the level of trust placed in the recipient of private data, especially when the person is unfamiliar with this recipient. Thus, we propose:

H5a: The propensity to trust is positively related to OHC knowledge adoption.

H5b: The propensity to trust is positively related to OHC engagement.

H5c: The propensity to trust is positively related to OHC knowledge contribution.

--- Methodology

This study is aligned with the pragmatic philosophical paradigm, which encourages practical and applied action (Teddlie & Tashakkori, 2008). In order to provide an in-depth examination of the relationships between different constructs, specifically the relationship between trust antecedents, engagement, and trust responses, the most appropriate method was determined to be a quantitative survey.

--- Data Collection

To sample from the target population of participants of OHCs, we surveyed members of OHCs on Facebook Brazil. We identified six OHC types: pregnancy/breastfeeding/motherhood (PBM), nutrition/alimentation/dietary (NAD), beauty/esthetics (BES), disease treatment (DTR), fitness (FIT), and animal care (ANC). Respondents were asked to identify one of these online communities in which they participated and then to answer the remaining questions with regard to that community. Invitations were published in 10 OHCs, with a total of 813,223 registered members, after securing authorization from group managers. To encourage respondent participation, nine raffles of USD 20 were announced. Each raffle targeted different groups, such as moderators, managers, and participants. We received 602 responses. After eliminating those with high levels of missing data, we were left with 410 valid responses. As a preliminary validity test, we checked for alterations in the mean of the responses possibly associated with the time spent answering the questionnaire and found no relevant differences.
The majority of respondents were female (93.2%, n = 382); almost half were between 26 and 35 years old (46.6%); 58.0% were married and 56.8% were educated to at least college level. The majority (74.2%) had more than one year of experience using online groups on Facebook and were active participants. Most (81.2%) visited online health groups at least once a day, participating in online health groups with similar themes (79%). Almost half (48.0%) considered themselves to be active participants, regularly contributing through posting questions, responding to questions, and "liking" others' posts. In order to ensure that participants had enough knowledge and experience using OHCs to be able to assess information credibility, community support, and community responsiveness, respondents with low participation frequency were excluded from the study. This was achieved by retaining only respondents who self-reported accessing the online health group "once a day" or "once a week" (n = 27 respondents were removed). In addition, only communities with a sample size of at least 30 were retained for the sake of statistical representativeness. As a consequence, we additionally removed "fitness" (n = 13) and "animal care" (n = 12), resulting in a final sample of 358 responses across the four communities. Table 1 shows the sample distribution based on community type, frequency of use, and user experience.

The nature of our data collection, where no real information about the population is available, precludes a full assessment of nonresponse bias. However, we followed the procedures recommended by Armstrong and Overton (1977) to assess the likelihood of nonresponse bias. We compared the earliest and latest responses received, based on the assumption that those who respond less readily are likely to be more similar to nonrespondents than those who respond immediately. We assessed the differences in the means of each of the 40 items that make up our measures between the first and last 10% of responses received and observed only two significant differences (Knowledge Contribution Item 2 and Esteem Support Item 2). The limited observed differences suggest sufficient similarity between early and late responders, thus diminishing the risk of nonresponse bias as an alternative explanation for our findings.
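To make this check concrete, the following minimal Python sketch (not the authors' code) shows one way to run the Armstrong and Overton (1977) style comparison just described: item means for the earliest versus latest 10% of arrivals, tested item by item. All data and item names here are hypothetical stand-ins.

```python
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(42)
# Hypothetical stand-in data: 410 respondents x 40 Likert items,
# ordered by arrival time (the study's actual data are not reproduced here).
responses = pd.DataFrame(
    rng.integers(1, 6, size=(410, 40)),
    columns=[f"item_{i + 1}" for i in range(40)],
)

cut = int(len(responses) * 0.10)             # first and last 10% of arrivals
early, late = responses.iloc[:cut], responses.iloc[-cut:]

flagged = []
for col in responses.columns:
    t, p = stats.ttest_ind(early[col], late[col], equal_var=False)  # Welch t-test
    if p < 0.05:
        flagged.append((col, round(float(t), 2), round(float(p), 3)))

# With 40 tests at alpha = 0.05, about two "significant" differences are
# expected by chance alone, consistent with the two the study reports.
print(f"{len(flagged)} of 40 items differ at p < .05:", flagged)
```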
--- Measures

We measured information credibility using items from Lederman et al. (2014). The community support construct combined items from previous research, which measured four dimensions: emotional, informational, esteem, and network support (Chiu et al., 2015; Schaefer et al., 1981). Although these researchers had proposed tangible support as an additional dimension, it was not considered relevant in this study, as our virtual context provides no physical interaction between participants. Community responsiveness items were drawn from Wagner et al. (2014). Knowledge adoption was measured using items adopted from Chou et al. (2015), while knowledge contribution was measured using items from Ma and Agarwal (2007) and Zhao et al. (2013a). Engagement was assessed by adapting items from Webster and Ahuja (2006). We dropped one item (ENG 5) because it referred to how "fun" the respondent found the experience of using the system, which we considered inappropriate in the context of online health communities. The measures were translated into Portuguese and two pretests were conducted in order to retain meaning and idiomatic equivalence (Cha et al., 2007). In the first pretest, expert researchers in the field were invited to respond to the questionnaire and provide feedback to improve the items. For the second pretest, the process was repeated with the moderators and group managers of each OHC. During this process, we examined the validity of the scales based on statistical procedures proposed by MacKenzie et al. (2011). The research participants were asked to answer the questions using a 5-point Likert scale ranging from 1 = "strongly disagree" to 5 = "strongly agree." Overall, no significant changes to items were required, but some were slightly adjusted in order to maintain the meaning and to ensure compliance with Portuguese grammatical requirements. Appendix 1 shows the items used.

--- Data Analysis

The model was tested using partial least squares (PLS) structural equation modeling (SEM) as implemented in SmartPLS (Ringle et al., 2015). PLS-SEM is appropriate when the objective is to identify key driver constructs in a relatively complex model that deals with multiple latent variables and relationships, without being subject to rigorous distributional assumptions (Hair et al., 2017b). Power analysis using GPower (Buchner et al., 2014) indicated that our sample was more than sufficient to detect a medium effect size of f² = 0.15 (Cohen, 1988) with 90% power.

--- Results

--- Measurement Model

Table 2 shows the individual items and cross-loadings. All but one item loads at greater than 0.71 on its intended construct, meaning that the item loading accounts for more than 50% of the overlapping variance, which is considered excellent (MacKenzie et al., 2011). We considered the marginal value of ENG2 (λ = 0.67) to be acceptable, as it does not pose any threat to the other measures of reliability and validity of the construct. Following the rule of thumb in Tabachnick and Fidell (2014) and Comrey and Lee (2016), we found that the majority of the cross-loadings are below the value of 0.32 (10% of overlapping variance), while scattered occurrences are under 0.45 (20% of overlapping variance), a threshold considered low enough to rule out notable interconstruct confounding effects. We also focused on the few occurrences where values were above 0.45 but below 0.55 (30% of overlapping variance), such as in relation to the information support latent variable (items IS01, IS02, and IS03). We then performed a post hoc analysis with the structural model by eliminating the entire construct to evaluate the potential effect on the stability of the structural model results (which will be presented in the subsequent sections) and found no substantial differences. However, we decided to maintain the construct since it is conceptually linked to the community support construct. The average variance extracted (AVE) values of all constructs (Table 3) are above the threshold of 0.5 (Fornell & Larcker, 1981). Cronbach's alpha (CA) and composite reliability (CR) values were all above 0.79, indicating satisfactory reliability. Finally, the square root of the AVE for each construct is higher than the correlations with the other constructs, thus providing evidence of discriminant validity. Variance inflation factor (VIF) values were all below 2 (the highest was 1.68), indicating that multicollinearity did not exert a biasing influence on the results (Hair et al., 2017b).
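For readers who want to reproduce these reliability and validity checks outside a PLS package, the sketch below computes Cronbach's alpha from raw items, plus composite reliability (CR), average variance extracted (AVE), and the Fornell-Larcker square root of AVE from standardized loadings. The loadings and data are hypothetical; only the 0.67 value deliberately mirrors the marginal ENG2 loading noted above.

```python
import numpy as np

rng = np.random.default_rng(0)

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) matrix of one construct."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

def composite_reliability(loadings):
    """CR from the standardized outer loadings of one construct."""
    return loadings.sum() ** 2 / (loadings.sum() ** 2 + (1 - loadings ** 2).sum())

def ave(loadings):
    """Average variance extracted: mean squared standardized loading."""
    return float(np.mean(loadings ** 2))

# Hypothetical loadings for a four-item construct; 0.67 mirrors ENG2 above.
lam = np.array([0.82, 0.67, 0.78, 0.85])
print(f"CR        = {composite_reliability(lam):.2f}")  # satisfactory above ~0.70
print(f"AVE       = {ave(lam):.2f}")                    # threshold 0.50
print(f"sqrt(AVE) = {np.sqrt(ave(lam)):.2f}")           # Fornell-Larcker: must exceed
                                                        # correlations with other constructs

# Alpha on random (uncorrelated) items lands near zero, illustrating the metric.
sim_items = rng.integers(1, 6, size=(358, 4)).astype(float)
print(f"alpha (random data) = {cronbach_alpha(sim_items):.2f}")
```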
In any data collection with a single instrument at a single period in time, common method bias (CMB) is a potential alternative explanation for the results. To mitigate this risk, we first undertook procedural remedies (Podsakoff et al., 2003) through careful construction of the survey to deal with ambiguity, conciseness, uniqueness of content, and lack of focus. We then empirically assessed the potential concern of CMB using two procedures. First, we performed Harman's single-factor test (Podsakoff et al., 2003). The unrotated factor solution did not converge on a single factor, and the largest share of covariance explained by any single factor was 19.5%. Second, as suggested by Kock (2015), we assessed CMB in our structural model using lateral multicollinearity assessment (Kock & Lynn, 2012). All the variance inflation factors (VIF) were below the recommended threshold of 3.3, with the highest being 1.21. Given our procedural remedies and the lack of evidence in the empirical assessments, we do not consider CMB to be a significant threat.

--- Hypothesis Testing

The results provide partial support for our hypotheses. Consistent with H1a, information credibility showed a positive influence on engagement (β = 0.15, p < 0.001). Similarly, the data showed a positive relationship between information credibility and knowledge adoption (β = 0.22, p < 0.001), thus confirming H1b. However, the relationship between information credibility and knowledge contribution was not significant (β = -0.07, ns), rejecting H1c. With regard to the influence of engagement, the data show that engagement influences knowledge adoption (β = 0.24, p < 0.001) but does not exert any significant influence on knowledge contribution (β = 0.05, ns), thus supporting H4a but not H4b. The propensity to trust was found to exert a positive influence on engagement (β = 0.10, p < 0.001) and knowledge adoption (β = 0.15, p < 0.001), offering support for H5b and H5a, respectively. However, no effect was observed on knowledge contribution (β = 0.04, ns); thus H5c is not supported. In summary, our findings indicate that knowledge adoption is influenced by information credibility, community responsiveness, engagement, and the propensity to trust, each of which exerts a similar effect. Community support influences knowledge adoption indirectly through its effect on engagement. In the case of knowledge contribution, the source of influence is more bounded, with community support exerting a strong influence on this behavioral trust response. As shown in Figure 2, these findings support many of the proposed relationships and explain 41% of the variance in engagement in OHCs, 45.2% of the variance in knowledge adoption, and 23.8% of the variance in knowledge contribution within this context.

--- Mediation Analysis of the Role of Engagement

We examined the importance of engagement as a mediator variable in the model following the procedures of Hair et al. (2017b). After confirming that our measurement model is reliable and valid, a crucial prerequisite to determining mediation effects, we estimated the direct and indirect effects by bootstrapping the complete model with 5,000 subsamples. This technique implements the method of Preacher and Hayes (2004) and others (Hayes, 2013; Zhao et al., 2010) in the context of PLS-SEM. Following Zhao et al. (2010) and Hair et al. (2017b), we subsequently calculated the mean and standard errors of the paths in the model and determined the multiple mediation roles of engagement, revealed by the significance of the corresponding direct and indirect effect paths, as shown in Table 4.
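The bootstrap itself was run in SmartPLS; purely as an illustration of the percentile-bootstrap logic of Preacher and Hayes (2004), the numpy sketch below (not the study's code) estimates a single indirect path using simulated stand-ins for community support (x), engagement (m), and knowledge adoption (y).

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulated stand-in data (n matches the study's final sample size).
n = 358
x = rng.normal(size=n)                       # e.g., community support
m = 0.4 * x + rng.normal(size=n)             # e.g., engagement (a-path built in)
y = 0.3 * m + 0.1 * x + rng.normal(size=n)   # e.g., knowledge adoption (b-path + direct)

def slopes(X, y_):
    """OLS slope coefficients (intercept column added, then dropped)."""
    X = np.column_stack([np.ones(len(y_)), X])
    beta, *_ = np.linalg.lstsq(X, y_, rcond=None)
    return beta[1:]

boot = np.empty(5000)
for i in range(5000):
    idx = rng.integers(0, n, n)                               # resample with replacement
    a = slopes(x[idx, None], m[idx])[0]                       # x -> m
    b = slopes(np.column_stack([m[idx], x[idx]]), y[idx])[0]  # m -> y, controlling for x
    boot[i] = a * b                                           # indirect effect

lo, hi = np.percentile(boot, [2.5, 97.5])
# Mediation is supported when the 95% percentile CI excludes zero.
print(f"indirect effect: mean = {boot.mean():.3f}, 95% CI = [{lo:.3f}, {hi:.3f}]")
```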
The results demonstrate that, in relation to knowledge adoption, engagement partially mediates information credibility and community responsiveness, while fully mediating community support. These results additionally clarify the importance of user engagement in OHCs. The indirect effect of community responsiveness on knowledge adoption represents about 20% of the total effect. We further explore the mediating role of engagement in our post hoc tests below.

--- Post Hoc Tests

Following our assessment of the formal hypotheses, we conducted post hoc tests to explore whether community type or user experience influenced the relationships in our model. We conducted a multigroup analysis (MGA) in PLS. We assessed measurement equivalence using the measurement invariance of composite models (MICOM) procedure (Henseler et al., 2016), which assessed configurational and compositional invariances across the groups.

--- Community Type

Our analysis of community type was restricted to the two largest communities to ensure adequate sample sizes. Table 5 shows the MGA results for the nutrition/alimentation/dietary (NAD) (n = 127) and pregnancy/breastfeeding/motherhood (PBM) (n = 131) groups. We found partial measurement invariance between the two groups for all latent variables in the model, which allowed for path coefficient comparisons by means of a multigroup analysis. The specific differences between the groups' paths are discussed below.

The results provide interesting insights regarding commonalities and distinctions. First, community support influences engagement in both types of communities, with the effect in PBM (β = 0.47, p < 0.001) being stronger than in NAD (β = 0.20, p < 0.001; βPBM − βNAD = 0.27, p < 0.03). Other factors contribute to engagement in both cases. In NAD, engagement also depends on information credibility (β = 0.34, p < 0.001) and community responsiveness (β = 0.20, p < 0.001), while in PBM it depends on the propensity to trust (β = 0.18, p < 0.05). For both communities, engagement influences knowledge adoption (βNAD = 0.23, p < 0.05; βPBM = 0.30, p < 0.001), but does not exert a significant influence on knowledge contribution. In addition to the influence of engagement, information credibility also influences knowledge adoption to a similar degree for both types of community (βNAD = 0.24, p < 0.001; βPBM = 0.26, p < 0.001). However, interesting distinctions emerge because knowledge adoption in NAD strongly depends on community responsiveness (βNAD = 0.32, p < 0.001; βNAD − βPBM = 0.23, p < 0.06), while in PBM it is dependent on community support (βPBM = 0.23, p < 0.05; βNAD − βPBM = -0.21, p < 0.13). Moreover, although knowledge contribution is dependent on the effect of community support for both community types, in the case of NAD it is marginally important (β = 0.20, p < 0.10), whereas for PBM it exerts a much stronger effect (β = 0.78, p < 0.001; βNAD − βPBM = -0.57, p < 0.001). Finally, the propensity to trust plays a marginal role in influencing knowledge contribution in both groups (β = 0.14), with no statistical difference observed between the two groups (p = 0.99). Further results are shown in Appendix 2.
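The between-group path comparisons above were produced by SmartPLS's MGA routines; purely as an illustration of the underlying logic, the sketch below uses a permutation test on a single OLS slope as a stand-in for a structural path. The group sizes match the study (n = 131 PBM, n = 127 NAD) and the simulated effects (0.47 vs. 0.20) mirror the reported support-to-engagement paths; everything else is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(11)

def slope(x, y):
    """OLS slope of y on x; stands in for a PLS path coefficient."""
    return np.polyfit(x, y, 1)[0]

# Simulated stand-in data for the two communities.
n_pbm, n_nad = 131, 127
x1 = rng.normal(size=n_pbm); y1 = 0.47 * x1 + rng.normal(size=n_pbm)  # PBM
x2 = rng.normal(size=n_nad); y2 = 0.20 * x2 + rng.normal(size=n_nad)  # NAD

observed = slope(x1, y1) - slope(x2, y2)

# Permutation test: shuffle group labels and recompute the path difference.
x_all, y_all = np.concatenate([x1, x2]), np.concatenate([y1, y2])
diffs = np.empty(2000)
for i in range(2000):
    idx = rng.permutation(len(x_all))
    diffs[i] = (slope(x_all[idx[:n_pbm]], y_all[idx[:n_pbm]])
                - slope(x_all[idx[n_pbm:]], y_all[idx[n_pbm:]]))

p = np.mean(np.abs(diffs) >= abs(observed))
print(f"path difference = {observed:.2f}, permutation p = {p:.3f}")
```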
--- Mediating Role of Engagement by Community Type

Examining the mediating role of engagement by community type (Table 6) also highlights the important role of context, showing that the effect of engagement differs according to community type, particularly in relation to the outcome of knowledge adoption. For example, engagement partly mediates the effect of information credibility on knowledge adoption for both NAD and PBM communities. However, while it fully mediates the effect of community support on knowledge adoption in the case of PBM, it has no effect in the case of NAD. In addition, when the mediating effect of community responsiveness is examined, the opposite outcome applies, with engagement partly mediating in the case of NAD but not in the case of the PBM community. This important distinction in outcomes confirms that the mediating effect of engagement on knowledge adoption differs according to the nature of community type. However, engagement does not mediate the effect of community support on knowledge contribution in either community.

--- Experience

We assessed differences across three levels of online community experience (1-6 months, 6 months to 2 years, and more than 2 years). To avoid confounding with community type, before performing group analyses, we compared the two types of OHCs, NAD (nutrition/alimentation/dietary) and PBM (pregnancy/breastfeeding/motherhood), and found that no differences existed in the distributions of users based on experience (χ² = 2.88, df = 2, p = 0.236), indicating no significant cross-effects between experience and community type. Following this, we performed all the steps of multigroup invariance analysis to ensure that we could compare the structural paths of the three experience levels. Since we were interested in comparing three user-experience levels, we employed the sequence of three pairwise comparisons with the Bonferroni correction to avoid Type I error inflation (Hair et al., 2018). We found configurational and compositional invariance across the groups. Additionally, we tested for the equality of the composite mean values and variances and found no statistical evidence of differences.

The analysis of user experience (Table 7) provides interesting insights regarding the mechanics of engagement, knowledge adoption, and knowledge contribution in OHCs. The findings show that engagement is influenced by community support (β1-6M = 0.46, p < 0.001; β6M-2Y = 0.38, p < 0.001; β2+Y = 0.34, p < 0.001), irrespective of the length of user experience in online health communities (largest intergroup path coefficient difference: β1-6M − β2+Y = 0.11, p > 0.46). However, the findings also speak to the changing nature of the trust development process. For example, although significant only at p < 0.10, the results show that in the early stages of experience, the propensity to trust exerts an influence on engagement (β1-6M = 0.18, p < 0.10) that lessens as the user gains experience. This indicates an experience-dependent repertoire of factors that illustrates the progressive nature of engagement in online communities.
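As a minimal illustration of the two checks described above (the chi-square independence test and the Bonferroni-adjusted pairwise comparisons), the sketch below uses scipy with hypothetical contingency counts; only the reported test statistics (χ² = 2.88, df = 2, p = 0.236) come from the study.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical contingency counts (rows: NAD, PBM; columns: 1-6 months,
# 6 months-2 years, 2+ years); chosen only to show the API, not taken
# from the study's data.
table = np.array([[30, 45, 52],
                  [28, 40, 63]])
chi2, p, df, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, df = {df}, p = {p:.3f}")

# Three pairwise experience-group comparisons -> Bonferroni-adjusted threshold.
alpha_adjusted = 0.05 / 3
print(f"per-comparison alpha = {alpha_adjusted:.4f}")
```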
--- Mediating Role of Engagement by User Experience

Further assessment of the mediating role of engagement according to the user's level of experience with OHCs supports the overall finding that engagement is particularly relevant in the initial and medium stages of experience, particularly in relation to the outcome of knowledge adoption. The effect of community support on both knowledge adoption and knowledge contribution is partly mediated by engagement during the early (1-6 months) stage of experience. As experience increases (6 months to 2 years), the effect of community support on knowledge adoption is fully mediated by engagement, while the effect on knowledge contribution is not. For users with the greatest amount of experience, engagement no longer mediates the effect of community support. No mediation is present for information credibility and community responsiveness. These variations in effect are important, as they reveal that the level of users' experience is an important consideration when seeking to understand the mediating role of engagement on the influence of community support in the OHC context.

--- Discussion

This study examines the factors that influence trust and engagement in OHCs. It does so by leveraging social capital theory and social exchange theory to examine the relationship between trust and engagement, as reflected in trust antecedents that predict engagement and trust responses that result from that engagement. This study extends theory in a number of important ways, contributing significantly to the IS literature by providing a more complete understanding of the relationship between trust and engagement in the OHC context, as well as illustrating the need for incorporating contextual influence when examining this relationship.

--- Contributions

First, this study shows that the key trust responses (knowledge adoption and knowledge contribution) are influenced by different community attributes. Knowledge contribution in OHCs is directly influenced by perceived community support, a factor that relates to whether community members offer information and advice that helps individuals cope with their health situation and health-related decision-making. Previous research has pointed to a diverse range of possible factors that can influence knowledge contribution in online social communities, ranging from IT-based features and identity verification (Ma & Agarwal, 2007); performance expectancy, self-efficacy, and professional experience (Tseng et al., 2014; Wang & Lai, 2006); and the influence of self-presentation, peer recognition, and social learning (Jin et al., 2015) to the rewards associated with altruism and fulfillment (Lin & Huang, 2013), social presence and identification (Shen et al., 2010), and even egoistic motives (Yu et al., 2011). Moreover, previous research has conceptualized knowledge contribution as being dependent on interconnected prior variables, including member satisfaction (Chou, 2020; Ma & Agarwal, 2007). In contrast, our research shows that community support is the dominant, indeed singular, driver of knowledge contribution in the OHC context and that its influence is direct and independent of other variables. Adding to the richness of the contribution is the fact that, because we conceptualized community support as a second-order construct comprising four support subdimensions (emotional, esteem, information, and network support), our findings also clarify the exact nature of that support. This advances insight into how support can be implemented in an online health context, something that is of particular importance to the sustainability of these communities. On the other hand, our findings show that knowledge adoption in OHCs is influenced directly by information credibility and community responsiveness, and indirectly by community support. This extends the work of Fan and Lederman (2018), which focused on the influence of information credibility (and contributor attributes) on knowledge adoption in OHCs, by showing that community responsiveness and support are equally important considerations for understanding the formation of this trust outcome.
A second contribution relates to the centrality of engagement to knowledge adoption as part of the trust formation pathway. Our findings show that engagement in OHCs is driven by information credibility, community support, and community responsiveness. However, although our findings show that engagement influences knowledge adoption, it does not influence knowledge contribution behavior. This may indicate that the privacy concerns of OHC members are distinctively stronger than would be the case for members of more general virtual communities and that additional trust generation mechanisms are required to ensure that increased engagement translates into knowledge contribution. The fact that community support is the only attribute that influenced knowledge contribution points to the likely nature of such mechanisms. This contrasting finding places a cautionary pause on the assumption that increased engagement in virtual communities will automatically motivate member cooperation (Porter et al., 2011). In our OHC context, it did not.

An associated contribution relates to the direction of the trust-engagement relationship, an issue that has long been a matter of contention in the academic community and the focus of calls (Islam & Rahman, 2016) for empirical work to determine whether trust is an antecedent or consequent of engagement. In conceptualizing trust in terms of both distinct trust antecedents and the trust responses that arise from engagement, this study progresses beyond the limited binary perspectives that tend to characterize such discussions, affording much-needed insight into the cyclical nature of that relationship in the OHC context. The findings confirm that trust antecedents influence engagement and that a positive and direct relationship exists between engagement and one specific trust response, that of knowledge adoption. Our findings build on the work of Kang et al. (2016), which indicated a positive relationship between engagement and trust in a general online community, but we deepen that insight by showing the precise pathway and behavioral expression of that trust response, as well as the limits of this relationship in the OHC context.

A third contribution relates to the importance of context. The study sample was predominantly composed of women respondents, a reflection of the fact that participants of online health support communities are more likely to be women (Ginossar, 2008); further, women are the population of interest in the context of the specific online communities examined in this study. For this reason, our finding of the importance of community support as a driver of knowledge contribution should be evaluated in relation to the study context (OHCs) and the nature of the respondent sample, both of which are interconnected. For example, research shows that women place particular value on community support in the virtual community context (Klemm et al., 1999; Sun et al., 2020). This may be due to gender-based socialization (Meyers-Levy & Loken, 2015; Reevy & Maslach, 2001), the greater emphasis that women have been shown to place on cues (Porter et al., 2012; Riedl et al., 2010; Rowley et al., 2017), and/or the fact that women's perception of risk and severity of consequences is stronger than that of men (Garbarino & Strahilevitz, 2004).
Since community support is a multidimensional construct that is strongly aligned to the trust components of perceived ability, benevolence, and integrity, all of which reduce perceived risk, the fact that it should emerge as the predominant influence on knowledge contribution (a risk behavior) for the study sample is not entirely surprising. In light of this fact, the study findings progress the understanding of the factors that influence trust formation, engagement, and trust outcomes in OHCs that are particularly relevant for women. Nonetheless, it is interesting that engagement did not produce a stronger effect in relation to knowledge contribution for this sample. The explanation may lie in other factors specific to this sample that inhibit knowledge contribution. For example, Amichai-Hamburger et al. (2016) identified a number of psychological factors that may potentially influence individuals' lack of participation in online community discussions. These include individual differences, such as the need for gratification, personality dispositions, lack of time available, and self-efficacy, in addition to social group processes and technological issues. Additional issues such as introversion and social inhibition have also been shown to inhibit knowledge contribution. For example, Nonnecke and Preece (2001) found that nearly 30% of respondents were shy about posting, and Rafaeli et al. (2004) found that those with high introversion scores tend not to actively engage in online groups. Confidence in having valuable information to contribute may also explain this outcome; Ray et al. (2014) found that the contributions of the most knowledgeable online community members do not derive purely from engagement but also from a competing sense of knowledge self-efficacy. Similarly, Preece et al. (2004) found that nearly one quarter of respondents explained their lack of participation in the online community in terms of having no knowledge to offer.

Our post hoc assessments of differences across community types further reinforce the sensitivity of trust responses to context, with different antecedents showing greater importance in the different types of communities. For example, in this study, we compared two types of communities. The nutrition/alimentation/dietary (NAD) community places particular value on structured, precise, and timely information, while the pregnancy/breastfeeding/motherhood (PBM) community values experiential knowledge. We thus consider the former a more transactional type of community and the latter more relational. The study findings show that information credibility and engagement influence knowledge adoption for both types of communities, but they also show that in the case of NAD, community responsiveness directly affects knowledge adoption. In the case of this community type, structured, transactional aspects of community evaluation, such as information credibility and community responsiveness, influence the engagement decision. On the other hand, in the case of PBM, knowledge adoption is directly influenced by community support. Similarly, for this latter type of community, the decision to engage is influenced by less structured but more relational assessments, such as the evaluation of the community support level. Although the findings show that the strength of community support influences knowledge contribution outcomes for both communities, the behavioral response is stronger in the relational community than in the transactional community.
These findings provide a particularly important contribution to the body of knowledge because they show that user engagement and active participation in OHCs, as manifested through the adoption or contribution of knowledge, are influenced by an assessment of the adequacy of specific types (transactional or relational) of information, which vary according to different types of health communities. A related contextual issue is the influence of user experience on engagement in OHCs. The findings of this study show this to be an evolving and phased dynamic, with engagement, knowledge contribution, and adoption outcomes shifting according to increased user experience levels. For example, the findings show that in the initial phase of exposure to the OHC, the user's propensity to trust influences their decision to engage with the community, as manifested through knowledge contribution and adoption responses. However, as the user's experience with the community increases, that influence diminishes while the effect of community responsiveness on engagement grows. As the user's experience further increases, there is a shift toward a more informational, transactional perspective. In this more mature phase, it is utilitarian evaluations of community knowledge, such as information credibility and community responsiveness, that primarily sustain knowledge adoption. The motivation for contributing knowledge also changes in line with increasing levels of experience, becoming entirely sustained by community support. Our analysis of the mediating role of engagement by community type and user experience further reinforces the importance of context in understanding trust and engagement in OHCs. The different forms of mediation between the PBM and NAD communities and across levels of user experience show the complexities of the influence of context. In doing so, we highlight the need for other scholars interested in understanding engagement in online communities to further theorize community types and experience levels in order to provide more granular insight into how the characteristics of their context influence user engagement and, in turn, shape behavioral outcomes. --- Implications for Practitioners The insights from our research provide practical guidance for social media practitioners interested in increasing participation and engagement in online communities, particularly communities that provide information or advice on sensitive issues, such as health information. First, the results clearly suggest that online community administrators should employ organizational mechanisms to increase user trust in the information provided by participants. This can be achieved through the inclusion of design features that allow participants to rate answers in terms of their helpfulness, thereby guiding users of the community to information that has been deemed credible and useful by other users. Second, helpful answers should be made easily accessible to users through the provision of search options and Q&A design features. Utilizing design features that increase the speed of access to relevant and helpful answers will in turn increase users' perception of community support and responsiveness. The resultant increase in engagement will strengthen the likelihood of users not only using that information but also contributing their own experiences, reinforcing the norm of reciprocity that underpins perceived community support and responsiveness.
Similarly, the provision of design features that enable users to interact with community members who share similar backgrounds and experiences will influence their readiness to use the information provided and share information with others. The implications of the study findings have the potential to improve user engagement and result in more trusted and successful OHCs. As the mechanisms by which users adopt knowledge vary according to community type, moderators should tailor how knowledge is structured in a way that reflects the needs of their end users. In OHCs where the availability of precise, structured, and timely information is of the highest importance, this could be achieved through online community designers providing easily accessible drop-down search lists based on frequent word tags, which also show the date of provision of the response. However, in communities where social relationships are valued as much as, if not more than, factual information, website designers should provide links to "my experience testimonials" that are accessible on the basis of the type of information required. --- Limitations and Future Research This study provides insights that increase our understanding of the relationship between trust and engagement in OHCs, but as is the case with all studies, it also contains limitations. First, our results are based on a sample of respondents who are users of OHC websites in Brazil. Previous research has called for greater attention to the need for research in countries other than the US, UK, and Australia (Fan & Lederman, 2018). Our work thus addresses an important gap in the literature. Nonetheless, our sample also bounds the findings to some extent. While it is unlikely that national culture would fundamentally alter the dynamics that underpin the trust and engagement relationship, it is possible that culture may influence some aspects of trust formation. For example, a comparative analysis of the trust-based drivers of health disclosure (Lin et al., 2016) found evidence of different cultural emphases, and previous research by Gefen and Heart (2006) showed differences in trust formation and trust outcomes in individualist and collectivist cultures, albeit in an online transaction context. Consequently, it is possible that perceived information credibility may exert a higher trust-formative influence on people from individualist cultures, whereas people from collectivist cultures may place greater weight on community responsiveness and knowledge contribution. Future research testing the generalizability of the study results by applying this framework to other national cultures can determine whether that is in fact the case. A second point worth noting is that our sample was composed predominantly of women. Gender-related behavior is contextually influenced (Deaux & Major, 1987), and the OHCs (breastfeeding/pregnancy/motherhood; beauty/aesthetics; and nutrition/diet) that form the contextual backdrop to this study are normatively skewed toward women, thus making women the predominant population of interest. Because these types of health information are typically of greatest interest to women, our sample is relevant for the context of our study and provides important insight into the specific factors influencing trust formation, engagement, and trust outcomes in OHCs, which are particularly important for those respondents.
It does, however, bound the research findings, and future studies using more normatively neutral community types would enable greater opportunity for gender-based comparison. Similarly, while in this study we measure gender as a biological construct, future studies that include the effect of social, psychological, or cultural constructs of gender orientation could improve the understanding of gender differences in relation to online trust formation and engagement. For example, Hupfer and Detlor (2006) demonstrated the value of measuring specific self-concept traits that are associated with gender identity in relation to predicting web shopping site design preferences, rather than assuming their existence as a consequence of biological sex. Third, we focused on OHCs, which are characterized by the need for timely and accurate advice and where inaccurate information can result in very serious consequences for community members. In such a high-stakes environment, the emotional, informational, and network support provided by an online community may explain the strength of influence on knowledge adoption and knowledge contribution behaviors. Future research conducted in different (nonhealth) contexts would be beneficial in determining whether the strength of the relationships between trust antecedents, engagement, and trust outcomes remains the same, regardless of context type. Finally, in light of the finding that the mechanisms by which users adopt knowledge vary according to community type, future research could focus on chronic and acute health conditions to determine the role of medical conditions in knowledge adoption and knowledge contribution outcomes. Examinations of responsiveness that include an explicit recognition of different valences and measure their influence on engagement in OHCs also represent a valuable avenue for future research. --- Conclusion OHCs have the potential to positively impact healthcare outcomes through user value co-creation, but the way in which that value is achieved has received limited attention to date. This study empirically examines the factors that influence how individuals engage and co-create value in OHCs. It extends existing theory through the inclusion and empirical testing of new variables that have received little attention as antecedents of trust in the OHC context: online community support and online community responsiveness. It also extends insight into trust formation by examining the predictive influence of these constructs on different trust responses as evidenced through engagement, knowledge adoption, and knowledge contribution. In doing so, it illustrates that different community attributes drive the formation of knowledge adoption and knowledge contribution responses in OHCs, and it also reveals the differing influence of engagement as a formation pathway for both of those responses. Finally, conceptualizing trust in terms of distinct trust antecedents and trust outcomes provides more granular insight into the cyclical relationship between trust and engagement. Our findings contribute both to the trust and engagement literatures and to social media research knowledge. From a practitioner perspective, the study findings can serve as a guide for moderators and managers seeking to develop trusted and impactful OHCs.
--- When trust responses are examined across the spectrum of user experience (Table 8), it is evident that knowledge adoption is consistently influenced by information credibility (β1-6M = 0.26, p < .05; β6M-2Y = 0.38, p < .001; β2+Y = 0.29, p < .05). However, the relationship between other factors and knowledge adoption is more variable as the user acquires greater experience of OHCs. For example, engagement influences knowledge adoption in the early stages of user experience (β1-6M = 0.22, p < .10) and again among the most experienced users (β2+Y = 0.35, p < .001), as does community support in the early stages (β1-6M = 0.36, p < .04). However, after this initial period, community responsiveness emerges as the dominant factor influencing knowledge adoption (β2+Y = 0.36, p < .02). A similar change in influence applies to knowledge contribution. In a context of limited experience, it is initially influenced by engagement (β1-6M = 0.30, p < .06) and the propensity to trust (β1-6M = 0.38, p < .06). However, as the user's experience increases, the influence of community support also increases until it becomes the most influential factor (β2+Y = 0.45, p < .07). Figures 3, 4, and 5 illustrate how the evolution of user experience influences engagement, knowledge adoption, and knowledge contribution, respectively.
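To make the kind of subgroup comparison reported in Table 8 concrete, the following is a minimal sketch, assuming standardized OLS fitted separately within each experience band rather than the structural-model estimation actually used in the study; all column names (info_cred, engage, comm_support, comm_resp, k_adopt, experience) are hypothetical.

```python
# Sketch: compare standardized coefficients for knowledge adoption across
# user-experience bands. Not the authors' estimator; an OLS approximation.
import pandas as pd
import statsmodels.api as sm

def subgroup_paths(df, outcome, predictors):
    """Fit one standardized OLS per experience band and tabulate betas."""
    rows = []
    cols = predictors + [outcome]
    for band, g in df.groupby("experience"):   # e.g. "1-6M", "6M-2Y", "2+Y"
        z = (g[cols] - g[cols].mean()) / g[cols].std(ddof=0)  # z-score
        fit = sm.OLS(z[outcome], sm.add_constant(z[predictors])).fit()
        for p in predictors:
            rows.append({"band": band, "predictor": p,
                         "beta": fit.params[p], "p": fit.pvalues[p]})
    return pd.DataFrame(rows).pivot(index="predictor", columns="band",
                                    values="beta")

# Usage, assuming a DataFrame `df` with the hypothetical columns:
# print(subgroup_paths(df, "k_adopt",
#                      ["info_cred", "engage", "comm_support", "comm_resp"]))
```

Reading across the resulting columns reproduces the pattern discussed above: which predictor dominates knowledge adoption shifts from band to band.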
The contemporary worldwide disaster COVID-19 spread among people like wildfire. This surge disrupted the lives of many and caused numerous fatalities. Staying at home rather than going outside became one of the central messages of the moment. Populations around the world also observed that the epidemic infects Muslims and non-Muslims alike. In response to this worldwide disaster, affected countries shifted much of their work to a work-from-home arrangement. This study was conducted by collecting readily available secondary data from previous studies and recently published articles on several worldwide pandemics. It discusses the effects of COVID-19 on the population, particularly on organizations adopting the practice of working from home, and examines the effectiveness of social distancing in limiting the spread of the virus. The study also discusses the benefits of working from home and presents Islamic perspectives that reinforce the discussion.
INTRODUCTION Several contagious diseases have infected large numbers of people in the past and continue to pose threats, including AIDS (Acquired Immune Deficiency Syndrome), plague, influenza, cholera, Ebola, and SARS (Severe Acute Respiratory Syndrome) (Cutsem et al., 2016; Kupferschmidt, 2016). In late 2019, another such disease emerged: COVID-19. According to WHO (2020), the coronavirus was initially referred to as the "2019 novel coronavirus" and is now officially named COVID-19. Like SARS, the virus causes a respiratory infection that can take the form of pneumonia. It is an infectious disease that transfers through saliva excretions or droplets produced when an affected person sneezes or coughs near others. At the time of writing, no vaccine or specific medicine had been shown to prevent its spread entirely, although vaccination helps to lessen the severity of illness in infected individuals (ElBagoury et al., 2021). Minimizing physical contact and keeping oneself as clean as possible therefore remain key remedies against its spread. Under these circumstances, it is challenging for people to work at their usual workplaces (Haleem et al., 2020). Organizations cannot stop working while waiting out a disease of unknown duration; they need to manage their work from settings with less physical interaction for the sake of everyone's safety. The possibility of working outside the work territory therefore gives new significance to working from home. Working from home is an appropriate facilitation for employees under such circumstances: employees can dedicate their duty hours outside the boundaries of the organization. It provides a balance between work and family life (Christensen et al., 2013), and employees can work from home or any other mobile location (Venkataraman et al., 2018). Because it supports employee well-being, the work-from-home facility also helps to retain employees (Rodwell & Martin, 2013). Moreover, it is beneficial for organizations seeking to strategize their policies more effectively, and efficiency is thereby increased (Jansen & Hlongwane, 2019). Working in a safe and sound environment enables employees to augment their productivity (Christensen et al., 2013), and this enhancement matters in both their personal and professional lives. According to Taamneh et al. (2018), employees' internal satisfaction with their work contributes to organizational well-being. There is ongoing debate about which practices are employee friendly and help to balance work and family life (Shakir, 2019). With such facilitation, employees feel more empowered to work, and these relaxations at work increase their loyalty; conversely, refusing to let employees work outside the boundaries of the workplace can result in a lack of loyalty (Kim & Lee, 2019). Increases in employee efficiency and effectiveness have also been identified (Jansen & Hlongwane, 2019). This study addresses a situation in which the virus is infectious and affecting much of the population worldwide (Haleem et al., 2020), while giving particular attention to individuals who need to go out either to work or to pray, one of the main points of discussion at present.
Game theory makes it easier to understand why, to avoid bad outcomes, we need to strategize our plans and routines (Begley et al., 2020). In a zero-sum game, one participant's gain is exactly offset by another's loss; more broadly, game theory models rational decision making in which each participant's payoff depends on the choices of the others. Applied to this setting, individuals change their behavior according to the situation in order to avoid the worst outcomes. In the case of COVID-19, for example, the principal available strategy is social distancing, which keeps one out of contact with the infected. Those who do not at first appreciate the situation will soon change their behavior after observing that those who take precautionary measures stay safe from the pandemic (a stylized payoff-matrix sketch of this reasoning appears just before the Social Distancing subsection below). Social distancing is thus one of the key ways to protect oneself during outbreaks of infectious or viral disease. From the above discussion, the following points and questions arise and are addressed in this study: 1. What are the effects of COVID-19 in the organizational context, on employers and employees? 2. Is social distancing effective in avoiding the infectious disease? 3. What is the Islamic perspective on these measures? --- LITERATURE REVIEW --- Effect of Coronavirus on the Population COVID-19, a pandemic disease, escalated like wildfire, and its impact is increasing day by day (Asrani et al., 2021). According to WHO (2020), the pandemic started on 31 December 2019 and has continued into a second wave (Xu & Li, 2020). The pandemic rapidly infected around 428 million people worldwide, and the reported number of deaths has reached 5.19 million (WHO, 2021). Affected countries across many territories have been touched by this disease, which has disrupted the social lives of individuals living in the affected areas; by some estimates it has affected a third of the world's population (Begley et al., 2020). To limit its impact, quarantine was recommended, that is, a state of isolation imposed for hygienic reasons to avoid contact during epidemics and infectious diseases (Altinoz et al., 2012). Preventing the spread of the disease is the best remedy for infectious and viral illnesses (WHO, 2020); isolating the affected from the unaffected is therefore the primary protective measure. Among the seven continents, Asia is the largest and most populous (Moen et al., 2017) and was identified as the starting point of COVID-19, a function of its population size and tourism. North America, although only fourth on the list of populous continents, recorded the maximum number of cases in the United States in recent surveys, and Europe, the third most populous continent (Moen et al., 2017), contains the majority of countries facing high case counts (WHO, 2020). At the time of writing, the United States was the most affected territory, with 104,671 cases; Italy was second, with 86,498 positive cases and the highest number of fatalities, 9,134; and China, where the outbreak began, had the highest recovery count, with 74,971 patients. Other countries, such as Spain, Germany, and France, also reported high numbers of positive cases, while Iran had the highest rate of fatalities and a continuing rise in positive cases among Muslim countries.
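As flagged above, the following is a stylized payoff-matrix sketch of the game-theoretic reasoning; the payoff numbers are invented purely for illustration and are not calibrated to any epidemiological data.

```python
# Toy "distancing game": my expected utility depends on what others do.
import numpy as np

# Rows: my action; columns: others' action. 0 = distance, 1 = mingle.
# Illustrative utilities (higher is better): distancing is safe but costly;
# mingling pays off only while almost everyone else is distancing.
payoff = np.array([[3.0, 2.0],    # I distance
                   [4.0, -5.0]])  # I mingle: large loss if others mingle too

def best_response(p_other_mingles):
    """Expected-utility-maximizing action given others' behavior."""
    eu = payoff @ np.array([1 - p_other_mingles, p_other_mingles])
    return "distance" if eu[0] >= eu[1] else "mingle"

# As the perceived share of non-distancers (and hence infection risk) rises,
# distancing becomes the rational choice: the behavior change described above.
for p in (0.0, 0.2, 0.5, 0.9):
    print(f"P(others mingle)={p:.1f} -> {best_response(p)}")
```

The threshold structure, where distancing becomes optimal once perceived risk is high enough, mirrors the dynamic in the text: individuals switch behavior after observing that precaution-takers stay safe.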
--- Social Distancing Distancing involves isolation and separation, and social distancing means maintaining a specified distance so as to disassociate oneself from others. It reflects individuals' level of closeness within the same or different groups in their social associations (Arenas et al., 2004), shaped by differences in age, race, ethnicity, religion, culture, or gender (Elaine, 2008), and the tenderness and warmth felt within a similar group or toward those who are outsiders (Bogardus, 1947). According to Robert E. Park (1924), social distance is the diminution of the understanding, involvement, or intimacy present in an individual or collective relationship. --- Globally Infected Muslims Among all populations, the Muslim ummah is also suffering from this chronic disease, with several countries experiencing wide spread of the disease amid a lack of knowledge, precautions, and resources. A major surge followed Friday prayers, where large numbers of Muslims gathered; countries affected after such gatherings include Malaysia, Lebanon, Turkey, Egypt, Iraq, Iran, Jordan, Sudan, and Saudi Arabia (Dwyer, 2020). Iran was found to be the most affected country among Muslim-majority states, with 21,638 cases to date and numerous deaths (WHO, 2020). Other Muslim-majority countries affected by the viral disease include Pakistan, Qatar, Saudi Arabia, Bahrain, Egypt, Iraq, Lebanon, Kuwait, and the United Arab Emirates. The wide spread among Muslims was attributed to religious gatherings (Emont & Shah, 2020), which caused a severe dispersion of the virus, and migrants from other countries then carried the virus to their host countries. Workplaces, offices, shops, and businesses of every kind were suspended, and air service was cut off by the affected countries to keep circumstances from worsening. Globally, organizations are offering work-from-home facilitation to their employees to cope with the country-level COVID-19 situation; people were required to stay at home and do their work from there (Lufkin, 2020). Even the major holy places of worship, the Grand Mosque of Mecca and the Prophet's (SAW) Mosque of Medina, were closed to the public to minimize contact (AFP, 2020). --- Theory of Maqasid Al-Syariah This study relates to this theory from the perspective of caring for oneself and others by preventing the disease through vaccination and other precautionary measures. The theory of Maqasid Al-Syariah focuses on the betterment of society, explaining that the purpose of Shariah is happiness and security (Bahri et al., 2019). It emphasizes the Maslahat of humanity, in which social benefit is paramount, and holds that individuals should gain economic and social benefit with no harm to others, thus creating a balanced environment (Amaroh & Masturin, 2018). The theory also ensures that followers are protected on every side, i.e., their health, money, property, and community are protected to ensure the welfare of society (Nordin et al., 2017). --- Islamic Perspective on Health Safety Some verses of the Quran teach a sense of self-care and safety. "Everything good that happens to you (O Man) is from God, everything bad that happens to you is from your own actions" (Quran 4:79). The pandemic is a test from Allah, so people who protect themselves and take precautionary measures will be saved from the COVID-19 disease.
Those who ignore the safety measures and keep socializing will undoubtedly be the ones who suffer. "O mankind: Eat of what is lawful and good on earth" (Quran 2:168). This verse of the Quran supports the prohibition of eating non-halal food, which can be dangerous to health; non-halal food can harbor several kinds of germs that are dangerous and even poisonous to the human body. COVID-19 was reported to have originated in a non-halal food market (Jewell, 2020) and propagated to others who had not consumed such food. --- METHODOLOGY The methodology section summarizes the research design. Data were gathered by collecting information from recently published research articles. The researchers collected articles whose content related to the global pandemic and its effect on the community, along with online articles reporting up-to-date effects of the COVID-19 pandemic globally. Secondary data were searched across different journals with the help of Google Scholar, which helped to locate the most recent and relevant articles from Elsevier, ScienceDirect, MDPI, WHO, and ResearchGate. The researchers also gathered recent statistics from the World Health Organization and bureaus of statistics. --- RESEARCH IMPLICATIONS --- Benefits of Work from Home Work from home offers benefits to both parties, i.e., employers and employees. --- Benefits to Employers Several studies have discussed the work-from-home facility for employees (Agus & Selvaraj, 2020; Beauregard, 2011; Fuller & Hirsh, 2019; Germeys et al., 2019; Hytter, 2007; Judge & Ilies, 2004; Ke & Deng, 2018; Mauno & Ruokolainen, 2017; Mayo et al., 2016; McNall et al., 2009; Russell et al., 2009). Researchers have identified the following benefits to organizations from the practice of working from home. a. Operating Cost. Offices normally cater to many employees, who typically accommodate themselves on the premises for nine-hour workdays. The use of office stationery, large volumes of documents, electricity, and other utilities creates substantial expenses for the organization, in addition to the rent incurred for the office space. Working from home helps to reduce these operating costs. b. Productivity. Productivity increases when employees feel more relaxed while working, owing to a sense of autonomy and a change in the working environment. Employees feel more comfortable and work more proficiently, lessening their work anxiety and stress and thereby increasing workability and productivity. c. Turnover. Several studies have found reduced turnover when employees are satisfied with their jobs, since satisfaction leads them to stay. Flexibility at the workplace intensifies happiness and joy, removes stress, and reduces the number of leaves taken. Turnover is itself a huge cost for employers: if employees leave frequently, it generates bad word of mouth among prospective employees, and the time and money costs associated with hiring and the overall recruitment activity are incurred by the employer. d. Behavioural Positivity. The feeling of workplace freedom benefits employers in the form of a positive employee attitude. e. Impact on the Environment. Reduced travel lowers fuel costs and the associated environmental costs.
Because employees communicate via electronic media and need not travel, the environmental benefit also accrues to the employer. --- Benefits to Employees According to previous studies on working from home (Allen et al., 2016; Amorim & Santos, 2017; Fuller & Hirsh, 2019; Mansour & Tremblay, 2018; Sharma & Yadav, 2019), employees also benefit when they are allowed to work from their own place, as follows: a. Work-life Balance. Working from home creates a balance between work and family life. When employees are given deadlines, they schedule their working time accordingly to complete tasks on time, making it easier to care for dependents while doing their work. b. Productivity. Productivity increases because employees work under less pressure from the workplace environment. c. Stress. Stress is reduced by balancing the tasks associated with work and life. Employees care for their families while also working at their jobs; this balance keeps them mentally relaxed and therefore reduces stress. d. Less Cost. Staying at home and not traveling every day to a workplace, which is sometimes far from home, reduces the time and money costs associated with commuting. e. Satisfaction. Satisfaction is higher when an individual works in a family environment with flexible hours; this job autonomy gives the person more satisfaction with both job and family life. f. Less Health Hazard. If someone at the workplace suffers from an infectious disease, it can be transmitted to people working nearby; working from home reduces this risk. --- Limitations of Work from Home Work from home can seem like a perfect and ideal arrangement (Madell, 2019) in which an employee has no trouble getting ready, need not work in an office environment or under supervision, and saves on fuel (Bussing, 2019). However, alongside the benefits to employers and employees, it also has some limitations, described below. a. Lack of Productivity. Without supervision, it can become difficult to manage work-related tasks (Michelle Kiss, 2019); employees sitting outside the boundaries of the workplace may find it hard to work efficiently and effectively. b. Absence of Concentration. The balance between work and family life is disturbed if an employee is not able to reconcile the two; this time mismanagement may trigger a lack of concentration (Madell, 2019). c. Telecom Cost. Staying online and connected with colleagues and subordinates can impose telecommunication costs on employees and employers alike, generating large bills for the parties involved (Michael Hurd). d. Less Coordination. Coordination can suffer when employees are not connected face to face; some issues cannot be handed over without the presence of the relevant staff, causing disruptions in services (Michael Hurd).
Thousands of Muslim migrants who were in Iran for worship were infected by this virus (Emont & Shah, 2020). The situation has prompted many controversies among religious bodies, such as the view that the places of worship, especially the Mosque of Makkah and the Prophet's (SAW) Mosque of Madinah, should remain operational until the Day of Judgment. Taking a practical view and seeking to prevent the current scenario from worsening, however, it became mandatory to avoid places where people gather and are in direct contact with one another. Work from home, a practice that allows employees to work from home or another remote place, is now being implemented because this chronic and infectious disease has triggered many casualties. To keep circumstances from becoming more miserable, it has been adopted by most major companies worldwide, including Microsoft, Google, Twitter, Spotify, Hitachi, Apple, Amazon, and Chevron (Lufkin, 2020), thereby lowering the probability of infection: if one employee is suffering, others can be infected through casual contact while working together. The COVID-19 guidance to stay at home likewise strengthens the case for working from home. Considering all the facts, COVID-19 propagates its effects day by day, from one individual to the many who are in contact, and it can only be avoided by maintaining social distance. Islam also teaches the protection of one's own health and that of others, and the theory of Maqasid Al-Syariah likewise aims at the safety of individuals' health and life; this is only possible when people avoid the circumstances in which they may be infected by the virus. Socially, getting vaccinated, maintaining social distance, wearing masks, and washing hands are some of the ways to keep the situation from worsening. From the organizational perspective, flexible work practices are another way to limit the spread of this virus: they facilitate people working in organizations by providing flexibility in working hours, working days, and so on. Hence, it is possible to work and to pray from home in order to control the situation. Although these are not permanent solutions, the situation will not persist indefinitely. For now, the best course for individuals is to keep their distance, stay connected, keep working, and keep praying at home.
Objectives: This study examined race differences in the probability of belonging to specific social network typologies involving family, friends, and church members. Method: Samples of African Americans, Caribbean blacks, and non-Hispanic whites aged 55+ were drawn from the National Survey of American Life. Typology indicators related to social integration and negative interactions with family, friendship, and church networks were used. Latent class analysis was used to identify typologies, and latent class multinomial logistic regression was used to assess the influence of race, as well as the interactions of race with age and race with education, on typology membership. Results: Four network typologies were identified: optimal (high social integration, low negative interaction), family-centered (high social integration within primarily the extended family network, low negative interaction), strained (low social integration, high negative interaction), and ambivalent (high social integration and high negative interaction). Findings for the race by age and race by education interactions indicated that the effects of education and age on typology membership varied by race. Discussion: Overall, the findings demonstrate how race interacts with age and education to influence the probability of belonging to particular network types. A better understanding of the influence of race, education, and age on social network typologies will inform future research and theoretical developments in this area.
Social networks are critical sources of informal support, especially for older adults. Informal social support is important for coping with a range of social issues including physical and mental health problems (Cohen, Brittney, & Gottlieb, 2000) and daily life stressors (Benin & Keith, 1995). For instance, social support is linked to higher levels of overall well-being (Nguyen, Chatters, Taylor, & Mouzon, 2015; Smith, Cichy, & Montoro-Rodriguez, 2015) and lower rates of serious psychological distress (Chatters, Taylor, Woodward, & Nicklett, 2015; Gonzalez & Barnett, 2014), depression (Fagan, 2009), and social anxiety disorder (Levine, Taylor, Nguyen, Chatters, & Himle, 2015). Social networks also provide instrumental assistance such as financial assistance, household work, and transportation (Sarkisian & Gerstel, 2004). Research consistently indicates that belonging to a supportive social network is critical for the healthy functioning of older adults (Cacioppo & Cacioppo, 2014). The present study examines social network types and aims to determine (a) whether the probability of belonging to specific social network typologies varies by race and ethnicity among African American, Caribbean black, and non-Hispanic white older adults and (b) whether race/ethnicity interacts with sociodemographic characteristics, such as age and education, to influence the probability of belonging to specific network types. Although no study to date has systematically investigated racial and ethnic differences in social network types among older individuals in the United States, prior research verifies racial differences in key social network characteristics. The following sections of the literature review discuss prior research on social network typologies, along with background research examining race and social networks among older adults. This is followed by a section describing research on the social networks of Caribbean blacks. The literature review concludes with a discussion of the focus of the present investigation and study hypotheses. --- Social Network Types Social network types are a growing area of research on social relationships that uses an innovative, person-centered approach to examine varied configurations of social network characteristics (Fiori, Smith, & Antonucci, 2007; Li & Zhang, 2015; Litwin, 2001; S. Park, Smith, & Dunkle, 2014; Wenger, 1996). Information on network characteristics (e.g., network size and composition, frequency of contact) is aggregated to identify distinct typologies or profiles of social networks. In synthesizing findings from this research area, scholars have identified a number of distinct network types. Studies of social network typologies (e.g., Fiori, Antonucci, & Cortina, 2006; Litwin, 2001; Wenger, 1996) have consistently identified four general, archetypal network typologies: diverse, family-focused, nonkin-focused, and restricted. Networks belonging to the diverse type are characterized by high levels of social integration and varied network role composition. In contrast, networks belonging to the archetypal restricted typology are characterized by high levels of social isolation. Networks belonging to the family-focused and nonkin-focused archetypal typologies are characterized by high levels of integration within almost exclusively family and nonkin networks (e.g., friends, congregants, neighbors), respectively.
Diverse and nonkin-focused are the most prevalent typologies identified among general samples of the American population, while the family-focused type is the least prevalent (Fiori, Antonucci, & Akiyama, 2008; Fiori et al., 2006; Shiovitz-Ezra & Litwin, 2012). Network types identified in international samples are also closely aligned with these four general archetypal typologies (Burholt & Dobbs, 2014; Cheng, Lee, Chan, Leung, & Lee, 2009; Doubova, Pérez-Cuevas, Espinosa-Alarcón, & Flores-Hernández, 2010; Fiori et al., 2008; Li & Zhang, 2015; Litwin, 2001; N. S. Park et al., 2013, 2014). Overall, studies of non-US samples indicate that the most prevalent network types were the family-focused and diverse types, while the least prevalent types were the nonkin-focused and restricted types. The prevalence and distribution of these specific network types is consistent with an emphasis on strong family orientation and values (i.e., familism, filial piety) that is characteristic of several of these cultures (Baca Zinn, 2000; Ikels, 2004; Lin, 2013; Mucchi-Faina, Pacilli, & Verma, 2010; I. H. Park & Cho, 1995). This body of evidence of cultural differences in derived network typologies suggests that the distribution of social network types within the US population may also differ by race and ethnicity. Although race/ethnic differences in network types among older Americans have yet to be explored, there is a body of research examining race differences in social networks. --- Negative Interaction With Network Members Research on support typologies generally does not include measures of negative interaction such as criticism and conflict. The inclusion of measures of negative interaction in this study represents an important innovation that more accurately reflects interactions within support networks. Negative interactions are a natural feature of social life and are a fairly common occurrence among family members (Rook & Ituarte, 1999), as well as church members (Krause & Batisda, 2011). Further, distinctive support network characteristics are associated with negative interactions. For example, more frequent interaction with support network members (Lincoln et al., 2013), as well as circumstances in which extensive support is provided (Newsom & Schulz, 1998), are both associated with negative interactions. Overall, negative interactions do not occur as frequently as positive emotional exchanges. Nonetheless, they are associated with emotional distress, including clinically relevant mood and anxiety psychiatric disorders (Lincoln et al., 2010). Consequently, we would expect that, as a common and central characteristic of social life, negative interactions will emerge as an important feature of network typologies. --- Race Differences in Social Networks Over the past 50 years there has been considerable debate (both theoretical and empirical) as to whether African American families can be characterized as stable, disorganized, or reflective of alternative patterns of family life (Allen, 1978; Sarkisian & Gerstel, 2004). Early theories in this area characterized African American families in a largely negative manner as deficient and dysfunctional, or they idealized the family networks of typically poor African Americans. Further, this area of research has, either explicitly or implicitly, used non-Hispanic white families as the comparison group for African American families.
Researchers, for the most part, now generally accept the view that African American and white families are different and that these differences are not solely a reflection of social class (Gerstel, 2011; Sarkisian & Gerstel, 2004; Taylor et al., 2015). In particular, Allen (1978) uses the term culturally variant to explain these differences in family structure and function. He argues that, unlike a deficit perspective on African American families, a culturally variant perspective does not view differences in African American families as indicators of pathology or deficiencies. Instead, a cultural variant perspective acknowledges that African American and white families exist in different social and cultural environments and, as such, these differences are manifested in a variety of family indicators. Research and theories on racial differences in support networks further argue that, in order to fully appreciate support network structure and processes, it is important to extend our conception of families beyond nuclear to extended families (Gerstel, 2011), as well as to investigate both kin and nonkin as members of social networks (Taylor et al., 2015). Despite the importance of this issue, there is a paucity of research on racial differences in social networks. Further, the available research on racial differences in social network characteristics among older African Americans and whites is equivocal in relation to network composition. Some studies indicate that older African Americans have smaller networks than older whites (Antonucci, Ajrouch, & Birditt, 2006; Barnes, Mendes de Leon, Bienias, & Evans, 2004; Magai et al., 2001), whereas others indicate no racial differences in network size (Mendes de Leon, Gold, Glass, Kaplan, & George, 2001). Research on social involvement (e.g., contact, supportive exchanges) with network members is also mixed, with some studies reporting higher involvement among older African Americans, especially within the extended family network (Antonucci et al., 2006; Johnson & Barer, 1995; Peek, Coward, & Peek, 2000), while others indicate higher involvement among older whites (Mendes de Leon et al., 2001). Finally, two recent studies indicate that older African Americans are more likely to live in extended family households (US Census Bureau, 2014) and are more likely to be involved in church support networks (Krause & Batisda, 2011; Taylor et al., 2015). There is a limited amount of research on racial differences in the quality of relationships within support networks. African Americans have more interaction with congregation members and more negative interaction with church members (Krause & Batisda, 2011; Taylor et al., 2013). Taylor and colleagues (2013) did not find any differences in subjective closeness to friends or subjective closeness to family among Caribbean blacks, African Americans, and non-Hispanic whites. Sarkisian and Gerstel (2004) argue that, in terms of understanding kinship networks, demographic diversity within African American and non-Hispanic white populations (e.g., age) may be more important than racial differences. Therefore, it is important to examine whether race and various sociodemographic characteristics (e.g., age, education) have an interactive influence on social network characteristics. For example, Ajrouch, Antonucci, and Janevic (2001) found that African Americans reported more kin in their networks than whites, but this difference was attenuated among older persons.
Antonucci and colleagues (2006) found that African Americans with higher levels of education had more extended family members in their networks, while whites with higher levels of education had fewer extended family members in their networks. For our present investigation, these studies suggest that race may interact with particular sociodemographic characteristics in shaping the network typologies of older adults. --- Caribbean Blacks in the United States Despite the rapid expansion of older minority populations in the United States, most gerontology research focuses on the general, non-Hispanic white population. Further, research comparing the social networks of African Americans with those of Caribbean blacks, or of Caribbean blacks with those of non-Hispanic whites, is extremely limited. The current investigation seeks to address this major gap in knowledge and provide a better understanding of race and ethnicity influences on social networks by including Caribbean black older adults in this study. Both family and nonkin are important sources of support for Caribbean blacks, particularly during the migration process (Taylor, Forsythe-Brown, Lincoln, & Chatters, 2015; Taylor et al., 2013). Upon arrival in the United States, social networks provide an array of support to Caribbean blacks, including assistance with housing, employment, and legal documentation (Basch, 2001; Bashi, 2007). In preparing for migration, individuals rely on their extended families to fund their migration. In the postmigration period, extended family members in the home country provide care (i.e., child fostering) for children who stay behind (Bashi, 2007; Waters, 1999). Religious institutions are important community and cultural resources that assist Caribbean blacks in their migration to and settlement in the United States. Immigrant churches provide tangible social support and fellowship, serve as a cultural repository and broker (Taylor, Chatters, & Jackson, 2007), and provide access to services from clergy to manage life problems (Taylor, Woodward, Chatters, Mattis, & Jackson, 2011). In sum, Caribbean blacks rely on assistance from a diverse group of support resources comprised of kin and nonkin, both in the United States and in their home countries. There are substantial differences between Caribbean blacks and African Americans in life circumstances (e.g., family structure, immigration status) and culture (Taylor et al., 2013). However, these two groups are rarely identified as representing distinct ethnic groups; rather, they are seen collectively as black Americans. Unfortunately, only a few race-comparative studies of social networks recognize these ethnic variations within the black population and their relevance for support network structure and functioning. The broad characterization of African Americans and Caribbean blacks as constituting one undifferentiated group (i.e., black Americans) is both inaccurate and problematic for developing a well-defined understanding of potential ethnic differences in social network characteristics. Research focusing on social network typologies among racially and ethnically diverse older Americans is necessary for the development of social network interventions that are culturally sensitive and relevant.
--- Focus of the Present Study The present study addresses a critical gap in knowledge on racial/ethnic differences in social network types among older adults in the United States by assessing whether the probability of belonging to specific social network types varied by race among older African Americans, Caribbean blacks, and non-Hispanic whites. I expect that the four general archetypal network typologies identified in the previous studies discussed in the literature review (diverse, family-focused, nonkin-focused, and restricted) will be identified (Hypothesis 1). This analysis includes measures of negative interaction as social network typology indicators. Accordingly, two subtypes of the diverse and nonkin-focused network types (the most prevalent types identified in previous work using US samples) are also expected (Hypothesis 2). Specifically, the analysis will yield a positive diverse subtype (i.e., with low levels of negative interaction) and a negative diverse subtype (i.e., with high levels of negative interaction). Similarly, a positive nonkin-focused subtype as well as a negative nonkin-focused subtype will be identified. Moreover, the probability of being in a particular social network type will vary by race/ethnicity (Hypothesis 3). Specifically, Caribbean blacks, who are often characterized as having transnational family ties (e.g., family geographic dispersion), will rely heavily on extended family, friends, and congregants for assistance; thus, Caribbean blacks will be more likely than whites to belong to the diverse type. Moreover, it is anticipated that, given the centrality of the extended family among African Americans, they will be more likely than whites to belong to the family-focused type. Additionally, this study examines how age and education interact with race to influence the probability of belonging to certain network types. Prior literature indicates that the effect of race (i.e., comparing African Americans and whites) on network composition varies by educational attainment and age (Ajrouch et al., 2001; Antonucci et al., 2006), specifically in relation to the presence of kin in the network. Accordingly, the associations between education and social network types and between age and social network types are expected to vary by race and ethnicity (Hypothesis 4). --- Method --- Sample The study sample was drawn from the National Survey of American Life (NSAL) conducted by the Program for Research on Black Americans at the University of Michigan's Institute for Social Research. The African American sample is the core sample of the NSAL. The core sample consists of 64 primary sampling units, of which 56 overlap substantially with established national sampling areas. The remaining eight primary areas are located in the South, ensuring that the sample represents the national distribution of African Americans. The African American sample is a nationally representative sample of households located in the 48 coterminous states with at least one black adult aged 18 or older who did not report ancestral ties in the Caribbean. The NSAL also included the first major probability sample of Caribbean blacks in the United States. This study used a subsample of respondents aged 55 and older, featuring 837 African Americans, 298 non-Hispanic whites, and 304 blacks of Caribbean descent. --- Measures Race and Sociodemographic Variables Race/ethnicity was coded as African American, Caribbean black, or non-Hispanic white.
For the purpose of this study, Caribbean blacks were defined as individuals who trace their ethnic heritage to a Caribbean country but now reside in the United States, are racially classified as black, and speak English (but may also speak another language). Control variables included gender, family income, marital status, parental status, and living arrangement. Gender, parental status, and living arrangement were dummy coded (0 = male, 1 = female; 0 = nonparent, 1 = parent; 0 = does not live alone, 1 = lives alone). Age, education, and family income were scored continuously; age and education were assessed in years. Family income was coded in dollars, and a log transformation was used to minimize variance and account for its skewed distribution. Missing data for income and education were imputed using an iterative regression-based multiple imputation approach incorporating information about age, sex, region, race, employment status, marital status, home ownership, and nativity of household residents. Marital status was coded to differentiate respondents who were married or partnered, separated, divorced, widowed, and never married. --- Social Network Type Indicators Indicators for family, friendship, and church networks were based on respondents' perceptions of their relationships and were used to identify network types. Frequency of contact with family was measured by asking: "How often do you see, write or talk on the telephone with family or relatives who do not live with you?" Possible responses ranged from 1 (never) to 7 (nearly every day). Subjective closeness to family was assessed by asking: "How close do you feel towards your family members?" Response categories ranged from 1 (not close at all) to 4 (very close). Emotional support from family was measured by asking: "Other than your (spouse/partner), how often do your family members: (a) make you feel loved and cared for, (b) listen to you talk about your private problems and concerns, (c) express interest and concern in your well-being?" Negative interaction with family members was assessed by three questions: "Other than your (spouse/partner) how often do your family members: (a) make too many demands on you, (b) criticize you and the things you do, and (c) try to take advantage of you?" Response categories for the emotional support and negative interaction questions ranged from 1 (never) to 4 (very often). Similar questions and response options were used to measure frequency of contact with church members and friends, subjective closeness to church members and friends, emotional support from church members, and negative interactions with church members. To facilitate analysis and interpretation of results, all indicators were dichotomized using a median split. --- Analysis Strategy Latent class analysis (LCA) was used to identify network types. LCA uses a person-centered approach to classify respondents into subgroups (i.e., latent classes) based on response patterns across dichotomous class indicators. Latent class multinomial logistic regression analysis, in which class probabilities are regressed on sociodemographic variables, was used to determine correlates of network types. This was conducted using the three-step LCA approach to avoid including sociodemographic variables in the class extraction process.
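As an illustration of the class-extraction step described above, the following is a minimal sketch of a latent class model for dichotomous indicators, fitted by EM as a mixture of independent Bernoullis, with AIC/BIC computed for competing class counts and a naive regression step on the extracted classes. It is a didactic stand-in, not the NSAL pipeline: the toy 0/1 matrix stands in for the median-split indicators, the covariates are invented, and the published analysis used a bias-adjusted three-step estimator with survey weights rather than regression on modal assignments.

```python
# Sketch: latent class extraction over 0/1 indicators via Bernoulli-mixture EM.
import numpy as np
import pandas as pd
import statsmodels.api as sm

def lca_em(X, k, n_iter=200, seed=0):
    """X: (n, d) 0/1 matrix. Returns (log-likelihood, class weights, item probs)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    pi = np.full(k, 1.0 / k)                    # class weights
    theta = rng.uniform(0.25, 0.75, (k, d))     # P(indicator = 1 | class)
    for _ in range(n_iter):
        # E-step: per-class Bernoulli log-likelihood plus log prior
        log_p = (X @ np.log(theta).T + (1 - X) @ np.log(1 - theta).T
                 + np.log(pi))
        log_norm = np.logaddexp.reduce(log_p, axis=1, keepdims=True)
        r = np.exp(log_p - log_norm)            # responsibilities
        # M-step: update weights and item probabilities
        nk = r.sum(axis=0)
        pi = nk / n
        theta = np.clip((r.T @ X) / nk[:, None], 1e-6, 1 - 1e-6)
    return log_norm.sum(), pi, theta

rng = np.random.default_rng(1)
X = (rng.random((500, 12)) > 0.5).astype(float)  # toy median-split indicators

# Class enumeration: compare information criteria across candidate k
def n_params(k, d):
    return (k - 1) + k * d
for k in range(2, 6):
    ll, _, _ = lca_em(X, k)
    print(f"k={k}: AIC={-2*ll + 2*n_params(k, 12):.1f}, "
          f"BIC={-2*ll + np.log(500)*n_params(k, 12):.1f}")

# Naive regression step (the study used a classification-error-corrected
# three-step estimator): regress modal class on hypothetical covariates,
# including a Race x Education interaction term.
ll, pi, theta = lca_em(X, 4)
log_p = X @ np.log(theta).T + (1 - X) @ np.log(1 - theta).T + np.log(pi)
cov = pd.DataFrame({"black": rng.integers(0, 2, 500),
                    "educ": rng.normal(12.0, 3.0, 500)})
cov["black_x_educ"] = cov["black"] * cov["educ"]
fit = sm.MNLogit(log_p.argmax(axis=1), sm.add_constant(cov)).fit(disp=0)
print(fit.params)
```

The information-criterion loop mirrors how the four-class solution reported below would be selected, and the interaction column shows where the Race × Education term enters the multinomial model.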
To determine whether the effects of race on network type vary by education and age, two interaction terms (Race × Education and Race × Age) were constructed and tested in latent class multinomial logistic regression models. All analyses used sampling weights and accounted for the complex multistage clustered design of the NSAL sample, unequal probabilities of selection, nonresponse, and poststratification to calculate weighted, nationally representative population estimates and standard errors. --- Results --- Social Network Types (Hypotheses 1 and 2) LCA indicated that the best-fitting model featured four classes. Goodness of fit was determined using the AIC and sample-size-adjusted BIC. The four identified network types were: optimal, family-centered, strained, and ambivalent (Supplementary Figure 1). The optimal type, which was most prevalent (30.36% of the sample), had high levels of subjective closeness, contact, and emotional support involving family and church members, and low levels of negative family and church interactions. Moreover, these respondents reported high levels of subjective closeness and contact with friends. The ambivalent type, the least prevalent (19.09% of the sample), was similar to the optimal type, with the exception that respondents in this network typology reported high levels of negative family and church interactions. The family-centered network typology (30.15% of the sample) featured high levels of subjective closeness, contact, and emotional support involving family members and low levels of negative family interactions. Additionally, members of this class reported low levels of subjective closeness, contact, emotional support, and negative interactions involving church members, and low levels of subjective closeness and contact involving friends. Finally, the strained type (20.39% of the sample) featured low levels of subjective closeness and contact with family, church members, and friends, coupled with low levels of emotional support from family and church members. Further, respondents in the strained type indicated moderate levels of negative family interactions and low levels of negative church interactions. Race/Ethnicity and Social Network Types (Hypotheses 3 and 4) The distribution of network types across the three racial/ethnic groups (Table 1) indicated that for African Americans and Caribbean blacks, the optimal type was most prevalent (35.12% of the African American sample; 33.12% of the Caribbean black sample), and the strained type was the least prevalent (18.24% of the African American sample; 17.65% of the Caribbean black sample). In contrast, among whites, the family-centered type was the most prevalent (35.62% of the subsample) and the ambivalent type was the least prevalent (15.5% of the subsample). Results from the latent class multinomial logistic regression analysis (using the optimal type as the reference category) did not yield a significant association between race/ethnicity and network type (Table 2). However, interactions between race/ethnicity and education and age were statistically significant. The interaction between education and race/ethnicity indicated that among respondents with lower levels of education, Caribbean blacks had a substantially greater probability of belonging to the ambivalent type (high integration and negative interaction) compared to whites (Figure 1). For whites, as education increased, the probability of belonging to the ambivalent type marginally increased.
However, for Caribbean blacks, an increase in education was associated with a substantial decrease in the probability of belonging to the ambivalent type. Thus, at the highest education level, Caribbean blacks and whites had similar probabilities of belonging to the ambivalent type. Figure 2 depicts the interaction between race/ethnicity and education in predicting membership in the strained type (low integration, high negative interaction). This interaction revealed that at the lowest educational attainment level, African Americans had a higher probability of belonging to the strained type compared to whites. For white respondents, the probability of belonging to the strained type increased with education level, whereas African American respondents' probability remained stable across education levels. Consequently, among the most educated respondents, whites had a substantially higher probability of belonging to the strained type than African Americans. Two significant interactions between race/ethnicity and age (Figure 3) indicated that for younger respondents in this sample of persons 55 years and over, whites and African Americans had the same probability of being in the ambivalent type. As age increased, this probability decreased for both groups. However, the decline was more precipitous for whites, such that among the oldest respondents, whites were less likely to belong to the ambivalent type than African Americans. Additionally, among younger respondents, Caribbean blacks had a lower probability of belonging to the ambivalent type than their white counterparts. While the probability of belonging to the ambivalent type decreased with age for whites, it increased with age for Caribbean blacks. As a result, older Caribbean blacks had a notably greater probability of being in the ambivalent type than older whites. --- Discussion The present analysis is the first to investigate racial and ethnic differences in the likelihood of being in particular social network types among older African Americans, Caribbean blacks, and non-Hispanic whites in the United States using a national probability sample. This is an important contribution to the study of social networks, as few studies have examined network types among racial/ethnic minorities and none have examined racial and ethnic differences among older Americans. Collectively, the findings underscore within-group heterogeneity in social relationships in the black population (i.e., African Americans and Caribbean blacks) and differences between black and white populations. Although race and ethnicity alone did not influence membership in specific social network types, they were relevant when examined jointly with education and age, underscoring the complex nature of interactions involving race/ethnicity and sociodemographic factors. Four distinct network types (optimal, ambivalent, family-centered, and strained) were derived in the present investigation from a nationally representative sample of older African Americans, Caribbean blacks, and non-Hispanic whites. The study findings partially supported Hypotheses 1 and 2. These four network typologies are representative of the archetypal diverse, family-focused, and restricted network types identified in the synthesis of the literature on social network types. The optimal and ambivalent types identified in this current analysis are characteristically similar to the archetypal diverse network type in their high levels of social integration.
The family-centered type, which was characterized by high levels of social integration primarily within the extended family network, is similar to the family-focused type previously identified in the literature. The strained type most closely reflects the archetypal restricted network type in its low levels of social integration. However, a nonkin-focused network type was not identified in this analysis. Further, both positive and negative diverse network subtypes (i.e., the optimal and ambivalent network types) were confirmed, but only a single restricted network typology characterized by high levels of negative interaction (i.e., the strained type) was identified. Turning to findings for racial/ethnic differences with respect to typology membership, the analysis did not support Hypothesis 3; Caribbean black and African American respondents did not differ from white respondents in the probability of belonging to the optimal and family-centered types. The data did support Hypothesis 4 regarding interactive effects; race/ethnicity significantly interacted with education and age in predicting network types. Education was negatively associated with the probability of being in the ambivalent type for Caribbean blacks, but the opposite was true for whites. This difference may be linked to the unique life circumstances of Caribbean blacks, who may experience more obligations to provide support to their extended families (e.g., sending remittances to family), particularly family members in their home countries. Moreover, recent migrants to the United States with lower levels of education may be more limited in their socioeconomic resources and thus burdened by support exchanges. In fact, research indicates that socioeconomic status is positively correlated with sending remittances to family members residing abroad (Menjivar, DaVanzo, Greenwell, & Valdez, 1998). Thus, Caribbean blacks with less education may find it more difficult to meet the needs of their extended family, generating negative interactions based on mismatched expectations for assistance and/or unmet needs. This combination of high negative interaction coupled with high positive social involvement leads to ambivalent ties for Caribbean blacks with less education. Furthermore, a number of studies have indicated that education is positively associated with acculturation (Romano, Tippetts, Blackman, & Voas, 2005;Shen & Takeuchi, 2001) and that lower levels of acculturation are associated with increased relational conflict (Chung, 2001;Farver, Narang, & Bhadha, 2002). Given this, an alternative explanation for this interaction could be that Caribbean blacks with less education are more likely to be less acculturated. Thus, in addition to reporting high levels of positive social involvement with their networks, they nonetheless experienced more relational conflict, which is associated with a greater likelihood of being in the ambivalent network type. A second interaction indicated that higher levels of education increased the probability of being in the strained type for whites only. In contrast, the probability of being in the strained type was virtually the same across all education levels for African Americans. This pattern is consistent with research indicating that socially disadvantaged groups are more integrated within their social networks and tend to rely more heavily on them for informal support (Gerstel, 2011;Sarkisian & Gerstel, 2004).
For African Americans, however, educational attainment was not associated with the probability of belonging to the strained type, which is consistent with research indicating that, at all socioeconomic levels, informal social networks, particularly extended family networks, are important for African Americans (Gerstel, 2011;O'Brien, 2012). Finally, interactions between race/ethnicity and age involving all three racial/ethnic groups were noted. For both white and African American respondents, the probability of belonging to the ambivalent type decreased as age increased. However, this decrease was smaller for African Americans. Potential qualitative differences in social relationship dynamics for African American and white older adults may contribute to more negative interactions among African Americans. For example, older African Americans are more likely than older whites to be in poorer health, have fewer financial resources, and reside with extended family (Williams & Wilson, 2001). The effects of these factors intensify with advanced age, increasing both reliance on social networks and the stressors and strains that accompany these circumstances, potentially contributing to ambivalent ties. Differences in objective life circumstances (e.g., health, income) and the cultural relevance of informal social networks may account for the differential impact of age on ambivalent ties for the two groups of older adults. In contrast, the relationship between age and membership in the ambivalent type was reversed for Caribbean blacks, whose probability of belonging to the ambivalent type increased with age. This may be a function of expectations of support from network members and diminished ability to provide support among older Caribbean blacks. Caribbean black culture has been described as a "culture of reciprocity" that underscores the importance of equity in supportive exchanges (Bashi, 2007). Due to financial and physical limitations, older adults are likely to have limited means of providing support. Thus, uneven support exchanges (i.e., receiving more than providing) between older Caribbean blacks and their social networks may contribute to ambivalence with network members, especially when support reciprocity is an expected cultural norm. --- Study Limitations and Conclusions Several limitations of the current analysis should be noted. All social relationship measures in this study were self-reported and subject to recall and social desirability biases. Given the cross-sectional nature of the data, causal relationships between sociodemographic factors and network types could not be assessed. Future studies should use prospective data to investigate the causal relationships between sociodemographic factors and network types. Additionally, because the NSAL did not include measures of negative interaction with friends, this issue could not be addressed in this study. Without information on negative friendship interactions, this study was unable to determine whether there are network types that are delineated specifically by this characteristic. Consequently, the network types identified in this analysis may be incomplete and provisional. Another limitation is that the NSAL did not differentiate between ethnic groups within the non-Hispanic white sample or capture the sociodemographic diversity within non-Hispanic white ethnic groups. An important contribution of this study is the use of diverse sources of informal support and relational strain.
Examining multiple sources of support coupled with negative interaction provides a more complete understanding of the idiosyncratic contributions of different network members to older adults' social environments. Negative interactions, while relatively common, are an often overlooked feature of social relationships. The inclusion of both positive and negative social relationships in this analysis contributes to the literature because it identifies network types that reflect an enhanced and more realistic representation of older adults' relationships. Although previous studies have examined ambivalent relationship types (Connidis & McMullin, 2002;Fingerman et al., 2004;Rook et al., 2012;Uchino et al., 2012), these studies examined ambivalent types by constructing them based on a priori assumptions of relational ambivalence. This study builds on these prior studies and extends the literature on ambivalent relationship types by using latent variable modeling to identify an ambivalent network type. Thus, the identification of an ambivalent network type in this study confirms the existence of an ambivalent relationship type that researchers have long proposed. This is an important finding that requires further investigation because despite the presence of positive relationship qualities, ambivalent ties are associated with poor mental and physical health outcomes, such as depression, inflammation, high blood pressure, and functional health limitations (Holt-Lunstad, Uchino, Smith, & Hicks, 2007;Kiecolt, Blieszner, & Savla, 2011;Rook, Luong, Sorkin, Newsom, & Krause, 2012;Uchino et al., 2013). In fact, ambivalent ties are associated with worse physical health status than exclusively negative ties, such as relationships within the strained type (Rook et al., 2012). Moreover, this is the first analysis, to my knowledge, of social network typologies among a racially/ethnically diverse population of older Americans. Another contribution of this study is the use of multiple indicators of church-based relationships (e.g., frequency of contact with congregants, subjective closeness to congregants, emotional support from congregants, and negative interaction with congregants). Prior studies of social network typologies typically used religious service attendance as the sole indicator of church-based social networks, which captures only a single facet of these networks. Additionally, the current analysis used LCA, an innovative analytical methodology based on a person-centered data analysis approach that addresses several limitations of cluster analysis, which is the statistical analysis typically used in social network typology research. In sum, the present analysis extends previous work by examining social network types for the first time in a nationally representative sample of older African Americans, Caribbean blacks, and non-Hispanic whites. This innovation contributes to work on social network types by providing a more complete understanding of race/ethnicity in relation to network typologies and the interactive effects of race/ethnicity, age, and education on social network typologies in two traditionally under-researched populations. This investigation represents a preliminary effort to understand the role of race and ethnicity in social relationships as manifested in network typologies. Future studies should explore the implications of these differences in relation to mental health and subjective well-being, which have been linked to network types. 
--- Supplementary Material Supplementary data are available at The Journals of Gerontology, Series B: Psychological and Social Sciences online.
Goals aimed at adapting to climate change in sustainable and just ways are embedded in global agreements such as the Sustainable Development Goals and the New Urban Agenda. However, largely unexamined are the ways that narrative understandings conveyed in adaptation plans consider and attempt to address inequality in climate risk to urban populations and FEW-systems. In this paper, we examine whether and how adaptation plans from C40 member cities address inequality in risk, by planning actions to reduce hazard exposure or by tackling the drivers of social vulnerability. C40 is a network of 94 of the world's cities fostering policies to address climate change. We apply a mixed methods approach, including a discourse analysis and meta-analysis of adaptation plans. The discourse analysis helps to unpack framings of urban equity issues as they relate to policy actions, and the meta-analysis seeks to quantitatively investigate patterns of framing and policy across adaptation plans. Our findings suggest that FEW-nexus thinking is not yet embedded in narrative understandings of risk and planned adaptation actions within the adaptation plans we studied. In the city adaptation plans we analyzed, we found multiple frames coexisting behind the broader adaptation visions (e.g., risk and resilience). Rather than converging, issues and principles such as equality coexist with economic issues in an imbalance of incongruent political movements and priorities. Techno-infrastructural and economic investments and concerns tend to take precedence over concerns about inequality in climate risks. We discuss some of the institutional factors explaining this. Knowledge integration, for instance, is constrained by the existence of a plurality of sectors, levels of government, power, values, and ways of understanding and managing climate risk. We also suggest that the relatively low importance of equality considerations in the adaptation plans will likely limit the capacity of cities to support broader goals such as those of the New Urban Agenda and the Sustainable Development Goals.
INTRODUCTION Goals aimed at adapting to climate change in sustainable and just ways are embedded in global agreements such as the Paris Agreement, the Sustainable Development Goals, and the New Urban Agenda. These agreements seek to move environmental and climate concerns into the urban policy action arena by developing strategies for risk management. Ideally, these strategies would be supported by the three pillars of sustainability (economy, equality, and environment), while increasing cities' resilience to chronic and acute physical, social, and economic stressors and hazards (Zeemering, 2009;Campbell, 2013;Romero-Lankao et al., 2016a;Simon et al., 2016). However, in practice, tradeoffs are often present that shrink the size of one pillar and augment another. In the last decade, scholars and decisionmakers have shown increased interest in the mechanisms by which urbanization and climate change are coevolving to compound the unequal risk of floods, wildfires, and other hazards to urban populations and their supporting food, energy, and water (FEW) systems. However, actions to improve equality on the ground have been less evident (Revi et al., 2014;Romero-Lankao et al., 2017c). Incorporation of equality into urban adaptation plans is important because the most vulnerable communities within cities are most often more exposed, have lower socio-economic status, contribute less to GHG emissions, and have lower levels of access to FEW systems and to livelihood options to mitigate risk and adapt (Boone, 2010;Hughes, 2013;Agyeman et al., 2016;Romero-Lankao et al., 2016a;Shi et al., 2016;Reckien and Lwasa, 2017). It is widely accepted in the literature on social vulnerability that social inequality shapes differences in climate risk and vulnerability and in capacity to mitigate and adapt to these hazards (Ribot, 2010;Romero-Lankao et al., 2016a). However, largely unexamined are the ways in which different narrative understandings relate to suggested actions in existing adaptation plans. In this paper, we examine whether and how adaptation plans from 43 C40 cities address inequality in risk, by planning ways to reduce inequality in hazard exposure or by tackling the drivers of social vulnerability (Reckien and Lwasa, 2017). We apply a mixed methods approach, including a discourse analysis and meta-analysis of adaptation plans for 43 C40 cities (Figure 1 and Supplemental Table 1A). In this approach, the discourse analysis helps unpack framings of urban equality issues as they relate to policy actions, and the meta-analysis seeks to quantitatively investigate patterns of framing and policy across adaptation plans. --- TRACING EXISTING SCHOLARSHIP Three areas of scholarship relevant to this paper include urban adaptation and governance, inequality in climate risk, and the food, energy, and water (FEW) nexus (Leck et al., 2015;Araos et al., 2016;Shi et al., 2016;Romero-Lankao et al., 2017c;Wiegleb and Bruns, 2018;Heikkinen et al., 2019). We use findings in these areas as a basis to suggest a conceptual framework (section Conceptual Framework), which will be used to map the attention given, in urban adaptation plans, to FEW interactions with inequality, and thereby gain knowledge of how far these considerations have penetrated urban adaptation planning.
--- Urban Adaptation and Climate Governance Having proven to be important agents of change globally, cities and transnational networks occupy a central role in the global governance of climate change for many reasons (Bulkeley and Betsill, 2013;Romero-Lankao et al., 2018). There is wide acknowledgment among scholars of the incapacity of national actors alone to produce policy actions that can address the complex dynamics of climatic risk (Gordon and Johnson, 2017). Attention has shifted to the array of governance initiatives undertaken outside of interstate climate negotiations and policies. These initiatives, taken by state, municipal, market, and civil society actors operating at multiple local to global levels, are seen as key to creating the kinds of innovations necessary to address environmental change and climate risk (Acuto, 2013;Shi et al., 2015;Gordon and Johnson, 2017). In recent years, in what has been termed the second wave of urban climate governance (Bulkeley, 2010), cities have moved beyond symbolic commitment to climate change action, to its integration into their planning and development policies (Aylett, 2014). For many cities, part of this movement has included participation in local and city-networks such as ICLEI, the World Association of Major Metropolises (Metropolis), and the C40 Cities Climate Leadership Group (C40) (Bouteligier, 2013;Gordon and Johnson, 2017). C40 is a network of 94 of the world's cities concentrating more than 650 million people and one quarter of the global economy. This peer network of cities seeks to address climate change through the design and implementation of policies seeking to mitigate greenhouse gas (GHG) emissions and climate risks (https://www.c40.org, February 28th, 2019). A body of literature has examined different aspects of the C40's global and city governance influence. For instance, some portray the C40 as an orchestrator of global urban climate governance steering member cities toward particular climate actions (Gordon and Johnson, 2017), or creating new inequalities and sometimes even intensifying existing ones (Bouteligier, 2013). Others analyze whether the kind of change the network promotes is incremental, reformistic, or transformational (Heikkinen et al., 2019). In this study, we start from the assumption that member city agendas may differ from that of the C40 network (Heikkinen et al., 2019), and examine how, in their adaptation plans, city officials understand and manage inequality in climate risk to urban populations and FEW-systems. --- Risk and the FEW-nexus Studies on the FEW nexus have grown recently (Endo et al., 2015). As it pertains to human food, energy, and water systems, the term nexus refers to the relationships, as defined by linkages and interdependencies, between two or more FEW resources and systems, including trade-offs and feedbacks between them (Leck et al., 2015;Romero-Lankao et al., 2017c). FEW-nexus scholarship has grown in recent years, but differences in motivation, purpose, and scope pervade the field (Stringer et al., 2018).
FIGURE 1 | Cities covered in the analysis of adaptation plans. Based on World Bank income category as of 1 July 2015, at the country level. Low-income economies are those with a GNI per capita of $1,045 or less in 2014; middle-income economies are those with a GNI per capita of more than $1,045 but <$12,736; high-income economies are those with a GNI per capita of $12,736 or more. Lower-middle-income and upper-middle-income economies are separated at a GNI per capita of $4,125.
A FEW-nexus approach can be used to analytically examine links and interdependencies between FEW-systems, but it also functions as a boundary object that engages decision makers and academics across a science-policy interface aimed at understanding and managing FEW-system links and interdependencies (Wiegleb and Bruns, 2018). In governance, its concepts are sometimes used to achieve integrated management across FEW sectors and jurisdictions (Bizikova et al., 2013). Here we will examine how linkages and interdependencies between FEW-systems are acknowledged and prioritized at the city level, and whether integrated FEW-management is a goal of adaptation plans, or whether, as suggested by existing scholarship, bringing together diverse policy domains creates its own set of challenges. The most important of these is the difficulty of moving decision makers beyond their accustomed ways of understanding and action, precisely because this involves a collective engagement of disparate sectors, ways of knowing, levels of government, power, and values (Romero-Lankao et al., 2017c). FEW-nexus studies tend to be motivated either by the scarcity of FEW resources or by threats to FEW-resource security due to development and environmental pressures (Galaitsi et al., 2018). We will focus on the latter, which tends to be framed using either a security or a risk approach (Corry, 2012). In the security approach, the focus is on an existing threat such as an ongoing drought or disruption of energy or food supplies (Comfort, 2005). In the risk approach, however, the emphasis is on how human development and environmental dynamics are interplaying (or might interplay) to create the potential for harmful events (Trombetta, 2008). While security thinking leads decision makers to look for the current, direct causes of harm to urban populations and FEW-systems, risk analysis examines the potential causes of harm, current or future. We use a risk approach here, because it fits better with both climate change scholarship, ours included, and the framing used in 87% of the adaptation plans (Field et al., 2014;Romero-Lankao et al., 2017a) (Figure 2).
FIGURE 2 | Framing the adaptation vision. After reading and summarizing each adaptation plan, four notions capturing cities' broader frame or vision were identified. See Supplemental Table 1B.
Within our sample, we look at how adaptation plans address inequality in risk. Following the IPCC, we define risk as the potential for adverse effects on lives, livelihoods, health, and assets (Field et al., 2014). Risk may spring from exposure to floods, sea level rise, and other threats, and from the vulnerability of people and the FEW-systems that support them. Such vulnerability, or the propensity to be negatively affected by events or impacts, results from the multiscale interplay of factors in five domains: Socio-demographic, Economic, Techno-infrastructural, Environmental, and Governance (SETEG), which have been used by Arup and by us in prior work (Arup, 2014;Romero-Lankao and Gnatz, 2016). While people can be susceptible to hazards, they also have capacity and agency to modify their circumstances and behavior to mitigate risks or adapt. Capacity is the unequally distributed pool of resources, assets, and options that governmental, private, and non-governmental actors can draw on to mitigate and adapt to risks, while pursuing their development goals and values (Vincent, 2007).
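To make this framing concrete (our own gloss, not notation used by the IPCC or the plans), risk can be read as a function of exposure and SETEG-conditioned vulnerability, with capacity moderating vulnerability:

```latex
\text{Risk} = f(\text{Exposure},\ \text{Vulnerability}), \qquad
\text{Vulnerability} = g(\text{SETEG factors};\ \text{Capacity})
```

By contrast, the operational framing found in most of the plans (see section Interpretative Frames) reduces to the simpler hazard-probability-times-consequence form, $\text{Risk} \approx P(\text{hazard}) \times \text{Consequence}$, which foregrounds expected losses rather than the drivers of vulnerability.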
To understand how policymakers are prioritizing these issues, we examine how, in their adaptation plans, city officials attribute climate risk to a series of locational and SETEG factors, and what policy actions they suggest to manage these (section Study Design). --- Urban Adaptation, Inequality, and Equality For centuries, the notions of inequality, equality, and justice have been the subject of compelling philosophical, conceptual, and ethical debates, with persistent disagreements in definition, scope, and policy implications whose discussion is beyond the scope of this paper (Ikeme, 2003;Agyeman et al., 2016). The concepts of fairness and justice can be related to discussions of the differences in definitions of equal and equitable. The word justice comes from the Latin jus, meaning right or law, and refers to either an actual or ideal situation in which: (a) benefits and burdens in society are distributed according to a set of allocation principles where the basic rights and needs of individuals and groups are considered and respected (distributive element); (b) the rules and regulations that govern decision making preserve basic rights, liberties, and entitlements of individuals, groups, or communities (procedural element); and (c) human and other biological beings are treated with respect and dignity by all parties involved (interactional element) (Jost and Kay, 2010). Likewise, equality, which we address here mainly through its opposite, inequality, conveys an ideal state of perfectly balanced or even distribution of goods and services across populations, while equitable can allow an element of self-determination. In a neo-liberal conception, as long as each member or group has an equal chance to obtain access to resources and options, a distribution can be termed equitable because it is self-determined on an equal playing field. Such equitable distributions are seen in this conception as fair or just because no one has had an advantage in gaining access to resources and options (Ikeme, 2003;Hughes, 2013). However, this conception ignores the power of assets and options, once attained by some individuals and groups, to create or compound differential access to assets and options for others, thus creating social inequality (Agyeman et al., 2016). Social inequality thus creates self-feeding systems that are not fair or equitable because they deny, to marginalized people and groups, access to assets and options necessary to avoid risks at the same time as they deny access to policy systems and institutional features that could help them gain access to those assets and options. Inequality determines differential location and access to places, water, food, energy resources, and decision-making options in a city where resources are distributed unevenly across populations (Reckien and Lwasa, 2017). Typically such uneven distributions result from markets, power, other institutional mechanisms, and risk mitigation and adaptation policies that engender or perpetuate socially defined categories of wealthy or poor, or of included and excluded populations (Stein, 2011;Romero-Lankao et al., 2016b) based on class, caste, gender, profession, race, ethnicity, age, and ability (real or perceived). Undergirding our analysis in this paper is an assumption that, in the context of city climate action, an understanding of how inequality creates differences in exposure and vulnerability is fundamental to creating fair and effective risk mitigation and adaptation.
Policies aimed at creating risk-equality should contain mechanisms to ensure the fair distribution of the risks of negative impacts and of the benefits (assets and options) needed to undertake climate action across city populations (distributive justice). Creating equality also means generating equal opportunities for participation and recognition for all, including underrepresented groups (procedural justice) (Bulkeley et al., 2013;Hughes, 2013;Reckien and Lwasa, 2017). Among the resources and options that vary with inequality to create differential urban vulnerability, access to food, energy, and water are so basic and primary that they can be used as bellwethers of an uneven distribution of many other resources conditioning vulnerability (Biggs et al., 2015;Romero-Lankao et al., 2016b). When considering the fair distribution of resources, assets, and services related to distributive justice, it is important to recognize that differences in gender, race, socioeconomic status, and culture are part of procedural barriers that condition participation in policies affecting distribution. Thus, a cultural value can inhibit poor and marginalized populations from effectively participating in decisions (e.g., where to locate infrastructural investments in water and electricity) that affect their wellbeing, property, resources, climate risks, and capacities to adapt and mitigate. --- CONCEPTUAL FRAMEWORK Using discourse analysis, we qualitatively unpack how, in their adaptation plans, city officials frame inequality in urban climate risk. We then combine discourse analysis and adaptation analysis to examine some of the issues addressed by the adaptation actions suggested in the plans. Lastly, we use a meta-analysis approach to quantitatively investigate patterns of framing and adaptation action across cities. We will map narrative understandings in the adaptation plans of how inequality creates differences in exposure and vulnerability. We will also examine if and how adaptation actions contain mechanisms to ensure the fair distribution of assets and options to manage climate risks (distributive justice), and generate equal opportunities for participation and recognition for all, including underrepresented groups (procedural justice). --- Discourse Analysis Various strands of social science scholarship have used discourse analysis to examine texts, images, papers, books, and reports to define the ideas and concepts, which we will call understandings, through which actors understand and act upon the world (Foucault, 1972;Sharp and Richardson, 2001;Hajer, 2004;Keller, 2011;Wiegleb and Bruns, 2018). Rather than being neutral, these narrative understandings privilege some socio-environmental facts and may suggest some policy actions over others (Sharp and Richardson, 2001;Hajer, 2004;O'Brien et al., 2007;Trombetta, 2008). We draw on section Conceptual Framework and on the Sociology of Knowledge Approach to Discourse to map the discourse of 43 adaptation plans (Keller, 2011). The sociology of knowledge analysis of discourse includes three components: knowledge structuring, discourse production, and power effects. Here we focus only on the first and the third. We excluded the second, which entails an examination of the influence of sociopolitical context on framing and action (Keller, 2011), because our study focuses on discourse as it crystallized in the plans, and not on the influence of each city's sociopolitical context on framing and action.
To help us determine knowledge structuring, we mapped, through their references to issues of concern, the general interpretative frame city officials use to make sense of a climate change issue in their adaptation plans. For instance, do city officials frame climate adaptation as a problem of risk, or of resilience? However, setting issues such as those related to inequality in climatic risk on the adaptation agenda also relates to the way in which city officials determine what kind of problem climate change is. What causal SETEG factors are involved in the creation of climate change impacts? Are these impacts only the result of location and geography, or exposure? Or are they also the result of prior policies and unequal patterns of development determining differences in the vulnerability of people and FEW-systems within cities? Drawing on the discussion of existing literature (section Conceptual Framework), we will map how adaptation plans address inequality in hazard exposure and in the following multiscale (SETEG) factors determining vulnerability (Arup, 2014;Romero-Lankao and Gnatz, 2016).
- Locational (exposure) factors conditioned by the presence of populations and critical FEW infrastructures in places that could be adversely affected by floods, heatwaves, and other climate hazards (Nicholls et al., 2008).
- Socio-demographic factors consist of the age, gender, and demographic structure of a city or the behavior of individuals and groups (Donner and Rodríguez, 2008).
- Economic factors relate to uneven economic growth, urbanization, income, and the affordability of food, energy, water, and other resources (Uejio et al., 2011).
- Techno-infrastructural and built environmental factors include land use change and the distribution, quality, and robustness of water, sanitation, electricity, and related critical FEW infrastructures and systems. Critical FEW infrastructures include electric power, natural gas and oil, water supply, and food distribution systems, but because we acknowledge the role of transportation, telecommunications, health, emergency, and other services, we also included these as critical urban FEW infrastructural systems (Rinaldi et al., 2001).
- Environmental factors, such as the biophysical and climatic characteristics affecting an urban area's predisposition to hazards, relate to exposure. For instance, coastal cities are prone to sea level rise, storm surge and coastal flooding, saltwater intrusion, and tropical storms.
- Governance factors consist of the fit between areas of concern and authority, cooperation, and cohesiveness among governing bodies and levels of government, policies and actions, and the legacies of actions and policies around land use planning, and through investments in, and the location and climate proofing of, FEW infrastructure and service networks, which shape the geography of urban risk (Aylett, 2014).
Power effects relate to the intended or unintended consequences emerging from the discourse. Elements of the power effects include the dispositifs, a French word describing the institutional, organizational, and infrastructural elements, which we define here, following Foucault and Keller, as the suggested apparatuses of adaptation action, such as a) Personnel and organizations charged with undertaking adaptation policies; b) Institutional and organizational processes seeking to evaluate, monitor, and understand the climate change problem, or to foster awareness among city actors, decision makers, and populations.
We will include these under institutional-behavioral adaptation actions (note that (a) and (b) seek to address the sociodemographic and governance factors within our SETEG framework); c) Investments in and climate proofing of critical FEW infrastructure (artifacts), which we will include under techno-infrastructural actions (these address the techno-infrastructural factors within our SETEG framework); and d) Other discursive or non-discursive adaptation actions, such as environmental and economic adaptation actions (which address the respective factors within our SETEG framework). Such "dispositifs" are shown in the literature to hold the potential to address climate risk to people and FEW-systems in cities. In our analysis we sort "dispositifs" among techno-infrastructural, institutional-behavioral, economic, and environmental action categories (Romero-Lankao et al., 2017b). --- Adaptation Analysis We also include insights from the climate adaptation literature to add accuracy to our discourse analysis. In the climate adaptation literature, institutional-behavioral actions include changes in the procedures, incentives, or practices of city actors, and often work through existing urban competencies and hybrid actor arrangements in sectors such as urban planning, health, water, energy, and disaster risk management (Fisher, 2013;Romero-Lankao et al., 2017b). Institutional-behavioral actions entail the creation of organizations charged with mainstreaming adaptation into other sectoral and developmental policies such as urban planning, transportation, and disaster management; with evaluating, monitoring, and understanding the climate change problem; and with fostering awareness among city decision makers and populations. In the environmental justice literature, these actions are fundamental to procedural justice because they broaden participation in, recognition of, and commitment to adaptation across governmental, private, civil society, and community actors (Bulkeley et al., 2013;Shi et al., 2016;Reckien and Lwasa, 2017). Techno-infrastructural actions are critical in the creation of artifacts, such as energy, water, and sanitation systems. They are often framed in the climate adaptation literature as efforts to discourage growth in risk-prone areas and to protect critical urban infrastructural systems through investments in climate proofing, and changes to design, operational, and maintenance practices (Romero-Lankao et al., 2017b). Other adaptation actions include economic and environmental policies. The former aim at creating enabling conditions for autonomous action by governmental and nongovernmental actors, and at supporting broader development goals. Funding programs from public and private sectors are fundamental. By strategically allocating funding (whose amount and sources vary widely across cities), local governments can effectively respond to climatic risks (Aylett, 2014). Environmental actions seek to manage the biophysical, climatic, and hydrological factors affecting an area's predisposition to hazards (Brink et al., 2016;Kabisch et al., 2016). Environmental actions take into account and manage the role of biodiversity, greenspaces, and other ecosystem services in mitigating hazard risk and reducing the vulnerability of urban populations and FEW systems to climate change (Levy et al., 2014). --- STUDY DESIGN Meta-analysis is often applied to find commonalities within a variety of research papers and methods (Littell et al., 2008).
It involves the pooling of data to quantitatively examine whether causal relations described in individual papers (e.g., drivers of climate risk, determinants of vulnerability to food, energy, and water insecurity) hold across a broader body of scholarship (Misselhorn, 2005;Romero-Lankao et al., 2012). While meta-analysis is frequently combined with systematic literature reviews to synthesize the results of previous research, in our approach, we combine meta-analysis with discourse analysis to systematically investigate patterns in the framing of inequality in risks within a selection of 43 adaptation plans. --- Selection and Analysis of the Adaptation Plans This study resulted from a prior report commissioned by the C40. Although the C40 has 94 affiliated cities, we obtained access to only 60 adaptation plans for analysis. Of these, we selected 43 plans, 4 of which are from cities located in lower-income, 12 in middle-income, and 27 in upper-income countries. As can be seen in Figure 1, our selected sample also has a good representation of C40 cities from Latin America, Europe, North America, Africa, and South-East Asia. We built on our prior work on the FEW nexus, climate adaptation, and inequality cited in section Conceptual Framework, and on the review of the adaptation plans, to map how city officials prioritize policy actions to manage inequality in risk. Although we could not analyze how individual city officials actually understand the climate change adaptation and FEW issues we studied, we did analyze the understandings of these issues conveyed in the plans. We will refer to these understandings, conveyed in the plans, as narrative understandings. Our data extraction and synthesis followed an examination of discourses and a meta-analysis approach (Littell et al., 2008;Keller, 2011;Romero-Lankao et al., 2012;Wiegleb and Bruns, 2018). Our conceptual framework functioned as a starting point to design and test a review template and to agree on our own definition of terms and fields (available upon readers' request). We then used this template to extract data from each of the 43 adaptation plans. First, each selected plan was carefully reviewed by at least two members of our research team to ensure systematic and consistent data extraction. Factors influencing risk to people and FEW-systems were identified and coded into the five SETEG domains (i.e., sociodemographic, economic, techno-infrastructural, environmental, and governance). Adaptation actions were classified into institutional-behavioral, techno-infrastructural, economic, and environmental. We further subdivided these categories of SETEG factors and adaptation actions into terms, as described in the second column of Supplemental Tables 1A,B, 2A-E, 3A-D. After summarizing each adaptation plan, mention counters were developed, based on mention of the terms, to capture overall narrative understanding (Supplemental Tables 1A,B, 2A-E, 3A-D). Once a term was found, the counter maxed at "1" for that particular topic to avoid duplicate counting. Limiting mention counts to one per plan is the most effective way to avoid bias. Two measures were then derived from these counts. The first gives a view of the relative importance, attributed by urban policymakers, to particular issues within plans compared with all plans. The second gives a view of the relative importance, attributed by urban policymakers, to particular issues compared to all issues within a given category (e.g., techno-infrastructural vs. institutional-behavioral actions).
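A minimal sketch of this counting scheme follows. The term lists are illustrative placeholders, not the actual review template, and the two derived shares are our simplified analogues of the measures just described.

```python
# Illustrative sketch of the binary mention counters (not the actual review
# template): each term scores at most 1 per plan, avoiding duplicate counts.
from collections import defaultdict

# Hypothetical coding template: SETEG domain -> indicative terms.
SETEG_TERMS = {
    "sociodemographic": ["age", "gender", "migration"],
    "economic": ["income", "poverty", "affordability"],
    "techno-infrastructural": ["infrastructure", "retrofit", "electricity"],
    "environmental": ["flood", "heatwave", "sea level rise"],
    "governance": ["land use planning", "enforcement", "jurisdiction"],
}

def mention_counts(plans):
    """Return {city: {domain: 0 or 1}}; a domain scores 1 if any of its
    terms appears in the plan text, no matter how many times."""
    counts = defaultdict(dict)
    for city, text in plans.items():
        lowered = text.lower()
        for domain, terms in SETEG_TERMS.items():
            counts[city][domain] = int(any(t in lowered for t in terms))
    return dict(counts)

plans = {"CityA": "...", "CityB": "..."}  # placeholder plan texts
counts = mention_counts(plans)

# Measure 1: share of all plans mentioning each domain.
share_across_plans = {
    d: sum(c[d] for c in counts.values()) / len(plans) for d in SETEG_TERMS
}

# Measure 2: each domain's share of all domain mentions, a simplified
# analogue of comparing issues within a given category.
total = sum(sum(c.values()) for c in counts.values()) or 1
share_within_categories = {
    d: sum(c[d] for c in counts.values()) / total for d in SETEG_TERMS
}
```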
Together, these measures give a two-scoped view of the relative priorities given by urban policymakers to the issues addressed in the plans. Although we feel this study offers many relevant insights, it faced some constraints that may affect its outcomes. While we included 43 cities from low-, middle-, and high-income countries, these were not selected using a sampling approach. Due to our determination to have at least two members review each plan, and our group's language limitations, we could only review plans written in English and Spanish. This meant we were not able to analyze the discourse in many plans that might have offered additional insights. Readers of this paper should, therefore, keep in mind that while the combination of discourse analysis with meta-analysis to identify patterns in understanding and action is innovative, our study is exploratory in nature. Furthermore, while our use of a discourse analysis to examine the framings of inequality in risks exposed some of the narrative understandings conditioning policy actions, it did not include an examination of why and how the socio-political and geographical contexts in which city officials operate shape their interpretations and planned actions. Lastly, since we studied plans and not implementations, we could not determine how (or if) the suggested adaptation actions were implemented. While ethical questions regarding this study might be raised around the fact that it was commissioned by the C40 to study the adaptation plans of C40 cities, giving rise to concerns about scientific objectivity, we feel that our analysis of these plans was objective and sound for two reasons: (1) We studied the adaptation plans as independent documents and not as they pertain to the C40 or its mission; and (2) The methods used in the study were evenly applied across city adaptation plans without regard to any city's membership, income level, or status in the C40. --- NARRATIVE UNDERSTANDINGS AND POLICIES IN THE ADAPTATION PLANS This section is organized around three topics. The first and second include a mapping of the narrative understandings, or knowledge structuring, crystallized in the adaptation plans, not only in terms of what interpretative frame is used but also in terms of what locational and SETEG factors are identified as key determinants of climate risk, and whether inequality is considered in this conveyed understanding. The third topic refers to the power effects in the form of adaptation actions suggested in the adaptation plans to address inequality in risk to people and FEW-systems. --- Interpretative Frames We found that the urban adaptation plans analyzed here embed adaptation in a larger vision for the city, often with a multiplicity of coexisting frames. Many of these interpretive frames are not only full of symbolism, as in the resilience framing we will describe later in this section; they also feature key, and sometimes contradictory, organizing principles of policy action (Figure 2). Rather than converge toward an integrated understanding, these concepts often coexist in a tension of incongruent and unbalanced sets of principles and related actions. In this disharmony, economic and investment concerns and interests (e.g., infrastructural and economic investments) tend to take precedence over concerns and interests for the environment and the marginalized (see next subsection).
Frequently, cities appear in the adaptation plan narratives as leaders, development hubs, or engines of innovation and investment, key to growth and stability nationally and internationally. Adaptation in this context forms part of a broader sustainability vision, present in many cases, for the creation of vibrant, economically prosperous, and socially just cities, or cities that are habitable, secure, resource-efficient, socially and economically inclusive, and competitive internationally (Seattle, Tshwane). In many adaptation plans, city officials frequently see climate change as posing risks, but also offering opportunities. These include opportunities to attract investment, generate high-value jobs, strengthen research and development, or foster circular or green economies. For instance, the Singapore plan states that the city is poised to tap economic opportunities offered by global warming, such as investments in new growth areas, the creation of high-value jobs, and the promotion of green growth and of R&D capabilities. Interestingly, 87%, or 37, of the reports apply a risk approach to frame climate change issues (Figure 2). Risk is often framed in the adaptation plans as the probability of occurrence of a hazard, such as sea level rise, multiplied by a consequence such as property damage. While differences in emphasis exist, a dominant narrative emerges, underlying the risk approaches in these plans. Common to this narrative is the idea that strategies for the protection of urban areas from the risks and FEW constraints associated with climate change require a scientifically grounded technical assessment of how changes in temperature, precipitation, and sea level are likely to affect critical infrastructures, resources, and economic activities in the cities. Adaptation plans reviewed in this study illustrate that resilience is increasingly becoming embedded in the discourses of urban decision-makers. Resilience is not only seen in the plans as an ecological principle, but also, frequently, as an opportunity. Such opportunities, when coupled with appropriate actions, can increase a city's economic, energy, environmental, and food security, in addition to protecting the quality of life and safeguarding property (e.g., Durban). It is, therefore, common for the adaptation plans to frame the hazards and disruptions brought about by climate change as somewhat of a blessing in disguise. In this discursive thread, cities may even view themselves as symbolically endowed with a power of resilience like "the mythic phoenix," able to take advantage of disruptive events and carry on through challenges over the years. In such cases, cities become a phoenix aware of how the threats cities face, and their responses to these threats, expose several interdependencies that city officials must better comprehend (San Francisco). An almost mythic idea of its own resilience can also be found, for instance, in the New Orleans plan, which describes a city certain that the creativity and resilience of its people and places have been key in its capacity to bounce forward after being faced with a decade of hurricanes, oil spills, and the Great Recession. --- Inequality in Climate Risk We compared levels of attention paid to climate risk associated with five selected SETEG factors, and examined whether the plans mentioned inequality in reference to these factors (inequality within each domain, Figure 3).
This comparison revealed that, because city officials are, by necessity, generalists, adaptation plans deal with many climate change issues at a time, from those related to economic development and land tenure to those associated with health, disaster management, housing, and critical FEW infrastructures (Supplemental Tables 2A-D). Evidence from the narrative understandings conveyed by the plans suggests that FEW-nexus thinking is not yet embedded in city officials' priorities, or that such considerations create a conundrum that officials are reluctant to tackle. Of the total of risk factors, those related to food, energy, and water systems were mentioned in 6, 14, and 20 reports, respectively (Figure 3). Where they did appear, food, energy, or water systems were treated separately, in most cases, without consideration of how their interdependencies can amplify or mitigate risk. The influence and vulnerability of FEW-systems were often framed in terms of techno-infrastructural issues associated with age, design, or capacity characteristics (blue bars, Figure 3). For example, the plans mention that FEW-systems and infrastructures are vulnerable because they are old, designed without consideration of the new (and unstable) normal that climate change will bring, and in need of retrofitting and climate-proofing actions. Buildings are also vulnerable because of poor quality design and construction, age, and lack of maintenance (Figure 3; Supplemental Table 2C). Inequality also tends to be given a lower priority and appears mainly in relation to other factors and very rarely in relation to FEW systems. Inequality considerations were included in 24 plans and represented 26 percent of the total mentions of techno-infrastructural risk factors. However, scant consideration was given to how techno-infrastructural and built environment factors condition unequal risk through such distributive mechanisms as differential access to water or sanitation, or differences in the provision and placement of infrastructures and services such as electricity, waste disposal, tree shading, parks, hurricane shelters, and evacuation routes. Locational (exposure) factors were mentioned in 32 plans (green bars, Figure 3) as related to the differential exposure of populations and FEW-systems to climate hazards. Adaptation plans in Lima, Mexico City, and Cape Town point to how the poor are priced out of desirable neighborhoods and are often forced to live in hazardous areas. In Seattle, San Francisco, and New Orleans, adaptation plans show concerns for how inequality makes poorer populations more likely to occupy low-lying areas prone to flooding, or more likely to experience heat island effects, because these areas are more affordable. Related to location, environmental risk factors were mentioned in 12 plans (green bars, Figure 3). Some of these mention that many informal settlements are located in areas where the high water table and inadequate infrastructure make them particularly vulnerable to flooding (e.g., Cape Town, Buenos Aires, Tshwane, Mexico City, and Lima). Cities from the Global North also offer examples of how low-income communities living in brownfields or in flood risk areas face higher levels of exposure not only to sea level rise, floods, and heatwaves but also to contaminated land (e.g., New York and New Orleans).
Regarding economic factors, twenty-seven adaptation plans (67%) refer to economic development as a key determinant of risk, and twenty-three (53%) of all plans mention urbanization as a broader driver of risk (yellow bars, Figure 3). Interestingly, 27, or 62%, of the adaptation plans referred to unequal economic growth conditioning access to determinants of a population's capacity to mitigate risks and to adapt. Such determinants include location, and access to secure land and to affordable, accessible, and good quality housing, energy, water, food, and transportation (yellow bar, Figure 3). In the adaptation plans of Lima, Mexico City, and Cape Town, the narratives acknowledge deep inequalities and high poverty rates that relate to the existence of informal, unplanned settlements whose populations have precarious housing without the adequate FEW resources necessary to protect themselves against hazards. Recognition of such conditions is rare in the adaptation plans of the Global North. New York is one of the handful of such cities indicating that nearly half of its people live in or near poverty, and lack access to good quality housing and other resources needed to adapt. While 17 adaptation plans refer to socio-demographic factors such as population size and growth, age, gender, and pre-existing medical conditions as determinants of vulnerability, 20 plans convey an understanding of governance as a determinant of risk and vulnerability (purple bars, Figure 3). Such governance-conditioned risks operate through investments and the location of FEW infrastructures and service networks, and through the legacies of actions and policies around land use planning, or its lack, though this is not generally acknowledged in the plans (orange bars, Figure 3). As for inequality, socio-demographic and governance factors creating social exclusion by class, gender, race, migration, and minority status were mentioned in 13 and 5 plans, respectively (orange and purple bars, Figure 3). Adaptation plans from cities in middle- and low-income countries tended to mention the influence of social exclusion on inequality in access to affordable energy, water, food, and sanitation, and reliable transportation systems more often than plans from high-income countries. Race, however, appears in the adaptation plans of the US cities of New York, New Orleans, and San Francisco as a predictor of risk. These plans indicate that people of color are more likely to live in areas more at risk of flooding and subsidence, to live in poverty, to be unemployed, and to have pre-existing health conditions associated with higher hazard risks. These plans also recognize that their marginalized populations have lower capacities to mitigate and adapt (Supplemental Tables 3A-D). --- Policy Actions to Address Inequality in Risk and FEW-nexus In our mapping of the power effects emerging from adaptation discourse among policymakers, we examined whether planned adaptation actions aimed at either reducing hazard exposure or tackling the drivers of social vulnerability considered inequality. The adaptation actions identified were organized into "dispositifs" as defined in section Tracing Existing Scholarship. We sorted "dispositifs" among techno-infrastructural, institutional-behavioral, economic, and environmental action categories. Our findings suggest that, while proposed adaptation actions tend to target many issues at a time, they also tend to prioritize infrastructural and economic issues, and that inequality is a secondary concern.
Furthermore, city officials tend not to address the links and feedbacks between critical FEW infrastructural systems but rather to suggest actions to manage each infrastructural system one at a time. Techno-infrastructural actions, which can be a means of fostering distributive justice, received the highest number of mentions (124, or 41%; blue bars, Figure 4). However, by and large, distributive justice was not considered. Instead, actions were presented in the plans as a means to protect buildings and infrastructure through changes to design. Similar to what we found in our examination of narrative understanding, suggested policy actions did not address the links and interdependencies among critical FEW-systems but rather focused on one sector at a time. Examples of planned infrastructural adaptation actions included:

• Improving energy redundancy and reliability (e.g., distributed power), flood-proofing the design of surfaces, and increasing the extent of cooler, green surroundings (Changwon, Chicago, Karachi, New Orleans, Paris, Seattle).
• Introducing low-carbon or renewable energy sources, reducing coal usage for electricity generation, and promoting energy-efficient and resilient technologies, appliances, and designs in buildings and developments, e.g., cooling systems, LED and fluorescent lighting (Amsterdam, Quito).
• Adapting water infrastructures to withstand heavy rain events, drought, and heat; climate-proofing water systems; and implementing a water-sensitive approach to urban design and flood mitigation through blue and green infrastructures (Copenhagen, New York, Rotterdam, San Francisco).

Techno-infrastructural actions were most frequently organized around resilience, low-carbon utilities and buildings, promoting a circular economy, and risk as a source of investment opportunity (Supplemental Table 3A). For instance, Amsterdam and Boston suggested fostering a circular economy to reduce waste and increase recycling throughout economic activities and districts. Other cities, such as Copenhagen, suggested basing adaptation on a risk and resilience approach aimed at improving infrastructure adaptability to new or unexpected conditions by achieving a city-wide, multiple-purpose, longer-term risk mitigation vision. There were a few exceptions where plans used techno-infrastructural actions aimed at addressing inequalities in risk. For instance, the following actions were suggested:

• Reducing intra-urban differences in water scarcity, access, and use; increasing water coverage for poor and informal populations without regular, safe, and continuous water service (Cape Town, Durban, Johannesburg, Kolkata, and Mexico City); and providing access to weatherization of homes for low-income families (Seattle).
• Scaling up development tied to renewable energy services to accomplish a lower energy impact while reducing poverty and promoting economic development (Durban, Tshwane).
• Fostering structural investments that consider the consequences of interrupted energy supply during and after extreme events, and that target those most affected (Durban, Tshwane).
• Renovating slums and informal or poor settlements (Addis Ababa, Buenos Aires, Cape Town, Durban, Kolkata, Mexico City, and Tshwane).

Institutional-behavioral actions were second in the number of mentions (118, or 39% of the total). The focus, in order of importance, was on knowledge and awareness, monitoring, urban planning, disaster risk management, and institution building (orange bars, Figure 4).
Awareness and knowledge, and monitoring, were addressed in 31 and 29 of the plans, respectively. These plans suggest a suite of strategies to systematically evaluate, assess, understand, and monitor the kinds of climate risks and vulnerabilities cities face (Supplemental Table 3B). They also suggest using scientific and technical expertise as a vital source of knowledge. For instance, Amsterdam suggests improving the city's knowledge and understanding of data so that it can become an active partner, steering events toward sustainability based on a knowledge of interconnections between systems such as energy and water. Two crucial adaptation instruments received attention in 22 adaptation plans each: disaster risk reduction (DRR) and urban planning. Elements of DRR included early warning systems, cooling centers for poorer populations, and climate-sensitive management protocols (e.g., Bogota, Kolkata, Mexico City, San Francisco, Quito, Rio de Janeiro, and Sydney). Urban planning was mentioned as a fundamental tool for anticipating climate change impacts, fostering early action, and even preventing risks (orange bars, Figure 4). Some plans (e.g., Lima and Tshwane) acknowledged institutional barriers to effective implementation, such as weak law enforcement. Others pointed to gaps in the levels of authority and autonomy to control the investments and decisions that are fundamental not only for effective urban planning but also for managing the drivers of climate risk in the city.

FEW-nexus thinking in relation to equality received scant attention within planned institutional-behavioral actions. We found only the following few examples of strategies to enhance equality within each sector:

• Community-based adaptation actions, such as upgrading informal settlements, building flood-water drainage and sewer systems in poor areas (Mexico City and Tshwane), and training poor communities in disaster management and response (Bogota).
• Increasing the share of renewable energy per capita through demand management actions, such as agreements with a number of utilities, incentives that support energy-efficient practices, and reduced electricity consumption during peak hours (Amsterdam, Durban).
• Inducing water conservation through water restrictions, tariffs, and leak reduction (Cape Town).
• Enforcing policies and by-laws that make healthy food accessible to all (Boston), and reserving space for local decentralized food hubs that can supply small traders while reducing ecological impact through the support of small-scale, sustainable farming practices (Durban).

Within the economic instruments suggested in 38 adaptation plans, equality considerations were likewise virtually absent. While many of the plans seek to create enabling environments for independent action by both governmental and nongovernmental actors, for example through infrastructural investments, they largely aim at enhancing their economies without regard for structural inequality or uneven distribution. Through these actions, the plans also aim to support broader goals such as the Sustainable Development Goals. Indeed, the governments that produced many of the adaptation plans we analyzed are driving investments in major flood defenses and in the transportation, water, and sanitary services sectors, but they generally steer away from equality considerations in these investments and are more concerned with how they will fund them.
Some cities, particularly in high-income countries, are explicitly and actively partnering with the private sector (Amsterdam, Copenhagen). One of these plans acknowledges that society at large will pay a large dividend to have infrastructures privately constructed and operated (Copenhagen). Environmental actions were considered in 40% of the plans, and many of these contain actions primarily focused on increasing or protecting biodiversity (e.g., Karachi, Montreal, Seoul, and Los Angeles) and on strategies for managing ecosystem services (green bars, Figure 4; Supplemental Table 3D). For instance, the plans suggest actions to green the cities' streets, parks, and open spaces in order to serve multiple risk mitigation purposes. Other planned actions include efforts to increase biodiversity and reduce urban heat island effects (e.g., Sydney, Vancouver, Melbourne), to increase urban agriculture (Seoul), and to better manage such hazards as runoff or fires (e.g., Rotterdam, Melbourne, Rio de Janeiro, and Portland). Nature- or ecosystem-based adaptation actions are also suggested to increase the resilience of vegetation to climatic and ecological impacts (such as erosion; Montreal), or to establish temporary rainwater catchment systems (Mexico City). Some cities also suggest conservation or rehabilitation of degraded ecosystems (Tshwane, Quito, and Mexico City) and protecting or restoring natural protections in coastal areas (New Orleans).

--- ADAPTATION PLANS AND RISK INEQUALITY

In this study, we examined evidence from 43 adaptation plans to determine whether and how they considered the factors driving inequality in the exposure and vulnerability of people and the FEW systems that support them. To do this, we combined a discourse analysis with a meta-analysis of adaptation plans for 43 C40 cities. We are not the first scholars to conduct meta-analysis; examples of existing literature include Misselhorn (2005), Romero-Lankao et al. (2012), and Endo et al. (2015). Nor are we the first to examine environmental discourse, even with regard to FEW systems. For instance, existing discourse scholarship has shown that a risk approach is prevalent among FEW-nexus scholars (Wiegleb and Bruns, 2018). Because risks lack immediacy, that analysis argues, discourse around FEW risks entails connecting a future scenario to a policy, "presented as a way of preventing that risk from materializing into real harm" (Corry, 2012, p. 244). Our methodological innovation lies, rather, in our combination of discourse analysis with meta-analysis. We used this combination to examine narrative understanding and planned adaptation actions in 43 city adaptation plans. We integrated several theoretical strands of scholarship: FEW-nexus thinking; adaptation and inequality; climate change risk and adaptation; and discourse analysis. Nevertheless, we did not examine why and how the socio-political and geographical contexts in which city officials operate shape their interpretations and planned actions. Nor were we able to determine how, or whether, the suggested adaptation actions were implemented. These represent the shortcomings and limitations of our study that make it largely exploratory in nature. Notwithstanding these limitations, however, some clear patterns emerged that can help guide future research and policy.
We found that FEW-nexus thinking is not yet embedded in city officials' narrative understandings of risk and planned adaptation actions, even though unpacking interdependencies among food, energy, and water systems may help cities tackle some of the root causes of vulnerability and risk (Romero-Lankao and Norton, 2018). Other scholars have already pointed to the fact that, while promising, FEW-nexus thinking faces many practical challenges. For instance, knowledge integration is constrained by the existence of a plurality of sectors, levels of government, power, values, and ways of understanding and managing climate risk (Leck et al., 2015; Romero-Lankao et al., 2017c). Scholars also suggest that local governments lack the institutional and organizational capacities needed to appropriately manage the complexity and uncertainty associated with climate risks, let alone inequalities in the vulnerability of people or how that vulnerability interplays with FEW systems. Officials within sectors involved in managing climate risk, such as food, energy, water, disaster risk management, and urban planning, hold diverse organizational and cultural values. They lack the incentives, rights, financial resources, and responsibilities needed to work across sectors and jurisdictions (Scott et al., 2015). Additionally, decision makers involved in DRR and adaptation policies lack interaction and coordination because of differences in language and political culture (Schipper, 2009). An examination of these factors is an essential first step toward developing the skill sets, tools, funding, and incentives needed to foster nexus thinking in risk mitigation and adaptation practice.

In the city adaptation plans we analyzed, we found multiple frames coexisting behind the broader adaptation visions conveyed in their narratives. Rather than converging, issues and principles such as those of equality coexist with economic issues in an imbalance of incongruent political movements and priorities (Anguelovski and Carmin, 2011; Campbell, 2013). In this disharmony, techno-infrastructural and economic investments and concerns tend to take precedence over concerns for inequality or the environment in climate risk. Clearly, challenges exist with under-investment, backlogs, and deferred maintenance of infrastructure. Urban infrastructures in many developed countries are deteriorating, and in developing countries infrastructure construction and maintenance have often failed to keep pace with the dynamics of urbanization (Kraas et al., 2016). Adaptation plans recognize that, by working as a risk amplifier, climate change is projected to intensify these challenges through at least two mechanisms: long-term, slow impacts, such as the constant deterioration of stormwater systems due to floods (mentioned in the adaptation plans of 27 cities), and extreme events, such as hurricanes (mentioned by 10 cities), that damage critical FEW infrastructural systems. Still, with a few exceptions, equality concerns were not the priority. In the adaptation plans, narrative understanding and policies to address techno-infrastructural challenges were frequently organized around resilience, low-carbon utilities and buildings, promoting a circular economy, and risk as a source of investment opportunity. All these strategic decisions advance cities as centers of economic and infrastructural growth.
However, they run the danger of fostering inequality in access, related to distributional justice, by creating climate-proof places that become more exclusive and expensive, pricing out marginalized populations who end up living in less desirable areas and lacking access to critical FEW infrastructures (Coutard, 2008; Zérah, 2008). In their adaptation plans, cities of high-income countries are seeking to explicitly and actively partner with the private sector (Amsterdam, Copenhagen). Policy-makers in these cities reason that moving infrastructural development and operation to the private sector can be a way of diverting development costs away from government and reducing the need for politically unpopular taxes. However, this has not often proven to be a good strategy, as private interests must inevitably draw profits from their projects, leaving less for the public good. Ultimately, this will have implications for inequality in risk, as poor communities, those most in need of investments in climate-proofing, are more likely to be excluded not only from decisions (procedural justice) but also from reaping the benefits of techno-infrastructural interventions (distributional justice) (Coutard, 2008; Zérah, 2008; Revi et al., 2014).

Socio-institutional actions relate to the distributive and procedural aspects of equality in different ways (Reckien and Lwasa, 2017), for instance, by involving vulnerable populations in decisions on land use and the location of infrastructural investments, in the generation of knowledge, or in the monitoring of climate risks (Moser, 1998; Moser and Satterthwaite, 2010; Bouzarovski, 2014). Nonetheless, rather than using participatory instruments such as community-based adaptation (Ebi and Semenza, 2008; Dodman and Mitlin, 2013), the plans mostly suggest using scientific and technical expertise as a vital source of knowledge. There are reasons for this. Climate change adaptation is highly data-dependent, demanding that city officials engage in new ways of gathering data, collaborating with scientists, using scientific information, and dealing with uncertainty (Hughes and Romero-Lankao, 2014). Yet the focus on technical knowledge is a key element of prevalent cultural values that inhibit poor and marginalized populations from effectively participating in decisions on where to locate critical FEW infrastructural investments that affect their well-being, property, resources, climate risks, and capacities to adapt and mitigate.

Although our current study, based purely on textual analysis, did not attempt to examine socio-political context (knowledge production), our conclusions do suggest that socio-political context was at play in the creation of the plans. Even beyond that, they suggest that common elements in socio-political context may be drawing cities away from actions that effectively address such complex concerns as vulnerability and inequality, toward those least conflicting with economic priorities. The relatively low importance of equality considerations in the adaptation plans will likely limit the capacity of cities to support broader goals such as the Sustainable Development Goals, the Sendai Framework for Disaster Risk Reduction, and the New Urban Agenda (Simon et al., 2016). The purposefully inclusive scope of the New Urban Agenda and of the targets and indicators in the urban SDG (Goal 11) provides a unique opportunity to include equality considerations in adaptation (Romero-Lankao et al., 2018).
Prospects for progressing and mainstreaming climate change agendas, therefore, depend on demonstrating that climate agendas do not always and irreconcilably conflict with development priorities, such as those related to equality. From a longer-term perspective, they are essential and complementary to them.

--- ACKNOWLEDGMENTS

This work was supported by the National Center for Atmospheric Research, sponsored by the National Science Foundation. Open access publication of this article was funded by the National Renewable Energy Laboratory, operated by Alliance for Sustainable Energy, LLC, for the U.S. Department of Energy (DOE) under contract DE-AC36-08GO28308. We want to thank Dakota Smith, Adelmut X. Duffing Romero, and Olivia Pearman for their support in reading and analyzing the plans. We also want to thank our C40 partners Neuni Farhad, Caterina Sarfatti, Snigdha Garg, and Amanda Ikert for their keen insights in the reviews of the report that inspired this paper.

--- AUTHOR CONTRIBUTIONS

PR-L led the design, gathering, analysis, and interpretation of data for the work. She also drafted and revised the work critically for important intellectual content. DG contributed to the design, analysis, and interpretation of data for the work. He also drafted and revised the work critically for important intellectual content.

--- SUPPLEMENTARY MATERIAL

The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fsoc.2019.00031/full#supplementary-material

--- Conflict of Interest Statement: The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Introduction. Cessation of tobacco use has the potential to provide the greatest immediate benefits for tobacco control. Understanding the social determinants of smoking cessation is an essential requirement for increasing smoking cessation at the population level. The purpose of this study was to analyze the socio-economic dimensions associated with cessation success among adults in Argentina and Uruguay. Materials and methods. Data from the Global Adult Tobacco Survey (GATS), a cross-sectional, population-based, nationally representative survey conducted in Argentina (n=5,383) and Uruguay (n=4,833), were utilized. Univariable and multivariable logistic regression analyses, with results presented as odds ratios (OR) with 95% confidence intervals, were applied to study differences between respondents who sustained smoking abstinence (≥1 year) and those who continued smoking. Results. The GATS study revealed that social gradients in tobacco quitting exist in Argentina and Uruguay. Being aged 25-34 (particularly for men in Uruguay and women in Argentina), being a low-educated man in Argentina, and having a lower asset index were associated with reduced odds of quitting. Conclusions. Factors driving differences in smoking cessation between diverse social groups in Latin American countries need to be considered when implementing relevant interventions, to ensure tobacco control strategies work effectively for all population segments.
INTRODUCTION

The 145 million smokers in the Region of the Americas account for 12% of the more than 1 billion smokers in the world [1]. The region ranks fourth among the six regions of the World Health Organization (WHO), with a 22% smoking rate among the adult population [1]. Tobacco is a leading preventable risk factor for the major non-communicable chronic diseases (NCDs) that are currently responsible for almost two-thirds of deaths worldwide. In the Region of the Americas, NCDs are responsible for 77% of all deaths: among these, tobacco is responsible for 15% of deaths from cardiovascular diseases, 26% of deaths from cancer, and 51% of deaths from respiratory diseases [1]. According to the WHO, tobacco use and exposure to secondhand smoke kill about 1 million people annually in the Americas [2]. In Argentina, tobacco is responsible for 14% of all NCDs, compared with 8% of all communicable diseases [2]. Despite some decrease observed during recent years, high smoking prevalence and related harm still remain a significant public health concern in Argentina and Uruguay [1,3].

Apart from preventing tobacco smoking among young people, encouraging cessation is essential to ending the tobacco epidemic. Cessation of tobacco use has the potential to provide the most immediate benefits of tobacco control and to maximize the benefits in terms of preventable disease morbidity and mortality [4]. However, achieving substantial improvement will depend on successful implementation of the relevant tobacco control measures that can increase the smoking cessation rate at the population level in Argentina, Uruguay, and other Latin American countries. In general, smoking prevalence and tobacco consumption are much higher in certain social groups [5]. Correspondingly, an increased susceptibility to tobacco-related illnesses has been found in low-income groups, especially for all-cause mortality, lung diseases, and low birth weight [5]. Likewise, several studies have indicated a social gradient in tobacco use in Argentina as well as in Uruguay. Fleischer et al. showed that better socio-economic status, measured through education, was related to less smoking and higher odds of recent quitting [6]. The most recent study, by De Maio et al., revealed social gradients in tobacco use, exposure to secondhand smoke, and cessation attempts among Argentineans and Uruguayans [3]. Therefore, social context cannot be overlooked when discussing applicable strategies to improve the design and implementation of appropriate tobacco policies and cessation programs in both countries. Data on the factors associated with successful smoking cessation that can be analyzed by socio-economic factors beyond age and gender are crucial for the development of a potential high-impact population smoking cessation strategy [7]. In view of that, the purpose of our study was to examine the socio-economic dimensions associated with successful smoking cessation among adults in Argentina and Uruguay.

--- MATERIALS AND METHOD

The data source was the Global Adult Tobacco Survey (GATS) [9]. GATS is a nationally representative household survey designed to monitor key tobacco control indicators. The target population of GATS includes all non-institutionalized men and women 15 years of age or older. The study protocol and questionnaire are based on a standard methodology with some country-specific adaptations. The detailed methodology of the survey has been described elsewhere [8,9,10].
A multi-stage, geographically clustered sample design was used to produce nationally representative data. The GATS questionnaires were administered by trained survey staff during in-person interviews. There were 6,645 and 5,581 completed individual interviews, with overall response rates of 74.3% in Argentina and 95.6% in Uruguay, respectively. Missing data were excluded from the analysis. After exclusion of respondents younger than 25 years, the final sample used in this study consisted of 5,383 Argentineans and 4,833 Uruguayans.

--- Study variables.

The main outcome variable was successful smoking cessation among adults in Argentina and Uruguay. Previous studies on quitting smoking are not homogeneous in defining successful quitting, and many different measures of success have been suggested [11,12,13,14]. Some studies have shown that the risk of relapse is relatively high for people who abstain from smoking for short periods and are at the early stages of smoking cessation; about 65-75% of these at-risk groups relapse within a year [11,12,15,16]. In the presented study, successful quitting is defined as having abstained from smoking for a year or more [17]. A sustained quitter was defined as a former daily smoker who had smoked for at least 1 year and had stopped smoking for 12 months or more prior to the interview. Subjects who had given up smoking more recently were considered recent quitters. A continuous smoker was defined as a current daily smoker who had smoked more than an average of one cigarette per day on a regular basis for at least one year. The ever-smokers group comprised all of the above categories: current smokers, former smokers, and recent quitters. Overall lifetime cessation rates, or "quit rates," were calculated as the number of former smokers divided by the number of ever smokers, multiplied by 100% [18].

The independent variables applied for determining associations with successful cessation were demographics: gender (male, female) and age of the respondents. Age was studied in five groups: 25-29, 30-39, 40-49, 50-59, and ≥60 years old. Age at smoking onset (the age at which respondents started to smoke tobacco on a regular basis) was also considered (≤17, 18-20, and 21 years or over). Moreover, socio-economic status, including education, economic activity, monthly household income, and ownership of different household items, was evaluated. Educational attainment was categorized as primary or less, secondary, and higher education. Economic activity differentiated subjects who were currently employed, self-employed, homemakers, or unemployed. A variable called the "asset index" was created, based on a summative score of possession of the following assets: functioning electricity, flush toilet, fixed telephone, cell telephone, television, radio, refrigerator, car, washing machine, computer, and internet access. The summative score was then divided into high, medium, and low categories. An analogous methodology has been implemented elsewhere [19]. Additionally, awareness of the negative health consequences of smoking was assessed. Respondents were categorized as aware (those who answered 'yes' to the question 'Do you think that tobacco smoking causes serious diseases?') and not aware (those who answered 'no' or 'do not know'). Similarly, awareness of the adverse health consequences of environmental tobacco smoke (ETS) exposure was determined, and respondents were characterized as aware or not aware.
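As a rough illustration of the two derived measures described above, the following sketch shows one plausible way to construct the asset index and compute the lifetime quit rate. This is not the authors' code (the original analysis was run in STATISTICA, not Python); the column names, the 0/1 coding of asset ownership, and the tertile split of the summative score into high, medium, and low are assumptions made for illustration.

```python
# Minimal sketch (hypothetical column names): GATS-style asset index
# and lifetime quit rate, as described in the Study variables section.
import pandas as pd

ASSETS = ["electricity", "flush_toilet", "fixed_phone", "cell_phone",
          "television", "radio", "refrigerator", "car",
          "washing_machine", "computer", "internet"]

def add_asset_index(df: pd.DataFrame) -> pd.DataFrame:
    """Sum 0/1 ownership indicators, then split the score into three groups.

    The paper says only that the score was divided into high, medium, and
    low; a tertile split is one plausible reading of that rule.
    """
    df = df.copy()
    df["asset_score"] = df[ASSETS].sum(axis=1)
    df["asset_index"] = pd.qcut(df["asset_score"], q=3,
                                labels=["low", "medium", "high"])
    return df

def quit_rate(df: pd.DataFrame) -> float:
    """Lifetime quit rate: former smokers / ever smokers * 100%."""
    ever = df["smoking_status"].isin(["current", "former", "recent_quitter"])
    former = df["smoking_status"].eq("former")
    return former.sum() / ever.sum() * 100
```

Computed this way, separate calls on the male and female subsets would reproduce the gender-specific quit rates reported in the Results below.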
Cohabitation with a smoker (yes, no) was also taken into account.

--- Analysis and statistics.

The STATISTICA programme, version 8.0 for Windows, was used to carry out the statistical analysis. All analyses were performed separately for men and women. First, a descriptive analysis of all variables involved was completed. Categorical variables were studied by the chi-square test. Univariable and multivariable logistic regression analyses, with results presented as odds ratios (OR) with 95% confidence intervals, were applied to study differences between respondents who sustained smoking abstinence for one year or longer and those who continued smoking. In the multivariable analyses, all statistically significant socio-economic variables were simultaneously included in the model. The significance level for the relevant calculations was set at 0.05.

--- RESULTS

The characteristics of the respondents are described in Table 1. In both Argentina and Uruguay, there are more male than female ever smokers: Argentina recorded 40.7% male vs. 25.8% female smokers, and Uruguay 60.4% male vs. 36.1% female smokers (p≤0.001). Similarly, in both countries men tended to start smoking earlier than women: more men started by the age of 17, while more women started at age 21 or later. By the age of 17, 58.0% of men and 43.0% of women had started smoking in Argentina, and 58.6% of men and 47.5% of women in Uruguay (p≤0.001). On the other hand, 23.4% of women vs. 11.6% of men in Argentina, and 24.9% of women vs. 9.8% of men in Uruguay, started smoking at age 21 or later (p≤0.001). Smokers in both countries differed by economic activity.

Male smokers and quitters in Uruguay tended to be older than their counterparts in Argentina, while the women were quite similar in age. The average age of male ever smokers in Argentina was 47.8±15.3 years. In the same vein, current male smokers in Argentina were 43.1±13.2 years vs. 47.0±13.8 years in Uruguay, while female smokers were 44.6±13.7 years and 44.9±13.6 years in Argentina and Uruguay, respectively (p > 0.05). On average, former smokers were somewhat older in both countries: 54.3±15.8 years and 58.6±15.4 years for men, and 51.8±16.1 years and 51.2±16.1 years for women, in Argentina and Uruguay, respectively (p < 0.04). Recent quitters were 39.7±12.5 vs. 48.0±14.4 years for men, and 39.2±13.5 vs. 48.0±14.4 years for women, in Argentina and Uruguay, in that order (p > 0.05). Following the same trend, women started smoking later than men in both countries (data not shown). Former and current male smokers started smoking at 17.3±5.1 and 17.1±4.5 years in Argentina vs. 16.7±4.6 and 17.0±5.0 years in Uruguay, respectively. Also, female former and current smokers in Argentina started at 19.6±6.5 and 19.4±7.5 years vs. 19.8±7.8 and 19.2±6.7 years in Uruguay, respectively (men vs. women p<0.001). Alternatively, a higher quit rate was observed among women relative to men in Argentina (39.6% for women compared with 38% for men) and a lower one in Uruguay (50.1% for women compared with 52.7% for men) (p>0.05). Interestingly, women who successfully quit did so at a slightly younger age than men. The mean age of quitting for male and female former smokers was 38.9±13.2 and 37.7±13.9 years, respectively, in Argentina, and 41.7±14.3 and 38.3±14.3 years, respectively, in Uruguay (p > 0.05).

--- Univariate regression.
In both countries, men older than 45 years were more likely to be long-term quitters relative to those aged 35-44, but those over the age of 65 had the highest likelihood of maintaining cessation: Argentina (OR=7.61; 95% CI 4.76-12.16) and Uruguay (OR=4.70; 95% CI 3.29-6.73); p<0.001. Similar results were obtained among women (Tab. 3). In Argentina, men with complete or incomplete secondary education had a lower likelihood of being long-term quitters (OR=0.62; 95% CI 0.42-0.92; p<0.05) relative to those with tertiary education (Tab. 2). Results for women in Argentina were not statistically significant. In Uruguay, education did not produce statistically significant results for either men or women. Retired men in Argentina had higher odds of quitting smoking for the long term than employed men (OR=5.47; 95% CI 3.77-7.94; p<0.001). Results were statistically insignificant among Uruguayan men. Similarly, among women, retired respondents showed better prospects of being long-term quitters in Argentina (OR=3.58; 95% CI 2.36-5.44; p<0.001) and Uruguay (OR=4.70; 95% CI 3.29-6.73; p<0.001). The asset index was also a significant predictor of long-term quitting.

--- Multivariable regression.

The evidence showed no statistically significant association between economic activity and being a long-term quitter among men in both countries, or among women in Argentina. On the other hand, retired women in Uruguay (OR=1.33; 95% CI 1.09-2.24; p<0.05) were more likely to be long-term quitters relative to those currently employed. Similar to the univariate results, men and women with a high asset index had an increased likelihood of maintaining their status as long-term quitters.

--- DISCUSSION

Understanding potential social gradients in the population and their relation to quitting has significant implications for the development of a future population strategy for smoking cessation. The majority of studies on smoking cessation are derived from a Western context; it was therefore uncertain whether their findings would apply to two neighbouring Latin American countries, Argentina and Uruguay. First, a lower lifetime quit rate was observed in Argentina compared with Uruguay (39.2% vs. 51.7%). These data coincide with trends observed in recent years showing greater progress in Uruguay than in Argentina in many areas of tobacco control. Recent trends also showed intensified tobacco industry endeavours to postpone or undermine tobacco control legislation and policy in Argentina [20,21,22]. However, in both countries, quit rates were higher than in middle-income European countries such as Romania and Poland, where one-third of ever smokers have given up smoking [14,23]. Conversely, quit rates in Argentina and Uruguay are lower than in more developed countries, for example, Canada, where the quit rate reaches 60% [24]. This suggests that huge gaps still exist among countries in terms of the implementation, enforcement, and comprehensiveness of tobacco control efforts to curb the tobacco epidemic, including cessation measures. While the majority of studies on socio-economic inequalities in smoking have focused on education and used smoking prevalence as the outcome of interest, the presented analysis focused on more than one particular dimension and on being a successful quitter [16]. Although there is some variability in the findings, socio-economic conditions have been identified as a predictor of quit attempts and quitting success in a number of studies [25,26,27]. De Maio et al.
found, based on the GATS data, a reverse gradient (although lacking statistical significance) in smoking cessation attempts, which were reported more frequently in the recent year by Argentineans and Uruguayans with lower levels of education [3]. However, when analyzing education and cessation success, the current study found that men in Argentina with lower educational attainment also had reduced odds of achieving tobacco abstinence for a year or more. The regression analyses of education for women in Argentina and for respondents from Uruguay did not produce any statistically significant results. In general, this may suggest that male Argentineans with a lower educational background are more likely to attempt to quit, but less likely to sustain abstinence, compared with those with higher education. This is in line with the findings of Kotz et al., who indicated that smokers in more deprived socio-economic groups are just as likely as those in higher groups to attempt to stop smoking; however, there is a strong gradient of success across socio-economic groups, with those in the lowest group being half as likely to succeed compared with the highest [28]. On the other hand, some studies have not found a relationship between socio-economic factors and quitting, particularly in multivariate analyses that also include other important characteristics [14]. Figures from the International Tobacco Control Four Country Survey showed that education was not generally associated with cessation success, although a few particular levels in certain countries were significantly associated with quitting success [29]. Furthermore, Siahpush et al., in a study of a national sample of Australians, confirmed that while education had the strongest relationship with smoking cessation of all the factors controlled, the relationship between higher education and increased odds of cessation no longer existed when other environmental and individual variables were included in the model [30].

Moreover, in the presented study it was noticed that retired women from Uruguay had higher odds of successfully quitting. This success can be linked with the fact that this group comprises older people, who are more likely to quit, mostly for health reasons, as previously discussed. In Argentina and Uruguay, unemployed respondents had decreased odds of successfully quitting, but the results were not statistically significant. Figures from other GATS-based studies have brought mixed results in this area. Being economically active was associated with long-term quitting among men in Romania [14]. In GATS Poland, employed males also had more than twice the probability of giving up smoking compared with the unemployed [23]. An association with employment status among women was not observed in either country. However, GATS revealed that long-term smoking cessation was harder for men from disadvantaged groups with low asset indices in Argentina and Uruguay. Lower socio-economic groups are generally less likely to be successful quitters, although there is some variation [6,27,31,32,33]. These findings are mostly based on education and/or income data and cannot be compared directly with GATS results based on the asset index. Further studies of the expected social gradients in quitting and the asset index are needed.

--- Study strengths and limitations.

The data derived from GATS are the most recent, nationally representative data, based on a high number of respondents.
The study considers various potential cessation predictors but also has some limitations. For the purposes of this study, subjects were selected who were aged 25 years or older at the time of the survey. The analysis was restricted to individuals aged 25 and above because younger people might still be engaged in the process of smoking uptake [34]. Moreover, subjects under 25 might not have completed their maximum level of education [35]. In addition, continuous abstinence for twelve months or longer was assessed by self-report and was not validated. Self-report methods are the most convenient and cheapest way to collect data on tobacco smoking from a large number of respondents in a short time. However, a possible limitation in obtaining answers about smoking is recall bias, which might lead to underestimation of tobacco consumption. Nonetheless, self-report techniques are stated to be a valid tool for population studies, as addressed in previous papers [36]. Although the GATS questionnaire included questions on the duration of tobacco smoking and the age of smoking onset, nicotine dependence and heaviness of smoking, which are considered important determinants of cessation, were not obtained in these data for former smokers who had maintained tobacco abstinence for over one year. There was no information on successful quitting for sustained quitters, such as the number of quit attempts, the duration per quit attempt, or details on assisted or unassisted quitting. Due to the unavailability of data, it was also not possible to compare some other information from Argentina and Uruguay with other countries, such as quitting motivations and the impact of previous tobacco control measures, including tobacco tax increases. Another limitation is the inability to draw conclusions about the causality or directionality of some results, given the cross-sectional study design. Nevertheless, in contrast to studies evaluating the efficacy of smoking cessation treatment programmes, or cessation in high-risk groups of heart disease patients, the presented study population should be more representative of the great majority of quitters, who quit on their own [18].

--- CONCLUSIONS

The GATS study revealed that a social gradient in tobacco quitting exists in Argentina and Uruguay. It also identified characteristics associated with long-term sustained tobacco abstinence in both countries. This study provided insight into specific categories beyond age and gender that were not broadly studied previously, such as the asset index. The current study also highlighted the need to encourage tobacco control measures that focus on the populations that have a harder time quitting smoking. These include younger people, with special attention paid to groups aged 25-34 (particularly men in Uruguay and women in Argentina), people with low education, and those in a lower economic position as characterized by the asset index. A number of evidence-based individual or community-based policies that are delivered according to the social context, work successfully in other countries, and target socially disadvantaged groups could be adopted in Argentina and Uruguay [37,38,39]. This may facilitate the reduction of inequalities in tobacco-related harm within populations. Unless tobacco consumption is addressed across all social groups, with attention to the distribution of impacts, improvement will not be experienced equally everywhere, or by everyone [7].
Finally, further systematic research is needed to understand the factors driving differences in quitting tobacco smoking between diverse social groups in Latin American countries, to ensure tobacco control policies work effectively for all population groups.
The literature shows that migration characteristics are a potential pathway through which migration can influence basic healthcare service utilization. The goal of this study was to explore the effect of migration characteristics on the utilization of basic public health services for internal elderly migrants in China and to identify pathways that might promote their utilization of basic public health services. We studied 1,544 internal elderly migrants. The utilization of basic public health services was determined through participation in free health checkups organized by community health service institutions in the past year. Migration characteristics were represented by years of residence and reasons for migration. Other variables included demographic characteristics and social factors; e.g., the number of local friends and exercise time per day were measured to represent social contacts. Multivariate binary logistic regression was employed to explore the association of the variables with the likelihood of using community health services. Results: A total of 55.6% of respondents were men, and the mean age was 66.34 years (SD 5.94). A low level of education was observed. A total of 59.9% of migrants had been residents for over 10 years, and the main reason for migrating was related to family. Of these migrants, 12.9% had no local friends. Furthermore, 5.2% did not exercise every day. Social contacts were complete mediators of the impact of migration characteristics on the utilization of primary healthcare. Our study highlighted the mediating role of social factors in the relationship between migration characteristics and the utilization of basic public health services among Chinese internal elderly migrants. The findings support the need to increase opportunities for social contact between local elderly individuals and internal elderly migrants.
BACKGROUND

The utilization of basic public health services is an important aspect of migrants' access to healthcare in the form of screening, preventive services, general practitioners, specialists, emergency rooms, and hospitals (1). In China, basic public health services are mainly provided by community health service centers (stations) in the community, with a large number of medical needs coming from a large aging population. One of the proposed solutions was to establish elderly support systems in community health service centers (stations) through primary healthcare (2). The core functions of basic public health services in China, including prevention, case detection and management, gatekeeping, referral, and care coordination, were to be provided by community health service centers in urban communities (3,4). This was expected to be an effective way to relieve congestion in higher-level hospitals; however, unlike in the hierarchical diagnosis and treatment systems of developed countries, patients in China can choose different types of hospitals at will (5). Studies in developed countries have shown that immigrants have lower rates of health insurance and use less healthcare than local populations (2,6). Reported determinants of health service utilization by immigrants have also been inconsistent (7). Studies conducted in the Americas found associations between immigrants' length of stay and their healthcare utilization, suggesting that acculturation or assimilation strongly correlates with the length of stay in the host society and could be an important determinant of health status (8). However, anthropological studies have argued that cultural differences do not necessarily fade with time (9).

Along with increasing urbanization and economic development, internal migrants who move between regions within the country have gradually become an integral part of migration in the context of national labor shortages in China (10). The number of internal migrants reached 241 million in 2018, of whom 18 million were more than 60 years of age according to the dynamic monitoring survey for internal migrants. Furthermore, the elderly migrant population over 60 years old is growing each year (11). The household registration policy, commonly known as "hukou," which classifies households by origin as urban or rural, was implemented in 1958 by the Chinese authorities (12). "Hukou" gives households access to social benefits in their registration area but limits their access to benefits outside that area (12,13). Only migrants who start working for the government or are highly educated can change their "hukou" registration (14). In contemporary China, an increasing number of families are migrating with their elderly members to look after children, find jobs, or access better healthcare services, and most of these elderly members are more than 60 years old (15). Strengthening the utilization of primary healthcare facilities is considered an effective approach to providing affordable, equitable access to quality basic healthcare for all Chinese citizens by 2020, as pledged by China (13,16). As the number of internal elderly migrants increases, their use of basic public health services should also improve to contribute to health equity.
For example, as one of the basic public health services, national policy proposed free annual health checkups in community health service centers (stations) for the elderly aged 65 and above, for which internal elderly migrants are not subject to household registration restrictions (17). However, many studies have indicated that fewer than 40% of elderly migrants participated in the free community health checkups in the past year. In addition, fewer than 40% of internal elderly migrants follow up on chronic diseases, and the level of other behaviors, such as establishing health records and seeking medical attention, is also low (15,18). This suggests that there are deficiencies in the health management of elderly migrants in China, and it forms an institutional challenge for basic public health service utilization among elderly migrants. Therefore, it is necessary to explore how to improve the utilization of basic public health services among internal elderly migrants.

Studies have shown that the factors affecting the utilization of basic public health services by the elderly include demographic characteristics and social factors (2,19,20). Some studies hold that the migration characteristics of elderly migrants directly affect their utilization of health services (21), but other studies have suggested that migration characteristics are associated with basic public health service utilization through social adaptation, acculturation, and other social factors (22). For example, migration characteristics such as years of residence are one factor shaping migrants' social support networks and acculturation, which contribute to basic public health service utilization (23). In general, it is evident that migration characteristics and social factors, such as community engagement, social mobilization, ability to communicate, reason for migration, and length of stay in the host country, are related to health service delivery (24,25). However, little work has been done to explore the possible pathways between social factors, migration characteristics, and the utilization of basic public health services, especially for internal elderly people (26-29). Therefore, our study explored the potential pathway of the impact of migration characteristics on the utilization of basic public health services for internal elderly migrants in China.

In our study, migration characteristics were represented by two dimensions, years of residence and reasons for migration, consistent with the literature (25). Social factors were represented by social contact, which plays an important role in determining individual health behaviors as a key dimension of poverty and well-being. However, measuring social contact is challenging: the vast and diverse conceptual literature provides no unified definition or measurement (30). Empirical studies have explored different aspects of social contact, including physical isolation and access to social resources (31), using, for example, how people feel about their communities as a proxy for physical isolation and ties with other people as a proxy for access to social resources. However, these methods draw attention away from simply counting numbers of social contacts (32). For internal elderly people in China, due to retirement and migration, interactions with friends are the main social ties, and exercise in the community is the main way of engaging with their communities.
Referring to the literature and considering the actual situation of the population in question, the number of local friends and exercise time per day in the community were chosen as indicators measuring two aspects of social contact (33). In this study, we present the utilization of basic public health services by internal elderly migrants and explore the potential pathway through which migration characteristics impact that utilization. The objective of this study was to complement the existing literature by providing further insights into the pathway that might influence basic public health service utilization for internal elderly migrants. The results might help policymakers design appropriate social policies to promote the utilization of basic public health services in this disadvantaged population.

--- METHODS

--- Data and Sampling

Data were derived from the dynamic monitoring survey for internal migrants, a special survey on internal elderly migrants organized by the National Health Commission of the People's Republic of China (formerly the National Health and Family Planning Commission) in 2015. It was part of the regular data collection of the government, and face-to-face home-based interviews were conducted by uniformly trained investigators. The response rates for the survey were not announced by the government. Our local institutional review board (IRB) exempted the analysis of the public-access data because it involved analyzing existing data that had been de-identified; ethical approval was not required for secondary data.

Stratified, multistage sampling based on a probability proportionate to size (PPS) method was adopted. The sampling frame comprised all migrant households that did not have "hukou" (registered resident certificate) in the local area and had been living there for more than a month, as reported by each village or neighborhood. Townships were randomly selected, followed by villages or neighborhoods; in each village or neighborhood, households of migrants were selected. Eight pilot cities (Beijing, Shanghai, Dalian, Wuxi, Hangzhou, Hefei, Guangzhou, and Guiyang) were chosen as sampling cities. Regarding location, Beijing, Shanghai, Hangzhou, Guangzhou, and Wuxi are located in the east and are more economically developed; Guiyang belongs to the western region; Hefei is in the central region; and Dalian is in the northeast region. Finally, 16,960 migrant households, which included 1,544 (5.5%) internal elderly migrants aged over 60, participated in the survey. In view of the bias caused by different regions, regional classification was incorporated into the model as fixed effects to disaggregate the influence of different areas.

--- Measures

Because internal elderly migrants who moved between regions within the country and were more than 60 years old had reached the statutory retirement age, their informal networks constituted their main social ties. Since the literature provides no unified definition and measurement of social contact, we chose the number of local friends and exercise time per day as the social contact variables, following relevant research (5,7,32). The respondents were asked how many friends they had in the local city; for comparative analysis, we recoded the number of friends into a categorical variable with 7 categories: 0, 1-2, 3-4, 5-6, 7-8, 9-10, and more than 10.
The average daily exercise time was the time spent on physical exercise every day, such as walking for more than 40 min, running, ball games, aerobics, and swimming. If the respondents did not exercise every day, the weekly exercise time was divided by 7 to obtain the average daily exercise time. Similarly, the average daily exercise time was converted into a categorical variable with 6 categories (0, 1-30, 31-60, 61-90, 91-120, and over 120 min) for comparison in this study. Furthermore, migration characteristics were represented by years of residence in the local city and reasons for migration (25). Demographic characteristics included age, gender, education, marital status, medical insurance, average monthly household income, and self-reported health (Table 1).

--- Utilization of Basic Public Health Services

Health management services, which include lifestyle and health status assessment, medical check-ups, auxiliary examinations, and health guidance, were clearly proposed in the basic public health services launched in 2009 in China (20). This program is considered to be of great significance in controlling chronic diseases and improving health among the elderly. The "National Basic Public Health Service Standards (2011 Edition)" retains and improves the above-mentioned content of the "Elderly Health Management Service Standards." Furthermore, the document requires community health service centers (stations) to provide health management services for the elderly once a year as one of the basic public health services provided by community health service agencies (2). Free health checkups are still the core content of health management services for the elderly, and they are a prerequisite for the elderly to receive follow-up basic public health services. Free health checkups include routine blood, blood pressure, blood lipid, fasting blood glucose, and blood uric acid tests; ECG examination; digital chest radiography; ultrasonography (of the liver, gallbladder, kidney, etc.); and screening for some malignant tumors. In recent years, the eligibility standard for elderly individuals who receive free health checkups has been 60 years old and above in most areas, without restriction by place of household registration. In general, community health service centers (stations) provide free health checkup services once a year, within a fixed period, for elderly people over 60 years old in the community. Potential participants are informed through leaflets, posters, telephone calls, and other methods within the community. Recent results showed that the percentage of migrant older adults receiving free medical checkups was 36.2% (15). Therefore, our study selected free health checkups as the measure of basic public health services. Whether elderly migrants had participated in free health checkups provided by community health service agencies within the past year was adopted as the indicator of their utilization of basic public health services. If the respondent had participated in such a checkup within the past year (i.e., answered "Yes"), we considered that basic public health service utilization had occurred, and vice versa.

--- Statistical Analyses

Demographics, migration characteristics, and social contacts were summarized by descriptive analyses.
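Before turning to the models, the recoding rules described in the Measures section can be made concrete with a minimal sketch. This is not the survey's actual processing code; the raw field names and the handling of the daily-versus-weekly reporting split are hypothetical and chosen only to mirror the rules stated above.

```python
# Minimal sketch (hypothetical field names): recoding the two social-contact
# measures into the categorical variables described in the Measures section.
import pandas as pd

def recode_social_contacts(df: pd.DataFrame) -> pd.DataFrame:
    df = df.copy()

    # Number of local friends -> 7 ordered categories.
    friend_bins = [-1, 0, 2, 4, 6, 8, 10, float("inf")]
    friend_labels = ["0", "1-2", "3-4", "5-6", "7-8", "9-10", ">10"]
    df["friends_cat"] = pd.cut(df["n_local_friends"],
                               bins=friend_bins, labels=friend_labels)

    # Average daily exercise time: respondents who do not exercise every
    # day have their weekly total divided by 7; daily exercisers report
    # minutes per day directly. Then bin into 6 ordered categories.
    daily = df["exercise_weekly_min"] / 7
    daily = daily.where(~df["exercises_daily"], df["exercise_daily_min"])
    time_bins = [-1, 0, 30, 60, 90, 120, float("inf")]
    time_labels = ["0", "1-30", "31-60", "61-90", "91-120", ">120"]
    df["exercise_cat"] = pd.cut(daily, bins=time_bins, labels=time_labels)

    return df
```

Treating both measures as ordered categories, rather than raw counts or minutes, matches the comparative analysis the authors describe and feeds directly into the logistic regressions below.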
We calculated means and standard deviations for age, average monthly household income, and number of local friends, and we calculated the frequency of each categorical variable. Chi-squared tests were used to test the association of each categorical variable with the utilization of basic public health services. Multivariate binary logistic regression was used to estimate the associations of the variables with the likelihood of using basic public health services through odds ratios. To explore the pathway from migration characteristics and social contacts to basic public health service use, three models were employed: first, the demographic variables were entered (model I); then the migration characteristic variables were added (model II); and finally, the social contact variables were added (model III). In models II and III, we adjusted for demographic variables as fixed effects, and the other independent variables entered the model through forward stepwise regression (forward: LR) based on maximum likelihood estimation with a significance level of .05.
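As a minimal sketch of this three-block modelling strategy (variable names are hypothetical and the data synthetic; the SPSS-style forward stepwise selection is not reproduced here — the three blocks are simply fit in order):

```python
# Nested logistic regressions: demographics, + migration, + social contacts.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "used_checkup": rng.integers(0, 2, n),  # 1 = used a free health checkup
    "age_group": rng.choice(["60-64", "65-74", "75+"], n),
    "region": rng.choice(["east", "central", "west"], n),
    "insurance": rng.choice(["none", "NCMS", "URBMI"], n),
    "years_resident": rng.choice(["<1", "1-5", "5-10", ">10"], n),
    "friends_cat": rng.choice(["0", "1-2", "3-4", ">10"], n),
    "exercise_cat": rng.choice(["0", "1-30", "31-60", "61-90"], n),
})

base = "used_checkup ~ C(age_group) + C(region) + C(insurance)"      # model I
m1 = smf.logit(base, data=df).fit(disp=0)
m2 = smf.logit(base + " + C(years_resident)", data=df).fit(disp=0)   # model II
m3 = smf.logit(base + " + C(years_resident) + C(friends_cat)"
               " + C(exercise_cat)", data=df).fit(disp=0)            # model III

print(np.exp(m3.params))      # odds ratios
print(np.exp(m3.conf_int()))  # 95% confidence intervals
```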
--- RESULTS --- Demographic Characteristics A total of 1,544 internal elderly migrants were included across 4 regions. A total of 55.6% of internal elderly migrants were men; the mean age was 66.34 years (SD, 5.94); and 50.2% were aged 60-64 years. On average, we observed a low level of education: 88.6% of the individuals had received high school education or less. A total of 78.2% were married, and 94.7% rated their health as healthy or basically healthy. Most of the respondents (74.7%) were in the eastern region. The 25-75% interquartile range of the average monthly household income was 5,000-12,000 RMB (or US$784-1,881). More than half (52.5%) had New Rural Cooperative Medical Care Insurance (NCMS), meaning that the majority of internal elderly migrants came from rural areas, a result of the household registration policy ("hukou") under which people with rural household registration (rural "hukou") can participate in the NCMS only in their hometown ("hukou" location). It was also found that 33% of internal elderly migrants had participated in a free medical checkup at community health service centers (stations) over the past year (Table 2). --- Migration Characteristics and Social Contacts The median migration duration was 5 years (interquartile range, IQR, 2-10 years); migrants with less than 1 year of residence accounted for 7.2% of the respondents, and 59.9% of migrants had a residence time of over 10 years. The main reasons for migrating were taking care of grandchildren (31.1%) and spending their remaining years with their children (27.2%). Regarding local friends, the average number of local friends was 8.29 (SD, 11.90), but 12.9% had no local friends. With respect to exercise time, 5.2% did not exercise, and most respondents (62.2%) exercised 60 min or less per day (Table 3). Significant differences were observed in migration characteristics and social contacts. --- Pathways of the Impacts of Migration Characteristics on the Utilization of Basic Public Health Services In the model with demographic variables (model I), the result of the Hosmer and Lemeshow test was 0.778 (>0.05), indicating that the model fit the data adequately. The classification accuracy was 62.3%, meaning the regression model correctly classified 62.3% of the observations. Age, region, and medical insurance were significant predictors of basic public health service use. After adding the migration characteristic variables (model II), the classification accuracy increased to 69.4%; age was no longer significant, while years of residence and region were significant influencing factors (p < 0.05). Finally, the social contact variables were added (model III), demographic variables were controlled, and the remaining variables were entered stepwise by forward: LR. The classification accuracy rose to 80.7%, much higher than in model I (62.3%) and model II (69.4%). In the final model, owing to variable filtering, migration reasons were removed. Furthermore, age, region, and medical insurance showed significant differences among the confounders. It is worth noting that years of residence was no longer significant compared with model II; it was replaced by the social contact variables (number of local friends, exercise time per day). Comparing the results of models II and III, we infer that, for internal elderly migrants, social contacts fully mediated the relationship between migration characteristics and their utilization of basic public health services, possibly through the social opportunities that years of residence provide (Table 4). Evidence from model III suggested significant variation in the utilization of basic public health services across regions: respondents in different regions had different probabilities of using basic public health services. Internal elderly migrants in the western region had more than 4-fold higher odds than those in the eastern region (OR = 4.661, 95% CI: 3.196-6.796), where relatively developed cities such as Beijing, Shanghai, Guangzhou, and Hangzhou are located. A possible reason is that incomes in the eastern region are higher, and individuals who earn more may have less reason to use the free clinic; the utilization of paid health services in this cohort could therefore be explored. Respondents in the older age groups were about twice as likely to use basic public health services as 60-64 year-olds (OR = 2.032, 95% CI: 1.304-3.168; OR = 2.136, 95% CI: 1.081-4.224). Similarly, respondents who had social medical insurance had a higher likelihood of utilizing basic public health services than those who did not, especially those with urban and rural residents cooperative medical insurance (OR = 2.338, 95% CI: 1.190-4.529). Beyond the demographic variables, the associations tended to be stronger for the number of local friends than for the other factors (p < 0.001). Respondents who reported having local friends had almost 3-fold (OR = 2.988, 95% CI: 1.745-5.118) to more than 4-fold (OR = 4.350, 95% CI: 2.593-7.452) higher odds of utilizing basic public health services than those without local friends. In short, the more local friends a respondent had, the more likely the respondent was to use basic public health services. Furthermore, respondents who exercised 61-90 min per day had more than three times higher odds of utilizing basic public health services (OR = 3.459, 95% CI: 1.511-7.919) than those who did not exercise as much.
In other words, internal elderly migrants who had many local friends and engaged in 61-90 min of exercise per day were more inclined to use basic public health services (Table 4 and Figure 1). --- DISCUSSION As one would expect, when the influences of demographic characteristics were excluded, migration characteristics affected the use of public health services through social factors, which is similar to some other studies. In fact, we found no direct effect of migration characteristics on the utilization of public health services: the social factors caused the migration characteristics to lose their significant influence. This study therefore supports a potential pathway in which migration characteristics influence the utilization of public health services through social factors (6)(7)(8)(11). In this study, the number of local friends and exercise time per day, rather than migration characteristics, were significantly associated with the utilization of basic public health services. We found that the more local friends an elderly migrant possessed, the more likely they were to use community health services, which might be due to the information and support their friends provide. Furthermore, we noted that exercise should be encouraged, as an exercise time between 60 and 90 min per day was more beneficial for promoting the utilization of basic public health services. The descriptive statistics revealed that most of the internal elderly migrants were aged 60-64. There were more men than women, and respondents tended to have a lower level of education, which is consistent with the characteristics of internal migrants in general (34,35). Self-reported health was also examined. The finding regarding self-reported health was similar to previous research showing that mostly healthy people are inclined to move (8). However, we did not find a correlation between health and the utilization of basic public health services, which contradicts other studies on migrants in general (9). This might be because elderly individuals who are able to move away from their hometowns are more physically fit, and such individuals may be neither inclined nor accustomed to seeking health services from health institutions. Another inconsistency with previous studies was the finding that marital status was irrelevant to the utilization of basic public health services. Other studies showed that persons who were 65 years of age or older and living with others were less likely to see a doctor than persons living alone (15). First, this might be due to the healthy migrant effect; second, companionship from family might have replaced the role of marriage (regarding reasons for migration, the results showed that 73.1% of the respondents in the study were accompanied by their families). Furthermore, we found that region was related to the utilization of basic public health services. This is in line with earlier findings of other studies that persons who lived in larger communities had a lower rate of general practitioner visits than those who lived in smaller communities (10). As mentioned above, the reason may be that individuals with higher incomes, or who were busier, in metropolitan areas had less reason to access a free clinic.
Additionally, a total of 8.9% of elderly internal migrants did not have medical insurance, which placed them at a disadvantage compared with the roughly 95% of the entire population covered by the three public insurance plans [NCMS, Urban Resident Basic Medical Insurance (URBMI), and Urban Employee Basic Medical Insurance (UEBMI)] (11). As some studies have shown, family migration has become a trend in the migration process in China in recent years (13), and the migration characteristics in our study supply evidence for this trend. In addition, 12.9% of the elderly we studied had no local friends, probably because they were far from their hometowns and old friendship circles, and their social circles of local friends needed to be rebuilt. In terms of demographic characteristics, the significant predictors were age and medical insurance. We did not find that gender, marital status, education, economic income, or self-reported health had significant impacts on the utilization of basic public health services, in contrast to what other studies discovered (18,20,36). We believe this discrepancy may be attributed to our target research group and the type of public health service. First, our subjects were internal elderly migrants who tended to live with their families, because most elderly migrants moved with their children to take care of their grandchildren; the impacts on the children might therefore be greater than the impacts on the elderly themselves. Second, we focused on free medical check-up services in the community, so it is reasonable that economic income was not significant in this case. This study has several limitations. First, our data came from a secondary source, which limited the variables we could employ, especially the social factor variables: the study contained only two social contact variables. We believe it is necessary to further explore the influence of other social factors, such as social support and social integration, which will be our next research direction. The prevalence of chronic diseases and even acute diseases was also not considered in the study. Second, this was a cross-sectional study, so causality could not be inferred. Last, the sample size was small; furthermore, the number of internal elderly migrants in the different sampling areas varied widely, which affected the representativeness of the study. --- CONCLUSIONS To conclude, this study provided a more in-depth examination of the relationships between the studied variables and confirmed the mediating effect of social factors between migration characteristics and the utilization of basic public health services. Because only one-third of the respondents used basic public health services, other obstacles do exist (18,24). The findings support the need to increase the opportunities for social contact between local elderly individuals and internal elderly migrants (31,37). --- DATA AVAILABILITY STATEMENT The datasets presented in this article are not readily available because the data used in this paper were provided by the National Health Commission of the People's Republic of China and we have signed a legally binding agreement with the Commission that we will not share any original data with any third parties. Requests to access the datasets should be directed to [email protected]. --- AUTHOR CONTRIBUTIONS YL: conceived, designed, and performed the study and wrote the paper. YL, TW, and TZ: analyzed the data.
All authors contributed to the article and approved the submitted version. --- Conflict of Interest: The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Today, the influence of social media on different aspects of our lives is increasing, and many scholars from various disciplines regard social media networks as an ongoing revolution. In social media networks, many bonds and connections can be established, whether as direct or indirect ties. In fact, social networks are used not only by people but also by companies. People usually create their own profiles and join communities to discuss common issues they are interested in. Companies, on the other hand, can create a virtual presence on social media networks to understand their customers and gather richer information about them. With all of their benefits and advantages, however, social media networks should not always be seen as a safe place for communicating, sharing information and ideas, and establishing virtual communities: this information and these ideas may carry hate speech, which must be detected to prevent it from escalating into violence. Web content mining can be used to handle this issue; it is gaining increasing attention because of its importance for many businesses and institutions. Sentiment Analysis (SA) is an important sub-area of web content mining. The purpose of SA is to determine the overall sentiment attitude of a writer towards a specific entity and to classify these opinions automatically. There are two main approaches to building sentiment analysis systems: the machine learning approach and the lexicon-based approach. This research presents the design and implementation of violence detection over social media using the machine learning approach. Our system works on the Jordanian Arabic dialect instead of Modern Standard Arabic (MSA). The data were collected from two popular social media websites (Facebook, Twitter), and native speakers annotated the data. Moreover, different preprocessing techniques were used to show their effect on model accuracy. An Arabic lexicon was used to generate feature vectors and separate them into feature sets. Three well-known machine learning algorithms were applied: Support Vector Machine (SVM), Naive Bayes (NB), and k-Nearest Neighbors (KNN). Information Science Research Institute (ISRI) stemming and stop-word removal were used during preprocessing to extract the features. Several features were extracted, and the experiments reveal that the SVM classifier with unigrams and lexicon-based features achieves the highest accuracy for detecting violence.
Introduction Nowadays, with the internet embedded in everyday life, people have begun creating communities online. Social media networks have become an essential part of the Internet for billions of people all over the world. These networks are equally important for people and organizations across the globe. Information and photo sharing, the spread of ideas, and the exchange of experience and recommendations are attractive properties of social networks. They enable clients to express their opinions, show loyalty, and start conversations with their favorite companies. On the other hand, companies use social media networks to collect information about markets, clients, and competitors in an easy way; furthermore, companies can communicate with clients to enhance their image and meet their expectations. Based on this view, it is critical to build trust in using social media for both people and companies. In fact, building trust in social websites plays a critical role in enhancing the quality of social media networks and implementing security for them. One of the key aspects when dealing with social networks is trying to minimize the effect of hate speech, which may lead to violence, terrorism, and security problems for political systems [1]. One of the most popular methods that help in detecting certain types of semantics in data is Sentiment Analysis (SA) [2]. SA involves text mining and the computational treatment of opinions, sentiments, and the subjectivity of text [3]. SA is a field of study that measures people's opinions and sentiments through natural language processing (NLP), computational linguistics, and text analysis, which are used to extract and analyze subjective information from the web, mostly social media and similar sources [4]. In this study, we are concerned with the activity of Jordanian people on the major social networks. According to the Arab social media report, as of March 2017 there were 156 million Facebook users in the Arab world, of whom 5 million were in Jordan, and 11.1 million Twitter users, of whom 200 thousand were in Jordan [5]. On Facebook and Twitter, people share their likes, dislikes, beliefs, and political and sports opinions, and among these opinions there is a significant percentage of violent and abusive statements. For sites such as Facebook and Twitter, actively combating hate speech that leads to violence has become a priority [6]; some of these sites, such as Facebook, have added options to report violent content in the feedback options for any post. The importance of detecting hate speech is clear from the strong relation between hate speech and actual violence: detecting hate speech early could enable outreach programs that attempt to prevent an escalation from speech to action. In this research, we built a model for violence detection using people's tweets and posts written in the Jordanian Arabic dialect on popular social media sites (Twitter, Facebook). Despite the increasingly large number of Arabic users on the Internet [5] — Arabic is considered among the top six major languages of the world, the number of native speakers exceeds 200 million, and it is the official language in over twenty countries [7] — strong corpora for exploiting the language in different applications are still lacking. There are three different forms of Arabic [2]: Modern Standard Arabic (MSA), Dialectal Arabic (DA), and Classical Arabic. This study deals with DA, but there are still many challenges when working with the Arabic language, as listed below:
1. Colloquial Arabic parsers: Many people use their dialectal language instead of MSA on social media. Parsing MSA is already a complex task towards which many efforts have been directed; however, colloquial Arabic differs from MSA phonologically and morphologically — e.g., "walad" ("a boy"), "waldan" ("two boys"), and "awlad" ("more than two boys") [7] — as well as lexically, and it does not have a standard orthography, which complicates the task of building morphological analyzers and part-of-speech taggers [8].
2. Pronunciation: Some pronunciations do not exist in English, such as "Gh" as in "Gharb" [7].
3. Diacritics: Words may differ only in their diacritics, such as "مُدَرِّسَة" ("teacher") and "مَدْرَسَة" ("school") [9].
4. Scarcity of sentiment lexicons: Sentiment lexicons contain opinion words with their polarity and are an important part of any sentiment analysis. There are currently few publicly available colloquial Arabic sentiment lexicons, so building a colloquial Arabic polarity lexicon is still an open research area.
5. Named entity recognition: Named entity recognition becomes an important part of sentiment analysis when identifying the polarity of an opinion. Person name recognition becomes a requirement even for the task of determining semantic orientation, as with names such as "نبيل" and "سعيد" (the latter is also the adjective for "happy").
6. Phrases and idioms: Phrases and idioms, old wisdom, and sayings are very commonly used by Arabic speakers on social media to express their opinions and feelings in a sarcastic way; consequently, we exclude them because they are hard to deal with.
7. Negation: Negation can be an important concern in opinion and sentiment analysis; for example, "أنا أحب هذا الكتاب" ("I like this book") versus "أنا لا أحب هذا الكتاب" ("I do not like this book"). There is a list of negation terms in the Arabic language that can change the sentiment polarity of terms from negative to positive and vice versa.
8. Emoticons: Arabic smiley and sad emoticons are often mistakenly interchanged, so many tweets contain words and emoticons that are contradictory in sentiment, mainly due to the mixing of text direction while typing emoticons.
As mentioned above, DA is the target of our study, so we collected the dataset ourselves and had it annotated as (Violence, Normal) by native Jordanian Arabic speakers. We built our own dataset consisting of comments extracted from Facebook posts and Twitter tweets in the Jordanian dialect. The collected dataset was then handled by applying a set of pre-processing techniques to enhance the generalization performance of violence detection, and this processed dataset, together with the lexicon, was used as the source of the extracted features. To detect violence using the extracted features, three well-known classification algorithms are used (Naïve Bayes, Support Vector Machines, and K-Nearest Neighbour). Many experiments were conducted to study the effect of applying various preprocessing techniques and different classifiers; the results show that the SVM classifier performs better than KNN and NB on the collected dataset. The remainder of this paper is organized as follows. Section 2 presents an overview of related work. Section 3 discusses the proposed method, covering data collection and pre-processing, building the lexicon, feature extraction, and the machine learning approach. Section 4 discusses the experimental results. Finally, section 5 presents the conclusion and future work.
--- Related Work Millions of users share opinions on different aspects of life every day, making social media websites such as Facebook and Twitter rich sources of data for opinion mining and sentiment analysis [24]. One application of this mining and analysis process is the detection of violent and hate speech. Different approaches have been proposed in the literature to tackle the problem of sentiment analysis. Hammer (2014) [10] presented a method that uses machine learning to detect threats of violence in a dataset of YouTube comments written in English. The method applies logistic LASSO regression analysis on bigrams of important words to classify sentences as violent or not. The dataset contains 24,840 sentences from YouTube that were manually annotated as violent threats or not, and the features are bigrams of two of these important words observed in the same sentence. The paper did not properly describe how the important words were selected, reporting only that words correlated with the response (violent/non-violent) were chosen; it appears likely that the words were arrived at using LASSO regression. The reported accuracy was 0.9466. A shortcoming of this study was the use of logistic LASSO regression analysis, which has the limitation of performing an implicit feature selection while estimating the model. In another work, Djuric et al. [6] proposed a method to detect hate speech comments in English in two steps. First, they used paragraph2vec to convert a generic block of text into a vector using the continuous bag of words (CBOW) model, which predicts a word given its context. Second, they used a logistic regression classifier. The proposed model was compared with TF and TF-IDF using the area under the curve (AUC) metric, and the results show that paragraph2vec outperforms both TF and TF-IDF. Yadav and Manwatkar (2015) [11] developed a social media network prototype that aims at automatically filtering offensive content in social media networks before it is shared. They applied the Aho-Corasick string pattern matching algorithm, whose idea is to match patterns (i.e., offensive keywords) in the input text against a database of offensive keywords collected from different datasets. Once a word is detected, it is simply replaced with special characters to prevent it from being shared; breadth-first search is used to find the offensive keywords. The prototype shows good results even with slang language. Gitari et al. (2015) [12] created a classifier that uses sentiment analysis techniques, in particular a lexicon-based approach, to automatically detect hate speech in online forums, blogs, and news review comments using semantic and subjectivity features. They collected blogs from Raymond Franklin's list of sites considered generally offensive, which they refer to as the first corpus; the second corpus consists largely of paragraphs related to the Israeli-Palestinian conflict. They concentrated on classifying hate speech aimed at three key target groups: race, nationality, and religion. The achieved results show that precision, recall, and F-score are best on the first corpus when using semantic orientation, hate verbs, and theme-based feature sets on subjective sentences only, while the results are much lower when the same feature sets are used without restricting the analysis to subjective sentences.
In our opinion, they could have increased the precision and recall scores had they applied machine learning. Waseem and Hovy (2016) [13] presented a method for detecting hate speech in English-language tweets, divided into three parts. First, they collected 16K tweets containing racist and sexist hate speech and proposed a list of criteria to help annotators reliably identify hate speech. Second, they examined the impact of different extra-linguistic features coupled with character n-grams for hate speech detection, using character rather than word n-grams because character n-grams are far less sparse. Third, they used a grid search to select the most suitable features and then evaluated the selected features using a logistic regression classifier and 10-fold cross-validation. The obtained results show that character n-grams of length up to 4, with gender as an additional feature, achieve an F1-score of 73.93, the best result compared with location features and word n-grams. The limitation of this work centres on the extracted location feature, which needs to consider more than just the tags Twitter provides and which affected the final result. Alhelbawy et al. (2016) [14] presented a new corpus of violence-related tweets; the dataset was manually labeled for seven classes of violence (human rights abuse, political opinion, accident, crime, conflict, crisis, other) and annotated using the popular crowdsourcing platform CrowdFlower. The work targeted Arabic tweets related to violence. Filters were applied to remove redundant tweets, emotional tweets, short tweets, and sexual adverts; after that, a confidence score was used to evaluate different subsets of tweets. The obtained results show that crowd classification is reliable overall and can be used for further research on violence in social media. Mubarak et al. (2017) [15] presented an automated method to create and expand a list of obscene words that helps detect abusive language on Arabic social media, and they classified Twitter users according to whether or not they use abusive words. Using the Twitter API with the language filter set to Arabic, they collected tweets matching patterns that are usually used in offensive communication; they then manually judged whether the words appearing in the collected tweets were obscene or not. Magu et al. (2017) [1] introduced a mechanism to identify hate-coded content on social media written in English; for example, users have used the words "skype" and "google" to represent "Jew" and "black", respectively, and such coded hate speech can lead to violence over social media. They extracted 1,999 tweets, of which 1,048 were labeled hateful and the rest non-hateful. They used this dataset to calculate the Pearson correlation coefficient between the appearance of every term in the dataset and the class label, which helped them extract the terms most correlated with hateful tweets; they then ran the Apriori algorithm to extract frequent item sets. Lastly, to train a classifier, they applied a bag-of-words model representing the most popular words in the training corpora with a Boolean feature vector, collected 23,401 tweets, and built a model using a support vector machine with a linear kernel, evaluated with 10-fold cross-validation, to separate the hateful tweets. They achieved an accuracy of 0.794 with a precision of 0.794 and a recall of 0.795.
Abdelfatah et al. (2017) [16] proposed a new framework aimed at separating violent and non-violent Arabic tweets on Twitter by using a sparse Gaussian process latent variable model (SGPLVM) followed by k-means. The experiment started by collecting 16,234 Arabic tweets and preprocessing them by removing stop words and web links; the tweets were then annotated manually by at least five different annotators, and only tweets with a confidence score above 0.7 were used. Two sets of experiments were carried out: the first reduced dimensionality with PCA and then applied k-means, while the second used SGPLVM to reduce dimensionality before applying k-means, and the two results were compared. The obtained results show that SGPLVM with k-means performs better than PCA with k-means, and that using unsupervised techniques to detect violent tweets in a low-dimensional representation is better than applying clustering to the original data. --- Proposed Method This section describes in detail the methodology adopted in this research. The main processes are: data collection, data preprocessing, Arabic lexicon construction, feature extraction, data classification, and model evaluation. Figure 1 illustrates the various processing steps of the proposed method. --- Data collection Data collection is the process of gathering large amounts of data in an established, systematic fashion that enables one to answer research questions, test hypotheses, and evaluate results. In order to test our model, we need datasets that contain data from an Arabic audience. The few datasets that contain such information are not useful for our research, since they were collected in Arabic dialects different from our target dialect, Jordanian Arabic. To overcome this, we built a dataset from scratch containing Jordanian-dialect opinions about subjects related to politics and sports. The created dataset contains comments collected at various times from two popular social networks, Facebook and Twitter [25]; these are political and sports comments that might lead to violence. Table 1 shows examples of the collected data, divided into violence and normal data. --- Data collection methods In this study, two methods were used to collect data from Facebook and Twitter: automatic and manual collection. In the first method, we used scripts to collect data automatically from Twitter. Approximately 10,330 tweets were collected; after excluding redundant and advertisement tweets, 385 useful tweets remained, of which 335 were violent and the rest normal. The automatic collection of data was not enough, so we also used manual collection. The main reasons for adopting this method, despite the time and effort needed, are:
• Most tweets returned by automatic collection are violent, and it is hard to find keywords that do not return violent content.
• There is no automatic data collection for Facebook.
• A huge number of returned tweets are advertisements, links, and comments on pictures.
Three persons familiar with the Jordanian dialect, politics, and sports performed the data collection. They reviewed around 60,000 tweets and comments, and the final number of collected tweets and comments is 2,057. Table 2 shows the source, domain, and number of followers of the groups from which data were collected. The next step after data collection is data annotation: each tweet or comment should be classified as violence or normal.
Table 3 summarizes the results of data annotation. --- Data preprocessing In the pre-processing stage, various Natural Language Processing (NLP) techniques are applied [9]. Several preprocessing strategies can be applied in sentiment analysis that affect accuracy when applied to Arabic text. Pre-processing is performed in several stages: tokenization, normalization, stop-word removal, and stemming. In this research, we applied different pre-processing techniques. At the beginning, tokenization is used to break the text up into tokens. Next, normalization is applied to convert all variant forms of a word to a common form; it is the process of transforming text into a single canonical form so that input is guaranteed to be consistent before operations are performed on it [17]. In this study, the normalizer performs this task according to the rules listed below:
• Replacing: any occurrence of "أ" is replaced by "ا", and "ة" is replaced by "ه".
• Removing the "tatweel" stretching character "ـ".
• Removing the diacritics: "يَشْرَبُ الماءَ" becomes "يشرب الماء".
• Removing punctuation and special characters.
• Removing English letters and numbers.
• Removing hyperlinks: some of the collected data contain links (e.g., "continue reading" links in news posts); in this situation we deleted the link and kept the news text.
• Removing duplicated letters, such as "مبرووووووك", which should be "مبروك".
The normalized text is then processed to remove stop words. These words are removed because they do not add any new information to the text. A list of stop words such as "هو", "انت", and "كذلك" was prepared and applied to the text. The last process applied is stemming, in which all affixes of the words are removed; for instance, the word "المهاجرون" is stemmed to "مهاجر". A light stemming Python library was used for this purpose; it provides a configurable stemmer and segmenter for Arabic text (Zerrouki, 2012). --- Normalizing the dataset In this phase, after the features are extracted, the dataset is normalized. Normalization is generally performed during data preprocessing; since the ranges of the dataset attributes vary widely, it helps to fit them into a specific range. There are different normalization types, such as Z-score and Min-Max normalization [18]. In this research, Min-Max normalization is used to scale the data into the range [0, 1], because the dataset features have different ranges; for example, the range of the feature "total frequency of violence words" is [0, 6]. The following equation is the Min-Max formula for scaling data into the range [0, 1]:

v′ = (v − min_A) / (max_A − min_A)   (1)

where v refers to the original value and v′ refers to the new value.
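As a minimal sketch of the normalization rules and the Min-Max scaling described above (the regex patterns are simplified approximations, not the authors' exact implementation):

```python
# Toy Arabic text normalizer plus Eq. (1) Min-Max scaling.
import re

AR_DIACRITICS = re.compile(r"[\u0610-\u061A\u064B-\u065F\u0670]")

def normalize(text: str) -> str:
    text = text.replace("أ", "ا").replace("ة", "ه")  # unify letter variants
    text = text.replace("\u0640", "")                # remove tatweel
    text = AR_DIACRITICS.sub("", text)               # strip diacritics
    text = re.sub(r"https?://\S+", "", text)         # drop hyperlinks
    text = re.sub(r"[A-Za-z0-9]", "", text)          # drop English letters/digits
    text = re.sub(r"(.)\1{2,}", r"\1", text)         # collapse repeated letters
    return re.sub(r"[^\w\s]", "", text).strip()      # punctuation/special chars

def min_max(values):
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1                            # guard against constant features
    return [(v - lo) / span for v in values]         # Eq. (1): scale to [0, 1]

print(normalize("مبرووووووك http://t.co/x 123"))     # -> "مبروك"
print(min_max([0, 2, 3, 6]))                         # -> [0.0, 0.33..., 0.5, 1.0]
```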
--- Arabic lexicon Much research on sentiment analysis uses sentiment lexicons, also known as senti-lexicons. In this research we relied on the Arabic senti-lexicon (ASL) proposed by [19], in which each word is assigned a polarity (positive, negative). The sentiment lexicon is the most crucial resource for feature extraction, helping to make the model more accurate. ASL includes 13,760 positive and negative words; 3,880 synset terms, which are different words with the same meaning, collected manually; and dialect synsets (D-synsets), which are words used in different Arabic dialects with the same meaning. Based on ASL, we built our own lexicon to detect violence. Firstly, ASL was modified to suit violence detection: "negative" was changed to "normal" and "positive" to "violence". Secondly, 304 violence words and 225 normal words were added manually to ASL. Finally, we deleted the D-synsets, because we target the Jordanian dialect, and we deleted the score field. In addition, we manually added part-of-speech (POS) tags, synsets, and inflections. Our new lexicon includes a list of 10,443 violence and normal words and 2,450 violence and normal sentiment entries. An example from our new lexicon is shown in Table 5. --- Features extraction Feature extraction is an important task in sentiment analysis and, more generally, in text categorization. Since text is unstructured, we need to convert the original documents into feature vectors; this is the main step in any supervised learning approach to sentiment analysis, as selecting the right features determines the overall performance of sentiment classification. Consequently, this work studies the effect of applying various pre-processing techniques on the extracted features and their impact on overall performance. In this research, we define a group of features that can be grouped into four main feature groups (a sketch of how some of these might be assembled follows below):
1. Features based on sentiment words (violence, normal), presence and frequency. In this group, we defined six features: the presence of violence words, the total frequency of violence words, the presence of normal words, the total frequency of normal words, the ratio of the presence of violence to normal words, and the ratio of the total frequency of violence to normal words.
2. Bag-of-Words (BOW) [20]: According to the BOW model, a document is represented as a vector of words in Euclidean space where each word is independent of the others [21]. We used n-gram features with their Term Frequency-Inverse Document Frequency (TF-IDF) weights. N-grams, widely used in text mining and natural language processing tasks, are simply all combinations of adjacent words or letters of length N found in the text. An n-gram with N = 1 is referred to as a "unigram", N = 2 a "bigram", and N = 3 a "trigram". In this research we used unigrams, bigrams, and trigrams.
3. Features based on POS: POS tagging labels each word of a text with the part of speech it belongs to: noun, adjective, verb, etc. The goal is to extract patterns in the text based on the frequency distributions of these parts of speech. There is no consensus on whether POS tagging improves sentiment classification results: Barbosa and Feng (2010) [22] reported positive results using POS tagging, while Kouloumpis et al. (2011) [23] reported a decrease in performance, so it depends on the domain. We defined nine features: the total numbers of violence adjectives, violence verbs, violence nouns, normal adjectives, normal verbs, and normal nouns, and the ratios of the total frequencies of violence adjectives, violence verbs, and violence nouns to normal words.
4. Other features: We defined two features: the presence of a negation word in the sentence and the presence of violence and normal emoticons in the sentence.
Table 6 summarizes the features extracted from each tweet or post.
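As a minimal sketch of how the first feature group and the n-gram TF-IDF vectors might be assembled (the tiny lexicon and example sentences are illustrative stand-ins, not entries from the actual lexicon):

```python
# Lexicon-based presence/frequency features plus unigram+bigram TF-IDF.
from sklearn.feature_extraction.text import TfidfVectorizer

violence_words = {"اضرب", "اقتل"}   # illustrative lexicon entries
normal_words = {"مبروك", "شكرا"}

def lexicon_features(tokens):
    v = sum(t in violence_words for t in tokens)  # total violence frequency
    n = sum(t in normal_words for t in tokens)    # total normal frequency
    return [int(v > 0), v, int(n > 0), n,
            int(v > 0) / max(int(n > 0), 1),      # presence ratio
            v / max(n, 1)]                        # frequency ratio

docs = ["مبروك يا صديقي", "سوف اضرب الجميع"]
tfidf = TfidfVectorizer(ngram_range=(1, 2))       # unigrams and bigrams
X_ngram = tfidf.fit_transform(docs)
X_lexicon = [lexicon_features(d.split()) for d in docs]
print(X_ngram.shape, X_lexicon)
```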
Another output of the models to measure is the AUC (area under the curve). To calculate this measure using the Python sklearn metrics, the model must first be trained and tested on the test data, after which the evaluation methods can be applied to calculate the final results (presented in the experimental results section). The AUC and accuracy metrics are commonly used in the evaluation of classification problems. After the model building and evaluation phases, the main results of this research have to be assessed by comparing the results from the different models. There are two ways to make this comparison: the first, traditional, way is to compare manually using the values from the described evaluation metrics; the second, more scientific, approach is to compare the models' overall accuracy using statistical significance tests and then apply the AUC once confident in the differences between models. --- Experiments In this research, three classifier methods were used to detect violence in Arabic text using sentiment analysis: SVM, NB, and KNN. These methods were chosen for their effectiveness, simplicity, and accuracy. --- Parameters of classification algorithms Generally, many machine learning algorithms require a set of parameters to be assigned. Table 7 lists the algorithms used, their parameters, and the selected values. --- Table 7. Parameters of classification algorithms --- Model evaluation To evaluate the quality and usefulness of the model, several experiments were conducted on our dataset. All algorithms were evaluated using 10-fold cross-validation. The measures used to evaluate the models are based on the confusion matrix depicted in Table 8. In most sentiment analysis problems, three measures are used to evaluate the model: accuracy, precision, and recall; in addition, we used the F-measure, which considers both precision and recall. These measures are defined as follows: --- Table 8. Confusion Matrix
• Accuracy: the proportion of the total number of predictions that were correctly classified.
Accuracy = (TP + TN) / (TP + TN + FP + FN)   (3)
• Recall (R): the proportion of correct positive predictions out of the total number of actual positives.
Recall = TP / (TP + FN)   (4)
• Precision (P): the proportion of correct positive predictions out of the total number of positive predictions.
Precision = TP / (TP + FP)   (5)
• F-Measure: the weighted harmonic mean of precision and recall.
F-Measure = (2 * R * P) / (R + P)   (6)
In order to evaluate the robustness of the classifiers, the standard deviation (Stdv) of the 10 folds is calculated and reported; classifiers with a lower Stdv show more robustness. The formula of Stdv is listed below:

Stdv = sqrt( (1 / (n − 1)) * Σ (X_i − x̄)² )   (7)

where n is the number of data points, x̄ is the mean of the X_i, and X_i is each of the values of the data.
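As a minimal, self-contained sketch of this evaluation set-up (synthetic data stands in for the real feature matrix, and classifier parameters are left at scikit-learn defaults rather than the values in Table 7):

```python
# SVM, NB and KNN compared with 10-fold cross-validation.
import numpy as np
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.random((200, 6))          # e.g. the six lexicon-based features
y = rng.integers(0, 2, 200)       # 1 = violence, 0 = normal

for name, clf in [("SVM", SVC()), ("NB", GaussianNB()),
                  ("KNN", KNeighborsClassifier())]:
    scores = cross_val_score(clf, X, y, cv=10, scoring="f1")
    # Mean F-measure and its standard deviation across the 10 folds (Eq. 7).
    print(f"{name}: F-measure = {scores.mean():.3f} +/- {scores.std():.3f}")
```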
--- Experimental results This section provides the detailed experimental results. The purpose of these experiments is to evaluate violence detection over social media by developing language resources for Arabic sentiment analysis. It lists the results for features extracted from sentences without the Arabic lexicon, such as n-grams, and for features extracted using the Arabic lexicon, such as the number of violence words in a sentence, followed by a comparison of the results for the various preprocessing techniques used with the extracted features. All experiments were performed using Python 2.7 and Anaconda Spyder 3.2.3, on a computer with an Intel(R) Core(TM) i7-6700HQ CPU @ 2.60 GHz running 64-bit Windows 10 with 32 GB of RAM. --- Results based on the n-gram feature with various preprocessing files This section presents the comparison between the different types of n-gram features (unigram, bigram, and trigram) across various preprocessing techniques using the three classifiers SVM, NB, and KNN. Four measures are used to evaluate the n-gram feature: recall, precision, accuracy, and F-measure. Table 9 shows the results; the first column lists the file resulting from applying the following preprocessing techniques:
1. Normalization file: data to which normalization was applied, without stemming or stop-word removal.
2. Stop words file: data to which normalization and stop-word removal were applied.
3. ISRI stemming file: data to which normalization and ISRI stemming were applied.
4. Light stemming file: data to which normalization and light stemming were applied.
5. ISRI stemming and stop words file: data to which normalization, stop-word removal, and ISRI stemming were applied.
6. Light stemming and stop words file: data to which normalization, stop-word removal, and light stemming were applied.
We can notice that NB performed better than the others on the normalization file with bigrams, the stop words file with unigrams, and the light stemming file with bigrams. On the other hand, SVM surpassed the others on the ISRI stemming file with the unigram feature and on the ISRI stemming with stop words file with the bigram feature. Overall, the best results were recorded by SVM on the ISRI stemming with stop words file with the bigram feature; the worst result came from the KNN classifier with the trigram feature on the ISRI stemming and stop words file. --- Results based on sentiment words of presence and frequency features This section presents the comparison between the classifiers when using the sentiment word presence and frequency features (presence of violence words, total frequency of violence words, presence of normal words, total frequency of normal words, ratio of the presence of violence to normal words, and ratio of the total frequency of violence to normal words) extracted from the Arabic lexicon. In this case, there is no need to compare preprocessing files. Table 10 reveals that the best results were recorded by SVM, followed by KNN, with the worst performance reported by NB; moreover, the classifiers show stable models, and the differences are insignificant. --- Results based on Part of Speech (POS) features This section presents the comparison between the classifiers when using the part-of-speech features (the total numbers of violence adjectives, violence verbs, violence nouns, normal adjectives, normal verbs, and normal nouns, and the ratios of the total frequencies of violence adjectives, violence verbs, and violence nouns to normal words) extracted from the Arabic lexicon; again, there is no need to compare preprocessing files. Table 11 lists the results based on POS features: NB, followed by SVM and finally KNN, records the best results in terms of accuracy, precision, recall, and F-measure.
--- Results based on other features This section presents the comparison between the classifiers when using the other features (the presence of a negation word in the sentence, the presence of violence emoticons, and the presence of normal emoticons in the sentence). Table 12 depicts the results based on these features; it shows that SVM performs better than NB and KNN and produces more stable models. --- Results based on all features without n-gram This section presents the comparison between the classifiers when using all features except the n-gram feature. Table 13 presents these results; as shown, the best results were obtained with the NB classifier. --- Results based on all features with the n-gram feature on various preprocessing files This section presents the comparison between the classifiers when using all features together with the n-gram feature on the various preprocessing files, as presented in Table 14. Table 14 shows that NB performs better than SVM and KNN on the normalization file regardless of the n-gram selected, giving the same results with all features whether the n-gram is unigram, bigram, or trigram. In the case of the stop words file, NB also recorded better results with all features across all n-gram variations. On the other hand, SVM performs better than NB and KNN on the other files (ISRI stemming, ISRI stemming with stop words, light stemming, and light stemming with stop words). We can notice that the maximum accuracy was scored by the SVM classifier on the ISRI stemming with stop words file with unigrams and all features. To sum up, in the experiment on the n-gram feature with various preprocessing files, the optimized SVM outperformed the other classification algorithms on the ISRI stemming and stop words file (out of the six files) when used with the bigram feature. The experiment using the sentiment word presence and frequency features extracted from the lexicon shows that SVM outperformed the other classification algorithms, while NB outperformed the other classifiers when using the POS features extracted from the lexicon. NB also outperformed the other methods in the experiment using all features extracted from the lexicon with n-grams. Furthermore, the experiment using all lexicon features with the n-gram feature on the various preprocessing files shows that SVM outperformed the other methods on the ISRI stemming and stop words file when used with the unigram feature; SVM again outperformed the other methods when applied to the ISRI stemming with stop words file with unigrams and all features. --- Conclusion and Future Work This study uses sentiment analysis to detect violence over social media in the Arabic language. A set of steps was applied systematically to address this problem. First, data were collected from two popular social media websites (Facebook and Twitter) and annotated as (Violence or Normal). After collecting the data, a set of preprocessing techniques was applied to normalize the data. Then, ASL was adapted and modified to serve the objective of this study, and a feature extraction process was conducted to characterize the sentences. The sentences were classified using SVM, NB, and KNN. Finally, the models were evaluated using recall, precision, accuracy, F-measure, and standard deviation. In light of this methodology, the research questions have been addressed in detail, and it has been found that sentiment analysis can effectively detect violence in Arabic (the Jordanian dialect). It has also been concluded that using the Arabic lexicon with modifications helps extract features for detecting violence; what distinguishes this lexicon is its ability to determine whether sentences are normal or violent as well as the POS of words. Moreover, this research shows that combining features extracted from the lexicon with features extracted from the sentence, such as n-grams, generates more accurate models. In addition, the study discusses the effect of various preprocessing techniques on the performance of the generated models. The experiments showed that using the SVM classifier on the ISRI stemming with stop words file with unigrams and all features gave the best results compared with the other experiments. In the future, the method can be extended to cover other Arabic dialects, and the dataset can be expanded.
--- Authors Monther Khalafat is a Jordanian computer scientist and the main contributor to this work; he was a student at the University of Jordan, King Abdullah II School for Information Technology, Department of Information Technology, Amman (Jordan). Email: [email protected] Dr. Ja'far Alqatawna is a Jordanian doctor of Business Security at the University of Jordan, King Abdullah II School for Information Technology, Department of Information Technology, Amman (Jordan); currently, he is on sabbatical leave at the Higher Colleges of Technology, Faculty of Computer Information Systems, Dubai, UAE. E-mails: [email protected], [email protected] Prof. Rizik Al-Sayyed is a Jordanian professor of Networks, Databases, and Data Science at the University of Jordan, King Abdullah II School for Information Technology, Department of Information Technology, Amman (Jordan). E-mail: [email protected] Dr. Mohammad Eshtay is a Jordanian doctor of Machine Learning at LTUC, from the University of Jordan, King Abdullah II School of Information Technology, Amman, Jordan. E-mail: [email protected] Dr. Thaeer Kobbaey is a Jordanian doctor of Data Mining and Big Data at the Higher Colleges of Technology, Dubai (UAE). E-mail: [email protected]
Introduction There is growing evidence that neighbourhood green space is beneficial for mental health (Alcock et al. 2014; Di Nardo et al. 2012; Hartig et al. 2014; Van den Berg et al. 2015). The neighbourhood social environment has been suggested to be one of the underlying mechanisms. The presence of green elements, such as trees or vegetation, increases the attractiveness of common spaces in the neighbourhood, thereby potentially increasing their use (Coley et al. 1997; Kuo et al. 1998) and facilitating informal social contacts between community members (Hartig et al. 2014; Kuo et al. 1998). Social contacts are health promoting, for instance through the social support they can offer (Cohen 2004). By facilitating social contacts, neighbourhood green can contribute to the development of neighbourhood social cohesion, i.e. the connectedness and solidarity among community members, which has proven to benefit people's health (Di Nardo et al. 2012; Kawachi and Berkman 2000). Furthermore, having green areas in the neighbourhood increases the attractiveness of the living environment, thereby enhancing people's attachment to the physical neighbourhood environment (Di Nardo et al. 2012). Place attachment helps to create group identity, which translates into a general sense of well-being (Brown et al. 2003) and has been associated with reduced loneliness and better mental health (Hagerty and Williams 1999; Pretty et al. 1994). The neighbourhood social environment as a mechanism for the impact of neighbourhood green space on mental health has received some research attention in the past years. Some studies found that social cohesion mediated the relation between green space and mental health (de Vries et al. 2013; Sugiyama et al. 2008), while others did not (Triguero-Mas et al. 2015). Lack of social support and feelings of loneliness were reported to mediate the relationship between green space and mental health (Maas et al. 2009), but social contacts were not (Maas et al. 2009; Sugiyama et al. 2008). Inconsistencies between studies might be explained by different operationalisations of the social environment (e.g. social cohesion, individual social contacts, loneliness). It is also possible that the relationship between neighbourhood green, the social environment, and mental health differs across cultures (Hartig et al. 2014). For instance, in more individually oriented cultures, green space might be more important for the facilitation of social interactions than in more collectivist cultures where communal life is already more common. In the current study, we investigate the relationship between neighbourhood green space, the neighbourhood social environment, and mental health in four European cities to examine whether the social environment might be one of the mechanisms between neighbourhood green and mental health. The following research questions are addressed: Is neighbourhood green space related to the neighbourhood social environment in four European cities? Are the neighbourhood social environment and neighbourhood green space related to mental health in these cities? This study uses a range of social environment measures (social cohesion, neighbourhood attachment, and individual social contacts) to examine whether the associations depend on the operationalisation of the social environment. Our green measures comprise both the amount and the quality of neighbourhood green, to accommodate the increasing evidence stressing the importance of the quality of green space and its impact on health (Francis et al. 2012; Hartig et al. 2014; Van Dillen et al. 2012).
Furthermore, objective audit and subjective green measures are used, as they may capture different aspects of greenness, i.e. more emotional aspects with subjective measures and more tangible aspects with objective measures (Francis et al. 2012). These aspects may relate to the social environment characteristics and mental health differently (Leslie et al. 2010).
--- Methods
--- Study background
The EU-funded PHENOTYPE study examined the health effects of the natural environment and its underlying mechanisms. A cross-sectional survey was carried out from May to October 2013 in four cities across Europe: Stoke-on-Trent (United Kingdom), Doetinchem (Netherlands), Barcelona (Spain), and Kaunas (Lithuania) (Nieuwenhuijsen et al. 2014).
--- Study population and data collection
In each city, 30 neighbourhoods varying in neighbourhood green space and socioeconomic status (SES) were selected (see Table 1 for a description of the neighbourhoods). Survey data were collected using face-to-face interviews, with the exception of Lithuania, where data were collected with a postal questionnaire. Around 1000 adults aged 18-75 years were interviewed per city (n = 3947, overall response rate 20%) across 124 neighbourhoods. For further details on the data collection see Online Resource 1. We selected respondents with complete data for the indicators of interest, providing a sample of 3771 respondents in 124 neighbourhoods (96% of the study population). Additionally, in each neighbourhood an audit was carried out to assess the amount and quality of green space. For each neighbourhood a purposeful sample of streets was selected, ensuring that rare, but important, features of the neighbourhood were included (e.g. parks). To do so, we divided each neighbourhood into more or less homogeneous sub-areas by means of land use maps in combination with local knowledge of the areas. Per sub-area, several streets were selected and combined into a route that was inspected by two trained auditors (in a small number of cases by one auditor) in a systematic way, using a form containing closed questions.
--- Measures
--- Mental health
Mental health was measured using the mental health inventory (MHI-5) (Ware Jr and Sherbourne 1992). The MHI-5 assesses nervousness and feelings of depression in the past month, with answers ranging from 'all the time' to 'never' on a six-point scale. Sum scores of the five answers were transformed into a scale from 0 to 100 (Ware Jr et al. 1995), with higher scores reflecting better mental health. The scale has proven to be of good validity and reliability (Ware Jr 2000).
--- Neighbourhood green space
Audit amount and quality of neighbourhood green space
Amount of neighbourhood green space was based on six items containing information about the fraction of visible gardens, garden size, the arrangement of the gardens, the number of trees, the size of public green spaces, and the size of public blue spaces (Cronbach's alpha 0.66). Quality of neighbourhood green space was derived from one question, answered by the auditors: 'What is your general impression of the quality of the green space in this neighbourhood?' Answers ranged from 1 (very negative) to 5 (very positive). Indicators were standardised using Z-scores, calculated for each city separately. This way, neighbourhood green was compared between the neighbourhoods within one city and not across all cities, allowing the examination of the relative effect of green space on mental health.
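To make the two scoring steps above concrete (the linear 0-100 rescaling of the MHI-5 sum score and the city-wise Z-standardisation of the green indicators), the following minimal Python sketch shows one plausible implementation; the column names are illustrative assumptions, not taken from the study's data files, and it presumes the five items have already been recoded so that higher values mean better mental health:

```python
import pandas as pd

# Hypothetical column names; not taken from the PHENOTYPE data files.
MHI5_ITEMS = ["mhi_1", "mhi_2", "mhi_3", "mhi_4", "mhi_5"]

def mhi5_score(df: pd.DataFrame) -> pd.Series:
    """Sum five items coded 1-6 and rescale linearly to 0-100."""
    raw = df[MHI5_ITEMS].sum(axis=1)   # raw sums range from 5 to 30
    return (raw - 5) / 25 * 100        # 5 -> 0, 30 -> 100

def zscore_within_city(df: pd.DataFrame, col: str) -> pd.Series:
    """Standardise an indicator within each city, so that neighbourhoods
    are compared only to other neighbourhoods in the same city."""
    grouped = df.groupby("city")[col]
    return (df[col] - grouped.transform("mean")) / grouped.transform("std")
```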
Subjective amount and quality of neighbourhood green space
Subjective amount of neighbourhood green space was measured by asking the respondents: 'How would you describe your neighbourhood in terms of green space?', with answers on a five-point Likert scale from 'not green at all' (1) to 'very green' (5). Subjective quality of neighbourhood green space was measured by asking: 'Overall, in your neighbourhood, how satisfied are you with the quality of the green/blue environment?' Answers were given on a five-point Likert scale, with a higher score meaning more satisfaction with the quality. We conducted ecometric analyses to calculate the average perception of neighbourhood green space (see Online Resource 2 for a description of the ecometric analysis) (Raudenbush and Sampson 1999). This way, we can include subjective assessments of neighbourhood green space while avoiding 'same-source bias' (exposure and outcome reported by the same respondent at the same time) (de Jong et al. 2011; Wheaton et al. 2015). Ecometric average scores were calculated (stratified by city) and standardised into country-specific Z-scores. We use the term neighbourhood green space for our natural environment measures, because the audit showed that the neighbourhood natural environment consisted foremost of green elements and because mainly green space is relevant for the social interaction mechanism.
--- Social environment
We measured three aspects of the social environment. Social cohesion was constructed by summing the answers to five statements from the social cohesion and trust scale (Sampson et al. 1997): 'People are willing to help their neighbours', 'This is a close-knit neighbourhood', 'People in this neighbourhood can be trusted', 'People in this neighbourhood generally don't get along with each other' (reversed), and 'People in this neighbourhood do not share the same values' (reversed). Answers were given on a 5-point Likert scale ranging from 'totally disagree' to 'totally agree'. Negatively stated items were recoded so that a higher score reflected higher levels of social cohesion (Cronbach's alpha 0.76). Neighbourhood attachment was measured by summing the answers to three statements: 'I feel attached to this neighbourhood', 'I feel at home in this neighbourhood', and 'I live in a nice neighbourhood where people have a sense of belonging', answered on a 5-point Likert scale ranging from 'totally disagree' to 'totally agree'. A higher score reflected stronger neighbourhood attachment (Cronbach's alpha 0.80). Social contacts was measured by asking respondents how often they had contact with their neighbours. Answers were: 'daily', 'at least once a week', '1-3 times per month', 'less than once a month', and 'seldom or never'. Social contacts was dichotomised into 'at least once a week' versus 'less often' for the analyses with social contacts as the outcome measure. Similar to the subjective green measures, ecometric analyses were conducted to calculate the neighbourhood average scores of social cohesion and neighbourhood attachment (see Online Resource 2). Social contacts were included at the individual level. The correlations between the neighbourhood characteristics (Online Resource 3) show that the audit and perceived green measures were moderately related, suggesting that these indicators measured different aspects of neighbourhood green space.
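As a sketch of how these scale scores could be derived (reverse-coding, summing, internal consistency, and the dichotomisation of neighbour contact), again in Python with hypothetical column names; the ecometric neighbourhood aggregation itself is a multilevel measurement model (Online Resource 2) and is not reproduced here:

```python
import pandas as pd

# Hypothetical item columns for the Sampson et al. cohesion scale.
COHESION_ITEMS = ["help", "close_knit", "trust", "not_get_along", "no_shared_values"]
REVERSED = ["not_get_along", "no_shared_values"]

def social_cohesion_score(df: pd.DataFrame) -> pd.Series:
    """Recode negatively stated 1-5 Likert items and sum the five answers."""
    items = df[COHESION_ITEMS].copy()
    items[REVERSED] = 6 - items[REVERSED]  # 1<->5, 2<->4; higher = more cohesion
    return items.sum(axis=1)

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Classical Cronbach's alpha for a set of item columns."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

def weekly_contact(contact_code: pd.Series) -> pd.Series:
    """Dichotomise neighbour contact: 1 = at least once a week, 0 = less often.
    Assumes codes 1='daily', 2='at least once a week', ..., 5='seldom or never'."""
    return (contact_code <= 2).astype(int)
```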
--- Confounders
Individual control variables in all analyses were sex, age (in years), highest achieved educational level (primary school/no education; secondary school/further education; university degree or higher), nationality (country nationality; other), employment status (full-time employed; other), household composition (with children under 12 years; other), and homeownership (yes; no). Neighbourhood socioeconomic status (SES) (low; intermediate; high; based on country-specific data, see Online Resource 1) was included as a neighbourhood-level confounder. See Table 2 for the descriptive statistics.
--- Analyses
Multilevel linear and logistic regression analyses were performed, with individuals at level one, neighbourhoods at level two, and city at level three. City was included as a level to adjust for systematic differences in the intercept between the four cities, i.e. city differences caused by, for instance, policy differences. The green variables were allowed to have a different effect (slope) on the social environment and health for every city, by creating a separate green indicator variable for every city (green indicator × city dummy, where the dummy is 1 if the respondent belongs to this city and 0 otherwise). All four city green variables were added to the model (Weisberg 2005). First, multilevel models assessed the association between neighbourhood green space and individual-level social contacts in the four cities. Ecological models at the neighbourhood level assessed the associations between neighbourhood green, social cohesion, and neighbourhood attachment, respectively. Next, we examined the associations between social cohesion, neighbourhood attachment, social contacts, and mental health in the four cities, while adjusting for green space. Finally, we examined the associations between green space at the neighbourhood level and mental health in the four cities. The analyses with the subjective neighbourhood-level green measures were also adjusted for the individual perception of neighbourhood green space, to distinguish the contextual health effect of green space from the individual-level effect. Analyses were conducted using SAS 9.3.
--- Results
--- Neighbourhood green space and the social environment
More cohesive neighbourhoods were greener and had better quality green space in Doetinchem (perceived and audit) and in Stoke-on-Trent (perceived amount; perceived and audit quality) (Table 3). In Barcelona and Kaunas, neighbourhood-level green space was not related to neighbourhood social cohesion. Stronger neighbourhood attachment was found in greener neighbourhoods (perceived) and in neighbourhoods with better quality green space (audit and perceived) in Doetinchem (Table 3). Better perceived quality of neighbourhood green was associated with stronger neighbourhood attachment in Barcelona and Stoke-on-Trent as well. Neighbourhood green space was not associated with social contacts in any of the cities.
--- Social environment and mental health
Residents living in neighbourhoods with more social cohesion or with stronger neighbourhood attachment reported better mental health only in Stoke-on-Trent, not in the other cities (Table 4). Having more frequent social contacts was associated with better mental health consistently in all four cities.
--- Neighbourhood green space and mental health
In Barcelona, a higher amount of neighbourhood green (audit) was associated with better mental health (Table 4). In the other three cities, neighbourhood green space was not associated with mental health.
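To illustrate the analysis strategy described in the Analyses subsection above (city-specific green slopes via green × city dummy variables, with individuals nested in neighbourhoods), here is a minimal sketch in Python with statsmodels. It is a simplification of the paper's three-level SAS models, using a random intercept for neighbourhood and a fixed effect for city, and all file and variable names are assumptions:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical input: one row per respondent, with columns mhi5 (0-100 score),
# green_z (city-standardised green indicator), city, neighbourhood, age, sex.
# Assumes city codes are formula-safe short strings (e.g. "BCN", "KAU").
d = pd.read_csv("phenotype_survey.csv")

# A separate green variable per city gives each city its own slope;
# the variable is zero for respondents living in any other city.
for c in d["city"].unique():
    d[f"green_{c}"] = d["green_z"] * (d["city"] == c)

green_terms = " + ".join(f"green_{c}" for c in d["city"].unique())

# Random intercept per neighbourhood; city entered as a fixed effect.
model = smf.mixedlm(
    f"mhi5 ~ {green_terms} + age + C(sex) + C(city)",
    data=d,
    groups=d["neighbourhood"],
)
print(model.fit().summary())
```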
--- The social environment as possible mechanism
In Barcelona, we found no aspect of the social environment that was associated with both neighbourhood green space (Table 3) and mental health (Table 4). In the other cities, we found no associations between neighbourhood green space and mental health (Table 4). Therefore, we found no indications that the social environment could be an underlying mechanism between neighbourhood green space and mental health.
--- Discussion
Greener neighbourhoods and neighbourhoods with better quality green space were more cohesive and had higher levels of neighbourhood attachment in Doetinchem and Stoke-on-Trent. More neighbourhood cohesion and stronger neighbourhood attachment were associated with better mental health in Stoke-on-Trent only. Only in Barcelona was neighbourhood green space associated with better mental health, but there we found no indications that the social environment could be the underlying mechanism.
--- Study limitations
The cross-sectional design of this study prevents conclusions about the causality of the relationships (Galster 2008). We therefore did not implement statistical tests for mediation, as mediation implies causal processes. Another limitation is the low response rate (see Online Resource 1), resulting in an underrepresentation of less-educated people in all four cities. It has been suggested that people with a low socioeconomic status (SES) may benefit more from neighbourhood green space than those with a high SES (Mitchell and Popham 2008). The underrepresentation of less-educated people may therefore have resulted in an underestimation of the relationship between green space and mental health. Third, in Kaunas, there was no variation between neighbourhoods in neighbourhood attachment and, as indicated by the low reliability scores of green space and social cohesion in Table 2, only little neighbourhood variation in the other neighbourhood measures (Hox 2010). Because of the low reliability scores, we excluded the Kaunas results based on the perception measures from the discussion of the implications. Finally, the neighbourhoods in Barcelona were considerably smaller in size compared to the other cities. This could have increased the chance that the Spanish neighbourhoods were more homogeneous in terms of the amount and quality of neighbourhood green space, which could have resulted in a more precise audit assessment of the neighbourhood green space in Barcelona. We cannot completely rule out that a more precise audit assessment of the green space in Barcelona explains why a relation between audit amount of green space and mental health was found there.
--- Neighbourhood green space and the social environment
Our study showed that green space at the neighbourhood level was related to the neighbourhood social environment. Our findings, which related social cohesion consistently to neighbourhood green space in Doetinchem and Stoke-on-Trent, strengthen the evidence on the influence of green space on the development of social cohesion. Furthermore, in line with Arnberger and Eder (2012), we found neighbourhood attachment to be consistently associated with neighbourhood green space in Doetinchem, as well as with the subjective quality of neighbourhood green in Barcelona and Stoke-on-Trent. We found no evidence that neighbourhood green space is related to more contacts between neighbours, in line with Maas et al. (2009).
Our findings corroborate the argument by Hartig et al. (2014) that physical neighbourhood characteristics, such as green space, influence other area characteristics, e.g. social cohesion, more easily than individual characteristics, e.g. individual social contacts.
--- Green space, social environment and the relation with mental health
Our finding that individual social contacts were consistently associated with better mental health, while social cohesion and neighbourhood attachment were related to better mental health in Stoke-on-Trent, UK exclusively, underlines the fact that the neighbourhood environment is in general less important for individual health than individual characteristics (Pickett and Pearl 2001). Despite that, studying neighbourhood characteristics such as neighbourhood green is relevant, as they can influence the health of many people, thereby contributing substantially to the health of the population. We found only weak evidence for a relationship between neighbourhood green space and mental health. A study that used similar green data, i.e. audit information, reported no relation between the presence of green and general health (Dunstan et al. 2013), though another study reported that the amount of green was related to mental health (Van Dillen et al. 2012). We could only replicate this association between the amount of green space and mental health in Barcelona. The Barcelona neighbourhoods were considerably less green than the neighbourhoods in the other cities (see Table 1). Possibly, living in greener neighbourhoods in Barcelona is more strongly related to mental health than in other cities because of the scarcity of green space in general. Another explanation for finding an association between green space and mental health in Barcelona only is that especially nearby green space seems important for mental health (Kaplan 2001; Triguero-Mas et al. 2015; Van Dillen et al. 2012), as the Barcelona neighbourhoods were by far the smallest in this study. When we conducted a post-hoc analysis using the individual perception of neighbourhood green, assuming that the individual perception is based on nearby green space more than the neighbourhood average perception of green, we indeed found associations between green space and mental health in Doetinchem as well. In our study, quality of neighbourhood-level green was not associated with mental health, which is in contrast with previous studies (Francis et al. 2012; Van Dillen et al. 2012). We used a crude measure for quality of green space; possibly this measure was not specific enough to detect a relationship with mental health. We found no indications that the neighbourhood social environment serves as a possible mechanism between neighbourhood green space and mental health. We either failed to find a relation between neighbourhood green space and mental health (i.e. in Kaunas, Doetinchem, and Stoke-on-Trent), or found no aspect of the social environment that was associated with both neighbourhood green space and mental health (i.e. in Barcelona). In Barcelona, a highly urbanized city, restoration from daily stress might be a more relevant mechanism underlying the association between green space and mental health than the social environment. Unfortunately, we were unable to examine this hypothesis with the available PHENOTYPE dataset.
--- Comparison of the cities
There were marked differences between the cities with regard to the relevance of the neighbourhood environment for mental health. The intra-class correlation (ICC), which estimates the proportion of the variation in mental health between residents that is related to neighbourhood characteristics (i.e. the between-neighbourhood variance divided by the total variance), reflects these differences. For example, in Doetinchem, the ICC was very low (0.51%) and both green space and the social neighbourhood characteristics were unrelated to mental health, in contrast with Stoke-on-Trent and Barcelona, with ICCs of 8.51% and 6.71%, respectively. In Barcelona, this ICC reflected the relation between neighbourhood green space and mental health, and in Stoke-on-Trent the neighbourhood social environment was related to mental health. The different findings across the cities might reflect geographical and cultural differences (Hartig et al. 2014). The differences could also reflect that, despite the use of identical measurements, the data might still not be comparable, due to cultural differences in the interpretation of the survey questions and the audit. The use of more objective measures, such as GIS data, could improve the comparability of the findings, but such measures might not capture the environmental characteristics that have the biggest impact on mental health. Furthermore, more objective data on the quality of neighbourhood green or the social neighbourhood characteristics will be much more difficult to obtain. Future comparative studies should make efforts to also incorporate objective data to allow even better comparison between European settings.
--- Conclusion
Neighbourhood green and the neighbourhood social environment were related to one another in two cities, but this did not translate into better mental health there. Neighbourhood green was related to mental health only in Barcelona, but there we found no indication that the neighbourhood social environment could be the underlying mechanism. Our study found no indications that improving neighbourhood green space could be a relevant public health policy, nor were there indications that health benefits of green space would occur through the improvement of the neighbourhood social environment. Future studies should use longitudinal data to further investigate the possibility of this mechanism. To improve the comparison between European settings, studies should try to incorporate objective measures of both green space and the social environment.
--- Key points
• The neighbourhood social environment has been suggested to be one of the mechanisms responsible for the beneficial effect of green space on health.
• This study examines the relationship between neighbourhood green space, social cohesion, neighbourhood attachment, social contacts, and mental health in four European cities.
• We find no evidence that the neighbourhood social environment could be the underlying mechanism between neighbourhood green space and mental health.
• The relevance of this mechanism needs further investigation with longitudinal data and with more objective data to improve the comparison between European settings.
--- Ethical approval
This article does not contain any studies with animals performed by any of the authors. All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki Declaration and its later amendments or comparable ethical standards.
--- Informed consent
Informed consent was obtained from all individual participants included in the study.
--- Conflict of interest
The authors declare that they have no conflict of interest.
Background: In light of the growing emphasis on individualization in healthcare, it is vital to take the diversity of inhabitants and users into consideration. Thus, identifying shared perceptions among group members may be important in improving healthcare that is relevant to the particular group, as may identifying the perceptions of the staff with whom interactions take place. This study investigates how motherhood is perceived among three groups: Somali-born mothers; Swedish-born mothers; and nurses at Swedish child health centers. Inequities in terms of access and satisfaction have previously been identified at the health centers. Methods: Participants in all three groups were asked to complete two statements about motherhood: one statement about perfect motherhood, another about everyday motherhood. The responses were analyzed using qualitative coding and categorization to identify differences and similarities among the three groups. Results: The responses to both statements by the three groups included divergences as well as convergences. Overall, biological aspects of motherhood were absent, and respondents focused almost exclusively on social matters. Working life was embedded in motherhood, but only for the Somali-born mothers. The three groups put emphasis on different aspects of motherhood: the Somali-born mothers on the community; the Swedish-born mothers on the child; and the nurses on the mother herself. The nurses, and to some extent the Swedish-born mothers, expected the mother to ask for help with the children when needed. However, the Somali-born mothers responded that the mother should be independent, not asking for such help. Nurses, more than both groups of mothers, largely described everyday motherhood in positively charged words or phrases. Conclusions: The findings of this paper suggest that convergences and divergences in perceptions of motherhood among the three groups may be important for equitable access and utilization of healthcare. Individualized healthcare requires nuance and should avoid normative or stereotypical encounters by recognizing social context and needs that are relevant to specific groups of the population.
--- Background
In recent years, increased attention has been focused on concepts supporting the pivotal role of the individual's perceptions in healthcare [1][2][3]. This focus has been accompanied and reinforced by the marketization of Western society and the prominence given to concepts of service management [4,5]. In the 1980s, experts in the field of service quality argued for quality as a subjective perception, which varies from customer to customer [6]. The next decade saw an increased focus on the concept of value [7] as an individualized and even unique perception of the customer [8,9]. Individualization in healthcare is often proclaimed in order to enhance the healthcare user's position by becoming more participative and well-informed [10,11], ranging from co-developing treatment plans to choosing healthcare providers [12,13]. For multiple reasons, not all patients or inhabitants have the possibilities or prerequisites to be participative or well-informed [14,15]. Barriers may be constituted by language skills [16,17], one's economic situation [18], or long travelling distances [19]. Possibilities to be participative or well-informed may also be constrained by the provider's normative or stereotypical expectations and perceptions [18]. For instance, Hedegaard et al. [10] found that physicians were unconsciously more amenable toward native Swedish-speaking patients than non-native speakers, despite the latter group communicating more in line with "patient-centered care" (e.g. being well-informed and actively asking questions). Stereotypes have also been reported in the setting of the current paper, the Swedish child health centers, in which "family" was heteronormatively assumed to consist of child, mother, and father in information given to parents [18]. Given the above, Saha et al. [20] argued that individualization in healthcare must take the diversity of patients' perspectives into consideration. Thus, identifying group members' shared perceptions may be a first step in improving healthcare that is relevant to the particular group and grounded within their social context [16,17]. The social context also implies that societal structures and norms influence human interaction [21,22]; thus it is also important to inquire about healthcare providers' perceptions [23]. In the decentralized Swedish healthcare system, the national government is responsible for overall objectives and regulation. At the two local levels of government, the county councils and the municipalities, it is decided how healthcare is to be delivered given the local conditions [24]. Generally, the county councils are responsible for providing high-quality healthcare through hospitals, primary care (including child health services), and dental care, whereas the municipalities are responsible for care for the elderly, people with physical disabilities or psychological disorders, as well as school health [25]. Funding comes mainly from county council and municipal taxes, but also from out-of-pocket fees or national government grants [26]. The purpose of Swedish child health services is to promote children's health, development, and well-being [27]. A local child health center offers voluntary child health promotion programs and free services for all preschool children (newborn to age 6 years) and their parent(s). The responsibility for carrying out the programs rests mainly with the nurse at the center, often a pediatric or district nurse. The nurse's importance as a resource and support for parents has been recognized [28].
The needs of each family determine the frequency of appointments. Typically, there are 10 to 20 health appointments during the child's first year, and then annual appointments until school-based healthcare providers take over these responsibilities [29]. Besides the care and assessment provided by the child health nurse, each center offers additional services, including vaccination programs, language tests, eye examinations, and parental education given in groups. In addition to the nurse, physicians and psychologists are seen regularly or when required. Despite this seeming standardization, there is an increasing recognition of the variation in services provided by the country's child health centers [30]. Inequities have been identified in terms of access to child health services, for example difficulties in attracting fathers [31,32] and unemployed mothers [33] to visit the centers. Other researchers have found that the centers do reach various groups, but do not always adequately meet the diverse support needs of different groups [34]. Inequities are also manifested as less satisfaction with the provided services among mothers of low socio-economic status [35], and in same-sex parents experiencing heteronormative communication [36]. In an attempt to coordinate services in the decentralized Swedish healthcare system, a government agency published national guidelines in 2014 [27], which emphasized the importance of including and addressing the needs of all parents. Subsequently, efforts have been made to change approaches and attitudes at the local centers [29]. For instance, in addressing the growing number of parents from Somalia, the nurses at one center worked with reflexivity on their own preunderstandings, resulting in better encounters with all families [23]. Research on motherhood in Sweden in the 2000s has often addressed gender equality and both parents' responsibilities for parenting, as well as their opportunities to do paid work [37,38], either independent of the mother's country of birth [39] or with a specific focus on Swedish-born mothers [40]. For the latter group, research has found that becoming a mother is more of an individualized life project, compared to mothers with a Turkish background living in Sweden, for whom becoming a mother was more of a collective project [41]. It has been reported that Sweden's local child health centers often fail to attract immigrant mothers [33]. Sweden's population of nearly 10 million people includes a substantial number (15%) born outside its borders [42]. Of all immigrants to Sweden in 2013, almost 10% held Somali citizenship; the only groups that were larger were Syrian immigrants and returning Swedish citizens [43]. For two decades, Somalia has been one of the most common countries of origin for immigrants to Sweden [44]. However, according to an official report, Somalis as a group have had particular difficulty integrating into Swedish society [45]. Somalis living in Sweden have among the lowest employment rates [45,46] and educational levels [46] of all groups. Findings of previous studies of Somali immigrants in Swedish healthcare include: dissatisfaction with healthcare encounters [47]; dissatisfaction with childbirth experiences [48,49]; a relatively high proportion of Somali children with autism [50]; and a relatively high risk of vitamin D deficiencies [51]. Overall, Somalis' perceptions of Swedish healthcare have not been well addressed [52].
The current manuscript is directed at examining the perceptions of motherhood as expressed by two groups of mothers, Swedish-born and Somali-born, and at exploring the implications for individualized healthcare. This goal includes the recognition that those belonging to a particular group may share perceptions, and that healthcare nurses (the third group) also hold their own perceptions about mothers yet must recognize differing perceptions among various groups.
--- Methods
--- Setting and participants
This study took place in Western Sweden, in the country's second largest region, which has over 1.6 million inhabitants [53]. To enhance understanding of different perceptions among groups, and not only among individuals, we asked 20 Somali-born mothers, 50 Swedish-born mothers, and 35 child health nurses to complete two statements about motherhood. The responses of the Swedish-born mothers and the nurses were collected at two child health centers in two medium-sized cities in Western Sweden. Responses of the Somali-born mothers were initially collected at one child health center in a medium-sized city. In order to reach saturation [54], additional responses were collected at a local meeting place for parents located in a multicultural area of the region's largest city. The diversity of settings for data collection was thought to contribute to a more robust level of saturation within each group of respondents. The collected, de-identified demographic data revealed that all but two of the 50 responding Swedish-born mothers had a partner, and they each had between one and nine children. Among the Somali-born mothers, eight of the 20 respondents had a partner, and they each had between one and seven children. The Swedish-born mothers were 22 to 40 years of age, and the Somali-born mothers were 20 to 34 years old.
--- Data collection and analysis
Mothers and nurses were informed about the study both verbally and in writing. Verbal and written consent were obtained prior to data collection. All participants were informed about the study's purpose; that participation was voluntary; and that they could withdraw at any time without consequence. Moreover, they were informed that all published information would be anonymous [54]. Using a similar data collection procedure as in previous research [55,56], the participants were asked to complete two statements: 1) "To me, being a perfect mother means…"; and 2) "In everyday life, being a mother means…" The statements were chosen to identify disparities given the mothers' life situations, and to compare their ideal conceptions of motherhood with what they believed was realistic. Each respondent could provide multiple responses to each statement, as shown in Table 1. The analysis was inspired by the qualitative content analysis procedure of Graneheim and Lundman [57]. Analysis can be done on content close to the text (manifest content), or on the underlying meaning (latent content). In this study, the manifest content was analyzed. All responses (n = 543) were read through several times and coded by research team members EE and KE. The codes were clustered into two types of categories: general categories and group-specific categories (Swedish-born mothers, Somali-born mothers, child health nurses). Of particular interest were the emerging similarities and differences among the three groups. The constructed categories were discussed with the other research team members and further adjusted.
--- Results
Few differences were revealed between the responses to the statements on everyday and perfect motherhood. Consequently, the perceptions organized in Fig. 1 address responses among the groups that are common to both perfect and everyday motherhood. Perceptions of motherhood commonly expressed and shared among the three groups are presented in the circle in the middle. The three surrounding squares represent three distinct foci of motherhood as derived from the responses: 1) the mother; 2) the community; and 3) the child. The perceptions and quotations are placed closest to the focus of motherhood mainly addressed. The respondents' group categorization is indicated within brackets: child health nurses, Somali-born mothers, and Swedish-born mothers.
[Fig. 1: Focus of motherhood perceptions]
For all three groups, a mother was perceived as someone providing the basics, guiding the child, and explaining things, as exemplified in the mid-circle. Here, motherhood was expressed in terms of "Guiding the child in life, set limits" (nurse); or to make sure "[t]hat the children's most basic needs are met, such as cooking, going to the toilet, and to rest" (Swedish-born mother); or to "love the child" (Somali-born mother). These and similar responses were common among all three groups, irrespective of addressing perfect or everyday motherhood. Despite these similarities, there were also group-specific responses to motherhood. The nurses typically thought of a mother as someone capable of asking others for help, for instance by "admit[ting] to her surroundings when it's tough and hard work…", or to "understand when she needs to ask for help." Often "the partner" (or, more rarely, "the father") or parents-in-law were mentioned as providing this help. Moreover, the nurses often focused on motherhood in relation to the characteristics of the mothers themselves, predominantly by using positively charged words, such as "being loving", or to be "caring, creative, inventive, friendly." Swedish-born mothers also used "positive" terms to describe the mother's characteristics. Some Swedish-born mothers responded that a perfect mother "felt good." Feeling good was expressed in relation to the child; a perfect mother should take good care of herself so she can take care of the child, rather than for her own sake. Unlike the nurses' focus on the mothers, the Swedish-born mothers talked about motherhood almost exclusively from the child's perspective: "mak[ing] sure that one's child is feeling okay", or "… that she develops in every way and that she becomes a safe individual with good self-esteem and self-confidence." Somali-born mothers typically responded that a mother possessed a certain degree of stamina and was supposed to be active, "never get tired", and always "make an effort." Different from both the nurses' and the Swedish-born mothers' perceptions, the Somali-born mothers typically talked about motherhood as embedded in a community context, as mothers should have "good contact with school, teachers, school nurse, and physicians". To a great extent, the responses for perfect and everyday motherhood overlapped. However, a few differences were identified. The nurses used "positive" terms to describe the characteristics of both perfect and everyday motherhood. Swedish-born mothers were more nuanced when describing everyday motherhood, often using both negative and positive wording in tandem: "Often headache. Often feeling insufficient when the children are screaming.
But then there is a glimmer and emotions of joy takes over, the pride and the happiness." More than the other groups, Somali-born mothers emphasized negative aspects of everyday motherhood, compared with perfect motherhood. Here, motherhood was associated with having a "bad character," "lying," "being angry," "lazy," "tired," "absent," "impatient," or "not taking care of herself." Furthermore, she was not well-educated, did not get by economically, and relied on society's help. She was perceived as someone prioritizing her own life over the child's life.
--- Discussion
--- Motherhood as a social construction
Research on motherhood often emphasizes biological aspects, such as pregnancy [58], post-natal depression [59], or breastfeeding [60]. The findings revealed little about these biological aspects. Focusing on biological aspects has been criticized for neglecting the interests and power relations that have made women responsible for parenting [61,62]; for example, a labor market in which women are expected to raise children and men are expected to provide [63]. From this perspective, gendered expectations and characteristics are considered to have been socially or culturally constructed [63]. These expectations and characteristics may vary in relation to the social or cultural context. The differences in the responses of the Somali-born mothers compared with the nurses and the Swedish-born mothers may be explained by perceptions being constructed, or shaped, in relation to the cultural and social context of the respondent. Family policies and welfare systems differ between societies and shape the responsibilities of women differently [64][65][66]. By the same logic, the gendered expectations and responsibilities undertaken may transform for an individual as he or she changes social context; for example, through migration [67][68][69].
--- Focus of motherhood: the child, the mother, the community
Various healthcare studies emphasize the community's pivotal role for Somali refugees [70,71]. To talk in terms of Somalis as one group is of course a simplification. However, it is argued that indigenous philosophies are deeply embedded in Somali societies, both within Somalia and the Somali diaspora, and govern Somali people's way of life [72]. Vital in these philosophies is communalism and the individual's relationship with the community, including social cohesion and collective responsibility [72,73]. Consequently, social networks and the community are crucial for care if one is ill or in an adverse life situation [74][75][76]. Healthcare information provided by friends and key actors in the community may therefore be just as important as information from healthcare staff [77]. This may be why many Somali-born mothers' responses concerning both everyday and perfect motherhood focused on the surrounding community rather than on themselves or their children. In contrast, the child health nurses generally focused on the mother herself when talking about perfect and everyday motherhood. This is somewhat surprising, given the official position [27,29] that child health centers emphasize the child's central position. Of course, the study's statements explicitly addressing motherhood could have affected the nurses' responses. A related deviation among the groups was their expectations (or not) of a mother's ability to share childcare responsibilities.
Despite talking about motherhood within a wider community context, the Somali-born mothers believed that a perfect mother, as well as an everyday mother, was not supposed to ask others for help when it came to taking care of the child. She was expected to be independent and to manage child-rearing herself. Previous research has suggested that parenting responsibilities are shared to a greater extent between Somali couples living in Finland than is normally the case in Somalia [67]. Studies set in the United States [68] and Sweden [69] argue that the shift toward more equal gender roles between Somali couples in the host country, compared to Somalia, may strain relationships; while both partners are expected to work, the husband may be reluctant to share household and parenting responsibilities. The nurses generally responded that mothers should cooperate with others in raising children, not least the parents-in-law. The Swedish-born mothers mentioned cooperation, but limited it to the partner.
--- The meta-mother
Asking the respondents to complete the statement about perfect motherhood may have introduced bias, as it could be interpreted as invoking a stereotypical standard of perfection against which their own mothering should be measured. The stereotype of perfection was frequently addressed by Swedish-born mothers as well as nurses, in responses such as "there's no such thing as a perfect mother." But almost every time the impossibility of perfection was mentioned, it was followed by a statement along the lines of "…but she is doing the best she can," or "…but she always tries her best." In this sense, the ideal of motherhood is argued not to exist in "real life" but rather as a stereotype; yet it shapes the expectations of mothering, as exemplified in the previous quotations: she is always, and in every situation, supposed to do her best given her circumstances. From the nurses' perspective, the perfect mother also realized when it was time to seek help, or when she could not manage to take care of the child herself. Previous scholars have described motherhood as being filled with stress, ambivalence, frustration, and self-blame [78,79]. Our findings suggest that mothers may be greatly impacted by the concept we termed the meta-mother: a woman who instinctively knows how and when to act, and who is always giving her maximum in her role as a mother.
--- Consequences for the nurse-visitor interaction
The differences and similarities described among the groups in this study may affect the individual meeting between the nurse and the visiting mother at a child health center. A good meeting has been identified as a prerequisite for the child health nurse to find out what the mother desires and to fulfill her expectations [80]. However, the nurses at the Swedish child health centers have been found to initiate most of the topics discussed [81], which may contribute to parents' lack of healthcare information related to their own concerns [33]. Moreover, it has been pointed out that normative and gendered perceptions risk being transferred to visitors at the child health centers [28]. Official reports focus attention not only on the centers' visitors, but also on the staff and the prevailing norms within the organization [27]. In this study, the nurses both explicitly and implicitly expressed normative perceptions.
The findings of this study suggest that increased knowledge about perceptions of motherhood, and engagement with the local communities, may help to improve equitable access to healthcare through approaches that are embedded in the local community context. Thus, information about child health services should not be limited to the centers, but disseminated in the wider community [16,17]. Given the potentially strong role the community can play, those who already have a position of authority or trust in the local community should be engaged to disseminate information [16,17,74]. Of the 35 responding nurses, ten indicated the belief that a partner is part of parenting. A few nurses specified that this partner was male, but most mentioned a sexless "partner." One reason most nurses did not heteronormatively assume the partner to be male may be that child health service guidelines draw attention to family constellations other than heterosexual families [27]. However, the nurses did not expect the mothers to be without a partner. Previous research highlights the fact that single-parent families are increasingly common in Sweden, and investigators have reported resulting disadvantages for the health of the child [82], as well as for single-parenting mothers [83]. Some normative perceptions were explicit. For instance, the nurses expected the mother to ask for help concerning her child. However, the Somali-born mothers' perceptions of ideal motherhood were of someone capable of taking care of the child herself, not asking for help. Moreover, the nurses described everyday motherhood in rather glowing terms, such as wonderful and beautiful. Swedish-born mothers responded about perfect motherhood in "positive" terms; however, their responses concerning everyday motherhood were much more nuanced. Somali-born mothers rarely mentioned "positive" characteristics of everyday motherhood. There is a risk that the needs of mothers who express fatigue or irritation will not be met by nurses at the centers. Given the potential impact of the "meta-mother," nurses should be aware of the stereotype of the "perfect mother" and therefore be ready to support mothers where they are.
--- Limitations of the study
In recent years, Scandinavian scholars have focused attention on fathers' perceptions and parental experiences [31,32]. Still, most appointments at the child health center are made by the mother [31]. The mother is the norm and has been a major influence on the shaping of the centers over the years. Understanding the needs and expectations of visitors other than mothers (fathers, for example) requires that one understand the norm. Focusing on motherhood risks reinforcing the mother's role as the main childcare provider. There is a similar risk of perpetuating stereotypes and generalizations of Somali-born mothers based simply on country of birth. Indeed, these women's experiences may be very different, depending on the reasons for migration, age at migration, and other factors. These factors were not considered in this study. The intent of the current study was to understand perceptions of motherhood in three groups in Western Sweden, with participants completing open-ended statements about ideal and everyday motherhood. As such, the scope was limited, and therefore caution should be exercised when generalizing the results and conclusions to other populations. Neither should the results be generalized to all Somali-born or Swedish-born women in Sweden.
As with all qualitative studies, there are limitations related to smaller sample sizes and nonrandom sampling. In this case, the Somali-born sample was much smaller due to the resource intensity of translation and the logistical constraints of accessing the mothers at a time convenient to them. Data were collected at different sites by necessity, in order to access mothers and nurses who were willing to participate in the study. The responses of the Somali-born mothers were translated into Swedish by an interpreter. The Swedish translations were then translated into English by EE. Consequently, there is a greater risk that information has been lost in translation, or somewhat distorted, for the Somali-born mothers than for the other two groups (for which translation took place only once). No demographic information was collected for the nurses. Previous research [84] has suggested that healthcare providers' backgrounds also affect preconceptions. Previous research [67][68][69] has also indicated that gendered responsibilities transform with migration. In the case of the Somali-born mothers, this has not been thoroughly addressed in this paper, which focuses primarily on differences between groups.
--- Conclusion
With the growing emphasis on individualization in healthcare comes a need to acknowledge the social context, including societal structures and norms. This paper suggests that between groups of people, and not solely between single individuals, there are differences and similarities in perceptions of motherhood which may have implications for health services access and utilization. In this study, Somali-born and Swedish-born mothers, as well as nurses, expressed differences in the focus of motherhood: the community, the child, and the mother herself. Potential convergences and divergences in the beliefs of mothers and staff may constitute a source of misunderstanding and of normative or stereotypical encounters. However, recognition of the existence of gendered and cultural constructions may be a first step toward avoiding such encounters. Because healthcare encounters do not take place in a social vacuum, healthcare needs to be provided in a way that is relevant to specific groups of the population and grounded within their social context. Moreover, group perceptions should be used constructively. When healthcare providers design services that satisfy the needs of a diversity of users, equity in healthcare may be enhanced.
--- Authors' contributions
EE and KE managed the project. LV proposed the data collection methods, KE carried out most of the data collection, and EE and KE jointly analyzed the data. EE drafted the manuscript. All authors helped with revisions of the manuscript. All authors read and approved the final manuscript.
--- Competing interests
The authors declare that they have no competing interests.
--- Consent for publication
Consent for publication was obtained from the participants.
--- Ethics approval and consent to participate
The Research Ethics Committee in Gothenburg, Sweden, decided that this project was not a matter for the Ethical Review Act (registration number 127-15). Consent was obtained prior to data collection.
Broadly defined, "peer support" refers to a process through which people who share common experiences or face similar challenges come together as equals to give and receive help based on the knowledge that comes through shared experience (Riessman, 1989). A "peer" is an equal, someone with whom one shares demographic or social similarities. "Support" expresses the kind of deeply felt empathy, encouragement, and assistance that people with shared experiences can offer one another within a reciprocal relationship.
Peer support as an organized strategy for giving and receiving help can be understood as an extension of the natural human tendency to respond compassionately to shared difficulty. A widow may offer comforting words, tea, and company to a woman grieving the death of her husband. Someone who has learned to cope with the effects of a serious injury explains to a newly injured person how they manage. Most people who have been through hard times empathize with, and have an urge to help, others who struggle with similar problems. Giving support not only benefits the person receiving it; it also makes the helper feel valued and needed (Riessman, 1965). Sometimes referred to as self-help or mutual aid, peer support has been used by people dealing with different types of social circumstances, emotional challenges, and health issues, including those with alcohol or drug problems, bereaved individuals, and people living with physical illnesses or impairments (Penney, Mead, & Prescott, 2008). Peer support has a significant history among people with psychiatric diagnoses. This article will review recent literature on peer support among people with psychiatric diagnoses in the United States. It begins by addressing the substantial definitional issues involved and offering a brief consideration of the history of two types of peer support. This will be followed by an examination of recent review articles on peer support in mental health. An ongoing study of a peer-developed approach, Intentional Peer Support, within the context of peer-run programs, is described. Finally, policy and practice implications are discussed.
--- Defining Peer Support by and for People with Psychiatric Disabilities
In recent decades, there has been increasing attention in the professional literature to the study of peer support among people with psychiatric disabilities. But the ability to conduct a meaningful review of this literature is complicated by the fact that there is no agreed-upon definition of the term "peer support." In the research literature, terms such as "peer support," "peer-delivered services," "self-help," "consumer services," "peer mentors," and "peer workers" are used interchangeably, making it difficult to draw meaningful distinctions among fundamentally different types of interventions (Repper & Carter, 2011; Rogers et al., 2010; Davidson, Chinman, Sells, & Rowe, 2006; Mead & MacNeil, 2005). Despite this confusion, upon examination of the history of peer support, one can differentiate between two major categories that are often conflated in the literature: peer-developed peer support and the practice of employing peer staff in traditional mental health programs. These are defined and discussed below.
--- Peer-Developed Peer Support
Peer-developed peer support is a non-hierarchical approach with origins in the informal self-help and consciousness-raising groups organized in the 1970s by the ex-patients' movement. It arose within an explicitly political context, in reaction to negative experiences with mental health treatment and dissatisfaction with the limits of the mental patient role (Van Tosh & del Vecchio, 2001; Kalinowski & Penney, 1998).
While peer support for people with specific medical conditions, like cancer, focuses on coping with illness, peer support by and for people with psychiatric histories has always been closely intertwined with feelings of powerlessness within the mental health system and with activism promoting human and civil rights and alternatives to the medical model that defines extreme mental and emotional states as "mental illnesses" (Chamberlin, 1978). Deegan (2006) sees peer support as a "response to the alienation and adversity associated with being given a psychiatric diagnosis," by which diagnosed people are ostracized from the larger community and work to create their own communities by reaching out to others who share their lot. The development of peer support was influenced by the human and civil rights movements of African Americans, women, and lesbians and gay men in the 1960s and '70s. It was also influenced by the Independent Living (IL) movement of people with physical, sensory, and cognitive disabilities (Deegan, 1992). Peer support was inseparable from human rights activism during the development of the IL movement and is one of four required services of federally funded Centers for Independent Living serving people with disabilities (White, Simpson, Gonda, Ravesloot, & Coble, 2010). The IL movement sees "disability" as the result of physical, attitudinal, and social barriers, rather than the consequence of deficits within individuals with impairments (De Jong, 1979). This formulation resonated with people who had negative experiences in the psychiatric system and used peer support as a means of coping with adverse effects (Penney & Bassman, 2004). Although peer support emerged in a political environment, it is also an interpersonal process with the goal of promoting inner healing and growth in the context of community (Mead, 2003). As a practice, it is characterized by equitable relationships among people with shared experience, voluntariness, the belief that giving help is also self-healing, empowerment, positive risk-taking, self-awareness, and building a sense of community (Budd, Harp, & Zinman, 1987; Harp & Zinman, 1994; Clay, 2005). Peer support, by definition, is "led by people using mental health services" (Stamou, 2014, p. 167; Faulkner & Kalathil, 2012).
--- Intentional Peer Support as an Evolution of Informal and Peer-Developed Peer Support
In the early days, peer support, more commonly called "self-help" in those years, was often informal and relatively unstructured. People met in apartments, in church basements, and in libraries, but rarely in spaces affiliated with the mental health system (Chamberlin, 1990). But during the 1980s and '90s, independent, peer-run nonprofit organizations emerged (Chamberlin, 2005). Many of these groups began to offer more structured peer support, generally with some government funding. The development of government-funded peer-run programs meant that these programs needed to define more clearly the vision, principles, and practices of peer support to meet government oversight requirements. Shery Mead has been a pioneer in this work for more than 20 years, developing an approach called Intentional Peer Support (IPS). While IPS grew from the informal practices of grassroots-initiated peer support, it differs from earlier approaches because it is a theoretically based, manualized approach with clear goals and a fidelity tool for practitioners (MacNeil & Mead, 2005).
IPS sees its fundamental purpose as helping people unlearn the mental patient role, and defines peer support as "a system of giving and receiving help founded on key principles of respect, shared responsibility, and mutual agreement of what is helpful. Peer support is not based on psychiatric models and diagnostic criteria. It is about understanding another's situation empathically through the shared experience of emotional and psychological pain" (Mead, 2003, p. 1). It is posited as a non-clinical intervention whose benefits are primarily intrapersonal and social in nature (Mead & MacNeil, 2005). In working with individuals with psychiatric diagnoses, the goals of IPS are to move from top-down helping to mutual learning, from a focus on the individual as the locus of dysfunction to a focus on relationships as a tool for growth, and from operating from fear to developing hope (Mead, 2014). IPS is a philosophical descendant of the informal peer support of the ex-patients' movement of the 1970s. What distinguishes it from earlier, less structured peer support is a focus on the nature and purpose of the peer support relationship and its attention to skill-building to purposefully engage in peer support relationships that promote mutual healing and growth. IPS recognizes that trauma plays a central role in the experience, diagnosis, and treatment of people with psychiatric histories, and emphasizes the need for peer support to be trauma-informed (Mead, 2001). Other peer support practitioners have expanded this effort to bring a trauma-informed lens to the practice of peer support through guidebooks and training (Blanch, Filson, Penney, & Cave, 2012). --- Peer Staff Employed in Traditional Mental Health Programs The growth of this approach is illustrated by the recent, rapid expansion in the U.S. of "peer specialists" and similar positions in mental health programs (National Association of State Mental Health Program Directors [NASMHPD], 2012). While there is no standard definition or job description for a "peer specialist," a number of states, provider organizations, and government agencies have such titles, also known as peer mentors, peer support specialists, recovery support specialists, recovery coaches, and a host of other titles that usually involve the words "peer" and/or "recovery." The use of the word "peer" as part of job titles is a topic that deserves fuller discussion than can be offered here. The term is confusing at best; in general usage, a "peer" is an equal, one who shares characteristics or experiences in common with the subject. To employ the word as a euphemism for "service user" or "mental patient" poses both grammatical and philosophical problems. What these job titles have in common is that they apply to employees with psychiatric histories working in paraprofessional roles in traditional mental health programs, often performing the same tasks as non-peer staff (Davidson, Bellamy, Guy, & Miller, 2012). Job descriptions vary: peer staff may provide clinical and/or paraprofessional services that are indistinguishable from those provided by non-peer staff, they may serve as clerical staff or van drivers, or they may have undefined roles that evolve based on the individual's aptitude or the perceived needs of the organization. Peer workers in traditional programs generally do not provide "peer support" as this term is commonly understood by users and practitioners of informal peer support. 
In fact, peer staff working in traditional programs rarely receive training about or exposure to the principles and practices of peer-developed peer support (Alberta, Ploski, & Carlson, 2012). Peer employees are usually expected to disclose their psychiatric histories and serve as role models for people they serve. Relationships between peer staff and service users are usually hierarchical, similar to staff-service user relationships generally within the mental health system, in contrast to the horizontal relationships that characterize peer-developed peer support (Alberta & Ploski, 2014; Davidson et al., 2012; Rogers et al., 2010). An early study of peer specialist services in the Bronx, New York, funded by the National Institute of Mental Health (NIMH) in 1990, found that several components of well-being were positively affected by the work of peer specialists (Felton, Stastny, Shern, Blanch, Donahue, Knight & Brown, 1995). Using a quasi-experimental design, the study demonstrated that adding three peer specialists to a team of ten intensive case managers (ICMs) resulted in stronger beneficial effects for service recipients, compared to two control groups (adding three paraprofessionals or no extra staff). The most significant benefits for the group served by the ICM teams with peer specialists were in quality of life, specifically greater satisfaction with living situation, finances, and personal safety, and fewer overall life problems (Felton et al., 1995). Based on initial findings of this study, the New York State Office of Mental Health (NYS OMH) established a Peer Specialist civil service title in 1993, making New York the first state to do so. As of 2014, NYS OMH employed about 100 individuals in that title and at least 500 people worked in similar jobs in publicly funded voluntary sector agencies in the state. In both the Bronx ICM study and the Peer Specialist civil service title, the emphasis was initially on bringing the values and principles of peer-developed peer support into paid peer staff roles, but the ability to keep the focus on these values was often compromised by clinicians and administrators who did not understand or support the principles (Stastny & Brown, 2013). The practice of using peer staff in traditional programs has been accompanied by state peer specialist certification programs in 38 states as of 2014 (Kaufman, Brooks, Bellinger, Steinley-Bumgarner, & Stevens-Manser, 2014). These certifications require completion of a state-approved training course, using either a curriculum designed by the state or one of a number of proprietary training programs. There are currently no national standards for peer specialist training, and the length, intensity, and content of the courses vary widely (Kaufman et al., 2014). The expansion of peer staff in traditional programs accelerated when the federal Centers for Medicare and Medicaid Services (CMS) issued a State Medicaid Directors' letter in 2007 clarifying the conditions under which "peer support services" could be reimbursed by Medicaid. As of 2015, 31 states and the District of Columbia offered Medicaid-reimbursable peer support services, and it is likely that, under provisions of the Affordable Care Act, many other states will follow (Ostrow, Steinwachs, Leaf, & Naeger, 2015).
The State Medicaid Directors' letter defined "peer support services" as "an evidence-based mental health model of care which consists of a qualified peer support provider who assists individuals with their recovery from mental illness and substance use disorders" (CMS, 2007, p. 1). While this policy clarification spurred an increase in the use of peer specialists, it also added to the definitional confusion, stating that any service provided by a "qualified peer support provider" was, by definition, "peer support" (Stastny & Brown, 2013, p. 459). --- Recent Review Articles on Peer Support in Mental Health Table 1, below, highlights key features of each review. These are primarily studies involving employment of peer staff in traditional programs, because, while informal and peer-developed peer support has been extensively described in the non-research literature (for example, Blanch, Filson, Penney, & Cave, 2012; Clay, 2005; Mead, Hilton, & Curtis, 2001; Chamberlin, 1990), its effectiveness has not been studied. --- SAMHSA's Assessing the Evidence Base Review/Chinman et al., 2014 This review looked at the effectiveness of three types of peer support services (peer staff added to traditional services, peer staff in existing clinical roles, and peer staff delivering structured curricula) and found 20 relevant studies between 1995 and 2012 (Chinman et al., 2014). An argument can be made that two of the three types of peer support defined by the reviewers (peers in existing clinical roles and peers delivering structured curricula) are not really "peer support services" in the commonly understood sense of the term. Inclusion criteria included randomized controlled studies, quasi-experimental studies, single-group time-series designs, and cross-sectional correlational studies of peer support services for adults diagnosed with serious mental illness and/or co-occurring mental health and substance use disorders. This review was sponsored by the federal Substance Abuse and Mental Health Services Administration (SAMHSA), which defined "peer support services" as "a direct service that is delivered by a person with a serious mental illness to a person with a serious mental disorder" (Chinman et al., 2014, pp. 1-2). This definition is at variance with the definition of the term that grew out of peer-developed peer support; it does not recognize the centrality of equitable, mutual relationships based on shared common experience that is the hallmark of peer-developed peer support. The authors found that peer support services met moderate levels of evidence, and that effectiveness varied across service types, with "peers in existing clinical roles" showing less effectiveness than the other two service types. The review noted that many of the studies had methodological problems. Because the studies under review used disparate outcome measures (e.g., hospitalization days, social support, quality of life), comparisons were difficult. As with most of the review articles discussed in this section, the authors decried the quality of many of the studies, pointing to a need for "studies that better differentiate the contributions of the peer role and are conducted with greater specificity, consistency, and rigor" (Chinman et al., 2014, p. 11). --- Cochrane Review/Pitt et al., 2013 A Cochrane review of 11 randomized controlled studies of what the authors referred to as "consumer services" (Pitt, Lowe, Hill, Prictor, Hetrick, Ryan & Berends, 2013, p. 4)
found that the outcomes of such services are neither better nor worse than professionally provided services, although there is some evidence that peer services reduce the use of crisis and emergency services. Among the studies examined, the definitions of "consumer services" varied, making comparisons among and between the studies' findings difficult. For example, some were studies in which peer workers provided services identical to those provided by professionals (usually case management), while others concerned services that were based on peer providers' experiential knowledge. The review looked only at studies that compared outcomes of peer services (e.g., quality of life, hospital days) to outcomes of services delivered by professionals. --- Walker and Bryant, 2013 Walker and Bryant (2013) conducted a metasynthesis of the findings of 27 published qualitative studies and mixed-methods studies that examined peer support services provided within traditional mental health programs; studies of peer support provided within peer-run organizations were excluded. Their review reported on the experiences of peer staff and their non-peer colleagues, as well as on the experiences of people using services. Peer staff faced numerous challenges, including low pay, insufficient hours, negative or rejecting attitudes from non-peer staff, and being treated as "patients" instead of colleagues. They also reported positive benefits for peer staff, such as increased self-esteem, larger social networks, and increased community participation. Non-peer staff reported increased empathy for and understanding of people with psychiatric disabilities due to interactions with peer colleagues; however, non-peer staff feared that the presence of peer staff would result in job losses for themselves. People who received services experienced better rapport with peer staff than with non-peer staff and reported increased hope and motivation, as well as increased social networks, as a result of working with peer staff. --- Davidson and Colleagues, 2012 Davidson and colleagues (2012) identified three categories of research on peer support that occurred sequentially over the past 25 years. First, there were feasibility studies of peer-provided services in the 1990s, which showed that peer staff could function adequately in ancillary roles and produce outcomes on a par with those of professional services. Second, a number of studies compared peer staff and professional staff providing conventional services in conventional roles. These studies generally reported that peer staff functioned at least as well as professionals, with comparable outcomes. Some studies found that peer staff had better outcomes than professionals on a few measures, including increased engagement among "hard-to-reach" clients, reduced hospitalization rates, and decreased substance abuse rates among people with dual diagnoses. Third, there are nascent investigations into the unique qualities/contributions of peer services and the outcomes these produce. The authors acknowledge that this research endeavor is in its infancy. They report on two of their own studies in this area. One compared "usual care" with "usual care" plus two different types of peer services, finding that the two peer services conditions resulted in increased participant satisfaction on quality of life measures. The other suggested that peer support may reduce re-hospitalization rates and the number of hospital days.
--- Rogers and Colleagues, 2010 Rogers and colleagues (2010) reviewed 53 studies that met a minimum threshold for research quality, as determined by a system developed by Rogers, Farkas, Anthony, and Kash (2008) for rating the rigor of disability research, and reported outcomes of peer services. They found no evidence that adding peer services to traditional services improved outcomes (neither did it worsen them); some evidence that peer support groups improved a number of outcomes for people who participated regularly (but not for occasional participants); and equivocal findings in other categories, such as one-to-one peer support and residential peer services. The authors noted a number of methodological problems that left them unable to draw firm conclusions related to the effectiveness of peer support and peer-delivered services. --- Davidson and Colleagues, 2006 Davidson and colleagues (2006) examined four randomized controlled studies of peer-delivered conventional services and supports that compared case management teams with and without peer staff members. Two of the studies reviewed found no significant difference in outcomes between the groups. In contrast, one study found that clients receiving services from the team with a peer worker reported increased satisfaction with services, while another found that clients receiving services from the team with a peer worker had fewer hospitalizations and longer community tenure. --- Discussion --- Study Design and Outcome Measures In the aggregate, the reviews and studies described above found minimal to moderate evidence that adding peer-delivered services of various types to traditional mental health services may be effective on a range of outcome measures. However, there are a number of methodological concerns that raise questions about these findings, including the underlying design of many of the studies, the type of peer support studied, and the relevance of the outcome measures selected. Most of the studies reviewed compared the outcomes of peer-delivered services with those of professionally delivered mental health services, and used traditional clinical outcome measures, such as symptom reduction, decreased hospitalization, and reduced substance use (Sledge et al., 2011). Both these factors raise issues about the appropriateness of these studies' designs. Peer support was never conceptualized as a substitute for, or interchangeable with, clinical services (Chamberlin, 1990; Campbell et al., 2006); neither are its goals the same as those of clinical services (Mead, Hilton, & Curtis, 2001; Harp & Zinman, 1994). Peer support staff generally do not have clinical training and are usually paid substantially less than credentialed mental health professionals. Since peer support was a) never envisioned as a substitute for clinical services, having in fact arisen out of negative experiences with clinical services (Van Tosh & del Vecchio, 2001; Kalinowski & Penney, 1998), and b) has goals, and thus outcomes, that differ from those of clinical services, it is not methodologically sound to compare the outcomes of peer support with those of clinical services. The review articles noted serious methodological problems that interfered with the authors' ability to draw firm conclusions about the strength of the evidence in the research literature for a wide variety of peer-delivered services. Many of the authors had unresolved questions about exactly what types of interventions and services were actually involved in the studies they reviewed.
For example, Rogers and colleagues (2010, p. 24) concluded that their review "was hampered by a lack of description of the peer delivered activities, services and supports being provided, a lack of information about the intensity of those services and supports, and little information about the models and contexts of the service delivery…. If the field is to move forward and be adequately reviewed as an evidence-based practice, future research activities should focus on improving the state of our understanding of peer delivered services." It should be noted, however, that one of the review articles discussed above (Walker & Bryant, 2013) looked at qualitative and/or non-clinical outcomes that may have bearing on community participation and social inclusion. This approach shows promise for the development of outcome measures that actually track with the goals of peer-developed peer support as originally envisioned by the pioneers in this field. It should also be noted that none of the review articles, nor the research they reviewed, reflected on cultural considerations in the delivery of peer support services or on the development of peer support services in communities of color. --- Definitional Concerns Rogers and colleagues' (2010) statement quoted above is a call for more rigorous studies of peer-delivered services. This call is useful as far as it goes, but, like most of the comments on methodological shortcomings expressed by the review authors, it fails to address the serious definitional issues associated with this body of research. Despite the fact that the studies reviewed used a wide range of confusing and often incompatible definitions, none of these authors addressed the question of whether what they were studying was, in fact, "peer support" at all. The authors do not discuss or take into consideration the history and philosophy of the consumer/survivor/ex-patient movement or the theories, principles, and practices of peer-developed peer support approaches. Many of the review authors, and the researchers whose work they examined, essentially defined "peer support" as any service or activity provided by a person with a psychiatric history. For example, Davidson and colleagues (2006) defined peer support as "an asymmetric, one-directional relationship" (Fuhr et al., 2014, p. 2), in stark contrast to the mutual, bi-directional relationships conceptualized by Intentional Peer Support (Mead, 2014). The people who have developed and practiced peer-developed peer support over the past 40 years understand it as a specific type of relationship-based approach with a philosophical basis in the potential for mutual growth and healing, and with clear principles and practices reflecting equality and respect. IPS, for example, defines peer support as "connecting with someone in a way that contributes to both people learning and growing… the intention is to purposefully communicate in ways that help both people step outside their current story" (Mead, 2014, p. 8). The development of this type of horizontal relationship is quite different from using peer staff within a traditional program to perform functions such as traditional case management services or driving people to appointments. Simply hiring people with psychiatric histories to do some of the usual tasks of the traditional mental health system is not the same as practicing peer support.
Because these studies did not explore the true bi-directional relationship of peer support (the peer-developed definition), the extent to which they truly identify the effectiveness of peer-delivered services is questionable. --- A New Peer-Led Study of Intentional Peer Support One approach to addressing the methodological and definitional issues discussed above is through peer-led research of a peer-developed approach using non-clinical outcome measures that track with the stated goals of peer support. An ongoing three-year quasi-experimental study funded by the National Institute on Disability, Independent Living, and Rehabilitation Research (NIDILRR) is meeting this challenge by examining the comparative effectiveness of Intentional Peer Support in improving community integration, community participation, and quality of life for adults with psychiatric disabilities. This study is led by a principal investigator (PI) with a psychiatric history studying a peer-developed approach (IPS) delivered within the context of peer-run programs. This contrasts with earlier studies, which primarily looked at peer-delivered services (not specifically peer support services) within traditional mental health programs. The study compares outcomes of participants receiving IPS in a peer-run program to those of participants in a peer-run program that does not practice IPS. Outcome data are collected through in-person interviews that assess self-esteem, self-stigma, social connectedness, community integration, community participation, and quality of life at baseline and six months after the initial interview. A new scale developed by the PI, the IPS Core Competencies Scale, assesses the extent to which study participants perceive that peer support staff are practicing the core competencies taught to IPS practitioners. Secondary data sources include staff self-assessments and supervisory assessments, as well as focus groups of staff and service users. Quarterly, the IPS-trained staff at the intervention site complete a self-assessment of their skills and practices using the IPS Core Competencies Scale; supervisors rate staff using this tool biannually. Focus groups with peer support participants and with staff at both sites gather qualitative information on receiving and providing peer support prior to IPS training, 9 months after training, and 12 months later. Random regression models and content analyses will be used to examine whether any significant differences in outcome measures occur between the groups. These findings will be supplemented by qualitative findings from the focus groups and staff self-assessments. Study results will provide important information on how an innovative approach to peer support, designed by people with psychiatric histories and delivered within independent peer-run programs, may enhance community integration, community participation, and quality of life for adults with psychiatric disabilities. --- Implications for Practitioners and Future Research As noted above, peer-developed peer support is a non-hierarchical interpersonal process promoting mutual healing in the context of community, characterized by equitable relationships among people with shared experiences and a commitment to growing beyond the limits of the mental patient role. However, in clinical and psychiatric rehabilitation service settings, the term "peer support" has been used to describe activities and jobs that are not necessarily congruent with the peer-developed definition.
Peer specialist and similar titles describe staff with psychiatric histories working in paraprofessional roles in traditional mental health programs. These staff may provide clinical and/or paraprofessional services; work as clerical staff, janitors, or van drivers; or have relatively undefined roles that vary based on the perceived needs of the organization. Because peer-developed peer support approaches are not generally available in clinical settings, perhaps it is not surprising that the literature reviewed above conflates a variety of peer-delivered services with "peer support." It is important that policy makers and administrators develop clear job descriptions for a variety of peer-delivered services, and that they understand that these services are not the same thing as "peer support." This will provide administrators, clinicians, and researchers with the opportunity to educate themselves about the distinctions between peer-developed peer support approaches and the varied ways that peer staff are employed in traditional programs, so that they can accurately describe what they are providing and studying. Other peer support-related topics that would be fruitful directions for future research include studying the implementation of Intentional Peer Support with peer staff working in traditional programs, as well as comparing Intentional Peer Support training with some of the state and organizational training curricula for Certified Peer Specialists currently in use. The ongoing study of Intentional Peer Support described earlier is looking at the effects of a peer-developed approach to peer support implemented in a peer-run program, using non-clinical outcome measures that correspond to the principles and practices of peer-developed peer support. It is hoped that the results will help the field clarify its understanding of peer support and promote the expansion of services that are congruent with the original, peer-developed meaning of peer support.
The involvement of patients in health research has resulted in the development of more effective interventions and policies in healthcare that respond to the needs of healthcare users. This article examines how working with youth and their families as co-researchers in health research communities of practice (CoPs), rather than just as participants, can benefit all involved. Health research CoPs promote an environment in which co-researchers have the opportunity to do more than just participate in the data collection phase of the research process. As co-researchers, youth and their families are able to participate, learn, contribute to knowledge, and build relationships designed to innovate and improve healthcare systems. However, in order to ensure engagement of youth and their families in health research that they find meaningful and rewarding, three factors have been identified as important parts of the process: promoting identity, building capacity, and encouraging leadership skills.
Introduction There is growing awareness that patient engagement in health research is not only ethically important, but leads to evidence for developing the most effective interventions, policy and practice recommendations, and planning for ongoing research [1][2][3]. Models for patient engagement have been evolving over the past four decades, and research that is grounded within evolved forms of stakeholder participation is typically understood as research as practice. Recently, attention has been shifting towards new forms of interpretive communities known as communities of practice (CoPs) and their potential for developing greater knowledge around participation [4][5][6][7]. CoPs do not privilege research-based evidence over experience-rooted knowledge; therefore, they have the potential to become powerful "venues for bridging traditional rifts in the health sector between research and practice, and among disciplines" ([8], p. 3). CoPs have been found to enhance the performance of interventions through breaking down professional, geographical, and organizational barriers, enhancing knowledge sharing, and facilitating the implementation of new processes [9]. There are examples of CoPs that have been successfully assembled in healthcare settings, especially within oncology [8]. Furthermore, training on how to develop and maintain CoPs focused on improving health within different contexts is an important emerging area in the literature around patient support and participation [7]. However, it is not yet well documented how CoPs could influence the development of research for health systems and interventions for youth (i.e., children and adolescents) and their families. In this article we examine patient engagement beyond the traditional hierarchical structures for participation towards developing a greater and more functional understanding of how youth and families are involved through health research CoPs. We do so by exploring the participation of youth and families in health research CoPs created through IN•GAUGE®, an ongoing research program led and coordinated by Dr. Roberta Woodgate. Dr. Woodgate's research engages youth, families and caregivers, service providers, researchers, and policy makers towards building insights into the lived experiences of youth with physical and mental illness. --- Background Patient engagement has roots in several international agreements including the WHO Alma Ata Declaration (1978), in which declaration 10 states, "The people have a right and duty to participate individually and collectively in the planning and implementation of their own health care" [10]. Since the Alma Ata, this ethos has been applied with different patient populations, including children and adolescents. The most formal and largely recognized articulation of children's rights to participate is Article 12.1 of the United Nations Convention on the Rights of the Child (UNCRC), which asserts that "States Parties shall assure to the child who is capable of forming his or her own views the right to express those views freely in all matters affecting the child, the views of the child being given due weight in accordance with the age and maturity of the child" [11]. A few models and typologies for participation run parallel with the UNCRC standards and have been applied to youth engagement in health research. Many such models have roots in Arnstein's "ladder of citizen participation" [12], which Hart later modified and contextualized as being relevant to young people's participation [13].
Almost a decade later, Shier's "pathways to participation" typology for children's participation in decision-making significantly revised the format of the model to five levels operating as a continuum [14]. These frameworks for participation have significantly influenced how children are regarded within research communities. Stewart, however, stresses that it is difficult to find a workable definition of participation, and the popularity of incoherent definitions in health research "belies fundamental uncertainties about what [participation] entails and its associated benefits" ([15], p. 124). Towards overcoming uncertainties around hierarchical frameworks for decision-making, Turner argues that the conceptual frames provided in the growing literature around CoPs can provide a more comprehensive understanding of health research systems aiming to have an influence on practice [16]. Lave and Wenger contributed greatly to the concept of CoPs and focused on informal and situated interactions towards achieving a better understanding of learning that is grounded in practice [17]. Later, Wenger focused on the trajectories of participation, social identities, and the effects of participation within different communities, and developed a set of indicators for CoPs [4]. Wenger defines the three core components of CoPs as the domain, which refers to a "concern, set of problems, or passion about a topic"; the practice, representing the knowledge that the group shares and generates; and the community, which is the set of interpersonal relationships that are the product of engaging in learning through practice [4]. Wenger, McDermott, and Snyder developed the definition of communities further as "Groups of people who share a concern, a set of problems, or a passion about a topic, and who deepen their knowledge and expertise in this area by interacting on an ongoing basis" ([18], p. 4). When the three elements function well, CoPs become structures that can take on the responsibility of developing and sharing knowledge [18]. le May expanded the CoPs framework to account for the outcomes and benefits of CoPs in health and social care, including knowledge sharing, learning, building social relationships, innovation, and improving organizations [19]. Research communities engaging in research as practice have been studied using the CoPs framework, which has been applied to a variety of contexts (e.g., incarceration, community development, health, and education) with adults and mixed-age populations. Yet, the process of engaging youth and their families in health research CoPs is yet to be studied in a systematic way. Furthermore, major funding agencies (e.g., Canadian Institutes of Health Research) equate patient engagement with the adult model of participation and do not give special consideration to how research should be conducted with youth and their families. In the following section we identify how insights about youth and their families' views are taken into account, as well as their specific roles in health research CoPs that are created through IN•GAUGE®, a research program focused on building knowledge through conducting research with youth and their families, rather than on them.
--- Approach: Research with youth and their families Turner asserts that framing health research within CoPs should first involve translating research findings into practical implications for organizations and identifying ways of developing and "communicating evidence from social science research that demonstrates its relevance to 'real-world' decision-making such that it has maximum impact on healthcare policy and practice" ([16], p. 2). Towards achieving the important next step of understanding the roles and outcomes of youth involvement in health research CoPs, it is essential to hear from the youth themselves. IN•GAUGE® creates health research CoPs through the implementation of participatory research agendas with co-researchers (we use this term instead of "participants" to acknowledge the contributions made, as well as the power that has been divested, in keeping with participatory action research principles). These health research CoPs emerged through both intentional planning and the organic development of a web of relationships that results from sustained engagement in an area of research, depending on the particular study. Through IN•GAUGE®, qualitative and arts-based methods (such as photovoice, computerized drawings, and body mapping) are applied to help amplify the voices of youth and their families, as well as explore and share research findings from the youth- and family-centred health research program through accessible strategies (such as the development of films, websites, and choreography). These strategies all worked to highlight the voices of young people and families with lived experience and to ensure that the best available evidence flowing from the research is in the hands of those who influence children's health: parents, families, healthcare professionals, decision makers, and children and youth. In considering the CoPs framework in relation to IN•GAUGE®, the health and well-being of youth with physical and mental illness and their families is the domain. The practice is the development of strategies for improving child and family health and well-being, and the community comprises the youth and families, service providers, researchers, and policy makers who are brought together into action through the common concern (i.e., the research question). Learning is promoted in IN•GAUGE® health research CoPs through participatory action research protocols involving knowledge brokering and various feedback cycles with youth and families, researchers, service providers, and special knowledge holders [6,20]. For learning to happen within health research CoPs, individuals need to be willing to contribute to the evolution of collective learning through sharing information, developing and implementing strategies, and conducting evaluations [21,22]. --- Promoting and creating the space for identity, capacity building, leadership, knowledge building, and relationship building --- Identity Youth and families find a sense of their roles and identities within health research CoPs that have been created through IN•GAUGE®. Co-researchers have revealed that they have learned a lot about their own journey with illness through being engaged and finding a space for reflection in the research process. Many co-researchers have reflected on how they felt at ease in the research process and viewed their participation as an opportunity to give back and help others who have similar challenges.
Co-researchers often make strong statements about their identity and the ways in which their health condition is intertwined with their identity, and how that identity relates to the particular project. The building of shared identities relating to health research CoPs (i.e., a feeling of belonging and being welcomed in the community) is also important, and creates the ability to transcend ways of communicating (i.e., disciplinary, cultural, generational, etc.), acknowledge others' perspectives, and challenge assumptions [23]. Much of this communication is facilitated through the use of highly flexible interview schedules, as well as the creation of a safe space established through the implementation of Youth Advisory Councils (YACs) and Family Advisory Councils (FACs). YACs and FACs contribute knowledge and direction to developing projects through project scoping, giving input on suitable research methods, providing feedback throughout the research, and planning for and participating in knowledge translation (KT) and dissemination (Fig. 1). Within IN•GAUGE®, for those studies that focused directly on youth experiences of health and illness, YACs facilitated the participation of young people in research outside of the direct influence of their families. FACs brought to light the lived experience of health and illness in families and also served to complement the work of the YACs for those projects more focused on youth experiences. Multi-directional communication and critical self-reflection within IN•GAUGE® health research CoPs contributes to connectivity and learning across boundaries and promotes the development of a shared identity and sense of belonging within health research CoPs, one of Wenger's (1998) indicators of CoPs. Co-researchers from IN•GAUGE® involved in YACs and FACs frequently state that they enjoyed being engaged in the research process, and that reflecting through it enables them to find different interpretations of their own or a family member's illness and disability. --- Capacity building Capacity building for youth and their families includes skill development and the enhancement of self-esteem and the ability to build social networks. Skill development involves refining communication and advocacy skills within health research systems (i.e., through YACs and FACs), which is important for developing meaningful engagement within health research CoPs more broadly, as well as in health systems more generally [24]. Through promoting and maintaining relational qualities within the research agendas, the researchers themselves provide many opportunities for enhancing the self-esteem of co-researchers and their ability to build and extend their social networks [3]. Co-researchers often express that their opinions are taken seriously and that they feel empowered to communicate with others through the research. Enhancing this ability to influence social networks is especially important for youth and families who may be disadvantaged through their experiences with illness. In developing IN•GAUGE® projects it is important to be cognizant that belonging to additionally marginalized groups (i.e., Indigenous, female, etc.) can heighten jeopardy for health and social outcomes (Demas, [25]), requiring special attention to how power relationships are addressed and how social relationships are enhanced within health research CoPs.
It is especially important when working with such groups that partnerships are fostered, collaboration is promoted, and shared concerns (i.e., the domain) are explored early on [26]. It is also critical to consider that social relationships may mean something different for each group of co-researchers and can be influenced by factors such as culture and regional norms [27,28]. For example, in an IN•GAUGE® study exploring African newcomer experiences with the Canadian health system, co-researchers developed an ability to extend their influence and knowledge that they then used to advocate for improved access to health services in the context of political change (i.e., government cutbacks to newcomer health services). --- Leadership Co-researchers' learning about identity and the development of new skills eventually lead to greater participation and leadership in projects. A parent of a child with complex care needs talked about their long-term involvement in an IN•GAUGE® study and how being asked to give their thoughts on the research process and dissemination of research findings (i.e., through participating in a video documentary) enabled them to create meaning and engage in deep reflection and learning through the process. Through advisory councils, co-researchers take up a number of leadership roles and have demonstrated commitment and interest in the research reaching its full potential. Co-researchers also report finding spaces within health CoPs where they can directly impact their day-to-day care through finding new pathways for informing service providers about their particular needs. The leadership skills of co-researchers are brought into the initiation and development stages of the research through providing spaces for co-researchers to shape research priorities, project design, and methods. --- Knowledge building Patient engagement involves acknowledging that youth and families have certain knowledge and skills, but that they also will gain knowledge and skills through being involved in the health research CoPs [2]. Likewise, the experience of knowledge sharing and building holds true for others who may be involved in research (e.g., clinicians), as well as the researchers themselves. Furthermore, direct interactions through health research CoPs are especially important for picking up on social cues and developing critical understanding of the lived experiences of youth and families [29]. Through working with co-researchers it was possible to build knowledge around the topics being researched, as well as knowledge about how to develop approaches within ongoing and future health research CoPs. Co-researchers are keen to be part of the reflexive research design and are asked to give feedback regarding different stages of the research. Innovations in the research process occur through integrating the feedback and striving to find new and better ways to bring youth and families into health research CoPs as fully engaged co-researchers. Through YACs and FACs, co-researchers provide key knowledge for the analysis of data and KT. A few examples include the direct input into content and design for a KT website, involvement in the editing of a video documentary, and feedback on the artistic interpretation of research findings. --- Relationship building Relationship building is an essential component of ethical research engaging youth and families [22], and is central to IN•GAUGE® health research CoPs [3].
Relationship building occurs among youth and families, as well as among youth, families, and different members of the IN•GAUGE® health research CoPs (Fig. 1). Social relationships that are fostered through previous interactions (i.e., through systems of care) between the different co-researchers (i.e., youth and their families, university researchers, service providers, and special knowledge holders) act as foundations for many of the IN•GAUGE® projects. Co-researchers are invited into collaborative spaces where their perspectives are heard and given full consideration, and they see YACs and FACs as safe spaces to share experiences with each other and create a sense of community. Building relationships in this way leads to the development of respect, awareness, and understanding within the community and enables youth and their families to contribute in more meaningful ways to health research CoPs. It is also important to acknowledge that being involved in health research CoPs can be burdensome for co-researchers and that the risks associated with tokenistic participation could be managed in part by creating measures to equally value the commitments of co-researchers, such as through adequate remuneration (depending on context, specific research project and contributions, time commitments, etc.). Co-researchers that are part of IN•GAUGE® health research CoPs are given honorariums, as well as other types of compensation (e.g., meals and transportation). Such protocols are put into place to demonstrate respect, value, and commitment towards co-researchers. In some situations it is appropriate for co-researchers to be employed as paid staff members on a project as a way of formalizing their roles and compensating them for their knowledge, experience, and contributions. Co-researchers involved in IN•GAUGE® YACs and FACs are also often given the option of being a co-investigator or consultant to projects. Such categories come with different benefits and types of payment. --- Limitations This interpretation mainly focused on the engagement of youth and families within health research CoPs through exploring their interactions with university researchers. This paper would have been strengthened by working with co-researchers in its development; however, time constraints for co-researchers and prior commitments to other knowledge translation activities by members of YACs made such an approach challenging, reflecting broader challenges of working in a participatory manner. Further examination of the internal structures and connections between the other actors (i.e., with service providers and special knowledge holders) within emerging health research CoPs would be advantageous for developing greater understanding and best practices around how health research CoPs function as entire systems [30]. Further investigations into the structures of YACs and FACs would also be beneficial for understanding their impacts on health policy and practice [31]. --- Conclusions Much of the health literature on patient engagement relies on frameworks that equate patients with adults, not recognizing that these approaches may not resonate with youth. Through exploring the outcomes of engaging youth and their families through IN•GAUGE®, a sixteen-year-long research program led by Dr.
Roberta Woodgate focused on working with youth and their families across the spectrum of patient engagement to improve health research and practice, we have gleaned several important insights about the development of health research CoPs for youth and their families. Promoting and creating the space for identity, capacity building, and leadership is integral to the meaningful engagement of youth and their families in health research. Within such conscious spaces, co-researchers are able to participate, learn, build social capital, and contribute to knowledge and relationship building designed to innovate and improve healthcare systems. --- Abbreviations CoPs: Communities of Practice; FAC: Family Advisory Council; UNCRC: United Nations Convention on the Rights of the Child; WHO: World Health Organization; YAC: Youth Advisory Council --- Authors' contributions All authors contributed to designing the structure and content of the manuscript as well as drafting of the manuscript. All authors have approved the submitted version of the article. --- Competing interests The IN•GAUGE® program was trademarked in 2015 by RLW to prevent the use of the same name in other research programs. At this point in time there is no plan to sell the trademarked program. All other authors declare that they have no competing interests.
We analyze bibliometric trends of topics relevant to the epidemiologic research of social determinants of health. A search of the PubMed database, covering the period 1985-2007, was performed for the topics: socioeconomic factors, sex, race/ethnicity, discrimination/prejudice, social capital/support, lifecourse, income inequality, stress, behavioral research, contextual effects, residential segregation, multilevel modeling, regression-based indices to measure inequalities, and structural equation modeling/causal diagrams/path analysis. The absolute, but not the relative, frequency of publications increased for all themes. Total publications in PubMed increased 2.3 times, while the subsets of epidemiology/public health and social epidemiologic themes/methods increased by factors of 5.3 and 5.2, respectively. Only multilevel and contextual analyses had a growth over and above that observed for epidemiology/public health. We conclude that there is clearly room for wider use of established techniques, and for new methods to emerge when they satisfy theoretical needs.
Introduction A nearly exponential growth in the scientific output on social epidemiology has been documented 1. However, since de Solla Price's seminal work on the exponential growth of science 2, it has been shown that scientific production doubles every ten to fifteen years 3. Hence, it remains unexplored whether the absolute growth described for general science would also be observed in relative terms for specific areas of knowledge. By examining scientific growth in specific areas, it is possible to highlight emerging themes, so as to indicate possible advances in the near future. The objective of the present study was to assess trends in scientific production of methods and themes in the investigation of social determinants of health, between 1985 and 2007. --- Methods This study consists of a bibliometric analysis of yearly trends in groups of publications indexed in PubMed (http://www.ncbi.nlm.nih.gov/pubmed) from 1985 to 2007. The main database in PubMed is MEDLINE, but the proportion of non-MEDLINE citations is unknown and a strict search of MEDLINE is not possible. In addition to MEDLINE, PubMed covers in-process citations, some papers before 1950 (OLDMEDLINE), and out-of-scope citations from MEDLINE journals. Each journal article added to PubMed's collection is indexed by specialized staff under at least one MeSH (Medical Subject Headings) descriptor of the U.S. National Library of Medicine Thesaurus, which was established in 1960 and has been updated since. The MeSH Thesaurus contains a tree-like cross-referenced structure of controlled vocabulary, used to translate terminology employed in articles in different languages into a "system language". Each "entry term" encompasses many synonyms, near-synonyms, and related concepts, regardless of the language, wording, and spelling used by authors. When a MeSH term is used, PubMed automatically searches on narrower descriptors further out on that branch of the tree. For instance, the use of the MeSH term "Socioeconomic Factors" automatically searches for "Educational Status", "Income", "Occupation", "Inequality" and "Social Class". --- Principles applied to build search strategies First, whenever possible, a MeSH term was employed rather than non-MeSH "keywords" in all search strategies. If an appropriate MeSH term could not be identified, a search strategy was built with text words based on the authors' experience. Second, we favored the earliest MeSH terms (those added before 1985) and the most general category of the MeSH tree hierarchy. Third, MeSH term definitions were scrutinized in order to choose only those terms with the desired meaning. Finally, based on a preliminary analysis, we excluded redundant terms in the search strategy, so as to keep it as parsimonious as possible. Our searches were conducted on May 17, 2008. --- Search strategies and data analysis We tallied the annual number of publications identified using the following search strategies (details of search strategy can be obtained from the authors): all publications in the PubMed database (search strategy number #1); publications in epidemiology/public health (#2); and selected themes in social epidemiology (#14).
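The exact search strategies (#1 through #17) are not reproduced in the article, so any re-implementation has to substitute its own queries. As a minimal sketch of the yearly tallying procedure described above, the following Python script counts PubMed hits per publication year through NCBI's public E-utilities esearch endpoint; the two query strings are illustrative stand-ins, not the authors' actual strategies.

```python
# Minimal sketch of the yearly bibliometric tally; the queries are hypothetical
# stand-ins, not the authors' unpublished strategies #1-#17.
import json
import time
import urllib.parse
import urllib.request

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def pubmed_count(term: str, year: int) -> int:
    """Number of PubMed records matching `term` with a publication date in `year`."""
    params = urllib.parse.urlencode({
        "db": "pubmed",
        "term": term,
        "datetype": "pdat",   # filter on publication date
        "mindate": str(year),
        "maxdate": str(year),
        "rettype": "count",   # we only need the hit count, not the record IDs
        "retmode": "json",
    })
    with urllib.request.urlopen(f"{EUTILS}?{params}") as resp:
        return int(json.load(resp)["esearchresult"]["count"])

# Illustrative stand-ins for a denominator query (#2-like) and one theme (#3-like).
DENOMINATOR = '"epidemiology"[MeSH Terms] OR "public health"[MeSH Terms]'
THEME = '"socioeconomic factors"[MeSH Terms]'

for year in range(1985, 2008):
    denom = pubmed_count(DENOMINATOR, year)
    theme = pubmed_count(THEME, year)
    # Relative frequency: theme publications as a share of epidemiology/public health.
    print(f"{year}: {theme}/{denom} = {100 * theme / max(denom, 1):.2f}%")
    time.sleep(0.4)  # stay under NCBI's courtesy rate limit (about 3 requests/second)
```

Because PubMed automatically expands a MeSH term to its narrower descriptors, a query like "socioeconomic factors"[MeSH Terms] already sweeps in records indexed under Income, Occupation, Social Class, and the other branch terms, which is what makes such compact strategies feasible.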
We further refined the bibliographic analysis of the social epidemiology set (#14) by selecting publications focusing on 11 themes: socioeconomic factors (#3), sex (#4), race/ethnicity (#5), prejudice/discrimination (#6), social capital/support (#7), lifecourse (#8), income inequality (#9), stress (#10), behavioral research (#11), contextual factors (#12), and residential segregation (#13). The number of publications identified in the epidemiology/public health search (#2) served as the denominator to calculate the proportion represented by each of the 11 social epidemiology themes (i.e., strategies from #3 to #13 were divided by #2). The total number of publications in PubMed (#1) was used as the denominator to calculate the proportion of publications in epidemiology/public health (#2 divided by #1) and in social epidemiology (#14 divided by #1). We also examined trends for three types of data analysis methods: multilevel modeling (#15), regression-based indices to measure inequalities (#16), and structural equation modeling/causal diagrams/path analysis (#17). The absolute number of publications using these data analysis techniques among the eleven social epidemiology themes was determined for each year between 1985 and 2007 and plotted. Their proportion in relation to the total number of articles in the 11 themes (i.e., the number of publications from strategies #15 to #17 divided by the number of publications from strategy #14) was also determined and plotted for the period 1985-2007. --- Results Between 1985 and 2007, there was a 2.3-fold increase in the annual number of citations added to PubMed: from 329,263 in 1985 to 759,698 in 2007. In the same period, articles indexed under epidemiology/public health headings (search strategy #2) increased 5.3 times (from 48,719 in 1985 to 256,892 in 2007) and, among the selected themes in social epidemiology (search strategy #14), there was a 5.2-fold increase (from 9,349 to 49,052). In 2007, more than 30% of the scientific output indexed in this bibliographic database had at least one of the descriptors used to identify citations in the area of epidemiology/public health. In contrast, the relative contribution of social epidemiology increased moderately, compared to epidemiology/public health, reaching 6.5% in 2007. Trends in selected themes in social epidemiology are depicted in Figures 1 and 2. Absolute frequencies tended to increase over the studied period, and more markedly in recent years. Themes like socioeconomic factors, sex, race/ethnicity, behavioral research, stress, contextual factors, lifecourse, and prejudice/discrimination are good examples in this regard. Due to small numbers, however, we could not be confident about the trend patterns for income inequality (#9) and residential segregation (#13). A different picture emerges when relative frequencies are considered. When the count of all epidemiology/public health publications serves as the denominator, the exponential growth pattern seen with the absolute count is no longer observed. Increases or stationary trends are seen for behavioral research, race/ethnicity, stress, contextual factors, lifecourse, prejudice/discrimination, and social capital/support. Among these, contextual factors showed the steepest relative increase in the period, rising from almost 4% in 1985 to 7% of all epidemiology/public health publications in 2007. In contrast, socioeconomic factors and gender showed declining relative trends.
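As a quick sanity check, the fold-increases and 2007 shares reported at the start of these Results follow directly from the quoted counts; the short computation below reproduces them (all numbers are taken from the text).

```python
# Checking the fold-increases and 2007 shares against the counts quoted in the text.
counts = {
    "all PubMed (#1)":                  (329_263, 759_698),
    "epidemiology/public health (#2)":  (48_719, 256_892),
    "social epidemiology themes (#14)": (9_349, 49_052),
}

for label, (n_1985, n_2007) in counts.items():
    print(f"{label}: {n_2007 / n_1985:.1f}-fold increase")  # prints 2.3, 5.3, 5.2

total_2007 = counts["all PubMed (#1)"][1]
print(f"epi/public health share of PubMed, 2007: {256_892 / total_2007:.1%}")   # ~33.8%, i.e. >30%
print(f"social epidemiology share of PubMed, 2007: {49_052 / total_2007:.1%}")  # ~6.5%
```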
Residential segregation and income inequality each exhibited a fluctuating pattern, which likely reflects the small number of publications. Publications using one of the three analysis methods (multilevel modeling, structural equation modeling/causal diagrams/path analysis, and regression-based indices) all showed steep increases in the absolute number of articles, as well as in their relative frequencies (Figure 2). Among these, multilevel modeling emerged as the most employed method.

--- Discussion

Our results showed that absolute and relative trends can lead to different conclusions, but regardless of how we plotted them, epidemiology and social epidemiology grew over and above the growth of general health science. One could argue that the larger growth of social epidemiologic themes in relation to the total citations in PubMed could be determined in part by an increased indexation of epidemiologic/public health journals in the database over the study period. However, this is not likely to be the case; epidemiologic/public health journals account for only 1% (n = 370) of the 37,665 journals indexed in PubMed and 2.2% of all publications (data from the PubMed Journal Database), such that a very large number of journals would have to be added to the database in order to artificially influence the observed trends. One limitation of the present study is that its findings are based only on PubMed; however, this is recognized as the largest and best-known database in the field of health sciences. Another limitation is that not all retrieved publications necessarily fit the strict definition of social epidemiology, that is, explicitly incorporating social theory in the article's analytical framework 4. This concern is tempered by our use of MeSH terms, which increased the sensitivity and the specificity of the search strategies. Because the terms used were constant over time, the results are likely to reflect real trends in social epidemiology publications indexed in PubMed. Overall, from the results presented, it can be concluded that the branch of social determinants of health has been growing fast, and that this growth was seen in nearly all of the 11 sub-areas. It is important to emphasize that the magnitude of absolute increases in some sub-areas might be misinterpreted as outpacing others when, in fact, relative figures reveal increases that were actually modest. Although the number of publications in social epidemiology increased more than the average growth of the total publications in PubMed, epidemiology/public health did too, and the only themes in social epidemiology growing over and above trends in epidemiology/public health were those which lent themselves to multilevel or contextual analysis. This is a good example of methodological advances meeting theoretical needs. However, there is clearly room for wider use of established techniques, and for new methods to emerge and satisfy theoretical needs.

--- Abstract

Bibliometric trends of relevant topics in epidemiologic research on the social determinants of health are analyzed.
Searches were carried out in PubMed for the period 1985-2007 on: socioeconomic factors, sex, race/ethnicity, discrimination/prejudice, social capital/support, lifecourse, income inequality, stress, behavioral research, contextual effects, residential segregation, multilevel modeling, regression-based inequality indices, and structural equations/path analysis/causal diagrams. The absolute, but not the relative, frequency of publications grew for all themes. The total number of publications increased 2.3-fold in the period, while the epidemiology/public health set and the social epidemiology themes increased 5.3- and 5.2-fold, respectively. Contextual effects and multilevel modeling showed relative growth above that observed for epidemiology/public health. It is concluded that there is room for wider use of existing analysis techniques and for new methods to emerge, meeting the specific theoretical needs of the area. Food Consumption; Diet Surveys; Epidemiological Methods

--- Contributors

R. K. Celeste collaborated in the design, write-up, data collection, analysis and interpretation, and final revision. J. L. Bastos collaborated in the design, write-up, analysis and interpretation of data, and revision of the final version. E. Faerstein collaborated in the design and data interpretation, as well as in the write-up and revision of the final version.
Research suggests that for many people happiness is being able to make the routines of everyday life work, such that positive feelings dominate over negative feelings resulting from daily hassles. In line with this, a survey of work commuters in the three largest urban areas of Sweden shows that satisfaction with the work commute contributes to overall happiness. It is also found that feelings during commutes are predominantly positive or neutral. Possible explanatory factors include the desirable physical exercise provided by walking and biking, as well as the fact that short commutes provide a buffer between the work and private spheres. For longer work commutes, social and entertainment activities either increase positive affect or counteract stress and boredom. Satisfaction with being employed in a recession may also spill over into positive experiences of work commutes.
Introduction

Everyday living in modern Western societies is dominated by mundane activities performed routinely, many motivated by obligations, although needs and desires are also common motives (e.g., Vilhelmson 1999; Michelson 2011; White and Dolan 2009). Since these activities constitute such a large part of everyday life, they are likely to have an impact on people's overall life satisfaction and emotional well-being. This was recently demonstrated by Jakobsson Bergstad et al. (in press) for a subset of routine out-of-home activities. As a further indication of such an impact, research has shown that satisfaction with life domains including work or school, family life, and leisure (associated with the performance of particular mundane routine activities) is positively correlated with overall life satisfaction (Pinquart and Silbereisen 2010). Work commutes are, in this context, a neglected aspect of everyday life. The fact remains that billions of people commute to and from work every workday. An informal literature search of international transportation studies reveals that average commute times vary between 40 and 80 min, with public-transit commutes taking longer than car commutes. An average of 4-10% of waking time on workdays is spent on commutes. Several previous, predominantly US studies have found that work commutes induce stress (see Novaco and Gonzales 2009, for a review). Thus, it has been found that long work commutes in congested automobile traffic cause residual stress in the workplace (Novaco et al. 1990). Stress due to work commutes by public transit increases with the complexity of the commute (Wener et al. 2003) and with crowding in vehicles (Singer et al. 1978). In a similar vein, it has been argued that the negative effects of the length of work commutes substantially reduce the benefits of living in attractive places distant from the workplace (Stutzer and Frey 2008). In a study using the day reconstruction method to measure emotional well-being, Kahneman et al. (2004) found that the work commute was among the episodes most frequently associated with negative feelings during the day. The aim of the present study is to add to these research findings by investigating how overall life satisfaction and emotional well-being are affected by the work commute. Our approach is different in that satisfaction with the work commute is measured such that its correlation with overall life satisfaction and emotional well-being can be directly assessed. The analyzed data were collected in Sweden where, as in other European countries, the travel mode split is more even than in the US, where driving is predominant. The results therefore also provide a contrast to the previous US research demonstrating stress effects. Happiness (also commonly referred to as subjective well-being) has attracted a plethora of cross-disciplinary research in recent years (e.g., see reviews by Dolan et al. 2008; Lyubomirsky et al. 2005). In line with this research, we refer to happiness or subjective well-being as a higher-order construct consisting of a cognitive component and two affective components (Busseri and Sadava 2011). The cognitive component consists of a judgment of life satisfaction (evaluations of life circumstances) that is commonly measured by reliable self-report rating scales, for instance the 5-item satisfaction with life scale (SWLS) (Diener et al. 1985; Pavot and Diener 1993; Slocum-Gori et al. 2009), which will be used in the present study.
The affective components of happiness include the positive and negative moods and emotional episodes that people experience. Several self-report methods have been devised to measure these affective components. One distinction is whether the methods are on-line, assessing immediate affect (Stone et al. 1999), or retrospective and memory-based (Schwarz et al. 2009). The positive and negative affect scale (PANAS; Watson et al. 1988) has frequently been used either on-line to measure current mood or retrospectively to assess the frequency and intensity of affect for a specified timeframe. On this measure, happiness increases with the frequency and intensity of positive affect (PA), including emotions such as joy and delight, and decreases with the frequency and intensity of negative affect (NA), including emotions such as anger and fear. A measure of emotional well-being (also referred to as the affect balance) is constructed by computing the difference between retrospective assessments of the frequency and/or intensity of positive and negative affect. Other research has shown that affect is related to two dimensions: a pleasantness-unpleasantness dimension labelled valence and an active-passive dimension labelled activation (e.g., see the affect grid, Russell 2003). Diener and Lucas (2000) accordingly proposed that measures of the affective components of happiness should be based on a dimensional description varying in valence and activation. The Swedish core affect scale (SCAS; Västfjäll et al. 2002) that will be used in the present study is such a measure, based on the affect grid. Both the cognitive and affective components of happiness may be influenced by the work commute (Ettema et al. 2010). Even though the work commute generally has an intended positive outcome (and would therefore have a positive effect on the cognitive component of happiness) it may still be experienced as stressful. Thus, how commuters react affectively should also be important: whether they are predominantly stressed, relaxed, excited or bored. We have therefore developed the satisfaction with travel scale (STS) to measure a cognitive component and two affective components of the experience of any type of travel (Ettema et al. 2011; Jakobsson Bergstad et al. 2011). The STS that will be used in the present study thus has three components: a cognitive evaluation of the quality of travel, an affective evaluation of feelings during travel ranging from stressed to relaxed, and an affective evaluation of feelings during travel ranging from bored to excited.

--- Method

The participants were 713 work commuters (41.7% male; age ranging from 20 to 65 with a mean of 41.2 years) living in the three largest urban areas of Sweden (Stockholm, population 850,000; Göteborg, population 510,000; Malmö, population 395,000) (for detailed sample characteristics, see Table S1 in the supporting information available online). The participants answered a mail questionnaire that had three consecutive modules consisting of questions about the work commute, overall happiness, and sociodemographics. To minimize memory distortions (Schwarz et al. 2009), the most recent normal commute to and from work was targeted in the questionnaire. In the first module the participants first reported the date, departure and arrival times, intermediate stops, and travel modes.
On the basis of the self-reports of mode use, work commutes were classified as made by car if a car was used for at least one leg of the commute (to work n = 269; from work n = 259), as made by public transit (PT) if PT was used for at least one leg and no car was used for any other leg (to work n = 251; from work n = 254), and as made by slow modes if the commuters walked or biked all legs (to work n = 165; from work n = 164). In the same module the STS was then administered to assess satisfaction with the commute to and from work, respectively. The STS consists of nine seven-point adjective scales, three for each of the three components described above. The order of the rating scales was counterbalanced. In the second module the SWLS (with the time frame of the last month) was first administered. An average was computed of ratings of agreement on 1 (do not agree) to 7 (fully agree) scales with the following five statements: in most ways my life is close to my ideal; the conditions of my life are excellent; I am satisfied with my life; so far I have received the important things I want in life; and, if I could live my life over, I would change almost nothing. A measure of the affective components was thereafter obtained as self-report ratings of the frequency (never = 0; rarely = 1; sometimes = 2; often = 3; very often = 4) during the last month of experiencing three intensities (slightly = 1; moderately = 2; very = 3) of the six positive emotions glad, active, joyful, awake, peppy, and pleased and the six negative emotions sad, passive, depressed, sleepy, dull, and displeased. An affect-balance index was constructed by multiplying the ratings of frequency and intensity for each emotion, then summing with a positive sign for the positive emotions and with a negative sign for the negative emotions.

--- Results

A composite measure of satisfaction with the work commute was formed by averaging across all nine STS scales. As Table 1 shows, on this measure daily commute time (from 10 to 180 min) reduces satisfaction with the work commute. Slow commute modes (walking and biking) also result in more satisfaction than car and public transit (Table 2). A multiple linear regression analysis reported in Table 3 reveals a significant negative effect of daily commute time and a significant positive effect of slow modes (walking/biking) versus public transit or driving. By dichotomizing the affective components of the STS, Table 4 shows that positive or neutral feelings dominate during the work commute. The negative effects of daily commute time are also observed for the affective components. Table 5 summarizes the results of multiple linear regression analyses with SWLS and affect balance as dependent variables, separately for the commutes to work and from work (the full results are provided as Table S2 in the supporting information available online). As can be seen, satisfaction with the commutes to and from work influences the affect balance as well as SWLS, directly or indirectly through the affect balance. In previous studies (reviewed in Lyubomirsky et al. 2005), socio-demographic factors have accounted for approximately 10% of the variance in SWLS. The present figure is 14%, which decreases to 7% when affect balance is partialled out. Satisfaction with the work commute accounts for an additional 2% of the variance in SWLS and an additional 11-12% of the variance in the affect balance.
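The scoring just described can be stated compactly in code. The following is a minimal sketch under the stated assumptions (frequency rated 0-4, intensity 1-3, nine STS scales averaged); the respondent values are invented for illustration.

```python
# Emotion lists from the questionnaire described above
POSITIVE = ["glad", "active", "joyful", "awake", "peppy", "pleased"]
NEGATIVE = ["sad", "passive", "depressed", "sleepy", "dull", "displeased"]

def affect_balance(freq: dict, intensity: dict) -> int:
    """Frequency (0-4) times intensity (1-3) per emotion; positives minus negatives."""
    pos = sum(freq[e] * intensity[e] for e in POSITIVE)
    neg = sum(freq[e] * intensity[e] for e in NEGATIVE)
    return pos - neg

def sts_composite(ratings: list) -> float:
    """Mean of the nine seven-point STS adjective scales."""
    assert len(ratings) == 9
    return sum(ratings) / 9

# Invented respondent: often (3) and moderately (2) for every positive
# emotion, rarely (1) and moderately (2) for every negative emotion
freq = {**{e: 3 for e in POSITIVE}, **{e: 1 for e in NEGATIVE}}
intensity = {e: 2 for e in POSITIVE + NEGATIVE}
print(affect_balance(freq, intensity))            # 36 - 12 = 24
print(sts_composite([5, 6, 4, 5, 5, 6, 4, 5, 5]))  # 5.0
```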
--- Discussion

The key finding of the reported survey is that satisfaction with the work commute has a substantial influence on overall happiness, particularly on the balance between positive and negative affect (Table 5). This influence would be negative for participants who are dissatisfied and positive for those who are satisfied with their work commutes. On average satisfaction is high (Table 1); thus a positive contribution is made to overall happiness. In addition, the present study fails to show that work commutes are predominantly stressful (Table 4), as previous research has found (Novaco and Gonzales 2009), although negative feelings during the work commute increase with the length of the commute. The present results add to previous findings by suggesting that affect associated with mundane routine activities in different life domains may play an important role in overall happiness. In fact, the role of satisfaction with the work commute is of the magnitude observed for several of the mundane routine activities investigated by Jakobsson Bergstad et al. (in press). We assume here that the causal direction is from satisfaction with the work commute to overall happiness. A reverse direction is, however, also conceivable. For instance, given the negative effects that unemployment has on overall happiness in the current economic situation, it is possible that the happiness derived from having a job spills over into satisfaction with the work commute. Why are the present results inconsistent with those of studies of work commuting, predominantly by car, in the US (Novaco and Gonzales 2009)? One factor is that biking and walking, more common in the present study conducted in Sweden (and this would be the same in several other European countries, e.g., The Netherlands), contribute more to satisfaction with the work commute than driving and public transit. That walking and biking provide desirable physical exercise is one reason for their popularity (Lawrence et al. 2006). If commutes are short, as walking and biking commutes usually are, they may also be appreciated as a buffer between the work and private spheres (Jain and Lyons 2008). Particularly over longer distances, satisfaction with the work commute decreases. It is an open question whether public transit leads to more satisfaction than driving when the length of the commute increases. Speaking for public transit is that, more than solo driving, it allows for engagement in activities such as talking to others, resulting in positive affect, and work or entertainment activities that may reduce stress and boredom. Other possible factors accounting for satisfaction with the work commute are not directly related to the travel mode per se. Some research has demonstrated adaptation to adverse conditions (Frederick and Loewenstein 1999). The results of the present study do not exclude that some people report positive experiences because they adapt to the negative effects of their work commute. These people may be susceptible to adaptation costs (e.g., physiological stress reactions; Ng et al. 2009) which the self-report measures in the present study do not fully capture. The findings in the present and other studies (Jakobsson Bergstad et al.
in press, 2011) that experiences of work commutes and other mundane routine activities have measurable effects on overall happiness should be a reminder that engagement in particularly meaningful activities, such as practicing generosity, developing social relations or learning to manage stress (Lyubomirsky 2008), though important, is not the only route to happiness in life. For many people, being able to make the routines of everyday life work, such that positive feelings dominate over negative feelings resulting from daily hassles, may be equally important for their overall happiness. This insight is particularly significant to convey to policy makers who are responsible for spending tax money to improve municipal facilities.
Objective The aim of this study was to explore the experiences and need for social support of Chinese parents after termination of pregnancy for fetal anomalies. Design A qualitative study using semistructured, in-depth interviews combined with observations. Data were analysed by Colaizzi's phenomenological procedure. Setting A large, tertiary obstetrics and gynaecology hospital in China. Participants Using a purposive sampling approach, we interviewed 12 couples and three additional women (whose spouses were not present). Results Four themes were identified from the experiences of parents: the shock of facing reality, concerns surrounding termination of pregnancy, the embarrassment of the two-child policy and the urgent need for social support. Conclusion Parents experienced complicated and intense emotional reactions, had concerns surrounding the termination of pregnancy and an urgent need for social support. Paternal psychological reactions were often neglected by healthcare providers and by the fathers themselves. These findings suggest that both mothers and fathers should receive appropriate support from family, medical staff and peers to promote their physical and psychological rehabilitation.
--- STRENGTHS AND LIMITATIONS OF THIS STUDY

⇒ A qualitative design using face-to-face, semistructured interviews combined with observation is beneficial for improving the accuracy of data.
⇒ The inclusion of accounts from mothers and fathers provided a rich insight into parents' experiences and need for social support.
⇒ Our population did not distinguish between stillbirth and fetal malformation, and the experiences and needs may be different.

--- INTRODUCTION

In 2017, the WHO estimated that approximately 280 000 neonates worldwide died of congenital malformations within 28 days of birth. 1 In China, the incidence of congenital abnormalities is approximately 5.6%, and approximately 1 million infants are born with birth defects every year. 2 With the development of medical ultrasound techniques and improvements in prenatal diagnosis, fetal malformations can be discovered during pregnancy. In China, implementation of the universal two-child policy has resulted in an increase in pregnant women of advanced age and high-risk pregnancies, which in turn has led to an upward trend in the incidence of fetal abnormalities. 3 Receiving a diagnosis of a fetal anomaly is distressing for expecting parents, who frequently experience intense emotional responses, including shock, grief, anger, uncertainty and fear. [4][5][6] A substantial proportion of expecting parents choose to terminate the pregnancy when their babies are diagnosed with life-limiting anomalies. 7 Termination of pregnancy for fetal anomaly (TOPFA) not only causes great physical pain to women but also brings intense sadness and destructive psychosocial problems, which may last for many years, 8 including anxiety, depression, post-traumatic stress disorder (PTSD), complicated grief and even suicidal thoughts. [9][10][11][12] There is a sizeable body of studies and recommendations associated with women's experiences of grief and support needs. [13][14][15] Chen et al 13 indicated that women undergoing induced labour for fetal abnormality experience a particular coping process identified as 'admitting the child's existence, seeking information and emotional support, avoiding the TOPFA event and looking forward to the future'. Traditionally, fathers often play the role of supporters for their partners, and fathers' experiences of TOPFA have been underexplored in comparison to mothers'. 11 16 However, they also experience high levels of grief, anxiety, depression and PTSD, 11 17 18 which require acknowledgement and validation from healthcare professionals, family and friends, community networks and workplaces. 11 To date, a growing body of studies has focused on men's grief and care experiences following abortion and stillbirth, [19][20][21] but few qualitative studies have exclusively focused on fathers' psychological experiences and need for social support after TOPFA, 22 23 particularly in the context of Chinese culture. In traditional Chinese culture, childbearing is highly valued by many couples, and TOPFA can bring feelings of guilt and inferiority. 24 25 In addition, traditional Chinese cultural concepts such as reporting good news but not bad news, and superstition, can also have an impact on parents' psychological experiences and coping styles. 25 26 According to the theory of social exchange, social support is a combination of functional support and structural support. Functional support includes emotional, instrumental, informational and appraisal support.
Structural support includes formal support (eg, from healthcare professionals) and informal support (eg, from close family members and friends). Social support is often identified as the most critical component for mothers' adaptation to the death of their children, 27 as it can effectively decrease parental grief, anxiety, depression and PTSD. [28][29][30] Therefore, it is necessary to explore the experiences and social support needs of parents within the specific cultural context of China, to help clinical medical staff and social workers better understand and support parents. Given the significant role of social support in reducing the negative psychology of parents, and the lack of recommendations on bereavement care for parents, especially for fathers, this study sought to explore the experiences and social support needs of both fathers and mothers following TOPFA against the unique cultural background of China.

--- METHODS

The study and manuscript were prepared following the Consolidated Criteria for Reporting Qualitative Research guideline.

--- Design and setting

This study employed a qualitative design using semistructured, in-depth interviews combined with observation. The purpose of interviewing both mothers and fathers was to provide a more comprehensive understanding of parents' experiences and social support needs from different perspectives. Because the research topic was sensitive and unfamiliar to the interviewees, in-depth, semistructured interviews were particularly well suited for the study. The study was carried out in the Obstetrics and Gynaecology Hospital affiliated to the Medical College of Zhejiang University from March to September 2016. This hospital has set up the Zhejiang Provincial Prenatal Diagnosis Center, which has implemented a variety of prenatal diagnosis techniques and is responsible for intrauterine diagnosis and intervention of fetal congenital abnormalities and genetic diseases in the province and surrounding areas.

--- Participants

A purposive sample of women who experienced TOPFA and their spouses was used. Inclusion criteria were that participants were at least 18 years of age, had experienced fetal abnormality (gestational week >14 weeks) confirmed by the Zhejiang Provincial Prenatal Diagnosis Center, decided to terminate a pregnancy for a fetal abnormality and voluntarily participated in the study with signed informed consent. The exclusion criteria were as follows: a history of psychosomatic disease, intellectual disability or illiteracy, and/or inability to understand the interview questions. To ensure a representative sample and capture a wide range of perspectives, the heterogeneity of the sample was maximized with respect to participants' age, education level, occupation and perinatal loss characteristics.

--- Data collection

Data were collected using face-to-face, semistructured interviews combined with observation. During the interviews, changes in the interviewees' expressions, speech rate and intonation were observed and recorded. 31 At the same time, their feelings or opinions were clarified and confirmed promptly to ensure the accuracy of the data. The researchers developed the interview outline before the interviews, including the following open-ended questions: How did you feel when you received news of the fetal anomaly? What are your concerns going forward? What kind of support and assistance would you like? How does the two-child policy affect your current pregnancy and future?
The order of the questions in the outline was not fixed and could be adjusted according to the specific situation. All interviews were conducted by the first author, who was specifically trained in conducting qualitative interviews. The interviewer had no prior relationship with the parents, briefly introduced herself before the interview and used neutral, objective and non-leading language during the interview to maximise data integrity. Before the formal interview, the interviewer provided participants with a detailed introduction to the research purpose, significance, interview process and privacy protection measures for the interview content, in order to promote the research subjects' familiarity with the research topic and reduce their sensitivity to it. The women and their spouses were interviewed separately, in a quiet and undisturbed environment, at the interviewees' convenience. After written informed consent was obtained from the participant, the interview was recorded with a digital recorder. The sample size of the study was based on the principle of information saturation. 32 The study was discontinued when the qualitative data reached saturation. When the number of interviews with mothers reached 15 and the number of interviews with fathers reached 12, our research data were saturated and no new information appeared, so sampling was terminated. Each interviewee was interviewed 1~2 times, 40~60 min each time. In three cases, the spouse was not present during the interview, so those interviews are missing. To protect participants' privacy, the 15 women were numbered 1A~15A, and their spouses were numbered 1B~15B (3B, 13B and 15B were not present).

--- Data analysis

The interview recordings were transcribed verbatim into textual materials within 24 hours and checked by another researcher. The data were analysed by the same researcher who collected the information, following Colaizzi's seven-step procedure. 33 The specific steps were as follows: the researcher read all transcribed materials carefully, analysed and extracted significant statements, coded recurring and meaningful viewpoints, summarised all encoded viewpoints, developed a detailed and complete narrative, distinguished similar viewpoints and verified the obtained results with the interviewees to ensure the authenticity of the content. Initial themes were developed by the first author and then discussed and refined with qualitative research experts and all authors to avoid subjective influence and ensure the accuracy and objectivity of the results. To ensure the trustworthiness of the data, the following procedures were used: anonymous transcription of each interview; making field notes after each interview, which were examined during the data analysis to help better understand the data; and maintaining reflexivity, so that the researchers recognised their potential effect on the study findings and remained faithful to the perspectives of the interviewees. Finally, we conducted the last step of Colaizzi's seven-step procedure. The participants were invited to respond to the obtained results. The results were presented to them in a general overview table containing quotes, emerged meanings, and all themes and subthemes. If the interviewees disagreed with the results, the researchers then rechecked the relevant codes to conduct the final analysis. In this step, the research team verified the obtained results with the interviewees to ensure the accuracy and credibility of the results.
All interviewees considered that the results represented their perceptions, and no significant themes were missed.

--- Patient and public involvement

None.

--- RESULTS

In total, 12 couples and 3 additional women (whose spouses were not present) were interviewed. The demographic information of the participants is shown in table 1, and perinatal loss characteristics are shown in table 2. A total of four overarching themes were identified across the interviews, each with several subthemes (see figure 1). Details of the themes are outlined below.

--- Theme 1: The shock of facing reality

--- Subtheme: Query and verification of fetal abnormalities

Without psychological preparation, parents initially showed strong shock and denial in the face of the diagnosis of fetal abnormality.

--- Subtheme: Abandoning the fetus with reluctance and struggle

After identifying fetal abnormalities, bereaved parents often struggled between continuing the pregnancy and inducing labour. In the interviews, 11 parents showed obvious entanglement and reluctance when forced to make a choice. Until the last second before induced labour, I hoped there would be a miracle (participant 2A). If I had enough money, I would have still wanted to give birth to him [referring to the abnormal foetus]…. As a father, I was still reluctant to give up. I sent him to the operating table with my own hands. It was not that he gave up on me, but I gave up on him (participant 14B). Especially in the face of non-fatal fetal abnormalities, it was more difficult for parents to make the decision to induce labour, and four parents had a strong sense of uncertainty. At present, there are uncertain answers. For us, there was too much uncertainty, so we dared not take this risk to give birth to our children (participant 2A). Due to the uncertain nature of the lump at present, it is difficult for us to make a choice emotionally. We don't know whether the decision to induce labour is right or wrong (participant 5B).

--- Subtheme: Compromise with reality and seeking spiritual comfort

After objectively weighing the advantages and disadvantages of medical risks, fetal health and the potential future economic burden, parents were forced to acknowledge reality, choose induced labour and seek spiritual comfort. In the face of fetal abnormalities, nine men accepted reality more pragmatically than the women. I believed in science because the baby was a flawed life and could not survive. That is all we could do. We have tried our best (participant 9A). This was a confirmed fact. If the child would be born with so much pain, we would rather make the choice to terminate in the early stage (participant 12B). Five parents received spiritual comfort and support from their living children, and the living children also made the parents more confident in their decision to induce labour. Some parents sought spiritual comfort by believing in religion. My daughter was very clever and kept asking me about my condition, which is also a comfort to me [expression was comfortable]. At my age, even if the baby was born, his quality of life would not be high, and I do not want to force my daughter to be involved (participant 3A). Buddha has spoken of fate… I comforted myself, thinking that I had no fate with this child, so as to calm my heart, and that I was lucky to find out the foetal abnormality early (participant 13A).

--- Subtheme: Intense grief

Parents felt intense grief following the death of the fetus, similar to the loss of other relatives.
When the doctor said that the baby's foetal heartbeat disappeared, I felt heartrending pain (participant 3A). During the interviews, five fathers wept in sadness, indicating that the fathers also had complex grief. However, in traditional thinking, spouses mainly play the role of supporters and need to be strong to avoid aggravating the sorrow of their wives. Therefore, they often hid their sadness in front of their wives and families. I didn't show sadness in front of my wife, I wanted her to feel that I wouldn't care too much about the outcome, but my wife felt that I wasn't sad at all and didn't care about the child, and I was actually very sad [with tears in his eyes, voice trembling] (participant 1B). I really felt a pity in my heart [he expressed regret many times to the researcher], my wife could cry, but I couldn't [the corners of his eyes were wet] (participant 4B). Family members, especially grandparents, also grieved, and the grief of family members could aggravate the grief of the parents. I didn't want others to know about it [the TOPFA event], especially my mother, who was also looking forward to the baby. Facing the reality of the abnormal foetus, I was very sad and afraid that my mother's sadness would aggravate my sadness (participant 3A). After the induction of labour, the role of parenthood was completely interrupted, and four pregnant women and their spouses felt at a loss. I felt that this happened suddenly [crying loudly], and it was difficult to accept it for a while. When I saw all the baby-related supplies, I would think about it [the TOPFA event]. I was very reluctant to leave this baby. I was pregnant when I was admitted to the hospital, but I came home with nothing. It was even more sad to see others holding their babies (participant 3A).

--- Subtheme: The shackles of traditional thinking

Traditional Chinese ideas such as 'family succession', 'son preference' and 'superstition' brought pressure to parents. Seven couples in this study were bound by traditional ideas. My dad already had three granddaughters, and was looking forward to having a grandson, which was a little stressful for me (participant 10B). In a country like China, if the neighbours in the countryside suspected that the baby was not developing well, they would certainly speculate, which might cause some gossip (participant 14B). At the same time, due to the conservative idea of 'reporting good news but not bad news', some parents were unwilling to share their sad feelings with others, to avoid and cover up their inner grief. I didn't want to see anyone when I got home, I wanted to be alone in a small room where no one could disturb me, and I didn't want others to know about it (participant 3A).

--- Subtheme: Rumination

After accepting the reality of fetal abnormality, eight parents were still confused about the causes of the abnormality and reflected on their own deficiencies during pregnancy. This rumination led to parents' strong sense of guilt and self-blame. My wife had a cough in the early stages of pregnancy, and a plaster was applied to her neck… We worked at Taobao and faced the computer for a long time every day. We really did not exercise enough, and our immune systems were not very good … (participant 14B). I wasn't ready for this baby. I was pregnant unexpectedly. Without knowing I was pregnant, I took cold medicine and underwent anaesthesia, so I was an unqualified mother [with tears in her eyes] (participant 12A).
--- Theme 2: Concerns surrounding termination of pregnancy

--- Subtheme: Concerns about induced labour

Rivanol amniotic cavity injection is the most commonly used method of labour induction in the middle and late stages of pregnancy. Most parents did not understand the complete process of labour induction and were concerned. In this study, nine pregnant women and their spouses were full of anxiety and fear about the process of induced labour. I was worried about this delivery. I had heard the process was terrible (participant 2A).

--- Subtheme: Concerns about women's physical and mental recovery

Induction of labour not only caused women great physical pain but also caused them great psychological trauma. Three spouses expressed strong concerns about maternal physical recovery, and four spouses were more worried about the psychological recovery of their wives. I was worried about my wife's health. After all, I could only talk about the next one [referring to the next pregnancy] after her physical recovery (participant 14B). I was worried about my wife's psychological recovery [repeated many times]. If she didn't adjust to this well, having another child would increase her burden (participant 10B).

--- Subtheme: Concern about the subsequent pregnancy

TOPFA not only brought intense grief, anxiety, fear and other psychological problems to parents, but this traumatic experience also left an indelible psychological shadow on them. Parents who needed to get pregnant again were especially concerned about the risk of the subsequent pregnancy. There was a psychological shadow, and I always felt that I had experienced a miscarriage. Even if everything is normal in the next pregnancy, there will be faint worries (participant 6A).

--- Theme 3: The embarrassment of the two-child policy

--- Subtheme: Contradiction between older age and the two-child policy

The comprehensive liberalisation of the two-child policy has aroused countless couples' desire to have children, including older couples. However, in the face of the reproductive risks brought by old age, many families fall into the embarrassing situation of deciding whether or not to have a second child. In this study, nine cases involved a second child, and six of the women were mothers of advanced age. I just wanted to have another child. It was better to have two children. After the second-child policy was liberalised, this age [approximately 40 years old] was a concern. The second-child policy was embarrassing for us… Our family and economic conditions allowed us to have a second child, but our physical conditions were not suitable (participant 6B).

--- Subtheme: Eager to give birth to new life again

The reproductive responsibility of women in traditional thinking and the second-child policy effectively intensified the parents' desire to conceive again. Ten parents expressed their desire to conceive a healthy new life again. Especially when the two-child policy was liberalised and others had two children, maybe only having a healthy baby could truly eliminate the impact of this event [TOPFA] (participant 11B).

--- Theme 4: The urgent need for social support

--- Subtheme: Support from medical staff

When parents learned that the fetus was abnormal, they were eager to know from medical staff the advantages and disadvantages of continuing pregnancy versus induced labour, the root cause of the malformation, what physical recovery would look like after induced labour, and other information.
In addition, they were eager for understanding and care from medical staff. The views of medical staff play a leading role in our choice (participant 13A). My wife was still thinking that the child may be good until now, so I wanted the doctor to tell her that the child is definitely bad and make up a white lie to alleviate her guilt, which would also be a balm to my heart (participant 5B).

--- Subtheme: Family support

In the face of TOPFA, parents needed the understanding and support of their families, especially the support of their spouses. My husband's company is the most important thing. If my husband is by my side during childbirth, my heart might be stronger (participant 6A). Spouses who played the important role of supporters would also provide effective support to their wives based on their psychological needs and personality traits. I told my wife from all aspects that the decision to induce labour was the right one… Now my wife was still thinking that the baby might be normal, so I wanted the doctor to tell her that the child was definitely abnormal, fabricate a white lie to alleviate her inner guilt (participant 5B). I took care of my wife, accompanied her, and took her out for relaxation. I would do my best to do well. There are many kinds of support, and I should support her effectively according to her personality characteristics (participant 10B).

--- Subtheme: Peer support

Peer support refers to enabling patients with similar diseases, physical conditions or experiences to share information, emotions, ideas or behavioural skills through diversified forms. 34 Five women believed that the exchange of experiences and emotional resonance with peers was very beneficial to their psychological recovery. I joined a peer group. They also experienced it [TOPFA]. It was convenient to talk with them. They knew what I wanted to know. Seeing the photos of their babies [healthy babies born later] was very lively and lovely, which gave me a lot of positive energy. They were also more compassionate and gave me suggestions (participant 14A). Peer support was very helpful to my wife. The most important thing was confidence and informational help. They also went through TOPFA step by step and sorted out a set of processes, which was of great significance (participant 14B). Six women were eager to communicate with peers to obtain information and emotional support. I also wanted to communicate with my peers… I wanted to ask them how they came out in the end. We could prepare for the second child together (participant 11A).

--- DISCUSSION

Fetal abnormalities are serious traumatic events for parents, 35 which can confront them with psychological crisis and complex psychological problems. The findings of the present study demonstrate that parents experienced the following mental processes: denial and verification of fetal abnormalities, abandoning the fetus with reluctance and struggle, an acknowledgement of reality, intense grief, a confrontation with the shackles of traditional thinking, rumination, concerns after deciding to induce labour and a desire for social support. To our knowledge, this is the first study in China to include both fathers and mothers in the exploration of parents' experiences and need for social support following TOPFA. Consistent with previous studies on parents' experiences of TOPFA, 6 23 36 when confronted with the fact of fetal abnormality, parents often struggle with doubts, self-blame, reluctance and sadness.
We also found that most fathers in our study chose to hide their real emotions to support their spouses and reported more rationalising than the mothers, which is similar to prior findings. 22 23 37 This may be because the social role given to fathers as supporters of the mother 38 leads to the suppression of grief, anxiety and stress in fathers, potentially increasing the risk of chronic psychological problems, such that fathers experience more anxiety in the subsequent pregnancy. 30 In addition, Chinese parents are bound by traditional ideas such as 'family succession', 'son preference' and 'superstition', which often brought pressure to parents and increased the stigma they felt. However, surviving children could bring spiritual comfort and support to parents, which is an important predictor of parental grief intensity. 39 Unlike previous studies, which reported that fathers felt overlooked and marginalised at hospitals while their partners were receiving treatment, 23 36 we found that fathers, like mothers, mainly focused on maternal physical and psychological recovery and the impacts on the subsequent pregnancy; they did not realise that paternal psychological trauma also needs attention. TOPFA can increase the psychological stress of parents, especially mothers, in subsequent pregnancies, which is consistent with previous studies. 40 41 Parents who experience fetal abnormalities have a high degree of anxiety and fear in subsequent pregnancies, and especially worry about the risk of recurrence. 41 Under the influence of the Chinese two-child policy, parents' desire to conceive again was more urgent, but they were still full of doubts about the causes of fetal abnormalities and what to do in the subsequent pregnancy. Therefore, medical staff should establish a long-term follow-up mechanism to continue to pay attention to parents' physical and mental recovery 23 42 and provide them with the necessary knowledge and psychological support, such as information about abnormalities, childbearing and pregnancy examination, to support them in conceiving a healthy new life. Many studies have demonstrated the role of social support in alleviating negative emotions in parents experiencing TOPFA, including anxiety, depression and PTSD. 28 43 44 Parents in this study also showed a strong need for social support. When first informed about the fetal abnormality, parents were full of confusion about the causes and worried about the impacts on any subsequent pregnancy. They urgently needed professional guidance and suggestions from medical staff. At this stage, informational support from medical staff constitutes the main part of the support system, which can help parents make good decisions; these findings are in line with a prior study. 22 Therefore, medical staff should take the initiative to understand the thoughts and needs of parents with fetal abnormalities and patiently provide them with complete, adequate and appropriate informational support to help them establish a scientific understanding of fetal abnormalities and relieve their confusion and feelings of guilt, so as to facilitate their process of grieving. 45 46 Echoing the experiences of parents in the broader pregnancy loss literature, 47 48 parents in this study also needed psychological counselling and empathetic care from medical staff following perinatal loss. Therefore, healthcare professionals should offer parents bereavement care with empathy and cultural sensitivity.
22 During induced labour in hospitals, parents need care and emotional support from their families, especially the support of their spouses; this finding is consistent with prior studies. 49 Fathers will also actively take on the role of supporters, providing various forms of support based on their wives' psychological needs and personality traits, including appraisal support, such as recognising their wives' induced abortion decisions, and instrumental support, such as daily care, company and helping their wives shift their attention. However, fathers, like mothers, also need support from their families. 50 Therefore, medical staff should guide family members to treat fetal abnormalities scientifically, provide help and care for parents in daily life, and give emotional understanding and assistance to make them feel the warmth of their families. Peer support has long been considered an essential component of a supportive network for people facing adversity. 51 Parents in this study also showed a strong need for peer support, and they hoped to obtain informational support and emotional resonance through sharing experiences and communicating with their peers. Healthcare providers should establish a peer support platform according to the needs of parents, provide a forum for parents to exchange experiences, and help them eliminate loneliness and helplessness, release inner pressure and transmit positive energy. Based on the above parents' experiences and need for social support, we preliminarily constructed the following social support model (see figure 2) and verified the effects of family support and peer support; see our previous studies for details. 28 52 In the future, we will further verify the effect of the social support model. The study has a few limitations. First, there may be subtle differences in the psychology of parents who undergo induced labour due to stillbirth versus fetal malformation, but our population did not distinguish between the two. Second, we found that most fathers tended to avoid interviews, so, to improve fathers' participation, we did not collect demographic information other than age. Furthermore, some bereaved parents, especially fathers, refused to participate in the study, and based on the principle of information saturation, we ultimately included 15 mothers and 12 fathers; this small sample size may decrease the transferability of our findings. However, the psychology of parents who refused to participate in the study is also well worth attention, and future research should explore the essential reasons for their refusal and provide targeted psychological support. Finally, TOPFA not only brings psychological trauma to parents but also has a negative impact on the psychology of the whole family; therefore, it is necessary to further explore the influence of TOPFA on other family members (such as grandparents, surviving children, etc) and the interaction of emotions among family members.

--- CONCLUSION

This study contributes to the limited body of international studies on parental experiences and need for social support following TOPFA.
The findings suggest that TOPFA is an extremely painful experience for parents, characterised by psychological reactions of denial and the need to verify fetal abnormalities, abandoning the fetus with reluctance and struggle, intense sadness, an acknowledgement of reality, a confrontation with the shackles of traditional thinking, rumination, concerns surrounding induced labour and a need for social support. Paternal psychological reactions were often neglected by healthcare providers and by the fathers themselves, because fathers often played the role of supporters; this requires more attention. Based on the above parents' experiences and need for social support, medical staff should provide tailored informational and emotional support for bereaved parents and guide family members in providing support. Finally, medical staff should establish a peer support platform to provide peer support for parents.

--- Data availability statement

Data are available upon reasonable request.

--- Competing interests

None declared.

--- Patient and public involvement

Patients and/or the public were not involved in the design, conduct, reporting, or dissemination plans of this research.

--- Patient consent for publication

Not applicable.

--- Ethics approval

This study was approved by the ethics committee of the Women's Hospital School of Medicine, Zhejiang University (IRB Number 20150071). All participants took part voluntarily and gave written informed consent prior to participation. All methods were performed in accordance with the relevant guidelines and regulations.

--- Provenance and peer review

Not commissioned; externally peer reviewed.
Background: Early marriage is one of the global problems that seriously undermines the personal development and the rights of women. It is particularly acute in developing countries such as Ethiopia. It has major consequences for public health, national security, social development, human rights, economic development, and gender equality.

Methods: The analyzed data were obtained from the 2016 EDHS, and 1120 samples were considered in this analysis. Both bivariate and multivariable binary logistic regression models were used to identify the determinants of early marriage practice.

Results: The prevalence of early marriage practice was 48.57% in the study area. The odds of early marriage practice were 2.04 (AOR=2.04, 95% CI: 1.88, 2.45) times higher among rural residents compared to urban residents. The odds of early marriage practice were 0.94 (AOR=0.94, 95% CI: 0.57, 1.98) times lower among women who had primary education compared to uneducated women. Those who did not know the legal marital age were 1.61 (AOR=1.61, 95% CI: 1.26, 2.07) times more likely to practice early marriage compared to parents who knew the legal marital age.

Conclusion: Education level, family monthly income, residence, literacy level and knowledge of the legal marital age were significant determinants of early marriage practice.
Introduction

Child early marriage is defined as any marriage carried out below the age of 18 years, before the girl is physically, physiologically, and psychologically ready to shoulder the responsibilities of marriage and childbearing. It therefore has major consequences for public health, national security, social development, human rights, economic development and gender equality 1. Similarly, the age at first marriage is defined as the age at which the respondent began living with her/his first partner 2. The extent of early marriage varies between countries and regions. The highest rates are reported in South Asia and sub-Saharan Africa, where 44% and 39% of girls, respectively, were married before the age of 18 years. Data from 33 countries showed that trends in marriage indicate limited change since the International Conference on Population and Development 3. Moreover, 27 per cent were in East Africa and 20 per cent in Northern and Southern Africa 4. As current estimates of the Convention on the Elimination of All Forms of Discrimination Against Women show, approximately 82 million girls in the world between 10 and 17 years of age will be married before they reach 18 years; and of the 331 million girls aged 10-19 in developing countries, 163 million will be married before they are 20 years old 5. A study in the Gojjam and South Wollo zones of the Amhara region indicated that early marriage is highly prevalent. The prevalence is higher for women than for men. About 49% of women were married before age 15 and about 83% were married before age 18 years 6. By 2015, the prevalence of female early marriage was 76.7% in the Amhara region, North Ethiopia. Females who did not know the legal marital age were 12 times more likely to practice early marriage compared to those who knew the legal marital age 7. A study of marital relations and intimate partner violence in Ethiopia showed that 70% of respondents had married before age 15 years and 30% had married at ages 15-17 years. Among those who married before 15 years of age, 82.2% were from rural residences. This study also showed that rural residence added more risk for early marriage: rural residence was associated with nearly a threefold elevation in the odds of marriage at ages 15-17 years 8. A study by Pathfinder International in the Amhara region showed that the prevalence of girls' early marriage was 81.8%. Moreover, about 44% of urban and 53% of rural ever-married women were married between 12 and 15 years of age. The proportion marrying between ages 16 and 17 years was 14.5% in urban and 15.5% in rural areas 9. In Ethiopia, female child early marriage is seen as a way to improve the economic status of the family, to strengthen ties between families, to ensure that girls are virgins when they marry, and to avoid the possibility of a girl reaching an age where she is no longer desirable as a wife. The practice of female child early marriage is now understood to have very harmful effects on the health and the psychological, physiological and socio-economic well-being of young girls (as well as of their newborns). Nonetheless, this knowledge is not broadly shared across most of the population 10. In the Amhara region of Ethiopia, about 80% of girls are married before they are 18 years old, and the most common age for a girl to marry is 12 years old. Child marriage is rooted in religious and cultural traditions based on protecting a girl's honor, since sex before marriage is seen as an extremely shameful act.
A girl's worth is therefore based on her virginity and her role of being a wife and mother 11 . All relevant laws of Ethiopia establish a legal minimum age at marriage of 18 years for boys and girls. In fact, much of the education on early marriage prevention clearly indicates that the legal minimum age for marriage is 18 years for both girls and boys. However, a study in Amhara region shows clearly that the general public's definition of early marriage for girls uses a much lower cut-off than the legal definition. This indicates that the cut-off age for defining early marriage for female adolescents was often ignored, with many marriages occurring before age 15.3 years 12 . The prevalence of early marriage in the Amhara region is still high. The findings of this study will provide an input for policy makers and planners in the area, as well as the regional government, to respond to early marriage at all levels of governmental and non-governmental sectors. Furthermore, it will serve as an initiative for further investigation and intervention in the area regarding early marriage for those interested in studying its consequences and related issues. Therefore, the goal of this study was to determine the prevalence and factors associated with early marriage in the Amhara region, North Ethiopia. --- Methods --- Study design and sampling The dataset used in this study was obtained from the Ethiopia Demographic and Health Survey (EDHS) conducted by the Central Statistics Agency (CSA) in 2016. The survey utilized a multistage cluster sample and was designed to obtain and provide information on the basic indicators of health and demographic variables. The study design was cross-sectional, in which the data on the independent and outcome variables are collected at the same point in time. --- Target population The target population was all female community members in selected cities of Amhara Region who practiced marriage. --- Dependent variable Female child marriage (classified as either below 18 or 18 and above). --- Independent variables Age, education, religion, income, ethnicity, literacy level, and residence area (urban/rural). --- Data analysis The data were extracted, edited, recoded, and analyzed using SPSS version 23 for Windows. Both bivariate and multivariable binary logistic regressions were performed. Bivariate logistic regression was performed first, and variables with a p-value less than 0.25 were carried forward into the multivariable binary logistic regression analysis to identify the determinants of early marriage among female children. Finally, variables with p-values <0.05 in the multivariable logistic regression model were taken as statistically significant. Associations are reported as adjusted odds ratios with 95% confidence intervals. --- Binary logistic regression model The binary logistic regression model is used when the dependent variable is dichotomous and the independent variables are of any kind. This model is mathematically flexible, easy to use, and requires fewer assumptions 13 . It is presented as follows:

logit(p) = log(p / (1 - p)) = β0 + β1X1 + β2X2 + ⋯ + βkXk (1)

where p is the probability of practicing early marriage, X = (X1, X2, ..., Xk) is a set of independent variables, and β = (β0, β1, ..., βk)T is a vector of unknown coefficients. The quantity to the left of the equal sign is the log of the odds of early marriage in the binary logistic regression. The adequacy of the model was checked using the Hosmer-Lemeshow goodness-of-fit test.
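To make the estimation steps concrete, the following is a minimal sketch in Python of fitting the multivariable model in Equation (1), exponentiating coefficients to obtain adjusted odds ratios with 95% confidence intervals, and running a Hosmer-Lemeshow check. The study itself used SPSS 23, so this is only an illustration of the same pipeline; the file name and column names are hypothetical placeholders, not actual EDHS variable names.

```python
# Minimal sketch of the analysis pipeline described above (the study itself
# used SPSS 23). File and column names are hypothetical placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import chi2

df = pd.read_csv("edhs_2016_subset.csv")  # hypothetical EDHS extract

# Multivariable binary logistic regression (Equation 1):
# early_marriage is 1 if the woman married before age 18, 0 otherwise.
model = smf.logit(
    "early_marriage ~ C(residence) + C(education) + C(income)"
    " + C(literacy) + C(knows_legal_age)",
    data=df,
).fit()

# Exponentiated coefficients are adjusted odds ratios (AORs) with 95% CIs.
ci = np.exp(model.conf_int())
aors = pd.DataFrame({"AOR": np.exp(model.params),
                     "CI low": ci[0], "CI high": ci[1]})
print(aors.round(2))

def hosmer_lemeshow(y, p, groups=10):
    """Hosmer-Lemeshow goodness of fit: compare observed and expected
    event counts across deciles of predicted risk (df = groups - 2)."""
    d = pd.DataFrame({"y": y, "p": p})
    d["decile"] = pd.qcut(d["p"], groups, duplicates="drop")
    g = d.groupby("decile", observed=True)
    obs, exp, n = g["y"].sum(), g["p"].sum(), g["y"].count()
    stat = (((obs - exp) ** 2) / (exp * (1 - exp / n))).sum()
    return stat, chi2.sf(stat, len(obs) - 2)

stat, pval = hosmer_lemeshow(df["early_marriage"], model.predict(df))
print(f"Hosmer-Lemeshow chi-square = {stat:.3f}, p = {pval:.3f}")
# A p-value above 0.05 (the paper reports 0.645) gives no evidence of poor fit.
```

The Hosmer-Lemeshow statistic is not built into statsmodels, so the helper above implements the usual decile-of-risk version; different grouping choices can shift the p-value slightly.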
--- Results A total of 1120 married women were selected for this study. The median age at first marriage for women was 15 years, and about 48.6% of women got married before their 18th birthday in the selected study area. As shown in Table 1, early marriage was significantly associated with women's educational level (p-value < 0.001, below the 0.25 screening threshold). The highest proportion of early marriage was observed in the age group 15-19 years (60.6%). The cross-tabulation of educational status and early marriage also reveals that the smallest proportion of early marriage (21.6%) was observed for those having higher education, while the highest proportion was observed for those having no education (51.1%). The proportion of early marriage was 30.3% for women who lived in urban areas and 52.2% for women who lived in rural areas. It was also observed that, among women who took part in the study, the percentage of early marriage for followers of Orthodox, Protestant, Muslim, and other religions was 48.2%, 53.8%, 50.0%, and 50.0%, respectively (Table 1). The literacy level of women was significant for early marriage in the study area. Women who could read only parts of a sentence were 0.49 times less likely to practice early marriage than those who could not read at all. Women who could read a whole sentence were 0.36 (AOR=0.36, 95% CI: 0.24, 0.52) times less likely to practice early marriage than those who could not read at all. The income level of the family was another significant factor for early marriage in Amhara region. Women who lived in a family with medium monthly income were 0.81 (AOR=0.81, 95% CI: 0.60, 1.11) times less likely to practice early marriage than those who lived in a poor family. Women who lived in a family with rich monthly income were 0.57 times less likely to practice early marriage than those who lived in a poor family. The odds of early marriage practice were 2.04 (AOR=2.04, 95% CI: 1.88, 2.45) times higher among rural residents compared to urban residents (Table 2). --- Binary logistic regression --- Assessment of goodness of fit of the model The omnibus tests are used to measure how well the model performs. The chi-square tests measure the difference between the initial model and the regression model in terms of the number of correctly classified subjects, or the change in the -2 log-likelihood from the previous step. Since the omnibus test was significant, the model in the final step was considered to be appropriate (Table 3). If the p-value of the Hosmer-Lemeshow goodness-of-fit test statistic is greater than α=0.05, we fail to reject the null hypothesis that there is no difference between observed and predicted values. The Hosmer-Lemeshow statistic had a chi-square value of 2.264 and a p-value of 0.645, indicating that the model had a good fit. This shows that there was no significant difference between the observed and predicted model values and hence the model fits the data well (Table 3). --- Discussion This study examined the prevalence of early marriage and the related factors in the Amhara Region of Ethiopia. The prevalence of early marriage practice was 48.57% in the study area. This finding was lower than those of three previous studies in Amhara region, Ethiopia, which reported 83%, 81.8%, and 76.7%, respectively 6,7,9 . Education level was a significant factor in early marriage. The percentage of early marriage was highest among women with no education. Females with primary education were less likely to get married before reaching the age of 18 than those with no education.
The higher one's educational attainment, the more knowledge females gain to understand the best marriage age and the effects of early marriage. This result was supported by a previous study 12 . The odds of early marriage practice were high for those women who did not know the legal marital age compared to women who knew the legal marital age. The odds of early marriage practice were higher among rural residents compared to urban residents. This finding was similar to previous studies in Amhara region, which showed that rural residents were more likely to practice early marriage than urban residents 7 . The income level was another significant factor for early marriage in Amhara region. Families with medium monthly income were 0.81 times less likely to practice early marriage compared to those having poor monthly income, and similarly, families with rich income were 0.57 times less likely to practice early marriage than poor families. This result was consistent with previous studies, which indicate that families with lower monthly income are more likely to practice early marriage than families with high monthly income 7,14 . This study also revealed that the odds of early marriage practice were higher among rural residents compared to urban residents, a finding in line with a previous study in Ethiopia 7 . --- Conclusions and recommendations This study established the prevalence and factors associated with early marriage. The authors found that the prevalence of early marriage was still high in the Amhara region, Ethiopia. The study identified education level, family monthly income, literacy, residence area, and knowledge of the legal marital age as the main factors associated with early marriage practice. Since the education level of women was a significant factor for early marriage, parents and the Ministry of Education should emphasize the education of women. Community awareness regarding the legal marital age also has to be developed. --- Authors' contribution All the authors participated in proposal development, data extraction, analysis, and manuscript writing. --- Competing interests The authors declare that they have no competing interests.
The role of students in supporting creativity programs, especially social and humanities research at Sam Ratulangi University, is very important. Apart from attending lectures, students will become human resources with academic knowledge, management skills, and communication skills; they are also expected to have the skills and creativity to carry out research, in order to become superior, competitive, adaptive, flexible, and productive graduates during the industrial revolution 4.0, and to support the achievement of higher education Key Performance Indicators (KPIs).
Introduction Sam Ratulangi University has a mission, namely to be at the forefront of carrying out the Tridharma of Higher Education and to serve as a Center for Innovation in Science, Technology, Arts, and Culture to improve the standard and quality of community life, summarized in the word IMANKU (Innovative, Partner, Applicative, Normative, Creative, and Excellent). Excellence and competitiveness in entrepreneurship is a description of the mission of Sam Ratulangi University. The Student Creativity Program (PKM) is one of the efforts of the Directorate General for Strengthening Research and Development, Ministry of Research, Technology and Higher Education, to lead students to a level of enlightenment in creativity and innovation based on mastery of science and technology and high faith. The Social Humanities Research Student Creativity Program (PKM-RSH) is a program that critically examines social and humanities phenomena in society. PKM-RSH focuses on elements of creativity and innovation that are useful and provide answers to the problems raised by combining the social and humanities fields (Nur Fadhila, 2022). The quality of higher education graduates depends not only on academic abilities (hard skills) but also on supporting abilities (soft skills) such as thinking skills, management, communication, leadership, and working in a team. A lack of soft skills can cause a decline in the quality of graduates. Students who pass the PKM-RSH selection are implementing the Independent Learning -Independent Campus (MBKM) program and contributing to the achievement of the Main Performance Indicators (IKU) for universities and study programs (Brahma Wicaksono, 2022). The phenomenological theory of Alfred Schutz puts forward two motives, namely the "cause" motive (Because of Motive) and the "purpose" motive (In Order To Motive). The "cause" motive is what motivates someone to carry out a certain action, while the "purpose" motive is the goal that someone carrying out a certain action wants to achieve (Warouw Desie, 2022). This research reveals the motives for participating in the Social Humanities Research Student Creativity Program (PKM-RSH) at Sam Ratulangi University, as experienced by students and their accompanying lecturers. Data from the Unsrat Dashboard show that the number of active undergraduate students at Sam Ratulangi University was 535 in the 2020/2021 academic year, 882 in the 2021/2022 academic year, and 1,405 in the 2022/2023 academic year, spread across 11 faculties. It is therefore interesting to investigate students' motives for taking part in the Social Humanities Research PKM, because these numbers are relatively small compared to the total number of students. --- Methods The method used is a qualitative method with a descriptive approach, which is aimed at: (1) collecting detailed, factual information that describes existing phenomena; (2) identifying problems or examining applicable conditions and practices; (3) making comparisons or evaluations; and (4) determining what other people have done when facing the same problem and learning from their experiences to inform future decisions (Soegiono, 2012). Informants were selected by purposive and snowball sampling. Informants selected purposively are people who are chosen because they are deemed capable of providing information and able to nominate other people as informants who can provide more in-depth information.
Meanwhile, informants selected by snowball sampling are chosen based on clues found in the field; their number grows day by day until the saturation point, where an informant provides the same information as previous informants (Moleong, 2012). Data collection was carried out using methods common to qualitative approaches, namely participant observation, in-depth interviews, and document study (18). In qualitative research, data analysis is carried out from the beginning and throughout the research process. In this research, qualitative data were analyzed with the interactive model developed by Miles and Huberman (2012). --- Results and Discussion Sam Ratulangi University is committed to excellence in the learning process, research, and community service as an integral part of the process of forming cross-disciplinary leadership character. Everyone must have the same access to education regardless of their social or cultural background. The university's vision and mission reflect the moral responsibility to "humanize" others, as aspired to by the educational figure G.S.J. Samuel Ratulangi in the motto "Si tou timou tumou tou". Sam Ratulangi University's vision is to jointly develop Sam Ratulangi University into a superior and cultured university. Its mission is to be at the forefront of carrying out the Tridharma of Higher Education and to serve as a Center for Innovation in Science, Technology, Arts, and Culture to improve the standard and quality of community life, as summarized in the word IMANKU (Innovative, Partner, Applicative, Normative, Creative, and Excellent). Students, as intellectual actors, are expected to be able to develop science and technology that is potential, efficient, and useful in social life. Students are expected to be able to develop research based on observations of social phenomena in the surrounding community, and to understand the meaning of research, its objectives, and its benefits. With this foundation, students will be able to take a creative and innovative scientific approach to uncover a phenomenon, discover novelty, or prove a hypothesis in the field of social humanities. The Social Humanities Research Student Creativity Program (PKM-RSH) is an activity that provides a forum for student creativity and innovation in the field of research in accordance with scientific principles. In PKM-RSH, students are expected to critically examine social and humanities phenomena in society with a scientific approach, use appropriate methods in searching for information, analyze information using theory, and provide answers to problems that arise from these phenomena. In this way, research results can be published and provide benefits to interested parties. Through PKM-RSH, students are expected to explore ideas and develop innovative, creative discoveries based on research and development so that they are able to excel in national events. This is the aim of one of the Independent Learning -Independent Campus (MBKM) programs, namely independent studies/projects. Thus, PKM-RSH becomes a form of independent study/project in the MBKM program. PKM-RSH can be a substitute for courses that must be taken or a complement that can be included in the Diploma Companion Certificate (SKPI).
Equivalence of Semester Credit Units (credits) from PKM-RSH is calculated based on the implementation time as well as the student's proven contribution and role in activities under the coordination of the accompanying lecturer. The range of credits that students can obtain across all PKM-RSH activities is a minimum of 6 and a maximum of 10 credits, in accordance with the provisions of their respective study programs. In line with Independent Campus learning, PKM activities are expected to provide opportunities and challenges for developing creativity, innovation, capacity, and independence in seeking and finding knowledge or solutions through the problems and dynamics that exist in society. An explanation of the credit conversion recommendations can be found in the PKM General Guidelines Book. According to Minister of Education and Culture Regulation No. 3 of 2020, Article 15, paragraph 1, the forms of learning activities that correspond to PKM-RSH are research and independent studies/projects. The Student Creativity Program -Social Humanities Research (PKM-RSH) is an activity that provides a forum for student creativity and innovation in the field of research in accordance with scientific principles. The general purpose of PKM-RSH is to reveal facts or phenomena through a scientific approach. Its specific purpose is innovation: discovering novelty about a phenomenon or proving a hypothesis, in either one scientific discipline or across disciplines, so as to contribute information for the progress of science and technology and to help overcome existing problems in society. The aim of PKM-RSH is to foster interest and skills in research, understanding of research methods, and methods of data analysis, and to produce quality research that has the potential to be published in scientific journals and the opportunity to produce policies that benefit both the academic community and the wider community. PKM-RSH combines the social and humanities fields, taking as its research objects the social phenomena and human behavior found in social life. The social field focuses more on phenomena of interaction in social life, such as economics, psychology, social affairs, education, management, and politics. The humanities field focuses more on basic aspects of behavior in people's lives, such as the development of culture, art, philosophy, customs, history, beliefs or religion, law, and values. The combination of social and humanities research uses paradigms such as cause-and-effect relationships, conclusive-descriptive, phenomenology, hermeneutics, postcolonial, positivistic, historical, structural, and developmental approaches, according to each field of science. Quality research can be judged by several underlying aspects, namely: intellectual challenge, problem focus, approach methods, theories used, data quality, and outcome impact. Intellectual challenge can be seen from the "state of the art" of the topics raised, the use of logic, and the research platforms used. The focus of the problem can be seen from the sharpness in choosing the scope of research, the sharpness in selecting unique problems, and the suitability of the virtual or digital approach used. The theory used must be relevant to the problem focus and must be used in data analysis to answer the research problems.
The approach method can be measured by its novelty, its procedures, and the completeness of the system used to collect information or data, together with its analysis techniques. The quality of the data or information collected can be measured by the adequacy and reliability of the data or information, including the data sources used. The output impact can be seen from the quality of the output, presented in a logical and systematic manner. Data that can be used in social humanities research are grouped into primary and secondary data. Primary data can be obtained from respondents, participants, sources, artifacts, and society (collective memory, myths, folklore, norms, and so on), using questionnaire or survey techniques, interviews, observations, active participation, and experiments. Meanwhile, secondary data can be sourced from archives, literature, reports (data from BPS, companies, etc.), digital data (social media or big data), and written laws or regulations. In 2021, two groups from the Faculty of Social and Political Sciences passed the PKM-RSH selection. The first was guided by lecturer Dr. Daud Markus Liando, SIP, MSI, with Mineshia Lesawengan as PKM-RSH team leader; the second was guided by Dr. Leviane J.H. Lotulung, S.Sos, M.I.Kom, with Fabio Y. Lasut as team leader. Every year, Sam Ratulangi University opens opportunities for students to take part in the Student Creativity Program (PKM), including Social Humanities Research (PKM-RSH), an activity and forum for student creativity and innovation in the field of research in accordance with scientific principles. In PKM-RSH, students are expected to critically examine social and humanities phenomena in society with a scientific approach, use appropriate methods in searching for information, analyze information using theory, and provide answers to problems that arise from these phenomena. In this way, research results can be published and provide benefits to interested parties. Through PKM-RSH, students are expected to explore ideas and develop innovative, creative discoveries based on research and development so that they are able to excel in national events. This is the aim of one of the Independent Learning -Independent Campus (MBKM) programs, namely independent studies/projects. In 2021, two groups at the Faculty of Social and Political Sciences, Sam Ratulangi University, received PKM-RSH funding. In the program's socialization sessions, the importance of proposal format and of writing that complies with the PKM guidelines was explained, because many PKM proposals still do not comply with the guidelines and are therefore rejected. Creativity is also valued and important, and needs to be emphasized in the contents of the proposal. Apart from creativity, the local wisdom aspect of PKM products can also add value when proposals are reviewed. In these sessions, attended by lecturers and students, participants received information and explanations about the Student Creativity Program (PKM), especially Social Humanities Research (RSH), including the techniques for writing PKM proposals and the important things that must be done so that a proposal can pass, whether concerning the choice of title, creative aspects, the inclusion of elements of local wisdom, and so on. This motivates students to prepare proposals.
Students want to take part in PKM-RSH, but not many go on to prepare proposals. Therefore, the leadership encouraged students to keep preparing proposals and to look for competent supervisors to guide the PKM-RSH teams. Student enthusiasm is quite high, but many do not maintain that enthusiasm long enough to create and upload quality PKM-RSH proposals, even though proposal coaching has been provided to perfect proposal writing. To make it easier for students to find out about the … --- Encouragement from accompanying lecturers Students are motivated to take part in PKM-RSH because they are continuously encouraged by accompanying lecturers or supervisors. The accompanying lecturer or supervisor continues to accompany the PKM student team, directing and inspiring the students in determining the title and preparing the proposal through to its completion. Good collaboration between supervisors and students produces quality PKM proposals that can pass to the National Student Science Week (PIMNAS). PKM-RSH culminates in PIMNAS, so the accompanying lecturer encourages the student team to prepare good, quality proposals, with ongoing discussion to arrive at a proposal that can pass to PIMNAS. The accompanying lecturer encouraged the student team to read the guidebook for preparing the PKM-RSH proposal: how to write it, the required supporting files, and the prescribed writing format, because there are differences in both substance and form between PKM-RSH and other PKM schemes. After understanding these forms of writing, the student team knows the rules of what can and cannot be written, for example the number of pages and the citation of sources. Adhering to the writing format and structure specified in the guidebook helps a proposal pass the selection. --- As a Challenge to Develop Ideas, Creativity and Innovation Through Research In participating in PKM-RSH, the student and lecturer team was challenged to come up with a title that was interesting, innovative, and contained elements of local wisdom. Teams of students and lecturers are challenged to think creatively in expressing ideas in proposals that have scientific quality and make a positive contribution to society. PKM-RSH is also a forum for student creativity and innovation in the field of research in accordance with scientific principles. The student team is expected to critically examine social and humanities phenomena in society using a scientific approach, use appropriate methods in searching for information, analyze information using theory, and provide answers to problems that arise from these phenomena. In this way, research results can be published and provide benefits to interested parties. Through PKM-RSH, students are expected to explore ideas and develop creative, innovative discoveries based on research and development so that they are able to excel in national events. This is the aim of one of the Independent Learning -Independent Campus (MBKM) programs, namely independent studies or projects. --- Opportunity to Work as a Team Students are also motivated to take part in PKM-RSH because they get the opportunity to work as a team with other students. Students do not conduct research alone but are given the opportunity to work with teammates. Working as a team generates many ideas that team members can discuss.
The purpose of conducting research as a team of students and lecturers is to ensure collaboration. Working as a team ensures that PKM-RSH proposals are completed quickly, and it also trains students to exchange ideas and to be skilled in communicating or conveying ideas to other team members. When working as a team, togetherness gradually grows among group members, each of whom comes to understand the character of the others. If problems arise in preparing and completing the PKM-RSH, the whole team can offer ideas and solutions so that the problems can be resolved. Students are motivated to work in student teams when preparing and completing the PKM-RSH because they can get to know each other's personalities and train themselves to respect differences of opinion. Teamwork also allows students to convey their ideas. And if the team passes to PIMNAS, it is an achievement for everyone, not just one person. --- Qualified for the National Student Science Week (PIMNAS) Students also take part in PKM-RSH because they want to get through to the National Student Science Week (PIMNAS): the PKM program is a prestigious annual student program, and getting through to PIMNAS is a proud achievement. Students and lecturers who take part in PKM-RSH are motivated to have their proposal pass so that they can continue to PIMNAS. Many other competitors are also good, but the motivation to carry the name of the department, faculty, and university to the PIMNAS level made the team enthusiastic about participating in PKM-RSH. It would be very encouraging if the PKM-RSH proposal made it to PIMNAS. The student team realized that there were many competitors who were also good, so they had to do their best in preparing the proposal. --- In Order To Motive (Purpose): why students take part in the Student Creativity Program -Social Humanities Research (PKM-RSH) --- Graduated Without a Thesis Students are motivated to take part in PKM-RSH also because they can finish their studies and graduate without a thesis if their PKM-RSH proposal passes to PIMNAS. PKM-RSH can be a substitute for courses that must be taken, or a supplement that can be included in the Diploma Companion Certificate (SKPI). The equivalent in Semester Credit Units (credits) of PKM-RSH is calculated based on the implementation time as well as the student's proven contribution and role in activities under the coordination of the accompanying lecturer. The range of credits that students can obtain across all PKM-RSH activities is a minimum of 6 and a maximum of 10 credits, in accordance with the provisions of their respective study programs. Two students from the Faculty of Social and Political Sciences, Unsrat, who were part of a Student Creativity Program -Social Humanities Research team, ultimately graduated without going through the thesis stage because they succeeded in passing the National Student Science Week (PIMNAS). This award was granted by the campus based on the chancellor's decision letter, in recognition of the fact that the students had produced scientific work comparable to the thesis-writing process, assessed by professors from several universities in Indonesia. Students whose PKM-RSH proposals passed only to the university stage were also not disappointed, because they received rewards from accompanying lecturers in the form of A grades in seminar courses and certain other courses.
--- Contribution to Universities and Faculties --- Difficult to Find Accompanying Lecturers Initially, the student team had difficulty finding PKM-RSH supervisors, because supervisors must be competent in what the team is going to research. Teams must also look for lecturers who have the opportunity and time to guide them, given that lecturers also have obligations to teach, research, and carry out community service on a fairly tight schedule. Several lecturers who were contacted were not yet willing to accompany and guide teams because they had other tasks to complete. Initially, students had difficulty finding the right accompanying lecturer for the PKM-RSH team, because the lecturers were busy, so mentoring time with the team was quite limited. With guidance from WD3 and the Head of Department, the PKM-RSH student team finally got an accompanying lecturer who guided the team well, so that they produced a good proposal. --- Limited Time to Discuss with the Team and Lecturers There are obstacles that slow the process of preparing PKM-RSH proposals, whether among the student team or with the supervisor. Students are sometimes faced with busy lectures and assignments from lecturers that must be completed on time, as well as the Mid-Semester Examination (UTS) and Final Semester Examination (UAS), which require concentration on studying, so the time to meet and discuss with the PKM-RSH team is limited. In the process of preparing a proposal up to the presentation stage, there are obstacles to meeting for discussion, because it is sometimes difficult to match the free time of fellow team members, especially during UTS and UAS, when students have to concentrate on exams and completing assignments from lecturers; meetings with the team to discuss PKM-RSH proposals are therefore sometimes delayed. Likewise, PKM-RSH accompanying lecturers are very busy: apart from having additional duties or positions, they also have activities off campus, which sometimes becomes an obstacle to holding discussions. One accompanying lecturer had additional duties as a study program coordinator, so sometimes when the student team needed to discuss, he had other activities, and the student team had to wait and adjust to the lecturer's free time. --- Confused about Finding Phenomena and Determining Titles At first, the PKM-RSH team had difficulty finding a phenomenon and determining a title. It was the role of the accompanying lecturer to direct the student team to find the phenomenon and determine the title correctly. Students received very useful input, which stimulated their thinking so that creative ideas emerged about what to research. Searching for and reading much of the latest information and observing the surrounding environment can help identify social phenomena and determine research titles. From reading, one can get ideas and perspectives for determining a topic. Observing phenomena, problems, or social events can also help in determining the right title. --- Difficult to Find Friends to Build a Team Preparing a PKM-RSH proposal requires a solid team, because there is interaction between members and with accompanying lecturers. Therefore, students must look for teammates who share the same vision and way of working, so that there are no unproductive debates and the team can work cohesively. It is not easy to find friends to build a team, because students come from different backgrounds.
(2) In Order To Motive (Purpose): graduate without a thesis; contribute to universities and faculties; find solutions to social phenomena that benefit society. (3) Obstacles encountered: difficulty finding accompanying lecturers; limited time to discuss with the team and lecturers; confusion in finding phenomena and determining titles; difficulty finding friends to build a team. As a suggestion from this research, the Student Creativity Program must be socialized continuously to students so that more students take part in PKM, because the results can support the achievement of university and faculty IKU. Student teams and accompanying lecturers who have passed should also provide motivation and assistance to other students who intend to take part in PKM-RSH.
Introduction: Interventions aimed at optimizing parents' ability to manage their children's asthma could be strengthened by better understanding the networks that influence these parents' choices when managing asthma. This study aimed to explore the asthma networks of parents of children with asthma; specifically, to gain insights into whom parents select to be within their networks and why; how
individuals within parents' networks influence the way in which they manage their children's asthma medications; and the factors driving the development of these networks. Methods: A qualitative research methodology utilizing semi-structured interviews with parents of children with asthma was employed to fulfil the objectives of this study. Results: Twenty-six face-to-face interviews with parents of children with asthma were conducted, recorded, and transcribed. Transcriptions were independently coded for concepts and themes by the research team. Asthma medications were a dominant theme, and the analysis revealed that parents actively sought advice and support from a series of complex and multidimensional relationships with people and resources in their health network. These included not only health care professionals (HCPs) but also personal connections, lay individuals, and resources. The composition and development of these asthma networks occurred over time and were determined by several key factors: satisfaction with their HCP provider; the need for information; convenience; trust and support; self-confidence in management; and parents' perceptions of their children's asthma severity. Conclusions: By exploring parents' asthma networks, this study uncovers the complex relationship between HCPs and the family and friends of parents of children with asthma, and provides new insight into the intimate and parallel …

Interventions aimed at optimizing parents' ability to manage their children's asthma could be strengthened by better understanding the networks that influence these parents' choices when managing asthma. There is a need to explore the asthma networks of parents of children with asthma; specifically, to gain insights into whom parents select to be within their networks and why; how individuals within parents' networks influence the way in which they manage their children's asthma medications; and the factors driving the development of these networks. --- What was learned from the study? It adds to our depth of understanding of the sources of information/support with which parents of children with asthma engage, and for the first time articulates the relative importance and influence that parents place on different sources of health information. It emphasizes the major gap in our guidelines, which, despite their evolution, continue to be framed in a biomedical model of health care and fail to address the needs of parents. It supports the need for a collaborative approach to the management of pediatric asthma, involving medical and non-medical individuals. --- INTRODUCTION Asthma is one of the most common chronic conditions in the Australian pediatric population, affecting up to 20.8% of children at some stage in their childhood, with 11.3% of children having a current asthma diagnosis [1,2]. The burden of pediatric asthma is significant: high rates of unscheduled emergency department visits and hospitalizations, sleep disturbances in one-third of children, 60% reporting absenteeism from school, study, or other activities [1][2][3], and considerable extended burden for the whole family through, for example, missed days of work, sleep deprivation, and anxiety [4,5]. Parents of children with asthma play a key role in their asthma management; however, current data suggest that parents are falling short in their abilities to optimally manage the condition [6].
Despite receiving education, in some cases parents still identify a need for more education and information about medication use from professionals [7,8], demonstrating significant gaps in medication knowledge [9][10][11], as well as in adherence [12][13][14] and inhaler technique [15][16][17]. This need for information has persisted for years, despite parents having access to healthcare professionals (HCPs) [8]. Parents also have a large circle of non-medical individuals and resources to whom they turn [18]. Most importantly, poorly managed asthma has important consequences for the family, health care systems, and the child. A lack of parental knowledge and skills has ramifications for children's understanding of asthma, especially as children grow older and learn to self-manage their condition, and therefore this should be addressed: it is important that children start with a solid knowledge base. Research has established that parents establish their own sources of health information, i.e., their own "health networks", relating to their children's asthma medication management [18]. These networks include professional connections (44% of networks), i.e., their health care professionals (HCPs), such as general practitioners (GPs), respiratory specialists, pediatricians, pharmacists, and hospital staff; personal connections, such as spouses, family, and friends (42% of networks); and impersonal connections, such as lay individuals (school staff, work colleagues; 14% of networks) and resources (information obtained from the internet or patient support groups, including asthma organizations). Exploration of these networks highlights that parents' health networks are large, complex, and variable, with the influence of HCPs being just as significant as that of non-HCPs on the management of their children's asthma medications [18]. The asthma networks of participants ranged from two to ten connections, with an average of five. The most commonly nominated connection was with general practitioners (GPs), followed by family members and the internet. When parents were asked how influential the connections in their health networks were, professional connections represented 53%, personal connections 36%, and impersonal connections 11% [18]. What this research does not tell us is the role these connections play: although parents have specific preferences for particular health connections, it remains unknown on what basis they selected specific individuals for health advice and how these individuals may impact the strategies parents employ in managing their children's asthma medications. We hypothesize that different individuals/resources in parents' networks play different roles in influencing the decisions and choices parents make in management, and that this may not be related to the importance parents place on each. Understanding these connections and their importance to parents is critical to our understanding of parent perspectives and decision-making. This will enable HCPs and the community to better support the optimal management of their children's asthma.
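As an illustration only (not the authors' instrument or code), the egocentric network data described above can be represented as lists of categorized 'alters' per parent, from which network size and composition figures such as those reported in [18] can be computed; all names and categories below are hypothetical examples.

```python
# Illustrative sketch of egocentric asthma networks: each parent has a list
# of (alter, category) pairs. Names and categories are hypothetical.
from collections import Counter

networks = {
    "parent_01": [("GP", "professional"), ("spouse", "personal"),
                  ("internet", "resource"), ("teacher", "lay")],
    "parent_02": [("pharmacist", "professional"), ("friend", "personal")],
}

# Average network size (the study reports a range of 2-10, average 5).
sizes = [len(alters) for alters in networks.values()]
print(f"average network size: {sum(sizes) / len(sizes):.1f}")

# Composition: share of all connections falling in each category.
counts = Counter(cat for alters in networks.values() for _, cat in alters)
total = sum(counts.values())
for category, n in counts.items():
    print(f"{category}: {100 * n / total:.0f}% of all connections")
```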
The overall aim of this study was to gain a deeper understanding of the asthma networks of parents; specifically, to gain insights into: i) the role of individuals/resources within parents' asthma networks, their level of importance to parents, and how they influence the way in which parents manage their children's asthma medications; and ii) the factors driving the development of these networks. --- METHODS --- Study Design The study was conducted between January and May of 2017. It was based on novel empirical data describing the composition of the asthma networks parents established within the context of managing their children's asthma medicines [18]. Building on this research, this study adopted a qualitative approach to drive an in-depth exploration of the connections, relationships, and influences in these asthma networks. The project was approved by the University of Sydney Human Research Ethics Committee (Project No: 2015/762). Methods were performed in accordance with the consolidated criteria for reporting qualitative research and with relevant regulations and guidelines [56]. --- Setting and Sampling Frame The sampling frame included parents of children with asthma who had previously participated in research in this line of inquiry [18]. These parents were contacted because they had previously expressed an interest in being part of future studies, and they were screened for eligibility. Selection was based on a set of inclusion criteria (Table 1). Once eligibility was established, all parents provided written consent prior to participation. --- Data Collection Data collection occurred in two parts and was conducted by one researcher (PSA): i. Participant demographics and baseline data. Participant demographic and baseline data were collected and included: age, gender, asthma history, highest level of education, and occupation. The child's level of impairment due to asthma was assessed using the Functional Severity Questionnaire (FSQ) [19]. ii. Semi-structured interview. Prior to the commencement of this study, participants had identified individuals and resources with whom/which they engaged around their child's asthma medicines [18]. These individuals and resources, identified in parents' asthma networks, were referred to as 'alters' [18]. For this study, participants were required to reflect on the 'alters' that they had previously identified within their asthma network. A semi-structured interview guide (Table 2) was developed based on empirical evidence [18, 20-23], the theory of self-management, and the theory of pediatric medication autonomy [24][25][26][27][28][29][30][31][32][33][34][35]. The interviews were recorded on digital media devices and transcribed verbatim. --- Data Analysis Participant Demographics and Baseline Data Descriptive statistics were used to summarize participant demographic and baseline data. The FSQ [19] consists of six questions: five use 5-point Likert scales, and one requires a yes/no response, where 4 points were given for yes and 0 points for no. The raw sum of the six question scores was then calculated, and a severity score of low, mild, moderate, or severe was allocated. --- Semi-Structured Interview Transcribed interviews were reviewed to identify descriptive and contextual information. Deductive and inductive approaches were used to analyze content, identify categories, and arrange them into themes [47].
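Returning briefly to the FSQ scoring described under Data Analysis, the following is a minimal sketch of that computation, assuming the five Likert items are coded 0-4; the severity cut-offs used here are hypothetical placeholders, since the source names the four categories but does not report the actual thresholds.

```python
# Hedged sketch of the FSQ scoring described above. Likert coding (0-4)
# and the severity cut-offs are assumptions, not values from the paper.
def fsq_score(likert_items, yes_no_item):
    """likert_items: five responses, each coded 0-4.
    yes_no_item: True scores 4 points, False scores 0 (per the paper)."""
    assert len(likert_items) == 5
    assert all(0 <= item <= 4 for item in likert_items)
    total = sum(likert_items) + (4 if yes_no_item else 0)
    # Hypothetical band edges; the paper only names the four categories.
    for upper, label in [(6, "low"), (12, "mild"), (18, "moderate")]:
        if total <= upper:
            return total, label
    return total, "severe"

print(fsq_score([2, 1, 3, 0, 2], True))  # -> (12, 'mild') under these cut-offs
```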
A deductive approach was used to explore participants' selection of individuals and resources in their children's asthma medication management, an investigation of their role, their relative importance to participants, and their impact on the child's asthma medications and on other relations within the network. An inductive approach was used to identify concepts and themes associated with the development of asthma networks. Data collection and analysis occurred concurrently, enabling further exploration of emerging themes. To ensure reliability and validity, data were independently reviewed (PSA, EA, SBA, LC, and BC) to develop inductive and deductive codes. These included any issues, topics, or ideas discussed and raised by participants. Deductive codes were developed from topics in the interview guide and research literature, while inductive codes were developed from the data. --- RESULTS --- Participant Characteristics A total of 26 parents of children with asthma participated in this study. The average time for each interview was 25 min, ranging from 15 to 40 min. Participant demographics and baseline data are summarized in Table 3. Mothers represented 100% of participants (n = 26). The mean age of participants was 42 ± 7 years (mean ± SD), and that of the participants' children was 10 ± 4 years. In this study, 46% (12/26) of participants had asthma themselves. Of the participants' children, 77% were aged between 4 and 12 years and 23% between 12 and 18 years. Of these children, 62% were female and 73% had mild asthma as evaluated by the Functional Severity Questionnaire (FSQ) [19]. Reporting of past experience with health care utilization indicated that all children (26/26, 100%) had been hospitalized at least once for their asthma in the past; 69% (18/26) … Overall, participants reported wide differences in the nature and level of influence/importance of the different individuals/resources identified. Exploration of these networks uncovered a series of complex and multidimensional relationships, and highlighted that some relationships/individuals truly influenced the decisions made by participants, others filled a gap in knowledge and understanding, others were convenient relationships, and some connections were unrelated to the child's asthma but supported parents' ongoing needs. The specific roles and subsequent influences of individuals/resources are presented under four categories: healthcare professional (HCP) connections, personal connections, lay individuals, and resources. These relationships and influences are discussed in detail below, with examples from participant responses in Table 4. --- HCP Connections HCP connections included general practitioners (GPs), specialists (respiratory and pediatric), pharmacists, and hospital staff. Participants considered the GP to be "officially in charge" of their child's asthma, and GPs were reported to serve a wide range of roles (18/26, 72%). These included the diagnosis of asthma, including physical examinations (inspection of the chest and upper airways); being the first point of call in recognizing and confirming any respiratory symptoms; and re-confirming a hospital diagnosis of asthma post discharge if the hospital was the first point of call. There was complexity reported around the diagnosis role, as participants reported that GPs were hesitant to confirm a diagnosis of asthma at a young age (under 5 years of age).
A diagnosis was often only confirmed by the GP after a hospital visit, when symptoms had exacerbated during an acute attack or flare-up, leaving parents often feeling "frustrated" (11/18, 61% of parents whose primary provider was their GP). GPs were also described as actively involved in the prescribing of asthma medications, commonly taking a "trial and error" approach to determine the most suitable medication for the child. Some participants specifically noted that their GP tended not to provide information about all possible medication side effects or the reasons for prescribing a particular medication. Specifically, GPs were reported not to provide day-to-day management advice, which participants expected would be discussed. A very small number (5/18, 19%) of parents reported that their GPs supplied a written asthma self-management plan and conducted inhaler technique assessments and training. The GP also left participants with many unanswered questions and with concerns about treatment options and the medications their children had been prescribed. Participants reported that this impacted their willingness to give their children medication and made them more cautious in taking on the management suggestions of the GP. Six participants expressed that they now only see the GP for prescription renewals. Specialists, who were seen by a proportion (12/26, 46%) of participants, were seen to deliver "specialized care" as "experts in the field". They were "respected" by all participants in their care, who described their advice as "valued". This advice made participants feel "confident" that the medication prescribed and the management recommended were the most appropriate for their child. Specialists were involved in the diagnosis of asthma, especially when participants did not receive a definitive diagnosis from their GP. Specialist diagnosis of asthma involved monitoring of respiratory symptoms, lung function tests, and trialing of asthma medications for symptom relief. Specialists were reported to consider the role of allergy in asthma and initiated immunotherapy if required. They were also involved in medication management, which entailed medication dose adjustments, providing written asthma self-management plans, trialing different asthma medications, and adjustment of medications. Table 4 presents quotes supporting the perceived role of individuals/resources within parents' asthma networks and how they influence the way in which parents manage their child's asthma medications; for example, under HCP connections, General Practitioner: "He had his first bad asthma attack and that's when we were told he was an asthmatic. He had to be hospitalized for our GP to finally realize he had asthma! We were petrified and annoyed that that was what it took for him to make up his mind!" (Participant 5). Hospital staff, such as medical practitioners and nurses, provided emergency asthma care during acute exacerbations, which participants viewed as "lifesaving" (11/26, 46%). Their influence, however, went far beyond the acute management of the child's asthma. In these circumstances, hospital staff were highly influential, and participants reported that they played a role in the way they administered asthma medication to their children post hospital admission, i.e., participants modeled all future medication administration on what they saw in the hospital under emergency circumstances.
When mentioned by participants (14/26, 54%), the pharmacist played a fundamental role for all in the supply of medications. Beyond this, a dichotomy of roles was described. A majority of participants (10/14, 71%) reported limited potential for the pharmacist to contribute towards their child's medicine management and turned to the pharmacist only for medication supply, having infrequent interactions with them. They reported that information about medications, inhaler technique training, and management suggestions were covered by their GP or specialist and required no further support. Further participants reported being uninterested in the standard questions and common advice provided. For a smaller minority (4/14, 29%), the roles reported included medication information and advice, inhaler technique education, and assistance with prescription issues (dealing with incorrect dosages, confirming directions, providing emergency medications when doctors could not be seen). Emotional support and referral to other HCPs were also reported. Some participants described a pharmacist as "dependable", making them feel "confident" in the way they administered medications to their children and helping them understand the importance of medications and "taking the orange inhaler [reliever] everyday". Further excerpts from Table 4 illustrate dissatisfaction with GP care: "The doctor talked without any pause to allow me to ask questions. They were not interested to focus on listening or pay close attention to (the) questions asked, (and) when I was finally able to ask, he was always cutting me (off) while I was in the middle of asking or explaining my concerns. I just got so frustrated." (Participant 23); "Even like doctors sometimes they misjudge, one doctor gave him the Redipred for three days which he didn't need to take and at the end he needed antibiotics for a bacterial infection, I wasn't very happy or pleased. He didn't even explain why he needed it!" (Participant 20); "See he [GP] didn't even tell me to use the spacer at the beginning so there was a little kid trying to take a puffer because when I was a kid it wasn't an aerosol it was the twisty powder so I didn't know about the spacer thing." --- Personal Connections Personal connections included family (16/26, 62%) and friends (10/26, 38%) and featured throughout participants' asthma networks. Participants frequently encountered ongoing challenges in the management of their children's asthma medications, and while they would interact with HCPs occasionally to rectify these issues when in need of a professional opinion, they interacted with family and friends on a more regular basis, as they lived and socialized with most of these individuals daily. Family (9/26, 35%) and friends (5/26, 19%) who did not have asthma themselves were not reported to be influential in participants' decision-making around their child's asthma medications; however, they were still reported to play an important role in the network. They were involved in physically assisting the participant with the practical aspects of their child's asthma care. During events such as an asthma attack or symptom exacerbation, participants turned to these individuals, who could assist them in an emergency, watch over their children, and provide continuing assistance with daily life tasks. This included monitoring the child's asthma for acute symptoms when in their care, identifying any increase in asthma symptoms that the participant may have overlooked, and aiding with the administration of medications. Having family and friends who were always available in all situations was invaluable to participants' ongoing management of their children's asthma. They also provided participants with emotional support for the feelings of anxiety and fear that resulted from caring for a child with asthma. The "support" of family and friends was described with regard to their influence in triggering the participant to seek professional assistance, especially when the child's asthma symptoms worsened. More importantly, family and friends were found to influence participants' choices of HCPs: participants' selection of HCPs was often influenced by family and friends' recommendations. For family (7/26, 27%) and friends (5/26, 19%) who either had asthma themselves or had a child with asthma, their role was similar to that of other family and friends (described above) but also involved the sharing of personal lived experiences, stories, and insights. Spouses who had asthma themselves were identified by participants as role models for their children when it came to administering medications on their own without assistance from parents. Participants reported that it was helpful to turn to these individuals, who had similar experiences, as they provided realistic advice, and participants valued this sharing as they did not believe they received it from HCPs. These individuals also provided practical advice regarding medication use, such as advice on inhaler device use and technique, as well as medication dosages. Their high level of influence on participants' medication management was conveyed in the way participants often relayed management-related information heard from family and friends back to HCPs. They also compared the management advice received from their HCPs with each other and took on … Excerpts from Table 5 illustrate the Trust and Support theme: "In general, ladies I know, their kids have asthma and we talk to each other about it. I tell them that I'm using the Ventolin on my kids and we start borrowing it from each other like it's a toy or something if their kids are sick. Without bothering to go to the doctor because we know it's worked for our own kids and they trust us. They start using it on their kids!" (Participant 12); "With her [friend] I've talked about it a lot; she's seen the worst-case scenario, and I've known her all my life, so I feel comfortable talking with her. So, she's had David, worst-case scenario, and I know that I can trust what she has to say and she's provided me with great advice in the past that's worked. Also referred me to a great specialist! She was also the first person that I knew who also had a child with asthma." (Participant 7); "Of course, they [GP] helped me and I will never go to another doctor again because he introduced me to all the medication and this worked for me and my son is very well. He also answers any questions I have whenever he can at any time of the day, he is always supportive". (Participant 12) --- Lay Individuals Individuals such as school staff and work colleagues were labeled as lay connections (14/26, 54%) in participants' asthma networks. The role of school staff was to administer asthma medication following an individualized written asthma self-management plan, especially when a child was experiencing acute asthma symptoms, and to inform participants of any symptoms their child had experienced while at school.
Outside of that, participants did not perceive them to be influential in any way when it came to managing any aspect of their children's asthma. When it came to participants' work colleagues, they played a supportive role, providing participants with a place to share their experiences, feelings, and emotions. If colleagues had asthma themselves, they shared their stories and insights; however, participants reported that this had no impact on their management strategies or decisions when it came to their child's asthma management.
--- Resources
In addition to professional and personal connections, participants also reported turning to other resources, such as the Internet and pamphlets, for additional asthma medication-related information. Participants (15/26, 58%) in this study frequently reported turning to the Internet for health information both prior to and after their interactions with HCPs. Participants used this resource to find practical information regarding their children's medications, their administration, and side effects, and to find new and upcoming treatment strategies, particularly when their children experienced increased asthma symptoms. Participants reported feeling empowered as a result of access to quick health information. They reported that this influenced their decision-making independently from HCPs in regard to which medications their child should be on, adherence to medications, and when to initiate or cease medications.
--- Factors Driving the Development of Asthma Networks
Inductive analysis of the data identified that the development of these asthma networks occurred over time and was driven by six factors: the level of satisfaction with their primary HCP provider; the need for different information; convenience; trust and support; self-confidence in management; and participant perception of their child's asthma severity. These factors are discussed below and supported by quotes from participants in Table 5.
--- Level of Satisfaction with Their Primary HCP Providers
All participants utilized general practitioners (GPs), often first accessed when their child started experiencing symptoms of asthma. While GPs were considered highly influential, participants had their own individual expectations of their GP. A large proportion of participants (12/18, 67%) expressed dissatisfaction with their GP, articulating that their needs were not being met. Some participants (10/18, 55%) reported that their GP had poor professional communication. They were not given a chance to ask questions, and if they managed to do so, they were interrupted. Others reported that their GP failed to answer their questions and, instead of addressing their concerns, proceeded to ask other questions. Other participants reported that their GP provided inadequate information in relation to at least some aspects of their child's asthma management and treatment, or highlighted the poor quality of the information provided. While participants did not recognize all the gaps and found it difficult to pinpoint and express their exact needs, they expected to be provided with more detailed explanations on medication side effects (long-term side effects), the prognosis of their child's asthma (whether the child will ''grow out of it''), potential complications of living with asthma, health management strategies in case of worsening asthma, and any new upcoming treatments (research in the area).
This was especially the case when they found themselves in an unfamiliar or critical situation that they did not know how to deal with, e.g., being unprepared to recognize or respond to an exacerbation of asthma. Further, participants highlighted that they wanted advice about how to manage their child's asthma on a day-to-day basis. In terms of medicine administration, several participants reported that they were given a demonstration of proper inhaler technique on one occasion, without reinforcement or assessment over time. The majority of participants reported that their child's inhaler technique was never assessed. Few participants reported their GP providing a written asthma self-management plan, and rarely did participants voice that it was explained clearly or updated regularly. In terms of emotional support, most participants explained that doctors failed to be empathetic and to demonstrate an understanding of their ''sense of guilt'', ''anxiety'', and the ''constant worry'' that their child's condition may be causing them. The key connections that participants then sourced were other health care professional connections (such as specialists), family, friends, and the Internet.
--- Trust and Support
Trustworthy and supportive connections were important in shaping participants' asthma networks. Participants reported pursuing management-related advice from those who had previously contributed to their child's asthma care or had personal experience of asthma themselves. Positive interactions with both professional and social connections, such as the provision of effective treatment options, quality information, and successful recommendations, founded a sense of ''trust'' and ''support'' in that connection. Connections that displayed effective communication (through active listening and displaying empathy), honesty, respect, and care for participants and their children helped build trusting relationships. Trusted connections were described as having an important role in expanding participants' networks, which potentially improved their child's asthma medication management.
--- The Need for More Information
While participants continuously reported wanting more information to equip them to deliver ''the best possible care for their child'', their need for more information was driven by a complex multitude of factors and underlying issues. Some participants wished for more information regarding management strategies, which would enable them to feel involved in the management of their child's illness, be more confident, and be able to understand the decisions being made. Feeling that they understood what was happening helped some participants to cope with their child's illness and re-establish a sense of control. Others felt that the information provided by their primary HCP was lacking. This was due to physical barriers, such as a lack of time, or to insufficient information having been provided. Immediately after a child's diagnosis of asthma, many participants reported that they experienced difficulties taking in the information that was presented to them and were left with ''many questions'' after consultations with HCPs. This was due to the large amount of information imparted, causing ''information overload''; feeling ''overwhelmed'' at the realization that their child has a chronic condition; and/or the use of medical jargon.
In all of these instances, participants would turn to as many different individuals and resources as they could to answer their questions.
--- Confidence in Management
Participants who expressed that they did not have an active need to acquire further information from sources other than their HCPs were confident in dealing successfully with the ongoing management of their child's asthma medication. They were ''satisfied'' and ''happy'' with the resources and information they were receiving from their HCPs and felt that they could ''manage all their medications and symptoms'' on their own. These participants reported discussing their child's asthma with fewer people in comparison to others who did not display this same level of confidence.
--- Perception of Their Child's Asthma Severity
The level of interaction with, and selection of, individuals/resources within participants' asthma networks were also influenced by participants' perceptions of their child's asthma severity. Participants who viewed their child's asthma as mild in comparison to that of other children kept their asthma networks small and rarely interacted with family and friends in regard to medication management of their child's asthma. In fact, they reported a desire to keep all interactions about their child's asthma to a minimum. They reported that they were ''confident'' to manage their child's condition on their own. In contrast, participants who perceived their child's asthma to be ''poorly controlled'' actively sought both physical and emotional ''support'' from people already known to them or formed new connections. In looking for support, they were actually looking for ways to increase their ''confidence'' across all aspects of asthma medication management. These parents sought out additional information from different sources, especially when their child was experiencing an asthma flare-up. Consequently, these participants had larger asthma networks.
--- Convenience
When it came to seeking medication advice, participants made decisions about which individual or resource to utilize based on the level of convenience. The more convenient the source, the more frequently participants reported utilizing and interacting with it. The Internet was an easy and convenient source of medication information both prior to and after interactions with HCPs, especially when HCPs lacked time during consultations or participants wanted to re-affirm something that they had heard. By accessing asthma-related websites, participants were able to diagnose and treat symptoms promptly when they were unsure what to do. When it came to HCPs, those who were easily accessible and could provide quick information and advice, such as the pharmacist, were utilized often when in need of reliable information or emergency medication. These HCPs were also held in high esteem in these situations, particularly hospital staff, as they were easily accessible to provide treatment in life-threatening situations.
--- DISCUSSION
This study explored the asthma networks of parents of children with asthma and has identified the role of individuals within these networks, how they influence the way in which parents manage their children's asthma medications, and the factors driving their development. A qualitative exploration of parents' networks utilizing the principles of social network theory [22,48] was employed in this study and has not been conducted previously in this cohort.
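To make the notion of an ego-centred asthma network concrete, the following is a minimal sketch of how one participant's network could be represented for exploration. It is an illustration only, not the study's analysis method or data; the connection names, types, and roles are hypothetical, drawn loosely from the categories described in the results.

```python
# A minimal sketch (illustrative, not study data) representing one parent's
# asthma network as an ego graph, using connection types and roles of the
# kind described in the results.
import networkx as nx

G = nx.Graph()
ego = "parent"
connections = [
    ("GP", {"type": "professional", "role": "diagnosis and treatment"}),
    ("specialist", {"type": "professional", "role": "medication decisions"}),
    ("pharmacist", {"type": "professional", "role": "medication supply"}),
    ("grandparent", {"type": "family", "role": "physical support"}),
    ("friend with asthma", {"type": "friend", "role": "lived experience"}),
    ("internet", {"type": "resource", "role": "information"}),
]
for name, attrs in connections:
    G.add_node(name, **attrs)
    G.add_edge(ego, name)

# Group connections by type, e.g. to compare network composition across
# participants (larger networks were linked to perceived severity).
by_type = {}
for node, data in G.nodes(data=True):
    if node != ego:
        by_type.setdefault(data["type"], []).append(node)
print(by_type)
```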
Important outcomes of this study highlight that, when it came to the management of their children's asthma medications, parents actively sought advice and support from a series of individuals with whom they had complex and multidimensional relationships, including HCPs, personal connections, lay individuals, and resources. Some individuals directly influenced asthma medication management decisions made by parents, others provided emotional or informational support for asthma management, and some connections were unrelated to the child's asthma but instead provided physical support for the needs of parents. The development of these asthma networks occurred over time and was influenced by parents' satisfaction with their primary HCP provider; trust and the support provided; the need for different information; convenience; their own confidence in managing the condition; and their perceptions of their children's asthma severity. The in-depth exploration of each connection in parents' asthma networks found that parents have a multitude of needs that are fulfilled by different individuals/resources. What parents perceived their children's asthma needed, and who/what they felt was capable/available to fulfil these needs, strongly influenced their selection of health connections. Support from family and friends was orientated to all aspects of the parents' everyday lives, while friends and family members who either had asthma or a child who did shared their advice and experiences, filling in gaps in knowledge, helping parents in management decisions, and ultimately playing a key influential role. The Internet, a convenient and easily accessed resource for health information, aided in asthma management decisions. Other relationships in parents' networks, such as schools, while not influential, were important to parents and provided them with the physical support they needed. In contrast, HCPs focused on the diagnosis and treatment of the condition, providing professional advice for their children's asthma. Generally, parents saw GPs and specialists as gatekeepers to their children's asthma medications, truly influencing the asthma medicine management decisions made by parents. Specialists were highly respected and instilled a sense of confidence in parents that the medications prescribed and the management recommended were the most appropriate for their child. While pharmacists had the knowledge and skills to assist parents with asthma management, for most parents they were relationships of convenience, where they simply supplied what the doctor had prescribed, having little influence on the parents' management decisions. This is not surprising, as other research has shown patients continue to prefer physician-led services [22,36]. For those parents who were more engaged with the pharmacist, it was clear that the extent of the pharmacist's influence was meaningful. They respected their advice and help and spoke highly of them. In trying to better support the needs of parents, we may need to re-consider where the role of HCPs currently fits in and how this role may evolve over time to meet all of parents' needs. Exploring these connections has provided new information that HCPs may act on to better engage and utilize patients' existing health connections so that they better meet parents' needs. Parents' health connections were shaped by experiences of the condition, their experiences with their HCPs, and by the child growing older and receiving more autonomy in the management of their asthma.
Ultimately, a parent's relationship with their primary provider, such as their GP, played the biggest role in a parent's choices and selection of connections within their health networks. Despite the primary provider's fundamental and influential role, parents' satisfaction with that provider, the trust they had in the relationship, whether their information needs had been met, whether education was delivered consistently with that of other HCPs, and whether a sense of confidence in managing their children's asthma had been instilled determined the extent to which they sought out other connections to meet their needs. Unfortunately, a large proportion of parents were disappointed with their primary providers. This disappointment in primary providers is mirrored in a study by Peterson-Sweeney et al., where parents voiced the lack of education they received from their GPs [37]. However, this study highlights that this disappointment has resulted in parents seeking out more health information and support from various individuals or resources to fill this void, and that it is affecting their willingness to communicate with their primary provider as well as with other HCPs. This is concerning, as other connections may not be equipped to provide good-quality professional advice and information. Ensuring these relationships are positive and that HCPs are meeting parents' needs is important. This can be achieved by focusing on the patient experience at every level to increase patient satisfaction, especially at first contact. Positive first interactions with parents as well as patients tend to strongly shape the experiences and emotions that follow, especially when children are first diagnosed at the hospital. Including the child and parent in a triadic discussion as part of an HCP's clinical approach helps foster a positive relationship [38]. When initial consultations with their HCP go well, a positive cycle begins with their HCP; when they go poorly, as was the case for a large number of parents in this study, it may be difficult to recover. Negative relationships, poor experiences, and a lack of communication have been shown in adults to limit a professional's ability, or willingness, to identify a patient's health beliefs; education needs; a patient's confidence in managing their own asthma; relevant, non-medical lifestyle factors impacting control; etc. [11,38-42]. Only by identifying these patient factors through good communication and positive, trusting relations with both parents and children will HCPs be able to present more meaningful, targeted information for patients, which will, in turn, promote better asthma understanding and more effective self-care, and enable professionals to support asthma patients appropriately [38]. In short, a lack of good professional communication, trust, and support leads to poor asthma management for parents of children with asthma and, in turn, facilitates network expansion. Further, patients become the sum of all their experiences over time, and it is important that these experiences remain positive, so that they are better equipped to manage their asthma, leading to improved pediatric asthma outcomes. This study highlights the important link between health and social relations in the management of children's asthma medications. The influence of social support on chronic illnesses such as asthma can be both positive and negative [48-53].
It is difficult to determine the quality and accuracy of the advice and information that family and friends provide to parents. This is especially worrisome if parents are prioritizing this advice over HCP advice. Health care professionals need to consider the influential impact family and friends have on parents. Management education should incorporate skills and strategies designed to minimize social influences that hinder the optimal management of a child's asthma and to enhance social interactions that facilitate successful management [41,43,44]. Studies have shown that patient behavior is highly influenced by family members and friends, resulting in decreased risk of serious illness or death [48,49]. Adults have reported turning to family members and friends prior to seeing an HCP, or using the Internet to learn from the personal lived experiences of individuals who share similar conditions, especially through participation in online health communities [50,51,54,55]. Through the provision of support and the exchange of health information, social connections are proposed to promote healthy eating [52], provide emotional support to help adults with asthma better cope with their condition [48], and motivate them to participate in preventive care programs [53]. Future research needs to develop a deeper understanding of the social context in which this occurs for parents of children with asthma, which would allow for the development of tailored interventions that consider the specific roles of family and friends to maximize positive outcomes. Given the complex needs of parents in the ongoing management of their children's asthma, such as symptom monitoring, medication adherence, lifestyle changes, and emotional stress, this study conveys that having one health connection alone was insufficient to provide all the support parents needed and required. It is clear from our results that HCPs are no longer the sole source of input. A model in which HCPs solely deliver health care may no longer accurately reflect health-care delivery, and providers need to consider the influence of lay advice on parents. This highlights the need to develop pediatric-specific guidelines for asthma management that foster a 'community' approach to management, and the need for uniformity in the education provided by all HCPs. With parents interacting just as frequently with family, friends, and the Internet as with HCPs, there is much more work to be done to effectively engage parents with HCPs to ensure that they are receiving correct information and are properly supported to provide optimal care for their children's asthma. This is especially highlighted by the fact that all of the children in this study had been hospitalized at least once for their asthma, already showing a potential failure of the primary care system. Given the increasing demands on the time of primary care doctors, particularly in the area of chronic disease management, referrals to specialists may need to be considered more often, or community pharmacists and nurses may need to be involved to a greater extent. It has been repeatedly shown throughout the literature that adults with asthma have benefited from pharmacists' interventions in their asthma care [57-60]. A study by Saini et al. [57] highlighted that pharmacists who deliver specialized models of asthma care to patients are also able to meet their needs.
Policies need to enable other health professionals, such as pharmacists, to contribute to optimal chronic disease management if primary providers are not able to meet the demands. However, pediatric asthma is a chronic disease that is often under-represented and under-prioritized by policymakers, government bodies, healthcare professionals, and researchers. While this study has added valuable insights to our understanding of the key influences on parents' management of their children's asthma medications, its limitations need to be considered: (i) this study was conducted in only one particular Australian district; (ii) all the parents interviewed were female, although the latest Census data show that fewer than one in 20 families have a father who is the primary carer [45]; (iii) the vast majority of the participants had very mild asthma, with the majority symptom-free for the past month; (iv) although interview data were independently reviewed, interpretive bias should be considered. However, interpretation is never completely independent of a researcher's beliefs and preconceptions [46]. To minimize bias, regular meetings were held to critically compare and discuss findings. Further, these findings are based on a cross-sectional case study, and parent relationships may change over time and under different circumstances, leading to different asthma networks. A time-series design or an ethnographic study should be considered to examine parents' asthma networks at different time points during childhood in order to look at the development of these networks over time. This would allow us to gain further understanding of how these asthma networks evolve over time.
--- CONCLUSIONS
In conclusion, this study has uncovered the important and underestimated role of parents' non-medical sources of information/support. It also highlights the complex relationship between HCPs and parents' non-medical sources of information/support, and the intimate and parallel influence they have on a parent's decision-making when it comes to the management of their child's asthma medications. Simply put, HCPs are not able to provide all the support parents need and require when it comes to the management of their children's asthma medications. This study supports the need for a collaborative approach to the management of pediatric asthma, involving both medical and non-medical individuals, with uniformity in education between all individuals involved, and highlights the need to develop pediatric-specific guidelines for asthma management that foster a 'community' approach to management.
--- Disclosures
... for developing educational presentations from Teva and Mundipharma; and has received honoraria from AstraZeneca, Boehringer Ingelheim, and GlaxoSmithKline for her contribution to advisory boards/key international expert forum. Dr. Vicky Kritikos has received honoraria from AstraZeneca, GlaxoSmithKline, and Pfizer.
Compliance with Ethics Guidelines. This project was approved by the University of Sydney Human Research Ethics Committee (Project No: 2015/762) and participants provided their informed consent to participate in the interviews.
Data Availability. The data are not publicly available as consent was not obtained from participants to share data outside of the requirements of the research process.
--- Authorship.
All named authors meet the International Committee of Medical Journal Editors (ICMJE) criteria for authorship for this article, take responsibility for the integrity of the work as a whole, and have given their approval for this version to be published.
This study examined the relationship between racial discrimination and use of dental services among American adults. We used data from the 2014 Behavioral Risk Factor Surveillance System, a cross-sectional health-related telephone survey of a nationally representative sample of adults in the United States. Racial discrimination was indicated by two items, namely perception of discrimination while seeking healthcare within the past 12 months and emotional impact of discrimination within the past 30 days. Their association with dental visits in the past year was tested in logistic regression models adjusting for predisposing (age, gender, race/ethnicity, income, education, smoking status), enabling (health insurance), and need (missing teeth) factors. Approximately 3% of participants reported being discriminated against when seeking healthcare in the past year, whereas 5% of participants reported emotional impact of discrimination in the past month. Participants who experienced emotional impact of discrimination were less likely to have visited the dentist during the past year (odds ratio (OR) 0.57; 95% CI 0.44-0.73) than those who reported no emotional impact in a crude model. The association was attenuated but remained significant after adjustment for confounders (OR 0.76, 95% CI 0.58-0.99). There was no association between healthcare discrimination and a dental visit in the past year in the fully adjusted model. Emotional impact of racial discrimination was an important predictor of use of dental services. The provision of dental health services should be carefully assessed after taking account of racial discrimination and its emotional impacts within the larger context of social inequalities.
Introduction
Racial inequalities in the use of dental services and their potential implications are widely documented in the literature [1-6]. These inequalities are mostly attributed to socioeconomic and cultural/behavioral attributes [7-9], but the question as to how the observed racial gaps in utilization of dental services could be a consequence of racial discrimination is not fully explicated [10,11]. Racial discrimination in healthcare is an individual's appraisal of unfair treatment in a medical setting based on race, colour, or national origin [12,13]. Discrimination can occur at individual, institutional, and structural levels [14]. It can operate through different mechanisms, subsequently leading to poor health outcomes and underutilization of services [13,15]. First, institutional or structural racism arising from racially discriminatory policies and institutional culture can lead to differential educational/employment opportunities and, thus, access to health-promoting resources [15]. Second, racism often leads to the development of implicit racial bias and explicit racial stereotypes, which influence clinicians' behavior, decision making, and communication processes, and thus contribute to the differential treatment of members of the same institution [16,17]. Third, racism can act as a psychosocial stressor that operates through physiological, psychological, and behavioral pathways, with consequences for health [16]. Moreover, negative experiences have the potential to influence quality of healthcare, interpersonal trust, medical adherence levels, and treatment delays [18,19]. Finally, racial discrimination deeply entrenched within healthcare settings may alter the patient's perception of healthcare interactions and the pattern of their healthcare access [20]. To be more precise, internalization of unfair treatment may engender involuntary responses such as anxiety or increased vigilance, and voluntary coping responses such as disengagement from situations or environments that negatively stereotype people [21]. This may inhibit certain individuals from using a wide range of needed health services, including dental services. Few studies have evaluated the impact of racial discrimination on oral health, and those that have provide conflicting evidence. While racial discrimination was associated with experience of toothache among pregnant Aboriginal Australians [22], self-reported dental problems among Canadian immigrants [23], and tooth loss among pregnant Aboriginal Canadians [24], no association was found with periodontitis among Hispanic Americans [25], tooth loss among Brazilian civil servants [26], or oral health-related quality of life among pregnant Aboriginal Canadians [24]. Evidence on the impact of racial discrimination on utilization of dental services is even more limited. Racial discrimination was not associated with a dental visit within the past year among pregnant Aboriginal Canadians, but it was associated with being asked to pay for dental services by dental care providers despite entitlement to free dental care, and with seeking care off-reserve or out of the community [24]. Among pregnant Aboriginal Australians, racial discrimination was associated with having never visited a dentist before; however, no adjustment for participants' socioeconomic status was attempted [27].
A broader implication of the negative impact of discrimination comes from qualitative research, which showed that disadvantaged caregivers of Medicaid-enrolled children cited discriminatory behavior attributed to racism as one of the key barriers to accessing dental services [28,29]. Using the well-established Andersen behavioral model of health services use [30,31] as a theoretical framework for understanding factors affecting an individual's decision to use dental services, this study explored the relationship between racial discrimination and use of dental services among American adults. It was hypothesized that individuals experiencing racial discrimination would be less likely to use dental services than those without such experiences.
--- Materials and Methods
--- Data Source
The Centers for Disease Control and Prevention's Behavioral Risk Factor Surveillance System (BRFSS) is an annual, state-based, random-digit-dialled telephone health survey of the non-institutionalized US civilian population, aged 18 years or older, living in all 50 states, the District of Columbia, Puerto Rico, and Guam. In each geographic region, the BRFSS uses disproportionate stratified sampling for the landline telephone survey and random sampling for the cellular telephone survey. For the landline telephone survey, interviewers collect data from a randomly selected adult in every participating household. For the cellular telephone survey, interviewers collect data from adults residing in a private residence or college housing who have a working cellular telephone. The data were collected using the BRFSS standardized questionnaire via computer-assisted telephone interviews (CATI). The BRFSS questionnaire consists of core, optional, and state-added question modules. Interviewers were trained as per the BRFSS protocol, and the confidentiality of participants was maintained by ensuring their anonymity to the interviewers [32]. We used data from the 2014 BRFSS because this was the most recent survey including questions on both oral health and racial discrimination. In 2014, 464,664 interviews were completed, representing a weighted median response rate of 47% (48.7% and 40.5% for landline and cellular telephones, respectively). Response rates varied from 25.1% to 60.1% across states. The present analysis is limited to participants in the three states (Minnesota, Mississippi, and New Mexico) that included the optional module on reactions to race in 2014. There were 23,272 participants who completed the optional module on reactions to race and the core module on oral health. After exclusions due to missing values on covariates, the study sample included 11,950 adults.
--- Selection of Variables
The outcome variable of interest was utilization of dental services, which was assessed with the following item: "How long has it been since you last visited a dentist or a dental clinic for any reason?" with multiple response options (within the past year; 1 year but less than 2 years ago; 2 years but less than 5 years ago; 5 or more years ago; never). Responses were re-coded into those who visited a dentist in the past year and those who did not (reference group). Two main explanatory variables indicating discrimination were taken from the reactions to race module of the questionnaire.
First, perceived racial discrimination while seeking healthcare was assessed with the following question: "Within the past 12 months when seeking healthcare, do you feel your experiences were worse than, the same as, or better than people of other races?" with multiple response options (worse than other races; the same as other races; better than other races; worse than some races, better than others; only encountered people of the same race). For the analysis, 'worse than other/some races' responses were coded as having experienced discrimination. The second indicator of discrimination was the emotional impact of discrimination, which was assessed using the question: "Within the past 30 days, have you felt emotionally upset, for example angry, sad, or frustrated, as a result of how you were treated based on your race?" with yes/no response options. Based on the Andersen behavioral model of health services use [30,31], several covariates were included as potential confounders of the association between racial discrimination and use of dental services. They were predisposing (age, gender, race/ethnicity, income, education, and smoking), enabling (health insurance), and need factors (number of missing teeth). Self-identified race/ethnicity was categorized into five groups: non-Hispanic White, non-Hispanic Black, Hispanic, Asian/Other, and Multiracial. Yearly income was categorized into four groups: less than $20,000; $20,000 to $34,999; $35,000 to $74,999; and $75,000 or more. A binary variable for health insurance coverage was based on whether participants were enrolled in a healthcare plan (including Medicare and Medicaid). Educational attainment was categorized as less than high school, high school or equivalent, some college, and college graduate. Current smokers were respondents who reported smoking at least 100 cigarettes during their lifetimes and currently smoking "every day" or "some days". Former smokers were respondents who reported ever smoking at least 100 cigarettes but currently smoking "not at all". Never smokers were respondents who reported smoking fewer than 100 cigarettes during their lifetimes. The original categories of missing teeth (none; 1 to 5; 6 or more, but not all; all) were dichotomized into missing teeth and no missing teeth.
--- Statistical Analysis
Analyses were weighted to produce nationally representative estimates and accounted for the sampling design (clustering and stratification). Only cases with complete data on all variables were included in the analysis. Analyses were performed in Stata 14 (StataCorp, College Station, TX, USA). We first described the characteristics of the sample according to predisposing (demographic factors, socioeconomic position, smoking status), enabling (health insurance), and need factors (missing teeth). Both indicators of racial discrimination were also included. Secondly, the proportion of individuals who had a dental visit in the past year was compared between categories of the two indicators of racial discrimination and covariates using the Chi-squared test. Thereafter, we constructed two logistic regression models to assess the relationship between each indicator of racial discrimination (racial discrimination when seeking healthcare and emotional impact of racial discrimination) and utilization of dental services in the past year (outcome). Odds ratios (OR) were used as the measure of association.
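Before turning to the models, the following is a minimal sketch of the variable recoding described above. The column names and example values are hypothetical; the authors performed the actual recoding in Stata 14.

```python
# A minimal sketch of the variable recoding described above, using
# hypothetical column names (the actual recoding was done in Stata 14).
import pandas as pd

df = pd.DataFrame({
    "last_dental_visit": ["<1y", "1-2y", "<1y", "5y+", "never"],
    "hc_experience": ["same", "worse_other", "better", "worse_some", "same"],
    "upset_by_race_30d": ["no", "yes", "no", "yes", "no"],
})

# Outcome: dental visit within the past year (1) vs. not (0, reference)
df["dental_visit_1y"] = (df["last_dental_visit"] == "<1y").astype(int)

# Exposure 1: 'worse than other/some races' coded as healthcare discrimination
worse = ["worse_other", "worse_some"]
df["healthcare_discrimination"] = df["hc_experience"].isin(worse).astype(int)

# Exposure 2: emotional impact of discrimination in the past 30 days
df["emotional_impact"] = (df["upset_by_race_30d"] == "yes").astype(int)
print(df[["dental_visit_1y", "healthcare_discrimination", "emotional_impact"]])
```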
We first tested the association between racial discrimination when seeking healthcare and use of dental services in a crude model, which was subsequently adjusted for predisposing (sex, age, race/ethnicity, education, income, and smoking status; Model 1A), enabling (health insurance; Model 1B), and need (number of missing teeth; Model 1C) factors. No statistical interactions were included in the models. The same set of models was developed to test the association between the emotional impact of racial discrimination and use of dental services (labelled Models 2A to 2C).
--- Results
The characteristics of the study sample are presented in Table 1. The percentage of dental visits within the past year was 68.3% (95% CI 67.1-69.5). In addition, 2.7% (95% CI 2.3-3.1) of adults reported healthcare discrimination and 5.0% (95% CI 4.5-5.6) reported emotional impact of discrimination. Dental visits within the past year were more common among women, older, non-Hispanic White, and non-smoking participants. Those with more education, greater income, health insurance, and no missing teeth were also more likely to have visited the dentist in the last year. Dental visits were less common among those who reported racial discrimination while using the healthcare system, or reported emotional impact of discrimination, than among those who did not (Table 2). Those who experienced racial discrimination when seeking healthcare were less likely to have visited the dentist in the past year than those who did not have such an experience (OR: 0.57, 95% CI 0.41-0.79). Similarly, those who experienced the emotional impact of discrimination were less likely to have visited the dentist in the past year than their counterparts (OR: 0.57, 95% CI 0.44-0.73). The association between healthcare discrimination and use of dental services was fully attenuated after adjustment for predisposing factors (demographics, socioeconomic position, and smoking status) and remained unchanged after subsequent adjustments for enabling (health insurance) and need (missing teeth) factors (Table 3). In the fully adjusted model, the odds ratio was 0.88 (95% CI 0.62-1.25). By contrast, the association between emotional impact of discrimination and use of dental services remained significant after adjustments for predisposing, enabling, and need factors. In the final model, those who experienced the emotional impact of racial discrimination were 25% less likely (OR: 0.75, 95% CI 0.58-0.99) to have visited the dentist in the past year than those without such experience.
--- Discussion
This study supported our hypothesis that racial discrimination is negatively associated with the utilization of dental services. Those who experienced the emotional impact of racial discrimination were less likely to have used dental services within the past year. This finding was robust to multiple adjustments for known determinants of utilization of dental services, which were carefully chosen according to the Andersen behavioral model of health services use [30,31]. Most of the current research on racial/ethnic inequalities in use of dental services gives scant attention to the role of racial discrimination through which this association might arise [11]. In this study, both healthcare discrimination and the emotional impact of discrimination were negatively associated with dental visits in crude regression models.
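To make the modelling strategy described in the Methods concrete, the following is a minimal sketch of the crude and adjusted logistic models (the Model 2A-2C pattern) on synthetic data. This is an illustration, not the authors' Stata code: frequency weights stand in for the BRFSS survey weights, the clustering and stratification handled by Stata's survey commands are ignored, and some covariates are omitted for brevity.

```python
# A minimal sketch of crude vs. adjusted weighted logistic regression on
# synthetic data; freq_weights is only a rough stand-in for the complex
# survey design used in the actual analysis.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "dental_visit_1y": rng.integers(0, 2, n),
    "emotional_impact": rng.integers(0, 2, n),
    "age": rng.integers(18, 90, n),
    "sex": rng.choice(["male", "female"], n),
    "insured": rng.integers(0, 2, n),
    "missing_teeth": rng.integers(0, 2, n),
})
weights = rng.uniform(0.5, 2.0, n)  # stand-in for survey weights

crude = smf.glm("dental_visit_1y ~ emotional_impact", data=df,
                family=sm.families.Binomial(), freq_weights=weights).fit()
adjusted = smf.glm("dental_visit_1y ~ emotional_impact + C(sex) + age"
                   " + C(insured) + C(missing_teeth)", data=df,
                   family=sm.families.Binomial(), freq_weights=weights).fit()

print(np.exp(crude.params["emotional_impact"]))     # crude OR
print(np.exp(adjusted.params["emotional_impact"]))  # adjusted OR
print(np.exp(adjusted.conf_int().loc["emotional_impact"]))  # 95% CI
```

Exponentiating the logistic coefficients yields the odds ratios reported in Tables 2 and 3; an OR below 1 indicates a lower likelihood of a dental visit in the past year.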
Interestingly, the association between healthcare discrimination and dental visits was fully accounted for by controlling for predisposing factors, which emphasizes the role of socioeconomic position in explaining this association. Having greater financial resources might give individuals more options for healthcare providers and, as such, prevent discriminatory experiences. The above finding also suggests two related points. One is that it might be easier to report the emotional impact of racial discrimination than actual experiences of discrimination. As with other subjective experiences, racial discrimination could be misperceived or overlooked, which can lead to underestimating or overestimating its actual occurrence. The other is that the emotional impact of discrimination can arise from a broader set of life experiences, not only those related to seeking healthcare. This argument explains why the prevalence of the emotional impact of discrimination was higher than that of healthcare discrimination despite the former having a shorter recall frame than the latter (30 days versus 12 months). What is important to remember is that such experiences might prevent people from using other services, thus perpetuating a vicious circle. As a contributory factor to racial inequalities in oral health, discrimination based on race should be viewed as a social determinant of oral health [11]. However, what remains unclear from this line of research is why, how, and to what degree experiences of discrimination in everyday social settings, including a dental clinic or a hospital, influence the lives of racial minorities [2]. One popular explanation offered by the literature is based on the conceptual framework laid out by Major and O'Brien's model of stigma-induced identity threat, which explains how subjective perceptions of discrimination within the healthcare system influence receipt of health services [21]. People may disengage from health services because of discrimination and insensitivity from healthcare staff (on the grounds of gender, race, culture, social class, sexuality, or even symptom-related factors such as drug use or homelessness). They may feel alienated by negative discriminatory experiences, by being subject to more paternalistic and coercive treatments, and by cultural or language barriers in assessments [33,34]. Collectively, this would lead to mistrust [19], non-compliance with treatment, and partial or complete disengagement from a dominant cultural institution such as healthcare, further aggravating the unmet need for care. The current findings begin to fill an important gap in the evidence regarding racial discrimination and the consequential emotional impact of discrimination on dental visits using quantitative data. In general, the uptake of dental services is conceptualized in terms of acceptability, affordability, or availability, and the contributory roles of perceived systemic-level and interpersonal discrimination are often overlooked. Dental visits may reflect a wider experience of racial discrimination prevailing in society in general and in the healthcare system in particular. It is imperative to accurately account for the impact of discrimination and to assess contemporary policies that pave the way for racial discrimination. Our findings underscore the need to understand the patterns of discrimination and the ways in which it influences engagement with or disengagement from health services in a longitudinal framework.
If understood, this could move policies forward and potentially facilitate the development of effective strategies to deliver non-discriminatory and equitable healthcare services. A few caveats should be acknowledged in interpreting the findings. First, the cross-sectional nature of the data limits the extent to which causal inferences can be drawn from our findings. Second, the single-item measure of perceived discrimination in healthcare pertained to experiences during the past 12 months, which intrinsically excluded people who did not seek care in the past year because of access-related issues. Third, the moderate response rate of the BRFSS might raise concerns about selection bias, although the probability weights used to correct for the differential probability of selection and non-response might have corrected for this to an extent. Finally, the inconsistent adoption of the optional reactions to race module across states limits the generalizability of our findings. Characteristics of the states in which the module was used may differ in significant ways from those of states that elected not to use it. Therefore, the present findings represent valid relationships between the variables of interest but cannot be viewed as representing the entire adult population of the United States.
--- Conclusions
In a large population survey of adults in the United States, this study shows that the emotional impact of racial discrimination was associated with lower uptake of dental services. This association persisted even after accounting for various predisposing, enabling, and need factors. The preliminary findings from this study underscore the need to understand the patterns of discrimination and the ways in which it influences engagement with or disengagement from health services.
Understanding pedestrian dynamics and the interaction of pedestrians with their environment is crucial to the safe and comfortable design of pedestrian facilities. Experiments offer the opportunity to explore the influence of individual factors. In the context of the project CroMa (Crowd Management in transport infrastructures), experiments were conducted with about 1000 participants to test various physical and social psychological hypotheses focusing on people's behaviour at railway stations and crowd management measures. The following experiments were performed: i) Train Platform Experiment, ii) Crowd Management Experiment, iii) Single-File Experiment, iv) Personal Space Experiment, v) Boarding and Alighting Experiment, vi) Bottleneck Experiment, and vii) Tiny Box Experiment. This paper describes the basic planning and implementation steps and outlines all experiments with parameters, geometries, applied sensor technologies, and pre- and post-processing steps. All data can be found in the pedestrian dynamics data archive. Keywords: CroMa project; controlled experiments; train platform; crowd management; single-file; personal space; boarding and alighting; bottleneck; tiny box; 3D motion capturing; electrodermal activity; heart rate; luggage
Introduction
This paper gives general information about the experiments performed within the project CroMa (Crowd Management in transport infrastructures) [1]. In addition, experiments from other projects such as CrowdDNA [2] were carried out in the context of this experiment series, as well as experiments that cannot be assigned to a third-party-funded project. The paper includes information about the overall organization, the experimental site, the procedure and timeline, the participants, and the data collection techniques, and gives an overview of all experiments. Further detailed information on single experiments, especially data analysis, will be presented in focused papers on these experiments. All data gathered by the sensors used will be made freely accessible in the pedestrian dynamics data archive [3] with the publication of the first scientific results at the latest. General data mentioned in this paper, such as the overall composition of the test person sample based on the handed-out questionnaires and the measurement course, can also be found in the data archive [4]. The CroMa project itself is focused on developing and enhancing different strategies, such as building regulations, crowd management, and innovative action strategies, to increase the efficiency of pedestrian facilities in railway and underground stations. These strategies aim to increase the robustness and efficiency of railway stations during peak load and to avoid crushes in the event of critical crowd densities. Research within the framework of CroMa includes the investigation of pedestrian flow in traffic facilities and the study of pedestrian behaviour within dense crowds. These research fields have also been assessed by means of the large-scale experiments described in this paper, in which several external (structural) and internal (characteristics of the test person sample) parameters have been varied in a controlled manner.
--- Preparation of Experiments
The CroMa experiments were conducted from October 8 until October 11, 2021 in the Mitsubishi Electric Hall (MEH) in Düsseldorf, Germany. The MEH is a multipurpose event hall with an interior hall size of 3500 m² and an additional main and side foyer. The planning and preparation were divided into two interlocked parts: the overall organization and provision of rooms and material, and the scientific planning of the individual experiments. Several preparatory meetings were held to discuss issues related to the variety of tested scenarios and statistical significance relative to the available time and personnel. A temporal and spatial setup was developed to account for the level of information given to the participants about the aims of the study as well as their learning effect over the course of the day. The experimental plans were simplified and concretized, and the times for conducting the experiments, announcements, walking routes, filling in the questionnaires, and taking small breaks were calculated to test the feasibility of the setup. Originally, the experiments were planned for March 2020 and had to be postponed due to the growing SARS-CoV-2 (Covid-19) pandemic. When the experiments were conducted in October 2021, the setup was revised regarding compliance with safety measures and expanded to include a hygiene and safety concept.
--- General Framework
The experiments were performed in a circuit training model.
This means that three experimental setups were performed at the same time at three different sites, and participants were guided from one site to the other in designated groups. The three groups were marked with wristband colors: red, green, or blue. The experimental sites were labeled alphabetically 'B', 'C' and 'D' (Fig. 1). The experimental sites were separated by black curtains that shielded the view but were not soundproof. To limit the view onto the experiment, the waiting areas within the experimental sites were shielded by curtains as well. Each day (days 1-3) consisted of six experimental time slots lasting 1 hour each; therefore, participants attended each experimental site twice a day, but never participated twice in the same experimental setup, as those changed from one time slot to the next. A rough time schedule is shown in Fig. 2.
The following experiments were performed in site B:
• Train Platform Experiment (day 1-3; Sec. 3.1)
The following experiments were performed in site C:
• Crowd Management Experiment (day 1-3; Sec. 3.2)
• Single-File Experiment (day 4; Sec. 3.3)
• Personal Space Experiment (day 4; Sec. 3.4)
The following experiments were performed in site D:
• Boarding and Alighting Experiment (day 1-3; Sec. 3.5)
• Tiny Box Experiment (day 1-3; Sec. 3.6)
• Bottleneck Experiment (day 4; Sec. 3.7)
Day 4 was different in that participants were divided into only two groups: group yellow consisted of 80 people and group red of 120 people. Group red took part in the experiments at site D in all six time slots. Group yellow took part in the experiments at site C for time slots 1 to 3 and also came to site D for time slots 4 to 6.
--- Recruiting Process
Participants were recruited by spreading information via various channels, including printed and social media as well as e-mail lists from former experiments. The information included a short summary of the project, dates, and payment. A QR code and link to a registration website were provided. The website included further information on the conditions of participation and days available, as well as a registration form. The conditions of participation (originally in German) included:
• minimum age of 18 years and recommended age of younger than 75 years
• body height of 1.5 m to 2.0 m
• not being affected by limited mobility or claustrophobia
• wearing dark clothes without lettering and not wearing large bags/backpacks
• agreeing to being filmed and to the material being published in a data repository
After submitting the registration form, potential participants received an e-mail confirming that their registration had been received. People were assigned to days based on their statements of availability and evenly divided among the days if they were available on multiple days. People were only able to register for one of the first three days and additionally for the fourth day. After allocation to the days, participants received an e-mail with allocation information. Two weeks prior to the experiments, a reminder was sent including current information on the hygiene and safety concept. The hygiene concept to protect against infection by Covid-19 was a necessary requirement of the authorities and institutions involved. One week prior to the experiments, participants received an e-mail with a reminder of their personally assigned dates and important things to remember and bring along with them (e.g. ID card, comfortable shoes, dark clothes).
--- Registration and Measurement Course
On arrival, participants entered the Mitsubishi Electric Hall and proceeded to the registration desk in the main foyer.
During registration, identity documents were checked again, participants signed forms consenting to the conditions of participation, and they were handed a green hat, a personal ID code (ArUco code, dictionary 6X6_1000 [5]) with a corresponding number on a wristband (Fig. 3 a), as well as a clipboard with questionnaires and a declaration of informed consent to be filled out. The wristbands had three different colors (red, blue, green) and were handed out alternately. That way, participants were divided into three experimental groups on arrival. Participants who arrived in social groups were therefore split among the experimental groups, although it cannot be completely ruled out that people who knew each other ended up in the same experimental group. The wristbands were labeled with numbers (Fig. 3 a) referring to the number associated with the personal ID code. The number was used for all questionnaires throughout the course of the day to allocate sensor information and trajectories to a participant without revealing personal information. After registration, participants entered a course that led them through a sequence of stations. At these stations, information was collected and subjects were provided with markers and utensils:
• measuring height
• applying shoulder markers to the top (Fig. 3 c)
• putting on the green hat and attaching the personal code (Fig. 3 c,d)
• checking the correct fit of the hat with the code and telling people to leave the hat on for the entire day
• time for questions
• final check that the declaration of informed consent was signed and the questionnaires were filled out correctly
• targeted addressing of suitable people to ask whether they were willing to wear additional sensors (3D motion capturing suit, heart rate sensor)
After completing the measurement course, participants could check their bags at a cloakroom and proceed to a large waiting area.
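The hats with ArUco codes make it possible to map overhead video back to individual participants via the wristband numbers. As a hedged illustration (not the project's actual trajectory pipeline, which is described in its Sec. 4.2), the following sketch detects such markers in a single overhead frame, assuming OpenCV 4.7 or newer and an illustrative file name:

```python
# A minimal sketch (assuming OpenCV >= 4.7) of detecting the 6x6 ArUco
# markers worn on the hats in one overhead frame; detected marker ids
# correspond to the participants' personal ID codes.
import cv2

frame = cv2.imread("overhead_frame.png")  # illustrative file name
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_6X6_1000)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())
corners, ids, rejected = detector.detectMarkers(gray)

if ids is not None:
    for marker_id, c in zip(ids.flatten(), corners):
        cx, cy = c[0].mean(axis=0)  # pixel centroid of the marker
        print(f"code {marker_id}: centre at ({cx:.1f}, {cy:.1f}) px")
```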
--- Test Person Sample
For the experiments, we accepted 1500 people over the duration of the four days. Of these, 1038 people attended the experiments. The sample of test persons included people from 18 to 85 years of age (median = 31, σ = ±17), with 47% male, 51% female, and 2% not specified. Some of the distributions of the demographic data collected via questionnaire are shown in Fig. 4. Data that had to be measured, such as body height, body weight, and shoulder width, were collected by employees during the measurement course (Fig. 3 b). On average, the participants were 1.75 m tall (σ = ±0.1 m), weighed 79 kg (σ = ±21 kg), and had a shoulder width of 45 cm (σ = ±4 cm). Female participants were on average shorter, lighter, and narrower at the shoulders. Further personal data were collected via questionnaires. Differences in the distributions for the different days and experimental groups can be found in App. A.1. More data can be found in the archive [4].
--- Notes Related to Covid-19 Pandemic
At the time recruiting started, as well as at the time of the experiments, Germany was at the beginning of a third Covid-19 wave (Fig. 5). Of the enrolled people, 90% declared that they had been fully vaccinated. Due to the pandemic, a number of precautions were taken:
• a hygiene and safety concept was developed by the team and approved by the crisis committee of Forschungszentrum Jülich and the competent regulatory authority of the city of Düsseldorf
• participants had to be recovered, vaccinated, or tested (referred to as "3G" in Germany)
• everyone was tested at the time of arrival and people were only allowed to enter the building with a negative test result
• participants had to wear surgical masks at all times (except when eating or drinking at a seat in the waiting area)
In the first year of the Covid-19 pandemic, Germany's regulations prohibited people from gathering in large groups and required them to keep a distance of 1.5 m from people not belonging to the same household. During the summer of 2021, these restrictions were dropped for vaccinated people. In public, it could be observed that people largely kept wearing masks and kept their distance from other people on a voluntary basis. Because our experiments were designed for situations in which high densities may occur, we attempted to mitigate the behavioural changes described above. In order to get people accustomed to larger groups and higher densities again, we performed an 'icebreaker' experiment. Participants were not informed about the icebreaker experiment, which was performed as part of the walkway to the first experiment. After registration, people waited in a large waiting area (Fig. 1, green shading). When registration was finished, up to 100 people at a time were asked to walk to the first experiment (based on the color of their wristband). A person in charge, responsible for an experimental area, walked them into a corridor with two doors (Fig. 1, blue shading). The person in charge asked people to wait until the last participant had arrived in the corridor. The rear door was closed when everyone had arrived (the density in the room was about 1 P/m²). Then the person in charge waited another few minutes before releasing the participants into another open space. The icebreaker was performed once every morning for each group. To assess the extent to which participants were influenced in their actions by thoughts of the pandemic, everyone was given a questionnaire at the end of each experimental day about perceptions of various risks (focusing, in particular, on perceptions of the risk of Covid-19 infection) and about the potential influence of the Covid pandemic on the experiments. The questions were answered on a 7-point scale from "strongly disagree" (1) to "strongly agree" (7). The questionnaire started with general items about whether participants felt uncomfortable in the crowds during the experiments. Participants were then able to rate how much the following seven factors influenced their discomfort: crowding, concern about contracting Covid, concern about contracting another illness, unclear instructions, physical exertion, stress caused by the experimenters, and being with many people. Two more questions directly addressed mental engagement with Covid. In addition, participants could indicate in which setting (e.g., in the registration course) and in which type of experiment (e.g., bottleneck) they were most concerned about Covid, with a yes/no answering format. In the two final questions, subjects estimated whether they would have behaved differently before the pandemic and indicated whether they had already been in a crowd since the onset of the pandemic, prior to the experiments.
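A minimal sketch, with hypothetical item names and made-up responses, of how such 7-point items can be summarized into the means (M) reported next:

```python
# A minimal sketch of summarizing 7-point Likert items
# ("strongly disagree" = 1 ... "strongly agree" = 7) into means and
# standard deviations; item names and values are illustrative only.
import pandas as pd

responses = pd.DataFrame({
    "discomfort_in_crowds": [2, 3, 1, 4, 2, 3],
    "factor_crowding": [4, 3, 2, 5, 3, 4],
    "factor_covid_concern": [2, 1, 3, 2, 2, 1],
    "would_have_behaved_differently": [3, 2, 4, 2, 3, 2],
})
summary = responses.agg(["mean", "std"]).T.round(2)
print(summary)  # one row per item: M and SD
```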
N = 1000 participants filled out the questionnaire on the four experimental days. Descriptive statistics are reported for the questionnaire data (Tab. 12). The results of the questionnaire suggest that the Covid pandemic did not have a major impact on the answers of the participants. The concern about infection was reported to be low (mean value M between 1.98 and 2.45). Subjects self-reported that their actions were not significantly different from before the pandemic (M = 2.69). A self-selection effect certainly plays an important role in these results: presumably only people who were not very concerned about Covid signed up for the experiments. This can also be seen in the last question: the statement that they had already been in a crowd elsewhere was often agreed with (M = 4.02). Furthermore, these results reflect the extensive safety measures, which apparently reduced the fear of contagion. In general, a low level of discomfort was reported in the experiments (M = 2.61). The factors that caused the most discomfort were crowding (M = 3.37) and physical exertion (M = 3.49). Subjects thought most strongly of Covid during the morning registration and measurement course (38 % answered yes). We explain this by the fact that the registration came immediately after a mandatory Covid test, so the topic of the pandemic was very present at that moment. On days 1-3 participants thought about Covid most often at experiment site C (26.2 %), followed by site D (16.1 %) and B (10.4 %). On day 4 participants thought about Covid quite a lot at experiment site D (37.9 %) and very little at experiment site C (6.6 %).

--- Configuration of Experiments

--- Train Platform Experiments

At this experimental site two different experiments were performed. The first investigated the waiting behaviour of people on a simulated train platform under varying physical or social psychological factors; the second investigated types of social influence in ambiguous situations on train station platforms. The outer dimensions of the platform were 7 m x 20 m x 0.8 m. The ascent and descent were realized by stairs secured with railings and organized in a way that resembled one primary access and three stairs to "board the train", on which only a few people could stand at the same time. The stairs at the narrow side were 3 m wide while the stairs at the long side of the platform were each 1.5 m wide. The smaller stairs were movable and the positions for safe attachment to the platform were only visible to the helpers moving the stairs. The platform's edge was marked with adhesive tape (width 0.05 m). White adhesive tape marked the safety distance (0.8 m) from the edge of the platform, which is standard for platforms in Germany. A loudspeaker box with recordings of railroad sounds was placed under the platform during all of the experiments to reduce the influence of sound from the neighbouring experimental sites. Cameras were mounted to record the experiment. They are listed in Tab. 2. Experimental runs in which 3D motion capturing data were recorded are listed in Tab. 11. The mood of the participants (cf. Sec. 4.7) was recorded for all runs. Trajectories were generated as described in Sec. 4.2. The coordinate origin was centred with respect to the short platform border and located 0.5 m from it (Fig. 6a), with the y-axis pointing to the left, parallel to the longer edge and aligned with the midpoint of the large stairs. The data of the Train Platform Experiments are provided online [7].
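As a pointer for working with the published data, the following minimal sketch loads head trajectories and plots them in the platform coordinate system described above. The file layout and column names are assumptions for illustration, not the archive's documented format.

```python
# Minimal sketch: plotting head trajectories in the platform coordinate
# system (origin 0.5 m from the short platform border, y-axis along the
# long edge). File layout and column names are illustrative assumptions.
import pandas as pd
import matplotlib.pyplot as plt

traj = pd.read_csv("platform_run.txt", sep=r"\s+",
                   names=["id", "frame", "x", "y", "z"])

fig, ax = plt.subplots()
for pid, person in traj.groupby("id"):
    ax.plot(person["x"], person["y"], lw=0.5)

ax.set_xlabel("x [m]")
ax.set_ylabel("y [m]")
ax.set_title("Head trajectories, Train Platform Experiment")
plt.show()
```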
--- Waiting Experiments

In this experiment instructions were given in the waiting area of the experimental site directly in front of the entrance to the experiment. The waiting area was separated from the experiment by a black curtain, so the participants could not see the setup during the waiting phase and the instructions. In runs in which questionnaires were completed, this was done after the respective runs in a second waiting area on the opposite side. With the instructions, the participants were informed that the train they intended to board would arrive on the left-hand side of the platform. The instructions read as follows: "Imagine you are at a train station. Behind those curtains is the platform, which you will enter through the stairs. You plan to take the train that will arrive in a few minutes at the platform at the left-hand side". The following parameters were varied (a detailed list of performed runs and combinations of parameters can be found in App. A.3):
• number of participants: 40, 80-100, (140-180)
• obstacle on platform: none (blank), wall, house (Fig. 6 b-d)
• inflow: every 2-3 seconds, in groups of ten
• waiting time on platform: 2, 4 minutes

The degree of familiarity of the participants, as well as whether talking was allowed or prohibited, was not manipulated, unlike in the experiments on decision making described below. The wall measured 0.6 m x 3.6 m x 2 m and the house 3 m x 3.6 m x 2 m. Both were aligned symmetrically to the borders of the platform. In runs with 40 people, participants waited either 2 or 4 minutes. The waiting time was counted from the moment the last participant entered the platform. Additionally, the inflow sequence to the platform was varied for the experimental runs without obstacles. The participants entered individually or in groups of ten. The groups of ten entered the platform at intervals of 35 seconds. For the platform without an obstacle and for the setup with the house, additional runs with a larger number of participants were performed. In those runs the group of participants assigned to the corresponding experimental slot entered the platform first. After those participants had positioned themselves on the platform, participants from another group were brought in. Thus the total number of passengers on the platform was unknown to all participants.

--- Experiments on Decision Making

For this experiment, the participants of each run were instructed directly in front of the platform. They entered the platform using the stairs on the long side. On the platform, two areas were marked, one slightly larger than the other. Participants were instructed to wait in the larger area but were not told how they should determine which area was the larger one (because this was not easily visible). The instructions read as follows: "Imagine you are at a train station and want to leave on the next train. The platform is marked with a white safety strip at the long sides. Two areas are marked in yellow, the so-called 'yellow squares', one on the right and one on the left. You will only be able to board the train from the larger area. Please proceed to the larger of the yellow squares". Depending on the experimental condition the instruction was continued with "Please, do not talk to each other during the whole experiment" or "You are allowed to talk to each other during the experiment".
The curtains to the waiting area of the experimental site prevented the participants who did not take part in a specific run from observing the active participants carrying out the task. Questionnaires were completed for all runs, both before entering and after leaving the area. The following parameters were varied (a detailed list of performed runs and combinations of parameters can be found in App. A.3):
• number of participants: 10 (small groups), 23-41 (other groups)
• special design: 2 marked areas (Fig. 6 e)
• inflow: all at once per group
• waiting time on platform: up to 5 minutes
• degree of familiarity of the participants with each other: no connection at all, short acquaintance before starting the experiment, being in the same group for hours before the experiment
• special announcements: talking allowed, talking prohibited

The marked areas were placed on the platform asymmetrically (Fig. 6 a or e) and had a size of 35 m² (left) and 36.7 m² (right).

--- Crowd Management Experiments

This series of experiments investigated the extent to which physical parameters such as the number of line-up gates or the width and the shape of the barrier layouts influence the formation and the density of a queue. For this purpose, an admission situation under the assumption of "admission to the concert of your favorite artist" was simulated using barriers and line-up gates typical of those used at large events. Furthermore, non-physical parameters were considered. The following parameters were varied (a detailed list of performed runs and combinations of parameters can be found in App. A.4):
• setup structure grid (narrow): straight, small bend, 90° right bend, with and without lines (floor markings) (Fig. 7, 8 c-)
• setup structure grid (wide): none, lines, signs (above the line-up gates construction) (Fig. 7, 8 a-b)
• no. of open line-up gates: 1, 3
• motivation: low (enough time and a guaranteed seat ticket), high (general admission ticket)
• norm specification: none, 70%, 85%, 95%, 100%
• special announcements: no interruption (nI), with interruption (wI) (along with HRV sensors (Sec. 4.4))
• reference runs to record free walking speed (solo ref)

Instructions according to the motivation and the run were given directly in front of the entrance to the experiment before participants could see the experimental setup. At the time of the instructions, participants were separated from the experiment by a black curtain. If a norm specification was given, every participant received a slip of paper with a note on it saying either "In the following experiment, behave as you always would" or "Imagine that you are a very selfish person. Push yourself to the front during the experiment". The percentage refers to the proportion of paper notes prescribing normal behaviour. After each run, participants were directed back to the area where the instructions had been given and questionnaires were distributed. HRV sensors (if worn) were collected after each group had finished their respective runs. Within the framework of the experiments, a comparison of the estimated and the physically measured density of people was carried out. Some assessors had previously been trained on the Level of Service concept and given further knowledge of density estimation, while others were untrained. Time-dependent densities ranging from low to medium to high were estimated [8] and documented.
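To make the notion of a physically measured density concrete, a sketch of one common approach follows: counting heads inside a fixed measurement polygon per video frame and dividing by its area. The trajectory format and the polygon are illustrative assumptions, not the method documented for these experiments.

```python
# Minimal sketch: time-dependent measured density as head count inside a
# measurement polygon divided by its area (persons per square metre).
import pandas as pd
from shapely.geometry import Point, Polygon

# Illustrative 4 m x 5 m measurement area in front of the line-up gates.
area = Polygon([(0, 0), (4, 0), (4, 5), (0, 5)])

traj = pd.read_csv("queue_run.txt", sep=r"\s+",
                   names=["id", "frame", "x", "y"])

inside = [area.contains(Point(x, y)) for x, y in zip(traj["x"], traj["y"])]
density = traj.assign(inside=inside).groupby("frame")["inside"].sum() / area.area

print(density.describe())
```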
In addition, positive and negative factors influencing the density estimation of the observers were surveyed using questionnaires. The outer dimensions of the experimental area were 7 m x 20 m. The line-up gates construction was 2.5 m x 3.3 m x 1.18 m (length x width x height) with a passage width of 0.5 m each. The line-up gate construction in the 90° bend setup was 30 cm wider. Police barriers with dimensions of 2.0 m x 0.94 m x 1.1 m were used to set up the structure grid. Cameras were mounted to record the experiment and are listed in Tab. 3. Experimental runs in which 3D motion capturing data were recorded are listed in Tab. 11 and runs in which HRV data were recorded in Tab. 10. The mood of the participants (cf. Sec. 4.7) was recorded for all runs. Trajectories were generated as described in Sec. 4.2. The coordinate origin was located where participants entered the experimental area, with the y-axis pointing in the walking direction and aligned with the midpoint of the middle entry gate (Fig. 7). The data of the Crowd Management Experiments are provided online [9].

--- Single-File Experiments

This series of experiments investigated how walking speed and density affect physiological arousal. For this purpose, subjects were equipped with electrodermal activity (EDA) and heart rate variability (HRV) sensors (cf. Sec. 4.3, 4.4). Additionally, the effect of gender on walking speed and density was investigated. The experiments were performed in the setup of classical single-file experiments, in which people walk in ovals behind each other. Overtaking was prohibited. The instructions read: "Please walk one behind the other in the oval until a signal to stop is given. Do not push or overtake". The following parameters were varied (a detailed list of performed runs and combinations of parameters can be found in App. A.5):
• number of participants
• gender: male, female, mixed
• running order by gender: random, alternating

Instructions to start or to stop were given by a person standing between the two experimental setups without using technical amplification. Participants walked in the oval for at least 2 minutes or, at very high densities, until they had walked one round. The width of the ovals' walking paths was 0.8 m with a circumference of 14.97 m as measured along the middle of the indicated walking width (Fig. 9a). The course was indicated by colored markers on the floor. Two oval experiments were performed at the same time, separated by a wooden wall (Fig. 9b). Cameras were mounted to record the experiment and are listed in Tab. 4. Experimental runs in which EDA and HRV sensors were recorded are listed in Tab. 9 and 10. Trajectories were generated as described in Sec. 4.2. The coordinate origin was located at the lower side of the two ovals, on the axis of the screen wall (Fig. 9a). The data of the Single-File Experiments are provided online [10].

--- Personal-Space Experiments

This experimental series investigated physiological arousal when personal space is violated at low densities. Seven participants were positioned within an area marked on the floor and were then passed by ten other participants (individually or several simultaneously) from all directions without being touched. Passing participants were instructed to "Enter the area and walk around until a signal is given to leave the area" and standing participants to "Stand on one of the floor markings. Then remain in place until a signal is given to leave the area".
All participants assigned to stand at the designated spots were equipped with electrodermal activity (EDA) and heart rate variability (HRV) sensors (cf. Sec. 4.3, 4.4). Instructions were given without technical amplification as the groups were small. Eight runs of four minutes each were performed in total (a list of performed runs can be found in App. A.6). The dimensions of the experimental area were 12.1 m x 5.3 m. The experiment was performed next to the Oval Experiments outlined in the previous subsection. There was no visual shielding between the two experiments and it was possible for participants to pass unhindered between the two experimental sites. Participants with EDA and HRV sensors were placed at the positions indicated as red dots in Fig. 10a. Passing participants were participants who were not currently in runs of the neighbouring Oval Experiment. Questionnaires were completed at the end of the whole experiment set, including the runs of the Oval Experiment. Cameras were mounted to record the experiment and are listed in Tab. 5. Experimental runs in which EDA and HRV sensors were recorded are listed in Tab. 9 and 10. No trajectories were exported for this experiment. The data of the Personal Space Experiments are provided online [11].

--- Boarding and Alighting Experiments

This series of experiments investigated how different parameters influence the boarding and alighting process of a train car. For this purpose, the boarding area of a local train was mimicked. Sliding doors could be opened from outside the experimental area via ropes without interfering with participants. Different parameters were varied (a detailed list of performed runs and combinations of parameters can be found in App. A.7):
• number of persons boarding/alighting/staying in the train car
• groups: none, pairs, groups of 3, groups of 5, mix
• norm: none, 70%, 80%, 100%

Instructions indicating 'the arrival' and 'the departure' of the train as well as the opening and the closing of the doors were given, without technical amplification, by a person standing behind the waiting/boarding pedestrians. Special targeted announcements, when necessary to achieve the study objective, were made via slips of paper or by the investigators directly addressing individuals. These announcements, as well as the handing out of luggage, were done in the waiting area to the left of the experimental area. Persons assigned to a group received sticky dots of the same color and were instructed to stay together during the boarding process. For the variation of norm, the percentage in the list above refers to the proportion of paper notes prescribing 'normal', considerate behaviour, whereas the remaining persons were told that pushing was allowed. Where questionnaires were completed, this was done after the respective runs in the waiting area. The outer dimensions of the experimental area were 20 m x 20 m. The inner dimensions of the train car were approximately 9.2 m x 3 m (the exact dimensions can be extracted from Fig. 11a) and aimed to mimic a typical local train in Germany with the measurements w_door = 1.2 m, w_const1 = 0.5 m, w_const2 = 0.8 m, w_const3 = 4.0 m, w_aisle1 = 0.9 m, w_aisle2 = 2.2 m as indicated in Fig. 11a. Cameras were mounted to record the experiment and are listed in Tab. 6. Experimental runs in which 3D motion capturing data were recorded are listed in Tab. 11. The mood of the participants (cf. Sec. 4.7) was recorded for all runs.
Trajectories were generated as described in Sec. 4.2. The coordinate origin was located on the axis of the front of the bottleneck in the middle between the two bottleneck sides. The data of the Boarding and Alighting Experiments are provided online [12].

--- Tiny Box Experiments

This experimental series investigated the relationship between density and physiological arousal while waiting. For this purpose up to eight participants waited in 'tiny boxes' under different conditions. All participants were equipped with electrodermal activity and heart rate variability sensors (cf. Sec. 4.3, 4.4). The following parameters were varied (a detailed list of performed runs and combinations of parameters can be found in App. A.8):
• number of people in the box
• communication: speaking allowed, speaking prohibited

The tiny boxes were four wooden boxes of 1 m × 1 m floor area and a height of 1.5 m (Fig. 12). Participants could enter and exit the boxes through a one-sided swinging door. The experiments were performed in a delivery channel close to experimental site D (cf. Fig. 1) to shield participants as well as possible from the acoustic and visual influence of the experiments performed at the same time. Participants were chosen based on age and gender from the respective groups taking part in the experiments at site D. Instructions were given without technical amplification as the groups were small. Where questionnaires were completed, this was done after the respective runs in an area in front of the delivery channel. Cameras were mounted to record the experiment and are listed in Tab. 7. Experimental runs in which EDA and HRV sensors were recorded are listed in Tab. 9 and 10. No trajectories were exported for this experiment. The data of the Tiny Box Experiments are provided online [13].

--- Bottleneck Experiments

This series of experiments investigated different physical and social-psychological aspects in a bottleneck scenario. The following experimental parameters were varied (a detailed list of performed runs and combinations of parameters can be found in App. A.9):
• bottleneck width: 0.6 m, 0.7 m, 0.

The experiments were performed on day 4 at experimental site D. The announcer and further observers stood on a scissor lift that was parked behind the bottleneck construction and raised to a height of several meters to provide a good overview. All announcements were made with a microphone connected to a portable loudspeaker. Before each run, to increase the initial density, the participants in the first row were asked to stay in place while everyone else was asked to take one step forward. The intended initial density was 1 P/m². Whenever special targeted announcements were necessary to achieve the study objective, they were made via slips of paper or by the investigators directly addressing individuals. In 'normal' condition runs participants were instructed as follows: "You are in a crowd where people walk through a door at a normal pace. You yourself move purposefully, but without haste." In 'hurry' conditions the instructions were "You are in a crowd where people are in a hurry to pass a door. You yourself are also moving briskly", and in 'full commitment' conditions "You are in a crowd where everyone wants to pass through a door as quickly as possible and pushes their way through. You yourself do everything you can to get to the front and through quickly as well." Fig. 13 shows snapshots of two example experiments and sketches of the bottleneck construction. The outer dimensions of the experimental area were 20 m by 20 m. The bottleneck construction consisted of a 4 m x 2 m x 1 m aluminium frame with gray plastic panels, weighing 250 kg per side. Each side was visually extended to a length of 6 m by adding trade fair walls. Each side was secured against slipping with anti-slip mats and 750 kg concrete blocks which were bolted to the bottleneck construction. Participants were to maintain their motivation until they crossed a finish line 8 m behind the bottleneck. The way to the finish line was marked with barrier tape. Beyond the finish line, participants could return to the line-up area by turning to either side. Where questionnaires were to be completed, this was done after the respective runs in an area to the left of the experimental area. As a safety precaution, a trained crowd manager equipped with an air pressure horn was present at the experiments. The horn was activated whenever a participant indicated discomfort during an experiment by calling out 'stop' aloud, or if the crowd manager himself identified a situation as critical or potentially harmful. At the beginning of the day, all participants were trained in what to do in the case of the horn being activated. The procedure included immediately stopping in the current position without further movement. Designated helpers who were close to the crowd at all times started tapping people on the shoulder once the crowd had come to a full stop. On the shoulder-tap signal, participants were allowed to turn around and move to the far back of the experimental site. The procedure was continued until every person had been tapped on the shoulder. Apart from the test run in which the horn was activated on purpose, two runs (4D250, 4D280) were aborted and resolved in the way described above. Cameras were mounted to record the experiment and are listed in Tab. 8. Experimental runs in which 3D motion capturing and pressure sensor data were recorded are listed in Tab. 11. Pressure sensor data (cf. Sec. 4.5) and the mood of the participants (cf. Sec. 4.7) were recorded for all runs. Trajectories were generated as described in Sec. 4.2. The coordinate origin was located on the axis of the front of the bottleneck walls in the middle between the two bottleneck sides. The data of the Bottleneck Experiments are provided online [14]. The scientific content of some of the bottleneck experiments in this series is part of the CrowdDNA project.

--- Sensors

Different combinations of sensory systems were used in the different experiments. These included camera recordings, electrodermal activity sensors, heart rate variability sensors, pressure sensors (at the wall as well as on participants), 3D motion capturing systems and mood buttons. The design and use of the individual sensors as well as their synchronization in time are described in the following sections.

--- Time Synchronization Between Sensors

Accurate time synchronization is required to reliably link data from multiple sensor sources. Depending on the sensors' technical settings, different technical solutions need to be adopted to enable synchronization. In an ideal setup, all sensors operate at the same frequency, are connected to the same metronome to capture the exact same instant in time, and share the same time code.
However, in reality different sensors operate at different frame rates, cannot all be paired with a metronome, or may drift in time. In order to keep the deviations as small as possible, a global time was introduced and distributed by Tentacle timecode generators [15], and as many sensors as possible were attached to submodule metronomes.

--- Camera and Trajectories

Camera recordings can be used for experiments in many ways. On the one hand, they make it possible to get a qualitative view of behaviour and to reconstruct the actual execution of the day and any deviations from the plan that may have occurred, as well as to reconstruct announcements and their intonation via the audio track. On the other hand, cameras can be used specifically to obtain measurement results, such as extracting walking paths (trajectories) or documenting facial expressions of participants in response to the experiments. In total, 21 cameras were mounted to perform the above tasks. Cameras intended for extracting trajectories and serving documentation purposes were mounted under the ceiling (≈ 8.65 m) facing straight downwards. Camera views overlapped and cameras were mounted in such a way that the occlusion of people was minimized. All cameras used for trajectory extraction were backed up, since image loss would have been fatal for the experiments. The approximate fields of view at head height for cameras mounted in the main experimental sites are shown in Fig. 14 and listed in Tab. 2 to 8. The camera types and settings are listed in Tab. 1.

Trajectories

The trajectory extraction was performed with the pedestrian tracking software PeTrack [17,18] for the cameras indicated in Sec. 3.1 to 3.7. Cameras operated with the Simplylive system produced frame drops, double frames and black frames. The cause could not be determined unequivocally. These artifacts were detected and treated before continuing with the trajectory extraction. Black frames were detected by applying a binary filter to the grey-scale frames of the video and checking whether all pixels were black. Duplicated frames were detected by computing the difference between each frame and the previous one in greyscale. On these differences, DBSCAN [19,20] was used to detect clusters with camera-based parameters. Each frame which did not belong to a cluster was considered to be a duplication of the previous one. Afterwards, the videos were re-encoded with ffmpeg, skipping these erroneous frames. Before exporting the trajectories from PeTrack, the tracking results were used to further improve the output data by interpolating the movement between dropped frames. For this, the displacement of each pedestrian in a frame was computed using the Lucas-Kanade method [21,22]. Computing the ratio between these displacements and the average displacement of the previous frame gave the number of missing frames. To implement the mapping from pixel to real-world coordinates, two types of calibration [23] had to be performed. Intrinsic calibration was performed to take into account the distortion of the lenses and internal hardware combinations. Extrinsic calibration created a transformation map between the camera and the real-world coordinate system and was performed every morning with a ranging pole and an attached levelling unit. The resulting mean re-projection error over all calibration points for all days and cameras was 1.1 cm, with a standard deviation of 0.6 cm and a maximum error of 2.2 cm. However, the values differed greatly depending on the camera.
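The frame-repair step described above can be sketched as follows. The DBSCAN parameters are illustrative stand-ins for the camera-based parameters mentioned in the text, and this is one plausible reading of the procedure rather than the exact implementation.

```python
# Minimal sketch: detecting black frames (all pixels black) and duplicated
# frames (greyscale frame-to-frame differences clustered with DBSCAN;
# frames outside any cluster are treated as duplicates of their predecessor).
import cv2
import numpy as np
from sklearn.cluster import DBSCAN

cap = cv2.VideoCapture("camera_run.mp4")
diffs, black_frames = [], []

ok, prev = cap.read()
prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
frame_no = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    frame_no += 1
    grey = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    if not grey.any():                    # binary test: every pixel is black
        black_frames.append(frame_no)
    diffs.append(float(np.mean(cv2.absdiff(grey, prev))))
    prev = grey
cap.release()

labels = DBSCAN(eps=0.5, min_samples=10).fit_predict(
    np.asarray(diffs).reshape(-1, 1))
duplicate_frames = [i + 1 for i, label in enumerate(labels) if label == -1]
# The erroneous frames would then be skipped when re-encoding with ffmpeg.
```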
The calibration values for the individual cameras are shown in the appendix in Tab. 20. For the cameras used for code reading, recognition in the software PeTrack was performed with the code marker method using the Aruco Code dictionary dict 6X6 1000 [5]. For all other cameras, recognition was performed using the multicolor marker method within PeTrack. After the automatic extraction of trajectories, all runs were manually corrected. To handle the perspective distortion of the cameras and obtain a correct head position in space, the individual height of each person was accounted for; if a code could not be read, a default height of 1.75 m was applied. The different camera views of each experimental area were combined into a single dataset by linear interpolation from the trajectory of one camera view to the trajectory of the other camera view in the overlap region.

--- Electrodermal Activity

Ambulatory sensors (EDA Move4) from the Movisens company [24] were used for measuring electrodermal activity. A total of 28 sensors were used, which were activated every morning and whose data were saved every evening. The EDA Move4 recorded electrodermal activity using the exosomatic method at a constant voltage of 0.5 V. The measurement range is 2-100 microsiemens and the sampling frequency is 32 Hz. The sensor was attached to the non-dominant hand of the subjects using a wristband. Two cables attached to the wristband connected the two measurement electrodes to the sensor. The electrodes were structural non-woven electrodes with special gel/solid gel and a diameter of 55 mm, which were cut to size if necessary. The electrodes were glued to the palm of the hand below the little finger so that the gel surfaces of the electrodes did not overlap. If the electrodes did not hold well, they were fixed with Leukotape. The EDA sensors were always attached by the experimenters and worn for a maximum of one hour. The sensors were not read out between different subjects; the data were separated afterwards by cutting up the individual experiment blocks. The sensor number and the subject number for the day were noted, so that the sensor data could be linked to the remaining data of the subjects. EDA data were recorded in the runs listed in Tab. 9.

--- Heart Rate Variability

Movisens ambulatory sensors (ECG Move4) [25] were used for heart rate measurements. A total of 28 sensors were used, which were activated every morning and whose data were saved every evening. The ECG Move4 records the heart rate with a resolution of 12 bit. The input range is 560 mV (CM), ± mV (DM), with a 3 dB bandwidth from 1.6-33 Hz. The sampling rate is 1024 Hz. In addition to the ECG sensor, the ECG Move4 contains a number of other sensors. These include a 3D acceleration sensor, which records at 64 Hz and has a measuring range of ± 16 g, and a rotation rate sensor with a measuring range of ± 2000 dps, a resolution of 70 mdps and an output rate of 64 Hz. It also has a pressure sensor with a range of 300-1100 hPa at a resolution of 0.03 hPa and a sampling rate of 8 Hz, and a temperature sensor that measures the ambient temperature at a frequency of 1 Hz. The sensor was placed below the chest using disposable electrodes. The electrodes contained a highly conductive wet gel and a high-quality Ag/AgCl sensor. They had a decentred connection to reduce motion artifacts. The ECG sensors were frequently attached by the subjects themselves. The sensors were not read out between different subjects.
The data were separated by cutting up the individual experimental blocks. The sensor number and subject number for the day were noted, so that the sensor data could be linked to the rest of the subject data. Heart rate data were recorded in the runs listed in Tab. 10.

--- Pressure

During the bottleneck experiments on day 4 (Sec. 3.7), two pressure sensors from Tekscan (Pressure Mapping Sensor 5400N [26]) were employed to estimate normal forces within the crowd. Each sensor consists of 1768 measurement cells covering an area of 57.8 cm × 88.4 cm. Before the actual data recording, the sensors had to be calibrated. For this purpose, each sensor was placed horizontally on a table and successively loaded with 10 kg, 20 kg, 30 kg, 40 kg, 50 kg, 95 kg, 110 kg and 120 kg in total. The corresponding pressure values were measured with a sensitivity of S-40 and used for a multi-point calibration. On either side of the bottleneck, a pressure sensor was attached vertically, with its lower edge at a height of 0.97 m (Fig. 15 a). The short side of the sensor was bent around the corner to place 10 cm of the measurement area inside the bottleneck and 47.8 cm in front of it (Fig. 15 b). Teflon foil was spread over the pressure sensors to reduce shear forces and ensure a secure attachment. Each sensor was connected to a laptop recording pressure with the I-Scan software at a sampling rate of 60 fps. Furthermore, two participants were equipped with flexible pressure sensors on their bodies, each with two upper-arm sensors and one sensor for the back. The Xsensor LX210:50.50.05 [27] has 2500 measuring cells providing pressure measurement over an area of 25.4 cm × 25.4 cm on the participant's back. The arm sensor (Xsensor LX210:25.50.05 [28]) covers an area of 12.7 cm × 25.4 cm with 1250 measuring cells. All sensors were calibrated in advance by the manufacturer, resulting in a pressure range of 0.14 N/cm² to 10.3 N/cm². For easy wearing, the three pressure sensors were tucked into designated pockets of a specific T-shirt (Fig. 16) and connected to a tablet. The tablet, which was carried in the chest pocket throughout the experiments, used the software Xsensor Pro V8 to capture pressure at a sampling rate of 25 fps. In order to capture as much pressure as possible at the central part of the back, the volunteers who wore the shirts were 1.91 m and 2.04 m tall. Unfortunately, no pressure data from the Xsensor sensors in the T-shirts were recorded during the experiments.

--- 3D-Motion Capturing

We used the 3D motion capturing (MoCap) system MVN Link by Xsens to track the full-body motion of a person in the crowd [29]. While optical MoCap systems need a free line of sight between the tracking points on the body and a set of cameras, the Xsens MoCap system uses inertial measurement units (IMUs) as sensors. These IMUs measure the acceleration, the angular rate and the magnetic field strength, so a line of sight between the body and a camera is not necessary. Therefore, it is possible to capture the full-body motion even in dense crowds. Each MVN Link suit (Fig. 17) is equipped with 17 IMU sensors on predefined, independently moving body segments. The measurement can be triggered manually and the recorded data are stored locally on a body-pack in the suit. Thus, the measurement is self-contained and the data can be downloaded afterwards.
After a calibration procedure and detailed body dimension measurements, the MVN Analyze software calculates the full-body motion from the measured data based on a biomechanical model. The processed data include the orientation, position, velocity, acceleration, angular velocity and angular acceleration of each body segment as well as the joint angles and the location of the centre of mass. The data can be exported either as an xml file or as a biomechanical c3d file. Because the IMU-based motion capturing is self-sufficient and based on relative measurements only, the absolute positioning in space suffers from a drift which can accumulate over time. The head trajectories extracted from camera recordings, however, have a small positioning error. Therefore, we used a hybrid tracking algorithm [31] to combine both data sets. In particular, this means that the position of the biomechanical model was shifted and rotated to match the head position and orientation of the camera trajectories. On all days, we equipped 20 people with an Xsens MVN Link motion capturing system. On experiment days 1-3, these persons were part of the red group (Fig. 2), and on day 4 they took part at experiment site D, namely in the bottleneck experiment. 3D motion capturing data were therefore recorded in the runs listed in Tab. 11.

--- Mood-Buttons

In order to be able to classify the mood of the test subjects over the course of the day in the individual experiments, we installed simple mood button terminals (Happy-or-not [32]). The terminals consist of four smiley-faced buttons with a sign asking "How did you feel in the last run?" (Fig. 18; the question is translated here from German) that participants were invited to press after every run. The system saved a time stamp for each pressed button. The terminals were attached to railings and placed so that participants passed them after each run. Care was taken to position them in such a way that the walking path was affected as little as possible and no backlog was created. Participants were actively asked to press a button after each run. In the Train Platform Experiments, the terminal was placed in the corridor leading participants from the area where they filled out questionnaires back to the waiting area in front of the experiment. In the Crowd Management Experiments, the terminal was positioned 15 m after the entry gate on the way back to the line-up area. In the Boarding and Alighting Experiments, the terminal was placed next to the waiting area (behind the train car for runs marked as 'reverse-direction'). In the Bottleneck Experiments, one terminal was placed at each side of the bottleneck. Participants passed the terminal on their way back to the line-up area, regardless of whether they turned right or left after passing the finish line. No mood buttons were placed in the Tiny Box, Oval or Personal Space Experiments.

--- Summary and Discussion

This paper presents pedestrian experiments conducted as part of the CroMa project, which aims at increasing the robustness and efficiency of transport infrastructure.
Even though the planning and execution of large-scale experiments require far-reaching planning and organizational steps that go far beyond the scientific content, experiments under laboratory conditions offer the opportunity to control factors and can therefore be worth the effort involved. This publication provides an overview of the individual experiments carried out as well as descriptions of the sensor techniques applied, as the contents and goals of the experiments were planned and evaluated by different disciplines and had to be coordinated and combined with each other. Furthermore, it presents the context in which the individual experimental runs and experimental sites were intertwined. The results of the scientific analyses will be published in subsequent content papers. Even though the experiments took place during a global pandemic, the questionnaire results as well as the evaluation of well-being during the experiments (mean value of the mood buttons over all days) show that the overall concept of communication, hygiene and safety measures, together with the slow acclimatization to density (queuing, measurement course, waiting area, icebreaker), led to the participants feeling confident. As a result, they felt good, and the thought of a potential infection seemed to have no meaningful influence on their actions. This is consistent with the impressions of the organizers regarding the mood of the subjects during the experiments. For each of the conducted experiments, the goal of the study is described, along with the parameters that were varied, how participants were approached, and the dimensions of the experimental areas and geometries. The description is supplemented by impressions of the experiments given through sketches and snapshots. In the sections about the sensors, the technical specifications are listed. Furthermore, it is documented how the sensors were synchronized with each other, how many of the sensors were used, with which settings they were operated and which basic processing steps were carried out where necessary. For each sensor there is an overview of the runs in which it was used. For each experiment, a link to the data archive is given, under which the respective complete data will be made freely available after publication of the respective content paper. We thank all the helpers across the experiment days, without whom it would not have been possible to fulfil all tasks and ensure a smooth process on all days.

--- Ethical Review

The applications for ethical approval of the experiments "Crowd Management", "Single-File", "Personal-Space", "Train Platform" and "Boarding and Alighting" were submitted by A. Sieben to the German Psychological Society (DGPs) and approved in December 2019 (file reference SiebenAnna2019-10-22VA). The "Bottleneck" experiment was submitted to the ethical review committee of the University of Wuppertal (German: Bergische Universität Wuppertal) by A. Seyfried and approved in January 2020 (file reference MS/BBL 191213 Seyfried).

--- Funding

--- A. Appendices

--- A.1. Test Person Sample: Statistics per Day

Figure 19: Panel of different histograms showing demographic data of the participants for each day. Row 1 refers to data of participants on day 1, row 2 to day 2, row 3 to day 3 and row 4 to day 4. Column 1 shows age, column 2 gender, column 3 body height, column 4 body weight and column 5 shoulder width.
Data shown in grey include all participants of the respective day; data in green refer to participants of the green experimental group, and data in red, blue and yellow to participants of the experimental group of the respective color. The respective medians are shown in the same color as the data.

--- A.5. Experimental Configurations Oval Experiments

--- A.8. Experimental Configurations Tiny Box Experiments
This article examines the impact of open space planning on relations and cooperation between veteran residents and newcomers in rural settlements. In recent years kibbutz settlements have transformed agricultural land into residential neighborhoods to absorb the migration of previously urban populations. We examined the relationship between residents and newcomers to the village, and the effect that planning a new neighborhood adjacent to the kibbutz has on creating motivation for veteran members and new residents to meet and build common social capital. We offer a method of analyzing planning maps of the open spaces between the original kibbutz settlement and the adjacent new expansion neighborhood. Analysis of 67 planning maps led us to define three types of demarcation between the existing settlement and the new neighborhood; we present each type and its components and discuss their significance for the development of the relationship between veteran and new residents. The active involvement and partnership of the kibbutz members in deciding the location and appearance of the neighborhood about to be built allowed them to determine the nature of the relations that would be forged between the veteran residents and the newcomers.
--- Introduction

Historically, non-farmers were not allowed to settle in rural agricultural villages in Israel. According to the decision of the Israel Land Administration, residence in an agricultural settlement was allowed only for those owning and working the land (Alterman, 2017). Resolution 737, passed in 1995, was novel and groundbreaking in that it allowed the agricultural village to construct a number of residential homes for sale to those who did not intend to engage in agriculture. This decision was made as a result of political and other pressures to alleviate the effects of the economic crisis that had devastated agriculture by heavily indebting the farmers to banks and suppliers. Subsequent migration of young people from the rural to the urban space left an increasingly aging population in these villages (Sofer & Applebaum, 2006). This decision, as part of the economic recovery plan, allowed agricultural communities to benefit from changing the designation of the land from agriculture to construction, enabling farmers to cover, at least in part, the debts they had accumulated (Hananel, 2012). In the kibbutz villages (pl. kibbutzim), whose unique way of life was characterized by social and financial equality, equal partnership of members in the production and service industries, and a high level of mutual responsibility, the economic crisis resulted in debts to banks and private suppliers shouldered by the members of the kibbutz, who were also left without savings for the future (Russell et al., 2011). Thus ensued the negative migration of young people from the kibbutz to the urban areas, as they realized that the future of the kibbutz was no longer secure. The social crisis in the kibbutzim manifested in a lack of trust in the kibbutz leaders and managers, a phenomenon of withdrawing into the family unit and abandoning the mutual relations that had so typified kibbutz society, as well as a full-blown ideological-value crisis in which the foundations and philosophy of kibbutz society were questioned. In reality, the economic crisis resulted in negative migration and an aging kibbutz population, fewer services offered to the residents, especially those that were not profitable, and an atmosphere of despondency and loss of ideology and future direction (Ben-Refael, 2011). The solutions for easing the crisis in the agricultural villages were handled at the State level and led by the National Kibbutz Organization, and included a recovery plan involving economic, organizational and managerial changes that would allow the continued existence of the kibbutzim and their future growth. The aforementioned Resolution 737, which allowed the kibbutzim to bring in new residents who had no part in the for-profit industries cooperatively owned by kibbutz members, was made with the understanding that the arrival of a new population would relieve the kibbutz and its members of their debts and allow them to continue life in the rural space (Arnon & Shamai, 2010; Greenberg et al., 2016). The construction of new neighborhoods allowed the arrival not only of young urban families searching for a rural quality of life in the revitalized community but also allowed the kibbutz members' own children to return and live in the kibbutz environment without the oppressive burden of the settlement's obligations on their shoulders (Charney & Palgi, 2014).
Three groups of residents arrived in these expansion neighborhoods built adjacent to the original kibbutz borders: families from urban centers who wanted to live in a rural environment; families from towns and small cities located near the kibbutzim, typically of lower socioeconomic status, who wanted to improve their quality of life, enjoy services at a high level and improve their defined status; and grown-up children of kibbutz members who wished to return to the kibbutz environment, but not as members (Greenberg et al., 2016). The arrival of the new residents led to major changes in the kibbutz: the average age in most of the kibbutzim decreased, the number of children increased dramatically, revitalizing the kibbutz day care centers, schools and children's activity centers, the social fabric of the village received a much-needed injection of energy, and the rural space gained professionals in the areas of management, sales and the independent professions. Some of them were employed by the kibbutz's businesses and industries, but many work outside the kibbutz boundaries. In this way, gentrification came to the rural settlements in Israel (Charney & Palgi, 2013), but the arrival of new residents was not without conflicts and disagreements. The kibbutz wanted to independently manage the service systems. The question also arose of the statutory status of the newcomers regarding the management of a community that now included two population groups with sometimes opposing interests. The kibbutz members viewed the newcomers as mere consumers, required to pay for the services they receive from the kibbutz, and less so as partners involved in the management of the newly enlarged community. Any attempt by the new residents to share control was seen as questioning the kibbutz administration's ability to manage the system. The current study contends that kibbutz members treated the space that would become the expansion neighborhoods in a similar way: the structures and open spaces within and adjacent to the original kibbutz were defined as part of the historical social and cultural capital that they sought to preserve and perhaps even promote to those who were not members of the kibbutz, to come and live in the settlement. The status of the kibbutz members and historical regulations allowed them to influence the planning of the new neighborhoods, in addition to the planning of topography and traffic routes. In this study, we examined the characteristics of these open spaces and the impact of planning on the development of rapport and relationships between the kibbutz members and the new residents, as well as their common social capital. The results are based on an analysis of 67 planning maps of the kibbutzim and their expansion neighborhoods, as well as on tours and interviews with kibbutz members, non-member residents and officials in the field of planning. We maintain that villages with spaces designed as meeting areas promote closeness, familiarity and shared social capital. Villages without such common meeting spaces will see more feelings of disappointment, anger and strained relations between those who live in the original kibbutz and the residents of the expansion neighborhood. First we shall present the structural development of the kibbutzim throughout the years and the reasons that led to the planning of expansion neighborhoods.
These will be examined with reference to the theoretical terms of social and economic capital and cultural and social space as defined by Bourdieu (1984, 1989). The Results section will present findings from the analysis of the maps, tours and interviews; we will propose a typology for defining the demarcation between the two parts of the village. Finally, we shall offer an interpretation based on Bourdieu's theory of the elements of space as a proposal for understanding the characteristics of specific types of planning.

--- The theoretical framework for the study

--- Open spaces

Open spaces are unbuilt areas with natural features of surface and ground rock, as well as flora and fauna, and in which human activity is at a low level. Open spaces contribute to maintaining the level of 'naturalness' in a region that is under construction and development pressures; they often enable the continued functioning of ecological systems and the survival of local nature and landscape features. There are two approaches to defining open spaces: 1) ecological, according to which the natural spaces importantly preserve abiotic conditions such as water sources, soil, plant types and endemic animal species (Handy & Maulana, 2021); 2) utilitarian, according to which open spaces have social, economic and political significance, since human beings need these areas for the benefit of leisure, sports, and quiet, peaceful spaces where the pace of daily life can be slowed: parks, green rings and green areas that separate different neighborhoods (Firth et al., 2011; Moran et al., 2017; Tesler et al., 2018). Another distinction concerning open spaces regards those located within or adjacent to large cities vs those outside the cities. In planning the former, the need to preserve natural treasures combines with meeting human requirements for gardens, parks and green corridors in built-up areas as outlets for recreation, active leisure and sports activities, pleasure and environmental quality (Moran et al., 2017; Turner et al., 1993). The latter, those outside the cities, include national parks, nature reserves, and natural landscapes, treasures and species where the emphasis is on a future vision of preserving an ecological balance (Frenkel & Orenstein, 2012).

--- Social capital

Social capital is an aggregate expression of the joint activity of a group whose members share common values and goals. This type of capital contains familiarity, interpersonal relationships and partnerships, and is a product of social interaction. In the daily life of the community, it is expressed in interpersonal relationships, the degree of the individual's commitment to the group and its goals, sharing, trust and reciprocity (Coleman, 1988). Putnam defined social capital as "features of social organization, such as trust, norms, and networks that can improve the efficiency of society by facilitating coordinated actions" (Putnam et al., 1994, p. 167). Social capital affects the individual's sense of well-being. Briggs (2004) distinguished between private social capital, expressed in relationships and types of social and personal connections, and public social capital, which is common to all individuals and reflects all the relationships, the strength of the shared social cohesion and the group's ability to reach agreements and decisions and promote joint processes.
Portes (1998) contended that a group characterized by a high level of social capital can utilize it to promote common interests and accumulate additional capital over other, weaker groups located close by. Hence, social capital is significant in building power relations between groups occupying a certain area (Alon-Mozes, 2020). Bourdieu (1989) maintained that social capital symbolizes the group's significance not only for its members but for others who are outside it, observe it and are at times even influenced by its activities. According to Bourdieu (1984), the joint activity of the group members is required to maintain the existing capital that lends the group its status and to accumulate new capital that will strengthen the group's status in the region. Group capital is the outcome of joining economic, social and cultural capital. Bourdieu contends that the connections between these three components reflect the group's history and identity and contribute to the cohesion and mobilization of its members to promote common interests. Pavin (2007) viewed the significance of social capital as a means of strengthening the community and enabling it to survive, asserting that deep and meaningful personal relationships among community members contribute to solidarity and cooperation and enable the whole group to successfully deal with multi-system crises. Gallent (2013) and Medha and Ariastita (2017) cited the significance of collaborations in creating external connections and mobilizing resources that promote the future development of the community. However, no validated indicators have yet been established to define social capital as a measurable variable; this lack of definition makes it difficult to understand its empirical significance when examining the community and the processes taking place in it (Amoyal & Carmon, 2011). There is another criticism, this one of political significance, according to which the very use of the term 'social capital' originates from a socio-economic logic that accepts privatization and reduces the State's commitment to its citizens. This approach dictates that social capital is capital that grows from below, from residents' initiative and activity, and migrates upwards, as a response to the downsizing and privatization that have been occurring since the 1980s (Ferragina & Arrigoni, 2017). An examination of social capital reveals three types of such capital. 'Bonding' social capital is built from social networks of those with a strong common identity, which may be ethnic, cultural, socio-economic or other. 'Bridging' social capital refers to shared norms and connections of people from different ethnic, racial or social groups whose capital is created during meetings and joint activities with the purpose of promoting common interests of the group members. 'Linking' social capital represents more all-inclusive connections, less stringent but broader in scope; it is a weak but important pattern of social capital, which connects different groups living side by side and making up a wider entity such as a nation. Linking capital is exemplified by the connection of the state and its institutions with civil bodies, associations and communities in order to promote common goals.
This capital symbolizes relationships that cross class and social boundaries and is significant in networks that connect actors from different networks, such as the joint activity of representatives of civil society and the state to promote common issues (Abbas & Mesch, 2018). Social capital is therefore a consequence of familiarity, connections and activity, all of which have an impact on the development of the space in which they exist (Arisoy & Paker, 2019). Amoyal and Carmon (2011) perceive the significance of social capital as a source of social stability in weak neighborhoods undergoing accelerated gentrification. To date, most writing has focused on the development of social capital in urban neighborhoods undergoing regeneration (Medha & Ariastita, 2017; Matsuoka & Urquiza, 2022); the current article examines the development of new social capital with the arrival of new residents in rural settlements.

--- Kibbutz planning as a means of expressing social and cultural capital

Since the establishment of the first kibbutz, this specific type of village has been a planned settlement where the conditions of the area, the topography and the location have all been taken into account, together with the unique, cooperative way of life of its members, which expressed values of equality, sharing, providing centralized services of all kinds and being satisfied with few material goods (Kahana, 2011). Chyutin and Chyutin (2010) noted that the planning of many kibbutzim was entrusted to a few planners who specialized in this type of community and who were employed by the National Kibbutz Movement that oversaw the development of kibbutzim in Israel. They outlined the planning features that can be seen repeatedly in many kibbutzim and that expressed significant values articulated in the daily life of the settlement (Karniel & Churchman, 2012). Thus, in the center of each village are several central buildings around which the kibbutz continued to develop: the all-important dining room that served as a meeting place for all kibbutz members three times each day; single-story buildings whose rooms had central functions (offices of managers and directors, meeting rooms for committee decision-making); and the club, the focal point for social gathering. Outside of these buildings was the large park-like square that served as an informal meeting place and turned this group of buildings into the kibbutz center (Kahana, 2011). The traffic routes inside the kibbutz were pedestrian paths; movement was on foot or by bicycle. All of these gave the kibbutz the sense of a contained area where all the activities of management and daily life could take place in just a small area located in the center of the kibbutz (Chyutin et al., 2010). Lawns and open spaces were planned in a ring around the center of the village, where ceremonies, major events and holidays were celebrated. Another space beyond that ring was dedicated to the children's houses, where the children lived and were educated (Epstein-Poloush & Levin, 2016). Yet another space was dedicated to the kibbutz members' homes, neighborhoods with modest houses in one- or two-story buildings, depending on the availability and the type of land. Another part of the kibbutz was reserved for the farm structures, where animals were raised, tools and machines were stored, warehouses stood, and, in later stages, where kibbutz industries were established (Kahana, 2011).
The planning of the new expansion neighborhoods was different from all that had been familiar until now in the kibbutzim. The new neighborhoods were planned outside the borders of the existing village, sometimes adjacent to it and sometimes at a distance from the original kibbutz. They were planned in the style of residential suburbs: private houses, walking paths and vehicle roads throughout the neighborhood. The kibbutz planning committees were often involved in the decisions concerning the location of the new neighborhood, the type of buildings that would be allowed (whether 'free' construction or according to specific models of homes decided upon by the kibbutz), and the planning of the common, open spaces between the kibbutz and the expansion neighborhood. There are three models for the general planning of expansion neighborhoods. While all three place the new neighborhood outside the perimeter of the original kibbutz, each has a different interface between the new neighborhood and the kibbutz. Type A is the peripheral model, where the new neighborhood is adjacent to the original settlement along one perimeter, therefore sitting alongside the kibbutz. In this scenario many of the houses in the new neighborhood are close to those in the original settlement. Type B is the expansion neighborhood that is near the kibbutz but has no apparent connection to it because of the large open spaces that separate them. Type C is a lengthwise neighborhood where only a few houses are adjacent to the original kibbutz settlement and the remainder stretch further and further away from it (Fig. 1). In all three models open spaces can be found where there are monuments belonging to and reminiscent of the old kibbutz. The planning of these open spaces and their significance in creating the connection between the kibbutz members and the new residents are the subject of this article. --- Rural gentrification Gentrification describes the renewal of a neighborhood through the migration of people of a high socioeconomic level into depressed neighborhoods or urban areas where an aging population lives at a lower socioeconomic level (Chung, 2021). The new arrivals to this neighborhood find it attractive, take over the space by purchasing apartments and houses, open new businesses that characterize their lifestyle, and develop services that suit them. All of these lead to an increase in the value of the neighborhood's real estate but also increase the price of services in the neighborhood and the general cost of living, making it extremely difficult for the original population to remain there. They are forced to leave and move to another, weaker and more affordable neighborhood (Brummet & Reed, 2019). Levine and Aharon-Gutman (2022) point out that gentrification changes the conceptual framework by which local residents once viewed their neighborhood as an extension of their home, a space of stability, security and belonging. The neighborhood now takes on additional economic dimensions, becoming a source of income and a magnet for investment, and leading the local population to act in an entrepreneurial manner that can profit from the gentrification process.
These patterns of migration of a strong population to weak and aging villages are also well known in rural areas (Gosnell et al., 2011) as one result of an economic crisis that affected so many rural populations worldwide and brought the migration of an urban population seeking quality of life to small communities (Phillips & Smith, 2018) far from the big-city lifestyle (Guimond & Simard, 2010; Nelson, 2018) and closer to nature (Phillips, 2005; Schwake, 2021). Rural settlements in Israel also underwent similar processes, as detailed in the Introduction section of this article. The national program for land designation had historically targeted certain rural settlements for agriculture, allowing only those willing to work the land to take up residence there. This declaration prevented gentrification from occurring in rural communities for many years, until Resolution 737 in 1995 opened the gates of the rural village to a new population, most of it urban, who wanted to enjoy the quiet, close-to-nature life offered in these settlements (Arnon & Shamai, 2010). The establishment of the expansion neighborhoods in long-existing villages such as the kibbutzim brought a relatively large number of new people to the village in a short period of time, people of medium-high socioeconomic status and with managerial and economic knowledge, skills and experience (Greenberg, 2012). The processes that occurred in the Israeli rural space were similar to those happening in cities and other populations. --- Methodology The current study used a mixed-methods design in which we combined analysis of planning maps with interviews. We analyzed 67 settlement planning maps that had been approved by the official committee for expansion neighborhood plans. Analysis of the maps was carried out using GIS (Mahmoody & Jelokhani-Niaraki, 2021). We examined the distances between the outer edge of the original settlement and the houses of the expansion neighborhood, the number of kibbutz homes and new neighborhood homes adjacent to each other, the location of the open spaces and the uses assigned to these parcels of land. We also examined the visibility: the ability to see the old kibbutz from the new neighborhood and vice versa. All of these were gathered in a table that displays the data collected about each one of the villages examined. This table was the basis for the empirical analysis of the distance between the two parts of each village and of the planning and functional characteristics of the open spaces, forming the foundation for the typology that we present in Table 1 (an illustrative sketch of the distance computation appears at the end of this section). In the second phase of the research, we conducted tours and observations in order to better understand people's own perceptions of the significance of the spaces, their daily function and their effect on people's lives. We conducted semi-structured interviews with both kibbutz and expansion neighborhood residents (Flick, 2018). The choice to include both was based on the findings of Arnon and Shamai (2010) on the motivation to connect, as seen in both the members and the newcomers. The interviews were conducted in the participants' homes and in some cases by Zoom due to the Covid-19 quarantine periods.
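To make the map-analysis phase concrete, here is a minimal sketch, not the authors' actual workflow: it assumes two polygon layers stored in a projected, metre-based coordinate system, and the file names, the village_id column and the use of geopandas/shapely are all hypothetical stand-ins for whatever GIS tooling was used. The distance bands follow the cut-offs reported in the Results below.

```python
# Illustrative sketch: classify each kibbutz/expansion interface by the minimum
# distance between the two built-up polygons (metres, projected CRS assumed).
import geopandas as gpd

def classify_interface(distance_m: float) -> str:
    # Distance bands from the Results: up to 37 m and up to 60 m correspond to
    # 'soft' interfaces; wider gaps require inspecting the intervening land use.
    if distance_m <= 37:
        return "soft: internal road with sidewalks"
    if distance_m <= 60:
        return "soft: road, sidewalks and plant beds"
    return "over 60 m: semi-hard or hard barrier (check land use)"

kibbutzim = gpd.read_file("kibbutz_boundaries.gpkg")        # hypothetical layer
expansions = gpd.read_file("expansion_neighborhoods.gpkg")  # hypothetical layer

for _, kb in kibbutzim.iterrows():
    match = expansions[expansions["village_id"] == kb["village_id"]]
    if match.empty:
        continue
    # shapely's distance() returns the minimum distance between the geometries,
    # i.e. from the kibbutz edge to the closest parcel of the new neighborhood.
    d = kb.geometry.distance(match.iloc[0].geometry)
    print(kb["village_id"], round(d, 1), "m ->", classify_interface(d))
```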
The questionnaire included demographic questions and questions related to the way of life in the village: how often the respondent accesses the open spaces, what activities are carried out in them, to what extent they feel that the spaces between the new neighborhood and the original settlement are significant in establishing relationships between local residents, the respondent's degree of participation in the activities of the entire community, what compels them to take part in an activity and what deters them, a request to relate an example of a meaningful experience from a meeting with other residents in their early days in the village, and what kind of social connections they have with kibbutz members and with residents of the expansion neighborhood. --- Results --- Findings of the map analysis Three patterns of distance were found between the original kibbutz boundaries and those of the new neighborhood: 1) Up to 37 m between the original kibbutz and the closest point of the new neighborhood. This planning included a road serving both parts of the settlement and a sidewalk on both sides of the road; 2) A distance of up to 60 m between the original kibbutz and the new neighborhood with an internal road, a sidewalk for pedestrians on both sides and plant beds next to the sidewalk. We considered both of these groups to be in the category of 'soft barriers' because they allow motorized or foot traffic; an individual engaging in physical activity could easily cross from one part of the community to the other. In the third group of kibbutzim, the expansion neighborhood and the kibbutz have larger areas separating them, a distance of over 60 m. Table 1 shows the average distances between the kibbutzim and their expansion neighborhoods. In over 60% of the planning maps, we found a relatively short distance between the original kibbutz and the expansion neighborhood, and that the connecting road conformed to the regulations for transportation on urban roads: a two-lane road with sidewalks on both sides which must be at least 35 m wide. For over one-third of the kibbutzim the distance was slightly greater. In 23% of the planning maps, longer distances of more than 90 m were found between the new neighborhood and the original kibbutz. --- Characteristics of open space planning in the maps In this analysis we examined the planned functional characteristics of these spaces. Our questions included: Did the planning propose to change their function? What were their functions prior to the building of a new neighborhood and what did the map expect them to be in the future? Then we examined the types of traffic possible in these spaces and the visibility between the two parts of the settlement. Based on the analysis of the maps, we defined three types of barriers: • Soft barriers-found in about 40% of the planning maps examined. The plan allows movement and spontaneous interaction between residents of both parts of the village. Local roads, lawns, playgrounds and public buildings that are accessed by the entire population are shared in these open spaces. • Semi-hard barriers-found in about 15% of the planning maps. These appear in the planning maps as public open spaces or as agricultural areas. This type of barrier allows direct visual contact between the expansion neighborhood and the original kibbutz, but does not allow pedestrian traffic. • Hard barriers-found in approximately 45% of the planning maps.
Active industrial and agricultural buildings and spaces owned by the kibbutz's agricultural cooperative separate the original kibbutz from the new neighborhoods. These may be buildings dedicated to industry, orchards or groves, or wooded areas. It can be concluded that these are permanent and will not be moved from their place. These barriers do not allow direct visual contact or pedestrian movement between the two residential areas. --- Results from the interviews The interviews revealed that the planning of the open spaces had relevant effects on the feelings, involvement and sense of familiarity of the residents of the new neighborhood. Interviewee #9 from Kibbutz #4 referred to his expectations before arriving in the village: "Between us and the kibbutz there is a wooded area. On the planning map it appeared as a nature reserve and it was like such a dream to live close to nature." [But] "…in my day-to-day life it creates disconnection and alienation, it doesn't create connection. I sit on the balcony enjoying the view from the woods. It's amazing and very different from where I came from. At the same time I know that on the other side is the kibbutz... I'm not really a part of it. Between me and them there is a forest, we are not connected to the kibbutz itself. It hurts me that it is like this because that is not what I intended when I came here." Interviewee #1 lives in the expansion neighborhood of Kibbutz #15. He described his relationship with the kibbutz: "From the moment we arrived there was the feeling that we are not them. We are the new people, the others who don't really belong to the kibbutz. I have to go through a gate and cross the highway to enter the kibbutz that I am a part of. It doesn't make sense. All the public buildings are there, I have to drive in order to go to every meeting, it affects my motivation to go there, to participate." Interviewee #3 lives in the new neighborhood of Kibbutz #7, and described the lack of new social capital: "It's them and us -each in a different part of a settlement. They live in a kibbutz and I live in a neighborhood, the neighborhood is not part of the kibbutz, ours is a village in itself…, it's really a different place. For me, my connection with them is only that I pay them taxes ..." Interviewee #5 from Kibbutz #9 described the effect of the neighborhood plan on her motivation: "When we got here I was motivated to be a partner, to take part, that's why we came here. Today it no longer exists for me, I don't feel part of this settlement"… Interviewee #7, a kibbutz member from Kibbutz #4 also expressed a situation of lack of shared social capital with the new residents: "Now in the expansion neighborhood there are some thirty or forty families that we don't even know. Don't know what they look like at all. So we are trying now these days once again to see how we can connect them to here. They see all the announcements [for activities]. They choose not to come. They don't feel a part or a connection." Interviewee #13, a kibbutz member from kibbutz #2: "We hardly go there, there is no reason to. Why go all the way there? To see houses and streets? I don't know anyone there anyway. You hardly see them, there is no real reason to go there." 
These interviews express the meaning of the way in which open spaces are incorporated into the planning of expansion neighborhoods in rural settlements and their effect on familiarity, the motivation to be involved in activities and the formation of new common social capital in the settlement that has received new residents. Long distances and movement barriers suppress the motivation to make connections with the veteran residents who reside in the original kibbutz, to participate in activities, and to take part in kibbutz life. This situation makes it difficult to create common social capital. In contrast, in kibbutzim where the distances between kibbutz and expansion neighborhood are small and where there is a soft buffer, processes of getting to know each other and building a sense of belonging between the old and new residents are more likely to occur. Interviewee #13 lives in an expansion neighborhood of Kibbutz #6. She described the importance of the new spaces and their meaning in building a shared sense of belonging: "The large grassy area and the playground contribute to the feeling that we are part of the kibbutz, it began with going there to let our children play together immediately after taking them out of the kindergarten. We parents slowly got to know each other, we spend a lot of time together." Another example of the planning that enabled informal meeting between the two populations came from Interviewee #15, who lives in the new neighborhood of Kibbutz #5: "At noon I see everyone playing together, and the parents sitting together and talking. For the parents of the younger children, it doesn't matter who someone is or where they live... everyone does everything together." Interviewee #11 from Kibbutz #6 recounted: "The elderly couple who live next door came in and brought a cake. I didn't understand at all what they wanted, I'm not used to it. We were shocked. But this is my memory of arriving here, that they are interested in us, that we are welcome." Interviewee #9, a member of Kibbutz #4, said: "In the beginning there were concerns and we didn't know how it would be with the expansion neighborhood. We were also frightened by the intensive construction. But we got to know each other little by little, you meet them on the walking path, in the grocery store, there is a motivation to get to know each other and make contact and then you realize these are people who came here and want to be part of this place. Today it is a large kibbutz and there are many children once again, and a kind of joy has returned to this place." Interviewee #2, a member of Kibbutz #1, said: "I like to walk in the afternoon in the grassy area and the playground. It's nice to see a lot of young people here, little kids. There was a time when it was sad and old and abandoned. The new people brought life and events and culture here, as far as I'm concerned, I feel that I've benefited from them and the kibbutz is renewed." --- Discussion Planning open spaces in a settlement is significant in creating quality of life for its residents (Moran et al., 2017). Areas that promote meeting opportunities between people and populations contribute to increasing friendliness, the motivation to meet, and to advancing joint community activities that contribute to shared social capital (Firth et al., 2011; Tesler et al., 2018). In many of the kibbutzim we found planning of areas that promote meetings of members among themselves and with different population groups in the settlement.
Aharon-Gutman (2014) points to the significance of open spaces in a city as enabling meetings, rituals and group events that contribute to the construction of a distinct group identity. The findings of the current study show that a certain planning of open spaces also allows meeting between groups; specifically in these cases, between the veteran population that considers itself 'owners' of the settlement and the newcomers who arrived with an aspiration and ideal of being absorbed into the community (Arnon & Shamai, 2010). Shared activities promote a sense of belonging and motivation to participate in both the new residents and the veteran kibbutz members. We would like to emphasize that the new relationships created between the new and old populations have a great impact on the lives of both groups. For the new residents, living in a small settlement while having left family and friends behind in the urban center lends great importance to personal relationships and local friendships, which impart a sense of belonging, security and personal well-being. As for the veteran population, the flight of their younger generation to the cities and a different way of life during the economic crisis created a 'brain drain' in terms of future kibbutz leaders and left few young children in the community's classrooms and playgrounds. The expansion neighborhoods brought a new supply of youthful energy. Designing spaces that encourage some level of relationships contributes to the promotion of the main goal for which these neighborhoods were established (Moran et al., 2017). The characteristics of new social capital (the development of acquaintance, interpersonal relations and partnerships) are all the product of social interaction. Social networks that promote motivation to participate in the community are beginning to take shape; these will also contribute to the developing construction of the common capital of the renewed community's members (Putnam, 1994). Private social capital was the prominent issue in our interviews, and group social capital to a lesser extent. These findings correspond to Briggs's (2004) definition of group social capital as being composed of relationships, social connections and personal ties. The new social capital that can potentially be created in the shared community of a revitalized kibbutz meets the definition of social capital according to Pavin (2007): deep and meaningful personal ties that contribute to solidarity and enable the entire group to successfully deal with multi-system crises (Greenberg et al., 2016). Phillips and Smith (2018) discussed the symbolic significance of buildings and monuments for the preservation of local identity in the wake of gentrification in small communities. Our findings showed that not only monuments are significant but also the empty spaces that can symbolize the social and cultural capital of the existing population vis-a-vis the newcomers joining them. In our study, we found settlements that built their expansion neighborhoods at a distance from the original village and with nonresidential spaces acting as buffers between the two sections, areas that make it difficult to establish physical contact between the two parts of the same village. These 'hard' barriers affect the degree of familiarity between kibbutz members and new residents, as well as the motivation of both populations to participate in shared community activities.
Our interpretation of this situation is that this type of 'out-of-sight, out-of-mind' planning is evidence of the complexity of the decision to create the expansion neighborhoods in the first years of their establishment. The construction of those neighborhoods was portrayed as the best and perhaps the only way to dig the kibbutz out from under its heavy debt after the economic crisis of the 1980s shocked the agricultural villages into a new reality. They realized that kibbutz real estate was their greatest asset and that bringing in new families would assure the settlement's future growth and the opportunity to keep the kibbutz afloat financially, socially and ideologically. Faced with this solution to the woes of the kibbutz, they were afraid of changing their unique way of life, afraid of newcomers with different ideologies taking over the administrative systems in the kibbutz, afraid that savvy real estate companies would take over and change their lifestyle, and afraid of losing the unique social capital they had accumulated over the years. After all, with the kibbutz's communistic ideology of no private material possessions, they owned nothing else, except the kibbutz itself. Therefore, their statutory 'right' to influence the planning of the expansion neighborhood affected its location, the distance between it and the original kibbutz, the functional characteristics of the open spaces between the two parts of the settlement, and basically, the nature of the relationship between the kibbutz members and newcomers. Theoretically, the open spaces perceived as barriers separating the two parts of today's settlement can be interpreted as how the kibbutz members marked their identity anew as a group with unique ideology, values and lifestyle. The planning of these spaces allowed them to redefine the cultural and social capital they had accumulated and then lost during the crisis years, when the social and political changes in Israel altered the status of the kibbutz members from an elite and storied group to a group facing a long-term multidimensional crisis. The values and ideas that had characterized the lifestyle of the kibbutzim, and its glorious history of 'turning a wasteland into farmland', were no longer revered (Amit-Cohen & Sofer, 2016). This may be the manifestation of the real estate view of the locals in the gentrification process. In contrast to the individual thinking in the urban neighborhood offered by Levine and Aharon-Gutman (2022), we suggest that in the case of a kibbutz there is a shared social thinking of the kibbutz members, who wish to improve their real estate and do so through the planning of the open spaces. Faced with their inability to change reality and restore their status as an elite group in Israeli society in general, kibbutz members had the opportunity to mark their uniqueness at the local level, with the planning of the new kibbutz neighborhood. Thus, the re-marking of their capital leaves a stamp that is perhaps the final chapter in the legacy of the once-glorious kibbutz. --- Limitations of the study This study focused on an examination of open spaces and their significance in building the new social capital. There are other dimensions of capital-building, such as participation in organized activities, in committees and common forums. Future studies can examine the degree of participation of the new residents in community forums and committees as another means of building the new social capital of the shared community.
Future research can also utilize the method of analyzing planning maps proposed here to examine the same questions in other types of villages in the rural area. --- Conclusions The expansion neighborhoods that began to appear in rural kibbutz settlements beginning in the late 1990s attracted a different population than that of the traditional kibbutz: young urban families seeking quality of life and a back-to-nature existence, but with none of the original kibbutz ideology of working the land and living a frugal socialistic lifestyle. This gentrification of kibbutz land was met with differing attitudes from the kibbutz members, ranging from fear of admitting a new and untested population into their protected life to gratitude for the infusion of a new young generation that would keep the kibbutz economically and socially relevant. The kibbutz members' opportunity to participate in the planning of the new neighborhoods gave expression to their attitudes toward the arrival of a new population. By analyzing the original planning maps of the new neighborhoods we detected three types of demarcation between the borders of the original kibbutz and its expansion neighborhood(s): soft, semi-hard and hard barriers, each one signaling the degree to which the two populations should function as one unit or remain separate, and whether a new combined social capital could be expected.
The complex relationships among social support, experienced stigma, psychological distress, and quality of life (QOL) among tuberculosis (TB) patients are insufficiently understood. The purpose of this study was to explore the interrelationships among social support, experienced stigma, psychological distress, and QOL and to examine whether experienced stigma and psychological distress play a mediating role. A cross-sectional survey was conducted between November 2020 and March 2021 in Dalian, Liaoning Province, Northeast China. Data were obtained from 473 TB patients using a structured questionnaire. Structural equation modelling was used to examine the hypothetical model. The research model provided a good fit to the measured data. All research hypotheses were supported: (1) social support, experienced stigma and psychological distress were associated with QOL; (2) experienced stigma fully mediated the effect of social support on psychological distress; (3) psychological distress fully mediated the effect of experienced stigma on QOL; and (4) experienced stigma and psychological distress were sequential mediators between social support and QOL. This study elucidated the pathways linking social support, experienced stigma, and psychological distress to QOL and provides an empirical basis for improving the QOL of TB patients. Tuberculosis (TB) is a major infectious disease that poses a serious threat to human health and has significant negative social and economic consequences 1,2 . It leads to poor health for millions of people every year and is a major public health problem 1 . In 2019, there were an estimated 10 million new cases of TB worldwide, of which approximately 833,000 were in China, accounting for 8.4% of the global total, ranking third 1 . TB is also the leading cause of death from infectious diseases globally, and approximately 1.41 million people died of TB in 2019, of whom approximately 33,000 died in China, accounting for 2.4% of the global total 1 . The burden of TB remains high in China. Although the suffering caused by TB has been acknowledged for thousands of years, most current TB programs and research have primarily focused on detection, microbiological treatment, prevention, and control, while the quality of life (QOL) of TB patients has been neglected 3,4 . Although effective anti-TB drugs are available and TB patients have access to effective treatment, TB infectivity, chronic progression, long-term drug treatment over a period of at least 6 months and drug side effects have significantly affected patients' daily lives, thus affecting their QOL [5][6][7] . Research has confirmed that TB patients tend to have poor QOL, demonstrating QOL significantly worse than that of the general population 8,9 . The World Health Organization (WHO) defined QOL as an individual's perception of their position in life within the cultural context and value system in which they live and in relation to their goals, expectations, standards, and concerns 10 . In addition, QOL refers to a person's subjective assessment of their life's satisfaction and meaning 11 . QOL can affect treatment adherence in TB patients, while non-adherence to TB treatment is thought to be an important reason for the gap between high financial inputs and poor performance in TB control [12][13][14] . More importantly, impairments in QOL are associated with poor treatment outcomes, which can increase TB mortality and morbidity and negatively impact TB control 15 . 
Therefore, it is necessary to explore the factors that influence the QOL of TB patients in order to improve it. Previous studies have analysed factors associated with
QOL. They found that sex, age, education level, marital status, occupational status, monthly income, drug side effects, comorbidities, body mass index (BMI), type of TB, phase of treatment, stigma, depressive symptoms, and social support were associated with QOL 2,6,16,17 . Social support refers to the amount of perceived and practical care received from family, friends and/or the community 18 . Previous studies have shown that social support affects the QOL of TB patients 9,19 . Patients with adequate social support from family, friends and community are likely to have better QOL 20 . Furthermore, social support was also an important predictor of stigma 21 . Patients with poor social support are more likely to be isolated and alienated, with manifestations such as being denied shared utensils and food by family members and losing their jobs, which may lead to stigma 22,23 . Additionally, good social support will increase life satisfaction and social confidence, enabling patients to adapt to a crisis and reducing the pressure of the patient's role change, thus also reducing the risk of psychological distress 24 . Previous studies have also demonstrated that perceived social support is associated with psychological distress in TB patients during treatment 25 . Because TB is transmitted by droplets and is highly contagious, patients with TB often experience great stigma, whether at home, in the workplace or in the community 26 . Studies of patients from a variety of backgrounds have indicated that between 42 and 82% of TB patients report stigma 27,28 . Research has suggested that social stigma may affect life satisfaction in TB patients during and even after treatment 15 , and TB-associated stigma is one of the most important aspects affecting QOL 29 . Stigma disrupts patients' social interactions with others and reduces social functioning and ability to fulfil daily roles, ultimately endangering patients' QOL 2 . In addition, studies conducted in rural China and Ethiopia have shown that experienced stigma is significantly associated with psychological distress 30,31 . TB patients who feel stigmatized may less frequently use health services and conceal their illness because of low self-esteem and social isolation. Moreover, studies have reported that TB-associated stigma is associated with psychological stress disorders. These factors can increase the risk of mental health problems, such as psychological distress 21,32,33 . The main factor that affects the QOL of patients with TB is psychological distress 34 . Studies have indicated that once TB is diagnosed, a wide variety of psychological responses are observed, for example, 51.9% to 81% of TB patients suffer from psychological distress 30,35,36 . Studies have also reported that the presence of mental health problems is the strongest predictor of decreased QOL 8 , and depression is also believed to be an important cause of poor QOL in patients with chronic diseases 37 . Notably, psychosocial burdens may have a greater impact than clinical symptoms in TB patients 34 . Psychological distress may interfere with an individual's immune response system and affect adherence to anti-TB treatment, which may lead to poor QOL and exacerbate mortality from the disease 38,39 . QOL of TB patients is generally neglected in existing national TB control programs 17 , and the lack of research on influencing factors of QOL may be one of the key reasons. 
As mentioned above, previous studies mainly relied on regression analysis and mostly explored only the direct relationships among variables. The pathways reflecting social support, experienced stigma, and psychological distress effects on QOL remain unclear. Without this understanding, it is difficult to determine precisely which variables should be the primary target of QOL priority interventions. Structural equation modelling (SEM), however, aims to decompose the direct and indirect effects of variables, discover the potential and important associations, and produce a more complete picture of causal effect mechanisms to understand the mechanisms and pathways that might explain these relationships 40,41 . In addition, the SEM incorporates measurement errors into the research model, which is more robust than the regression model 42,43 . The use of SEM enables us to untangle the complex relationships among social support, experienced stigma, psychological distress and QOL. Understanding the mechanisms and pathways of the relationship among social support, experienced stigma, psychological distress and QOL can help accurately determine the intervention objectives to improve QOL for TB patients, improve the effectiveness of intervention measures, achieve better clinical management, and ultimately increase the possibility of obtaining the best treatment outcomes and achieving the WHO's strategy to end TB. Based on the above theory and empirical research results, we proposed a hypothetical model (Fig. 1). As illustrated in Fig. 1, the current study aimed to test the following hypotheses: (1) social support, experienced stigma, and psychological distress are associated with QOL (H1); (2) experienced stigma mediates the relationship between social support and psychological distress (H2); (3) psychological distress mediates the relationship between experienced stigma and QOL (H3); and (4) experienced stigma and psychological distress are sequential mediators from social support to QOL (H4). According to these research hypotheses, we suggest ways to improve the QOL of TB patients. --- Methods Study design and setting. A cross-sectional, questionnaire-based survey was carried out between November 2020 and March 2021 at three TB medical institutions in Dalian, Liaoning Province, Northeast China. The three medical institutions were selected based on the number of patients attending, type of patient and location. The first is the only tertiary specialized hospital for TB prevention and control in Dalian and is divided into northern and southern parts located in Ganjingzi District and Pulandian District, respectively. It currently has nearly 500 beds, serving the whole city's TB patients, especially critically ill TB patients, as the main medical institution for TB patients in Dalian. The other two institutions are TB dispensaries located in Lushunkou District and Zhuanghe City (a county-level city), which serve only local TB patients with a milder instance of the disease. Participants. TB patients who attended the selected TB medical institutions between November 2020 and March 2021 were recruited as participants. The inclusion criteria were patients with a definite TB diagnosis according to national TB program guidelines, aged 18 years or older and with a new or relapsed case of TB undergoing treatment. The exclusion criteria were patients with psychosis, communication problems, or difficulty understanding the questionnaire and completion of treatment. 
A total of 481 patients were recruited and completed a structured questionnaire. Of the 481 questionnaires obtained, eight were excluded due to logical errors or large amounts of missing data. Ultimately, this study included 473 TB patients, with a participation rate of 98.34%. Ethics procedure. Ethical approval was provided by the Ethics Committee of Dalian Medical University, Liaoning Province, China. Before participating in the study, each participant was informed of the purpose of the study and how the results would be presented and received guarantees that their personal information would not be disclosed. Each participant voluntarily signed an informed consent form to participate in our study. All methods in our study were conducted in accordance with relevant guidelines and regulations (Declaration of Helsinki). --- Measurement. A structured questionnaire consisting of questions concerning demographic characteristics, treatment status, social support, experienced stigma, psychological distress, and QOL was developed by reading a large amount of relevant literature and consulting experts in related fields. Demographic characteristics included sex, age, marital status and education level. Treatment status included the category of TB treatment, phase of treatment and self-assessed disease severity. Social support. Social support was measured using the Oslo 3-item social support scale, a 3-item questionnaire that is commonly used to assess social support-related issues in clinical and community settings 44 . This questionnaire contains questions that ask patients about the number of people they feel close to and on whom they could count for serious problems, how much people cared about them and the ease with which they could receive practical help from neighbours. Its overall score ranges from 3 to 14, with a high score indicating a high level of social support. In the current study, the scale's Cronbach's α reliability coefficient was 0.718. Experienced stigma. Experienced stigma was assessed using a 9-item stigma questionnaire developed in accordance with Chinese social and cultural contexts 45 . The questionnaire assesses the stigma experienced by patients on the three dimensions of prejudice, discrimination, and rejection. Responses to each item were rated on a 4-point Likert scale, ranging from strongly disagree (= 1) to strongly agree (= 4). The scores of each item were summed to obtain the total score (range 9-36). Higher scores indicate higher levels of stigma experienced by TB patients. The scale showed good reliability and validity, and its Cronbach's α in this study was 0.946. Psychological distress. The Kessler Psychological Distress Scale (K-10) questionnaire was used to assess psychological distress in TB patients 46 . Numerous studies have demonstrated the reliability and validity of this scale 30 . The scale is composed of 10 items divided among four subscales: nervousness, agitation, fatigue and negative affect 47 . Negative affect includes hopelessness, low mood, sadness and a sense of worthlessness. An example of such an item is "How often did you feel hopeless in the last 30 days?". The frequency of each item was recorded on a 5-point Likert scale, ranging from none of the time (= 1) to all the time (= 5). The overall score ranged from a low of 10 to a high of 50, with higher scores indicating greater psychological distress. In this study, the scale had high internal consistency (Cronbach's α = 0.929).
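The total-score and internal-consistency computations behind these scales are straightforward to reproduce. Below is a minimal sketch of the standard Cronbach's alpha formula applied to the K-10 items; the CSV file and column names are hypothetical, and the authors' own analysis was done in SPSS, so this Python version is only an illustration.

```python
# Illustrative sketch: summing the K-10 items (scored 1-5, total range 10-50)
# and computing Cronbach's alpha from its standard definition.
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

df = pd.read_csv("tb_survey.csv")                    # hypothetical data file
k10_items = df[[f"k10_{i}" for i in range(1, 11)]]   # hypothetical column names
df["distress_total"] = k10_items.sum(axis=1)         # overall score, 10-50
print(f"K-10 Cronbach's alpha: {cronbach_alpha(k10_items):.3f}")
```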
QOL. The QOL is an index of the satisfaction levels of the body, spirit, family and social life and the overall evaluation of life 48 . In the current study, a 6-item quality of life scale (QOL-6) developed by Phillips in 2002 was used to measure QOL in TB patients 49 . This scale consists of six items covering physical health, psychological health, economic circumstances, work, family relationships and relationships with nonfamily members. Patients were asked to rate the extent to which the six traits reflected their actual life situation over the past month. Each item was recorded on a 5-point Likert scale, ranging from very poor (= 1) to excellent (= 5). The overall score ranged from 6 to 30, with a high score reflecting good QOL. This scale has been used to assess the QOL of different populations 17,49,50 . In the current study, the scale had acceptable internal consistency (Cronbach's α = 0.792). QOL was parcelled to produce three categories using the item parcelling method for the final model analysis 51 . --- Statistical analysis. The complete and correct questionnaires were entered into the database established using EpiData 3.1 software (EpiData Association, Odense, Denmark) by double entry to ensure the accuracy of the data. The data were exported to SPSS 21.0 (IBM Corporation, Armonk, New York, USA) for preliminary statistical analysis. Descriptive statistical analysis included the frequency and percentage of classified data and the mean and standard deviation (SD) of continuous data. T tests and analysis of variance were used to compare QOL scores among different groups. Pearson correlation analysis was used to evaluate bivariate correlations. All comparisons were two-tailed, and P < 0.05 was considered statistically significant. Because multiple potential mediating variables and complex relationships were considered in the research model, we used AMOS 23.0 software (IBM Corporation, Armonk, New York, USA) to conduct structural equation modelling (SEM) to test the hypotheses. Confirmatory factor analysis (CFA) was carried out to test the reliability and validity of the constructs and combined with SEM to improve the research model 42 . The maximum likelihood method was used to estimate the parameters. Additionally, the 95% confidence interval (CI) was calculated using bootstrapping with 5000 resamples for all effects 52 . The bootstrapping performed was a non-parametric test that does not rely on assumptions of normal distribution, and an effect was considered statistically significant if the 95% CI did not include zero. The goodness-of-fit index (GFI), comparative fit index (CFI), Tucker-Lewis index (TLI), standardized root mean square residual (SRMR), and root mean squared error of approximation (RMSEA) were calculated to examine the fit of the model. GFI, CFI and TLI values greater than 0.900 and SRMR and RMSEA values less than 0.080 indicate adequate goodness of fit 53 . --- Results --- Participants' demographic characteristics and treatment status. Among the 473 participants, the mean age was 48.36 (SD = 17.58) years, and most participants (60.04%) were aged 45 years or older. There were more than twice as many male participants (69.13%) as female participants (30.87%). There were slightly more participants with a high school education or above (34.88%) than those with a middle school education (33.19%) or a primary education or below (31.92%). Nearly two-thirds of the participants (65.33%) were married, and only 72 (15.22%) had relapsed.
More than half of the patients (59.20%) were in the continuous phase of treatment, and nearly one-third (29.60%) felt that their current condition was severe. Among the respondents, the average QOL score was 20.41 (SD = 3.65). Age, marital status, education level, treatment category, treatment phase and self-assessed severity were significantly associated with QOL (P < 0.05) (Table 1). Correlations of the variables. The mean scores of social support, experienced stigma, and psychological distress were 9.71 (SD = 2.27), 18.86 (SD = 7.14), and 19.62 (SD = 7.49), respectively. Social support was negatively correlated with experienced stigma (r = -0.263, P < 0.01) and psychological distress (r = -0.151, P < 0.01) and positively correlated with QOL (r = 0.579, P < 0.01). In addition, experienced stigma was positively correlated with psychological distress (r = 0.453, P < 0.01) and negatively correlated with QOL (r = -0.429, P < 0.01). Psychological distress was negatively correlated with QOL (r = -0.480, P < 0.01) (Table 2). Reliability and validity of the constructs. In the factor analysis, the unstandardized estimates of each item were significant (P < 0.001), and the standardized factor loadings of each item were > 0.5, which met the basic requirements of factor analysis, indicating that each item has a substantial effect on the measurement of its latent variable. The CR value represents the internal consistency of the construct. The higher the CR is, the greater the internal consistency of the tested factors. In this study, all CR values were > 0.7, indicating that the constructs exhibited acceptable internal consistency. Moreover, the AVE is the average explanatory power of the latent variable with respect to its observed variables. The higher the AVE is, the higher the convergent validity. The value of the AVE is recommended to be greater than 0.5. In this study, the AVE ranged from 0.500 to 0.819, which implied that the interpretation degree of the latent variables with respect to the observed variables was good and the convergent validity of the constructs was high. The square roots of the AVEs (√AVE) in the diagonal were greater than, or only slightly lower than, the Pearson correlation coefficients of the other related constructs, indicating that the discriminant validity among factors was acceptable and that each factor can be well separated. Overall, these constructs exhibited good reliability and validity 54,55 (Tables 3, 4). Figure 2 shows the research model with unstandardized path coefficients. Age, marital status, education level, treatment category, treatment phase and self-assessed severity acted as covariates. Education level was positively associated with QOL (β = 0.131, P < 0.001), while self-assessed severity was negatively associated with QOL among TB patients (β = -0.095, P < 0.05). As shown in Table 6, the total effect of social support on psychological distress was -0.154 (95% CI (-0.245, -0.068) and (-0.243, -0.067)). Social support significantly predicted psychological distress via experienced stigma (95% CI (-0.187, -0.075) and (-0.184, -0.074)). However, the direct effect of social support on psychological distress was nonsignificant (95% CI (-0.117, 0.052) and (-0.114, 0.054)). Therefore, experienced stigma fully mediates the effect of social support on psychological distress. The total effect of experienced stigma on QOL was -0.163 (95% CI (-0.246, -0.088) and (-0.249, -0.090)).
Experienced stigma significantly predicted QOL via psychological distress (95% CI (-0.156, -0.095) and (-0.156, -0.059)). However, the direct effect of experienced stigma on QOL was also nonsignificant (95% CI (-0.137, 0.012) and (-0.137, -0.011)). Thus, psychological distress fully mediates the effect of experienced stigma on QOL. The total effect of social support on QOL was 0.524 (95% CI (0.435, 0.635) and (0.435, 0.635)), and the direct effect was 0.463 (95% CI (0.386, 0.562) and (0.384, 0.561)), accounting for 88.36% of the total effect. In addition, social support significantly predicted QOL via the sequential mediation variables of experienced stigma and psychological distress (95% CI (0.017, 0.058) and (0.016, 0.057)); the estimated multiple indirect effect was only 0.033, accounting for 6.30% of the total effect. In summary, all the hypotheses were supported (Fig. 2, Table 6). --- Discussion Patients with TB often have symptoms such as cough, chest pain, low fever, fatigue, and loss of appetite. In addition, the treatment of TB is a complex and lengthy process, requiring many medications and a long period of treatment. These factors significantly affect the QOL of patients 17 . However, to date, the complex relationships among social support, experienced stigma, psychological distress, and QOL in patients with TB have not been fully explored. To our knowledge, this study was the first to use SEM to explore the interrelationships among social support, experienced stigma, psychological distress, and QOL and to examine whether experienced stigma and psychological distress play mediating roles. In the current study, factor analysis indicated that each construct displayed good reliability and validity, which further verified the stable structure of the scales in TB patients and provides a basis for future studies measuring the social support, experienced stigma, psychological distress, and QOL of TB patients. More importantly, the fit indices exhibited good model fit, indicating that our proposed research model is reasonable and provides key information for improving the QOL of TB patients. Moreover, this study found that education level was associated with QOL in terms of demographic characteristics. Previous studies have also demonstrated that education level is an important predictor of QOL, such that a higher education level has a positive effect on the QOL of TB patients 2,9 . Patients with higher levels of education tend to acquire greater knowledge about TB from the outside world. Knowledge of TB can improve health-related behaviors such as taking anti-TB drugs on time and seeking care in a timely manner 57 . This will contribute to the effective control of the disease and reduce the patients' stigma, thus reducing psychological distress and improving QOL. However, patients with low levels of education may lack a correct understanding of TB. This often leads to doubt about the ability to cure TB and reduced self-efficacy 58 . Patients with low self-efficacy also have a stronger experience of stigma 59 , which increases the risk of psychological distress and affects the QOL of patients. This study also found that patients who perceived their illness as severe had worse QOL than those with mild illness. Previous studies have also demonstrated that worse physical symptoms are associated with lower physical health-related QOL and higher mental health-related QOL among TB patients 60 .
Understandably, patients with more severe disease have more complex clinical conditions and longer treatment times, as well as increased concerns, which may be particularly damaging to QOL. In addition, it is understandable that the more severe the illness is, the more obvious the symptoms. Obvious symptoms, especially a prolonged cough, may lead to a greater degree of accidental disclosure of the illness. This will have a negative impact on access to social support, increase stigma and psychological distress and threaten QOL. The results also showed that social support demonstrated a significant, direct effect on the QOL of patients with TB. Social support helps improve patients' QOL 20 , which has also been found in studies on patients with traumatic brain injury 61 . A possible explanation is that patients who receive adequate social support might have improved health outcomes. Moreover, consistent with previous studies, stigma was a predictor of QOL 62 . Stigma can damage patients' self-esteem and self-efficacy, lead to patients' isolation from society and self-concealment, and ultimately endanger patients' QOL 63 . In addition, psychological distress exerted a direct effect on QOL in our study. Studies have reported that untreated depression is independently associated with poorer QOL 39 . Another study also demonstrated that mental distress had a significant effect on QOL 6 . Patients with psychological distress were less likely to adhere to treatment regimens, thus eliminating the chance of successful treatment, impairing their function, and reducing QOL 39,64 . Previous research has demonstrated that patients who receive an adequate amount of social support are likely to have the best mental health outcomes 20 . Our results suggest that social support can also have an indirect negative effect on psychological distress through experienced stigma. Sufficient social support can increase patients' self-esteem and make patients more likely to be diagnosed in a timely manner and to comply with treatment, thus reducing the occurrence of psychological distress 65,66 . In addition, our results confirmed that psychological distress mediated the relationship between experienced stigma and QOL. It is not difficult to understand that the experience of stigma will lead to patients' feelings of inferiority, lack of confidence and low emotional well-being, which threaten patients' emotions and cause psychological distress, thus affecting their QOL 33 . Our results also indicated that experienced stigma and psychological distress are sequential mediators from social support to QOL, a relationship that had not been demonstrated in previous studies. However, the finding seems logical because patients with better social support have more emotional and financial resources, which means they face less discrimination and stress and are likely to use drugs with fewer side effects; thus, they may have improved QOL 65,67 . In the current study, SEM was used to test the mediating variables. In epidemiological studies, the assessment of mediation has been widely used to open up the "black box", allowing us to discern complex relationships between variables 68 . In practice, understanding the interrelationships among social support, experienced stigma, psychological distress and QOL provides an opportunity to intervene effectively in QOL among patients with TB, and it allows interventions to be tailored to these specific pathways.
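To make the path decomposition concrete, the following is a minimal sketch of how the hypothesized sequential mediation model could be specified in open-source software. The study itself used AMOS on latent constructs with item parcels; semopy, the composite-score columns, the file name and the reduced number of bootstrap resamples are all assumptions made purely for illustration.

```python
# Illustrative sketch: support -> stigma -> distress -> QOL sequential mediation
# on composite scores, with a percentile-bootstrap CI for the serial indirect effect.
import numpy as np
import pandas as pd
import semopy

MODEL = """
stigma   ~ support
distress ~ stigma + support
qol      ~ distress + stigma + support
"""

df = pd.read_csv("tb_scores.csv")  # hypothetical: one composite score per column

model = semopy.Model(MODEL)
model.fit(df)
# Compare fit statistics against the cut-offs used in the paper
# (CFI/TLI/GFI > 0.900, RMSEA < 0.080).
print(semopy.calc_stats(model)[["CFI", "TLI", "GFI", "RMSEA"]])

def serial_indirect(data: pd.DataFrame) -> float:
    """Product of the path coefficients support->stigma->distress->qol."""
    m = semopy.Model(MODEL)
    m.fit(data)
    est = m.inspect().set_index(["lval", "op", "rval"])["Estimate"]
    return (est[("stigma", "~", "support")]
            * est[("distress", "~", "stigma")]
            * est[("qol", "~", "distress")])

rng = np.random.default_rng(42)
boot = [serial_indirect(df.iloc[rng.integers(0, len(df), len(df))])
        for _ in range(1000)]  # the paper used 5000 resamples
print("serial indirect effect 95% CI:", np.percentile(boot, [2.5, 97.5]))
```

Under the paper's criterion, a bootstrap 95% CI that excludes zero would indicate a statistically significant indirect effect.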
Specifically, interventions aimed at improving the QOL of TB patients should focus on increasing social support for patients. At the same time, the role of experienced stigma and psychological distress should also be understood and addressed. Given that experienced stigma and psychological distress mediated the effect of social support on QOL in TB patients, interventions should be combined with measures to eliminate stigma and reduce psychological distress. This was essential for improving the QOL of TB patients. Previous studies have demonstrated that family functions and doctor-patient communication are the most important sources of social support for patients 69 . The attitude of family members has an important influence on TB patients. There are widespread psychological burdens among TB patients, such as lack of confidence in a cure and fear of treatment failure 70 . Constant encouragement from and care by family members can increase patients' confidence and their feelings of being taken care of. Therefore, family members can be educated and trained to provide better support for patients. Doctors also play an important role in the treatment of TB, and a good doctor-patient relationship is the fundamental factor to ensure the normal operation of the treatment process. It is necessary to require medical staff to establish the concept of patient-centred service and to show respect and humanistic care in the process of medical service delivery 71 . It is also important to provide more financial support for patients. Although the country has established some free TB treatment policies, some items, such as the cost of expensive adjuvant drugs, are not included in the free package. Previous studies have also found that TB clubs, composed of health workers and TB patients, have been successful in reducing stigma among TB patients 72 . In addition, community awareness and patient education may contribute significantly to a reduction in stigma 73 . This study has several limitations that need to be addressed in future studies. First, although SEM was applied, the causal relationship between the variables could not be inferred due to the cross-sectional nature of the data. Therefore, longitudinal studies are needed to validate the current findings. Second, the study sample only included TB patients from Dalian, Liaoning Province, Northeast China, which limited the ability to generalize the results to individuals from other regions with different social and cultural backgrounds. Future research should expand the study area to determine the suitability of our study model. Additionally, the study was limited to TB patients who already had access to health care, while those who did not seek any care were not recruited. The latter are probably the most marginalized and affected by TB, and their participation could enrich our findings. The study also did not include healthy people as controls. Therefore, the results may not capture the impact of TB on patients alone. Finally, only quantitative analysis was conducted in this study, and data were collected through patient self-reports. Patients may hide certain facts, which may cause our results to be underestimated. Extensive interviews and qualitative analysis are needed for a more comprehensive assessment. --- Conclusion This study empirically explored the interrelationships among social support, experienced stigma, psychological distress, and QOL and tested whether experienced stigma and psychological distress played a mediating role. 
Using the SEM method, we found that (1) social support, experienced stigma and psychological distress affect the QOL of patients with TB; (2) experienced stigma mediates the relationship between social support and psychological distress; (3) psychological distress mediates the relationship between experienced stigma and QOL; and (4) experienced stigma and psychological distress are sequential mediators from social support to QOL. Understanding and managing the QOL of TB patients may lead to better outcomes, and the results of this study provide useful information to help TB patients achieve better QOL. --- Data availability The datasets generated and/or analysed during the current study are available from the corresponding author on reasonable request. --- Author contributions L.Z. and X.C. conceived and designed the research and advanced the whole research. X.C. analysed the data and drafted the manuscript. X.C., J.X., Y.C., R.W., H.J., Y.P., Y.D., M.S., L.D., M.G. and J.W. were involved in data collection, entry, and verification. All authors read and approved the final manuscript and agreed to take responsibility for all aspects of the work. --- Competing interests The authors declare no competing interests.