ramblings of meaning, attachment, and desire circulating in the landscape into sense, that is, into something that can be envisioned, set before our mind's eye, or imagined in a mental tableau" (2006, p. 537). Further away, a cluster of half-built towers protruding from Wood Wharf signifies another massive residential and mixed-use development. The estate will deliver 3,300 homes distributed across a range of high-rise towers, but critically, their nocturnal presence will be muted: the developer has made it a condition that the light design of each tower is subdued in order to retain the nocturnal dominance of the commercial towers on Canary Wharf to which we are drawing closer. | Billingsgate market The subtle gleamings, glints, and reflections of the lambent dock environments are brusquely shattered by the onslaught of light that greets us when we gaze down from the road to overlook the 13-acre site on the Isle of Dogs, on which since 1982 the UK's largest inland market, Billingsgate fish market, has been located. As Lyon discusses, the site is "a tightly defined temporal and spatial frame for the exchange and physical redistribution of goods" (2016, p. 2), bordered with high walls and railings. The huge, low, rectangular market building is surrounded by a tarmac expanse of parking spaces and loading bays, large trucks, and corrugated iron sheds. At Billingsgate, trade commences at 4 am and finishes at 8:30 am. Lyon (2016) writes evocatively about the phases of business within the brightly lit interior of the market, a bustling scene that is difficult to
imagine from outside. In the quiescence of the late evening, imposing halogen floodlights evenly cast their brilliant white light across the asphalt, functional lighting that makes no concessions to aesthetic considerations. It ensures that unloading and loading produce, stacking and storing, and driving and parking large vehicles can be carried out efficiently and safely during the very busy hours of darkness. The pragmatic luminosity of this floodlit nocturnal hive facilitates the work of those who supply London's food, and prompts our memories of working on the factory night shift when younger. The deep industrial history of this east London setting has not yet succumbed to pervasive surrounding development processes. This is another stretch of land that has resisted the aesthetic nocturnal uniformity that such schemes would extend (Degen, 2018), as amply demonstrated after dark. | Canary Wharf As we enter the semi-private estate of Canary Wharf, we have become familiar with its iconic architecture, guided by the cluster of gleaming giants in the distance since setting off, and informed by their reproduction in popular film, television, and photography as potent signifiers of cosmopolitan modernity. The tall commercial towers here distinguish themselves from residential towers by announcing their presence against the backdrop of the night-time sky, luminous business names gleaming from their crowns: HSBC, Citi Group, Bank of America, KPMG, Barclays, JP Morgan. While local planning prevents any advertising of products, company names are allowed to radiate power across the city, selling the vertical fantasy of power invested in global finance (Hayden, 1977). We are spellbound by
the spectacular architectural forms: at ground level the towering glass and steel structures meet the street with luminous shine, reflection, and glare. Opulent, well-lit, spacious, and largely empty lobbies provide a transparent entrance that leads past security gates to rows of lift doors leading upwards. While these streets are sterile, serialised, and subscribe to a global aesthetic characteristic of similar financial districts in New York or Shanghai, walking through them nonetheless captivates us. As we gaze inwards and upwards at the luminous towers that encroach on the sky and loom over the streets, the "invitation to attune" makes us aware that our physical exclusion from the world of corporate power and global finance is strangely supplemented by an affective attraction that, similar to our experience of London City Island, gives an illusory impression of inclusiveness; we feel invited inside the sensory boundaries of this regime. Upmarket retail outlets, restaurants, bars, and gardens suffuse this quasi-public realm with indirect, muted lighting, the soft trickle of fountains, and diffuse music, puncturing the corporate landscape and further enhancing our sense of sensory inclusion. These soothing, sociable elements surely provide respite from the stressful, high-intensity work played out above. In this alluringly toned realm of upscale dining, shopping, and drinking, modish street-level illumination produces a more diverse, glitzy aesthetic than the incipient lighting of the residential developments we have visited. The predictable presence of public art installations populates the area's squares; in Canada Square Park a curved row of luminous benches designed by German artist Bernd Spieker remains from the annual light festival,
"Winter Lights" (2018). While the benches attract attention from pedestrians, they feel as if they have been parachuted into the location with little regard for place-identity, unlike the redesigned benches installed under the A13 flyover in Canning Town. An illuminated news feed runs along and around the corner of the Reuters building, ceaselessly reporting selective world events in real time. In addition, a large screen delivers up-to-the-minute news reports and advertisements, in front of which are six, glowing clocks on chrome poles, all showing the same time. These illuminated signifiers broadcast the exciting impression of a place continuously connected to global finance, politics, and business. The illumination of Canary Wharf is thus a seductive blend of light that signifies commercial power, sophisticated design, and the promises of leisure and consumption. For us, while it concocts an impression of a vibrant public nocturnal realm, the overwhelming, illuminated corporate architecture dominates the landscape, ultimately generating a sense of exclusion rather than inclusion. | CONCLUSION In this paper, we focus on an east London urban landscape that has been undergoing dramatic rapid change for many decades and is currently experiencing an especially intense wave of regeneration. We have sought to investigate how experiences of such transformations are mediated by lighting and darkness, foregrounding the night walk as a method for disrupting emergent and dominant forms of aesthetic organisation. Accordingly, we demonstrate that walking through differently lit spaces can unsettle conventional experiences of urban illumination, for walking constitutes an embodied practice through which we can become attuned to distinctive affordances,
prompting affective and sensory experiences that change in response to shifting moods and intensities. Walking and thinking together, we paused to discuss the diverse impressions of the light and darkness that we encountered. Based on these experiences, and following the insights of Rancière (2009), we suggest that the night walk can potentially facilitate a redistribution of the nocturnal field of the sensible. While walking, we certainly became attuned to the considerable affective and sensory impact of the power that inheres in particular contemporary styles of illumination. A selective aesthetic ordering was especially evident in certain illuminated elements of the landscape. The continuous sight of Canary Wharf's corporate towers and their brightly lit logos shaped our orientation in space, as did the more subtle illumination of the upmarket residential developments through which we passed. In different ways, these lighting designs mark out new centres of power and exclusion. Backed by financial institutions, private developers are increasingly taking charge of urban spaces, razing the ground to make space for vertical, self-enclosing mixed-use developments that give citizens an illusion of public inclusiveness. Certain forms of light we experienced exposed us to the emergence of an archipelago of vertical secluded enclaves, with cranes and building sites signifying schemes still in process and others already inhabited. Such lighting schemes simultaneously assert cultural capital, mobilising a taste-making strategy that underpins the increasing centrality of lifestyle, consumption, and design to middle-class identities, and reiterate an aesthetic consent that echoes globally across similar upmarket projects, bestowing regularity, uniformity, and continuity on nocturnal cities. These
powerful and stylish illuminated designs have been co-opted into an urban infrastructure that expresses political and economic power and solicits a pervasive structure of feeling. This entangling of sensation with capitalist aesthetics has, for us, produced a highly seductive nocturnal realm in which we have been bedazzled by vertical spectacle, felt inclusively invited into ersatz public spaces, and been mesmerised by shimmer, glow, and colour. City Island, Poplar Dock, and Canary Wharf possess potent lighting schemes that temporarily distracted us from the expressions of power that inhere in such strategic designs, impacts that underpin how effectively illumination can be deployed in distributing the sensible. As we have emphasised, light's non-representational qualities, its capacity to tincture spaces and bodies, generate affective and sensory responses and shape attunement, all render it a critical tool in producing an aesthetic consensus. Yet our walk also exposes the incoherence of the wider urban lightscape, for the city rarely wholly succumbs to these homogenising tendencies. As Sumartojo and Pink (2018) emphasise, invitations to attune can be ignored, resisted, or sidetracked by other invitations. We came across the inclusive, inventive design of the alluring Terry Spinks Place in Canning Town, its aesthetics motivated by social rather than economic imperatives. Here, a creative act of dissensus has produced an alternative realm of the sensible. In addition to the potential redistribution of the senses produced by such deliberate interventions, dissensus may emerge in encounters with outmoded or unorthodox designs in the interstitial spaces that connect discrete areas of high-end development and reveal tensions between private
and public lighting schemes. Our walk along the sodium-lit Aspen Way provoked a deeply embodied nostalgia for nocturnal urban sensations that are disappearing, while the harsh, industrial, functional lighting of Billingsgate Market conjured up involuntary memories of working on the factory nightshift. These dissonant forms of illumination were part of an extensive light clutter that vanquished desires for a more expansive, seamless stretch of modish lighting installed by gentrifiers and corporations. Equally disruptive were thick riverine mud, glistening water, and impassive concrete, materialities that reflected, absorbed, and deflected light in distinctive ways, soliciting very different attunements and moods from those provoked by the smooth designs of regeneration. Such features reveal that desires to achieve aesthetic consent are unlikely to succeed, for it seems that there must always be spatial, material, and sensory gaps in the dominant orchestration of the sensible. We have explored how light can foreground such dissensus in this part of London, but across all parts of the city non-human intrusions, incongruities, remnants, and oppositional designs attract attention and sensorially reattune us. Without unending surveillance, policing, repair, and maintenance, all urban design schemes are destined to fail; in any case, the production of a seamless, eternal realm of the sensible is a chimera.
Neoadjuvant Chemotherapy in Cervical Cancer: A Review Article

References:
... neoadjuvant chemotherapy in patients with FIGO ...
[5] Carcinoma of the cervix uteri. FIGO 26th annual report on the ...
[6] Randomized comparison of fluorouracil plus cisplatin ...
[7] Pelvic radiation with ...
[8] Concurrent cisplatin-based radiotherapy and ...
[9] Cisplatin, radiation, and adjuvant ...
[10] Concurrent chemotherapy and pelvic radiation therapy compared ...
[11] Improved treatment for cervical ...
[12] Survival and recurrence after concomitant chemotherapy ...
[13] Concomitant and neoadjuvant chemotherapy ...
[14] Hysterectomy with radiotherapy or chemotherapy ...
[15] Cost-effectiveness of radical hysterectomy ...
[16] Young cervical cancer patients may be more responsive than older ...
[17] Phase III randomized controlled trial of neoadjuvant ...
[18] Neo-adjuvant chemotherapy ...
[19] Prognostic value of responsiveness of neoadjuvant chemotherapy before ...
[20] Clinical efficacy of modified preoperative neoadjuvant ...
[21] Treatment patterns of FIGO Stage IB2 cervical cancer ...
[22] Neoadjuvant chemotherapy for locally advanced ...
[23] A review of topotecan in combination chemotherapy ...
[24] Clinical efficacy and safety of paclitaxel plus carboplatin ...
[25] Major clinical research advances in gynecologic ...
[26] Neoadjuvant chemotherapy for locally ...
[27] When does neoadjuvant chemotherapy ...
[28] Radiation-sparing managements for ...
[29] Evaluation of mediastinal lymph nodes ...
[30] New response evaluation criteria in ...
[31] Comparative study of neoadjuvant ...
[32] Is there a role for postoperative ...
[33] Optimal pathological response indicated ...
[34] Concurrent chemoradiation versus ...
[35] Improved survival with bevacizumab ...
[36] Randomized trial of cisplatin and ...
[37] Prognostic value of responsiveness ...
[38] Cervical ...
[39] Adjuvant chemotherapy ...
[40] Practice patterns of adjuvant therapy for intermediate/high ...
[41] Outcome of neoadjuvant intra-arterial chemotherapy and radical ...
[42] What is the value of hemoglobin as a prognostic ...
[43] Efficacy of neoadjuvant cisplatin ...
[44] Cervical cancer in pregnant ...

©2018 ASP Ins., Afarand Scholarly Publishing Institute, Iran. ISSN: 2476-5848. Journal of Obstetrics, Gynecology and Cancer Research. 2018;3(2):87-91.

Introduction
Cervical cancer is one of the most common cancers in females and may lead to death. In the United States, its incidence and mortality have declined since 1950, from the number one cancer killer of women to the twelfth-ranked, owing to cervical cytologic screening and intervention at the in-situ stage [1]; however, 85% of its mortality occurs in developing countries, possibly because of socioeconomic factors [2,3]. Treatment is clear-cut in the early and advanced stages of cervical cancer, but controversy remains for early-stage bulky disease (IB1 to IIA2), especially in countries with inadequate radiotherapy resources [4]. The International Federation of Gynecology and Obstetrics (FIGO) recommends three approaches for the treatment of stage IB2 and IIA2 cervical cancer: concurrent chemoradiation; neoadjuvant chemotherapy (NACT) before radical hysterectomy and lymphadenectomy, with or without post-surgery radiotherapy; and radical hysterectomy and lymphadenectomy with radiation or chemoradiation [5]. Since 1990, NACT has been one of the approaches used before surgery. The National Cancer Institute recommended chemoradiation (CT/RT) as the standard approach in locally advanced cervical cancer (LACC) on the basis of five randomized clinical trials showing a 30-50% decrease in death
[6-11]. Comparisons with smaller tumors at the same stage have shown that early-stage bulky (IB2/IIA2) cervical cancer is high-risk early-stage disease because of its higher recurrence rate and poorer prognosis [12,13]. NACT before surgery has been administered in Europe, Asia, and Africa because of limited radiotherapy facilities, although no research has shown this modality to be more effective than primary chemoradiation [14]. Just 10% of patients in stage IB2 are treated with radical surgery, compared with 40% with neoadjuvant chemotherapy, and about 50% of patients need adjuvant pelvic radiotherapy. Lower limb lymphedema (LLL) and decreased quality of life (QOL) were the two most common complications of radical hysterectomy with radiotherapy [15]. NACT before radical surgery decreases tumor volume, lymph node metastasis in responders, and deep cervical stromal invasion [16]; however, NACT did not affect survival compared with radical surgery alone [17]. NACT increases radiotherapy sensitivity and decreases the hypoxic cell fraction; NACT also treats micrometastatic tumor so as to prevent a significant proportion of relapses [18]. NACT decreases lymphovascular invasion (0 vs. 4.7%), deep stromal invasion (19.8 vs. 53.5%), lymph node metastasis (8.1 vs. 25.6%), and the need for adjuvant radiotherapy (5.5 vs. 30.2%) compared with non-responders and primary surgery [19]. In the NACT group, pelvic metastasis and parametrial infiltration rates were significantly lower compared with the primary surgery group [20]. The aim of the present review was to study the effect of NACT before radical surgery in comparison with other treatments and various clinical outcomes.

Information and Methods
This study is a systematic review.
Search Strategy:
In this study, PubMed, Nature, Elsevier, Medicine Journal, Scopus, and Gynecologic Oncology-online were searched. The common keywords used were Cervical Cancer, Adenocarcinoma of Cervix, and Neoadjuvant Chemotherapy; references and similar articles were also used to access more publications.
Selection Criteria:
This study includes previously published work on cervical cancer and the effect of NACT before radical surgery compared with other treatments and various clinical outcomes in the approach to cervical cancer; all publications were in English, and all of their full texts were studied.
Selection Identified:
More than 40 previous studies were used; none was a case report, at least five were randomized clinical trials, and six were meta-analyses or systematic reviews.

Findings
NACT is the most studied alternative treatment modality for FIGO stage IB2. The 5-year survival in stage IB2-IIA patients with tumors larger than 4 cm was 30-60% with surgical intervention, although lymph node involvement was 35%-80%, resulting in a 5-year survival rate of 30-40% [21]. A few studies have used NACT (especially platinum-based chemotherapy) and reported response rates between 66.6% and 94% in cervical cancer [22,23], and some studies showed tumor shrinkage without severe chemotherapeutic toxicity; in practice, however, some patients were non-responders and had lower overall survival and progression-free survival rates [24]. A meta-analysis including six randomized clinical trials and a Gynecologic Oncology Group (GOG) study showed that NACT did not provide an overall survival benefit; however, NACT before surgery demonstrated advantages in reducing the rate of lymph node metastasis and parametrial
infiltration, thereby improving progression-free survival [17,25-27]. Pelvic lymph node invasion is one of the strongest negative prognostic factors in patients with stage IB2-IIB bulky disease; most of these patients have recurrence within the first year after NACT, and no adjuvant therapy has yet been indicated for end-stage patients [28]. Lymph nodes greater than 1 cm on MRI or CT scan and increased uptake values on PET-CT scan are helpful for the diagnosis of metastatic disease [29]. In patients with pelvic lymph node invasion (approximately 35% of stage IB2-IIB bulky cases), NACT should be recommended with consideration of quality of life and cost-effectiveness [27]. After NACT, response was evaluated on the basis of the Response Evaluation Criteria in Solid Tumors (RECIST), using clinical examination and imaging (MRI, CT scan, or PET-CT scan) [30]; patients who responded to NACT were treated with surgery, and if no response was found, chemotherapy was continued. The results showed that chemoradiotherapy is recommended in the NACT non-responder group for survival; however, the associated decrease in quality of life should be considered [31]. This approach cannot be generally recommended for every patient with locally advanced cervical cancer (LACC); patients should meet certain criteria, such as stage IB to IIA cervical cancer with greater tumor size, deep stromal invasion of the outer third of the cervix, lymphovascular invasion, and adenocarcinoma or adenosquamous carcinoma on pathology [32]. Results showed that NACT is effective in decreasing the incidence of pathological risk factors and the frequency of adjuvant treatment after radical surgery; most studies found that the pathology report of the surgical specimen was a predictive factor for the clinical
outcome in patients treated with NACT and radical surgery [32]. Patients with an optimal pathological response should receive two additional cycles of chemotherapy after surgery with the same NACT regimen [33]; patients with positive nodes, parametrial invasion, cut-through, or a suboptimal response are candidates for external beam radiation therapy (EBRT) or concurrent chemoradiation (CCRT) [34]. In patients with a poor response to NACT, the benefit of additional cycles of the same induction regimen, CCRT, or EBRT is limited, because chemo-resistant tumors are often radio-resistant as well [32]. The GOG (Gynecologic Oncology Group) reported that the addition of bevacizumab to chemotherapy is associated with a 3.7-month increase in overall survival compared with chemotherapy alone in patients with recurrent cervical cancer [35]. Bevacizumab plus chemotherapy should be tested in suboptimal responders to NACT with residual tumor [32].

Discussion
The aim of the present review was to study the effect of NACT before radical surgery in comparison with other treatments and various clinical outcomes. Given the limited access to radiotherapy centers in areas with poor radiotherapy facilities, especially in developing countries, NACT is an alternative treatment for patients with locally advanced lesions [36]. Chemotherapy has side effects: during preoperative intravenous chemotherapy, granulocytopenia, gastrointestinal toxicity, alopecia, numbness, palpitation, and electrolyte imbalance may occur; however, these toxicities usually resolve or disappear over time without any significant permanent complication. Surgery has complications involving the urinary system, lymphatic cysts, delayed healing, ileus, hydronephrosis, and venous thrombosis. Rates of complications were similar in both groups: 9.7% in the NACT group and
15.3% in the primary surgery group [37]. Primary radical surgery has some benefits, such as preserving fertility (the ovaries), the absence of radiation complications, and keeping radiotherapy available for recurrence in patients with stage IB-IIB cervical cancer [38]; accordingly, a GOG publication showed that NACT before radical surgery in patients with stage IB2 cervical cancer produced no significant difference in comparison with radical hysterectomy alone [17]. However, NACT improves resectability and survival in patients with stage IB2 disease, and another GOG study reported that CCRT with weekly cisplatin in stage IB2 significantly increases both progression-free survival and overall survival compared with radiation alone [39]. GOG also recommended that another Cervical Cancer Detection Program use CCRT in stage IB2 patients [34]. Most Japanese gynecologic oncologists prefer radical hysterectomy to CCRT in patients with stage IB-IIB cervical cancer [40]. The use of NACT before radical hysterectomy remains controversial for patients with stage IB2-IIB bulky cervical cancer [41]. For the NACT approach, the right patients should be chosen first: patients with a large tumor (5 cm or more) and patients with a pretreatment hemoglobin level below 12 g/dl have lower overall survival [42]. In patients with larger tumors, more advanced stages, or anemia, the prognosis after NACT before radical surgery is poor [17]. NACT response is associated with the stage at diagnosis, tumor size, and the pathology of the specimen (squamous tumors have a better response than non-squamous tumors) [43]. In our center, we classify patients over 70 years
old as ineligible for radical hysterectomy, and we highly recommend NACT followed by radical hysterectomy in stage IB2-IIB cervical cancer patients with a bulky tumor less than 4 cm in diameter. Parametrial invasion is one of the prognostic factors in cervical cancer, and it decreased significantly with the NACT approach. NACT before radical surgery, with radiotherapy after surgery, is very useful in patients with stage IIB bulky disease and only pelvic lymphadenopathy [44].

Conclusion
Neoadjuvant chemotherapy before surgery demonstrates advantages in reducing the rate of lymph node metastasis and parametrial infiltration, and so improves progression-free survival in patients with pelvic lymph node invasion (approximately 35% of stage IB2-IIB bulky cases). NACT also decreases tumor volume and minimizes the need for adjuvant radiotherapy; thus, NACT should be recommended with consideration of quality of life and cost-effectiveness. NACT is effective in decreasing the incidence of pathological risk factors. NACT response is associated with the stage at diagnosis, tumor size, and the pathology of the specimen (squamous tumors have a better response than non-squamous tumors). NACT seems to be feasible in the management of stage IB bulky cervical cancer, and NACT followed by surgery represents an alternative to primary chemoradiotherapy in young and sexually active patients.
Examining Factors Influencing Use of a Decision Aid in Personnel Selection

In this research, two studies were conducted to examine factors influencing reliance on a decision aid in personnel selection. Specifically, this research examined the effects of feedback, the validity of selection predictors, and the presence of a decision aid on the use of the aid in personnel selection. The results demonstrate that when people were provided with the decision aid, their predictions were significantly more similar to the predictions made by the aid than were those of people who were not provided with the aid. This suggests that when people are provided with an aid, they will use it to some degree. This research also shows that when provided with a decision aid with high cue validity, people will increase their reliance on the decision aid over multiple decisions. Assessing job candidates and selecting those with the highest qualifications is of utmost importance as organizations attempt to win the war for talent. Personnel selection systems aim to assess applicants on the physical and psychological attributes required to perform the job; ideally, these attributes help identify individuals who will demonstrate better performance and improve organizational effectiveness and efficiency (Farr & Tippins, 2010). A personnel selection system, however, is only as good as the measures used to assess the specified attributes, as well as the evaluators assessing applicants. Researchers have spent decades investigating the validity of various constructs and assessment methods. From meta-analytic studies, several conclusions can be made regarding the overall effectiveness of job performance predictors in selection. Specifically, general cognitive ability
is one of the best predictors of performance (Schmidt & Hunter, 2004), whereas conscientiousness is the most valid of the five-factor model of personality dimensions (Barrick, Mount, & Judge, 2001). Further, structured interviews are superior to unstructured interviews (Huffcutt & Arthur Jr., 1994; Huffcutt, Culbertson, & Weyhrauch, 2014), and using multiple valid predictors can improve predictions (Schmidt & Hunter, 1998). Moreover, research has demonstrated that practitioners should use decision aids (e.g., scores on cognitive ability tests) when making hiring decisions (Highhouse, 2008; Schmidt & Hunter, 1998). Nevertheless, decision makers tend to disregard statistically validated predictors and over-rely on their intuition, usually to the detriment of the selection decision (Highhouse, 2008; Slaughter & Kausel, 2014). Decision Aid Use Researchers have shown that people are hesitant to rely on decision aids when making predictions or decisions (Arkes, Dawes, & Christensen, 1986; Ashton, 1990; Diab, Pui, Yankelevich, & Highhouse, 2011). The reasons include an assumption that perfect prediction is possible and that people can consider more information than an aid. People believe they themselves are capable of perfect prediction (Highhouse, 2008), and any evidence to the contrary is downplayed or discounted. However, people cannot in fact perfectly predict behavior, and the "variance in [employee] success is simply not predictable prior to employment" (Highhouse, 2008, pp. 335-336). Therefore, when predicting human behavior, there is a guarantee of error. Furthermore, Dietvorst, Simmons, and Massey (2015) demonstrated that when people see a decision aid err, they distrust the aid more than they distrust themselves after making the same mistake. Impact of Validity on Decision Aid Use Uncertainty is a key factor influencing managerial
reliance on intuition. In a sample of 200 executives, almost all reported using intuition to guide decision making and noted relying on intuition most heavily when a high level of uncertainty existed (Agor, 1986). Managers also reported relying on intuition when outcomes were less scientifically predictable, when information was limited, when the information available did not provide clear direction on how to proceed, when statistical data had limited utility, and when time pressures were greatest. Additionally, researchers have demonstrated that the accuracy, or validity, of a decision aid influences its use. Gomaa et al. (2011) directly manipulated the validity of a decision aid, such that the decision aid participants were presented with had an accuracy of 50%, 60%, 70%, 80%, or 90%. Specifically, they informed participants that the decision aid gave correct estimates "in every X out of 10 cases" (p. 211). They found that more valid decision aids were used to a significantly greater extent. Similarly, across several studies Dietvorst et al. (2015) manipulated participants' experience with a decision aid by providing the decision aid's previous forecasting performance, their own previous forecasting performance, previous forecasting performance for both the decision aid and themselves, or no previous performance information. Their results showed that after viewing the forecasting performance of the decision aid, people were less likely to use it because they were less tolerant of the decision aid's smaller errors than of their own larger errors. Further, Gomaa et al. demonstrated that people utilize a decision aid more when it is more valid. All of this
information suggests that managers are most likely to rely on a decision aid when it has a higher level of validity. Feedback Slaughter and Kausel (2014) noted that providing decision makers with feedback regarding their personnel selection decisions can improve those decisions. Feedback may be a vital source of information in calibrating one's decision strategies when it assesses the accuracy of one's own decisions (e.g., Louie, 1999). Such feedback has had a meaningful influence on individuals' decision-making processes (Brown, 2006; Louie, 1999) and may influence one's future decision-making strategies. Louie (1999) demonstrated that individuals who receive positive feedback regarding a decision exhibit a strong hindsight bias; that is, they believe the outcome was predictable after learning it (Roese & Vohs, 2012). Additionally, Brown (2006) demonstrated that when decision outcomes are less uncertain, decision feedback actually leads to decreases in the effectiveness of decision-making strategies; however, when decision outcomes are more uncertain, decision feedback leads to more effective decision making. Wofford and Goodwin (1990) found that repeated negative feedback changed the decision-making strategies individuals used. In essence, the feedback was a form of operant conditioning whereby positive feedback reinforced a person's decision strategy and negative feedback punished a decision strategy. Because the negative feedback led to a change in the decision-making strategies individuals used, it would be expected that providing negative feedback in the form of information about the magnitude of one's errors would lead individuals to utilize different decision-making strategies. Additionally, it is likely that feedback will interact with the cue validity. When cues have lower validity, people who
receive feedback may be more likely to rely on their own pre-existing beliefs. Arkes et al. (1986) examined the effect of different types of feedback on decision aid reliance when the decision aid was 70% accurate, a high level of validity. They found that feedback type had a significant effect on decision aid reliance. However, the validity of the decision aid was not manipulated. Further, Gomaa et al. (2011) demonstrated that when a decision aid is more valid, people utilize the decision aid to a greater extent. Conversely, after observing a model make mistakes, participants instead relied on their own decision-making processes. Furthermore, researchers have directly examined the interactive effects of future uncertainty and feedback on optimal decision making strategies. When provided with feedback regarding uncertain future outcomes, people made less prudent decisions than when provided with feedback regarding certain outcomes (Brown, 2006). Cue Learning Within the field of judgment and decision making, researchers have focused on understanding how people make inferences and judgments about some unknown criterion based on probabilistic cues (Brunswik, 1943). For example, every year faculty members utilize cues (undergraduate GPA, GRE scores, letters of recommendation) to make inferences about graduate school applicants' likelihood of success (graduate school GPA). Researchers have also examined whether and how people can accurately learn the appropriate weighting of various cues for making judgments. For instance, Santarcangelo, Cribbie, and Ebesu Hubbard (2004) demonstrated that training participants on the appropriate use of visual, vocal, and verbal content cues leads to more accurate judgments of the truthfulness of messages. Similarly,
in their test of whether the modality of cue-based training impacts appropriate use of cues, Henriksson and Enkvist (2018) found that feedback-based training, observational learning, and training focusing on cue profile matching all significantly increased accuracy of judgments. Trippas and Pachur (2019) found that feedback and continuous criterion information lead to cue learning. Further, when cues are experienced as being predictive of important outcomes (compared to not being predictive), people are better able to discriminate between cues when the cue predictiveness is established during cue training (Le Pelley, Turnbull, Reimers, & Knipe, 2010). However, as Dawes (1979) suggests, even improper cue weighting can be more accurate than normal human judgments, in part because people tend to change the relative weighting of the cues between judgments. The present study can be construed as a training design in which decision makers are taught the relative importance of various selection cues. Specifically, in the current study, when the decision aid is present, participants are given information about the proper model and relative importance of the predictors as well as scores on the predictors (or cues). When the decision aid is not present, participants are not given information about how good the different predictors are, yet they still see the applicant scores on the various predictors. Thus, we contribute not only to the judgment and decision making literature but also to research involving cue training effects. The Current Study Hypothesis 1: Participants' hiring choices and performance predictions will more closely match those made by the decision aid when cues are more
valid than when they are less valid. Hypothesis 2: Participants' hiring choice and performance predictions will more closely match the choice and performance predictions made by the decision aid when it is provided. Hypothesis 3: The presence of the decision aid will interact with the validity of the cues, such that when the decision aid is present and the cues are more valid, participants' hiring choices and performance predictions will more closely match those made by the decision aid than in all other conditions. Hypothesis 4: Participants' hiring choice and performance predictions will more closely match those made by the decision aid when negatively framed feedback is provided regarding participants' predictions than when no feedback is provided. Hypothesis 5: The effect of feedback on decision aid reliance will depend on the validity of the cues, such that when the cues are more valid and feedback is provided, participants' hiring choices and performance predictions will more closely match those made by the decision aid than all other conditions. The hypotheses we are testing in this study build upon the existing literature in several ways. First, we directly evaluate recommendations made by Slaughter and Kausel (2014), who argued that in order to improve personnel selection decisions, decision makers should be asked to make precise estimates of performance and be provided with feedback regarding those estimates. In both of the studies we discuss below, we presented participants with feedback regarding the performance predictions they made. Further, in Study 2, we directly manipulated the presence of feedback to examine its
effects on decision aid use. We also extend the literature on feedback by examining the role of feedback over multiple occasions to determine whether people will learn from previous decisions and predictions (e.g., Louie, 1999; Wofford & Goodwin, 1990). Slaughter and Kausel (2014) also argued that instead of instructing decision makers to make a decision based solely on a statistical prediction, decision makers should be provided with decision support on how to select among applicants (e.g., using a decision aid). In both studies, we directly tested this assertion. We sought to replicate and extend the findings of previous studies examining the effects of cue validity (e.g., Gomaa et al., 2011). Last, we extend each of these assertions by examining the interactive effects they have on decision aid use. Method Participants. Participants were recruited from Amazon's Mechanical Turk program. Attention check and screening items were used to identify and exclude participants who were not paying attention and were simply clicking through the survey. Usable data were obtained from 154 participants. Participants were paid one US dollar for their participation. Approximately 57% of participants were male with an average age of 37.7 (SD = 11.8), 73% were Caucasian, and 89% were employed. For employed individuals, the mean number of hours worked per week was 40.3 (SD = 9.5). Participant hiring experience was measured using a 6-point Likert scale (1 = no experience to 6 = extremely experienced). The average hiring experience level of participants was 3.19
(SD = 1.59). Decision task. The decision task was adapted from Kausel, Culbertson, and Madrid (2016). Participants completed 10 trials in which they compared two applicants for a sales agent job. Applicant data came from an actual organization that was validating its selection procedures. Over 200 applicants were assessed with a variety of selection tools, and three months later their performance was assessed by their supervisors. We randomly selected 10 pairs of applicants for study participants to evaluate. For each trial, participants were presented with the two applicants' percentile scores on tests of cognitive ability, conscientiousness, and an unstructured interview. Participants were asked to predict each candidate's performance percentile rank from 0 (will perform worse than all other employees) to 99 (will perform better than all other employees). Participants then selected the candidate that the company should hire. Feedback information. Participants received feedback after each decision. Participants were shown their original predictions (i.e., their estimated performance percentile rank), the job performance of both candidates once hired (i.e., their actual performance percentile rank), and the prediction error for each candidate's performance (e.g., "Your prediction for Candidate A was off by X% points"). As such, participants were informed about the extent to which their predictions differed from the candidates' actual performance. Cue validity manipulation. Participants were randomly assigned to a high validity condition or a moderate validity condition. Participants were unaware of which condition they were in. In the high validity condition, the job candidates' eventual performance was highly predictable (R² = .962) from an appropriate weighting of the three
predictors. In the moderate validity condition, the job candidates' eventual performance was less predictable (R² = .504) from an appropriate weighting of the three predictors. The weighting of the predictors in both conditions was .50 for cognitive ability, .40 for conscientiousness, and .10 for the unstructured interview, based on the results of meta-analyses (e.g., Huffcutt & Arthur, 1994; Schmidt & Hunter, 1998). The model used to create the moderate validity condition was:

Equation 2: y_lp = round(logistic(logistic_percent(.50*x1 + .40*x2 + .10*x3) + x_r) * 100)

where y_lp represents the candidate's eventual performance in the moderate validity condition; the corresponding Equation 1 used the same weighting to generate performance in the high validity condition. In both equations, x1 represents the candidate's cognitive ability score, x2 represents the candidate's conscientiousness score, and x3 represents the candidate's interview score. Additionally, x_r ~ N(0, 1) represents a value randomly sampled from a standard normal distribution with a mean of 0 and a standard deviation of 1. In order to determine the actual validity of the cues once the random error had been introduced into the candidates' eventual performance, the candidates' test scores were used to predict their eventual performance. The model used to predict the candidates' eventual performance used the same weighting as Equations 1 and 2. Therefore, the formula used to predict the candidates' eventual performance was:

Equation 3: ŷ = .50*x1 + .40*x2 + .10*x3

where ŷ is the predicted eventual performance for the candidate.
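To make the data-generating step concrete, the R sketch below illustrates the procedure; it is not the authors' code. The uniform percentile draws, the applicant pool size, the logit/logistic helper functions (standing in for the "logistic_percent" transform above), and the noise scale are all illustrative assumptions; only the .50/.40/.10 weighting and the additive standard-normal error term come from Equations 2 and 3.

# Illustrative sketch (not the authors' code): generate candidates' eventual
# performance from weighted predictor percentiles plus normal error (Equation 2),
# then check how predictable performance is from the cue composite (Equation 3).
set.seed(1)

logit    <- function(p) log(p / (1 - p))   # assumed stand-in for "logistic_percent"
logistic <- function(x) 1 / (1 + exp(-x))

n  <- 200                                   # assumed applicant pool size
x1 <- runif(n, 1, 99)                       # cognitive ability percentile
x2 <- runif(n, 1, 99)                       # conscientiousness percentile
x3 <- runif(n, 1, 99)                       # unstructured interview percentile

composite <- .50 * x1 + .40 * x2 + .10 * x3                          # Equation 3
y <- round(logistic(logit(composite / 100) + rnorm(n, 0, 1)) * 100)  # Equation 2

summary(lm(y ~ composite))$r.squared       # realized cue validity after adding error

Under this reading, the conditions differ only in how much random error is injected: scaling the rnorm() term up (for example by 1.5, as in Equation 4 of Study 2) lowers the realized R², whereas shrinking it yields the highly predictable high validity condition.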
In the high cue validity condition, Equation 3 resulted in an R² = .962. In the moderate cue validity condition, Equation 3 resulted in an R² = .504. This confirms that the conditions represent situations in which the selection predictors are highly valid and moderately valid, respectively. Decision Aid Manipulation Two operationalizations of decision aid reliance were utilized: the degree of match between the participant's and model's predicted performance as assessed by the percentile rank, and the degree of match between the participant's and model's hire choice. Participants were randomly assigned to one of two conditions in which a decision aid was either present or absent. In the decision aid present condition, participants were provided information about the validity of the three predictors and information regarding a statistical model that should be used to predict candidate performance. In the decision aid absent condition, participants did not receive any information regarding the validity of the three selection predictors or the model. Participants were asked to utilize the candidates' scores to estimate the candidates' performance as well as select one of the candidates to hire. Participants in the decision aid present condition were presented with Equation 3, but they were not provided with the results of the calculations for each candidate. Instead, participants were only presented with the result of the validity weights multiplied by the predictor scores. Thus, participants would still be required to add the three weighted predictor scores. The rationale for this was that participants who engaged in more systematic information
processing (i.e., relied more on the statistical model's prediction) would actually add these scores. Thus, their predictions should match the predictions made by the model. In contrast, individuals who engaged in more automatic information processing would not rely on the information provided by the model. Instead, they would rely on their own decision-making processes to make their predictions, which would likely result in predictions that do not match the predictions made by the model. In summary, when the decision aid is provided, participants receive information about the proper statistical model, the relative importance of each of the predictors (cues), and the scores on the predictors (cues) for both candidates. When the decision aid is not provided, participants do not receive any information about the relative importance of the predictors but are provided with the candidates' scores on the predictors. Thus, this study design can be thought of as a training design that attempts to teach decision makers to use the decision aid and to convey the relative importance of the predictors. The instructions provided and the example decision stimuli are presented in Appendix C. Results Match in hire choice. To examine reliance on the decision aid based on the match between the participant's and model's hire choice, a repeated measures logistic regression was conducted using the generalized linear mixed-effects modeling package in R (Bates, Maechler, Bolker, & Walker, 2014). The cue validity, decision aid presence, trial, and their interactions were entered as fixed effects, and the match in hire choice was entered as the dependent variable.
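For readers who want to see what this specification looks like in code, a minimal lme4 sketch of the hire-choice model is given below. The paper cites the R package but not its exact call, so the data frame and column names (d, match_hire, validity_ec, aid_ec, trial_c, participant) and the by-participant random intercept are illustrative assumptions rather than the authors' actual syntax.

library(lme4)

# Repeated-measures logistic regression: does each hire choice match the
# decision aid's choice? Fixed effects are the (effect-coded) cue validity and
# decision aid conditions, the (mean-centered) trial number, and their
# interactions; a random intercept per participant handles the repeated trials.
m_hire <- glmer(
  match_hire ~ validity_ec * aid_ec * trial_c + (1 | participant),
  data   = d,
  family = binomial
)
summary(m_hire)  # the reported effect of aid presence corresponds to the aid_ec term

The match-in-predicted-performance analysis described next follows the same pattern, substituting lmer() and the absolute difference between the participant's and the model's performance predictions as the outcome.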
To reduce the effects of multicollinearity, the predictors were centered before being entered into the model, using effect coding of cue validity and decision aid presence and mean centering of trial. The results of Model 1 are displayed in Figure 1 (all figures are displayed in Appendix A), which showed a significant main effect of model presence, B = 0.453, z = 5.126, p < .001. When model information was provided, participants' hire choices were significantly more likely to match the model's hire choices than when model information was not provided. As can be seen in Figure 1, participants who were provided with the decision aid on average made hiring decisions that were approximately 11% more likely to match the decision aid's choices. There was not a significant main effect of cue validity or trial. Further, no interactions were significant (see Table 1; all tables are displayed in Appendix B). Match in predicted performance. To examine reliance on the decision aid based on the match between the participant's and model's predictions about the candidates' performance, a repeated measures linear regression was conducted using the linear mixed-effects modeling package in R (Bates et al., 2014). Cue validity, decision aid presence, trial, and their interactions were entered as fixed effects. The absolute value of the difference between the participants' and model's performance predictions for each candidate was used as the dependent variable. To reduce effects of multicollinearity, predictors were centered before being entered into the model. The results revealed a significant main effect of
decision aid presence (B = -0.750, t(3071) = -5.969, p < .05), cue validity (B = -0.368, t(3071) = 2.932, p < .05), and trial (B = -0.036, t(3071) = -3.514, p < .05). These main effects were qualified by significant interactions. Specifically, there was a significant interaction between cue validity and decision aid presence (B = -0.307, t(3071) = 2.443, p < .05), such that the effect of the cue validity was stronger when the decision aid was provided (B = -0.675) than when it was not provided (B = -0.061). In other words, when the decision aid was provided, participants' performance predictions were on average 5.78% closer to the decision aid's performance predictions (see Figure 2). Additionally, there was a significant interaction between trial and cue validity (B = -0.038, t(3071) = -3.754, p < .05), such that the effect of trial was stronger when the cue validity was high (B = -.074) than when the cue validity was moderate (B = 0.002). For those in the high validity condition, performance predictions improved from a 2.42% to a 1.24% difference from the decision aid's predictions between Trial 1 and Trial 10. In contrast, the difference in performance predictions between participants in the moderate validity condition and the decision aid's predictions did not significantly change from Trial 1 to Trial 10 (Trial 1: 5.45%, Trial 10: 5.57%). This suggests that learning occurred over the 10 trials in the high validity condition but not in the moderate validity condition (see Figure
3). Table 2 summarizes these results. Figures 4 through 7 show how participants' weighting of the different predictors (cognitive ability, conscientiousness, and unstructured interview ratings) changed over the course of the 10 trials for each of the study conditions. Exploratory analyses. As a result of a query made during the review process, we conducted exploratory analyses to examine whether our findings would be applicable to real work scenarios in which decision makers are often experienced. Specifically, we explored the role of hiring experience as a moderating variable in our analyses. First, we repeated the analyses predicting match in hiring choice, but we added hiring experience and all subsequent interactions as fixed effects. To reduce the effects of multicollinearity, hiring experience was mean centered. As can be seen in Table 3, none of the interactions including hiring experience were significant. We then repeated the analyses predicting match in performance predictions with mean-centered hiring experience and the subsequent interactions entered as fixed effects. As can be seen in Table 4, there is a significant four-way interaction among cue validity, decision aid presence, trial, and hiring experience. Figure 8 displays the four-way interaction. As can be seen in the figure, previous hiring experience does impact use of a decision aid. Specifically, when the decision aid is provided, the cue validity is moderate, and experience is low, decision makers only perform slightly worse than the decision aid itself. However, when the decision aid is provided, the cue validity is moderate, and experience is high, decision makers perform much worse than
the decision aid. This suggests that more experience may lead people to be less willing to use the decision aid. However, when the decision aid is provided, cue validity is moderate, and experience is high, we do see an increase in the match in performance predictions between the decision aid and the participants over time. This suggests that those with higher experience increased their use of the decision aid across the 10 trials. Discussion The first study sought to examine the interactive effects of decision aid presence and cue validity on reliance on a decision aid over a series of hiring decisions. Cue validity was not a significant predictor when examining the degree of match in hiring choices. However, cue validity was a significant predictor when examining the degree of match in performance predictions, such that when cues had higher validity, there was a greater degree of match between participants' performance predictions and the model's performance predictions. Therefore, Hypothesis 1 was partially supported. When examining the degree of match in hiring choices and in performance predictions, the presence of the decision aid was a significant predictor, thus supporting Hypothesis 2. Only when examining the degree of match in performance predictions was the interaction significant, such that the greatest degree of match in performance predictions occurred when the decision aid was provided and cues were highly valid. Therefore, Hypothesis 3 was partially supported. Results also revealed a significant effect of decision trial, suggesting a learning effect over time. Indeed, the exploratory analyses showed that individuals with higher experience
tended to increase their use of the decision aid over time. Study 2 Study 2 extended Study 1 in four ways. First, Study 2 utilized 20 decision trials instead of 10 (to better examine learning). Second, a third cue-validity condition was introduced to represent realistic hiring situations (R² = .204). Third, feedback was manipulated, such that half of the participants received feedback while the other half did not. Finally, handwriting analysis was added as a fourth cue and distractor to determine whether participants' cue weighting strategies could accommodate a cue with a near-zero relationship with job performance (Reilly & Chao, 1982; Schmidt & Hunter, 1998). Method Participants. The same attention check items from Study 1 were used. Usable data were obtained from 519 hiring professionals recruited using Qualtrics participant panels. Participants had approximately 7.7 (SD = 6.7) years of hiring experience. Most (93%) were currently employed, and those employed worked an average of 43.0 (SD = 10.0) hours per week. Approximately 52% of participants were female with an average age of 39.0 (SD = 11.3), and 80% were Caucasian. Materials and procedure. This study used the same decision task used in Study 1 except with 20 instead of 10 selection decisions. The ordering of the 20 decisions was randomized to account for order effects. As in Study 1, participants were randomly assigned to the cue validity conditions. However, a third condition was added. Participants were randomly assigned to the high (R² = .962), moderate (R² = .504), or realistic (R² = .204) cue
validity condition. The same procedures used in Study 1 were used to create the realistic validity condition, except with a greater degree of random error introduced. The formula used to create the realistic cue validity condition was:

Equation 4: y_r = round(logistic(logistic_percent(.50*x1 + .40*x2 + .10*x3 + .0*x4) + 1.5*x_r) * 100)

where y_r represents the candidate's eventual performance in the realistic condition, x1 represents the candidate's cognitive ability score, x2 represents the candidate's conscientiousness score, x3 represents the candidate's interview score, and x4 represents the candidate's handwriting analysis score. Additionally, x_r ~ N(0, 1) represents a value randomly sampled from a standard normal distribution. In order to determine the actual validity of the cues once the random error had been introduced into the candidates' eventual performance, the candidates' test scores were used to predict their eventual performance; the prediction formula used the same weighting as Equation 3. As in Study 1, participants were randomly assigned to receive or not receive the decision aid. Participants were also randomly assigned to receive or not receive feedback regarding their performance predictions and hiring choices after each decision. Those assigned to the feedback condition were shown their original performance predictions, the actual job performance of both candidates once they were hired, and their prediction error for each candidate's performance. Participants assigned to not receive feedback did not receive any information about their original performance predictions,
the actual job performance of both candidates once they were hired, or their prediction error for each candidate's performance. Results Match in hire choice. The analytic procedures used in Study 1 were also used in Study 2. Cue validity, model presence, the presence of feedback, trial, and their interactions were entered as fixed effects. The match in hire choice was entered as the dependent variable. Categorical predictors were centered using effects coding, and trial was mean centered. No significant main effect of cue validity on match between the participants' and model's hiring choices emerged. However, there was a significant main effect of decision aid presence, B = 0.153, z = 3.98, p < .001. When the decision aid was provided, participants' hire choices were significantly more likely to match the model's hire choices than when model information was not provided. Additionally, there was a significant main effect of feedback on whether participants' hiring choices matched the model's choices, B = -0.088, z = -2.29, p = .022. When feedback was provided, participants' hiring choices were significantly less likely to match the model's choices. Further, there was a significant three-way interaction among cue validity, feedback, and trial, B = 0.017, z = 2.47, p = .013. Table 5 summarizes these model effects. Figure 8 displays the significant three-way interaction. As can be seen in the figure, when feedback is provided and cue validity is high, people are more likely to make choices that match the decision aid's over time. However, when the cue validity is moderate or
realistic, there is essentially no change over time in the likelihood that participants' hiring choices match the decision aid's. This suggests that when feedback is provided and the cue validity is high, people are more likely to use the decision aid over time than when no feedback is provided or when the cues have only moderate or realistic validity.

Match in performance predictions. Cue validity, decision aid presence, feedback, trial, and their interactions were entered as fixed effects. The absolute value of the difference between the participants' and model's performance predictions for each candidate was used as the dependent variable. The predictors were centered before being entered into the model. Table 6 displays the model effects. Results showed no significant effect of cue validity on the degree of similarity between the participants' and model's performance predictions. There was also no significant main effect of feedback, which likely suggests that the feedback manipulation did not affect participants' reliance on the decision aid and that participants were unable to learn from the feedback in the way it was presented. However, a significant main effect of decision aid presence emerged (B = -0.534, t(20719) = -10.13, p < .05), such that participants provided with the decision aid made performance predictions significantly more similar to the model's than participants who were not provided with the decision aid. There was also a significant main effect of trial (B = -0.015, t(20719) = -4.78, p < .05), such that participants' predictions of the candidates' performance became more similar to
the model's predictions over time. However, these main effects were qualified by significant interactions. There was a significant interaction between cue validity and decision aid presence, F(2, 507) = 3.566, p = .029, which was further qualified by a significant three-way interaction among cue validity, decision aid presence, and trial (F(2, 507) = 7.211, p < .001; see Figure 5). Therefore, post hoc comparisons of the simple slopes in this interaction were conducted using Bonferroni-corrected p-values. Post hoc analyses revealed that, in the high validity condition, the slope for trial when the decision aid was provided (B = -0.030) was significantly different from the slope when the decision aid was not provided (B < 0.001), z = -2.931, p = .027. Additionally, when the decision aid was provided, the slope for trial in the high validity condition (B = -.030) was significantly different from the slope in the moderate validity condition (B = 0.006, z = -2.888, p = .030) and in the realistic validity condition (B = .022, z = -4.547, p < .001). There was also a significant three-way interaction among cue validity, feedback, and trial. Figures 9 and 10 display these interactions. Figures 11 through 22 show how participants' weighting of the different predictors (cognitive ability, conscientiousness, and unstructured interview ratings) changed over the course of the 20 trials in each of the study conditions.

Exploratory analyses. As in Study 1, we explored the role of hiring experience as a moderator in our analyses. First, we repeated the analyses predicting match in hiring choice, but we added
hiring experience and all of its interactions as fixed effects. To reduce the effects of multicollinearity, hiring experience was mean centered. As shown in Table 7, there was a significant interaction between experience and cue validity, B = .023, z = 2.28, p = .022; there were no other significant interactions with experience. For purposes of illustration, Figure 23 displays the five-way interaction among decision aid presence, cue validity, feedback, trial, and experience. We then repeated the analyses predicting match in performance predictions, with mean-centered hiring experience and its interactions entered as fixed effects. When predicting match in performance predictions, there were several significant interactions involving experience. Specifically, there were significant four-way interactions among cue validity, decision aid presence, trial, and hiring experience (B = -.001 for the first cue-validity contrast, t = -2.205, p = .027; B = .002 for the second cue-validity contrast, t = 3.695, p < .001). There was also a significant four-way interaction among cue validity, feedback, trial, and hiring experience (B = -.001 for the first cue-validity contrast, t = -2.233, p = .026). Last, there was a significant four-way interaction among decision aid presence, feedback, trial, and hiring experience, B = .001, t = -2.478, p = .013. For brevity and ease of interpretation, the five-way interaction is displayed in Figure 24. As can be seen in the figure, when the cue validity is high, the only difference observed was when the decision aid was provided: when people were provided with the decision aid, they were more likely to make performance predictions that
matched those of the decision aid, suggesting that they were using the decision aid. The figure also shows that, in the moderate validity condition, not providing feedback had a more pronounced effect on individuals with higher experience when they were provided with the decision aid: they were less likely to make performance predictions that matched the decision aid over time. A similar pattern of decreased match in performance predictions over time occurred in the realistic validity condition when people were not provided with feedback.

Discussion

Study 2 was conducted to replicate the findings of Study 1 as well as to test Hypotheses 4 and 5. The analyses showed no significant main effect of cue validity on the degree to which participants' hiring choices and performance predictions matched those made by the model. Therefore, Hypothesis 1 was not supported. In contrast, the analyses did show a significant main effect of the presence of the decision aid on the degree to which participants' hiring choices and performance predictions matched those made by the model, supporting Hypothesis 2. Further, there was no significant interaction when predicting the match in hiring choice. However, when predicting similarity in performance predictions, there was a significant interaction between the presence of the decision aid and the validity of the cues: when the decision aid was provided and the cues had high validity, participants relied on the decision aid more than when the validity of the cues was realistic, but not more than when the cues had moderate validity.
Therefore, Hypothesis 3 was partially supported. Unfortunately, the observed validity of selection predictors more closely resembles the realistic validity condition (Schmidt & Hunter, 1998). A practical reason people are hesitant to rely on decision aids, then, is that decision aids err; in the realistic validity condition, this was reflected in a slight (nonsignificant) decrease in reliance over time. The presence of feedback was a significant predictor only when examining the match between participants' and the model's hiring choices, and its effect was in the direction opposite to that predicted. Therefore, Hypothesis 4 was not supported. Across both analyses, there was no significant interaction between the presence of feedback and cue validity, so Hypothesis 5 was also not supported. This is surprising, especially given the three-way interaction among trial, decision aid presence, and cue validity, which suggests that in the high validity condition people are able to learn to use the decision aid when it is provided. However, people cannot learn about the validity of the decision aid without feedback, and learning may depend on the form and content of that feedback. A secondary purpose of Study 2 was to increase the number of decisions participants made in order to better examine learning effects. In contrast to Study 1, there was a significant three-way interaction among the validity of the cues, the presence of the decision aid, and trial (see Figure 5). Participants showed the greatest degree of learning when the decision aid was provided and the cues were highly valid. As the validity of the cues decreased,
learning decreased. When the validity of the cues was weakest, and thus most realistically mirrored the validity of current hiring cues, learning was not observed. There are two possible conclusions from this finding. First, there may have been too few decisions for participants to learn the predictive relationships in the presence of such high degrees of uncertainty. Alternatively, there may be so much uncertainty that the relationships are unlearnable. Last, the exploratory analyses revealed several significant interactions with hiring experience. Together, these findings suggest that providing feedback regarding the accuracy of one's decisions relative to that of a decision aid may be essential to getting people, especially those with greater hiring experience, to rely on decision aids.

GENERAL DISCUSSION

The purpose of these two studies was to examine the conditions under which people will utilize decision aids in a personnel selection context. Specifically, this research sought to examine whether (a) the mere presence of a decision aid leads people to rely on the decision aid, (b) the validity of the predictors used in the selection context influences reliance on a decision aid, (c) the presence of feedback regarding one's predictions of a candidate's performance influences reliance on a decision aid, and (d) the interactions among these factors influence reliance on a decision aid. In these studies, the decision aid took the form of a statistical model indicating which candidate should be hired. In both Study 1 and Study 2, the evidence clearly demonstrated that the mere presence of a decision aid leads people to rely on
the decision aid. Although this is not an overly profound finding, it does have merit: by including a comparison group (those who did not receive the decision aid), we were able to examine whether participants were actually relying on the decision aid. The finding that participants rely, to some extent, on a decision aid when it is provided also has practical importance. Both studies demonstrated that when a decision aid is present, people do rely on it, albeit not entirely. Organizations should therefore provide individuals with a decision aid, which should ultimately make their performance predictions and hiring choices more accurate. A second major finding of the present research is that the validity of the cues interacts with the presence of a decision aid to influence reliance on the decision aid when making performance predictions. In both Study 1 and Study 2, the validity of the cues interacted with the presence of the decision aid, such that the match between participants' and the model's predictions of candidates' performance was greatest when the decision aid was provided and the validity of the cues was high. This finding bears on nearly all personnel selection research, which aims to identify and develop methods of assessment that maximize the relationship between selection tests and future job performance. The present research demonstrated that reliance on the decision aid was greatest when the validity of the predictors was greatest. Unfortunately, the observed validity of selection
predictors more closely resembles the realistic validity condition (Schmidt & Hunter, 1998). Therefore, a practical reason why people are hesitant to rely on decision aids is that decision aids do err, which leads people to distrust them (e.g., Dietvorst et al., 2015). This is especially apparent in Figure 5. In the high validity condition, people saw the accuracy of the decision aid, which led to an increase in reliance over time. However, in the realistic validity condition, people saw the decision aid err, which led to a slight (nonsignificant) decrease in reliance over time. This research also sought to answer the call to examine the effect of immediate feedback on reliance on a decision aid in a personnel selection context (Slaughter & Kausel, 2014). The results of Study 2 showed that feedback did not have a significant effect on reliance on the decision aid, nor did feedback interact with trial, decision aid presence, or the validity of the cues to influence reliance on the decision aid. This is surprising, especially given the three-way interaction among trial, decision aid presence, and cue validity, which suggests that in the high validity condition people are able to learn to use the decision aid when it is provided. However, people cannot learn about the validity of the decision aid without feedback. The form and content of feedback may therefore influence reliance on a decision aid.

Limitations and Future Directions

One limitation of the present research is
that participants saw only candidates' scores, which may not resemble real hiring decisions, where managers likely have more information about the candidates (e.g., résumés, references). In the present research, participants' information was limited, which may have lowered the psychological fidelity of the hiring situation. Thus, the current studies may represent a best-case scenario in which fewer invalid cues are present that could draw a hiring manager's attention. In Study 2, participants were randomly assigned to receive or not receive feedback. One limitation of this design is that participants in the no-feedback condition were not able to learn the validity of the cues. As such, further investigation is needed into whether providing feedback interacts with cue validity to influence reliance on the decision aid. Previous researchers have argued that resistance to using decision aids stems from a lack of trust in the aid (e.g., Dietvorst et al., 2015). Future research should therefore assess participants' trust in a decision aid and how it changes over a series of decisions.

General Conclusions

This research sought to examine the effects of cue validity, presence of a decision aid, and feedback on reliance on a decision aid in a personnel selection context. Providing a decision aid led to reliance on that aid, at least to some degree. Further, when the cues had high validity and the decision aid was provided, people learned to increase their reliance on the aid.

Figure captions

Two-way interaction between decision aid presence
and decision aid validity predicting match in performance predictions in Study 1. Note that the y-axis has been inverted to ease comparison across operationalizations of decision aid reliance. Error bars represent +/- 1 standard error.

Two-way interaction between trial and cue validity predicting match in performance predictions in Study 1. Note that the y-axis has been inverted to ease comparison across operationalizations of decision aid reliance. Error bars represent +/- 1 standard error.

Change in participants' weighting of the predictors over time when the decision aid is provided and the cue validity is high. Error bars represent +/- 1 standard error.

Figure 5. Change in participants' weighting of the predictors over time when the decision aid is provided and the cue validity is moderate. Error bars represent +/- 1 standard error.

Figure 6. Change in participants' weighting of the predictors over time when the decision aid is not provided and the cue validity is high. Error bars represent +/- 1 standard error.

Figure 7. Change in participants' weighting of the predictors over time when the decision aid is not provided and the cue validity is moderate. Error bars represent +/- 1 standard error.

Four-way interaction among decision aid presence, decision aid validity, trial, and hiring experience predicting match in performance predictions in Study 1. Note that the y-axis has been inverted to ease comparison across operationalizations of decision aid reliance. Error bars represent +/- 1 standard error.

Three-way interaction among cue validity, decision aid presence,
and trial predicting match in performance predictions in Study 2. Note that the y-axis has been inverted for ease of comparison across operationalizations of decision aid reliance. Error bars represent +/- 1 standard error.

Figure 11. Change in participants' weighting of the predictors over time when the decision aid is provided, cue validity is high, and feedback is provided. Error bars represent +/- 1 standard error.

Change in participants' weighting of the predictors over time when the decision aid is provided, cue validity is moderate, and feedback is provided. Error bars represent +/- 1 standard error.

Figure 13. Change in participants' weighting of the predictors over time when the decision aid is provided, cue validity is realistic, and feedback is provided. Error bars represent +/- 1 standard error.

Figure 14. Change in participants' weighting of the predictors over time when the decision aid is provided, cue validity is realistic, and feedback is provided. Error bars represent +/- 1 standard error.

Figure 15. Change in participants' weighting of the predictors over time when the decision aid is provided, cue validity is moderate, and no feedback is provided. Error bars represent +/- 1 standard error.

Figure 16. Change in participants' weighting of the predictors over time when the decision aid is provided, cue validity is realistic, and no feedback is provided. Error bars represent +/- 1 standard error.

Figure 17. Change in participants' weighting of the predictors over time when the decision aid is provided, cue validity
is high, and feedback is provided. Error bars represent +/- 1 standard error.

Figure 18. Change in participants' weighting of the predictors over time when the decision aid is provided, cue validity is moderate, and feedback is provided. Error bars represent +/- 1 standard error.

Figure 19. Change in participants' weighting of the predictors over time when the decision aid is provided, cue validity is realistic, and feedback is provided. Error bars represent +/- 1 standard error.

Change in participants' weighting of the predictors over time when the decision aid is provided, cue validity is high, and no feedback is provided. Error bars represent +/- 1 standard error.

Figure 21. Change in participants' weighting of the predictors over time when the decision aid is provided, cue validity is moderate, and no feedback is provided. Error bars represent +/- 1 standard error.

Figure 23. Five-way interaction among decision aid presence, cue validity, feedback, trial, and hiring experience predicting match in hiring choices in Study 2. Error bars represent +/- 1 standard error.

Five-way interaction among decision aid presence, cue validity, feedback, trial, and hiring experience predicting match in performance predictions in Study 2. Note that the y-axis has been inverted to ease comparison across operationalizations of decision aid reliance. Error bars represent +/- 1 standard error.

Note. Bolded values are significant at p < .05. Variables were coded using effects coding. Cue validity 1 was coded as 1 = highly valid cues, 0 = moderately valid
cues, -1 = low validity cues. Cue validity 2 was coded as 0 = highly valid cues, 1 = moderately valid cues, -1 = low validity cues. Decision aid presence was coded as 1 = decision aid present, -1 = decision aid not present. Feedback was coded as 1 = feedback provided, -1 = feedback not provided.

(Note: Percentile is the percentage of individuals who score less than the candidate. For example, a percentile score of 50 on the cognitive ability test means that the candidate performed better than 50% of the other individuals.)

Participants assigned to the decision aid condition

Recall that the prediction formula was:

0.50 x (cognitive ability score) + 0.40 x (conscientiousness score) + 0.10 x (unstructured interview score) = Predicted Job Performance

Based on the scores for each candidate, the formula for each candidate is:

Candidate A: 42.5 + 38 + 5 = Predicted Job Performance
Candidate B: 41 + 3.6 + 7 = Predicted Job Performance

Instructions (all conditions)

Thanks for participating in this study. One of the major objectives of personnel selection is to predict candidates' performance based on available information. In this study, we are interested in how people make hiring decisions using limited information. As such, your opinions are very important to us. The following is from a large airline company. The firm was validating its selection procedures for the ticket agent job. As such, more than 200 applicants took a standardized personality test (conscientiousness factor) and a standardized cognitive ability test, and completed an unstructured interview before being hired.
Three months after being hired, these same individuals were assessed by their supervisors in terms of their general performance. On the following pages, you'll be presented with prehiring information for 20 pairs of applicants. Based on this information, for each pair, we ask you to:

• Make a prediction of each candidate's potential job performance as rated by his or her supervisor, and
• Choose which candidate should be hired.

Information about the decision aid (decision aid present condition)

According to research examining various selection procedures, scores on standardized cognitive ability tests are good predictors of future job performance. Scores on the conscientiousness factor of standardized personality tests are moderate predictors of future job performance. Last, scores on unstructured interviews are weak predictors of future job performance, and scores on the handwriting analysis do not predict future job performance. Based on this information, one can use the following equation to estimate a candidate's job performance:

0.50 x (cognitive ability score) + 0.40 x (conscientiousness score) + 0.10 x (unstructured interview score) + 0.00 x (handwriting analysis score) = Predicted Job Performance
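The weighted-sum decision aid above is simple enough to express directly in code. The sketch below applies the stated weights to two hypothetical candidates; the percentile scores are invented purely for illustration and are not the candidate scores used in the study materials.

```python
# A minimal sketch of the linear decision aid shown above. The candidate percentile
# scores below are hypothetical and are used only to illustrate the calculation.
WEIGHTS = {
    "cognitive_ability": 0.50,
    "conscientiousness": 0.40,
    "unstructured_interview": 0.10,
    "handwriting_analysis": 0.00,
}

def predicted_performance(scores):
    """scores: dict of percentile scores (0-100) keyed by cue name."""
    return sum(WEIGHTS[cue] * scores[cue] for cue in WEIGHTS)

candidate_a = {"cognitive_ability": 85, "conscientiousness": 95,
               "unstructured_interview": 50, "handwriting_analysis": 60}
candidate_b = {"cognitive_ability": 82, "conscientiousness": 40,
               "unstructured_interview": 70, "handwriting_analysis": 30}

for name, scores in (("Candidate A", candidate_a), ("Candidate B", candidate_b)):
    print(f"{name}: predicted job performance = {predicted_performance(scores):.1f}")

best = max((candidate_a, "Candidate A"), (candidate_b, "Candidate B"),
           key=lambda pair: predicted_performance(pair[0]))
print("Decision aid recommends hiring:", best[1])
```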
Novel insights into the dynamics of intractable human epilepsy

Probability density functions and the probability of seizure (Sz) occurrence conditional upon the time elapsed from the previous Sz were estimated using the energies and intervals of Szs in prolonged recordings from subjects with localization-related pharmaco-resistant epilepsy undergoing surgical evaluation. Clinical and subclinical seizure energy (E) and inter-seizure interval (ISI) distributions are governed by power laws in subjects on reduced doses of anti-seizure drugs. There is an increased probability of Sz occurrence 30 minutes before and after a seizure, and the time to the next seizure increases with the duration of the seizure-free interval since the last one. Also, over short time scales, "seizures may beget seizures." The cumulative empirical evidence is compatible with the view that, at least over short time scales, seizures have the inherent capacity to trigger other seizures. This may explain the tendency of seizures to cluster and evolve into status epilepticus. The power-law distributions of E and ISI indicate that these features lack a typical size/duration and may not be accurate or sufficient criteria for classifying paroxysmal activity as ictal or interictal. This dependency and the existence of power-law distributions raise the possibility that Sz occurrence and intensity may be predictable, without specifying the likelihood of success.

Introduction

The temporal behavior and other dynamical aspects of human epilepsy, such as seizure duration and intensity (severity), constitute an underdeveloped area in epileptology. This is largely due to a lack of complete and accurate data, as seizure diaries, the current "gold standard", satisfy neither of these conditions (refs).
In the absence of dynamical knowledge of epilepsy at a large/macroscopic scale, characterization of its pathophysiology will remain incomplete. The seminal concept put forth by Hughlings Jackson in the 19th century, that seizures are the result of an imbalance between excitation and inhibition, laid the foundations for the study of the cellular mechanisms of ictogenesis, a line of study likely to continue at ever smaller temporo-spatial scales for the foreseeable future. This approach has yielded valuable insights, but cannot, in isolation, provide the knowledge necessary to further advance epileptology, since it focuses only on the cardinal manifestation (seizures) while ignoring the disorder (epilepsy). Fundamental questions such as "What are the probability distributions of the times of seizure occurrences and of their energies?", "Are seizures independent of each other?", or "Do seizures have the inherent capacity to trigger seizures?" remain unanswered, more than 100 years after Gowers made the clinical observation that "seizures beget seizures". The dearth of knowledge of epilepsy dynamics may account in part for the fact that, despite attempts spanning nearly two decades, worthwhile prediction of seizures remains elusive (1-7). Review of the various approaches to prediction reveals that all share a reductionist approach: all attempts at forecasting the time of seizure occurrences have been based solely on contemporaneous changes in electrical signals recorded at or near the site of presumed ictogenesis, thus ignoring potentially relevant temporal (the "history") and spatial (global/systems) information. This work endeavors to gain insight into the dynamics (8) of localization-related pharmaco-resistant epilepsies by adopting a "systems"
approach to its study and by using simple but powerful mathematical tools that have proven useful in fields that investigate the behavior of complex systems.

Methods

Three tools were used to investigate the dynamics of human intractable epilepsy:
1. Probability density or distribution functions (PDFs) of seizure energy (E) and of inter-seizure intervals (ISI);
2. Superposition ("stacking") analysis; and
3. Empirical estimation of the probability of seizure occurrence conditioned upon the time elapsed from the previous seizure.

A probability density function (pdf) is a function from which the probability that a random variable takes values in a given interval can be obtained by integrating the pdf over that interval. Loosely, a probability density function can be seen as a "smoothed out" version of a histogram. The shape of a PDF provides constraints and guidelines for identifying the underlying mechanisms of the behavior of complex systems such as the brain. Conditional probability is the probability of some event A given the occurrence of some other event B; it is written P(A|B) and read "the probability of A, given B". Superposition analysis "stacks" a variable in an orderly fashion using a "marker" to ensure alignment. In this analysis, seizures are the variables, and their onset and termination times are the "markers" that allow their precise alignment. With approval from the Human Subjects Committee and with signed consent from each subject, quantitative analyses were performed on 16,032 automated seizure detections (Sz) in prolonged (several days' duration) intracranial recordings from 60 human subjects
with mesial temporal and frontal lobe pharmaco-resistant epilepsies on reduced doses of medications, undergoing evaluation for epilepsy surgery at the University of Kansas Medical Center (1996-2000). The vast majority of these seizures lacked behavioral manifestations and were classified as subclinical. Using a validated detection algorithm (9,10), Sz onset and end times, duration, intensity, and site of origin were obtained. Szs were defined as periods in which the dimensionless ratio of brain seizure to non-seizure electrical activity in a particular weighted frequency band reached a threshold value, T, of at least 22 and remained at or above it for at least 0.84 s (duration constraint, D), as previously described elsewhere. Two key variables were derived from these data: (1) "energy" (E), defined as the product of each Sz's peak intensity ratio and its duration (in seconds), and (2) the inter-Sz event interval (ISI), defined as the time (in seconds) elapsed between the onsets of consecutive Szs. The reason for considering E and ISI is that, to optimize usefulness, Sz forecasts should include not only the time of occurrence but also the intensity, so that interventions, especially warning, may depend on whether the upcoming seizure is likely to be clinical or subclinical; warning for subclinical seizures may be optional so as to minimize anticipatory anxiety. To characterize the statistical distributions of energy (E) and inter-seizure interval (ISI) of clinical and subclinical seizures, pooled values (all subjects) of E and ISI were used to construct doubly logarithmic plots. For this, the numbers of Szs of a given E and of ISIs of a given duration were used to construct histograms whose bins were geometrically spaced (powers of 2) and made to span the entire range of the data. The number of seizures in each bin was then normalized by the bin's width and plotted on a log-log scale.
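A sketch of this binning procedure is given below. It is illustrative only: the ISI values are synthetic (drawn from a heavy-tailed distribution rather than taken from the recordings), the density is normalized by both bin width and total count, and the slope is a crude least-squares fit on the log-log values.

```python
# A minimal sketch (hypothetical data) of the geometric-bin density estimate described above:
# bins spaced as powers of 2 spanning the data range, counts normalized by bin width,
# plotted on doubly logarithmic axes. The same routine applies to seizure energies E.
import numpy as np
import matplotlib.pyplot as plt

def geometric_bin_pdf(values):
    values = np.asarray(values, dtype=float)
    lo, hi = values.min(), values.max()
    # powers-of-2 bin edges spanning the full range of the data
    edges = 2.0 ** np.arange(np.floor(np.log2(lo)), np.ceil(np.log2(hi)) + 1)
    counts, edges = np.histogram(values, bins=edges)
    widths = np.diff(edges)
    density = counts / (widths * counts.sum())     # normalize by bin width (and total count)
    centers = np.sqrt(edges[:-1] * edges[1:])      # geometric bin centers
    keep = density > 0
    return centers[keep], density[keep]

# Hypothetical ISI sample (seconds) drawn from a heavy-tailed distribution for illustration only
rng = np.random.default_rng(0)
isi_seconds = (rng.pareto(0.5, size=5000) + 1) * 10

x, pdf = geometric_bin_pdf(isi_seconds)
slope, intercept = np.polyfit(np.log10(x), np.log10(pdf), 1)   # crude power-law exponent estimate
plt.loglog(x, pdf, "o")
plt.xlabel("ISI (s)")
plt.ylabel("probability density")
plt.title(f"approximate power-law slope = {slope:.2f}")
plt.show()
```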
Additionally, the temporal evolution of the probability of being in seizure, as a function of time before the onset and following the termination of a given seizure, was investigated as follows:
1. The state of being in seizure was assigned a value of 1 and the interictal (non-seizure) state a value of 0;
2. Using superimposed epoch analysis, the seizure onsets were "time-locked" to all other onsets and the ends of seizures to all other ends;
3. The state values, overlaid in this manner, were then averaged to compute the empirical probability, P(t), of being in seizure at a relative time, t, with reference to the onset and termination of another seizure;
4. The resulting probability curves for each subject were then normalized by the subject's total fraction of time spent in seizure and averaged across all subjects.

Seizure Energy Distribution

The probability of a Sz having energy, E, larger than x is proportional to x^(-β), where β ≈ 2/3 (Fig. 1). This pdf differs from a Gaussian or normal pdf in its skewness (to the right), which appears as a "heavy" or "fat" tail reflecting the presence of very large ("extreme") events that occur with non-negligible probability. These "extreme" events lie many more
"standard deviations" away from the "mean" than predicted by a Gaussian pdf. These properties are also reflected in the fact that unbounded power law distributions with β≈2/3 have infinite mean and variance. Temporal Distribution of Seizures In humans with pharmacoresistant epilepsies undergoing invasive monitoring, there is increased probability of Sz occurrence in the window beginning 30 minutes before a Sz and ending 30 minutes afterward (Fig. 2). That is, seizures in these subjects and under the conditions they were studied had a tendency to form clusters. Distribution of inter-seizure intervals The pdf estimates for inter-seizure intervals (ISI), defined as the time elapsed from the onset of one seizure to the onset of the next, were also calculated using histogram-based estimation methods. The pdf of ISI (Fig, 3) approximately follows a power-law distribution with β≈0.5. This distribution encompasses very short and long interseizure intervals, consistent with seizures clustering and prolonged seizure-free intervals in this population . Paradox of conditional expected waiting time to next seizure The prediction, derived from the heavy tail structure of the waiting time distributions between successive events (Fig. 3), which paradoxically implies that for such heavytailed distributions, "the longer it has been since the last event, the longer the expected time till the next', (11) was tested. The results confirmed that for seizures as for earthquakes (11), the dependence of the average conditional additional waiting time until the next event, denoted <τ|t>, is directly proportional to the time t already elapsed since the last event (Fig. 4). For Szs, for short times t
Discussion

The temporal dynamics of seizures originating from discrete brain regions in subjects with pharmaco-resistant epilepsies on reduced doses of anti-seizure medications may be partly described by laws, more specifically by power laws, that bear striking similarity to those governing seismic activity (12). The existence of a power law for E (the product of seizure intensity and duration) indicates that, if E is the energy released during a seizure, the probability of occurrence of a seizure of size x or larger is proportional to x^(-β), and is much larger for mild than for severe seizures; the same pdf and interpretation apply to ISIs. Power laws, which are ubiquitous in nature, are endowed with the property of "self-similarity" or "scale invariance". This means that the shape of the distributions of physical quantities such as E and ISI does not change with changes in the scale of observation. Consequently, there is no typical Sz energy or ISI, but a "continuum" of E (sizes) and of waiting times between events (ISI). The clinical implication of this scale invariance is that intensity and duration may not be fundamental or defining seizure properties. That is, and contrary to the universally sanctioned practice
in basic and clinical epileptology, intensity and/or duration may not be accurate criteria by which to classify certain neuronal activity as either seizure or interictal (non-seizure). At a more abstract level, scale invariance in seizures may be conceptualized as the hallmark of certain complex systems (the brain in this case) in which, at or near a "critical" point/threshold (for ictogenesis), the component elements are correlated over all existing spatial scales (neuron, minicolumn, column, macrocolumn, etc.) and temporal scales (microsecond, millisecond, second, etc.). Scale invariance is, for example, also a characteristic of cancer, in which coupled mechanisms interact across multiple spatial and temporal scales: from the gene to the cell to the whole organism, and from nanoseconds to years (13,14). Seizures' proclivity to entrain or "kindle" other brain regions may partly reflect the existence of such multi-scale spatio-temporal correlations. The value, β ≈ 2/3, of the exponent of the power law of seizure energy, E, is indicative of a heavy-tailed/extreme-event distribution, as opposed to a normal (Gaussian) distribution, and has interesting statistical and clinical implications: the mean and variance of the distribution of E are not definable, as their values are infinite for unbounded distributions in the ideal mathematical limit of infinite systems. In practice, for finite systems such as the brain, this means that empirical determinations of the mean and variance of the distribution of E do not converge; they remain random variables sensitive to the specific realization of the data and, in particular, to the largest measured value. This is in stark contrast to the
good convergence properties of the mean and variance of normally distributed random variables. This characteristic exponent, β ≈ 2/3, and more precisely the heavy tail, explains at a mathematical-conceptual level the brain's tendency and capacity to support status epilepticus (SE), a form of extreme seizure. The other "path" to SE is through very short ISIs, which abound in the corresponding power-law distribution (Fig. 3). Simply put, SE occurs when the brain's ictal activity "visits" the far-right region of the E distribution or the far-left region of the ISI power-law distribution. The structural and functional substrate to support the scale-free behavior observed in these seizure time series is in place:
1. The brain is an assembly of coupled, mainly nonlinear oscillators (neurons) with labile and unstable dynamics (15), and the length, density/clustering, and patterns of neuronal interconnectivity have fractal or self-similar properties that are repeated across a vast hierarchy of spatial scales (16-19);
2. Human magnetoencephalographic data obtained during rest and tasks revealed that the large-scale functional neuronal networks that generate delta, theta, alpha, beta, and gamma frequency rhythms have attributes that are preserved across these frequency bands and that flexibly adapt to task demands. The most remarkable characteristic of these networks is the relative invariance of their topology across all physiologically relevant frequency bands, forming a self-similar or fractal architecture (20). Dynamical analysis showed that these networks were located close to the threshold of the order/disorder transition in all frequency bands and that behavioral state did not strongly influence global topology or synchronizability. Similarly, human EEGs showed
scale-free dynamics and self-similar properties during eyes-closed and eyes-open, no-task conditions, but the scaling exponent differed significantly across frequency bands and conditions (21). Those studies and this one suggest that scale-free behavior in the human brain may be insensitive to state (physiological vs. pathological, such as seizures), but that its scaling factor may be sensitive to the prevailing conditions. For example, the slope of the pdf of the size of spontaneous neuronal "avalanches" recorded in vitro changes from -3/2 to -2 in response to excitation with picrotoxin, a GABA_A-receptor antagonist (22), but conserves its power-law behavior. While power laws are often generated in systems with Euclidean geometry, the fractal geometry (16-19) of the brain likely illustrates a remarkable coupling between structure and processes, suggesting neural self-organization that generates both power-law statistics and dynamics and the brain's fractal geometry. It is hypothesized that this fractal geometry and the emergent dynamics are intrinsically coupled, as for seismogenic faults and earthquakes, implying that a successful prediction scheme for Szs requires understanding (as for earthquakes) of the interplay between the dynamics at the time scale of sequences of Szs and the structural elements of the brain. The increased probability of pharmaco-resistant seizures occurring in clusters (Fig. 2), and the decreased probability of seizure occurrence with increasing time from the last one (Fig. 4), may be interpreted as: a) reflective of the inherent capacity of seizures to trigger seizures, thus supporting, at least over short time scales (minutes), the concept put forth in
the 19th century by Gowers (23) that "seizures beget seizures", and advanced by Morrell (24) in the second half of the last century; b) indicative of some form of seizure interdependency or plasticity ("memory") in the system, as recently shown (25); and c) a harbinger of predictability, alluding to the possibility that seizures may be predictable, without specifying the probability or ease of success. That seizures may be predictable is in itself a valuable finding for which no factual support had previously been sought, as those working in this field presumed predictability a priori. While at this juncture seizure "predictability" cannot be generalized to out-of-hospital conditions or to fully/properly medicated subjects, these findings justify and foster not only renewed efforts in the field of prediction, but also approaches different from those (6) applied to date. In particular, and at a minimum, the monitoring of observables should be expanded from the local (epileptogenic zone) to the global/systems scale and should encompass both clinical and subclinical seizures, including their severity and the system's history. Prolonged ECoG recordings from humans with pharmaco-resistant epilepsy contain frequent, low-intensity, short-duration seizures that go unperceived by patients and observers ("subclinical") and have been consistently ignored for seizure forecasting purposes (25); these "subclinical" seizures should be included along with clinical seizures in prediction models (31). Taken in their totality, these findings and the proposed systems ("non-reductionist") approach seem not only fruitful, as evidenced by the uncovering of "laws" governing the temporal behavior of seizures and their energy distribution, but may also serve as the
bases for expanding the inquiry into the dynamics of pharmaco-resistant epilepsy. This research direction may provide much-needed impetus for the development of new, or the refinement of existing, theories and tools for the eventual control or prevention of epilepsy and the mitigation of its negative psycho-social impact.

Figure caption: Empirical probability (0-1; y-axis) of being in seizure as a function of time elapsed (x-axis) before onset and after termination of a seizure. The curve to the left of the vertical dashed line (at time zero) depicts the probability before onset, and the one to the right of this line the probability after seizure termination. The empirical probability of being in seizure increases approximately 1,200 s before onset and returns to baseline 1,200 s after termination of a given seizure. This behavior is indicative of a strong clustering tendency.
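For illustration, the sketch below reproduces the superposition ("stacking") computation behind a curve of this kind from a hypothetical binary seizure-state series: segments of the state series are time-locked to each onset, averaged, and normalized by the overall fraction of time spent in seizure. It is a toy reconstruction of the procedure described in the Methods, not the authors' code.

```python
# A minimal sketch (hypothetical inputs) of the superposition ("stacking") analysis:
# a 0/1 seizure-state series is time-locked to every seizure onset, averaged across onsets,
# and normalized by the overall fraction of time spent in seizure.
import numpy as np

def stacked_probability(state, onsets, window):
    """state: 0/1 array (1 = in seizure); onsets: sample indices of seizure onsets;
    window: half-width of the analysis window in samples."""
    segments = []
    for k in onsets:
        if k - window >= 0 and k + window < len(state):
            segments.append(state[k - window:k + window + 1])
    p_t = np.mean(segments, axis=0)        # empirical P(in seizure) vs. time relative to onset
    baseline = state.mean()                # overall fraction of time spent in seizure
    return p_t / baseline                  # normalized curve (1 = chance level)

# Toy example: a 0/1 state series with a few synthetic seizures
rng = np.random.default_rng(2)
state = np.zeros(100_000, dtype=float)
onsets = np.sort(rng.choice(np.arange(5_000, 95_000), size=40, replace=False))
for k in onsets:
    state[k:k + rng.integers(20, 120)] = 1.0   # each synthetic seizure lasts 20-120 samples

curve = stacked_probability(state, onsets, window=2_000)
print(curve.shape, curve.max())                # the curve peaks near the locked onset
```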
Portable environment-signal detection biosensors with cell-free synthetic biosystems

By embedding regulated genetic circuits and cell-free systems onto paper, the portable in vitro biosensing platform showed the possibility of detecting environmental pollutants, namely arsenic ions and the bacterial quorum-sensing signals AHLs (N-acyl homoserine lactones). This platform has great potential for practical environmental management and diagnosis.

The extract was dialyzed in molecular porous membrane tubing (6-8 kDa MWCO) for 3 h at 4 ℃ with magnetic stirring. The dialysate was then centrifuged at 12,000 × g for 10 min at 4 ℃, flash frozen, and stored at -80 ℃ (8).

Solution-phase cell-free reaction

For the constitutive PT7 cell-free reactions, the purified genetic circuits were added into the 20 μL total reaction as previously described and incubated at 37 ℃ overnight. The substrate (final concentrations: 2 mg/mL X-Gluc, 0.6 mg/mL chlorophenol red-β-D-galactopyranoside, and 2 mg/mL pyrocatechol) was supplied to the 10-fold diluted reactions for the colorimetric analysis. The absorbance of the reactions was measured on a standard ultraviolet spectrophotometer at 660 nm, 470 nm, and 390 nm, respectively, after dilution to 2 mL with ddH₂O. For inducible cell-free reactions, the arsR- and luxR-based synthetic genetic circuits were supplied to the assembled reactions along with the inducer. For characterization assays, 0.2 μL of arsenic ion solutions in water or AHL stock solutions in DMSO was added as the inducer (within the 20 μL total reaction volume) to give final concentrations of 0.10, 0.20, 0.25, 0.50, and 1.0 μM. The colorimetric reactions were performed in the 10-fold
diluted cell-free systems after several hours of incubation by adding the substrates mentioned above, and the absorbance was measured.

Preparation of reactions and incubation

The 10 μL assembled cell-free reactions (without plasmids) were applied to 4-mm paper discs, which were then frozen at -80 ℃ and freeze-dried within 2 hours. Moreover, 2 mg/mL pyrocatechol, the substrate of XylE, could be supplied to the cell-free reaction before the freeze-drying process. Paper discs were cut using a 4-mm puncher. The freeze-dried paper discs were stored at room temperature for days (Fig. S10). The paper reactions were rehydrated with plasmid solution together with inducer at the concentrations specified. Rehydrated reactions were incubated at 37 ℃ in an incubator. After several hours of incubation, for LacZ-based colorimetric reactions, chlorophenol red-β-D-galactopyranoside was supplied to the reaction at a final concentration of 0.6 mg/mL. The colorimetric signals of the papers were collected with an iPhone camera and analyzed with ImageJ software.

Fabrication of matrix materials

Although the initial paper-based reactions were successful, nonspecific interactions between the cell-free components and the paper, or activity loss during the freeze-drying process, could still impede the activity of the reactions. Several commonly used protectants were therefore tested on the papers or the cell-free reactions. For treating paper, the papers were wetted with 5% (w/w) protectant solutions, cut into paper discs for loading cell-free components, and freeze-dried. For treating the cell-free components, the protectants were supplied to the assembled cell-free reactions at final concentrations of 5% (w/w) and similarly freeze-dried for
the later rehydrated reactions (Fig. S1).

Measurement and analysis

After the conditional expression, images of the paper discs were collected with an iPhone camera and analyzed with ImageJ software.

Fig. S1. Cell-free reactions. (A) Cell-free reaction components (including cell extract, NTPs, Mg²⁺, 19 amino acids, PEP, and others) and protectants were assembled on ice and put onto papers. (B) Cell-free reaction components were assembled on ice and put onto protectant-treated papers. The paper could then be rehydrated with genetic circuits and sample solutions.
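As a small worked example of the inducer arithmetic implied by these methods (0.2 μL of stock added to a 20 μL reaction, i.e., a 1:100 dilution), the sketch below computes the stock concentrations needed to reach the stated final concentrations.

```python
# Dilution arithmetic for the inducer additions described above: c_stock = c_final * V_total / V_added.
# Volumes are taken from the Methods (0.2 uL of stock into a 20 uL reaction); the loop simply
# prints the stock concentration required for each stated final concentration.
final_uM = [0.10, 0.20, 0.25, 0.50, 1.0]   # final inducer concentrations in the reaction (uM)
v_added_uL = 0.2                           # volume of inducer stock added (uL)
v_total_uL = 20.0                          # total cell-free reaction volume (uL)

dilution_factor = v_total_uL / v_added_uL  # 100-fold
for c_final in final_uM:
    c_stock = c_final * dilution_factor
    print(f"final {c_final:5.2f} uM  ->  stock {c_stock:6.1f} uM (1:{dilution_factor:.0f} dilution)")
```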
Driver Facial Expression Analysis Using LFA-CRNN-Based Feature Extraction for Health-Risk Decisions

As people communicate with each other, they use gestures and facial expressions to convey and understand emotional states. Non-verbal means of communication are essential to understanding a person's emotional state from external cues. Recently, active studies have been conducted on lifecare services that analyze users' facial expressions. Yet, rather than being a service available in everyday life, such services are currently provided only in health care centers or certain medical institutions. Studies are needed to prevent accidents that occur suddenly in everyday life and to cope with emergencies. Thus, we propose facial expression analysis using line-segment feature analysis-convolutional recurrent neural network (LFA-CRNN) feature extraction for health-risk assessments of drivers. The purpose of such an analysis is to manage and monitor patients with chronic diseases, whose numbers are rapidly increasing. To prevent automobile accidents and to respond to emergency situations caused by acute diseases, we propose a service that monitors a driver's facial expressions to assess health risks and alerts the driver to risk-related matters while driving. To identify health risks, deep learning technology is used to recognize expressions of pain and to determine whether a person is in pain while driving. Since the amount of input-image data is large, analyzing facial expressions accurately is difficult for a process with limited resources while providing the service in real time. Accordingly, a line-segment feature analysis algorithm is proposed to reduce the amount of data, and the LFA-CRNN
model was designed for this purpose. Through this model, the severity of a driver's pain is classified into one of nine types. The LFA-CRNN model consists of one convolution layer whose output is reshaped and passed to two bidirectional gated recurrent unit layers; the biometric data are then classified through a softmax layer. In addition, to evaluate the performance of LFA-CRNN, it was compared with the CRNN and AlexNet models on the University of Northern British Columbia and McMaster University (UNBC-McMaster) database.

Introduction

In our lives, emotion is an essential means of conveying information among people. Emotional expressions can be classified in one of two ways: verbal (the spoken and written word) and non-verbal. If real-time video data are processed by a deep learning-based face recognition technique, many classes must be learned. Such models therefore tend to have a structure in which the fully connected layer becomes larger, which in turn decreases the batch size and hinders convergence during training of the neural network. Accordingly, in this paper, to resolve such problems, facial expression analysis of drivers using line-segment feature analysis-convolutional recurrent neural network (LFA-CRNN) feature extraction for health-risk assessment is proposed. A service is proposed that uses facial expression information to analyze drivers' health risks and alert them to risk-related matters. Drivers' real-time streaming images, along with deep learning-based pain expression recognition, are used to determine whether or not drivers are suffering from pain. When analyzing real-time streaming images, it may be difficult to extract accurate facial expression features if the
image is unstable, and it may be difficult or impossible to run the analysis in real time due to limited resources. Accordingly, a line-segment feature analysis (LFA) algorithm is proposed that reduces learning and assessment time by reducing data dimensionality (the number of pixels), thereby increasing the processing speed for large-capacity, high-resolution original data. Drivers' facial expressions are recognized through the CRNN model, which is designed to learn the dimensionality-reduced LFA data. The driver's condition is assessed using the University of Northern British Columbia and McMaster University (UNBC-McMaster) database in order to identify abnormal states. A service is proposed that classifies the driver's condition as suffering or non-suffering and issues notifications, so that dangerous health-related conditions arising while driving can be communicated and addressed. This study is organized as follows. Section 2 presents trends in face analysis research and also describes current risk-prediction systems and services using deep learning. Section 3 describes how the dimensionality-reducing LFA technique proposed in this paper is applied to the data generation process, and also presents the CRNN model designed for learning the LFA data. Section 4 describes how the UNBC-McMaster database was used to conduct a performance test.

Face Analysis Research Trends

In early facial expression analysis, various studies were conducted based on the local binary pattern (LBP). LBP is widely used in the field of image recognition thanks to its discriminative ability, its robustness to changes in lighting, and its ease of
calculation. As LBP became widely used in face recognition, center-symmetric LBP (CS-LBP) [24] was adopted as a modified form that can capture components in the diagonal direction while reducing the dimension of the feature vectors. Some studies also enhanced the accuracy of facial expression detection by using multi-scale LBP, which varies the radius and the angle [25,26]. However, the LBP technique is typically combined with other feature-extraction techniques in order to increase accuracy, and in that case it is difficult to choose the appropriate feature vectors for a given application. Transformations of various forms are possible, but the optimal feature vector must be determined from experience and through extensive experimentation. If the LFA proposed in this study is used, only the minimum necessary data are used when the face is analyzed, so data compression takes place inherently. Also, since it can be performed with standard techniques for detecting the face and its outline, it can easily be applied in various fields. Studies of face analysis based on point-based features utilizing landmarks are also in progress. Landmark-based face extraction measures and reconstructs landmarks very quickly, so it can immediately reflect changes in face shape and facial expressions filmed in real time, and the set of measured landmarks can be reduced to suit applications such as character and avatar animation. Jabon et al. (2010) [27] proposed a prediction model that could prevent traffic accidents by recognizing drivers' facial expressions and gestures. This prediction model generates 22 x and y
coordinates on the face (eyes, nose, mouth, etc.) in order to extract facial characteristics and head movements, and it detects movement automatically. It synchronizes the extracted data with simulator data, uses them as input to the classifier, and calculates an accident prediction. Also, Agbolade et al. (2019) [28] and Park (2017) [29] conducted studies to detect the face region based on multiple points, utilizing landmarks to increase the accuracy of face extraction. However, to prevent landmark prediction from falling into a local minimum, the initial prediction must be corrected through multiple networks arranged in cascade. The difficulty of detection also depends on how the facial feature points are defined: the more finely the detected outline is subdivided, the more difficult detection becomes. Moreover, if part of the face is covered, it becomes very hard to measure landmarks. If the LFA proposed in this study is used, the impact of lighting can largely be avoided, since only information about line segments is used, and there is no increase in the difficulty of detection. Since deep learning methods show high performance, studies based on CNNs and deep neural networks (DNNs) are actively conducted. Wang et al. (2019) [30] proposed a method for recognizing facial expressions by combining extracted characteristics with the C4.5 classifier. Since some problems remained (e.g., overfitting of a single classifier and weak generalization ability), ensemble learning was applied to the decision-tree
algorithm to increase classification accuracy. Jeong et al. (2018) [31] detected face landmarks through a facial expression recognition (FER) technique proposed for face analysis and extracted geometric feature vectors that consider the spatial positions between landmarks. By feeding these feature vectors into a proposed hierarchical weighted random forest classifier to classify facial expressions, they increased the accuracy of facial recognition. Ra et al. (2018) [32] proposed a block-based deep learning structure to enhance the face recognition rate. Unlike existing methods, the feature filter coefficients and the weights of the neural network (in the softmax layer and the convolution layer) are learned using a backpropagation algorithm; recognition is then performed with the model trained on the selected block regions, and the face recognition result is drawn from an efficient block with a high feature value. However, since face recognition techniques based on CNNs and DNNs generally must learn a large number of classes, the fully connected layer tends to grow large. This, in turn, reduces the batch size and hinders convergence during training of the neural network. If the LFA proposed in this study is used, the input dimension is small, so the convergence problems caused by the reduced batch size in CNNs and DNNs can be minimized.

Facial Expression Analysis and Emotion-Based Services

FaceReader automatically analyzes 500 facial features from images, videos, and streaming videos that include facial expressions,
and it analyzes seven basic emotions: neutrality, happiness, sadness, anger, amazement, fear, and disgust. It also analyzes the degree of emotion, such as arousal (active vs. passive) and valence (positive vs. negative), online and offline. Research on emotions through facial expression analysis has been conducted for more than 10 years in various fields, including consumer behavior, educational methodology, psychology, consulting and counseling, and medicine, and FaceReader is used in more than 700 colleges, research institutes, and companies around the world [33]. The facial expression-based and bio-signal-based lifecare service provided by Neighbor System Co. Ltd. in Korea is an accident-prevention system dedicated to protecting the elderly who live alone and have no close friends or family members. The services provided by this system include user location information, health information confirmation, and integrated situation monitoring [34]. Figure 1 shows the facial expression-based and bio-signal-based lifecare service, which consists of four main functions covering safety, health, the home, and emergencies. The safety function provides help/rescue services by tracing and managing users' location information, tracing their travel routes, and detecting deviations from them. The health function measures and records body temperature, heart rate, and physical activity level, and monitors health status; in addition, it uses facial expression analysis to determine whether an unexpected situation is actually an emergency and provides services applicable to the situation. The home function provides a service dedicated to detecting long-term non-movement and preventing intrusions by using closed-circuit television (CCTV) installed within
the users' residential space. Lastly, the emergency function constructs a system with connections to various organizations that can respond to any situation promptly, as well as deliver users' health history records to the involved organizations.

Driver Health-Risk Analysis Using Facial Expression Recognition-Based LFA-CRNN

It is necessary to compensate for senior drivers' weakened physical, perceptual, and decision-making abilities. It is also necessary to prevent secondary accidents, manage their health status, and take prompt action by predicting any potential traffic-accident risk, health risk, or risky behavior that might arise while driving. In cases where a senior driver's health status worsens due to a chronic disease, accident risks can be recognized through changes in facial expression. Accordingly, we propose resolving such issues with facial expression analysis using LFA-CRNN-based feature extraction for health-risk assessment of drivers. The LFA algorithm extracts the characteristics of the driver's facial image in real time in the transportation support platform, and an improved CRNN model that can recognize the driver's state from the data calculated by this algorithm is proposed. Figure 2 shows the LFA-CRNN-based driving facial expression analysis for assessing driver health risks.
The procedures for recognizing and processing a driver's facial expressions can be divided into detection, dimensionality reduction, and learning. The detection process extracts the core areas (the eyes, nose, and mouth) needed to analyze the driver's suffering condition; it includes a pre-conditioning step that resolves cases in which these core areas are not accurately recognized. To extract features from the main areas of the frame-type facial images segmented from real-time streaming video, the input images are divided into blocks based on multiple AdaBoost. In the dimensionality reduction process, the LFA algorithm reduces learning and inference time by lowering the data dimensionality (the number of pixels), which increases the processing speed for the large-capacity, high-resolution original data. Lastly, in the learning process, the driver's facial expressions are recognized through the CRNN model designed to learn the LFA data. In addition, based on the UNBC-McMaster shoulder pain expression database, the proposed service determines whether the driver is in pain, identifies the driver's health-related risks, and alerts the driver to such risks through alarms.
Real-Time Stream Image Data Pre-Processing for Facial Expression Recognition-Based Health Risk Extraction

Because pre-existing deep-learning models use the whole facial image for recognition, the areas that matter most for analyzing a driver's emotions and pain status (the eyes, nose, and lips) are not accurately recognized. Accordingly, a detection module performs pre-processing ahead of dimensionality reduction and learning. To analyze the original data transferred through real-time streaming, the input images are segmented at 85 fps, and to increase the recognition rate, the facial image sections required for expression recognition are extracted using the multi-block method [35]. In particular, when a block is too large or too small during the blocking process, pre-existing models cannot accurately extract features from the main areas, which causes significant recognition and learning errors. To resolve this, multiple AdaBoost is used to set an optimized blocking, and sampling is then conducted. Figure 3 shows the process of detecting particular facial areas. A Haar-based cascade classifier is used to detect the face: Haar-like features are selected to accurately extract the user's facial features, and the AdaBoost algorithm is used for training. Because each feature both separates the face from the background and acts as a classifier, each feature is treated as a base classifier, i.e., a weak-classifier candidate. At each iteration, the training samples select the one feature demonstrating the best
classification performance, and that feature is used as the weak classifier for the iteration. The final weak classifiers are combined in a weighted linear combination to obtain the final strong classifier.
In the formula in Figure 3, E(x) is the final strong classifier,
e is the weak classifier obtained during learning, a is the weight assigned to that weak classifier, and T is the number of iterations. In this process, it is very difficult to normalize the face if it is extracted without information such as its rotation and position; the geometric information of the face must be extracted so that the face can be normalized consistently. Faces can be classified according to their rotational positions, and if the input images do not provide this information in advance, the rotation must be detected during image retrieval. The detectors learned through multiple AdaBoost are serialized using the simple pattern of the face searcher, and the serialized detectors provide information such as the position, size, and rotation of the face. The simple pattern used in multiple AdaBoost learning was the basic form; 160 was chosen as the number of simple detectors to be found by AdaBoost learning, and serialization improved the processing speed of the learned detectors. The face region calculated through the above process is then processed with the Canny technique to detect the outline of the face. This choice was based on experimental results: several outline-detection techniques were tried in the early stages, but only the Canny method produced good results.
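As a rough illustration of this detection stage, the following Python sketch uses OpenCV's stock Haar cascade and Canny operator. The cascade file, the 160 × 160 output size, and the Canny thresholds are assumptions for illustration; the paper's multiple-AdaBoost block optimization and rotation handling are not reproduced here.

```python
import cv2

# Hypothetical sketch of the detection stage: OpenCV's stock Haar cascade stands in
# for the multiple-AdaBoost detector described above, and Canny extracts the outline.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def extract_face_contour(frame_bgr, out_size=160, canny_lo=50, canny_hi=150):
    """Return a binary 160x160 contour image of the largest detected face, or None."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])   # keep the largest detection
    face = cv2.resize(gray[y:y + h, x:x + w], (out_size, out_size))
    return cv2.Canny(face, canny_lo, canny_hi)            # outline image used by LFA
```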
Pain Feature Extraction through LFA

Even after the facial feature extraction described in Section 3.1, various constraints may arise when extracting a driver's facial features from real-time driving images; in particular, motion in a real-time streaming image can make it hard to extract accurate facial characteristics. Since it is therefore necessary to reduce the dimensionality of the facial feature images extracted from real-time streams, the LFA algorithm is proposed. The LFA algorithm is a dimensionality-reduction process that shortens learning and inference time by reducing the data dimensionality (the number of pixels), thereby increasing the processing speed for the original large-capacity, high-resolution data. To extract information from an image, line information is extracted with a parameter-modified 3 × 3 Laplacian mask filter, a one-dimensional (1D) vector is created, and this vector is used as the learning model's input. In this way the algorithm creates new data from line-segment features. LFA uses the driver's facial contour lines calculated in the detection process to examine and classify line-segment types; for this examination, a filter f with the elements {1, 2, 4, 8} is used. Figure 4 shows how a driver's facial contour data are segmented and how the line-segment types are examined using f. In the first LFA process, the 160 × 160 contour-line image calculated through pre-processing (detection) is segmented into 16 parts, as shown in Figure 4a and calculated as in Algorithm 1. The segmented parts have a size of 40 × 40, and the segments are arranged in a
way that does not modify the structure of the original image. These segments are max-pooled via the calculation shown in Figure 4b, and the arrangement of the segments is adjusted. This process is defined in Equation (1) and in Algorithm 1 (the image division algorithm). Equation (1) divides the contour-line image obtained during pre-processing into 16 equal segments and max-pools the divided segments. D_w and D_h denote the number of segments along the width and height, respectively, and P_w and P_h denote the size of the segmented data obtained by dividing the contour-line image by D_w and by D_h, respectively. P is the space that stores the segmented data, and the segmentation position is maintained through P[n, m], where n and m are two-dimensional array indices taking integer values from 0 to 3. MP stores the max-pooling results of the segmented data. At every step, the order of the segmented images must not be lost, nor the order of the re-arranged segments. Figure 4b shows the convolution between the segment images and the filter: the parameters of the segmented images are converted, their sum is calculated, and one-dimensional vector data are generated. The number of segments and the image sizes used in this process were selected empirically; after experiments under various conditions, the optimal values were chosen.
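As a concrete illustration of Algorithm 1 and Equation (1), the following NumPy sketch divides a 160 × 160 contour image into a 4 × 4 grid of 40 × 40 segments and max-pools each segment while preserving the segment order. A 2 × 2 pooling window is assumed here (so the 40 × 40 segments become the 20 × 20 segments used later); the pooling window size and the row-major segment order are assumptions, since they are not stated explicitly in the text.

```python
import numpy as np

def divide_and_pool(contour_img, d_w=4, d_h=4, pool=2):
    """Sketch of Algorithm 1 / Equation (1): split the contour image into
    d_w x d_h segments and max-pool each segment, preserving segment order."""
    contour_img = (contour_img > 0).astype(np.uint8)   # binarize Canny output to 0/1
    h, w = contour_img.shape
    p_h, p_w = h // d_h, w // d_w                      # segment size, e.g. 40 x 40
    mp = []                                            # MP: max-pooled segments in order
    for n in range(d_h):                               # row-major order assumed
        for m in range(d_w):
            seg = contour_img[n * p_h:(n + 1) * p_h, m * p_w:(m + 1) * p_w]
            # Max pooling: reshape into pool x pool blocks and take the block maximum.
            pooled = seg.reshape(p_h // pool, pool, p_w // pool, pool).max(axis=(1, 3))
            mp.append(pooled)
    return mp                                          # 16 segments of size 20 x 20
```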
Line-Segment Aggregation-Based Reduced Data Generation for Pain Feature-Extracted Data Processing Load Reduction

The information from the line segments (LS) extracted from the real-time streaming images is matched with a unique number. The unique numbers are 1, 2, 4, and 8; no value overlaps another, and every aggregate of them yields a mutually distinct value. The LFA algorithm uses a 2 × 2 filter holding these unique numbers to match the line-segment data. An LS value is 0 or 1, and when the unique-number filter is matched with the LS, only the areas whose LS value is 1 contribute their unique number. In this way the segment information, which is visual data, is expressed as a series of numbers that makes it easy to count the various segment types (curves, horizontal lines, vertical lines, etc.); that is, visual data are converted into a series of numeric patterns. Figure 5 shows the process in which a segmented image is converted into 1D vector data.
A segment of the contour-line image has values of 0 or 1, as shown in Figure 5a: a pixel belongs to a line segment when its value is 1 and to the background when it is 0. The segment data are processed with the filter f in sequence. The segment data have a size of 20 × 20, and filter f is 2 × 2; the 2 × 2 window is used to calculate a convolution between the segment data
and filter f. The window moves one pixel at a time (stride = 1) to scan the entire area of the segmented image; each scanned area is calculated with filter f, so the parameters are changed and the image's 1-values are replaced by the corresponding values of f. The process in Figure 5b is calculated as shown in Algorithm 2. Equation (2) shows the calculation between the segment image and filter f, in which f_w and f_h represent the filter size. Here f has fixed parameters and a fixed size; x_i is a partial area of the segment image obtained by dividing it into pieces the same size as f. Once the convolution between each such piece and f is calculated, the results are summed and recorded in P_i, with

f = [[1, 2], [8, 4]].

Table 1 shows the type and sum of lines according to the scanned areas. When all values in a scanned area are 0, it is considered background (as shown in Table 1) and the summed value is also 0; when all values are 1, the area is considered a filled side and acquires the summed value 15. Other areas, according to the position and number of 1s, are classified as point, vertical, horizontal, or diagonal and are given a unique number. Even for identical line types, the data are assigned different numbers according to the position of the 1s, and the summed value is the
unique value. For example, vertical is one of the line types, detected in areas expressed as 0110 or 1001; the summed value is 6 or 9, respectively, so each has a different unique value. This means the same line type is treated as a different line depending on where it appears within the window. In addition, no line type's total can exceed 15. The line-type totals calculated per segment are collected and saved as a 1D vector, producing a total of 16 one-dimensional vectors. Each vector has a size of (20 − 2 + 1) × (20 − 2 + 1) = 19 × 19 = 361, and each of its elements has a value ranging from 0 to 15.
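The scan described by Equation (2) and Table 1 can be sketched as follows. The filter values and the 0 to 15 code range follow the text; the loop-based implementation and the variable names are illustrative only, and the sketch reuses the 20 × 20 pooled segments produced by the previous sketch.

```python
import numpy as np

F = np.array([[1, 2],
              [8, 4]])   # unique-number filter f from the text

def line_type_codes(segment):
    """Sketch of Equation (2) / Table 1: slide the 2x2 filter over a binary
    segment with stride 1 and record the summed unique number (0-15) for each
    window, giving one 1D vector of line-type codes per segment."""
    h, w = segment.shape
    codes = []
    for i in range(h - 1):
        for j in range(w - 1):
            window = segment[i:i + 2, j:j + 2]        # 2x2 scanned area of 0/1 values
            codes.append(int((window * F).sum()))      # sum of matched unique numbers
    return np.array(codes)                             # length (h-1)*(w-1)
```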
Unique Number-Based Data Compression and Feature Map Generation for Image Dimensionality Reduction

The 16 one-dimensional vectors calculated through the process shown in Figure 5 consist of unique values for the line types, determined by segmenting the facial image into 16 parts and matching each part with the filter. These vectors consist of parameters ranging from 0 to 15, and each parameter carries a unique feature (line-segment information). This section describes how cumulative aggregate data are generated from the parameter values of each segment. "Cumulative aggregate data" refers to data generated by using each parameter value as an index into a 1D array of size 16, whose corresponding element increases by 1 every time that index occurs. Figure 6 shows the process by which the cumulative aggregate data are generated. As shown on the right side of Figure 6a, the parameters of the data segmented in the previous process are used as array indices, and a 1D array of size 16 is generated for each segment. This array is shown in Figure 6b: the element at the index position corresponding to each parameter of the one-dimensional array (of maximum size 16) increases by 1. The process in Figure 6b is calculated as shown in Algorithm 3. Since this process is applied to each segment, one array of size 16 is generated per segment, for a total of 16 arrays. These are known as LFA data and are shown in Figure 7a.
The LFA process in Figure 7a restructures each array generated from each segment image in the appropriate order (the order prioritized by segmentation position). In this way, the LFA data calculated for one image are expressed as a two-dimensional sequence of size 16 × 16, which is used as input for the CRNN.
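A minimal sketch of Algorithm 3 and the Figure 7a restructuring, reusing line_type_codes from the previous sketch: each segment's code vector is turned into a 16-bin count array, and the 16 arrays are stacked in segment order to form the 16 × 16 LFA map. Whether the counts are further normalized is not specified in the text and is not assumed here.

```python
import numpy as np

def lfa_map(segments):
    """Count how often each line-type code (0-15) occurs in every segment's code
    vector, then stack the 16 histograms in segment order into a 16x16 LFA map."""
    rows = []
    for seg in segments:                               # 16 pooled 20x20 segments
        codes = line_type_codes(seg)                    # from the previous sketch
        hist = np.bincount(codes, minlength=16)[:16]    # cumulative aggregate array
        rows.append(hist)
    return np.stack(rows)                               # 16 x 16 input for the LFA-CRNN
```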
LFA-CRNN Model for Driver Pain Status Analysis

Once the feature map has been generated and the image has been obtained through face and contour-line detection, pre-processing restructures the given input images into two-dimensional arrays of size 16 × 16 through the LFA process; that is, the dimensionality is reduced by the LFA technique. Since LFA always has the same output size and consists of aggregate information on the line segments contained in the image, the reduced data themselves can be regarded as unique features. In addition, instead of a general CRNN architecture, a learning model dedicated to LFA data is designed for classifying drivers' pain status, and learning is performed with it. Figure 8 shows the structure of the proposed LFA-CRNN model.
The LFA-CRNN architecture is a CRNN learning model. It consists of one convolution layer and expresses the feature map as sequence data through a reshape layer. The features converted into sequence data are passed through two bidirectional gated recurrent units (BI-GRUs) to the dense layer, and a sigmoid layer serves as the final layer before the results are output. Batch normalization (BN) in the convolution layer improves learning speed and reduces both the dependence on the initial weight selection and the risk of overfitting [36][37][38]. Since this learning model uses dimensionality-reduced LFA data, the compressed data themselves can be considered a single feature. Accordingly, to express this one major feature as a number of features, the input is divided into diverse representations through a total of 64 convolution filters of size 16 × 16. The resulting values pass through BN and then a rectified linear unit (ReLU) layer to generate a series of feature maps, which are restructured by the reshape layer into 64 sequences of size 256 and used as the RNN model's input.
The RNN model consists of two BI-GRUs, one with 64 nodes and one with 32 nodes. The data produced by this process are delivered to the sigmoid layer through the dense layer, and a dropout layer is placed between the dense and sigmoid layers to reduce computation and prevent overfitting [39][40][41]. Lastly, the sigmoid layer classifies nine types of pain. In this model, the pooling layer generally used in pre-existing CNN and CRNN models is not used: the input LFA data are already very small (16 × 16) and consist of the cumulative counts of line segments in the image, so compressing them further could damage or remove the main features. Instead, BN and the dropout layer are arranged in place of the pooling layer, and the convolution's stride and padding are set to 1 and "same," respectively.
We used the convolution layer to obtain a variety of information about the representation of the individual, highly concentrated LFA data by designing the model as in Figure 9; thus, the convolution filter was set to 16 × 16 with stride = 1 and padding = "same." In this way the size of each LFA input is maintained, and the filter weights allow it to express a large amount of information. The data are used as input at each cycle of the RNN, which gradually detects stronger characteristics from the preceding ones.
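The following Keras sketch approximates the LFA-CRNN structure described above and in Figure 8. The 64 filters of size 16 × 16 with stride 1 and "same" padding, batch normalization, ReLU, the reshape into 64 sequences of size 256, the two BI-GRUs (64 and 32 units), the dense-dropout-sigmoid head, and the nine pain classes follow the text; the reshape ordering, the dense-layer width, the dropout rate, and the use of tf.keras rather than the authors' Keras 2.2.4 are assumptions.

```python
from tensorflow.keras import layers, models

def build_lfa_crnn(num_classes=9):
    """Approximate sketch of the LFA-CRNN in Figure 8; several hyperparameters
    (dense width, dropout rate, reshape ordering) are assumptions."""
    inp = layers.Input(shape=(16, 16, 1))                     # one LFA map per image
    x = layers.Conv2D(64, kernel_size=(16, 16), strides=1, padding="same")(inp)
    x = layers.BatchNormalization()(x)
    x = layers.Activation("relu")(x)                          # 16 x 16 x 64 feature maps
    x = layers.Reshape((64, 256))(x)                          # 64 sequences of size 256
    x = layers.Bidirectional(layers.GRU(64, return_sequences=True))(x)
    x = layers.Bidirectional(layers.GRU(32))(x)
    x = layers.Dense(64, activation="relu")(x)                # assumed dense width
    x = layers.Dropout(0.5)(x)                                # assumed dropout rate
    out = layers.Dense(num_classes, activation="sigmoid")(x)  # nine pain classes
    return models.Model(inp, out)
```

The LFA map from the earlier sketches is 16 × 16, so a channel axis would be added before feeding it to this model.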
Simulation and Performance Evaluation

A simulation was conducted in the following environment: Microsoft Windows 10 Pro (64-bit) on an Intel Core(TM) i7-6700 CPU (3.40 GHz) with 16 GB RAM and an emTek XENON NVIDIA GeForce GTX 1060 graphics card with 6 GB of memory. To implement the algorithm, we used OpenCV 4.2, Keras 2.2.4, and the Numerical Python (NumPy) library (version 1.17.4) on Python 3.6. OpenCV was used to perform the Canny technique during LFA pre-processing, the arrays generated in the LFA process were computed with the NumPy library, and the neural network model was implemented in Keras. Figure 9 shows the process by which the driver's pain status is analyzed and by which the system's performance was evaluated. To evaluate the performance of LFA-CRNN-based facial expression recognition (suffering vs. non-suffering expressions), the UNBC-McMaster database was used, and a comparison was made with the AlexNet and CRNN models: the experiment compared the proposed model against the basic CRNN, on which it is structurally based, and AlexNet, a model widely known for image classification. The UNBC-McMaster database classifies pain into nine stages (0-8) using the Prkachin and Solomon Pain Intensity (PSPI) scale, with data from 129 participants (63 males and 66 females). The accuracy and loss tests were based on these data, calculated through pre-processing (face detection and contour-line extraction); the LFA conversion output was used as the LFA-CRNN's input, while the CRNN [42] and AlexNet [43] used for comparison received the data calculated through the face detection process. The test was conducted by taking 20% of the data from the UNBC-McMaster database [44] as test data and 10% of the remaining 80% as verification data.
In classifying the data, to prevent the data from leaning too heavily toward a particular class, a specific percentage was designated for each class. Specifically, the 42,512 data units consisted of 29,758 training units, 3401 verification units, and 8503 test units.
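One possible way to reproduce the described split (20% test, then 10% of the remainder as verification, stratified per class) is sketched below with scikit-learn, which is not mentioned by the authors; the dummy arrays, sample count, and random seed are assumptions standing in for the real LFA maps and PSPI labels.

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Illustrative stratified split only: dummy data stand in for the real LFA maps (X)
# and PSPI labels (y); the sizes and random seed are assumptions.
X = np.random.rand(1000, 16, 16, 1)
y = np.random.randint(0, 9, size=1000)

X_rest, X_test, y_rest, y_test = train_test_split(
    X, y, test_size=0.20, stratify=y, random_state=42)          # 20% test data
X_train, X_val, y_train, y_val = train_test_split(
    X_rest, y_rest, test_size=0.10, stratify=y_rest, random_state=42)  # 10% verification
```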
Figure 10 shows the accuracy and loss results using the UNBC-McMaster database. As shown in Figure 10, the LFA-CRNN showed the highest accuracy, with AlexNet second and the CRNN third. AlexNet showed a large gap between the training and verification data. The CRNN showed a continuous increase in training accuracy but a temporary decrease in verification accuracy due to overfitting. Although the proposed LFA-CRNN showed a small gap between the training and validation data, the gap is not considered significant, and since no temporary decrease appeared in the validation data, no overfitting occurred during learning; the loss data showed the same patterns. AlexNet showed the largest gap between training and validation loss, while the CRNN showed a continuous decrease of loss in both but a temporary increase on the validation data. Therefore, the LFA-CRNN can be considered more reliable than both AlexNet and the traditional CRNN. Figure 11 shows the accuracy and loss achieved with the test data. As shown in the figure, the LFA-CRNN had the highest accuracy at approximately 98.92% and the lowest loss at approximately 0.036. The CRNN showed temporary overfitting during learning, which is considered the reason its accuracy was lower than that of the LFA-CRNN; likewise, AlexNet's accuracy decreased because of the wide gap on the verification data. The results shown in Figures 10 and 11 can be summarized as follows: in UNBC-McMaster-based learning, the LFA-CRNN model showed no rapid change in accuracy or loss, and a stable curve was maintained as the epochs progressed (i.e., no overfitting or large gap). In addition, compared with the baseline models, the proposed method showed the highest performance, with an accuracy of approximately 98.92%.
To measure the accuracy and reliability of the proposed algorithm, precision, recall, and the receiver operating characteristic (ROC) curve [45] were measured; Figure 12 shows the results. In Figure 12, the precision results show, for each pain-severity class, the percentage of samples predicted to be true that were actually true. The LFA-CRNN showed the following results: 0 = 98%, 1 = 81%, 2 = 63%, 3 = 63%, 4 = 19%, 5 = 74%, 6 = 78%, 7 = 100%, and 8 = 100%. These per-class results are relatively poor compared with those of AlexNet and the CRNN, which is attributed to the LFA dimensionality-reduction technique: because dimensionality reduction either compresses the original image into new data or reduces the data size by keeping only strong features, some specific features are removed and only the strong features are used.
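For reference, per-class precision and recall of the kind reported in Figure 12 could be computed as below. This is an evaluation sketch only: it reuses the hypothetical model and data split from the earlier sketches, training is omitted, and scikit-learn's classification_report is an assumed tool rather than the authors' procedure.

```python
from sklearn.metrics import classification_report

# Evaluation sketch: per-class precision and recall over the nine PSPI classes.
# y_test comes from the split above; y_prob stands for the trained LFA-CRNN's
# sigmoid outputs on X_test (an untrained model is used here purely for illustration).
y_prob = build_lfa_crnn().predict(X_test)
y_pred = y_prob.argmax(axis=1)
print(classification_report(y_test, y_pred, digits=2))
```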
However, only the LFA-CRNN was able to detect data with a PSPI of 8. In addition, when the average precision was examined, both the LFA-CRNN and AlexNet showed an average precision of 75%, while the CRNN showed 56%. The recall measurements were similar to the precision results: the LFA-CRNN showed an average recall of 75%, AlexNet 73%, and the CRNN 56%. Based on this test, it was confirmed that all the models found it difficult to detect data with a PSPI of 4, and that only the LFA-CRNN detected data with a PSPI of 8. To sum up all the experiments, the proposed LFA-CRNN model showed a stable learning curve, and in the performance evaluation on the test data it showed the highest performance at 98.92%, with the lowest loss at approximately 0.036. Although the LFA-CRNN's per-class precision and recall were relatively poor, its average precision was 75%, as high as that of AlexNet, and it showed the highest average recall at 75%.
The LFA-CRNN proposed in this study achieved higher accuracy while using fewer input dimensions than the comparison models. We judge that this is due to the maximal removal of unnecessary regions. We examined the metadata needed to analyze facial expression test data and judged that the color and the area (size) of the images were unnecessary elements; the remaining element was the line-segment information, and we built our hypothesis for the emotion analysis algorithm on it. When people analyze facial expressions, they do not usually consider color; emotions are understood through the shapes of the mouth, eyes, and eyebrows, so the color element was removed, and images with color removed closely resemble images expressed only by their outlines. In training neural network models, a large loss of data occurred when images were reduced via max pooling and stride, and overfitting and wind-up phenomena appeared. We therefore devised a method for reducing the size of the images, and that method is LFA. LFA preserves line-segment information as much as possible to prevent the data loss that can occur during processing, while using data from which color and unnecessary areas have been removed. In other words, when extracting emotions, the necessary elements are preserved as far as possible and all other information is minimized. We judge that this is why the LFA-CRNN shows high accuracy.

Conclusions

With this paper's proposed method, health risks due to an
abnormal health status that may occur while driving are determined through facial expressions, a representative medium for confirming a person's emotional state from external clues. The purpose of this study was to construct a system capable of preventing traffic accidents, and secondary accidents, resulting from the chronic diseases that are increasing as our society ages. Although automated driving systems are being mounted on vehicles and commercialized as vehicle technology advances, such systems do not take the driver's condition into consideration. Even if the vehicle continues to operate normally when a driver's health deteriorates while in motion, the driver may not receive help within the required "golden time" for addressing the health problem. Our system checks the driver's health status from facial expressions in order to mitigate, to a certain extent, problems related to chronic diseases. To this end, the LFA dimensionality-reduction algorithm was used to reduce the size of the input images, and the LFA-CRNN model, which receives the reduced LFA data as input, was designed and used to classify whether a driver is in pain. LFA is a method in which a series of filters assigns a unique number to the line-segment information that makes up a facial image, and the input image is then converted into a two-dimensional array of size 16 × 16 by adding up the unique numbers. As the converted data are learned through the LFA-CRNN model, facial
expressions indicating pain are classified. To evaluate performance, a comparison was made with pre-existing CRNN and AlexNet models, using the UNBC-McMaster database to learn pain-related expressions. Regarding the accuracy and loss obtained during learning, the LFA-CRNN showed the highest accuracy at 98.92%, the CRNN alone showed 98.21%, and AlexNet showed 97.4%; the LFA-CRNN also showed the lowest loss at approximately 0.036, against 0.045 for the CRNN and 0.117 for AlexNet. Although the LFA-CRNN's per-class precision and recall results were relatively poor, its average precision was 75%, as high as the 75% precision achieved by AlexNet. We optimized the facial expressions and the data sources for the LFA-CRNN, and we intend to compare the processing times of several models and to improve accuracy in future work. The proposed LFA-CRNN algorithm is highly dependent on the outline-detection method; this is self-evident, because LFA is based on line-segment analysis, and we are therefore devising an outline-detection technique optimally suited to LFA. In addition, the LFA process generates a one-dimensional sequence before producing the two-dimensional LS-Map, and we expect that converting this sequence can yield a class usable directly in the neural network model. Through this improvement process, we will combine the LFA-CRNN model with a system for recognizing facial expressions and motions that can be used in services such as smart homes and smart health care, and we plan to apply
that to mobile edge computing systems and video security. Conflicts of Interest: The authors declare no conflict of interest.