As artificial intelligence becomes more common in hiring and admissions, organizations are increasingly using automated systems to evaluate candidates. These systems are often viewed as efficient and objective, but they may also influence how people behave during evaluations. The study suggests that many individuals assume AI values logical, data-driven qualities more than emotional intelligence, and that they adjust their behavior accordingly. The researchers refer to this assumption as the analytical priority lay belief.
Initial evidence for this effect came from a survey of over fourteen hundred job seekers who completed a game-based assessment. Those who thought AI was involved reported changing their behavior more than those who believed a human was assessing them. As regulations such as the European Union’s AI Act require transparency about AI use, public awareness is rising, but that awareness can also foster inaccurate assumptions about what AI values.
The twelve studies spanned a range of approaches, from online surveys to real-life hiring contexts. Participants were told they were being evaluated by AI, by a human, or by both. Regardless of the setting, those who believed AI was the evaluator adjusted their self-presentation to appear more analytical. This shift was especially noticeable among younger participants and those with certain personality traits, and participants were least authentic when they believed only AI was evaluating them.
Interestingly, when participants were asked to reflect on the possibility that AI might also value emotional or intuitive qualities, the tendency to highlight analytical traits decreased or even reversed. In some cases where humans were involved in the final evaluation stages, the effect of perceived AI assessment still persisted, though it was less pronounced. In one study, more than a quarter of candidates would have been selected only if AI, rather than a human, had evaluated them, indicating how consequential these shifts can be in real-world decisions.
Overall, the study concludes that AI-driven assessments can significantly shape how people present themselves. This behavior appears to stem from a common but possibly incorrect belief that AI systems favor logical thinking. While the shift does not amount to a total omission of emotional traits, it does represent a change in emphasis. The researchers also found preliminary signs that other aspects of self-presentation, such as creativity, ethics, and risk-taking, may be influenced by AI assessments, though the main focus was on the analytical versus intuitive dimension.
These findings raise important concerns about the fairness and accuracy of AI-based evaluations. If people alter their behavior based on flawed beliefs, the results can be distorted, leading to poor matches in hiring or admissions. Organizations using AI in decision-making should carefully consider how transparency and communication about the AI’s role may influence candidate behavior. Informing applicants about what AI systems actually assess might help reduce unintended behavioral shifts.
Though this study focused on hiring, similar effects could emerge in other areas where AI is used in high-stakes decisions. Future research should explore these contexts as well as the long-term impact of managing impressions for AI systems. As AI technology continues to evolve, so too may people’s behavior and expectations when interacting with it.