“How are you doing today?” Seems like a simple question, I know. One of the newest AI use cases, emotional recognition technology (also called affect recognition), may make that question redundant as companies around the world look to technology rather than humans to help evaluate the emotional state of their employees, detainees, patients, and more. That said, could emotional recognition tech be dangerous to the recruitment process?
On the surface, emotional recognition tech sounds like a pretty cool development. For instance, the technology could potentially preempt a domestic assault, a suicide, a drunk-driving accident, or any number of crimes of passion. And that’s good, right? It is also being used in border security efforts across the world, as well as in recruitment. In October 2019, Unilever claimed that by using an AI system as part of its recruitment process, it had saved 100,000 hours of human recruitment time.
That’s all well and good — except for the fact that a growing body of research shows there’s very little scientific evidence to support the belief that emotional recognition tech is accurate. Even more unsettling, there’s a lot of evidence that it may actually worsen racial and gender disparities rather than reduce them.
Recently, a group of researchers and professors from the AI Now Institute at NYU, whose mission is to “produce interdisciplinary research on the social implications of artificial intelligence and act as a hub for the emerging field focused on these issues,” called for a ban on emotional recognition tech altogether. This level of concern, at a time when the industry is estimated to be worth $20 billion and growing, is interesting to me. It got me thinking: Is emotional recognition the next big thing in AI? Or is it the next big thing to fear? And is emotional recognition tech dangerous to the recruitment process? Does it potentially harm more than it helps?
Emotional Recognition Tech in Recruitment: Maybe a Bad Bet, So Far, Anyway
Some companies have developed assessments that they market as ‘designed based on Industrial and Organizational Psychology and selection science.’ They assert that their technology evaluates candidates’ facial expressions, word choices, and language in response to interview questions and then assigns each candidate a score based on performance. These vendors often position their products as technology that can help remove bias, unconscious and conscious, from the hiring process and “democratize” it and/or “level the playing field” — terms we have heard ad nauseam over the years.
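To make those mechanics a bit more concrete, here is a minimal, purely illustrative sketch of how such a scoring pipeline might be wired together. The feature names, weights, and the score_candidate function are hypothetical stand-ins of my own, not any vendor’s actual method.

```python
# Purely illustrative sketch of an automated "interview scoring" pipeline.
# The features, weights, and thresholds here are hypothetical; this does not
# represent any real vendor's model.

from dataclasses import dataclass


@dataclass
class InterviewFeatures:
    smile_intensity: float      # 0.0-1.0, output of a hypothetical facial-analysis step
    positive_word_ratio: float  # share of transcript words a sentiment model tags as positive
    speech_rate_wpm: float      # words per minute, measured from the transcript


# Hypothetical weights a vendor might tune against past "successful hire" data.
WEIGHTS = {
    "smile_intensity": 0.4,
    "positive_word_ratio": 0.4,
    "speech_rate_wpm": 0.2,
}


def score_candidate(features: InterviewFeatures) -> float:
    """Collapse multi-modal interview signals into a single 0-100 score."""
    normalized_rate = min(features.speech_rate_wpm / 160.0, 1.0)  # cap at a "typical" pace
    raw = (
        WEIGHTS["smile_intensity"] * features.smile_intensity
        + WEIGHTS["positive_word_ratio"] * features.positive_word_ratio
        + WEIGHTS["speech_rate_wpm"] * normalized_rate
    )
    return round(raw * 100, 1)


if __name__ == "__main__":
    candidate = InterviewFeatures(
        smile_intensity=0.35, positive_word_ratio=0.60, speech_rate_wpm=150
    )
    print(score_candidate(candidate))  # 56.8 for this made-up candidate
```

The takeaway from the sketch is simple: whatever bias exists in the upstream facial-analysis or language-analysis steps flows straight into that single final number, which is exactly why the scientific grounding of those steps matters so much.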
In an article published late last year by the Washington Post, artificial intelligence hiring systems were described as “a powerful gatekeeper for some of America’s most prominent employers, reshaping how companies assess their workforce — and how prospective employees prove their worth.” The headline alone was noteworthy: “A face-scanning algorithm increasingly decides whether you deserve the job.” Hmmm. Not exactly insignificant things there.
That same article reported that more than 100 employers have used emotion-recognition technology developed by HireVue, a recruiting technology firm, to evaluate more than a million interviews. The use of this kind of technology is reportedly so widespread that some universities are actually training students on how to navigate these AI-driven assessments as part of the interview process. There are many players in this space, including Jobvite, WePow, Modern Hire (previously Montage), and Avature, to name just a few.
I noted with interest that the WaPo article posits that many researchers in the AI space are vehemently against assessing people this way, and claim that the technology used by HireVue (and others in the space) is unreliable at best. Without question, this is a debate, not unlike other conversations around ethics and artificial intelligence, that I’m sure we’ll continue to see and hear more of — and it’s something my team and I will be watching with interest.
Moving beyond recruitment, it’s worth noting that emotion recognition tech (or affect recognition) is also being used by law enforcement, casinos, and prisons, as well as in healthcare to assess pain and in schools to gauge whether students are paying attention.
My primary concern is with the technology itself, which so far appears to have limited scientific basis. In a December 2019 report, NYU’s AI Now Institute insists this technology is potentially both wrong and harmful, especially to people of color. In fact, the Institute’s report recommended that “regulators should ban the use of affect recognition in important decisions that impact people’s lives and access to opportunities.”
What’s equally alarming, certainly as it relates to use in the recruitment process, is that the companies using it rarely disclose the results of their analysis to candidates — which means candidates not only never get the benefit of the doubt, they also have no opportunity to dispute the analysis.
This bothers me. At best, it’s an irresponsible recruitment strategy. At worst, it’s potentially building systematic bias into hiring processes at leading companies around the world. While I do believe there are some genuinely positive use cases for emotional recognition tech, I’m not convinced that its use in recruitment, at least at this point, is warranted. In the future? Perhaps. But I think we are going to need more solid scientific evidence to support these tools’ findings and to ensure the technology is bias-free.
Emotional Recognition Tech in Diversity and Inclusion Efforts: Another Bad Bet
Since I mentioned bias, there’s a reason the folks at AI Now are focused on bias in emotional recognition — because research shows it’s there. And when major companies are focused on improving diversity and inclusion efforts, in recruitment and otherwise, this is an important part of the conversation.
When the recruitment process involves applicants answering predefined questions on a recorded video while AI-powered facial recognition software analyzes their faces, it’s critically important to include potential bias as part of the equation.
In her December 2018 study, Lauren Rhue, Assistant Professor of Information Systems and Analytics at Wake Forest, tackled this topic. Rhue’s study examined bias in facial recognition systems that analyze people’s emotions. She used a data set of 400 NBA players from the 2016-2017 season, drawn from professionally taken team photographs, and ran the images through two different emotional recognition programs. Both gave more negative emotional scores to black players, who were generally assessed as angrier and more unhappy than their white counterparts, no matter how much they smiled. Across the board, Rhue’s study found that negative emotions were attributed to black faces more often than to white faces. Think about that for just a moment as it relates to the potential for bias in recruitment processes that hinge on artificial intelligence-driven tech solutions.
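For readers who want to picture what such an audit looks like in practice, here is a minimal sketch of the group-level comparison it performs. The scores and group labels below are made-up placeholders, not figures from the study.

```python
# Minimal sketch of the kind of group-level disparity check described above.
# The scores are made-up illustrations, not data from the study.

from statistics import mean

# Hypothetical "anger" scores (0.0-1.0) returned by an emotion-recognition API
# for comparable, professionally taken, smiling headshots.
scores_by_group = {
    "black_players": [0.31, 0.27, 0.35, 0.29, 0.33],
    "white_players": [0.12, 0.15, 0.10, 0.14, 0.11],
}

for group, scores in scores_by_group.items():
    print(f"{group}: mean anger score = {mean(scores):.2f}")

# A persistent gap between group means on comparable photos is the kind of
# disparity that signals bias in the underlying model, because the photos
# differ only in who is pictured, not in expression or setting.
gap = mean(scores_by_group["black_players"]) - mean(scores_by_group["white_players"])
print(f"Gap between group means: {gap:.2f}")
```

When comparable, professionally shot photos yield systematically different emotion scores by race, the disparity sits in the model, not in the subjects.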
That’s a problem — and it doesn’t bode well for improving cultural relations or inclusion programs in this country. In fact, it just further amplifies the reasons we need them in the first place.
Emotional recognition tech isn’t the only cause for concern; facial recognition technology has also proven problematic. Look no further than MIT researcher Joy Buolamwini’s study, which found that Amazon’s Rekognition facial recognition technology, often used by law enforcement, inaccurately identified dark-skinned faces like hers.
These findings about artificial intelligence technology in general are having an outsize influence on the debate over how it should be deployed in the real world, and that matters greatly when it comes to the use of this kind of technology in recruitment.
Emotional Recognition: Never Trust AI More Than Your Gut
So, is emotional recognition (and facial recognition) tech dangerous to the recruitment process? Don’t get me wrong, I do see some value here. But I also believe, like many of the academics and others immersed in the artificial intelligence space, that we must proceed with caution. There might be some upsides, but I also believe that when we are assessing human beings for their ability to perform a job function, it’s incredibly dangerous for us as humans to turn off our own internal judgment systems in exchange for that of a machine.
After all, we have feelings for a reason, and we all process those feelings differently. Most of us, especially recruitment experts, have a strong internal sense of whether another person is struggling, whether they’re being truthful, or whether they might actually be exhausted and need a nap. Interviewing for a job is stressful, and we all handle stress differently. Do we really need technology to tell us those things?
In my opinion, we need to be very careful on that front. I have no doubt companies will continue to use emotional recognition tech, for recruitment and other purposes. And make no mistake, I’m a huge fan of technology. But ethical use of technology should always come first. I think it’s incumbent upon the companies that adopt emotional recognition tech, as well as the makers of these solutions, to be ethical, to always consider how their systems might be inherently biased, to invite the input, analysis, and criticism of researchers and academics, to take their concerns seriously, and to work continuously to improve and fine-tune the technology. Especially now, in the early days of this kind of technology, when significant criticism of its capabilities and accuracy remains, we owe people that much. And for great leaders focused on building successful companies with great culture, this shouldn’t be a problem.
Futurum Research provides industry research and analysis. These columns are for educational purposes only and should not be considered in any way investment advice.
Image Credit: Voicebot.ai
The original version of this article was first published on Futurum Research.
Daniel Newman is the Principal Analyst of Futurum Research and the CEO of Broadsuite Media Group. Living his life at the intersection of people and technology, Daniel works with the world’s largest technology brands exploring digital transformation and how it is influencing the enterprise. From big data to IoT to cloud computing, Newman makes the connections between business, people, and tech that are required for companies to benefit most from their technology projects, which leads to his ideas regularly being cited in CIO.com, CIO Review, and hundreds of other sites across the world. A five-time best-selling author, most recently of “Building Dragons: Digital Transformation in the Experience Economy,” Daniel is also a Forbes, Entrepreneur, and Huffington Post contributor. An MBA and graduate adjunct professor, Daniel Newman is a Chicago native, and his speaking takes him around the world each year as he shares his vision of the role technology will play in our future.