By Cliff Stevenson, Principal Analyst, Talent Management and Workforce Management
Here are two statements, both from theoretical performance reviews:
- John achieved all of his tasks on or before schedule in 2017.
- John was able to complete his tasks on or before schedule in 2017.
They seem like they’re saying the same thing, but there’s a subtle difference. In the second sentence, there is an implied doubt that John had the capacity to meet his goals: he “was able” to make them, which suggests the writer half-expected he might not. It’s subtle, but it could be a key indicator of an unconscious bias about John’s ability. This is all theoretical, of course, but it illustrates something that is often neglected when we speak of AI or machine learning: not only does it not have to be dehumanizing, it can actually help us understand humans better.
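To make the distinction concrete, here is a toy sketch in Python of what flagging hedged phrasing might look like. The phrase list is purely illustrative, not a validated bias lexicon and not how any vendor's product actually works:

```python
import re

# Illustrative hedging phrases that can signal lowered expectations.
# A real system would use a much larger, validated lexicon.
HEDGES = ["was able to", "managed to", "for someone in his role"]

def flag_hedges(sentence):
    """Return any hedging phrases found in a review sentence."""
    found = []
    for phrase in HEDGES:
        if re.search(r"\b" + re.escape(phrase) + r"\b", sentence, re.IGNORECASE):
            found.append(phrase)
    return found

flag_hedges("John achieved all of his tasks on or before schedule.")
# returns []
flag_hedges("John was able to complete his tasks on or before schedule.")
# returns ['was able to']
```

Even something this crude shows how wording, not just content, can be surfaced for a human to review.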
What got me thinking about this was the 2018 Connections conference, held last week by Ultimate Software, along with Brandon Hall Group’s own 2017 HCM data and analytics study, in which the top three concerns regarding people data were data security (52%), employee privacy (43%) and the dehumanization of the workforce (23%).
Ultimate Software addressed this concern head-on, talking about real worries: the sense of a loss of humanity, fears of job replacement, and even loneliness in the workplace. But more than just talk about it, Ultimate demonstrated an AI system that uses natural-language processing and sentiment analysis to help humans improve the way we connect with each other.
This dread of dehumanization is real. For all of the stories from tech giants saying that AI will save us all, actual humans are only growing more restless. When Brandon Hall Group asked the same question in 2016, respondents listed data security (40%), employee privacy (16%) and dehumanization (13%) as their top people-data concerns, figures that show a drastic increase in worry in just one year. I’ve been as guilty as anyone of touting the benefits of automated HCM systems. But it is sometimes difficult to reconcile automation with humanity, and it’s time to think about, and understand, the various visions for the future.
Let’s start by noting the difference between job loss and dehumanization. Job loss was a concern for only 14% of respondents in 2017, yet job loss is often how companies and some media frame dehumanization. I think it’s been fairly well established that robots are not coming for our jobs, but that doesn’t mean we shouldn’t worry about the kinds of decisions software is helping us make. If an algorithm decides we can’t get a home loan, that has real consequences for families, and those sorts of programs have left many of us distrustful of AI-assisted decision making. That is why I was so pleased to see Xander, Ultimate Software’s AI program, being used and marketed to help humans understand emotions and understand people better, rather than for something as cold and mathematical as predicting flight risk.
Ultimate says its system distinguishes more than 100 emotions and more than 140 workplace contexts in which to place them, and can combine that with the actual nouns and actions used in open-text responses to help managers recognize when an employee is frustrated (and believe me, I think we’ve all been caught misunderstanding the tone of an email or a text). When I hear that, I see AI being used not just to help employees be more productive, but to help them be more human.
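The interesting part of that design is pairing an emotion lexicon with workplace context rather than scoring sentiment alone. A minimal sketch of that idea, with hypothetical stand-in lexicons (nothing here reflects Xander’s actual categories), might look like this:

```python
# Hypothetical stand-in lexicons: real systems like the one described
# would cover 100+ emotions and 140+ workplace contexts.
EMOTION_LEXICON = {
    "frustrated": "frustration",
    "stuck": "frustration",
    "excited": "enthusiasm",
    "proud": "pride",
}
CONTEXT_LEXICON = {
    "deadline": "scheduling",
    "manager": "leadership",
    "review": "performance",
}

def tag_message(text):
    """Return (emotions, contexts) detected in an open-text response.

    Naive whitespace tokenization; a production system would handle
    punctuation, phrases, and negation.
    """
    words = text.lower().split()
    emotions = {EMOTION_LEXICON[w] for w in words if w in EMOTION_LEXICON}
    contexts = {CONTEXT_LEXICON[w] for w in words if w in CONTEXT_LEXICON}
    return emotions, contexts

emotions, contexts = tag_message("I am frustrated about this deadline")
# emotions -> {'frustration'}, contexts -> {'scheduling'}
```

Crossing the two dimensions is what turns a raw sentiment score into something a manager can act on: “frustration about scheduling” is a very different conversation than “frustration about leadership.”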
Make no mistake: machines will always have some bias, because the originating data and programming come from humans. But unlike human bias, machine bias is detectable and changeable. Identifying bias, resolving culture problems, and developing leaders into coaches are all possible uses for a more human AI. Maybe at some point AI can even create jobs by marrying organizational-development expertise with the technology to help resolve culture issues that have plagued us for literally hundreds of years (cough-cough, gender pay gap).
What’s even more fascinating is that these systems are being designed from an emotional standpoint at the technology level, which is a bit odd to think about, but it really can be part of the design philosophy.
“We are focused on having built the capabilities for the user community,” said John Machado, VP of Development for Ultimate, speaking to analysts. “I always start with people first, and work toward whatever way will be the most people friendly, most user-friendly manner.”
I personally am a firm believer that technology cannot solve our social or workplace problems, but that it can speed us faster toward whatever direction we point it in, and it sounds like John agrees: “With being first comes the responsibility that comes with learning how far can you go and should go. It’s all about improving people’s understanding of people.”
That sounds like a good direction to me.
Cliff Stevenson, Principal Analyst, Talent and Workforce Management, Brandon Hall Group
For more information on our research, please visit www.brandonhall.com.
Here’s Ultimate Software’s video about its AI tool, Xander.