Commentary
Jun 28, 2023

Navigating Changing Relationships in the Workplace and in Life

The primary motivation for the Kenan Institute’s 2023 grand challenge – which focuses on a workforce disrupted – is the medley of rapid attitudinal, technological and social changes that are drastically altering our approach to work. Forces including the rise of new technology, sharp political and cultural polarization, and a global pandemic have caused deep shifts that individuals and organizations are still attempting to navigate.

Kurt Gray looks at many of these questions in his research, which covers morality, deeply held beliefs, bridging divides, and the psychology of human interactions with artificial intelligence. Dr. Gray is a professor of psychology and neuroscience at the University of North Carolina at Chapel Hill, where he directs the Deepest Beliefs Lab and the Center for the Science of Moral Understanding; he’s also an adjunct professor of organizational behavior at UNC Kenan-Flagler Business School, where he teaches organizational ethics and team processes.

We asked Dr. Gray several questions about his work to better understand these and other changes, as well as takeaways from his fields of study.

Robots have been increasingly integrated into the workforce, which may have mixed effects on human psychology. You’ve found that the rise of robot workers may increase job insecurity, burnout, and incivility in the workplace – but also that the threat of robot replacement can spur workers to learn new skills, and may even make humans less prejudiced. How can we maximize the positive effects of robots in the workplace and minimize the negatives?


Kurt Gray: Robots and artificial intelligence are “agents of replacement,” entities created by humans to replace other humans and the things that they do – whether it’s receptionists answering calls, factory workers shaping sheet metal or healthcare providers taking notes. Everyone likes it when their job gets easier, but no one likes to be replaced.

To maximize the positive effects and minimize the negatives, employers need to carefully watch how they frame technology in the workplace. If robots and AI are described as “augmenting” or “helping” people, then people will be more excited to come to work than if they are described as “replacing” or “taking work away” from people. Many bad workplace behaviors are driven by feelings of threat, and minimizing these feelings is essential.

On a deeper level, how do these relationships and interactions with robots affect things like our morality, judgment and creativity?

Gray: Robots and AI are changing everything, but they also leave our minds fundamentally unchanged. They are transforming our economy, both taking away jobs and creating new ones. They are providing new ways to have relationships with other people, and even whole new opportunities for relationships with machines themselves (think of the 2013 movie “Her,” where a man falls in love with his phone’s AI). At the same time, our psychology will remain fundamentally unchanged. We have the same minds that we did thousands of years ago, when we were living in small agrarian villages and had to work in the fields alongside other villagers. This mismatch between new technology and ancient minds can pose some challenges for how we interact with others, especially when it comes to morality.

Some work shows that emerging technology can make people less ethical, as robots and AI can put a layer of automation between our decisions and their impacts, making people feel less responsible for harms that they cause. Social media companies can feel absolved of spreading misinformation if they can just point to “the algorithm.” At the same time, if you make clear that decisions are causing direct harm to people in our communities, it is harder to ignore unethical behaviors. Morality evolved to protect our communities from harm, so if you make these communities and this harm salient, then people should care more about ethics.

The key is whether robots and AI are seen as something entirely “new,” making the ethical rules of our ancient minds seem irrelevant, or still fundamentally the same, meaning that our ancient ethical sense still applies.

You also study how we can bridge divides, which is a major concern given the high current levels of polarization – and an election next year that is likely to be quite contentious. What advice do you have for successfully communicating with someone who may have very different beliefs, opinions or experiences?

Gray: Our first impulse when we talk to someone with different beliefs is to win, to convince them that they are wrong and [we] are right. We hurl facts at them to pound their moral beliefs into submission. But very few people are ever convinced by this barrage of facts, and no one likes it when someone else views their deeply held moral convictions as wrong.

A better strategy — at least when you first meet someone — is simply to try to understand. Invite them to tell you why they believe what they believe, and when you talk about your own viewpoints, don’t summarize stats you’ve read but instead tell the story of your beliefs and why you hold them. We humans are storytelling creatures, and our work finds that sharing the narrative of why you hold your beliefs creates respect and a willingness to engage.

