How Can We Build AI Without Endangering Society?


We have long known that artificial intelligence (AI) has the potential to improve business productivity and customer experience.

But in marveling at the magic of AI in specific roles, have we neglected the impact on society?

Recently, concerns over data privacy, biased algorithms and an increasingly impersonal, dehumanized digital future have come to the fore as seemingly unstoppable technology advances. These four thought leaders and advisors offer insights that can help us both mitigate the negative consequences for society and develop responsible, ethical, more human-centered AI.

SHERRY TURKLE: HOW TO ADDRESS THE IMPACT OF THE OTHER AI – “ARTIFICIAL INTIMACY”

From self-driving cars to digital assistants like Siri and Alexa, AI is fast becoming ubiquitous. But while we are quick to embrace technologies that make our lives, jobs and relationships easier, says Turkle, MIT professor and New York Times best-selling author of “Reclaiming Conversation” (2015), we’re slow to address the ways that technology undermines human empathy.

Backed by more than 40 years of research, Turkle argues this is because AI now stands not only for artificial intelligence but also for artificial intimacy – devices claim not just to be smart, but to care for us, to be positioned as our friends and companions. This shift challenges our way of thinking about empathy, an essential element in personal, as well as business, relationships that demand trust.

In her lively keynotes and interactive workshops, Turkle reveals how companies can (and should) ensure AI is not deliberately sold as a replacement for human trust or companionship, and avoid suffering the consequences of what customers tend to see as a personal betrayal: machines that have represented themselves as friends. As Turkle puts it, “Simulated thinking may be thinking, but simulated feeling is never feeling. There is no empathy app.”

DESMOND UPTON PATTON: BIAS IN AI – THE NEXT BATTLE FOR EQUITY

AI – while “intelligent” – often lacks the other qualities that would make it human, such as empathy and fairness. In another sense, AI can be all too human, drawing on flawed data to replicate human bias. Specifically, AI algorithms commonly make biased decisions that adversely affect women and people of color, on matters ranging from credit allocation to prison sentencing. This happens because AI systems rely on datasets that unconsciously teach machines to replicate injustice, fail to grasp cultural nuance or context, and are designed primarily by white men, who often fail to anticipate or comprehend the power of bias.

Patton – a pioneer at the intersection of AI, social media, race and society, and founding director of the SAFE Lab and co-director of the Justice, Equity and Technology lab at Columbia University – draws on his own experience challenging AI bias to show how organizations can help defeat this emerging social problem. As AI becomes more central to decision-making across industries, those who care about equity, fairness and justice will have to take notice of how technology unconsciously promotes bias.
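To make the kind of disparity at stake here concrete, the sketch below is a minimal, hypothetical illustration – not Patton’s methodology, and the dataset, group labels and decisions are all invented. It computes approval rates per group for a toy credit model and reports the demographic-parity gap, one common fairness check auditors apply to such systems.

```python
# Minimal sketch: checking one common fairness metric, demographic parity,
# on a toy credit-approval dataset. All records here are invented.

decisions = [
    # (group, approved)
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def approval_rate(records, group):
    """Fraction of applicants in `group` whose applications were approved."""
    outcomes = [approved for g, approved in records if g == group]
    return sum(outcomes) / len(outcomes)

rate_a = approval_rate(decisions, "group_a")
rate_b = approval_rate(decisions, "group_b")

# Demographic parity asks these rates to be (near-)equal; a large gap is a
# red flag that the model may be replicating bias in its training data.
print(f"approval rate A: {rate_a:.2f}, B: {rate_b:.2f}, "
      f"gap: {abs(rate_a - rate_b):.2f}")
```

A real audit would go further – examining error rates per group, proxy variables and the provenance of the training data – but even this simple gap makes the problem visible.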

IYAD RAHWAN: MORALITY, ETHICS AND MACHINE BEHAVIOR 

The rapid rise of AI has generated new questions about the relationship between people and machines. It has also spurred the creation of a new field of interdisciplinary science: machine behavior. Rahwan, “anthropologist of AI” and director of the Center for Humans and Machines at the Max Planck Institute for Human Development, is at the helm of this new field.

A computer scientist by training, Rahwan leads a team of colleagues from disciplines as diverse as robotics, sociology, evolutionary biology and economics in investigating how artificial agents interact “in the wild” with humans, their environments and each other. Their work is urgent, especially as these autonomous, smart systems touch more aspects of our lives, affecting everything from credit scores to politics. Rahwan’s Moral Machine experiment examines the ethical decisions made by machine intelligence – such as that in self-driving cars (in case of collision, who or what should it try to spare?). His keynotes and workshops help organizations make sense of the changes wrought by current technology while channeling them into constructive, profitable gains that advance, rather than undermine, the common good.
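A deliberately simplified sketch makes the dilemma concrete. This is not Rahwan’s actual methodology – the scenario, weights and names below are invented – but it shows how a vehicle might rank unavoidable-harm outcomes once someone has encoded a moral preference as a number, which is precisely the judgment the Moral Machine experiment crowdsources from the public.

```python
# Toy illustration of the self-driving car dilemma: given unavoidable harm,
# which outcome should the vehicle choose? The "moral weight" below is
# invented; deciding its value is the hard, human part of the problem.

from dataclasses import dataclass

@dataclass
class Outcome:
    description: str
    passengers_harmed: int
    pedestrians_harmed: int

def harm_score(o: Outcome, pedestrian_weight: float = 1.0) -> float:
    # A purely utilitarian rule: total up the harm. Changing
    # pedestrian_weight encodes a different moral stance.
    return o.passengers_harmed + pedestrian_weight * o.pedestrians_harmed

outcomes = [
    Outcome("swerve into barrier", passengers_harmed=1, pedestrians_harmed=0),
    Outcome("stay the course", passengers_harmed=0, pedestrians_harmed=2),
]

best = min(outcomes, key=harm_score)
print(f"chosen action: {best.description}")
```

Notice that the code itself is trivial; everything contentious lives in the weight. That asymmetry is why Rahwan argues machine behavior must be studied empirically and debated publicly rather than left to engineers alone.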

ARNAV KAPUR: PUTTING HUMANS AT THE CENTER OF TECHNOLOGY WITH EXTENDED COMPUTING 

In the popular imagination, AI is often something to fear – an onslaught of machines that will replace us in the workplace and render humanity redundant. But Kapur, a pioneering inventor, researcher and rising star of the MIT Media Lab, imagines a future where people are at the center of technology that is used for good – extending our intelligence and abilities many times over, and enabling us to solve the world’s big problems. And that future isn’t far from reality. Kapur is leading progress toward melding man and machine, building technologies that augment us instead of replacing us; that disappear into the background of the human experience and raise us to new levels of curiosity and creativity.

His latest innovation, which Kapur demonstrated recently for CBS News 60 Minutes host Scott Pelley, is an AI-enabled headset that allows humans to communicate with computers or other people silently – without audible speech, mouth movements or physical actions. By connecting the wearer to the internet, it can put expert knowledge within anyone’s reach. It can also help millions of people who struggle to speak, such as those affected by ALS, oral cancer or stroke. The promise of AI, if we build it well and for good, is enormous.

Wherever you are in your organization’s AI journey, contact us to find the expert who will help you make the right moves in AI in 2020.
