Me, Myself, and AI Episode 304

Technology as a Force for Good: Salesforce’s Paula Goldman

Artificial Intelligence and Business Strategy

The Artificial Intelligence and Business Strategy initiative explores the growing use of artificial intelligence in the business landscape. The exploration looks specifically at how AI is affecting the development and execution of strategy in organizations.

In collaboration with BCG

Paula Goldman has been a passionate advocate for the responsible use of technology for her entire career. Since joining Salesforce as its first chief ethical and humane use officer, she’s helped the company design and build technology solutions for its customers, with a focus on ethics, fairness, and responsible use.

In this episode of the Me, Myself, and AI podcast, Paula joins hosts Sam Ransbotham and Shervin Khodabandeh to discuss her specific role leading the ethical development of technology solutions, as well as the role technology companies play in society at large.

Read more about our show and follow along with the series at https://sloanreview.mit.edu/aipodcast.

Subscribe to Me, Myself, and AI on Apple Podcasts, Spotify, or Google Podcasts.

Transcript

Sam Ransbotham: AI ethics are easy to espouse but hard to do. How do we move from education and theory into practice? Find out today when we talk with Paula Goldman, chief ethical and humane use officer at Salesforce.

Welcome to Me, Myself, and AI, a podcast on artificial intelligence in business. Each episode, we introduce you to someone innovating with AI. I’m Sam Ransbotham, professor of information systems at Boston College. I’m also the guest editor for the AI and Business Strategy Big Ideas program at MIT Sloan Management Review.

Shervin Khodabandeh: And I’m Shervin Khodabandeh, senior partner with BCG, and I colead BCG’s AI practice in North America. Together, MIT SMR and BCG have been researching AI for five years, interviewing hundreds of practitioners and surveying thousands of companies on what it takes to build, deploy, and scale AI capabilities across the organization and really transform the way organizations operate.

Sam Ransbotham: Today, we’re talking with Paula Goldman, chief ethical and humane use officer at Salesforce. Paula, thanks for taking the time to talk with us today. Welcome.

Paula Goldman: Thank you. I’m really excited to have this conversation.

Shervin Khodabandeh: Hi, Paula.

Paula Goldman: Hi.

Sam Ransbotham: Our podcast is Me, Myself, and AI. Let’s start with a little about your current role at Salesforce. First, what does Salesforce do, and what is your role?

Paula Goldman: Salesforce is a large enterprise technology company. If I had to really summarize what we do, we put out a lot of products that help our customers, which tend to be companies, or sometimes nonprofit organizations or whatnot, connect better with their customers and their stakeholders, whether that’s from a sales, service, or marketing [perspective], you name it. And data plays a really, really important role … giving them the tools to understand their customers and what they need and serve them better.

Within that, my role is chief ethical and humane use officer, which I know is a bit of a mouthful. It’s a first-of-its-kind position for Salesforce. I work with our technology teams and more broadly across the organization on two things. One is, as we’re building technology, thinking about the impact of that technology at scale out in the world, trying to avoid some unintended consequences, [and] trying to tweak things as we’re building them to make sure that they have maximum positive impact. Then, secondly, we work on policies about the use of our technology, making sure that we are putting [up] sufficient guardrails so that our technology is not abused as it is used out in the world.

Sam Ransbotham: Tell us about your background. How did you end up in that role? Pretend you’re a superhero: What’s your origin story here?

Paula Goldman: Well, I have the short story and the long story. The short story is, essentially, I am super passionate about how technology can improve people’s lives, and I spent a long time thinking about the “tech for good” side of that. I worked for a while under Pierre Omidyar, the founder of eBay, working to build an early-stage investing practice that was investing in startups that use technology to serve underserved populations, whether that meant getting financial services to people that otherwise would be excluded in emerging markets, alternative sources of energy, or whatnot.

Having done that for a long time, I think we started to see this shift in the role of technology in society. For a long time, I think the technology industry viewed itself as a bit of an underdog, a disrupter. Then, all of a sudden, you could sort of look and see the writing on the wall, and technology companies were not only the biggest companies in whatever financial index you want to name, but also, technology was so pervasive in every aspect of all of our lives, and even more so because of COVID. And I think we just saw the writing on the wall and saw that the sort of famous adage, “With great —” oh, I’m going to mess this up: “With great power comes great responsibility.”

Sam Ransbotham: Perfect tie-in to my superhero origin question, because that’s the Spider-Man quote.

Paula Goldman: Exactly.

Sam Ransbotham: Go ahead.

Paula Goldman: It’s time to think about guardrails, particularly for emerging technologies like AI, but across the board, how to think about “What do these technologies do at scale?” Any industry goes through a period of maturation, and that’s where I think tech is. That’s my motivation around it. As part of that role, I was leading a tech ethics practice. I was asked to be on Salesforce’s ethical use advisory board, and through that, they asked me to come lead this practice.

Shervin Khodabandeh: Paula, your background’s generally been quite focused on this topic, right? Even before Salesforce. Do you want to talk a little about that?

Paula Goldman: That’s right. That’s a bit of what I was referring to. I spent a lot of time … I mean, even just straight from college, I spent a lot of time on mission-driven startups, often very technology-driven, that were meant to, again, open up opportunity. For example, I spent a lot of time in India working with an affordable private school, thinking again about “How does technology open up opportunity for these students?” Many years later — I won’t tell you how many years later — a lot of those students are actually working in technology in the country.

Essentially, it’s been a through line in my work: the role of technology and markets as a force for good. How do we also implement appropriate guardrails and think about the power of trust and technology, which is ultimately so essential for any company that’s putting out a product in the world these days?

Shervin Khodabandeh: You’ve been in this field for some time as one of the pioneers in this area. How do you think the nature of the dialogue about guardrails and ethical use has changed from, let’s say, 10 years ago versus now?

Paula Goldman: Let’s start by giving credit where credit is due. Ten years ago, there was certainly a ton of leadership in academia thinking about these types of questions, and I think if you went to a campus like MIT, you would find a lot of professors teaching classes on this and doing research on this. It has been a long-standing field: society and technology, science and technology, call it what you will, and many other disciplines. But I don’t think it was as widespread a topic of public conversation. Today, you can hardly pick up a newspaper without seeing a headline about some sort of technology implication, whether it’s AI or a privacy story or a social media story or whatnot. And certainly, it was fairly rare 10 years ago to think about companies hiring people with titles like mine.

I think about that history; I actually think a lot about the analogy with … what the technology industry went through in the ’90s with security. There, too, it was a fairly immature field, and you might well, as an observer at that point, have looked at the viruses that were attacking technology and thought, “How could you possibly predict every risk and get a framework for getting ahead of it?” Fast-forward to where we are now, and it’s a mature discipline. Most companies have teams around this and sets of protocols to make sure that their products don’t have these vulnerabilities. I think we’re in the early stages of a similar evolution, especially with AI and AI ethics, where these sorts of norms will become standard and this is a specialized profession that’s developing.

Sam Ransbotham: Maybe let’s get a little bit more specific here. We’ve talked about needing guardrails. What kinds of things do people need to be worried about?

Paula Goldman: If we zoom in on AI specifically here, the positive potential impact — and, actually, real-time impact — of that technology is immense. Think about health care. One of our research teams is working on AI to spot breast cancer with machine learning; it’s called ReceptorNet. That’s the tip of the spear. There’s so much stuff going on in health care that can improve outcomes and save lives. We have a project called SharkEye, which is using vision to look at images of beaches and see if there’s a shark there for a safety warning. There are many, many, many applications of AI like that that have huge benefits for humanity.

At the same time, technologies often come with unintended consequences. AI is really an automation of human intelligence, and it’s only as good as the data it gets fed; that data is the result of human decisions, which makes it imperfect. That’s really important for us to look out for. It’s very, very important for companies that are using AI to automate processes or, especially, to make decisions that could impact human outcomes, whether that’s a loan, or access to a job, or whatnot. I’m sure by now you’ve heard many times about the research that was done — I think actually partly at MIT — about facial recognition by folks like Joy Buolamwini and Timnit Gebru, showing that facial recognition is more accurate on lighter-skinned people than on darker-skinned people, which can have catastrophic impacts in a criminal-justice setting. There’s a lot to look out for to make sure, in particular, that the questions of bias are appropriately safeguarded against when developing this technology.
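
As a concrete illustration of the kind of bias audit being described here, the following is a minimal, hypothetical sketch that measures a classifier’s accuracy separately for each demographic group. The data, group labels, and numbers are illustrative assumptions, not drawn from the studies or products mentioned above.

```python
import pandas as pd

# Hypothetical evaluation results for a classifier: the true label, the
# predicted label, and the demographic group each row belongs to.
results = pd.DataFrame({
    "group":  ["lighter", "lighter", "lighter", "darker", "darker", "darker"],
    "y_true": [1, 0, 1, 1, 0, 1],
    "y_pred": [1, 0, 1, 0, 0, 0],
})

# Accuracy per group. A large gap between groups is a signal to revisit the
# training data and the model before it is used for consequential decisions.
accuracy_by_group = (
    results.assign(correct=results["y_true"] == results["y_pred"])
           .groupby("group")["correct"]
           .mean()
)
print(accuracy_by_group)
# With this toy data: darker ~0.33, lighter 1.00 -- a disparity worth
# investigating before deployment.
```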

Shervin Khodabandeh: On that point, clearly there is a strong case to be made to make sure that, as you said, the unintended consequences are understood and mitigated and managed. That is only going to get more complex as AI gets smarter, and there will be more data. Do you think that there is a possibility of AI itself driving or being a contributor to more ethical outcomes or to more equity in certain processes? I mean, there’s clearly a case for making sure AI doesn’t do something crazy. Then, is [it] also possible for AI to be used to make sure we humans don’t do something crazy?

Paula Goldman: Of course. I think that’s the flip side that maybe doesn’t get talked about as much. Humans making decisions about who gets a loan or who gets a job are also very subject to bias. So I think there is the potential, if done right, when AI is used in those circumstances, in combination with human judgment and appropriate guardrails, for the three of those things to actually open up more opportunities together.

I’m just giving you examples of use cases, but I think that’s probably across the board. Going back to the health care example, a doctor could be tired the day he’s looking at a scan for cancer. That’s why sometimes we get into these polarized discussions of AI versus humans. And it’s not an “either/or”; it’s an “and” — and it’s with a set of guardrails and responsibilities.

Shervin Khodabandeh: I wanted to do a follow-up on the guardrails and responsibilities. As you were thinking about ethical AI, either for Salesforce or more broadly for any organization, how much of the effort to do this at scale do you think is about guardrails and education and governance and more visibility, and appreciation of the potential risks — so it’s process and people kind of stuff — versus technology itself?

Paula Goldman: Absolutely, and when you think about [it] … part of the answer to these guardrails is the technology itself. How do you build tools at scale to watch for risk factors? This is actually something we try to do with our customers. We have integrated AI into a number of applications, for example. We have chatbots for customer service, and we have AI that helps salespeople with lead and opportunity scoring so that they know which prospect to go after and can spend most of their time going after that prospect.

Within that, we’ve built technological safeguards and prods. Some of it is just the way we build the technology itself, but some of it is actually having the technology prompt the human and say, for example, “Hey, you’re building a model. You have identified that maybe you don’t want race as a variable in this model because it can introduce bias, but we see you have a field that is ZIP code, and ZIP code can be highly correlated with race.”
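
As a sketch of what such a prompt might look like under the hood, the hypothetical check below flags a field (here, ZIP code) that can act as a proxy for a protected attribute the modeler has excluded. The function name, field names, data, and threshold are assumptions for illustration, not Salesforce’s actual implementation.

```python
from typing import Optional

import pandas as pd

# Hypothetical training data: a ZIP code field plus a protected attribute
# (race) that the modeler has already chosen to exclude from the model.
df = pd.DataFrame({
    "zip_code": ["94103", "94103", "10451", "10451", "60629", "60629"],
    "race":     ["A", "A", "B", "B", "C", "C"],
})

def proxy_warning(data: pd.DataFrame, candidate: str, protected: str,
                  threshold: float = 0.8) -> Optional[str]:
    """Return a warning if `candidate` strongly predicts `protected`.

    The score is the fraction of rows whose protected value matches the most
    common protected value for their candidate value: a crude but readable
    measure of how well the candidate field stands in for the protected one.
    """
    table = pd.crosstab(data[candidate], data[protected])
    score = table.max(axis=1).sum() / table.values.sum()
    if score >= threshold:
        return (f"'{candidate}' predicts '{protected}' for {score:.0%} of rows; "
                f"it may reintroduce the bias you excluded '{protected}' to avoid.")
    return None

print(proxy_warning(df, "zip_code", "race"))
```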

I think the answer to your question is “yes.” There’s a very human element to this question, but to address it at scale, you actually need to automate the solution as well. Again, it comes back to what we were just saying — of combining the technology, the human, the process, that judgment, all together to solve the problem. At Salesforce, we do something called consequence scanning, building off of a methodology put out by a nonprofit in the U.K. called Doteveryone. We’ve customized it for our own use. We’re about to put a toolkit out around it, but we work with scrum teams at the beginning of a process. We say… it’s actually simple at the end of the day: “What are you intending to build here? And what might some of the unintended consequences be, positive and negative?” From that, we generate ideas that actually go on the backlog for that team. That’s how you influence the product road map. It’s not foolproof, for sure. You’re not going to predict everything.

Shervin Khodabandeh: But it’s getting better.

Paula Goldman: It’s getting better, as you might expect it to, as you start to see some of these consequences at scale, and you start to see this pushback and critique from society. It’s always more complicated than a black-and-white picture of how simple it might be to fix things. It’s not simple to fix AI bias, but there’s also no excuse for not paying attention to it now, because it’s such a known problem.

Shervin Khodabandeh: Paula, I want to ask you a hard, maybe unfair question. You commented on the past in this field and how the evolution and strength of technology and tech firms has shifted the dialogue. And you’ve talked a lot about the present. How do you think the future will be 10 years from now? How do you think this conversation we’re having now, which 10 years ago wasn’t commonplace — that “Let’s look at the algorithms; let’s look at unfair treatments and ZIP codes [and] whether it’s a correlation to poverty levels or race” — how do you think the future will be different?

Paula Goldman: Well, you’re catching me on a good day, so I’m optimistic about this, but really, I don’t think these are easy things to scale — not in the beginning, as in any big change in industry. I think all the signs do point to the maturation of this type of work.

I will say, especially with more obvious places, like AI or crypto or things that are in the news because of the questions that come up around them. … I will also say there’s a lot of regulatory pressure, and some of how that will play out will depend on a lot of very complicated politics. You see it not only in the U.S.; you see it in Europe, [which] just released a draft AI law a couple of months ago, and in other jurisdictions as well — also privacy legislation, social media legislation. And those debates are bringing to the forefront what is the responsibility of technology companies versus governments and the role that civil society can play.

I think you could pick apart any particular proposal with pros and cons, but the debate itself is very healthy, in large part. I take a lot of hope from that, because if you were to ask me, “What’s going to happen with technology companies on their own?” I actually don’t think it would scale on its own. It’s part of an ecosystem where companies play a role, civil society groups play a role. That voice is very, very important. And it’s the combination of all those things, which I will say actually has surprised me in terms of how robust it’s remained, and the conversation keeps deepening and widening.

Sam Ransbotham: Well, this is great, and enough about you, but focus on me for a minute. I teach a bunch of college students.

Shervin Khodabandeh: Let’s talk about you, Sam.

Sam Ransbotham: I teach a bunch of college students. That seems like a way to influence the future. What should we be telling people? Monday morning, I have a class on machine learning and artificial intelligence. What can you vicariously tell me to tell them? What should we be talking about?

Paula Goldman: Well, what have you told them? I’m curious. Has it come up?

Sam Ransbotham: It comes up all the time. Actually, I feel a little bit negative, because I always start every class … even though it’s a technical class, we start with what’s happening in the news, and I feel evil because the news is, lots of times, either a glorious example of something that could happen in a lab in 30 years or something terrible that’s happening right now. I feel like I come across as the dark side. What should we be telling college students or people heading into this field?

Paula Goldman: I’m going to go on a little bit of a tangent and come back to your question. I think your commentary on the news is really apt. It does relate to the conversation about technology, because it is very extreme right now: all the things that are wrong, which we should talk about for sure, and then all these overstated claims about what will happen in 30 years, while ignoring that there’s also a lot of extraordinary benefit happening from technology right now. I could go on and on about this, thinking about what technology did for society during COVID — how it helped businesses stay open and people stay safe, and on and on.

I think there is a more nuanced conversation that balances the real need for caution and societal engagement. At the end of the day, when you get this right, technology is an extraordinary force for opening up opportunity for people. I think that basic thinking about that balance and how we teach up-and-coming people who may indeed work in technology is very important. I will also say, I’ve been super heartened over the last year to see the blossoming of curricula that are around tech ethics, especially in a university setting.

I remember a few years ago, when I was at Omidyar Network, we sponsored a challenge for educators — professors — to work this into [a computer science] curriculum. We see that blossoming all over the place — a lot of integration across different technological disciplines. That’s super heartening because then it’s a part of the conversation. The real challenge you want to avoid, working in a company, is that this is seen as some effort off in a corner, separate from the real business.

Sam Ransbotham: Governance.

Paula Goldman: Exactly. You want technologists to see this as part of their job. It’s so cool that we’re seeing more and more ways that professors are just integrating this into the standard curricula of … name your technological discipline.

Sam Ransbotham: One of the things that I like to do in class is give people data sets and then say, “That column — I told you it was X; it actually is race. How does that change what you feel about the analysis you just did and how proud you were of the significant results you got based on using that variable? Does it matter, and why?”

Paula Goldman: Does it work?

Shervin Khodabandeh: That’s a great educational tool.

Sam Ransbotham: Yeah, I think it does work, because it says, “Well, you’re just treating this as data, but this data actually represents something in the real world that is an attribute of a real human versus an abstract number in row 7 and column 3.”

Paula Goldman: That’s another huge topic, and another really important thing to convey when we’re teaching is that data is a person, most likely, and keeping that in mind as we’re thinking about what data we’re collecting, what our intended use is, whether we really need that data … super, super important. It’s something that we also spend a lot of time thinking about within our overall tech ethics work at Salesforce.

Sam Ransbotham: When you’re hiring people, what do you want them to know? What kind of skills are people needing? How can you tell if someone’s going to use their power with great responsibility, to come back to your prior quote?

Paula Goldman: This is a great one. It’s something that DJ Patil, who’s the former chief technologist of the United States … he talks about this a lot, too, in the context of data science. In the hiring process, when you’re asking someone … so, often hiring processes will have an exercise that they’ll ask someone to do. Throw in an ethics question. See how they deal with it. See how coherent the answer is. I think those types of little cues, they not only help you evaluate how sophisticated someone’s thinking is about those questions, but those cultural cues [should not] be underestimated.

We think about that a lot as well. How do we create a culture in which everyone feels like it’s their responsibility to think about these questions? Part of that is about giving them the tools that they need to do it, and the incentives that they need to do it, which include having this be echoed all around by leadership. It’s like, how often are you running across these questions — whether it’s in an interview process or it’s in an all-hands or it’s in your one-on-one with your manager? It matters a lot, which brings me back to why I’m so excited that this is starting to really flourish in university settings, in the teaching itself, in the core curriculums. Those cues matter a lot.

Sam Ransbotham: Paula, this is all quite fascinating. I think if we come back to what Shervin [intimated at the] beginning … how technology can affect society and can affect the organization as a whole — not so much about how we use technology but [how] these technologies affect us and affect society. It’s notable that those effects can be positive and value-building or negative. It just depends on how we make those choices to use it. Thank you for taking the time to talk with us. I’ve really enjoyed it.

Shervin Khodabandeh: Thank you so much. Very insightful.

Paula Goldman: It was super fun. Thank you for having me.

Sam Ransbotham: Next time, we speak with Ranjeet Banerjee, CEO of Cold Chain Technologies. Please tune in.

Allison Ryder: Thanks for listening to Me, Myself, and AI. We believe, like you, that the conversation about AI implementation doesn’t start and stop with this podcast. That’s why we’ve created a group on LinkedIn, specifically for leaders like you. It’s called AI for Leaders, and if you join us, you can chat with show creators and hosts, ask your own questions, share insights, and gain access to valuable resources about AI implementation from MIT SMR and BCG. You can access it by visiting mitsmr.com/AIforLeaders. We’ll put that link in the show notes, and we hope to see you there.
