Me, Myself, and AI Episode 208

Starting Now on Technology Ethics: Elizabeth Renieris


Topics

Artificial Intelligence and Business Strategy

The Artificial Intelligence and Business Strategy initiative explores the growing use of artificial intelligence in the business landscape. The exploration looks specifically at how AI is affecting the development and execution of strategy in organizations.

In collaboration with BCG

Technology presents many opportunities, but it also comes with risks. Elizabeth Renieris is uniquely positioned to advise the public and private sectors on ethical AI practices, so we invited her to join us for the final episode of Season 2 of the Me, Myself, and AI podcast.

Elizabeth has worked for the Department of Homeland Security and private organizations in Silicon Valley, and she founded the legal advisory firm Hackylawyer. She now serves as founding director of the Notre Dame-IBM Technology Ethics Lab, which is focused on convening leading academic thinkers and technology executives to help develop policies for the stronger governance of AI and machine learning initiatives. In this episode, Elizabeth shares her views on what public and private organizations can do to better regulate their technology initiatives.

Read more about our show and follow along with the series at https://sloanreview.mit.edu/aipodcast.

Subscribe to Me, Myself, and AI on Apple Podcasts, Spotify, or Google Podcasts.

Transcript

Sam Ransbotham: Ethical use of technology is and should be a concern for organizations everywhere, but it’s complicated. Today we talk with Elizabeth Renieris, founding director of the Notre Dame-IBM Technology Ethics Lab, about what organizations can do today without waiting for the perfect answer.

Welcome to Me, Myself, and AI, a podcast on artificial intelligence in business. Each episode, we introduce you to someone innovating with AI. I’m Sam Ransbotham, professor of information systems at Boston College. I’m also the guest editor for the AI and Business Strategy Big Ideas program at MIT Sloan Management Review.

Shervin Khodabandeh: And I’m Shervin Khodabandeh, senior partner with BCG, and I colead BCG’s AI practice in North America. Together, MIT SMR and BCG have been researching AI for five years, interviewing hundreds of practitioners and surveying thousands of companies on what it takes to build, deploy, and scale AI capabilities across an organization and really transform the way organizations operate.

Sam Ransbotham: Today we’re talking with Elizabeth Renieris. Elizabeth is the founding director of the Notre Dame-IBM Technology Ethics Lab, as well as founder and CEO of Hackylawyer. Elizabeth, thanks for taking the time to talk with us today. Welcome.

Elizabeth Renieris: Thanks for having me.

Sam Ransbotham: Let’s start with your current role — your current new role at the Notre Dame-IBM Technology —

Elizabeth Renieris: I was going to ask which one.

Sam Ransbotham: Well, I was thinking about the ethics lab, but you actually can start wherever you like.

Elizabeth Renieris: Sure. So, as you mentioned, I’ve been recently appointed as the founding director of a new technology ethics lab at the University of Notre Dame. It’s actually called the Notre Dame-IBM Technology Ethics Lab, as the generous seed funding is actually from IBM. My appointment is a faculty appointment with the University of Notre Dame, and the intention of the lab is to complement Notre Dame’s existing technology ethics center, which is a very traditional academic research center focused on technology ethics, so you can imagine there are many tenured faculty members affiliated with the center, and they produce traditional academic research, peer-reviewed journal articles.

The lab, in contrast to that, is meant to focus on practitioner-oriented artifacts, so the things that we want to produce are for audiences that include companies themselves, but also for law and policy makers, also for civil society and other stakeholders. And we want them to be very tangible and very practical, so we’re looking to produce things like open-source toolkits and model legislation and explainer videos and model audits, and a whole array of things that you wouldn’t necessarily find from a traditional academic research center. What we really need in this space is … centers and institutions that can translate between academia and practice. The beauty of housing the lab in the university, of course, is having access to the faculty that’s generating the scholarship and the theoretical foundations for the work.

Shervin Khodabandeh: Can you comment a bit more on how you guys make that happen? Because I know there’s a lot of primary research, and then you have the faculty’s point of view. And I assume that there’s also industry connections, and some of these applications in real life come into play, which is really important, as you say. What are some of the ways you guys enable that?

Elizabeth Renieris: Right now, as we’re getting up and running, really what we’re focusing on is convening power, so we’re looking to convene groups of people who aren’t necessarily talking to each other and to do a lot of that translation work. So right now, the intention is to put out an official call for proposals to the general public and to be sourcing projects from all the different stakeholders that I outlined, consisting of teams of individuals who come from different industries and sectors and represent different sectors of society, and to have them focus on projects that actually try and solve real-world challenges. So, for example, right now, during the pandemic, those challenges might be something like returning to work or returning to school. And then, of course, what we want to do as the lab is take the brilliant work that the faculty at Notre Dame is doing — and eventually [from] elsewhere — and leverage that to underpin and to inform the actual projects that we’re sourcing. And we can hopefully build some narrative arc around how you start translating that theory into practice.

Shervin Khodabandeh: It seems you’re looking at ethics in AI from two sides. One is the ethics of the technology itself, as in, “Is what the technology [is] doing ethical, and how do you make sure it is ethical?” The other is, “How can technology help the ethics conversation itself?”

Elizabeth Renieris: I think it’s absolutely both. In my mind, you cannot separate a conversation about technology ethics from a conversation about values — both individual values and collective and societal values. So, what I find really fascinating about this space is that you’re right: While we’re looking at the ethical challenges presented by specific technologies, we’re also then confronted with having to identify and prioritize and reconcile competing values of different people and communities and stakeholders in the conversation. And when we have a specific challenge or a specific technology, it actually really turns the mirror back on us as a society and forces us to ask the question of, “What kind of society do we want to be?” or “What kind of company do we want to be?” or “What kind of individual or researcher do we want to be, and what are our values, and how do those values align with what it is that we’re working on from a technology standpoint?” So I believe it’s absolutely both. And I think that’s also been part of the evolution of the ethics conversation in the last couple of years. … Perhaps it started out with the lens very much on the technology. It’s been very much turned around and focused on “Who’s building it? Who’s at the table? What’s the conversation? What are the parameters? What do we count? What values matter?” And, actually, from my standpoint, those are the really important questions that hopefully technology is an entry point for us to discuss.

Shervin Khodabandeh: Sam, I was just going to ask Elizabeth to maybe share with us how she ended up here, the path that you took that —

Elizabeth Renieris: How much time do you have? I’ll give you the abbreviated version. So I was classmates with Mark Zuckerberg at Harvard, and I’ve been thinking about these issues ever since. But more seriously, my professional trajectory was that, after law school, I worked at the Department of Homeland Security for a couple of years in their general counsel’s office. And this was a long time after 9/11, and I actually am from New York and have vivid memories of the event. And I was really struck by how much of the emergency infrastructure was still in place more than a decade after it was initially rolled out. I subsequently went back to obtain an LLM [master of laws degree] in London and, accidentally, having arrived in the year 2012, started working on the first draft of what became the General Data Protection Regulation, or the GDPR, and through that process gained a lot of exposure to the ad tech industry, fintech industry.

Somewhere along the way, [I] read the Bitcoin white paper, came back to the [United] States just before the referendum, and was branded a blockchain lawyer because I had read the Bitcoin white paper. [Laughs] So then I had this interesting dance of trying to be a data protection and privacy lawyer and also split my time with the blockchain distributed ledger folks. And I quickly picked up on some of the unsavory, unethical behavior that I saw in the space. And I was really bothered by it, and it also triggered these memories of the experience with Mark Zuckerberg scraping faces of my classmates in university. And it was just an interesting thing that I didn’t appreciate at the time but sort of bubbled in the background. And that led me to actually work in-house at a couple of companies — startups based in Silicon Valley and elsewhere. And there was more of this unsavory behavior, and I thought, “If only we could talk about technology, and we can engage with technology and be excited about it, without all of these terrible downsides.”

I think part of the reason I was observing that is because you didn’t have the right people in the room. So you had technologists that were talking past and over lawyers and policy makers, and that was my idea in late 2017 to start my Hackylawyer consultancy. The idea with that was, I’m fairly technically savvy, but I have this great training, these legal skills, these public policy skills; I’d like to be able to translate across those groups and bring them together, and [I then] built up a pretty successful consultancy around that for a couple of years thereafter.

Shervin Khodabandeh: That’s a very inspiring story, from the beginning to here.

Sam Ransbotham: I want to ask about the long version, but I don’t know if we have time for that. Let’s follow up on a couple of things that you mentioned. One is — and I think you and Shervin both talked about this briefly, but — there’s a little bit of excitement about some of the bad things that happen. When we see these cases of AI bias coming out and they make headlines, there’s also a silver lining, and that lining is pretty thick: They’re really highlighting things that already exist, that are already going on. And these seem like opportunities. But then, at the same time, you also mentioned how, when we react to those, we put things in place, and more than a decade later, the DHS protections were still in place. So how do we balance between reacting to these things that come up — between addressing biases and not putting in draconian measures that stifle innovation?

Elizabeth Renieris: You’re right that they’re opportunities. I think the idea is that, depending on the challenge presented — I don’t like the frame of stifling innovation or the tension between innovation and other values, like security or privacy or safety — I think we’re seeing this play out again in the pandemic, right? Where we are often being pushed a narrative around technologies that we need to deploy and technologies that we need to adopt in order to cope with the pandemic. And so we saw this in the debate over exposure notification and contact-tracing apps. We’re seeing this right now very prominently in the conversation around things like immunity certificates and vaccine passports. I think the value of ethics there, again, is that rather than look at the narrow particulars and tweak around the edges of a specific technology or implementation, to step back and have that conversation about values. And to have the conversation about, “What will we think of this in five or 10 years?”

The silver lining of what happened after 9/11 is that we’ve learned a lot of lessons from it. We’ve seen how emergency infrastructure often becomes permanent. We’ve seen how those trade-offs in the moment might not be the right trade-offs in the long run. So I think if we don’t take lessons from those — and this is where it’s really interesting in technology ethics, how there’s so much intersection with other things, like STS [science, technology, and society] and other fields around history and anthropology, and why it’s so critical to have this really interdisciplinary perspective, because all of those things, again, go back to a conversation about values and trade-offs and the prioritization of all of those. And some of that also, of course, has to do with time horizon, going back to your question before. So it’s easy to take the short view. It could be hard to take the long view. I think if you have an ethical lens, it’s important to balance both.

Shervin Khodabandeh: And also, I think you’re raising an interesting point — that is, with AI particularly, the consequences of a misstep [are] very long term, because the algorithms keep getting embedded and they multiply, and by the time you find out, it might not be as easy as just [replacing] it with a different one, because it has a cascading effect. On the point about innovation, AI can play a role in helping us be more ethical. We’ve seen examples. I think one of our guests talked about … Mastercard — how they’re using AI to understand the unconscious or unintended biases that their employees might have. What is your view on that — on AI specifically as a tool to really give us a better lens into biases that might exist?

Elizabeth Renieris: I think the challenge with AI is that it’s so broad, and definitions of AI really abound, and there’s not entire consensus around what we’re even talking about. And so I think there’s the risk that we use this broad brush to characterize things that may or may not be beneficial, and then we run the risk of decontextualizing. So we can say, “We have a better outcome,” but relative to what? Or, what were the trade-offs involved?

And I think it’s not just AI, but it’s the combination of a lot of new and advanced technologies that together are more than the sum of their parts, right? So AI plus network technologies, plus some of the ones I’ve mentioned earlier, I think are that much harder to unwind or course-correct or remedy when things go wrong. One of the challenges I see in the space is that, again, we can tweak around the edges and we’ll look at a specific implementation or a specific tech stack, and we won’t look at it in the broader context of “How does that fit into a system? And what are the feedback loops? And what are the implications for the system as a whole?” And I think that’s one of the areas where the technology ethics conversation is really useful, particularly when you look at things like relational ethics and things that are a lot more concerned with systems and relationships and the interdependencies between them. I worry that it’s a little too soon to declare victory there, but [it’s] definitely something to keep an eye on.

Shervin Khodabandeh: As you say, the devil’s in the details. This is the beginning of having a dialog and having a conversation on a topic that otherwise would not even be on the radar of many people. What is your advice to executives and technologists that are right now building technology and algorithms? What do they do in these early stages of having this dialog?

Elizabeth Renieris: That’s a tough question. Of course, it depends on their role. So you can see how the incentives are very different for employees versus executives, versus shareholders or board members, so thinking about those incentives is important in terms of framing the way to approach this. That being said, there are a lot of resources now, and there’s a lot available in terms of self-education, and so I don’t really think there’s an excuse at this point to not really understand the pillars of the conversation: the core texts, the core materials, the core videos, some of the principles that we talked about before. I mean, I think there’s so much available by way of research and tools and materials to understand what’s at stake that to not think about one’s work in that context feels more than negligent at this point. It almost feels reckless in some ways.

Nevertheless, I think the important thing is to contextualize your work — to take a step back. This is really hard for corporations, especially ones with shareholders, so we can understand that. We can hold both as true at the same time and think about taking it upon yourself. There are more formal means of education; one of the things that we are doing at the lab, of course, is trying to develop a very tangible curriculum for exactly the stakeholders that you mentioned, and with the specific idea to take some of the core scholarship and translate it into practice. That would become a useful tool as well. But at the end of the day, I think it’s a matter of perspective and accepting responsibility for the fact that no one person can solve this. At the same time, we can’t solve this unless everyone acknowledges that they play a part.

Sam Ransbotham: That ties into things your lab is doing because … I think the idea of everybody learning a lot about ethics makes sense [on] one level. On the other hand, we also know — we’ve seen with privacy — that people are lazy, and we are all somewhat lazy. We’ll trade short term for long term. And it seems some of what your lab is trying to set up is making that infrastructure available to reduce the cost to make it easier for practitioners to get access to those sorts of tools.

Elizabeth Renieris: Yeah, and I think education is not a substitute for regulation. I think ultimately, it’s not on individuals, it’s not on consumers. My remarks shouldn’t be taken as saying that the responsibility to really reduce and mitigate harms is on individuals entirely. I think the point is that we just have to be careful that we don’t wait for regulation. One of the things that I particularly like about the technology ethics space is that it takes away the excuse to not think about these things before we’re forced to. I think in the past, there’s been this luxury in tech of waiting to be forced into taking decisions or making trade-offs or confronting issues. Now, I would say, with tech ethics, you can’t really do that anymore. I think the zeitgeist has changed. The market has changed. Things are so far from perfect — things are far from good — but at least in that regard, you can’t hide from this. I think in that way, they’re at least somewhat better than they were.

Shervin Khodabandeh: I also feel part of that is, many organizations, to Elizabeth’s point, not only [do] they [not] have the dialog; even if they did, they don’t have the necessary infrastructure or investments or incentives to actually have those conversations. And so I think — I go back to your earlier point, Elizabeth — we have to have the right incentives, and organizations have to have, with or without the regulation, the investment and the incentives to actually put in place the tools and resources to have these conversations and make an impact.

Elizabeth Renieris: You have to also align the incentives. Some of these companies, I think, actually want to do the right thing, but again, they’re sort of beholden to quarterly reports and shareholders and resolutions, and they need the incentives. They need the backing from the outside to be able to do what it is that is probably in their longer-term interests.

Sam Ransbotham: You mentioned incentives a few times. Can we get some specifics for things that we could do around that to help align those incentives better? What would do it?

Elizabeth Renieris: I think process-oriented regulations make sense. So, what incentive does a company have right now to audit its algorithms and then be transparent about the result? None. They might actually want to know that; they might actually want an independent third-party audit. That might actually be helpful from a risk standpoint. If you have a law that says you have to do it, most companies will probably do it. So I think those types of — they’re not even nudges, they’re clear interventions — are really useful. I mean, I think the same is true of things like board expertise and composition. You may want to think about, “Is it useful to have superclass share structures in Silicon Valley, where basically no one has any control over the company’s destiny apart from one or two people?” I think these are all, again, common interventions in other sectors and other industries. And the problem is that this technology exceptionalism was problematic before, but now, when every company is a tech company, the problem has just metastasized to a completely different scale.

Sam Ransbotham: The analogy I think about is food. I mean, anything that sells food now, we want them to follow food regulations. That certainly wasn’t the case 100 years ago, when Upton Sinclair wrote The Jungle. I mean, it took that to bring that transparency and scrutiny to food-related processes. But we don’t make exceptions now for, “Oh, well, you’re just feeding 100 people, [so] we’re not going to force you to comply with health regulations.”

Elizabeth Renieris: Exactly.

Shervin Khodabandeh: That’s actually a very good analogy, Sam, because as I was thinking about what Elizabeth was saying, my mind went into just also ignorance. I mean, I think many users of a lot of these technologies — highly, highly senior people, highly, highly educated people — may not even be aware of what the outputs are or what the interim outputs are, or how they come about, or what all of the hundreds and thousands of features that give rise to what the algorithm is doing [are] actually doing. And so it’s a little bit like the ingredients in food, where we had no idea some things were bad for us and some things would kill us and some things that we thought were better for us than the other bad thing are actually worse for us. So I think all of that, it’s bringing some light into that education as well as regulation and incentives.

Elizabeth Renieris: The point is, we acted before we had perfect information and knowledge. And I think there’s a tendency in this space to say, “We can’t do anything. We can’t intervene until we know exactly what this tech is, what the innovation looks like.” We got food wrong. We had the wrong dietary guidelines. We readjusted them. We came back to the drawing board. We recalibrated. The American diet looks different — I mean, it’s still atrocious, but we can revisit.

Shervin Khodabandeh: But we keep revisiting it, which is your point.

Elizabeth Renieris: We keep revisiting, and we reiterate. And so that’s exactly what we need to do in this space and say, “Based on what we know now. …” And that’s science, right? Fundamentally, science is the consensus we have at a given time. It doesn’t mean it’s perfect. It doesn’t mean it won’t change. But it means that we don’t get paralyzed. We act with the best knowledge that we have, in the humility that we’ll probably have to change this or look at it again. The same thing happened with the pandemic, where we had the WHO saying that masks weren’t effective and then changing course, but we respect that process, because there’s the humility and the transparency to say that this is how we’re going to operate collectively because we can’t afford to just do nothing. And I think that’s where we are right now.

Shervin Khodabandeh: Very well said.

Sam Ransbotham: Well, Elizabeth, I really like how you illustrate all these benefits and how you make that a concrete thing for people. And I hope that the lab takes off and does well and makes some progress and provides some infrastructure for people to make it easier for that. Thank you for taking the time to talk with us today.

Shervin Khodabandeh: Yeah, thank you so much.

Elizabeth Renieris: Thanks so much for having me. This was great.

Sam Ransbotham: Shervin, Elizabeth had a lot of good points about getting started now. What struck you as interesting, or what struck you as a way that companies could start now?

Shervin Khodabandeh: I think the most striking thing she said — I mean, she said a lot of very, very insightful things, but in terms of how to get going, she made it very simple. She said, “Look, this is ultimately about values, and if it’s something you care about” — and we know many, many organizations and many, many people and many very senior people and powerful people do care about it — “if you care about it, then do something about it.” The striking thing she said is that you have to have the right people at the table, and you have to start having the conversations. As you said, Sam, this is a business problem.

Sam Ransbotham: That’s a very managerial thing.

Shervin Khodabandeh: Yeah. It’s a managerial thing. It’s about allocation of resources to solve a problem. It is a fact that some organizations do allocate resources on responsible AI and AI governance and ethical AI, and some organizations don’t. I think that’s the key lesson from here — that if you care about it, you don’t have to wait for all the regulation to settle down.

Sam Ransbotham: I liked her point about revisiting it as well. That comes with the idea of not starting perfectly; just plan to come back to it, plan to revisit it, because these things, even as you said, Shervin, if you got it perfect, technology would change on us before —

Shervin Khodabandeh: Exactly. You would never know you got it perfect.

Sam Ransbotham: You would never even know you got it perfect. Yeah. The perfection would be lost in immortality.

Shervin Khodabandeh: I’m still trying to figure out if coffee is good for your heart or bad for your heart, because it’s gone from good to bad many times.

Sam Ransbotham: I think that’s some of what people face with a complex problem. If this were an easy problem, we wouldn’t be having this conversation. There are no simple solutions: If people are tuning in to hear, “All right, here are the four things that I need to do to solve ethical problems with artificial intelligence,” we’re not going to be able to offer that. We’re not quite [at] that BuzzFeed level of being able to say, “Here’s what we can do,” because it’s hard.

Shervin Khodabandeh: The other thing that struck me is that she has a lot of education and passion in this space that I think is actually quite contagious, because I think that’s exactly the mentality and the attitude that many organizations can start to be inspired by and adopt to start moving in the right direction rather than waiting for government or regulation to solve this problem. We can all take a role in becoming more responsible and more ethical with AI, starting now. We already have the right values, and we already know what’s important. Nothing is really stopping us from having that dialog and making those changes.

Sam Ransbotham: Thanks for joining us for Season 2 of Me, Myself, and AI. We’ll be back in the fall of 2021 with Season 3. In the meantime, check the show notes for ways to subscribe to updates and stay in touch with us. Thanks for joining us today.

Allison Ryder: Thanks for listening to Me, Myself, and AI. If you’re enjoying the show, take a minute to write us a review. If you send us a screenshot, we’ll send you a collection of MIT SMR’s best articles on artificial intelligence, free for a limited time. Send your review screenshot to smrfeedback@mit.edu.

