Balancing innovation and safety: ethics, transparency, bias and privacy

Synopsis

How do we protect ourselves in the new era of AI? In this episode, hosts Amanda Lang and Jodie Wallis speak with Ann Cavoukian about the privacy of personal data. Cavoukian is the distinguished expert-in-residence leading the Privacy by Design Centre for Excellence at Ryerson University. They also speak with Dr. Foteini Agrafioti, the chief science officer at RBC, and head of Borealis AI, as well as Deb Santiago, co-lead of the Responsible AI practice at Accenture. The conversation continues with Richard Zemel, co-founder and director of research at the Vector Institute for Artificial Intelligence.

Appearing in this episode: Ann Cavoukian, Dr. Foteini Agrafioti, Deb Santiago and Richard Zemel.

Transcript

Amanda Lang: I’m Amanda Lang and this is the AI Effect, a podcast that looks at artificial intelligence and the opportunities and threats that it might pose for businesses and people in Canada.

Jodie Wallis: I’m Jodie Wallis, Managing Director of Artificial Intelligence for Accenture in Canada. My job is to help businesses navigate one of the most fundamental technological and economic shifts of our lifetime.

[00:00:30]

Amanda Lang: And of course, businesses do that by figuring out how to use the technology to their advantage. With AI, it’s shown up early and often in the form of “dumb” AI that’s predictive. It helps them identify customers, close sales, automate production.

I remember the first time I saw this form of AI. It was in 2009, and I’d been doing a little research for a holiday, and two days later a banner ad for the hotel, in Hawaii as it happened, popped up on my computer. And it was the creepiest thing that had happened to me at that point. It felt super intrusive.

[00:01:00] Of course, it’s not just me, and this has come a long way. Remember Target, a few years ago, getting into trouble for sending people maternity coupons when their families didn’t know they were pregnant? That came from analyzing shopping data.

We know Facebook knows things about where we are in our cycle, based on the language we use. Is this scary or is it helpful?

Jodie Wallis:

[00:01:30] It’s funny you talk about 2009, cause I remember a similar timeframe and a similar experience and how creepy it was then. But it’s not as creepy now, so what does that tell us about the evolution of our comfort level with AI?

Amanda Lang: The AI is changing us.

[00:02:00] So, there’s a lot of work underway to consider and incorporate questions about ethics when it comes to AI. How do we counter the biases we have, that we may or may not know about? There are questions of accountability, transparency and privacy. Jodie raised the point that data is the most important piece of the equation when it comes to AI. Where does the data come from?

[00:02:30] Ann Cavoukian is the former Information and Privacy Commissioner of Ontario. She’s also the pioneer of something called Privacy by Design. That’s an approach to systems engineering that takes privacy into account through the whole process of designing something, and it’s an approach that’s being adopted by governments globally. Ann says that the trick here is to have a slightly more nuanced and thoughtful view of privacy.

Ann Cavoukian:

You know, I always tell people, privacy’s not a religion. If you want to give away your information, do it. As long as it’s your choice. Privacy is all about control. Personal control on the part of a data subject. So I think in some cases, people will

[00:03:00] be happy to allow their information to be used, because they get a benefit from it. In other cases, the answer will be “No.” So I just always want the individual to play some kind of role in that. That’s- that’s my goal.

Ann Cavoukian:

[00:03:30] Privacy by Design is all about being proactive and embedding the privacy protective measures into the design of technologies. Into code, into policies, so that you can avoid the privacy harms from arising. It’s a model of prevention, much like a medical model. You don’t want to wait for the harm to arise and then offer some solution. Which is also valuable, in terms of regulatory compliance, but that’s too little, too late. You’ve gotta prevent the harms from arising, that’s why I developed Privacy by Design. Long time ago. Late ’90s, but it really took off after 9/11.

[00:04:00] What is going to be huge, and what we are doing in Canada, is thanks to our federal privacy commissioner, Daniel Therrien. He’s already gone to the government, the federal government, and said, “Look, we need to strengthen our privacy laws. The existing law, PIPEDA, is no longer sufficient to attain essential equivalence with the GDPR, the new law.” We had it in the past, but now that the privacy bar’s been elevated, we’re not there anymore. So he’s already pressed the federal government to upgrade our laws, which are from the early 2000s, and he’s asked them specifically to add Privacy by Design into the new legislation.

Jodie Wallis: So, it’s okay to collect all sorts of data, pictures of us walking across streets, as long as that data is used without identifying who we are?

Ann Cavoukian:

[00:04:30] That’s right. The privacy harms rise and end with the identifiability of the individuals. If you have pictures of people, but they’re not personally identifiable, then, uh, the privacy laws don’t apply to them, because those laws apply to personal information, to personally identifiable individuals’ data. The law we have here in Ontario, for example, all relates to personally identifiable data. Once you no longer have data that is associated with me… Let’s say you have sensitive medical data associated with my name, Ann Cavoukian. Once my name is removed from it, you will still have the sensitive medical data, but it could be about anybody.

[00:05:00] So, in a way, you don’t want to stop that, ’cause you want to use that data for much-needed research, and analysis, and new developments. So it’s not the data; it hinges on the identifiability of the data. And you have to do it properly. You have to strongly de-identify data, using strong de-identification protocols, combined with a risk of re-identification framework. We have the technology to do this.
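To make the idea a bit more concrete, here is a deliberately simplified sketch of the first step Cavoukian describes: dropping direct identifiers and coarsening quasi-identifiers so a record is harder to link back to a person. The field names and groupings are invented for illustration; real de-identification protocols, and the re-identification risk frameworks she mentions, are considerably more rigorous.

```python
# A deliberately simplified sketch of basic de-identification:
# drop direct identifiers and coarsen quasi-identifiers so records are
# harder to link back to a person. Real protocols (and re-identification
# risk assessment) go far beyond this; the field names are invented.
def de_identify(record: dict) -> dict:
    DIRECT_IDENTIFIERS = {"name", "email", "health_card_number"}
    out = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    # Generalize quasi-identifiers: exact values become coarse bands.
    if "age" in out:
        out["age_band"] = f"{(out.pop('age') // 10) * 10}s"      # e.g. 47 -> "40s"
    if "postal_code" in out:
        out["postal_area"] = out.pop("postal_code")[:3]           # e.g. "M5V 2T6" -> "M5V"
    return out

record = {
    "name": "Ann Cavoukian",
    "email": "ann@example.com",
    "health_card_number": "0000-000-000",
    "age": 47,
    "postal_code": "M5V 2T6",
    "diagnosis": "asthma",   # the sensitive value kept for research use
}
print(de_identify(record))
# {'diagnosis': 'asthma', 'age_band': '40s', 'postal_area': 'M5V'}
```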

Amanda Lang:

[00:05:30] Do we need to worry, though, about AI that’s developed for benign purposes? For instance, fraud-detection AI for credit cards that could be used, in the wrong hands, for ill. In other words, my credit card company knows more about me than I may know about myself. I don’t care, because it helps me. But if the government got its hands on it and decided I was, you know, suspected of something, I might be more troubled.

Ann Cavoukian: Absolutely. You have to be concerned about that. So the example you gave with

[00:06:00] the credit cards, for example. They collect your information for fraud-related purposes. That’s the primary purpose of the data collection. Valuable, you want that to happen. What you don’t want is for it to be used by unauthorized parties, third parties, for secondary uses, like surveillance by the state. And believe me, the government is very interested in engaging in surveillance. And we have to ensure that that doesn’t happen.

So, for sure, we have to keep our eyes open, we have to prevent secondary uses of data. We always have to be on top of this. There is this myth that it’s a zero-sum proposition. It’s either/or. You can have privacy or security. Uh, privacy or business interests. One versus the other. And that zero-sum mindset will just bring it all to an end. We have to get rid of that.

[00:06:30] Substitute positive sum. All positive sum means is you can have privacy and security at the same time, in doubly enabling ways.

Amanda Lang: We’ve historically relied on private enterprise to innovate and create new products and services, and we’ve relied on governments to curtail the behavior in a way that we find socially acceptable. Who should be playing the biggest role here in making sure Privacy by Design is embedded in products?

Ann Cavoukian:

[00:07:00] Those rules are changing. I would no longer rely on government, uh, unfortunately. I would rely on, um, progressive, private sector organizations like Apple, for example. They’re amazing. End to end encryption, they offer that, if you have an Apple iPhone, or anything else. And they were the ones, who said, “No” to James Comey last year, when he said, uh, “We want you to- to,” uh-

Amanda Lang: Unlock the phone.

Ann Cavoukian:

[00:07:30] Yeah, which one? Was it San Bernardino, um, the guy there? And they said, “We- we don’t have the means to do that.” And they said, “You can develop the means.” And they said, “No, we won’t. We offer end-to-end encryption, full encryption, to all our customers. That’s what we do. Of course we could develop code to undo it. We’re not going to do that.” And that was the beauty of it. There are others that can do this for them, but Apple refused to do it. And they were willing to go to court. James Comey backed off.

[00:08:00] That’s the beauty of companies like Apple. And Microsoft has very strong code as well, um, so I don’t want us to give up on this. We increasingly have to turn to the private sector, uh, to strengthen our technology, encryption, and the protections that individuals will increasingly need. And unfortunately, government is becoming the purveyor of surveillance, and much of it you don’t know about.

Um, it is so unfortunate. And there’s no independent- oh, you have to have independent oversight. You know, in Canada we have this bill, C-59, right now. There’s no independent oversight. I mean, it reports to the Minister of Public Safety. That’s not independent of government, please. You have to have independent oversight.

[00:08:30]

Amanda Lang: So, we’re getting into these important questions about privacy and oversight, and how we’ll set the regulations around that. Jodie, data is so important. It’s the fuel of AI, people call it the new oil. So, how do you handle your data? How do you handle privacy issues?

Jodie Wallis:

[00:09:00] You know, when we started to record some of these podcast episodes, I have to admit, I hadn’t thought about it in a while. I tend to think about the applications that I’m willing to use, and not necessarily about the data that I’m willing to give.

So, for example, I am not on Facebook, and, uh, therefore I don’t worry about what Facebook is doing with my data. Um, but I do use Google for searching, and so there’s plenty that Google would know about me, as it pertains to my search habits.

Amanda Lang:

[00:09:30] It sounds like we’re pretty similar, because I avail myself of all kinds of useful technologies, like Uber. And that means that I need to tell Google Maps where I am, but any chance I get where I don’t have to, I actually turn off the option to be in the Cloud. I don’t store my information in the Cloud. People think I’m crazy on this front. But, I feel as though, whatever, kind of small marketplaces I have to maintain privacy in, I’m going to do it for as long as I can. Maybe it’s delusional.

Jodie Wallis:

[00:10:00] Well, I- I think we’re both a little bit delusional. I think taking the perspective, or the approach, that we’re just going to control the apps that we use is naïve. Um, especially when we hear about what’s happening with smart cities. As I walk down the street, information is being collected about me.


Amanda:

[00:10:30] Not long ago, I was traveling and my credit card stopped working, and I just assumed that the credit card company had flagged it. And I called them, as one does, to say, “Please un-flag my credit card, I really want these shoes.” And the recorded message said, “You don’t need to call us anymore. We haven’t frozen your card; our algorithms are so good that there’s very little chance our fraud detection did that to you.” And sure enough, I went to the next store and it had been a problem with the machine. In other words, sometimes I am happy to have, in this case, my credit card company tracking my behavior and understanding me well enough to provide a service in a better way.

Jodie: Intellectually, I feel that if I’m getting something in return for giving my data, I’m okay with it. If my data’s being used for purposes that don’t benefit me, then I don’t want to give it up.

[00:11:00]

Amanda: I guess the question it raises, and this is part ethical and part human nature, is: do we sometimes make that trade-off for a benign use that could be used against us later? We assume that the company’s doing good with it, and they could turn out to be doing bad at some point.

Jodie: In some cases, you want companies to know more about you. Like in the case you described, Amanda, of credit card fraud. But in some cases, you may want that information to remain yours and yours alone.

[00:11:30]

Amanda: Here’s Dr. Foteini Agrafioti, the chief science officer at RBC and head of Borealis AI, RBC’s research institute in artificial intelligence. She’s responsible for RBC’s intellectual property portfolio in the fields of AI and machine learning.

Foteini A.:

[00:12:00]

[00:12:30] If you think of how we do fraud detection, uh, on the one hand you can use data analytics to create a set of rules, man-made rules, that say, hey, if you buy a coffee in Toronto and five minutes later you buy a coffee in Vancouver, your credit card has probably been stolen, flag that. But the real world is more complicated than that. Um, and those traditional rules fall short. So machine learning comes in and says, forget about rules; let me have an agent in the background for every customer, learning the particular patterns of that customer and, using outlier detection, figuring out when there is an anomaly for that particular client. ’Cause something that I do may be unusual, uh, for me, but it may be very common for somebody else. So the ability to customize, to learn, uh, online, uh, as people are transacting, adapting to new patterns, ’cause we change, that’s the power of AI. And this is one core application in banking.
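As a rough illustration of the per-customer outlier detection Agrafioti describes, here is a minimal sketch in Python. The transaction features and the choice of scikit-learn’s IsolationForest are assumptions made for the example, not a description of any bank’s actual system.

```python
# A minimal sketch of per-customer anomaly detection for transactions.
# Illustrative only: feature choices and the IsolationForest detector
# are assumptions, not how any particular bank does fraud detection.
from dataclasses import dataclass
from sklearn.ensemble import IsolationForest

@dataclass
class Txn:
    amount: float      # dollars
    hour: int          # hour of day, 0-23
    merchant_id: int   # coarse merchant category code

def fit_customer_model(history: list[Txn]) -> IsolationForest:
    """Learn what 'normal' looks like for one customer from their history."""
    X = [[t.amount, t.hour, t.merchant_id] for t in history]
    return IsolationForest(contamination=0.01, random_state=0).fit(X)

def is_suspicious(model: IsolationForest, txn: Txn) -> bool:
    """Flag a new transaction as an outlier relative to this customer."""
    return model.predict([[txn.amount, txn.hour, txn.merchant_id]])[0] == -1

# Example: a large 3 a.m. purchase stands out against daytime coffees,
# even though the same amount might be routine for another customer.
history = [Txn(4.50, 8, 101), Txn(5.25, 9, 101), Txn(60.0, 18, 202)] * 20
model = fit_customer_model(history)
print(is_suspicious(model, Txn(2500.0, 3, 999)))  # likely True for this customer
```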

[00:13:00]

Amanda: So, there’s obviously a lot of issues surrounding data. We asked Deb Santiago from Accenture what she’s seeing from global corporate leaders when it comes to this question.

Jodie: There’s an emerging thought that publicly available data may have more intimate views of who we are and what we’re thinking than private data. And so, how do we deal with that emerging view?

Deb Santiago:

[00:13:30] What gets me slightly nervous is the fact that AI can kind of anticipate or, not just anticipate, but also manipulate and nudge us in certain ways. It may lead us to things that we didn’t understand about ourselves. And that’s what I mean when I talk about the shift from publicly-available data to private data to secret data; it’s that shift that we see over time.

Amanda:

[00:14:00] So, we’ve all come a long way in terms of our understanding of how little privacy we have in some ways. But we’re actually also still in need of education. There’s this concept that there’s public data: I am somewhere in the world and something can say I’m there. My phone says I’m there. There’s private data: there are my emails that I don’t want people to see. And then there’s secret data, and Jodie, secret data I might not even know I have. It’s data that I’m not intending to reveal to anybody. That’s a whole other level that we need to consider.

Jodie:

[00:14:30] Right, it’s one thing to agree to take an online quiz about your dating habits; it’s another thing entirely for assumptions to be attributed to you about what you like and what kind of lifestyle you lead.

Amanda: And so, this question that was raised by Ann Cavoukian, of giving permission to share your data, gets a little lost when it’s data you don’t intend to share. (laughs) And how do we safeguard against that? How do we make sure that when we are intending to share the public, and maybe even sometimes the private, the secret stays secret?

Deb Santiago:

[00:15:00]

[00:15:30]

[00:16:00] That’s what actually makes me slightly nervous. It’s not so much … When people say, “Oh, I don’t mind whatever I post on Facebook; it doesn’t really bother me whatever I put up on social media and Instagram, because if they collect that data, they’re doing that already,” the question I always pose back to people is, “Would you feel comfortable if you understood that all that data was being used to influence you in a way that you weren’t anticipating? And to influence you in a way you didn’t realize about yourself?” You know, it’s not these online quizzes (laughs) that people take to find out more about their personality. It’s actually leading you in a direction so subtly that you didn’t realize you were going in that direction. And it is done on the basis of collecting all of that publicly-available data, but it’s that element of the secret data that you don’t realize about yourself; it’s the self-realization that AI could bring, that may not necessarily be based on, um, things that you understood about yourself in the past.

Amanda: And of course, data is, as they now say, the new oil. It is the currency of the future-

Deb Santiago: Yeah.

Amanda: … and have the regulations we have kept pace with who’s getting it, who has it, where it’s coming from, (laughs) or are we really behind the eight ball on understanding who’s gathering the data, including this secret data?

[00:16:30]

Deb Santiago:

[00:17:00] So, I think it’s a really great moment in time for regulators. (laughs) There are a lot of interesting questions being posed. Canada has always been at the forefront of thinking through, um, concepts like Privacy by Design, which is this overall idea that as people build out their solutions, they should build them in a way that incorporates privacy principles like transparency, you know, data minimization, things like that. Um, and Canada has always been at the forefront of those types of things, I think partly because there’s a lot of alignment with the way that the European data privacy regulators have thought about privacy in the past.

The regulators are going to be behind. Um, regulators are on an 18-to-24-month notice-and-comment period, just generally on that cycle of activity, so you

[00:17:30] [00:18:00] know, providing guidance, waiting for the public to provide, um, their feedback, and then going to issuance. That’s usually around an 18-to-24-month cycle. And when you look at cycles of innovation, those occur on a six-to-nine-month clip, maybe even shorter than that. So, just looking at time scales alone, yes, in some respects regulators will be behind. Having said that, um, a lot of the questions that fundamentally are being asked here are still being addressed by data privacy law as it exists today. I think the harder thing is, given the vast amount of data that’s out there and given the issues that people are struggling with, whether we’re dealing with authentic people, authentic identities, authentic data, authentic news — those are the harder questions that I think we’re going to continue to see regulators grapple with over the next year or so.

Amanda:

[00:18:30] Over the next year might sound like a short timeframe, but at the speed at which this technology is developing, it’s actually a long time. And the point she makes about the lag between the cycles of innovation and the cycles of regulation: we’re going to have to figure out how to match them up, or pay a price.

Jodie:

[00:19:00]

[00:19:30] Yeah, so when I’m talking to large companies about this question, and believe me, this is a pretty hot topic these days, there are a few things that I think companies can and should be doing. The first is really identifying the elements of a responsible AI policy that are relevant for them. Just because a responsible AI topic is out in the media or out in the public, it doesn’t mean it is applicable to every single company. The second is to pull together a multi-disciplinary team to look at this. This is not a technology question, this is not a legal question, this is not a compliance question; this is a question that requires a multi-disciplinary approach. The third is to then engage with the government, engage with the policymakers, get ahead of the regulation cycle, and align the priorities of the organization to the priorities of the regulatory bodies.

Amanda:

[00:20:00] So, your advice sounds really smart. In the absence of regulations, play to the highest standard. So, get out in front of the regulations and be the most ethical. The only question I have is, what do you do about your competitors who aren’t doing that? And the example that most of us know is Uber. Uber said, “We don’t care what the regulations are, we’re going to take this to market. And by the way, by the time people love us so much they can’t live without us, you’re going to have to change the regulations.” So, my fear for businesses is that they’ll slow themselves down trying to stay in front of regulations that don’t exist, and their competitors won’t.


Jodie Wallis:

I think the slowing down, uh, question is a good one, and this is where I say relevant. You don’t need to solve every responsible AI or every ethical AI problem that the world has ever come up with. Pick the ones that are key priorities for your

[00:20:30] business, key priorities that align to your values and start with those. Deb Santiago says there’s another reason for businesses to behave responsibly, and that’s so as not to turn off their customers.

Deb Santiago:

[00:21:00] Uh, so the urgency, I think, is always going to be there. Um, and the great news is that, with a lot of the applications of AI, at least where I’ve been involved, people can get AI up and running very quickly. So, you know, applying AI now is not something that necessarily takes years and years in terms of implementation. I’ve seen chat bots get up and running within 48 hours or less. Um, and so I do think that this is something where adding a little bit of time early on isn’t going to stifle innovation. In fact, it will create better acceptance of AI over time.

[00:21:30] So, um, what do I mean by this? I think to the extent that you launch something and it’s very creepy, it may not be illegal, but it’s creepy. And it not only gets into people’s personal data, it gets into not only their private data but also their secret data. People are not going to accept this product. So, no matter how quickly you launch that product, it’s not going to be something that’s accepted in the marketplace.

Amanda Lang: Well, when it comes to getting products to the marketplace, there’s another piece of this that’s worth discussing, and that is the data that the machine is learning from. It does make a difference. The quality of the underlying data is going to determine how good the output is.

[00:22:00]

[00:22:30] So, a while back, Microsoft had one of the early, early versions of AI. It was a chat bot. It was basically a computer that could learn as it went. They called it Tay, and Tay was an adolescent girl. And the data that they used to have Tay grow up in the world happened to be Twitter. Well, the problem with Twitter is, it’s a huge cross-section of humanity and not always humanity at its best, as we know. And within about 48 hours, this young girl, Tay, became racist, misogynistic, I think she also became a sex addict, and they had to shut her down. And the lesson in that was not just that humans are horrible; it’s that the data matters. Where you source the underlying information is going to change how good the AI is.

Jodie Wallis:

[00:23:00]

[00:23:30] Right. Regardless of how good your algorithms are, if the underlying data doesn’t live up to the highest quality standards, then you’re in trouble. There’s an example we often use, which is a little closer to home: take human resources data. There are a lot of applications for AI in human resources, particularly around recruiting, screening resumes, matching resumes to jobs, generating job descriptions. But the data being used there is often historical data about the qualified applicants for jobs. And if you look back over our history, certainly in certain jurisdictions, we haven’t always had the best track record in hiring diverse candidates. And so the data these AI applications are trained on can lead to similar problems, albeit not as dramatic as the ones you just talked about.

Amanda Lang: Which gets us to one of the problems as we develop AI. In the early going, do we

[00:24:00] build in our systemic issues, the biases, the prejudices? So, one example that comes up is in banking. If you build an algorithm, a set of data facts, to say somebody’s going to default on a loan and therefore shouldn’t get a loan, you might actually find yourself discriminating against whole groups out of hand, because that’s how we’ve built it, because the human beings making the datasets are putting our biases into the machines instead of letting the machine be smarter than we are, without our biases.

Jodie Wallis:

[00:24:30] And these are not just philosophical questions. Researchers, AI researchers, are doing research into these very topics. There is a very well-known and very prestigious AI conference every year. It’s called NIPS, the Neural Information Processing Systems conference. And, uh, we’ll hear from Rich Zemel from the Vector Institute on what research is being done to prevent problems like bias, or unintentional bias, in the data that helps construct AI, and what we’re seeing at conferences like NIPS on the topic.

[00:25:00]

Rich Zemel:

[00:25:30] I just came back from the NIPS conference, where that was certainly a theme, and it’s been a theme, um, you know. So, I work in the research area called fairness, and there was somebody who did a tutorial this year on fairness which showed that in the last few NIPS conferences, every year there were two or three papers on that topic, and this year there were more than 20, right. And so a growing theme in the field in general is awareness. And the general awareness is that, you know, machine learning systems are built on historical data, right, it’s inherently a learning system. And historical data often has some bias in it.

[00:26:00] And so, if you’re going to learn from, uh, historical data, you’re often going to perpetuate whatever biases were there. Uh, and so the question is how to overcome that. And that’s really a social question as well as a technological question. Right? So, how can we build a system that somebody can have input into and control and say, “We want to un-bias the system”? So we’re doing research on that; you know, uh, myself and some colleagues have been doing that for a few years, actually, at Vector, at U of T, um, and with other collaborators.
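To make “measuring bias” slightly more concrete, here is a minimal sketch of one simple fairness check, the demographic parity gap: the difference in positive-outcome rates between two groups. The loan-approval framing and the data are invented for illustration, and this is only one of many metrics studied in the fairness literature Zemel refers to.

```python
# A minimal sketch of one common fairness check, demographic parity:
# compare a model's positive-outcome rate across two groups.
# Purely illustrative; the data and loan-approval framing are made up,
# not drawn from any system discussed in the episode.
from typing import Sequence

def positive_rate(preds: Sequence[int], groups: Sequence[str], group: str) -> float:
    """Fraction of members of `group` who received the positive outcome (1)."""
    idx = [i for i, g in enumerate(groups) if g == group]
    return sum(preds[i] for i in idx) / len(idx)

def demographic_parity_gap(preds, groups, a="A", b="B") -> float:
    """Absolute difference in positive-outcome rates between groups a and b."""
    return abs(positive_rate(preds, groups, a) - positive_rate(preds, groups, b))

# Example: loan approvals (1 = approved) for applicants in groups A and B.
preds  = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_gap(preds, groups))  # 0.75 - 0.25 = 0.5, a large gap
```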

[00:26:30] Um, and there’s also work going on in transparency, which is a little bit different. And this is, like you mentioned, deep learning. Right? So, the knock on deep learning systems for many years has been, “That’s a big black box with tons of parameters. We can’t make any sense of it.” And so we used to call it “reading the tea leaves,” where you’d kind of pull it apart and try to figure out what was going on inside of it. And we’ve gotten a little better than just reading tea leaves recently. People are doing a lot of research on figuring out how you can make these things understandable and kind of boil down whatever they’re doing into some understandable form. And that’s an important research area, and there are people at Vector working on that as well.

And so I think in general there’s this trend, you know, called FAT ML, Fairness, Accountability and Transparency in Machine Learning. And that’s a

[00:27:00] growing interest, uh, across the field, and I think at Vector we have a lot of people also interested in doing research in that. And I think we’re going to start working with industry on that as well. That’s a new initiative I think we’ll have in the new year, because for a lot of industries, that’s important. Right? And so this is another way we can work with industries. It’s not just solving their problems in terms of, you know, making a recommendation or doing something like that. It’s actually working on accountability, transparency, these other things … fairness.

[00:27:30] From my perspective, the big companies are very aware. Everybody’s worried about, you know, Google, Facebook, and all these things. That’s not who you should worry about. (laughs) Right? They’re the ones who have so much regulation and, you know, a lot of safeguards in place in terms of data privacy. It’s the little ones, right, the kind of start-ups that don’t have nearly the kind of oversight and regulation that they need.

Amanda Lang: So, that’s an interesting and somewhat counterintuitive point: that it’s not the big companies, who incidentally already have a lot of our data, the Googles, the Facebooks, that we should worry about. It’s the little ones that fly under the radar that might actually cause the damage.

Jodie Wallis:

[00:28:00] It’s definitely different than what we hear in the media, but Rich makes an excellent point that Google and Facebook live by a code of ethics and that there is a lot of scrutiny on them. What about the hundreds and thousands of companies with little or no scrutiny?

Amanda Lang: I think most of us are already sort of interested in protecting our secret data, and a lot of us actually are quite concerned, depending on our age, with protecting our private data. But do you think we really understand the extent of this issue? As data becomes the new oil, are we on top of it? How concerned do we need to be?

[00:28:30]

Jodie Wallis: I think because data is the great differentiator here, there are people in companies that will look for new and innovative ways to use and exploit that data. I believe that most of it will be for good, most of it will be to improve the products and services that they offer and improve the way they interact with us. But that doesn’t mean it’s all good.

Amanda Lang:

[00:29:00] In the end, and perhaps surprisingly, the renewed focus on the human element of AI, what role we play in shaping the machines, is becoming increasingly important, both for businesses and for us. In our next episode: how AI might not take your job; it might, just might, make it better.

The AI Effect is produced by Antica Productions and hosted by Amanda Lang and Jodie Wallis.

Jodie Wallis: This podcast is sponsored by Accenture.

Amanda Lang: Our producers are Paula [Flalo 00:29:21], [Deala Valasquez 00:29:21] and [Analeesa Nelson 00:29:23].

Jodie Wallis: Our executive producer is Stuart Coxe.

Amanda Lang: Music for this podcast by Podington Bear.

[00:29:30]

Jodie Wallis: Subscribe to the AI Effect on Apple Podcasts, Google Play, Stitcher or wherever you get your podcasts.

Amanda Lang: Visit our website, theeffect.ai.

Jodie Wallis: And follow us on Twitter at AI Effect.

Amanda Lang: Thanks for listening.
