Evidence of meeting #125 for Canadian Heritage in the 44th Parliament, 1st Session. (The original version is on Parliament's site, as are the minutes.)

A recording is available from Parliament.

Also speaking

Heidi Tworek  Associate Professor, University of British Columbia, As an Individual
Monique St. Germain  General Counsel, Canadian Centre for Child Protection Inc.
Shona Moreau  BCL/JD, Faculty of Law, McGill University, As an Individual
Chloe Rourke  BCL/JD, Faculty of Law, McGill University, As an Individual
Signa Daum Shanks  Associate Professor, University of Ottawa, Faculty of Law, As an Individual
Keita Szemok-Uto  Lawyer, As an Individual

4:20 p.m.

BCL/JD, Faculty of Law, McGill University, As an Individual

Shona Moreau

I can take this one first.

As all the witnesses said today, it's a very good step in the right direction. We are here to show that this is a big issue and that further steps are needed.

As you said, if there's more work we can do to protect, specifically—as we talked about—children, women and schoolgirls from this technology, it should be done. If there's a possibility for further legislation in the future, having a body that takes this on or more studies about this would be very beneficial, because this is not an issue that's going away.

AI technology is expanding more rapidly than we can even predict its effects. You could argue that there needs to be a standing committee and ongoing study of the new issues coming out of this technology.

I think my colleague Chloe wants to talk a bit, as well.

4:20 p.m.

Conservative

Rachael Thomas Conservative Lethbridge, AB

Sure. I am running out of time, so perhaps the comments could be brief.

4:20 p.m.

BCL/JD, Faculty of Law, McGill University, As an Individual

Chloe Rourke

I was just going to add that I think the Criminal Code provisions that currently exist and apply to actual real recordings of intimate images, or so-called revenge porn, are an incomplete remedy as is. That's not even including the issue of deepfakes and how much more complicated it is to apply there. I think our bigger priority is about what accessible remedies there are that can be implemented in the vast majority of cases. Many revenge porn cases would never be litigated or criminally prosecuted, and the harm continues. That's why I think involving the platforms is really important in that respect.

4:20 p.m.

Conservative

Rachael Thomas Conservative Lethbridge, AB

Thank you very much.

4:20 p.m.

Liberal

The Chair Liberal Hedy Fry

You have 35 seconds, Rachael.

4:20 p.m.

Conservative

Rachael Thomas Conservative Lethbridge, AB

Mr. Szemok-Uto, I'm sorry. I did intend to leave you more time. I'm not sure if you wish to comment.

4:20 p.m.

Lawyer, As an Individual

Keita Szemok-Uto

Do you mind repeating the last part of the question?

4:20 p.m.

Conservative

Rachael Thomas Conservative Lethbridge, AB

We're probably out of time.

Maybe I'll conclude by saying this. I think at the end of the day, what's being made clear at the table is that it's not enough just to signal good intentions. Rather, what I'm hearing from folks is that we do need an updated Criminal Code in order to go after those who would create and propagate deepfakes. I think that's really important for all women across our country.

I think what the witnesses have certainly drawn attention to is the fact that this is a gendered issue. It is women and girls who are subjected to it far more than men, and it does ruin lives. I think the government has a responsibility to act on that.

4:20 p.m.

Liberal

The Chair Liberal Hedy Fry

Mrs. Thomas, you're going over. Thank you.

I will now go to Michael Coteau for the Liberals.

You have six minutes, Michael.

June 13th, 2024 / 4:20 p.m.

Liberal

Michael Coteau Liberal Don Valley East, ON

Thank you very much, Chair.

Thank you to all of our witnesses here today. I appreciate the work you're doing to protect young people and all Canadians from online harm.

My first question will go to Ms. Tworek.

We heard from Professor Krishnamurthy from Colorado a few days ago. He said something interesting. He said that sometimes one of the big challenges is the “elephant and mice” in the room. The big platforms, obviously, are the ones that have a lot of control, but there are also the small entities online that come and go quickly. For regulators it's hard to keep up with these fly-by-night websites.

What are your thoughts on how we go about tackling that challenge that was presented at the last heritage meeting?

4:25 p.m.

Associate Professor, University of British Columbia, As an Individual

Dr. Heidi Tworek

Thank you very much.

I served with Mr. Krishnamurthy on the expert advisory group. This is something we grappled with quite a lot. Of course, major platforms like Facebook and so on have many employees and can easily staff up, but we often see these harms, particularly now that generative AI has lowered the barrier to entry, coming from a couple of individuals or very small firms who can create complete havoc.

I think there are two aspects to this question. One is the very important question of international co-operation on this. We've talked as if all of the individuals creating harm would be located in Canada, but the truth is that many of them may be located outside of Canada. I think we need to think about what international co-operation looks like. We have this for counterterrorism in the online space, and we need to think about this for deepfakes.

In the case of smaller companies, we can distinguish those that I think are being abused, and the question then is how the proposed online harms bill, Bill C-63, with its digital safety commissioner, could actually help those smaller firms ensure that these deepfakes are removed.

Finally, we have the question of the more nefarious smaller-firm actors and whether we need to have Bill C-63 expanded to be able to be nimble and shut down those kinds of nefarious actors more quickly—or, for example, tools that are only really being put up in order to create deepfakes of the terrible kinds that have been described by other witnesses.

Finally, I would just emphasize that international co-operation is key. Taking things down in Canada alone will potentially lead to revictimization, as something might be stored on a server in another country and then continually reuploaded.

4:25 p.m.

Liberal

Michael Coteau Liberal Don Valley East, ON

You talked about a massive increase in deepfakes. I think you said it was 550% or something around there. Obviously, the technology is shifting quickly. Over the course of my lifetime, we've seen technology rapidly shift. You know, I've gone from buying the same album as a record, cassette, disc and MP3, and that's just in the music sector. Technology is constantly shifting.

I was reading recently about AI agents that can be created and that have a mind of their own. They can be programmed to do things themselves. Was there any discussion around how a specific AI agent could be programmed to do things on its own that relate to online harm?

4:25 p.m.

Associate Professor, University of British Columbia, As an Individual

Dr. Heidi Tworek

We didn't specifically discuss generative AI that much within our group, but I think that within Bill C-63 there's certainly at least attention to the question of deepfakes. I think there's a concept of a duty to act responsibly that's certainly capacious enough to be able to deal with these kinds of updates. If we're thinking about generative AI companies, they too will have a duty to act responsibly, and then I think the question becomes, what exactly should that duty look like in the case of generative AI? A lot of the things we've been talking about today would obviously be a very central part of that.

4:25 p.m.

Liberal

Michael Coteau Liberal Don Valley East, ON

Thank you very much.

I have another question. This is for Monique St. Germain.

Thank you again for being here. Thank you for the work you're doing around the protection of children.

What happens when the AI technology is so good that it can create, without using a deepfake, images and videos of illegal sexual exploits and acts that may look real but are actually fake? There is technically no living victim, but it obviously has a harsh impact on a sector and on society as a whole. Is it becoming more of a problem, where you have a person who doesn't exist but the exploitation of that image is being used more and more? Can you talk about that?

4:25 p.m.

General Counsel, Canadian Centre for Child Protection Inc.

Monique St. Germain

Yes, absolutely.

We've been talking a lot about adults, and this is also happening in the space of child sexual abuse material. A lot of harm is done to the systems that detect this type of material, which rely on hash values of real material. The fake material doesn't have those hash values in the databases being relied on, so removing it becomes an incredible challenge.

There are all sorts of new CSAM out there. There's already a lot of CSAM out there, so we're now talking about making it even more—

4:30 p.m.

Liberal

Michael Coteau Liberal Don Valley East, ON

What do you call it...CSAM?

4:30 p.m.

General Counsel, Canadian Centre for Child Protection Inc.

Monique St. Germain

It's child sexual abuse material. The Criminal Code still calls it child pornography, unfortunately.

4:30 p.m.

Liberal

Michael Coteau Liberal Don Valley East, ON

That's interesting. That's a big definition change that's necessary.

Thank you so much for your time. I appreciate your being here.

4:30 p.m.

Liberal

The Chair Liberal Hedy Fry

Thank you, Michael.

I will now go to the Bloc Québécois with Martin Champoux.

Martin, you have six minutes.

4:30 p.m.

Bloc

Martin Champoux Bloc Drummond, QC

Thank you, Madam Chair.

Thank you to the witnesses for being here today.

We're dealing with a very sensitive topic that I think we all care about on this committee. Although we sometimes have different opinions on how to approach this issue, I think that all committee members, all parties represented here, share a single objective, which is to make web browsing safer. We all want to make sure that our children, our daughters, our women, our sisters can feel safe and that they can be spared from this kind of reprehensible behaviour.

Ms. Moreau, Ms. Rourke, you mentioned in your article, as well as in your opening remarks, that deepfakes don't just affect celebrities. However, the perception is really that the technology is generally used to produce images of, say, Taylor Swift in pornographic poses, as a witness told us a little earlier. Yet anyone can be a victim of this, not just politicians in election campaigns, but also ordinary people.

Are there many examples of this?

Have you noted many cases where ordinary people who are not famous are victims of sexual deepfakes?

4:30 p.m.

BCL/JD, Faculty of Law, McGill University, As an Individual

Shona Moreau

We talked a little bit about that in our article. There really are cases where people—as you say, ordinary people—have been victims since 2017. So this is not a new phenomenon.

There are also a lot of articles in the newspapers that show this is growing in schools. We read a report in December that at a school in Winnipeg, some 40 young girls had been victimized using these technologies. That's significant.

If there's one story like that, I'm sure there are more, everywhere. It is really becoming more widespread as the technology gets more accessible.

4:30 p.m.

Bloc

Martin Champoux Bloc Drummond, QC

This type of content is easy to produce. There are even applications for that. It's quite appalling.

People are talking a great deal about Bill C‑63, which seeks to regulate hateful and inappropriate content online.

Beyond legislation, do you feel that the platforms could do more about this?

Do you think they are now able to do more technologically, contrary to what they claim?

4:30 p.m.

BCL/JD, Faculty of Law, McGill University, As an Individual

Chloe Rourke

It's possible. Certainly, since the technology became open source, it has been impossible to completely remove the technology and the capacity to create deepfakes from the Internet, that's for sure.

It could be less accessible. I think decreasing the accessibility would decrease the frequency of these types of attacks. Just as an example, while we were doing research for this article, we found that if you type "deep nude" into Google, the first results will give you 10 different websites you can access, and it can be done in minutes.

It's possible to make it less visible and less accessible than it is now. It's pretty unnerving just how easy and how accessible it is. I think that's why we're seeing teenagers use it, and that's why a criminal remedy or civil remedies would be inadequate, considering how accessible it is.

4:30 p.m.

Bloc

Martin Champoux Bloc Drummond, QC

As you said, you type keywords into Google, and you end up with a bunch of content. Everyone knows this, but some claim that we can't force these large companies to control this content at the source. I can't believe that they are not able to put in place a mechanism to raise a red flag when this inappropriate content is requested.

If we made the platforms more accountable and required them to better control the inappropriate content that may be on them, do you think that would improve the situation?

It's all well and good to legislate, regulate and crack down on abuse, but if the technology exists, the least we can do is hold these companies, which provide the content to whoever requests it, accountable for what they give us.

Isn't that right?