Evidence of meeting #125 for Canadian Heritage in the 44th Parliament, 1st Session. (The original version is on Parliament's site, as are the minutes.)

A recording is available from Parliament.

Also speaking

Heidi Tworek  Associate Professor, University of British Columbia, As an Individual
Monique St. Germain  General Counsel, Canadian Centre for Child Protection Inc.
Shona Moreau  BCL/JD, Faculty of Law, McGill University, As an Individual
Chloe Rourke  BCL/JD, Faculty of Law, McGill University, As an Individual
Signa Daum Shanks  Associate Professor, University of Ottawa, Faculty of Law, As an Individual
Keita Szemok-Uto  Lawyer, As an Individual

June 13th, 2024 / 3:45 p.m.

Liberal

The Chair Liberal Hedy Fry

I call this meeting to order.

Welcome to meeting number 125 of the House of Commons Standing Committee on Canadian Heritage.

I'd like to acknowledge that this meeting is taking place on the traditional and unceded territory of the Algonquin Anishinabe people.

Pursuant to Standing Order 108(2) and the motion adopted by the committee on February 14, 2022, the committee is resuming its study on online harms.

Before we begin, I want to ask all members and other in-person participants to consult the cards on the table in front of them and to take note of the preventative measures in place to protect the health and safety of the participants, especially the interpreters. Only use the black, approved earpiece. Keep your earpiece away from all microphones. There's a little decal in front of you. When you're not using your earpiece, you can put it face down on that decal. Thank you for doing that, because feedback sometimes causes problems with interpretation.

Today's meeting is taking place in a hybrid format. In accordance with the committee's routine motion concerning connection tests for witnesses, I want to let you know that all the witnesses have completed their connection tests in advance of the meeting.

I want to make a few comments for the benefit of members and witnesses. Please wait until I recognize you by your name before speaking. Put your hand up. If you're in a chat, you have a little hand icon that you can put up, and if you're in the room, put your actual hand up, and I will recognize you.

I will remind you that all comments should be addressed through the chair. Also, please do not take pictures of the meeting because it's going to be produced online later for you anyway.

Pursuant to the motion adopted by the committee on Tuesday, April 9, we have Claude Barraud, psychotherapist from Homewood Health, in the room with us today.

Claude, can you put your hand up or stand up so that people know where to go?

If you feel distressed or uncomfortable with some of what you're hearing and you feel you want to talk to him, he's here to help you out.

I want to welcome our witnesses. We have our witnesses set up in a particular order, but I just want to flag the two witnesses who must leave at 4:30 p.m. They are Heidi Tworek, associate professor, the University of British Columbia; and Monique St. Germain, general counsel for the Canadian Centre for Child Protection. They are both here by video conference and will be leaving at 4:30 p.m.

Then, from 3:30 to 5:20 p.m.—or if, because of the votes, we are starting later—we will have Shona Moreau from the faculty of law, McGill University; Chloe Rourke from the faculty of law, McGill University; Signa Daum Shanks, associate professor, faculty of law, University of Ottawa; and Keita Szemok-Uto, lawyer.

Before we begin, I want the witnesses to know that they have five minutes, but not five minutes each: if you represent a group, then that group has five minutes. I notice that we have two people from McGill, so they can decide who's going to be their speaker.

I will give you a 30-second shout-out—and I mean shout-out because I'll say “30 seconds”—so that you can wrap up what you're saying. You will have the opportunity later on, when you get to the question and answer period, to finish up some of the things you wanted to say.

Thank you very much.

We'll begin with Heidi Tworek from British Columbia for five minutes, please.

3:50 p.m.

Dr. Heidi Tworek Associate Professor, University of British Columbia, As an Individual

Thank you, Madam Chair, for the opportunity to appear virtually before you to discuss this important topic.

I'm a professor and Canada research chair at the University of British Columbia in Vancouver. I direct the centre for the study of democratic institutions, where we research platforms and media. Two years ago, I served as a member of the heritage ministry's expert advisory group on online safety.

Today, I will focus on three aspects of harms related to illegal sexually explicit material online, before discussing briefly how Bill C-63 may address some of these harms.

First, the issue of illegal sexually explicit material online overlaps significantly with the broader question of online harm and harassment, which disproportionately affects women. A 2021 survey found that female journalists in Canada were nearly twice as likely as their male counterparts to receive sexualized messages or images, and six times as likely to receive online threats of rape or sexual assault. Queer, racialized, Jewish, Muslim and indigenous female journalists received the most harassment.

Alongside provoking mental health issues or fears for physical safety, many are either looking to leave their roles or unwilling to accept more public-facing positions. Others have been discouraged from pursuing journalism at all. My work over the last five years on other professional groups, including political candidates and health communicators, suggests very similar dynamics. This online harassment creates a chilling effect for society as a whole when professionals do not represent the diversity of Canadian society.

Second, generative AI is accelerating the problem of illegal sexually explicit material. Let's take the example of deepfakes, that is, artificially generated images or videos that swap one person's face onto another person's naked body to depict acts that neither person committed. Recent high-profile targets include Taylor Swift and U.S. Congresswoman Alexandria Ocasio-Cortez. These are not isolated examples. As journalist Sam Cole has put it, “sexually explicit deepfakes meant to harass, blackmail, threaten, or simply disregard women's consent have always been the primary use of the technology”.

Although deepfakes have existed for a few years, generative AI has significantly lowered the barrier to entry. The number of deepfake videos increased by 550% from 2019 to 2023. Such videos are easy to create, because about one-third of deepfake tools enable a user to create pornography, which comprises over 95% of all deepfake videos. One last statistic is that 99% of those featured in deepfake pornography are female.

Third, while illegal sexually explicit material is mostly easy to define on a prima facie basis, we should be wary of online platforms offering solely automated solutions. For example, what if a lactation consultant is providing online guidance about breastfeeding? Wholly automated content moderation systems might delete such material, particularly if trained simply to search for certain body parts like nipples. Given that provincial human rights legislation protects breastfeeding in much of Canada, deletion of this type of content would actually raise questions about freedom of expression. If parents have the right to breastfeed in public in real life, why not discuss it online? What this example suggests is that human content moderators remain necessary. It is also necessary that they be trained to understand Canadian law and cultural context, and that they receive support for the very difficult work they do.

Finally, let me explain how Bill C-63 might address some of these issues.

There are very legitimate questions about Bill C-63's proposed amendments to the Criminal Code and Canadian Human Rights Act, but as regards today's topic, I'll focus briefly on the online harms portion of the bill.

Bill C-63 draws inspiration from excellent legislation in the European Union, the United Kingdom and Australia. This makes Canada a fourth or fifth mover, if not increasingly an outlier in not regulating online safety.

However, Bill C-63 suggests three types of duties for platforms. The first two are a duty to protect children and a duty to act responsibly in mitigating the risks of seven types of harmful content. The third, the most stringent and the most relevant for today, is a duty to make two types of content inaccessible: child sexual exploitation material and the non-consensual sharing of intimate content, including deepfakes. This should theoretically protect the owners of both the face and the body used in a deepfake. A newly created digital safety commission would have the power to require removal of this content within 24 hours, as well as to impose fines and other measures for non-compliance.

Bill C-63 also foresees the creation of a digital safety ombudsperson to provide a forum for stakeholders and to hear user complaints if platforms are not upholding their legal duties. This ombudsperson might also enable users to complain about takedowns of legitimate content.

Now, Bill C-63 will certainly not resolve all issues around illegal sexually explicit material, for example, how to deal with copies of material stored on servers outside Canada—

3:55 p.m.

Liberal

The Chair Liberal Hedy Fry

Thank you, Ms. Tworek.

I would advise you to wrap up. You can expand on this later on in the question and answer section. Thank you.

If you want to finish your sentence, go ahead.

3:55 p.m.

Associate Professor, University of British Columbia, As an Individual

Dr. Heidi Tworek

Bill C-63 is a step in the right direction to address a problem that, tragically, is swiftly worsening.

I'm looking forward to your questions.

3:55 p.m.

Liberal

The Chair Liberal Hedy Fry

Now we go to the Canadian Centre for Child Protection and Monique St. Germain.

Ms. St. Germain, you have five minutes.

3:55 p.m.

Monique St. Germain General Counsel, Canadian Centre for Child Protection Inc.

Thank you for the opportunity today.

My name is Monique St. Germain, and I am general counsel for the Canadian Centre for Child Protection, which is a national charity with the goal of reducing the incidence of missing and sexually exploited children.

We operate cybertip.ca, Canada's national tip line for reporting the online sexual exploitation of children. Cybertip.ca receives and analyzes tips from the public and refers relevant information to police and child welfare as needed. Cybertip averages over 2,500 reports a month. Since inception, over 400,000 reports have been processed.

When cybertip.ca launched in 2002, the Internet was pretty basic, and the rise of social media was still to come. Over the years, technology has rapidly evolved without guardrails and without meaningful government intervention. The faulty construction of the Internet has enabled online predators to not only meet and abuse children online but to do so under the cover of anonymity. It has also enabled the proliferation of child sexual abuse material, CSAM, at a rate not seen before. Victims are caught in an endless cycle of abuse.

Things are getting worse. We have communities of offenders operating openly on the Tor network, also known as the dark web. They share tips and tricks about how to abuse children and how to avoid getting caught. They share deeply personal information about victims. CSAM is openly shared, not only in the dark recesses of the Internet but on websites, file-sharing platforms, forums and chats accessible to anyone with an Internet connection.

Countries have prioritized the arrest and conviction of individual offenders. While that absolutely has to happen, we've not tackled a crucial player: the companies themselves, whose products facilitate and amplify the harm. For example, Canada has only one known conviction and sentencing of a company for making CSAM available on the Internet. That case took eight years and thousands of dollars to prosecute. Criminal law cannot be the only tool; the problem is just too big.

Recognizing how rapidly CSAM was proliferating on the Internet, in 2017, we launched Project Arachnid. This innovative tool detects where CSAM is being made available publicly on the Internet and then sends a notice to request its removal. Operating at scale, it issues roughly 10,000 requests for removal each day and some days over 20,000. To date, over 40 million notices have been issued to over 1,000 service providers.

Through operating Project Arachnid, we've learned a lot about CSAM distribution, and, through cybertip.ca, we know how children are being targeted, victimized and sextorted on the platforms they use every day. The scale of harm is enormous.

Over the years, the CSAM circulating online has become increasingly disturbing, including elements of sadism, bondage, torture and bestiality. Victims are getting younger, and the abuse is more graphic. CSAM of adolescents is ending up on pornography sites, where it is difficult to remove unless the child comes forward and proves their age. The barriers to removal are endless, yet the upload of this material can happen in a flash, and children are paying the price.

It's no surprise that sexually explicit content harms children. For years, our laws in the off-line world protected them, but we abandoned that with the Internet. We know that everyone is harmed when exposed to CSAM. It can normalize harmful sexual acts, lead to distorted beliefs about the sexual availability of children and increase aggressive behaviour. CSAM fuels fantasies and can result in harm to other children.

In our review of Canadian case law regarding the production of CSAM in this country, 61% of offenders who produced CSAM also collected it.

CSAM is also used to groom children. Nearly half of the victims who responded to our survivor survey of victims of CSAM identified this tactic. Children are unknowingly being recorded by predators during an online interaction, and many are being sextorted thereafter. More sexual violence is occurring among children, and more children are mimicking adult predatory behaviour, bringing them into the criminal justice system.

CSAM is a record of a crime against a child, and its continued availability is ruining lives. Survivors tell us time and time again that the endless trading in their CSAM is a barrier to moving forward. They are living in constant fear of recognition and harassment. This is not right.

The burden of managing Internet harms has fallen largely to parents. This is unrealistic and unfair. We are thrilled to see legislative proposals like Bill C-63 to finally hold industry to account.

Prioritizing the removal of CSAM and intimate imagery is critical to protecting citizens. We welcome measures to mandate safety by design and tools like age verification or assurance technology to keep pornography away from children. We would also like to see increased use of tools like Project Arachnid to enhance removal and prevent the reuploading of CSAM. Also, as others have said, public education is critical. We need all the tools in the tool box.

Thank you.

4 p.m.

Liberal

The Chair Liberal Hedy Fry

Thank you very much.

I will now go to Ms. Moreau and Ms. Rourke.

Who is going to be the spokesperson? You will share it?

4 p.m.

Shona Moreau BCL/JD, Faculty of Law, McGill University, As an Individual

We will if possible, yes.

4 p.m.

Liberal

The Chair Liberal Hedy Fry

You still have only five minutes. You know that. Okay.

Go ahead. Which one of you will begin?

4 p.m.

BCL/JD, Faculty of Law, McGill University, As an Individual

Shona Moreau

I will begin, if possible. Thank you so much.

4 p.m.

Liberal

The Chair Liberal Hedy Fry

Thank you, Ms. Moreau.

4 p.m.

BCL/JD, Faculty of Law, McGill University, As an Individual

Shona Moreau

Madam Chair, members of the Standing Committee on Canadian Heritage, thank you for inviting me to testify before you today.

While this study covers a wide range of topics, we're here to highlight a very specific dimension: the need to address the growing threat of deepfake pornography and its effects on women and girls in Canada.

Our presentation will touch on three key aspects of deepfakes—one, what they are; two, who is affected; and three, what can be done about them.

Deepfake technology, as you know, is generative AI that creates fake audiovisual content by manipulating a person's appearance and likeness. As the technology has advanced, AI-generated content has become increasingly sophisticated and harder to distinguish from real-life footage. Lifelike deepfakes can now be generated using just a single photo of a person. As a result, it's not just celebrities and public figures who are vulnerable. Everyone is vulnerable to this technology, and though there are other applications for deepfakes, by far the most common use is for non-consensual porn.

The vast majority of deepfakes are pornographic, and these overwhelmingly feature female subjects. It's important for the committee to know that this gendered and sexualized use of the technology is not new. The term “deepfake” actually originated in 2017 and stemmed from the practice of using online tools to switch female celebrities' faces onto pornographic videos. In other words, non-consensual porn has been central to the technology since its very beginning.

While the unauthorized use and creation of fake intimate images is not a new phenomenon—Photoshop, for example, has been around for decades—the advent of generative AI technology has taken this issue to a whole new level. Today, highly realistic and convincing fake pornographic content can be produced quickly and with minimal effort and skills. Even when fake, these types of images inflict real emotional, societal and reputational harm on victims.

Now even children are affected. In the past year, reports have exploded of schoolgirls who have found themselves the subject of pornographic deepfakes made and shared by their own classmates.

All this goes to show that deepfake porn is not a trivial matter. It's real. It poses a significant threat to people and to human dignity, and as such, it demands our attention and action.

4 p.m.

Liberal

The Chair Liberal Hedy Fry

Thank you.

Ms. Rourke, you have two minutes and 16 seconds. Thank you.

4 p.m.

Chloe Rourke BCL/JD, Faculty of Law, McGill University, As an Individual

To effectively address this issue, it's crucial to understand how existing laws can be extended to cover deepfakes, but also why current regulatory frameworks are insufficient.

First, Canadian legislation proscribing the non-consensual distribution of pornography, such as section 162.1 of the Criminal Code, should be reviewed and extended to include altered images such as deepfakes. Doing so would send a clear message that this conduct is wrong and must be denounced.

However, it is important to recognize that this is not enough. Unlike a real recording, deepfakes are not tied to a specific time, location or sexual partner. They can easily be produced and distributed anonymously. Therefore, in practice, it will often be difficult to identify perpetrators and hold them legally accountable, which will limit the deterrent effects of such provisions.

Additionally, even when an individual perpetrator is identified, criminal or civil penalties cannot restore a victim's privacy, dignity or sense of safety, particularly when the content continues to circulate in the public domain. To address these ongoing harms, we must consider the role and responsibility of digital platforms. Tech platforms such as Google and pornography websites have already created procedures that allow individuals to request that non-consensual pornographic images of themselves be removed and delisted from their websites. This is not a perfect solution. Once the content is distributed publicly, it can never be fully removed from the Internet, but it is possible to make it less visible and therefore less harmful.

Implementing such systems would mitigate the reputational harm caused by non-consensual porn, whether it be real or synthetic, and provide a more immediate and practical recourse for victims. Public regulatory bodies should work with major online platforms to require such procedures and to ensure they are effective, accessible and meaningfully enforced.

Lastly, this technology must be understood within the context of gender-based violence and societal attitudes toward women's sexuality.

The non-consensual sharing of porn is already weaponized against women and is further exacerbated by deepfakes because anyone is able to create and distribute such content. Women will have limited options to protect themselves. It's already being used to target, harass and silence female journalists and politicians. If unchecked, deepfakes threaten to rewrite the terms of participation in the public sphere for women.

This technology is rapidly evolving, and its harms have already materialized. While no one law can eliminate it, we can take action, and legislatures have a role in leading these efforts.

Thank you.

4:05 p.m.

Liberal

The Chair Liberal Hedy Fry

Thank you, Ms. Rourke.

I will now go to Ms. Shanks from the University of Ottawa's faculty of law.

You have five minutes, please.

4:05 p.m.

Dr. Signa Daum Shanks Associate Professor, University of Ottawa, Faculty of Law, As an Individual

Thank you.

I'm a law professor at the University of Ottawa and a law professor on leave at Osgoode Hall law school. I belong to the Law Society of Ontario.

I specialize in the history of laws, the impact of laws on marginalized peoples, law and economics, and tort law. I've taught at the university level for 26 years. My teaching has also included updating the judiciary on trends in law and leading professional development sessions for the legal profession.

Today, I'm going to focus on the influence of tort law upon legislation. Why? Because tort law is directly responsible for responding to harm. Tort law is also a subject that has allowed courts to respond to matters not yet addressed in legislation. It has its benefits and shortcomings for making society better. It is considered part of private law and includes topics like personal injury, intentional infliction of mental distress, intimidation and breach of fiduciary duty, such as duties owed to children or, increasingly, to indigenous peoples.

For me, two observations surface about legislation regarding harm.

First, I think about how private law interacts with legislation. Historically, many topics in private law have come across a judge's bench because parties, and ultimately the judge, have concluded that society would support the recognition of a certain harm. The harm, however, may not be articulated yet in legislation and may seem novel. However, those topics are constructed on jurisprudence, so while the name of a tort might be new, the details of the tort are familiar and already supported.

Tort law has helped create tools that have been and are integrated into legislation. Like other topics in law, there is often what is called a dialogue. Events in society impact arguments in court. Those arguments in court are learned about by those who create and implement legislation, like all of you. In this dialogue, sometimes the legislation introduces the idea first, and views about the legislation will then be brought up by parties in the courtroom.

This idea that private law and legislation have an ongoing relationship is also vital to realizing that almost all tort litigation does not result in a trial decision. As a result, most litigating, negotiating and resolving happens at the earlier stages of litigation. In fact, those earlier stages are organized by the courts and involve many parties, including judges, in evaluating the nature and scope of the claimed harm. When a tortious subject is not guided by legislation, figuring these subjects out takes time and space in the court system.

All of us know stories in which people have felt less heard due to the slowness of the court process. That slowness is arguably magnified when legislation does not exist to quickly determine one part or all parts of a problem. Private law has helped get some harms more recognized, but when private law's focus on harm lacks legislative guidance, addressing examples of that harm and preventing it can take time, arguably increasing the number of times the harm occurs.

My second observation is about when legislation is proposed. I see any legislation about online harm, particularly legislation that affects groups we consider more vulnerable, as capable of paralleling the benefits of private law while avoiding some of private law's limitations. Demanding that a party act responsibly, for example, mirrors negligence law. The concept is also prevalent in many intentional torts, such as intentional infliction of mental distress. "Acting responsibly" might be a term we are only now integrating, but its presence in our law is already evident.

Moreover, we can learn how other countries have found that it's possible to integrate acting responsibly into a rights-based system like Canada's. I believe we have the underpinnings of acting responsibly already in Canadian law.

Legislation also has a dissuasive effect that private law might not have. Legislation hopefully stops most intentional harm before it happens. Introducing such subjects through legislation influenced by tort law, especially when the subjects are urgent due to their form and growth, creates a type of social and judicial efficiency that trends in private law often lack.

Thank you for including the duty of acting responsibly, so that courts and society will have more guidance about how to evaluate it. It is my view that this duty makes legislation stronger and the need for lawsuits less likely.

Thank you for this opportunity. I look forward to our discussion.

4:10 p.m.

Liberal

The Chair Liberal Hedy Fry

Thank you, Ms. Shanks.

I now go to Mr. Szemok-Uto, please, for five minutes.

4:10 p.m.

Keita Szemok-Uto Lawyer, As an Individual

Madam Chair, committee members, thank you for the opportunity to speak before you this afternoon.

By way of brief background, my name is Keita Szemok-Uto. I'm from Vancouver. I was just called to the bar last month. I've been practising primarily family law, along with a mix of privacy and workplace law. I attended law school at Dalhousie in Halifax, and while there I took a privacy law course. I chose to write my term paper on the concept of deepfake videos, which we've been discussing today. I was interested in the way a person could create a deepfake video, specifically a sexual or pornographic one, and how that could violate a person's privacy rights. In writing that paper, I discovered the clear gendered dynamic to the creation and dissemination of these kinds of deepfake videos.

As a case in point, around January of this year somebody online made and publicly distributed sexually explicit AI deepfake images of Taylor Swift. They were quickly shared on Twitter and repeatedly viewed; I think one photo was seen as many as 50 million times. In an Associated Press article, a professor at George Washington University in the United States referred to women as “canaries in the coal mine” when it comes to the abuse of artificial intelligence. She was quoted as saying, “It's not just going to be the 14-year-old girl or Taylor Swift. It's going to be politicians. It's going to be world leaders. It's going to be elections.”

Even back before this, in April 2022, it was striking to see the capacity for essentially anybody to take photos from somebody's social media, turn them into deepfakes and distribute them widely without, really, any regulation. Again, while the targets of these deepfakes can be celebrities or world leaders, oftentimes they are people without the finances or protections of a well-known celebrity. Worst of all, in writing this paper I discovered there is really no adequate system of law yet that protects victims from this kind of privacy invasion. I think that's something that is only now being addressed somewhat with the online harms bill.

I did look at the Criminal Code, section 162, which prohibits the publication, distribution or sale of an intimate image, but the definition of “intimate image” in that section is a video or photo in which a person is nude and in which the person had a reasonable expectation of privacy when it was made or when the offence was committed. Again, I think the “reasonable expectation of privacy” element will come up a lot in legal conversations about deepfakes. When you take a photo from somebody's social media, one that was taken and posted publicly, it's questionable whether the person had a reasonable expectation of privacy when it was taken.

In the paper, I looked at a variety of torts. I thought that if the criminal law can't protect victims, perhaps there is a private cause of action by which victims can sue and perhaps recover damages. I looked at public disclosure of private facts, intrusion upon seclusion and other torts as well, and I just didn't find that anything really satisfied the circumstances of a pornographic deepfake scenario, again with the “reasonable expectation of privacy” requirement not really fitting the bill.

As I understand it, there have been recent legislative proposals, as well as legislation that has come into force. In British Columbia, there's the Intimate Images Protection Act, from March 2023. The definition of “intimate image” in that act means a visual recording or visual simultaneous representation of an individual, whether or not they're identifiable and whether or not the image has been altered in any way, in which they're engaging in a sexual act.

That broadening of the definition of “intimate image”, to cover not just an image of someone engaged in a sexual act when the photo was taken but also an image altered to make that representation, means deepfakes seem to be covered by the Intimate Images Protection Act. The drawback of that act is that, while it does provide a private right of action, the damages are limited to $5,000, which seems negligible in the grand scheme of things.

I suppose we'll talk more about Bill C-63 in this discussion, and I do think that it goes in the right direction in some regard. It does put a duty on operators to police and regulate what kind of material is online. Another benefit is that it expands the definitions, again, of the kinds of material that should be taken down.

That act, once passed, will require the operator to take down material that sexually victimizes a child or revictimizes a survivor—

4:15 p.m.

Liberal

The Chair Liberal Hedy Fry

Thank you, Mr. Szemok-Uto. Can you wind up, please?

4:15 p.m.

Lawyer, As an Individual

Keita Szemok-Uto

Yes, I'll conclude there.

4:15 p.m.

Liberal

The Chair Liberal Hedy Fry

I'm now going to the question and answer session.

Before I begin, though, I'd like the committee to know that we have until 5:45. I would like us to have in camera work for 15 minutes, so I'd like to end at 5:30. We're going to try to fit everybody and their questions into that space.

Now we'll go to the first round of questions. They're six-minute rounds of questions. I'll begin with the Conservatives.

Go ahead, Mrs. Thomas, for six minutes, please.

4:15 p.m.

Conservative

Rachael Thomas Conservative Lethbridge, AB

Thank you.

Thank you, witnesses, for giving us your time here today and for sharing your expertise.

My first question goes to Ms. Moreau and Ms. Rourke.

In your opening statement, you said the Criminal Code should be expanded to include deepfakes. Then you went on to say that this isn't actually enough. We must also consider the role platforms play, and how government has a responsibility to ensure there are teeth in terms of holding those platforms accountable.

In an article you recently wrote in February, you said, “Updated telecom regulations can play a part. But Canada also needs urgent changes in its legal and regulatory frameworks to offer remedies for those already affected and protection against future abuses.”

You seem to be outlining that both are needed. I'm wondering if you can expand on that.

4:15 p.m.

BCL/JD, Faculty of Law, McGill University, As an Individual

Chloe Rourke

Our position is similar to what was discussed. The Criminal Code provision would need to be amended if it were to apply to altered images and deepfakes, in our interpretation.

While that's important, it's not going to provide a remedy in many cases, in part because deepfakes are so easy to produce anonymously that the person who produced them, in many cases, won't be identifiable. As we discussed, it won't necessarily provide the complete remedy that all victims are seeking in the sense that the content itself can continue to cause reputational harm and be circulated.

It's our position, also, that it would be important to work with platforms and have them be held accountable for the content distributed on their websites. They are the ones that have control over the algorithms listing the results, and they are the ones that can take the content down—at least make it less visible, if not remove it entirely.

I can defer to my colleague for further comments.

4:15 p.m.

BCL/JD, Faculty of Law, McGill University, As an Individual

Shona Moreau

Yes, I echo everything my colleague Chloe said.

Also, we feel a bit desperate when we see this: If you search “deep AI”, “nude AI” or anything like that, these tools are so easily accessible. They just pop up on Google. You put in a picture, pay three dollars and you're able to generate massive numbers of deepfakes of an individual, or of many individuals, if you choose.

That accessibility is really what we're fighting against the most, because, as we know, litigation can take many years, and oftentimes it doesn't make a victim whole. It's really about making sure these platforms don't make this technology so easily available to everyone. Also, as we've seen, children are able to use these tools. They might not know the consequences or realize how detrimental this is to the victims.

4:20 p.m.

Conservative

Rachael Thomas Conservative Lethbridge, AB

Thank you.

Bill C-63 does something very interesting. Rather than updating the Criminal Code to include deepfakes.... It doesn't do that at all. Deepfakes aren't mentioned in the bill, and folks are not protected from them—not in any criminal way. I believe that's an issue.

Secondly, platforms will be subject to perhaps an assessment and a ruling by an extrajudicial, bureaucratic arm that will be created, but again, there's nothing criminal attached to platforms allowing for the perpetuation of things like deepfakes or other non-consensual images.

Does that not concern you?