Mr. Lemire, that is a very good question. We have very different definitions of developers and users, as they are very different groups. Developing a system and using it in a business are two completely different things, with different obligations. The developer clearly has a lot more obligations since they must ensure certain things.
It is important to note that the bill will regulate what we call high-impact systems. As I said in my opening remarks, I am not interested in the type of movie or music you might choose. I am repeating that because I know that people who are following us from home are concerned about that. If a robot or algorithm decides whether someone gets a loan or an insurance policy or is admitted to an institution, we have to make sure that the algorithm does not generate biased results that would lead in the wrong direction. Individuals also have to know that artificial intelligence was used to make that decision. When you buy an insurance policy these days, you don't even know if the application was processed by a person. So the method used to make a decision will have to be indicated. We also suggest that a special symbol be used to indicate that a robot is responding or acting on behalf of the business.
A big concern these days is not knowing whether a machine or a person is making the decision to grant a loan or not. If it is a machine, what is the decision based on? Is it based on our postal code, our age or the number of years we have been in the country? Canadians have to know. This must be public and clearly indicated because it can lead to all kinds of problems. That is what we want the bill to prevent, and that is why we want to act quickly.
Personally, I think it is possible to be innovative, but there is also a risk of abuse. A letter from Yoshua Bengio telling us to be careful was co‑signed by hundreds of people. Yes, Canada is certainly ready for responsible innovation, but at the same time we need a framework that will enable us to protect people.