Yes, I think this is a good context. Think about facial recognition and the different error rates it produces across different demographic groups. It's a great example when we're thinking about how safe harbours and regulatory markets might work, and about why we limit ourselves when we say they apply only in certain domains. Facial recognition can be used across all of these domains, so we should be asking this: are there steps that anyone deploying facial recognition technology in any domain, whether developing it or purchasing and deploying it, can take to verify that it meets minimum legal standards?
A safe harbour would establish that, as long as you've run these kinds of tests, or as long as you've employed this kind of technology, perhaps from an independent third-party provider whom we've certified and approved, to verify that the accuracy of your facial recognition system is equitable across different groups, you've met the standard. That's the kind of thinking we need to be developing, and we need to recognize that it's something that will evolve. The technology is going to evolve, the systems will evolve, and you need the agility to keep up with that.
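To make the idea concrete, here is a minimal sketch of the kind of equity test such a safe harbour might point to: compute each group's error rate and flag any group that diverges from the best-performing group beyond a tolerance. The group labels, sample records, and the tolerance value are all hypothetical assumptions for illustration, not drawn from any actual regulation or certified test.

```python
# Hypothetical sketch of a per-group error-rate parity check for a
# face-matching system. All thresholds and data are illustrative.
from collections import defaultdict

def per_group_error_rates(records):
    """records: iterable of (group, predicted_match, actual_match)."""
    errors = defaultdict(int)
    totals = defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        if predicted != actual:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

def equitable(rates, tolerance=0.005):
    """True if every group's error rate is within `tolerance` of the best."""
    best = min(rates.values())
    return all(r - best <= tolerance for r in rates.values())

# Toy audit data: (group, system said "match", ground truth "match").
records = [
    ("group_a", True, True), ("group_a", False, False),
    ("group_a", True, True), ("group_a", False, False),
    ("group_b", True, False), ("group_b", False, False),
    ("group_b", True, True), ("group_b", False, False),
]
rates = per_group_error_rates(records)
print(rates)             # {'group_a': 0.0, 'group_b': 0.25}
print(equitable(rates))  # False: group_b diverges beyond the tolerance
```

A certified third-party tester could run something like this against a standardized benchmark set; the point is that the pass/fail criterion is explicit, repeatable, and can be updated as the technology and the systems evolve.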
That's an example of giving companies greater certainty to build. I think we should all be thinking about how we encourage AI development and deployment throughout Canada, and you reduce that uncertainty by providing some safe harbours and some lower-cost mechanisms that companies can use to verify compliance.