It's a great question.
What I would say is that, first, while AI systems look very impressive to consumers, millions of people work behind the scenes every day to make them function. That spans everything from our interactions with social media to automated decision-making systems.
The scope of what I'm asking for is very simple. By putting a disclosure mechanism in the law that requires companies to give information about the data they've collected and how they collected it, we would ensure that the millions of people around the world who annotate data daily and interact with AI systems on the back end are protected from exploitative practices and procedures.
Right now, nothing is in place in any jurisdiction in the world. This is a wild west, and nobody is protecting these people. These are youth in Pakistan and women in Kenya. These are vulnerable Canadians trying to take on a side job to make a bit more money. In all of these circumstances, they have nothing protecting them.