I think part of the answer to that is correctly following the risk-based approach that this act is taking. With a risk-based approach grounded in standards, rather than trying to make specific rules for the specific technologies we have now versus the ones we'll have in five years, we'll be able to adjust as the technology changes. Avoiding the temptation to regulate the technologies we had a couple of years ago and focusing on being technologically neutral, while at the same time putting enough content into the bill, will allow us to be future-proof and aligned with these international principles.
I think part of that relates to the question that was asked just before yours about the impossibility of meaningfully consenting today to most data processing, because it is impossible to anticipate the inferential harms from AI. I think part of the answer is again following standards and focusing on things like privacy by design, data minimization and purpose limitation. These are independent of an individual's consent. This approach will allow our laws to adjust to the different ways in which the inferential harms will mutate in the next 10 years, and it's similar to the approach the EU is taking for artificial intelligence.