This is a really crucial issue. We've heard that AI systems are unpredictable in their behaviour, and also that we can't understand, or explain, how they've come up with their outputs. Those are two big problems.
In addition to having conversations and being open about this, we need to apply a sort of precautionary and accountability principle, so that the organizations that put these systems into play are accountable and have to take precautionary steps before they start experimenting on us.
The aerospace industry has these well-worked-out systems, because when a plane comes down, everybody knows about it, and that's dreadful. When an AI system goes wrong, or indeed any complicated digital network system, the problems are distributed, and they're very difficult to analyze. We're in a worse situation there, I think, than in aerospace.