Different types of mitigation, as you're mentioning, depend on the use of the system. Both the technology and the context within which it's being used will change. The harms will change as well, depending on whether they affect an individual, a group, or the organization itself. Therefore, the first step is understanding what the harms are.
The work I did at the Responsible AI Institute was really building on the work I did at Treasury Board: defining what the scope of a system is, and putting something like a certification mark on it, like a Good Housekeeping seal or LEED symbol. That type of acknowledgement would require you to identify what those harms are, first and foremost, and then identify the different types of criteria or controls you would need to go through in order to mitigate them for the individual, the group, or the organization.