Data Ethics & Society Reading Group 03-03-2021
Bias: “[uncountable, countable] the fact that the results of research or an experiment are not accurate because a particular factor has not been considered when collecting the information” (Oxford Learner’s Dictionary).
This is not prerequisite reading for attendance, but a compilation of material the team finds interesting and would read if they had unlimited time.
Organisation and Government Reviews
Questions for discussion
- How can we leverage data practices to gain an in-depth understanding of problems as situated in structural inequalities and oppression (e.g. institutional racism as defined in the Macpherson Report)?
- How might a data worker engage vulnerable communities in ways that surface harms, given that algorithmic harms are often secondary effects invisible to designers and communities alike? What questions might be asked to help anticipate these harms?
- What are the key challenges for data scientists in considering bias in their work?
- Do we know enough about bias and how to prevent it in practice, or are we still missing things?
- What could this mean for the models that you build or work with?
- What historical patterns or assumptions could lead to the perception of bias?
- How do you, or will you, communicate those patterns and assumptions to users of the model so that they understand the impact on results?