Characteristics that make up human identity have become increasingly embedded in technological systems. Human characteristics like age, gender, race, and sexuality are being folded into the categorical structures of automated systems, such as algorithmic computer vision methods. However, these characteristics are often complex, nuanced, and fluid, and they are linked to social and historical instances of bias and discrimination. Reducing them to simple, discrete categories creates tensions that can clash with human values and identity, with risky ramifications for already marginalized populations.
To mitigate the potential risks of these technological methods, we are researching ways to develop algorithms that are sensitive to the nuanced human identities held and expressed by the people being classified. Our aim is to inform design approaches that are empowering and safe for all users.