Positional Machine Learning

Artifacts have politics; they are imbued with the worldviews of their creators. Given that machine learning tends to enact worldviews about human beings—from their identities to their creditworthiness—there is a need to better understand how the humans behind its creation determine its design. In this project, we adopt the lens of positionality to understand the worldviews embedded in machine learning. Positionality refers to how an individual’s “position” in the world shapes their outlook—how the complex web of identities like race, gender, nationality, location, sexuality, and class influences their experiences and, in turn, their beliefs, values, and relationships. We are researching how the positionalities of individual practitioners impact the outcomes of machine learning artifacts, like models and datasets. Our goal is not only to understand the role of positionality in shaping technical artifacts, but also to harness positionality for more diverse, ethical, and representative designs.


Morgan Klaus Scheuerman, Jed Brubaker


  1. Scheuerman, Morgan Klaus; Denton, Emily; and Hanna, Alex. Do Datasets Have Politics? Disciplinary Values in Computer Vision Dataset Development.
     Proceedings of the ACM on Human-Computer Interaction 5, CSCW2: Article 317. Best Paper Honorable Mention.
  2. Scheuerman, Morgan Klaus; Wade, Kandrea; Lustig, Caitlin; and Brubaker, Jed R. How We’ve Taught Algorithms to See Identity: Constructing Race and Gender in Image Databases for Facial Analysis.
     Proceedings of the ACM on Human-Computer Interaction 4, CSCW1: Article 58. Best Paper Honorable Mention.