Artifacts have politics; they are imbued with the worldviews of their creators. Given that machine learning systems enact worldviews about human beings—from their identities to their creditworthiness—there is a need to better understand how the humans behind their creation shape their design. In this project, we adopt the lens of positionality to understand the worldviews embedded in machine learning. Positionality refers to how an individual's "position" in the world shapes their outlook—how the complex web of identities such as race, gender, nationality, location, sexuality, and class influences their experiences and thus their beliefs, values, and relationships. We are researching how the positionalities of individual practitioners impact machine learning artifacts, such as models and datasets. Our goal is not only to understand the role of positionality in shaping technical artifacts, but also to harness positionality for more diverse, ethical, and representative designs.