Abstract: In this paper, based on an ethnographic study of doctors performing colonoscopies, we show how professionals bolster confidence in their decisions, especially when these decisions conflict with AI recommendations, by employing not only their professional vision but also semiotic work. We connect the uncertainty introduced by AI to the differences between human and AI reasoning and demonstrate how workers address these challenges through semiotic work, which enables them to confidently validate or reject AI outcomes. Our findings reveal that the difficulty of developing a confident diagnosis while interacting with AI stems from the assumption that effective collaboration with AI requires understanding AI reasoning. In contrast, we propose that justifying professional vision through semiotic work allows professionals to confidently validate or reject AI outcomes. We show how doctors can transform the uncertainty induced by AI into a confident decision by turning their intuitive professional vision into an explicit reasoning process that uses semiotic work: the interpretation of additional signs that go beyond the object of scrutiny.
Bio: During her PhD, she conducted ethnographic fieldwork with shopfloor workers producing molds for the automotive industry. From this fieldwork, she co-published a paper with Ruthanne Huising, highlighting how workers who lack official training or peripheral learning opportunities can achieve computational literacy through what they term "Vicarious Coding." Based on the same fieldwork, she also published a paper in collaboration with Jean Clarke, which explores how workers in low-skill environments can find meaning in their work by engaging in anomalous craft. Currently, she is involved in a hospital project examining how AI influences doctors' work practices. Over the course of 18 months, she has observed doctors in operating rooms and has been analyzing how AI recommendations integrate with and enhance professional vision.