Irene Senatore
Berlin School of Mind and Brain – Humboldt Universität zu Berlin

Sullivan (2022) has recently argued that the black-box nature of computational models does not get in the way of drawing epistemically robust scientific inferences about the target phenomenon, once a clear link between the model and the target phenomenon is established; that is, it is mostly irrelevant to know the details of the model’s internal workings in order to gain understanding from the model. I will argue that in computational neuroscience research that uses Deep Neural Networks (DNNs) as models, it is epistemically relevant to gain knowledge of at least some details of the model’s implementation in order to 1) establish a more robust link between the model and the target phenomenon in the first place and 2) draw epistemically meaningful scientific inferences. Link uncertainty matters, but it can be ameliorated not only by conducting more empirical research but also, contrary to what Sullivan contends, by looking at the specifics of model implementation. First, adopting De Regt’s (2017) account, I will argue that theory and model intelligibility are necessary for scientific understanding and explanation. In doing so, I will place particular emphasis on the role of visualizability. I will then turn to the differences between opacity (generally understood as the inverse of transparency), abstraction, and idealization, and point out why opacity poses specific issues for intelligibility. Following Creel’s (2020) account, I will distinguish three levels of machine learning (ML) transparency: algorithmic, structural, and run transparency, and identify the levels that Sullivan deems irrelevant as those of structural and run transparency. I will then analyze a prominent example from Reinforcement Learning (RL) research in neuroscience to show how 1) structural and run transparency matter for better establishing a link between the model and the target and 2) visualizability of the model’s internal workings can play a crucial role in model intelligibility, thereby triggering a virtuous cycle in the scientific research process. Finally, I will offer some replies to possible objections and point to promising future research directions in this domain.
Keywords: Opacity; Computer Simulations; Deep Neural Networks; Scientific Modelling; Computational Neuroscience; Scientific Understanding

Chair: Kamil Furman
Time: September 12th, 10:40 – 11:10
Location: SR 1.004
