Details
| Original language | English |
| --- | --- |
| Article number | 80 |
| Journal | SYNTHESE |
| Volume | 205 |
| Issue number | 2 |
| ISSN | 0039-7857 |
| DOI | 10.1007/s11229-025-04930-x |
| Publication status | Published - 5 Feb 2025 |
Abstract
Deep Neural Networks (DNNs) are epistemically opaque in the sense that their inner functioning is often unintelligible to human investigators. Inkeri Koskinen has recently argued that this poses special problems for a widespread view in social epistemology according to which thick normative trust between researchers is necessary to handle opacity: if DNNs are essentially opaque, there simply exists nobody who could be trusted to understand all the aspects a DNN picks up during training. In this paper, I present a counterexample from scientific practice, AlphaFold2. I argue that for epistemic reliance on an opaque system, trust is not necessary, but reliability is. What matters is whether, for a given context, the reliability of a DNN has been compellingly established by empirical means and whether there exist trustable researchers who have performed such evaluations adequately.
Keywords
- AI
- Evaluation
- Opacity
- Science
- Trust
ASJC Scopus subject areas
- Arts and Humanities (all)
- Philosophy
- Social Sciences (all)
Cite this
Ortmann, J. (2025). Of opaque oracles: epistemic dependence on AI in science poses no novel problems for social epistemology. SYNTHESE, 205(2), Article 80. https://doi.org/10.1007/s11229-025-04930-x
Research output: Contribution to journal › Article › Research › peer review