
Of opaque oracles: epistemic dependence on AI in science poses no novel problems for social epistemology

Research output: Contribution to journal › Article › Research › peer review

Authors

  • Jakob Ortmann

External Research Organisations

  • University of Cambridge
Research metrics

  • Captures
    • Readers: 20
  • Mentions
    • News Mentions: 1

Details

Original language: English
Article number: 80
Journal: SYNTHESE
Volume: 205
Issue number: 2
Publication status: Published - 5 Feb 2025

Abstract

Deep Neural Networks (DNNs) are epistemically opaque in the sense that their inner functioning is often unintelligible to human investigators. Inkeri Koskinen has recently argued that this poses special problems for a widespread view in social epistemology according to which thick normative trust between researchers is necessary to handle opacity: if DNNs are essentially opaque, there simply exists nobody who could be trusted to understand all the aspects a DNN picks up during training. In this paper, I present a counterexample from scientific practice, AlphaFold2. I argue that for epistemic reliance on an opaque system, trust is not necessary, but reliability is. What matters is whether, for a given context, the reliability of a DNN has been compellingly established by empirical means and whether there exist trustable researchers who have performed such evaluations adequately.

Keywords

    AI, Evaluation, Opacity, Science, Trust


Cite this

Of opaque oracles: epistemic dependence on AI in science poses no novel problems for social epistemology. / Ortmann, Jakob.
In: SYNTHESE, Vol. 205, No. 2, 80, 05.02.2025.

Research output: Contribution to journal › Article › Research › peer review

BibTeX
@article{df22d1a35f3f49099d9b2719be3e52a0,
title = "Of opaque oracles: epistemic dependence on AI in science poses no novel problems for social epistemology",
abstract = "Deep Neural Networks (DNNs) are epistemically opaque in the sense that their inner functioning is often unintelligible to human investigators. Inkeri Koskinen has recently argued that this poses special problems for a widespread view in social epistemology according to which thick normative trust between researchers is necessary to handle opacity: if DNNs are essentially opaque, there simply exists nobody who could be trusted to understand all the aspects a DNN picks up during training. In this paper, I present a counterexample from scientific practice, AlphaFold2. I argue that for epistemic reliance on an opaque system, trust is not necessary, but reliability is. What matters is whether, for a given context, the reliability of a DNN has been compellingly established by empirical means and whether there exist trustable researchers who have performed such evaluations adequately.",
keywords = "AI, Evaluation, Opacity, Science, Trust",
author = "Jakob Ortmann",
note = "Publisher Copyright: {\textcopyright} The Author(s) 2025.",
year = "2025",
month = feb,
day = "5",
doi = "10.1007/s11229-025-04930-x",
language = "English",
volume = "205",
journal = "SYNTHESE",
issn = "0039-7857",
publisher = "Springer Netherlands",
number = "2",

}

RIS

TY - JOUR

T1 - Of opaque oracles

T2 - epistemic dependence on AI in science poses no novel problems for social epistemology

AU - Ortmann, Jakob

N1 - Publisher Copyright: © The Author(s) 2025.

PY - 2025/2/5

Y1 - 2025/2/5

N2 - Deep Neural Networks (DNNs) are epistemically opaque in the sense that their inner functioning is often unintelligible to human investigators. Inkeri Koskinen has recently argued that this poses special problems for a widespread view in social epistemology according to which thick normative trust between researchers is necessary to handle opacity: if DNNs are essentially opaque, there simply exists nobody who could be trusted to understand all the aspects a DNN picks up during training. In this paper, I present a counterexample from scientific practice, AlphaFold2. I argue that for epistemic reliance on an opaque system, trust is not necessary, but reliability is. What matters is whether, for a given context, the reliability of a DNN has been compellingly established by empirical means and whether there exist trustable researchers who have performed such evaluations adequately.

AB - Deep Neural Networks (DNNs) are epistemically opaque in the sense that their inner functioning is often unintelligible to human investigators. Inkeri Koskinen has recently argued that this poses special problems for a widespread view in social epistemology according to which thick normative trust between researchers is necessary to handle opacity: if DNNs are essentially opaque, there simply exists nobody who could be trusted to understand all the aspects a DNN picks up during training. In this paper, I present a counterexample from scientific practice, AlphaFold2. I argue that for epistemic reliance on an opaque system, trust is not necessary, but reliability is. What matters is whether, for a given context, the reliability of a DNN has been compellingly established by empirical means and whether there exist trustable researchers who have performed such evaluations adequately.

KW - AI

KW - Evaluation

KW - Opacity

KW - Science

KW - Trust

UR - http://www.scopus.com/inward/record.url?scp=85218193195&partnerID=8YFLogxK

U2 - 10.1007/s11229-025-04930-x

DO - 10.1007/s11229-025-04930-x

M3 - Article

AN - SCOPUS:85218193195

VL - 205

JO - SYNTHESE

JF - SYNTHESE

SN - 0039-7857

IS - 2

M1 - 80

ER -