Complexity, uncertainty and the Safety of ML
42nd International Conference on Computer Safety, Reliability and Security (SAFECOMP 2023)
Conference Paper, Year: 2023


Simon Burton
Benjamin Herd

Abstract

There is currently much debate regarding whether or not applications based on Machine Learning (ML) can be made demonstrably safe. We assert that our ability to argue the safety of ML-based functions depends on the complexity of the task and environment of the function, the observations (training and test data) used to develop the function and the complexity of the ML models. Our inability to adequately address this complexity inevitably leads to uncertainties in the specification of the safety requirements, the performance of the ML models and our assurance argument itself. By understanding each of these dimensions as a continuum, can we better judge what level of safety can be achieved for a particular ML-based function?
Main file: SAFECOMP_2023_paper_3459.pdf (267.24 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-04191756 , version 1 (30-08-2023)

Identifiers

  • HAL Id : hal-04191756 , version 1

Cite

Simon Burton, Benjamin Herd. Complexity, uncertainty and the Safety of ML. SAFECOMP 2023, Position Paper, Sep 2023, Toulouse, France. ⟨hal-04191756⟩

Collections

LAAS SAFECOMP2023