SafeComp 2026

The use of neural networks has expanded into areas as diverse as medical systems, industrial devices, and space structures. In these applications, it is essential to balance performance, power consumption, and silicon area; in critical environments, it is also necessary to ensure high dependability.

This workshop will provide a focused forum on how to design, verify, and certify AI-enabled embedded systems while preserving their overall dependability, including safety, reliability, availability, and security, under tight resource and real-time constraints. It targets AI-enabled embedded and cyber-physical systems deployed in safety-critical domains such as automotive, rail, aerospace, robotics, medical devices, and industrial control. The workshop will emphasize how functional safety, cybersecurity, and sustainability jointly contribute to system dependability when AI models, often opaque and non-deterministic, are deployed on constrained embedded platforms exposed to both accidental faults and malicious attacks.

Topics of interest include, but are not limited to:

  • Architectures and patterns for safe deployment of AI in embedded and cyber-physical systems (runtime monitors, safety envelopes, redundancy, graceful degradation).

  • Methods for verification, validation, and testing of AI components in real-time and resource-constrained embedded environments, including scenario-based testing and robustness analysis.

  • Fault-tolerant and resilient hardware and software for AI in embedded systems, including reliability analysis and testing of AI-enabled embedded systems.

  • Integration of AI components with safety and security standards and assurance frameworks (e.g., ISO 26262, IEC 61508, DO-178C, IEC 62304, security co-engineering).

  • Safety- and security-aware design of AI models for embedded devices (compression, quantization, scheduling, resource management) with guarantees on timing and performance.

  • Safety and security co-engineering for AI-enabled embedded and IoT devices, including attack surfaces, threat modeling, and defenses (e.g., model poisoning, adversarial inputs, backdoors, physical tampering).

  • Lifecycle management, monitoring, and update strategies for AI models in the field, including continuous assurance, re-certification, and data governance.

  • Out-of-Distribution (OOD) detection and Operational Design Domain (ODD) monitoring, including anomaly detection, uncertainty quantification, and mechanisms for safe fallback when AI systems encounter unforeseen conditions.

  • Case studies and lessons learned from deploying AI in safety-critical embedded systems (automotive, rail, aerospace, robotics, healthcare, energy, industrial automation).

  • Tools, benchmarks, and open datasets for assessing safety, security, and sustainability of AI-based embedded systems.

Important Dates

  • Paper submission: 4 May 2026.

  • Notification of acceptance: 18 May 2026.

  • Camera-ready papers: 8 June 2026.

  • Workshop date: 22 September 2026 (co-located with SafeComp 2026).

We invite the submission of high-quality papers presenting research contributions, work in progress, and results from experimental and ongoing projects. The following types of submission are accepted:

  • Short papers: max. 6 pages, including references. These may present new and emerging results, challenging problems, tool demonstrations, work in progress, or industrial experiences.

  • Research papers: max. 12 pages, including references. These should report substantial, completed, and previously unpublished research.

Workshop papers will be reviewed by at least three independent reviewers. 

Accepted full research papers will be included in the companion volume of the SafeComp 2026 proceedings.

Templates for paper preparation can be downloaded from: https://www.springer.com/gp/computer-science/lncs/conference-proceedings-guidelines

Submission will be via EasyChair: https://easychair.org/my2/conference?conf=daies2026

To be announced soon

Organizers

Joaquín Gracia-Morán, Universitat Politècnica de València, Spain (jgracia @ itaca.upv.es)

Sergio Cuenca-Asensi, Universitat d'Alacant, Spain (sergio @ dtic.ua.es)

Program Committee

Carlos Cruz de la Torre, Universidad de Alcalá, Spain

Daniel Gil-Tomás, Universitat Politècnica de València, Spain

Almudena Lindoso-Muñoz, Universidad Carlos III de Madrid, Spain

José Manuel Palomares-Muñoz, Universidad de Córdoba, Spain

José Antonio Pascual-Saiz, Universidad del País Vasco / Euskal Herriko Unibertsitatea, Spain

Alejandro Serrano-Cases, Universitat d'Alacant, Spain

NeuroAI4Space

This workshop is organized under the umbrella of the NeuroAI4Space project, "Neuromorphic Artificial Intelligence Processor for Artificial Vision in aerospace applications" (reference INNEST/2025/339), co-financed by the European Union within the framework of the European Regional Development Fund (ERDF) Comunitat Valenciana 2021-2027 Programme, through the IVACE+i Innovation "Strategic Cooperation Projects" calls.
