The First International Workshop on Deep Learning-aided Verification (DAV '23) is co-located with the 35th International Conference on Computer-Aided Verification (CAV '23).

Important Dates

1st Call for Papers: January 23, 2023
Paper submission: May 11, 2023 AoE (extended from May 3, 2023)
Author notification: May 24, 2023 AoE (extended from May 17, 2023)
Workshop: July 18, 2023

Scope and Topics of Interest

Deep learning has become state-of-the-art for many human-like tasks, such as computer vision and translation. Yet the perception persists that deep neural networks cannot be applied to computer-aided verification tasks because of the complex symbolic reasoning involved. Recently, this perception has started to shift: major advances in architectural design have enabled the successful application of deep neural networks to various formal reasoning and automatic verification tasks, including SAT and QBF solving, higher-order theorem proving, LTL satisfiability and synthesis, symbolic differentiation, autoformalization, and termination analysis. The workshop on Deep Learning-aided Verification (DAV) aims to cover this largely unexplored research area in all its facets.

We cover recent highlights and upcoming ideas at the intersection of computer-aided verification and deep learning research. The workshop provides a platform to bring together industry and academic researchers from both communities, attract and motivate young talent, and raise awareness of new technologies.

Computer-aided verification research will benefit from hybrid algorithms that combine the best of both worlds (efficiency and correctness), and machine learning researchers will gain novel application domains in which to study architectures and a model's generalization and reasoning capabilities.

Topics of interest include, but are not limited to:

The workshop focuses on how to use deep learning in verification, not on verifying neural networks.


The workshop is a one-day event consisting of a mix of invited and contributed talks from both research communities. In particular, we encourage the exchange of ideas to form novel research directions and collaborations of interest to both domains, including common challenges such as acquiring large amounts of symbolic training data and developing architectural designs that ensure reliable reasoning.



The workshop is sponsored by the Stanford Center for AI Safety.


Organizers

Christopher Hahn - Stanford University, Stanford, CA, USA
Markus N. Rabe - Google Research, Mountain View, CA, USA