Invited Speakers

Cristina David

Senior Lecturer and Royal Society University Research Fellow, University of Bristol, UK

Lessons learnt from using GNNs to estimate program termination

Termination analyses investigate the termination behaviour of programs in order to prevent a variety of program bugs (e.g. hanging programs, denial-of-service vulnerabilities). Usually, such analyses make use of formal methods and provide certificates of correctness in the form of ranking functions or recurrence sets. In this work, we move away from formal methods and embrace the stochastic nature of machine learning models. Instead of aiming for rigorous guarantees that can be interpreted by solvers, our objective is to provide an estimation of a program's termination behaviour and of the likely reason for nontermination (when applicable) that a programmer can use for debugging purposes. In this talk I will discuss our experience using Graph Neural Networks and semantic segmentation for this purpose, as well as the challenges we encountered and possible solutions.
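
The abstract does not spell out the model, so the following is only a rough sketch, in PyTorch Geometric, of the general shape of such an approach: a small GNN over a program graph that produces a graph-level termination estimate together with per-node scores in the style of semantic segmentation. The node features, edges, and architecture are illustrative assumptions, not the system described in the talk.

```python
# Hypothetical sketch: scoring a program graph as terminating / possibly
# non-terminating with a small GNN, plus per-node scores that could flag a
# likely cause of nontermination. Illustrative only, not the talk's system.
import torch
from torch_geometric.data import Data
from torch_geometric.nn import GCNConv, global_mean_pool


class TerminationGNN(torch.nn.Module):
    def __init__(self, num_node_features: int, hidden: int = 64):
        super().__init__()
        self.conv1 = GCNConv(num_node_features, hidden)
        self.conv2 = GCNConv(hidden, hidden)
        self.graph_head = torch.nn.Linear(hidden, 2)  # terminating vs. not
        self.node_head = torch.nn.Linear(hidden, 2)   # segmentation-style scores

    def forward(self, x, edge_index, batch):
        h = self.conv1(x, edge_index).relu()
        h = self.conv2(h, edge_index).relu()
        graph_logits = self.graph_head(global_mean_pool(h, batch))
        node_logits = self.node_head(h)
        return graph_logits, node_logits


# Toy "program graph": three nodes (e.g. CFG blocks) with placeholder features
# and a back edge (2 -> 1) standing in for a loop.
x = torch.randn(3, 8)
edge_index = torch.tensor([[0, 1, 2], [1, 2, 1]])
data = Data(x=x, edge_index=edge_index)

model = TerminationGNN(num_node_features=8)
batch = torch.zeros(3, dtype=torch.long)  # all nodes belong to graph 0
graph_logits, node_logits = model(data.x, data.edge_index, batch)
print(graph_logits.softmax(dim=-1), node_logits.softmax(dim=-1))
```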

Yuriy Brun

Professor, University of Massachusetts, USA

Automated Formal Verification

Formal verification is a promising method for building correct software. Unfortunately, its cost is prohibitively high, and nearly all shipped software today is unverified. I will discuss how the latest breakthroughs in natural language processing technology can help automate the writing of formal verification proofs. Specifically, language models trained on corpora of human-written proofs can either generate entire proofs or be used to guide a search-based method for synthesizing proofs. The formal verification domain is unique in that the proofs can be machine-checked by a theorem prover. As a result, no incorrect proof is ever returned by our systems: either we fail to synthesize a proof, or the proof we return is guaranteed to be correct. This makes the formal verification domain especially well suited for cutting-edge NLP technology, such as large language models.
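
The "no incorrect proof is ever returned" property follows from the generate-and-check structure: every candidate proof is validated by the theorem prover before it can be accepted. Below is a minimal sketch of that loop; `sample_proofs` and `check_with_prover` are hypothetical placeholders for the language model and the proof checker, not a real API.

```python
# Sketch of a generate-and-check proof synthesis loop. The two helpers are
# stand-ins for a language model and a theorem prover; a real system would
# invoke the actual tools (e.g. a Coq proof checker) here.
from typing import Iterator, Optional


def sample_proofs(theorem: str, num_samples: int) -> Iterator[str]:
    """Placeholder LM: yield candidate proof scripts for the theorem."""
    for _ in range(num_samples):
        yield "intros. auto."  # dummy candidate


def check_with_prover(theorem: str, proof: str) -> bool:
    """Placeholder proof checker: a real system would run the prover."""
    return False  # stub never accepts; replace with a real check


def synthesize_proof(theorem: str, num_samples: int = 16) -> Optional[str]:
    # Every candidate is machine-checked before it can be returned, so the
    # loop may fail to find a proof, but it never returns an incorrect one.
    for candidate in sample_proofs(theorem, num_samples):
        if check_with_prover(theorem, candidate):
            return candidate
    return None  # synthesis failed


print(synthesize_proof("forall n, n + 0 = n"))
```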

Jingxuan He

Ph.D. Student, ETH Zurich, Switzerland

Large Language Models for Code: Security Hardening and Adversarial Testing

Large language models (LMs) trained on massive code corpora are increasingly used to generate programs. However, LMs lack awareness of security and have been found to frequently produce unsafe code. In this talk, I will present our recent work that addresses LMs' security limitations along two important axes: (i) security hardening, which aims to enhance LMs' reliability in generating secure code, and (ii) adversarial testing, which seeks to evaluate LMs' security from an adversarial standpoint.
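
The abstract does not describe the underlying techniques, so purely as a loose illustration of the adversarial-testing axis, here is a toy sketch that samples completions from a placeholder code LM and measures how often they match an obviously unsafe pattern. `complete_code` and the pattern list are assumptions for illustration only; real evaluations rely on proper static analyzers rather than regular expressions.

```python
# Toy sketch of evaluating how often a code LM emits unsafe code. Not the
# method from the talk: the "LM" is a stub and the check is a crude regex.
import re
from typing import List


def complete_code(prompt: str, num_samples: int) -> List[str]:
    """Placeholder: sample completions from a code language model."""
    return ["strcpy(buf, user_input);" for _ in range(num_samples)]


UNSAFE_PATTERNS = [r"\bstrcpy\s*\(", r"\bgets\s*\("]  # illustrative C APIs only


def unsafe_rate(prompt: str, num_samples: int = 20) -> float:
    """Fraction of sampled completions that match a known-unsafe pattern."""
    completions = complete_code(prompt, num_samples)
    flagged = sum(
        any(re.search(p, code) for p in UNSAFE_PATTERNS) for code in completions
    )
    return flagged / num_samples


print(unsafe_rate("void copy_name(char *user_input) {"))
```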