Neuro-Symbolic Models for Scene Understanding in Automated Driving
Develop a neuro-symbolic pipeline that combines scene graphs and machine learning to identify relevant objects for automated driving manoeuvres.
Scene understanding helps an automated vehicle make sense of nearby traffic participants and their actions. The outcome can be used for decision making, for communication with other traffic participants, or for explanations, e.g. detecting the objects relevant to a manoeuvre or giving the cause of an observed action. Recently, researchers at Simula have developed a graph representation of the scene, the objects it contains, and their relations to each other [1]. This graph representation is very flexible and can be configured with different ways of describing the relationships between traffic participants. However, it has not yet been deeply investigated how to best construct and configure this graph, or how to best make predictions from it. Answering these questions is the goal of this master thesis project. The project will use state-of-the-art research methods for scene understanding and explanation, and will use real-world datasets for the experiments and evaluation.
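To make the idea concrete, the sketch below builds a toy scene graph with networkx, with traffic participants as nodes and qualitative relations (e.g. relative position, lane membership) as labelled edges. The object attributes and relation labels are illustrative placeholders only, not the representation developed at Simula.

```python
# Hypothetical sketch of a qualitative scene graph; node and edge attributes
# are illustrative placeholders, not the representation from [1].
import networkx as nx

def build_scene_graph(objects, relations):
    """Build a directed multigraph from detected objects and pairwise relations.

    objects:   list of dicts, e.g. {"id": "car_1", "type": "car", "position": (x, y)}
    relations: list of (source_id, target_id, relation_label) tuples
    """
    graph = nx.MultiDiGraph()  # multigraph: several relations may hold between the same pair
    for obj in objects:
        graph.add_node(obj["id"], **{k: v for k, v in obj.items() if k != "id"})
    for src, dst, label in relations:
        graph.add_edge(src, dst, relation=label)
    return graph

# Toy scene: the ego vehicle, a car ahead in the same lane, and a pedestrian.
objects = [
    {"id": "ego", "type": "car", "position": (0.0, 0.0)},
    {"id": "car_1", "type": "car", "position": (0.0, 15.0)},
    {"id": "ped_1", "type": "pedestrian", "position": (3.5, 20.0)},
]
relations = [
    ("car_1", "ego", "in_front_of"),
    ("car_1", "ego", "same_lane"),
    ("ped_1", "ego", "front_right_of"),
]
scene_graph = build_scene_graph(objects, relations)
print(scene_graph.nodes(data=True))
print(scene_graph.edges(data=True))
```

Because several qualitative relations can hold between the same pair of objects, a multigraph is used here; which relation vocabulary to use is exactly one of the configuration choices the thesis will investigate.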
Goal
Design a fully automated pipeline that processes a dataset of driving scenes, constructs scene graphs, and trains machine learning models to identify relevant objects and predict their actions in the scene. The pipeline can be parameterized to adjust the graph construction and model training.
An extensive experimental evaluation will be performed to find the best hyperparameters.
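As a rough sketch of what such a parameterized pipeline could look like, the example below bundles graph-construction and training options into a single config object and runs a grid search over classifier hyperparameters with scikit-learn. Feature extraction from the scene graphs is stubbed out with synthetic data so the sketch runs end to end; all names and parameters are hypothetical.

```python
# Hypothetical end-to-end sketch: one config drives both graph construction
# and model training; feature extraction is stubbed with synthetic data.
from dataclasses import dataclass, field

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

@dataclass
class PipelineConfig:
    relation_set: str = "qualitative"   # how relations between objects are described
    max_distance: float = 50.0          # ignore objects farther than this from the ego vehicle
    param_grid: dict = field(default_factory=lambda: {
        "n_estimators": [100, 300],
        "max_depth": [None, 10],
    })

def extract_features(scene_graphs, config):
    """Placeholder: map each (graph, object) pair to a feature vector and a
    binary label 'relevant for the current manoeuvre'. Synthetic data keeps
    the sketch runnable without a real dataset."""
    return make_classification(n_samples=500, n_features=12, random_state=0)

def run_pipeline(scene_graphs, config):
    X, y = extract_features(scene_graphs, config)
    search = GridSearchCV(
        RandomForestClassifier(random_state=0),
        param_grid=config.param_grid,
        cv=5,
        scoring="f1",
    )
    search.fit(X, y)
    return search.best_estimator_, search.best_params_, search.best_score_

if __name__ == "__main__":
    config = PipelineConfig(relation_set="qualitative", max_distance=50.0)
    model, best_params, best_f1 = run_pipeline(scene_graphs=[], config=config)
    print(best_params, round(best_f1, 3))
```

In the actual project, the placeholder feature extraction would be replaced by graph-derived features or a graph-based model, and the configuration would also cover the graph-construction choices described above, so that graph parameters and model hyperparameters can be evaluated together.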
Learning outcome
- Knowledge Graphs
- Scene Understanding
- Machine learning model training
- Model evaluation & tuning
- Experimental design & execution
Qualifications
- Programming experience in Python
- Familiarity with a machine learning toolkit, e.g. scikit-learn, XGBoost, or PyTorch
Supervisors
- Helge Spieker
- Nassim Belmecheri
References
- [1] N. Belmecheri, A. Gotlieb, N. Lazaar, and H. Spieker (2024). Towards Trustworthy Automated Driving through Qualitative Scene Understanding and Explanations. SAE International Journal of Connected and Automated Vehicles. Preprint: