Research

What we work on:

  • Neuromorphic perception and control (event cameras, spiking neural networks) for low-power, low-latency autonomy (a minimal spiking-neuron sketch follows the list).
  • Robust perception under distribution shift: domain adaptation, label-efficient learning, multi-task temporal models.
  • Anomaly and rare-event detection: semi/self-supervised generative methods with spatiotemporal localisation.
  • Multimodal fusion: vision–inertial–polarisation and classical RGB/thermal for reliable state estimation and guidance.
  • Multimodal representation learning: cross-modal alignment and grounding (e.g., vision–language–inertial).
  • Resource-efficient neural architectures: fast, stable convergence and high accuracy per FLOP for edge/embedded and neuromorphic hardware.
  • Time-series modelling: multiscale forecasting, change-point detection and online adaptation (see the change-point sketch below the list).
  • Trustworthy ML: bias detection/mitigation, explainability and safety cases for high-stakes deployments.
  • Mapping and positioning: event-based VIO/SLAM, graph/diffusion map alignment and GNSS-denied localisation.
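As a concrete illustration of the spiking-neuron side of the first item, here is a minimal leaky integrate-and-fire (LIF) sketch in NumPy. The time constants, threshold and synthetic current trace are illustrative assumptions, not our models; real work here runs on SNN frameworks driven by event-camera streams.

```python
# Minimal sketch of a leaky integrate-and-fire (LIF) neuron in discrete time.
# All constants are illustrative placeholders.
import numpy as np

def lif_spikes(input_current, tau=20.0, v_thresh=1.0, v_reset=0.0, dt=1.0):
    """Simulate one LIF neuron; return a binary spike train."""
    v = v_reset
    spikes = np.zeros_like(input_current)
    for t, i_t in enumerate(input_current):
        # Leaky integration: membrane decays toward rest and adds input.
        v += dt / tau * (-(v - v_reset) + i_t)
        if v >= v_thresh:   # threshold crossing emits a spike...
            spikes[t] = 1.0
            v = v_reset     # ...and resets the membrane potential
    return spikes

# Toy input: a noisy non-negative current trace.
current = np.clip(np.random.default_rng(0).normal(0.8, 0.5, 200), 0, None)
print(int(lif_spikes(current).sum()), "spikes over 200 steps")
```

And for the time-series item, a minimal one-sided CUSUM change-point sketch, assuming a known pre-change mean; the drift allowance and alarm threshold are illustrative, not tuned values.

```python
# Minimal one-sided CUSUM for online change-point detection in a
# univariate stream. mean0 is the assumed pre-change mean.
import numpy as np

def cusum(stream, mean0, drift=0.5, threshold=5.0):
    """Return the first index where the accumulated upward deviation
    from mean0 (minus a drift allowance) exceeds the threshold."""
    s = 0.0
    for t, x in enumerate(stream):
        s = max(0.0, s + (x - mean0 - drift))  # accumulate upward shifts only
        if s > threshold:
            return t  # alarm: change likely occurred at or before t
    return None  # no change detected

rng = np.random.default_rng(1)
# Mean shifts from 0 to 2 at index 100.
stream = np.concatenate([rng.normal(0, 1, 100), rng.normal(2, 1, 100)])
print("change detected at index", cusum(stream, mean0=0.0))
```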

Scalable training systems: large-scale experimentation, evaluation under distribution shift, and ROS-integrated, field-ready pipelines (sketched below).
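As a sketch of what "ROS-integrated" means in practice, here is a minimal ROS 1 (rospy) publisher node. It assumes a ROS 1 installation with a running roscore; the topic name, message type and published value are hypothetical placeholders, and a field pipeline would publish real estimator output instead.

```python
#!/usr/bin/env python
# Minimal rospy node publishing a hypothetical pipeline-health metric.
import rospy
from std_msgs.msg import Float32

def main():
    rospy.init_node("pipeline_health")  # register this node with the ROS master
    pub = rospy.Publisher("/health/score", Float32,  # hypothetical topic name
                          queue_size=10)
    rate = rospy.Rate(1.0)  # publish at 1 Hz
    while not rospy.is_shutdown():
        pub.publish(Float32(data=0.99))  # placeholder value
        rate.sleep()

if __name__ == "__main__":
    main()
```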