About Me
I graduated from the University of York with a BSc in Psychology and went on to obtain an MSc in Computational Neuroscience and Cognitive Robotics from the University of Birmingham. My BSc dissertation focused on memory consolidation, while my MSc research aimed to automate the annotation of eye-tracking data using computer vision architectures such as Mask R-CNN.
As a PhD student at the University of Bristol, my current research explores interactive symbolic machine learning methodologies that facilitate human feedback to AI models. For further details, check out my thesis summary below, and feel free to head over to my GitHub page for more information on my past research and ongoing projects.
Research Project Summary
The emerging field of explanation-driven Interactive Machine Learning (XIML) aims to advance human-AI interaction beyond active learning methodologies by building systems that permit richer user feedback than static ground-truth labels. XIML pairs each query with an explanation of the model's underlying reasoning, which the user can adjust to guide model learning and improve out-of-distribution robustness.
Recent work has made significant progress towards facilitating effective user feedback using propositional symbolic machine learning methods, which have also been successfully integrated with neural processing modules in several vision-based tasks [1,2,3]. However, existing approaches lack expressivity and suffer from common challenges, including concept drift and vulnerability to confounding factors in the training data. The current project explores how these limitations can be addressed by integrating methodologies from Inductive Logic Programming (ILP), a field that has long used symbolic logic to facilitate user interaction [4,5]. We demonstrate the efficacy of this interdisciplinary methodology by implementing interactive ILP systems on existing XIML task domains, present the comparative advantages of the ILP approach, and introduce future avenues of research that will further align ILP and XIML.
Example: Interactive workflow for an XIML/ILP system. Image taken from [5].
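As a toy illustration of this kind of feedback loop, the sketch below shows a model that "explains" its prediction as the single feature it relies on; when the user vetoes a confounding feature, the model relearns under that constraint. All names and data here are hypothetical, invented for illustration only, and are not taken from the cited systems:

```python
def learn_rule(examples, vetoed=frozenset()):
    """Return the feature whose value best predicts the label,
    skipping any feature the user has vetoed as a confounder.
    This single-feature 'rule' stands in for the model's explanation."""
    features = [f for f in examples[0] if f != "label" and f not in vetoed]

    def accuracy(f):
        # Fraction of examples where the feature value matches the label.
        return sum(ex[f] == ex["label"] for ex in examples) / len(examples)

    return max(features, key=accuracy)


# Toy data: 'watermark' is a confounder that perfectly tracks the label,
# while 'stripes' is the intended (but imperfect) predictive feature.
data = [
    {"stripes": True,  "watermark": True,  "label": True},
    {"stripes": True,  "watermark": True,  "label": True},
    {"stripes": False, "watermark": False, "label": False},
    {"stripes": True,  "watermark": False, "label": False},
]

# Round 1: the model's explanation exposes its reliance on the confounder.
rule = learn_rule(data)                        # -> "watermark"

# The user rejects this explanation, so the confounder is vetoed and the
# model relearns under the user's constraint.
rule = learn_rule(data, vetoed={"watermark"})  # -> "stripes"
```

The point of the sketch is that the explanation itself, not just the predicted label, is the object of user feedback, which is the core idea the project carries over from XIML into interactive ILP.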
Notable References
1) Elizabeth M Daly, Massimiliano Mattetti, Öznur Alkan, and Rahul Nair. User driven model adjustment via boolean rule explanations. arXiv preprint arXiv:2203.15071, 2022.
2) Öznur Alkan, Dennis Wei, Massimiliano Mattetti, Rahul Nair, Elizabeth M Daly, and Diptikalyan Saha. Frote: Feedback rule-driven oversampling for editing models. arXiv preprint arXiv:2201.01070, 2022.
3) Stefano Teso and Kristian Kersting. Explanatory interactive machine learning. In Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society, pages 239–245, 2019.
4) Oliver Ray and Steve Moyle. Towards expert-guided elucidation of cyber attacks through interactive inductive logic programming. In 2021 13th International Conference on Knowledge and Systems Engineering (KSE), pages 1–7. IEEE, 2021.
5) Ute Schmid and Bettina Finzel. Mutual explanations for cooperative decision making in medicine. KI-Künstliche Intelligenz, 34(2):227–233, 2020.