Explainable and Trustworthy AI

We are interested in the development and deployment of explainable and trustworthy AI algorithms that can provide understandable explanations for their decisions, fostering transparency, accountability and trust.

Our approaches achieve explainable and trustworthy AI through the integration of learning, reasoning and human knowledge, including legal, ethical and privacy rules.

We have applied our research in the areas of trustworthy autonomous systems, explainable and knowledge-driven AI for healthcare and biology, privacy-preserving federated learning, mitigating bias and addressing ethical considerations in NLP, uncovering disclosure risks in large longitudinal social science datasets, adversarial machine learning, remote sensing and cybersecurity.

Meet the team

Dr Alireza Tamaddoni-Nezhad

Reader (Associate Professor) in Machine Learning and Computational Intelligence

Dr Suparna De

Lecturer in Computer Science

Dr Xiaowei Gu

Senior Lecturer

Dr Diptesh Kanojia

Lecturer in Artificial Intelligence for Natural Language Processing

Dr Pedro Porto Buarque de Gusmão

Lecturer in Computer Science

Dr Xilu Wang

Surrey Future Fellow

Research Papers

  • De, S., Jangra, S., Agarwal, V., Johnson, J., & Sastry, N. (2023). Biases and Ethical Considerations for Machine Learning Pipelines in the Computational Social Sciences. In Ethics in Artificial Intelligence: Bias, Fairness and Beyond (pp. 99-113). Singapore: Springer Nature Singapore.
  • Gu, X. (2022). An explainable semi-supervised self-organizing fuzzy inference system for streaming data classification. Information Sciences, 583, 364-385.
  • Kanojia, D., Sharma, P., Ghodekar, S., Bhattacharyya, P., Haffari, G., & Kulkarni, M. (2021). Cognition-aware cognate detection. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics (EACL 2021) (pp. 3281-3292). Association for Computational Linguistics (ACL).
  • Saputra, M. R. U., De Gusmao, P. P., Almalioglu, Y., Markham, A., & Trigoni, N. (2019). Distilling knowledge from a deep pose regressor network. In Proceedings of the IEEE/CVF International Conference on Computer Vision (pp. 263-272).
  • Qiu, X., Parcollet, T., Fernandez-Marques, J., Gusmao, P. P., Gao, Y., Beutel, D. J., ... & Lane, N. D. (2023). A first look into the carbon footprint of federated learning. Journal of Machine Learning Research, 24(129), 1-23.
  • Chaghazardi, Z., Fallah, S., & Tamaddoni-Nezhad, A. (2024). Trustworthy Vision for Autonomous Vehicles: A Robust Logic-infused Deep Learning Approach. In Proceedings of the IEEE International Conference on Intelligent Transportation Systems (ITSC). IEEE.
  • Barroso-Bergada, D., Tamaddoni-Nezhad, A., Varghese, D., Vacher, C., Galic, N., Laval, V., & Bohan, D. A. (2023). Unravelling the web of dark interactions: Explainable inference of the diversity of microbial interactions. Advances in Ecological Research, 68, 155-183.
  • Yan, Y., Wang, X., Ligeti, P., & Jin, Y. (2024). DP-FSAEA: Differential Privacy for Federated Surrogate-Assisted Evolutionary Algorithms. IEEE Transactions on Evolutionary Computation. https://doi.org/10.1109/TEVC.2024.3391003
  • Liu, S., Wang, X., & Jin, Y. (2023). Federated Bayesian Optimization for Privacy-Preserving Neural Architecture Search. In 2023 IEEE Congress on Evolutionary Computation (CEC) (pp. 1-8). IEEE.

Funded Projects

  • EPSRC project on: Human-machine learning of ambiguities to support safe, effective, and legal decision making, 2023-2026, total amount = £1.1M
  • EPSRC Network Plus project on: Human-Like Computing (HLC), 2018-2024, total amount = £1.6M
  • Sponsored project (eBay Inc.): Lightweight Contextual Character-based Embeddings, 2023-2024, total amount = £98k
  • Sponsored project (European Association for Machine Translation): Prompt-based Explainable Quality Estimation for Malayalam, 2024-2025
  • DSTL project on: An explainable generic design of self-evolving intelligent security systems for cyber attack detection, 2023-2024, total amount = £122.6k
  • EPSRC new investigator award on: A human-trustable self-improving machine learning framework for rapid disaster responses using satellite sensor imagery, 2024-2026, total amount = £267.6k