I am an Academic Researcher and Data Scientist specialising in AI Safety, Causal Fairness, and Explainable AI (XAI).
I am currently pursuing a PhD at the University of Wolverhampton, where my work focuses on mitigating algorithmic bias amplification in LLM-enhanced systems. I move beyond correlation-based metrics to Structural Causal Modelling, analysing "rich-get-richer" feedback loops in which LLM latent representations amplify pre-existing biases.
In addition to my academic work, I am a Visiting Member at the London Initiative for Safe AI (LISA) and a participant in the Bluedot Impact Technical AI Safety curriculum. I combine a distinction-level academic background with practical industry experience to bridge the gap between theoretical fairness frameworks and robust, deployable machine learning systems.
Introduced the novel LRR-TED framework, a hybrid approach combining Linear Rule Regression with supervised explanations, and applied a Pareto-optimal strategy to select the "Golden Quartet" of domain rules. This approach yielded 94.00% predictive accuracy, exceeding the benchmarks of both full automation (LRR) and human experts, while reducing human effort by 50%.
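As an illustration of the selection step, the sketch below enumerates hypothetical candidate rules and keeps the Pareto-optimal subsets when trading accuracy gain against human-annotation effort. The rule names, scores, and effort figures are placeholders for illustration, not the actual LRR-TED rules or results.

```python
# Illustrative sketch only: generic Pareto-front selection over candidate rule
# subsets, trading predictive accuracy against human-annotation effort.
from itertools import combinations

# Hypothetical candidate domain rules with (accuracy gain, human-effort cost).
candidate_rules = {
    "rule_a": (0.10, 3),
    "rule_b": (0.08, 2),
    "rule_c": (0.05, 1),
    "rule_d": (0.04, 4),
    "rule_e": (0.02, 1),
}

def evaluate(subset):
    """Toy objective: summed accuracy gain vs. summed effort for a rule subset."""
    acc = sum(candidate_rules[r][0] for r in subset)
    effort = sum(candidate_rules[r][1] for r in subset)
    return acc, effort

# Enumerate all subsets and keep the Pareto-optimal ones
# (no other subset is both more accurate and cheaper).
subsets = [c for k in range(1, len(candidate_rules) + 1)
           for c in combinations(candidate_rules, k)]
scored = {s: evaluate(s) for s in subsets}

def dominated(s):
    acc, eff = scored[s]
    return any(a >= acc and e <= eff and (a, e) != (acc, eff)
               for a, e in scored.values())

pareto_front = [s for s in subsets if not dominated(s)]
# A "golden" subset could then be the smallest front member meeting an accuracy target.
print(sorted(pareto_front, key=len)[:3])
```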
Deployed IBM’s Contrastive Explanation Method (CEM) to improve transparency in black-box deep neural networks. Decomposed predictions into Pertinent Negatives (PN) and Pertinent Positives (PP) to provide actionable counterfactual explanations.
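The intuition behind a pertinent negative can be sketched without the AIX360 toolkit: minimally perturb an input until the classifier's prediction flips. The naive coordinate search below on a standard scikit-learn model is only a conceptual illustration under that assumption, not IBM's CEM optimisation (which uses an elastic-net regularised objective).

```python
# Conceptual sketch of a pertinent negative: find a small additive perturbation
# delta such that the classifier's prediction on x + delta changes class.
# This is a naive greedy search, not IBM's CEM implementation.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
clf = LogisticRegression(max_iter=1000).fit(X, y)

def pertinent_negative(x, model, step=0.05, max_steps=200):
    """Greedily grow the feature change that flips the predicted class."""
    base = model.predict([x])[0]
    delta = np.zeros_like(x)
    for _ in range(max_steps):
        best = None
        for i in range(len(x)):
            for direction in (+step, -step):
                trial = delta.copy()
                trial[i] += direction
                # Prefer the perturbation that most reduces confidence in the base class.
                score = model.predict_proba([x + trial])[0][base]
                if best is None or score < best[0]:
                    best = (score, trial)
        delta = best[1]
        if model.predict([x + delta])[0] != base:
            return delta  # minimal-ish change that alters the decision
    return None

delta = pertinent_negative(X[0], clf)
print("Pertinent negative (feature changes needed to flip the class):", delta)
```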
Investigating the causal mechanisms of distributional harms in multi-sided platforms. Developing LLM-driven post-processing re-ranking modules to enforce multi-stakeholder fairness constraints.
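A post-processing re-ranker of this kind can be sketched as follows: given upstream relevance scores (for example, produced by an LLM-based scorer) and provider-group labels, reserve a minimum share of the top-k slots per group. The function name, fields, and constraint below are illustrative assumptions, not the implemented module.

```python
# Illustrative sketch of a fairness-constrained post-processing re-ranker:
# enforce a minimum share of top-k exposure per provider group, then fill
# the remaining slots purely by relevance score.
from collections import defaultdict

def fair_rerank(items, k, min_share):
    """items: list of (item_id, group, score); min_share: group -> min fraction of top-k."""
    quotas = {g: int(min_share.get(g, 0) * k) for g in {i[1] for i in items}}
    by_group = defaultdict(list)
    for item in sorted(items, key=lambda i: i[2], reverse=True):
        by_group[item[1]].append(item)

    ranking = []
    # First satisfy each group's minimum-exposure quota with its best items.
    for g, q in quotas.items():
        ranking.extend(by_group[g][:q])
    # Then fill the remaining slots by score alone.
    remaining = [i for i in sorted(items, key=lambda i: i[2], reverse=True)
                 if i not in ranking]
    ranking.extend(remaining[: k - len(ranking)])
    return sorted(ranking, key=lambda i: i[2], reverse=True)[:k]

items = [("a", "majority", 0.95), ("b", "majority", 0.92), ("c", "minority", 0.80),
         ("d", "majority", 0.78), ("e", "minority", 0.60)]
print(fair_rerank(items, k=3, min_share={"minority": 0.34}))
```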