I am currently pursuing a PhD in Secure Machine Learning within the Experimental Distributed Systems (EDS) group at Lancaster University. My research focuses on understanding and mitigating the vulnerability of the data used to train and develop machine learning models to adversarial attacks.
Specifically, I have been investigating and evaluating attacks such as model inversion, in which an attacker reverse-engineers the data used to train a model; membership inference, in which an adversary determines whether a given data point was present in the training dataset; and model evasion, in which a perturbed input is crafted so that the model misclassifies it.
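As a minimal sketch of what an evasion attack looks like in practice, here is a fast gradient sign method (FGSM) style perturbation written in PyTorch. The model, input, and label below are illustrative placeholders rather than anything from my actual research code; the function names and the epsilon value are my own choices for the example.

```python
import torch
import torch.nn as nn

def fgsm_evasion(model: nn.Module, x: torch.Tensor, label: torch.Tensor,
                 epsilon: float = 0.03) -> torch.Tensor:
    """Craft an adversarial example with the fast gradient sign method.

    A small perturbation in the direction of the loss gradient is added to
    the input, nudging the model toward misclassifying it.
    """
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), label)
    loss.backward()
    # Step in the direction that increases the loss, then clamp to a valid range.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

if __name__ == "__main__":
    # Toy classifier and a random "image" batch, purely for illustration.
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
    x = torch.rand(1, 1, 28, 28)   # placeholder input
    y = torch.tensor([3])          # placeholder ground-truth label
    x_adv = fgsm_evasion(model, x, y)
    print(model(x).argmax(1), model(x_adv).argmax(1))
```

Model inversion and membership inference target the privacy of the training data itself, whereas evasion attacks like the one sketched above target the model's behaviour at inference time; evaluating defences against both classes of attack is central to my work.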
In addition to my research, I have been honing my technical skills with tools and technologies such as TensorFlow, PyTorch, TVM, Docker, CUDA, and machine learning pipelines and MLOps practices. These skills are crucial to advancing my research and will be valuable in my future professional endeavors.