Recent news

December 2023: I am happy to share that my proposal, "ProXAI: Private and Robust Explainers in Online Machine Learning," has been selected as one of only two nominees from the University at Albany to apply for the 2024 ORAU Ralph E. Powe Junior Faculty Enhancement Awards Program. Thank you for the great support on my research journey!

October 2023: Our paper "How to Backdoor HyperNetwork in Personalized Federated Learning?" has been accepted at the NeurIPS 2023 Workshop on Backdoors in Deep Learning: The Good, the Bad, and the Ugly (NeurIPS-BUGS 2023). This work explores previously unknown backdoor risks in HyperNet-based personalized federated learning through poisoning attacks. See you in New Orleans in December 2023!

August 2023: Our paper "Differential Privacy in HyperNetworks for Personalized Federated Learning" has been accepted at CIKM 2023. This work introduces a novel approach to preserving user-level differential privacy when training a HyperNetFL model. Vaisnavi Nemala, a sophomore honors undergraduate student at NJIT, is the lead author under my supervision. Congratulations to Vaisnavi on her great efforts!

May 2023: I have successfully defended my Ph.D. dissertation on the topic "Trustworthy Machine Learning through the Lens of Privacy and Security". I will join the College of Emergency Preparedness, Homeland Security & Cybersecurity, University at Albany, SUNY, as a tenure-track Assistant Professor. This position is also part of the Albany Artificial Intelligence Supercomputing Initiative (Albany AI). I am excited to begin my next journey at SUNY Albany.

Feb 2023: Our paper "XRand: Differentially Private Defense against Explanation-Guided Attacks" was named one of the twelve AAAI 2023 Distinguished Papers, selected from 8,777 submissions. It is an incredible honor to have our work acknowledged by esteemed professionals in the field. Thank you for the recognition, Association for the Advancement of Artificial Intelligence (AAAI).

Jan 2023: Our paper "Active Membership Inference Attack under Local Differential Privacy in Federated Learning" has been accepted at AISTATS 2023. In this paper, we introduce a novel Active Membership Inference attack carried out by dishonest federated learning servers, which exploits the correlation among data features through a non-linear decision boundary. This work is critical in revealing previously unknown privacy risks in federated learning systems, even under local differential privacy protection.

Dec 2022: I will give an invited talk on "Trustworthiness in Machine Learning" at the Qatar Computing Research Institute (QCRI), Qatar.

Nov 2022: I received a student travel award from IEEE BigData 2022 to attend the conference. See you in Osaka, Japan!

Nov 2022: Our paper "XRand: Differentially Private Defense against Explanation-Guided Attacks" has been accepted as an Oral presentation in the AAAI 2023. In this paper, we introduce a new concept of achieving local differential privacy (LDP) in the explanations, and from that, we establish a defense, called XRand, against explanation-guided attacks. See you in Washington DC in Feb, 2023!

Nov 2022: Our paper "FLSys: Toward an Open Ecosystem for Federated Learning Mobile Apps" has been accepted in IEEE Transactions of Mobile Computing. We developed a scalable system to open an ecosystem of FL on mobile apps. This is a joint work among NJIT, Kent State University, Qualcomm, and other industrial partners. Stay Tuned for more details!

Nov 2022: I gave an invited talk on "Trustworthiness in Machine Learning from a Privacy Perspective" at Kent State University, OH.

Nov 2022: Our papers "User-Entity Differential Privacy in Learning Natural Language Models" and "Heterogeneous Randomized Response for Differential Privacy in Graph Neural Networks" have been accepted at IEEE BigData 2022. In the first paper, we developed a novel DP mechanism to simultaneously protect the privacy of both data owners and sensitive text contents; this is joint work between NJIT and Adobe. See you in Osaka, Japan!

Oct 2022: My second provisional patent on privacy-aware language model training has been approved. This is joint work with Adobe Inc.

July 2022: Our paper "Lifelong DP: Consistently Bounded Differential Privacy in Lifelong Machine Learning" has been accepted to the Conference on Lifelong Learning Agents - CoLLAs 2022 and will be published in the Proceedings of Machine Learning Research (PMLR).

Jan 2022: Pradnya Desai (an undergraduate student from the CS Department in my research team) has been selected as one of the 80 finalists for the 2022 National Center for Women & Information Technology (NCWIT) Collegiate Award. NCWIT is a non-profit community of over 1,500 universities, companies, non-profits, and government organizations nationwide working to increase the influential and meaningful participation of girls and women in the field of computing. This recognition is based on our research work "Continual Learning with Differential Privacy," in which Pradnya is the lead author under my supervision.

Dec 2021: Our article "Ontology-based Interpretable Machine Learning: A Comprehensive Study" has been accepted with minor revision by the Journal of Combinatorial Optimization - Springer.

Sept 2021: Our honors undergraduate student, Pradnya Desai, has a paper, titled "Continual Learning with Differential Privacy," accepted for oral presentation at the 28th International Conference on Neural Information Processing (ICONIP 2021) (Rank A, CORE2020). This paper establishes the first formal connection between Differential Privacy and Continual Learning.

Apr 2021: My provisional patent on preserving privacy in natural language modeling has been approved. This is joint work with Adobe Inc.

Mar 2020: Our paper "Ontology-based Interpretable Machine Learning for Textual Data" has been accepted for an oral presentation at the IEEE International Joint Conference on Neural Networks (IJCNN 2020). [Github] See you in Glasgow, Scotland, UK!

Sept 2019: Our paper "Ontology-based Interpretable Machine Learning" has been accepted for an oral presentation at the Knowledge Representation & Reasoning Meets Machine Learning (KR2ML) Workshop at NeurIPS'19. See you at NeurIPS'19!