Sarath Shekkizhar, Ph.D.
Staff Research Scientist at Salesforce
Education
Ph.D. in Electrical and Computer Engineering
Aug 2017 - May 2023 • University of Southern California
GPA: 3.93
Advisor: Antonio Ortega
M.S. in Computer Science
Aug 2017 - May 2022 • University of Southern California
GPA: 4.0
M.S. in Electrical Engineering (Computer Vision, Machine Learning)
Aug 2012 - Dec 2013 • University of Southern California
GPA: 3.86
B.Tech. in Electronics and Communication
Jul 2008 - Jun 2012 • National Institute of Technology, Tiruchirappalli
GPA: 9.12
Work Experience
Staff Research Scientist
Oct 2024 - Present • Salesforce
Conducting foundational research on (multi-)agentic systems and on LLM training for improved reasoning and alignment; additional research on voice AI and agentic design.
Member of Technical Staff
Jun 2023 - Oct 2024 • Tenyx (acquired by Salesforce)
Part of the founding team building voice AI for customer support. Researched continual learning, the TenyxChat models, and geometric characterization of LLMs.
Research Intern
Sep 2022 - Dec 2022 • Sunnyvale, CA
Worked on understanding the impact of input data on training graph models and on scalable sampling approaches; achieved a 3x increase in recall for abuse detection.
Software Engineer 2
Mar 2014 - Oct 2016 • KLA Tencor
Milpitas, CA
Designed and developed tools to classify and visualize defect modulations for Process Window Qualification in wafer fabrication.
Publications
22 publications (selected below).
Echoing: Identity Failures when LLM Agents Talk to Each Other
S. Shekkizhar, R. Cosentino, A. Earle, S. Savarese, arXiv preprint, 2025
Convergence dynamics of Agent-to-Agent Interactions with Misaligned objectives
R. Cosentino, S. Shekkizhar, A. Earle, arXiv preprint, 2025
AGI Is Coming... Right After AI Learns to Play Wordle
S. Shekkizhar, R. Cosentino, arXiv preprint, 2025
Out-of-Distribution Detection through Soft Clustering with Non-Negative Kernel Regression
A. Gulati, X. Dong, C. Hurtado, S. Shekkizhar, S. Swayamdipta, A. Ortega, Findings of the Association for Computational Linguistics: EMNLP, 2024
Reasoning in Large Language Models: A Geometric Perspective
R. Cosentino, S. Shekkizhar, arXiv preprint, 2024
Patents
Knowledge base for voice large language model applications
Provisional • US63752613 • Filed: January 2025
Gradient-free optimization of large language models
Provisional • US63752618 • Filed: January 2025
Machine learning model compression
Provisional • US18905761 • Filed: October 2024
Training a target activation sparsity in a neural network
Pending • US18802235 • Filed: August 2024
Domain aware large language model governance
Pending • US18745562 • Filed: June 2024
Fine-tuning machine learning models while retaining accumulated knowledge
Pending • US18496698 • Filed: October 2023
Data sampling using Locality Sensitive Hashing for large scale graph learning
Granted • US63517869 • Filed: August 2023
Optimizing training sets used for setting up inspection-related algorithms
Granted • US10267748 • Filed: April 2019
Awards & Honors
- IEEE Rising Star in Signal Processing, ICASSP 2023
- IEEE Best Student Paper Award, ICIP 2020
- Ming-Hsieh Ph.D. Scholar Finalist, 2022-23
Academic Activities
- Reviewer: IEEE Journals (JSAIT, TSIPN, SPL, TNNLS)
- Reviewer: Conferences (ICASSP, ICLR, NeurIPS, LoG, ICML)
- Mentor: Viterbi Graduate Mentorship Program, Fall 2021
- VGSA Senator: Fall 2017, Spring 2020