Sathyanarayanan N. Aakur

IEEE Senior Member

Assistant Professor
Department of Computer Science and Software Engineering
Auburn University

Email: san0028 -at-

[ LinkedIn ] [ Twitter ]

Highlights and News

Elevated as IEEE Senior Member!
Invited as Area Chair for BMVC 2024 and NeurIPS 2024, and SPC for CODS-COMAD 2024!
Invited as Associate Editor for Pattern Recognition!
Congrats to my student Carson Bulgin on winning the 2024-2025 Auburn University Undergraduate Research Fellowship!
One paper on Shape Graph Matching accepted into the IEEE International Symposium on Biomedical Imaging (ISBI)! Preprint now online! Congrats to Shenyuan (FSU) on his first paper!
One paper on zero-shot genome classification accepted in the IEEE Journal of Biomedical and Health Informatics! Preprint now online!
One paper accepted for Oral presentation at IEEE ICMLA 2023! Congrats to Shubham and my undergrad student Udhav on their second and first papers, respectively!
Serving as Area Chair for IEEE/CVF WACV 2024
Serving as Demo Chair and Area Chair at CVPR 2024! Looking forward to your submissions!
Moved to the CSSE Department at Auburn University!
Serving as Senior Program Committee Member for CODS-COMAD 2024
More news..

About Me

I am an Assistant Professor in the Department of Computer Science and Software Engineering at Auburn University. Previously, I was an Assistant Professor in the Department of Computer Science at Oklahoma State University, Stillwater.

I received my PhD from the University of South Florida, where I worked with Dr. Sudeep Sarkar in the Computer Vision and Pattern Recognition Group and with Dr. Kenneth Malmberg. I received my Master's degree in Management Information Systems from the Muma College of Business at the University of South Florida and my undergraduate degree in Electronics and Communication Engineering from Velammal Engineering College, Anna University, India.


In my research, I am broadly interested in the intersection of computer vision, natural language processing, and psychology. I aim to build intelligent agents that understand the visual world beyond recognition (labels) or captions (sentences), without the need for explicit human supervision through expensive annotations.

This entails developing approaches across several directions. Much of my group's current work focuses on analyzing, modeling, and synthesizing complex video scenes and the semantic structures that can describe them. I also work on applying machine learning to other domains, such as the Internet of Things (IoT), and my group pursues use-inspired artificial intelligence research with applications in agriculture and animal diagnostics.