Revanth Gangi Reddy

I am an MS in CS student at the University of Illinois Urbana-Champaign, advised by Prof. Heng Ji. Previously, I was an AI Resident at IBM Research, New York, where I had the pleasure of working with Vittorio Castelli, Avirup Sil and Salim Roukos.

I graduated from the Indian Institute of Technology Madras in 2018 with a bachelor's degree in computer science. While at IIT Madras, I worked with Prof. Mitesh Khapra and Prof. Balaraman Ravindran.

Email  /  CV  /  LinkedIn  /  Google Scholar


I'm interested in natural language processing, particularly in question answering, information retrieval and knowledge-driven language generation. I also have experience working on domain adaptation, AMR parsing and task-oriented dialog.

Towards Robust Neural Retrieval Models with Synthetic Pre-Training
Revanth Reddy, Vikas Yadav, Arafat Sultan, Martin Franz, Vittorio Castelli, Heng Ji, Avi Sil
arXiv preprint, 2021

We show that synthetic examples generated by a sequence-to-sequence model can improve the robustness of neural IR models, with gains in both in-domain and out-of-domain scenarios.

Synthetic Target Domain Supervision for Open Retrieval QA
Revanth Reddy, Bhavani Iyer, Arafat Sultan, Rong Zhang, Avi Sil, Vittorio Castelli, Radu Florian, Salim Roukos
SIGIR 2021

We explore a synthetic example generation approach to improve the performance of state-of-the-art open-domain, end-to-end question answering systems in a specialized domain such as COVID-19.

Multi-Stage Pretraining for Low-Resource Domain Adaptation
Rong Zhang*, Revanth Reddy*, Arafat Sultan, Efsun Kayi, Anthony Ferrito, Vittorio Castelli, Avi Sil, Todd Ward, Radu Florian, Salim Roukos
EMNLP, 2020

We formulate synthetic pre-training tasks that transfer to downstream tasks by exploiting structure in unlabeled text. We show considerable gains on multiple tasks in the IT domain: question answering, document ranking and duplicate question detection.

Answer Span Correction in Machine Reading Comprehension
Revanth Reddy, Arafat Sultan, Rong Zhang, Efsun Kayi, Vittorio Castelli, Avi Sil
Findings of EMNLP, 2020

We propose an approach for correcting partial-match answers (EM=0, 0&lt;F1&lt;1) into exact matches (EM=1, F1=1) and obtain up to a 1.3% improvement over a RoBERTa-based machine reading comprehension system in both monolingual and multilingual evaluation.

Pushing the Limits of AMR Parsing with Self-Learning
Young-suk Lee*, Ramon Astudillo*, Tahira Naseem*, Revanth Reddy*, Radu Florian, Salim Roukos
Findings of EMNLP, 2020

We propose self-learning approaches to improve AMR parsers via generation of synthetic text and synthetic AMR, as well as refinement of oracle actions. We achieve state-of-the-art performance on the AMR 1.0 and AMR 2.0 benchmarks.

Multi-Level Memory for Task Oriented Dialogs
Revanth Reddy, Danish Contractor, Dinesh Raghu, Sachindra Joshi
NAACL, 2019

We design a novel multi-level memory architecture that retains the natural hierarchy of the knowledge base without breaking it down into subject-relation-object triples. We use separate memories for the dialog context and the KB, allowing the model to learn a different reader for each.

FigureNet: A Deep Learning model for Question-Answering on Scientific Plots
Revanth Reddy, Rahul Ramesh, Ameet Deshpande, Mitesh Khapra
IJCNN, 2019

We design a modular network that uses depth-wise and 1D convolutions for visual reasoning on scientific plots. We achieve state-of-the-art accuracy on the FigureQA dataset, outperforming Relation Networks by 7% with a training time more than an order of magnitude lower.

Edge Replacement Grammars: A Formal Language Approach for Generating Graphs
Revanth Reddy*, Sarath Chandar*, Balaraman Ravindran
SDM, 2019

We propose a graph generative model based on probabilistic edge replacement grammars. We design an algorithm that builds graph grammars by capturing statistically significant sub-graph patterns.

Template from here