Revanth Gangi Reddy

I am an MS in CS student at the University of Illinois Urbana-Champaign, advised by Prof. Heng Ji. Previously, I was an AI Resident at IBM Research, New York, where I had the pleasure of working with Vittorio Castelli, Avirup Sil and Salim Roukos.

I graduated from the Indian Institute of Technology Madras in 2018 with a bachelor's degree in computer science. While at IIT Madras, I worked with Prof. Mitesh Khapra and Prof. Balaraman Ravindran.

Email  /  CV  /  LinkedIn  /  Google Scholar


I'm interested in natural language processing, particularly in question answering, information retrieval and knowledge-driven language generation. I also have experience in domain adaptation, AMR parsing and task-oriented dialog.

Entity-Conditioned Question Generation for Robust Attention Distribution in Neural Information Retrieval
Revanth Reddy, Arafat Sultan, Martin Franz, Avi Sil, Heng Ji
Under submission at ACL Rolling Review

We propose a targeted synthetic data generation method that identifies poorly attended entities and conditions generation on them, teaching neural IR models to attend more uniformly and robustly to all entities in a given passage.

MuMuQA: Multimedia Multi-Hop News Question Answering via Cross-Media Knowledge Extraction and Grounding
Revanth Reddy, Xilin Rui, Manling Li, Xudong Lin, Haoyang Wen, Jaemin Cho, Lifu Huang, Mohit Bansal, Avi Sil, Shih-Fu Chang, Alexander Schwing, Heng Ji
AAAI, 2022

We propose a new benchmark for multimedia question answering over news articles and introduce a novel data generation framework for generating questions that are grounded on objects in images and answered using the news body text.

Synthetic Target Domain Supervision for Open Retrieval QA
Revanth Reddy, Bhavani Iyer, Arafat Sultan, Rong Zhang, Avi Sil, Vittorio Castelli, Radu Florian, Salim Roukos
SIGIR, 2021

We explore using a synthetic example generation approach to improve the performance of state-of-the-art open-domain end-to-end question answering systems in a specialized domain, such as COVID-19.

InfoSurgeon: Cross-Media Fine-grained Information Consistency Checking for Fake News Detection
Yi R. Fung, Chris Thomas, Revanth Reddy, Sandeep Polisetty, Heng Ji, Shih-Fu Chang, Kathleen McKeown, Mohit Bansal, Avi Sil
ACL, 2021

While most previous work addresses fake news detection at the document level, we propose, for the first time, misinformation detection at the knowledge-element level, which not only achieves higher detection accuracy but also makes the results more explainable.

Leveraging Abstract Meaning Representation for Knowledge Base Question Answering
Pavan Kapanipathi*, Ibrahim Abdelaziz*, Srinivas Ravishankar*, ... , Revanth Reddy, Ryan Riegel, Gaetano Rossiello, Udit Sharma, Shrivatsa Bhargav, Mo Yu
Findings of ACL, 2021

We introduce a neuro-symbolic question answering system that leverages AMR for question understanding and uses a pipeline-based approach involving a semantic parser, entity and relationship linkers, and a neuro-symbolic reasoner.

Towards Robust Neural Retrieval Models with Synthetic Pre-Training
Revanth Reddy, Vikas Yadav, Arafat Sultan, Martin Franz, Vittorio Castelli, Heng Ji, Avi Sil
arXiv preprint, 2021

We show that synthetic examples generated using a sequence-to-sequence generator can be effective in improving the robustness of neural IR, with gains in both in-domain and out-of-domain scenarios.

Multi-Stage Pretraining for Low-Resource Domain Adaptation
Rong Zhang*, Revanth Reddy*, Arafat Sultan, Efsun Kayi, Anthony Ferrito, Vittorio Castelli, Avi Sil, Todd Ward, Radu Florian, Salim Roukos
EMNLP, 2020

We formulate synthetic pre-training tasks that can transfer to downstream tasks, by using structure in unlabeled text. We show considerable gains on multiple tasks in the IT domain: question answering, document ranking and duplicate question detection.

Answer Span Correction in Machine Reading Comprehension
Revanth Reddy, Arafat Sultan, Rong Zhang, Efsun Kayi, Vittorio Castelli, Avi Sil
Findings of EMNLP, 2020

We propose an approach for correcting partial-match answers (EM=0, 0<F1<1) into exact-match answers (EM=1, F1=1) and obtain up to a 1.3% improvement over a RoBERTa-based machine reading comprehension system in both monolingual and multilingual evaluation.
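For reference, the exact match (EM) and token-level F1 metrics mentioned above can be sketched as follows. This is a minimal illustration with simplified text normalization, not the official SQuAD-style evaluation script:

```python
from collections import Counter

def exact_match(prediction: str, gold: str) -> int:
    # EM is 1 only when the normalized strings are identical.
    return int(prediction.strip().lower() == gold.strip().lower())

def f1_score(prediction: str, gold: str) -> float:
    # Token-level F1: harmonic mean of precision and recall
    # over tokens shared between prediction and gold answer.
    pred_tokens = prediction.strip().lower().split()
    gold_tokens = gold.strip().lower().split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

# A partial-match answer (EM=0, 0<F1<1) of the kind span correction targets:
print(exact_match("the Eiffel Tower", "Eiffel Tower"))           # 0
print(round(f1_score("the Eiffel Tower", "Eiffel Tower"), 2))    # 0.8
```

Span correction takes such a near-miss prediction and adjusts its boundaries so it becomes an exact match (EM=1, F1=1).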

Pushing the Limits of AMR Parsing with Self-Learning
Young-suk Lee*, Ramon Astudillo*, Tahira Naseem*, Revanth Reddy*, Radu Florian, Salim Roukos
Findings of EMNLP, 2020

We propose self-learning approaches to improve AMR parsers via generation of synthetic text and synthetic AMR, as well as refinement of oracle actions. We achieve state-of-the-art performance on the AMR 1.0 and AMR 2.0 benchmark datasets.

Multi-Level Memory for Task Oriented Dialogs
Revanth Reddy, Danish Contractor, Dinesh Raghu, Sachindra Joshi
NAACL, 2019

We design a novel multi-level memory architecture that retains the natural hierarchy of the knowledge base without breaking it down into subject-relation-object triples. We use separate memories for the dialog context and the KB to learn different memory readers.

FigureNet: A Deep Learning Model for Question-Answering on Scientific Plots
Revanth Reddy, Rahul Ramesh, Ameet Deshpande, Mitesh Khapra
IJCNN, 2019

We design a modular network that uses depth-wise and 1D convolutions for visual reasoning on scientific plots. We achieve state-of-the-art accuracy on the FigureQA dataset, outperforming Relation Networks by 7% with over an order of magnitude less training time.

Edge Replacement Grammars: A Formal Language Approach for Generating Graphs
Revanth Reddy*, Sarath Chandar*, Balaraman Ravindran
SDM, 2019

We propose a graph generative model based on probabilistic edge replacement grammars. We design an algorithm that builds graph grammars by capturing statistically significant subgraph patterns.

Template from here