IIT Bombay, Mumbai
Soumen Chakrabarti is a Professor in the Department of Computer Science and Engineering at IIT Bombay. His current research interests include representation learning for graph search; improved embedding representations for entities, types, relations, and time; complex multi-modal question answering; and code-switched text analysis. He was elected a Fellow of the Indian Academy of Sciences (IASc) in 2015.
Session 1C: Inaugural Lectures by Fellows/Associates
Manju Bansal, IISc
Neural Graph Representation, Retrieval and Alignment
Combinatorial graph matching problems are often intractable. We explore whether suitable graph embeddings can pave the way for good neural solutions in practice. Suppose we have a corpus of graphs (say, molecules) and a query graph Gq, and a 'document' graph Gc in the corpus is relevant to the extent that Gq is isomorphic to a subgraph of Gc. Unlike dot-product or cosine similarity, this notion of relevance is asymmetric. Rather than a 0/1 notion of subgraph isomorphism, we need a continuous score that can be used to rank corpus graphs in response to a query. This continuous relevance score should ideally be informed by node and edge features. A wide variety of applications, such as molecular fingerprint detection, circuit design, and many more, can be modelled in this framework.

We will describe IsoNet, a neural edge alignment formulation for subgraph matching. Instead of aggregating Gq and Gc into single vectors for comparison, IsoNet explicitly models a (soft) assignment P between the nodes and edges of Gq and Gc, and defines an asymmetric relevance surrogate as a function of Gq, Gc, and P. IsoNet is capable of returning explainable top responses using its estimated assignments P.

We will also describe a neural gossiping gadget that approximates the size of the largest connected component in the common subgraph between Gq and Gc. The resulting neural scoring network, McsNet, can thus search for the maximum common subgraph (MCS) between Gq and Gc while encouraging the subgraph to be connected.

While we build IsoNet and McsNet around graph neural networks (GNNs), we recognize several pitfalls of GNNs for real text and graph applications. If time permits, we will discuss an attempt to fix the limitations of symmetric neighbourhood aggregation in GNNs. We will finally focus on knowledge graphs (KGs), where we find a rich confluence of structural, local and global features.
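To make the asymmetry concrete, here is a minimal sketch of a hinge-style relevance surrogate over a soft alignment P, in the spirit described above. All names are illustrative, and the alignment here is a simple softmax relaxation rather than the learned assignment network IsoNet actually uses:

```python
import numpy as np

def soft_alignment(Hq, Hc, temperature=1.0):
    """Soft assignment of query nodes to corpus nodes.

    Hq: (nq, d) query node embeddings; Hc: (nc, d) corpus node embeddings.
    A row-wise softmax over pairwise similarities gives a doubly-indexed
    matrix P whose rows sum to 1 (a crude stand-in for a learned P).
    """
    sims = Hq @ Hc.T
    e = np.exp(sims / temperature)
    return e / e.sum(axis=1, keepdims=True)

def asymmetric_relevance(Hq, Hc):
    """Hinge-style surrogate: penalize only query features NOT covered
    by the aligned corpus features, so score(Gq, Gc) != score(Gc, Gq)
    in general -- matching the asymmetric containment semantics."""
    P = soft_alignment(Hq, Hc)
    gap = np.maximum(Hq - P @ Hc, 0.0)  # uncovered query mass, ReLU(Hq - P Hc)
    return -gap.sum()                   # higher (closer to 0) = more relevant
```

When the corpus graph "covers" the query (every query embedding is reproducible from aligned corpus embeddings), the penalty vanishes; swapping the arguments does not give the same score, which is exactly the containment-style behaviour a dot product cannot express.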
We will describe how graph representation using complex vectors, coupled with transformer-based text representation, can be exploited to jointly solve multitask KG 'completion' (extrapolation) and alignment between KGs in different languages.
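As a small illustration of complex-vector KG scoring, a widely used scorer of this family is the ComplEx-style trilinear product, which handles asymmetric relations naturally. This is a hedged sketch, not the speaker's exact model; the function name and toy vectors are assumptions:

```python
import numpy as np

def complex_score(e_s, w_r, e_o):
    """ComplEx-style triple score Re(<e_s, w_r, conj(e_o)>).

    e_s, w_r, e_o: complex-valued embedding vectors (np.complex128).
    Taking the conjugate of the object makes the score asymmetric in
    (subject, object), so r(s, o) and r(o, s) can score differently.
    """
    return float(np.real(np.sum(e_s * w_r * np.conj(e_o))))
```

For KG completion, one would score every candidate object for a (subject, relation, ?) query with this function and rank candidates by score.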