Graph learning has grown into an established sub-field of machine learning in recent years. Researchers have focused on developing novel model architectures, theoretical understanding, scalable algorithms and systems, and successful applications of graph learning across industry and science. Following the success of the New Frontiers in Graph Learning (GLFrontiers) Workshop at NeurIPS 2022, we hope to continue promoting the exchange of ideas and discussions on the future of graph learning at NeurIPS 2023.
Despite the success of graph learning in various applications, recent trends in machine learning research, especially the push towards foundation models and large language models (LLMs), have posed challenges for the graph learning field. For example, regarding model architecture, Transformer-based models have been shown to outperform graph neural networks on certain small-graph benchmarks. In terms of usability, with language serving as a generic user interface, it remains a research frontier to explore whether natural language can also be used to interact with ubiquitous graph-structured data, and whether it is feasible to build generic foundation models for graphs. Lastly, while graph learning has recently achieved exciting results in molecule and protein design, how graph learning can accelerate scientific discoveries in other disciplines remains an open question.
The primary goal of this workshop is to expand the impact of graph learning beyond its current boundaries. We believe that graphs, or relational data, are a universal language for describing the complex world. Ultimately, we hope graph learning will become a generic tool for learning from and understanding any type of (structured) data. In GLFrontiers 2023, we specifically aim to discuss the future of graph learning in the era of foundation models and to envision how graph learning can contribute to scientific discoveries.
Scope and Topics
The workshop will include submissions, talks, and poster sessions on a wide variety of challenges, perspectives, and solutions regarding the new frontiers of graph learning, including but not limited to:
Foundation models for graphs and relational data: Innovative ideas and perspectives on building generic foundation models for ubiquitous graph-structured and relational data. For example, there have been recent attempts at building foundation models for molecular graphs, drug pairs, and proteins. Foundation large language models also bring new opportunities for interacting with structured data through a language interface.
Graph/Knowledge enhanced LLMs: Ideas and proofs-of-concept for using structured knowledge to enhance the capability of LLMs to return factual, private, and/or domain-specific answers. Examples include retrieval-augmented LLMs, knowledge-enhanced LLMs, and improved LLM reasoning.
Graph AI for science: Proofs-of-concept and perspectives on discovering graph and relational data in various scientific domains, and on solving the resulting problems with graph AI and machine learning. Recent works have achieved state-of-the-art results with graph learning in sciences such as chemistry, biology, environmental science, physics, and neuroscience.
Multimodal learning with graphs: Graphs can often be leveraged in multimodal learning to provide rich information that complements visual and text data. For example, recent works have combined scene graphs with diffusion models for more faithful image generation. Multimodal graph learning has also been shown to be critical for learning gene embeddings from multi-omics and multi-tissue data. Joint models of graphs and text further improve the state of the art in domains such as molecules, logical reasoning, and QA.
Trustworthy graph learning: Trustworthy graph learning is a rapidly developing field that aims to ensure that graph learning models align with human values and are applicable in mission-critical use cases. We welcome work on all aspects of trustworthy graph representation learning, including adversarial robustness, explainable ML, ML fairness, causal inference, privacy, federated learning, etc.
Should you have any questions, please reach out to us via email: