Integrating Graphs With Large Language Models: Methods and Prospects

Author(s)
Pan, Shirui
Zheng, Yizhen
Liu, Yixin
Editor(s)

Murugesan, San

Date
2024
Abstract

Large language models (LLMs) such as Generative Pre-trained Transformer 4 (GPT-4) have emerged as frontrunners, showing unparalleled prowess in diverse applications, including answering queries, code generation, and more. In parallel, graph-structured data, an intrinsic data type, is pervasive in real-world scenarios. Merging the capabilities of LLMs with graph-structured data has therefore become a topic of keen interest. This article divides such integrations into two predominant categories. The first leverages LLMs for graph learning, where LLMs can not only augment existing graph algorithms but also serve as prediction models for various graph tasks. Conversely, the second category underscores the pivotal role of graphs in advancing LLMs. Mirroring human cognition, which tackles complex problems through structured thinking, adopting graphs in either reasoning or collaboration can significantly boost the performance of LLMs on various complicated tasks. We also discuss and propose open questions on integrating LLMs with graph-structured data to guide future directions for the field.

Journal Title

IEEE Intelligent Systems

Volume

39

Issue

1

Subject

Engineering

Information and computing sciences

Citation

Pan, S; Zheng, Y; Liu, Y, Integrating Graphs With Large Language Models: Methods and Prospects, IEEE Intelligent Systems, 2024, 39 (1), pp. 64-68
