Table of Contents

Title Page

Abstract

Contents

Chapter 1. Introduction 17

Chapter 2. Fundamentals 28

2.1. Heat Kernel on Graphs 28

2.2. Gromov-Wasserstein Distance 33

Chapter 3. Graph Representation and Metric 36

3.1. Spectral Heat Kernel Gromov-Wasserstein Discrepancy 37

3.1.1. Spectral Distributions and Heat Kernels 37

3.1.2. Spectral Heat Kernel Gromov-Wasserstein Discrepancy 39

3.1.3. SHGW Optimization 40

3.2. Approximated SHGW 43

3.3. Experiments 47

3.3.1. Benchmark Datasets 47

3.3.2. Samples from Spectral Distributions 51

3.3.3. Samples from Heat Kernels 51

3.3.4. Graph Classification Results 53

3.3.5. Influence of Hyperparameters β and ε 54

3.3.6. Computational Complexity Analysis 55

Chapter 4. Graph Classification with Neural Networks 60

4.1. Heat Kernel Graph Networks 60

4.1.1. Related Works 61

4.1.2. Graph Neural Networks 64

4.1.3. Heat Kernel Graph Networks 65

4.1.4. Optimal Diffusion Rate 66

4.2. Extension to Multi-Head Heat Kernels 67

4.3. Experiments 71

4.3.1. Benchmark Datasets 71

4.3.2. Graph Classification Results 72

4.3.3. Effect of Multi-Head Heat Kernels 75

Chapter 5. Conclusion and Future Work 78

5.1. Contribution 78

5.1.1. GW Framework with Heat Kernel 79

5.1.2. GNN Framework with Heat Kernel Convolution 80

5.2. Future Work 81

Bibliography 83

List of Tables

Table 3.1. Graph dataset description. 50

Table 3.2. Graph classification accuracy using 1-NN classifier 55

Table 3.3. Computation time, second... 56

Table 4.1. Statistics of collected datasets. 72

Table 4.2. Graph classification accuracy in percent 75

List of Figures

Figure 1.1. Schematic picture of graph related tasks. (a) graph classification between different types of graphs. (b)... 19

Figure 1.2. Graphs in vector space. Similar graphs are mapped near each other... 21

Figure 1.3. Graphs with different sizes and structures. Graph (a) and (b) have... 24

Figure 1.4. Node aggregation using heat kernel. Nodes and weights change along... 27

Figure 2.1. Heat kernel on Zachary's karate club graph Girvan and Newman... 32

Figure 2.2. Schematic diagram of Gromov-Wasserstein distance on two graphs.... 35

Figure 3.1. The scheme of the proposed Gromov-Wasserstein discrepancy. (a) presents two graphs, Ga and Gb,... 41

Figure 3.2. Influence of hyperparameters β and ε on entropy of couplings. The... 48

Figure 3.3. Samples from MUTAG dataset. The graphs have different sizes and numbers of edges (top). Graph (a)... 52

Figure 3.4. Five representative graph examples (left). SHGW between five... 57

Figure 3.5. Graphs in SHGW order. MUTAG graphs are arranged in ascending order of discrepancy for the reference... 58

Figure 3.6. Influence of β and ε on classification accuracy on the MUTAG dataset.... 59

Figure 4.1. An adjacency matrix to heat kernel matrices. Heat kernels can vary... 62

Figure 4.2. Example of two non-isomorphic graphs. Input features (top) and... 63

Figure 4.3. HKGN model architecture. m heat kernels are generated from the input graph. Multi-head heat kernel... 69

Figure 4.4. Graph classification comparison (avg. of 10 cross-validation) between baselines and HKGN. Bars within... 74

Figure 4.5. Accuracy of C-HKGN and M-HKGN. The number of heat kernels... 76

Figure 4.6. Heatmap of heat kernel samples from PROTEINS dataset. 77

Abstract

A graph is a form that can efficiently represent structured data, and it is already a conventional tool in a wide range of domains, from bioinformatics and pharmacology to social networks and network systems. In this thesis, we aim to develop unsupervised and supervised frameworks for graph representation learning by combining methods from spectral graph theory. The eigenvalues and the heat kernel are important components of spectral graph theory that describe graph structure and the flow of information through the edges of a graph. First, the proposed unsupervised framework is developed using spectral distributions obtained from the eigenvalues and heat kernels on graphs. These two components are used for graph comparison with an optimal-transport-like method, the Gromov-Wasserstein discrepancy. The framework is suitable for graph representation in that it satisfies three graph properties: permutation invariance, structural adaptivity, and scale adaptivity. Additionally, we introduce an approximation of the Gromov-Wasserstein discrepancy to reduce the computational complexity of large-scale graph comparison. Experiments on benchmark datasets, including molecules, proteins, and scientific collaboration networks, are conducted to validate performance on the graph classification task. We further investigate samples of real-world graphs and check whether graphs that the proposed method judges similar are actually similar and whether graphs it judges dissimilar are dissimilar.
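As a concrete illustration of the ingredients in this first framework, the sketch below computes the heat kernel exp(-βL) of each graph from its normalized Laplacian and compares the two kernels with a Gromov-Wasserstein discrepancy. This is a minimal sketch and not the thesis implementation: the uniform node distributions, the choice of β, and the use of the POT library's plain gromov_wasserstein2 solver (rather than the proposed SHGW or its approximation) are illustrative assumptions.

import numpy as np
import networkx as nx
import ot  # POT: Python Optimal Transport (pip install pot)


def heat_kernel(G: nx.Graph, beta: float = 1.0) -> np.ndarray:
    """Heat kernel exp(-beta * L) of the normalized graph Laplacian."""
    L = nx.normalized_laplacian_matrix(G).toarray()
    eigvals, eigvecs = np.linalg.eigh(L)  # spectral decomposition L = U diag(lambda) U^T
    return eigvecs @ np.diag(np.exp(-beta * eigvals)) @ eigvecs.T


def heat_kernel_gw(Ga: nx.Graph, Gb: nx.Graph, beta: float = 1.0) -> float:
    """Gromov-Wasserstein discrepancy between the heat kernels of two graphs."""
    Ca, Cb = heat_kernel(Ga, beta), heat_kernel(Gb, beta)
    p = np.full(Ca.shape[0], 1.0 / Ca.shape[0])  # uniform node weights (an assumption)
    q = np.full(Cb.shape[0], 1.0 / Cb.shape[0])
    return ot.gromov.gromov_wasserstein2(Ca, Cb, p, q, loss_fun="square_loss")


if __name__ == "__main__":
    d = heat_kernel_gw(nx.cycle_graph(10), nx.path_graph(12), beta=1.0)
    print(f"heat-kernel GW discrepancy: {d:.4f}")

Because the heat kernel is built from the Laplacian spectrum and the Gromov-Wasserstein coupling matches nodes across graphs of different sizes, a comparison of this form is permutation invariant and does not require the two graphs to have the same number of nodes.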

Second, we propose a supervised graph representation learning method based on graph neural networks. The model incorporates the heat kernel into graph convolution and satisfies the same graph properties. The diffusion rate of the heat kernel is usually selected by trial and error; instead, we replace it with a learnable parameter so that the model determines the diffusion rate, and thereby the heat kernel, during training. Moreover, heat kernel trace normalization is proposed to enhance the discriminability of the model on regular graphs; it addresses the problem that conventional graph convolution methods represent non-isomorphic regular graphs identically. Multi-head heat kernels are applied to the model to further learn the local and global structure of the graph. We experiment on benchmark datasets, show competitive graph classification performance against baseline methods, and verify whether the multi-head heat kernels improve performance. Additionally, we analyze the heat kernels in the model and find that the learned heat kernels on real-world graphs exploit both local and global structure.
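To make this second framework concrete, the following is a minimal PyTorch sketch of a heat kernel graph convolution whose diffusion rate is a learnable parameter, so that β is optimized jointly with the rest of the network. The layer name, the single-head setting, the specific form of the trace normalization, and the linear/ReLU transform are illustrative assumptions, not the thesis code.

import torch
import torch.nn as nn


class HeatKernelConv(nn.Module):
    """Aggregates node features with a heat kernel exp(-beta * L); beta is learned."""

    def __init__(self, in_dim: int, out_dim: int, init_beta: float = 1.0):
        super().__init__()
        # parameterize log(beta) so the learned diffusion rate stays positive
        self.log_beta = nn.Parameter(torch.log(torch.tensor(init_beta)))
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, L: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
        # L: (n, n) symmetric normalized Laplacian, x: (n, in_dim) node features
        beta = self.log_beta.exp()
        eigvals, eigvecs = torch.linalg.eigh(L)  # differentiable eigendecomposition
        H = eigvecs @ torch.diag(torch.exp(-beta * eigvals)) @ eigvecs.T
        H = H * (L.shape[0] / torch.trace(H))    # trace normalization (illustrative form)
        return torch.relu(self.linear(H @ x))    # diffuse features, then transform


if __name__ == "__main__":
    n, d = 6, 8
    A = (torch.rand(n, n) > 0.5).float()
    A = ((A + A.T) > 0).float().fill_diagonal_(0)        # random symmetric adjacency
    deg = A.sum(dim=1).clamp(min=1)
    L = torch.eye(n) - torch.diag(deg.rsqrt()) @ A @ torch.diag(deg.rsqrt())
    out = HeatKernelConv(d, 16)(L, torch.randn(n, d))
    print(out.shape)  # torch.Size([6, 16])

A multi-head variant in the spirit of Section 4.2 would instantiate several such layers with independent log_beta parameters and combine their outputs, letting small and large diffusion rates capture local and global structure, respectively.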