Stanford CS224W: Machine Learning with Graphs | 2021 | Lecture 3.1 - Node Embeddings

Stanford Online
20 Apr 2021 · 14:44

Summary

TL;DR: This lecture introduces node embeddings for graph representation learning, aiming to automate feature engineering. It discusses traditional machine learning approaches on graphs and the shift towards learning features automatically without manual intervention. The lecture explains the concept of mapping nodes into a low-dimensional space where node similarities in the network are preserved in the embedding space, useful for various prediction tasks. It also touches on methods like DeepWalk and node2vec and their unsupervised nature, focusing on network structure rather than node labels or features.

Takeaways

  • 📚 The lecture introduces node embeddings, a technique to represent nodes in a graph as vectors in a continuous space.
  • 🔍 Traditional machine learning on graphs involves extracting features that describe the topological structure and attributes of the network.
  • 🤖 Graph representation learning aims to automate the feature engineering process by learning features directly from the graph structure.
  • 🧭 The goal of node embeddings is to map nodes into a space where the structure of the network is captured, allowing for similarity measurements between nodes.
  • 📈 Node similarity in the embedding space is often measured by the dot product, which correlates with the cosine of the angle between vectors.
  • 🌐 The embeddings can be used for various tasks such as node classification, link prediction, graph classification, anomaly detection, and clustering.
  • 📊 DeepWalk, introduced in 2014, is an early method for learning node embeddings by treating random walks as sentences in a language model.
  • 🔑 The adjacency matrix is used to represent the graph without assuming any features or attributes on the nodes.
  • 🔄 The encoder-decoder framework is used to formulate the task of learning node embeddings, where the encoder maps nodes to embeddings and the decoder maps back to a similarity score.
  • 🔢 The embedding matrix Z is a large matrix where each column corresponds to an embedding vector for a node, and its size grows with the number of nodes.
  • 🚀 Methods like DeepWalk and node2vec are unsupervised, learning embeddings based on network structure without relying on node labels or features.

Q & A

  • What is the main focus of Lecture 3?

    -The main focus of Lecture 3 is node embeddings, a graph representation learning technique that aims to automatically learn feature representations of a network without manual feature engineering.

  • What is traditional machine learning in graphs?

    -Traditional machine learning in graphs involves extracting topological features from a given input graph and combining them with attribute-based information to train a classical machine learning model for predictions.

  • What is the goal of graph representation learning?

    -The goal of graph representation learning is to alleviate the need for manual feature engineering by automatically learning the features of the network structure that can be used for various prediction tasks.

  • What is a node embedding?

    -A node embedding is a vector representation of a node in a graph, where the vector captures the structure of the underlying network and is used to indicate the similarity between nodes in the network.

  • Why is creating node embeddings useful?

    -Creating node embeddings is useful because it allows for the automatic encoding of network structure information, which can be used for various downstream tasks such as node classification, link prediction, graph classification, anomaly detection, and clustering.

  • What is DeepWalk and how does it relate to node embeddings?

    -DeepWalk is a method introduced in 2014 that learns node embeddings by simulating random walks on the graph. It is significant because it was one of the pioneering approaches to learning node embeddings.

  • How are node embeddings represented mathematically in the lecture?

    -In the lecture, node embeddings are represented as coordinates in a d-dimensional space, collected in a matrix denoted Z; similarity in the embedding space is measured by the dot product of the coordinates of two nodes and is optimized to approximate their similarity in the graph.

  • What is the role of the adjacency matrix in graph representation learning?

    -The adjacency matrix plays a crucial role in graph representation learning as it represents the graph structure without assuming any features or attributes on the nodes, allowing the learning algorithms to focus solely on the network's topology.

  • What is the difference between a shallow encoder and a deep encoder in the context of node embeddings?

    -A shallow encoder is a simple embedding lookup, where the parameters to optimize form the embedding matrix Z. In contrast, a deep encoder, such as a graph neural network (covered in later lectures), computes node embeddings with a more complex, multi-layer function.

  • How are node similarities defined in the context of node embeddings?

    -Node similarities are defined based on random walks in the network. The embeddings are optimized so that nodes that are similar according to the random-walk similarity measure are close together in the embedding space.

  • What are some of the practical methods mentioned for learning node embeddings?

    -The lecture mentions DeepWalk and node2vec as practical methods for learning node embeddings. These methods aim to capture the network structure in a low-dimensional vector space.

Outlines

00:00

🌐 Introduction to Node Embeddings

The lecture introduces the concept of node embeddings in the context of graph representation learning. It discusses the traditional approach to machine learning on graphs, where topological features are extracted and combined with attribute-based information to train models for predictions. The lecturer highlights the manual effort this feature engineering requires and the desire to automate it by learning the structure of the network directly, so that no manual feature engineering is needed. The goal is to create an embedding space in which the similarity of node embeddings reflects the similarity of the corresponding nodes in the network, which can then be used for various prediction tasks. The lecture also introduces the use of adjacency matrices to represent graphs without assuming any node features.

05:01

🔍 Defining Node Similarity and Embedding Space

This paragraph delves into how to represent a graph using an adjacency matrix and the concept of encoding nodes into an embedding space. The lecturer explains that the goal is to map nodes in such a way that their similarity in the embedding space approximates their similarity in the graph. The dot product is introduced as the similarity measure in the embedding space; it relates to the cosine of the angle between the embedding vectors. The paragraph also discusses the encoder-decoder framework, where the encoder maps nodes to embeddings and the decoder maps from embeddings to a similarity score. The lecturer emphasizes the need to define a node similarity function and an objective function that connects that similarity with the embeddings. The approach to learning the embeddings is described as task-independent, meaning it is not trained on a specific prediction task or on node labels.
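As a concrete illustration of the dot-product decoder, here is a minimal sketch with made-up embedding vectors (NumPy assumed; this is not code from the lecture):

```python
# Minimal sketch: the dot product as the "decoder" between two node embeddings.
# z_u and z_v are illustrative d-dimensional embedding vectors.
import numpy as np

z_u = np.array([0.2, 0.9, -0.1])
z_v = np.array([0.3, 0.8,  0.0])

dot = float(z_u @ z_v)                                   # similarity score used in the lecture
cos = dot / (np.linalg.norm(z_u) * np.linalg.norm(z_v))  # cosine of the angle between them
print(f"dot product = {dot:.3f}, cosine = {cos:.3f}")
```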

10:01

📈 Scalability and Methods for Learning Node Embeddings

The lecturer addresses the scalability issue of learning node embeddings, noting that the parameter count grows with the number of nodes in the network. While these methods can be scaled to millions of nodes, they may become slow due to the need to estimate parameters for each node. The paragraph introduces two methods for learning node embeddings: DeepWalk and node2vec. The lecturer summarizes the encoder-decoder framework, mentioning that a shallow encoder is used, which is simply an embedding lookup, and the decoder is based on node similarity through dot product. The objective is to maximize the dot product for node pairs that are similar according to a node similarity function. The paragraph concludes by discussing the unsupervised nature of these methods, where node embeddings are learned without utilizing node labels or features, focusing solely on capturing network structure.
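The following is a minimal sketch of the shallow encoder as an embedding lookup, together with the parameter count the lecturer worries about; the matrix Z, the helper encode, and all sizes are illustrative assumptions, not the lecture's code:

```python
# Minimal sketch: a shallow encoder is just a lookup into an embedding matrix Z
# with one column per node, so the number of parameters is d * num_nodes.
import numpy as np

d, num_nodes = 8, 5
Z = np.random.default_rng(0).normal(size=(d, num_nodes))   # parameters to learn

def encode(v: int) -> np.ndarray:
    """ENC(v) = Z @ one_hot(v), i.e. simply column v of Z."""
    one_hot = np.zeros(num_nodes)
    one_hot[v] = 1.0
    return Z @ one_hot                 # identical to Z[:, v]

print(np.allclose(encode(3), Z[:, 3]))          # True: encoding is a column lookup
print("parameters:", Z.size)                    # d * num_nodes

# The lecture's scalability example: one billion nodes and d = 1,000
print(f"{1_000_000_000 * 1_000:,} parameters")  # 1,000,000,000,000
```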

Keywords

💡Node Embeddings

Node embeddings refer to the process of transforming nodes in a network into a low-dimensional vector space where the relative distances between nodes represent their similarity or relationship in the original network. In the context of the video, node embeddings are a core concept for graph representation learning, aiming to alleviate the need for manual feature engineering by automatically capturing the structure of the network for various machine learning tasks.

💡Graph Representation Learning

Graph representation learning is an approach to automatically learn features from graph-structured data. It is mentioned in the video as a method to avoid manual feature engineering by learning the structure of the network automatically. The goal is to create a representation where nodes that are similar in the network are also close in the embedding space, which can then be used for tasks like node classification or link prediction.
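To make the downstream use concrete, here is a hedged sketch of feeding already-learned node embeddings into an off-the-shelf classifier; the arrays Z and labels are random stand-ins, and scikit-learn is assumed to be available:

```python
# Minimal sketch (not from the lecture): using precomputed node embeddings as
# features for a downstream node-classification task with scikit-learn.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
num_nodes, d = 100, 16
Z = rng.normal(size=(num_nodes, d))          # stand-in for learned embeddings
labels = rng.integers(0, 2, size=num_nodes)  # stand-in for node labels

X_train, X_test, y_train, y_test = train_test_split(Z, labels, test_size=0.25, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```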

💡Feature Engineering

Feature engineering is the process of using domain knowledge to select or construct features that help improve the performance of machine learning models. The video discusses how traditional machine learning on graphs relies heavily on feature engineering to describe the topological structure of the network for making predictions. However, graph representation learning aims to automate this process.

💡Adjacency Matrix

An adjacency matrix is a square matrix used to represent a graph. It is mentioned in the video as a way to represent a graph without assuming any features or attributes on the nodes. Each element of the matrix indicates whether pairs of nodes are connected by an edge. The goal of node embeddings is to encode this structure into a lower-dimensional space.
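A minimal sketch of building such an adjacency matrix for a small undirected graph (the edge list is purely illustrative):

```python
# Minimal sketch: adjacency matrix A of a small undirected graph with no
# node features, only structure. The edge list below is purely illustrative.
import numpy as np

num_nodes = 4
edges = [(0, 1), (0, 2), (1, 2), (2, 3)]

A = np.zeros((num_nodes, num_nodes), dtype=int)
for u, v in edges:
    A[u, v] = 1
    A[v, u] = 1   # undirected: the matrix is symmetric

print(A)
```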

💡DeepWalk

DeepWalk is a method introduced in the video that is used to learn node embeddings by simulating random walks on the graph and then training a model to predict the surrounding nodes based on the walk. It is an example of an unsupervised learning technique for creating node embeddings that capture network structure.
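Below is a hedged sketch of the DeepWalk idea rather than the authors' reference implementation: uniform random walks are generated and passed as "sentences" to a skip-gram model, here gensim's Word2Vec (gensim 4.x assumed; the edge list and hyperparameters are illustrative):

```python
# Sketch of the DeepWalk idea (not the authors' reference implementation):
# simulate uniform random walks and feed them to a skip-gram model so that
# nodes co-occurring on walks get similar embeddings.
import random
from collections import defaultdict
from gensim.models import Word2Vec   # assumes gensim 4.x is installed

edges = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4), (4, 5), (3, 5)]  # illustrative
adj = defaultdict(list)
for u, v in edges:
    adj[u].append(v)
    adj[v].append(u)

def random_walk(start, length=10):
    walk = [start]
    while len(walk) < length:
        walk.append(random.choice(adj[walk[-1]]))   # uniform next-step choice
    return [str(n) for n in walk]                   # Word2Vec expects token strings

walks = [random_walk(node) for node in adj for _ in range(20)]
model = Word2Vec(walks, vector_size=16, window=4, min_count=0, sg=1, epochs=5)
print(model.wv["0"][:4])   # first few coordinates of node 0's embedding
```

The original DeepWalk paper uses hierarchical softmax, and node2vec adds negative sampling and alias-table walk sampling; those details are omitted in this sketch.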

💡Node2Vec

Node2Vec is another method for learning node embeddings, which is briefly mentioned alongside DeepWalk. It improves upon DeepWalk by allowing for more flexible and biased random walks, enabling the exploration of different types of network structures. The method uses a skip-gram model to learn the embeddings.
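A hedged sketch of node2vec's second-order walk bias follows: the unnormalized weight of stepping from the current node to a candidate neighbor depends on the previous node through the return parameter p and the in-out parameter q (the function and toy graph are illustrative, not the authors' code):

```python
# Sketch of node2vec's biased (second-order) walk: the unnormalized weight of
# stepping from the current node v to a neighbor x depends on the previous
# node t via the return parameter p and the in-out parameter q.
def transition_weight(t, v, x, adj, p=1.0, q=1.0):
    if x == t:              # step straight back to the previous node
        return 1.0 / p
    if x in adj[t]:         # x is also a neighbor of t (stays "close to home")
        return 1.0
    return 1.0 / q          # x moves further away from t (exploration)

adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}   # illustrative undirected graph
# walk ... -> t=0 -> v=2; weights for each candidate next node x:
for x in sorted(adj[2]):
    print(x, transition_weight(t=0, v=2, x=x, adj=adj, p=0.5, q=2.0))
```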

💡Random Walks

Random walks are a technique used in graph analysis to simulate a random surfer's path through the network. In the context of the video, random walks are used to define a similarity measure between nodes, which is then optimized in the learning of node embeddings. The idea is that nodes that are likely to be visited together in a random walk should have similar embeddings.

💡Encoder-Decoder Framework

The encoder-decoder framework is a concept used in the video to describe the process of learning node embeddings. The encoder maps nodes to embeddings, and the decoder maps from embeddings back to a similarity score. The framework is used to define an objective function that aims to maximize the dot product of similar node pairs, indicating that their embeddings are close together.
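A minimal sketch of this optimization in NumPy, using a negative-sampling-style objective that raises the dot product of "similar" pairs and lowers it for random pairs; the pair list, learning rate, and update rule are illustrative assumptions rather than the lecture's exact objective:

```python
# Minimal sketch of the optimization idea: adjust embeddings so the dot product
# is high for node pairs deemed similar and low for random "negative" pairs.
import numpy as np

rng = np.random.default_rng(0)
num_nodes, d, lr = 6, 8, 0.05
Z = rng.normal(scale=0.1, size=(num_nodes, d))      # shallow encoder: one row per node

similar_pairs = [(0, 1), (1, 2), (3, 4), (4, 5)]    # e.g. from random-walk co-occurrence

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for _ in range(200):
    for u, v in similar_pairs:
        n = rng.integers(num_nodes)                  # random negative sample
        while n in (u, v):
            n = rng.integers(num_nodes)
        zu, zv, zn = Z[u].copy(), Z[v].copy(), Z[n].copy()
        # gradient ascent on log sigma(zu.zv) + log sigma(-zu.zn)
        g_pos = 1.0 - sigmoid(zu @ zv)
        g_neg = sigmoid(zu @ zn)
        Z[u] += lr * (g_pos * zv - g_neg * zn)
        Z[v] += lr * g_pos * zu
        Z[n] -= lr * g_neg * zu

print("similar pair (0, 1):", Z[0] @ Z[1])           # driven up during training
print("random pair  (0, 5):", Z[0] @ Z[5])           # not encouraged to be high
```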

💡Dot Product

The dot product is a mathematical operation that measures the similarity between two vectors. In the video, the dot product is used as a similarity metric in the embedding space. A high dot product between two node embeddings indicates that the nodes are similar in the original network, based on the cosine of the angle between their vectors.

💡Unsupervised Learning

Unsupervised learning is a type of machine learning where the model learns from data that has not been labeled. The video mentions that methods like DeepWalk and node2vec are unsupervised, meaning they do not use node labels to learn the embeddings. Instead, they rely solely on the network structure to capture similarities between nodes.

Highlights

Introduction to node embeddings in graph representation learning.

Traditional machine learning on graphs involves extracting topological features for prediction tasks.

Graph representation learning aims to automate the feature engineering process.

The concept of learning node embeddings to represent network structure without manual feature engineering.

Node embeddings map nodes into a d-dimensional space to capture network structure.

Similarity between node embeddings indicates their similarity in the network.

Applications of node embeddings include node classification, link prediction, and anomaly detection.

The DeepWalk paper (2014) is used to illustrate node embeddings by visualizing them in a two-dimensional space.

Node embeddings can reveal structural patterns in small toy networks.

The adjacency matrix represents a graph without assuming any node features.

The goal is to encode nodes so that embedding similarity approximates graph similarity.

Encoder maps nodes to embeddings; decoder maps embeddings to similarity scores.

Dot product is commonly used as a decoder to measure similarity in the embedding space.

The simplicity of the encoder as an embedding-lookup matrix.

Challenges of estimating a large number of parameters for node embeddings in large graphs.

Once estimated, retrieving node embeddings is as simple as a matrix lookup.

DeepWalk and node2vec are methods for learning node embeddings from random walks.

Node embeddings are task-independent and do not require node labels or features.

The importance of defining node similarity for optimizing embeddings.

Transcripts

play00:04

This is Lecture 3 of our class and we are going to talk today about node embeddings.

play00:10

So the way we think of this is the following.

play00:13

Um, the inter- what we talked

play00:15

on last week was about traditional machine learning in graphs,

play00:19

where the idea was that given an input graph,

play00:21

we are going to extract some node link or graph level

play00:24

features that basically describe the topological structure,

play00:29

uh, of the network,

play00:30

either around the node,

play00:31

around a particular link or the entire graph.

play00:34

And then we can take that topological information,

play00:36

compare, um, er, uh, um,

play00:38

combine it with the attribute-based information to

play00:43

then train a classical machine learning model

play00:46

like a support vector machine or a logistic regression,

play00:48

uh, to be able to make predictions.

play00:51

So, um, in this sense, right,

play00:53

the way we are thinking of this is that we are given an input graph here.

play00:57

We are then, uh,

play00:58

creating structure- structured features or structural features,

play01:01

uh, of this graph so that then we can apply

play01:03

our learning algorithm and make, uh, prediction.

play01:06

And generally most of the effort goes here into the feature engineering,

play01:10

uh, where, you know, we are as,

play01:12

uh, engineers, humans, scientists,

play01:14

we are trying to figure out how to best describe,

play01:16

uh, this particular, um,

play01:18

network so that, uh,

play01:19

it would be most useful, uh,

play01:21

for, uh, downstream prediction task.

play01:23

Um, and, uh, the question then becomes,

play01:26

uh, can we do this automatically?

play01:28

Can we kind of get away from, uh, feature engineering?

play01:31

So the idea behind graph representation learning is that we wanna

play01:35

alleviate this need to do manual feature engineering every single time, every time for,

play01:40

uh, every different task,

play01:41

and we wanna kind of automatically learn the features,

play01:45

the structure of the network,

play01:46

um, in- that we are interested in.

play01:49

And this is what is called, uh,

play01:50

representation learning so that no manual, uh,

play01:53

feature engineering is, uh, necessary, uh, anymore.

play01:57

So the idea will be to do

play01:59

efficient task-independent feature learning for machine learning with the graphs.

play02:03

Um, the idea is that for example,

play02:05

if we are doing this at the level of individual nodes,

play02:08

that for every node,

play02:09

we wanna learn how to map this node in a d-dimensional

play02:13

space ha- and represent it as a vector of d numbers.

play02:17

And we will call this vector of d numbers as feature representation,

play02:22

or we will call it, um, an embedding.

play02:24

And the goal will be that this, uh, mapping, um,

play02:28

happens automatically and that this vector

play02:30

captures the structure of the underlying network that,

play02:34

uh, we are, uh, interested in,

play02:36

uh, analyzing or making predictions over.

play02:39

So why would you wanna do this?

play02:41

Why create these embeddings?

play02:43

Right. The task is to map nodes into an- into an embedding space.

play02:47

Um, and the idea is that similarity, uh,

play02:49

of the embeddings between nodes indicates their similarity in the network.

play02:55

Uh, for example, you know,

play02:56

if bo- nodes that are close to each other in the network,

play02:59

perhaps they should be embedded close together in the embedding space.

play03:03

Um, and the goal of this is that kind of en-

play03:05

automatically encodes the network, uh, structure information.

play03:10

Um, and then, you know,

play03:11

it can be used for many kinds of different downstream prediction tasks.

play03:15

For example, you can do any kind of node classification, link prediction,

play03:19

graph classification, you can do anomaly detection,

play03:22

you can do clustering,

play03:23

a lot of different things.

play03:25

So to give you an example, uh,

play03:28

here is- here is a plot from a- a paper that came up with

play03:31

this idea back in 2014, fe- 2015.

play03:34

The method is called DeepWalk.

play03:36

Um, and they take this, uh, small, uh,

play03:39

small network that you see here,

play03:40

and then they show how the embedding of nodes would look like in two-dimensions.

play03:44

And- and here the nodes are,

play03:46

uh, colored by different colors.

play03:47

Uh, they have different numbers.

play03:49

And here in the, um, in this example, uh,

play03:52

you can also see how, um, uh,

play03:55

how different nodes get mapped into different parts of the embedding space.

play03:58

For example, all these light blue nodes end up here,

play04:01

the violet nodes, uh,

play04:03

from this part of the network end up here,

play04:05

you know, the green nodes are here,

play04:07

the bottom two nodes here,

play04:09

get kind of set, uh,

play04:10

uh on a different pa- uh,

play04:12

in a different place.

play04:13

And basically what you see is that in some sense,

play04:15

this visualization of the network and

play04:17

the underlying embedding correspond to each other quite well in two-dimensional space.

play04:22

And of course, this is a small network.

play04:23

It's a small kind of toy- toy network,

play04:26

but you can get an idea about, uh,

play04:28

how this would look like in,

play04:30

uh, uh- in, uh,

play04:31

more interesting, uh, larger,

play04:33

uh- in larger dimensions.

play04:35

So that's basically the,

play04:37

uh- that's basically the, uh, idea.

play04:39

So what I wanna now do is to tell you about how do we formulate this as a task, uh,

play04:44

how do we view it in this, uh,

play04:46

encoder, decoder, uh, view or a definition?

play04:49

And then what kind of practical methods, um,

play04:51

exist there, uh for us to be able, uh, to do this.

play04:55

So the way we are going to do this, um,

play04:58

is that we are going to represent, uh,

play05:00

a graph, as a- as a- with an adjacency matrix.

play05:03

Um, and we are going to think of this,

play05:06

um, in terms of its adjacency matrix,

play05:09

and we are not going to assume any feature, uh, uh,

play05:12

represe- features or attributes,

play05:14

uh, on the nodes, uh, of the network.

play05:16

So we are just going to- to think of this as a- as a- as a set of,

play05:22

um, as a- as an adjacency matrix that we wanna- that we wanna analyze.

play05:26

Um, we are going to have a graph, as I showed here,

play05:28

and the corresponding adjacency matrix A.

play05:30

And for simplicity, we are going to think of these as undirected graphs.

play05:34

So the goal is to encode nodes so that similarity in the embedding space- uh,

play05:40

similarity in the embedding space,

play05:41

you can think of it as distance or as a dot product,

play05:44

as an inner product of the coordinates of two nodes

play05:47

approximates the similarity in the graph space, right?

play05:50

So the idea will be that in- or in the original network,

play05:53

I wanna to take the nodes,

play05:54

I wanna map them into the embedding space.

play05:57

I'm going to use the letter Z to denote the coordinates,

play06:00

uh, of that- of that embedding,

play06:02

uh, of a given node.

play06:04

Um, and the idea is that, you know,

play06:06

some notion of similarity here

play06:08

corresponds to some notion of similarity in the embedding space.

play06:11

And the goal is to learn this encoder that encodes

play06:14

the original network as a set of, uh, node embeddings.

play06:17

So the goal is to- to define the similarity in the original network, um,

play06:23

and to map nodes into the coordinates in the embedding space such that, uh,

play06:27

similarity of their embeddings corresponds to the similarity in the network.

play06:32

Uh, and as a similarity metric in the embedding space, uh,

play06:35

people usually, uh, select, um, dot product.

play06:39

And dot product is simply the angle,

play06:41

uh, between the two vectors, right?

play06:43

So when you do the dot product,

play06:44

it's the cosine of the- of the angle.

play06:46

So if the two points are close together or in the same direction from the origin,

play06:51

they have, um, um,

play06:53

high, uh, uh, dot product.

play06:54

And if they are orthogonal,

play06:56

so there is kind of a 90-degree angle, uh,

play06:58

then- then they are as- as dissimilar as

play07:00

possible because the dot product will be, uh, zero.

play07:03

So that's the idea. So now what do we need to

play07:06

define is we need to define this notion of, uh,

play07:08

ori- similarity in the original network and we need to define then

play07:12

an objective function that will connect the similarity with the, uh, embeddings.

play07:16

And this is really what we are going to do ,uh, in this lecture.

play07:19

So, uh, to summarize a bit, right?

play07:22

Encoder maps nodes, uh, to embeddings.

play07:25

We need to define a node similarity function,

play07:27

a measure of similarity in the original network.

play07:30

And then the decoder, right,

play07:33

maps from the embeddings to the similarity score.

play07:36

Uh, and then we can optimize the parameters such that

play07:39

the decoded similarity corresponds as closely as

play07:43

possible to the underlying definition of the network similarity.

play07:47

Where here we're using a very simple decoder,

play07:50

as I said, just the dot-product.

play07:52

So, uh, encoder will map nodes into low-dimensional vectors.

play07:57

So encoder of a given node will simply be the coordinates or the embedding of that node.

play08:03

Um, we talked about how we are going to define the similarity

play08:06

in the embedding space in terms of the decoder,

play08:09

in terms of the dot product.

play08:11

Um, and as I said, uh,

play08:13

the embeddings will be in some d-dimensional space.

play08:16

You can think of d, you know, between,

play08:19

let's say 64 up to about 1,000,

play08:22

this is usually how- how,

play08:24

uh, how many dimensions people, uh, choose,

play08:26

but of course, it depends a bit on the size of the network,

play08:29

uh, and other factors as well.

play08:31

Um, and then as I said,

play08:33

the similarity function specifies how the relationship in

play08:36

the- in the vector space map to the relationship in the,

play08:40

uh, original ,uh, in the original network.

play08:42

And this is what I'm trying to ,uh,

play08:44

show an example of, uh, here.

play08:47

So the simplest encoding approach is that an encoder is just an embedding-lookup.

play08:53

So what- what do I mean by this that- is that an encoded- an

play08:56

encoding of a given node is simply a vector of numbers.

play08:59

And this is just a lookup in some big matrix.

play09:02

So what I mean by this is that our goal will be to learn this matrix Z,

play09:06

whose dimensionalities is d,

play09:08

the embedding dimension times the number of nodes,

play09:11

uh, in the network.

play09:12

So this means that for every node we will have a column

play09:15

that is reserved to store the embedding for that node.

play09:19

And this is what we are going to learn,

play09:21

this is what we are going to estimate.

play09:23

And then in this kind of notation,

play09:24

you can think of v simply as an indicator vector that has all zeros,

play09:29

except the value of one in the column

play09:31

indicating the- the ID of that node v. And- and what this

play09:35

will do pictorially is that basically you can think of

play09:38

Z as this matrix that has one column per node,

play09:42

um, and the column store- a given column stores the embedding of that given node.

play09:47

So the size of this matrix will be number of nodes times the embedding dimension.

play09:52

And people now who are, for example,

play09:54

thinking about large graphs may already have a question.

play09:58

You know, won't these to be a lot of parameters to estimate?

play10:00

Because the number of parameters in this model is basically the number of entries, uh,

play10:05

of this matrix, and this matrix gets very large because

play10:09

it dep- the size of the matrix depends on the number of nodes in the network.

play10:12

So if you want to do a network or one billion nodes,

play10:16

then the dimensionality of this matrix would be one billion times,

play10:19

let's say a thousand, uh,

play10:20

the embedding dimension, and that's- that's,

play10:23

uh, that's a lot of parameters.

play10:24

So these methods won't necessarily be most scalable, you can scale them,

play10:29

let's say up to millions or a million nodes or, uh,

play10:33

something like that if you- if you really try,

play10:35

but they will be slow because for every node

play10:37

we essentially have to estimate the parameters.

play10:40

Basically for every node we have to estimate its embedding- embedding vector,

play10:44

which is described by the d-numbers d-parameters,

play10:48

d-coordinates that we have to estimate.

play10:50

So, um, but this means that once we have estimated this embeddings,

play10:55

getting them is very easy.

play10:56

Is just, uh, lookup in this matrix where everything is stored.

play11:00

So this means, as I said,

play11:02

each node is assigned a unique embedding vector.

play11:05

And the goal of our methods will be to directly optimize or ,uh,

play11:10

learn the embedding of each node separately in some sense.

play11:15

Um, and this means that, uh,

play11:17

there are many methods that will allow us to do this.

play11:19

In particular, we are going to look at two methods.

play11:22

One is called, uh,

play11:23

DeepWalk and the other one is called node2vec.

play11:27

So let me- let me summarize.

play11:29

In this view we talked about an encoder ,uh, decoder, uh,

play11:34

framework where we have what we call a shallow encoder because it's

play11:37

just an embedding-lookup the parameters to optimize ,um, are- are,

play11:42

uh, very simple, it is just this embedding matrix Z. Um,

play11:46

and for every node we want to identify the embedding z_u.

play11:51

And what we are going to cover in the future lectures is we are

play11:55

going to cover deep encoders like graph neural networks that- that,

play11:59

uh, are a very different approach to computing, uh, node embeddings.

play12:04

In terms of a decoder,

play12:06

decoder for us would be something very similar- simple.

play12:08

It'd be simple- simply based on the node similarity based on the dot product.

play12:14

And our objective function that we are going to try to

play12:17

learn is to maximize the dot product

play12:19

of node pairs that are similar according to our node similarity function.

play12:26

So then the question is,

play12:29

how do we define the similarity, right?

play12:31

I've been talking about it,

play12:32

but I've never really defined it.

play12:34

And really this is how these methods are going to differ between each other,

play12:38

is how do they define the node similarity notion?

play12:41

Um, and you could ask a lot of different ways how- how to do this, right?

play12:45

You could chay- say,

play12:46

"Should two nodes have similar embedding if they are perhaps linked by an edge?"

play12:51

Perhaps they share many neighbors in common,

play12:54

perhaps they have something else in common or they are in similar part of

play12:58

the network or the structure of the network around them, uh, look similar.

play13:02

And the idea that allow- that- that started all this area of

play13:07

learning node embeddings was that we are going to def- define a similarity,

play13:13

um, of nodes based on random walks.

play13:15

And we are going to ,uh,

play13:17

optimize node embedding for this random-walk similarity measure.

play13:22

So, uh, let me explain what- what I mean by that.

play13:26

So, uh, it is important to know that this method

play13:30

is what is called unsupervised or self-supervised,

play13:34

in a way that when we are learning the node embeddings,

play13:37

we are not utilizing any node labels.

play13:40

Um, we will only be basically trying to

play13:43

learn embedding so that they capture some notion of network similarity,

play13:47

but they don't need to capture the- the notion of labels of the nodes.

play13:51

Uh, and we are also not- not utilizing any node features

play13:55

or node attributes in a sense that if nodes are humans,

play13:58

perhaps, you know, their interest,

play14:00

location, gender, age would be attached to the node.

play14:03

So we are not using any data,

play14:05

any information attached to every node or attached to every link.

play14:10

And the goal here is to directly estimate a set of coordinates for each node so

play14:14

that some aspect of the network structure is preserved.

play14:20

And in- in this sense,

play14:22

these embeddings will be task-independent because they are

play14:25

not trained on a given prediction task, um,

play14:28

or a given specific, you know,

play14:30

labelings of the nodes or a given specific subset of links,

play14:34

it is trained just given the network itself.
