Stanford CS224W: Machine Learning with Graphs | 2021 | Lecture 4.3 - Random Walk with Restarts

Stanford Online
22 Apr 2021 · 13:30

Summary

TL;DR: This lecture explores random walk with restarts and personalized PageRank as extensions of the traditional PageRank algorithm. It discusses their application in recommendation systems, where measuring item similarity is crucial. The concept of teleportation in PageRank is introduced, where a random surfer can jump back to a set of nodes or a single starting node. The lecture explains how these methods can quantify item relatedness in a graph, with the aim of improving recommendations for users based on their interactions with items.

Takeaways

  • The lecture discusses random walk with restarts and personalized PageRank as extensions of the initial PageRank algorithm.
  • It uses the example of an online store or Netflix to illustrate how recommendations can be made based on user-item interactions, forming a bipartite graph.
  • The goal is to measure similarity or proximity in graphs to recommend items that are related to what a user has already purchased or interacted with.
  • Traditional shortest-path measures are not sufficient, as they don't account for the strength of connections or the number of common neighbors.
  • PageRank is introduced as a solution to quantify item relatedness by considering not just the shortest path but also the number of paths and their lengths.
  • Personalized PageRank differs from standard PageRank by allowing the random walker to teleport back to a subset of nodes (S) that are of interest to the user.
  • Random walk with restarts is a special case where the random walker always teleports back to the starting node, effectively measuring proximity to a single node.
  • The process involves simulating a random walk over the graph, increasing visit counts at each step, and restarting with a certain probability.
  • The visit counts can be used to determine the proximity of nodes, with higher counts indicating closer relationships.
  • The algorithm can be implemented using power iteration, which converges to the node importance vector, representing the limiting distribution of the random surfer's location.
  • The approach is beneficial because it considers multiple factors (number of paths, strength of connections, node degrees), making it simple, scalable, and effective.
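The simulation described in the takeaways can be sketched in a few lines of Python. This is a toy illustration, not the lecture's code: the graph, node names, and parameter values are invented, and each loop iteration takes two hops (item to user, user back to item) so that visit counts are recorded on the item side, as in the lecture's pseudocode.

```python
import random
from collections import Counter

def random_walk_with_restarts(graph, query_node, alpha=0.15, num_steps=100_000, seed=0):
    """Simulate a walk that restarts at query_node with probability alpha.

    `graph` is an undirected adjacency list for the bipartite user-item
    graph. Each iteration hops item -> user -> item and records the item
    visited; higher visit counts mean closer proximity to the query node.
    """
    rng = random.Random(seed)
    visits = Counter()
    current = query_node
    for _ in range(num_steps):
        user = rng.choice(graph[current])   # hop to a random user of this item
        current = rng.choice(graph[user])   # hop to a random item of that user
        visits[current] += 1                # record the visit to the item
        if rng.random() < alpha:            # with probability alpha, restart
            current = query_node
    return visits

# Toy data: items Q, P, R and users u1, u2. P shares both users with Q,
# while R shares only one, so P should accumulate a higher visit count.
graph = {
    "Q": ["u1", "u2"], "P": ["u1", "u2"], "R": ["u2"],
    "u1": ["Q", "P"], "u2": ["Q", "P", "R"],
}
counts = random_walk_with_restarts(graph, "Q", num_steps=50_000)
```

Ranking `counts` then gives the recommendation order for a user who interacted with Q.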

Q & A

  • What is the main topic discussed in the script?

    -The main topic discussed in the script is the concept of random walk with restarts and personalized PageRank as extensions of the initial PageRank idea, particularly in the context of recommendations and measuring similarity or proximity in graphs.

  • How does the script use the example of a bipartite graph to explain recommendations?

    -The script uses the example of a bipartite graph to represent the relationship between users and items, such as products or movies. It suggests that if two items are purchased by similar users, they are more related, and thus one item can be recommended to a user who has purchased the other.

  • What is the purpose of measuring proximity or relatedness in graphs?

    -Measuring proximity or relatedness in graphs is useful for making recommendations, such as suggesting items to users based on their past purchases or viewing habits, which can increase user engagement and satisfaction.

  • How does the script differentiate between PageRank, personalized PageRank, and random walk with restarts?

    -PageRank allows teleportation to any node in the graph, personalized PageRank restricts teleportation to a subset of nodes (S), and random walk with restarts teleports only to a single starting node (S).

  • What is the significance of the teleport set S in personalized PageRank?

    -The teleport set S in personalized PageRank represents a subset of nodes that are of interest to the user. It modifies the standard PageRank algorithm to focus on nodes that are more relevant to the user's specific interests or queries.

  • How does the script relate the concept of a random walk with restarts to the problem of recommendations?

    -The script relates random walk with restarts to recommendations by simulating a random walk that, with some probability, restarts at a query node. This process can identify items that are more closely related to the query item, which can then be recommended to the user.

  • What is the role of the parameter alpha in the random walk with restarts process?

    -The parameter alpha is the probability of restarting the walk at each step, i.e., of jumping back to one of the query nodes. It controls how frequently the walk restarts and therefore how tightly the visit counts concentrate around the query nodes.

  • How does the script suggest measuring the importance of nodes in a graph?

    -The script suggests measuring the importance of nodes in a graph by simulating a random walk where the nodes with the highest visit counts, indicating proximity to the query nodes, are considered the most important.

  • What is the mathematical approach discussed in the script to compute the importance of nodes?

    -The script discusses using power iteration to compute the importance of nodes, which involves representing the bipartite graph with a matrix and iterating to find the leading eigenvector of the stochastic adjacency matrix with teleportation.
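The power-iteration view mentioned in this answer can be sketched as follows. This is a hedged NumPy sketch, not the lecture's implementation: the function name and variables are assumptions, `beta` is the damping factor (so `1 - beta` plays the role of the restart probability alpha), and the toy graph is invented.

```python
import numpy as np

def personalized_pagerank(A, teleport, beta=0.85, tol=1e-10, max_iter=1000):
    """Power iteration for PageRank with an arbitrary teleport distribution.

    A[i, j] = 1 if there is an edge j -> i. `teleport` is a probability
    vector over nodes: uniform gives classic PageRank, a vector supported
    on a subset S gives personalized PageRank, and a one-hot vector gives
    random walk with restarts.
    """
    n = A.shape[0]
    out_deg = A.sum(axis=0)
    # Column-stochastic walk matrix (the toy graph has no dangling nodes).
    M = A / np.where(out_deg == 0, 1, out_deg)
    r = np.full(n, 1.0 / n)                          # initial rank vector
    for _ in range(max_iter):
        r_new = beta * (M @ r) + (1 - beta) * teleport  # follow a link, or teleport
        if np.abs(r_new - r).sum() < tol:               # L1 convergence check
            break
        r = r_new
    return r_new

# Toy undirected graph on 4 nodes (symmetric adjacency, invented data).
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

# One-hot teleport vector: every restart returns to node 0.
r = personalized_pagerank(A, np.array([1.0, 0.0, 0.0, 0.0]))
```

Swapping the one-hot `teleport` for a uniform vector recovers classic PageRank with the same routine, which is exactly the point made in the lecture.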

  • What are the benefits of using random walk with restarts for measuring proximity in graphs?

    -The benefits include the ability to consider multiple paths and connections between nodes, the strength of those connections, and the degree of nodes, making it a comprehensive and scalable method for measuring similarity in graphs.

  • How does the script connect the intuitions of linear algebra, random walk, and link analysis?

    -The script connects these intuitions by showing that they all lead to the same optimization problem and can be solved using the same iterative approach, power iteration, which computes the limiting distribution of the random surfer's location representing node importance.

Outlines

00:00

๐Ÿ” Introduction to Random Walk with Restarts and Personalized PageRank

The paragraph introduces the concepts of Random Walk with Restarts and Personalized PageRank as extensions of the basic PageRank algorithm. It uses the context of a recommendation system to explain the utility of these algorithms. The speaker discusses the construction of a bipartite graph representing user-item interactions, such as purchases or movie watches. The goal is to measure the similarity or proximity between items in the graph to recommend products or movies to users based on their past interactions. The paragraph highlights the limitations of using only shortest path as a measure of similarity and suggests that a more nuanced metric is needed, which is where PageRank extensions come into play. Personalized PageRank is described as a variation where a 'random surfer' or walker can teleport back to a subset of nodes (S) that are of interest to the user, rather than any node in the graph.

05:05

๐Ÿ›ค๏ธ Random Walks with Restarts and Proximity in Graphs

This paragraph delves deeper into the mechanics of Random Walks with Restarts, explaining how each node's importance is distributed among its edges and passed to neighbors. It introduces the concept of a set of query nodes (S) and simulates a random walk over the graph, where the walker can choose to restart the walk by jumping back to any of the query nodes with a certain probability (alpha). The visit count of nodes is used to determine their proximity to the query nodes, with higher counts indicating closer proximity. The paragraph also touches on the pseudocode for this process and mentions an alternative method of computing proximity using power iteration with a stochastic adjacency matrix. The benefits of this approach are highlighted, including its simplicity, scalability, and effectiveness in measuring node similarity by considering various factors such as the number of connections, strength of connections, and node degrees.

10:06

Summary of PageRank Extensions and Their Mathematical Formulations

The final paragraph summarizes the lecture's discussion on PageRank extensions. It contrasts classical PageRank, where the random walker can teleport to any node, with personalized PageRank, where the teleport vector has non-zero elements only for certain nodes, reflecting a user's interests. Random walk with restarts is also described as a special case where the teleport vector is concentrated on a single node, so every restart leads back to the starting point. The paragraph emphasizes that despite their different applications, all these formulations can be solved using the same power iteration method. It also explains how a graph can be represented as a matrix and how the random walk process defines a stochastic adjacency matrix. The concept of PageRank as a limiting distribution of the surfer's location, representing node importance, is reiterated. The paragraph concludes by noting the convergence of linear algebra, random walk intuition, and the idea of links as votes into a single optimization problem solved by power iteration.
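In one common notation (the symbols here are assumptions, not taken from the lecture slides), all three variants solve the same fixed-point equation and differ only in the teleport vector $\mathbf{p}$; the lecture's restart probability $\alpha$ corresponds to $1-\beta$:

```latex
\mathbf{r} = \beta\, M \mathbf{r} + (1-\beta)\,\mathbf{p},
\qquad
\mathbf{p} =
\begin{cases}
\left[\tfrac{1}{N}, \dots, \tfrac{1}{N}\right]^{\top} & \text{PageRank (teleport anywhere)}\\[4pt]
\text{uniform over } S & \text{personalized PageRank}\\[4pt]
\mathbf{e}_{s} \ \text{(one-hot at the query node)} & \text{random walk with restarts}
\end{cases}
```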

Keywords

Random Walk with Restarts

Random Walk with Restarts is a stochastic process that models a random walker who traverses a graph, with the possibility of returning to a starting node after each step. This concept is central to the video's discussion on measuring similarity or proximity in graphs, particularly in recommendation systems. For instance, if a user has purchased item Q, the algorithm might recommend item P if many users who bought P also bought Q. The script uses this concept to explain how to quantify the relatedness of different items in a graph, suggesting that items with more common neighbors and shorter paths between them are more related.

Personalized PageRank

Personalized PageRank is an extension of the traditional PageRank algorithm that allows for a subset of nodes (S) to be defined as interesting to the user, and the random walker is more likely to teleport to these nodes. This concept is used in the video to discuss how recommendations can be tailored to users based on their past behavior and the behavior of similar users. The video explains that in personalized PageRank, the teleport set is not the entire graph but a specific subset of nodes that are of interest to the user.

Bipartite Graph

A Bipartite Graph is a type of graph where the set of nodes can be divided into two disjoint sets such that every edge connects a node from one set to a node in the other set. In the context of the video, a bipartite graph represents the relationship between users and items, such as customers and products or movies. The video uses the bipartite graph to illustrate how interactions between users and items can be visualized and analyzed.
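The user-item relation described here can be encoded as a biadjacency matrix. A minimal sketch, where the users, items, and purchase pairs are invented for illustration:

```python
import numpy as np

users = ["u1", "u2", "u3"]
items = ["Q", "P", "R"]
# Hypothetical interactions: (user, item) pairs.
purchases = [("u1", "Q"), ("u1", "P"), ("u2", "Q"), ("u2", "P"), ("u3", "R")]

# Biadjacency matrix B: B[u, i] = 1 iff user u interacted with item i.
B = np.zeros((len(users), len(items)))
for u, i in purchases:
    B[users.index(u), items.index(i)] = 1.0

# Full bipartite adjacency: edges only cross between the two node sets,
# so the user-user and item-item blocks are all zero.
Z_u = np.zeros((len(users), len(users)))
Z_i = np.zeros((len(items), len(items)))
A = np.block([[Z_u, B], [B.T, Z_i]])
```

The zero diagonal blocks are what make the graph bipartite: no user-user or item-item edges exist.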

Proximity

Proximity, in the video, refers to the closeness or similarity between nodes in a graph, which is a measure of how related two items are based on the structure of the graph. It is used to determine which items to recommend to a user, as items that are 'close' or have a high proximity to a user's past choices are more likely to be of interest. The video discusses how different paths and common neighbors can indicate proximity.

Teleportation

Teleportation, in the context of the video, refers to the random walker's ability to jump to a different node in the graph, not necessarily following the edges. This is a key feature of the PageRank algorithm and its extensions, where the random walker can teleport to any node (in standard PageRank), a subset of nodes (in personalized PageRank), or a single starting node (in random walk with restarts). The video uses the concept of teleportation to explain how the algorithm can jump back to a node of interest.

Eigenvector

An Eigenvector of a matrix is a non-zero vector that is only scaled, not rotated, when multiplied by the matrix. In the video, the eigenvector represents the limiting distribution of the random surfer's location, which corresponds to the importance of nodes in the graph. The video explains that computing the leading eigenvector of the stochastic adjacency matrix with teleportation gives the same results as running the random walk simulation.

Stochastic Adjacency Matrix

A Stochastic Adjacency Matrix is a matrix representation of a graph where each element represents the probability of moving from one node to another. The video discusses how this matrix is used in the random walk process, where the importance of each node is distributed evenly among its edges and pushed to the neighbors, which is analogous to the random surfer model in PageRank.

Recommendation System

A Recommendation System is an information filtering system that seeks to predict the 'interest' a user has in an item. The video uses the concept of recommendation systems to explain the practical application of random walk with restarts and personalized PageRank. It illustrates how these algorithms can be used to recommend products or movies to users based on their past behavior and the behavior of similar users.

Power Iteration

Power Iteration is an iterative algorithm used to compute the principal eigenvector of a matrix. In the video, power iteration is mentioned as a method to solve for the PageRank values, where the algorithm converges to the limiting distribution of the surfer's location, representing node importance. The video suggests that this method can be used in place of simulating the random walk for more efficiency.

Flow Equations

Flow Equations, in the context of the video, refer to the system of equations that describe how the importance of a node is derived from the importance of other nodes that link to it. The video explains that the limiting distribution of the random surfer's location, which represents node importance, can also be found by solving these flow equations using the power iteration method.
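In standard notation (assumed here, with $d_i$ the out-degree of node $i$ and $\beta$ the damping factor), the flow equations read: each node splits its rank evenly over its out-links, and with teleportation a uniform share of the restart mass is added.

```latex
r_j = \sum_{i \rightarrow j} \frac{r_i}{d_i}
\qquad\text{and, with teleportation,}\qquad
r_j = \beta \sum_{i \rightarrow j} \frac{r_i}{d_i} + (1-\beta)\,\frac{1}{N}
```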

Highlights

Introduction to random walk with restarts and personalized PageRank as extensions of the initial PageRank concept.

Example of using graph theory for recommendations in online stores or platforms like Netflix.

Explanation of bipartite graph representation for user-item interactions.

Importance of measuring similarity or proximity on graphs for recommendation systems.

Challenge of quantifying proximity or relatedness of different items in a graph.

Discussion on the limitations of shortest path as a measure of item relatedness.

Intuition behind why personalized PageRank could be a better measure than shortest path.

Definition and explanation of personalized PageRank and its difference from traditional PageRank.

Concept of teleportation in PageRank and how it relates to personalized PageRank.

Introduction to the random walk with restarts as a special case of personalized PageRank.

Explanation of how the teleport set S is defined differently in PageRank, personalized PageRank, and random walk with restarts.

Description of the random walk with restarts process and its significance in measuring node importance.

Graphical illustration of how a random walk with restarts simulates user behavior on a bipartite graph.

Pseudocode explanation of the random walk with restarts algorithm.

Discussion on the benefits of using random walk with restarts for measuring proximity in graphs.

Explanation of how the random walk with restarts can be computed using power iteration.

Summary of the lecture discussing the extensions of PageRank and their applications.

Highlight of the algorithmic efficiency of random walk with restarts in measuring graph proximity.

Final thoughts on the practical applications and scalability of the discussed algorithms.

Transcripts

play00:04

So we are going to talk about random walk with restarts and personalized PageRank.

play00:10

And this will be extensions of our initial,

play00:13

uh, PageRank idea that we have just discussed.

play00:16

So let me give you an example how this would be very useful.

play00:21

So we are going to talk about a problem of recommendations.

play00:25

Imagine you- you have a set of users, customers,

play00:29

uh, and a set of items,

play00:31

uh, perhaps products, movies.

play00:34

And then you create interactions between users and items by

play00:37

basically saying a given user perhaps purchased a given item,

play00:41

or a given user watched a given movie, right?

play00:44

So this is now our bipartite graph representation,

play00:49

uh, here of this user to item relation.

play00:52

And then what we want to do is somehow measure similarity or proximity on graphs.

play00:58

Why is this useful?

play01:00

This is useful because if you are a online store or if you are Netflix,

play01:06

you want to ask yourself,

play01:07

what items should I recommend to a user who,

play01:10

you know, purchased an item Q.

play01:13

And the idea would be that if you know two items, P and Q,

play01:17

are purchased by a lot of, uh, similar users,

play01:22

a lot of other users have, let say,

play01:24

bought or enjoyed the same- the same item,

play01:27

the same movie, then- then whenever a user is looking at item Q,

play01:31

we should also recommend, uh,

play01:33

item P. So now,

play01:36

how are we going to quantify this notion of proximity or

play01:38

relatedness of different items in this graph.

play01:42

So the question is, for example,

play01:45

if I have this graph as I show here,

play01:46

and I have items, you know,

play01:47

A and A prime and B and B prime.

play01:49

The question is which two are more related?

play01:52

So you could do- one thing to do would be to say,

play01:55

you know, let's measure shortest path.

play01:57

So A has a shorter path to A prime than B to B prime,

play02:01

so, you know, A and A prime are more related.

play02:05

However, the issue becomes,

play02:08

is that- that you could then say, oh,

play02:10

but if I have another example,

play02:11

let's say this one where I have C and C prime.

play02:14

And now C and C prime have two users that

play02:16

bo- that both, let's say, purchased these two items,

play02:20

then C and C prime intuitively are more related,

play02:24

are at closer proximity than A and A prime, right?

play02:27

So now, the question is,

play02:29

how would I develop a metric that would allow me to kind of say,

play02:32

hey, it's kind of the shortest path,

play02:34

but it's also about how many different, uh,

play02:37

neighbors you have in

play02:38

common and how many different paths allow you to go from one, uh, to another.

play02:43

And they- the idea here is that, er,

play02:47

PageRank is going to solve this because

play02:49

in this third example, you know, if you would just say,

play02:52

let's count common neighbors, then let say, uh,

play02:54

C and C prime are as related as D and D prime.

play02:58

And again, this is- um,

play03:01

this is perhaps intuitively not what we want because, er,

play03:05

you know item D,

play03:06

this user has enjoyed a lot of different items as well.

play03:09

This other user has enjoyed a lot of different items there.

play03:12

So this relationship is less strong than the relationship here because here,

play03:17

it's really two items,

play03:19

two items that- that's all- that's all there is.

play03:21

So, you know, how could we capture this

play03:24

mathematically algorithmically to be able to run it on networks?

play03:28

And, uh, this is where the notion of extension of PageRank happens.

play03:33

Um, so PageRank tells me

play03:37

the importance of a node on the graph and ranks nodes by importance.

play03:41

And it has this notion of a teleport where we discuss that- that,

play03:46

um, a random surfer teleports uniformly over any node in the graph.

play03:52

So now, we will have- we will first define a notion of what is

play03:55

called personalized PageRank, where basically,

play03:59

the only difference with the original PageRank is that whenever we

play04:01

teleport or whenever the random walker teleports,

play04:04

it doesn't teleport anywhere in the graph,

play04:07

but it only teleports,

play04:08

jumps back to a subset of nodes S. Okay?

play04:13

So basically, we say, you know,

play04:15

there is a set of nodes S that are interesting to the user.

play04:19

So whenever the random walker teleports,

play04:21

it teleports back to that subset S and not to,

play04:24

uh, every node in the graph.

play04:26

And then in terms of, uh,

play04:28

you know, er, proximity in graphs,

play04:30

you can now take this notion of

play04:32

a teleport set S and you can shrink it even further and say,

play04:36

what if S is a single node?

play04:38

So it means that the random walker can walk,

play04:41

but whenever it decides to teleport,

play04:43

it always jumps back to the starting point

play04:46

S. And this is what is called a random walk with restart, where basically,

play04:51

you always teleport back to the starting node S.

play04:54

So essentially, PageRank, personalized PageRank,

play04:58

and random walk with restarts are the same algorithm with one important difference,

play05:04

that in PageRank, teleport set S is all of the nodes of the network,

play05:09

all having equal probability.

play05:11

In personalized PageRank, the teleport set S is a subset of nodes,

play05:17

so you only can jump to the subset.

play05:19

And in a random walk with restarts,

play05:21

the teleportation set S is

play05:23

a simple node- is simply one node and that's the starting node,

play05:27

our, you know, query node item,

play05:29

uh, Q, uh, from the previous slide.

play05:32

So let me now talk more about random walks with restarts.

play05:37

So the idea here is that every node has some importance,

play05:40

and the importance gets evenly split among all edges,

play05:44

uh, and pushed to the neighbors.

play05:46

And this is essentially the same as what we were discussing in,

play05:50

uh, page- in the original PageRank formulation.

play05:53

So in our case, we are going to say let's have a set of query nodes.

play05:57

Um, uh, this is basically the set S. And let's

play06:00

now physically simulate the random walk over this graph, right?

play06:04

We will make a step at random neighbor,

play06:06

um, and record the visit to that neighbor.

play06:08

So we are going to increase the visit count of that neighbor.

play06:11

And with some probability alpha,

play06:13

we are going to restart the walk,

play06:14

which basically means we are going to jump back to

play06:17

any of the query nodes and restart the walk.

play06:21

And then the nodes with the highest query- highest visit count

play06:26

will have the highest proximity to the- uh,

play06:29

to the query- to the nodes in the query nodes, uh, set.

play06:33

So this is essentially the idea.

play06:35

So let me now show you graphically, right?

play06:37

So we have this bipartite graph.

play06:40

Imagine my query nodes set Q is simply one node here.

play06:45

Then we are going to simulate,

play06:46

really, like a random walk that basically says,

play06:49

I'm at Q. I pick one of its,

play06:51

uh, links at random,

play06:52

and I move to the user.

play06:53

Now, I am at the user.

play06:55

I pick one of the links at random, uh, move to,

play06:58

uh- to the- to the other side and I increase the visit count, uh, one here.

play07:04

And now I get to decide do I restart,

play07:06

meaning go back to Q,

play07:07

or do I continue walking by picking one of- one link,

play07:11

um, to go to the user,

play07:13

pick another link to go back,

play07:14

and increase the visit count?

play07:15

And again, ask myself do I want to restart or do want to continue walking?

play07:20

So the pseudocode is written here and it's really what I just say.

play07:24

It's basically, you know, pick a random neighbor for- for- for,

play07:29

uh, start at a- at a query,

play07:31

pick a random user,

play07:32

uh, pick a random item,

play07:34

increase the visit count of the item,

play07:37

uh, pick a biased coin.

play07:39

If the coin says, uh, let's restart,

play07:42

you'll simply, uh, jump back to the query nodes.

play07:46

You can jump, uh, uniformly at random to any of them,

play07:50

or if they have different weights,

play07:51

you can sample them, uh, by weight.

play07:53

And that is- that is this notion of a random walk, uh, with, uh, restart.

play07:59

And if you do this, then you will have the query item and then you will also get

play08:03

this visit counts and- and the idea is that items that are more, uh,

play08:08

related, that are closer in the graphs,

play08:11

will have higher visit counts because it means

play08:13

that the random walker will visit them more often,

play08:15

which means you have more common neighbors,

play08:17

more paths lead from one to the other,

play08:21

these paths are short so that, uh,

play08:23

the random walker does not decide to restart,

play08:25

uh, and so on and so forth.

play08:27

And this allows us to measure proximity in graphs very efficiently.

play08:32

And here, we are measuring it by actually, uh,

play08:34

un- kind of simulating this random walk physically.

play08:38

But you could also compute this using

play08:41

the power iteration where you would represent this bipartite graph with a matrix, uh,

play08:46

M, you would then start with, uh, um,

play08:49

rank vector, um, uh,

play08:52

to be- to have a given value.

play08:54

You would then, uh,

play08:55

transfer them to the stochastic adjacency matrix with teleportation,

play08:59

uh, matrix, and then run power iteration on top of it.

play09:03

And it would, um, converge to the same- uh,

play09:06

to the same set of, uh,

play09:09

uh, node importance as we- as we show

play09:12

here by basically running this quick, uh, simulation.

play09:15

Um, so what are the benefits of this approach?

play09:19

Um, this is a good solution because it

play09:23

measures similarity by considering a lot of different,

play09:26

um, things that are important, right?

play09:29

It considers how many connections or how many paths are between a pair of nodes.

play09:34

Um, what is the strength of those connections?

play09:37

Are these connections direct or are they indirect?

play09:40

They also- it also considers the- the degree of the nodes on the path.

play09:44

Because, uh, the more edges it has,

play09:46

the more- the more likely we- for the random walker,

play09:49

is to kind of walk away and not go to the node.

play09:53

Let's say that- that we are interested in.

play09:55

So in all these cases, um,

play09:58

this is a very- uh,

play10:00

has a lot of properties that we want.

play10:02

It's very simple to implement,

play10:04

it's very scalable, and,

play10:05

uh, works, uh, really well.

play10:07

So let me summarize this part of the lecture.

play10:11

So basically, here, we talked about extensions of PageRank.

play10:15

We talked about classical PageRank where the random walker teleports to any node.

play10:21

So, you know, if I have a graph with 10 nodes,

play10:23

then its teleport set S. You can think

play10:26

of it is- it includes all the nodes and each node has,

play10:29

uh, equal probability of the random walker landing there.

play10:33

This is called PageRank.

play10:35

Then the personalized PageRank,

play10:37

sometimes also called topic-specific PageRank,

play10:40

is basically, the only difference is that now

play10:43

the teleport vector only has a couple of- of non-zero elements.

play10:48

And this now means that whenever a random walker decides to jump,

play10:51

you know, 50 percent of the times,

play10:52

it will jump to this node,

play10:54

10 percent to this node,

play10:56

20 percent to this one and that one.

play10:57

So that's, uh, what is called personalized PageRank.

play11:01

And then random walk with restarts is again PageRank.

play11:06

But here, the teleportation vector is a single node.

play11:11

So whenever the-the surfer decides to

play11:14

teleport it always teleports to the- to one, uh, single node.

play11:19

But mathematically, all these formulations are the same,

play11:23

the same power iteration can solve them.

play11:25

Uh, we can also solve, for example,

play11:27

especially the random walk with restarts by actually simulating the random walk,

play11:31

which in some cases, might be- might be,

play11:33

um, faster, but it is approximate.

play11:36

Um, and the same algorithm works,

play11:39

only thing is how do we define the set S,

play11:42

the teleportation, uh, set.

play11:45

So to summarize, uh,

play11:48

a graph can naturally be represented as a matrix.

play11:51

We then define the random walk process over, er, the graph.

play11:55

We have this notion of a random surfer moving across links,

play11:59

uh, with- er, together with having a-a way to teleport,

play12:03

uh, out of every node.

play12:05

This defined- allowed us to define this stochastic adjacency matrix M that

play12:11

essentially tells us with what probability

play12:12

the random surfer is going to navigate to each edge.

play12:16

And then we define the notion of PageRank,

play12:18

which is a limiting distribution of a- of the surfer location.

play12:22

Um, and this limiting distribution of the surfer location represents node importance.

play12:29

And then another beautiful thing happened is that we showed that this

play12:33

limiting distributions- distribution can be computed or

play12:37

corresponds to the leading eigenvector of

play12:40

the transform adjacency matrix M. So it

play12:43

basically means that by computing the eigenvector of M,

play12:46

we are computing the limiting distribution of this, uh,

play12:49

random surfer, and we are also computing this solution to the system of equations,

play12:55

of, uh, flow equations where the importance of a node is,

play12:59

you know, the sum of the importances of the other nodes that point to it.

play13:03

So all these three different, uh, intuitions.

play13:06

So the linear algebra, eigenvector eigenvalue,

play13:09

the Random Walk intuition,

play13:11

and these links as votes intuition are all the same thing.

play13:15

They all boil down to the same optimization problem,

play13:17

to the same algorithm,

play13:19

to the same formulation that is solved with

play13:21

this iterative approach called power iteration.


Related Tags

PageRank, Recommendation Systems, Random Walk, Graph Algorithms, Personalized Search, Machine Learning, Bipartite Graph, Network Analysis, Teleportation Set, Eigenvector