CS224W: Machine Learning with Graphs PDF

Document Details


Uploaded by ConsiderateChrysoberyl

Stanford University

Jure Leskovec

Tags

machine learning, graph theory, node classification, message passing

Summary

This document is from a course on machine learning with graphs, CS224W at Stanford University. It discusses message passing methods for node classification in networks, focusing on the concept that similar nodes are typically near each other or directly connected. The document explains how to predict node labels given labels of some nodes in a graph and the features of the graph's nodes and neighbors.

Full Transcript


CS224W: Machine Learning with Graphs
Jure Leskovec, Stanford University
http://cs224w.stanford.edu

ANNOUNCEMENTS
¡ Homework 1 will be released after class.
¡ This Thursday (10/07): Colab 1 due, Colab 2 out.
¡ Next Thursday (10/14): HW 1 due, HW 2 out.
¡ Do Colab 0! It has almost everything you need to complete Colab 1.
¡ Office hours: we have added Zoom links and a QueueStatus link to our OH calendar.
§ See http://web.stanford.edu/class/cs224w/oh.html for the OH calendar, Zoom links, and QueueStatus link.
§ (1) Add yourself to the queue, (2) join the Zoom waiting room; we will let you out of the waiting room when it is your turn in the queue.

Message passing and node classification
¡ Main question today: Given a network with labels on some nodes, how do we assign labels to all other nodes in the network?
¡ Example: In a network, some nodes are fraudsters and some other nodes are fully trusted. How do we find the other fraudsters and trustworthy nodes?
¡ We already discussed node embeddings as a method to solve this in Lecture 3.

¡ Given the labels of some nodes, let us predict the labels of the unlabeled nodes.
¡ This is called semi-supervised node classification.

¡ Today we will discuss an alternative framework: message passing.
¡ Intuition: Correlations (dependencies) exist in networks.
§ In other words: similar nodes are connected.
§ The key concept is collective classification: assigning labels to all nodes in a network together.
¡ We will look at three techniques today:
§ Relational classification
§ Iterative classification
§ Correct & Smooth

¡ Behaviors of nodes are correlated across the links of the network.
¡ Correlation: nearby nodes have the same color (belong to the same class).
¡ There are two explanations for why the behaviors of nodes in networks are correlated: homophily and influence.

¡ Homophily: the tendency of individuals to associate and bond with similar others.
§ "Birds of a feather flock together."
§ Homophily has been observed in a vast array of network studies, based on a variety of attributes (e.g., age, gender, organizational role).
§ Example: Researchers who focus on the same research area are more likely to establish a connection (meeting at conferences, interacting in academic talks, etc.).

Example of homophily:
¡ An online social network where
§ Nodes = people
§ Edges = friendship
§ Node color = interests (sports, arts, etc.)
¡ People with the same interest are more closely connected due to homophily (Easley and Kleinberg, 2010).

¡ Influence: Social connections can influence the individual characteristics of a person.
§ Example: I recommend my musical preferences to my friends, until one of them grows to like my favorite genres!

¡ How do we leverage this correlation observed in networks to help predict node labels?
[Figure: a network over nodes 1–9 in which a few nodes carry Label 0 or Label 1.] How do we predict the labels of the nodes in grey?

¡ Similar nodes are typically close together or directly connected in the network:
§ Guilt-by-association: If I am connected to a node with label X, then I am likely to have label X as well.
§ Example: malicious/benign web pages. Malicious web pages link to one another to increase visibility, look credible, and rank higher in search engines.

¡ The classification label of a node v in the network may depend on:
§ The features of v
§ The labels of the nodes in v's neighborhood
§ The features of the nodes in v's neighborhood

Formal setting:
¡ Given: a graph and a few labeled nodes.
¡ Find: the class (red/green) of the remaining nodes.
¡ Main assumption: there is homophily in the network.

Example task (encoded in a short sketch after this list):
¡ Let A be an n×n adjacency matrix over n nodes.
¡ Let Y = {0, 1}^n be a vector of labels:
§ Y_v = 1: node v belongs to Class 1
§ Y_v = 0: node v belongs to Class 0
§ There are unlabeled nodes that need to be classified.
¡ Goal: Predict which unlabeled nodes are likely Class 1, and which are likely Class 0.

¡ How do we predict the labels Y_v for the unlabeled nodes v (in grey)?
¡ Each node v has a feature vector f_v.
¡ Labels for some nodes are given (1 for green, 0 for red).
¡ Task: Find P(Y_v) given all the features and the network.

¡ Many applications fall under this setting:
§ Document classification
§ Part-of-speech tagging
§ Link prediction
§ Optical character recognition
§ Image/3D data segmentation
§ Entity resolution in sensor networks
§ Spam and fraud detection

¡ We focus on semi-supervised binary node classification.
¡ We will introduce three approaches:
§ Relational classification
§ Iterative classification
§ Correct & Smooth
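To make the setting concrete, here is a minimal sketch of how it can be encoded; the graph, labels, and variable names are illustrative assumptions, not the exact example from the slides.

```python
import numpy as np

# Adjacency matrix A of a small undirected toy graph on n = 6 nodes (hypothetical).
A = np.array([
    [0, 1, 1, 0, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [1, 1, 0, 1, 0, 0],
    [0, 0, 1, 0, 1, 1],
    [0, 0, 0, 1, 0, 1],
    [0, 0, 0, 1, 1, 0],
])

# Partial labels: 1 = Class 1, 0 = Class 0, -1 = unlabeled.
y = np.array([1, 1, -1, -1, 0, 0])

labeled = y >= 0                       # boolean mask of labeled nodes
print("to classify:", np.where(~labeled)[0])
```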
Relational classification

¡ Idea: Propagate node labels across the network.
§ The class probability Y_v of node v is a weighted average of the class probabilities of its neighbors.
¡ For labeled nodes v, initialize the label Y_v with the ground-truth label Y_v*.
¡ For unlabeled nodes, initialize Y_v = 0.5.
¡ Update all nodes in a random order until convergence or until the maximum number of iterations is reached.

¡ Update for each node v and label c (e.g., 0 or 1):

P(Y_v = c) = (1 / Σ_{(v,u)∈E} A_{v,u}) · Σ_{(v,u)∈E} A_{v,u} · P(Y_u = c)

§ If edges have strength/weight information, A_{v,u} can be the edge weight between v and u.
§ P(Y_v = c) is the probability of node v having label c.
¡ Challenges:
§ Convergence is not guaranteed.
§ The model cannot use node feature information.
(A code sketch of this update loop follows the worked example below.)

Initialization:
¡ All labeled nodes start with their labels; all unlabeled nodes start at 0.5 (belonging to class 1 with probability 0.5).
¡ Notation: P_{Y_v} = P(Y_v = 1).
[Figure: nodes 1 and 2 are labeled 0, nodes 6 and 7 are labeled 1, and the unlabeled nodes 3, 4, 5, 8, 9 are initialized to 0.5.]

¡ Updates in the 1st iteration:
§ For node 3, N_3 = {1, 2, 4}: P(Y_3) = (0 + 0 + 0.5)/3 = 0.17
§ For node 4, N_4 = {1, 3, 5, 6}, after node 3 is updated: P(Y_4) = (0 + 0.17 + 0.5 + 1)/4 = 0.42
§ For node 5, N_5 = {4, 6, 7, 8}, after nodes 3 and 4 are updated: P(Y_5) = (0.42 + 1 + 1 + 0.5)/4 = 0.73
¡ After iteration 1 (a round of updates for all unlabeled nodes): P(Y_3) = 0.17, P(Y_4) = 0.42, P(Y_5) = 0.73, P(Y_8) = 0.91, P(Y_9) = 1
¡ After iteration 2: P(Y_3) = 0.14, P(Y_4) = 0.47, P(Y_5) = 0.85, P(Y_8) = 0.95, P(Y_9) = 1 (node 9 converged)
¡ After iteration 3: P(Y_3) = 0.16, P(Y_4) = 0.5, P(Y_5) = 0.86, P(Y_8) = 0.95, P(Y_9) = 1 (nodes 8 and 9 converged)
¡ After iteration 4: P(Y_3) = 0.16, P(Y_4) = 0.51, P(Y_5) = 0.86, P(Y_8) = 0.95, P(Y_9) = 1
¡ All scores stabilize after 4 iterations.
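A minimal sketch of this relational classifier in Python/NumPy. The toy graph and labels are hypothetical stand-ins (the transcript only specifies some of the 9-node example's neighborhoods), and nodes are updated in index order rather than random order:

```python
import numpy as np

def relational_classification(A, y, max_iters=100, tol=1e-3):
    """Probabilistic relational classifier (label propagation).

    A: (n, n) symmetric adjacency (or edge-weight) matrix.
    y: length-n array with 1/0 for labeled nodes, -1 for unlabeled.
    Returns P(Y_v = 1) for every node v.
    """
    labeled = y >= 0
    p = np.where(labeled, y, 0.5).astype(float)   # init: labels, or 0.5
    for _ in range(max_iters):
        p_old = p.copy()
        for v in np.where(~labeled)[0]:           # sweep over unlabeled nodes
            w = A[v]
            if w.sum() > 0:
                p[v] = (w @ p) / w.sum()          # weighted avg of neighbors,
                                                  # using already-updated values
        if np.abs(p - p_old).max() < tol:         # stop when scores stabilize
            break
    return p

# Hypothetical 5-node toy graph (not the exact 9-node example from the slides).
A = np.array([
    [0, 1, 1, 0, 0],
    [1, 0, 1, 0, 0],
    [1, 1, 0, 1, 0],
    [0, 0, 1, 0, 1],
    [0, 0, 0, 1, 0],
], dtype=float)
y = np.array([1, 1, -1, -1, 0])

p = relational_classification(A, y)
print(np.round(p, 2))   # P(Y_v = 1); predict class 1 wherever p > 0.5
```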
We therefore predict:
§ Nodes 4, 5, 8, 9 belong to class 1 (P_{Y_v} > 0.5)
§ Node 3 belongs to class 0 (P_{Y_v} < 0.5)
Example: web page classification
¡ Input: a graph of web pages.
§ Node: web page. Edge: hyperlink between web pages (directed: one page points to another).
§ Node features: the webpage description; for simplicity, we consider only two binary features.
¡ Task: Predict the topic of each webpage.

¡ Baseline: train a classifier (e.g., a linear classifier) to classify pages based on node attributes alone.
[Figure: four pages with binary feature vectors f_v and ground-truth labels; the attribute-only classifier gets one page wrong. Can we improve by including network features?]

¡ Each node maintains vectors z_v of neighborhood labels:
§ I = incoming neighbor label information vector
§ O = outgoing neighbor label information vector
§ I_0 = 1 if at least one of the incoming pages is labelled 0; I_1, O_0, and O_1 are defined similarly.

The iterative classification loop on this example:
1. Train classifiers: on the training labels, train φ1 on the node attribute vector only, and φ2 on the node attribute and link vectors z_v.
2. Apply the classifier to the unlabeled set: use the trained node-feature classifier φ1 to set the initial labels Y_v.
3. Iterate:
4. Update the relational features z_v for all nodes.
5. Update the labels Y_v by re-classifying all nodes with φ2.
¡ After the first re-classification with φ2, the previously misclassified page is predicted correctly.
¡ Continue until convergence: update z_v based on the current Y_v, then update Y_v = φ2(f_v, z_v).
¡ Stop after convergence or when the maximum number of iterations is reached.
[Figures: the I and O vectors and the predicted labels at each step of the iteration; the final predictions match the ground truth.]

Summary so far:
¡ We talked about two approaches to collective classification.
¡ Relational classification:
§ Iteratively update the probability that a node belongs to a label class based on its neighbors.
¡ Iterative classification:
§ Improves over relational classification by also handling attribute/feature information.
§ Classify node v based on its features as well as the labels of its neighbors.
[Several slides at this point are garbled beyond recovery in this extraction. Readable fragments indicate a case study applying collective classification to fake-review and spammer detection on online rating platforms (a one-star increase in rating can increase revenue by roughly 5–9%): users have "fairness" scores, products have "goodness" scores, and ratings have "reliability" scores; all values start unknown and are calculated for all nodes and edges simultaneously by fixed-point updates (fixing two kinds of scores while updating the third), which are guaranteed to converge with an upper-bounded number of iterations and time linear in the number of edges. Low-fairness users are flagged as fraudsters; 127 of the 150 lowest-fairness users on Flipkart were real fraudsters.]

Correct & Smooth [Huang et al., ICLR 2021]
¡ Last, we introduce Correct & Smooth (C&S), a recent state-of-the-art collective classification method. C&S tops the current OGB leaderboard (snapshot from Oct 1st, 2021)!

¡ Setting: a partially labeled graph and features over the nodes.
¡ C&S follows a three-step procedure:
1. Train a base predictor.
2. Use the base predictor to predict soft labels of all nodes.
3. Post-process the predictions using the graph structure to obtain the final predictions for all nodes.

¡ (1) Train a base predictor that predicts soft labels (class probabilities) over all nodes.
§ Labeled nodes are used as training/validation data.
§ The base predictor can be simple: a linear model or a multi-layer perceptron (MLP) over the node features.
[Figure: the base model maps each node's features to a soft label, e.g. 0.95 vs. 0.05; green = class 0, red = class 1.]
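A minimal sketch of step (1), assuming a small scikit-learn MLP as the base predictor and random features as placeholder data:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Hypothetical data: n nodes with d-dimensional features, a few of them labeled.
n, d = 9, 4
X = rng.normal(size=(n, d))                      # node features
y = np.array([1, 0, -1, -1, -1, 1, 1, -1, -1])   # -1 = unlabeled

labeled = y >= 0
base = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
base.fit(X[labeled], y[labeled])                 # train only on labeled nodes

soft = base.predict_proba(X)[:, 1]               # soft label P(class 1), all nodes
print(np.round(soft, 2))
```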
¡ (2) Given the trained base predictor, we apply it to obtain soft labels for all the nodes.
§ We expect these soft labels to be decently accurate.
§ Can we use the graph structure to post-process the predictions and make them more accurate?
[Figure: the toy graph with a pair of soft-label scores on every node, e.g. (0.95, 0.05) on a confident node and (0.40, 0.60) on an uncertain one.]

¡ (3) C&S uses a two-step procedure to post-process the soft predictions:
1. The correct step
2. The smooth step

¡ The key idea is that we expect errors in the base prediction to be positively correlated along edges in the graph.
§ In other words, an error at node u increases the chance of a similar error at neighbors of u.
§ Thus, we should "spread" such uncertainty over the graph.

¡ Correct step:
§ The degree of the errors in the soft labels is biased: some parts of the graph are more erroneous than others.
§ We need to correct for this error bias.
¡ Smooth step:
§ The predicted soft labels may not be smooth over the graph.
§ We need to smoothen the soft labels.

¡ Correct step in detail:
§ Compute the training errors of the nodes.
§ Training error: the ground-truth label minus the soft label, defined as 0 for unlabeled nodes.
§ Example: a training node with ground truth (1.0, 0.0) and soft label (0.95, 0.05) has error (0.05, −0.05); one with soft label (0.60, 0.40) has error (0.40, −0.40).
§ Diffuse the training errors E^(0) along the edges. Let A be the adjacency matrix and Ã the normalized diffusion matrix (defined below):

E^(t+1) ← (1 − α) · E^(t) + α · Ã E^(t), where α is a hyper-parameter.

§ This is similar to PageRank. Assumption: errors are similar for nearby nodes.
[Figure: the initial training-error matrix E^(0) has nonzero rows only at the labeled nodes; diffusion spreads these errors along the edges.]

¡ Normalized diffusion matrix: Ã ≡ D^{−1/2} A D^{−1/2}
§ Add self-loops to the adjacency matrix A, i.e., set A_ii = 1.
§ Let D ≡ Diag(d_1, …, d_n) be the degree matrix.
§ See Zhu et al., ICML 2003 for details.
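A minimal NumPy sketch of constructing Ã and running the error-diffusion update; α, the number of iterations, and the toy inputs are illustrative assumptions:

```python
import numpy as np

def normalized_diffusion(A):
    """Ã = D^{-1/2} (A + I) D^{-1/2}, with self-loops added as in the slides."""
    A = A + np.eye(len(A))                  # add self-loops: A_ii = 1
    d = A.sum(axis=1)                       # degrees (after self-loops)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A @ D_inv_sqrt

def diffuse(E0, A_tilde, alpha=0.8, num_iters=2):
    """E^(t+1) = (1 - alpha) * E^(t) + alpha * Ã E^(t)."""
    E = E0.copy()
    for _ in range(num_iters):
        E = (1 - alpha) * E + alpha * (A_tilde @ E)
    return E

# Example: diffuse the error of a single labeled node over a 3-node path graph.
A = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
E0 = np.array([[0.05, -0.05], [0., 0.], [0., 0.]])   # error only at node 0
print(np.round(diffuse(E0, normalized_diffusion(A)), 3))
```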
[Figure: diffusion weights on the edges around node 7; the error (0.05, −0.05) at node 7 reaches node 9 scaled by the corresponding entry of Ã, i.e., the contribution is roughly 0.35 · (0.05, −0.05).]

¡ Properties of the normalized diffusion matrix Ã ≡ D^{−1/2} A D^{−1/2}:
¡ All of its eigenvalues λ are in the range [−1, 1].
§ The eigenvector for λ = 1 is D^{1/2}𝟏 (where 𝟏 is the all-one vector).
§ Proof: Ã D^{1/2}𝟏 = D^{−1/2} A D^{−1/2} D^{1/2}𝟏 = D^{−1/2} A 𝟏 = D^{−1/2} D 𝟏 = 1 · D^{1/2}𝟏.
§ Powers of Ã (i.e., Ã^K) are therefore well-behaved for any K:
§ The eigenvalues of Ã^K are always in the range [−1, 1], and the largest eigenvalue is always 1.

¡ If i and j are connected, the weight Ã_ij is 1/(√d_i · √d_j).
¡ Intuition:
§ Ã_ij is large if i and j are connected only with each other (no other nodes are connected to i and j).
§ Ã_ij is small if i and j are also connected to many other nodes.

¡ Diffusion of training errors: E^(t+1) ← (1 − α) · E^(t) + α · Ã E^(t)
§ Assumption: prediction errors are similar for nearby nodes.
[Figure: training errors before diffusion (E^(0)) and after diffusion (E^(1), α = 0.8); the errors spread from the more erroneous part of the graph to its neighbors.]

¡ Add the scaled diffused training errors to the predicted soft labels, with scale factor s (a hyper-parameter; s = 2 in the example).
[Figure: per-node soft labels plus s times the diffused errors give the output of the correct step.]

¡ Smooth step: smoothen the corrected soft labels along the edges.
§ Assumption: neighboring nodes tend to share the same labels.
§ Note: for training nodes, we use the ground-truth hard labels instead of the soft labels.
¡ Diffuse the label matrix Z^(0) along the graph structure:

Z^(t+1) ← (1 − α) · Z^(t) + α · Ã Z^(t), where α is a hyper-parameter.

[Figure: the corrected label matrix Z^(0) before smoothing, and Z^(1) after one smoothing step with α = 0.8.]
¡ The final class prediction of C&S is the class with the maximum Z^(3) score.
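Putting the two steps together, a minimal NumPy sketch of the full C&S post-processing; it assumes soft labels from some base predictor, one-hot ground-truth rows for the training nodes, and illustrative values for α and s:

```python
import numpy as np

def normalized_diffusion(A):
    """Ã = D^{-1/2} (A + I) D^{-1/2} with self-loops, as in the correct step."""
    A = A + np.eye(len(A))
    d_inv_sqrt = 1.0 / np.sqrt(A.sum(axis=1))
    return A * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def correct_and_smooth(A, soft, y, labeled, alpha=0.8, s=2.0, num_iters=3):
    """soft: (n, C) soft labels from the base predictor; y: one-hot hard labels
    for training nodes; labeled: boolean mask of training nodes."""
    A_tilde = normalized_diffusion(A)

    # Correct step: diffuse training errors (zero on unlabeled nodes), add s·E.
    E = np.zeros_like(soft)
    E[labeled] = y[labeled] - soft[labeled]
    for _ in range(num_iters):
        E = (1 - alpha) * E + alpha * (A_tilde @ E)
    Z = soft + s * E

    # Smooth step: reset training nodes to ground truth, then diffuse labels.
    Z[labeled] = y[labeled]
    for _ in range(num_iters):
        Z = (1 - alpha) * Z + alpha * (A_tilde @ Z)
    return Z.argmax(axis=1)   # final prediction: class with the maximum score

# Tiny illustrative run: 4 nodes, 2 classes, nodes 0 and 3 labeled.
A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], float)
soft = np.array([[0.9, 0.1], [0.6, 0.4], [0.4, 0.6], [0.2, 0.8]])
y = np.array([[1., 0.], [0., 0.], [0., 0.], [0., 1.]])
labeled = np.array([True, False, False, True])
print(correct_and_smooth(A, soft, y, labeled))
```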
¡ Note: The Z^(t) scores do not have a direct probabilistic interpretation (e.g., they do not sum to 1 for each node), but a larger score indicates that the class is more likely.

¡ Our toy example shows that C&S successfully improves the base model's performance using the graph structure.
[Figure: the base model misclassifies two nodes and is over-confident elsewhere; after C&S, both nodes are classified correctly.]

¡ C&S significantly improves the performance of the base model (MLP).
¡ C&S also outperforms a smooth-only baseline (no correct step).

Method            | Classification accuracy (%) on ogbn-products
MLP (base model)  | 63.41
MLP + smooth only | 80.34
MLP + C&S         | 84.18

¡ Correct & Smooth (C&S) uses the graph structure to post-process the soft node labels predicted by any base model.
¡ Correct step: diffuse and correct for the training errors of the base predictor.
¡ Smooth step: smoothen the predictions of the base predictor.
¡ C&S achieves strong performance on semi-supervised node classification.

Lecture summary:
¡ We learned how to leverage correlation in graphs to make predictions on nodes.
¡ Key techniques:
§ Relational classification
§ Iterative classification
§ Correct & Smooth

[The remaining slides are garbled beyond recovery in this extraction. Readable fragments suggest they cover (loopy) belief propagation as a further collective classification technique, closing with a three-way recap: relational classifiers (weighted average of neighborhood properties; cannot use node attributes while labeling), iterative classification (update each node's label using its own and its neighbors' labels; can consider node attributes while labeling), and belief propagation (message passing to update each node's belief about itself based on its neighbors' beliefs).]
