xquizitdecorum t1_j58xf15 wrote

Graph embeddings are mentioned below, but also explore graph convolutional networks (GCNs) and message-passing neural networks (MPNNs). These methods extend traditional CNNs to graph structures - after all, isn't an image just a lattice graph with pixels as nodes? As also mentioned below, these models can be used for node and edge prediction/completion, but they also support whole-graph prediction. I've worked on graph-based prediction for molecular modeling, where I do whole-graph classification.
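To make the idea concrete, here's a minimal sketch of one message-passing round plus a whole-graph readout, in plain NumPy. Everything here (the toy path graph, the function names `message_pass` and `readout`, the sum aggregation and mean readout) is illustrative, not from any particular library - real MPNN implementations (e.g. in PyTorch Geometric or DGL) add normalization, edge features, and learned readouts.

```python
import numpy as np

def message_pass(adj, feats, weight):
    """One round: each node sums its neighbors' features,
    then applies a shared linear transform + ReLU."""
    messages = adj @ feats              # neighbor-feature sum per node
    return np.maximum(messages @ weight, 0.0)

def readout(node_states):
    """Whole-graph representation: mean over node states.
    This fixed-size vector is what a graph-level classifier
    (e.g. for molecular property prediction) would consume."""
    return node_states.mean(axis=0)

# Toy path graph 0-1-2-3 (a 1-D "lattice", like a row of pixels)
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
feats = np.eye(4)                       # one-hot node features
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 3))         # shared weight, like a conv filter

h = message_pass(adj, feats, W)         # per-node states after one round
g = readout(h)                          # one vector for the whole graph
print(h.shape, g.shape)                 # (4, 3) (3,)
```

Stacking several `message_pass` rounds grows each node's receptive field hop by hop, which is exactly the lattice-graph view of a CNN's stacked convolutions.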

1

xquizitdecorum t1_j2y0ire wrote

We should have a more rigorous definition of "outperform". What are we comparing? Your question touches on the idea of internal versus external validity - if the data is fundamentally flawed, there is a performance ceiling because the data doesn't reflect the use case of the ML algorithm developed on it. The model may be internally valid (trained correctly) but have poor external validity (it doesn't apply to the task it was trained for).

1