Current attempts to improve the effectiveness of Graph Neural Networks (GNNs) work on pre-existing embedding datasets. The state of the art is determined on these limited datasets without consideration of how the architectures perform on other embeddings of the same underlying dataset. Existing dataset embeddings rarely reflect rich features from the dataset, instead relying on pre-existing feature extraction methods. As a result, the performance of different models on these datasets does not always reflect the best performance that could be achieved on the underlying dataset. Moreover, when looking at different embeddings of the same underlying dataset, we see significant variation in the performance of the architectures. We explore new dataset embeddings to test existing GNNs on differing embeddings, and propose transfer learning and mixed network architectures to generalise current GNN classification to the underlying dataset rather than to a particular embedding. These techniques allow existing powerful classification methods to utilise the context that exists between items in a dataset.