Unlike regular images, which record only light intensities, 4D Light Field Images (LFIs) capture both the intensity of light in a scene and the direction in which light rays travel through space. This allows for a richer representation of our world, but requires large amounts of data that must be processed and compressed before being transmitted to the viewer. Since these techniques may introduce distortions, the design of Light Field Image Quality Assessment (LF-IQA) methods is essential. Most LF-IQA methods based on traditional Convolutional Neural Networks (CNNs) share a limitation: they cannot enlarge the receptive field of a neuron-pixel enough to model non-local image features. To overcome this challenge, in this work we propose a novel no-reference LF-IQA method based on a Deep Graph Convolutional Neural Network (GCNN). When implementing graphs, one of the biggest challenges is preparing the input, i.e., keeping only the important nodes while reducing the computational cost. Another challenge is that every image generates a graph of a different size, which can become a problem for training. A third challenge is that, since a 4D LFI is described by a two-plane parameterization of the plenoptic function, multiple 2D representations can be used to generate graphs, and it is important to select the representation that best helps the network converge. In this proposal, we intend to investigate solutions to all of the challenges mentioned above. Our method not only takes into account both LF angular and spatial information, but also learns the order of pixel information. Specifically, the method is composed of one input layer that takes a pair of graphs and their corresponding subjective quality scores as labels, four GCNN layers, fully connected layers, and a regression block for quality prediction. Our aim is to develop a quality prediction method with maximum accuracy for distorted LF content.
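
The abstract does not specify the exact form of the GCNN layers, so as a minimal sketch, the snippet below implements one standard graph-convolution layer (the symmetrically normalized propagation rule, H' = ReLU(D^-1/2 (A+I) D^-1/2 H W)) in NumPy on a toy graph. The adjacency matrix, feature sizes, and the choice of this particular propagation rule are illustrative assumptions, not the authors' exact architecture.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution layer: H' = ReLU(D^-1/2 (A+I) D^-1/2 H W).

    A: (n, n) adjacency matrix, H: (n, f_in) node features,
    W: (f_in, f_out) trainable weights.
    (Illustrative propagation rule; the paper does not specify its layers.)
    """
    A_hat = A + np.eye(A.shape[0])           # add self-loops
    d = A_hat.sum(axis=1)                    # degrees of the self-looped graph
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))   # symmetric normalization
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)  # ReLU

# Toy example: a 4-node ring graph, 3 input features, 2 output features.
rng = np.random.default_rng(0)
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
H = rng.standard_normal((4, 3))   # hypothetical per-node (per-pixel) features
W = rng.standard_normal((3, 2))   # hypothetical layer weights
out = gcn_layer(A, H, W)
print(out.shape)  # (4, 2): one aggregated feature vector per node
```

Stacking four such layers, flattening or pooling the node features, and feeding the result through fully connected layers would yield the kind of regression pipeline the abstract describes.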