Representation learning on spatial networks is emerging as a distinct area of machine learning and is attracting attention across diverse domains, with applications including molecular graphs for drug discovery and screening, brain networks for neuroscience, and social networks for recommender systems. However, most existing approaches either rely on only spatial or only network data, which prevents them from distinguishing certain types of graphs and limits their expressivity, or are tied to specific input domains and network architectures. To address this gap, we introduce an equivariant message-passing network architecture that simultaneously leverages spatial and network properties. In addition, we take advantage of geometric representations by extending the classical scalar features with 3D vectors. To exploit both spatial and network features, we present the Spatial Vector Neuron, which can be easily incorporated into existing graph neural network architectures and allows the model to scale by stacking more layers for larger receptive fields. A comprehensive set of experiments on both synthetic and real-world datasets demonstrates the strength of the proposed method and the potential of geometric representation learning on spatial networks.
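To make the idea of augmenting scalar node features with rotation-equivariant 3D vector features concrete, the sketch below shows one generic message-passing layer of this kind in NumPy. This is an illustrative EGNN-style update, not the paper's actual Spatial Vector Neuron; all function names, weight shapes, and the edge format are assumptions introduced for the example.

```python
import numpy as np

def equivariant_layer(h, x, edges, w_msg, w_upd, w_vec):
    """One illustrative message-passing step over a spatial network.

    h     : (n, d) scalar node features
    x     : (n, 3) 3D node positions / vector features
    edges : list of directed (i, j) index pairs
    w_msg : (2d+1, k), w_upd : (k, d), w_vec : (k, 1) toy weights

    Scalar messages use only rotation-invariant quantities
    (node features and squared distances); the vector update is a
    learned scaling of relative positions, so rotating the inputs
    rotates the vector outputs while scalars stay unchanged.
    """
    h_new = h.copy()
    x_new = x.copy()
    for i, j in edges:
        diff = x[i] - x[j]                       # relative position (equivariant)
        dist2 = np.array([diff @ diff])          # squared distance (invariant)
        m = np.tanh(np.concatenate([h[i], h[j], dist2]) @ w_msg)
        h_new[i] = h_new[i] + m @ w_upd          # invariant scalar update
        x_new[i] = x_new[i] + (m @ w_vec) * diff # equivariant 3D vector update
    return h_new, x_new
```

Because each layer only enlarges the receptive field by one hop, stacking several such layers is what lets the model aggregate information from larger neighborhoods, as the abstract notes.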