Uncertainty quantification is essential for a robot operating in an open world, not only for known concepts, but especially for unknown concepts that it may encounter and must classify. In recent years, prototype-based approaches have proven to be an effective direction for classification in deep networks. In such approaches, each concept is represented by a single vector -- a prototype -- on the output manifold of the network. The starting point of this work is that common choices of prototype positions, whether one-hot vectors, vectors from prior knowledge, vectors from separation, or random vectors, are indeed effective for classification, but fail to quantify when a sample displays an unknown concept. The hypothesis of this work is that in order to best quantify uncertainty over known and unknown concepts, prototypes should be uniform and equidistant. We introduce Equidistant Hyperspherical Prototype Networks, where arbitrary numbers of concepts are modelled as equidistant prototypes on a hyperspherical output manifold. This yields a distribution of the output space that leaves as much room as possible between the prototypes for unknown concepts to occupy. We provide initial empirical results on MIT Indoor Places which show that equidistant prototypes can both model known concepts and quantify when samples display unknown concepts. The equidistant prototypes are defined by recursion, are easy to implement, and incur trivial computational overhead, making them suitable for open world settings.
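The recursive definition mentioned above can be sketched as follows. This is a minimal illustration, not the paper's exact implementation: it uses the standard recursive construction of a regular simplex, which places k unit vectors in R^(k-1) with identical pairwise cosine similarity of -1/(k-1); the function name is hypothetical.

```python
import numpy as np

def equidistant_prototypes(k: int) -> np.ndarray:
    """Recursively build k equidistant unit prototypes in R^(k-1)
    (the vertices of a regular simplex on the hypersphere).

    Every pair of prototypes has cosine similarity -1/(k-1),
    the most-separated arrangement possible for k points.
    """
    if k == 2:
        # Base case: two antipodal points on the 0-sphere.
        return np.array([[1.0], [-1.0]])
    # Build the (k-1)-point simplex in one dimension lower.
    sub = equidistant_prototypes(k - 1)          # shape (k-1, k-2)
    # New vertex along the first axis of the extra dimension.
    top = np.concatenate([[1.0], np.zeros(k - 2)])
    # Shrink the sub-simplex and offset it so all pairwise
    # inner products equal -1/(k-1) and norms stay 1.
    scale = np.sqrt(1.0 - 1.0 / (k - 1) ** 2)
    rest = np.hstack([np.full((k - 1, 1), -1.0 / (k - 1)), scale * sub])
    return np.vstack([top, rest])                # shape (k, k-1)
```

For example, k = 3 yields the vertices of an equilateral triangle on the unit circle. The recursion touches each coordinate only a constant number of times, which is consistent with the abstract's claim of trivial computational overhead.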