Recent advances in imaging and machine learning have increased our ability to capture information about biological systems in the form of images. Therefore, images have the potential to be a universal data type for biology. A common and challenging computational task required for the analysis of biological images is fluorescent spot detection. This problem is challenging to solve with supervised learning methods because the notion of ground truth is ambiguous — most images contain too many spots for humans to manually curate, and expert human annotators disagree significantly on the number and location of spots in images. In this work, we present a weakly supervised approach to spot detection that addresses these challenges. Rather than manually annotating each spot, we fine-tune a collection of classical spot detection algorithms on a set of images to create a set of candidate annotations. We then perform generative modeling to create a consensus annotation set, which we use to train a deep learning model for spot detection. We show that when trained in this fashion, our deep learning model outperforms deep learning models trained with an annotation set from a single classical algorithm, and its spot detection capabilities generalize to image sets from a wide range of assays. When paired with our deep learning-based methods for cell segmentation and tracking, this spot detection method can be applied to the analysis of a number of live-cell reporters and end-point spatial-omics assays. To improve accessibility, we have developed an image analysis pipeline, called Polaris, for singleplex and multiplex spatial transcriptomics image sets. Importantly, this paradigm of using weakly supervised learning to create consensus training data would be expected to improve the performance of any deep learning model for spot detection, regardless of model architecture, because it improves the accuracy of the training annotation set.
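The weak-supervision idea above — several imperfect classical detectors annotating the same images, then a consensus label set derived from their (dis)agreement — can be conveyed with a minimal sketch. The sketch below is hypothetical and simplified: it simulates three noisy detectors as per-pixel label flips and forms the consensus by majority vote, whereas the actual method fits a generative model over the detector annotations; all names and parameters here are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ground-truth spot locations on a 32x32 image (unknown in practice;
# simulated here only so we can measure agreement).
truth = np.zeros((32, 32), dtype=bool)
truth[rng.integers(0, 32, 20), rng.integers(0, 32, 20)] = True

def noisy_detector(mask, flip_prob, rng):
    """Simulate a classical spot detector by flipping each pixel
    label with probability flip_prob (illustrative noise model)."""
    flips = rng.random(mask.shape) < flip_prob
    return mask ^ flips

# Annotations from three detectors with different error rates.
annotations = np.stack(
    [noisy_detector(truth, p, rng) for p in (0.02, 0.05, 0.08)]
)

# Per-pixel majority vote across detectors yields the consensus
# annotation set that would be used as training labels.
consensus = annotations.sum(axis=0) >= 2

# The consensus typically agrees with the truth more often than any
# single detector, which is why training on it helps.
for i, a in enumerate(annotations):
    print(f"detector {i} agreement: {(a == truth).mean():.3f}")
print(f"consensus agreement:  {(consensus == truth).mean():.3f}")
```

A majority vote treats all detectors as equally reliable; a generative consensus model, such as the one the paper describes, can additionally estimate each detector's error rates and weight its votes accordingly.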