Cancer is the second deadliest disease in the US. Each year, breast cancer alone causes the deaths of over 47,000 people. A key tool in the diagnosis and classification of cancer is histopathology, in which a trained specialist, known as a pathologist, examines chemically stained biopsy samples under high magnification to diagnose cancerous tissue. We have developed a model to automatically diagnose breast cancer cases found on The Cancer Genome Atlas (TCGA) based solely on their digital pathology images. In addition to identifying high risk patients, this model may spare patients with low grade cancer from undergoing costly treatments and surgeries. The problem is formulated as a multiclass classification ranging from in situ to metastatic breast cancer stages (I, II, III, and IV) using H&E (Hematoxylin and Eosin) stained images.

The model was developed using a pipeline as detailed in the full abstract. Each digital Whole Slide Image (WSI) is assigned to a training, validation, or testing set. The image is then converted into the Hue, Saturation, and Value (HSV) colorspace, as tissue regions fall primarily within the purple-blue region of the Hue spectrum. From these regions, 256x256 pixel tiles are sampled and passed through a VGG-16 model to determine whether they are from benign or malignant tissue. If a tile is classified as cancerous, it is then passed through a ResNet-18 model for stage classification. Finally, the WSI is classified using the maximum probability over all the individual tiles extracted.

WSIs contain numerous morphological indicators of cancer severity which can differentiate benign and aggressive cancers. Leveraging deep learning models to analyze WSIs will assist pathologists in diagnosing cancer more accurately and improve patient outcomes.
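The tiling-and-aggregation logic described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the hue thresholds for the purple-blue region (0.55–0.85) and the 50% tissue-fraction cutoff are illustrative assumptions not stated in the abstract, and a placeholder `tile_probs_fn` stands in for the VGG-16 and ResNet-18 models.

```python
import numpy as np

TILE = 256  # tile size in pixels, per the pipeline description


def rgb_to_hue(rgb):
    """Convert an RGB array (H, W, 3), floats in [0, 1], to hue in [0, 1)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    mx, mn = rgb.max(axis=-1), rgb.min(axis=-1)
    delta = mx - mn
    hue = np.zeros_like(mx)
    mask = delta > 0
    # Standard piecewise RGB-to-HSV hue formula
    idx = mask & (mx == r)
    hue[idx] = ((g - b)[idx] / delta[idx]) % 6
    idx = mask & (mx == g)
    hue[idx] = (b - r)[idx] / delta[idx] + 2
    idx = mask & (mx == b)
    hue[idx] = (r - g)[idx] / delta[idx] + 4
    return hue / 6.0


def is_tissue(tile, lo=0.55, hi=0.85, min_frac=0.5):
    """Keep a tile if enough of its pixels fall in the purple-blue hue band.

    lo/hi/min_frac are illustrative thresholds (assumptions).
    """
    hue = rgb_to_hue(tile)
    return np.mean((hue >= lo) & (hue <= hi)) >= min_frac


def classify_wsi(wsi, tile_probs_fn):
    """Tile a WSI, classify each tissue tile, and aggregate by max probability.

    tile_probs_fn is a placeholder for the VGG-16/ResNet-18 stages; it maps a
    (TILE, TILE, 3) array to a vector of class probabilities.
    Returns the per-class maximum over tiles, or None if no tissue was found.
    """
    h, w, _ = wsi.shape
    tile_probs = []
    for y in range(0, h - TILE + 1, TILE):
        for x in range(0, w - TILE + 1, TILE):
            tile = wsi[y:y + TILE, x:x + TILE]
            if is_tissue(tile):
                tile_probs.append(tile_probs_fn(tile))
    if not tile_probs:
        return None
    # Slide-level call: maximum probability across all extracted tiles
    return np.max(np.stack(tile_probs), axis=0)
```

In practice the tissue mask would be computed on a low-resolution thumbnail and tiles read from the WSI with a library such as OpenSlide, but the control flow, HSV-based filtering followed by per-tile classification and max aggregation, is the same.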