Affinity Workshop: WiML Workshop 1

Do You See What I See: Using Augmented Reality and Artificial Intelligence

Shruti Karulkar · Louvere Walker-Hannon · Sarah Mohamed


We explore the challenges of real-world applications of augmented reality (AR) and artificial intelligence (AI) through experiments that demonstrate interactions with an augmented world. We interrogate applications of AR and AI, their limitations, and social impacts amplified by the pandemic. We share code, explore ethical considerations, and outline future projects.

We conduct three experiments that apply the theory of pose estimation with deep learning [1][2][3][4]. In the first experiment, we implement AR by using segmentation to augment the scene captured by a laptop webcam. In the second, we use a deep neural network for keypoint-based pose estimation. In the last, we run pose estimation against different backgrounds. These experiments enable us to reflect on how context matters [5]. We ask who is at the table when applications are built and deployed, and invite you, too, to reflect on the challenges associated with using AR and AI and on our responsibilities as builders of tech.

References

1. Tomasi, C. and Kanade, T. "Detection and Tracking of Point Features." Carnegie Mellon University Technical Report CMU-CS-91-132, 1991.
2. Coelho, T., Calado, P., Souza, L., Ribeiro-Neto, B., and Muntz, R. "Image Retrieval Using Multiple Evidence Ranking." IEEE Transactions on Knowledge and Data Engineering, 16(4):408–417, 2004.
3. Ni, J., Khan, Z., Wang, S., Wang, K., and Haider, S. "Automatic Detection and Counting of Circular Shaped Overlapped Objects Using Circular Hough Transform and Contour Detection." 2016, pp. 2902–2906.
4. Xiao, B., Wu, H., and Wei, Y. "Simple Baselines for Human Pose Estimation and Tracking." Proceedings of the European Conference on Computer Vision (ECCV), 2018.
5. "A View to a Brawl." Science Node, 2013.
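The segmentation experiment can be illustrated with a minimal sketch: once a person mask is available, augmenting the webcam scene reduces to compositing the foreground over a new background. This is not the talk's code; the `composite` function and the precomputed boolean mask are our own illustrative assumptions.

```python
import numpy as np

def composite(frame, background, mask):
    """Replace pixels outside the person mask with an augmented background.

    frame, background: H x W x 3 uint8 images; mask: H x W bool, True = person.
    """
    mask3 = mask[..., None]  # add a channel axis so the mask broadcasts over RGB
    return np.where(mask3, frame, background)

# Toy 2x2 example: the "person" occupies the left column of the frame.
frame = np.full((2, 2, 3), 200, dtype=np.uint8)       # stand-in for a webcam frame
background = np.zeros((2, 2, 3), dtype=np.uint8)      # stand-in for an augmented scene
mask = np.array([[True, False], [True, False]])       # assumed segmentation output
out = composite(frame, background, mask)              # left column keeps frame pixels
```

In practice the mask would come from a segmentation network rather than being hand-written, but the compositing step is the same.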
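For the keypoint experiment, heatmap-based pose networks such as Simple Baselines [4] emit one confidence map per body joint, and keypoints are recovered by locating each map's peak. Below is a hedged sketch of that decoding step only; the `decode_keypoints` name and the toy heatmaps are our assumptions, not code from the talk.

```python
import numpy as np

def decode_keypoints(heatmaps):
    """Decode (row, col) keypoint locations from per-joint heatmaps.

    heatmaps: K x H x W array, one confidence map per body joint.
    Returns a K x 2 array of (row, col) peak positions.
    """
    k, h, w = heatmaps.shape
    flat = heatmaps.reshape(k, -1).argmax(axis=1)     # flattened index of each peak
    return np.stack([flat // w, flat % w], axis=1)    # unflatten to (row, col)

# Toy example: two 4x4 heatmaps with peaks at (1, 2) and (3, 0).
hm = np.zeros((2, 4, 4))
hm[0, 1, 2] = 1.0
hm[1, 3, 0] = 1.0
keypoints = decode_keypoints(hm)  # array([[1, 2], [3, 0]])
```

Real pipelines add refinements (sub-pixel offsets, confidence thresholds), but the argmax decode above is the core idea.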
