Among the thousands of human languages used throughout the world, NLP researchers have so far focused on only a handful. This is understandable, given that resources and researchers are not readily available for all languages, but it is nevertheless a profound limitation of our research community, and one that must be addressed. I will discuss research on Korean and other low- to medium-resource languages and share interesting findings that extend beyond linguistic differences. I will present our work on ethnic bias in BERT language models across six languages, which particularly illustrates the importance of studying multiple languages. I will describe our efforts to build a benchmark dataset for Korean and the main challenge we faced: the available sources of data are much smaller than those for English and other major languages. I will also share preliminary results from working with non-native speakers, who can potentially contribute to research on low-resource languages. Through this talk, I hope to inspire NLP researchers, myself included, to actively engage with a diverse set of languages and cultures.