In recent years, we have seen a rise in the amount of education data available through the digitization of education. Schools are starting to use technology in classrooms to create personalized learning experiences. Massive open online courses (MOOCs) have attracted millions of learners and present an opportunity to develop and apply machine learning methods that leverage the collected data to improve student learning outcomes.
However, development in student data analysis remains limited, and education today largely follows a one-size-fits-all approach. We have an opportunity to make a significant impact by revolutionizing the way (human) learning can work.
The goal of this workshop is to foster discussion and spur research between machine learning experts and education researchers toward solving fundamental problems in education.
For this year's workshop, we are highlighting the following areas of interest:
-- Assessments and grading:
Assessments are core to adaptive learning, formative learning, and summative evaluation. However, the creation and grading of quality assessments remain a difficult task for instructors. Machine learning methods can be applied to self-, peer-, and auto-grading paradigms to both improve the quality of assessments and reduce the burden on instructors and students. These methods can also leverage the multimodal nature of learner data (e.g., textual/programming/mathematical open-form responses, demographic information, student interaction in discussion forums, video and audio recordings of the class), posing the challenge of how to effectively and efficiently fuse these different forms of data so that we can better understand learners.
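As a concrete illustration of the auto-grading paradigm, the sketch below scores an open-form text response by its bag-of-words cosine similarity to a reference answer. This is a minimal, hypothetical baseline, not a method proposed by the workshop; a real system would use learned embeddings and calibrated thresholds.

```python
import math
from collections import Counter

def cosine_similarity(a: Counter, b: Counter) -> float:
    # Dot product over shared terms, normalized by the two vector lengths.
    dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def grade(response: str, reference: str, threshold: float = 0.5) -> bool:
    # Bag-of-words term counts; the threshold here is an arbitrary assumption.
    resp = Counter(response.lower().split())
    ref = Counter(reference.lower().split())
    return cosine_similarity(resp, ref) >= threshold

reference = "photosynthesis converts light energy into chemical energy"
print(grade("plants use photosynthesis to turn light energy into chemical energy",
            reference))
```

Even this crude similarity score hints at why fusing richer modalities (code, math, forum activity) is the hard part: a single text channel misses most of the signal about a learner's understanding.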
-- Content augmentation and understanding:
Learning content is rich and multimodal (e.g., programming code, video, text, audio). Online educational resources have grown considerably in recent years, and we have an opportunity to leverage them further. Recent advances in natural language understanding can be applied to better understand learning materials and connect different sources together to create better learning experiences. This can help learners by surfacing more relevant resources, and help instructors create content.
-- Personalized learning and active interventions:
Personalized learning through custom feedback and interventions can make learning much more efficient, especially when we cater to the individual's background, goals, state of understanding, and learning context. Methods such as Markov decision processes and multi-armed bandits are applicable in these contexts.
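To make the bandit framing concrete, the sketch below uses a standard epsilon-greedy multi-armed bandit to choose among three hypothetical interventions (say, a hint, a video, and a worked example), each with an assumed success rate; all names and rates are illustrative, not taken from any deployed system.

```python
import random

def epsilon_greedy_bandit(reward_fn, n_arms=3, epsilon=0.1, n_rounds=2000, seed=0):
    """Explore a random arm with probability epsilon; otherwise exploit
    the arm with the highest estimated mean reward."""
    rng = random.Random(seed)
    counts = [0] * n_arms    # times each intervention was tried
    values = [0.0] * n_arms  # running mean reward per intervention
    for _ in range(n_rounds):
        if rng.random() < epsilon:
            arm = rng.randrange(n_arms)
        else:
            arm = max(range(n_arms), key=lambda a: values[a])
        reward = reward_fn(arm, rng)
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean
    return values, counts

# Assumed success probabilities for the three interventions.
true_rates = [0.3, 0.5, 0.7]
values, counts = epsilon_greedy_bandit(
    lambda arm, rng: 1.0 if rng.random() < true_rates[arm] else 0.0)
```

After enough rounds, the policy concentrates its pulls on the most effective intervention while still occasionally exploring the alternatives; this exploration/exploitation trade-off is exactly what makes bandits attractive for active interventions with real students.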
In education applications, transparency and interpretability are important, as they can help learners better understand their learning state. Interpretability can provide instructors with insights to better guide their activities with students, and can help education researchers better understand the foundations of human learning. It is especially critical where models are deployed in processes that grade students, as such evaluation needs to demonstrate a degree of fairness.
This workshop will lead to new research directions in machine learning-driven educational research and also inspire the development of novel machine learning algorithms and theories that extend to many other applications involving human data.
Richard Baraniuk (Rice University)
Jiquan Ngiam (Coursera)
Christoph Studer (Cornell University)
Phillip Grimaldi (Rice University)
Andrew Lan (Rice University)